The science of imaging has “developed” (sorry!) greatly in the past few decades. Advances in hardware and software have given us the ability to scan objects with light in order to create accurate 3D images of them. Among other applications, these advances have allowed us to create 3D images of the human body and accurate maps of the earth’s surface.
One of the best ways to capture a 3D image of an object is to use a technique called LIDAR (Light Detection and Ranging). In this technique, a laser sends out pulses of light. Each pulse contains many photons (particles of light); the brighter and more intense the light, the more photons each pulse carries.
The light pulses are directed at a specific location on the object. They are reflected by the object back to an electronic camera sensor, which measures the brightness of the reflected light and the time it took for the pulse to make the round trip from the laser to the object and back to the sensor.
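The round-trip time is what turns a brightness measurement into a distance measurement: light travels at a known, fixed speed, and the pulse covers the laser-to-object distance twice. A minimal sketch of that calculation (the function name is ours, for illustration, not part of any LIDAR system's API):

```python
# Speed of light in a vacuum, in metres per second.
C = 299_792_458.0

def distance_from_round_trip(t_seconds: float) -> float:
    """Convert a pulse's measured round-trip time into a distance.

    The pulse travels out to the object and back, so the one-way
    distance is half the total path length covered in t_seconds.
    """
    return C * t_seconds / 2.0

# A pulse that returns after 20 nanoseconds hit something about 3 m away.
print(distance_from_round_trip(20e-9))  # → 2.99792458
```

Because light is so fast, the timing has to be extraordinarily precise: a one-nanosecond error in the measured time shifts the computed distance by about 15 centimetres.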
The laser scans over the object; each point on the object corresponds to a single pixel of the image. A computer program then assembles the data into a detailed 3D image.
However, as anyone with a camera knows, lighting is very important when taking any type of picture: if you don't have enough light, you will end up with a dark image that does not show very much detail. In regular LIDAR, hundreds (if not more) of photons must hit the camera sensor in order to create a single pixel of an image. The fewer photons that hit the sensor, the darker and more indistinct the image.
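There is a simple statistical reason why fewer photons means a grainier pixel: photon arrivals behave like independent random events (Poisson statistics), so a pixel built from N photons fluctuates by roughly the square root of N, and the noise relative to the signal is 1/sqrt(N). A small sketch of that rule of thumb (the function name is ours, purely illustrative):

```python
import math

def relative_shot_noise(n_photons: float) -> float:
    """Photon counting follows Poisson statistics: the standard
    deviation of a count of N photons is sqrt(N), so the noise
    relative to the signal is sqrt(N) / N = 1 / sqrt(N)."""
    return 1.0 / math.sqrt(n_photons)

for n in (10_000, 100, 1):
    print(n, relative_shot_noise(n))
# 10,000 photons → 1% noise; 100 photons → 10%; a single photon → 100%.
```

This is why conventional LIDAR wants hundreds of photons per pixel, and why extracting a clean image from only the first few arrivals requires clever processing rather than just a more sensitive detector.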
But there is light at the end of the tunnel (once again, sorry!). A group of electrical engineers from the Massachusetts Institute of Technology (MIT) has developed a way to capture detailed 3D images of an object even if it is enshrouded in near-darkness. The MIT engineers have created an algorithm that takes the information measured from just the first few photons reflected from the object to the sensor and uses it to construct a detailed 3D image. With this technology, far fewer photons are needed to capture the image, which means the intensity of the scanning laser can be reduced to the point where it can't even be seen with the naked eye. So far, this new process can only use monochromatic lasers to provide the dim scanning light, which means the images are not in colour.
This type of low-light imaging could become very useful for scanning and imaging materials and biological tissues that are extremely vulnerable to high levels of illumination. For example, if a doctor wanted to create a high-resolution 3D image of a patient's eye using regular LIDAR, they would risk damaging the eye by scanning it with bright laser light. With the new technique, a very low level of light could be used to create high-resolution images without harming the patient's eye. Being able to scan for images in near-darkness would also be useful for intelligence-gathering missions, where you could take pictures without anyone knowing (please use responsibly).
Although we don’t have this technology at TELUS World of Science (yet!), you can explore the concepts of imaging by visiting the Eureka! Gallery and checking out our recently refurbished Infrared Camera exhibit. The special camera lets you see real-time images of yourself created by capturing the infrared light that you emit naturally. Also, our Recollections exhibit takes live video footage of our guests dancing to music, processes it through a computer to add some snazzy effects, and then projects the altered video back up on a screen.
For more information:
The research mentioned above was published in the journal Science, but a nice summary of the research can be found on Nature’s website. A video demonstrating the science behind the MIT discovery has been posted on YouTube.