Stanford Researchers Developed a Camera That Can See in 4D

(Assistant Professor Gordon Wetzstein and postdoctoral scholar Donald Dansereau with a prototype of the monocentric camera that captured the first single-lens panoramic light fields. Photo by Linda A. Cicero.)

The camera has now been around for over a hundred years. It has evolved greatly, but its main principle has remained the same: a camera uses reflected light to capture an image, whether that image ends up on a glass plate or as ones and zeros on a flash drive. That technique, however, only produces a flat 2D image. As we move into a future of drones, self-driving cars, and robotics, it's becoming clear that seeing in 2D isn't going to cut it.

Human eyes take in what is essentially a 4D view of the world, which is why we navigate it so easily: we can tell at a glance whether a rock is an obstacle or just a flat image on a wall. Researchers at Stanford have come up with a camera that can see in 4D, something that has huge implications if developed correctly.
Mallika Singh recently interviewed the scientists behind the camera for The Stanford Daily:

"The problem is, consumer cameras aren't really well suited to robotics applications," Dansereau said. "So we asked ourselves, 'What would be the right camera for a robot that needs to make decisions quickly and with low power?'"
Dansereau said the answer came in the form of a light field camera, which uses an array of microlenses between the camera's sensor and the outer lens to take in more information. In addition to the image captured by a conventional 2D camera, the newly developed camera records information about the direction and distance of the light hitting the lens, generating what is known as a 4D image.
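To make the "4D" idea concrete: a light field records radiance as a function of four coordinates, two for where a ray hits the sensor and two for the direction it arrived from, which is exactly what a microlens array exposes. The sketch below (an illustrative layout, not the Stanford team's actual decoding pipeline; the function name and the square 9x9 microlens patch size are assumptions) reshapes a raw microlens-array frame into that 4D structure:

```python
import numpy as np

def decode_light_field(raw, lens_px=9):
    """Reshape a raw microlens-array frame (H, W) into a 4D light
    field indexed (u, v, s, t), assuming each microlens covers a
    square lens_px x lens_px patch of sensor pixels.

    (u, v) indexes position under a microlens (ray direction),
    (s, t) indexes which microlens (ray position)."""
    h, w = raw.shape
    s, t = h // lens_px, w // lens_px
    lf = raw[: s * lens_px, : t * lens_px]        # trim partial lenses
    lf = lf.reshape(s, lens_px, t, lens_px)       # axes: (s, u, t, v)
    return lf.transpose(1, 3, 0, 2)               # axes: (u, v, s, t)

# A toy 18x18 "sensor" splits into a 2x2 grid of 9x9 microlens images.
raw = np.arange(18 * 18, dtype=float).reshape(18, 18)
lf = decode_light_field(raw, lens_px=9)
print(lf.shape)  # (9, 9, 2, 2)
```

Fixing (u, v) and varying (s, t) gives a sub-aperture view of the scene from one point on the lens; comparing these views is what lets the camera recover direction and distance.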
"A light-field camera builds an array of many eyes covering an area, like a bee's eye," Dansereau said. "[It] is like measuring the light through a window, whereas a normal camera is like looking through a peephole."
Dansereau also noted the importance of the speed of the camera in informing the robot. He said normal cameras use multistep algorithms, which prolong information gathering.
"Many of the algorithms that are popular in computer vision today are not very compatible with [the speed necessary]," he said. "Even the fast ones are multi-layered: They take a long time to come up with the first answer. The algorithms we use on our light field cameras are one-step algorithms -- they're very fast."
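A classic example of the kind of one-step light field algorithm Dansereau describes is shift-and-sum refocusing: each sub-aperture view is shifted in proportion to its offset from the aperture centre and the results are averaged, all in a single pass with no iteration. The sketch below is a minimal illustration of that standard technique, not the team's specific implementation; the function name and parameters are assumptions:

```python
import numpy as np

def refocus(lf, alpha=1.0):
    """Shift-and-sum refocusing over a 4D light field lf[u, v, s, t].

    Each sub-aperture view lf[u, v] is shifted by alpha times its
    offset from the aperture centre, then all views are averaged.
    One pass over the data; alpha selects the focal depth."""
    U, V, S, T = lf.shape
    cu, cv = U // 2, V // 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            shift = (round(alpha * (u - cu)), round(alpha * (v - cv)))
            out += np.roll(lf[u, v], shift, axis=(0, 1))
    return out / (U * V)
```

Because each choice of alpha corresponds to a focal depth, a robot can also estimate distance in the same pass by checking which alpha makes a region sharpest, which is one reason these algorithms are so fast compared with multi-layered computer vision pipelines.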

For more information, check out the full article at The Stanford Daily.

Source: Mallika Singh