Researchers design camera that can see around corners
A research team from Stanford University has designed a camera system that, by analysing single particles of light, can reconstruct room-size scenes and moving objects that are hidden around a corner. The researchers hope the work could help autonomous cars and robots see better.
The camera system builds upon previous around-the-corner cameras the team has developed, adding the ability to capture more light from a greater variety of surfaces, to see wider scenes and farther away, and to scan quickly enough to monitor out-of-sight movement.
“People talk about building a camera that can see as well as humans for applications such as autonomous cars and robots, but we want to build systems that go well beyond that,” said Gordon Wetzstein, assistant professor of electrical engineering at Stanford. “We want to see things in 3D, around corners and beyond the visible light spectrum.”
A key part of the team’s advance was a laser 10,000 times more powerful than the one used a year ago. The laser scans a wall opposite the scene of interest; the light bounces off the wall, hits the objects in the scene, then bounces back to the wall and on to the camera sensors. By the time the laser light reaches the camera only specks remain, but the sensor captures every one, passing each along to a highly efficient algorithm, also developed by the team, that untangles these echoes of light.
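The measure-and-untangle idea can be illustrated with a toy simulation. The sketch below is a deliberately simplified backprojection reconstruction, not the Stanford team's actual algorithm: a pulsed laser samples spots on a visible wall, each spot records photon round-trip times in a histogram, and candidate positions behind the corner are scored by how well their predicted arrival times match those histograms. All names, geometry, and units here are illustrative assumptions.

```python
import math

# Toy confocal non-line-of-sight reconstruction (a simplified sketch of the
# general idea, NOT the team's actual method): laser and sensor share each
# wall spot, and the hidden scene is a single point scatterer.
C = 1.0                   # speed of light (arbitrary units)
BIN = 0.02                # time-of-flight histogram bin width
N_BINS = 200
wall = [-1.0 + 2.0 * i / 63 for i in range(64)]   # laser/sensor spots on wall
hidden = (0.3, 0.8)       # hidden point: (position along wall, depth behind it)

def tof_bin(wx, px, py):
    """Histogram bin for the round trip: wall spot -> hidden point -> wall."""
    return min(int(2.0 * math.hypot(px - wx, py) / (C * BIN)), N_BINS - 1)

# Forward model: each wall spot records one photon arrival at the bin its
# geometry predicts -- the "specks" of light the sensor captures.
transients = []
for wx in wall:
    h = [0.0] * N_BINS
    h[tof_bin(wx, *hidden)] = 1.0
    transients.append(h)

# Backprojection: every candidate voxel sums the measurements found at the
# bins its own geometry predicts; the true hidden point accumulates the most.
best, best_xy = -1.0, None
for ix in range(81):
    x = -1.0 + 2.0 * ix / 80
    for iy in range(71):
        y = 0.1 + 1.4 * iy / 70
        score = sum(transients[i][tof_bin(wall[i], x, y)] for i in range(64))
        if score > best:
            best, best_xy = score, (x, y)

print(best_xy)   # close to the hidden point (0.3, 0.8)
```

In practice the measurements are noisy photon counts rather than clean spikes, and naive backprojection like this is far too blurry and slow for room-size scenes, which is why an efficient reconstruction algorithm is the crux of the team's result.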
“When you’re watching the laser scanning it out, you don’t see anything,” explained Stanford electrical engineering graduate student David Lindell. “With this hardware, we can basically slow down time and reveal these tracks of light. It almost looks like magic.”
The system can scan at 4 fps and can reconstruct a scene at up to 60 fps on a computer equipped with a graphics processing unit (GPU).
The team prioritised practicality when designing the technology, choosing hardware, scanning and image processing speeds, and a style of imaging already common in autonomous car vision systems.
Unlike previous systems that relied on objects that either reflect light evenly or strongly, the Stanford-designed system can handle light bouncing off a range of surfaces, including disco balls, books and intricately textured statues.
The team hopes to move toward testing the system on autonomous research cars while looking into other possible applications, such as medical imaging that can see through tissues. Alongside improvements to speed and resolution, the team will work on making the system even more versatile, to address challenging visual conditions that drivers encounter, such as fog, rain, sandstorms and snow.