Researchers at UCLA have demonstrated a 3D imaging method that provides an extended depth range while also imaging around occlusions in a scene.
The work was published in Nature Communications.
The technique is called compact light-field photography (CLIP). Study leader Liang Gao, an associate professor of bioengineering at the UCLA Samueli School of Engineering, said the novel computational imaging framework 'for the first time enables the acquisition of a wide and deep panoramic view with simple optics and a small array of sensors.'
The researchers combined CLIP with lidar sensors. Conventional lidar, without CLIP, would take a high-resolution snapshot of the scene but miss hidden objects.
With CLIP, each of seven lidar cameras in the array takes a lower-resolution image of the scene; the system then processes the outputs from the individual cameras and reconstructs the combined scene as a high-resolution 3D image.
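The general idea of recovering a high-resolution scene from several lower-resolution views can be illustrated with a toy sketch. This is not the CLIP algorithm itself (the paper's reconstruction is far more sophisticated); it is a minimal, hypothetical model in which seven coarse "cameras" each sample a fine 1D depth profile at a different sub-pixel offset, and interleaving their samples recovers the full-resolution profile:

```python
import numpy as np

# Toy model (illustrative only): the true scene is a fine-grained 1D depth profile.
rng = np.random.default_rng(0)
n_cameras = 7               # mirrors the seven-sensor array described above
upscale = n_cameras         # each camera samples every 7th point of the fine grid
scene = rng.random(upscale * 20)  # "high-resolution" ground-truth profile

# Each low-resolution camera sees only every `upscale`-th sample,
# shifted by its own sub-pixel offset k.
views = [scene[k::upscale] for k in range(n_cameras)]

# Reconstruction: interleave the low-resolution views back onto the fine grid.
recon = np.empty_like(scene)
for k, view in enumerate(views):
    recon[k::upscale] = view

print(np.allclose(recon, scene))  # the fine profile is recovered exactly
```

In this idealized setting the recovery is exact because the offsets tile the fine grid perfectly; real multi-view systems must instead solve a noisy, overlapping inverse problem computationally.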
The researchers demonstrated the camera system could image a complex 3D scene with several objects, all set at different distances.
The result is a new class of bionic 3D camera system that mimics the multi-view vision of flies and combines it with depth sensing to achieve multi-dimensional imaging.
'If you’re covering one eye and looking at your laptop computer, and there’s a coffee mug just slightly hidden behind it, you might not see it, because the laptop blocks the view,' explained Gao, who is also a member of the California NanoSystems Institute and runs the Intelligent Optics Laboratory. 'But if you use both eyes, you’ll notice you’ll get a better view of the object. That’s sort of what’s happening here, but now imagine seeing the mug with an insect’s compound eye. Now multiple views of it are possible.' According to Gao, CLIP helps the camera array make sense of what’s hidden in a similar manner.
The researchers state in the paper that 'compact light-field photography will broadly benefit high-speed 3D imaging and open up new avenues in various disciplines.'