
Praying mantis-inspired imager enables real-time spatial awareness

A photograph of the artificial compound eye prototype (image: Submitted photo/Tom Cogill, UVA Engineering)

Researchers have replicated the vision systems of praying mantises to improve motion tracking and depth perception while maintaining a wide field of view.  

A new prototype delivers precise spatial awareness in real time, which could advance imaging in drones, self-driving vehicles, surveillance and security systems, and smart home devices.

In addition, the prototype could cut power consumption by a factor of more than 400 compared with traditional visual systems.

Binocular vision with depth perception in 3D space

A praying mantis’ field of view overlaps between its left and right eyes, creating binocular vision with depth perception in 3D space.
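
The article does not detail how the prototype turns its overlapping views into depth, but the geometric principle behind binocular depth perception can be sketched with the standard stereo relation Z = f·B/d. The function name and the focal-length and baseline figures below are illustrative assumptions, not values from the published system.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Estimate depth (metres) from stereo disparity via Z = f * B / d.

    disparity_px    -- pixel offset of a feature between the two overlapping views
    focal_length_px -- focal length expressed in pixels (assumed value)
    baseline_m      -- separation between the two viewpoints in metres (assumed value)
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)   # zero disparity -> point at infinity
    valid = disparity_px > 0
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Larger disparities correspond to nearer objects:
print(depth_from_disparity([20.0, 40.0], 500, 0.05))   # ~[1.25, 0.625] metres
```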

Combining this insight with optoelectrical engineering and edge computing – where data is processed in or near the sensors that capture it – researchers at the University of Virginia in the US have developed artificial compound eyes that overcome challenges in how machines currently collect and process real-world visual data.

These limitations include accuracy issues, data processing lag times and the need for substantial computational power.

“After studying how praying mantis eyes work, we realised a biomimetic system that replicates their biological capabilities required developing new technologies,” said Byungjoon Bae, a PhD candidate and first author of the team’s recent paper in Science Robotics.

Combining microlenses and multiple photodiodes

The team’s ‘eyes’ mimic nature by integrating microlenses and multiple photodiodes, which produce an electrical current when exposed to light. The team used flexible semiconductor materials to emulate the convex shapes and faceted positions within mantis eyes.

“Making the sensor in hemispherical geometry while maintaining its functionality is a state-of-the-art achievement, providing a wide field of view and superior depth perception,” Bae said.

“The system delivers precise spatial awareness in real time, which is essential for applications that interact with dynamic surroundings.”

Such uses include low-power vehicles and drones, self-driving vehicles, robotic assembly, surveillance and security systems, and smart home devices.

Among the team’s important findings was that the lab’s prototype system could cut power consumption by a factor of more than 400 compared with traditional visual systems.

In-sensor memory component

Rather than using cloud computing, the team’s system can process visual information in real time, nearly eliminating the time and resource costs of data transfer and external computation, while minimising energy usage.

“The technological breakthrough of this work lies in the integration of flexible semiconductor materials, conformal devices that preserve the exact angles within the device, an in-sensor memory component, and unique post-processing algorithms,” Bae said. 

The key is that the sensor array continuously monitors changes in the scene, identifying which pixels have changed and encoding this information into smaller data sets for processing.

The approach mirrors how insects perceive the world through visual cues, differentiating pixels between scenes to understand motion and spatial data. 
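
The exact encoding used in the lab’s prototype is not spelled out here, but the general idea of reporting only the pixels that changed, rather than the full frame, can be sketched in a few lines. The function name encode_changed_pixels and the threshold value are assumptions made for illustration.

```python
import numpy as np

def encode_changed_pixels(prev_frame, new_frame, threshold=10):
    """Report only the pixels whose intensity changed beyond a threshold.

    Returns (row, col, delta) triples -- a compact, event-like record of the
    scene change instead of a full image.
    """
    delta = new_frame.astype(np.int16) - prev_frame.astype(np.int16)
    rows, cols = np.nonzero(np.abs(delta) > threshold)
    return np.stack([rows, cols, delta[rows, cols]], axis=1)

# Example: a mostly static 64x64 scene in which a small 3x3 patch brightens
prev = np.zeros((64, 64), dtype=np.uint8)
new = prev.copy()
new[30:33, 40:43] += 50
events = encode_changed_pixels(prev, new)
print(events.shape)   # (9, 3): far smaller than the 64*64 pixels of a full frame
```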
