
Robotic camera makes evolutionary step after taking inspiration from the human eye

By James Wormald
Rapid, involuntary movements of the eye are what allow humans to track fast-moving objects. Now the same technique could be used to improve robotic vision.

A new camera mechanism, invented by computer scientists at the University of Maryland, mimics the involuntary movements of the human eye to create sharper and more accurate images for use in robotics and commercial image capture.

The process of evolution means that many human systems are far more advanced than they’re often given credit for. By taking inspiration from the complexities of human vision, a team of computer science researchers at the University of Maryland (UMD), US, has invented a new robotic camera mechanism that improves how machines see and react to the world around them.

The system is able to maintain clear and stable vision of fast-moving objects by mimicking the tiny involuntary movements the human eye makes in such situations. “Event cameras are a relatively new technology, better at tracking moving objects than traditional cameras,” says Botao He, a computer science Ph.D. student at UMD and lead author of the team’s research paper, “but today’s event cameras struggle to capture sharp, blur-free images when there’s a lot of motion involved.”

A vision problem

“It’s a big problem, because robots and many other technologies, such as self-driving cars, rely on accurate and timely images to react correctly to a changing environment. So, we asked ourselves: how do humans and animals make sure their vision stays focused on a moving object?”

A solution comes into view

The answer found by He’s team was microsaccades. These are small, quick eye movements made involuntarily when someone tries to refocus. Minute but continuous, these miniature movements allow humans to keep objects, even fast-moving ones, in focus. “We figured that just like how our eyes need those tiny movements to stay focused, a camera could use a similar principle to capture clear and accurate images without motion-caused blurring,” said He.

Replicating microsaccades

By placing a rotating prism inside a prototype camera, which the team named the Artificial Microsaccade-Enhanced Event Camera (AMI-EV), they were able to replicate microsaccades by redirecting the light beams captured by the lens. This worked because the continuous rotation of the prism simulated the human eye’s natural movements, allowing the camera to stabilise the textures of visible objects, just as the eye would.
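The key point is that the prism’s motion is generated by the camera itself, so it is known exactly and can be removed from the data afterwards. The sketch below is not the AMI-EV’s actual algorithm or parameters; it is a minimal Python illustration, under assumed values for the prism’s deflection radius and rotation rate and a simplified event format, of how a known, self-induced offset could be subtracted from an event stream to leave only the scene’s own motion.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the AMI-EV paper):
# the prism deflects the incoming beam by a fixed radius R_PIXELS and
# rotates at a known angular rate OMEGA (radians per second).
R_PIXELS = 5.0
OMEGA = 2 * np.pi * 50.0   # 50 rotations per second, assumed

def prism_offset(t):
    """Pixel offset induced by the rotating prism at time t.

    Because the rotation is produced by the camera itself, this offset
    is known exactly and can be subtracted from every measurement.
    """
    angle = OMEGA * t
    return np.array([R_PIXELS * np.cos(angle), R_PIXELS * np.sin(angle)])

def stabilise_events(events):
    """Remove the self-induced prism motion from a stream of events.

    `events` is an array of rows (t, x, y, polarity), a simplified
    stand-in for an event camera's output, where each event marks a
    brightness change at one pixel. Subtracting the known offset leaves
    only the motion of the scene itself.
    """
    stabilised = events.copy()
    for i, (t, x, y, p) in enumerate(events):
        dx, dy = prism_offset(t)
        stabilised[i, 1] = x - dx
        stabilised[i, 2] = y - dy
    return stabilised

if __name__ == "__main__":
    # Toy stream: a stationary edge seen through the rotating prism.
    # Uncompensated, its apparent position traces a circle; after
    # compensation it stays put at roughly (100, 80).
    times = np.linspace(0.0, 0.02, 10)
    raw = np.array([[t, 100 + prism_offset(t)[0], 80 + prism_offset(t)[1], 1]
                    for t in times])
    print(stabilise_events(raw)[:, 1:3])
```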

“Our eyes take pictures of the world around us and those pictures are sent to our brain, where the images are analysed,” said Yiannis Aloimonos, professor of computer science at UMD and co-author of the paper. “Perception happens through that process and that’s how we understand the world. When you’re working with robots, replace the eyes with a camera and the brain with a computer. Better cameras mean better perception and reactions for robots.”

Implications and applications of human-like robotic eyesight

Any industry that relies on accurate image capture and shape detection can benefit from the innovation, suggest the researchers. “With their unique features, event sensors and the AMI-EV are poised to take centre stage in the realm of smart wearables,” said Cornelia Fermüller, research scientist and senior author of the paper. “They have distinct advantages over classical cameras such as superior performance in extreme lighting, low latency and low power consumption, [making them] ideal for virtual reality applications for example, where a seamless experience and the rapid computations of head and body movements are necessary.”

Early testing suggested AMI-EV could capture motion at tens of thousands of frames per second (fps), compared with the 30 to 1,000 fps of commercial cameras. This quicker, and therefore smoother, motion capture could be important for developing better augmented reality experiences, improving security monitoring and even advancing space astronomy.

The speed of the camera system can “solve many specific problems,” said Aloimonos, “like helping a self-driving car figure out what on the road is a human and what isn’t. As a result, it has many applications that the general public already interacts with, like autonomous driving systems or smartphone cameras.”
 
