The Consumer Electronics Show (CES 2018) has kicked off in Las Vegas, showcasing a host of clever tech, much of which uses imaging in one way or another. Whether it’s self-driving vehicles, drones, virtual reality, or devices for healthcare, all can take advantage of vision data.
In the case of virtual and augmented reality, sensors for eye tracking are incorporated in the headsets to give a better user experience. Image sensor manufacturer Omnivision was showing prototype augmented reality glasses that the company built in collaboration with the Hong Kong Applied Science and Technology Research Institute (ASTRI). The glasses use Omnivision's OV9282 image sensor for simultaneous localisation and mapping (SLAM), 3D room scanning, and hand and tool tracking.
Eye tracking also forms the basis of the offering from health technology company RightEye, which provides a range of eye tracking tests to screen for health conditions including concussion, autism, and Parkinson’s disease.
RightEye’s EyeQ system tracks eye movement during a vision test to provide information about how a person’s eyes and brain are working together. An embedded eye tracker measures and analyses eye movements to screen for brain health issues or autism, and the system also provides reports and recommendations to improve reading and learning in young children.
Israeli firm ICI Vision embeds HD cameras, eye tracking algorithms, and retinal projection in eyewear designed to restore perception for those who are visually impaired. The eyewear uses eye tracking to determine what the person is looking at, and then projects an optimised image onto the healthier parts of the retina.
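The general idea can be sketched in a few lines of code: take the region of the camera image that the wearer is looking at and shift it onto a healthier part of the retina. Everything in the sketch below, from the function name to the per-user offset, is an illustrative assumption rather than ICI Vision’s actual method.

```python
import numpy as np

def remap_for_healthy_retina(frame, gaze_xy, offset_xy, patch=120):
    """Copy the patch the wearer is looking at to a healthier retinal location.

    frame      -- HD camera image as an (H, W, 3) uint8 array
    gaze_xy    -- (x, y) gaze point from the eye tracker, in integer pixels
    offset_xy  -- (dx, dy) shift from the damaged retinal area towards healthy
                  retina (assumed known from a per-user calibration step)
    patch      -- half-size of the square region that is relocated
    """
    out = frame.copy()
    gx, gy = gaze_xy
    dx, dy = offset_xy
    h, w = frame.shape[:2]

    # Source window centred on the gaze point, clipped to the image.
    x0, x1 = max(gx - patch, 0), min(gx + patch, w)
    y0, y1 = max(gy - patch, 0), min(gy + patch, h)

    # Destination window shifted towards the healthier part of the retina.
    tx0, ty0 = x0 + dx, y0 + dy
    tx1, ty1 = tx0 + (x1 - x0), ty0 + (y1 - y0)
    if 0 <= tx0 and tx1 <= w and 0 <= ty0 and ty1 <= h:
        out[ty0:ty1, tx0:tx1] = frame[y0:y1, x0:x1]
    return out
```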
Table tennis playing robot
One of the robots on display at CES 2018 is Forpheus from Omron, a robot arm able to play table tennis. The machine brings together many of Omron's technologies: a three-camera system to identify the ball and evaluate players, trajectory prediction, a high-speed robotic arm that moves in response to its AI controller, a motion controller which tells it how to hit the ball, and machine learning to evaluate its opponents' capabilities.
Cameras mounted on the left and right of the robot detect the ping-pong ball in 3D space, with the ball's position measured up to 80 times per second. In addition to the two side cameras, Forpheus has a third, central camera used to evaluate the player and, based on their movements, judge their ability level. The robot can, for instance, predict from the player’s movements what type of shot they are likely to play.
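Locating the ball from the two side cameras is, in outline, a stereo triangulation problem, and predicting where it will arrive amounts to fitting a ballistic model to the most recent 3D positions. The sketch below illustrates that general technique only, assuming calibrated cameras with known projection matrices and ball centroids already detected in each view; it is not Omron's implementation.

```python
import numpy as np
import cv2

def ball_position_3d(P_left, P_right, uv_left, uv_right):
    """Triangulate the ball's 3D position from a calibrated stereo pair.

    P_left, P_right   -- 3x4 camera projection matrices from calibration
    uv_left, uv_right -- (x, y) pixel centroid of the ball in each image
    Returns the ball position as an (X, Y, Z) array in the calibration frame.
    """
    pl = np.array(uv_left, dtype=np.float64).reshape(2, 1)
    pr = np.array(uv_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()

def predict_position(times, positions, t_future):
    """Fit x(t) and y(t) linearly and z(t) quadratically (gravity) to the
    recent samples, then extrapolate the ball position at time t_future."""
    times = np.asarray(times, dtype=np.float64)
    pos = np.asarray(positions, dtype=np.float64)   # shape (N, 3)
    cx = np.polyfit(times, pos[:, 0], 1)
    cy = np.polyfit(times, pos[:, 1], 1)
    cz = np.polyfit(times, pos[:, 2], 2)            # parabolic height
    return np.array([np.polyval(cx, t_future),
                     np.polyval(cy, t_future),
                     np.polyval(cz, t_future)])
```

Running the triangulation at the camera frame rate (up to 80 times per second here) yields a stream of 3D positions, and the simple fit then gives the robot an estimate of where to place its paddle.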
In addition, by evaluating the opponent's movements and the motion of the ball, the machine’s AI can judge whether the opponent is experienced or a beginner. Forpheus will then adjust its level of play to try to maximise the chances of a successful rally.
Artificial intelligence is now behind many of the innovations at CES. List, a research institute of France-based CEA Tech, is exhibiting its DeepManta multi-task deep neural network algorithm, which gives real-time analysis of video streams for applications such as guiding blind people, video surveillance, and visual inspection of products on manufacturing lines.
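As a rough illustration of the multi-task idea, a single shared backbone can be run once per video frame and feed several small task-specific heads, so one pass produces several outputs at once. The sketch below is a generic example of that pattern, not the DeepManta architecture itself; the task heads, names, and sizes are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskVisionNet(nn.Module):
    """Generic multi-task network: one shared convolutional backbone,
    several lightweight task-specific heads (illustrative only)."""

    def __init__(self, num_classes=10, num_attributes=5):
        super().__init__()
        # Shared feature extractor, run once per video frame.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task heads that all share the backbone features.
        self.classifier = nn.Linear(64, num_classes)     # e.g. object category
        self.detector = nn.Linear(64, 4)                 # e.g. one bounding box
        self.attributes = nn.Linear(64, num_attributes)  # e.g. defect flags

    def forward(self, frame):
        feats = self.backbone(frame)
        return {
            "classes": self.classifier(feats),
            "box": self.detector(feats),
            "attributes": self.attributes(feats),
        }

# One forward pass per frame produces all task outputs at once:
# outputs = MultiTaskVisionNet()(torch.randn(1, 3, 224, 224))
```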
The low cost of vision hardware, combined with faster processing and advances in AI, means that imaging is now an integral part of the consumer electronics world.