Greg Blackman reports from the Embedded Vision Summit in Santa Clara, where Allied Vision launched its new camera platform
Allied Vision has launched a €99 camera with an onboard ASIC processor aimed specifically at the embedded vision market. The camera is intended to bridge the high-performance, costly, low-volume industrial vision market and the higher-volume, lower-cost embedded market.
It was launched at the Embedded Vision Summit, a computer vision conference organised by the Embedded Vision Alliance and held in Santa Clara, California, from 1 to 3 May.
Andreas Gerk, Allied Vision’s CTO, said during the show that typical cameras in the embedded market are not as feature-rich as those in the machine vision sector; Allied Vision hopes to offer embedded vision developers some of the functionality found in machine vision through its new product line. Gerk added that the new camera platform is ‘totally different to what we have done before’.
Embedded vision is a hot topic in the machine vision sector at the moment: the VDMA organised a panel discussion on it at the Vision show in Stuttgart last year, and Basler has introduced Imaginghub, an online vision community for those building embedded vision solutions. Mark Hebbel, head of new business development at Basler, gave a tutorial at the Embedded Vision Summit on choosing time-of-flight sensors for embedded applications.
Other notable machine vision names that were exhibiting at the event in Santa Clara included Ximea, MVTec, Euresys, and Vision Components.
Jeff Bier, the founder of the Embedded Vision Alliance, said during the conference that embedded vision can mean many things: it can be an industrial camera with a processor inside; an embedded system with an integrated camera or with an external camera; or even a system sending images to the cloud.
Neural networks
Half of the technical insight presentations at the conference focused on deep learning and neural networks: algorithms that are trained on large amounts of data to recognise objects in a scene, as opposed to the traditional approach of writing an algorithm for a specific task. Bier said that 70 per cent of vision developers surveyed by the Alliance were using neural networks, a huge shift from the 2014 summit only three years ago, when hardly anyone was using them.
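To make the contrast concrete: instead of an engineer hand-coding a rule such as ‘a defect is any region darker than threshold T’, a network is shown labelled examples and adjusts its own weights until it can separate them. The toy sketch below, using only NumPy, trains a single artificial neuron on synthetic data; the data, names and parameters are ours for illustration, and real vision networks are vastly larger, but the principle of learning from examples rather than programming rules is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: two features per sample (say, brightness and
# texture), label 1 for "object present", 0 for "object absent"
X = np.vstack([rng.normal(2.0, 0.5, (100, 2)),    # class 1 samples
               rng.normal(-2.0, 0.5, (100, 2))])  # class 0 samples
y = np.concatenate([np.ones(100), np.zeros(100)])

w = np.zeros(2)   # weights, learned from the data rather than hand-coded
b = 0.0           # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: repeatedly nudge the weights to reduce prediction
# error on the labelled examples
for _ in range(500):
    pred = sigmoid(X @ w + b)
    error = pred - y
    w -= 0.1 * (X.T @ error) / len(y)
    b -= 0.1 * error.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.0%}")  # ~100% on this easy toy data
```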
Bier gave a presentation at the conference predicting that the cost and power consumption of the computation required for vision will fall by a factor of 1,000 over the next three years, much of it thanks to neural networks.
Bier broke the figure down into three compounding factors: a 10-times improvement in the efficiency of the neural networks themselves, which have so far been developed largely for accuracy rather than efficiency; a 10-times improvement in the efficiency of the processors running them; and a 10-times improvement in the software that mediates between the processors and the algorithms.
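Written out, the arithmetic behind the prediction is simply the product of the three factors (the decomposition is Bier’s; the notation below is ours):

$$
\underbrace{10\times}_{\text{network efficiency}} \;\cdot\; \underbrace{10\times}_{\text{processor efficiency}} \;\cdot\; \underbrace{10\times}_{\text{software stack}} \;=\; 1000\times
$$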
In an article on the Embedded Vision Alliance’s website, Bier noted five computer vision trends likely to have a big impact on society in general: huge amounts of image data; deep learning; 3D sensing; simultaneous localisation and mapping (SLAM), used in robotics; and computing on the edge, a term meaning that processing is done on the device rather than on a server or in the cloud.
The advances in computer vision are opening up all kinds of new applications for vision technology, from the embedded vision inside Microsoft’s HoloLens augmented reality headset, the subject of a keynote from Marc Pollefeys, director of science at Microsoft and a professor at ETH Zurich, to cameras that generate analytics for retail. Embedded vision also has the potential to disrupt more traditional markets such as surveillance, the topic of a presentation by Michael Tusch of ARM.
Rudy Burger of Woodside Capital Partners made the point, though, that there have not actually been many large-scale embedded vision products; he cited Kinect and Mobileye, which Intel acquired earlier this year, as two examples. ‘We’re just at the very beginning,’ he said.
Turning back to machine vision, Arun Chhabra of 3D surface inspection company 8tree gave a presentation about building an embedded 3D vision system for mapping dents on aircraft, a task that has traditionally been extremely rudimentary and labour-intensive. 8tree’s system is a 3D scanner that operates by pattern projection and can annotate the area of the plane being inspected to measure any dents.
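Pattern-projection scanners of this kind generally recover surface shape by triangulation: a known pattern is projected onto the surface, and the shift of pattern features seen by a camera offset from the projector encodes depth. The Python sketch below illustrates that basic geometry with hypothetical numbers; it is a simplified textbook model for illustration, not 8tree’s actual implementation.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_len_px, baseline_m):
    """Classic triangulation: depth = f * B / d.

    disparity_px : observed shift of a projected pattern feature
                   between its expected and measured image position
    focal_len_px : camera focal length expressed in pixels
    baseline_m   : distance between projector and camera centres
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    # Guard against division by zero where no pattern shift was detected
    with np.errstate(divide="ignore"):
        return np.where(disparity_px != 0,
                        focal_len_px * baseline_m / disparity_px,
                        np.inf)

# Toy example: a dent shows up as a locally different disparity,
# because that patch of the surface sits at a different depth
disparities = np.array([[100.0, 100.0, 100.0],
                        [100.0,  99.0, 100.0],   # pattern shifted by the dent
                        [100.0, 100.0, 100.0]])
depth_map = depth_from_disparity(disparities, focal_len_px=1400, baseline_m=0.05)
dent_depth_mm = (depth_map[1, 1] - depth_map[0, 0]) * 1000
print(f"estimated dent depth relative to surface: {dent_depth_mm:.2f} mm")
```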
Embedded vision is not just a new market for machine vision companies; the way vision in general is being deployed is changing, which in turn could affect machine vision itself. Allied Vision, Basler and others are starting to provide embedded vision products to cater for these new ways of using the technology, so that if a customer asks whether a system can run on an ARM chip, for instance, they are able to deliver that.