Xilinx has added to the growing number of processors designed for vision AI acceleration with the launch of its Kria system-on-module (SOM) portfolio.
The first product in the family, the Kria K26, specifically targets vision AI applications in smart cities and smart factories.
The release is the latest AI compute resource targeting factory automation, evidence that chip and compute providers now consider manufacturing a sizeable market – Amazon recently launched a cloud service for vision AI in manufacturing.
The Kria K26 SOM is built on top of the Zynq UltraScale+ MPSoC architecture, which features a quad-core Arm Cortex-A53 processor, more than 250,000 logic cells, and an H.264/H.265 video codec. The SOM also features 4GB of DDR4 memory and 245 I/Os for connecting to virtually any sensor or interface.
The SOM delivers 1.4 TOPS (tera-operations per second) of AI compute, which Xilinx says enables developers to create vision AI applications with more than three times the performance of GPU-based SOMs, at lower latency and power.
In a benchmark test of a video pipeline running AI to read number plates, the Kria K26 delivered 1.5 times the performance of a competing GPU, while consuming 5W of power per video stream compared to the GPU's 7.5W.
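Taking the quoted figures at face value, the combined throughput and power numbers imply a performance-per-watt advantage that can be worked out directly (a rough calculation, assuming the 1.5x figure refers to throughput on the same workload):

```python
# Rough performance-per-watt comparison from the quoted benchmark figures.
# Throughput is normalised: GPU = 1.0, Kria K26 = 1.5 (per the benchmark).
gpu_throughput, k26_throughput = 1.0, 1.5
gpu_power_w, k26_power_w = 7.5, 5.0  # watts per video stream

gpu_perf_per_watt = gpu_throughput / gpu_power_w  # ~0.133 units/W
k26_perf_per_watt = k26_throughput / k26_power_w  # 0.30 units/W

efficiency_gain = k26_perf_per_watt / gpu_perf_per_watt
print(f"Implied performance-per-watt advantage: {efficiency_gain:.2f}x")
# prints "Implied performance-per-watt advantage: 2.25x"
```

In other words, the 1.5x throughput gain and the one-third power reduction compound to roughly a 2.25x efficiency advantage on this workload.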
The machine vision sector is closely watching advances in compute hardware for vision AI. In a discussion during the Embedded World digital show earlier in the year, Olaf Munkelt, managing director of MVTec Software, commented: 'Speed for processing image data is super important, and will be important in the future. Everyone in our vision community is looking at these AI accelerators because they can provide a big benefit.'
He noted that there are many start-ups providing interesting hardware, which can perform 10 or 20 times better than existing GPU hardware from established vendors.
Improvements in performance for running neural networks are being achieved through domain-specific architectures, rather than through advances in chip fabrication. At Embedded World, Jeff Bier, president of the Edge AI and Vision Alliance and of BDTI, said that close to 100 companies now offer processors specialised for deep learning and visual AI. 'This is very different from what the industry looked like a few years ago,' he said.
As an example, last week chip maker BrainChip announced it had begun volume production of its Akida AKD1000 neuromorphic processor chip for edge AI devices.
Xilinx is offering production-ready, vision-accelerated software applications in its Kria SOM portfolio. These applications remove the need for FPGA hardware design work: software developers need only integrate their custom AI models and application code, and can optionally modify the vision pipeline.
Xilinx is also announcing its first embedded app store for edge applications. The company's offerings are open-source accelerated applications ranging from smart camera tracking and face detection to natural language processing combined with smart vision.
Kria SOMs also enable customisation and optimisation for embedded developers through support for Yocto-based PetaLinux. Xilinx is also announcing a coming collaboration with Canonical to provide support for Ubuntu, a Linux distribution widely used by AI developers. Customers can develop in either environment and take either approach to production. Both environments will come pre-built with a software infrastructure and helpful utilities.
The company is also offering a starter kit, which is built to support accelerated vision applications available in its app store.
The starter kit is priced at $199, while the commercial and industrial variants cost $250 and $350, respectively.
The KV260 Vision Starter Kit is available immediately, with the commercial-grade Kria K26 SOM shipping in May 2021 and the industrial-grade K26 SOM shipping this summer. Ubuntu Linux on Kria K26 SOMs is expected to be available in July of 2021.