Canadian machine learning company Algolux won the ‘Most exciting startup’ award at the AutoSens show in Brussels on 20 September.
Algolux develops machine learning algorithms for vision systems, including those used in autonomous vehicles.
Speaking to the AutoSens conference organisers before the event, Felix Heide, CTO and co-founder of Algolux, said that both imaging and perception need to be much more robust for autonomous driving, and camera-based systems are central to that. He noted that there are up to eight cameras in today’s most advanced vehicles, and this will grow to more than a dozen per vehicle in the near future.
One problem is how these systems are built and tuned. ‘Current pipeline approaches propagate errors from each stage to the next, and the stages are tuned independently and subjectively. So, even the most advanced computer vision stacks perform poorly under difficult imaging scenarios, otherwise known as the real world,’ Heide said.
‘I think we stood out because we’re addressing both of these major issues by developing AI to objectively train imaging and perception stacks for optimal image quality and next-generation levels of computer vision performance. This also aligns very well with the IEEE P2020 working group, in which we participate, to establish objective metrics for automotive image quality for both viewing and computer vision, and reinforces the importance of our work,’ he added.
The AutoSens awards were judged by an expert panel drawn from OEMs, Tier 1s and Tier 2s, industry organisations and academia.
One of Algolux’s products is Crisp-ML, which can combine large real-world computer vision training data sets with standards-based metrics and chart-driven key performance indicators to improve the performance of a vision system.
Using deep learning, the company has also developed its Camera-Aware Neural Architecture, which integrates the image formation model (optics, sensor) with the high-level computer vision model.
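The idea of tuning the imaging stage against downstream task performance, rather than in isolation, can be illustrated with a toy sketch. The example below is purely hypothetical and is not Algolux code: it simulates a sensor capture, applies a one-parameter image-formation stage (gamma correction), and selects the parameter that maximises a simple detection score from a stand-in "perception" stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: a bright square "object" on a dark background, captured by a
# simulated noisy sensor (a stand-in for the optics/sensor model).
scene = np.zeros((32, 32))
scene[12:20, 12:20] = 0.8
raw = scene + rng.normal(0, 0.05, scene.shape)

def isp(raw, gamma):
    """Toy image-formation/ISP stage: clip to [0, 1], then gamma-correct."""
    return np.clip(raw, 0, 1) ** gamma

def perception_score(img, threshold=0.5):
    """Toy 'perception' stage: reward detected object pixels, penalise
    false positives from amplified sensor noise."""
    detected = img > threshold
    truth = scene > 0.5
    tp = np.logical_and(detected, truth).sum()
    fp = np.logical_and(detected, ~truth).sum()
    return tp / truth.sum() - fp / (~truth).sum()

# Sweep the ISP parameter and keep the value that maximises the downstream
# perception metric -- tuning the imaging stage against task performance,
# not against subjective image quality judged stage by stage.
gammas = np.linspace(0.2, 3.0, 29)
scores = [perception_score(isp(raw, g)) for g in gammas]
best_gamma = gammas[int(np.argmax(scores))]
print(best_gamma, max(scores))
```

Too little gamma amplifies sensor noise into false detections; too much suppresses the object itself, so the best setting only emerges when imaging and perception are evaluated jointly.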
Related article:
Sensing the road ahead - Rob Ashwell looks at how vision fits into the battery of sensors onboard autonomous vehicles