Next generation of robot vision relies on embedded architecture and “Touch & Automate”
With a new generation of 2½D and 3D sensors, ISRA VISION sets the next milestone on its innovation roadmap. As “Touch & Automate” is developed further, two established 2½D and 3D sensors will now be equipped with embedded technology, combining adaptable sensor technology with WiFi and on-board processing power for the future of industrial production. Thanks to this embedded technology, automated gripping becomes more intelligent and flexible, and with the new sensors ISRA VISION moves towards networked production.
In recent years, 3D robot vision systems have opened up an enormous range of applications. The flexible technology is used in tasks such as de-palletizing and de-racking. Two of these systems are the established MONO2½D and MONO3D sensors. ISRA VISION is now taking the next step in innovation: to extend the limits of this successful technology, the MONO2½D and MONO3D sensors will be equipped with embedded technology. Manufacturing on highly automated production lines will become more flexible, and cycle times will be reduced further.
Embedded technology with the “Touch & Automate” concept
In the course of the ongoing development of the “Plug & Automate” portfolio of easily installed 3D sensors, the MONO2½D and MONO3D sensors will be equipped with on-board processing power and WiFi. Combined with efficient multi-touch navigation, the “Touch & Automate” concept makes the sensors smart. The combination of on-board processing power and wireless connectivity enables system navigation with smartphones or tablets as well as collaborative data usage. The sensors will also be able to join a sensor network to communicate with each other or with databases. At the same time, the sensors will receive updates and remain characterized by ease of use: the on-board PC is operated via a clearly structured user interface with multi-touch navigation. Thanks to “Touch & Automate”, users will benefit from full process transparency and cost efficiency on their production lines.
Stepping into the third dimension – with four or six degrees of freedom
With the new product series, ISRA VISION enables its sensors to access all spatial dimensions. By determining an object’s distance and its rotation around the vertical axis (MONO2½D), or all spatial dimensions including all rotations (MONO3D), the sensors allow for the automated execution of pick-and-place tasks and 3D object recognition. Both systems use contour-based detection; to robustly recognize the position of an object, they need nothing more than small features such as holes, edges or corners. This opens the application to a wide range of detectable parts and enables the system to scan on the fly. Detectable objects range from standard parts on pallets to vehicle parts such as doors, hoods or tailgates on a rack. Besides de-racking, the system can also be used with entire car bodies, whose precise position must be determined before they are painted. Thanks to the on-board processing power and WiFi connectivity, flexible production becomes possible.
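The release does not disclose the sensors’ actual algorithm, but the general idea of recovering an object’s full 6-DOF pose (rotation plus translation) from a handful of matched features such as hole centres or corners can be sketched with the classic Kabsch rigid-alignment method. Everything below – the function name, the sample points, the pose values – is an illustrative assumption, not ISRA VISION’s implementation:

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Fit rotation R and translation t so that observed ≈ R @ model + t
    (Kabsch algorithm). Assumes known correspondences between a few
    small features, e.g. hole centres or corners of a car door."""
    cm = model_pts.mean(axis=0)          # centroid of CAD model features
    co = observed_pts.mean(axis=0)       # centroid of measured features
    H = (model_pts - cm).T @ (observed_pts - co)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t

# Four hypothetical feature points on a part's CAD model:
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
# The same features as the sensor might measure them on the rack:
# rotated 90 degrees about the vertical axis and shifted.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
observed = model @ Rz.T + np.array([0.5, -0.2, 1.0])

R, t = estimate_pose(model, observed)
# R recovers the 90-degree rotation, t the [0.5, -0.2, 1.0] offset,
# giving the robot the pose it needs for automated gripping.
```

In this sketch the recovered pose would be handed to the robot controller to correct its gripping trajectory; a 2½D variant of the same idea would constrain the fit to distance plus rotation about the vertical axis.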
The new product series of MONO2½D and MONO3D sensors is optimized for minimum cost of ownership and ease of installation. In addition, all cameras are delivered pre-calibrated and provide accurate measurement results across different applications. Return on investment is achieved quickly, and users profit from increased efficiency and yield. With these smart sensors, ISRA continues to advance the “Touch & Automate” concept and, using embedded technology, develops its product portfolio for networked production and INDUSTRIE 4.0.