When it comes to automation, the Holy Grail is probably random bin picking, which has seen a great deal of investment in recent years but few full-scale industrial installations as yet. One reason is that, from a vision perspective, it is incredibly difficult to engineer a robust solution that can discern individual parts for a robot gripper to pick out of a pile of components.
‘There is a big difference in complexity between picking parts stacked in multiple layers and picking from a single layer,’ says Thor Vollset, CEO of Norwegian vision company Tordivel. ‘It’s relatively easy using standard Scorpion Vision Software [from Tordivel] to pick objects on a level surface, and a lot of robot vision is based on this sort of application. The complexity goes up a great deal when picking multiple objects jumbled on top of each other.’
Tordivel provides a complete set of software tools for robot guidance based on its Scorpion Vision Software platform, as well as its Scorpion 3D Stinger camera, a stereovision device. The company has also launched a robot vision solution, Scorpion 3D Stinger for Robot Vision, which consists of robot vision modules from its software platform and one or more of its stereo cameras. The cameras can be supplied with or without lasers for structured light to provide additional contrast to the scene, and are available with resolutions from VGA to five megapixels in monochrome or colour.
The system is designed to pick parts from pallets and has pre-loaded solutions for picking pipes and other cylindrical objects, gears and other circular objects, and blocks. As an example of the capabilities of the system, the company has designed a 3D simulation to show how a solution can be engineered to pick pipes out of a bin at various heights. The simulation uses a Scorpion 3D Stinger camera with four static laser lines to create a small point cloud and locate the pipes in 3D without any scanning.
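As a rough illustration of how a pose might be recovered from such a small point cloud, the dominant direction of the laser-line hits on a pipe’s surface approximates the pipe’s axis. The sketch below is a hypothetical Python/NumPy illustration under that assumption, not Scorpion Vision Software code; all names in it are invented.

```python
import numpy as np

# Hypothetical sketch: estimate a pipe's position and axis from the sparse
# 3D points where a few static laser lines strike its surface.
def locate_pipe(points: np.ndarray):
    """points: (N, 3) array of 3D laser-line hits on one pipe.
    Returns a point on the pipe's axis and the axis direction."""
    centroid = points.mean(axis=0)
    centred = points - centroid
    # The hits are spread mainly along the pipe, so the first principal
    # component of the point spread approximates the cylinder axis.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    axis = vt[0] / np.linalg.norm(vt[0])
    return centroid, axis

# Example: simulated hits along a pipe lying roughly along the x axis
pts = np.column_stack([np.linspace(0, 0.5, 20),
                       0.01 * np.random.randn(20),
                       np.full(20, 0.30)])
origin, direction = locate_pipe(pts)
```

With only a handful of laser lines, a fit of this kind needs far fewer measurements than processing a full scanned point cloud, which is what makes the no-scan approach fast.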
‘Our target is that our 3D solutions should be as fast as 2D solutions,’ states Vollset, explaining that the standard set-up would be a laser scanner mounted above the bin, spending several seconds scanning it to create a large point cloud that is then processed. The stereovision-based Scorpion 3D Stinger camera, on the other hand, can locate the object in real time, even with moving parts – which, says Vollset, is a lot faster than having to scan the bin. The system completes a locate cycle in under one second; in the pipe-picking simulation the processing time is 500ms.
A large part of Scorpion 3D Stinger for Robot Vision was developed in the AutoCast R&D project, which ran from 2008 to 2012 and was sponsored by the Norwegian Research Council. In the project the system was picking from bins where the objects were only partially sorted. ‘This project dealt with a very complex problem that showcased the capabilities of the system,’ explains Vollset, adding that now the company tries to promote simpler, less random solutions – because, in his opinion, there is a bigger market for that. ‘We, in a way, want to engineer straightforward, predictable solutions where objects are more or less ordered,’ he says. ‘With our system we are able to pick objects that are slightly unordered in a robust way.
‘When the subject of 3D vision comes up it is often in relation to random bin picking,’ he continues, which, he says, has seen a lot of development effort put into it. Vollset comments that Tordivel’s strategy is to provide systems that ‘never fail’. ‘We provide robust and proven technology for robot vision, based on 3D vision in most cases. If you target random bin picking it is generally impossible to engineer a system that never fails.
‘With a robot vision system, it is obvious that it will fail some of the time,’ he clarifies, ‘but it should not be because of limitations in the vision system. When I say "never fails", I mean the system will never send out an invalid result. It might be that the camera doesn’t locate the object, but it knows that is the case. This is a lot more sophisticated than giving an approximation of the location of the object, or getting it completely wrong but still sending the result out anyway. Scorpion 3D Stinger would never output an invalid result.’
This is one of the reasons why Vollset emphasises the need for 3D vision in robotic systems, as it makes the systems much more robust. With 2D vision the assumption is made that the part is lying flat and, if it’s not flat, the system will fail. With 3D, within certain limits, the system will be able to handle the variation. ‘There is redundant information with 3D vision, so it’s possible to validate the result. That is the feature of 3D that we consider most valuable,’ says Vollset.
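The validate-through-redundancy idea can be sketched in a few lines of Python. This is a hypothetical illustration, not Tordivel’s code: a plane is fitted to the 3D points, and the leftover, redundant measurements are used to confirm the fit, with ‘not located’ reported instead of a dubious pose.

```python
import numpy as np

# Hypothetical sketch of validation through redundancy: fit a plane to the
# 3D points, accept the result only if the residuals confirm the model.
def fit_plane_validated(points: np.ndarray, max_rms: float = 0.002):
    """points: (N, 3) measurements of what should be a planar face.
    Returns (normal, offset) if the fit validates, else None."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                        # smallest-variance direction
    residuals = (points - centroid) @ normal
    rms = float(np.sqrt(np.mean(residuals ** 2)))
    if rms > max_rms:
        return None                        # report 'not located', never guess
    return normal, float(-normal @ centroid)
```

A 2D system has no equivalent of those residuals: with the flat-part assumption baked in, there is nothing left over to check the answer against.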
An effective 3D robotic system allows the manufacturer to eliminate fixtures, because it locates the part in 3D. One of the initial tests Tordivel carried out was at Raufoss Neuman, a supplier of lightweight aluminium suspension parts to the automotive industry. The solution implemented was designed to replace a 2D vision system being used to guide a robot picking various small parts, where the parts were manually taken out of the box and placed in a fixture for the robot to locate. The Scorpion stereovision system removed the manual step of moving the parts to the fixture by allowing the robot to pick straight out of the box. The parts were not packed solidly in the box, so there were height variations and some random angles, but they were largely ordered. ‘We engineered a 3D system where the robot could pick straight out of the box, and which didn’t require any rearranging of the parts before being manipulated by the robot,’ says Vollset.
Flexible robots
Robots tend to find their way into large-volume production processes, where little changes in terms of the parts or the task the robot is expected to perform. But for smaller set-ups where the parts are constantly changing, a robotic solution might be too expensive. This is the thinking behind Robomotive, a robotic cell developed by Yaskawa Motoman, Beltech and Robotiq that combines the expertise and technology of the three companies into a flexible, adaptable solution.
‘Using robotics to automate a process that involves small batches with a large mix of products is a costly undertaking,’ notes Michael Vermeer, general manager of Robomotive, explaining that product-specific grippers, jigs and feeders are expensive. ‘With our solution, users have a good return on investment. Multiple product-specific grippers and jigs are not necessary, because the robot is flexible; it has eyes, it has adaptive grippers, it can use its own toolbox. With this solution, you don’t need to change the environment because of the robot’s hardware. The robot is human-like and can be placed at a workstation and trained to do the task a human would do.’
A lot of cost and time is involved when switching between different product batches with conventional robotics. With the Robomotive solution, the user can load a number of programs depending on the product and the task and switch between them easily.
The system uses laser triangulation, with a laser from Z-Laser and a camera from Photonfocus. Beltech is Robomotive’s vision partner, while Yaskawa Motoman provided the humanoid robot. Robomotive is the turnkey integrator of the robot cell.
‘At the moment, laser triangulation is the best option for us for robot guidance,’ states Vermeer, although the company is experimenting with other technologies, such as time of flight. ‘Laser triangulation is a very robust, reliable technique that won’t be disturbed by variations in lighting or reflections. We use a filter on the camera to filter out all extraneous light apart from the laser wavelength. We’re experimenting with other 3D vision technologies, but they aren’t as robust as triangulation.’
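The geometry behind laser triangulation is simple. In an idealised set-up – an assumption for illustration, not the actual optics of the Robomotive cell – where the laser beam runs parallel to the camera’s optical axis at a known baseline, the offset of the laser line on the sensor gives the depth directly.

```python
# Idealised laser triangulation: a laser beam parallel to the camera's
# optical axis, offset by baseline b, images at sensor offset u = f * b / z
# for a surface at depth z, so z = f * b / u. All values are assumptions.
f = 0.016   # focal length, metres
b = 0.100   # camera-to-laser baseline, metres
u = 0.002   # measured offset of the laser line on the sensor, metres

z = f * b / u
print(f"depth = {z:.2f} m")   # depth = 0.80 m
```

Because only the laser wavelength passes the camera’s bandpass filter, the offset measurement stays reliable under varying ambient lighting, which is the robustness Vermeer describes.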
Robot guidance begins with gathering a 3D point cloud, which is then processed to locate the object and its orientation. Most of the software was written by Beltech to customise the image processing parameters. The location and orientation of the object are then sent to the robot so that the gripper can pick it up.
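Put together, the guidance loop is short. In the sketch below every name is a hypothetical placeholder, not part of Beltech’s or Robomotive’s software; it shows the flow from point cloud to robot pose, including the explicit ‘not located’ path.

```python
# Hypothetical guidance loop: camera, robot and locate are placeholders
# standing in for the real hardware interfaces and matching algorithm.
def run_pick_cycle(camera, robot, locate):
    cloud = camera.acquire_point_cloud()   # (N, 3) triangulated points
    pose = locate(cloud)                   # 4x4 pose matrix, or None
    if pose is None:
        robot.report("part not located")   # fail explicitly rather than guess
        return False
    robot.move_to(pose)                    # pose expressed in the robot frame
    robot.close_gripper()
    return True
```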
The 3D image has to be interrogated to ensure no other objects block the gripper’s path; the software models the robot’s gripper and its landing zone, which can be either a cylinder of the gripper’s diameter or a full 3D model of the gripper. Defining the zone is based on 3D coordinates, as well as the angle and plane of the object, and care has to be taken to avoid collisions between the gripper and its surroundings. For the system to work, the robot and the vision system have to be calibrated to a common coordinate frame.
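A cylindrical landing zone makes the clearance test cheap. The sketch below uses assumed geometry and invented names – it is not Robomotive’s software – and checks whether any point of the cloud intrudes into a cylinder along the gripper’s approach direction.

```python
import numpy as np

# Hypothetical clearance check for a cylindrical landing zone.
def landing_zone_clear(cloud, grasp_point, approach_dir,
                       radius=0.03, depth=0.15, keep_out=0.005):
    """cloud: (N, 3) points in the common frame; grasp_point: (3,) pick
    location; approach_dir: (3,) unit vector from the part back towards
    the gripper. Returns True if the cylinder of the given radius,
    extending 'depth' metres along the approach, is free of points
    (ignoring points within 'keep_out' of the part itself)."""
    rel = cloud - grasp_point
    along = rel @ approach_dir                            # axial distance
    radial = np.linalg.norm(rel - np.outer(along, approach_dir), axis=1)
    blocking = (along > keep_out) & (along < depth) & (radial < radius)
    return not blocking.any()
```

Swapping the cylinder for a full 3D gripper model tightens the test at the cost of a more expensive point-in-mesh query, which is the trade-off between the two landing-zone definitions described above.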
The Robomotive robot is currently being used in Japan. ‘There are humanoid robots operating in Japan, there are experiments with adaptive grippers, a lot of experiments with 3D bin picking, but the combination of all this is fairly unique,’ says Vermeer. ‘There are also a lot of academic projects concerning service robotics, but these [the robots] are not that reliable. This is an industrial system fit for constant use in a manufacturing plant.’ The Robomotive cell is available for use and testing by potential customers.
Many other companies provide 3D vision solutions suitable for robot guidance – German company Isra Vision, for example, provides various sensors including its Shapescan3D, which uses scanning laser lines and is designed for bin-picking applications. Isra Vision’s sensors are compatible with Gigabit Ethernet and Profinet interfaces, so that the vision component is easier to integrate into a robot cell.
The technology for full-scale, robust bin picking might need further development to make these types of solution commonplace in industry, not just in terms of the vision systems, but also the grippers and feeders. Nevertheless, the 3D solutions available are fairly well advanced and flexible robotic cells like Robomotive are allowing greater degrees of automation in industry.