Colour is often said to be a matter of taste, and in some applications of colour imaging this can be taken literally. What colour is a ripe banana? It’s a simple question, and it usually has a simple answer: yellow. Ask a scientist or engineer working with colour machine vision, however, and the answer would probably be more complicated. In the rigid, physical terms of machine vision, colour ceases to be an intuitive property of an object, and becomes a complex interplay between the source of illumination, the absorption and scatter properties of the subject, the angles of illumination and viewing, and the response characteristics of the sensor. If you asked such a scientist or engineer what colour a banana is, the response should be ‘under what conditions?’
The Mk.1 Eyeball (or the human eye as we know it) sets a high standard when it comes to colour perception, boasting a dynamic range many orders of magnitude greater than that of any electronic sensor. In addition, the visual cortex of the brain is able to gauge illumination conditions, subconsciously adjusting our perception of colour in order to maintain colour constancy – a term used by psychophysicists (those who study human perception). Colour constancy means that a ripe banana looks yellow to the human eye, whether it’s viewed sideways or upright, in bright sunlight or under the glow of a fluorescent tube.
Imaging systems are not yet able to perform this on-the-fly adjustment, and so many factors must be controlled if a true-colour representation is required, as Ben Dawson, director of strategic development at Dalsa, explains: ‘Colour machine vision doesn’t have colour constancy, so we must artificially limit the lighting, the illumination and the view geometry.’ Sunlight, he says, is the standard to which illumination for colour imaging is compared, as it is spectrally uniform, although difficult to use in a controlled environment. ‘We always have to start with a white light,’ he says, ‘but not all white lights are the same; if you have a “white” LED, it actually tends to be quite blue because of the way the phosphors are generated. Likewise, compact fluorescent tubes have spikes at certain frequencies. Incandescent bulbs do have a good spectrum, but their output varies with respect to their drive current, so they are not often used.’ Incandescent bulbs and sunlight are both examples of black body radiators, meaning that their output colours can be characterised by the single value of colour temperature – around 5,500K for sunlight and around 3,000K (more red) for a halogen bulb.
Michael Schwaer, senior product manager at Basler Vision Technologies, confirms that selecting illumination for colour imaging applications can be a challenge: ‘We have learnt that colour reproduction is very dependent on the light source we use,’ he explains. ‘You can imagine that, when you’re using candle light, it makes the objects you’re looking at appear very red. If you’re using xenon light or a halogen bulb, the object appears differently. We therefore have to compensate our colour correction for whatever light source we’re using.’
Schwaer says different applications place different demands on the imaging system in terms of the quality of colour reproduction required, and the calibration requirements for each may differ. ‘To separate greens from yellows from reds in a traffic light application, for example, is not so difficult, but when we have an application in food inspection, or medical device manufacture, it is important to reproduce the natural colours as closely as possible.’
The human eye is particularly sensitive to green colours, and with good reason: ‘During the evolution of the eye, early humans would have gained an advantage from being able to discern subtle differences in the colour of plants. In nature there is not so much red or blue,’ says Schwaer. In contrast, the highest sensitivity of CCD or CMOS sensors used in machine vision cameras is at red wavelengths. ‘We know that the sensitivities of the cameras we are using are at a different range to those of the human eye – and so, in order to reproduce colours as the eye sees them, colour cameras must be calibrated to the sensitivity curve of the human eye.’ This sensitivity curve is defined by the International Commission on Illumination (CIE) – the ‘standard observer’ curve. For each light source in use, the image processing software (or the camera itself) will apply a calibration matrix, adjusting the intensities recorded by the sensor to correspond to the colours in the standard observer curve.
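In practice, this adjustment often takes the form of a 3×3 colour correction matrix applied to every pixel. The Python/NumPy sketch below shows the general idea; the matrix values are illustrative placeholders only, not a calibration for any particular sensor or light source.

```python
import numpy as np

# Illustrative colour correction matrix (CCM). Real values are derived for a
# specific sensor and light source; these placeholders merely preserve white
# (each row sums to 1.0).
ccm = np.array([
    [ 1.60, -0.45, -0.15],
    [-0.30,  1.45, -0.15],
    [-0.05, -0.50,  1.55],
])

def correct_colours(rgb_image: np.ndarray) -> np.ndarray:
    """Apply the CCM to an (H, W, 3) image with values in [0, 1]."""
    h, w, _ = rgb_image.shape
    corrected = rgb_image.reshape(-1, 3) @ ccm.T  # one matrix multiply per pixel
    return np.clip(corrected, 0.0, 1.0).reshape(h, w, 3)
```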
A given calibration matrix will only last so long, however, as even the most stable of LED light sources degrades slightly over time. In colour-sensitive applications, colour reference samples known as ‘colour checkers’ can be used to recalibrate the machine vision system in situ. Colour checkers are simply pieces of card with highly standardised colours painted (not printed) onto them. Basler cameras, Schwaer says, can calculate a new correction matrix automatically simply by focusing on a colour checker. ‘It’s a nice feature, and we like being able to allow our customers to do this on their site,’ says Schwaer.
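The maths behind such a recalibration can be sketched as a least-squares fit: given the RGB values the camera records for each colour checker patch and the patches’ known reference values, the new correction matrix is whichever matrix best maps one set onto the other. The sketch below illustrates that principle only, and is not Basler’s documented in-camera routine; in a real system the patch values would also be linearised (gamma removed) before fitting.

```python
import numpy as np

def fit_correction_matrix(measured: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Fit a 3x3 correction matrix from colour checker patches.

    measured, reference: (N, 3) arrays of linear RGB patch values, N >= 3.
    """
    # Solve measured @ m ~= reference in the least-squares sense.
    m, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)
    return m.T  # transposed so it can be used as the `ccm` in the earlier sketch
```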
Machines with human vision
Dalsa’s Ben Dawson splits colour machine vision into two categories: human-referenced colour and comparative colour imaging. Human-referenced colour, he says, refers to any imaging system that attempts to mimic how a human would judge colour, e.g. any system calibrated to the standard observer curve. ‘The complexity and variability of human vision make this kind of machine vision a challenging problem,’ he says. Applications for human-referenced colour include print inspection, and the matching of paints, pigments and colorants – areas in which subtle differences are important.
Comparative colour, on the other hand, asks the machine vision system to learn certain colours and report on the presence of a colour, or how close a measured colour is to a specified colour. ‘There is a large market for this kind of machine vision, including agriculture, colour code reading, colour search (finding an object with a certain range of colours), industrial inspection, medical devices, and print inspection, such as the labels on cans or boxes or the printing on bottle caps,’ he says, adding that Dalsa has a significant presence in this market, along with other machine vision vendors.
Comparative colour imaging is used to read the codes on these tyres. Accurate colour reproduction is not required for all applications. Image courtesy of Dalsa
Matrox Imaging is one such machine vision vendor working on colour imaging products, and is particularly active in developing software-based tools for colour imaging. Arnaud Lina, head of the company’s software tool development team, explains that colour matching algorithms are the starting point for many, but not all, comparative colour applications: ‘When the colours of the product you are inspecting are easy to match, you do not need a matching [software] tool,’ he says, adding that distinguishing between a red, a green, or a yellow would be an example of an easy matching task. ‘You would simply cluster the RGB in the hue-space, and that is enough… but when it’s time to deal with shadings or nuances of colour, or when looking at subtle changes of colour to determine if a product is fresh or not, for example, we are no longer looking at large colour changes. Now we need to go with more advanced colour analysis algorithms.’
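The ‘easy’ case Lina describes, telling a red from a green from a yellow by clustering on hue, can be written in a few lines. The hue boundaries below are arbitrary illustrative values, not thresholds taken from Matrox’s tools.

```python
import colorsys

def classify_hue(r: float, g: float, b: float) -> str:
    """Coarsely label a colour by hue; r, g, b are in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    hue_deg = h * 360.0
    if s < 0.2 or v < 0.2:
        return "unsaturated or dark"   # hue is unreliable here
    if hue_deg < 20 or hue_deg >= 340:
        return "red"
    if 40 <= hue_deg < 75:
        return "yellow"
    if 90 <= hue_deg < 160:
        return "green"
    return "other"
```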
Since the first release of colour software tools as part of the Matrox Imaging Library (MIL) in 2008, Matrox Imaging has added functionality to deal with non-uniform colours, such as textures or mixtures of colours. In the food industry in particular, says Lina, a product does not possess a single colour, but rather presents a texture of mixed colours.
While the company’s colour imaging products use the comparative method for the time being, Matrox is aiming towards systems that can be more perfectly calibrated in the future. ‘This means learning the properties of the illumination light, and learning the properties of the camera sensor’s response to colour,’ says Lina. ‘So far the applications we’re working in do not require us to gain such an accurate knowledge of the colour system – that is the lighting, the product and the camera’s response – but more mature matching techniques will one day require this accurate colour matching.’
For truly accurate colour matching, however, no camera can be as accurate as a spectrometer, as Lina is quick to acknowledge, noting that the two technologies are employed in different circumstances. ‘While the goal is ultimately to become as accurate as a spectrometer, it is impossible; the camera gathers a lot of information that the spectrometer cannot see. Also, the response of the camera is not uniform, and it is quantised. An eight-bit camera does not have the dynamic range or the resolution of a spectrometer.’
Dealing with data
Pierantonio Boriero, product line manager at Matrox Imaging, outlines how the oldest applications for colour imaging have evolved: ‘It’s fair to say that the printing industry is the sector that has been doing colour analysis for the longest, in fact since optical inspection was first used in that industry. We’ve been talking to customers in the print industry since the beginning of Matrox. Back then you needed dedicated hardware in order to do the basic colour analysis – you just couldn’t get the throughput with off-the-shelf personal computers. Since then, however, the computational power at the customers’ disposal has increased to the point that most solutions rely purely on software tools.’
Colour imaging is more data-intensive than monochrome imaging, in part because of the way in which the colour data is captured, as Dalsa’s Ben Dawson explains: ‘Colour cameras usually use a Bayer pattern (named after Bryce Bayer, who invented the pattern while at Kodak), where tiny, coloured filters are placed over a monochrome sensor to turn it into a colour sensor. This is the typical sensor found in a consumer camera and in most machine vision cameras. The Bayer pattern effectively reduces the sensor resolution by a factor of two in each direction and introduces colour artefacts,’ he says, adding that Dalsa and other suppliers produce algorithms that remove these artefacts. Higher colour resolution and quality can be obtained by using three separate sensors (red, green and blue spectral bands), but most applications, he says, use the lower-cost Bayer pattern cameras.
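For completeness, converting a raw Bayer frame into a full-colour image is typically a single demosaicing call, for instance with OpenCV’s cvtColor. The file name, frame size and Bayer ordering below are assumptions for illustration; all three depend on the camera in use.

```python
import numpy as np
import cv2  # OpenCV

# Hypothetical raw frame: one 8-bit value per pixel, Bayer-mosaiced.
raw = np.fromfile("frame.raw", dtype=np.uint8).reshape(1024, 1280)  # assumed size

# Interpolate the missing colour samples at each pixel (demosaicing).
# The ordering constant (RG, GR, BG or GB) must match the sensor.
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)
```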
Once demosaiced, colour images contain three times as much data as their monochrome counterparts, and Matrox Imaging’s Lina believes that the difficulty of processing this data may have deterred customers from using a colour imaging solution in the past. ‘I think that the price of a colour camera five years ago and the speed and the power of the available PCs was definitely a limiting factor when deciding whether to go with a colour system, but this is not the case any more. With the multicore CPUs and multi-CPU machines that we have today, the cost of computing power is much lower.’ Boriero adds that the increased computing power made available by the relatively new field of GPU acceleration will be useful to colour imaging customers, as will the new breed of GPUs integrated directly into CPUs (the Sandy Bridge architecture from Intel and the APU architecture from AMD).
Computing power may no longer be an obstacle to the adoption of colour imaging, but Boriero notes that many applications have yet to make full use of it, or are using it for unusual reasons. ‘There are quite a few customers who actually integrate colour cameras into their systems, solely for the purpose of producing a colour image for the operator, even though the image processing is all being done in the background in monochrome,’ he says. The challenge, he adds, is making sure that customers understand the capabilities of modern colour imaging.
- Dalsa: www.dalsa.com
- Basler Vision Technologies: www.baslerweb.com
- Matrox Imaging: www.matrox.com/imaging