The European Machine Vision Association (EMVA) standard 1288 is well established and used worldwide for the objective characterisation of the quality parameters of industrial cameras. The current version, 3.1, will shortly be replaced by release 4.0.
The new release has become necessary because of the progress being made in image sensor technology. In the past, monochrome and colour sensors dominated the camera market, whereas now multimodal image sensors are entering it. These include sensors with an extended spectral range, especially into the shortwave infrared (SWIR); multispectral sensors with more than three colour channels; polarisation sensors; and time-of-flight sensors.
Two other trends are of importance: high dynamic range image sensors with a non-linear characteristic curve, and pre-processing within the camera to optimise image quality.
Release 3.1: linear characteristic curve and no pre-processing
Release 3.1 of standard 1288 can only be applied to cameras with a linear characteristic curve. Furthermore, no pre-processing that changes the temporal noise was permitted, except for simple operations such as binning or time delayed integration (TDI). These prerequisites resulted in a simple linear camera model with just three unknown model parameters: temporal dark noise, quantum efficiency and system gain.
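In the notation commonly used with the standard, this linear model can be sketched as follows (a summary for orientation, with quantisation noise omitted, not the normative text):

\mu_y = K(\eta\,\mu_p + \mu_d), \qquad \sigma_y^2 = K^2\sigma_d^2 + K\,(\mu_y - \mu_{y,\mathrm{dark}})

where \mu_p is the mean number of photons collected by a pixel during the exposure time, \eta the quantum efficiency, \mu_d and \sigma_d^2 the mean and variance of the dark signal in electrons, K the system gain, and \mu_y, \sigma_y^2 the mean and temporal variance of the digital output. The second relation is the photon transfer curve: its slope yields the system gain K directly.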
These parameters can be determined from an irradiation series measuring the linear characteristic curve and the photon transfer curve (temporal noise variance versus mean of the digital camera signal). Further application-oriented parameters describing the quality of the camera, such as the absolute sensitivity threshold, the saturation capacity, the dynamic range and the signal-to-noise ratio (SNR), can be computed from these measurements.
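As an illustration of such an evaluation, here is a minimal Python sketch that fits the photon transfer curve and the characteristic curve; all measurement values and variable names are hypothetical, and quantisation noise is neglected:

```python
import numpy as np

# Hypothetical photon transfer series: one entry per irradiation step.
mu_p     = np.array([0.0, 1000.0, 4000.0, 10000.0, 20000.0])  # photons/pixel
mu_y     = np.array([20.0, 320.0, 1220.0, 3020.0, 6020.0])    # mean signal (DN)
sigma2_y = np.array([16.0, 166.0, 616.0, 1516.0, 3016.0])     # temporal variance (DN^2)

# Photon transfer curve: sigma2_y = K * (mu_y - mu_y_dark) + const.
# A straight-line fit over the linear range yields the system gain K.
K, _ = np.polyfit(mu_y - mu_y[0], sigma2_y, 1)

# Characteristic curve: responsivity R in DN per photon; the quantum
# efficiency then follows as eta = R / K (K converts electrons to DN).
R, _ = np.polyfit(mu_p, mu_y, 1)
eta = R / K

# Temporal dark noise, input-referred in electrons.
sigma_d = np.sqrt(sigma2_y[0]) / K

print(f"K = {K:.2f} DN/e-, eta = {eta:.2f}, dark noise = {sigma_d:.1f} e-")
```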
New in release 4.0: abandonment of a model
For a camera with a non-linear characteristic curve, the linear model of release 3.1 cannot be applied. However, a non-linear camera, or a camera with pre-processing, can be characterised without any model. This is possible thanks to the general system-theoretical approach of standard 1288.
The input signal – the number of photons hitting a pixel during the exposure time – is known by absolute radiometric calibration. The digital output signal and its temporal variance can be measured directly. From both, the characteristic curve and the output SNR are known.
The input SNR, which determines the signal quality, is computed from the output SNR and the slope of the characteristic curve. Everything needed to compute the application-oriented quality parameters, as with the linear model, is then known. Even more importantly, the same measurements as with release 3.1 can be performed. Depending on the properties of the camera, either a linear model or none is used for the evaluation of the measurements. Therefore, release 4.0 will comprise two documents: ‘Release 4.0 general’ and ‘Release 4.0 linear’ – the latter a direct successor of release 3.1.
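The reasoning behind this model-free evaluation can be sketched as follows (an informal summary, not the standard's normative derivation). If the measured characteristic curve is \mu_y = g(\mu_p), a small fluctuation of the input is scaled by the slope of the curve, \sigma_y \approx |g'(\mu_p)|\,\sigma_p, so the input-referred SNR follows directly from measured quantities:

\mathrm{SNR}_{\mathrm{in}}(\mu_p) = \frac{\mu_p}{\sigma_p} = \frac{\mu_p\,|g'(\mu_p)|}{\sigma_y(\mu_p)}

Both g'(\mu_p), estimated numerically from the measured characteristic curve, and \sigma_y(\mu_p) are available without assuming any camera model.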
Comprehensive extensions
In order to cope with modern image sensors, release 4.0 includes many extensions. The most important are:
• Extended wavelength range from UV to SWIR;
• Raw data of any image modality can be characterised according to the standard;
• Characterisation of the polarisation angle and degree of polarisation of a polarisation image sensor;
• Extended characterisation of non-uniformities: split into row, column and pixel non-uniformities (see the sketch after this list);
• Optionally, cameras with lenses, or illumination through a given exit pupil, can be measured. In this way it is also possible to measure image sensors with microlenses that are shifted towards the edge of the sensor;
• A better measure of the linearity of the characteristic curve;
• The double-logarithmic SNR plot will be extended with measurements of the non-uniformity at all intensity steps (important for non-linear cameras).
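To illustrate the row/column/pixel split of the non-uniformities, here is a minimal Python sketch operating on a flat-field image averaged over many frames (to suppress temporal noise); the variable names and the simple variance estimator are illustrative, and the standard defines the exact procedure:

```python
import numpy as np

def split_nonuniformity(img: np.ndarray):
    """Split the spatial variance of an averaged flat-field image into
    row, column and residual pixel components (illustrative estimator)."""
    mean = img.mean()
    row_profile = img.mean(axis=1)   # one mean value per row
    col_profile = img.mean(axis=0)   # one mean value per column
    var_row = row_profile.var()      # row non-uniformity
    var_col = col_profile.var()      # column non-uniformity
    # Remove the row and column structure; the residual variance is the
    # pixel-to-pixel non-uniformity.
    residual = img - row_profile[:, None] - col_profile[None, :] + mean
    var_pix = residual.var()
    return var_row, var_col, var_pix

# Hypothetical usage:
# var_row, var_col, var_pix = split_nonuniformity(flat_field_avg)
```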
Release 4.0 takes into consideration the rapid progress of imaging sensors. It will be possible to characterise a much wider spectrum of cameras and sensors using the standard: UV- and SWIR-sensitive; multispectral; polarisation; intensified, such as EMCCDs; multilinear; and high dynamic range.
Also, cameras with lenses and pre-processing to enhance the image quality can be characterised. Despite the diversity, the quality of cameras can still be described with a minimum set of application-oriented quality parameters. Release 4.0 will be published this autumn.
__
In co-operation with several member companies, EMVA has prepared an education programme with an optional certification as an EMVA 1288 expert. The first courses are already scheduled: in collaboration with Aeon, online training, on 3 and 4 November and 1 to 3 December (www.aeon.de/training_emva1288.html); and with Framos, online, on 19 and 20 May 2021 (www.framos.com/en/company/events/trainings/).
Demonstrator shows power of OPC Machine Vision
Suprateek Banerjee, of VDMA Robotics and Automation, updates on the progress made on OPC UA for Machine Vision, the machine communication standard
Members of the VDMA OPC Machine Vision initiative, together with the OPC UA Foundation, have developed a hardware demonstrator that includes a practical implementation of the OPC UA for Machine Vision (OPC MV) Part 1 companion specification.
‘From the point of view of automation technology and factory IT, this specification represents enormous progress. Image processing systems are among the most complex components in machine building. Their integration is considerably simplified by manufacturer-independent uniform methods for control and data administration,’ said Dr Peter Waszkewitz, software project manager at Robert Bosch Manufacturing Solutions, and a member of the core working group.
OPC MV Part 1 describes an abstraction of the generic image processing system, i.e. a representation of a digital twin of the system. It handles the administration of recipes, configurations and results in a standardised way, while the contents remain manufacturer-specific and are treated as a black box. The demonstrator establishes an infrastructure layer that enables a simplified and uniform integration of all possible image processing systems into higher-level IT production systems (such as PLC, Scada, MES, ERP, Cloud). It demonstrates the generalised control of a vision system and abstracts the necessary behaviour via the concept of a state machine.
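As an illustration of this state machine concept, here is a minimal Python sketch of a generic vision system driven through a handful of standardised events; the state names, events and transitions are simplified and hypothetical, and the companion specification defines the normative model:

```python
from enum import Enum, auto

class State(Enum):
    PREOPERATIONAL = auto()
    INITIALIZED = auto()
    READY = auto()
    EXECUTING = auto()
    ERROR = auto()

# Allowed transitions: (event, current state) -> next state. A higher-level
# IT system controls the vision system only through these generic events,
# while recipes and results remain manufacturer-specific black boxes.
TRANSITIONS = {
    ("initialize",  State.PREOPERATIONAL): State.INITIALIZED,
    ("load_recipe", State.INITIALIZED):    State.READY,
    ("start",       State.READY):          State.EXECUTING,
    ("done",        State.EXECUTING):      State.READY,
    ("fail",        State.EXECUTING):      State.ERROR,
    ("reset",       State.ERROR):          State.INITIALIZED,
}

class VisionSystem:
    def __init__(self) -> None:
        self.state = State.PREOPERATIONAL

    def trigger(self, event: str) -> State:
        try:
            self.state = TRANSITIONS[(event, self.state)]
        except KeyError:
            raise ValueError(f"event {event!r} not allowed in {self.state.name}")
        return self.state

# Hypothetical usage:
# vs = VisionSystem()
# vs.trigger("initialize"); vs.trigger("load_recipe"); vs.trigger("start")
```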
The VDMA Machine Vision initiative
Open Platform Communications Unified Architecture (OPC UA) is a vendor- and platform-independent machine-to-machine communication technology, recommended by the German Industry 4.0 initiative and other international consortiums, such as the Industrial Internet Consortium (IIC), to implement Industry 4.0. The specification of OPC UA can be divided into two areas: the basis specification and companion specifications. The basis specification describes ‘how’ data can be transferred, while the companion specifications describe ‘what’ information and data are transferred.
The OPC Foundation is responsible for the development of the basis specification. Sector-specific companion specifications are developed in working groups, usually organised by trade associations. The VDMA is a key player in the Industry 4.0 initiative.
Working towards Part 2
In Part 1 of the specification, the behaviour of a machine vision system is described and depicted with the help of black boxes. It specifies the way a machine vision system functions, rather than standardising the data in those black boxes. For future parts of the specification, the proprietary input and output data black boxes need to be broken down and substituted with standardised information structures and semantics. This follows the idea of implementing information model structures derived from OPC UA DI (Device Integration, Part 100), as has already been done in Part 1 of the OPC Robotics companion specification. It thereby supports cross-domain interoperability, so that machine vision systems can talk to robots and vice versa, and at a later stage to all kinds of devices.
In September last year, there was a clear consensus among the core working group and various other stakeholders that, for Part 2 of the specification, the topics of interest should be asset management, condition monitoring and result data standardisation. With this aim in mind, in January the OPC Machine Vision core working group kicked off the work for Part 2 of the specification. Unfortunately, shortly thereafter, the world was hit by the pandemic. As an international working group it was difficult to meet physically, but we continued working and met digitally.
The core working group comprises experts from Vitronic, Asentics, Robert Bosch Manufacturing Solutions, MVTec Software, Peer Group, Stemmer Imaging, Silicon Software, Isra Vision, Kolektor, Matrix Vision, Kuka and Beckhoff Automation.
In order to describe a machine vision system at the component and sub-component level, the working group collected several user stories describing the kind of information different user groups would need to know about a vision system and its components. These stories were then used to filter the asset management and condition monitoring parameters for the components of a vision system.
The OPC Machine Vision core working group is in the final stages of this filtering process. The next step is to start on the information model for Part 2. After that, the group will implement the model and include the same information in the existing OPC machine vision demonstrator.
A separate work package has been dedicated to standardising the result data generated by a vision system. As this is a topic that multiple working groups are interested in, the OPC MV core working group is talking with groups involved in other standardisation initiatives, such as OPC UA for Tightening Systems, OPC UA for Grippers, and OPC UA for Robotics.
The VDMA Robotics and Automation association is trying to come up with standardised result metadata that can be customised by all stakeholders, as the first step towards standardising branch-specific result data.
__
The OPC UA for Machine Vision Part 1 companion specification can be downloaded here: https://ibv.vdma.org/en/viewer/-/v2article/render/37795049