Making machine vision robust
10 November 2017
What makes a machine vision system robust? Robustness in this context is more than just reliability: it is reliability that is maintained across the natural variations of the environment in which the system is used, as this article from Mark Williamson at Stemmer Imaging outlines.
A number of factors come into play here, including influences from the surrounding environment, object variations and machine vision component effects. Choosing the optimum components for a machine vision system is therefore a challenging task and benefits from the knowledge and experience offered by machine vision systems integrators and specialist suppliers. There is a very large difference between a solution that works in a demonstration lab environment and one that deals with all the variations that an industrial environment will expose the system to.
Machine vision requirements
Machine vision systems consist of several component parts, including illumination, lenses, camera, image acquisition and data transfer, and image processing and measurement software. The capabilities offered by machine vision have grown exponentially as technology continues to deliver improved performance in all areas. The overall complexity of the system is determined by the specific application requirements. Choosing the optimum components for a robust vision system should be based not only on their ability to achieve the required measurement (appropriate resolution, frame rate, measurement algorithms etc) but also on external machine and environmental influences and conditions. Especially in an industrial environment, these can include part variations, handling, positioning, the process interface, vibrations, ambient light, temperature, dust, water, oil and electromagnetic radiation. For extremely hostile environmental conditions, it may be necessary to utilise specialist housings to protect machine vision components. A common example would be the use of camera housings in hygienic environments that require washdown capability. However, there are many industrial applications where various environmental conditions can be accommodated using the most appropriate ‘off the shelf’ components.
The challenges posed by external factors have implications both for potential damage to the machine vision components themselves and for the effects they might have on the actual measurements. This is well illustrated by a vision system that has to cope with temperature variations. Many modern cameras are built to work in temperatures as low as -5°C or as high as 65°C without damage. However, increased temperatures lead to more noise in the image from the camera sensor; this can be countered by ensuring that sufficient illumination is used to improve the signal-to-noise ratio. Temperature also affects the performance of LED illumination, the most commonly used illumination source in machine vision: as LEDs heat up, their brightness drops. This can be compensated for by using a lighting controller that adjusts light output in line with a temperature compensation profile for the LED. LEDs themselves generate heat as well as light, which can cause accelerated ageing or even total failure, so efficient heat management is required in the system design. Other components can be chosen for their temperature resistance.
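As a rough sketch of the lighting-controller compensation described above, a controller can scale the LED drive level against a temperature derating profile. The derating coefficient and drive values below are illustrative assumptions, not taken from any particular LED datasheet or controller API:

```python
# Sketch of LED temperature compensation, assuming a simple linear
# derating model (all coefficients are illustrative).

def compensated_drive_level(base_level, temp_c, ref_temp_c=25.0,
                            derating_per_c=0.005):
    """Scale the drive level so light output stays roughly constant
    as the LED warms up.

    base_level     -- drive level (0..1) giving the target output at
                      the reference temperature
    derating_per_c -- fractional brightness loss per degree C above
                      the reference (illustrative value)
    """
    # Relative brightness of the LED at the current temperature
    relative_brightness = 1.0 - derating_per_c * (temp_c - ref_temp_c)
    # Drive harder to compensate, clamped to the controller's maximum
    return min(base_level / relative_brightness, 1.0)

# A 20 degC rise above reference costs ~10% brightness, so the
# controller raises the drive level accordingly:
print(round(compensated_drive_level(0.80, 45.0), 3))
```

In practice the compensation profile would come from the LED manufacturer's derating curve rather than a single linear coefficient.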
For example, industrial and embedded PCs are available that combine the computing power needed for modern vision systems with the ability to operate over a wide temperature range without fans, a significant source of failure in PCs. While this illustrates how vision components can be chosen to provide a robust solution, it is also important to recognise that temperature can affect the object being measured. For example, thermal effects can cause expansion or contraction, particularly in metal components, leading to variations in their actual linear and volumetric dimensions. In 3D measurement systems, a change in the geometry of the 3D sensor will introduce errors unless the sensor's calibration includes temperature compensation. Many other environmental conditions can be addressed by choosing the optimum components. For example:
Vibration & shock – Many modern cameras are designed with high resistance to vibration and shock. Robot or track-grade cables are available for applications where the camera moves. Lockable connectors prevent them being dislodged by vibration. Ruggedised PCs and embedded computers offer good mechanical stability. Fixed focus lenses in metal mounts with lockable screws provide shock and vibration protection. Filters can provide simple protection of the lens surface.
Ambient light – The use of daylight cut filters can make an application independent of ambient light such as changing sunlight. By using a high-brightness pulsed LED, reducing the sensor exposure time and stopping down the aperture, the effects of ambient light can be minimised. Using a wavelength such as IR for measurements reduces the influence of visible-light fluctuations.
Dust/dirt/water – Many cameras are available in housings rated to IP65/67, which effectively protect against dust, dirt and water splashes. Dust, dirt, liquids or vapours can stick to the LED or the surfaces of the lens system, reducing the light reaching the sensor. This can be overcome by increasing the camera gain, by software processing of the image or by adjusting the output of the LED.
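The thermal expansion of the measured object mentioned earlier can be quantified with the standard linear-expansion formula ΔL = α·L·ΔT. A minimal sketch, using a typical handbook value of α for steel and an illustrative part size and temperature swing:

```python
# Linear thermal expansion of a measured part: delta_L = alpha * L * delta_T.
# The expansion coefficient for steel (~12e-6 per degC) is a typical
# handbook value; the part length and temperatures are illustrative.

def thermal_expansion_mm(length_mm, alpha_per_c, delta_t_c):
    """Change in length (mm) for a given temperature change."""
    return length_mm * alpha_per_c * delta_t_c

# A 200 mm steel part measured at 35 degC instead of the 20 degC it was
# calibrated at grows by:
growth = thermal_expansion_mm(200.0, 12e-6, 15.0)
print(f"{growth * 1000:.0f} um")  # 36 um
```

A 36 µm shift of this kind can easily exceed the tolerance band of a precision measurement, which is why temperature-compensated calibration matters.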
These and other factors affect the quality of the images produced by the sensor, which is critical since these images are used for the actual measurements.
Machine vision measurements are handled according to the system configuration. Smart cameras have image acquisition, processing and analysis capabilities embedded within them. Compact embedded vision systems, designed for demanding machine vision and automation applications requiring multiple cameras, provide these capabilities within the processing unit, while PC-based systems run the software on the PC. The accuracy and repeatability of results depends on the particular software algorithms used and their sub-pixel accuracy. High-quality software products and libraries often provide more robust tools than cheaper or open-source alternatives, but the differences can often only be evaluated by direct comparison under varying inspection conditions.
Today's vision systems can even tolerate a limited degree of variation in product size and shape, and can recognise classes of natural product with their inevitable variations within them. Even with the most robust vision system, however, external influences can lead to poor measurement results. For example, vibrations can lead to blurry images, while variable part feeding could lead to variable image perspectives. Motion blur can arise when using too long an exposure time to image moving objects.
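The motion-blur point above reduces to simple arithmetic: blur in pixels is object speed multiplied by exposure time, divided by the spatial resolution of the optics. A minimal sketch, with illustrative speeds and resolution:

```python
# Maximum exposure time before motion blur exceeds a given number of
# pixels, assuming simple linear motion. The speed and mm-per-pixel
# figures are illustrative, not from any specific system.

def max_exposure_s(object_speed_mm_s, mm_per_pixel, max_blur_pixels=1.0):
    """Longest exposure (s) that keeps blur within max_blur_pixels."""
    return (max_blur_pixels * mm_per_pixel) / object_speed_mm_s

# A part moving at 500 mm/s, imaged at 0.1 mm per pixel:
t = max_exposure_s(500.0, 0.1)
print(f"{t * 1e6:.0f} us")  # 200 us
```

Exposure budgets this short are one reason pulsed, overdriven LED illumination is so common for imaging moving parts.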
What you see is not what you get
One pitfall for the untrained machine vision user is the significant difference between the human eye and even the most advanced camera. The eye automatically adapts to scenes with a very wide dynamic range, while a camera with fixed settings cannot capture very bright and very dark areas at the same time. Sunlight through a roof light, or the shadow of a tall machine operator, can change a camera's image where the human eye would compensate without you even noticing.
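To put a number on the camera side of this gap, the quantisation-limited dynamic range of an n-bit camera output is 20·log10(2^n) dB. The bit depths below are illustrative assumptions, not properties of any particular camera:

```python
import math

# Quantisation-limited dynamic range of an n-bit camera output in dB.
# Real sensor dynamic range also depends on noise and full-well
# capacity; this only shows the upper bound set by the bit depth.

def sensor_dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(sensor_dynamic_range_db(8), 1))   # 48.2 dB for 8-bit output
print(round(sensor_dynamic_range_db(12), 1))  # 72.2 dB for 12-bit output
```

Even the 12-bit figure falls well short of the range the adapted human eye can span across a scene, which is why lighting must be controlled rather than left to the camera.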
Planning & specifying
Planning, specifying and implementing a machine vision system that is fit for purpose should involve more than simply choosing the most robust machine vision components. One way is to make use of the VDI/VDE/VDMA 2632 series of standards for machine vision, published by the VDI/VDE Society Measurement and Automatic Control, developed in conjunction with VDMA Machine Vision in Germany. Part 2 of these standards is the ‘Guideline for the preparation of a requirement specification and a system specification’ which places particular emphasis on the representation and description of influencing factors as well as on their effects. This framework begins the specification process by evaluating the application in detail. This will include:
- Identifying the exact measurement task to be undertaken
- Identifying the exact objective of the testing, characteristics to be validated, specimen parts to validate, special requirements
- Identifying all the details about the test object such as range of types, preliminary processes, object contamination, thermal/ mechanical object stability
- Accurately describing the scene in terms of positioning, machinery situation, any disturbing environmental influences
- Accurately describing the process including process integration, interfaces, spatial constraints, operating modes
- Determining any additional information, such as the human-machine interface, operating concept and visualisation.
Following the VDI/VDE/VDMA 2632 process not only allows the determination of an optimised solution but also ensures that if proposals are sought from several suppliers, they all follow the same terms, definitions and terminology. This allows exact 'like for like' comparisons to be made. To raise awareness of how the VDI/VDE 2632-2 standard can help to smooth the successful integration of machine vision into production equipment, Stemmer Imaging holds a number of training courses in association with the European Imaging Academy. These are ideal for end users looking to embark on a machine vision project, as attendees will learn what questions to ask suppliers, how to evaluate proposals and how to judge the completeness of any proposal. In this way, users can be confident that they will get a truly robust vision system.