
Turning inspectors into detectives

24 June 2022

Founded on teaching a machine to recognise optical features or patterns, machine vision is at the forefront of applied artificial intelligence technologies. Martin Short explores the potential

WHEN MACHINES can learn new skills in a way that comes naturally to humans, engineers will have conquered some of the most challenging uncharted territory of industrial automation.

Traditional machine vision inspection tools use “if-then” rules to compare images against a set of pre-determined geometrical or measurement parameters to answer specific questions. The system ‘sees’ a pattern, a colour, contrast, or measurement and compares it to what a ‘good’ product should look like.
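The "if-then" logic of a traditional rule-based inspection can be sketched in a few lines. This is an illustrative example only, with hypothetical parameter names and tolerance values; a real system would derive its measurements from image processing.

```python
# Minimal sketch of a rule-based inspection: fixed measurements are
# compared against pre-determined tolerances with explicit if-then rules.
# All thresholds here are hypothetical.

def inspect(measured_diameter_mm, measured_contrast):
    """Return 'pass' only if every pre-determined rule is satisfied."""
    # Rule 1: the diameter must sit inside a fixed tolerance band
    if not (9.8 <= measured_diameter_mm <= 10.2):
        return "fail"
    # Rule 2: the print contrast must exceed a fixed threshold
    if measured_contrast < 0.6:
        return "fail"
    return "pass"

print(inspect(10.05, 0.72))  # pass
print(inspect(10.05, 0.40))  # fail: contrast below threshold
```

The limitation the article goes on to describe follows directly: every acceptable variation must be anticipated and encoded as another rule.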

But, for products or scenes where there are potentially an infinite number of variations, the expert advice has always been that machine vision has its limits; some products, processes or components have been just too varied to teach to a machine.

These barriers are being swept away by deep learning. Deep Learning machine vision mimics human responses to make sense of more unpredictable visual data that can’t be classified with a set of rules. Deep Learning can now give a machine the skill to make judgements based on its knowledge of a highly variable dataset, to solve applications such as spotting defects or foreign objects, localising objects in the camera’s field-of-view, or classifying natural products based on visual appearance.

Examples versus rules

A Deep Learning system is taught by being shown many real-life variations of the same product. For this reason, some people call it an "example-based", as opposed to a "rule-based", process. Unlike traditional vision systems, there is no need to select from the conventional toolbox of algorithms used to identify defects, such as pattern finding or edge detection. Deep Learning cameras can automatically detect, verify, classify and locate objects or features by referring to the complete library of images they have previously learnt.
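The contrast with the rule-based approach can be illustrated with a toy "example-based" classifier. The sketch below stands in for inference with a nearest-neighbour lookup over a library of labelled examples (here, tiny four-pixel "images" with made-up values); a real Deep Learning system learns a neural network from thousands of images, but the principle — classify by resemblance to examples, not by explicit rules — is the same.

```python
# Hedged sketch of "example-based" classification: a new image is judged
# by its similarity to a library of labelled examples rather than by
# hand-written rules. The 4-pixel images and labels are illustrative.
import math

library = [
    ([0.9, 0.9, 0.9, 0.9], "good"),   # bright, uniform sample
    ([0.9, 0.1, 0.9, 0.9], "bad"),    # dark blemish in one region
]

def classify(image):
    """Return the label of the most similar example in the library."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(library, key=lambda example: dist(example[0], image))[1]

print(classify([0.85, 0.90, 0.88, 0.90]))  # good: close to the uniform example
print(classify([0.90, 0.15, 0.90, 0.88]))  # bad: matches the blemished example
```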

Deep Learning vision tools can be used to sort and classify products or components that could be organic, like wood or food; easily deformable, like a plastic bottle; widely variable, like solder spots on an electronic component or glue spots on an automotive part; or they could be highly-reflective, such as a metal part or a packaging film.

Intelligent inspection

With SICK’s Intelligent Inspection SensorApp, which runs on its InspectorP 2D vision sensors, users begin by collecting example images of their product, package, or component in realistic production conditions. Following step-by-step prompts, they teach the system to recognise examples as pass or fail. The images are uploaded to SICK’s cloud-based training service, dStudio, where the training process is completed by specially-optimised SICK neural networks. The custom-trained Deep Learning solution is then downloaded to the vision sensor and the automated inference process can begin with no further cloud connection necessary.

The Intelligent Inspection SensorApp also offers the flexibility to retrain machines when new products are added, adapt when processes change, and respond when a wider variety of items is being produced at the same time.

Anomaly detection

Now, SICK is expanding its Intelligent Inspection Deep Learning service to incorporate anomaly detection. In Deep Learning classification and sorting solutions, a system is trained with many examples to decide whether an object belongs to a pre-defined category. From this, it can learn to decide whether an inspection is ‘good or bad’. In anomaly detection, Deep Learning similarly inspects and evaluates data from a scene, with many variations, to give a pass or fail judgement. However, in this case, it is looking for ‘outliers’ or defects within a region of interest to decide the pass or fail. The system can be trained by being shown only a few ‘good’ images.
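The idea of learning only from good examples can be sketched with simple statistics: model the normal appearance, then flag anything that deviates strongly from it. This is an illustration of the principle only — real anomaly-detection systems model learned deep-feature distributions, not raw per-pixel statistics, and the pixel values below are invented.

```python
# Hedged sketch of anomaly detection trained only on 'good' examples:
# model the normal appearance (per-pixel mean and spread), then flag
# any strong deviation as an outlier. The 4-pixel images are illustrative.
import statistics

good_images = [
    [0.90, 0.88, 0.91, 0.89],
    [0.89, 0.90, 0.90, 0.90],
    [0.91, 0.89, 0.88, 0.91],
]

# Learn what 'normal' looks like from good examples alone
means = [statistics.mean(px) for px in zip(*good_images)]
spreads = [max(statistics.stdev(px), 0.01) for px in zip(*good_images)]

def is_anomalous(image, z_threshold=5.0):
    """Fail if any pixel deviates far beyond the normal variation."""
    return any(abs(p - m) / s > z_threshold
               for p, m, s in zip(image, means, spreads))

print(is_anomalous([0.90, 0.89, 0.90, 0.90]))  # False: within normal variation
print(is_anomalous([0.90, 0.20, 0.90, 0.90]))  # True: dark defect at pixel 2
```

Note that no ‘bad’ example was ever shown: the defect is caught purely because it falls outside the learned norm.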

Where traditional imaging tools must be programmed with rules covering every possible defect, a Deep Learning system can be trained to recognise when there is an anomaly outside the norm, while tolerating numerous acceptable variations in the objects being inspected. So, for example, it could be trained to recognise when a film label is applied out of alignment on a bottle. Or it could be trained to detect when there is dust or a scratch on a reflective dashboard display.

Usefully, with anomaly detection, operators can then build up a real-time ‘heat map’ of defects, which can be used to identify and correct abnormal trends in a timely manner. We are already seeing how data from smart sensors can be visualised on smartphones, watches or PCs through new digital services, often offered by the sensor manufacturers themselves. Simple graphical user interfaces enable operators to better understand and interpret key process parameters in real time, and to analyse historical trends, making it easier to predict what will happen next.
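A defect heat map is, at its core, a spatial tally: each detected anomaly's position is binned into a grid cell and counted, so recurring problem areas stand out. The sketch below illustrates this with hypothetical grid dimensions and defect coordinates; it is not SICK's implementation.

```python
# Hedged sketch of a defect "heat map": bin each anomaly's pixel
# coordinates into a coarse grid over the field of view and count them.
from collections import Counter

GRID_W, GRID_H = 4, 3  # illustrative grid resolution

heat = Counter()

def record_defect(x, y, img_w=640, img_h=480):
    """Map a defect's pixel position to a grid cell and increment it."""
    cell = (min(x * GRID_W // img_w, GRID_W - 1),
            min(y * GRID_H // img_h, GRID_H - 1))
    heat[cell] += 1

# Hypothetical detections: three cluster near the top-left of the scene
for x, y in [(30, 40), (55, 60), (48, 35), (600, 400)]:
    record_defect(x, y)

print(heat.most_common(1))  # [((0, 0), 3)]: the hottest cell
```

Rendering the counts as colours gives the heat map; the clustering itself is what points engineers at the abnormal trend.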

Usually, the data is displayed using graphic interfaces custom developed for that application to highlight the most important metrics. Meanwhile, the data can also be integrated with a higher-level system, such as an MES or ERP system, to enable management decisions to be taken.

Overall equipment effectiveness

With a Machine Vision sensor, while the output of an inspection may be binary - a pass or fail decision - the data being processed along the way contains a host of valuable information to help production teams make judgements that can improve Overall Equipment Effectiveness (OEE). Broadly speaking, calculating OEE depends on three metrics: machine availability, volume of output, and quality. Machine Vision can provide data for two of them: output volume and quality.
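The standard OEE formula multiplies the three metrics together, and the vision sensor's pass/fail counts feed the quality term directly (and can support the output term, since every inspected part is counted). A minimal sketch, with invented figures:

```python
# Hedged sketch of the standard OEE calculation:
# OEE = availability x performance x quality.
# A vision sensor's counts supply the quality term (good/total parts)
# and can support the performance term; availability comes from elsewhere.

def oee(availability, actual_output, ideal_output, good_parts, total_parts):
    performance = actual_output / ideal_output
    quality = good_parts / total_parts  # from vision pass/fail counts
    return availability * performance * quality

# Illustrative shift: 95% uptime, 900 of 1000 ideal parts, 880 passed
print(round(oee(0.95, 900, 1000, 880, 900), 3))  # 0.836
```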

Investigating root causes

The capability of Deep Learning algorithms to detect ‘outliers’ to the expected norms, in particular, makes it clear how machine vision systems can start to add more value to processes overall. Combine Deep Learning anomaly detection with real-time digital monitoring and visualisation software and it becomes possible to trend results and investigate root causes. A production manager could, for example, look to see where differences in anomalies appear between shifts. Perhaps the position of a label applied to a bottle is drifting. Is this because operators set the machine up differently on different days? How many times has the label position not been perfect to the left, right, top or bottom? What can we infer from the results? Often, seeing a visual representation of trends can help engineers spot surprising patterns and make rapid judgements.
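The shift-comparison idea above amounts to tallying anomalies by shift and by direction of drift. A minimal sketch, with invented shift names and detections:

```python
# Hedged sketch of root-cause trending: tally which way the label
# drifted, per shift, so set-up differences between shifts stand out.
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))

def log_anomaly(shift, direction):
    """Record one label-misalignment anomaly against a shift."""
    counts[shift][direction] += 1

# Hypothetical detections over a few days of production
for shift, direction in [("day", "left"), ("day", "left"),
                         ("night", "right"), ("day", "left"),
                         ("night", "right"), ("night", "top")]:
    log_anomaly(shift, direction)

for shift, directions in counts.items():
    print(shift, dict(directions))
# day shift drifts left, night shift drifts right: a set-up difference
```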

At the moment, most digital monitoring systems for sensors stop at visualisation, but it is easy to see how, in the future, the final piece of the jigsaw is for the system to self-optimise, spot trends and make adjustments to adapt the process in real time. So instead of just providing visual data for interpretation and historical analysis, a ‘closed loop’ is reached where the data output can automate the system’s responses and correct the trend.

Deep Learning has taken off remarkably quickly as a powerful new machine vision tool. It is easy to see how, in future, the ability to visualise and interpret the data generated through image processing, and Deep Learning in particular, will add greater value to the smart factory.

Martin Short is machine vision application specialist at SICK UK