How can AI improve AOI?
05 July 2021
In the manufacturing sector, inspection is an essential function: a visual check that confirms a product meets its intended function and appearance, delivering important benefits for both the manufacturer and the customer. Mark Patrick takes a look at how AI can transform the process
Most obviously, the inspection results provide an assurance of quality that may be communicated directly to the customer using a sticker or label, or recorded within the manufacturing organisation as part of its quality-control process. Inspection reports can also aid troubleshooting if a unit is returned from the field and can help the manufacturer deal with any claims.
Moreover, identifying any failed items during production can help spot whether manufacturing processes or procedures need to be adjusted. The results can help identify the cause of a defect, such as a blocked nozzle in an electronic surface-mounting machine, a defective bottle-filling unit, or a misaligned labelling mechanism. Identifying defects in real-time allows production to be stopped for the cause to be fixed. The earlier quality problems are detected, the less costly they are to solve. The ten-times rule is often cited: finding a mistake in product development costs ten times less than in production, which, in turn, costs ten times less than in the field.
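The ten-times rule can be made concrete with a quick calculation; the base cost below is an arbitrary figure chosen purely for illustration:

```python
# Illustration of the ten-times rule with an arbitrary base cost.
base_cost = 10.0                          # cost of a fix in development (arbitrary units)
cost_in_production = base_cost * 10       # ten times more to fix in production
cost_in_field = cost_in_production * 10   # ten times more again once in the field
print(cost_in_production, cost_in_field)  # 100.0 1000.0
```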
From manual inspection to AOI
Inspection is typically carried out on each unit produced. A trained operator can perform inspections manually, particularly when working with simple products or as a final check on the overall appearance. Some applications, such as printed circuit board assembly (PCBA), may need magnifying equipment. The smallest feature sizes, such as high-density IC interconnects and 01005 SMD chip components soldered on the board (Figure 1), challenge human inspectors' visual acuity.
Figure 1: Surface-mount chip components soldered on PCB
However, as product complexity increases, typical assemblies can contain large numbers of such components. The combination of the visual challenges and the takt time, during which the inspector must perform the inspection and record the results, can mean manual inspection is not practical. In some instances, such as a high-speed bottling process, manual inspection may be simply impossible.
As the feature size, complexity, and throughput aspects become more challenging, automatic optical inspection (AOI) becomes the only practical way to ensure each item is inspected adequately.
AOI comprises image-sensing, lighting, and computing subsystems that work together to capture and analyse images. The system may compare captured images with reference images to identify defects, such as imperfections in the surface of a material, soldering defects, or missing or misplaced components on a PCBA. Alternatively, rules-based systems measure the dimensions of features, such as the components themselves, or the volume of solder in each joint, to determine Good (G) or Not Good (NG) status. If a defect is detected, the machine may isolate the defective item and continue with subsequent inspections or pause and alert an operator.
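A rules-based G/NG decision of the kind described above can be sketched as a simple threshold check. The feature (solder volume per joint), the threshold values, and the reference designators below are all invented for illustration:

```python
# Minimal sketch of a rules-based Good (G) / Not Good (NG) check on a
# measured feature. Thresholds and units are hypothetical.

def classify_joint(measured_volume_nl: float,
                   min_volume_nl: float = 80.0,
                   max_volume_nl: float = 140.0) -> str:
    """Return 'G' if the measured solder volume falls inside the
    acceptable window, otherwise 'NG'."""
    return "G" if min_volume_nl <= measured_volume_nl <= max_volume_nl else "NG"

# One inspection pass over a board's measured joints:
joints = {"R1.1": 110.0, "R1.2": 60.0, "U3.14": 135.0}
results = {ref: classify_joint(vol) for ref, vol in joints.items()}
print(results)  # {'R1.1': 'G', 'R1.2': 'NG', 'U3.14': 'G'}
```

On a real AOI system an NG result at this point would trigger the isolate-and-continue or pause-and-alert behaviour described above.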
Although AOI has eclipsed manual inspection in situations where complexity, throughput, or both are high, traditional image-processing systems and algorithms have drawbacks that become apparent during system and software development and when setting up equipment on the factory floor.
From conventional image processing to AI
The underlying principles of image recognition rely on digitising each captured image and applying various filters to detect patterns and features. Edge-detection filters are often used to detect objects within an image. An algorithm to recognise humans may then apply slope detection to identify features that could be arms, shoulders, or legs. The algorithm must also check the orientation of these detected features relative to each other as a further qualifying criterion.
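The filtering step amounts to convolving the digitised image with a small kernel. The sketch below applies a horizontal Sobel kernel to a tiny synthetic greyscale image whose pixel values are made up for demonstration; the filter responds strongly where a dark-to-bright edge occurs:

```python
# Edge detection by 3x3 convolution with a horizontal Sobel kernel.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to the interior pixels of a 2D greyscale image."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * image[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# A dark-to-bright vertical step: the filter peaks at the edge location.
image = [[0, 0, 255, 255]] * 4
edges = convolve3x3(image, SOBEL_X)
print(edges[1])  # [0, 1020, 1020, 0]
```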
An algorithm for inspecting solder joints may apply edge detection and colour detection to identify the joints and check that the gradient of the fillet is within acceptable limits. The optical system may illuminate the inspected unit from various angles using different colours. If the slope of the fillet is correct, a greater proportion of, say, green wavelengths may be reflected. More red wavelengths, or changing colour combinations across the surface, may highlight a shallow fillet, indicating insufficient solder volume, or solder balling, which points to poor wetting during the soldering process.
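The angled coloured-lighting technique can be reduced to a ratio test on the reflected intensities. The ratio threshold and intensity values below are invented purely to illustrate the idea:

```python
# Hypothetical sketch: judging fillet slope from the balance of green and
# red light reflected under angled coloured illumination.

def fillet_status(green_intensity: float, red_intensity: float,
                  min_green_ratio: float = 0.6) -> str:
    """Return 'G' when green reflection dominates (slope within limits),
    otherwise 'NG' (shallow fillet or poor wetting suspected)."""
    total = green_intensity + red_intensity
    if total == 0:
        return "NG"  # no measurable reflection at all
    return "G" if green_intensity / total >= min_green_ratio else "NG"

print(fillet_status(200, 50))   # G  -- green dominates: slope acceptable
print(fillet_status(80, 170))   # NG -- red dominates: shallow fillet suspected
```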
Whatever the application, whether recognising people for surveillance or automotive pedestrian-detection applications, facial recognition for social media applications, or defect detection in industrial inspection, conventional image recognition faces numerous challenges.
Defining rules and creating algorithms for detecting and classifying objects within digitised images is extremely complex. In industrial inspection, developing robust algorithms is expensive and time-consuming. When inspecting a PCBA, the quality of solder joints is only one criterion to inspect. The presence of each component must also be verified, as well as the position and orientation relative to the solder mask, component coplanarity, and the presence of unwanted objects such as solder spatter on the surface of the board. It is almost impossible to create rules for all cases and all exceptions.
Fine-tuning the algorithms, and adding more algorithms to cover additional situations, is a never-ending task that requires continuous updating of the software. Each time new items – such as advanced electronic component packages – come into use within the industry, new algorithms must be developed to inspect them.
AI can tackle the challenge posed by this effectively infinite number of variations by mimicking, to an extent, humans’ ability to apply learned experience to image-recognition problems. Among the various computing structures encompassed under the general heading of AI, convolutional neural networks (CNNs) are typically used for image recognition. These comprise interconnected artificial neurons arranged in layers (Figure 2). They are usually deep neural networks that contain multiple inner or hidden layers between the input and output layers. The hidden layers perform specific, tightly defined convolution and pooling computations on data received from the preceding layer. The results are sent to the following layer, ultimately reaching the output layer, which indicates whether the sought object has been identified or not.
Figure 2: CNN layers
Before being deployed, the CNN is trained to identify a specific object. During training, the weight given to each neuron's contribution to the answer is adjusted after each correct or incorrect result. After many iterations, the CNN can identify the target object with a high probability of correctness. At this point it is considered trained; redundant neurons can be removed, and the neural network is ready to deploy as an inferencing engine, either in the cloud or on an embedded computing platform.
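The layered convolution-and-pooling computation described above can be illustrated with a toy forward pass. The input, kernel weights, and layer sizes here are arbitrary; in a real CNN the weights are learned during training rather than hand-picked:

```python
# Toy forward pass: one convolution layer, a ReLU activation, and one
# max-pooling step, mirroring the hidden-layer computations of a CNN.

def conv2d_valid(image, kernel):
    """'Valid' 2D convolution (no padding) of an image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(kernel[j][i] * image[y + j][x + i]
                 for j in range(kh) for i in range(kw))
             for x in range(w)] for y in range(h)]

def relu(fmap):
    """Zero out negative activations."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool2(fmap):
    """2x2 max pooling: keep the strongest activation in each window."""
    return [[max(fmap[y][x], fmap[y][x + 1], fmap[y + 1][x], fmap[y + 1][x + 1])
             for x in range(0, len(fmap[0]) - 1, 2)]
            for y in range(0, len(fmap) - 1, 2)]

image = [[0, 0, 1, 1]] * 4            # a simple vertical edge
kernel = [[-1, 1], [-1, 1]]           # hand-picked to respond to vertical edges
features = max_pool2(relu(conv2d_valid(image, kernel)))
print(features)  # [[2]]
```

In a full network, activations like `features` would feed further hidden layers, and the output layer would weigh them to score "object present" versus "not present".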
Bringing the two domains together
AI can deliver advantages for vendors and users of AOI equipment. From the vendor’s perspective, algorithm development can be simplified if the AI can judge the probability that it is seeing a particular object. This helps cut time to market for new equipment and reduces ongoing software-support costs by reducing the need to define every object and its acceptability criteria. For users, AOI enhanced with AI can streamline setting up an inspection system, programming it, and fine-tuning the thresholds for Good/Not Good alerts.
AI is now entering the market for AOI equipment. One example is an embedded industrial machine-vision computing platform and multi-processor expansion card for AI inference, created by AAEON in conjunction with an AOI vendor partner. This platform enables the AOI to inspect multiple product lines without needing to be reconfigured. It delivers greater accuracy and fewer false positives than traditional systems and can also be quickly trained to inspect new products or identify previously unknown defects.
Another example is the MEK (Marantz) ISO-Spector M1A for PCBA inspection. Built upon AI, this system learns the production process values of assembled and reflowed PCBs and recognises defects based on hundreds of pre-set parameters. This reduces the human element involved in programming by handling typical challenges such as determining the optimum light levels, camera position, and camera settings for each view to assist defect detection, and by adjusting detection thresholds to ensure defective units are captured without making excessive false NG calls. AI can automatically adjust multiple parameters much faster than human experts and make decisions with a significantly reduced risk of mistakes, enabling consistent inspection results whether the AOI system is programmed by a beginner or by an expert.
The Chinese manufacturer VCTA has also added AI to its AOI systems for PCB manufacturing, delivering enhanced operation: reducing scrap rate, and increasing capacity and quality.
The benefits of systems like these highlight the advantages AI can bring to inspection applications in many sectors, including security and retail. Where there are requirements such as searching images to detect objects and features or identifying individuals, AI can simplify setup and programming, eliminate human error, minimise latency, and support better decision-making.
To help developers take advantage of the technology, camera modules are now entering the market supported by software to simplify AI development. Examples include the Basler AI Vision Solution Kit. The kit (Figure 3) comes ready to use with a 13Mp Basler dart camera and the pylon camera software suite for configuring and operating the camera. Pre-trained machine-learning models for object detection and people detection are available in the Basler cloud, ready to deploy on the kit. Developers are also free to use their own models for any application.
Figure 3: The Basler AI Vision Solution Kit
The Intel RealSense D400 stereo vision depth camera system integrates the RealSense D4 vision processor, a stereo depth module, an RGB sensor with colour image signal processing, and an inertial measurement unit to address applications such as robot vision, drones, virtual reality, and home security. The depth module combines left and right imagers with an optional infrared projector that projects a non-visible static IR pattern to improve depth accuracy in scenes with low texture.
RealSense depth cameras can bring extra value to applications such as object detection and classification when used with a machine-learning platform such as TensorFlow or OpenCV. The camera module’s per-pixel depth information helps solve additional challenges such as estimating object sizes in the field of view. A link to a tutorial and sample code showing how to achieve this is available through the Intel RealSense website.
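As a rough illustration of how per-pixel depth helps estimate object size, the pinhole camera model relates an object's width in pixels, its depth, and the camera's focal length. The focal length and pixel values below are illustrative, not taken from any specific camera datasheet:

```python
# Estimating real-world object width from pixel extent and depth,
# using the pinhole camera model: width = pixels * depth / focal_px.

def object_width_m(pixel_width: float, depth_m: float, focal_px: float) -> float:
    """Real-world width (m) of an object spanning `pixel_width` pixels
    at distance `depth_m`, for a focal length of `focal_px` pixels."""
    return pixel_width * depth_m / focal_px

# An object spanning 200 pixels at 1.5 m depth, with a 600 px focal length:
print(object_width_m(200, 1.5, 600))  # 0.5
```

A depth camera supplies the `depth_m` term per pixel, which is what a single RGB camera cannot provide on its own.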
Inline AOI, operating at the line speed, has enabled manufacturing businesses in a wide range of industries to enhance quality assurance, safeguard productivity, and continuously improve processes. Enhancement with AI is the next step for AOI. Algorithms trained for optical inspection applications bring the added benefit of decision-making capability, enabling reduced operator involvement, simplified programming, and robust performance that increases the certainty of defect detection while at the same time reducing false calls.
Developers and makers can start exploring how AI can enhance various machine-vision applications using AI-ready camera kits from leading manufacturers, available from Mouser.
Mark Patrick is Technical Marketing Manager for EMEA, Mouser Electronics