
It's time for AI
17 October 2022
Despite its many benefits, AI vision is not yet being evaluated across the board and planned into new projects. Heiko Seitz looks at the reasons behind this

AI-based image processing will improve the competitiveness of many companies across different sectors. It not only opens up new applications and makes machine vision solutions easier to create, but is also well suited to rapid prototyping and can thereby accelerate development cycles. Those who have already tested and implemented their first applications are enthusiastic about how quickly good results can be achieved. So what is holding businesses back?
One factor might be that artificial intelligence for machine vision is not (yet) as intuitive and easy to use as often described. Even if users no longer need to be image processing professionals to perform AI-based image analysis, providing sufficient training data is usually time-consuming and costly. In addition, it requires a certain understanding of how reliable conclusions can be drawn from that data and how the results are to be evaluated. In a recent Bitkom survey, for example, every second respondent stated that they do not use AI in their company for fear of programming errors and a lack of controllability of AI systems. Trust in and acceptance of AI vision will only grow once it becomes more user-friendly and its otherwise hard-to-assess results become explainable.
Technology meets user-friendliness
With IDS NXT, IDS Imaging Development Systems has designed exactly such an AI vision ecosystem of hardware and software components, one that, in addition to machine learning, intuitively maps the complete application workflow. The programmable IDS NXT cameras process tasks "on device", deliver image processing results themselves and can trigger subsequent processes in networked systems directly via REST or OPC UA. Their range of tasks is determined by vision apps that run on the cameras, so their functionality can be changed at any time. Users of IDS NXT can therefore realise their own AI vision applications quickly, in a time- and cost-saving way, and with little prior knowledge of camera programming or deep learning. But how does this work exactly?
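To make the "results over the network" idea tangible, here is a minimal Python sketch of how a plant system might poll a camera's REST interface. The address, endpoint path and JSON fields are invented for illustration only and are not the documented IDS NXT API.

```python
import requests

# Hypothetical endpoint and payload layout -- the actual IDS NXT REST
# interface may differ; this only illustrates the "result over REST" idea.
CAMERA_URL = "http://192.168.0.10/api/results/latest"  # assumed address

def poll_inference_result():
    """Fetch the most recent vision-app result from the camera."""
    response = requests.get(CAMERA_URL, timeout=2.0)
    response.raise_for_status()
    return response.json()  # e.g. {"label": "OK", "confidence": 0.97}

if __name__ == "__main__":
    result = poll_inference_result()
    if result.get("label") != "OK" and result.get("confidence", 0) > 0.9:
        print("Trigger reject mechanism:", result)
    else:
        print("Part passed:", result)
```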
With the AI Vision Studio IDS NXT lighthouse, users can take their first steps with AI, test whether its methods suit their applications and create vision apps for IDS NXT cameras to solve complex tasks. No prior training and no setup of a development environment are necessary. This makes it easy to get started, right through to the implementation and commissioning of an individual AI vision system. To this end, the entire programming is hidden behind easy-to-understand interfaces and tools that cover all steps of AI vision development. Professional cloud computing services from Amazon (AWS) and Microsoft (Azure) provide the underlying infrastructure and can be adapted to the customer's requirements, so that training performance can be increased or new training models supported if necessary.
More assistance, quick labelling
Right at the start of a project, an application wizard with an interview mode helps to identify specific tasks, select the required AI methods and prepare a suitable vision app project. Users who prefer a more individual approach can use the block-based editor to build individual process sequences from ready-made function blocks by drag & drop, without having to deal with platform-specific programming or the special syntax of a programming language. This opens up greater flexibility in the application description and at the same time makes the processes easy to understand.
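The following Python sketch mirrors the idea of such a block-based process sequence in code form: ready-made function blocks are chained into a pipeline and executed in order. The block names are invented for illustration; the real editor is graphical and requires no programming at all.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Pipeline:
    """Chain of function blocks executed in sequence, like a vision-app flow."""
    blocks: List[Callable] = field(default_factory=list)

    def add(self, block: Callable) -> "Pipeline":
        self.blocks.append(block)
        return self

    def run(self, data):
        for block in self.blocks:
            data = block(data)
        return data

# Invented example blocks standing in for the editor's ready-made functions
def capture_image(_):
    return {"image": "raw frame"}

def classify(data):
    data["result"] = {"label": "OK", "confidence": 0.95}
    return data

def publish_result(data):
    print("Publish via REST/OPC UA:", data["result"])
    return data

pipeline = Pipeline().add(capture_image).add(classify).add(publish_result)
pipeline.run(None)
```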
In the future, the AI Vision Studio will provide further support when preparing training data. An automatic labelling system allows imported image data and specific content with ROIs to be organised more quickly into data sets with suitable labels. This helps to expand data sets with image content in order to continuously improve networks through re-training.
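As a rough illustration of what such a labelled data set can look like, the sketch below stores image files with class labels and ROIs in a simple JSON structure. The layout and field names are assumptions made for illustration and do not correspond to the IDS NXT file format.

```python
import json
from pathlib import Path

# Illustrative sketch only: one simple way to store labelled images with ROIs.
dataset = {
    "images": [
        {
            "file": "part_0001.png",
            "label": "GOOD",
            "rois": [],  # no defect regions on a good part
        },
        {
            "file": "part_0002.png",
            "label": "BAD",
            "rois": [{"x": 120, "y": 64, "w": 40, "h": 32, "tag": "scratch"}],
        },
    ]
}

Path("dataset.json").write_text(json.dumps(dataset, indent=2))
print("labelled images:", len(dataset["images"]))
```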
Providing sufficient data in balanced amounts for all targeted classes is often time-consuming. Since error cases can occur in all possible forms, there is often an imbalance between GOOD and BAD parts. It is therefore important to offer methods that need less training data to prepare. In addition to classification and object detection, users will in future benefit from anomaly detection, which identifies known as well as unknown error cases that exceed the normal deviations of a GOOD part, and which requires relatively little training data compared with the other AI methods. In other words, anything that would strike a human inspector who has spent a long time learning what a "typical" object looks like can also be identified by an AI system with anomaly detection. Anomaly detection is thus another useful tool for quality control: it reduces manually performed visual inspections while detecting and avoiding errors in the production process at an early stage.
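The principle can be shown with a minimal sketch: a model sees GOOD parts only during training and flags anything that deviates too far from what it has learned. The feature vectors and threshold below are random stand-ins; real systems work on learned image embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training": feature vectors of GOOD parts only (no defect examples needed)
good_features = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
mean = good_features.mean(axis=0)
std = good_features.std(axis=0) + 1e-8

def anomaly_score(features: np.ndarray) -> float:
    """Average absolute z-score: distance from the learned GOOD distribution."""
    return float(np.abs((features - mean) / std).mean())

# Threshold derived from the GOOD data itself (e.g. 99th percentile of scores)
threshold = np.percentile([anomaly_score(f) for f in good_features], 99)

test_part = rng.normal(loc=3.0, scale=1.0, size=16)  # clearly atypical part
score = anomaly_score(test_part)
print("score:", round(score, 2), "anomalous:", score > threshold)
```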
Explainable AI
To aid understanding, the AI Vision Studio provides, among other things, a heat map visualisation of the AI's attention. Special network models used during training generate a kind of heat map during the evaluation of test data sets, highlighting the image areas that receive the most attention from the neural network and therefore influence the results and performance. Incorrect or unrepresentative training images can sensitise the AI to unwanted features; even an accidentally trained product label can falsify the results. The cause of such "wrong" training is called data bias. These attention maps help to reduce concerns about AI-based decisions and to increase acceptance in the industrial environment.
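The idea behind such an attention map can be sketched in a few lines: feature maps from a late convolutional layer are combined with the weights of the predicted class, and the result shows which image regions drove the decision. The arrays below are random stand-ins for real network activations, not output from an actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
feature_maps = rng.random((8, 14, 14))   # 8 channels of 14x14 activations
class_weights = rng.random(8)            # weights linking channels to a class

# Weighted sum over channels -> raw attention map (shape 14x14)
heat_map = np.tensordot(class_weights, feature_maps, axes=1)

# Normalise to [0, 1] so the map can be overlaid on the input image
heat_map = (heat_map - heat_map.min()) / (heat_map.max() - heat_map.min() + 1e-8)

print("hot spots above 0.8:", np.argwhere(heat_map > 0.8))
```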
IDS is continuously developing its AI system with a special focus on user-friendliness and time efficiency. This will enable AI to be used more quickly across the board, including by SMEs. On the hardware side, the IDS NXT camera family is also being extended with an even more powerful hardware platform that can execute neural networks much faster, enabling AI vision even in applications with high clock rates. What helps most in spreading AI vision, however, are companies that have already implemented successful AI vision projects and can tell others about them.
Heiko Seitz is a technical author at IDS Imaging Development Systems
Key Points
- AI for machine vision is often not as intuitive as described; providing sufficient training data can be time-consuming and costly
- IDS NXT is an AI vision ecosystem of hardware and software components that also intuitively maps the complete application workflow
- Users realise their own AI vision applications quickly, with little prior knowledge of camera programming and Deep Learning