
Charlotte Stonestreet
Managing Editor
From process automation to industrial IoT
27 February 2017
What is now called industrial or enterprise IoT is a natural extension of historical process automation and process control systems. Hans O’Sullivan, CEO of StorMagic, is a specialist in IP-based storage and protocols, and looks at how the use of IoT devices has fundamentally changed automated processes.
In the past, controlling a process literally meant walking around from one system to the next to watch for warning lights, read and respond to meters in the “red zone,” and listen for alarm bells. However, the ability to consolidate data from multiple components or stages of an automated process using IoT devices has influenced the way systems are designed, and how processes are monitored and controlled.
Modern monitoring and process control applications are built on traditional IT systems, using industry-standard IT operating systems, hypervisors, and application-development tools. These applications collect, filter, clean, transform, consolidate and analyse data from multiple systems in real time to deliver a monitoring and process-control dashboard.
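Conceptually, the collect/clean/consolidate step of such an application can be sketched in a few lines of Python. The station names, sample values and simple averaging used here are illustrative assumptions, not features of any particular monitoring product:

```python
# A minimal, illustrative sketch of collecting, cleaning and consolidating
# readings from several stages of a line to feed a dashboard.
# All station names and values are hypothetical.
from statistics import mean

raw_readings = {
    "press_01":  [101.2, 100.8, None, 99.7],   # None models a dropped sample
    "oven_02":   [220.5, 221.0, 219.8, 220.2],
    "filler_03": [55.1, 54.9, 55.3, 55.0],
}

def clean(values):
    """Filter out missing samples before any aggregation."""
    return [v for v in values if v is not None]

# Consolidate each station's samples into a single dashboard figure
dashboard = {station: round(mean(clean(samples)), 1)
             for station, samples in raw_readings.items()}

print(dashboard)  # e.g. {'press_01': 100.6, 'oven_02': 220.4, 'filler_03': 55.1}
```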
- Server virtualisation has enabled the consolidation of multiple applications onto fewer servers, reducing the IT footprint.
- Monitoring tools from major software companies such as VMware and Microsoft have enabled centralised monitoring of remote site IT infrastructure, reducing or eliminating the need for on-premises IT staff in remote locations.
- Clustering and automated failover tools from the same companies enable non-disruptive maintenance and upgrades of both software and hardware.
- Software-defined storage, together with the increased storage capacity and performance of industry-standard servers, has enabled the elimination of expensive external disk arrays and storage area networks, replacing them with internal hard disk drives, solid state drives and flash memory, plus data mirroring for high availability and the data sharing required for automated failover, as sketched below.
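The data-mirroring idea in the last point can be illustrated with a conceptual sketch: a write is acknowledged only once every copy has been updated, so either node can take over on failure. The file paths and write routine below are assumptions made for illustration, not any vendor's actual API:

```python
# A conceptual sketch of synchronous data mirroring across two nodes' internal
# drives, the pattern software-defined storage uses for high availability.
from pathlib import Path

# Hypothetical locations standing in for two nodes' internal drives
MIRROR_PATHS = [Path("/tmp/node_a/volume.img"), Path("/tmp/node_b/volume.img")]

def mirrored_write(data: bytes) -> None:
    """Complete a write only after every mirror copy has been updated."""
    for path in MIRROR_PATHS:
        path.parent.mkdir(parents=True, exist_ok=True)
        with path.open("ab") as replica:
            replica.write(data)
    # At this point both copies hold the data, so either node can take over.

mirrored_write(b"sensor batch 42\n")
```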
Branch offices of large companies, especially in the retail sector, were early adopters of these technologies, because each of the applications they used ran on traditional data-centre compatible infrastructure. Consolidation at the branch office was similar to what had already occurred at the core data centre. Manufacturing companies, however, were slower to adopt these technologies on the factory floor, because their industrial-controls applications were often built on proprietary systems with industry-specific interfaces and application programming interfaces.
On-premises monitoring
Using this newer approach, it is possible to monitor and control automated processes running at remote locations from a central site. However, for geographically distributed organisations, especially those operating in low-labour-cost, non-urban locations or developing countries, where much manufacturing takes place, the reality is that bandwidth reliability, performance and affordability are inadequate to support real-time process-control decision making from a centralised location. These systems need to run on-premises in the factories, and should operate on highly available infrastructure. When equipment is operating outside quality or safety tolerances, operators need to take action immediately.
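The kind of tolerance check that must run locally, so operators can act the moment a reading drifts out of range, can be sketched very simply. The sensor name, tolerance band and alert behaviour below are illustrative assumptions:

```python
# A minimal sketch of an on-premises tolerance check. The temperature band and
# sensor name are assumed values, not real plant settings.
SAFE_RANGE = (195.0, 225.0)   # acceptable oven temperature band, degrees C

def check_reading(sensor: str, value: float) -> None:
    """Raise a local alert the moment a reading falls outside tolerance."""
    low, high = SAFE_RANGE
    if not (low <= value <= high):
        # In a real plant this would trigger an alarm or an automatic shutdown;
        # printing keeps the sketch self-contained.
        print(f"ALERT: {sensor} reading {value} outside {low}-{high}")

check_reading("oven_02", 231.4)   # -> ALERT: oven_02 reading 231.4 outside 195.0-225.0
```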
Even if the application is only a monitoring dashboard and not a control system, the infrastructure should be highly available. Without monitoring and alerting, the process may continue to run, but the supplier may fail to detect or prevent a product defect that later results in a costly recall, damaging its customer relations and reputation. And without continuous monitoring, the evidence needed to defend a liability claim may be lost.
It is critical for industrial organisations to understand uptime and response-time requirements and the different ways in which data can be collected, analysed, and used. Technology and budget constraints have a major impact on IT architecture decisions, but what is clear is that there is no one architectural approach that meets all industrial IoT requirements. IT and application architects should plan for and implement:
- High-availability systems in each remote location to support distributed, real-time decision making and corrective actions to ensure quality and safety
- Centralised systems to support corporate-wide quality initiatives, compliance reporting, supply-chain management, and improved analytics
- Centralised systems to incorporate their suppliers into a corporate-wide supply-chain dashboard to avoid product delays and product recalls, as sketched below.
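A simplified, hypothetical sketch of this two-tier arrangement: real-time checks stay at each plant, and only periodic summaries travel to the central site for corporate-wide reporting. The site names and figures are invented for illustration:

```python
# An illustrative sketch of local sites forwarding periodic summaries to a
# central store, while real-time decisions remain on-premises.
central_store = {}

def publish_site_summary(site: str, summary: dict) -> None:
    """Upload a plant's hourly summary to the corporate dashboard store."""
    central_store[site] = summary

# Each remote site computes its own summary locally, then forwards it.
publish_site_summary("plant_a", {"defect_rate_pct": 0.4, "uptime_pct": 99.2})
publish_site_summary("plant_b", {"defect_rate_pct": 0.7, "uptime_pct": 98.5})

# The central site can run corporate-wide analytics without a real-time link
# to every sensor on every factory floor.
worst = max(central_store, key=lambda s: central_store[s]["defect_rate_pct"])
print(f"Highest defect rate: {worst} ({central_store[worst]['defect_rate_pct']}%)")
```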
Key Points
- The ability to consolidate data from multiple components or stages of an automated process using IoT devices has influenced systems design
- Software-defined storage and increased capacity and performance within servers enable the elimination of external disk arrays and storage area networks
- Manufacturing companies slower to adopt technologies on the factory floor as industrial-controls often built on proprietary systems