January 18, 2021
Flavio Bonomi, Board Technology Advisor, Lynx Software Technologies
Across a range of industries, and specifically in the industrial automation vertical, there is broad agreement that the deployment of modern computing resources with cloud native models of software lifecycle management will become ever more pervasive. Placing virtualized computing resources nearer to where multiple streams of data are created is well established. It is the path to addressing the system latency, privacy, cost and resiliency challenges that a pure cloud computing approach cannot meet. This paradigm shift was initiated at Cisco Systems around 2010, under the label “fog computing”, and progressively morphed into what is now known as “edge computing”.
The requirements of mission critical industrial systems
That said, the full potential of this transformation in both computing and data analytics is far from being realized. Mission critical requirements are much more stringent than what cloud native paradigms can deliver. This is especially true because mission critical applications have five specific requirements:
- Heterogeneous hardware – Typical industrial automation settings mix different processor architectures, such as x86 and Arm, as well as a variety of compute configurations on the floor
- Security – Security requirements and their mitigations vary from device to device and need to be handled carefully
- Innovation – While some industrial applications can continue with the legacy paradigm of remaining unchanged for over a decade, most of the industrial world now also requires modern data analytics and application monitoring in its installations
- Data privacy – As in other areas of IT, data permission management is increasingly complex within connected machines and needs to be handled right from the point where the data originates
- Real-time and determinism – The real-time, deterministic behavior provided by controllers remains critical to the safety and security of the operation.
For these reasons, the market is seeking what Lynx Software Technologies refers to as the “mission critical edge”. The concept combines requirements typical of embedded computing (security, real-time, and safe, deterministic behavior) with modern networked, virtualized, containerized lifecycle management and data- and intelligence-rich computing.
The role of mission critical edge
Without a fully manifested mission critical edge, we will not be able to address the many pain points characterizing the current industrial electronic infrastructure. In particular, we will not be able to securely consolidate and orchestrate the many poorly connected, fragmented and aging subsystems controlling today’s industrial environments, or to enrich them with the fruits of data analytics and artificial intelligence (AI).
The broad architecture pictured below illustrates our vision for enabling this:
- Distributed and interconnected, mixed-criticality-capable, virtualized multicore computing nodes (a system of systems)
- Networking support that includes traditional IT communications (e.g., Ethernet, Wi-Fi) as well as deterministic legacy fieldbuses, moving towards IEEE Time-Sensitive Networking (TSN), and public and private 4G/5G, which is also moving towards determinism
- Support for data distribution within and across nodes, based on standard middleware (OPC UA, MQTT, DDS, and more), which will also move towards determinism (e.g., OPC UA over TSN); a minimal publishing sketch follows this list
- Distributed nodes that are remotely managed, with software delivered and orchestrated as virtual machines (VMs) and containers, following the model of modern cloud native microservices
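To make the data-distribution layer concrete, here is a minimal sketch of an edge node publishing controller telemetry over MQTT with the paho-mqtt client. The broker address, topic hierarchy and payload fields are assumptions made for the example, not part of any particular product.

```python
# Minimal sketch: an edge node publishing controller telemetry over MQTT.
# Broker address, topic and payload fields are illustrative assumptions.
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER_HOST = "edge-broker.local"                 # hypothetical on-premises broker
TOPIC = "plant/line1/cell3/welder/telemetry"      # hypothetical topic hierarchy

client = mqtt.Client(client_id="welder-cell3")    # paho-mqtt 1.x style constructor
client.connect(BROKER_HOST, port=1883, keepalive=60)
client.loop_start()                               # handle network I/O in a background thread

sample = {
    "timestamp": time.time(),
    "voltage_v": 23.7,        # example values only
    "current_a": 8500.0,
    "electrode_wear": 0.12,
}
client.publish(TOPIC, json.dumps(sample), qos=1)

client.loop_stop()
client.disconnect()
```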
Lynx has identified the evolution of the industrial operational architecture (the architecture of the infrastructure on the industrial automation floor) as one of the most appropriate targets for the realization of the full mission critical edge paradigm.
Paired with more powerful and scalable new multicore platforms, a mission critical edge computing approach can provide a unified and uniform infrastructure, reaching from the machine, through the industrial floor, into the telco edge and cloud, and enabling a fundamental decoupling between hardware and software. Applications, packaged as VMs and, increasingly, as containers, can be lifecycle-managed and orchestrated across all layers of this infrastructure.
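As a small illustration of what container-based lifecycle management can look like on a single edge node, the sketch below uses the Docker SDK for Python to start a hypothetical analytics workload with pinned cores and a memory cap. The image name, limits and environment values are assumptions for the example, and a real deployment would typically be driven by an orchestrator rather than a local script.

```python
# Minimal sketch: running an application as a container on an edge node.
# Image name, resource limits and environment values are illustrative assumptions.
import docker  # pip install docker

client = docker.from_env()  # talks to the local container runtime on the node

# Start the (hypothetical) analytics workload, pinned to spare cores so it
# cannot interfere with time-critical functions consolidated on the same box.
container = client.containers.run(
    "registry.example.com/weld-analytics:1.0",
    name="weld-analytics",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    cpuset_cpus="2,3",          # keep off the cores reserved for control tasks
    mem_limit="512m",
    environment={"MQTT_BROKER": "edge-broker.local"},
)
print(container.short_id, container.status)
```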
Integration into today’s fragmented industrial environments
Many poorly connected, fragmented and aging subsystems controlling today’s physical environments can be effectively and securely consolidated, orchestrated and enriched with the fruits of data analytics and artificial intelligence.
The diagram above shows how the infrastructure would look when the mission critical edge is deployed, embedded into the operational technology area of the factory. There is a distributed set of nodes, some very close to the plant, some farther away. Effectively this is like a distributed datacenter, yet it contains a far more heterogeneous, interconnected, virtualized set of computing resources which can host applications where and when they are needed. These will be deployed in the form of virtual machines and containers orchestrated from the cloud or locally.
Let’s discuss a specific use case at an Audi manufacturing plant, more specifically for the Audi A3. The Neckarsulm plant has 2,500 autonomous robots on its production line. Each robot is equipped with a tool of some kind, from glue guns to screwdrivers, and performs a specific task required to assemble an Audi automobile.
Audi assembles approximately 1,000 vehicles every day at the Neckarsulm factory, and there are 5,000 welds in each car. To ensure the quality of its welds, Audi performs manual quality-control inspections. It is impossible to manually inspect 1,000 cars every day, however, so Audi uses the industry’s standard sampling method, pulling one car off the line each day and using ultrasound probes to test the welding spots and record the quality of every spot. Sampling is costly, labor-intensive and error prone, and it covers only a small fraction of the roughly five million welds produced each day. So the objective was to inspect all 5,000 welds per car inline and infer the result of each weld within microseconds.
A machine-learning algorithm was created and trained for accuracy by comparing the predictions it generated to actual inspection data that Audi provided. Remember that at the edge there is a rich set of data that can be accessed. The machine learning model used data generated by the welding controllers, which showed electric voltage and current curves during the welding operation. The data also included other parameters such as configuration of the welds, the types of metal, and the health of the electrodes.
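As a rough sketch of this kind of model (not the actual Audi pipeline), the snippet below extracts simple statistics from per-weld voltage and current curves and trains a scikit-learn classifier against inspection labels. The feature set, placeholder data and model choice are all illustrative assumptions.

```python
# Minimal sketch: training a weld-quality classifier on features derived from
# welding-controller signals. Feature names, data and model choice are
# illustrative assumptions, not the production pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def extract_features(voltage: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Summarize one weld's voltage/current curves as a fixed-length vector."""
    return np.array([
        voltage.mean(), voltage.std(), voltage.max(),
        current.mean(), current.std(), current.max(),
    ])


# Placeholder training data: per-weld curves plus inspection labels (0 = ok, 1 = poor).
rng = np.random.default_rng(0)
X = np.stack([
    extract_features(rng.normal(24, 1, 200), rng.normal(8500, 300, 200))
    for _ in range(1000)
])
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# At the edge, the same model scores each new weld as its curves arrive.
new_weld = extract_features(rng.normal(24, 1, 200), rng.normal(8500, 300, 200))
print("predicted quality:", model.predict(new_weld.reshape(1, -1))[0])
```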
These models were then deployed at two levels: at the line itself and also at the cell level. The result was that the systems were able to predict poor welds before they were performed, which has substantially raised the bar in terms of quality. Central to the success of this exercise was the collection and processing of data relating to a mission critical process at the edge (i.e., on the production line) rather than in the cloud. As a consequence, adjustments to the process could be made in real time.
Harvesting the benefits of integration at the boundary between embedded and IT
There are a number of technical areas where quite a lot of progress is still needed. The focus at Lynx is primarily on two areas:
- Delivering deterministic behavior in multicore systems – As multiple systems are consolidated to operate on a single multicore processor, the sharing of resources like memory and I/O causes interference, which means that guaranteeing the behavior of time-critical functionality becomes problematic (a simple illustration follows this list)
- Delivering strict isolation for applications to ensure high levels of system reliability and security
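As an OS-level illustration of the first point (an analogue only, not a description of Lynx's separation kernel approach), the sketch below pins a time-critical process to a reserved core and gives it a real-time scheduling class on Linux. The core index and priority are assumptions, and this alone does not remove interference from shared caches or memory buses.

```python
# Minimal Linux-only sketch: pin this process to one dedicated core and give it
# a real-time FIFO priority so other workloads on the multicore SoC cannot
# preempt it. Core index and priority are illustrative assumptions; running this
# requires root (or CAP_SYS_NICE), and it does not address cache or bus interference.
import os

TIME_CRITICAL_CORE = 3   # assume core 3 is reserved (e.g., via isolcpus=3)
RT_PRIORITY = 80         # assumed priority within the SCHED_FIFO range

# Restrict the scheduler to the reserved core only.
os.sched_setaffinity(0, {TIME_CRITICAL_CORE})

# Switch this process to the real-time FIFO scheduling class.
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(RT_PRIORITY))

print("running on cores:", os.sched_getaffinity(0))
# ... time-critical control loop would run here ...
```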
There are a number of other open topics, including time-sensitive data management, edge analytics and networking functionality for these complex connected systems. For example, what will be the right approach for deploying orchestration and scheduling in these deterministic, time-sensitive systems?
In conclusion, the mission critical edge is here. It is starting to realize the original intent of fog computing, and we are starting to harvest the great benefits of real integration at the boundary between embedded technology and information technology. Much more work is needed, and it will take a village: a broad set of ecosystem partners working to simplify how this technology is delivered to the marketplace.
Flavio Bonomi is a board technology advisor for Lynx Software Technologies. He is a visionary, entrepreneur, and technologist who thrives at the boundary between applied research and advanced technology commercialization. Bonomi is the co-founder and was the first CEO of Nebbiolo Technologies, a Silicon Valley startup focused on delivering the power of fog computing to the industrial automation market and beyond. Previously, he was a Fellow, vice president, and head of the advanced architecture and research organization at Cisco Systems.