Category: Cybersecurity

Are sensors secure, is life an unhealthy affair?

Introduction

Another topic increasingly discussed in the OT security community is the security of our field equipment: the sensors through which we read process values and the actuators that control the valves in the production system. If we were to look at the automation functions in power management systems we might add devices like IEDs and feeder relays. In this blog I focus on the instrumentation used for controlling refining processes, chemical processes, or oil and gas in general. Power management is an entirely different world, using different architectures and different instrumentation. I also don't discuss the wireless sensors used by technologies such as ISA 100 and WirelessHART, nor the actuators that use a separate field unit to control the valve. The discussion in this blog focuses on architectures using either traditional I/O or fieldbus concepts, which represent the majority of solutions used in the industry.

The first question I'd like to raise is: when is a device secure? Is this exclusively a matter of the intrinsic qualities of the device, or do other aspects play a role? To explain, I'd like to make an analogy with personal health. Am I considered unhealthy because my personal resilience might be breached by a virus, toxic gas, fire, a bullet, or a car while crossing a zebra crossing? I think when we consider health we accept that living is not a healthy exercise if we don't protect the many vulnerabilities we have. We are continuously exposed to a wide range of attacks on our health. However, we seem to accept our intrinsic vulnerabilities and choose to protect them by wearing protective masks, clothes, and sometimes helmets and bulletproof vests, depending on the threat and the ways we are exposed. Where possible we mitigate the intrinsic vulnerabilities by having an operation or taking medicine, and sometimes we adapt our behavior by exercising more, eating less, and stopping smoking. But I have seldom heard the statement "Life is an unhealthy affair", though there are many arguments to support it.

So why are people saying sensors are not secure? Do we have different requirements for a technical solution than the acceptance we seem to have of our own inadequacies? I would say certainly: whenever we develop something new we should try our best to establish functional perfection, and nowadays we add to this resilience against cyber attacks. So any product development process should take cyber security into account, including threat modelling. And when the product is ready for release it should be tested, and preferably certified, for its resilience against cyber attacks.

But how about the hundreds of millions (if not billions) of sensors in the field that sometimes do their task for decades and, if they fail, are replaced by identical copies to facilitate a fast recovery? Can we claim that these sensors are not secure just because intrinsic cyber security vulnerabilities exist? My personal opinion is that even an intrinsically vulnerable sensor can be secured when we analyze the cyber security hazards and control the sensor's exposure. I would even extend this claim to embedded control equipment in general: it is the exposure that drives the insecurity, and once exposure is controlled we reduce the risk of becoming a victim of a cyber attack.

Some cyber security principles

Which elements play a role in exposure? If we look at exposure we need to differentiate between asset exposure and channel exposure. The asset here is the sensor; the channel is the communication protocol, for example the HART protocol, or some other protocol when we discuss Foundation Fieldbus, Profibus, or any of the many other protocols in use to communicate with sensors. Apart from this, in risk analysis we differentiate between static exposure as a consequence of our design choices, and dynamic exposure as a consequence of our maintenance (or rather lack of maintenance) activities. To measure exposure we generally consider a number of parameters.

One such parameter would be connectivity: if we allowed access to the sensor from the Internet it would certainly have a very high degree of exposure. We can grade exposure by assigning trust levels to security zones. The assets within a zone communicate with assets in other zones, and based on this we can grade connectivity and its contribution to increasing the likelihood of a specific cyber security hazard (threat) happening. Another factor that plays a role is the complexity of the asset; in general, the more complex an asset the more vulnerable it becomes, because the chance of software errors grows with the number of lines of code. A third parameter would be accessibility: the level of control over access to the asset also determines its exposure. If there is nothing stopping an attacker from accessing and making a change in the sensor, and there is no registration that this access and change occurred, the exposure is higher.
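To make these three parameters a bit more concrete, here is a minimal sketch of how one might rank assets by exposure. The weights and the 1-5 grading scale are my own illustrative assumptions, not part of any standard risk model:

```python
# Illustrative sketch only: the parameter names, weights, and 1-5 grading
# scale are assumptions for this blog, not part of any standard.

def exposure_score(connectivity: int, complexity: int, accessibility: int) -> int:
    """Combine the three exposure parameters discussed above.

    Each parameter is graded 1 (low) to 5 (high); the result is a simple
    weighted sum that can be used to rank assets for review.
    """
    weights = {"connectivity": 3, "complexity": 1, "accessibility": 2}
    return (weights["connectivity"] * connectivity
            + weights["complexity"] * complexity
            + weights["accessibility"] * accessibility)

# A sensor reachable only through the controller (low connectivity, low
# accessibility) scores far lower than an Internet-exposed device.
shielded_sensor = exposure_score(connectivity=1, complexity=2, accessibility=1)
internet_device = exposure_score(connectivity=5, complexity=4, accessibility=5)
```

The weighting reflects the argument of this blog: connectivity and accessibility, the factors that determine exposure, dominate the intrinsic complexity of the device.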

If we had a vault with a pile of euros (my vendor neutrality objective doesn't imply any regional preferences) and put this vault with an open door at the local bus stop, it would soon be empty. The euros would be very exposed. Apart from assessing the assets we also need to assess the channel, the communication protocols. Is it an end-to-end authenticated connection? Is it clear text or encrypted? Is it an inbound connection or strictly outbound, and who initiates the communication? These types of factors determine channel exposure.

We can reduce exposure by adding countermeasures, such as segmentation (or more generally the selection of a specific architecture), write protections, and authentication and encryption mechanisms. Another possibility is the use of voting systems, for example the 2oo4 systems used in functional safety architectures to overcome the influence of a sensor's mean time to failure (MTTF) on the probability of failure on demand (PFD), a specification that is required to meet a safety integrity level (SIL).
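The trip logic of a 2oo4 voter is simple to express. A hedged sketch (the function name and input layout are mine, for illustration): the system trips only when at least two of the four redundant sensors demand it, so a single failed or tampered sensor can neither cause a spurious trip nor block a real one.

```python
# Sketch of 2-out-of-4 (2oo4) voting as used in functional safety
# architectures. Names are illustrative, not any vendor's implementation.

def vote_2oo4(demands: list) -> bool:
    """Return True (trip) when at least 2 of the 4 sensor demands are set."""
    assert len(demands) == 4, "2oo4 voting expects exactly four sensor inputs"
    return sum(demands) >= 2

# One faulty sensor signalling a demand does not trip the system...
assert vote_2oo4([True, False, False, False]) is False
# ...but two independent demands do.
assert vote_2oo4([True, True, False, False]) is True
```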

I/O architectures

Input/output (I/O) is the part of the ICS that feeds the process controllers, safety controllers, and PLCs with the process values measured by the sensors in the field. Flows, pressures, temperatures, and levels are measured using the sensors, and these sensors provide a value to the controllers using I/O cards (or at least that is the traditional way; we discuss other solutions later). This was originally an analog signal (e.g. 1-5 V or 4-20 mA) and later a digital signal, or in the case of HART a hybrid solution where the digital signal is superimposed on the analog signal.

Figure 1 – Classic I/O architecture

The sensors are connected through two wires with the I/O module/card. Generally, marshalling panels and junction boxes are used to connect all the wires. In classic I/O, an I/O module also had a fixed function, for example an analog input, analog output, digital input, digital output, or counter input function. Sometimes special electronic auxiliary cards were used to further condition the signals, such as adding a time delay on an analog output to control the valve's travel rate. But even when much of this was replaced with digital communication, the overall architecture didn't change that much. A controller had I/O cards (often multiple racks of I/O cards connected to an internal bus), and the I/O cards connected to the field equipment. A further extension was the development of remote I/O to better support systems that are very dispersed. A new development in today's systems is the use of I/O cards with a configurable function: the same card can be configured as an analog output, a digital input, or an analog input. This flexibility created major project savings, but of course, like any function, it can also become a potential target. Just like the travel rate became a potential target when we replaced the analog auxiliary cards based on RC circuitry with a digital function that could be configured.

From a cyber security point of view, the sensors and actuators have a small exposure in this I/O architecture, apart from any physical exposure in the field where they could be accessed with handheld equipment. Today, many plants still have this type of I/O, which I would consider secure from a cyber security perspective. The attacker would need to breach the control network and the controller to reach the sensor, and if that were to happen, the loss of control over the process controller would be of more importance than the security of the sensor.

However, the business required, for reasons of both safety and cost, that the sensors could be managed centrally. Maintenance personnel would not have to do their work within the hazardous areas of the plant, but could do it from their safe offices. Centralization additionally added efficiency and improved documentation to the maintenance process. So a connection was required between a central instrument asset management system (IAMS) and the field equipment to realize this function. There are in principle two paths possible for such a connection: either an interface between the IAMS function and the controller over the network, or a dedicated interface between the IAMS and the field equipment. If we take the HART protocol as an example, the connection through the controller is called HART pass-through. The HART-enabled I/O card has a HART master function that accepts and processes HART command messages that are sent from the IAMS to the controller, which passes them on to the I/O card for execution. However, for this to work the controller should either support the HART-IP protocol or embed the HART messages in its vendor-proprietary protocol. In most cases controllers accept only the proprietary vendor protocols, whereas PLCs often support the HART-IP protocol for managing field equipment. But the older generation of process controllers, safety controllers, and PLCs doesn't support the HART-IP protocol, so a dedicated interface became a requirement. Figure 2 shows these two architectures.

Figure 2 – Simplified architectures connecting an IAMS

Architecture A, on the right of the diagram, shows the connection over the control network segment between the controller and the IAMS. The traffic in this architecture is exposed if it is not encrypted, so if an attacker can intercept and modify this traffic he has the potential to modify the sensor configuration by issuing HART commands. Also, if an attacker were to get access to the IAMS it would become possible to make modifications.

However, most controllers support a write enable/disable mechanism that prevents modifications to the sensor. So from a security point of view we have to enforce that this parameter is in the disabled setting. Changing this setting is logged, so it can potentially be monitored and alerted on. Of course there are some complications around the write disable setting which have to do with the use of what are called Device Type Managers (DTMs), which would fill a whole new blog. But in general we can say that if we protect the IAMS and the traffic between the IAMS and the controller, and enforce this write disable switch, the sensors are secure. If an attacker were able to exploit any of these vulnerabilities we would be in bigger trouble than having less cyber-resilient field equipment.

Architecture B, on the left side of figure 2, is a different story. In this architecture a HART multiplexer is used to make a connection with the sensors and actuators. In principle there are two types of HART multiplexer in use, with slightly different cyber security characteristics: HART multiplexers that use serial RS 485 technology, allowing for various multi-drop connections with other multiplexers (number 1 in figure 2), and HART multiplexers that connect directly to Ethernet (number 2 in figure 2).

Let's start with the HART multiplexers making use of RS 485. There are two options to connect these to the IAMS: a direct connection using an RS 485 interface card in the IAMS server, or a protocol converter that connects to Ethernet in combination with the configuration of a virtual COM port in the IAMS server. When we have a direct serial link the exposure is primarily the exposure of the IAMS server; when we use the option with the protocol converter, the HART-IP traffic is exposed on the control network. However, if we select the correct protocol converter, and configure it correctly, we can secure this with encryption that also provides us a level of authentication. If this architecture is correctly implemented, the security of the sensor depends on the security of the protocol converter and the IAMS. An important security control we miss in this architecture is the write protection parameter. This can be compensated for by making use of tamper-proof field equipment; this type of equipment has a physical DIP switch on the instrument providing a write disable function. The new version of IEC 61511 also suggests this control for safety-related field equipment that can be accessed remotely.

The other type of HART multiplexer, with a direct Ethernet interface, is more problematic to secure. These multiplexers typically use a communication DTM. Without going into an in-depth discussion of DTMs and how to secure their use: there are always two DTMs. One DTM communicates with the field device and understands all the functions the field device provides, and a communication DTM controls the connection with the field device. A vendor of a HART multiplexer making use of an Ethernet interface typically also provides a communication DTM to communicate with the multiplexer. This DTM is software (a .NET module) that runs in what is called an FDT (Field Device Tool), a container that executes the various DTMs supporting the different field devices. Each manufacturer of a field device provides a DTM (called a device DTM) that controls the functionality available to the IAMS or other control system components that need to communicate with the sensor. The main exposure in this architecture is created by the communication DTM, which frequently doesn't properly secure the traffic. Often encryption is missing, exposing the HART command messages to modification, and authentication is missing as well, allowing rogue connections with the HART multiplexer. Additionally, there is often no hash code check on the DTM modules in the IAMS, allowing a DTM to be replaced by a Trojan that adds some malicious functionality. The use of HART multiplexers does expose the sensors and adds vulnerabilities to the system that we need to address. Unfortunately we see that some IIoT solutions make use of this same HART multiplexer mechanism to collect process values and send them into the cloud for analysis. If not properly implemented, and security assessments frequently conclude this, sensor integrity can be at risk in these cases.
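The missing hash code check mentioned above is straightforward in principle. A sketch, assuming SHA-256 and a known-good hash recorded at installation time; the file names and the surrounding loading workflow are hypothetical, not part of the FDT specification:

```python
# Sketch of the missing integrity check: before the FDT container loads a
# DTM module, compare its hash against a known-good value recorded at
# installation time. File names and workflow here are hypothetical.

import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Hash a module file in chunks so large .NET assemblies fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dtm(path: Path, expected_sha256: str) -> bool:
    """Refuse to load a DTM whose hash no longer matches the recorded value."""
    return file_sha256(path) == expected_sha256
```

A Trojaned DTM would change the file contents and therefore the hash, so the container could refuse to load it and raise an alert instead.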

Systems using the traditional I/O modules are costly because of all the wiring, so bus-type systems were developed, such as Foundation Fieldbus and Profibus. Additionally, sensors and actuators gained processing power, which allowed for architectures where the field equipment could communicate and implement control strategies at field device level (level 0). Foundation Fieldbus (FF) supports this functionality for the field equipment. Just like with the I/O architecture, we have two different architectures that pose different security challenges. I start the discussion with the two different FF architectures used in ICS.

Figure 3 – Foundation Fieldbus architectures

Architecture A in figure 3 is the most straightforward: we have a controller with a Foundation Fieldbus (FF) interface card that connects to the H1 token bus of FF. The field devices are connected to the bus using junction boxes, and there is a power source for the field equipment. Control can take place in the controller, making use of either the field devices on the bus or the field devices connected to its regular I/O. All equipment on the control network needs to communicate with the controller and its I/O and interface cards for sampling data or making modifications.

DTMs are not supported for FF; the IAMS makes use of device descriptor files that describe a field device in the electronic device description language (EDDL). No active code is required, like we had with the HART solution when we used DTMs, because the HART protocol is not used to communicate with the field equipment.

The HART solution in figure 2 also supports device descriptor files; this was the traditional way of communicating with a HART field device. However, this method offers less functionality (e.g. missing graphic capabilities) in comparison with DTMs, so the DTM capabilities became more popular. Also, the Ethernet-connected HART multiplexer required the use of a communication DTM.

In architecture A the field device is shielded by the controller. The field device may be vulnerable, but the attacker would need to breach the nodes connected to the control network before he or she can attempt to exploit a vulnerability in the field equipment, and if an attacker were able to do this the plant would already be in big trouble. So the exposed parts are the assets connected to the control network and the channels flowing over the control network, not so much the field equipment. The weakest point is most often the channels, which lack secure communication implementing authentication and encryption.

Architecture B is different: here the control system and IAMS interface with the FF equipment through a gateway device. The idea is that all critical control takes place locally on the H1 bus, but if required the controller function can also make use of the field equipment data through the gateway. Management of the field equipment through the IAMS would be similar to architecture A, but now using the gateway.

The exposure in architecture B is higher the moment the control loops depend on traffic passing over the control network; this should be avoided for critical control loops. Next to the channel exposure, the gateway asset is the primary defense for access to the field equipment. How strong this defense is depends on the gateway's capabilities to authenticate traffic and on the overall network segmentation. In general, architecture A offers more resilience against a cyber attack than architecture B, while from a functional point of view architecture B offers several benefits over architecture A.

There are many other bus systems in use, such as Devicebus, Sensorbus, AS-i bus, and Profibus. Depending on the vendor and regional factors, their popularity differs. To limit the length of the blog I pick one and end with a discussion of Profibus (PROcess FieldBUS), an originally German standard that is frequently used within Europe. Fieldbuses were developed to reduce point-to-point wiring between I/O cards and the field devices. The primary difference between the available solutions is the way they communicate. For example, Sensorbus communicates at bit level, for instance with proximity switches, buttons, motor starters, etc. Devicebus allows for communication in the byte range (8 bytes max per message), a bus system used when, for example, diagnostic information is required or larger amounts of data need to be exchanged. The protocol is typically used for discrete sensors and actuators.

Profibus has several variants. There is, for example, Profibus DP, short for Decentralized Peripherals, an RS 485 based protocol primarily used in the discrete manufacturing industry. Profibus PA, short for Process Automation, is a master/slave protocol developed to communicate with smart sensors. It supports features such as floating point values in the messages and is similar to Foundation Fieldbus, with the important difference that the protocol doesn't support control at the bus level. So when we need to implement a control loop, the control algorithm runs at controller level, where in FF it can run at fieldbus level.

Figure 4 – Profibus architectures

Similar to the FF architectures, for the Profibus (PB) architectures we also have the choice between an interface directly with a controller (architecture A) or a gateway interface with the control network (architecture B). An important difference is that, contrary to the FF functionality, in a PB system there can't be any local control. So we can collect data from the sensors and actuators, and we can send data to them.

Because we don't have local control, the traffic to the field devices, and therefore all control functionality, is now exposed to the network. To reduce this exposure a micro firewall is often added so the Profibus gateway can communicate directly with the controller without its traffic being exposed.

In architecture A the field devices are again shielded from a direct attack by the controller; in architecture B it is the gateway that shields the field devices, but the traffic is exposed. Architecture C solves this issue by adding the micro firewall, which shields the traffic between controller and field equipment. So although architecture B is vulnerable, proper solutions are available and in use to solve this, and architecture B itself should be avoided.

Nevertheless, also in these architectures the exposure of the field equipment is primarily indirect, through the control network, and when an attacker gains access to this network no direct access to the field equipment should be possible.

So far we haven't discussed IIoT for the FF and PB architectures. We mentioned a solution when we discussed the HART architecture, where a server was added to the system that used the HART-IP protocol to collect data from the field equipment; similar architectures exist for both Foundation Fieldbus and Profibus, where a direct connection to the fieldbus is created to collect data. In these cases the exposure of the field equipment directly depends on a server that has either a direct or an indirect connection with an external network. This type of connectivity must be carefully examined: which cyber security hazards exist, what are the attack scenarios, which vulnerabilities are exploited, what are the potential consequences, and what is the estimated risk? Based on this analysis we can decide how to best protect these architectures, and determine which security controls contribute most to reducing the risk.

Summary and conclusions

Are the sensors not sufficiently secure? In my opinion this is not the case for the refining, chemical, and oil and gas industries. I didn't discuss SCADA and power systems in this blog, but so far I have never encountered situations that differed much from the discussed architectures. Field devices are seldom directly exposed, with the exception of the wireless sensors we didn't discuss and the special actuators that have a local field unit to control them. Especially the last category has several cyber security hazards that need to be addressed, but it is a small category of very special actuators. I admit that sensors today have almost no protection mechanisms, with the exception of the tamper-proof sensors and the ISA 100 and WirelessHART sensors, but because the exposure is limited I don't see it as the most important security issue we have in OT systems. Controlling access to the control network, reducing exposure through hardening of the computer equipment, and enforcing least privilege for users and control functions is far more effective than replacing sensors with more secure devices. A point of concern is the IIoT solutions available today; these solutions increase the exposure of the field devices much more, and not all solutions presented seem to offer an appropriate level of protection.

Author: Sinclair Koelemij

Date: May 23, 202

Cyber Security in Real-Time Systems

Introduction

In this blog I'd like to discuss some of the specific requirements for securing OT systems, or what used to be called real-time systems (RTS), specifically because of the real-time requirements of these systems and the potential impact of cyber security controls on those requirements. RTS need to be treated differently when we secure them, in order to maintain this real-time performance. If we don't, we can create very serious consequences for both the continuity of the production process and the safety of the plant staff.

I am regularly confronted with engineers with an IT background working in ICS who lack familiarity with these requirements and implement cyber security solutions in a way that directly impacts the real-time performance of the systems, sometimes with very far-reaching consequences leading to production stops. This blog is not a training document; its intention is to make people aware of these dangers and help avoid them in the future.

I start by explaining the real-time requirements and how they are applied within industrial control systems, and then follow up with how cyber security can impact these requirements if incorrectly implemented.

Real-Time Systems

According to Gartner, Operational Technology (OT) is defined as the hardware and software that detects or causes a change, through direct monitoring and / or control of industrial equipment, assets, processes and events.

Figure 1 – A generic RTS architecture, nowadays also called an OT system

OT is a relatively new term that primarily came into use to express that there is a difference between OT and IT systems, at the time when IT engineers started discovering OT systems. Before the OT term was introduced, the automation system engineering community called these systems real-time systems (RTS). Real-time systems have been in use for over 50 years, long before we even considered cyber security risk for these systems or considered the need to differentiate between IT and OT. It was clear to all that these were special systems; no discussion needed. Time changed this, and today we need to explain that these systems are different and therefore need to be treated differently.

The first real-time systems made use of analog computers, but with the rise of the mini-computer and later the micro-processor, the analog computers were replaced by digital computers in the 1970s. These mini-computer based systems evolved into micro-processor based systems making use of proprietary hardware and software solutions in the late 1970s and 1980s. The 1990s was the time these systems started to adopt open technology, initially Unix based, but with the introduction of Microsoft Windows NT this soon became the platform of choice. Today's real-time systems, the industrial control systems, are for a large part based on technology similar to what is used in corporate networks for office automation: Microsoft servers, desktops, thin clients, and virtual systems. Only the controllers, PLCs, and field equipment still use proprietary technology, though also for this equipment many of the software components are developed by only a few companies and used by multiple vendors. So a form of standardization occurred within this part of the system as well.

In today's automation landscape, RTS are everywhere: in cars, airplanes, robotics, spacecraft, industrial control systems, parking garages, building automation, tunnels, trains, and many more applications. Whenever we need to interact with the real world by observing something and acting upon what we observe, we typically use an RTS. The RTS applied are generally distributed RTS, where multiple RTS exchange information over a network. In the automotive industry the Controller Area Network (CAN) or Local Interconnect Network (LIN) is used; in aerospace we use ARINC 629 (named after Aeronautical Radio, Incorporated), for example in Boeing and Airbus aircraft; and networks such as Foundation Fieldbus, Profibus, and Ethernet are examples connecting RTS within industrial control systems (ICS) such as DCS and SCADA.

Real-time requirements typically express that an interaction must occur within a specified timing bound. This is not the same as saying the interaction must be as fast as possible; a deterministic periodicity is essential for all activity. If an ICS needs to sample a series of process values every 0.5 seconds, this needs to be done in a way that keeps the time between the samples constant. To accurately measure a signal, the Nyquist-Shannon theorem states that we need a sampling frequency of at least twice the frequency of the signal measured. If this principle is not maintained, values for pressure, flow, and temperature will deviate from their actual values in the physical world. Depending on the technology used, tens to hundreds of values can be measured by a single controller; different lists are maintained within the controller for scanning these process values, each with a specific scan frequency (sampling rate). Variation in this scan frequency, called jitter, is simply not allowed.
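The Nyquist-Shannon criterion from the paragraph above can be captured in a couple of lines; the function names are mine, purely for illustration:

```python
# Minimal illustration of the Nyquist-Shannon criterion: the sampling
# frequency must be at least twice the highest frequency in the signal.
# Function names are illustrative assumptions, not from any standard API.

def min_sample_rate(signal_hz: float) -> float:
    """Nyquist rate: the lowest sampling frequency that avoids aliasing."""
    return 2.0 * signal_hz

def is_adequate(sample_hz: float, signal_hz: float) -> bool:
    return sample_hz >= min_sample_rate(signal_hz)

# A 0.5-second scan interval gives a 2 Hz sampling rate, which is only
# adequate for process signals fluctuating at 1 Hz or slower.
assert is_adequate(sample_hz=2.0, signal_hz=1.0)
assert not is_adequate(sample_hz=2.0, signal_hz=1.5)
```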

Measuring a level in a tank can be done with a much lower sampling rate than measuring a pressure signal that fluctuates continuously. So different tasks exist in an RTS, each scanning a specific set of process points within an assigned time slot. An essential rule is that a task needs to complete the sampling of its list of process points within the time reserved for it; there is no possibility to delay other tasks to complete the list. If there is not sufficient time, then the points that remain in the list are simply skipped, and the next cycle will start again at the top of the list. This is what is called a time-triggered execution strategy; a time-triggered strategy can lead to starvation if the system becomes overloaded. With a time-triggered execution, activities occur at predefined instances of time, like a task that samples a list of process values every 0.5 seconds, and another task that does the same for another list every 1 second, or every 5 seconds, etc.
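The time-triggered behaviour described above, completing what fits in the slot and skipping the rest, can be sketched as follows. The names and the deadline mechanism are my own assumptions, not any vendor's implementation:

```python
# Sketch of a time-triggered scan task: walk the point list until the
# time slot expires, skip whatever remains (starvation), and let the next
# cycle restart at the top of the list. Names are illustrative.

import time

def scan_cycle(points: list, read_point, slot_seconds: float) -> list:
    """Sample points until the slot expires; return the points that were skipped."""
    deadline = time.monotonic() + slot_seconds
    for i, point in enumerate(points):
        if time.monotonic() >= deadline:
            return points[i:]   # out of time: the remaining points are skipped
        read_point(point)       # sample one process point
    return []                   # whole list completed within the time slot
```

Note that the task never borrows time from other tasks: an overrun is resolved by dropping work, which keeps the overall schedule deterministic.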

There also exists an event-triggered execution strategy: for example, when a sampled value (e.g. a level) reaches a certain limit, an alarm will go off. Or when a sampled value has changed by a certain amount, or a digital process point has changed from open to closed. Apart from collecting information, an RTS also needs to respond to changes in process parameters. If the RTS is a process controller, the process operator might change the setpoint of the control loop or adjust the gain or another parameter. And of course there is an algorithm to be executed that determines the action to take, for example the change of an output value toward an actuator, or a Boolean rule that opens or closes a contact.

In ICS up to 10-15 years ago this activity resided primarily within a process controller; when information was required from another controller, this was exchanged through analog wiring between the controllers. However, hard-wiring is costly, so when functionality became available that allowed this exchange of information over the network (what is called peer-to-peer control), it was used more and more (see figure 2). Various mechanisms were developed to ensure that a loss of communication between the controllers would be detected and could be acted upon if it occurred.

Figure 2 – Example process control loop using peer-2-peer communication

One of these mechanisms is what is called mode shedding. Control loops have a mode; names sometimes differ per vendor, but commonly used names are Manual (MAN), Automatic (AUTO), Cascade (CASC), and Computer (COM). The names and details differ between systems, but in general, when the mode is MAN the control algorithm is no longer executed and the actuator remains in its last position. When the mode is AUTO the control algorithm is executed and makes use of its local setpoint (entered by the process operator) and the measured process value to adjust its output. When the mode is CASC the control algorithm receives its setpoint value from the output of another source; this can be a source within the controller or an external source that makes use of, for example, the network. If such a control algorithm doesn't receive its value in time, mode shedding occurs. It is generally configurable to which mode the algorithm falls back, but often manual mode is selected. This freezes the control action and requires an operator intervention. Failures may happen, as long as the result is deterministic: better to fail than to continue with some unknown state. So within an ICS, network performance is essential for real-time performance, essential to keep all control functions doing their designed task, essential for deterministic behavior.
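As an illustration of mode shedding, here is a sketch of a loop that sheds from CASC to MAN when the remote setpoint arrives late. The class layout, the 1-second timeout, and the stand-in control algorithm are all assumptions for illustration, not any vendor's design:

```python
# Sketch of mode shedding. Mode names follow the common MAN/AUTO/CASC
# convention from the text; timeout and class layout are assumptions.

class ControlLoop:
    def __init__(self, shed_mode: str = "MAN", timeout_s: float = 1.0):
        self.mode = "CASC"          # setpoint comes from a remote source
        self.shed_mode = shed_mode  # configurable fallback, often manual
        self.timeout_s = timeout_s
        self.output = 0.0           # last output, frozen when shedding occurs

    def on_remote_setpoint(self, age_s: float, setpoint: float, pv: float) -> float:
        """Execute the loop if the remote setpoint arrived in time, else shed."""
        if self.mode == "CASC" and age_s > self.timeout_s:
            self.mode = self.shed_mode   # freeze output; operator must intervene
            return self.output
        self.output = setpoint - pv      # stand-in for the real control algorithm
        return self.output

loop = ControlLoop()
loop.on_remote_setpoint(age_s=0.2, setpoint=50.0, pv=48.0)   # normal cycle
loop.on_remote_setpoint(age_s=2.5, setpoint=50.0, pv=48.0)   # late: shed to MAN
assert loop.mode == "MAN"
```

The key property is determinism: a late setpoint never produces a half-computed action, it produces a known frozen state and an operator alert.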

Another important function is redundancy. Most ICS make use of redundant process controllers, redundant input/output (I/O) functions, and redundant servers, so if a process controller fails the control function continues to operate because it is taken over by the redundant controller. A key requirement here is that this switch-over needs to be what is called a bump-less transfer: the control loops may not be impacted in their execution because another controller has taken over the function. This requires a very fast switch-over that regular network technology often can't handle. If the switch-over took too long, the mode shedding mechanism would again be triggered to keep the process in a deterministic state. The difference with the previous example is that in this case mode shedding wouldn't occur in a single process loop but in all process loops configured in that controller, so a major process upset would occur. A double controller failure would normally lead to a production stop, resulting in high costs. Two redundant controllers need to be continuously synchronized, an important task running under the same real-time execution constraints as the point sampling discussed earlier. Execution of the synchronization task needs to complete within its set interval, and this exchange of data takes place over the network. If the network is not performing as required and the data is not exchanged in time, the switch-over might fail when needed.
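Why the synchronization deadline matters for a bump-less transfer can be sketched as follows. All names and the sync deadline are illustrative assumptions; real controllers synchronize far more state and do so in firmware.

```python
# Hedged sketch of controller redundancy: the secondary continuously
# tracks the primary's control state, so a switch-over can continue
# from the last output instead of restarting (a "bump").

class RedundantPair:
    def __init__(self, sync_deadline_s=0.1):
        self.sync_deadline_s = sync_deadline_s
        self.primary_state = {"output": 0.0, "integral": 0.0}
        self.secondary_state = None    # becomes a copy after each sync
        self.sync_age_s = 0.0          # time since last successful sync

    def sync(self):
        """Periodic sync task over the network; must finish in time."""
        self.secondary_state = dict(self.primary_state)
        self.sync_age_s = 0.0

    def switch_over(self):
        """Primary failed: promote the secondary if its state is fresh."""
        if self.secondary_state is None or \
           self.sync_age_s > self.sync_deadline_s:
            # Stale or missing state: no bump-less transfer possible;
            # the loops would shed to MAN instead of continuing seamlessly.
            return None
        return self.secondary_state
```

If network latency delays the `sync` task beyond its interval, `switch_over` finds stale state at exactly the moment it is needed, which is the failure mode described above.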

So network performance is critical in an RTS; cyber security, however, can negatively impact it if implemented in an incorrect manner. Before we discuss this, let's have a closer look at a key factor for network performance: network latency.

Network latency

Factors affecting the performance in a wired ICS network are:

  • Bandwidth – the transmission capacity in the network. Typically 100 Mbps or 1000 Mbps.
  • Throughput – the average of actual traffic transferred over a given network path.
  • Latency – the time taken to transmit a packet from one network node (e.g. a server or process controller) to the receiving node.
  • Jitter – this is best described as the variation in end-to-end delay.
  • Packet loss – the transmission might be disturbed because of a noisy environment such as cables close to high voltage equipment or frequency converters.
  • Quality of service – Most ICS networks have some mechanism in place that sets the priority of traffic based on its function. This prevents less critical functions from delaying the traffic of more critical functions, such as a process operator intervention or a process alarm.

The factor most often impacted by badly implemented security controls is network latency, so let's have a closer look at this.

There are four types of delay that cause latency:

  • Queuing delay – The time packets wait in device buffers along the path; it depends on the number of hops for a given end-to-end path and is typically caused by routers, firewalls, and intrusion prevention systems (IPS).
  • Transmission delay – The time taken to transmit all the bits of the frame containing the packet, so the time between emission of the first bit and emission of the last bit. It is determined by the frame size and the link bandwidth.
  • Propagation delay – The time for the signal to travel from sender to receiver. The main factors are cable type (copper, fiber, dark fiber) and cable distance, for example very long fiber cables. This factor is normally not influenced by cyber security controls; an exception is when a data diode is implemented, where the type of data diode can have influence.
  • Processing delay – The time taken by the software execution of the protocol stack in each node. Processing delay is created by access control lists, by encryption, and by the integrity checks built into the protocols, whether TCP, UDP, or IP. Firewalls (and the type of firewall) and IPSs also contribute here.
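The four delay types above add up to the end-to-end latency. A back-of-the-envelope budget can make the proportions concrete; the per-hop queuing and processing figures below are illustrative assumptions, not measurements.

```python
# Rough latency budget combining the four delay types.

def transmission_delay_s(frame_bytes, link_bps):
    """Time to emit all bits of the frame onto the link."""
    return frame_bytes * 8 / link_bps

def propagation_delay_s(cable_m, speed_m_per_s=2e8):
    """Signal travel time; roughly 2/3 of light speed in copper or fiber."""
    return cable_m / speed_m_per_s

# A 1500-byte frame on a 100 Mbps link over 500 m of cable,
# plus assumed per-hop costs:
t_tx = transmission_delay_s(1500, 100e6)   # 120 microseconds
t_prop = propagation_delay_s(500)          # 2.5 microseconds
t_queue = 3 * 50e-6   # three hops, assumed 50 us queuing per hop
t_proc = 200e-6       # assumed stack + ACL/encryption processing
total = t_tx + t_prop + t_queue + t_proc
```

Note that propagation is negligible at plant distances; it is the queuing and processing terms, exactly the ones security controls inflate, that dominate the budget.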

Let’s discuss the potential conflict between real-time performance and cyber security.

The impact of cyber security on real-time performance

How do we create real-time performance within Ethernet, a network never designed for providing real-time performance? There is only one way to do this and that is creating over-capacity. A typical, well-configured ICS network has considerable over-capacity to handle peak loads and prevent delays that could impact the RTS requirements. However, the only one managing this over-capacity is the ICS engineer designing the system. The time available for tasks to execute is a design parameter that must hold for both small and large systems. Making certain that network capacity is sufficient is complex in redundant and fault-tolerant networks. A redundant network has two paths available between nodes; a fault-tolerant network has four. How this redundancy or fault tolerance is created can impact the available bandwidth / throughput. In systems where the redundant paths are also used for traffic, network paths can become saturated by high throughput, for example caused by automated back-ups of server nodes or the distribution of cyber security patches (especially Windows 10 security patches). Because this traffic can make use of multiple paths, it becomes constrained when it hits the spot in the network where redundancy or fault tolerance ends and the traffic has to fall back to a much lower bandwidth. Quality of service can help a little here, but when the congestion impacts the processing of the network equipment, extra delays will occur even for the prioritized traffic.
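The headroom problem described above can be sketched with a simple utilization check. The traffic figures are illustrative assumptions: bulk traffic that fit comfortably on the redundant high-bandwidth paths saturates the single lower-bandwidth link where redundancy ends.

```python
# Sketch of over-capacity eroded by bulk traffic (e.g. an automated
# server backup or patch distribution) at the point where redundant
# paths converge onto one lower-bandwidth link.

def link_utilization(control_bps, bulk_bps, link_capacity_bps):
    """Fraction of a link's capacity consumed by control plus bulk traffic."""
    return (control_bps + bulk_bps) / link_capacity_bps

control = 10e6    # assumed steady control/monitoring load
backup = 95e6     # backup stream that fit fine on the 1 Gbps redundant paths
u = link_utilization(control, backup, 100e6)   # 100 Mbps uplink
# u > 1.0: the link is saturated, queues build up, and once the switch
# itself is overloaded even prioritized (QoS) traffic is delayed.
```

This is why over-capacity must be budgeted end to end by the ICS engineer: a link that looks generously sized in isolation can still be the choke point for aggregated traffic.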

Another source of network latency can be the implementation of anomaly detection systems making use of port spanning. A port span has some impact on the network equipment, generally not much, but this depends very much on the base load of the equipment and how the span is configured. Similarly, low-cost network taps can add significant latency; this has caused issues in the field.

Another source of delay are the access filters. Ideally, when we segment an ICS network in its hierarchical levels (level 1, level 2, level 3), we want to restrict the traffic between the segments as much as possible, but specifically at levels 1 and 2 this can cause network latency that potentially impacts control-critical functions such as peer-2-peer control and controller redundancy. Additionally, the higher the network latency, the fewer process points can be configured for overview graphics in operator stations, because these also have a configurable periodic scan. A scan that can also be temporarily raised by the operator for control tuning purposes.
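The trade-off between latency and displayable points can be made concrete with a simplified sizing rule. This is a hypothetical model: it assumes points are fetched sequentially and the per-point cost figures are invented for illustration; real systems batch requests, but the direction of the effect is the same.

```python
# Hypothetical sizing rule: if each displayed point costs one
# request/response round trip, added filter latency directly shrinks
# how many points fit within one display scan interval.

def max_points(scan_interval_ms, per_point_ms):
    """Upper bound on points refreshed per scan if fetched sequentially."""
    return scan_interval_ms // per_point_ms

fast = max_points(1000, 2)   # 2 ms per point            -> 500 points/scan
slow = max_points(1000, 5)   # extra access-filter delay -> 200 points/scan
```

A few milliseconds of added per-request latency can thus more than halve the point budget of an overview display.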

The way vendors manage this traffic load is through their system specifications: limiting the number of operator stations, servers, and points per controller, and breaking up the segments into multiple clusters. These specifications are verified with intensive testing to ensure performance is met under all foreseen circumstances. This type of testing can only be done on a test bed that supports the maximum configuration; the complexity and impact of these tests make it impossible to verify proper operation on an operational system. Because of this, vendors will always resist implementing functions in the level 1 / level 2 parts of the system that can potentially impact performance and have not been tested.

In the field we very often see security controls implemented in a way that can cause serious issues, leading to dangerous situations and / or production stops. Controls are implemented without proper testing, configurations are created that cause considerable processing delay, and networks are modified in a way that a single mistake by a field service engineer can lead to a full production stop, in some cases impacting both the control side and the safety side.

Still, we need to find a balance between adding the required security controls to ICS and preventing serious failures. This requires a separate set of skills: engineers who understand how the systems operate, know which requirements need to be met, and have the capabilities to test the more intrusive controls before implementing them. This is what makes IT security very different from OT security.