Summary: Vendors focused on detecting compromise of Level 0 to Level 1 communications should pivot to process variable anomaly detection.
There are a handful of vendors (Siga being the most active, Mission Secure, Fortiphyd, … and a couple I likely missed) who focus on monitoring Level 0 (sensor/actuator) to Level 1 (PLC/controller) communications and comparing the monitored points (data) to the same points at higher levels of the ICS. Any differences would flag either an attacker modifying the data or some non-malicious cause of the error.
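The core comparison these products perform can be sketched in a few lines: read the same process points from two independent sources (the Level 0/1 monitoring network and the ICS itself) and flag any divergence beyond a tolerance. This is an illustrative sketch, not any vendor's implementation; the point names, readings, and tolerance are all hypothetical.

```python
TOLERANCE = 0.02  # hypothetical: 2% relative difference allowed for noise/latency

def flag_divergences(level0_readings, ics_readings, tolerance=TOLERANCE):
    """Return points where the Level 0 value and the ICS-reported value disagree."""
    flagged = []
    for point, l0_value in level0_readings.items():
        ics_value = ics_readings.get(point)
        if ics_value is None:
            continue  # point not visible at the higher level
        denom = max(abs(l0_value), abs(ics_value), 1e-9)
        if abs(l0_value - ics_value) / denom > tolerance:
            flagged.append((point, l0_value, ics_value))
    return flagged

# Fabricated example: TT-303 reads 88.5 at the sensor but 72.0 at the ICS,
# which could indicate manipulated data or a non-malicious failure.
level0 = {"FT-101": 42.0, "PT-202": 3.11, "TT-303": 88.5}
ics    = {"FT-101": 42.1, "PT-202": 3.10, "TT-303": 72.0}

print(flag_divergences(level0, ics))  # → [('TT-303', 88.5, 72.0)]
```

Note that the flag only says the two views disagree; distinguishing an attack from a failed transmitter or stale data is a separate triage problem.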
This product category hasn’t garnered much mindshare or many sales. Deploying and maintaining what is essentially a second network to monitor the Level 0 to Level 1 communications is expensive, and there are many better ways to spend that money to efficiently reduce risk. It may be a fit for some special customers and special situations, but it is unlikely to ever be a mainstream OT security solution.
I recently received a demonstration from Siga, and the most interesting feature to me was their analysis of the process data to identify process variable anomalies. Cases where the data representing the state of the physical process doesn’t make sense. Cases where some of the data must be wrong, whether via a cyber attack presenting false data or one of the many other possible failures.
Most of the attention to process variable anomaly detection has involved digital twins, with GE’s Digital Ghost being a good example. A digital twin or sophisticated model is created, and then correlations between the various points in the process can be identified through machine learning. Ideally you would find non-obvious correlations spread across multiple PLCs or controllers to thwart replay attacks (think Stuxnet).
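The replay-detection idea can be illustrated with a toy example: learn that two points move together under normal operation, then flag a window where one point keeps varying while its correlated partner is a replayed, flat recording (the Stuxnet pattern). This is a sketch of the concept only; all data is fabricated, and real systems would learn many such correlations across controllers.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed with the standard library only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0  # a flat (e.g. replayed) signal carries no correlation
    return cov / (vx ** 0.5 * vy ** 0.5)

# Normal operation: pump speed and flow rate rise and fall together.
pump_speed = [50, 55, 60, 58, 52, 48, 53, 59]
flow_rate  = [10.1, 11.0, 12.2, 11.8, 10.4, 9.7, 10.7, 12.0]
baseline = pearson(pump_speed, flow_rate)  # strongly positive

# Attack window: flow is a replayed constant while pump speed still varies,
# so the learned correlation collapses.
pump_speed_now = [50, 57, 63, 55, 49, 61, 52, 58]
flow_replayed  = [10.5] * 8
under_attack = pearson(pump_speed_now, flow_replayed)

print(round(baseline, 2), round(under_attack, 2))
```

A drop in a normally strong correlation is the alarm condition; the harder, site-specific work is knowing which correlations should hold, which is exactly why each site needs its own model.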
The problem with this approach is that you need to create a digital twin for each site. It may be feasible for something fairly consistent, such as a GE Mark VIe control system for a GE turbine, but this is not the typical situation. Even in the rare example of a company that buys primarily the same ICS, we see that each site is different and would need its own digital twin or model development. Many organizations have a hodgepodge of systems due to acquisitions, changing management decisions, and the preferences of the decision maker at individual plants.
The benefits of digital twins outside of security will increase the deployment of digital twins, but not at a pace and market penetration level that would block other lower cost and lower effort process variable anomaly solutions.
Siga’s process variable anomaly approach was different in that it didn’t try to model or understand the process. Rather it just looked at it as a set of data. A set of data that they already had in their product from two sources (the ICS and their Level 0/1 monitoring network). It was unclear whether this was unstructured machine learning, but it did essentially approach the problem that way. Here is a set of data, what appears to be anomalous?
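A minimal sketch of this model-free style of detection: treat the process data purely as numbers, learn each point's normal range from its own history, and flag readings far outside it, with no knowledge of the physical process. The data, point names, and threshold below are made up for illustration; a real product would use far more sophisticated statistics than a z-score.

```python
import statistics

def zscore_anomalies(history, current, threshold=4.0):
    """Flag points whose current value sits > threshold std devs from history."""
    anomalies = {}
    for point, values in history.items():
        mean = statistics.fmean(values)
        stdev = statistics.pstdev(values) or 1e-9  # guard flat signals
        z = (current[point] - mean) / stdev
        if abs(z) > threshold:
            anomalies[point] = round(z, 1)
    return anomalies

# Fabricated history: PT-202 is steady near 3.11; TT-303 is steady near 88.5.
history = {
    "PT-202": [3.10, 3.12, 3.09, 3.11, 3.13, 3.10, 3.08, 3.12],
    "TT-303": [88.0, 88.4, 88.9, 88.2, 88.6, 88.3, 88.7, 88.5],
}
current = {"PT-202": 3.11, "TT-303": 95.0}  # TT-303 jumps far out of range

print(zscore_anomalies(history, current))  # only TT-303 is flagged
```

The appeal is exactly what the article describes: this runs on data the vendor already has, or on historian data, with no model of the process required.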
First, this can be a gradual pivot from the market’s perspective. The company doesn’t need to abandon their Level 0/1 monitoring product or the stake in the ground they have placed on the importance of this technology. From a technical perspective there isn’t much advancement required or available in comms monitoring, so a pivot wouldn’t divert R&D dollars.
The emphasis of development and marketing dollars shifts to the analysis and identification of process variable anomalies in unmodeled/unstructured data that has been collected somehow. The companies in this space shouldn’t care how they get this data. It will be most available from historians or other SCADA/DCS servers. Available without the cost of deploying the secondary Level 0/1 network.
Maybe even more important, available without changing the process or ICS. As I noted in last week’s article, the common factor in the two most successful OT security product markets:
They both can be deployed and used without making any changes to the ICS or the physical system being monitored and controlled. Detection got its foothold by being passive, only listening to the network … SW/FW analysis is an offline process not even taking place on the ICS. The input comes from the vendor, or from an asset owner providing a vendor package.
The OT detection market is a great example of a set of successful pivots. When it was too hard or too early for asset owners to deal with the detection data, the market pivoted to creating an asset inventory, then vulnerability management, and now risk management. OT detection never left the product, and is now being used by asset owners with a more mature OT security program.
The biggest problem with this pivot is mindshare. The OT detection space overwhelmed the market with incessant and (usually) quality messaging over a five-year period. It got to the point that a CISO better have an answer when executives or the board asked what they were doing vis-a-vis this product category. The SBOM / composition analysis market has the wind at its back with the focus on supply chain and US Government mandates. The same could be said for “zero trust”, although this is harder in OT, where there is typically no trust required in control commands … the opposite of zero trust.
Could a vendor, or even better a group of vendors, push the message of process variable anomaly detection from existing and accessible process data? I don’t know. I do know it is a pathway to a large market, and Level 0/1 monitoring is not. There are also exits if the market doesn’t develop to the desired size, such as selling the technology to the Clarotys, Dragos, and Nozomis of this world. Or to AWS or Azure. AWS, Azure, and the king of historians, AVEVA/OSIsoft’s PI System, would be potential competitors. Claroty, Dragos, and Nozomi, though, have shown that focus and an omnipresent marketing message can create $1B companies.