After last week's US Government (USG) Reset article, I received a number of public comments and had private "yes, and" conversations along these lines:
just as government needs to show results, so does industry. Outside of entrenched, IT-specific security providers, the public record is somewhat mixed.
I agree with the sentiment, although the USG and private industry cases are different. The link between most USG OT security efforts and a result is amorphous. Long sets of recommendations, or demonstrations of known technology, provided at great volume … these are supposed to accomplish what? It's one of the reasons metrics have been nearly non-existent in USG programs.
Most of the comments on measuring private sector efforts, and holding them accountable for proven results, were related to OT security product and service purchases. In these cases the link between the promise and the result is typically direct. A detection solution should detect cyber incidents. A secure remote access solution should prevent unauthorized remote access and detect suspect attempts.
The commenters are right. We are well past the time to begin evaluating the effectiveness and usefulness of these solutions in various product and service categories.
Effectiveness and usefulness are two different things. A solution could be highly effective at doing what it claims and still not be useful in reducing OT cyber risk. An example could be detecting spoofed (not an actual request or response) communication between Level 1 and Level 2 devices. What is the likelihood of this happening? And what is the reduction of consequence if this is detected? If the answer to both questions is very low, it isn't a useful OT cyber risk reduction solution even if it is 100% effective.
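To make that concrete, here's a back-of-the-envelope sketch in Python; every number in it is invented purely for illustration:

```python
# Sketch: usefulness as expected consequence reduction, not detection accuracy.
# All numbers are hypothetical, for illustration only.

effectiveness = 1.00            # solution detects 100% of spoofed L1/L2 traffic
annual_likelihood = 0.001       # chance this attack occurs in a given year
consequence_reduction = 50_000  # $ of consequence avoided if it is detected

# Expected annual risk reduction = likelihood x effectiveness x consequence avoided
expected_value = annual_likelihood * effectiveness * consequence_reduction
print(f"Expected annual risk reduction: ${expected_value:,.0f}")  # $50
# A perfectly effective detection can still be worth almost nothing.
```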
Are OT Detection Solutions Worth The Investment?
This is the question that is percolating in the community. By OT detection solutions, people mean Armis, Claroty, Dragos, Nozomi, and their smaller competitors. My intuition is they are worth some level of investment at the appropriate point in your OT security program maturity. There are many actions that deliver more efficient risk reduction first; at some point these detection solutions, deployed at some level of visibility, become the next item on the efficient risk reduction list.
Detection has benefited from my and others' intuition, as well as from a degree of circular logic. It's logical, and examples in cybersecurity and other fields have demonstrated, that detecting an attack or incident before it causes damage can allow the defender to prevent or reduce that damage. A case can be made that detection solutions can be useful.
How effective are they?
What if the detection system doesn't detect the attack? Yes, it failed. The answer is usually "we need more and better detection" rather than "maybe we shouldn't be spending these resources on detection." If the better detection fails, we need to invest in even better detection. Einstein 1, and 2, and 3 … and so on and so on.
In a world of infinite resources this might make sense. With limited resources to apply to OT cyber risk, there are combinations of low effectiveness and high cost where the next increment of detection no longer makes sense.
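A minimal sketch of that diminishing-returns argument, with detection rates, costs, and consequence values I made up for illustration:

```python
# Sketch: marginal value of stacking detection layers (Einstein 1, 2, 3 ...).
# Detection rates, costs, and consequence values are invented for illustration.

avoided_consequence = 1_000_000  # expected $ saved if the attack is detected in time
layers = [  # (annual cost, share of *remaining* attacks this layer catches)
    (100_000, 0.60),
    (150_000, 0.50),
    (250_000, 0.40),
]

undetected = 1.0
for i, (cost, catch_rate) in enumerate(layers, start=1):
    marginal_detect = undetected * catch_rate  # new detections this layer adds
    marginal_value = marginal_detect * avoided_consequence
    print(f"Layer {i}: marginal value ${marginal_value:,.0f} vs cost ${cost:,.0f}")
    undetected -= marginal_detect
# Each layer only catches a share of what is left, so marginal value shrinks
# while cost does not: by layer 3 the spend ($250K) exceeds the value ($80K).
```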
I don't know how we answer this question, yet. I do know that we need to start trying something, as detection today is closer to a religious belief than science. Maybe we need to look to Karl Popper:
A theory is part of empirical science if and only if it conflicts with possible experiences and is therefore in principle falsifiable by experience.
Put another way, what would have to happen to prove that OT detection is not worth the resources? There has to be an answer to this question.
I don't believe we can get to certainty on detection's worth, its value; so much depends on the attacker and on the defender's other controls.
We can and should begin collecting data and running experiments to gain more confidence in our answer. We should define metrics and collect data on what success for these solutions means. And importantly, in a Popper-lite manner, identify what would lead to a verdict that the detection solutions are not worth the continued cost.
If I'm a company: what attacks has my OT detection solution identified? How much consequence was avoided or reduced due to those detections? What attacks were not detected, and what were the consequences of those attacks? How much does the OT detection solution cost, including our labor costs, annually?
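As a hedged sketch of what rolling those numbers up could look like (the field names and figures are my assumptions, not an industry standard):

```python
# Sketch: a minimal annual scorecard for an OT detection deployment.
# Field names and example figures are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class DetectionScorecard:
    attacks_detected: int
    consequence_avoided: float   # $ avoided because of detections
    attacks_missed: int
    consequence_incurred: float  # $ from attacks the solution did not catch
    annual_cost: float           # licenses, hardware, and labor

    def net_value(self) -> float:
        return self.consequence_avoided - self.annual_cost

    def worth_it(self) -> bool:
        return self.net_value() > 0

year = DetectionScorecard(
    attacks_detected=2, consequence_avoided=400_000.0,
    attacks_missed=1, consequence_incurred=250_000.0,
    annual_cost=300_000.0,
)
print(f"Net value: ${year.net_value():,.0f}, worth it: {year.worth_it()}")
```

A Popper-lite failure condition could be pre-registered against a scorecard like this, for example several consecutive years of negative net value.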
With this data you can begin to answer the "is it worth it" question. Or, more importantly, is this what we should be spending our resources on? There are many likelihood and consequence reduction options for dealing with the OT cyber threat. The community tends to fixate on a small number of the "cyber" solutions.
80 / 20 Rule
One last thing … I’m seeing a push to get visibility for detection throughout the OT environment. To monitor and analyze 100% of the communications and cyber assets. My intuition leads me to believe this is a mistake.
Why wouldn't monitoring for detection follow the all too often true Pareto Principle, or 80/20 rule? I'd guess that you would get at least 80% of the useful detection information by monitoring 20% of the communications and cyber assets.
Of course you can be scared into 100% monitoring. You don’t want to miss anything. You don’t want to be blamed for not requesting 100% monitoring.
It's time to get past my or anyone else's intuition. To my asset owner friends: let's get some of these numbers. Overall detection numbers, and the numbers if you only monitored what you identified as the most effective 20%.
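Here's a sketch of that comparison, assuming you can attribute each detection back to a monitoring point; the per-point counts are invented:

```python
# Sketch: detections captured by the top 20% of monitoring points vs all points.
# The detection counts per monitoring point are invented for illustration.

detections_per_point = [42, 31, 18, 9, 6, 4, 3, 2, 1, 1]  # one entry per point

ranked = sorted(detections_per_point, reverse=True)
top_20pct = ranked[: max(1, len(ranked) // 5)]  # best-yielding 20% of points

total = sum(ranked)
covered = sum(top_20pct)
print(f"All points: {total} detections")
print(f"Top 20% of points: {covered} detections ({covered / total:.0%})")
# With this distribution, 2 of 10 points capture 62% of detections.
```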