SCADA Security Economics

I'm in Berlin preparing to attend the Workshop on the Economics of Information Security (WEIS). ICS owner/operators act in their own self-interest, which is rational behavior for any person or organization. Owner/operators that don't spend money on ICS security do so because they don't believe it is in their self-interest; they don't believe spending the money will prevent losses of sufficient value to warrant the expense and effort.

A number of ICS security conferences have sessions on making the business case for security to convince management to spend time and money. They have been largely unsuccessful in convincing C-level executives to spend more. Then there is the question of how much money should be spent on ICS security and what it should be spent on. Lots of questions, and hopefully some new ideas on how to pursue answers at WEIS.
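To make the business-case question concrete, here is a minimal sketch of the classic annualized loss expectancy (ALE) calculation a CFO implicitly runs when asked to fund security. The incident rates, loss figures, and spend level are hypothetical numbers picked purely for illustration, not data from any real facility.

```python
# Rough sketch of the classic annualized loss expectancy (ALE) framing for a
# security spending decision. All numbers are hypothetical, chosen only to
# show the arithmetic behind "is this spend worth it?"

def ale(annual_rate_of_occurrence: float, single_loss_expectancy: float) -> float:
    """Expected annual loss: how often an incident occurs times what each one costs."""
    return annual_rate_of_occurrence * single_loss_expectancy

# Hypothetical plant: an ICS incident every 20 years, costing $5M per incident.
baseline = ale(annual_rate_of_occurrence=0.05, single_loss_expectancy=5_000_000)

# Hypothetical control program: $200K/year of security spend cuts the incident rate in half.
with_controls = ale(annual_rate_of_occurrence=0.025, single_loss_expectancy=5_000_000)
annual_spend = 200_000

net_benefit = (baseline - with_controls) - annual_spend
print(f"Baseline ALE:      ${baseline:,.0f}/year")
print(f"ALE with controls: ${with_controls:,.0f}/year")
print(f"Net benefit:       ${net_benefit:,.0f}/year")  # negative with these numbers
```

With these made-up numbers the spend loses money on paper, which is exactly the answer the owner/operator keeps reaching when making this decision on expected value alone.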

There isn't a critical infrastructure security economics paper here, but I'll be looking for answers, or at least hints, to the following three issues.

1. Extremely high consequences skew the risk equation

This is the problem the ICS security space has been dealing with for years. The economic, loss-of-life, and environmental consequences of a successful attack on truly critical infrastructure are massive, often exceeding the value of the company that owns and runs the control system. In the traditional risk equation (risk = threat × vulnerability × consequence), even a very small threat and a tiny vulnerability can result in a high risk when the consequence is that large.
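A back-of-the-envelope sketch of that skew, using made-up probabilities and dollar figures, shows how the consequence term dominates:

```python
# Minimal sketch of the traditional risk equation, risk = threat x vulnerability
# x consequence, using made-up numbers to show how a massive consequence term
# dominates even when the likelihood terms are tiny.

def risk(threat: float, vulnerability: float, consequence: float) -> float:
    """Expected loss: threat and vulnerability as probabilities, consequence in dollars."""
    return threat * vulnerability * consequence

# Typical enterprise IT scenario: plausible threat, moderate vulnerability, bounded loss.
enterprise = risk(threat=0.10, vulnerability=0.30, consequence=10_000_000)

# Critical infrastructure scenario: tiny threat and vulnerability, catastrophic loss.
critical = risk(threat=0.001, vulnerability=0.01, consequence=50_000_000_000)

print(f"Enterprise risk:         ${enterprise:,.0f}")  # $300,000
print(f"Critical infrastructure: ${critical:,.0f}")    # $500,000, despite a likelihood 3,000x smaller
```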

I'm very curious to learn about economic and statistical approaches for dealing with high-consequence events.

2. A threat that is small in number but high in motivation and means

APT, Stuxnet, and all the talk of cyberwar lead to increased concern about a nation state devoting substantial resources to compromising a critical infrastructure component. These threat agents are different from even talented hackers, who may not want to spend the time and money to develop a tailored, devastating attack on a specific facility.

3. Threats that want the capability but are unlikely to launch an attack

All the talk of cyberwar has me thinking about what I would do if tasked with developing an offensive capability. I'd be looking at three phases. 1) Learn what potential adversaries have deployed and develop reliable ways to attack those control systems. 2) Stage the attack on their systems so it can be launched whenever needed; this requires getting it onto the systems and maintaining communication with it. 3) Push the button and launch the attack.

Phase 1 is likely being pursued by every nation and many non-state actors. An organization with an offensive mission, even one limited to counterattack, would be failing if it didn't at least have the capability. Phase 2 is a harder decision because there are consequences to getting caught. Scholars have written numerous articles on whether this is allowed, whether it is an act of war, …

Phase 3 is attacking an adversary's physical infrastructure. That is a much harder decision with serious consequences, since reprisal is allowed and expected.

I'll be blogging and tweeting (#WEIS2012) next Monday and Tuesday.

Image by Todd Huffman