ICS security regulation

A number of related issues brought up at ICSJWG have been floating around my head on the long flight to Asia: market failure, regulation, and the public's right to know.

At ICSJWG a friend reminded me that in his S4 keynote Dr. Ross Anderson said that regulation is required when the consequence of an event exceeds the value of the company. Imagine you have $1,000 invested in an asset. If the asset goes away, your maximum loss is $1,000, so while you will take action to address risk to this asset, the maximum consequence to you is a loss of $1,000. You will spend much less than $1,000 to protect your investment, and whether or not you go through a formal risk assessment, that spending will be based on the risk equation: perceived vulnerability x likelihood of attack/incident x $1,000.
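As a rough sketch of that equation, here is a small illustration in Python; the probabilities are invented purely to show the structure of the calculation.

    # Hypothetical illustration of the asset owner's risk equation.
    # Both probabilities below are invented for the example; only the structure matters.
    asset_value = 1_000            # maximum loss to the owner if the asset goes away
    perceived_vulnerability = 0.5  # owner's estimate that an attack would succeed
    likelihood_of_incident = 0.1   # owner's estimate that an attack/incident occurs

    expected_loss = perceived_vulnerability * likelihood_of_incident * asset_value
    print(f"Owner's expected loss: ${expected_loss:,.0f}")  # $50

    # A rational owner will spend no more than this expected loss on protection,
    # which is far less than the full $1,000 value of the asset.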

Now imagine the impact to the community of the loss of this asset is much greater than $1,000. The asset owner has no incentive to consider this larger impact in their risk equation, and this is why government regulation is needed to get them to address the risk. This is highly applicable to the ICS critical infrastructure space, where the economic impact of a sustained outage would greatly exceed a utility company's value. And there is the issue of loss of life, which is very difficult to put a price on.
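To make the externality concrete, the same sketch can be extended with an invented community impact figure that dwarfs the owner's $1,000 stake; the numbers are again hypothetical.

    # Continuing the illustration above; the community impact figure is invented.
    perceived_vulnerability = 0.5
    likelihood_of_incident = 0.1
    owner_value = 1_000
    community_impact = 1_000_000   # e.g. economic cost of a sustained outage

    owner_expected_loss = perceived_vulnerability * likelihood_of_incident * owner_value
    community_expected_loss = perceived_vulnerability * likelihood_of_incident * community_impact

    print(f"Owner's expected loss:     ${owner_expected_loss:>10,.0f}")      # $50
    print(f"Community's expected loss: ${community_expected_loss:>10,.0f}")  # $50,000

    # The owner's protection spending is bounded by the $50 figure, not the
    # $50,000 the community stands to lose -- hence the case for regulation.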

This is the most compelling argument I've heard for regulation. However, if you follow this argument, then regulation should apply only to events large enough to have an impact greater than a company's value. It would not be highly detailed or attempt to require a complete security program, and perhaps it would focus more on limiting consequence. It would be interesting to go through NERC CIP or some other regulation and see how much would survive from this viewpoint.

It might also drive a regulation requiring that safety systems be separated from control systems, despite the recent results of LOGIIC2. This would have a much greater impact on preventing catastrophic, bigger-than-company-value incidents than documenting all running services in a spreadsheet.

In my ICSJWG disclosure panel I focused on vendors providing asset owners with honest, forthright and clear vulnerability information so they can respond as they deem appropriate to manage risk. A few tweeters said that the public has a right to know about vulnerabilities as well.

While this is a populist sentiment, it may not be helpful or true. The term "actionable intelligence" comes to mind. What action can the public take if they know a transmission SCADA system will crash if its servers are port scanned? In theory a person or family could move, buy a generator, or lobby their elected officials for more regulation. This is unlikely and, in almost all cases, ineffective.

This is not an argument for keeping information secret or for beating up anyone's particular disclosure practice. It is an argument against the claim that the potentially affected public has a right to know about a specific vulnerability, and against the assumption that they would benefit from knowing about it.

Even if the public is provided with macro-level information on a company's security posture, such as the letter grades the US Government has given out for NIST SP800-53 compliance, it is unclear how this would help. The White House has argued that providing certain information to the SEC would affect stock price, and this would drive companies to be more secure. As Sean McBride pointed out in the September podcast, the evidence actually shows that an incident has, at best, a short-term impact on stock price.

Image by Deacon MacMillan