It’s always a pleasure to talk with Ralph Langner of Langner Communications at S4. He is a leading independent control system security voice in Europe. Ralph has developed some interesting tools to demonstrate vulnerabilities and lack of security that I hope to share with our readers soon. He had an interesting idea about “infinite day” vulnerabilities, and wrote it up in the article below.


Last week’s S4 gave a whole lot to chew on. One presentation that spawned new ideas was “Estimating the number of 0Days in control systems” from INL. In a nutshell, the authors start out by asserting that vulnerabilities are software defects (i.e. bugs) that, once known to the vendor, are going to be fixed; a concept, by the way, that is not in accordance with ISA-99 definitions (part 1, section 3.2.130). 0Day (pronounced: oh-day) was defined as “a vulnerability which has been discovered but has not yet been publicly announced or fixed”. Although not clearly stated in the presentation or paper, the underlying assumption seems to be that 0Days should be one of our biggest concerns – a notion in line with what we hear from hackers and from vendors of antivirus solutions. For control systems, this may fairly be disputed. 0Days may earn hackers bragging rights and antivirus vendors revenue, but they hardly need to cost asset owners their sleep, as I will try to explain.

To make my point I have coined the term iDay vulnerability. iDay stands for “infinite days” and refers to a vulnerability that won’t get fixed, even if publicly known. Examples are exploitable bugs in legacy products that won’t get fixed simply because the product has been obsoleted by the vendor (think of Windows NT, for example), or because the vendor may be out of business. Other examples are vulnerabilities that in essence are not bugs but exploitable design features, such as the well-known lack of authentication in field device protocols, the design flaw that Jake Brodsky brought up in his presentation on wireless jamming, or the design vulnerabilities in wireless hardware presented by Travis Goodspeed. Some iDays are even advertised by vendors, such as “open” connectivity interfaces for third-party applications – an interesting area where vendors deliberately leave the well-beaten path of security by obscurity that is usually taken to deal with iDays. Exploitable design features do not necessarily have to be iDays, though, as vendors may decide to fix them anyway. An example is the rogue PLC firmware upload presented by Dale and Daniel. Here, the vulnerability is not a bug in the strict sense of the word, as the device in question actually does some basic integrity checking. Now the vendor could simply hide in ignorance and thereby make the vulnerability an iDay – or address the problem, and the iDay is gone.
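To make the “lack of authentication in field device protocols” point concrete, here is a minimal sketch, assuming Modbus/TCP as the example protocol (its frame layout is public and it carries no authentication by design). The function below is illustrative only and not taken from any vendor tool.

```python
import struct

def write_single_coil(tid: int, unit: int, coil: int, on: bool) -> bytes:
    """Build a Modbus/TCP Write Single Coil request (function code 0x05).

    The frame carries no credential, session token, or signature: any host
    that can reach the device on TCP port 502 can command the output.
    """
    value = 0xFF00 if on else 0x0000          # per spec: 0xFF00 = on, 0x0000 = off
    pdu = struct.pack(">BHH", 0x05, coil, value)
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", tid, 0x0000, len(pdu) + 1, unit)
    return mbap + pdu

frame = write_single_coil(tid=1, unit=1, coil=10, on=True)
# 12 bytes on the wire; nothing in them identifies or authenticates the sender
```

That the entire request fits in a dozen anonymous bytes is exactly what makes such design features iDays: fixing them means changing the protocol, not patching a bug.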

If we look at control systems, the existence of substantial iDays in broad daylight is so overwhelming that one has to question the intelligence of any attacker who messes around with 0Days or, even worse, invests precious lifetime trying to find new ones. In some ways, staging an attack based on exploiting 0Days is like sending spam mail. A targeted attack is impossible, because it is unpredictable who the actual recipients will be and what they’re going to do with the mail, if it is opened at all. Not even the number of vulnerable targets is predictable. This is not to say that we may never see attacks against control systems based on 0Days; I’m just arguing that such attacks are highly unlikely and will hardly result in major damage. iDays, in contrast, affect all products of a specific product line, regardless of version number. Since they won’t be fixed, the window of opportunity stays open indefinitely. They may be completely unappealing to the hacker community, but for a serious attacker, iDays are a safe bet: iDay exploits work pretty much deterministically.

So what kind of damage might result from iDay exploits? Here’s an example. One area that is notorious for critical iDays is PLC command protocols. With a decent amount of reverse engineering it is possible to do virtually everything that the vendor’s engineering tools do, but with malicious purpose, and automated. For example, it is possible to write exploit code that scans for specific PLC models in the PCN and enumerates configuration information (easy). Against the resulting list of identified devices, malicious command sequences may then be executed simultaneously. This may go so far as disabling ladder logic execution and directly manipulating outputs at will – reliably, on all identified PLCs of a specific make, regardless of firmware version. What most people don’t recognize is that all of this is possible without any insider knowledge or engineering skills. With the proper tools, it’s just a matter of a few mouse clicks, or of fully automated execution in a malware/worm scenario. As an asset owner, just think about what it means if all (or most) of your outputs are suddenly and simultaneously stuck in the “on” position, regardless of sensor inputs or alarms: valves remain open, drives remain running, burners keep burning, and so forth. As long as we are facing problems as basic and well known as these, I question whether it is worth spending a lot of time on 0Days.
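As a rough illustration of how such automated enumeration might be structured, here is a hedged sketch, again assuming Modbus/TCP (the Read Device Identification request, function code 0x2B with MEI type 0x0E, is part of the public specification). The transport is injected as a callable so the logic can be shown without live network access; all function names here are hypothetical, not any real tool’s API.

```python
import struct
from typing import Callable, Dict, Iterable, Optional

def build_device_id_request(tid: int = 1, unit: int = 1) -> bytes:
    """Modbus/TCP Read Device Identification request (FC 0x2B, MEI 0x0E).

    As with any command frame in this protocol, nothing in it
    authenticates the sender.
    """
    pdu = struct.pack(">BBBB", 0x2B, 0x0E, 0x01, 0x00)  # basic identification, object 0
    mbap = struct.pack(">HHHB", tid, 0x0000, len(pdu) + 1, unit)
    return mbap + pdu

def enumerate_devices(
    hosts: Iterable[str],
    probe: Callable[[str, bytes], Optional[bytes]],
) -> Dict[str, bytes]:
    """Send the identification request to every host; collect responders.

    `probe` stands in for a real TCP transport (hypothetical): it returns
    the raw response, or None if the host did not answer.
    """
    responders: Dict[str, bytes] = {}
    for host in hosts:
        reply = probe(host, build_device_id_request())
        if reply is not None:
            responders[host] = reply  # vendor/model strings would be parsed here
    return responders
```

A real tool would feed the resulting device list into a loop that issues command frames to every responder at once – which is precisely why unauthenticated command protocols scale so well for an attacker, and so badly for the asset owner.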

Final note – the goal of science is to expand knowledge. The generally accepted way to do this is to criticize and ultimately falsify existing theory. I hope that nobody, especially the INL authors whom I hold in high respect, will view this post as a devaluation of INL’s paper and research.