Last week the MITRE Engenuity team released the results from their first ATT&CK Evaluations for ICS. I spent hours looking at MITRE’s published results and the evaluated vendors’ write-ups of the results. It was a professionally executed and realistic scenario that showed, unsurprisingly, that ICS detection products can detect cyber attacks on ICS and connected safety systems. Unfortunately, the stated approach to avoid competitive analysis minimizes the value of the effort to asset owners and the ICS security community.
The Attack Scenario (Adversary Emulation)
The attack scenario emulated the actions of the Triton attacker in a Rockwell Automation / Allen Bradley environment, as opposed to the Schneider Electric system in the actual attack. It extended the scenario by reasonably assuming the end goal of the Triton attacker would be to cause an incident on the control system after the integrity of the safety system was compromised.
Wouldn’t all of the detection solutions be able to easily detect this attack? I brought this issue up with Otis Alexander of MITRE in an Unsolicited Response episode in 2020.
How difficult is it? … I wouldn’t think that would really accomplish much to say “is someone trying to change the way … or put a remote access trojan on an Allen-Bradley controller?”. I wouldn’t think that would be terribly difficult to detect. I would expect they do that today, unless you pick some really strange unknown one they don’t support. That’s one of the first things these products do.
Rockwell Automation products and the CIP protocol are among the best and most widely supported in the ICS detection space, and have been for almost five years now. On top of this, the Triton-inspired scenario had two easily detected aspects (a minimal sketch of detecting the second follows the list):
- The need to modify the safety controller and the process controller.
- The need to establish new communication from outside the OT firewall to one or more cyber assets inside the OT firewall.
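The second aspect is the bread and butter of passive ICS detection: learn which conversations are normal, then alert on anything new. As a rough illustration, here is a minimal sketch of that baseline check. The flow tuples, addresses, and baseline format are my own illustration, not how any evaluated product actually stores its model.

```python
# Minimal sketch: flag OT conversations absent from a learned baseline.
# Real products learn the baseline from span-port traffic and enrich
# alerts with asset and protocol context; this is the bare idea.

from typing import NamedTuple

class Flow(NamedTuple):
    src: str
    dst: str
    dport: int

# Conversations observed during a learning period (hypothetical values).
baseline = {
    Flow("10.1.1.20", "10.1.2.5", 44818),  # EWS -> process PLC, EtherNet/IP
    Flow("10.1.1.20", "10.1.2.6", 44818),  # EWS -> safety controller
}

def check(flow: Flow) -> None:
    if flow not in baseline:
        print(f"ALERT: new conversation {flow.src} -> {flow.dst}:{flow.dport}")

# New communication from outside the OT firewall trips the alert.
check(Flow("192.168.50.9", "10.1.2.6", 44818))
```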
Every ICS detection product I’m aware of would detect this scenario and rate it a high severity, prompting an investigation. This is realistic, though. If the Triton target had been doing basic monitoring / detection, they would have identified the attack. Therefore it was unsurprising that all five solutions detected the key elements of the attack.
There were 25 steps and 100 substeps that were defined by MITRE and scored for each vendor. The rigor in the scenario development and scoring is admirable. One nit I would pick is that many of the steps that interacted with the safety and ICS controllers were launched by custom attacker code, when they could just as easily have used existing Rockwell Automation software capabilities with a lower chance of being detected.
Surprisingly, the easiest way to understand the detailed attack steps is through a Dragos white paper rather than the MITRE report. Similar information is available, in a less narrative form and requiring clicking through 12 Phases, on the MITRE site’s Operational Flow page.
The Reports
The Triton 2021 Evaluation page and individual vendor reports made available online were the most disappointing part of the MITRE Evaluations. They could be of some value to the participating vendors to identify potential product enhancements, but they are of little value to an asset owner trying to compare the products.
To be fair, MITRE was straightforward from the start that they did not view this as a competition and would not provide any comparisons between the vendors. Having experienced the hesitancy of vendors to risk a bad review, I understand MITRE’s decision. It is, however, a regrettable decision for asset owners, who would have benefited greatly from analysis and explanations of the pros and cons of the participating vendors’ solutions.
The vendor reports themselves are very hard to compare beyond the top-level statistics; see the table below. There is almost no text. Each report has drill-down detail to support the scoring for each Tactic, Technique, and Substep. So again, a vendor can drill down and look for areas of improvement, but it is of little value to anyone else. After this drill-down detail there are three charts that graphically show some of the same detail.
As you can see from the table, the four participants with a commercial product all had visibility, meaning analytic or telemetry coverage of a substep, of 90% or more. There is detail that would allow a painstaking comparison of each substep for each participant. For example:
Step 25.G.2, Tactic: Impair Process Control (TA0106), Technique: Unauthorized Command Message (T0855)
Criteria: Evidence of a privileged write or force point action being used to overwrite polled tag values on the control PLC when the adversary initiated the CIP service 0x51 within the class 0x6A. The tags associated with the Ignitor (3XY2070) and Flame Sensor (3HS2070) were the target of these actions.
Screenshots then provide the evidence of the coverage, if any.
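For readers who don’t work in CIP byte layouts, here is a simplified sketch of the check that criteria describes. It parses only a bare CIP request, ignoring the EtherNet/IP encapsulation and connected messaging that wrap it in real traffic, and the sample bytes are my own illustration, not taken from the evaluation.

```python
# Simplified sketch: flag a CIP request using service 0x51 against class 0x6A.
# Only 8-bit logical segments (0x20 class, 0x24 instance) are handled here.

SERVICE_OF_INTEREST = 0x51
CLASS_OF_INTEREST = 0x6A

def is_suspect_cip_request(cip: bytes) -> bool:
    service = cip[0]
    path_words = cip[1]                     # request path size in 16-bit words
    path = cip[2:2 + 2 * path_words]
    # An 8-bit logical class segment is 0x20 followed by the class ID.
    class_ids = {path[i + 1] for i in range(0, len(path) - 1, 2) if path[i] == 0x20}
    return service == SERVICE_OF_INTEREST and CLASS_OF_INTEREST in class_ids

# Hypothetical request: service 0x51, path = class 0x6A, instance 0x01.
sample = bytes([0x51, 0x02, 0x20, 0x6A, 0x24, 0x01])
print(is_suspect_cip_request(sample))       # True -> raise an alert
```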
There were a small number of clear conclusions, beyond the fact that all five products would have detected this attack at every step of the kill chain.
- Having access to and considering the Windows logs provides a lot of useful detection information. Claroty only looked at the network traffic and was scored on 50 substeps rather than the 100 substeps the other four vendors were scored on. Claroty will likely address this deficiency with their new Claroty Edge product.
- The scenario didn’t envision any of the solutions using OT firewall logs, which would have been another great data source for detecting activity in the early phases of the attack. Endpoint detection logs and other data sources with higher fidelity than monitoring a span port were not part of the evaluation. This is in line with the industry trend to leapfrog over simple detection to a more complex and time-intensive detection solution.
- None of the products actively queried the PLC. There were only two substeps related to this, and they were excluded from all reporting. It was unclear what MITRE hoped to see a detection solution do in these substeps. Having access to PLC / controller information can be very helpful to engineers trying to determine the goal of an attack (a sketch of what an active query could look like follows this list).
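To make the active-query point concrete, here is a minimal sketch using the open-source pycomm3 library to read the two tags named in the Step 25 criteria. The controller IP address is hypothetical, and nothing here reflects how the evaluated products are built; it only shows that pulling controller state on demand is a small amount of code.

```python
# Minimal sketch: actively read controller tags during an investigation,
# using the open-source pycomm3 library for Logix-family controllers.
# The IP address is hypothetical; the tag names come from the Step 25 criteria.

from pycomm3 import LogixDriver

TAGS = ["3XY2070", "3HS2070"]  # Ignitor and Flame Sensor tags

with LogixDriver("10.1.2.5") as plc:
    for result in plc.read(*TAGS):
        if result.error:
            print(f"{result.tag}: read failed ({result.error})")
        else:
            print(f"{result.tag} = {result.value}")
```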
Conclusions
The MITRE Engenuity team successfully delivered on what they planned and promised. Credit is deserved for having a plan and goal, and achieving it, even if the new information provided from this Triton evaluation was minimal.
It reminds me a bit of Project Basecamp back in 2012. We knew the PLCs would be easily compromised, and as Reid Wightman famously said, “it was a bloodbath”. Even with the Project Basecamp team’s high confidence in the results before beginning, the project still had value as a public, dramatic, undeniable demonstration of the widespread PLC insecurity issue and its potential consequences.
The Triton ATT&CK Evaluation is similar. It was almost certain the detection solutions would detect the attack in significant and helpful detail. MITRE’s efforts proved and documented this well. Hopefully this removes any doubt that this product category will detect the TTPs of ICS cyber attacks that have succeeded, if the appropriate talent is using the products as designed.
We did not, however, repeat Project Basecamp year after year, and the ICS ATT&CK Evaluations would be of significantly less value in subsequent rounds unless something changes. Detection products will detect known attack tactics and techniques. Water is wet.
Two changes to consider if there is another round of ATT&CK Evaluations for ICS:
- An attack scenario that is not given in advance to the vendors and is believed to evade at least some of the detection solutions. A real test.
- A competitive analysis of the solutions.