Code signing is a security feature that has been around for quite some time and has proven itself in plenty of other areas, but it is uncommon in control system components and very rare in control devices where firmware uploading is an important feature. Without a doubt the technology is useful, and it provides a high level of assurance that the code running on a device is the code you want running on it. Lately, though, I've been in too many conversations where code signing is treated as a panacea for any and all security issues we may ever face, and where many of the people securing, administering, or pontificating about control systems don't have a real understanding of the technology even as they praise or denigrate it.
So what does code signing give you? It usually boils down to some sort of digital signature mechanism that lets the system (or the user) know that a sheep is actually a sheep and not just a wolf with wool glued to it. In practice this means a cryptographic key of some sort, and the device that the firmware is being uploaded to won't accept the firmware unless it's signed with a valid key (this check should also happen at runtime, not just at upload, but we'll keep it simple). Having a mechanism like this in place is a great step in the right direction, but it has to use real cryptography. That means something one of those PhD types, who probably has an algorithm with his last name on it, came up with and that has been peer reviewed for decades, not an in-house routine that divides the total firmware size by 10, runs a CRC-16 on each of those offsets, and makes sure it equals the value in the header of the file.
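To make the "real cryptography" part concrete, here is a minimal sketch of what that acceptance check might look like, assuming a detached Ed25519 signature and Python's cryptography library purely as illustrative choices; real devices do this in bootloader code against a key provisioned into the hardware, and the function and variable names here are hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def firmware_is_trusted(image: bytes, signature: bytes, vendor_pubkey: bytes) -> bool:
    """Accept a firmware image only if its detached signature verifies
    against the vendor's public key (Ed25519 here, just as an example)."""
    public_key = ed25519.Ed25519PublicKey.from_public_bytes(vendor_pubkey)
    try:
        public_key.verify(signature, image)  # raises if the signature doesn't match
        return True
    except InvalidSignature:
        return False
```

The point isn't the particular algorithm or library; it's that forging a valid signature requires the vendor's private key, while matching a homebrew CRC-16 scheme only requires reading the file format documentation, or ten minutes with a hex editor.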
With signing in place, the integrity of the code at the time it starts is reasonably well assured. That is definitely a good thing, but more than a few voices in our little corner of the security community seem to think it means that vulnerabilities in that code aren't exploitable. That is not the case; once the code is loaded and running, the checks are finished until the next time it boots. Arbitrary code can still be executed on the device if a vulnerability is successfully exploited; the exploit code just has to do some serious work to persist before the next reboot or it's going to be homeless. Code signing makes one set of attacks significantly more difficult and makes permanent changes tougher, but it's not a cure-all.
So with that out of the way, are we going to see a time in the future where code signing of firmware is common? I've had SCADA admins tell me that they'll never use a device that requires it, citing issues with key management, verification, what happens if they want to keep the devices in place after the company providing firmware stops supporting them, and so on. These are valid concerns, but ones that could be overcome with proper planning and forethought. By far the more common excuse, though, is the same one we hear for why HMI stations don't have passwords, usually something along the lines of "What if someone needed to log in immediately and they forgot their password?" But honestly, if your system is in a configuration where having to push firmware to a device immediately, without any sort of safeguards, is a real possibility, it's probably time to go through some of those emergency scenarios and get some mitigating controls in place that would keep code signing from being a deal breaker.