Threatpost and a handful of other news outlets are reporting on a worm actively exploiting the Shellshock bug against unpatched NASes. As an aside, I find it a bit strange that the attackers are only performing clickjacking attacks; a much more obvious attack would be to deploy CryptoLocker or other ransomware, since the current worm is targeting storage devices.
The question becomes, whose job is it to find and patch these kinds of bugs?
I hate to always say ‘the vendors,’ although that is my default response. Vendors, however, often don’t have the personnel to review the code they write themselves, let alone to review external code. Third-party components are usually open source and often volunteer-driven.
I feel that a group of vendors would be well served to band together and fund code reviews of commonly used components. Those vendors could then share those findings with the public or, at their discretion, keep the findings internal to their group for proper patching. ‘Collaboratition’ is a term often used in national labs for this kind of information-sharing: not ideal financially, but oftentimes it is the right thing to do (or the only way to get it to work).
Lightweight web servers seem like a good candidate for review, since so many embedded systems make use of them. We came up with our list of candidate servers based on devices in our lab, then searched for fingerprintable servers on Shodan to get a feel for their popularity overall. Results are rounded to the nearest 10,000 to help anonymize the actual software that we’re looking at:
Server A: 150,000
Server B: 100,000
Server C: 80,000
Server D: 60,000
Server E: 20,000
We then did a cursory code review of every server whose code we could find. This review was just a basic ‘grep’ analysis, looking for unsafe uses of C functions: blind strcpy() calls, strncpy() calls that use user-supplied lengths, malloc() calls that never check for success, and sprintf() calls that never check input lengths. Our ‘code quality’ rating is a generalization based on how much of a headache we got looking at the code: the bigger the headache, the more unmaintainable the codebase and the more it will cost to fix.
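To make those grep targets concrete, here is a minimal sketch of the kind of pattern we flag, alongside a bounded alternative. The function and buffer names are hypothetical and not taken from any of the servers reviewed:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The grep-able anti-pattern: a fixed stack buffer fed by a blind strcpy().
 * Any header value longer than 63 bytes smashes the stack. */
void handle_header_unsafe(const char *user_value) {
    char buf[64];
    strcpy(buf, user_value);              /* no length check at all */
    printf("%s\n", buf);
}

/* A bounded rewrite: snprintf() always NUL-terminates its output and
 * reports truncation, so the caller can reject oversized input. */
int handle_header_safer(const char *user_value, char *out, size_t out_len) {
    if (out_len == 0)
        return -1;
    int n = snprintf(out, out_len, "%s", user_value);
    return (n < 0 || (size_t)n >= out_len) ? -1 : 0;
}
```

The point of the safer version is that truncation becomes a visible error the caller must handle, rather than a silent overwrite of whatever sits past the buffer.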
Server A: Sadly, no code available for download without an NDA.
Server B: Decent code quality, some null pointer dereferences; may be exploitable on certain architectures. Good candidate for use on your next project, but I’d still do a thorough review.
Server C: So-so code quality, with a few sketchy memory management practices (chiefly integer overflows in allocation sizes: adding user-supplied lengths to a fixed size prior to allocating). We could probably develop exploits for many of the bugs we see, though the obvious ones would take a lot of data to trigger; some won’t be exploitable on low-memory systems.
Server D: So-so code quality, a few null pointer dereferences; may be exploitable on certain architectures. Interestingly, the authors are beginning to apply static code analysis tools, but I’m a bit surprised at some of the bugs that still live inside.
Server E: Reading this code made my head hurt a bit. There is a lot of preincrement/postincrement loop craziness involving pointer indexes. Bounds-checking is scattered throughout, and functions tend to be massive, with dozens of nested if-blocks, making it difficult to tell where buffers could be blatted over and where they are handled properly.
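The integer-overflow-before-allocation pattern mentioned for Server C can be sketched roughly like this; the function names and the 16-byte header size are hypothetical, not from the actual codebase:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define HEADER_LEN 16  /* hypothetical fixed header size */

/* The risky pattern: user_len + HEADER_LEN can wrap around to a tiny
 * value, so malloc() happily returns a short buffer and the later copy
 * of user_len bytes overflows it. */
void *alloc_record_unsafe(size_t user_len) {
    return malloc(user_len + HEADER_LEN);   /* wraps if user_len > SIZE_MAX - 16 */
}

/* Overflow-checked version: reject any length that would wrap. */
void *alloc_record_safer(size_t user_len) {
    if (user_len > SIZE_MAX - HEADER_LEN)
        return NULL;                        /* addition would overflow */
    return malloc(user_len + HEADER_LEN);
}
```

The check costs one comparison per allocation; the unchecked version hands an attacker who controls user_len a classic heap overflow.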
Obviously take this ‘review’ with a healthy dose of salt. We spent just 5-10 minutes on each source tree, and writing a webserver from scratch in C is a really hard thing to do even on the tenth try. If your product has writable memory mapped to address 0, though, you’re going to have a rough time with a few of these servers: there is a lot of blind ‘buf = malloc(size); memcpy(buf, src, size);’ in these servers.
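Here is a hedged illustration of that blind allocate-and-copy pattern and the one-line check that defuses it; the function names are ours, not from any reviewed server:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* The blind pattern: on a low-memory device malloc() can return NULL,
 * and on a target where page 0 is mapped writable, the memcpy() becomes
 * an attacker-fed write to a fixed low address. */
char *copy_body_unsafe(const char *src, size_t size) {
    char *buf = malloc(size);
    memcpy(buf, src, size);   /* corrupts memory or crashes if buf == NULL */
    return buf;
}

/* Checked version: fail the request instead of dereferencing NULL. */
char *copy_body_safer(const char *src, size_t size) {
    char *buf = malloc(size);
    if (buf == NULL)
        return NULL;          /* caller must handle allocation failure */
    memcpy(buf, src, size);
    return buf;
}
```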
Most of these servers are plain old open source projects without commercial backing (only two have any at all), but none of the vendors using them appear to pay much attention to them, security-wise. Some of the products in our lab with very recent firmware updates are running versions of these software packages that are more than 10 years old, even though much newer versions are available and their changelogs tout security fixes in recent years.
So, will anyone step up to the plate on code review and patch submission? It’s something that we are considering, although funding such a project is not cheap.
I wonder if Kickstarter or something similar would be a good funding source for code reviews of such open-source projects: for example, awarding sponsor logos in reports and access to reports as rewards for project backers, or even providing named forks/project releases for big sponsors. I don’t think any of these would be bad ideas…
image by Yusuke Kawasaki