There was an interesting discussion at the CERT-sponsored Vulnerability Disclosure Workshop in DC this week on what is the “best way from an ROI measure” to fuzz test. It led to some tweets back and forth between Digital Bond alumnus Matt Franz and myself. First, some background:

Fuzz testing is used by vendors, I hope, to look for common coding errors that can lead to vulnerabilities, such as buffer overflows. Consultants, researchers and hackers of all hat colors use fuzzing to look for exploitable vulnerabilities. Steve Lipner of Microsoft, co-author of the Security Development Lifecycle [SDL], said in his 2008 S4 keynote that fuzz testing and threat modeling proved to be the most effective ways to reduce exploitable vulnerabilities. Asset owners should be asking their vendors in RFPs and user group meetings to explain their SDL and ensure fuzz testing is part of it.

We have two security vendors trying to sell products to the control system market: Wurldtech with their Achilles platform and Mu Dynamics with their Mu Test Suite. [FD: Wurldtech is a past Digital Bond client and advertiser] One of the features of these products is that they both send a large number of malformed packets at an interface, typically crashing protocol stacks that have ignored negative testing.
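To make “sending malformed packets at an interface” concrete, here is a minimal sketch of the dumb end of that spectrum: take a known-good packet, corrupt a few random bytes, fire it at the device, and watch for the connection to hang or reset. The target address, port and seed packet below are placeholders I made up, and this is not a claim about how either Achilles or the Mu Test Suite is actually implemented.

```python
import random
import socket

# Hypothetical target and seed packet -- placeholders, not a real device.
TARGET = ("192.0.2.10", 20000)
SEED_PACKET = bytes.fromhex("0001000000060101000a0001")  # a "known good" request

def mutate(packet: bytes) -> bytes:
    """Flip a few random bytes to produce a malformed variant."""
    data = bytearray(packet)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(iterations: int = 1000) -> None:
    for i in range(iterations):
        payload = mutate(SEED_PACKET)
        try:
            with socket.create_connection(TARGET, timeout=2) as sock:
                sock.sendall(payload)
                sock.recv(1024)  # a hang or reset here hints the stack fell over
        except OSError as exc:
            print(f"iteration {i}: target unresponsive ({exc}) after {payload.hex()}")
            break

if __name__ == "__main__":
    fuzz()
```

A real test platform obviously does far more than this, monitoring the device's health out of band and replaying the exact packet sequence that caused a failure, but the core loop is the same.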

While this greatly simplifies the issue, the two vendors have taken different approaches to creating those negative packets. Wurldtech touts the use of a structured grammar to create the malformed packets, while Mu takes more of an expert-systems approach where their security engineers determine what would be the most effective malformed packets to send. There is certainly some overlap between the two approaches, but the question has always been which is more effective at identifying protocol stack errors.
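The distinction is easier to see in code. The sketch below uses a made-up three-field protocol, so it is purely illustrative and not either vendor's actual technique: the grammar-based generator builds a packet that is structurally valid except for the one field it deliberately violates, while the knowledge-free alternative simply corrupts random bytes of a seed packet.

```python
import random
import struct

# A made-up protocol grammar: (field name, struct format, known-good value).
GRAMMAR = [
    ("transaction_id", "H", 1),   # 16-bit unsigned
    ("length",         "H", 4),   # should describe the payload length
    ("function_code",  "B", 3),   # one-byte opcode
]
PAYLOAD = b"\x00" * 3

def generate_malformed(field_to_break: str = "") -> bytes:
    """Grammar-based: build a valid packet, then violate one field on purpose."""
    parts = []
    for name, fmt, good_value in GRAMMAR:
        value = good_value
        if name == field_to_break:
            # Largest value the field can hold, e.g. a length that wildly
            # overstates the payload, a classic way to shake out overflows.
            value = (1 << (8 * struct.calcsize(fmt))) - 1
        parts.append(struct.pack(">" + fmt, value))
    return b"".join(parts) + PAYLOAD

def mutate_blindly(seed: bytes) -> bytes:
    """No protocol knowledge at all: just corrupt a random byte."""
    data = bytearray(seed)
    data[random.randrange(len(data))] ^= 0xFF
    return bytes(data)

if __name__ == "__main__":
    print("grammar-based :", generate_malformed("length").hex())
    print("blind mutation:", mutate_blindly(generate_malformed()).hex())
```

The grammar approach requires knowing which fields exist and which invalid values are interesting, i.e. the up-front modeling effort; the mutation approach requires little more than a seed packet.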

So with that as background, Matt’s tweet “At CERT vuln discovery workshop. Interesting MSFT says grammar based fuzzing has lower ROI than dumb fuzzing” caught my attention. Matt was kind enough to expand on his summary in an email:

The consensus of the talks was that you can’t rely on a single tool or technique but the ROI was higher for dumb “mutation based” fuzzers and white box approaches like SAGE than the time and effort to develop grammar based approaches, model the target, etc.

The direct comment was they still used “smart fuzzers” for highly critical code (Office, IE), but that it wasn’t practical for other platforms like Exchange due to the way it would hold up development and release cycles. Even/especially in MSFT and CSCO, devtest resources are precious and finite. Relatively poorly skilled devtesters were able to achieve good enough results.

So if you are a vendor, or even an asset owner, starting from scratch, you will see a low ROI on developing a grammar-based fuzzer. But what if the grammar-based solution already exists, as in the case of Achilles, and you can buy it? That makes the ROI decision more interesting, because you could compare Achilles and the Mu Test Suite head to head and take into account any cost differences. So the question still remains open if you are looking to purchase a control system fuzzer.

In a future blog post we will have Daniel cover Microsoft’s fuzzing efforts in the form of their SAGE tool, which does “white box fuzzing” using symbolic execution and negated path constraints. SAGE is still an internal Microsoft tool, but the approach is public.
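Daniel will go into detail, but the core idea is simple enough to sketch. The toy below is my own illustration, not SAGE itself: it uses a hypothetical two-byte parser, the open-source z3 solver in place of Microsoft’s internal solver, and hand-written symbolic tracing where SAGE instruments real x86 execution traces. Run a seed input, record the branch conditions along its path, then negate them one at a time and ask the solver for inputs that exercise the other paths.

```python
from z3 import BitVec, Not, Solver, UGT, sat

def program_under_test(data: bytes) -> str:
    """Hypothetical parser with a bug hidden behind two nested checks."""
    if data[0] == 0x7F:
        if data[1] > 0xF0:
            return "crash"
        return "valid header"
    return "rejected"

# Symbolic stand-ins for the two input bytes.
b0, b1 = BitVec("b0", 8), BitVec("b1", 8)

def trace_constraints(data: bytes):
    """Hand-written symbolic trace of program_under_test for a concrete input.
    A real whitebox fuzzer derives this automatically from an instrumented run."""
    path = []
    if data[0] == 0x7F:
        path.append(b0 == 0x7F)
        path.append(UGT(b1, 0xF0) if data[1] > 0xF0 else Not(UGT(b1, 0xF0)))
    else:
        path.append(Not(b0 == 0x7F))
    return path

def negate_and_solve(path, index):
    """Keep the path prefix, flip the branch at `index`, ask z3 for a new input."""
    s = Solver()
    for condition in path[:index]:
        s.add(condition)
    s.add(Not(path[index]))
    if s.check() != sat:
        return None
    m = s.model()
    value = lambda sym: m[sym].as_long() if m[sym] is not None else 0
    return bytes([value(b0), value(b1)])

if __name__ == "__main__":
    seed = b"\x00\x00"
    frontier, seen = [seed], {seed}
    while frontier:
        data = frontier.pop()
        print(data.hex(), "->", program_under_test(data))
        path = trace_constraints(data)
        for i in range(len(path)):
            new_input = negate_and_solve(path, i)
            if new_input is not None and new_input not in seen:
                seen.add(new_input)
                frontier.append(new_input)
```

Starting from an all-zeroes seed, the loop first flips the outer check to reach the “valid header” path and then flips the inner check to generate the crashing input, without anyone writing a protocol grammar or hand-picking malformed values.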