Based on reviews from early adopters, the Bandolier security audit files exceeded many expectations in 2008, including my own. We have received some very encouraging feedback from vendors, asset owners, consultants, and even our own assessment teams.
With each new Bandolier release, though, we have a challenge: how do we appropriately communicate the effectiveness of each security audit file? We've been careful not to oversell the project as a whole. Bandolier is not a control system security panacea; the audit files cover only servers and workstations; and they audit the best possible security configuration for a server given the realities of that server's security features and architecture, rather than comparing it to an abstract best practice. But if you spend some time with the audit files, you'll quickly learn that some are simply better than others. So much so that we are considering a mechanism for grading the files.
There are still some details to work out, but we think an asset owner would appreciate knowing whether the file they are using is at an 'A+' or a 'C-' level. A 'C-' Bandolier security audit file still has value because it will likely audit hundreds of security settings. However, we are less confident that it covers anything close to the complete list of OS and application security settings than we would be with an 'A+' file.
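The mechanism is still being designed, but to make the idea concrete, here is a purely hypothetical sketch in Python. The coverage measure and letter cutoffs are illustrative assumptions, not the actual grading scheme.

```python
# Purely hypothetical sketch of grading an audit file by coverage.
# The inputs and cutoffs are illustrative assumptions, not the real scheme.

def grade_audit_file(settings_audited: int, settings_identified: int) -> str:
    """Map the share of identified OS/application security settings
    that the audit file actually checks to a letter grade."""
    coverage = settings_audited / settings_identified
    if coverage >= 0.95:
        return "A+"
    if coverage >= 0.85:
        return "A"
    if coverage >= 0.70:
        return "B"
    if coverage >= 0.55:
        return "C"
    return "C-"

# Example: a file that checks 300 of 450 identified settings still
# audits hundreds of settings, but only earns a 'C' under these cutoffs.
print(grade_audit_file(300, 450))
```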
Why are some audit files more effective than others? There are several reasons, but the most important factor is vendor commitment. The vendors that have embraced the project, made their top security resources available, and actively participated have the best sets of audit files. They realize there is benefit for them, not just from a PR perspective, but also as a real value-add for their customers and even for internal use during system builds and acceptance testing. It shouldn't be a surprise that these same vendors are the ones who take security seriously and have already done things to secure their applications: hardening the underlying OS or providing guidance on how to do so, identifying the minimal set of ports and services required to operate their system, and implementing measurable security features at the application level.
Vendors help with the process in a number of ways, including identifying security parameters, collaborating on the optimal setting, and then testing that setting. The last step is very important. There are cases where we know the default, and typical, setting is not optimal, but if we cannot test what is believed to be the optimal security setting, we cannot include it in the Bandolier security audit file. A good example of this is file and directory permissions. These can be set very granularly to support role-based access control, yet it is not unusual to see directories left at Everyone/Full Control; until the tighter permissions have been tested against the application, we cannot audit for them.
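Bandolier expresses these checks in the Nessus audit file format, but to illustrate the underlying test, here is a rough Python sketch using the pywin32 bindings; the directory path is hypothetical.

```python
# Sketch of the kind of permission check a Bandolier audit encodes:
# does a directory's DACL grant Everyone Full Control? Requires pywin32.
import win32security
import ntsecuritycon as con

def everyone_has_full_control(path: str) -> bool:
    """Return True if the DACL on `path` grants Everyone Full Control."""
    sd = win32security.GetFileSecurity(
        path, win32security.DACL_SECURITY_INFORMATION)
    dacl = sd.GetSecurityDescriptorDacl()
    if dacl is None:  # a null DACL grants everyone full access
        return True
    everyone_sid = win32security.LookupAccountName(None, "Everyone")[0]
    for i in range(dacl.GetAceCount()):
        (ace_type, _flags), mask, sid = dacl.GetAce(i)
        if (ace_type == con.ACCESS_ALLOWED_ACE_TYPE
                and sid == everyone_sid
                and mask & con.FILE_ALL_ACCESS == con.FILE_ALL_ACCESS):
            return True
    return False

# Hypothetical application directory: flag it if it is wide open.
if everyone_has_full_control(r"C:\SCADA\AppServer\data"):
    print("FAIL: Everyone/Full Control on the application data directory")
```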
There are other factors in audit file effectiveness, particularly at the application level. Some architectures simply lend themselves to auditing better than others. Some have settings locked away in a database or in configuration files that are difficult to parse. We are actively working on ways to extract that information into something meaningful and measurable, even if it means adding an extra step to the audit process. The audit file for OSIsoft PI (due for release soon) is a great example: it reads in information extracted using piconfig.exe and other PI executables. The result is a very thorough Bandolier audit that probably has a chance at a 4.0 GPA.
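To make that extra step concrete, here is a rough sketch of how such an extraction could be scripted in Python. The piconfig.exe path, the pitrust query, and the output location are illustrative assumptions, not the exact commands our audit uses.

```python
# Sketch of the extra extraction step: run piconfig.exe with a command
# script and save the output where the Nessus audit can read it.
import subprocess

# Illustrative query against the PI trust table, a common focus of
# PI server security reviews; the script the actual audit uses differs.
PICONFIG_SCRIPT = """\
@table pitrust
@ostr Trust,IPAddr,NetMask,PIUser
@select Trust=*
@ends
@exit
"""

def extract_pi_config(output_path: str) -> None:
    """Run piconfig.exe, feed it the command script, capture the dump."""
    result = subprocess.run(
        [r"C:\Program Files\PI\adm\piconfig.exe"],  # assumed install path
        input=PICONFIG_SCRIPT,
        capture_output=True,
        text=True,
        check=True,
    )
    with open(output_path, "w") as f:
        f.write(result.stdout)

extract_pi_config(r"C:\temp\pi_trust_dump.txt")
```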
2009 holds even more exciting developments for Bandolier as we release new security audit files, improve on the old ones, and expand the program to additional vendors. Rest assured we will do our best to let you know what’s happening. We may even send home a report card!