Last week at PCSF there were a few issues that seemed to work their way into every presentation and discussion. Both vendors and asset owners appear to be looking hard for the government or some other entity to score vulnerabilities with some sort of risk equation, but so far no one has really stepped up to the task, and there's a good reason for that: any measurement or score that comes from outside the affected organization is going to be less than perfect, and often much less than perfect.
Let's take a look at a scoring system that's used extensively: CVSS. The base score is built from a few seemingly simple variables: the access vector, the access complexity, the level of authentication needed, and the confidentiality, integrity, and availability impacts. The problem is that so much of the information used to generate these scores is bound to the common case. That may be accurate for 9 out of the 10 asset owners using it, but the 10th is working from incorrect starting information, which can lead to false conclusions that negatively impact the business: patching when it's unnecessary, not patching when a clear threat is present, or any number of other things. And let's not forget that these base metrics are in a state of flux and often rest on incomplete information. A vulnerability that is unexploitable one day may be made trivially exploitable the next as new information or a better understanding of the technology emerges; a score is valid for exactly the moment it is calculated and grows stale quickly.
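To make that concrete, here's a rough sketch of the CVSS v2 base equation in Python. The metric weights come from the published spec, but the example vulnerability is hypothetical. Notice how a single assumption about the environment, whether the flaw is reachable over the network, swings the final number:

```python
# Sketch of the CVSS v2 base equation. Metric weights are from the
# published spec; the point isn't the arithmetic, it's that every
# input encodes an assumption about the "common case" deployment.

ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}
IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def base_score(av, ac, au, c, i, a):
    # Impact combines the three CIA sub-scores.
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# Scored for the common case: remotely reachable, no authentication.
print(base_score("network", "low", "none", "complete", "complete", "complete"))  # 10.0

# The same flaw at a plant where the device is only reachable locally.
print(base_score("local", "low", "none", "complete", "complete", "complete"))    # 7.2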
The feeling I got from a lot of the crowd at PCSF is that they're looking for someone (the government?) to do the scoring for them, and that's just asking for trouble. For these metrics to be of any real value they're going to have to be analyzed, tweaked, and recalculated by the asset owners anyway, so let's not put a middleman in place to begin with.
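And the recalculation itself isn't the hard part. Here's a similar sketch of the CVSS v2 temporal equation, the piece an asset owner would re-run as exploit information changes; again the weights are from the spec and the scenario is made up:

```python
# Sketch of the CVSS v2 temporal equation, which adjusts the base
# score as exploit code, patches, and report confidence evolve.

EXPLOITABILITY = {"unproven": 0.85, "proof-of-concept": 0.90,
                  "functional": 0.95, "high": 1.0}
REMEDIATION = {"official-fix": 0.87, "temporary-fix": 0.90,
               "workaround": 0.95, "unavailable": 1.0}
CONFIDENCE = {"unconfirmed": 0.90, "uncorroborated": 0.95, "confirmed": 1.0}

def temporal_score(base, e, rl, rc):
    return round(base * EXPLOITABILITY[e] * REMEDIATION[rl] * CONFIDENCE[rc], 1)

# Day one: an unconfirmed report, no exploit, no patch.
print(temporal_score(10.0, "unproven", "unavailable", "unconfirmed"))  # 7.7

# A week later: a working exploit is public and still no patch.
print(temporal_score(10.0, "functional", "unavailable", "confirmed"))  # 9.5
```

The equations are trivial; the value is in feeding them inputs that actually describe your environment, and no outside party can do that for you.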