Oftentimes those involved in operating critical infrastructure are given a false sense of security when looking over the daily stream of vulnerability disclosures and patch information, as these feeds and lists seldom seem to contain anything specific to their systems.  But there is a lot of code dwelling on the purpose-built servers and embedded systems in that infrastructure that is often ignored, and that may be leading to incorrect approaches, or at least large blind spots, in risk and mitigation plans.

Many vendors will from time to time include software from external sources, some fully integrating it from source and others simply bundling a package alongside their own, and that’s not a bad thing at all.  It can significantly speed up development by giving the developers a mature base to build upon, and it also provides a community to rely on for solving problems that may be encountered.  But these benefits aren’t without costs that far too many just ignore. As the source tree within the company grows, so does the external one, each usually making significant changes, and decisions have to be made: how often should changes be merged? Should internally developed code and security patches be released back? How should security patches released by the external project be handled?

Often these questions aren’t addressed at all. Fixes are cherry-picked along with feature additions when merges occur, or the developers assume that their code is too different for the patch to be usable (it usually isn’t).  These problems tend to show up in the parts of the system farthest from the core functionality it was designed for: the FTP or web server in a field device, or a standard protocol that is “supported” alongside a proprietary one but given far fewer resources during development.  Information on what external sources are used should be available from the vendor, but if it isn’t (or just to double-check), you can find out for yourself.

Most of the time the versioning information for common services is readily available if you know how to find it, and one of the easiest ways to get it is a simple banner-grabbing assessment with something like netcat.  For most protocols, simply initiating a connection to the application’s port will give you all the information you need:

$ nc -v -n 10.0.0.42 22
(UNKNOWN) [10.0.0.42] 22 (ssh) open
SSH-2.0-OpenSSH_5.1p1 Debian
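
Other services that announce themselves on connect work the same way; an FTP server on a field device, for instance, will usually greet you with its name and version as soon as the session opens.  The address and banner below are hypothetical, but the technique is identical:

$ nc -v -n 10.0.0.42 21
(UNKNOWN) [10.0.0.42] 21 (ftp) open
220 ProFTPD 1.3.1 Server ready.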

But for others, you might have to know a bit about the protocol and send a few bytes across the wire to get a response, for example, HTTP:

$ echo -e "HEAD\r\n" | nc -v -n 10.0.0.42 80
(UNKNOWN) [10.0.0.42] 80 (www) open
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
<hr>
<address>Apache/2.2.9 PHP/5.2.6-2ubuntu4.2 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at www.example.com Port 80</address>
</body></html>
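
The trick above relies on the server including its signature in the error page; if it doesn’t, a well-formed request will usually hand back the same information in the Server header.  The response below is illustrative rather than captured from a real host:

$ printf 'HEAD / HTTP/1.0\r\n\r\n' | nc -v -n 10.0.0.42 80
(UNKNOWN) [10.0.0.42] 80 (www) open
HTTP/1.1 200 OK
Server: Apache/2.2.9 PHP/5.2.6-2ubuntu4.2 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g
Connection: close
Content-Type: text/html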

The results can then be compared against a vulnerability database like OSVDB, though embedded systems often aren’t very good about changing version numbers even after a vulnerability has been patched, so these findings should be verified with any available proof-of-concept exploits or crashes if possible.
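
Doing this one host and one port at a time gets tedious beyond a handful of devices, so the same banner collection can be automated with a version-detection scan before the results are compared against a database.  A rough sketch using nmap, assuming active scanning is acceptable on that segment and using an example address range:

$ nmap -sV -p 21,22,23,80,443 10.0.0.0/24

nmap will print the service and version string it identified for each open port, which can then be checked against the database just like the banners grabbed by hand above.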

And don’t think it’s only smaller or niche vendors who have these problems; the larger and more complex a system gets, the more likely it is to include external sources to achieve its functionality. Apple is currently dealing with the kind of problems this can lead to when it isn’t approached properly, and its end users are left with either reduced functionality or increased risk.