Handling Detected Vulnerabilities
Bugs that have known security implications are handled slightly differently. No one likes running software with security holes, so bugs with security implications receive a higher priority from developers (unless a management team is telling them that new features are more important than fixing bugs). The significant issue in this case is disclosure.
In idealistic circles, there is a view that disclosure of vulnerabilities should be full and immediate. The problem with this approach is that it gives attackers everything they need to craft an exploit—and puts all of the program’s users at risk. For closed-source software, avoiding disclosure is the norm. For example, no one outside Microsoft knows how many unfixed vulnerabilities are listed in the bug-tracking system for Internet Explorer. Some companies take this approach to extremes—Cisco recently attempted to take legal action against a security researcher who disclosed vulnerabilities in Cisco’s Internetwork Operating System (IOS).
In open source, community-developed software, some disclosure is required. The community is able to fix only the vulnerabilities it knows about. Many of these projects, such as Mozilla, have a private mailing list for information about vulnerabilities. Only a subset of the developer community is allowed on this list, and any security-related bugs are locked, preventing malicious users from viewing the information.
This varying level of openness makes it very difficult to compare the security of two software products objectively. A slightly better measurement is how long it takes to fix security holes. This is something that open source projects tend to do well, because they follow a strategy of "release early, release often." This setup means that a fix can often be written and released within a day of a vulnerability being discovered. On the surface, this sounds good. In practice, things are a little more complicated.
Microsoft recently realized that a lot of attackers were downloading the Microsoft security updates, looking at the changes, and then reverse-engineering exploits from the fixes. Because it took a while for everyone to deploy a patch, there was often a gap between an exploit being developed in this way and the majority of systems being patched. To help curb this problem, Microsoft instituted "Patch Tuesday," a regular security-release schedule that allowed system administrators to plan better for patch deployment and helped close this window. Of course, this arrangement doesn’t help home users, who don’t find out about the holes until they next run Windows Update.
A better measure than how long it takes a security patch to be written is how long it takes to deploy. Several factors affect deployment:
- Ability to download only the changes. A lot of Internet users are still on modem connections, so having to download a full copy of an app just to get security updates means that the user isn’t likely to bother.
Open source wins here if you’re building from source, but not always if you’re doing a binary upgrade. If you’re doing a source upgrade, you can usually just sync your CVS (or equivalent) tree with a remote server, gaining the latest changes but nothing else, and recompile. Most open source binary distributions require the entire package to be downloaded again. Closed solutions tend to be slightly better about distributing binary changes, but the support varies a lot between companies.
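To make the source-upgrade path concrete, here is a sketch of the usual sequence: pull only the changed files from the project’s version-control server, then rebuild. The exact module names and build targets vary from project to project, so treat these commands as illustrative rather than universal:

```
# Pull down only the files that changed since the last sync;
# -d picks up new directories, -P prunes empty ones
cvs -q update -dP

# Rebuild and reinstall; the actual targets depend on the project
make
make install
```

The bandwidth saving comes from the first step: only the diffs travel over the wire, not a fresh copy of the whole source tree.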
- Ability to download only the security update. System administrators are wary of upgrading working systems, so bundling a security fix with new features is likely to mean that the security fix won’t be applied.
Again, this issue varies a lot from project to project in the open source world. Many have different branches for different kinds of patches. FreeBSD is an example: it has a -RELEASE branch for each official release. These branches receive security updates, but nothing else. The -STABLE branch gains new features once they’ve been tested sufficiently. Finally, the -CURRENT branch contains new and untested features. Many other projects follow a similar release scheme, although the names of the branches often differ.
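On FreeBSD, which branch a machine tracks is controlled by the tag line in a cvsup configuration file. The sketch below is illustrative (the host, paths, and the RELENG_5_4 tag are placeholders for whatever release you actually run), but it shows how the branch names above map onto tags:

```
# Illustrative cvsup supfile: follow only the security branch.
# tag=RELENG_5_4 tracks 5.4-RELEASE plus security fixes;
# tag=RELENG_5 would track -STABLE, and tag=. tracks -CURRENT.
*default host=cvsup.FreeBSD.org
*default base=/var/db
*default prefix=/usr
*default release=cvs tag=RELENG_5_4
*default delete use-rel-suffix compress
src-all
```

Pinning a production machine to a RELENG_X_Y tag is what gives administrators security fixes without any feature churn.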
In the closed software world, the picture is somewhat different. Discontinuing support, including security fixes, is often seen as a legitimate tactic for "encouraging" users to upgrade. For example, security fixes for Windows NT Server 4.0 are no longer available; if you want a security hole patched on such a system, you need to upgrade to a newer version of Windows. The same is true of older versions of Internet Explorer.
- Frequency of notification of new updates. Ideally, a user should learn immediately that a new update is ready. If the system has to poll for updates, what’s the frequency of this polling? If there is no automatic update system, how do users learn that security updates are available?
Performance here is highly variable. Mozilla Firefox does particularly badly, releasing updates to its automatic update server several days after they are made available to the public. Open source operating systems, in general, do better: many have an automatic update mechanism that checks for updates to all software installed via the system’s default package-management system. Windows and OS X don’t do quite as well, since their automatic update systems are not available to third-party developers.
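As a sketch of that package-manager route, on a Debian-style system a single pair of commands (run as root) refreshes the package index and then applies every pending upgrade, security fixes included, regardless of which developer shipped the package:

```
# Refresh the local index of available package versions
apt-get update

# Fetch and install all pending upgrades, security fixes included
apt-get upgrade
```

Because third-party software installed through the same package system is covered by the same two commands, there is one notification channel for the whole machine, not one per vendor.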