Software [In]security: Attack Categories and History Prediction
Would You Like Your Attack Medium Rare?
By taking a low-resolution, historical view of attacks seen in computer security, we can discern four distinct categories of attack and even anticipate attacks yet to come. This kind of analysis is useful not only to practitioners in the field who need to understand today's attacks, but also to researchers gearing up to address the problems of tomorrow. Among the findings we will cover: the age of bugs is over (from a research perspective), much work remains to be done on design problems, and trust problems are the hard problem of the future.
Four Basic Attack Categories
Attacks come in four basic categories: configuration attacks, attacks on implementation defects in systems (aka bugs), attacks on design and architecture defects in systems (aka flaws), and attacks on confusion surrounding trust. By taking a historical view, we can see how these attack categories align over time.
When computers not designed to be networked together were connected into a massive network that included possibly malicious actors, the first category of attacks and defenses was born — configuration attacks. Common problems with configuration include running old versions of network services with known vulnerabilities (sendmail, wu-ftpd, etc.), incorrect installation of services with too much privilege (running apache as root), allowing your ARP table to be rewritten remotely (something firewalls were designed to fix), incorrect separation of network segments, and so on.
Configuration problems are fairly straightforward and can often be spotted by automated systems. Network scanners such as SATAN and nmap, along with host-based tools such as COPS and Tripwire, are designed to uncover configuration problems so that they can be fixed. Ironically, Dan Farmer (one of the two inventors of SATAN) was fired from his corporate job for releasing the tool back in 1995 because it "could possibly be used as a hacking tool." A decade or so later, any sysadmin not using a SATAN-like tool is fired for incompetence. Things have come a long way in the configuration world!
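The kind of check these tools automate can be sketched in a few lines. The example below is a minimal, hypothetical illustration — the service names, the allow-list, and the root-privilege rule are assumptions for the sketch, not the logic of any real scanner:

```python
# Hypothetical configuration check: flag network services running with
# more privilege than they need. A real scanner would read the live
# process table; here a small list of (service, user) pairs stands in.

OVERPRIVILEGED_OK = {"sshd"}  # services assumed to legitimately need root

def find_overprivileged(processes):
    """Return names of services running as root that are not on the allow-list."""
    return [name for name, user in processes
            if user == "root" and name not in OVERPRIVILEGED_OK]

procs = [("apache", "root"), ("sshd", "root"), ("sendmail", "mail")]
print(find_overprivileged(procs))  # flags apache, but not sshd or sendmail
```

The point is not the specific rule but that configuration policy is mechanical enough to check automatically — which is exactly why this category of problem has become tractable.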
As systems became better configured and as firewall technology became widespread, a new category of attacks surfaced — attacks against bugs in software comprising systems. Beginning with the infamous buffer overflow, the bug parade continues to this day with the likes of cross-site scripting (XSS) problems, SQL-injection attacks, and an entire raft of Web-related security problems commonly encountered in poorly implemented Web applications. For a point-in-time, generic view of the bug parade, check out the OWASP top ten list; but be forewarned that generic bug lists have their problems.
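To make the bug parade concrete, here is a minimal SQL injection sketch (using Python's sqlite3 with an illustrative in-memory table): the vulnerable version splices attacker input into the query string, while the fixed version uses a parameterized query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_vulnerable(name):
    # BUG: attacker-controlled input is spliced directly into the SQL text
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # FIX: parameterized query; the driver keeps data out of the SQL parse
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # returns every secret in the table
print(lookup_safe(payload))        # returns nothing: no user has that name
```

This is an implementation-level defect in the purest sense: the design (look up a user's secret) is fine; the code that realizes it is not.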
We are just beginning to eradicate bugs in software (after several years of piling up countless problems which were identified but sadly not fixed). New technologies including static code scanners, dynamic testing tools for Web protocols, and factory approaches that combine various methods are helping to automate bug finding and drive down the cost per defect. At the same time, mature software security initiatives are generating real data showing the value of finding security bugs early in the software development lifecycle.
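At their simplest, static scanners work by matching known-dangerous patterns in source code. The toy pass below is an illustrative sketch only — real tools build parse trees and data-flow models, and the three C functions flagged here are a tiny assumed subset of any real rule set:

```python
import re

# Toy static-analysis pass: flag calls to known-dangerous C functions.
# Real scanners do far more (parsing, data flow); this shows the idea.
DANGEROUS = re.compile(r"\b(strcpy|gets|sprintf)\s*\(")

def scan(source):
    """Return (line_number, line) pairs that match a dangerous-call rule."""
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), 1)
            if DANGEROUS.search(line)]

code = "int main() {\n  char buf[8];\n  strcpy(buf, argv[1]);\n}"
print(scan(code))  # flags the strcpy call on line 3
```

Because checks like this run mechanically over millions of lines, they drive down the cost per defect found — the economic point made above.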
So what happens when we start squeezing out bugs (not that we're in any real danger of accomplishing that for several years)? Fortunately, researchers like Professor Fred Schneider from Cornell (recently interviewed for the Silver Bullet security podcast) have been looking ahead. Fred believes that from a research perspective, the age of bugs is over.
The next category of attacks to expect are attacks that target defects in design and architecture — which I call flaws. Software security practitioners have known for years that bugs and flaws are divided roughly 50/50 when it comes to serious security problems in software. However, methods to find and eradicate flaws are much less mature and much more expertise-intensive than methods for finding bugs. Early work on threat modeling and architectural risk analysis exists, but automating these intensely manual processes remains out of reach. Not only that, we also lack a taxonomy of flaws like the ones we have for bugs (see the Seven Pernicious Kingdoms and the CWE).
In order to get ahead of the curve in potential attack space, we should be concentrating some of our research effort on flaws: tagging and bagging, creating taxonomies, building bullet-proof interfaces, and automating discovery. There is plenty of work to be done.
Attacks of Tomorrow: Trust
We have been making tangible progress against attacks targeting the low-hanging-fruit categories of configuration problems and bugs. We can even envision making some progress against flaws.
Looking even farther ahead, we can anticipate the fourth category of attacks — attacks involving trust problems. These problems are the hardest of all to deal with. Today, most of our systems have been designed in terms of enclaves. Systems that are members of an enclave are set up to trust other systems in the enclave more than those outside it. As an example, consider the kinds of trust afforded a corporate file server that resides in your building versus a file server run by another corporation and housed elsewhere. The notion of "local trust" in an enclave is certainly convenient, but it opens us up to attacks from inside the enclave. Whether such an attack is carried out by a rogue insider or by an attacker who gains inside access by hacking a machine in the enclave, it's easy to see how the enclave model quickly breaks down.
To solve this problem we must create systems that are significantly more paranoid than those of today. In essence, we must carve up trust with much finer granularity than we do now. This will shatter the notion of trust into many pieces, but it will allow us to apply the principle of least privilege much more coherently. This shades-of-gray trust model will, for example, involve moving from read/write/execute permissions on files to privileges associated with particular fields and data values.
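A sketch may help show what field-level privilege looks like compared to coarse file permissions. The roles, field names, and policy structure below are purely illustrative assumptions — the point is the default-deny check against (role, field) pairs rather than whole files:

```python
# Sketch of fine-grained trust: each role holds privileges on individual
# fields rather than one read/write/execute bit per file. Roles, fields,
# and grants here are invented for illustration.

POLICY = {
    ("nurse",  "patient.name"):      {"read"},
    ("nurse",  "patient.vitals"):    {"read", "write"},
    ("doctor", "patient.name"):      {"read"},
    ("doctor", "patient.vitals"):    {"read", "write"},
    ("doctor", "patient.diagnosis"): {"read", "write"},
}

def allowed(role, field, action):
    """Least privilege: deny unless the (role, field) pair grants the action."""
    return action in POLICY.get((role, field), set())

print(allowed("nurse", "patient.vitals", "write"))    # permitted by the grant
print(allowed("nurse", "patient.diagnosis", "read"))  # denied: outside the grant
```

Even this tiny table hints at the management problem discussed next: the policy grows with roles times fields times actions, which is exactly what breaks down at enterprise scale.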
The problem is that we can barely manage the low-resolution trust granularity of today. Even role-based access control and entitlement systems break down under the strain of tens of thousands of users with thousands of security bits each. Put bluntly, we have a huge security policy management problem.
There are plenty of issues to sort out for future trust models. Automation of thorny policy management issues is likely to help, as are abstractions that allow us to build and enforce higher-level policies. Add to this a more intuitive notion of partial trust, and the buildup of trust through experience, and we begin to approach something more akin to people's inherent trust models.
Getting ahead of the attack category curve is possible with proper research investment. Of course we can't just assume that we'll always get the easy categories exactly right, but progress on configuration problems and bugs has been noteworthy. Next up, flaws and trust problems.