Software [In]security: Balancing All the Breaking with some Building
Software security involves both building things properly and understanding how things break. In my view, these two aspects of security should be in balance (and properly feed off each other). This concept is so central to my approach to security that I created the Yin/Yang cowboy hat icon that adorns the front cover of Software Security and is used to brand other books in the software security series.
The Yin/Yang is the classic Eastern symbol describing the inextricable mixing of standard Western polemics (black/white, good/evil, Heaven/Hell, create/destroy, et cetera). Eastern philosophies are described as holistic because they teach that reality combines polemics in such a way that one pole cannot be sundered from the other. In the case of software security, two distinct threads—black hat activities and white hat activities (offense/defense, construction/destruction)—intertwine to make up software security. A holistic approach, combining yin and yang (mixing black hat and white hat approaches), is what is required.
The balance is out of whack. From where I sit, it appears that computer security is spending more time and effort on offense than defense. Interestingly, this problem is evident in very diverse domains of cyber security, ranging from the hacker crowd that attends conferences such as Blackhat to the military leadership of US Cyber Command.
Simply put, we need more proactive defense in computer security.
Overcoming the NASCAR Effect
One of the problems to overcome is that exploits are sexy and engineering is, well, not so sexy. I've experienced this firsthand with my own books, as the black hat "bad guy" books such as Exploiting Software outsell the white hat "good guy" books by a ratio of 3:1. I attribute this to the NASCAR effect.
Nobody watches NASCAR racing to see cars driving around in circles; they watch for the crashes. That's human nature. People prefer to see, film, and talk about the crashes rather than talk about how to build safer cars. There is a reason that there are several NASCAR channels available via satellite TV and zero Volvo car safety channels.
This same phenomenon happens in software. It seems that when it comes to software security, people would rather talk about software exploits, how things break, and the people who carry out attacks than about software security best practices. I suppose that instead of being discouraged by this effect, we need to take advantage of it to get people interested in the software security problem.
In the end, we will need to integrate security into the software development lifecycle (as described in Software Security). But maybe the best way to start is to get a better handle on what attacks really look like. At least then we'll learn more about what deep trouble we're in.
Real Defense is Proactive, Even in Cyber War
War has both defensive and offensive aspects, and understanding this fundamental dynamic is central to understanding cyber war. Overconcentrating on offense can be very dangerous and destabilizing because it encourages actors to attack first and ferociously, before an adversary can, since no effective defense is available. On the other hand, when defenses are equal or even superior to offensive forces, actors have less incentive to strike first because the expected advantages of doing so are far less. The United States is supposedly very good at cyber offense today, but from a cyber defense perspective it lives in the same glass house as everyone else. The root of the problem is that the systems we depend on—the lifeblood of the modern world—are not built to be secure.
This notion of offense and defense in cyber security is worth teasing out. In my view, offense involves exploiting systems, penetrating systems with cyber attacks, and generally leveraging broken software to compromise entire systems and systems of systems (see, for example, Exploiting Software). On the other hand, defense means building secure software, designing and engineering systems to be secure in the first place, and creating incentives and rewards for systems that are built to be secure (see, for example, Software Security).
The United States has reportedly developed formidable cyber offenses. Yet America's cyber defenses remain weak. What passes for cyber defense today—actively watching for intrusions, blocking attacks with network technologies such as firewalls, law enforcement activities, and protecting against malicious software with anti-virus technology—is little more than a cardboard shield. Those kinds of defenses are too reactive, and they happen well after a system is built.
It's worth noting that the kind of defenses usually adopted by Capture the Flag (hacking contest) participants at Defcon and during the international Capture the Flag (iCTF) contest tend to be very much reactive as well. If the hacker community could figure out a way to include security engineering in the mix during their contests (especially at the software level) I would welcome that evolution.
Case in Point: Charlie Miller's Spectacular Hacking
Besides the NASCAR effect and the inertia against being proactive, there is another problem to overcome as well. It is far easier to break systems and become famous than it is to build systems properly and become equally famous. Consider the case of Charlie Miller, whose work hacking Apple products has been extensively covered in the media. Charlie uses a very straightforward strategy to find problems in popular systems. He starts with an outside→in fuzzing tool and sifts through the mountains of results. Reportedly, some of his fuzz input generators are incredibly simple five-liners that do things like changing an input file one bit at a time, throwing it at the application under test, and keeping track of the results. (For what it's worth, these kinds of techniques have been around for a long time in computer security. See, for example, the ancient software testing tome I co-authored with Jeff Voas back in 1997, Software Fault Injection.) By automating this kind of testing and running it repeatedly (sometimes for several weeks at a time), Charlie is able to uncover enough exploitable flaws to make a name for himself.
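To make the simplicity of this kind of testing concrete, here is a minimal Python sketch of one-bit mutation fuzzing, the technique described above. The function names and the toy crash-collecting harness are illustrative, not a reconstruction of Miller's actual tooling; a real campaign would run mutants against a target binary for weeks and triage the crashes.

```python
def bit_flip_mutants(data: bytes):
    """Yield every copy of `data` that differs by exactly one flipped bit."""
    for byte_index in range(len(data)):
        for bit in range(8):
            mutated = bytearray(data)
            mutated[byte_index] ^= 1 << bit  # flip a single bit
            yield bytes(mutated)

def fuzz(seed: bytes, target):
    """Run `target` on each one-bit mutant of `seed`; collect inputs that crash it."""
    crashes = []
    for mutant in bit_flip_mutants(seed):
        try:
            target(mutant)
        except Exception:
            crashes.append(mutant)
    return crashes

# Toy "application under test": a parser that blows up on unexpected input.
def fragile_parser(data: bytes):
    if not data.startswith(b"OK"):
        raise ValueError("malformed header")

crashing_inputs = fuzz(b"OK", fragile_parser)
```

Each flipped bit in the two-byte seed corrupts the header, so all 16 mutants crash the toy parser; the point is that a handful of lines, run in a loop against a large corpus, is enough to surface real bugs in software that was never engineered to withstand malformed input.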
This kind of thing raises two basic questions. The first (and probably most important) is why Apple can't do this kind of testing itself. It could, and it should. Charlie says it best when he says that all it takes is patience. The second question is what kind of software engineering process results in software that can be defeated by automated fuzzing?! Apparently, Apple needs to take a closer look at its software security initiative and tighten things up.
Given this example, our challenge as a discipline should be clear: even though the Charlie Millers of the world can break systems automatically and garner spectacular coverage, we need to somehow leverage this into impetus for better software security engineering. We need better building and less breaking.
A Call for Engineering
Perhaps the Blackhat conference will add a track about security engineering to begin to balance out its over-focus on breaking. (I've talked to some of the advisors about it.) Maybe contests like Microsoft's Bluehat Prize—a security engineering challenge—will help. Maybe the narrative in software security will shift from "what is the best way to find bugs" to "how do we fix these bugs."
One thing is for sure: the demand for people who can practice software security professionally and build more secure systems is growing steadily. Proactive security—once a pipe dream—is now a reality in many leading firms. The security engineering field is progressing apace; now it is time for the rest of security to grow up.