Software [In]security: Getting Past the Bug Parade
Software security practitioners have known for years that software defects lead to serious security problems. What we all seem to forget sometimes is that defects come in two basic flavors: bugs in the code and flaws in the design. In the last few years we have made impressive progress against bugs using static analysis tools like those made by Coverity, Fortify, and Ounce to review code. The time has come to focus more attention on finding flaws through threat modeling and architectural risk analysis.
Bugs, Flaws, and Defects
The software security bug parade continues apace, partially driven by fast growth in Web-based applications. Any decent bug list includes: cross-site scripting, SQL injection, cross-site request forgery, buffer overflows, input validation problems, and so on. Take a look at the OWASP top ten to round out your list of bug parade floats, but don’t forget that plenty of security bugs can be found in non-Web software.
The real problem is that in my experience (and in Microsoft’s) bugs account for only half of the problem. The remaining 50% of software defects leading to security problems are higher-level flaws. To give you some idea of what flaws are like, consider the following list: interposition problems, type safety confusion, insecure auditing, broken access control over tiers, method overriding, and misuse of crypto. As you can see, flaws happen at a much higher level than bugs do.
Finding and eradicating bugs involves looking through the code to find, for example, use of the gets() function in C. The gets() function is dangerous because it allows a potential attacker to provide as much input as she wants, almost guaranteeing a buffer overflow. On the other hand, finding and eradicating flaws involves taking a forest-level view of software at the architectural level. Flaws like susceptibility to “attacker in the middle” are rarely uncovered during code reviews; instead they are found during design and architecture reviews.
I like to use an analogy to underscore the importance of focusing on both bugs and flaws. Imagine we’re trying to build a solid house. Today’s bug-o-centric focus pays lots of attention to making sure the bricks we build with are sound and won’t fail. But getting a solid house depends just as much on the placement of walls, windows, and doors as it does on the bricks. Just as a house requires attention to both bricks and architecture, software security requires attention to both bugs and flaws.
Pushing Tools to the (Bug) Limits
Things are never as simple as they seem at first blush when it comes to security. Software defects are no exception. Bugs and flaws actually define two ends of a spectrum of defects, some of which may be hard to peg as one kind or another. Not only that, but code review technology for security has advanced radically in the last decade. In 1999 when we released ITS4 (the world’s first security code scanning tool), only really simple bugs of the gets() variety could be uncovered. False positives were a serious and pervasive problem. These days, the standard rules that come with commercial tools like Fortify are far superior. False positives are much less of an issue. At the current limits of the code-scanning space is the idea of writing customized rules to scan for particular patterns. My company Cigital has spent considerable effort creating custom rules for customers. Some of these rules find defects that look as much like flaws as they do like bugs. We have just released a set of open source custom rules for enterprise Java.
Even when enhanced with custom rules, code scanning and code review only go so far. To get a handle on flaws, a different kind of approach is required.
Architectural Risk Analysis
The biggest challenge with finding flaws in software is that the process cannot be automated with scanning tools; architectural analysis requires specialized (and rare) experience and expertise. Several approaches to architectural risk analysis are available for consideration, but as a result they are all labor intensive. Microsoft uses the STRIDE model as part of what it calls threat modeling. Cigital performs Architectural Risk Analysis (ARA). You can find a chapter from Software Security describing ARA here on InformIT.
In my work I detail a set of best practices for software security called the touchpoints. Though there are seven touchpoints all told, the top two are code review with a static analysis tool and architectural risk analysis. It’s no coincidence that together they cover both bugs and flaws. Make sure your approach to software security does both too.