
Software [In]security: Application Assessment as a Factory

In security testing, the cost per defect (or cost per bug) metric is too often misused or misunderstood by non-technical managers. Gary McGraw explains how creating an application assessment factory can salvage the power of this valuable metric while mitigating the potential for misuse.
One very useful metric for software security management is cost per defect, or cost per bug. The idea is elegant and compelling: determine how many security defects (usually bugs) are uncovered using various security analysis methods, and do what you can to drive down cost per defect.
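The article does not prescribe a formula, but the conventional computation is total analysis cost divided by the number of confirmed defects a method uncovered. A minimal sketch (the method names and dollar figures below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class AssessmentRun:
    method: str
    total_cost: float   # analyst time, licenses, triage -- same currency throughout
    defects_found: int  # confirmed defects, duplicates removed

    def cost_per_defect(self) -> float:
        # A run that finds nothing has no meaningful rate.
        if self.defects_found == 0:
            return float("inf")
        return self.total_cost / self.defects_found

# Hypothetical figures, purely for illustration.
runs = [
    AssessmentRun("static analysis", total_cost=12000.0, defects_found=60),
    AssessmentRun("penetration test", total_cost=15000.0, defects_found=10),
]
for run in runs:
    print(f"{run.method}: ${run.cost_per_defect():.2f} per defect")
```

Note that the denominator only counts confirmed, de-duplicated defects; inflating it with false positives is one of the easiest ways to game the metric.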

Unfortunately, the cost per defect number is just as easy to misuse as it is powerful when properly applied. Different methods are suited to different needs, and comparing them directly can be like comparing apples and oranges. By creating an application assessment factory, all the power of the cost per defect metric can be salvaged even as the potential for misuse is properly managed.

Apples, Oranges, Pen Testing, and Static Analysis

Penetration tests find bugs. Web application security testing tools find bugs. Static analysis tools find bugs. Manual code review finds bugs. The question is, which method is best at finding security bugs? Or is it?

Web application security tools are particularly well-suited to finding problems in XML configuration files for Web apps (think authorization issues). Penetration tests (when they are well-designed) uncover the low-hanging fruit that malicious hackers go after every time. Static analysis tools are great at finding all kinds of security problems in software, especially when they are properly tuned. In our work at Cigital, we have determined that a properly tuned static analysis tool (read that to mean a tool with tailored rules) is very powerful indeed. That’s all good.

The problem comes when upper-level management starts comparing these methods directly using cost per defect. The defects found by different methods are distinct enough that each method has its place. So if you compare these methods by cost per bug alone, you may end up throwing out the baby with the analysis bathwater. For example, you might determine that both Web-app testing tools and pen tests tend to find the same number of bugs, and one method is cheaper, so why not abandon the other? Out in the real world, we have seen this happen more than once, even though the bug categories were completely orthogonal. That is a classic misuse of cost per bug data.
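The orthogonality trap can be made concrete. In the sketch below (the category names and counts are hypothetical), two methods find exactly the same number of bugs, yet the categories they cover do not overlap at all, so dropping the cheaper method's rival would silently leave an entire class of defects unfound:

```python
# Hypothetical findings, keyed by defect category.
webapp_tool_findings = {"xss": 4, "config_auth": 6}
pen_test_findings = {"business_logic": 5, "session_mgmt": 5}

# Raw counts look interchangeable...
same_count = sum(webapp_tool_findings.values()) == sum(pen_test_findings.values())

# ...but the categories are completely orthogonal.
overlap = set(webapp_tool_findings) & set(pen_test_findings)

print(same_count)  # True: each method found 10 bugs
print(overlap)     # set(): no category found by both methods
```

Comparing only the headline counts (or only cost per bug) hides exactly the information that makes both methods worth keeping.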

The Application Assessment Factory

By sheltering multiple methods in an assessment factory, you can take full advantage of cost per bug metrics without comparing apples and oranges. In fact, you can use apples for what they are good at and oranges for what they are good at and still lower the cost per bug number.

The factory metaphor works like this. Code that builds is submitted to the factory by various development groups (say, using WAR files that include source, binaries, and dependencies). The factory does its bug-finding thing. The results that come from the factory are standardized to fit into the existing defect management processes that development already uses.
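The standardization step is where the factory earns its keep. A minimal sketch of what it might look like, assuming made-up tool names, finding shapes, and defect-record fields (the article specifies none of these): heterogeneous results are mapped into the one defect format development already tracks.

```python
def normalize(tool: str, raw_finding: dict) -> dict:
    """Map a tool-specific finding onto a common defect record (hypothetical schema)."""
    if tool == "static_analysis":
        return {
            "title": raw_finding["rule"],
            "location": f'{raw_finding["file"]}:{raw_finding["line"]}',
            "severity": raw_finding["priority"],
            "source": tool,
        }
    if tool == "pen_test":
        return {
            "title": raw_finding["issue"],
            "location": raw_finding.get("url", "unknown"),
            "severity": raw_finding["risk"],
            "source": tool,
        }
    raise ValueError(f"unknown tool: {tool}")

# Two invented findings, one per method, flowing through the factory door.
submissions = [
    ("static_analysis",
     {"rule": "SQL injection", "file": "Login.java", "line": 42, "priority": "high"}),
    ("pen_test",
     {"issue": "Weak session cookie", "url": "/account", "risk": "medium"}),
]
defects = [normalize(tool, finding) for tool, finding in submissions]
for d in defects:
    print(d["source"], "->", d["title"], "@", d["location"])
```

Because every record carries a code location and a severity in the same shape, development can file and fix it like any other bug, which is precisely the "actionable results" property discussed next.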

This “actionable results” aspect is no small matter. Many methods on their own (say, penetration testing) result in a list of security problems that development may have no idea how to fix or even how to find in the code. An application assessment factory helps to ensure that security analysis results can be used (and that security actually improves when the bugs are fixed). For too many years now, software security groups have been satisfied with identifying problems and throwing their (often variable) results over the wall. In many cases this has resulted in a large pile of known security problems that remain in the code to this day.

Once a factory is established, the trick is applying the right methods on the factory floor. A very good way to tune a factory is by automating as much as possible, measuring effectiveness using actual defect data collected over time to drive decisions. Future columns will explore how to go about evolving the factory floor. For now, suffice it to say that there are clear benefits to the factory metaphor:

  1. Business units see automated submission and results tracking: raw material goes into the factory, and the factory produces consistent and actionable results with a short turnaround (measured in days).
  2. The factory model allows an internal security group to focus on cost per defect and frees it to select the correct method of analysis to produce each category of result in the cheapest possible way.
  3. The factory door allows the internal security team to spend less time wedded to a single externally exposed approach (like static analysis or pen testing) and to take the shortest path to each cost savings (by finding and improving automation).

Acknowledgement: John Steven of Cigital is the originator of the factory idea, which he has applied successfully for multiple customers.
