Software [In]security: BSIMM versus SAFECode and Other Kaiju Cinema
We've covered the BSIMM extensively in this column (see, for example, BSIMM3, Driving Efficiency and Effectiveness in Software Security, and Cargo Cult Computer Security). Unfortunately, the BSIMM continues to be confused with a software security methodology (which it is not), both by the trade press and even by some of the BSIMM participants themselves!
The latest misunderstanding involves SAFECode, a non-profit organization of eight independent software vendors (ISVs) dedicated to software assurance. Let's get to the bottom of the SAFECode confusion by discussing our official BSIMM reaction to the SAFECode paper, Interpreting the BSIMM: A SAFECode Perspective on Leveraging Descriptive Software Security Initiatives. In general, while the SAFECode piece provides additional insight into the SAFECode Practices, it also attempts to provide positioning and uses for the BSIMM that we disagree with. The article you are reading now is a clarification from the BSIMM authors.
Measurement, Methodology or Approach?
The most obvious place to start is by describing what the BSIMM is. The BSIMM is a data-driven model and measurement tool developed by careful study and analysis of the software security initiatives of over 42 firms. To date, over 85 distinct measurements have been made with the BSIMM. Participating firms include Adobe, Aon, Bank of America, Capital One, The Depository Trust & Clearing Corporation (DTCC), EMC, Google, Intel, Intuit, Microsoft, Nokia, QUALCOMM, Sallie Mae, Standard Life, SWIFT, Symantec, Telecom Italia, Thomson Reuters, VMware, and Wells Fargo. The list includes plenty of heavy hitters building software that we all use and rely on every day.
The big idea behind the BSIMM remains very straightforward. Go out and gather data first, and then use the data to derive a data-driven model. Just to be extra clear: data first, model second. Once the data-derived model exists, it can be used to compare software security initiatives to each other. It can also be used to compare groupings of similar-flavored initiatives (for example, independent software vendors (ISVs) participating in the study, of which there are 12, or financial services firms participating in the study, of which there are 19).
As a measurement tool, the BSIMM includes a simple scorecard organizing 109 activities observed among any of the 42+ firms to date. The scorecard was derived from the data we gathered. In other words, the BSIMM is the union of activities that member firms ascribe to their own software security initiatives. The activities are grouped into twelve central practices making up the Software Security Framework. The practices are: strategy and metrics, compliance and policy, training, attack models, security features and design, standards and requirements, architecture analysis, code review, security testing, penetration testing, software environment, and configuration management and vulnerability management. To be sure, the BSIMM casts a wide net in this regard and no claim is made that any firm should attempt to adopt all 109 activities. In fact, on page 4 of the BSIMM document we state:
You should tailor the activities that the BSIMM describes to your own organization (carefully considering the objectives we document). Note that no organization carries out all of the activities described in the BSIMM.
On the other hand, mature firms leading the world in software security have proven to be "well rounded" and carry out activities in all twelve of the BSIMM practices.
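To make the "data first, model second" idea and the scorecard concrete, here is a minimal sketch in Python. The firm names, activities, and practice labels are purely hypothetical placeholders (the real BSIMM comprises 109 activities observed across twelve practices); the point is only to show how a model built as the union of observed activities can be turned into a per-firm scorecard for comparison.

```python
# Illustrative sketch only (not the actual BSIMM data or tooling):
# the model is simply the union of activities observed across firms,
# and a firm's "scorecard" marks which of those activities it carries out.

from collections import defaultdict

# Hypothetical observations: each firm maps to the (practice, activity) pairs
# observed there. These toy entries stand in for the real 109 activities.
observations = {
    "FirmA": {("Code Review", "use static analysis tools"),
              ("Training", "provide awareness training")},
    "FirmB": {("Code Review", "use static analysis tools"),
              ("Penetration Testing", "use external penetration testers")},
}

# "Data first, model second": the model is the union of everything observed.
model = set().union(*observations.values())

def scorecard(firm_activities, model):
    """Mark each activity in the model as observed (1) or not (0) for one firm."""
    return {activity: int(activity in firm_activities) for activity in model}

def coverage_by_practice(card):
    """Summarize observed activities per practice, as a BSIMM-style scorecard does."""
    totals, observed = defaultdict(int), defaultdict(int)
    for (practice, _activity), seen in card.items():
        totals[practice] += 1
        observed[practice] += seen
    return {practice: (observed[practice], totals[practice]) for practice in totals}

for firm, activities in observations.items():
    print(firm, coverage_by_practice(scorecard(activities, model)))
```

In practice, of course, the scorecard is assembled and interpreted during a BSIMM assessment; the sketch above only illustrates the union-then-compare structure, not the actual tool.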
When we began work on the BSIMM project, we were reacting to the proliferation of well-meaning (but often utterly unrealistic) software security methodologies that were popping up like mushrooms after a summer rain. In 2007, it seemed that everybody and their brother introduced a new software security methodology. We wanted to do something different with the BSIMM. We wanted to do science, and that meant focusing on measurement.
Over the last three years, as we have been gathering data during the BSIMM project, we have closely observed over 42 distinctly different software security approaches/methodologies. That's right: no two firms we have studied do software security exactly the same way. Every single firm has a different story about how their initiative started, how it has evolved, and where it is going. The stories are as unique as the cultures of the firms themselves. (To understand the range here, consider the cultural differences between Google and Microsoft or between Fidelity and JPMorgan Chase…not to mention between Microsoft and Fidelity—by the way, all four firms are active BSIMM participants.) We were hoping that the BSIMM data might give rise to the world's premier data-driven software security methodology (to do software security, first do this, then do that, and so on)…but it did not. Instead, it resulted in a data-driven measurement tool describing activities common among many diverse methodologies. When it comes to software security methodology, one size apparently does not fit all.
The BSIMM is not a methodology. In our opinion, attempting to create a detailed prescriptive methodology for all future software security initiatives on the planet to follow in lockstep fashion (including the 42+ distinct software security initiatives we are most familiar with) is folly.
The good news is that the BSIMM can be used to measure all 42+ initiatives and to help describe their current state in common terms that all of the firms understand. That goes a long way towards addressing our lack of science in the field of software security. You must measure first. Then you can improve.
So that's what the BSIMM is. But what is a software security methodology?
Probably the most widely known software security methodology is Microsoft's Secure Development Lifecycle (SDL). Others that are well understood and documented include the Cigital Software Security Touchpoints, OWASP CLASP, and OpenSAMM.
By far the most common methodology that we observe in practice in the 42+ BSIMM firms is some kind of a hybrid model combining aspects of the SDL and the Touchpoints in novel and sometimes interesting ways unique to an individual firm.
In our view, SAFECode is attempting to build a new methodology by combining the software security expertise and experiences of eight firms: Adobe, EMC, Juniper Networks, Microsoft, Nokia, SAP, Siemens, and Symantec. (Note that six of the eight SAFECode members participate in the BSIMM, with Juniper Networks and Siemens currently not involved.) We laud the effort at creating a joint approach and methodology. Because SAFECode membership is limited to ISVs, perhaps the SAFECode methodology will add up to a success for other ISVs.
It is interesting to note that the SAFECode members all use slightly different software security methodologies today. Here are some specific examples:
- Adobe's ASSET group follows a methodology called the Adobe Secure Product Lifecycle (SPLC). See the Adobe security portal.
- EMC's Product Security Office follows the EMC product security policy, a wide-ranging approach encompassing policy, people, process and technology with a strong focus on ranking according to set standards clearly defined in risk-based guidelines.
- Microsoft uses the SDL, well described in books and also on the Web.
- Nokia's approach is driven through root cause analysis and is highly distributed, allowing product groups lots of autonomy in practice. General published guidelines exist but are not mandatory.
- SAP integrates security gates and practices directly into its Product Innovation Lifecycle (PIL) and internally tracks sixteen measures related to ISO standards 9001 and 15408.
- Symantec follows a methodology called Symmunize, which defines gates in the product lifecycle and includes metrics directly tied to the CWE.
A common unification of these six distinct methodologies (plus two) under the banner of SAFECode is an interesting proposition. Maybe when it comes to ISVs and software security, one size does fit all?! Perhaps Microsoft will begin to champion the SAFECode Practices over the SDL. We'll just have to see.
The SAFECode paper refers to both the SAFECode Practices and the BSIMM as "approaches to advancing software security." This is true when we think of the two initiatives writ large—they both aim to help organizations build more secure software. Where the rubber hits the road, however, neither "approach" is a real, day-to-day business solution that can be directly implemented by the uninitiated. Neither the SAFECode Practices nor the BSIMM includes decision-making data, metrics, business process advice, or many other useful things that we would expect to be included in something touted as a complete methodology. Instead, both approaches are inventories of software security activities. The SAFECode inventory is a subset of software security activities from the eight SAFECode members while the BSIMM inventory is a superset of software security activities from 42 firms (including six of the eight SAFECode members). BSIMM is not really an approach so much as it is the sum of the activities in the approaches of the 42 participating firms. This may seem minor, but it's a very important distinction we'll revisit.
Regarding the connection between BSIMM and SAFECode Practices, our advice is to use BSIMM to measure the current state of your software security initiative and determine other software security activities you should probably start doing. If your organization determines that one of the areas requiring investment matches one of the SAFECode Practices, you'd be silly not to take advantage of their collective wisdom for that activity. Seriously, these are smart, experienced experts doing really cool stuff. We really appreciate the collective wisdom they've shared. However, we disagree that SAFECode Practices in any way provide "a better starting point for implementation."
Pragmatic Software Security
As we mentioned above, the BSIMM activities are the union of software security activities observed in 42+ BSIMM firms. Because the BSIMM data are a union, and because six of eight SAFECode members participate, we can safely assume that the BSIMM data set includes the SAFECode data (barring direct input from Siemens and Juniper Networks). In fact, SAFECode member firms are distinguished in the BSIMM data set by holding six of the ten highest scores observed in the BSIMM project! Six of eight SAFECode members are indeed quite distinguished in their software security initiatives according to the BSIMM. The cool thing is that the BSIMM can be used to measure a much wider set of differently sized firms, in different verticals, with different levels of maturity, and it still works. That's precisely because of its inclusive nature.
The BSIMM retains its relevance over time in two ways. First, the data set continues to expand, roughly doubling with each release of the model. Note that the model is always updated to correspond to the data. The BSIMM evolves as the field evolves. Second, with the release of BSIMM3, the BSIMM is now a longitudinal study. That is, a number of firms have been measured twice with a period of several months between measurements. If you wonder how software security initiatives change over time in practice, look no further than the BSIMM.
Finally, the BSIMM is about realistic software security initiatives as they are actually practiced today. There is no wishful thinking. There is no vendor special sauce. And there are no artificial measurements. The BSIMM is a description of the state of the practice.
If you want data to compare your software security initiative to others worldwide, the BSIMM is the best place to go. We believe that each firm has a unique culture and approach to software development (what works at Google would probably fail at Microsoft and vice versa), and we believe accordingly that each firm will have a unique software security development lifecycle (SSDL). Driving such an initiative with real data that inform a unique strategy, and measuring with a common yardstick, is exceptionally powerful and is the best we can realistically hope for.
The Compliance Problem
Compliance is not software security. There, we said it. But compliance often drives software security activities in some industries—financial services in particular. In the best of cases, software security (the notion of building security in) is more about the spirit than about the letter of compliance. That is, if a firm is serious about protecting its customers' data, it will already be practicing good software security, and compliance will be an easy (almost trivial) side effect. (See Beyond the PCI Band-Aid for how this concept fits the PCI compliance nonsense to a T.)
Because compliance came up over and over again in our discussions of software security initiatives, there are a number of compliance-related activities identified in the BSIMM. The FFIEC and the OCC (banking regulators in the United States) are both beginning to think about software security very seriously. They have been learning about the BSIMM and its capability to measure software security initiatives. Financial services firms are taking note.
We think it is interesting that the SAFECode types appear to be allergic to the BSIMM compliance activities. Many of their largest customers are certainly required to abide by various regulations (and thus turn around and ask them about certain compliance activities during the COTS software acquisition process). Along these lines, the most interesting thing to consider is whether SAFECode is an organization whose number one goal is to avoid government regulation through a process of "self-certification." That is an opinion we have heard bandied about more than once in the software security community, and it would not be unusual; the payment card industry took the same approach with PCI.
In any case, the SAFECode members seem to believe that pursuit of compliance can be a bad thing. To wit, "In fact, companies may race to become compliant, but not necessarily secure, if they choose to emulate the most observed BSIMM activities." This is a canard and a brief look at but two BSIMM activities can lay it to rest.
CP1.1 (Know all regulatory pressures and unify approach) includes the following language: "If the business or its customers are subject to regulatory or compliance drivers such as FFIEC, GLBA, OCC, PCI DSS, SOX, SAS 70, HIPAA or others, the SSG acts as a focal point for understanding the constraints such drivers impose on software. The SSG creates a unified approach that removes redundancy from overlapping compliance requirements. A formal approach will map applicable portions of regulations to control statements explaining how the organization will comply." In other words, the point of this "compliance" activity is to harmonize compliance requirements and feed them to the development process.
CP1.2 (Identify PII obligations) includes the following language: "The way software handles personally identifiable information (PII) could well be explicitly regulated, but even if it is not, privacy is a hot topic. The SSG takes a lead role in identifying PII obligations stemming from regulation, customer demand, and consumer expectations. It uses this information to promote best practices related to privacy. For example, if the organization processes credit card transactions, the SSG will identify the constraints that PCI DSS places on the handling of cardholder data." Again, let's take some arcane, contradictory information and make it consumable by the people who write code and build systems.
Not quite the promotion of compliance for the sake of compliance.
In addition, if the ISVs believe that compliance doesn't apply to them, they are simply wrong. If they haven't yet had to respond to, "Explain to me how your software won't make me non-compliant," or "Explain to me how your software supply chain doesn't violate my compliance needs," or something similar, they should wait by the phone. The call is coming.
Incidentally, in the BSIMM project, we lump policy and compliance together into one of twelve fundamental BSIMM practices. Compliance is not a practice unto itself.
Measuring Security During Software Acquisition
We've been making BSIMM measurements and gathering real data for more than three years. During that time, we've spoken directly with several dozen firms—BSIMM participants, non-BSIMM participants, and even ISVs that buy software from other ISVs. These firms buy hundreds of millions of dollars of products from ISVs (including SAFECode members). We can safely say that the acquirers are turning up the heat in their vendor management processes. They're negotiating new SLAs, asking for proof of certifications, doing site visits, demanding security testing results, asking for source code, and so on.
With respect to software security, however, most acquirers are currently at the stage of simply trying to determine something quite simple—which software vendors have a software security clue, and which do not. Collectively, acquirers are not at the stage of trying to tease out the software security nuances between Giant A and Behemoth B; they're more worried about differentiating between New Vendor A and Tiny StartUp B, both of which produce some valuable piece of code. The vBSIMM was our first response to a very simple measurement need from acquirers. We are still evolving the vBSIMM.
In any case, the BSIMM itself measures breadth of software security activity much more than it measures depth of software security investment. That is, BSIMM reveals the total scope of software security activities, but may not account for whether you are doing certain things really, really well with lots of rigor and investment.
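As a toy illustration of that breadth-versus-depth distinction, the following sketch (in Python, with hypothetical vendor names, activities, and effort figures that are not drawn from any real BSIMM data) shows how two very different vendors can earn the same breadth-only count:

```python
# Illustrative sketch only: why a breadth-only count (activities observed)
# cannot distinguish depth (the rigor and investment behind each activity).
# Vendor names, activities, and effort figures are hypothetical.

vendor_a = {  # large ISV: heavy investment behind each activity
    "static analysis": {"observed": True, "annual_effort_hours": 20_000},
    "pen testing":     {"observed": True, "annual_effort_hours": 8_000},
    "threat modeling": {"observed": True, "annual_effort_hours": 12_000},
}

vendor_b = {  # small shop: the same activities on paper, far less rigor
    "static analysis": {"observed": True, "annual_effort_hours": 80},
    "pen testing":     {"observed": True, "annual_effort_hours": 40},
    "threat modeling": {"observed": True, "annual_effort_hours": 20},
}

def breadth_score(portfolio):
    """Count distinct activities observed, ignoring how deeply each is practiced."""
    return sum(1 for activity in portfolio.values() if activity["observed"])

# Both vendors earn the same breadth score even though their investment differs
# by orders of magnitude; that gap is exactly the depth a breadth-only
# measurement leaves invisible.
assert breadth_score(vendor_a) == breadth_score(vendor_b) == 3
```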
Software acquirers generally understand that very large ISVs currently spend many millions on software security while small ISVs spend many thousands. However, they also understand that ISVs are not necessarily putting millions into the software security of each product; they're putting millions into their software security people, process, and technology, which is then distributed over the portfolio. Therefore, to the people who make buying decisions, a BSIMM measurement of the breadth of software security activities performed is a reasonable proxy for the software security issues they may be bringing into their organization, regardless of provider size. Again, at the current level of sophistication of the average acquirer, a BSIMM score provides more than enough information about a vendor's software security initiative to support a buying decision. For many, even a vBSIMM score is sufficient for simply sorting vendors into "have a clue" and "don't have a clue" piles.
So, we have buyers who want a "software security measurement" for software producers but a measurement that does not allow the really sophisticated players to distinguish themselves from the crowd. Given a mature software security program, it's possible for a startup, a Web storefront with one application, or a small middleware provider to get the same BSIMM score as an ISV with hundreds of applications, thousands of developers, and millions in annual investment. Unfortunately, this seems to be an issue for SAFECode members who do lots of activities with rigor and enormous investment. And we do mean "unfortunately"; we're certainly not trying to cause any grief in the marketplace.
Do both acquirers and vendors desire a more granular, easy-to-obtain, comparable-across-vendors-of-various-sizes, reasonably-current, low-impact-on-vendor, inexpensive-to-obtain, breadth-and-depth-cognizant, trusted-third-party-produced measurement that helps predict software security risk associated with software produced by a given vendor? Who wouldn't!?
We certainly understand that SAFECode members—and likely all vendors, and maybe even all BSIMM participants—would like their depth to be recognized when being compared to a competitor. There should be a way to provide another level of granularity that distinguishes between a "mom-and-pop" shop with a handful of applications that achieved a high BSIMM score and a mega-giant ISV that achieved the same score, but over hundreds of applications using thousands of developers. We'd like to get that done through BSIMM as well, but questioning BSIMM's current utility is not the way to move the field forward. We're working with a few acquirers to outline such a measurement, and such research could easily find its way into the BSIMM.
The current sea change has to be a pain for vendors. A few years ago, these firms had to respond to hundreds or thousands of requests for pen testing access or results, then similar requests for site certification data, then requests for SLAs dealing with product security, then requests for source code to allow static analysis, and so on. Now there is a growing influx of requests for evidence related to software security activity. The requests ask for different things in different ways. It has to be a huge, expensive mess all over again and the vendors likely want to get out in front of it and both differentiate their advanced state and have acquirers accept their methodology as proof of software security prowess. The SAFECode Practices and assurance that these practices are followed by SAFECode members seems like a reasonable response to this situation.
Engineers Do It Alone
The SAFECode paper implies that some activities identified in the BSIMM may be irrelevant to software security. Our working assumption is that if professional leaders and members of multiple software security groups are carrying out an activity, there is some logical reason for them to do so. By paring down software assurance activities to a small engineering-focused subset, SAFECode's approach runs the risk of throwing the baby out with the bathwater. SAFECode proposes a one-size-fits-all approach to software security appropriate for ISVs and narrowly focused on engineering practices. The BSIMM has much wider applicability. BSIMM activities encompass not only software providers but also software acquirers. Put another way, they reflect the complexity of software assurance as practiced on the ground.
Suffice it to say, the belief that software security can be solved strictly within the engineering ranks has been rejected by the overall BSIMM community and by the vast majority of our other clients. Building security in is a business problem that requires an end-to-end business solution.
That's why BSIMM covers much more than just what architects, developers, and testers do. We asked firms, including SAFECode members, for data on everything that contributes to specifying, creating, and deploying secure software. We recorded the answers, and the superset became the BSIMM. Our continuing work with dozens of firms (including software vendors) of various sizes and across multiple verticals shows that their software security programs encompass policy, risk, compliance, governance, metrics, operations, SLAs, and related items—in addition to the all-important efforts in design, coding, and testing—reflecting their belief that it is the sum of these business processes, and the culture and environment they produce, that is critical to producing and maintaining secure software. Few, if any, of these organizations believe that secure software can result from efforts strictly within the development group; few, if any, would abdicate corporate responsibility strictly to the development group; and few, if any, believe that providing guidance only to developers, to the exclusion of other stakeholders, can result in secure software.
Mothra and Godzilla Sing Kumbaya
In the end, this entire methodology versus measurement dustup is only worth discussing so we can continue to clarify the field of software security. We agree with the SAFECode statement, "practitioners involved in the creation of a software security initiative will find value from both the SAFECode guidance and the BSIMM when reviewing or selecting their security processes." SAFECode is doing important work, and if your firm looks like the SAFECode firms, you should pay careful attention. The BSIMM is doing good work as well, especially when it comes to measurement. If you wonder how your firm stacks up against the state of the practice in software security, BSIMM is right for you. Peaceful co-existence and co-evolution are good things. We just want to know...which one of us is Godzilla?