
Software [In]security: vBSIMM (BSIMM for Vendors)

It's tough enough to build secure software inside of your own firm, but how do you ensure that your third-party software vendors practice good software security? As software security expert Gary McGraw explains, the Building Security In Maturity Model (BSIMM) can play a central role in this effort.

Software security involves some thorny issues: how to train thousands of developers, how to review millions of lines of code, what exactly constitutes architecture risk analysis, and so on. Things are tough enough when you are charged with building secure software inside of your own firm, but what about all the third-party software you rely on that others produce? How do you ensure that your software vendors (including both those who build code especially for you and commercial off-the-shelf (COTS) vendors) practice good software security in a way that doesn't damage your business? Originally conceived as a way to measure software security initiatives, the BSIMM can play a central role.

We have covered the BSIMM extensively in this column (for one example, see the May 2010 article introducing BSIMM2). Here, we introduce a compact version of the BSIMM for vendors called vBSIMM that leverages the power of attestation. You can think of vBSIMM as a foundational security control for vendor management of third-party software providers.

Why Third-Party Software Matters

Every modern enterprise uses lots of third-party software. Some of this third-party software is custom built to specifications, some of it is COTS, and some lives in the cloud as part of a software-as-a-service (SaaS) model. Many big firms, especially in the financial services vertical, are working hard on software security and are looking for ways to identify and manage the risk of third-party software.

Ideally, we could directly measure a piece of software with a "security-meter" and determine whether or not it is suitable. You know, stick the software in a magic box and see whether the green light or the red light turns on. Unfortunately, this is a technical impossibility (see, for example, the Halting Problem). In the end, even the most extensive penetration testing and/or static analysis regimen is simply a badness-ometer. It is very nice to know that a piece of software you are thinking of buying may be missing one or two silly bugs, but don't overlook the fact that the OWASP top ten makes a pretty lousy badness-ometer.

Because of the badness-ometer problem, we retreated from the direct measurement of software security to observing software security activities in the lifecycle (that is SDLC + touchpoints) in the BSIMM project. The notion is to gain a clear understanding of the role that security plays in a firm's SDLC (or, more realistically, its multiple SDLCs). As an example, the BSIMM has been used to measure Microsoft's SDL efforts twice.

The BSIMM contains 109 activities divided into 12 practices built through the process of observing 33 firms. To date, we have made over 60 measurements with the BSIMM and validated its utility with hardcore statistical analysis. A direct BSIMM measurement can shed plenty of light on a software security initiative and can even be used to compare business units, firms, and verticals. Ultimately, a BSIMM score provides objective evidence of software security controls embedded in an SDLC.

Vendor Control in the BSIMM: Measuring Yourself

The BSIMM includes five specific activities (out of 109) that are relevant to controlling the software security risk associated with third-party vendors. These are worth calling out because they are activities that should be performed by all firms acquiring third-party software. They are:

  1. Compliance & Policy activity 2.4: Paper all vendor contracts with SLAs compatible with policy. Vendor contracts include a service level agreement (SLA) ensuring that the vendor will not jeopardize the organization's compliance story. Each new or renewed contract contains a standard set of provisions requiring the vendor to deliver a product or service compatible with the organization's security policy.
  2. Compliance & Policy activity 3.2: Impose policy on vendors. Vendors are required to adhere to the same policies used internally. Vendors must submit evidence that their software security practices pass muster. Evidence could include code review results or penetration test results.
  3. Standards and Requirements activity 2.1: Communicate standards to vendors. The software security group (SSG) works with vendors to educate them and promote the organization's security standards. A healthy relationship with a vendor cannot be guaranteed through contract language alone. The SSG engages with vendors, discusses the vendor's security practices, and explains in concrete terms (rather than legalese) what the organization expects of the vendor. Any time a vendor adopts the organization's security standards, it's a clear win.
  4. Standards and Requirements activity 2.5: Create SLA boilerplate. The SSG works with the legal department to create standard SLA boilerplate for use in contracts with vendors and outsourcing providers. The legal department understands that the boilerplate helps prevent compliance or privacy problems. Under the agreement, vendors and outsourcing providers must meet company software security standards.
  5. Training 3.2: Provide training for vendors or outsource workers. The organization offers security training for vendors and outsource providers. Spending time and effort helping suppliers get security right is easier than trying to figure out what they screwed up later on. In the best case, outsourced workers receive the same training given to employees.

Every firm that acquires third-party software (whether custom, COTS, or anything in between) should take the time to determine how well they are performing these five activities with each supplier. If the answer is "not very well" or "not at all," there should be urgency to improve, especially with respect to the providers of your most critical applications.

Now that you have an idea of the maturity with which your firm deals with software vendors, we can turn to the problem of measuring the maturity with which the software vendors deal with the security of the software they provide. Probably the most effective third-party vendor software risk management approach would be to ask for an objectively determined BSIMM score from each third-party vendor. With seven major software vendors currently part of the BSIMM Community (Adobe, Google, Intuit, Microsoft, SAP, Symantec, and VMware), this may work in some cases.

The problem is that getting a real BSIMM measurement is heavyweight and takes time. Firms wanting a lightweight alternative to use with their smaller or less critical third-party vendors (not already part of the BSIMM Community) can consider vBSIMM an easier route with a much lower bar.

vBSIMM: Measuring Vendors

Of the twelve practices in the BSIMM Software Security Framework (see below), notice that the five activities above recommended for internal measurement are clustered in Governance (Compliance and Policy, Training) and Intelligence (Standards and Requirements). This is one of the reasons we have chosen to emphasize five different practices in the vendor-focused vBSIMM approach. They are: Architecture Analysis, Code Review, Security Testing, Penetration Testing, and Configuration Management & Vulnerability Management.



The BSIMM Software Security Framework: twelve practices in four domains

  Governance: Strategy and Metrics; Compliance and Policy; Training
  Intelligence: Attack Models; Security Features and Design; Standards and Requirements
  SSDL Touchpoints: Architecture Analysis; Code Review; Security Testing
  Deployment: Penetration Testing; Software Environment; Configuration Management and Vulnerability Management

Within these five practices, there are 13 level one (easy) activities in the BSIMM model, broken out as follows: Architecture Analysis (4), Code Review (3), Security Testing (2), Penetration Testing (2), and Configuration Management & Vulnerability Management (2). Of these 13 activities, six are very commonly observed and we will refer to these as the six core activities.

The vBSIMM analysis involves a self-assessment (with legal attestation) of the 13 activities, with a special emphasis on the six core activities. Here's how it works.

The six core vBSIMM activities that we feel should probably be carried out by any third-party vendor are as follows:

  1. Architecture Analysis activity 1.1: Perform security feature review. To get started with architecture analysis, center the analysis process on a review of security features. Reviewers first identify the security features in an application (authentication, access control, use of cryptography, etc.) then study the design looking for problems that would cause these features to fail at their purpose or otherwise prove insufficient. At higher levels of maturity this activity is eclipsed by a more thorough approach to architecture analysis not centered on features.
  2. Code Review activity 1.4: Use automated tools along with manual review. Incorporate static analysis into the code review process in order to make code review more efficient and more consistent. The automation does not replace human judgment, but it does bring definition to the review process and security expertise to reviewers who are not security experts.
  3. Security Testing activity 1.1: Ensure QA supports edge/boundary value condition testing. The QA team goes beyond functional testing to perform basic adversarial tests. They probe simple edge cases and boundary conditions. No attacker skills required.
  4. Penetration Testing activity 1.1: Use external penetration testers to find problems. Many organizations are not willing to address software security until there is unmistakable evidence that the organization is not somehow magically immune to the problem. If security has not been a priority, external penetration testers demonstrate that the organization's code needs help. Penetration testers could be brought in to break a high-profile application in order to make the point.
  5. Configuration Management & Vulnerability Management activity 1.1: Create or interface with incident response. The SSG is prepared to respond to an incident. The group either creates its own incident response capability or interfaces with the organization's existing incident response team. A regular meeting between the SSG and the incident response team can keep information flowing in both directions.
  6. Configuration Management & Vulnerability Management activity 1.2: Identify software defects found in operations monitoring and feed them back to development. Defects identified through operations monitoring are fed back to development and used to change developer behavior. The contents of production logs can be revealing (or can reveal the need for improved logging).
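To make Security Testing activity 1.1 concrete, here is a minimal sketch of what "edge/boundary value condition testing" looks like in practice. The validator and its limits are hypothetical examples of ours, not drawn from the BSIMM; the point is that QA probes values exactly at and just past each boundary, not only the functional happy path.

```python
# Hypothetical example of ST1.1: QA probes edge cases and boundary
# conditions. No attacker skills required -- just testing the limits.

def validate_transfer_amount(amount: int, balance: int) -> bool:
    """Accept a transfer only if 0 < amount <= balance."""
    return 0 < amount <= balance

def test_boundaries():
    balance = 100
    # Functional case a typical QA team already covers
    assert validate_transfer_amount(50, balance)
    # Boundary values: exactly at and just past each limit
    assert validate_transfer_amount(1, balance)          # smallest legal value
    assert validate_transfer_amount(100, balance)        # exactly the balance
    assert not validate_transfer_amount(0, balance)      # zero amount
    assert not validate_transfer_amount(-1, balance)     # negative amount
    assert not validate_transfer_amount(101, balance)    # one over the balance
    assert not validate_transfer_amount(2**63, balance)  # absurdly large value

test_boundaries()
```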

By considering other level one activities in the five Practice areas emphasized by vBSIMM, we can round out our simple scoring system. Here are the seven "halo" activities:

  1. Architecture Analysis activity 1.2: Perform design review for high-risk applications. The organization learns about the benefits of architecture analysis by seeing real results for a few high-risk, high-profile applications. If the SSG is not yet equipped to perform an in-depth architecture analysis, it uses consultants to do this work.
  2. Architecture Analysis activity 1.3: Have SSG lead review efforts. The SSG takes a lead role in performing architecture analysis in order to begin building the organization's ability to uncover design flaws. Architecture analysis is enough of an art that the SSG needs to be proficient at it before they can turn the job over to the architects, and proficiency requires practice. The SSG cannot be successful on its own either — they will likely need help from the architects or implementers in order to understand the design. With a clear design in hand, the SSG might carry out the analysis with a minimum of interaction with the project team. At higher levels of maturity, the responsibility for leading review efforts shifts towards software architects.
  3. Architecture Analysis activity 1.4: Use risk questionnaire to rank applications. At the beginning of the AA process, the SSG uses a risk questionnaire to collect basic information about each application so that it can determine a risk classification and prioritization scheme. Questions might include, "Which programming languages is the application written in?" "Who uses the application?" and "Does the application handle PII?" A qualified member of the application team completes the questionnaire. The questionnaire is short enough to be completed in a matter of hours. The SSG might use the answers to bucket the application as high, medium, or low risk.
  4. Code Review activity 1.1: Create a top N bugs list (real data preferred). The SSG maintains a list of the most important kinds of bugs that need to be eliminated from the organization's code. The list helps focus the organization's attention on the bugs that matter most. A generic list could be culled from public sources, but a list is much more valuable if it is specific to the organization and built from real data gathered from code review, testing, and actual incidents. The SSG can periodically update the list and publish a "most wanted" report.
  5. Code Review activity 1.2: Have SSG perform ad hoc review. The SSG performs an ad hoc code review for high-risk applications in an opportunistic fashion. For example, the SSG might follow up the design review for high-risk applications with a code review. Replace ad hoc targeting with a systematic approach at higher maturity levels.
  6. Security Testing activity 1.2: Share security results with QA. The SSG shares results from security reviews with the QA department. Over time, Quality Assurance Engineers learn the security mindset.
  7. Penetration Testing activity 1.2: Feed results to defect management and mitigation system. Penetration testing results are fed back to development through established defect management or mitigation channels, and development responds using their defect management and release process. The exercise demonstrates the organization's ability to improve the state of security.
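Architecture Analysis activity 1.4 above can be sketched as a small scoring routine. The questions, weights, and thresholds below are hypothetical illustrations of ours (the BSIMM does not prescribe any particular questionnaire); they show how a few answers can bucket an application as high, medium, or low risk.

```python
# Illustrative sketch of AA1.4: a short risk questionnaire whose answers
# drive a risk classification. Questions and weights are hypothetical.

def risk_bucket(handles_pii: bool, internet_facing: bool,
                user_population: str) -> str:
    """Return 'high', 'medium', or 'low' from questionnaire answers."""
    score = 0
    score += 2 if handles_pii else 0
    score += 2 if internet_facing else 0
    # Who uses the application? Wider exposure means more risk.
    score += {"public": 2, "partners": 1, "internal": 0}[user_population]
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

print(risk_bucket(handles_pii=True, internet_facing=True,
                  user_population="public"))    # high
print(risk_bucket(handles_pii=False, internet_facing=False,
                  user_population="internal"))  # low
```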

The scoring works like this. Sum the number of core activities the vendor performs; that is component X. Sum the number of halo activities performed; that is component Y. The score is X.Y, so a "perfect" vBSIMM score would be 6.7.
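The X.Y rule reduces to a few lines of code. The shorthand activity labels below are ours, matching the six core and seven halo activities listed above.

```python
# vBSIMM scoring: X counts core activities observed, Y counts halo
# activities observed, and the score is reported as "X.Y".

CORE = ["AA1.1", "CR1.4", "ST1.1", "PT1.1", "CMVM1.1", "CMVM1.2"]
HALO = ["AA1.2", "AA1.3", "AA1.4", "CR1.1", "CR1.2", "ST1.2", "PT1.2"]

def vbsimm_score(observed: set) -> str:
    x = sum(1 for a in CORE if a in observed)
    y = sum(1 for a in HALO if a in observed)
    return f"{x}.{y}"

print(vbsimm_score(set(CORE) | set(HALO)))        # 6.7, the "perfect" score
print(vbsimm_score({"AA1.1", "CR1.4", "AA1.2"}))  # 2.1
```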

As the software consumer (that is, a firm with third-party vendors), you are welcome to set the bar where you will as far as vBSIMM use is concerned. You can even codify the rules into an SLA as suggested in Standards and Requirements activity 2.5: Create SLA boilerplate.


A self-assessment according to this scheme is easy. The main difficulty is that people (and firms) tend toward "grade inflation" during self-assessment. One way to combat this is by asking people to sign on the dotted line, attesting that the information they are providing is correct.

Here is a simple attestation form for use with the vBSIMM.

vBSIMM Is Only a Start

The vBSIMM scheme is far from perfect, and it does nothing to guarantee that any particular vendor product is actually secure enough for all uses. It is far better than no vendor control at all, however, and in our opinion much superior to a badness-ometer-based approach using after-the-fact penetration testing focused only on a handful of bugs.
