Software [In]security: A Software Security Framework: Working Towards a Realistic Maturity Model
By understanding and measuring ten real software security initiatives, we are building a maturity model for software security using a software security framework developed after a decade of commercial experience.
Software security is coming into its own as a discipline. There are now at least twenty large-scale software security initiatives underway that we are either aware of or directly involved in. Though particular methodologies differ (think OWASP CLASP, Microsoft SDL, or the Cigital Touchpoints), many initiatives share common ground. In this article we introduce a software security framework (SSF) to help understand and plan a software security initiative. This framework is being used to build an associated maturity model.
A Software Security Framework
These days many developers and development managers have some basic understanding of why software security is important. In 1999, when John Viega and McGraw started writing Building Secure Software as a series of articles for IBM's developerWorks, there was very little published software security work. The idea of software security, though compelling, was new to the commercial software world. After a decade of hard work, we are pleased with the progress the field has made. Business numbers from the field provide objective evidence of the tremendous growth software security is experiencing.
By 2006, software security had reached a critical point with the publication of two books describing the kinds of activities an organization should carry out in order to build secure software (Software Security and The Security Development Lifecycle). Today, over twenty large-scale software security initiatives are underway in organizations as diverse as multi-national banks, independent software vendors, the U.S. Air Force, and embedded systems manufacturers.
Over the years we have come to know that getting security right requires more than just the technical chops necessary to do things like create a better sorting algorithm. Security encompasses business, social, and organizational aspects as well.
Our aim with the software security framework is to capture an overall high-level understanding that encompasses all of the leading software security initiatives. Note that individually these initiatives follow different methodologies (including the top three mentioned above or a homegrown approach). Regardless of methodology, we have identified a number of common domains and practices shared by most software security initiatives. Our SSF provides a common vocabulary for describing the most important elements of a software security initiative, thereby allowing us to compare initiatives that apply different methodologies, operate at different scales, or create different work products.
Software security is the result of many activities. People, process, and automation are all required. The SSF allows us to discuss them all without becoming mired in details. To that end, we believe a simple approach that gets to the heart of the matter trumps an exhaustive approach with a Byzantine result.
The table below shows the SSF. There are twelve practices organized into four domains. The domains are:
- Governance: Those practices that help organize, manage, and measure a software security initiative. Staff development is also a central governance practice.
- Intelligence: Practices that result in collections of corporate knowledge used in carrying out software security activities throughout the organization. Collections include both proactive security guidance and organizational threat modeling.
- SDL Touchpoints: Practices associated with analysis and assurance of particular software development artifacts and processes. All software security methodologies include these practices.
- Deployment: Practices that interface with traditional network security and software maintenance organizations. Software configuration, maintenance, and other environment issues have direct impact on software security.
| Governance | Intelligence | SDL Touchpoints | Deployment |
| --- | --- | --- | --- |
| Strategy and Metrics | Attack Models | Architecture Analysis | Penetration Testing |
| Compliance and Policy | Security Features and Design | Code Review | Software Environment |
| Training | Standards and Requirements | Security Testing | Configuration Management and Vulnerability Management |
There are three practices under each domain. We are currently in the process of fleshing out a maturity model for each practice. To provide some idea of what a practice entails, we include a one or two sentence explanation of each.
In the governance domain, the strategy and metrics practice encompasses planning, assigning roles and responsibilities, identifying software security goals, determining budgets, and identifying metrics and gates. The compliance and policy practice is focused on identifying controls for compliance regimens such as PCI and HIPAA, developing contractual controls such as Service Level Agreements to help control COTS software risk, setting organizational software security policy, and auditing against that policy. The training practice has always played a critical role in software security because software developers and architects often start with very little security knowledge.
The intelligence domain is meant to create organization-wide resources. Those resources are divided into three practices. Attack models capture information used to think like an attacker: threat modeling, abuse case development and refinement, data classification, and technology-specific attack patterns. The security features and design practice is charged with creating usable security patterns for major security controls (meeting the standards defined in the next practice), building middleware frameworks for those controls, and creating and publishing other proactive security guidance. The standards and requirements practice involves eliciting explicit security requirements from the organization, determining which COTS to recommend, building standards for major security controls (such as authentication, input validation, and so on), creating security standards for technologies in use, and creating a standards review board.
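To make the idea of a reusable security control concrete, here is a minimal sketch of the kind of input validation helper a security features and design practice might publish for organization-wide reuse. The input kinds and patterns are purely illustrative assumptions, not part of any standard named in this article.

```python
import re

# Hypothetical example: a tiny allowlist validator of the sort an
# organization might publish as a reusable security control. The
# input kinds and regular expressions below are illustrative only.
VALIDATORS = {
    "username": re.compile(r"[A-Za-z0-9_]{3,32}"),
    "us_zip": re.compile(r"\d{5}(-\d{4})?"),
}

def validate(kind: str, value: str) -> bool:
    """Return True only if the entire value matches the allowlist for kind."""
    pattern = VALIDATORS.get(kind)
    if pattern is None:
        # Fail closed: unknown input kinds are rejected, not passed through.
        raise ValueError(f"no validation rule registered for {kind!r}")
    return pattern.fullmatch(value) is not None
```

Publishing a control like this (backed by a real framework rather than a toy script) lets developers meet the validation standard without each team reinventing, and subtly misimplementing, the same checks.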
The SDL Touchpoints domain is probably the most familiar of the four. This domain includes essential software security best practices that are integrated into the SDLC. The two most important software security practices are architecture analysis and code review. Architecture analysis encompasses capturing software architecture in concise diagrams, applying lists of risks and threats, adopting a process for review (such as STRIDE or Architectural Risk Analysis), and building an assessment and remediation plan for the organization. The code review practice includes use of code review tools, development of customized rules, profiles for tool use by different roles (for example, developers versus analysts), manual analysis, and tracking/measuring results. The security testing practice is concerned with pre-release testing including integrating security into standard quality assurance processes. The practice includes use of black box security tools (including fuzz testing) as a smoke test in QA, risk driven white box testing, application of the attack model, and code coverage analysis. Security testing focuses on vulnerabilities in construction.
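As a rough illustration of what a "customized rule" for code review can look like, the sketch below flags banned C functions in source text. Real static analysis tools operate on parsed code rather than regular expressions, and the banned-function list here is an assumption chosen for the example, not a rule set from any tool mentioned above.

```python
import re

# Hypothetical example: the simplest possible customized code review
# rule -- a grep-style scan for banned C library calls, each paired
# with remediation advice. Illustrative only; production tools parse
# the code instead of pattern-matching lines.
BANNED_CALLS = {
    "gets": "use fgets with a buffer size",
    "strcpy": "use strncpy or strlcpy with explicit bounds",
    "sprintf": "use snprintf",
}
CALL_RE = re.compile(r"\b(%s)\s*\(" % "|".join(BANNED_CALLS))

def scan(source: str, filename: str = "<input>") -> list:
    """Return one finding per banned call, with line number and advice."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in CALL_RE.finditer(line):
            func = match.group(1)
            findings.append(
                f"{filename}:{lineno}: {func}() is banned; {BANNED_CALLS[func]}"
            )
    return findings
```

The point of the practice is not the scanner itself but the organizational loop around it: rules encode local standards, different profiles expose different rule sets to developers and analysts, and findings feed into tracking and measurement.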
By contrast, the penetration testing practice involves more standard outside→in testing of the sort carried out by security specialists. Penetration testing focuses on vulnerabilities in final configuration, and provides direct feeds to defect management and mitigation. The software environment practice concerns itself with OS and platform patching, Web application firewalls, installation and configuration documentation, application monitoring, change management, and ultimately code signing. Finally, the configuration management and vulnerability management practice concerns itself with patching and updating applications, version control, defect tracking and remediation, incident handling, and security sign-off by the PMO.
Science and Pragmatism
As you can see, the SSF covers lots of ground, and the practices deserve a more detailed treatment. The next step is to create a maturity model based on the SSF that reflects reality.
A maturity model is appropriate because improving software security almost always means changing the way an organization works — something that doesn’t happen overnight. We aim to create a way to assess the state of an organization, prioritize changes, and demonstrate progress. We understand that not all organizations need to achieve the same security goals, but we believe all organizations can be measured with the same yardstick.
One could build a maturity model for software security theoretically (by pondering what organizations should do) or one could build a maturity model by understanding what a set of distinct organizations have already done successfully. The latter approach is both scientific and grounded in the real world, and it is the one we are taking with the SSF. Towards that end, we have scheduled fact-finding interviews with executives in charge of ten of the top software security initiatives. Once we have gathered and processed data from those interviews, we will construct a realistic maturity model.