Design and Architecture
Now, with our security requirements and specifications document, it’s finally time to start considering the architecture and design of our application—or for our purposes here, the security aspects of the design. If you’ve been following our advice on collecting requirements, you’re likely to find that many of the security details in the design will essentially “write themselves.” Okay, that’s an exaggeration, but at least many of the architectural security decisions should be made easier from a good collection of security requirements.
These decisions will also be heavily driven by the expectations for security tiers introduced earlier in this chapter—in particular, we're faced with architectural decisions of where to place specific security mechanisms for each security tier. If "tier 1" is your target, the decisions are likely to be quite simple; you can accomplish most of tier 1 within an application's presentation layer, especially server-side, with relatively little back-end coding required.
On the other hand, if your target is tier 2 or 3, you’ll want to give careful consideration to the security mechanisms you employ, and where they should be placed within your application. Building (essentially) intrusion detection functionality into an application is generally best suited for the business logic layer of an application, but there will still be some security functionality in other areas of the application, from the presentation layer through the data layer.
The important thing is to consider the architectural ramifications carefully, and then implement consistently throughout the application. Mixing and matching architectural components for convenience or familiarity is not a recipe for success.
So it’s on to considering the design or architecture of our application. We’re big believers in prescriptive guidance of positive practices, rather than (just) reviewing a design for flaws. So we’ll start there.
Prescriptive Design Practices
Perhaps the most important prescriptive practice to follow in designing secure applications is relying on common, well-tested infrastructure and reusing already vetted design patterns. By that we mean repeatedly using a set of design components that have proven themselves to be secure. (Admittedly, this presents a bit of a "chicken and egg" dilemma, since every component has to survive a first use before it can be considered proven.) Additionally, it's useful to use design checklists that verify certain positive compliance aspects of design components. Let's consider these things in some detail.
But first, let’s briefly take a look at the origins of these practices. Our secure designs should be built on top of sound architectural principles, such as those described by Saltzer and Schroeder in the 1970s.
From there, we should look at various security aspects of our design for construction soundness and turn those into checklists that can then be reused in later projects or verified and validated in the current project. Ideal targets for such checklists should include the following set of focus points:
Identification and authentication
Things to look for in a strong "I & A" mechanism include conforming to corporate standards for username and password (minimum length, acceptable character sets, and so on), but they extend well beyond that. Software designers should also ensure that all sensitive information is adequately protected at rest as well as in transit, and that appropriate credential management practices are followed; that is, all passwords should be securely salted and hashed before being stored in a repository. The credential repository itself should conform to any standardized architecture that is in place. Login credentials should also never be exposed in the URL field of a web browser via a GET method; they should instead be carried in the HTTP request body as POST parameters. For that matter, it is entirely possible that your enterprise already has a standardized identification and authentication, or identity management, architecture, and you'll need to make use of it. That is all well and good, but the preceding criteria should still be considered carefully.
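As an illustration of the salting-and-hashing criterion, a credential-handling routine might look like the following sketch. It uses the Python standard library's PBKDF2; the function names and the 600,000-iteration work factor are our own illustrative choices, not prescribed by the text.

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Salt and hash a password for storage in a credential repository."""
    # A unique random salt per credential defeats precomputed-hash attacks.
    salt = salt if salt is not None else os.urandom(16)
    # PBKDF2 with a high iteration count slows down offline guessing.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, stored)
```

Note that only the salt and digest are ever stored; the plaintext password is discarded immediately after derivation.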
Key management
Although the topic of key management is substantial, the private and symmetric keys used by an application require specific handling and must be adequately protected in accordance with IT/IS guidelines. The most frequent solutions here are key stores, restrictive file permissions, or, for highly protected systems, specialized hardware tokens.
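A minimal sketch of the file-permission approach mentioned above (POSIX-specific; the function name is our own illustrative choice):

```python
import os

def write_private_key(path: str, key_bytes: bytes) -> None:
    """Write key material so that only the owning service account can read it."""
    # Create the file with owner read/write only (0600) from the start,
    # rather than chmod-ing afterward, to avoid a brief exposure window.
    # O_EXCL makes the call fail if the file already exists.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(key_bytes)
```

For higher tiers, the same interface could be backed by an OS key store or a hardware token rather than the filesystem.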
Access control
This one is more problematic to verify through a simple checklist per se, but there are still some things that can be checked. The basic principle of access control inside an application is to ask whether a user, an entity, or a process should have access to the data or function it is requesting. That is, all data and function calls should in fact be designed as requests that can be authoritatively answered.
As such, the thing to look for at a design level is a centralized access control mechanism that enforces policy. The policy itself will be set—perhaps dynamically—elsewhere, but internally, there needs to be a means of answering the question of whether a request should be permitted.
What makes this problematic is that every access needs to follow this requesting methodology. We’ll discuss this in more detail in Chapter 5, “Testing Activities.”
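A centralized policy enforcement point of the kind described can be sketched as follows; the roles, resources, and policy table are hypothetical examples, not from the text.

```python
# A single policy table answers every permit/deny question. In a real
# system the policy would be set elsewhere, perhaps dynamically, but
# every access request still flows through one enforcement function.
POLICY = {
    ("analyst", "report", "read"): True,
    ("analyst", "report", "write"): False,
    ("admin", "report", "write"): True,
}

def check_access(role: str, resource: str, action: str) -> bool:
    """Authoritatively answer whether a request should be permitted."""
    # Default deny: anything not explicitly granted is refused.
    return POLICY.get((role, resource, action), False)
```

The crucial design property is that callers never consult the policy table directly; they ask `check_access`, so the decision logic lives in exactly one place.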
Boundary layers
Every application of even moderate complexity has numerous boundary layers. They can be between components, servers, classes, modules, and so on, but at a design level we generally have a bird's-eye view of all of them—at least if we're doing it right. From a security standpoint, boundary layers offer a positive opportunity to verify good practices like input validation and access control.
Most risk assessment methodologies implicitly map out boundary layers in their threat modeling process by defining security zones (aka trust boundaries). Each zone is then studied for the risks it poses.
Looking at an application’s boundary layers is similar, but generally offers a slightly more data-centric perspective on things.
Network connections
Network connections, both physical and VPN, are essentially a single boundary between components, but we list them separately here because they offer different types and levels of security controls. For one thing, in most enterprises the networks themselves are operated by the IT organization. Further, network-level security controls tend to be outside the direct scope of the application itself, but they can nonetheless be useful for independently enforcing some policies.
For example, a network layer between an application server and a database can enforce permitting only SQL network traffic between the two components. Although the network can’t often do much more than that, it does provide us with some useful controls that help us enforce some of Saltzer and Schroeder’s design principles—namely, compartmentalization, graceful failure, and least privilege in this case.
Inter-component boundaries
These are simply another form of boundary layer, but much like the network boundaries, separate application components can offer different types of opportunities for security controls.
Event logging
Event logging is a big topic for application developers, and one that is rarely well understood or adequately implemented. The key point most developers miss is that the customer for event logging should be the IT security or incident response team, so it's vital to consider their use cases with regard to event logging. More often than not, application event logs consist mostly of debugging information. Although that information is useful for debugging purposes, it's not all that's typically needed when responding to security incidents. The security team often needs more business-specific logging in order to determine the "who, what, where, when, and how" of an incident.
From a design checklist standpoint, you should be verifying that the security team has provided their log use cases and that those use cases will be incorporated into the application’s design. For details and examples of such use cases, see the discussion in Chapter 6.
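One common way to meet the incident response team's "who, what, where, when" needs is structured logging. The following sketch illustrates the idea; the field names and logger name are our own illustrative choices, not a prescribed schema.

```python
import json
import logging

# A dedicated security logger, separate from the debug log, so the IR
# team's records can be routed and retained independently.
security_log = logging.getLogger("app.security")

def log_security_event(who: str, what: str, where: str, outcome: str) -> str:
    """Emit one structured record capturing the facts an IR team needs."""
    record = {"who": who, "what": what, "where": where, "outcome": outcome}
    line = json.dumps(record)
    security_log.info(line)  # timestamp is added by the logging formatter
    return line
```

A machine-parseable format like this lets the security team's tooling answer incident questions without grepping through debug noise.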
Session management
Most modern platforms and application servers provide more than adequate tools for building robust session management into the applications running on them, but mistakes can still happen. We'll discuss this further in Chapter 4, "Implementation Activities," but for now, let's ensure that the available session management infrastructure will be used for our application.
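Where no platform facility exists and you must roll your own, the minimum bar looks something like this sketch. The 30-minute TTL is an assumed policy value, and a real deployment should prefer the platform's session facility, as noted above.

```python
import secrets
import time

SESSION_TTL_SECONDS = 1800  # assumed policy value: 30-minute idle limit

_sessions = {}  # token -> expiry time; a real system would persist this

def create_session() -> str:
    """Issue an unguessable session token with a bounded lifetime."""
    token = secrets.token_urlsafe(32)  # 256 bits from the platform CSPRNG
    _sessions[token] = time.time() + SESSION_TTL_SECONDS
    return token

def session_valid(token: str) -> bool:
    expiry = _sessions.get(token)
    return expiry is not None and time.time() < expiry
```

The two properties that matter are visible here: tokens come from a cryptographic random source, and every token expires.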
Protection of sensitive data
Enterprise applications carry all sorts of sensitive data these days, and it’s up to the developers to ensure that the sensitive data is being properly protected. As with protecting any secret, it’s vital to consider sensitive data at rest and in transit, because each state carries with it a different set of protection mechanisms that should be considered.
Use of external services
Any time our application has to make use of an external service—command-line interface, LDAP directory, SQL database, XML query engine, and so on—we have to ensure that the data being sent to the service is safe, after we've mutually authenticated those components, of course. There are several key concepts in that sentence. First, we can't assume that a service we're calling will adequately protect itself. Second, we have to ensure that the intent of our service request cannot be changed by whatever data we're sending along with it.
Essentially, we have to do proper input validation in the current module and proper output encoding of what will be sent to the service before making the call, and we have to do both in the context of understanding what the next service call is intended to do. That's a pretty tall order, because the encoding for a database call will differ from the encoding for an LDAP query, for instance.
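For the SQL case, the standard way to keep a request's intent immutable is a parameterized query, where the driver keeps data separate from the statement text. A small sketch (the table and column names are hypothetical):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    """Look up a user by name without letting the name alter the query."""
    # The "?" placeholder means the driver treats username strictly as
    # data; it can never be interpreted as SQL, whatever it contains.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()
```

The same principle applies to LDAP filters, shell commands, and XML queries, but each requires its own context-appropriate encoding, which is exactly the point made above.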
As you might imagine, the preceding list is by no means comprehensive, but it does represent a pretty common set of application aspects. For each of these common problems, we should put together a checklist of issues to ensure that we have properly addressed them in our own designs. And because these concerns are so common, we can use the list as the basis for some design patterns that we'll make repeated use of.
Now, some of the elements in the list aren’t necessarily discrete application components—say, for example, protecting sensitive data—but they are all things we should address as we consider the security aspects of our design.
It’s also worth giving careful thought to the feasibility of security aspects of a design. It is a common mistake to overengineer a design and basically attempt to protect everything all the time. The problems with this approach are numerous. For one thing, we’re likely to end up with a solution that is too costly for the problem we’re trying to solve. It might also be too complicated to be successful, or building it that way might simply take far more time than we have available. For that matter, it is also common to underengineer a design due to tight timelines, budgets, and so on. The key is to find the right balance, as in so many things.
So we have to make compromises, but we have to be able to do so in a principled, businesslike, and repeatable sort of way. That is, we have to have sound justification for the design decisions we make or don’t make. That’s where a risk management methodology comes in.
Risk management is one of McGraw’s three pillars of software security,4 and it helps us make decisions with confidence. Without a sound way of considering business risks, we’re inevitably going to make the wrong decisions by accepting risks we don’t understand. That’s a big gamble to subject the business to.
We’ll discuss threat modeling shortly, but one method that can be useful is to consider an application’s most likely threat profiles. Think of these as use cases, but from a security perspective: What are the most likely avenues of attack your application absolutely must be able to defend itself against?
For example, we often refer to the “coffee shop attack” when discussing web application security. That is, consider an attacker on an open Wi-Fi network (say, in a coffee shop) using a network sniffing tool to eavesdrop on all the network traffic traversing the Wi-Fi. Now, consider one of your application’s users using your application in that same coffee shop. Can your application withstand that level of scrutiny, or does it hemorrhage vital data such as user credentials, session tokens, or sensitive user data?
Similarly, when a mobile application is designed for a smartphone or tablet computer, the most likely risk a user faces is from data left behind on a lost or stolen device. If your application is on that lost/stolen device, what information could an attacker find on the device using forensic tools? Does your mobile application store sensitive data locally on the mobile device?
In designing your security mechanism, keeping a handful of these threat profiles in mind is a healthy thing to do. Of course, the threat profiles need to be specific to your application’s architecture, but it’s generally not too difficult to find data on attacks against similar architectures.
There are many aspects of design that directly overlap with implementation. One aspect, in particular, is in designing where to place various security implementations in an application’s design. For example, the next chapter discusses input validation extensively. If we were to implement input validation without any regard for our application’s design, we would be quite likely to end up with a functioning but unmaintainable mountain of junk.
The reason for this is that it is entirely feasible to build input validation code at just about any layer of abstraction within an application. Further, if input validation is implemented “on the fly,” there is a tendency to include such things as regular expressions throughout the code base. This is what can easily result in code that is basically impossible to maintain over time.
So, despite the fact that something like input validation—which essentially does nothing to enhance the functional features of an application—is generally viewed as an implementation detail, it is important to design it carefully into one's security architecture. Input validation code should be centralized in implementation and features, as appropriate. If there are multiple distinct functional layers in an application (for example, Web UI, XML processing, LDAP access, and so on), validation can be centralized on a per-layer basis, but it should never be scattered around randomly. The same goes for access control, which would be quite impossible to bolt on after the fact if it were not properly designed into the product from its conception. And in each of these cases, care should be taken to retain control on the server side of the application, regardless of what validation or access control might be done on the client.
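A per-layer, centralized validation registry of the sort described might be sketched like this; the field names and patterns are purely illustrative.

```python
import re

# One validator registry per functional layer (this one for the Web UI),
# instead of ad hoc regular expressions scattered through the code base.
WEB_VALIDATORS = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,32}$"),
    "order_id": re.compile(r"^\d{1,10}$"),
}

def validate(layer_validators: dict, field: str, value: str) -> bool:
    """Validate one input field against its layer's registered pattern."""
    pattern = layer_validators.get(field)
    if pattern is None:
        return False  # unknown fields fail closed
    return pattern.fullmatch(value) is not None
```

Because the patterns live in one registry per layer, tightening a rule is a one-line change rather than a hunt through the whole code base, which is precisely the maintainability argument made above.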
These design considerations can have an enormous impact as well. Again citing the case of input validation, if we implement our input validation at the application’s presentation layer, it will largely preclude us from being able to implement tier 2 or tier 3 security features into the application, simply because of the lack of features available to us in the presentation layer.
We’ll discuss detailed examples of this in the next chapter, but for now, let’s at least be entirely cognizant that our design decisions need to have a firm footing in the reality of our intended implementations.
Design Review Practices
The common denominator in all development methodologies is to start with a clear understanding of the proposed design of the product. Although we've seen design documents span an enormous spectrum of detail, there is simply no substitute for really knowing and understanding the product's design before proceeding.
Irrespective of your development methodology, you should have a fundamental description of the application and its components. A diagram visualizing all the components is a good starting point (see Figure 3.1). It should include all the physical as well as logical components of the application, at a bare minimum.
Figure 3.1 Diagram of a typical enterprise application design, top view.
Other useful things to include in design documentation include the following:
Each component of an application should be described, at least at a top level. What is it called? What is its primary functionality? What security requirements does it have? Who should access it?
What data does the application handle? What are the sensitivities of the data? Who should be allowed to view the data? Who should be allowed to alter the data?
How will data be exchanged through the various components of the application? What network protocols will be used? What format(s) will the data be exchanged in?
How is the overall system instantiated? What security assumptions are made during the bootstrapping? How do the application’s components mutually authenticate?
On shutdown, what is done with system resources, such as open files, temporary file space, and encryption keys? What residue is left behind and what is cleaned up?
The process and components for event logging, including both debugging and security logs.
The processes for handling system failures, from the relatively simple to the catastrophic.
The processes for installing component updates, at various levels and for various components within the application.
The preceding description is highly simplistic, but at the same time it can seem pretty daunting. If you really take the time to do each of these steps in detail, threat modeling a typical application can be an enormously time-consuming process, which can be time- and cost-prohibitive for many enterprises. As such, this outline is intended merely to give you an overview of what is involved in the threat modeling process. What's important from our standpoint is how to put any of this into practice.
We’ve had success breaking down this threat modeling approach into a simpler process, based on the work of many people in the software security field today.
After you have a clear picture of how the application will work, it’s useful to break the design into individual operational security zones, particularly for a distributed application with multiple servers and one or more clients.
Next, for each zone and each element in each zone, articulate (preferably in a table format) each of the following: who, what, how, impact, and mitigation.
Who can access the zone or element? Not just the registered, authorized users, but who could access the item in general. If it is (say) an application on a mobile device, consider the legitimate user, a user accessing a lost or stolen device, and so on. Try to be reasonably comprehensive in this step. What you’re doing is articulating the threat agents.
What can each threat agent do to potentially harm the system? Can the threat agents steal a hard drive from a server? Access a table in a SQL database? Masquerade as a legitimate user and get to sensitive data inside the application? And so forth.
How can the threat agents carry out each of the things in the what list?
If the attack is successful, what is its impact on the business? What are the direct costs as well as the indirect costs? In what ways is the business’s reputation tarnished?
For any given attack, what mitigation options are available? How much would they cost (in “low,” “medium,” and “high” terms if more quantifiable data are not available)?
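The who/what/how/impact/mitigation table above can be captured in a simple record type, one row per zone element. This sketch uses entirely hypothetical field values for a mobile client zone:

```python
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    """One row of the per-zone threat modeling table."""
    zone: str        # security zone or element under analysis
    who: str         # threat agent
    what: str        # potential harm
    how: str         # attack vector
    impact: str      # business impact, direct and indirect
    mitigation: str  # available countermeasure
    cost: str        # "low" / "medium" / "high" when no quantified data exists

entry = ThreatEntry(
    zone="mobile client",
    who="possessor of a lost or stolen device",
    what="read locally cached user data",
    how="forensic extraction of device storage",
    impact="disclosure of customer records; reputational damage",
    mitigation="encrypt the local cache; avoid storing sensitive data on device",
    cost="medium",
)
```

Collecting the rows in one list per zone gives the review team a single artifact to walk through during the discussion that follows.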
Assuming you’ve assembled the right team of design reviewers, it’s quite likely this approach will result in some useful and meaningful discussions about the application’s design.
You should try to consider as many aspects of the application’s design as feasible. These reviews can vary from taking a few hours to taking many days, depending on an application’s complexity, and how much detailed analysis is expected.
So it needn’t be so difficult to do. And we’ve certainly found the payoffs to justify the time spent. You might find Cigital’s approach to be pretty similar in many ways to the threat modeling process we’ve just described.