
Enterprise Software Security: Design Activities

A perfectly coded but poorly designed application can end up having egregious security defects. The authors of Enterprise Software Security: A Confluence of Disciplines take a look at "design" in a general sense and include some aspects that you might or might not consider to be design work per se. These include requirements and specifications.

Let’s get down to business by diving into some specific things we can accomplish together in our software development efforts. Design is a great starting point. Even for those of you following various agile (or other nonwaterfall) development methodologies, there’s always some thought (if not documentation) given to the design aspects of a software project. As such, we’re going to take a look at “design” in a general sense and include some aspects that you might or might not consider to be design work per se. These include requirements and specifications. And again, even agile practitioners should find value in these discussions.

But let’s start by laying some foundations of what can and should be achieved—from a security standpoint, of course—while we’re designing our project. We know that a perfectly coded but poorly designed application can end up having egregious security defects. Perhaps more to the point, having an exceptionally clear picture of the application before fully implementing it can only help. We realize that this concept flies in the face of some development practices, most notably the family of agile development techniques. At the same time, we also support the notion of prototyping portions of code in order to better develop and understand the design itself. Such prototyping can take many forms, from rudimentary software, to wireframes, to notecard interactions with co-developers.

We discuss here two different categories of activity: positive practices to follow, and reviewing an existing design for security defects. Both are important, but they’re also very different in how we’ll approach them.

Security Tiers

Before we proceed, though, we want to introduce a concept here; we refer to it as security tiers. We think it’s useful to consider at least three tiers of security readiness as defined shortly. Note that we’re in no way trying to define a maturity model here; it’s simply worthwhile to consider a few security tiers, which will help steer us in the right direction as we proceed. Also, some projects might deem a low tier of security to be quite adequate, even when developed by teams that are highly mature in their software security practices. Thus, these security tiers refer to the state of the end product, not the maturity of the development team per se.

We’re also not referring here to identity realms like one might find with single sign-on and other identity management solutions. In those situations, one has to pay close attention to transitive trust models in which an intruder can gain access to a user’s session in a low state and use those shared credentials to breach a higher security state.

No, our concept here of security tiers is simply one of readiness within a single system. We believe that the concept is useful particularly at a design level to decide what security solutions to include and how to include them within an application system, whether it be simple or highly complex.

With that in mind, we’ll keep the tier definitions to a simple low, medium, and high here and define them as shown in Table 3.1.

Table 3.1 Tier Definitions

Tier     Definition
Low      Withstands common attacks; no additional security functionality beyond normal functional requirements.
Medium   Withstands attacks and reports and alerts security personnel about their nature.
High     Withstands attacks, reports them, and programmatically takes evasive action against the attacker.
We think of the low tier here as meeting a bare minimum set of security standards for secure software. Basically, software written to this tier ought to be able to withstand attacks such as those discussed in Chapter 1, but need not contain any further security functionality per se. This is, of course, in addition to meeting its normal functional requirements.


At the medium tier, software should not only be able to withstand attacks but also report and alert security personnel appropriately about the nature of those attacks. (Of course, care must be taken to ensure that the event logs can never themselves be used as a means of attack, such as XSS.)


At the high tier, our software can withstand attacks, report problems to security personnel, and programmatically take evasive maneuvers against its attackers. The evasive maneuvers might include simple account locking (with due care to prevent intentional denial of service), user data encryption, recording of intruder information to be used as evidence, and myriad other activities. We think of this tier as a highly desirable state, particularly for enterprise software conducting substantial and valuable business.
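The relationship among the three tiers can be sketched in code. This is our own illustrative dispatcher, not something from the book: the tier names match Table 3.1, but the function and callback names (handle_detected_attack, alert, evade) are hypothetical.

```python
from enum import Enum

class SecurityTier(Enum):
    LOW = 1     # withstand the attack (block the malicious input)
    MEDIUM = 2  # ...plus report and alert security personnel
    HIGH = 3    # ...plus take programmatic evasive action

def handle_detected_attack(tier, user, alert, evade):
    """Dispatch on the configured tier. The attack is always blocked;
    each higher tier adds behavior on top of the tier below it."""
    actions = ["blocked"]                        # low tier: withstand
    if tier.value >= SecurityTier.MEDIUM.value:
        alert(user)                              # medium: notify the security team
        actions.append("alerted")
    if tier.value >= SecurityTier.HIGH.value:
        evade(user)                              # high: e.g., lock the account
        actions.append("evaded")
    return actions
```

Note that the tiers are cumulative by design: a high-tier system still blocks and reports, which is why the checks fall through rather than being mutually exclusive.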

These tiers will serve as a simple but fundamental basis for discussing different things that the development team and the security team can concentrate on during a project’s design. Naturally, we’ll see them again in subsequent chapters.

It’s also worth emphasizing that the low tier is already at or above the level of much of today’s software, because so much of what’s running today is unable to withstand even relatively basic attacks.

We should also briefly talk about the rationale for having tiers in the first place. To illustrate our reasoning, let’s use a common attack like cross-site scripting (commonly called XSS). For the sake of this discussion, let’s assume that our application contains a customer registration form page that prompts the user for his name, street address, email address, and so on. Now, along comes an attacker who attempts to enter some maliciously constructed XSS data into one or more of the fields of our registration form.

If our software has been written to the low tier described previously, it would prevent the XSS data from causing any damage. The <script> input will be stopped, and the user will typically be asked to reenter the malformed data. Perhaps this would even happen in the client browser by way of some JavaScript input validation.
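A minimal server-side check in the spirit of the low tier might look like the following sketch. The allow-list pattern and function name are our own illustration; a production system should rely on a vetted validation and output-encoding library rather than a hand-rolled regex.

```python
import re

# Allow-list: letters, digits, spaces, and common address punctuation.
# Rejecting everything else stops <script> payloads along with other
# malformed input, without trying to enumerate attack patterns.
ADDRESS_PATTERN = re.compile(r"^[A-Za-z0-9 .,#'\-/]{1,100}$")

def is_valid_street_address(value: str) -> bool:
    """Return True only if the value matches the allow-list."""
    return bool(ADDRESS_PATTERN.match(value))
```

The design choice here is allow-listing (define what is legal) rather than deny-listing (enumerate known attacks), since the latter is notoriously easy to bypass.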

However, in an enterprise computing environment, we might want our software to do something more. After all, a street address containing <script>alert(document.cookie)</script> (or some far more dangerous scripting nastiness) can only be an attempt to attack our software and not a legitimate street address. Particularly if our application’s context is a business processing system, merely stopping an attack is just not adequate.

For a business system, we’d no doubt want to provide some information logging for our security team to look at, perhaps by means of an existing enterprise intrusion detection and monitoring infrastructure. That’s where the medium tier comes in. Here, we’d make use of the security monitoring capabilities to provide useful, actionable business data to the security team. We’ll discuss what sorts of things should be logged later in this chapter, as well as in Chapter 6, “Deployment and Integration,” but for now, suffice it to say that we’d want the security team to have the data they’d need in order to take some appropriate administrative action against the application user.
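The medium-tier idea of actionable, business-relevant logging can be sketched as a structured event. The field names here are our assumptions, not a prescribed schema; the point is that the record captures who, from where, and what was attempted, unlike a developer-oriented debug line.

```python
import json
from datetime import datetime, timezone

def security_event(event_type, user_id, source_ip, field, payload):
    """Build a structured security event carrying the business context a
    security team needs to take administrative action."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,          # e.g. "xss_attempt"
        "user_id": user_id,           # which account was in use
        "source_ip": source_ip,       # where the request came from
        "field": field,               # which form field was attacked
        # Store the payload escaped so the log itself can never be
        # replayed as an XSS vector by a log-viewing tool.
        "payload": payload.replace("<", "&lt;").replace(">", "&gt;"),
    }

def emit(event, sink):
    """Send the event to a monitoring sink as one JSON line."""
    sink(json.dumps(event))
```

Escaping the payload before it is written addresses the caution noted earlier: the event log must never become a means of attack itself.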

And in some contexts, we might still want to take this concept further. When we detect a clear attack like the one in this scenario, we might want to have our software itself take some evasive actions. These might include locking the offending user’s account, scrubbing the user’s account of any privacy information, and so forth.
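A high-tier evasive response might be sketched as follows. The function and record shapes are hypothetical; as discussed later in this chapter, automatic locking must be paired with a human unlock procedure before it is enabled.

```python
def take_evasive_action(account, audit_log):
    """Lock the account, preserve evidence, and scrub cached privacy
    data. Evidence is recorded *before* anything is scrubbed."""
    account["locked"] = True
    # Preserve evidence of the action for later review.
    audit_log.append({"action": "account_locked", "user": account["id"]})
    # Scrub privacy data from the session/cache, not the system of record.
    account["cached_pii"] = None
    return account
```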

This scenario helps put in context how and why you might consider designing and writing a particular piece of software for an appropriate security tier. And, more to the point here, it’s vital to start thinking about how you’ll design these things into your software as early in the process as possible.

A great first step down this path is to consult with your local IT security team and/or your incident response team. Since they are ultimately the “consumers” of the security components of an application, they absolutely need to be included in this process. For example, the contents of the logging information (tier 2) should be deliberately generated to support the incident response process. The type of information needed by a CSIRT (computer security incident response team) tends to be significantly different from traditional debugging logs, because it must include business-relevant data to find and catch an intruder. The principal purpose of debugging logs, on the other hand, is for developers to find and remove software bugs from a system.

It turns out that many of the decisions we make at this early design stage of a project, irrespective of any software development life cycle (SDLC) methodology we’re following, have far-reaching ramifications from a security standpoint. For example, it might seem like a good idea to design and build some input validation all the way out at the application client code—perhaps for simplicity or to unburden the server with these seemingly trivial operations. Even though we know that client-side validation can be trivially bypassed, there are significant usability factors involved that might persuade us to do some of the input validation there—and then validate the data again on the server. For that matter, the server must never presume the client to be free of tampering. Quite the contrary: the design team, and hence the server itself, must always assume the client can and will be tampered with. The client-side code, after all, resides and executes entirely outside the server’s realm of control.
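The validate-on-both-sides principle can be sketched as a server-side handler that trusts nothing from the client. The handler and helper names are ours, and the validity check is a deliberately minimal placeholder.

```python
def server_side_valid(field: str, value: str) -> bool:
    # Minimal placeholder check: reject anything containing markup
    # characters. A real system would apply per-field allow-lists.
    return "<" not in value and ">" not in value

def handle_registration(form: dict) -> dict:
    """Server-side handler. Even if the browser already validated the
    form with JavaScript, the server assumes the client may have been
    tampered with and validates every field again."""
    errors = {}
    for field, value in form.items():
        if not server_side_valid(field, value):
            errors[field] = "invalid"
    if errors:
        # A failure here is suspicious: a legitimate, untampered client
        # would have caught the bad input before submitting.
        return {"status": "rejected", "errors": errors}
    return {"status": "accepted"}
```

Note that a rejection at this layer is itself a useful signal: it is exactly the detection opportunity that client-only validation would throw away.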

In some business contexts, however, validating at the client might not be all we want and need. In particular, if a user does attempt an attack through our client code, rejecting the input there means we’ve by design eliminated our ability to detect the attack and respond appropriately.

On the other hand, if we design the security mechanisms into the core of our software, we stand a significantly better chance of not only detecting an attempted attack, but being able to properly log what has taken place and to potentially take evasive action. We can choose, for example, to lock a user’s account if the user has attempted to break our software. (In fact, this response might well be mandated by the IT security or compliance team.) If that response is programmed in, procedures must be in place to review the triggering actions and to authenticate a user requesting that an account be unlocked. Consider the case in which an attacker purposely triggers the locking of the enterprise CFO’s account. The CFO is going to want to get back into the system, but how do you verify that the requester really is the CFO when he is yelling at you? And what if the actions were the CFO’s own, and the logs indicate some insider financial manipulation?

All of these things are possible and feasible, but our design decisions can have a tremendous impact on how or whether we go about doing them. For this reason, we need to carefully consider our security design and make consistent architectural decisions that will properly support our business needs later.
