Requirements

Although many software developers these days eschew the practice of formally gathering and documenting their software requirements, there are many things worth considering at this earliest stage of development. Even if this is done only at an informal or “whiteboard” level, it can significantly help the team in understanding and capturing a project’s security needs in addition to its functional needs.

We’ll describe these considerations and steps here in several areas: abuse cases, regulatory requirements, and security requirements. Later, we will consider these requirements together with the security tiers we described earlier in the chapter. All these will come together as we discuss the topic of secure designs later in this chapter. Although the overall process is described as a team exercise, the role of the SA is extremely important throughout these activities, because he or she serves as both an anchor and a guiding force for all the participants.

Abuse Case Analyses

To start with, although abuse case analyses had been used in various ways for some time, McGraw’s Software Security: Building Security In1 provides us with one useful description of abuse case analysis. In essence, abuse case analysis looks at the intended functionality of a piece of software and seeks out ways in which that software can be misused for evil purposes. As such, it is a review-based process that helps us ensure we’re not building something that can be used to cause harm. Done early, abuse case analysis can be a powerful means of finding problems with a project before development ever begins: if the software can be misused or abused in ways the owner really does not want, that is a serious problem.

Let’s illustrate this with an example. Suppose you’re the engineering team leader of your company’s customer-facing web presence. One day, the vice president of marketing walks into your office and asks you to add a new feature to the web application: a mechanism for customers to subscribe to a new monthly newsletter the marketing department is launching. Simple enough; you can add a basic web form that asks the customers for their email address and perhaps some other contact data. After you have the information, you simply add the incoming addresses into a database of customers who receive the newsletter. All done? What could go wrong with this scenario? After all, all the functionality that the VP asked for is now complete, right?

Although it’s true that this scenario fulfills all the functional requirements, there’s a big problem. You probably recognized immediately that anyone could enter a “customer’s” information and have her added to the subscription list. That’s an abuse case. Heck, someone who really wanted to disrupt us could write a short script that would submit thousands or millions of addresses into our database if we’re not careful. That’s another abuse case, and one with obvious and really bad consequences. Now, let’s take that further, to its logical conclusion.

If we recognize the potential for abuse, we’d want to prevent that from happening, naturally. A first step could be to add a security requirement to the functional requirement that might say something like “only verified email addresses may be added to the subscriber list.” It’s a good, actionable requirement. Our development team might implement that by sending an email confirmation to each address submitted for inclusion in the subscriber list. Now are we done?
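To make the requirement concrete, here is a minimal sketch of the double opt-in flow it implies. The in-memory stores and the mailer stub are hypothetical stand-ins; a real implementation would use a database and an email service, and would expire unused tokens.

```python
# Minimal double opt-in sketch; stores and mailer are illustrative stand-ins.
import secrets

pending = {}         # confirmation token -> email address awaiting verification
subscribers = set()  # verified addresses only

def send_confirmation(email: str, token: str) -> None:
    # Stand-in for a real mailer: the link carries the token back to us.
    print(f"mail to {email}: https://example.com/confirm?token={token}")

def request_subscription(email: str) -> None:
    token = secrets.token_urlsafe(32)  # unguessable, single-use token
    pending[token] = email
    send_confirmation(email, token)

def confirm_subscription(token: str) -> bool:
    email = pending.pop(token, None)   # consume the token
    if email is None:
        return False                   # unknown or already-used token
    subscribers.add(email)             # only now is the address verified
    return True
```

Note that only confirm_subscription, never request_subscription, adds anything to the subscriber list; that is the requirement expressed in code form.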

Not so fast. Let’s think a bit fiendishly here. If an email confirmation goes to the (intended) subscribers and requires them to verify that they want to be on the list, what could go wrong? Well, there’s still an abuse case potential here. The mere act of sending out those confirmation emails could be disruptive. If an attacker bombards our subscription mechanism with fake but carefully chosen email addresses—say, at one of our key business partners—what would happen if our system then sends thousands and thousands of confirmation emails?

So it’s not enough to send out a confirmation email; we have to ensure that our application is talking to a human, and not a script. There’s another security requirement to consider. We note that CAPTCHAs are routinely used to address this issue. (CAPTCHAs are automated tests used to verify that a user is in fact a human. They usually show a distorted image of a word or phrase that an artificial intelligence would be unable to recognize but the user can read and enter correctly.) Nonetheless, let’s add a security requirement such as “subscription requests may be issued only by human users of the system.” See where this is going?
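Verifying a CAPTCHA is usually delegated to a dedicated service or library, but the server-side shape of the check can be sketched as follows. Everything here is illustrative; a real CAPTCHA would render the answer as a distorted image rather than handing it back in plain text.

```python
# Sketch of a server-side challenge/response ("is this a human?") gate.
import secrets

challenges = {}  # challenge_id -> expected answer

def issue_challenge() -> tuple[str, str]:
    challenge_id = secrets.token_urlsafe(16)
    answer = secrets.token_hex(3)  # a real CAPTCHA renders this as a distorted image
    challenges[challenge_id] = answer
    return challenge_id, answer

def verify_challenge(challenge_id: str, response: str) -> bool:
    expected = challenges.pop(challenge_id, None)  # one attempt per challenge
    return expected is not None and secrets.compare_digest(expected, response)
```

A subscription handler would then call verify_challenge before accepting the request, rejecting anything a script submits without a valid, unexpired challenge.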

It’s always best to consider abuses such as the ones we’ve described here before a system is rolled out into a production environment. But that requires the development team to be able to really think fiendishly, looking past the merely functional requirements to consider how the system can be abused. It has been our experience that this can be a difficult leap for many developers. Security professionals, on the other hand, have been worrying about abuses like this for decades, and thinking fiendishly comes naturally to them. Invite them to participate.

In considering abuse cases, the following are some questions and important areas of concern to consider for each application. These questions are similar to those we’ll address while doing a threat model, but let’s consider them separately here while we ponder abuse cases.

  • How?—Means and Capabilities

    • Automated versus manual

      In our mailing list scenario given previously, we saw an automated attack against a simple function. Often, designers consider a single use case with blinders on when thinking about how an application might be used. In doing this, they fail to see how the (usually simple) act of automating the functionality can be used to wreak significant havoc on a system, either by simply overwhelming it or by inserting a mountain of garbage data into the application’s front end. Never underestimate the determination of a while true do {} block.
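      To see just how low the bar is, consider this deliberately trivial sketch; the endpoint URL and form field name are hypothetical.

      ```python
      # All it takes to flood a naive subscription form.
      import itertools
      import urllib.parse
      import urllib.request

      for n in itertools.count():  # the proverbial "while true" loop
          fake = f"target+{n}@partner.example.com"  # carefully chosen addresses
          data = urllib.parse.urlencode({"email": fake}).encode()
          urllib.request.urlopen("https://example.com/subscribe", data=data)
      ```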

  • Why?—Motivations and Goals

    • Insider trading

      Automating a user interface into an application is in no way the end of the myriad of ways an attacker can abuse an application. Consider the human aspects of what an application will be capable of doing, and what sorts of bad things a maliciously minded person might be able to make of those capabilities. Insider trading should be a significant concern, particularly in publicly traded companies. Automation is, after all, a double-edged sword of sorts. We not only are automating a business function, but also might well be inadvertently automating a means for someone to attack a business function.

    • Personal gain

      Similarly, look for avenues of personal gain in an application. Ask whether a user of the application could use the information to “play” the stock market, for example, in a publicly traded company. This can be a significant concern in major business applications in enterprise environments.

    • Information harvesting

      Here, we look for opportunities for an authorized application user to gather—perhaps very slowly over time—information from an application and use that information for bad purposes. A prime example could include a customer database that contains information on celebrity or otherwise VIP customers, such as a patient database in a hospital where the VIP has been treated. That information could be very valuable on the black market or if sold to the media.

    • Espionage

      Although several of these issues overlap significantly, it’s useful to consider them separately. Espionage, whether corporate or otherwise, could well be simply a case of information harvesting, but it’s still worthy of separate consideration. Consider not just information like the celebrity database, but also company proprietary information and how it could be collected and sold/given to a competitor. What opportunities does the application being analyzed offer up to a user who might be persuaded to do such a thing?

    • Sabotage

      Even in the best of economic climates, you’ll occasionally find disgruntled employees who are bent on damaging a company for all manner of reasons. Their actions might be clear and unambiguously malicious—such as deleting files or destroying records in a company database—but they might also be more subtle and difficult to detect. Consider how a malicious-minded application user might be able to harm the company by sabotaging components in an application.

    • Theft

      This one is sort of a catchall for things that weren’t brought up in the previous ones, but it’s worthwhile considering general theft at this point. Credit card account information is a prime candidate here.

Now, it’s quite likely a software developer will look at a list like this and throw her arms up in the air in frustration, thinking it’s not feasible to brainstorm something like this comprehensively. After all, it is fundamentally an example of negative validation, which we generally seek to avoid at all costs. Although that’s true, there’s still significant merit in doing abuse case analysis. Of course, the secret to getting it right is to do it collaboratively with some folks who are practiced at this sort of thing—like, say, the information security team.

It is also a good idea to consider separately the likelihood of an attack and the impact of a successful attack. These two things are quite different and bear separate analysis. Impacts can be imagined or brainstormed quite effectively, whereas likelihood can be more deeply analyzed, or even quantified.

Here’s how the collaborative approach can work for analyzing abuse cases. After you’ve gathered a basic understanding of the functional goals of your project, invite a few key folks to take a look at the project and “throw stones” at it. You will want to ensure that all the interested parties are at the meeting; these should include at a minimum the business process owner, the design team, the information security team and/or incident response team, and the regulatory compliance monitoring team.

At this point, the best way to proceed is to describe the project to the assembled group. Discuss how the system will function and what services it will provide. You should be sure to list any existing security requirements that are already understood. At this point, run through a brainstorming session to collect any and all concerns that come up. The most important thing is to discuss the issues and enable—encourage even—the team to be as harsh as possible.

Take each security concern the group raises to its logical conclusion, and be sure to understand each one in detail. Make a list, for example, of any preconditions that would need to exist for an attack to be successful. So if an attack would need direct access to a server console, make sure that’s clearly annotated in the list of issues the group comes up with.

Next, take the list of issues and rigorously consider what security requirements could be added or enhanced to prevent the underlying cause from being exploitable. If an issue is not avoidable, consider security requirements that would enhance the ability to detect an attack if it does take place. A security requirement such as “all access to the application will be logged, with all user actions being recorded and monitored” can be useful, for example, in such situations.

It’s also helpful to watch out for some common pitfalls with abuse case analysis. First and foremost, this process must be finite and have a clearly defined stopping point, which should be clearly communicated from the beginning to all participants. Any time you put a bunch of technical-minded folks together in a room, you’re never guaranteed the outcome you expect. Engineers have a near-overwhelming inclination to digress in ways you can’t begin to fathom, and abuse case analysis is no exception. Expect them to discuss low-level technical details such as buffer overflows, cross-site scripting, and a myriad of other things that just aren’t relevant at this stage.

To get value out of abuse case analysis, it is absolutely vital to facilitate and guide the brainstorming process carefully but firmly.

Asset Inventory

We’ve also found it useful to start at this point to generate an inventory of the sensitive assets the system will need access to. This inventory should include such things as customer records, passwords, and encryption keys, as well as the high-value functions in the application. If applicable, consider prioritizing the inventory in terms of value to the company. Although a large enterprise might have to set up a large corporate project to identify key assets (a customer database, for example), security-conscious folks in smaller places will have their arms around those assets all the time.

In the Microsoft SDL approach,2 Microsoft describes a process called threat modeling. An asset inventory is absolutely vital to doing threat modeling, but it’s not the same thing. What we’re trying to articulate here is a very clear understanding of everything of value in our application. In other words, what are the targets an attacker is most likely to go after? If we can build a solid understanding of what the targets are and prioritize them in a relative manner (say, low, medium, and high business value, recognizing that one company’s “low” could well be another company’s “high”), then we can also understand what can and should be protected, and how much effort we should put into protecting each asset.

As we said previously, an application’s assets can include important data, but also functions. For example, many applications have an identification and authentication mechanism. Since these are by nature accessible to unknown attackers, they almost always should be included as high-value targets in an asset inventory. That will help us later in allocating the necessary resources for reviewing and testing those modules.
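As a starting point, the inventory needn’t be elaborate. The sketch below, with purely illustrative entries, is enough to rank assets by relative business value and steer later review and testing effort.

```python
# A minimal asset inventory: each entry names the asset, its kind, and a
# relative business value. All entries are illustrative.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    kind: str   # "data" or "function"
    value: str  # "low", "medium", or "high" business value

inventory = [
    Asset("customer records", "data", "high"),
    Asset("encryption keys", "data", "high"),
    Asset("identification/authentication module", "function", "high"),
    Asset("newsletter subscription form", "function", "low"),
]

# Allocate review and testing effort highest-value first.
rank = {"high": 0, "medium": 1, "low": 2}
for asset in sorted(inventory, key=lambda a: rank[a.value]):
    print(f"{asset.value:>6}  {asset.kind:<8}  {asset.name}")
```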

Now, although developing an asset inventory isn’t something that can or must be completed during the requirements process per se, it’s still a good idea to start thinking about (and documenting) these assets as early as possible. Microsoft’s SDL process starts this step early as well.

Regulatory Requirements

Next, we should consider the security-related regulatory requirements that our software is going to have to operate under. In many industries today, our business systems are required to conform to myriad security laws and guidelines. This is particularly true in publicly traded companies as well as certain highly regulated industry sectors such as financial and insurance services, pharmaceutical and healthcare, and public utility companies.

Additionally, companies that operate internationally might have country-specific regulatory and privacy requirements to comply with. Naturally, this can greatly complicate the security requirements process. In some cases, the application itself might need to operate differently based on where the customer, employee, or other user is located.

Especially in these extremely complex environments, it is now commonplace to find corporate-level compliance officers or at least a compliance monitoring team. Often, the compliance team will organizationally fall under the CIO, COO, Audit, or even General Counsel’s office.

Step number one in this part of the design process is to seek out the person or department in charge of compliance monitoring and engage him or her in the process. As a starting point, specifically look for issues such as the following:

  • Data or information that needs to be protected for privacy

    Many business systems are required to safeguard the privacy of customer data, Social Security numbers, credit card numbers, and so on. These are vital to the security of the application, and the sooner the development team is explicitly aware of the requirements, the better off everyone will be. Find out the specific privacy issues for each data element. In some circumstances, it might also be useful to consider privacy requirements for various markets, even if a product isn’t (yet) marketed in some of those markets. Considering those requirements now might well save us substantial grief later, should the company decide to expand into those markets.

  • Data retention requirements

Several U.S. Government bureaucracies—and no doubt many others—have stringent requirements on data retention, covering such things as email and transaction records. It is important to gather all of these requirements and investigate how the application itself can help support them, rather than simply handing them off to the data center staff to implement. As an example, consider the data retention requirement the Securities and Exchange Commission in the U.S. imposes on broker-dealers. Known as “Rule 17a-4,” it dictates that certain records (trade blotters, order tickets, trade confirmations, and much more) be preserved in non-rewritable, non-erasable format for specified periods. For “communications that relate to the broker-dealer’s business as such,” the retention requirement is three years.3 If your app will operate in a regulated environment, we recommend you get expert help to ensure that you facilitate appropriate data retention.

  • Data or processes that require special reporting

    Many security regulations have explicit requirements for reporting particular types of data access and such. Credit card transactions, for example, might be required to be logged (but not with customer-sensitive information in the logs) under the Payment Card Industry Data Security Standards (PCI-DSS) requirements. There might well also be breach reporting requirements for many applications and the jurisdictions in which they operate.

  • Entity identification or authentication requirements

Some sensitive application environments are required to meet minimum standards for strong user and/or entity identification and authentication. PCI-DSS again provides us with ample examples, such as Requirement 8.3, which says, “Incorporate two-factor authentication for remote access (network-level access originating from outside the network) to the network by employees, administrators, and third parties.” Portugal’s Digital Signature of Invoices law is another example of an entity identification requirement; among other things, it attempts to bind invoice documents to the software used to create them.

  • Access control requirements

    Sensitive data or functions within an application can require additional access controls for read and/or write access. These are often designated in industry requirements such as PCI-DSS once again. PCI-DSS Requirement 7 states, “Restrict access to cardholder data by business need to know.”

  • Encryption requirements

In addition to access control, many sensitive data elements need additional privacy and integrity protection using cryptographic controls. PCI-DSS 8.4, for example, tells us, “Render all passwords unreadable during transmission and storage on all system components using strong cryptography.” Note two things here: the requirement covers passwords both at rest and in transit, and it leaves significant options open in how to implement the standard, even though the document does define “strong cryptography.” It is nonetheless actionable and exactly the sort of security requirement we should be looking for. Further, it is the sort of requirement that can and should evolve with time, as cryptographic algorithms are retired, new practices are discovered, and so on.

  • Change-management requirements

Many highly regulated industries, such as the pharmaceutical and healthcare sector in the U.S., have rigorous requirements for change management of production business data processing systems. Even though change management is not something that a software developer always has a direct role in, it is still important to be aware of these requirements and to adapt the software practices to fit into them. One aspect of change management that does fall squarely on the development team is the source code repository. Strong access control for both “read only” and “read/write” permissions in a source repository should be emphasized, even if only to safeguard things like comments in source code containing sensitive information about a project. The same holds true for a project’s bug tracking system.

It would be easy to assume that some of the topics in the preceding list are “someone else’s job” and thus outside of the scope of the development team’s efforts, but that would be unfortunate. Although some of these topics are in fact someone else’s responsibility, in order to be effective, there must be a clear interface between them and the application itself. The more cohesive the bond between these requirements and the development team’s efforts, the better the end product will be. Put another way, it should be clear by now that there are many stakeholders in the overall security of a typical business application, and they should all be consulted and included in the planning and implementation.

Security Requirements

In the preceding section, we discussed regulatory requirements. These tend to be driven by governments, industries, or other standards bodies—but nevertheless external to the company that develops or owns the software. Let’s now focus on some internal requirement issues.

From the development team’s perspective, the thought process to go through is largely similar: Seek out the appropriate stakeholders and engage them in the process to find out what their security requirements are. The primary difference is the stakeholders themselves. Whereas we looked to the compliance team previously, now we should be looking at the internal information security group directly in this part of the process.

In looking internally for security requirements, there is another place to look as well: internal security standards. Much like their government and industry counterparts, internal standards will often include requirements for such common security mechanisms as authentication, passwords, and encryption. Although they are typically more detail-oriented, quite often they fall short of being truly actionable; even so, they’re a good place to start looking.

The sorts of things to look for at this point include the following:

  • Identification and authentication requirements

    Many large corporations have application security policies and guidelines that include identification and authentication. Some go so far as to designate tiers of sensitivity for applications—generally three to five levels of security, such as “level 1: customer data,” “level 2: company proprietary information,” and “level 3: public information.” (These levels are for illustrative purposes; it is often the case, for example, that customer data covers multiple data types and corresponding sensitivities.) Password and authentication factor guidelines are also commonly found. They’ll typically include minimum length, frequency of password changes, and character set requirements, along with any multifactor authentication requirements for the most sensitive applications. It is also not uncommon for a company’s internal security standards or policies to prescribe specific guidelines for proper credential storage, specifying certain hashing and/or encryption algorithms and when each should apply. The development team must follow these guidelines if they exist. If they don’t, now is a good time to define them for this project and those in the future.
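    By way of illustration only (the thresholds below are invented, not drawn from any particular standard), such guidelines translate naturally into an automated check:

    ```python
    # Sketch of a password-guideline check; all thresholds are illustrative.
    import string

    MIN_LENGTH = 12  # hypothetical minimum length

    def meets_guidelines(password: str) -> bool:
        return (
            len(password) >= MIN_LENGTH
            and any(c in string.ascii_lowercase for c in password)
            and any(c in string.ascii_uppercase for c in password)
            and any(c in string.digits for c in password)
            and any(c in string.punctuation for c in password)  # character-set rules
        )
    ```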

  • Event-logging requirements

    Despite the fact that many enterprise data centers have existing architectures in place for centralized event logging, it has been our experience that most event logging actually takes place at the operating system and web/app-server level. That is, we’ve rarely seen application-level logging that is truly adequate for the incident responders to properly do their jobs. We’ll discuss this topic in much more detail later, as part of Chapter 6, but for now, let’s at least ensure that the development team is fully aware of any and all event-logging requirements and infrastructures that are in place. We say this in the plural because many enterprise environments log both operational and security event data, and they often separate those two types of logs quite substantially.
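    As a sketch of what application-level security logging can look like (the field names are our own, not a prescribed format), the point is to record who did what, to what, when, and with what outcome, in a form incident responders can actually consume:

    ```python
    # Minimal structured security-event log, kept separate from operational
    # logging; the field names are illustrative.
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    security_log = logging.getLogger("app.security")

    def log_security_event(user: str, action: str, target: str, outcome: str) -> None:
        security_log.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),  # synchronized UTC time
            "user": user,
            "action": action,
            "target": target,
            "outcome": outcome,
        }))

    log_security_event("alice", "login", "webapp", "success")
    log_security_event("mallory", "export", "customer_db", "denied")
    ```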

  • Disaster recovery and business continuity requirements

    Most large enterprises do significant disaster recovery and business continuity (“DR/BC”) planning these days. Although much of this has to do with natural disasters such as hurricanes, floods, fires, and earthquakes, it is still important to engage in conversation with the folks doing this planning. In particular, look for requirements around alternative data centers (or hosting services, and so on) and other contingency planning in order to understand how your application will be able to support that type of requirement. The plans often include requirements for rotating to alternative “hot” or “warm” data centers within strict limits on allowable downtime. These things are often considered outside the direct scope of the application development process, but it is important to have at least a minimum understanding of the requirements. There might well be, for example, requirements for an application to switch seamlessly, without downtime, to different event-logging or other infrastructure servers. Such requirements can have a significant impact on how the development team designs and implements many configuration settings.

  • Incident response requirements

    Increasingly, corporations have in place incident response teams, either in-house or outsourced. These teams are generally faced with one or more of the following challenges when an incident occurs: diagnose the problem, contain and/or stop the incident, investigate (or support the investigation of) an incident, or perform a damage assessment after an incident has taken place. In pretty much every case, the common denominator and the “lifeblood” of the incident response teams is having a clear and accurate situational awareness of what is going on or what did go on inside the affected business application. This invariably leads to event logging.

    Now, although we’ve already raised the event-logging requirements previously, incident response requirements can be quite different from ordinary event logging. For example, the incident response team often has a need to capture and store log data and maintain a chain of evidence so that the information will subsequently be useful in a court of law. They also often need to assemble from disparate information sources a clear picture and timeline of what an attacker did (or attempted to do) during an incident, which requires log data to be rather detailed across all the components and layers of a complex business application. This can be a daunting task under the best of circumstances. Timelines can be better reconstructed if components generating log entries have synchronized their times. “System time” policies are becoming even more important as enterprises utilize distributed systems and cloud computing environments spanning multiple time zones, and so on.

    As such, it’s quite possible that the incident response team will have quite a “wish list” of things they will need from your application when doing their jobs. That wish list is generally born of experience and of the operational need to perform their jobs as rapidly as possible. In reality, their normal mode of operation is to adapt to and work with the information they have available, but what better time to ensure that they’ll have what they need than during the early phases of developing an application?

    The best way to do this is to gather a clear understanding of the incident response team’s use cases for how they will need to interact with your application during an incident. Meet with them as you would the business owner or user community to find out how they’ll use your application.

  • Account management requirements

    Another common set of functional and security requirements can be found in account management practices. Corporations often have guidelines and policies for user and employee accounts, as well as for third parties, contractors, consultants, and so on. Employee accounts on enterprise applications might need to be synchronized with employee records in the Human Resources department, for example, to ensure that accounts get deactivated when an employee leaves the company. As you might imagine, we’ve seen many mistakes made in this area over the years. All too many business applications are written in the absence of any means of verifying employment.

    You’re likely to find several relevant stakeholders when it comes to account management practices. You might find, for example, that Human Resources can contribute substantially, in addition to the IT Security department for their policies on opening and closing application accounts. But don’t stop there. Even the incident response team might be able to contribute with requirements for deactivating accounts during incidents while maintaining their data for forensic analysis or evidentiary purposes.

  • Access control requirements

    In our experience, access control requirements for applications tend to be rather superficial even in larger corporations. It’s not uncommon, for example, to find access control statements that designate user-class and administrator-class users of an application and what they should or should not be allowed to do. However, it is uncommon to find access control requirements that go beyond this simple one- or two-dimensional model.

    That might well be quite adequate for many applications, but it still bears consideration at this early stage. Among other things, it can open up a significant set of possibilities to have more rigorously defined role-based solutions for some more complicated applications.

    With that in mind, the most relevant stakeholder for gathering access control requirements is most often the business process owner—the person who is responsible for the business functionality of the application itself. In talking with the business owner, it’s important to listen for language suggesting the need for more stringent access controls than simple user/administrator accesses. Listen, for example, for language like “anyone in accounting should be able to do ‘x’,” or “those in HR should be allowed only to view the information, not change it.”

    Irrespective of which departments and stakeholders are considered, a well-designed access control system and policies should be based on the venerable principle of least privilege.
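    A role-based model of the kind that language suggests can be sketched quite simply; the roles and permissions below are illustrative.

    ```python
    # Sketch of role-based access control: default deny, least privilege.
    ROLE_PERMISSIONS = {
        "accounting": {"view_invoice", "create_invoice"},
        "hr":         {"view_record"},  # view only, no changes
        "admin":      {"view_record", "edit_record", "manage_users"},
    }

    def is_allowed(role: str, permission: str) -> bool:
        # Default deny: anything not explicitly granted is refused.
        return permission in ROLE_PERMISSIONS.get(role, set())

    assert is_allowed("hr", "view_record")
    assert not is_allowed("hr", "edit_record")
    ```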

  • Session management requirements

    Session management is a big issue at a technology level for web-based applications, but it’s still relevant for many other application architectures as well. The sorts of things to look for with regard to session management should include timeout periods for inactivity, time-of-day restrictions, location restrictions, failover capabilities, and so forth.

    As with access control requirements, the most likely relevant stakeholder for these issues tends to be the business owner. And the way to approach the topic of session management requirements is to seek use-case scenarios. (These might well already be defined, so be sure to read up on what information has already been gathered.)

    At a more technical level, there also might be company standards or guidelines on how to implement session management in an application, particularly if the application is web-based. Enterprise data center environments often make use of either single sign-on or other centralized authentication services and APIs, which are equally important to be aware of and make use of when designing an application.

    Hopefully, these standards include security guidelines on such issues as session fixation, safeguarding session cookies, and cookie contents. Great care, too, should be taken in cookie generation. Persistent cookies should be rigorously encrypted, for example. This type of technical session management requirement is not at all likely to come from the business owner, but rather the security team, because these are things that often are discovered during security reviews.
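    For web applications, several of these safeguards come down to how the session cookie is issued. Here is a minimal sketch, using Python’s standard http.cookies module, with attribute choices that reflect common guidance rather than any specific corporate standard.

    ```python
    # Sketch of issuing a hardened session cookie; attribute choices are
    # illustrative of common guidance.
    import secrets
    from http import cookies

    def issue_session_cookie() -> str:
        session_id = secrets.token_urlsafe(32)  # fresh random ID resists session fixation
        jar = cookies.SimpleCookie()
        jar["session"] = session_id
        jar["session"]["secure"] = True       # send only over HTTPS
        jar["session"]["httponly"] = True     # keep scripts away from the cookie
        jar["session"]["samesite"] = "Strict"
        jar["session"]["max-age"] = 15 * 60   # short lifetime; pair with server-side inactivity timeout
        return jar.output(header="Set-Cookie:")

    print(issue_session_cookie())
    ```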

  • Encryption standards

    Particularly in regulated industries, there are often policies for encrypting sensitive data. Sometimes, these requirements don’t come from external regulations, but from the security department directly. At a bare minimum, it is important to find out what these standards are and to conform to them. Most often, the guidelines serve to specify what encryption algorithms are acceptable for particular types of application data. In most cases, they explicitly and strictly ban any attempts to come up with “homemade” cryptographic functions, requiring teams to rely on existing and vetted algorithms and implementations instead. What is often missing in encryption standards and requirements is detailed information on how the entire crypto system should work, such as key generation and management. Those details are typically up to the developer, and great care must be taken in how these things are done.

    These standards are all important, of course, but we should point out that there is still plenty of room to make mistakes. In our experience, far more encryption problems arise from poor key management practices than from selecting algorithms that aren’t up to the task—and very few encryption standards even address the topic of how best to do key management.

    Password storage is another topic that should be taken up in encryption requirements. It is recommended that password storage standards use a one-way hash function approach that combines the password with other information, such as the user account identifier. In this manner, the same password used by different accounts will not result in the same password validation value (hash). The hash function should also be computationally expensive, so that deriving a password from a stolen hash value is impractical. This helps to minimize the impact if account and password values are stolen from their storage on a server component.
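    A minimal sketch of that approach, assuming PBKDF2 as the computationally expensive one-way function (the iteration count and the exact way the account identifier is mixed in are illustrative choices, not a mandated standard):

    ```python
    # One-way password storage: a per-account random salt plus the account
    # identifier feed a deliberately slow hash. Parameters are illustrative.
    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000  # tuned to make brute force expensive

    def hash_password(account_id: str, password: str, salt: bytes | None = None):
        salt = salt or os.urandom(16)
        material = f"{account_id}:{password}".encode()  # bind the hash to the account
        digest = hashlib.pbkdf2_hmac("sha256", material, salt, ITERATIONS)
        return salt, digest

    def verify_password(account_id: str, password: str,
                        salt: bytes, digest: bytes) -> bool:
        _, candidate = hash_password(account_id, password, salt)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("alice", "correct horse battery staple")
    assert verify_password("alice", "correct horse battery staple", salt, digest)
    # The same password under a different account yields a different hash.
    assert not verify_password("bob", "correct horse battery staple", salt, digest)
    ```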

  • Change-management requirements

    Most even moderately mature enterprises have documented processes and procedures for handling changes to production applications. For the software developers, the key is to know how best to interact with that process and work within its boundaries. As with disaster recovery and business continuity, these requirements can have an impact on how best to design and implement an application. For example, if an application must maintain login credentials to connect to a database server, it’s generally best to keep those credentials in a properties file (of course, protected, as, hopefully, specified in credential management and encryption standards) and never hard-coded in the application’s source code. Apart from the security vulnerability introduced by hard-coding login credentials in an application, keeping them in a properties file often makes things easier from a change-management standpoint. This is because changing a properties file on a production system is typically far easier than changing the source code and rebuilding an application. So, as a starting point, the stakeholder to look for on this is generally in either the CIO or the COO environment, or perhaps IT and IT security, depending on who sets the change-management processes in your organization.
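    The pattern is simple enough to sketch; the file name, section, and keys below are hypothetical, and the file itself would of course be access-controlled and its secrets protected as the credential management and encryption standards dictate.

    ```python
    # Read database credentials from a properties file rather than
    # hard-coding them; file name and keys are hypothetical.
    #
    # app.properties might contain:
    #   [database]
    #   user = app_svc
    #   password = (protected value)
    import configparser

    config = configparser.ConfigParser()
    config.read("app.properties")  # deployed per environment, never in source control

    db_user = config["database"]["user"]          # changed without rebuilding the app
    db_password = config["database"]["password"]
    ```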

  • Patching requirements

    All software has to be patched periodically. Patching and updates follow very formal processes when external customers are involved; for internal-only products these rules can be somewhat less rigid, although they still need to take into account all of an application’s components, from its operating system through its libraries, frameworks, and other dependencies. In any case, the development team should take future patches, and possible formal requirements around this process, into consideration early in the design phase.

    Where things become more complicated is with updating software components that require rebuilding the underlying application software. It is not uncommon for groups within an enterprise to have an unclear or otherwise unrealistic understanding of which organization is responsible for deploying specific patches. For example, installing operating system patches is generally fairly easy, whereas replacing a software framework requires a complete rebuild of an application. In the latter cases, the patching is best not left to the IT operations staff.

    Also, it is worth noting here in passing that large enterprises are increasingly insisting on security requirements, including patching, in their contract verbiage with software vendors.

As you can see, this list is far more internally focused than the one in the preceding section. It’s no less important, however. It’s also worth noting that it’s more than likely you’ll find that no standards exist for many or even all the items in the list just given. In that case, it would be far too easy to dismiss the topic and continue in a “business as usual” mode, but that too would be unfortunate. Instead, we suggest you consider it an opportunity for collaboration between the development team and the security team to put together a meaningful and actionable set of guidelines and requirements to address this list (and more).

Bringing It All Together

Many of you reading this might well feel overwhelmed at this point. We’ve just laid out a highly ambitious list of things to consider when gathering the security requirements for a business application. The list is daunting, we agree. However, the news isn’t all bad. Let’s consider some of the positive aspects of what we’ve been covering in this section.

For one thing, it’s highly likely you won’t need to do all of this with every application; there’s an economy of scale to be found here. If, like many organizations, yours handles multiple business applications, you can expect a reduction in the level of effort with each passing application. (Be cautious, though, of single sign-on environments, where a weakness in a relatively low-risk application can let an attacker steal credentials and compromise a higher-risk application.) Either way, the first project that embarks on this path must have executive support, because the return on investment largely comes later, as other enterprise application projects arise.

Also, remember the notion of having a security advisor working with development teams? Well, here’s an opportunity for the security advisor to shine and prove his value to the development effort. The security advisor should, among other things, be expert at all the security requirements an organization needs to conform to, external as well as internal.

At the very least, the organization’s security team should be able to provide a significant list of applicable requirements, laws, and guidelines that will need to be followed on an ongoing basis. This list will need to be periodically updated since many of the standards change over time, and the list should include a resource library with searchable documents for each set of requirements.

So although doing this correctly would require a rather significant initial outlay of effort and possibly electronic resources, every application project should be able to benefit from that effort, reducing the per-project costs significantly over time. This, by the way, is a compelling argument for at least some level of centralization of software development infrastructure in an organization.

With that in mind, it has been our experience that the best way of collecting security requirements is via meetings and interviews with the stakeholders. Ideally, this will be done with the project’s security advisor on hand, but even if your organization does not have a security advisor, it’s still quite achievable. Here are some procedural considerations.

  • Do your homework

    No matter how eager your stakeholders are to contribute and help, they’ll always appreciate it when you spend some time in advance and do some preparation. Start by doing some online research and information gathering about the external and internal regulations you believe are applicable to your application. Download and read the latest version of all of them. If there are preproduction versions of any of these standards in the development pipeline, get those as well. It’s quite likely those will become relevant to your application after they’re released, so you’ll want to be aware of what’s coming along, even if they’re not currently in their final form.

    Make a list of the regulations, standards, laws, and so on, including the internal security policies and guidelines that are applicable to your project.

    While reading through all the standards, make a list of questions.

    Use this time also to ensure that you deeply understand the business intent of the application. Know what it is intended to do and what it is not intended to do. Ensure that the stakeholders agree to this as well, and understand the pitfalls of mission creep. And be realistic that some mission creep is simply inevitable.

    Make a list of all the stakeholders you’ll need to talk with. These might be individual people, or perhaps roles or departments (e.g., General Counsel or Compliance Officer).

  • Start with abuse cases and asset inventory

    Using the most preliminary and basic set of functional requirements for your application, go through an abuse case analysis process for the application as described earlier in this chapter. Some of the issues uncovered in the analysis could well turn out to be addressed in the various security requirements, but it never hurts to spend the time to really understand how your application might be misused after it is deployed. Plus, it’s been our experience that understanding the abuse cases helps build your own knowledge of the application and what aspects of it really need to be well protected later.

  • Invite the stakeholders

    Depending on the nature of your questions and agenda, you might end up doing one-on-one interviews with the various stakeholders, or you might end up inviting them to one meeting. Whichever works best for you, invite all the relevant stakeholders to participate in this stage of the application development process, again being cautious to avoid the “too many cooks in the kitchen” problem.

  • Brainstorm and refine

    With your stakeholder(s) gathered, ask your questions and dive deeply into the answers. Where answers seem vague, push to make them explicit. You want to seek clear and actionable answers here wherever possible. Toss out topics for discussion that perhaps the stakeholder hadn’t considered. You might need to illustrate things via examples and case studies to make them clear. You’re also likely to be questioned about the likelihood of something bad happening. Has it happened before? When? And so on.

Gathering the security requirements for an application can seem like a lot of effort indeed. However, getting this step done well will undoubtedly have significant payoffs throughout the development effort, irrespective of the software development methodology your organization follows. And again, we haven’t yet begun to discuss the security tiers we mentioned earlier in the chapter.

We’ve placed a lot of emphasis on requirements gathering because mistakes made now can have a multiplying impact later. Neglecting something like a policy on encrypting customer data can result in massive reengineering if we get “surprised” by the requirement after we’ve designed and built the application. Things get more dynamic when Agile processes (and their variations) are introduced, because they tend to be less planned and more flexible. As the authors have frequently observed, Agile teams often skip documentation updates altogether during sprints (literally living by the motto “code is the best documentation”), which creates quite a lot of issues for security analysis. As a result, the security requirements have to be continuously reintroduced and readjusted based on the current project planning, most likely as a mandatory, integral part of each sprint.

There’s a secondary benefit from going through a rigorous security requirements gathering process early, and it’s one we haven’t discussed yet. By engaging all the application stakeholders in dialogue long before starting to actually design or develop any code, you’re including them in the process rather than asking them to just review and accept your work later. It’s rare to find the individual who doesn’t appreciate that approach.
