Software Development Strategies and Life-Cycle Models

Here we describe, from a rather high altitude, the various development methods and processes employed for software today. We focus on designing, creating, and maintaining large-scale enterprise application software, whether developed by vendors or by in-house development teams. The creation and use of one-off and simple interface programs presents no special challenge. Developing huge operating systems such as Microsoft Windows XP, with millions of lines of code (LOC), or large, complex systems such as the FAA’s Enroute System, brings very special problems of its own that are beyond the scope of this book. This is not to say that the methodology we propose for robust software architecture is not applicable; rather, we will not consider such applications here. The time-honored enterprise software development process generally follows these steps (as shown in Figure 1.1):

  • Specification or functional design, done by system analysts in concert with the potential end users of the software to determine why to do this, what the application will do, and for whom it will do it.

  • Architecture or technical design, done by system designers as the way to achieve the goals of the functional design using the computer systems available, or to be acquired, in the context of the enterprise as it now operates. This is how the system will function.

  • Programming or implementation, done by computer programmers together with the system designers.

  • Testing of new systems (or regression testing of modified systems) to ensure that the goals of the functional design and technical design are met.

  • Documentation of the system, both intrinsically for its future maintainers, and extrinsically for its future users. For large systems this step may involve end-user training as well.

  • Maintenance of the application system over its typical five-year life cycle, employing the design document now recrafted as the Technical Specification or System Maintenance Document.
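The six steps above form an ordered pipeline; as a minimal sketch, they can be written down as an enumeration (the phase names and comments here are our paraphrase of the list, not part of any standard):

```python
from enum import Enum
from typing import Optional

class Phase(Enum):
    """The classic enterprise life-cycle phases, in order."""
    SPECIFICATION = 1   # functional design: why, what, and for whom
    ARCHITECTURE = 2    # technical design: how the system will function
    IMPLEMENTATION = 3  # programming by developers and system designers
    TESTING = 4         # verify functional and technical design goals
    DOCUMENTATION = 5   # intrinsic (maintainers) and extrinsic (users)
    MAINTENANCE = 6     # typical five-year life cycle

def next_phase(current: Phase) -> Optional[Phase]:
    """Return the phase that follows, or None after maintenance."""
    members = list(Phase)          # Enum preserves definition order
    idx = members.index(current)
    return members[idx + 1] if idx + 1 < len(members) else None
```

In the strict cascade each phase has exactly one successor; the feedback paths discussed later in this chapter are what the improved models add.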

This model and its variations, which we overview in this chapter, are largely software developer-focused rather than being truly customer-centric. They have traditionally attempted to address issues such as project cost and implementation overruns rather than customer satisfaction issues such as software reliability, dependability, availability, and upgradeability. It may also be pointed out that all these models follow the "design-test-design" approach. Quality assurance is thus based on fault detection rather than fault prevention, the central tenet of this book’s approach. We will also discuss—in Chapters 2, 4, and 11 in particular—how the model that we propose takes a fault-prevention route that is based not only on customer specifications but also on meeting the totality of the user’s needs and environment.

A software development model is an organized strategy for carrying out the steps in the life cycle of a software application program or system in a predictable, efficient, and repeatable way. Here we will begin with the primary time-honored models, of which there are many variants: the build-and-fix model, the waterfall model, the evolutionary model, the spiral model, and the iterative development model. Rapid prototyping and extreme programming are processes that have more recently augmented the waterfall model. The gradual acceptance of OOP over the past decade, together with its object frameworks and sophisticated integrated development environments, has been a boon to software developers and has encouraged new developments in automatic programming technology.

These life-cycle models and their many variations have been widely documented. So have current technology enhancements in various software development methods and process improvement models, such as the Rational Unified Process (RUP), the Capability Maturity Model (CMM), and the ISO 9000-3 Guidelines. Therefore, we will consider them only briefly. We will illustrate some of the opportunities we want to address using the RSDM within the overall framework of DFTS technology. It is not our purpose to catalog and compare existing software development technology in any detail. We only want to establish a general context for introducing a new approach.

Build-and-Fix Model

The build-and-fix model was adopted from an earlier and simpler age of hardware product development. Those of us who bought early Volkswagen automobiles in the 1950s and ’60s remember it well. As new models were brought out and old models updated, the cars were sold apparently without benefit of testing, only to be tested by the customer. In every case, the vehicles were promptly and cheerfully repaired by the dealer at no cost to their owners, except for the inconvenience and occasional risk of a breakdown. This method clearly works, but it depends on having a faithful and patient customer set that is almost totally dependent on the use of your product! It is the same with software. A few well-known vendors are famous for their numerous free upgrades and the rapid proliferation of new versions. This always works best in a monopolistic or semimonopolistic environment, in which the customer has limited access to alternative vendors. Unfortunately, in the build-and-fix approach, the product’s overall quality is never really addressed, even though some of the development issues are ultimately corrected. Also, there is no way to feed proactive improvement approaches back into the design process. Corrections are put back into the market as bug fixes, service packs, or upgrades as soon as possible as a means of marketing "damage control." Thus, little learning takes place within the development process. Because of this, build-and-fix is totally reactive and, by today’s standards, is not really a development model at all. However, the model shown in Figure 1.2 is perhaps still the approach most widely used by software developers today, as many will readily, and somewhat shamefully, admit.

Figure 1.2

Figure 1.2 Build-and-Fix Software Development Model

Waterfall Model

The classic waterfall model was introduced in the 1970s by Win Royce at Lockheed. It is so named because it can be represented or graphically modeled as a cascade from establishing requirements, to design creation, to program implementation, to system test, to release to customer, as shown in Figure 1.3. It was a great step forward in software development as an engineering discipline. The figure also depicts the single-level feedback paths that were not part of the original model but that have been added to all subsequent improvements of the model; they are described here. The original waterfall model had little or no feedback between stages, just as water does not reverse or flow uphill in a cascade but is drawn ever downward by gravity. This method might work satisfactorily if design requirements could be perfectly addressed before flowing down to design creation, and if the design were perfect when program implementation began, and if the code were perfect before testing began, and if testing guaranteed that no bugs remained in the code before the users applied it, and of course if the users never changed their minds about requirements. Alas, none of these things is ever true. Some simple hardware products may be designed and manufactured this way, but this model has been unsatisfactory for software products because of the complexity issue. It is simply impossible to guarantee correctness of any program of more than about 169 lines of code by any process as rigorous as mathematical proof. Proving program functionality a priori was advantageous and useful in the early days of embedded computer control systems, when such programs were tiny, but today’s multifunction cell phones may require a million lines of code or more!

Figure 1.3

Figure 1.3 Waterfall Model for Software Development

Rapid Prototyping Model

Rapid prototyping has long been used in the development of one-off programs, based on the familiar model of the chemical engineer’s pilot plant. More recently it has been used to prototype larger systems in two variants—the "throwaway" model and the "operational" model, which is really the incremental model to be discussed later. This development process produces a program that performs some essential or perhaps typical set of functions for the final product. A throwaway prototype approach is often used if the goal is to test the implementation method, language, or end-user acceptability. If this technology is completely viable, the prototype may become the basis of the final product development, but normally it is merely a vehicle to arrive at a completely secure functional specification, as shown in Figure 1.4. From that point on the process is very similar to the waterfall model. The major difference between this and the waterfall model is not just the creation of the operational prototype or functional subset; the essence is that it be done very quickly—hence the term rapid prototyping.3

Figure 1.4

Figure 1.4 Rapid Prototyping Model

Incremental Model

The incremental model recognizes that software development steps are not discrete. Instead, Build 0 (a prototype) is improved and functionality is added until it becomes Build 1, which becomes Build 2, and so on. These builds are not the versions released to the public but are merely staged compilations of the developing system at a new level of functionality or completeness. As a major system nears completion, the project manager may schedule a new build every day at 5 p.m. Heaven help the programmer or team who does not have their module ready for the build or whose module causes compilation or regression testing to fail! As Figure 1.5 shows, the incremental model is a variant of the waterfall and rapid prototyping models. It is intended to deliver an operational-quality system at each build stage, but it does not yet complete the functional specification.4 One of the biggest advantages of the incremental model is that it is flexible enough to respond to critical specification changes as development progresses. Another clear advantage is that analysts and developers can tackle smaller chunks of complexity. Psychologists teach the "rule of seven": the mind can think about only seven related things at once. Even the trained mind can juggle only so many details at once. Users and developers both learn from a new system’s development process, and any model that allows them to incorporate this learning into the product is advantageous. The downside risk is, of course, that learning exceeds productivity and the development project becomes a research project exceeding time and budget or, worse, never delivers the product at all. Since almost every program to be developed is one that has never been written before, or hasn’t been written by this particular team, research program syndrome occurs all too often. However, learning need not exceed productivity if the development team remains cognizant of risk and focused on customer requirements.
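The daily-build gate described above can be sketched as a simple check: every module must be delivered on time or the scheduled build is blocked. The module names and the ready/not-ready flags below are invented for illustration:

```python
from typing import Dict, List, Tuple

def daily_build(modules: Dict[str, bool]) -> Tuple[bool, List[str]]:
    """Attempt a scheduled build. Each entry maps a module name to
    whether its team delivered it on time; any missing module blocks
    the build. Returns (build_succeeded, blocking_modules)."""
    blockers = [name for name, ready in modules.items() if not ready]
    return (len(blockers) == 0, blockers)

# Build N at 5 p.m.: one hypothetical team missed the deadline.
ok, late = daily_build({"billing": True, "reports": True, "auth": False})
```

In practice the gate also runs compilation and regression tests, which is exactly why a failing module is so visible to the whole team.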

Extreme Programming

Extreme Programming (XP) is a rather recent development of the incremental model that puts the client in the driver’s seat. Each feature or feature set of the final product envisioned by the client and the development team is individually scoped for cost and development time. The client then selects features that will be included in the next build (again, a build is an operational system at some level of functionality) based on a cost-benefit analysis. The major advantage of this approach for small to medium-size systems (10 to 100 man-years of effort) is that it works when the client’s requirements are vague or continually change. This development model is distinguished by its flexibility because it can work in the face of a high degree of specification ambiguity on the user’s part. As shown in Figure 1.6, this model is akin to repeated rapid prototyping, in which the goal is to get certain functionality in place for critical business reasons by a certain time and at a known cost.5
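The client's build-selection step can be sketched as a cost-benefit ranking under a fixed budget. The feature names, benefit scores, and costs below are invented for illustration; real XP planning is a negotiation, not a formula:

```python
from typing import List, Tuple

def select_features(features: List[Tuple[str, float, float]],
                    budget: float) -> List[str]:
    """Greedily pick features for the next build by benefit-to-cost
    ratio until the development budget (say, person-weeks) runs out.
    Each feature is a (name, benefit, cost) tuple scoped by the team."""
    ranked = sorted(features, key=lambda f: f[1] / f[2], reverse=True)
    chosen, spent = [], 0.0
    for name, benefit, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

# Hypothetical scoping session: (feature, benefit score, cost in weeks)
scoped = [("login", 8, 2), ("reports", 5, 5), ("search", 9, 3)]
```

With a five-week budget, the two highest-ratio features fit and "reports" waits for a later build; the client can rerun the selection whenever requirements change.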

Figure 1.5

Figure 1.5 Incremental Model

Figure 1.6

Figure 1.6 Extreme Programming Model
Adapted from Don Wells: http://www.extremeprogramming.org.
Don Wells’s XP website gives an excellent overview of the XP development process. A more exhaustive treatment is given in Kent Beck, Extreme Programming Explained (Boston: Addison-Wesley, 2000).

Spiral Model

The spiral model, developed by Dr. Barry Boehm6 at TRW, is an enhancement of the waterfall/rapid prototype model, with risk analysis preceding each phase of the cascade. You can imagine the rapid prototyping model drawn in the form of a spiral, as shown in Figure 1.7. This model has been successfully used for the internal development of large systems and is especially useful when software reuse is a goal and when specific quality objectives can be incorporated. It does depend on being able to accurately assess risks during development. This depends on controlling all factors and eliminating or at least minimizing exogenous influences. Like the other extensions of and improvements to the waterfall model, it adds feedback to earlier stages. This model has seen service in the development of major programming projects over a number of years, and is well documented in publications by Boehm and others.
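The spiral's control flow can be sketched as a loop in which a risk assessment precedes, and gates, each phase of the cascade. The threshold and the risk function below are placeholders for illustration, not Boehm's actual risk quantification:

```python
from typing import Callable, List

def spiral(phases: List[str],
           assess_risk: Callable[[str], float],
           max_risk: float = 0.5) -> List[str]:
    """Run each development phase only after its risk assessment
    passes; stop (in practice: re-plan) when risk is too high.
    assess_risk maps a phase name to an estimated risk in [0, 1]."""
    completed = []
    for phase in phases:
        if assess_risk(phase) > max_risk:  # risk analysis gate
            break                          # halt and reconsider the plan
        completed.append(phase)            # prototype, then engineer it
    return completed

# Hypothetical project where the design phase is judged too risky.
done = spiral(["concept", "requirements", "design", "code"],
              assess_risk=lambda p: 0.8 if p == "design" else 0.2)
```

The point of the model is that the project can exit (or re-plan) at the gate rather than discovering the risk after the phase has consumed its budget.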

Figure 1.7

Figure 1.7 Spiral Model
Adapted from B. W. Boehm, "A Spiral Model of Software Development and Enhancement," IEEE Computer, 21 (May 1988), pp. 61–72.

Object-Oriented Programming

Object-Oriented Programming (OOP) technology is not a software development model. It is a new way of designing, writing, and documenting programs that came about after the development of early OOP languages such as C++ and Smalltalk. However, OOP does enhance the effectiveness of earlier software development models intended for procedural programming languages, because it allows the development of applications by slices rather than by layers. The central ideas of OOP are encapsulation and polymorphism, which dramatically reduce complexity and increase program reusability. We will give examples of these from our experience in later chapters. OOP has become a major development technology, especially since the wide acceptance of the Java programming language and Internet-based application programs. OOP analysis, design, and programming factor system functionality into objects, which include data and methods designed to achieve a specific, scope-limited set of tasks. The objects are implementations or instances of program classes, which are arranged into class hierarchies in which subclasses inherit properties (data and methods) from superclasses. The OOP model is well supported by both program development environments (PDEs) and more sophisticated team-oriented integrated development environments (IDEs), which encourage or at least enable automatic code generation.
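The class-hierarchy vocabulary used above (encapsulation, inheritance from a superclass, polymorphism) can be illustrated with a minimal, invented example:

```python
class Account:
    """Superclass: encapsulates its balance behind methods."""
    def __init__(self, balance: float):
        self._balance = balance           # encapsulated state

    def deposit(self, amount: float) -> None:
        self._balance += amount

    def balance(self) -> float:
        return self._balance

    def monthly_fee(self) -> float:
        return 5                          # default behavior

class SavingsAccount(Account):
    """Subclass: inherits deposit() and balance(), overrides the fee."""
    def monthly_fee(self) -> float:
        return 0                          # polymorphism: same call, new behavior

# The same method call is dispatched differently per object.
accounts = [Account(100), SavingsAccount(100)]
fees = [a.monthly_fee() for a in accounts]
```

`SavingsAccount` reuses the superclass's data and methods while changing only the behavior that differs, which is the mechanism behind the "slice-by-slice" development style described above.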

OOP is a different style of programming from traditional procedural programming. Hence, it has given rise to a whole family of software development models. Here we will describe the popular Booch Round-Tripping model,7 as shown in Figure 1.8. This model assumes a pair of coordinated tool sets—one for analysis and design and another for program development. For example, you can use the Unified Modeling Language (UML) to graphically describe an application program or system as a class hierarchy. The UML can be fed to the IDE to produce a Java or C++ program, which consists of the housekeeping and control logic and a large number of stubs and skeleton programs. The various stub and skeleton programs can be coded to a greater or lesser extent to develop the program to a given level or "slice" of functionality. The code can be fed back or "round-tripped" to the UML processor to create a new graphical description of the system. Changes and additions can be made to the new UML description and a new program generated. This general process is not really new. The Texas Instruments TEF tool set and the Xcellerator tool set both allowed this same process with procedural COBOL programs. These tools proved their worth in the preparation for the Y2K crisis. A working COBOL application with two-digit year dates could be reverse-engineered to produce an accurate flowchart of the application (not as it was originally programmed, but as it was actually implemented and running). Then it could be modified at a high level to add four-digit year date capability. Finally, a new COBOL program could be generated, compiled, and tested. This older one-time reverse engineering is now built into the design feedback loop of the Booch Round-Trip OOP development model. It can be further supported with code generators that can create large amounts of code based on recurring design patterns.

Figure 1.8

Figure 1.8 Round-Tripping Model

Iterative Development or Evolutionary Model

The iterative development model is the most realistic of the traditional software development models. Rather than being open-loop like the build-and-fix or original waterfall models, it has continuous feedback between each stage and the prior one. In well-developed versions it occasionally has feedback across several stages, as illustrated in Figure 1.9. In its most effective applications, this model is used in an incremental, iterative way; that is, applying feedback from the last stage back to the first results in each iteration producing a usable, executable release of the software product. A lower feedback arrow indicates this feature, but the combined incremental-iterative method schema is often drawn as a circle. It has been applied to both procedural and object-oriented program development.
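The incremental-iterative cycle described here reduces to a loop in which each pass refines the product and ends in a releasable build. The acceptance check below is a stand-in for real user feedback, and the improve/accepted functions are invented for illustration:

```python
from typing import Callable, List

def iterate(initial: int,
            improve: Callable[[int], int],
            accepted: Callable[[int], bool],
            max_iterations: int = 10) -> List[int]:
    """Repeatedly refine the product; every pass through the loop
    ends in a usable release, and feedback drives the next pass."""
    product = initial
    releases = []
    for _ in range(max_iterations):
        product = improve(product)   # feedback into the earliest stage
        releases.append(product)     # each iteration ships a usable build
        if accepted(product):        # stop when users are satisfied
            break
    return releases

# Toy run: each iteration adds one unit of functionality; users
# accept the product once it reaches level 3.
builds = iterate(0, improve=lambda p: p + 1, accepted=lambda p: p >= 3)
```

The `max_iterations` bound is the project manager's guard against the "research project" failure mode noted earlier, in which learning outpaces delivery indefinitely.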

Figure 1.9

Figure 1.9 Iterative Model of Software Development

Comparison of Various Life-Cycle Models

Table 1.1 is a high-level comparison of the software development models, which we have gathered into groups or categories. Most are versions or enhancements of the waterfall model. The fundamental difference among the models is the amount of engineering documentation generated and used. Thus, a more "engineering-oriented" approach may have higher overhead but can support the development of larger systems with less risk, and it can support complex systems with long life cycles that include maintenance and extension requirements.

Table 1.1 Comparison of Traditional Software Development Models

Model                    Strengths                               Weaknesses
Build-and-fix            OK for small one-off programs           Useless for large programs
Waterfall                Disciplined, document-driven            Result may not satisfy client
Rapid prototyping        Guarantees client satisfaction          May not work for large applications
Extreme programming      Early return on software development    Has not yet been widely used
Spiral                   Ultimate waterfall model                Large system in-house development only
Incremental              Promotes maintainability                Can degenerate to build-and-fix
Object-oriented          Supported by IDE tools                  May lack discipline
Iterative/Evolutionary   Can be used by OOP                      May allow overiteration
