
Software Development Strategies and Life-Cycle Models

Here we describe, from a rather high altitude, the various development methods and processes employed for software today. We focus on designing, creating, and maintaining large-scale enterprise application software, whether developed by vendors or by in-house development teams. The creation and use of one-off and simple interface programs presents no particular challenge. Developing huge operating systems such as Microsoft Windows XP, with millions of lines of code (LOC), or large, complex systems such as the FAA’s Enroute System, brings very special problems of its own and is beyond the scope of this book. This is not to say that the methodology we propose for robust software architecture is inapplicable to such systems; rather, we simply will not consider them here. The time-honored enterprise software development process generally follows these steps (as shown in Figure 1.1):

  • Specification or functional design, done by system analysts in concert with the potential end users of the software to determine why to do this, what the application will do, and for whom it will do it.

  • Architecture or technical design, done by system designers as the way to achieve the goals of the functional design using the computer systems available, or to be acquired, in the context of the enterprise as it now operates. This is how the system will function.

  • Programming or implementation, done by computer programmers together with the system designers.

  • Testing of new systems (or regression testing of modified systems) to ensure that the goals of the functional design and technical design are met.

  • Documentation of the system, both intrinsically for its future maintainers, and extrinsically for its future users. For large systems this step may involve end-user training as well.

  • Maintenance of the application system over its typical five-year life cycle, employing the design document now recrafted as the Technical Specification or System Maintenance Document.

This model and its variations, which we overview in this chapter, are largely software developer-focused rather than being truly customer-centric. They have traditionally attempted to address issues such as project cost and implementation overruns rather than customer satisfaction issues such as software reliability, dependability, availability, and upgradeability. It may also be pointed out that all these models follow the "design-test-design" approach. Quality assurance is thus based on fault detection rather than fault prevention, the central tenet of this book’s approach. We will also discuss—in Chapters 2, 4, and 11 in particular—how the model that we propose takes a fault-prevention route that is based not only on customer specifications but also on meeting the totality of the user’s needs and environment.

A software development model is an organized strategy for carrying out the steps in the life cycle of a software application program or system in a predictable, efficient, and repeatable way. Here we will begin with the primary time-honored models, of which there are many variants. These are the build-and-fix model, the waterfall model, the evolutionary model, the spiral model, and the iterative development model. Rapid prototyping and extreme programming are processes that have more recently augmented the waterfall model. The gradual acceptance of OOP over the past decade, together with its object frameworks and sophisticated integrated development environments, has been a boon to software developers and has encouraged new developments in automatic programming technology.

These life-cycle models and their many variations have been widely documented. So have current technology enhancements in various software development methods and process improvement models, such as the Rational Unified Process (RUP), the Capability Maturity Model (CMM), and the ISO 9000-3 Guidelines. Therefore, we will consider them only briefly. We will illustrate some of the opportunities we want to address using the RSDM within the overall framework of DFTS technology. It is not our purpose to catalog and compare existing software development technology in any detail. We only want to establish a general context for introducing a new approach.

Build-and-Fix Model

The build-and-fix model was adopted from an earlier and simpler age of hardware product development. Those of us who bought early Volkswagen automobiles in the 1950s and ’60s remember it well. As new models were brought out and old models updated, the cars were sold apparently without benefit of testing, only to be tested by the customer. In every case, the vehicles were promptly and cheerfully repaired by the dealer at no cost to their owners, except for the inconvenience and occasional risk of a breakdown. This method clearly works, but it depends on having a faithful and patient customer base that is almost totally dependent on the use of your product!

It is the same with software. A few well-known vendors are famous for their numerous free upgrades and the rapid proliferation of new versions. This approach always works best in a monopolistic or semimonopolistic environment, in which the customer has limited access to alternative vendors. Unfortunately, in the build-and-fix approach, the product’s overall quality is never really addressed, even though some of the development issues are ultimately corrected. Nor is there any way to feed proactive improvements back into the design process. Corrections are put back into the market as bug fixes, service packs, or upgrades as soon as possible, as a means of marketing "damage control." Thus, little learning takes place within the development process. Because of this, build-and-fix is totally reactive and, by today’s standards, is not really a development model at all. Nevertheless, the model shown in Figure 1.2 is perhaps still the approach most widely used by software developers today, as many will readily, if somewhat sheepishly, admit.

Figure 1.2

Figure 1.2 Build-and-Fix Software Development Model

Waterfall Model

The classic waterfall model was introduced in the 1970s by Winston Royce at Lockheed. It is so named because it can be represented or graphically modeled as a cascade from establishing requirements, to design creation, to program implementation, to system test, to release to customer, as shown in Figure 1.3. It was a great step forward in software development as an engineering discipline. The figure also depicts the single-level feedback paths that were not part of the original model but that have been added to all subsequent improvements of the model; they are described here. The original waterfall model had little or no feedback between stages, just as water does not reverse or flow uphill in a cascade but is drawn ever downward by gravity.

This method might work satisfactorily if design requirements could be perfectly addressed before flowing down to design creation, if the design were perfect when program implementation began, if the code were perfect before testing began, if testing guaranteed that no bugs remained in the code before the users applied it, and, of course, if the users never changed their minds about requirements. Alas, none of these things is ever true. Some simple hardware products may be designed and manufactured this way, but this model has proven unsatisfactory for software products because of the complexity issue. It is simply impossible to guarantee the correctness of any program of more than about 169 lines of code by any process as rigorous as a mathematical proof. Proving program functionality a priori was advantageous and useful in the early days of embedded computer control systems, when such programs were tiny, but today’s multifunction cell phones may require a million lines of code or more!
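The strict one-way cascade can be caricatured in a few lines of Python. The stage functions below are invented placeholders, not part of any real tool; the point is only that each stage consumes the previous stage's output and nothing ever flows back upstream.

```python
# A caricature of the original waterfall: each stage feeds the next,
# and no path exists for a later stage to revise an earlier one.
def waterfall(user_needs):
    requirements = gather_requirements(user_needs)   # specification
    design = create_design(requirements)             # architecture
    code = implement(design)                         # programming
    tested = system_test(code)                       # testing
    return release(tested)                           # delivery to customer

# Placeholder stages (hypothetical; a real project replaces each one).
def gather_requirements(needs):  return {"requirements": needs}
def create_design(reqs):         return {"design": reqs}
def implement(design):           return {"code": design}
def system_test(code):           return {"tested": code}
def release(tested):             return {"released": tested}
```

The absence of any return path from `system_test` back to `create_design` is exactly the weakness the feedback arrows in Figure 1.3 were later added to repair.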

Figure 1.3

Figure 1.3 Waterfall Model for Software Development

Rapid Prototyping Model

Rapid prototyping has long been used in the development of one-off programs, based on the familiar model of the chemical engineer’s pilot plant. More recently it has been used to prototype larger systems in two variants—the "throwaway" model and the "operational" model, which is really the incremental model to be discussed later. This development process produces a program that performs some essential, or perhaps typical, set of functions for the final product. A throwaway prototype approach is often used if the goal is to test the implementation method, language, or end-user acceptability. If the technology proves completely viable, the prototype may become the basis of the final product development, but normally it is merely a vehicle for arriving at a completely firm functional specification, as shown in Figure 1.4. From that point on, the process is very similar to the waterfall model. The major difference between this and the waterfall model is not just the creation of the operational prototype or functional subset; the essence is that it be done very quickly—hence the term rapid prototyping.3

Figure 1.4

Figure 1.4 Rapid Prototyping Model

Incremental Model

The incremental model recognizes that software development steps are not discrete. Instead, Build 0 (a prototype) is improved and functionality is added until it becomes Build 1, which becomes Build 2, and so on. These builds are not the versions released to the public but are merely staged compilations of the developing system at a new level of functionality or completeness. As a major system nears completion, the project manager may schedule a new build every day at 5 p.m. Heaven help the programmer or team whose module is not ready for the build or whose module causes compilation or regression testing to fail! As Figure 1.5 shows, the incremental model is a variant of the waterfall and rapid prototyping models. It is intended to deliver an operational-quality system at each build stage, though one that does not yet implement the complete functional specification.4

One of the biggest advantages of the incremental model is that it is flexible enough to respond to critical specification changes as development progresses. Another clear advantage is that analysts and developers can tackle smaller chunks of complexity. Psychologists teach the "rule of seven": the mind can think about only seven related things at once, and even the trained mind can juggle only so many details. Users and developers both learn from a new system’s development process, and any model that allows them to incorporate this learning into the product is advantageous. The downside risk is, of course, that learning exceeds productivity and the development project becomes a research project, exceeding time and budget or, worse, never delivering the product at all. Since almost every program to be developed is one that has never been written before, or at least has never been written by this particular team, research-project syndrome occurs all too often. However, learning need not exceed productivity if the development team remains cognizant of risk and focused on customer requirements.
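The daily-build discipline described above can be sketched as a simple gate: a candidate build is promoted to the new baseline only if it passes the regression suite; otherwise the last known-good build stands. All names and the representation of a "build" here are illustrative assumptions, not any particular build system's API.

```python
# Sketch of a nightly-build gate: promote a candidate build only if its
# regression suite passes; otherwise keep the last known-good build.
def promote_build(last_good, candidate, regression_tests):
    for test in regression_tests:
        if not test(candidate):
            return last_good          # a module broke the build: keep baseline
    return candidate                  # all tests pass: candidate is the new baseline

# Illustrative use: a "build" here is just a set of completed features.
build_1 = {"login", "search"}
build_2_candidate = {"login", "search", "reports"}
tests = [lambda b: "login" in b, lambda b: "reports" in b]
baseline = promote_build(build_1, build_2_candidate, tests)
```

The gate is what keeps each build at "operational quality" even though its functionality is still incomplete.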

Extreme Programming

Extreme Programming (XP) is a fairly recent development of the incremental model that puts the client in the driver’s seat. Each feature or feature set of the final product envisioned by the client and the development team is individually scoped for cost and development time. The client then selects the features to be included in the next build (again, a build is an operational system at some level of functionality) based on a cost-benefit analysis. The major advantage of this approach for small to medium-size systems (10 to 100 man-years of effort) is that it works even when the client’s requirements are vague or continually changing. This development model is distinguished by its flexibility, because it can work in the face of a high degree of specification ambiguity on the user’s part. As shown in Figure 1.6, this model is akin to repeated rapid prototyping, in which the goal is to get certain functionality in place for critical business reasons by a certain time and at a known cost.5
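The client's cost-benefit selection of features for the next build can be sketched as a greedy ordering by business value per unit cost under a fixed budget of development time. The feature list, costs, and benefit scores below are invented for illustration; real XP release planning is a negotiation, not an algorithm.

```python
# Greedy sketch of XP release planning: rank features by benefit per
# unit cost, then take them in order until the build budget is spent.
def plan_next_build(features, budget):
    ranked = sorted(features, key=lambda f: f["benefit"] / f["cost"],
                    reverse=True)
    chosen, spent = [], 0
    for f in ranked:
        if spent + f["cost"] <= budget:
            chosen.append(f["name"])
            spent += f["cost"]
    return chosen, spent

# Hypothetical feature set: cost in developer-weeks, benefit as a
# client-assigned business-value score.
features = [
    {"name": "invoicing", "cost": 4, "benefit": 8},
    {"name": "reporting", "cost": 6, "benefit": 6},
    {"name": "audit-log", "cost": 2, "benefit": 5},
]
chosen, spent = plan_next_build(features, budget=8)
```

With an eight-week budget, the high-ratio audit-log and invoicing features make the build and the lower-ratio reporting feature waits for a later iteration, which is the "known cost by a certain time" property the text describes.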

Figure 1.5

Figure 1.5 Incremental Model

Figure 1.6

Figure 1.6 Extreme Programming Model
Adapted from Don Wells: http://www.extremeprogramming.org.
Don Wells’s XP website gives an excellent overview of the XP development process. A more exhaustive treatment is given in Kent Beck, Extreme Programming Explained (Boston: Addison-Wesley, 2000).

Spiral Model

The spiral model, developed by Dr. Barry Boehm6 at TRW, is an enhancement of the waterfall/rapid prototype model, with risk analysis preceding each phase of the cascade. You can imagine the rapid prototyping model drawn in the form of a spiral, as shown in Figure 1.7. This model has been used successfully for the internal development of large systems and is especially useful when software reuse is a goal and when specific quality objectives can be incorporated. It does depend on being able to assess risks accurately during development, which in turn requires controlling all factors and eliminating, or at least minimizing, exogenous influences. Like the other extensions of and improvements to the waterfall model, it adds feedback to earlier stages. This model has seen service in the development of major programming projects over a number of years and is well documented in publications by Boehm and others.
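The spiral's control flow of "assess risk, then proceed" can be sketched as a loop in which every phase is preceded by a risk assessment, and a high-risk phase triggers risk resolution (for example, prototyping) before the team commits to it. The phase names, risk numbers, and threshold are illustrative assumptions only.

```python
# Sketch of the spiral's control flow: before each phase, assess risk;
# if risk exceeds the threshold, resolve it (e.g., by prototyping)
# before committing to that phase of the cascade.
def run_spiral(phases, assess_risk, resolve_risk, threshold=0.5):
    log = []
    for phase in phases:
        risk = assess_risk(phase)
        if risk > threshold:
            risk = resolve_risk(phase, risk)   # prototype, analyze, retry
        log.append((phase, round(risk, 2)))
    return log

# Illustrative risk model: early phases carry the most uncertainty,
# and prototyping halves any estimate judged too high.
phases = ["requirements", "design", "implementation", "test"]
risks = {"requirements": 0.8, "design": 0.6, "implementation": 0.4, "test": 0.2}
log = run_spiral(phases, lambda p: risks[p], lambda p, r: r / 2)
```

The sketch also shows why the model demands accurate risk assessment: if `assess_risk` is wrong, the gate passes exactly the phases it should have stopped.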

Figure 1.7

Figure 1.7 Spiral Model
Adapted from B. W. Boehm, "A Spiral Model of Software Development and Enhancement," IEEE Computer, 21 (May 1988), pp. 61–72.

Object-Oriented Programming

Object-Oriented Programming (OOP) technology is not a software development model. It is a new way of designing, writing, and documenting programs that came about after the development of early OOP languages such as C++ and Smalltalk. However, OOP does enhance the effectiveness of earlier software development models intended for procedural programming languages, because it allows the development of applications by slices rather than by layers. The central ideas of OOP are encapsulation and polymorphism, which dramatically reduce complexity and increase program reusability. We will give examples of these from our experience in later chapters. OOP has become a major development technology, especially since the wide acceptance of the Java programming language and Internet-based application programs. OOP analysis, design, and programming factor system functionality into objects, which include data and methods designed to achieve a specific, scope-limited set of tasks. The objects are implementations or instances of program classes, which are arranged into class hierarchies in which subclasses inherit properties (data and methods) from superclasses. The OOP model is well supported by both program development environments (PDEs) and more sophisticated team-oriented integrated development environments (IDEs), which encourage or at least enable automatic code generation.
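The central ideas named above—encapsulation, inheritance from superclasses, and polymorphism—can be shown in a few lines. The classes here are invented for illustration; the point is that client code is written against the superclass interface while each subclass supplies its own behavior.

```python
# Encapsulation: state and the methods that act on it live together.
# Inheritance: subclasses acquire data and methods from the superclass.
# Polymorphism: one call site, many behaviors, chosen by object type.
class Account:                          # superclass
    def __init__(self, balance):
        self._balance = balance         # encapsulated state

    def monthly_fee(self):              # overridden by subclasses
        return 0

    def apply_fees(self):               # inherited unchanged by subclasses
        self._balance -= self.monthly_fee()
        return self._balance

class CheckingAccount(Account):         # subclass with its own fee policy
    def monthly_fee(self):
        return 5

class SavingsAccount(Account):
    def monthly_fee(self):
        return 1

# Polymorphic call: the same loop handles every account type.
accounts = [CheckingAccount(100), SavingsAccount(100)]
balances = [a.apply_fees() for a in accounts]   # [95, 99]
```

This is also what makes development "by slices" possible: a new account type can be added and exercised through the same loop without touching the existing classes.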

OOP is a different style of programming from traditional procedural programming. Hence, it has given rise to a whole family of software development models. Here we will describe the popular Booch Round-Tripping model,7 as shown in Figure 1.8. This model assumes a pair of coordinated tool sets—one for analysis and design and another for program development. For example, you can use the Unified Modeling Language (UML) to graphically describe an application program or system as a class hierarchy. The UML description can be fed to the IDE to produce a Java or C++ program, which consists of the housekeeping and control logic plus a large number of stubs and skeleton programs. The various stub and skeleton programs can be coded to a greater or lesser extent to develop the program to a given level or "slice" of functionality. The code can then be fed back, or "round-tripped," to the UML processor to create a new graphical description of the system. Changes and additions can be made to the new UML description and a new program generated.

This general process is not really new. The Texas Instruments IEF tool set and the Excelerator tool set both allowed the same process with procedural COBOL programs. These tools proved their worth in the preparation for the Y2K crisis. A working COBOL application with two-digit year dates could be reverse-engineered to produce an accurate flowchart of the application—not as it was originally programmed, but as it was actually implemented and running. It could then be modified at a high level to add four-digit year date capability. Finally, a new COBOL program could be generated, compiled, and tested. This older one-time reverse engineering is now built into the design feedback loop of the Booch Round-Trip OOP development model. It can be further supported with code generators that can create large amounts of code based on recurring design patterns.
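The forward half of round-tripping—model to stubbed code—can be caricatured with a tiny generator that turns a class description into a skeleton of stub methods for programmers to flesh out. The description format and the generated shape are invented for this sketch; real UML tools emit far richer output.

```python
# Toy forward-generation step: a class description (standing in for a
# UML model) becomes a source skeleton of stub methods.
def generate_skeleton(class_name, methods):
    lines = [f"class {class_name}:"]
    for m in methods:
        lines.append(f"    def {m}(self):")
        lines.append("        raise NotImplementedError  # stub to be coded")
    return "\n".join(lines)

skeleton = generate_skeleton("Order", ["add_item", "total", "submit"])
```

Round-tripping is the reverse step this sketch omits: parsing the (now partly coded) source back into a model so that design changes and code changes stay synchronized.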

Figure 1.8

Figure 1.8 Round-Tripping Model

Iterative Development or Evolutionary Model

The iterative development model is the most realistic of the traditional software development models. Rather than being open-loop like the build-and-fix or original waterfall models, it has continuous feedback between each stage and the one before it, and in well-developed versions occasional feedback across several stages, as illustrated in Figure 1.9. In its most effective applications, this model is used in an incremental, iterative way: applying feedback from the last stage back to the first results in each iteration producing a usable, executable release of the software product. The lower feedback arrow in the figure indicates this feature, although the combined incremental-iterative schema is often drawn as a circle. The model has been applied to both procedural and object-oriented program development.

Figure 1.9

Figure 1.9 Iterative Model of Software Development

Comparison of Various Life-Cycle Models

Table 1.1 is a high-level comparison of the software development models discussed here, gathered into groups or categories. Most are versions or enhancements of the waterfall model. The fundamental difference among the models is the amount of engineering documentation generated and used. Thus, a more "engineering-oriented" approach may carry higher overhead, but it can support the development of larger systems with less risk, including complex systems with long life cycles that involve maintenance and extension requirements.

Table 1.1 Comparison of Traditional Software Development Models

Model                 Pros                                   Cons
Build-and-fix         OK for small one-off programs          Useless for large programs
Waterfall             Disciplined, document-driven           Result may not satisfy client
Rapid prototyping     Guarantees client satisfaction         May not work for large applications
Extreme programming   Early return on software development   Has not yet been widely used
Spiral                Ultimate waterfall model               Large system in-house development only
Incremental           Promotes maintainability               Can degenerate to build-and-fix
OOP                   Supported by IDE tools                 May lack discipline
Iterative             Can be used by OOP                     May allow overiteration

