- The Paradox of Software Engineering
- The Modern Definition of Software Engineering
- Is Software Engineering a Good Choice for Your Project?
To understand software engineering, we first need to look at the projects reported in the early software engineering literature. One feature is immediately striking: the absence of reports on commercial applications. Most case studies are of either large defense projects or small scientific projects. In either case, the projects typically involved severe hardware and software challenges that are not relevant to most modern projects.
A typical example is the SAFEGUARD Ballistic Missile Defense System, which was developed from 1969 through 1975. "The development and deployment of the SAFEGUARD System entailed the development of one of the largest, most complex software systems ever undertaken." The project took 5,407 staff-years, starting with 188 staff-years in 1969 and peaking at 1,261 staff-years in 1972. Overall productivity was 418 instructions per staff-year.
SAFEGUARD was a very large software engineering project that challenged the state of the art at the time. Computer hardware was specially developed for the project. Although the programming was done in low-level languages, the Code and Unit Test activities required less than 20% of the overall effort. System Engineering (requirements) and Design each consumed 20% of the effort, with the remainder (more than 40%) being accounted for by Integration Testing.
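The reported figures can be combined in a back-of-the-envelope calculation. This is only a sketch: the staff-year and productivity numbers come from the text above, but the derived system size and per-phase effort are estimates, not figures reported for SAFEGUARD.

```python
# Back-of-the-envelope check on the SAFEGUARD figures quoted above.
# Inputs are from the text; the derived totals are rough estimates only.

staff_years_total = 5407           # total project effort, in staff-years
instructions_per_staff_year = 418  # reported overall productivity

# Rough size of the delivered system implied by those two numbers
total_instructions = staff_years_total * instructions_per_staff_year
print(f"Implied system size: ~{total_instructions:,} instructions")

# Approximate effort distribution described in the text
effort_fractions = {
    "System Engineering (requirements)": 0.20,
    "Design": 0.20,
    "Code and Unit Test": 0.20,   # "less than 20%" -- treated as an upper bound
    "Integration Testing": 0.40,  # "more than 40%" -- treated as a lower bound
}
for phase, fraction in effort_fractions.items():
    print(f"{phase}: ~{fraction * staff_years_total:,.0f} staff-years")
```

The striking point the arithmetic makes concrete is that writing the code was the smallest slice of the effort; integration dominated.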
The Paradox of Software Engineering
In trying to understand software engineering, we need to keep two points in mind:
Projects the size of SAFEGUARD are extremely rare.
These very large projects (1,000-plus staff-years) helped to define software engineering.
Similarly, The Mythical Man-Month by Fred Brooks was based on IBM's experiences in developing the OS/360 operating system. Although Brooks wrote that large programming projects suffer management problems different in kind from those of small ones, precisely because of the division of labor, his book is nevertheless still used to support the ideas behind software engineering.
These very large projects are really systems engineering projects: combined hardware and software efforts in which the hardware is developed in conjunction with the software. A defining characteristic of this type of project is that initially the software developers have to wait for the hardware, and then by the end of the project the hardware people are waiting for the software. Software engineering grew out of this paradox.
What Did Developers Do While Waiting for the Hardware?
Early in the typical software engineering project, there was plenty of time. The hardware was still being invented or designed, so the software people could thoroughly investigate the requirements and produce detailed design specifications for the software. There was no point in starting to write the code early, because the programmers lacked hardware on which to run it (and in many early examples, the compilers and loaders were not ready either). In some cases, the programming language was not even chosen until late in the project. So even if some design specifications were complete, it was pointless to start coding early.
In that context, it made sense to define a rigorous requirements process with the goal of producing a detailed requirements specification that could be reviewed and signed off. Once the requirements were complete, this documentation could be handed off to a design team, which could then produce an exquisitely detailed design specification. Detailed design reviews were a natural part of this process, as there was plenty of time to get the design right while waiting for the development of the hardware to advance to the point where an engineering prototype could be made available to the software team.
How Did Developers Speed Up Software Delivery Once the Hardware Became Available?
The short answer is, "Throw lots of bodies at the problem." This was the "human wave" approach that Steven Levy described and that can be seen in the manpower figures reported from the SAFEGUARD project. As soon as the hardware became available, it made sense to start converting the detailed design specifications into code. For optimum efficiency, the code was reviewed to ensure that it conformed to the detailed design specification, because any deviation could lead to integration problems downstream.
Lots of people were needed at this stage because the project was waiting for the software to be written and tested. So, the faster the designs could be converted into tested code, the better. Early software engineering projects tended to use lots of programmers, but later on the emphasis shifted toward the automatic generation of code from the designs through the use of CASE tools. This shift occurred because project teams faced many problems in making the overall system work after it had been coded. If the code could be generated from the design specifications, then projects would be completed faster, and there would be fewer problems during integration.
Implications for the Development Process
Software engineering projects require lots of documentation. During the course of a project, three different skill sets are needed:
Analysts to document the requirements
Designers to create the design specifications
Programmers to write the code
At every stage, the authors of each document must add extra detail because they do not know who will subsequently read it. Because they cannot assume any common background knowledge, the only safe course is to include every bit of detail and cross-referencing the author knows. The reviewers must then go through the document to confirm that it is complete and unambiguous.
Complete documentation brings with it another challenge: team members must ensure that the documents remain consistent in the face of requirements changes and design changes made during implementation. Software engineering projects tackle this challenge by ensuring complete traceability from requirements through to implemented code, so that whenever a change must be made, all of the affected documents and components can be identified and updated.
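The traceability described above is essentially a mapping from requirements through design elements to code. A minimal sketch follows; the requirement IDs, design-element IDs, and module names are all hypothetical, invented purely for illustration.

```python
# Minimal sketch of a requirements-to-code traceability table.
# All IDs and file names below are hypothetical examples.

trace = [
    # (requirement, design element, code module)
    ("REQ-12", "DES-03", "radar_filter.c"),
    ("REQ-12", "DES-04", "track_store.c"),
    ("REQ-27", "DES-09", "intercept_calc.c"),
]

def impacted_by(requirement: str) -> list[tuple[str, str]]:
    """List every design element and module that must be revisited
    if the given requirement changes."""
    return [(design, module) for req, design, module in trace
            if req == requirement]

print(impacted_by("REQ-12"))
```

With the table above, a change to REQ-12 flags both of its design elements and code modules for review, which is exactly the update-everything-affected behavior the process demands.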
This document-driven approach affects the way the people on the project work together. Designers are reluctant to question the analysts, and the programmers may be encouraged neither to question the design nor to suggest "improvements" to it. With so many documents to keep consistent, changes are very expensive, so they must be controlled.
A great way to control changes from the bottom is to define a project hierarchy that puts the analysts at the top, with the designers below them, and the programmers at the bottom of the heap. This structure is maintained by promoting good programmers to become designers and allowing good designers to undertake the analysts' role.