Introduction to Requirements Analysis
- About Requirements Analysis
- From Analysis to Design
- About This Book
A few years back your author attended a dress rehearsal of the Houston Grand Opera's production of Richard Wagner's Lohengrin. I was part of an audience of maybe five people in Houston's great opera theater, the Wortham, and it was as though the entire production were being put on for me personally. It was wonderfully impressive.
During one of the more spectacular scene changes, where it takes about thirty minutes for our hero to arrive on stage in a boat pulled by swans (figuratively speaking, at least; the swans weren't real), I started thinking about what I was seeing. In addition to the dozen or so leads, there were seventy-five choristers. The orchestra in the pit had over one hundred players. There had to be close to fifty technicians about (stage crew, lighting engineers, the guy who ran the surtitle machine, etc.), not counting the set designers and builders, the makeup people, the costumers, and so forth. And then there was the Houston Grand Opera administration. Altogether, nearly three hundred people were working together to produce one of the most spectacular pieces of stage work I had ever seen.
In our industry, we're lucky if we can get three people to cooperate. Why is that?
The secret to Lohengrin is, of course, Richard Wagner. Some 150 years ago he conceived this opera and documented it to a high degree of detail. Most significantly, he produced the score and the libretto. Every actor, every chorister, and every musician has a script to follow. The set designer, to be sure, has some latitude. In this case Adrianne Lobel based the sets on the surrealist works of René Magritte. This certainly gave the stage a distinctive appearance. But even the stage crew, who have less direct guidance from Wagner, have tasks that follow both from the set designs and the actions on stage.
What we so often are missing in our business is the score.
Requirements analysis is the process of creating a score for a systems effort. What is the objective of the effort? What are its components? Who should do what? Absent the score, each person does what seems appropriate, given a particular view of things. The result is neither coordinated nor integrated and often simply does not work. It certainly does not last 150 years.
Back in the old days, programmers simply wrote programs to perform specific tasks. If you knew what the task was, you could write the program. Improvisation was fine back then. Programming was more like a jazz concert than an opera. Now, however, we are building systems to become part of the infrastructure of an organization. We cannot build them without understanding the nature of that infrastructure and what role the systems will play in it. You cannot construct an opera without a score.
There is an unfortunate tendency in our industry to respond to the various pressures of system development by short-circuiting the analysis process. We don't sit down before creating a system to decide what it will look like and, by implication, how we will get there. It's not that we don't know how. It's just that multiple, conflicting demands often force us to take shortcuts and skip the specification step.
This invariably costs us more later. We clearly do not produce the systems equivalent of great opera.
One main problem with short-circuiting the analysis process is that it leads to unnecessarily complex systems. It is important to understand that, while simple systems are much easier to build than complex ones, simple systems are much harder to design.
You have to be able to see the underlying simplicity of the problem. This is not easy.
Analysis of requirements should be done by people who are able to focus on the nature of a business and what the business needs by way of information. It should not be done by people immersed in the technology they assume will be used for solving whatever problems are discovered.
Consider, for example, the following poem:
Un petit d'un petit
S'étonne aux Halles
Un petit d'un petit
Ah! degrés te fallent
Indolent qui ne sort cesse
Indolent qui ne se mène
Qu'importe un petit d'un petit
Tout Gai de Reguennes.
Luis d'Antin van Rooten
Mots d'Heures: Gousse, Rames [Beer, 1979, p. 301]
If you know French, you will find this impossible to read. It looks like French. It has all the structures of French. But it is completely wrong! It makes no sense. ("A little of a little astonishes itself at Halles"?) On the other hand, if you don't know French but have a friend who does, ask that person to read it aloud. If you listen very carefully with a non-French ear, you will figure out what it really is.1
The point is, your ability to see the problem depends entirely on your perspective. No matter how hard you study it, if you come at it from the wrong direction, you simply will not see what is in front of you.
The techniques described in this book will show you how to look at problems from a different direction in order to see the true nature of an enterprise and, with that, its requirements for new systems. Then you can design systems that, as part of the infrastructure of that enterprise, truly support it rather than adding yet another burden to its operation.
About Requirements Analysis
How do we capture what is required of a new software product? How do we do so completely enough that the requirement will last at least until the product is completed, if not longer?
In 1993, after spending over half a billion dollars on it, the London Stock Exchange scrapped its "Taurus" project (intended to control the "Transfer and AUtomated Registration of Uncertificated Stock"). It had been Europe's biggest new software undertaking. What went wrong?
The problem was failure to do an adequate analysis of requirements. Requirements for the project were not clearly defined, and they were changed constantly by the myriad of players in the project. "Share registrars, anxious to protect their jobs, demanded a redesign of the system to mimic today's paper-shuffling. Scrapping share certificates called for 200 pages of new regulations. Computer security, with all messages encrypted, went over the top. Companies' insistence on the 'name on register' principle, which allows them to identify their shareholders instantly, made design harder. And so on." [Economist, "When the bull turned", 1993, p. 81]
The Economist, in an essay accompanying the story of the crash, discusses the reasons projects fail. "Software's intangibility makes it easy to think that the stuff has a Protean adaptability to changing needs. This brings with it a temptation to make things up as you go along, rather than starting with a proper design. [Even] if a proper design is there to begin with, it often gets changed for the worse half-way through. . . . Engineers in metal or plastic know better than to keep fiddling and so should software engineers.
"The fact that software 'devices' can have flexibility designed into them should not mislead anyone into the belief that the design process itself can be one of endless revision . . . .
"Successful software projects require two things: customers who can explain what sort of job needs doing, and engineers who can deliver a device that will do the job at a price that makes doing the job worthwhile. Lacking either, engineers must be honest enough to say that they are stymied." [Economist, "All fall down", 1993, p. 89]
This book is about understanding an organization well enough to determine "what sort of job needs doing". This requires several things:
- A close relationship with the project's customers, ideally via a project champion
- Effective project management
- A known and understood set of steps
Our first requirement is for development of a special sort of relationship with our customers, as well as skill in knowing how to capture and represent what we are told.
The second requirement, effective project management, means nothing other than assuring that you have chosen the most capable project manager available.
The third requirement is a clearly defined set of steps. This is where this book is especially helpful. Chapter 2 describes the steps required for success, and the remaining chapters describe the work to be done during those steps.
What is this company (or government agency)?2 What is it about? How does it work? If we are to create a system significant enough to affect its infrastructure, we'd better know something about that infrastructure. This means that defining requirements for an enterprise begins by describing the enterprise itself. This book is primarily a compendium of techniques to do just that.
There are numerous ways to describe an enterprise: data models, data flow diagrams, state/transition diagrams, and so forth. Many people have been working for many years to develop the techniques we use today.
In the mid-1970s Ed Yourdon and Larry Constantine wrote their seminal book, Structured Design,3 which for the first time laid out coherent criteria for the modular construction of programs. It presented the structure chart4 and described what makes one modular structure effective and another not so effective.
Mr. Yourdon next collected around himself a number of other talented people who themselves contributed greatly to the body of system development knowledge. Among others, these included Tom DeMarco, Chris Gane, and Trish Sarson. In 1978, Mr. DeMarco wrote Structured Analysis and System Specification, and a year later, Ms. Sarson and Mr. Gane wrote Structured Systems Analysis: Tools and Techniques. Both books described the data flow diagram (albeit with different notations) as a technique for representing the flow of data through an organization. Later, in their book Essential Systems Analysis, Stephen McMenamin and John Palmer refined the data flow diagram technique with a formal way of arriving at the essential activities in a business.
Together with structured design, these techniques became the industry standard for describing information systems, although their use was limited by the lack of tools for producing the diagrams. Only those souls deeply dedicated to the principle of disciplined system development were willing to prepare the diagrams by hand. And once complete, the diagrams couldn't easily be changed; they had to be redrawn whenever circumstances changed.
The first CASE (computer-aided systems engineering) tools appeared in about 1980, making the diagramming much easier to carry out and therefore accessible to more people. Even so, it was clear that by organizing our efforts around the activities of a business, we were vulnerable to the fact that business processes often change. While good structured design techniques made programs more adaptable to change, it would have been better for them to accommodate change in the first place.
In 1970 Dr. E. F. Codd published "A Relational Model of Data for Large Shared Data Banks", defining the relational model for organizing data. While the technology for taking advantage of his ideas would not be practical for another fifteen years, he planted the seed that there was a way to understand and organize data far superior to any that had gone before. The process of normalization is a profound way to understand the true nature of a body of data. It provides a set of rules for assuring that each piece of information is understood once and only once, in terms of the one thing that it describes. Databases organized around this principle keep data redundancy to an absolute minimum, and in such databases it is easy to determine where each datum belongs.
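The principle can be illustrated with a minimal sketch. The tables and fields below are invented for the example (they are not from Codd's paper); Python dictionaries stand in for database tables. Note how the normalized form states each fact once, in terms of the one thing it describes:

```python
# Unnormalized "orders" table: the customer's city is repeated on
# every order row, so changing it means updating many places.
unnormalized = [
    {"order_id": 1, "customer": "Acme", "city": "Houston", "item": "widget"},
    {"order_id": 2, "customer": "Acme", "city": "Houston", "item": "gadget"},
    {"order_id": 3, "customer": "Bolt", "city": "Dallas",  "item": "widget"},
]

# Normalized: the city describes the customer, not the order, so it
# is stored once, with the customer.  Orders refer to the customer.
customers = {
    "Acme": {"city": "Houston"},
    "Bolt": {"city": "Dallas"},
}
orders = [
    {"order_id": 1, "customer": "Acme", "item": "widget"},
    {"order_id": 2, "customer": "Acme", "item": "gadget"},
    {"order_id": 3, "customer": "Bolt", "item": "widget"},
]

# Updating a customer's city now touches exactly one row, and every
# order automatically reflects the change.
customers["Acme"]["city"] = "Austin"
```

The redundancy in the first form is exactly what normalization's rules are designed to detect and remove.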
From this came Peter Chen's work in 1976, "The Entity-Relationship Model: Toward a Unified View of Data", in which he was the first to describe the entity/relationship model (a kind of data model). Here you had a drawing that represented not the flow of information through an organization, but its structure.
Inspired by his work, Clive Finkelstein created a notation derived from Mr. Chen's and went on to create what he called information engineering, which recognized that data structure was much more stable than data flows when it came to forming the foundation for computer systems.5 He also recognized that the process of building systems had to begin with the strategic plans of the enterprise and had to include detailed analysis of the requirements for information. Only after taking these two steps was it appropriate for a system designer to bring specific technologies into play.
Mr. Finkelstein collaborated with James Martin to create the first publication about information engineering in 1981. This was the Savant Institute's Technical Report, "Information Engineering". Mr. Martin then popularized information engineering throughout the 1980s. With the appearance of viable relational database management systems and better CASE tools, information engineering, with its orientation toward data in systems analysis and design, became the standard for the industry by the end of the decade.
As these things were going on in the methodology field, object-oriented programming was developing. Whereas programs originally tended to be organized around the processes they performed, the real-time systems and simulation languages developed in the 1960s revealed that organizing these programs instead around the data they manipulated made them more efficient and easier to develop.
All data described "objects", so identifying objects and defining the data describing those objects provided a more robust program structure. In the late 1970s the modularization ideas of Messrs. Yourdon and Constantine also contributed to this approach to program architecture.
As business programs became more and more oriented toward "windows" or screen displays, it became clear that they shared many characteristics with real-time systems, so the object-oriented approach fit there as well.
In their 1988 book, Object-oriented Systems Analysis: Modeling the World in Data, Sally Shlaer and Stephen J. Mellor brought the concepts underlying object-oriented programming together with information engineering and its data-centric approach to system architecture. They renamed entity/relationship diagrams "object models" and created their own notation for them. Thus, for the first time, a data model could be either an entity/relationship model or an object model. Then in 1991 James Rumbaugh and his colleagues followed with Object-oriented Modeling and Design, again referring to object modeling but adding their own notation. In 1990 Ed Yourdon and Peter Coad added their object-modeling notation in Object-oriented Analysis. Other books added yet more notation schemes.
Then, in 1997, the first version of the Unified Modeling Language ("the UML") was published by the Object Management Group. It was intended to replace all of the object modeling notation schemes with a single technique for entity/object modeling. This was brought about through the collaboration of James Rumbaugh, Grady Booch, and Ivar Jacobson, but it was in fact based on the work of David Embley, Barry Kurtz, and Scott Woodfield (Object-oriented Systems Analysis: A Model-Driven Approach, first published in 1992). The UML has since been the basis for yet more books on the subject of object-oriented modeling.
Note that this "object-oriented analysis" is not significantly different from information engineering. Both are concerned with entities and entity types that are "things of significance to the enterprise" (called "objects" and "object classes" by the object-oriented community). That is, both view systems development from a data-centric point of view.
What is new in object-oriented modeling is the combination of entity/relationship models and behavioral models. In the object-oriented world, each object class (entity type) has defined for it a set of activities that it can "do". This made more sense, however, in the world of object-oriented programming, where the object and the behavior were both bits of program code. The activities of an enterprise are often far more complex than can be described on an entity-type by entity-type basis. The idea is not unreasonable, but it cannot readily be done with the kind of pseudocode typically associated with object classes. The behavior of entities in analysis is better described with a technique called "entity life histories". (See Chapter 7.)
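To make the contrast concrete, here is a minimal Python sketch of the object-oriented idea; the class and its operation are hypothetical, invented purely for illustration. The attributes are the entity-type (data-model) side; the method is the behavior attached to the class:

```python
# In object-oriented modeling, an object class carries both its
# attributes (what an entity/relationship model captures) and the
# operations it can "do" (the behavioral model).
class PurchaseOrder:
    def __init__(self, number: str, total: float):
        self.number = number      # attribute: the data side
        self.total = total        # attribute: the data side
        self.approved = False

    def approve(self) -> None:    # behavior attached to the class
        self.approved = True


po = PurchaseOrder("PO-001", 250.0)
po.approve()
```

This works well when, as in programming, both the data and the behavior are bits of code. The point of the paragraph above is that enterprise activities rarely decompose so neatly, one class at a time.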