This chapter is from the book

Process, Process, All the Way

Together ControlCenter doesn't constrain you to work in a particular way, but if you want to change the way you work, then you can. We have a similar aim for this book. We don't want to constrain teams to a particular process; rather, we want to introduce the philosophy and ideas that have inspired us regarding modern development processes and to help you to evolve the appropriate development processes for your own organizations and projects. So, let us lay out some of our goals.

Building Only What Is Needed

We asked in the preface to the book, What's the least that a team needs to do to deliver excellent software? There's a related question: Is it a bad thing to do more than this? We think it is.

This is a variant of what might be termed the "girls' boarding school rule," which goes along the lines of, If it's not compulsory, then it's forbidden! While this is probably a slur on modern girls' boarding schools, it is quite useful when considering the standards you want to apply to an agile software development process. We are looking for the minimum set of artifacts that must be produced and maintained to be consistent with the actual software. At the same time, we want to remove the need to maintain all other artifacts.

To say that additional documents are forbidden is definitely an overstatement. Team members and others should feel free to produce documents that are not in the minimum set if they will be useful. The key point is that such documents will not be maintained unless they are subsequently recognized as being additions to the minimum essential set. At that point, they must be reviewed for completeness and consistency, and updated build by build.

Team members often gather around a whiteboard to figure out how to implement some feature or functionality. The diagram on the whiteboard is not considered a deliverable of the project; once it has done its job, it is erased. Only if the content of the diagram becomes the adopted design does the diagram become a delivered artifact. Of course, when you are using Together, you can get the design from the implemented source code or you can enter the diagram and use the resulting code as the starting point for the implementation. Either way, it is these elements that form the essential maintainable artifacts.

Occasionally, a document will be found to be so useful that it will become a formal part of the minimal deliverables. But when this is done, the attendant costs of maintaining that document must be balanced against its usefulness.

A Big Sheet of Paper

Dan Haywood

While contracting at one of the investment banks, I found myself assigned to keep track of hardware and system software (databases and operating systems) for the global credit and market risk department. This was at the end of the last century(!), and the bank had three major projects upcoming: merging with another investment bank, upgrading systems to support the new euro currency, and Y2K.

There was a fair degree of reconciliation required for these projects in terms of decommissioning old hardware, upgrading system software, and rationalizing system and application software onto fewer, faster machines. The contingency hardware was also either nonexistent or pitifully underpowered, so it too had to be brought into consideration as part of disaster recovery planning, part and parcel of the Y2K program.

Initial efforts to come to grips with the production, test, and development environments using spreadsheets and such proved ineffective. Then I decided to use a drawing tool, invent a few symbols for elements of the configuration, and print on really big sheets of paper. Suddenly, it all became easy. The managers could immediately see which boxes were to be decommissioned, what system and application software ran on what boxes, what software would be migrating to new hardware, and which system software required upgrade.

I left that contract a while ago, though I occasionally still visit the folks at the bank. I note that even now they keep a big sheet of paper up on the wall, even though those three projects are long finished. Though this particular artifact was discovered almost by accident, its value was proved, and so now the effort to keep it up to date is accepted.

Essential Elements

If we are seeking to build only what is needed, what are the essential elements of the system and its documentation? Every development process seems to involve the following essential elements:

  • Requirements (use case, user story, feature, requirements paragraph)

  • Design Statements (interaction diagrams—sequence or collaboration)

  • Implementation Elements (class, operation)

  • Tests (functional test, unit test, or level-of-service test)

These are the four basic elements that we must maintain and keep in step and up to date. These elements have a natural relationship with each other, as Figure 1-4 shows.

Figure 1-4 The essential elements of Requirements, Design, Implementation, and Test.

Each requirement is tested by a functional test, and its design is shown by interactions, for example, one or more sequence diagrams. These interactions reference a set of operations, each belonging to some class and in turn tested by at least one unit test.

The relationships between these elements of requirements, design, implementation, and test are important, and we need to capture the links in the model we store in Together. The relationships are

  • Design (interaction diagrams) fulfills Requirement(s) (use cases, say)

  • Design is implemented by Implementation Elements (classes and operations)

  • Functional Test tests (Functional) Requirement(s)

  • Unit Test tests Implementation Element(s)

The directionality of these relationships in Figure 1-4 shows the dependency between the elements, which is also important as it shows what needs to be checked and updated when things change. The design fulfills the requirements and so is dependent on them. The design refers to implementation elements, which are used to implement the feature, and so is dependent on them too. The implementation elements, however, are not necessarily dependent on the requirements of this project. Many classes and operations, for example, may be reusable elements developed independently of the specific functional requirements.
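This dependency direction can be sketched as a tiny data model. The class and function names below are purely illustrative, not part of Together: the point is only that a changed requirement flags the designs that fulfill it for review, while implementation elements remain independently reusable.

```python
# Illustrative sketch (not Together's API) of the four essential elements
# and the dependency direction described above.

class Requirement:
    def __init__(self, name):
        self.name = name  # e.g., a use case or feature

class ImplementationElement:
    def __init__(self, name):
        self.name = name  # a class or an operation

class Design:
    """An interaction (sequence or collaboration diagram)."""
    def __init__(self, fulfills, implemented_by):
        self.fulfills = fulfills              # Design fulfills Requirement(s)
        self.implemented_by = implemented_by  # Design is implemented by elements

def designs_to_review(changed_requirement, designs):
    """When a requirement changes, the designs that fulfill it must be
    reviewed; implementation elements may be independently reusable and
    are not automatically impacted."""
    return [d for d in designs if changed_requirement in d.fulfills]
```

Because the dependency arrows point from design toward requirement and implementation, a change to a reusable class would trigger review of the designs that reference it, but never the reverse.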

When a requirement is added, deleted, or changed, the design and its tests must be reviewed to ensure they are still consistent and applicable. Dependency has a well-known effect in source code where it controls, for example, the order of compilation and recompilation. In an iterative lifecycle this effect extends as well to the other elements of the single model and will drive the consequential actions resulting from a change.

We'll revisit the essential elements shortly, looking at how they might be represented within the Together model. Let us consider for a moment the process of change that surrounds them.

Nonlinear Lifecycles Are Always in the Middle

Most software development processes get defined in terms of what you do in sequence from a start point, say the award of the project contract, to an end point, customer sign-off of the finished software. Like our waterfall model in Figure 1-2, they are defined from end to end with feedback loops. However, as soon as the process has started and some of the feedback loops have been followed, we are "in the middle," with activities from multiple phases actually happening concurrently. We therefore have to address concurrent updating of all the essential artifacts.

From the point of view of the technical tasks of defining the requirements, the tests, the design, and the implementation, it is perhaps more useful to think of the development process in this way: when a reasonably complete set of artifacts exists (from end to end of a traditional lifecycle), what do you need to do to change the model, rather than define it from scratch? You are never at the start here, or for that matter at the end; you are always in the middle of the lifecycle, addressing the four essential elements of requirements, design, implementation and tests in all the phases of the project.

Linear lifecycles are an oversimplification. It is useful to stress that comprehension (the goal of analysis) must precede invention (the goal of design), and invention must precede implementation. But the actual activities cannot be divided into phases of the project lasting some number of weeks or months. D. L. Parnas and P. C. Clements expressed this well in the title of a paper they published in 1986, "A Rational Design Process: How and Why to Fake It." They emphasized that the usefulness of the linear lifecycle was not that it told team members what activity to do next—many different activities from the different phases of the lifecycle in fact have to be addressed simultaneously. The real value of such a rational design process was that it defined a scheme for organizing the artifacts produced in the process in such a way that designers, reviewers, new members of the project, and maintainers could find the necessary information easily.

When developing a software system, there will be some features that are being specified—the requirements are being gathered and tests for them being defined. Meanwhile, the design is being worked out for other features. Yet other features are being implemented, and there could be a unit, integration, or system test being carried out on yet other features. At different times in the project, a different emphasis and differing amounts of time will need to be spent on each of the four elements. The important point here is that to some extent all of these things are happening at the same time.

We talk more about this style of development and how well it works in Chapter 5, "The Controlling Step: Feature-Centric Management."

The Minimum Metamodel

Let us combine the two ideas presented above: the four essential elements and being always in the middle. We like to refer to the essential elements of the process as the minimum metamodel, constituting the minimum set of deliverables needed for the system.

A Model of Completeness

A metamodel defines the basis of the modeling system (Carmichael 1994a), and this minimum metamodel provides the basis (as simply as possible) for defining the model's completeness. Figure 1-5 shows a metaclass diagram for the concepts concerned. Effectively, this figure views the same metamodel as Figure 1-4 but displays more details. It also shows some optional features, such as business process activity diagrams that describe the process behind a given requirement, and state diagrams, which give more detailed definition to the behavior of a Class.

Figure 1-5 The minimum metamodel.

The minimum metamodel builds upon the four essential elements introduced earlier, making concrete the artifacts and required relationships that realize these elements within a Together project. Let's walk through this diagram observing how Together can make the relationships concrete.

  • Requirements are realized as use cases, features, or both.

    • Use cases are catalogued in use case diagrams.

    • Features can be recorded using feature lists.

    • Together allows use cases to be hyperlinked to lower level use cases or features that implement a higher level use case.

  • Business process activity diagrams also have a relationship with requirements; usually, such a diagram specifies the human activity or business process associated with the requirement.

    • In Together, activity diagrams can hyperlink to the use cases (or features) that realize the requirements.

    • Units of user documentation can also be hyperlinked to their related requirements.

  • Interactions are realized as either sequence or collaboration diagrams.

  • Interactions fulfill requirements.

    • In Together, interaction diagrams can be linked to their requirements.

  • Interaction diagrams contain references to objects and messages, where an object is instantiated from a class and a message is an instance of a call to an operation provided by the class of the object to which the message is sent.

    • These links are contained within a Together interaction diagram. If an object's class is identified in a sequence diagram, then the message can be associated directly with the operations of that class.

  • Classes have operations.

    • This relationship is explicit in the source for the class. In Together, this relationship is obtained directly by the LiveSource engine parsing the code.

  • Some classes may also have a state chart diagram.

    • The state chart diagram should reference the class directly in Together.

  • State chart diagrams identify a number of actions that can occur.

    • When the target class for the state chart diagram is identified, the actions within the state chart diagram can be associated with the operations of the target class.

  • Test suites consist of test cases, which in turn consist of tests. Tests are functional tests, nonfunctional tests, or simple unit tests.

    • Functional tests link to functional requirements.

    • Unit tests link to operations of classes.

    • Together's test facilities can be used to implement the tests and link them to the tested elements.

When following the minimum metamodel in Together, some links between artifacts are implicit and some are explicit. Explicit links can be defined using Together's hyperlinks or by references in the Properties Inspector. We discuss in more detail in Chapter 6 which links are appropriate to add, and how the audit facilities can be customized to report on the completeness of our models.
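As a rough illustration of the kind of check such an audit performs (this is a hypothetical sketch, not Together's hyperlink or audit API; the diagram names are invented), explicit links can be modeled as a set of triples and queried for interaction diagrams that fulfill no requirement:

```python
# Hypothetical sketch of an explicit-link registry and one audit query.
# Artifact names and the "fulfills" relationship label are illustrative.

links = set()  # (source artifact, relationship, target artifact)

def link(source, relationship, target):
    links.add((source, relationship, target))

def unlinked_interactions(interactions):
    """Interaction diagrams with no 'fulfills' link to any requirement."""
    linked = {src for (src, rel, tgt) in links if rel == "fulfills"}
    return [i for i in interactions if i not in linked]

link("TransferFunds.seq", "fulfills", "UC-01 Transfer Funds")
print(unlinked_interactions(["TransferFunds.seq", "CloseAccount.seq"]))
# -> ['CloseAccount.seq']
```

An audit built this way reports gaps in the model's cross-references rather than errors in the code itself, which is exactly the role the completeness checks play in the minimum metamodel.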

Perturbations Cause Iterations

The point of this metamodel is to define the set of artifacts (within a single model) that should be created by specifiers, designers, and implementers, and the cross-references between them. When a stable build is produced, its requirements will have corresponding designs, and these designs will reference its implementation, and the requirements and implementation will have valid tests that run and pass. This is our destination, but it is also our starting point—a starting point that unfortunately does not last! Accepting another requirement into the build (or indeed a fault report, which is a kind of requirement) disturbs the equilibrium and will result in the model becoming inconsistent, incomplete, or both.7 This triggers the development process to return the build to equilibrium.

In fact, not all of the requirements that have been defined at any point in time need have valid tests and fully implemented designs available. Requirements can be considered to be in one of two states relative to a given build:

  • on (i.e., implemented in this build), or

  • off (i.e., not implemented in this build, but planned for a later build)

If a requirement is on, then (according to the metamodel)

  • there should be at least one valid functional test.

  • that test should pass (this is verified by running the test).

  • there should be a design for the requirement.

  • the design should reference (be implemented by) implementation elements (that is, classes and operations).

  • all implementation elements should be tested by a valid unit test.

  • that unit test should pass (verified by running the test).

Conversely, if a requirement is off, then the constraints in the metamodel (for example, that there is at least one test) do not apply.
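These constraints lend themselves to a mechanical check. The sketch below is a hypothetical audit with illustrative field names, not Together's facilities; in practice, the "passes" flags would be established by actually running the tests.

```python
# Hypothetical completeness audit for the "on" constraints listed above.
# The dict field names ("on", "functional_tests", "designs", ...) are
# invented for illustration.

def completeness_errors(req):
    """Return the metamodel constraints that a requirement violates."""
    errors = []
    if not req["on"]:
        return errors  # constraints do not apply to an "off" requirement
    if not req["functional_tests"]:
        errors.append("no valid functional test")
    elif not all(t["passes"] for t in req["functional_tests"]):
        errors.append("functional test failing")
    if not req["designs"]:
        errors.append("no design for the requirement")
    for design in req["designs"]:
        if not design["implementation_elements"]:
            errors.append("design references no implementation elements")
        for elem in design["implementation_elements"]:
            if not elem["unit_tests"]:
                errors.append(elem["name"] + ": no valid unit test")
            elif not all(t["passes"] for t in elem["unit_tests"]):
                errors.append(elem["name"] + ": unit test failing")
    return errors
```

A build is in equilibrium exactly when this check returns no errors for every requirement that is switched on.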

Suppose you have a valid build of your application, and now you wish to start work on a new feature. First, you must ensure that your current build represents a stable system; we are wasting our time if we attempt to make a change to an unstable system. So, accepting the feature into a build triggers the change.

Let's run through some sample iterations:

  • We start off by running the full set of tests on our build. These pass and confirm we have a stable starting point.

  • We then change the state of a particular new requirement from off to on.

  • We retest the existing code with the full set of tests that now include those for our new requirement. Unsurprisingly, the tests fail8 (say, with 20 errors).

  • We consider how to design a solution for this feature and express that as one or more sequence diagrams.

  • We make some changes to the code. We compile, and get compilation errors. We refine our code some more and get a clean compile.

  • We test our code, and it fails, say with 15 errors. We figure out where the problem is (it's either in the implemented code or in the test) and make the correction. We test again, and we now get fewer errors.

  • We continue to refine our code or our tests, all the time learning more about both the problem and the solution.

  • We update our design diagrams to ensure the references to objects and messages link directly to real classes and operations.

  • Maybe we find an error in the specification, so we clarify that and update the tests before continuing.

  • Eventually, all the tests pass; the feature is implemented.

  • We have code that can be integrated with the team's current build. This may be different by now from the build we started from if other team members have integrated changes in parallel, so we book in our changes and rerun the tests.

  • Sadly, an interaction between our change and the parallel changes arises. We go about fixing the problem.

  • Eventually, all the tests pass, and the feature is now implemented and integrated with the team's latest build. We are back to the equilibrium state.
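In code terms, a single pass around this loop looks like ordinary test-first development. The example below is hypothetical (the discount rule and function name are invented for illustration): the new requirement's test fails against the old code, and passes once the feature is implemented.

```python
# Hypothetical new requirement, switched "on" for this build:
# orders over 100 get a 10% discount.

def total(price):
    # Before this iteration the body was simply `return price`, so the
    # new functional test below failed; adding the discount returns the
    # build to equilibrium.
    return price * 0.9 if price > 100 else price

# New functional test for the requirement:
assert total(200) == 180.0
# Existing tests still pass, confirming no regression:
assert total(50) == 50
```

Each failing assertion here plays the role of a perturbation: it tells us the build is out of equilibrium and another iteration is needed.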

Figure 1-6 shows a simplified view of this sequence of activities.

Figure 1-6Figure 1–6 Perturbations cause iterations.

There are a few observations that we need to make on this process:

  • In order to start enhancing a system, you need to deliver a system; you need a stable state to begin with. This means that delivering a system—even a very small one—is critical to future development.

  • The system you are trying to change must be stable.

  • Testing (and measuring) is the activity that "closes the loop"—it tells us when we no longer need to iterate.

  • Small perturbations will require fewer iterations than larger perturbations.

  • If the perturbation introduced is too large, then there is a chance that no number of iterations will get back to the stable state.

There is another interesting change that this approach brings to development. The whole team may be involved in evolving all the different artifacts and the different aspects of the application. This is in contrast to more rigid team environments that consist of business analysts, designers, coders, and testers organized into separate teams, possibly even using different tools and languages. Here, everyone is involved in all aspects of the software development lifecycle concurrently, sometimes collaborating with other team members and sometimes working alone, in order to deliver a particular feature. One model, one team. We believe this is a very positive aspect of the approach and is good for both morale and the effectiveness of development.
