
Wheels Within Wheels: Model Problems in Practice

This article by Kurt Wallnau, provided courtesy of the Software Engineering Institute, covers how to put model problems into action by using the 3-Rs of design risk reduction in component-based systems: Realize model solutions, Reflect on their utility and risk, and Repair the risks.
Special permission to reproduce "Wheels Within Wheels: Model Problems in Practice" (c) 2000 by Carnegie Mellon University, is granted by the Software Engineering Institute.
In my Spring 2000 COTS Spot column I explained the role of technology competence in the design of COTS-based systems, and described how to obtain this competence quickly by building toys and model solutions to model problems. In this article I will take the next step and describe how to put model problems into action by using the 3-Rs of design risk reduction in component-based systems: Realize model solutions, Reflect on their utility and risk, and Repair the risks. I will then relate the 3-Rs (denoted typographically as R3) to increasingly popular iterative software development processes.

The R3 Process

The characteristics of COTS software components—what those components do and how they do it—can and do influence the design activity. This influence might be felt as early as the conception phase of a software project. For example, it might be known that a component provides a capability that would be difficult or costly to implement as a custom solution. In this situation the component capability could very well wind up as a requirement for the system. In other words, the component capability would effectively contribute to defining the scope of a system, and the act of defining this scope would effectively lead to a de facto component selection decision. The same is often true later in the design activity, and quite often (I would go so far as to say usually) this involves not just the characteristics of single components but ensembles of components acting in concert to provide some service.

It is irrelevant whether we view the component as having scoped the system or the system as having defined requirements that led to the selection of a component: in either case competence in component capabilities is essential. Where there are gaps in our competence (and competence gaps are inevitable as the number of commercial components used in a system increases), there is risk. As I discussed in the May COTS Spot, toys and model problems can be used to generate this competence on the fly. But where does the idea for a particular model problem come from? How do we know which model problems to solve and, having solved them, what to do with the solutions? Answering these questions is what the R3 process is all about. An outline of this process is depicted in Figure 1, which includes (naturally) three key steps:

Figure 1: The R3 Process for Design Risk Reduction

  1. Realize a model solution. R3 begins with assumptions about system needs and a component ensemble that is believed to satisfy those needs. The designers sketch the workings of an ensemble, perhaps using component-interaction blackboards. Inevitably, questions will arise about how an ensemble works, or the manner in which the ensemble satisfies a need. If the need to be satisfied is critical, then it is essential to increase the level of understanding of the ensemble. The unknown property will itself suggest what kind of toy to build; previous design commitments (for example, component selections) will constrain how the toy is built, and the needs will define the evaluation criteria.

  2. Reflect on the qualities of the model solution. Model solutions are implementations that must be evaluated against criteria. Did the solution satisfy the criteria? Were additional evaluation criteria discovered that must be considered and, if so, how did the model solution stack up against these new criteria? (The discovery of new criteria usually heralds some sort of failure.) Answering these questions may involve benchmarking, snooping, or other invasive "black box visibility" techniques. In any event, one of two possibilities arises from this reflection: the model solution passes muster, in which case it becomes part of the design baseline, or it fails in some way to satisfy the evaluation criteria. Failure is not necessarily fatal to an ensemble's prospects.

  3. Repair the ensemble. It pays to be an optimist—or at least to be doggedly persistent—when developing COTS-based systems. Ensembles can be repaired by introducing new components, by using alternative components or component versions, by developing wrappers, or by any number of other strategies. Indeed, there are often several possible repair strategies for each deficiency detected. This has led us to develop evaluation techniques such as risk/misfit (the topic of a future column) to help structure component selection decisions that are dominated (or complicated) by the presence of multiple repair options. In any case, non-trivial repairs are hypotheses that must be tested, triggering a new iteration of R3.
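The three steps above can be sketched as a simple iterative loop. The sketch below is purely illustrative and makes several assumptions not found in the article: an "ensemble" is modeled as a set of component names, evaluation criteria as named predicates over that set, and repair strategies as functions that return a modified ensemble. The names `r3_loop`, `criteria`, and `repairs` are hypothetical.

```python
def r3_loop(ensemble, criteria, repair_strategies, max_iterations=5):
    """Iterate Realize -> Reflect -> Repair until the ensemble passes muster.

    ensemble: set of component names (an illustrative stand-in for a real build)
    criteria: dict mapping criterion name -> predicate over the ensemble
    repair_strategies: dict mapping criterion name -> repair function
    Returns the accepted model solution, or None if the ensemble cannot be repaired.
    """
    for _ in range(max_iterations):
        # 1. Realize: build a model solution (here, just a snapshot of the ensemble).
        solution = frozenset(ensemble)

        # 2. Reflect: evaluate the solution against every criterion.
        failures = [name for name, check in criteria.items() if not check(solution)]
        if not failures:
            return solution  # passes muster -> becomes part of the design baseline

        # 3. Repair: apply a strategy per deficiency; each repair is a new
        # hypothesis, tested by the next iteration of the loop.
        for name in failures:
            fix = repair_strategies.get(name)
            if fix is None:
                return None  # no repair available: the ensemble fails
            ensemble = fix(ensemble)
    return None

# Hypothetical usage: the ensemble lacks a request broker; the repair adds one.
criteria = {"has_broker": lambda e: "orb" in e}
repairs = {"has_broker": lambda e: e | {"orb"}}
baseline = r3_loop({"db", "gui"}, criteria, repairs)
```

Note that, as in the article, the loop treats a failed reflection as a trigger for repair rather than abandonment, and only gives up when no repair strategy exists for a deficiency (or the iteration budget is exhausted).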

What happens if, despite all optimism and doggedness, an ensemble simply will not pass muster? In this case all is still not lost for the ensemble, but salvaging the situation may require a different repair strategy—one which involves altering the requirements that gave rise to the ensemble, rather than changing the ensemble itself. But before we tackle this issue (which will involve us with the theory of iterative development), a practical illustration of the R3 process from our own case book will be useful.
