2.4 What Result Does an Architecture Evaluation Produce?

In concrete terms, an architecture evaluation produces a report, the form and content of which vary according to the method used. Primarily, though, an architecture evaluation produces information. In particular, it produces answers to two kinds of questions.

  1. Is this architecture suitable for the system for which it was designed?

  2. Which of two or more competing architectures is the most suitable one for the system at hand?

Suitability for a given task, then, is what we seek to investigate. We say that an architecture is suitable if it meets two criteria.

  1. The system that results from it will meet its quality goals. That is, the system will run predictably and fast enough to meet its performance (timing) requirements. It will be modifiable in planned ways. It will meet its security constraints. It will provide the required behavioral function. Not every quality property of a system is a direct result of its architecture, but many are, and for those that are, the architecture is suitable if it provides the blueprint for building a system that achieves those properties.

  2. The system can be built using the resources at hand: the staff, the budget, the legacy software (if any), and the time allotted before delivery. That is, the architecture is buildable.

This concept of suitability will set the stage for all of the material that follows. It has a couple of important implications. First, suitability is only relevant in the context of specific (and specifically articulated) goals for the architecture and the system it spawns. An architecture designed with high-speed performance as the primary design goal might lead to a system that runs like the wind but requires hordes of programmers working for months to make any kind of modification to it. If modifiability were more important than performance for that system, then that architecture would be unsuitable for that system (but might be just the ticket for another one).

In Alice in Wonderland, Alice encounters the Cheshire Cat and asks for directions. The cat responds that it depends upon where she wishes to go. Alice says she doesn't know, whereupon the cat tells her it doesn't matter which way she walks. So

If the sponsor of a system cannot tell you what any of the quality goals are for the system, then any architecture will do.

An overarching part of an architecture evaluation is to capture and prioritize specific goals that the architecture must meet in order to be considered suitable. In a perfect world, these would all be captured in a requirements document, but this notion fails for two reasons: (1) Complete and up-to-date requirements documents don't always exist, and (2) requirements documents express the requirements for a system. There are additional requirements levied on an architecture besides just enabling the system's requirements to be met. (Buildability is an example.)

Why Should I Believe You?

Frequently when we embark on an evaluation we are outsiders. We have been called in by a project leader, a manager, or a customer to evaluate a project. Perhaps the evaluation is seen as an audit, or perhaps it is just part of an attempt to improve an organization's software engineering practice. Whatever the reason, unless the evaluation is part of a long-term relationship, we typically don't personally know the architect or the major stakeholders.

Sometimes this distance is not a problem—the stakeholders are receptive and enthusiastic, eager to learn and to improve their architecture. But on other occasions we meet with resistance and perhaps even fear. The major players sit there with their arms folded across their chests, clearly annoyed that they have been taken away from their real work, that of architecting, to pursue this silly management-directed evaluation. At other times the stakeholders are friendly and even receptive, but they are skeptical. After all, they are the experts in their domains and they have been working in the area, and maybe even on this system, for years.

In either case their attitudes, whether friendly or unfriendly, reflect a substantial amount of skepticism about whether the evaluation can actually help. They are in effect saying, "What could a bunch of outsiders possibly have to tell us about our system that we don't already know?" You will probably have to face this kind of opposition or resistance at some point in your tenure as an architecture evaluator.

There are two things you need to know and do to counteract this opposition. First, you need to counteract the fear, so keep calm. If you are friendly and make it clear that the point of the meeting is to learn about and improve the architecture, rather than to point a finger of blame, you will find that resistance melts away quickly. Most people actually enjoy the evaluation process and see its benefits very quickly. Second, you need to counteract the skepticism. Of course they are the experts in their domain; you know this, they know this, and you should acknowledge it up front. But you are the architecture and quality attribute expert. No matter what the domain, architectural approaches for dealing with and analyzing quality attributes don't vary much: there are relatively few ways to address performance or availability or security at the architectural level. As an experienced evaluator (aided by insight from the quality attribute communities), you have seen these approaches before, and they don't change much from domain to domain.

Furthermore, as an outsider you bring a "fresh set of eyes," and this alone can often yield new insights into a project. Finally, you are following a process that has been refined over dozens of evaluations covering dozens of different domains—refined to make use of the expertise of many people and to elicit, document, and cross-check quality attribute requirements and architectural information. This alone will bring benefit to your project; we have seen it over and over again. The process works!

—RK

The second implication of evaluating for suitability is that the answer that comes out of the evaluation is not going to be the sort of scalar result you may be used to when evaluating other kinds of software artifacts. Unlike code metrics, for example, in which the answer might be 7.2 and anything over 6.5 is deemed unacceptable, an architecture evaluation is going to produce a more thoughtful result.

We are not interested in precisely characterizing any quality attribute (using measures such as mean time to failure or end-to-end average latency). That would be pointless at an early stage of design because the actual parameters that determine these values (such as the actual execution time of a component) are often implementation dependent. What we are interested in doing—in the spirit of a risk-mitigation activity—is learning where an attribute of interest is affected by architectural design decisions, so that we can reason carefully about those decisions, model them more completely in subsequent analyses, and devote more of our design, analysis, and prototyping energies to such decisions.

An architectural evaluation will tell you that the architecture has been found suitable with respect to one set of goals and problematic with respect to another set of goals. Sometimes the goals will be in conflict with each other, or at the very least, some goals will be more important than other ones. And so the manager of the project will have a decision to make if the architecture evaluates well in some areas and not so well in others. Can the manager live with the areas of weakness? Can the architecture be strengthened in those areas? Or is it time for a wholesale restart? The evaluation will help reveal where an architecture is weak, but weighing the cost against benefit to the project of strengthening the architecture is solely a function of project context and is in the realm of management. So

An architecture evaluation doesn't tell you "yes" or "no," "good" or "bad," or "6.75 out of 10." It tells you where you are at risk.

Architecture evaluation can be applied to a single architecture or to a group of competing architectures. In the latter case, it can reveal the strengths and weaknesses of each one. You can bet that no architecture will evaluate better than all of the others in every area; instead, each will outperform the others in some areas and underperform in others. The evaluation will first identify the areas of interest and then highlight the strengths and weaknesses of each architecture in those areas. Management must decide which (if any) of the competing architectures should be selected or improved, or whether none of the candidates is acceptable and a new architecture should be designed.1
