
Evaluating Software Architectures: Methods and Case Studies


Book

  • Your Price: $63.99
  • List Price: $79.99
  • Usually ships in 24 hours.

Description

  • Copyright 2002
  • Dimensions: 6-1/4" x 9-1/4"
  • Pages: 368
  • Edition: 1st
  • Book
  • ISBN-10: 0-201-70482-X
  • ISBN-13: 978-0-201-70482-2

Praise for Evaluating Software Architectures

“The architecture of complex software or systems is a collection of hard decisions that are very expensive to change. Successful product development and evolution depend on making the right architectural choices. Can you afford not to identify and not to evaluate these choices? The authors of this book are experts in software architecture and its evaluation. They collected a wealth of ideas and experience in a well-organized and accessible form. If you are involved in the development of complex systems or software, you will find this book an invaluable guide for establishing and improving architecture evaluation practice in your organization.”

         —Alexander Ran, Principal Scientist of Software Architecture, Nokia

“Software engineers must own this book. It is a well-written guide to the steps for evaluating software architecture. It argues for the inclusion of architecture evaluation and review as a standard part of the software development lifecycle. It introduces some new and innovative methods for analyzing important architecture characteristics, like extensibility, portability, and reliability. I believe these methods will become new engineering cornerstones for creating good software systems.”

         —Joe Maranzano, AT&T Bell Labs Fellow in Software Architecture (1990), and former head of the Bell Labs Software Technology Center

“Experience and teamwork are the only approaches I know of to deliver products faster, cheaper, and yet to delight your customers. In their first book, Software Architecture in Practice, Paul and Rick (and Len Bass) helped me match my experience with theory. Their invaluable approaches and case studies changed my practice and the way I proceed to design systems and software architectures. This second book, with Mark, covers what I will look at before I feel good about an architecture. It is about how I can tap other people's experience to produce an improved outcome, using other people's feedback. I have used many of the concepts explained in this book for my customers' benefit. Using this book, you—architects, developers, and managers—will develop a common language and practice to team up and deliver more successful products.”

         —Bertrand Salle, lead architect with a major telecommunications company

“If architecture is the foundation of system construction, architectural evaluation is part of the foundation of getting to a ‘good’ architecture. In this book, the authors put their considerable expertise to one of the most pressing issues in systems development today: how to evaluate an architecture prior to system construction to ascertain its feasibility and suitability to the system of interest. The book provides a practical guide to architecture evaluation using three contemporary evaluation methods. It should prove valuable to practitioners and as a basis for the evolution of architectural evaluation as an engineering practice.”


         —Rich Hilliard, Chief Technical Officer, ConsentCache, Inc., and technical editor, IEEE Recommended Practice for Architectural Description of Software-Intensive Systems

“Too many systems have performance and other problems caused by an inappropriate architecture. Thus problems are introduced early, but are usually detected too late—when the deadline is near or, even worse, after the problem makes the headlines. Remedies lead to missed schedules, cost overruns, missed market windows, damaged customer relations, and many other difficulties. It is easy to prevent these problems by evaluating the architecture choices early and selecting an appropriate one.”

         —Connie U. Smith, Ph.D., principal consultant, Performance Engineering Services Division, L&S Computer Technology, Inc., and coauthor of the new book, Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software

“The ATAM, an evaluation method described in this book, is the natural quality gate through which a high-level design should pass before a detailed design project is initiated. Why use the ATAM to evaluate an architecture? Mitigation of design risk is a major reason, but more importantly, the ATAM provides an interactive vehicle that can give key development and user stakeholders architectural visibility—visibility that can lead to an important ‘early buy-in.’”

         —Rich Zebrowski, Software Technology Manager, Motorola, Inc.

“Caterpillar's experience with architecture reviews includes the SAAM, ATAM, ARID, and ADR evaluation methods described in this book (the first three in detail). These reviews ensured that the needs of the user community were being met, and they exposed the architecture to others in the organization, helping with understanding and organizational buy-in. The SAAM- and ATAM-based evaluations worked well to expose the architecture early in the development cycle to a broad range of people. The ARID- and ADR-based evaluations facilitated the exposure of technical details of the architecture later in the development cycle. As the architect of the pilot project for ARID, I observed that this review even served as an architecture training session before the architecture was fully documented.”

         —Lee R. DenBraber, former Lead Software Architect, Caterpillar, Inc.

“We’ve heard all the management hype about harnessing the innovative creativity of our teams, establishing integrated customer-developer-product teams, and better targeting our systems to meet end user needs. The ATAM techniques described in this book give technical managers, system architects, and engineers proven tools for breaking down the communications barriers that impede our ability to realize these goals. We have successfully integrated the ATAM techniques throughout our lifecycle, including development and maintenance, and have found that they provide the strong technical basis we need to evaluate the many difficult trades required by a system as large as EOSDIS.”

         —Mike Moore, Deputy Manager, Science Systems Development Office, Earth Observing System Data Information System (EOSDIS) Project, NASA Goddard Space Flight Center

“If you know how difficult architecture reviews are, you will be amazed how effective ATAM evaluations can be. For example, an ATAM evaluation we conducted on an important software product line identified a major architectural risk, which we subsequently were able to avoid, a benefit we expect to continue seeing. Moreover, ATAM techniques have enabled us to explain such risks to stakeholders far more clearly than by any other review method.”

         —Stefan Ferber, Corporate Research, Robert Bosch GmbH

Drawing on clearly identified connections between architecture design decisions and resulting software properties, this book describes systematic methods for evaluating software architectures and applies them to real-life cases. It shows you how such evaluation can substantially reduce risk while adding remarkably little expense and time to the development effort (in most cases, no more than a few days). Evaluating Software Architectures introduces the conceptual background for architecture evaluation and provides a step-by-step guide to the process based on numerous evaluations performed in government and industry.

In particular, the book presents three important evaluation methods:

  • Architecture Tradeoff Analysis Method (ATAM)
  • Software Architecture Analysis Method (SAAM)
  • Active Reviews for Intermediate Designs (ARID)
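
To give a flavor of how these methods organize their inputs, here is a minimal sketch of the quality attribute utility tree that drives the middle steps of the ATAM. The sketch is ours, not the book's, and every class name and scenario in it is an illustrative assumption:

    # Illustrative sketch only, not code from the book. It models an ATAM
    # quality attribute utility tree: quality attributes refine into concerns,
    # which are made concrete by scenarios rated for business importance and
    # for how difficult (risky) the architecture makes them to achieve.
    from dataclasses import dataclass, field

    @dataclass
    class Scenario:
        text: str
        importance: str   # "H", "M", or "L": value to the business
        difficulty: str   # "H", "M", or "L": perceived risk in achieving it

    @dataclass
    class UtilityNode:
        name: str                                      # e.g., "Performance"
        children: list = field(default_factory=list)   # refinements
        scenarios: list = field(default_factory=list)  # concrete scenarios

        def high_priority(self):
            # Yield the (H, H) scenarios anywhere in this subtree: the ones
            # an evaluation team analyzes first against the architecture.
            for s in self.scenarios:
                if (s.importance, s.difficulty) == ("H", "H"):
                    yield s
            for child in self.children:
                yield from child.high_priority()

    # A hypothetical tree in the spirit of the book's case studies.
    utility = UtilityNode("Utility", children=[
        UtilityNode("Performance", children=[
            UtilityNode("Data latency", scenarios=[
                Scenario("Deliver a processed image in under one second", "H", "H"),
            ]),
        ]),
        UtilityNode("Modifiability", scenarios=[
            Scenario("Replace the COTS database with minimal change", "H", "M"),
        ]),
    ])

    for s in utility.high_priority():
        print(s.text)   # prints the high-importance, high-difficulty scenario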

Detailed case studies demonstrate the value and practical application of these methods to real-world systems, and sidebars throughout the book provide interesting background and hands-on tips from the trenches.

All software engineers should know how to carry out software architecture evaluations. Evaluating Software Architectures is your chance to get up to speed quickly by learning from the experience of others.



Sample Content

Online Sample Chapter

Evaluating a Software Architecture

Downloadable Sample Chapter

clementsch02.pdf

Table of Contents



List of Figures.


List of Tables.


Preface.


Acknowledgments.


Reader's Guide.


1. What Is Software Architecture?

Architecture as a Vehicle for Communication among Stakeholders.

Architecture and Its Effects on Stakeholders.

Architectural Views.

Architecture Description Languages.

Architecture as the Manifestation of the Earliest Design Decisions.

Architectural Styles.

Architecture as a Reusable, Transferable Abstraction of a System.

Summary.

For Further Reading.

Discussion Questions.



2. Evaluating a Software Architecture.

Why Evaluate an Architecture?

When Can an Architecture Be Evaluated?

Who's Involved?

What Result Does an Architecture Evaluation Produce?

For What Qualities Can We Evaluate an Architecture?

Why Are Quality Attributes Too Vague for Analysis?

What Are the Outputs of an Architecture Evaluation?

Outputs from the ATAM, the SAAM, and ARID.

Outputs Only from the ATAM.

What Are the Benefits and Costs of Performing an Architecture Evaluation?

For Further Reading.

Discussion Questions.



3. The ATAM—A Method for Architecture Evaluation.

Summary of the ATAM Steps.

Detailed Description of the ATAM Steps.

Step 1: Present the ATAM.

Step 2: Present the Business Drivers.

Step 3: Present the Architecture.

Step 4: Identify the Architectural Approaches.

Step 5: Generate the Quality Attribute Utility Tree.

Step 6: Analyze the Architectural Approaches.

Step 7: Brainstorm and Prioritize Scenarios.

Step 8: Analyze the Architectural Approaches.

Step 9: Present the Results.

The Phases of the ATAM.

Phase 0 Activities.

Phase 1 Activities.

Phase 2 Activities.

Phase 3 Activities.

For Further Reading.

Discussion Questions.



4. The Battlefield Control System—The First Case Study in Applying the ATAM.

Preparation.

Phase 1.

Step 1: Present the ATAM.

Step 2: Present the Business Drivers.

Step 3: Present the Architecture.

Step 4: Identify the Architectural Approaches.

Step 5: Generate the Quality Attribute Utility Tree.

Step 6: Analyze the Architectural Approaches.

Phase 2.

Step 7: Brainstorm and Prioritize Scenarios.

Step 8: Analyze the Architectural Approaches.

Step 9: Present the Results.

Results of the BCS Evaluation.

Documentation.

Requirements.

Sensitivities and Tradeoffs.

Architectural Risks.

Summary.

Discussion Questions.



5. Understanding Quality Attributes.

Quality Attribute Characterizations.

Performance.

Availability.

Modifiability.

Characterizations Inspire Questions.

Using Quality Attribute Characterizations in the ATAM.

Attribute-Based Architectural Styles.

Summary.

For Further Reading.

Discussion Questions.



6. A Case Study in Applying the ATAM.

Background.

Phase 0: Partnership and Preparation.

Phase 0, Step 1: Present the ATAM.

Phase 0, Step 2: Describe Candidate System.

Phase 0, Step 3: Make a Go/No-Go Decision.

Phase 0, Step 4: Negotiate the Statement of Work.

Phase 0, Step 5: Form the Core Evaluation Team.

Phase 0, Step 6: Hold Evaluation Team Kick-off Meeting.

Phase 0, Step 7: Prepare for Phase 1.

Phase 0, Step 8: Review the Architecture.

Phase 1: Initial Evaluation.

Phase 1, Step 1: Present the ATAM.

Phase 1, Step 2: Present Business Drivers.

Phase 1, Step 3: Present the Architecture.

Phase 1, Step 4: Identify Architectural Approaches.

Phase 1, Step 5: Generate Quality Attribute Utility Tree.

Phase 1, Step 6: Analyze the Architectural Approaches.

Hiatus between Phase 1 and Phase 2.

Phase 2: Complete Evaluation.

Phase 2, Step 0: Prepare for Phase 2.

Phase 2, Steps 1-6.

Phase 2, Step 7: Brainstorm and Prioritize Scenarios.

Phase 2, Step 8: Analyze Architectural Approaches.

Phase 2, Step 9: Present Results.

Phase 3: Follow-Up.

Phase 3, Step 1: Produce the Final Report.

Phase 3, Step 2: Hold the Postmortem Meeting.

Phase 3, Step 3: Build Portfolio and Update Artifact Repositories.

For Further Reading.

Discussion Questions.



7. Using the SAAM to Evaluate an Example Architecture.

Overview of the SAAM.

Inputs to a SAAM Evaluation.

Outputs from a SAAM Evaluation.

Steps of a SAAM Evaluation.

Step 1: Develop Scenarios.

Step 2: Describe the Architecture(s).

Step 3: Classify and Prioritize the Scenarios.

Step 4: Individually Evaluate Indirect Scenarios.

Step 5: Assess Scenario Interactions.

Step 6: Create the Overall Evaluation.

A Sample SAAM Agenda.

A SAAM Case Study.

ATAT System Overview.

Step 1: Develop Scenarios, First Iteration.

Step 2: Describe the Architecture(s), First Iteration.

Step 1: Develop Scenarios, Second Iteration.

Step 2: Describe the Architecture(s), Second Iteration.

Step 3: Classify and Prioritize the Scenarios.

Step 4: Individually Evaluate Indirect Scenarios.

Step 5: Assess Scenario Interactions.

Step 6: Create the Overall Evaluation—Results and Recommendations.

Summary.

For Further Reading.

Discussion Questions.



8. ARID—An Evaluation Method for Partial Architectures.

Active Design Reviews.

ARID: An ADR/ATAM Hybrid.

The Steps of ARID.

Phase 1: Rehearsal.

Phase 2: Review.

A Case Study in Applying ARID.

Carrying Out the Steps.

Results of the Exercise.

Summary.

For Further Reading.

Discussion Questions.



9. Comparing Software Architecture Evaluation Methods.

Questioning Techniques.

Questionnaires and Checklists.

Scenarios and Scenario-Based Methods.

Measuring Techniques.

Metrics.

Simulations, Prototypes, and Experiments.

Rate-Monotonic Analysis.

Automated Tools and Architecture Description Languages.

Hybrid Techniques.

Software Performance Engineering.

The ATAM.

Summary.

For Further Reading.

Discussion Questions.



10. Growing an Architecture Evaluation Capability in Your Organization.

Building Organizational Buy-in.

Growing a Pool of Evaluators.

Establishing a Corporate Memory.

Cost and Benefit Data.

Method Guidance.

Reusable Artifacts.

Summary.

Discussion Questions.



11. Conclusions.

You Are Now Ready!

What Methods Have You Seen?

Why Evaluate Architectures?

Why Does the ATAM Work?

A Parting Message.



Appendix A: An Example Attribute-Based Architectural Style.

Problem Description.

Stimulus/Response.

Architectural Style.

Analysis.

Reasoning.

Priority Assignment.

Priority Inversion.

Blocking Time.

For Further Reading.



References.


Index.

Preface

The foundation of any software system is its architecture, that is, the way the software is constructed from separately developed components and the ways in which those components interact and relate to each other. If the system is going to be built by more than one person—and these days, what system isn't?—it is the architecture that lets them communicate and negotiate work assignments. If the requirements include goals for performance, security, reliability, or maintainability, then architecture is the design artifact that first expresses how the system will be built to achieve those goals. The architecture determines the structure of the development project. It is the basis for organizing the documentation. It is the first document given to new project members, and the first place a maintenance organization begins its work. Schedules, budgets, and workplans all revolve around it. And the senior, most talented designers are paid to create it.

A system's longevity—how viable it remains in the face of evolutionary pressure—is determined primarily by its architecture. Some architectures become generic and are adopted by the development community at large: three-tier client-server, layered, and pipe-and-filter architectures are well known beyond the scope of any single system. Today, organizations are recognizing the importance and value of architectures in helping them to meet corporate enterprise goals. An architecture can give an enterprise a competitive advantage and can be banked like any other capitalized asset.

The right architecture is the first step to success. The wrong architecture will lead to calamity. This leads to an important question: If your organization is betting its future—or at least a portion of it—on an architecture for a system or family of related systems, how can you be sure that you're building from the right architecture and not the wrong one?

The practice of creating an architecture is maturing. We can identify causal connections between design decisions made in the architecture and the qualities and properties that result downstream in the system or systems that follow from it. This means that it is possible to evaluate an architecture, to analyze architectural decisions, in the context of the goals and requirements that are levied on systems that will be built from it.

And yet even though architecture is regarded as an essential part of modern system development, architecture evaluation is almost never included as a standard part of any development process. We believe it should be, and this book is an attempt to help people fill that gap.

The time has come for architecture evaluation to become an accepted engineering practice for two reasons. First, architecture represents an enormous risk in a development project. As we've said, the wrong one leads to disaster. It makes good sense to perform an evaluation on such a pivotal artifact, just as you would plan risk-mitigation strategies for other sources of uncertainty. Second, architecture evaluation can be remarkably inexpensive. The methods described in this book add no more than a week to the project schedule, and some abridged forms require no more than a day or two. Architecture evaluation represents a very cheap insurance policy. Compared to the cost of a poor architecture, the modest expense of a software architecture evaluation makes all the sense in the world. What has been lacking up to this point is a practical method for carrying it out, which is where this book comes in.
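
A back-of-the-envelope calculation makes the insurance argument concrete. Every figure in the sketch below is an assumption chosen purely for illustration, not data from this book:

    # Hedged cost-benefit sketch of evaluation as insurance; all figures are
    # illustrative assumptions.
    evaluation_cost = 70 * 150   # ~70 staff-hours at an assumed $150/hour loaded rate
    rework_cost = 500_000        # assumed cost of repairing a bad architecture late
    p_catch = 0.20               # assumed chance the evaluation averts that repair

    expected_net_saving = p_catch * rework_cost - evaluation_cost
    print(f"Expected net saving: ${expected_net_saving:,.0f}")   # $89,500

Even with the assumed averting probability halved, the expected saving stays well above the cost of the exercise, which is the sense in which the policy is cheap.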

This is a guidebook for practitioners (or those who wish to become practitioners) of architecture evaluation. We supply conceptual background where necessary, but the intent of the work is to provide step-by-step guidance in the practice of architecture evaluation and analysis. To help put the methods into practice, we have included sample artifacts that are put into play during an architecture evaluation: viewgraph presentation outlines, scenarios, after-action surveys, final report templates, and so forth. The goal is that after reading this book, you will feel confident enough to try out the methods on an architecture in your own organization. We have tried to help you answer the question that arises again and again during an evaluation: "What should I do now?"

While the book is written from the point of view of the evaluator, there are others involved in an evaluation—project managers, architects, other stakeholders—who will gain valuable insights by reading this book. They will come to understand how their products will be evaluated and thus can position themselves to make those products fare better with respect to the evaluation criteria. This is rather like scoring well on a test because you've seen an early copy of the test, but in this case it isn't cheating but rather sound management and engineering practice. But know that when we use the word you in the text, we are speaking to the evaluator.

The techniques in this book are based on actual practice in government and industry. Most of the methods were developed by us and our colleagues at the Software Engineering Institute and applied, by us and others, to our customers' and collaborators' systems. Other material was gleaned by holding industrial workshops whose participants were experts in the analysis and evaluation of architecture. In short, we have learned by doing, and we have learned from others' doing.

This book will not teach you how to become a good architect, nor does it help you become fluent in the issues of architecture. We assume that you already have a good grasp of architectural concepts that comes from practical experience. This book will not help you assess the job performance of any individual architect or evaluate a project's architecture (or development) process. What it will do is show you how to evaluate an architecture with respect to a broad spectrum of important quality attributes having to do with the architecture and the future system(s) that will be built from it.

Finally, we should say a word about software versus system architecture—that is, the architecture of software-intensive systems. This is a book about the evaluation of software architectures, but we often hear the question, "Well, what about the architecture of the system, not just the software? It's just as vital." We couldn't agree more. System architectures embody the same kinds of structuring and decomposition decisions that drive software architectures. Moreover, they include hardware/software tradeoffs as well as the selection of computing and communication equipment, all of which are completely beyond the realm of software architecture. System architectures hold the key to success or failure of a system every bit as much as the software architecture does for the software. Hence, they deserve to be evaluated every bit as much and for exactly the same reasons.

The methods presented in this book will, we believe, apply equally well to system architectures as to software architectures. If modifiability is a concern, the methods can be used to gauge the expense of making changes over the system's lifetime; if performance is a concern, the methods can be used to spot bottlenecks and problem areas in the system as well as the software; and so forth.

Why, then, do we call it a book about software architecture evaluation? Because that is the realm in which the methods were invented, developed, tested, and matured. In the remainder of this book when we speak of architecture, you can always safely prefix it with software. You can prefix it with system depending on how applicable you feel the methods are to system architectures and how confident you are about our intuition in the matter.

As a final word, we invite you to share your experiences with us. We would be keenly interested in knowing what you discover works well and what doesn't work so well. Writing a book is an opportunity to share lessons, but more importantly to us, it is an opportunity to gather new ones.

—PCC, Austin, Texas
—RK, Pittsburgh, Pennsylvania
—MHK, Pittsburgh, Pennsylvania




Index

A

ABASs. See Attribute-Based Architectural Styles
Abowd, Gregory, xvii, 273
Acknowledged communication, 95
Acknowledging backups, 96, 98, 100
Acme language, 266
Active design reviews, 241, 245, 288
conventional reviews versus, 242-243
origins of, 243-244
"Active Design Reviews: Principles and Practices" (Parnas and Weiss), 243
Active Reviews for Intermediate Designs, xiv, 20, 32, 241-253, 287
as ADR/ATAM hybrid, 245
advantages with, 252
case study, 248-252
overview of, 288
Phase 1 for, 246
Phase 2 for, 246-248
SAAM and ATAM compared with, 256
sample agenda for review, 250
steps of, 245-248. See also Architecture Tradeoff Analysis Method; Software Architecture Analysis Method
Actuators, 8
ADD. See Attribute Driven Design
ADLs. See Architecture description languages
ADRs. See Active design reviews
Alice in Wonderland (Carroll), 27
Allocation policies, 111
Analysis, within Attribute Based Architectural Style, 125
Analysis support, with ATAT, 222
Analytical point of view, 302
APIs. See Application program interfaces
Application builder, and architecture evaluation, 66
Application program interfaces, 88
Aqua satellite, 128
Architects
and architectural approach analysis, 197
architectural approaches identified by, 162
and architecture evaluation, 67
architecture presented by, 157-158, 159
and ARID, 251
checklist of questions for, 145
and evaluation process, 215-216
expressing appreciation to, 152
finding, 277
missing, 292
Phase 1 for EOSDIS Core System and attendance by, 143, 146
and quality attribute characterizations, 125. See also Stakeholders
Architectural approaches, 13, 81, 186
analysis of, for Battlefield Control System, 92-94, 102
analyzing, 45, 68, 172-173, 196
benefits in identifying, 181
example of analysis of, 59
example of description of, 123
identifying, 44, 47-50, 90, 162-163
template for analysis of, 58, 122
Architectural decisions, 111, 112, 122
availability-related, 116, 117
modifiability-related, 119, 120
performance-related, 114, 115
Architectural documentation, improving quality of, 37
Architectural risks, with Battlefield Control System, 106-107
Architectural styles, 12-13, 43, 302
attribute-based, 124-125
importance of, 49
Architectural views, 4, 47
Architecture, 3, 20, 21-23
audits, 259, 260, 276
example template for presentation of, 48
importance of, ix
presentation of, 47
presentation of, for Battlefield Control System, 89
quality attributes determined by, 109, 110
reasons for evaluating, 288-290
subdesign problems in, 241
Architecture-based design, 16
Architecture description languages, 10, 266
Architecture documentation, 133, 143, 162
discussing/reviewing, 148
as reusable artifacts, 284
reviewing, 164
walk throughs of, 145, 147
Architecture evaluations, x, 19-42
benefits and costs of, 37-41
comparison of methods for, 255-273
countering skepticism during, 28
customizing methods for, 296
danger signals to look for with, 258-259
follow-up survey: organizational impacts, 281
follow-up survey: project impacts, 280
full readiness for, 287, 291-294
goals of, 109, 289
growth of capability for, by organizations, 275-285
importance of, x
leading, 182-183
maximizing evaluators' efforts in, 97-98
outputs of, 34-36
people involved in, 26
qualities in, 30-32
and quality of stakeholders, 187
reasons for, 23-24
results produced by, 27, 29-30
stakeholders for, 65-68
techniques comparison, 271-272
timing for, 24-25
vague quality attributes and, 32-33
and what's architectural?, 21-23
Architecture presentations, distillation of, 164
Architecture review, people issues involved in, 182
Architecture Tradeoff Analysis Method, xiii, xvii, 20, 30, 32, 43-85, 241, 245, 272, 273, 287
analysis put in, 80-82
approximate cost of medium-size checklist-based evaluation with, 41
approximate cost of medium-size evaluation with, 39
approximate cost for small evaluation with, 40
conceptual flow of, 295
consensus-building qualities of, 133
description of steps within, 45-51, 55-62, 68-70
end-of-exercise participants' survey, 279
enhanced ability of, in finding risks, 294
EOSDIS Core System (NASA) application of, 127-210
experiences with, 104-105
hiatus between Phase 1/Phase 2, for EOSDIS Core System, 183-184
as hybrid technique, 268
maximizing evaluators' efforts with, 97-98
outline of standard, 151
outputs only from, 35-36
overview of, 288
Phase 0 in, 71-76
Phase 0 of, for EOSDIS Core System, 129-148
Phase 1 information exchange in, 149
Phase 1 of, for EOSDIS Core System, 148-167, 170-181
Phase 2 of, for EOSDIS Core System, 184-202
Phase 3 of, for EOSDIS Core System, 202-209
phases of, 70-78, 79, 82-83
postmortem meeting sample agenda, 206
presentation of, for Battlefield Control Systems, 89
quality attribute characterization used in, 121-124
reasons behind success of, 290, 294-296
results presented from, 68-70
SAAM, ARID compared with, 256
sample agenda for Phase 1, 147
sample outline for final report, 204
and sensitivities and risks, 69
steps and outputs of, correlated, 71
summary of steps within, 44-45
supply checklist for exercise in, 144
two faces of, 80-82. See also Active Reviews for Intermediate Designs; Software Architecture Analysis Method
Architecture Tradeoff Analysis Tool
assessment of scenario interactions, 235-236
classifying/prioritizing scenarios, 228-231
creation of overall evaluation: results/recommendations, 236-238
describing architecture(s) of, 224-225, 227-228
evaluation of indirect scenarios, 231
functional view of architecture, 225
SAAM applied to, 222-231, 235-238
system overview, 222-223
ARID. See Active Reviews for Intermediate Designs
Artifact repositories, maintaining/updating, 83, 202, 207, 285
Asynchronous calls, and metrics, 264
AT&T, 259, 276
ATAM. See Architecture Tradeoff Analysis Method
ATAT. See Architecture Tradeoff Analysis Tool
ATAT editor
code view of generic, 238
code view of MVC-based candidate architecture of, 228
code view of PAC-based candidate architecture of, 229
ATAT-Entities, 228
Attribute-Based Architectural Styles, xiii, 49-50, 124-125, 126, 181
analysis, 299-302
architectural style, 298
example of, 297-302
parts of, 297
problem description, 297-298
as reusable artifacts, 284
stimulus/response, 298
Attribute Driven Design, 16
Audits, architecture, 259, 260, 276
Authentication, 11
Automated tools, 266
Availability, 19, 31, 93, 94, 164
and backup commander approach, 94, 95, 96
and BCS system, 102
characterization, 116
and ECS architecture, 166
and ECS utility tree, 169
as quality attribute, 115-117
sample questions about, 118
and scenarios, 52
Avritzer, A., 259, 260, 261

B

Back-of-the-envelope performance model, 98
Backup commander approach, 94-96, 98
Backups, and BCS architecture, 96
Bandwidth limitations, mitigating against, 103
Barbacci, Mario, 268
Bass, Len, xvii, 8, 15, 239
Battlefield Control System (BCS), 87-108, 289
evaluation results for, 103, 105-107
hardware view of, 90
Phase 1, 89-90, 92-96, 98-100
Phase 2, 100-103
preparation, 88
sample scenarios for evaluation of, 101
Benefits surveys, 83
Berra, Yogi, 109
Blocking
sources of, 301-302
time, 299, 300, 301-302
Boehm, Barry, 19, 84
Bosch, J., 15
Bottom-up strategy, for organizational buy-in, 276
Brooks, Fred, 1
Buildability, 27, 29
Build-time, 31
Buschmann, F., 15, 126, 239, 302
Buses, 111
Business case presentation, example template f
