Abstraction Is the Heart of Architecture

In all these cases, we move from the general to the specific, with the next layer of detail expanding upon the previous level of abstraction. This movement from general to specific gives architecture its power to simplify, communicate, and make ghastly complexity more aesthetically pleasing.

Abstraction is the heart of architecture. This powerful and persuasive concept has been at the center of most of the advances in complex systems architecting for the last 15 years. It underpins the history of software engineering—objects, components, and even IT services have their roots in abstraction. Because abstraction is one of our most powerful tools, we should consider its capabilities and limitations.

As systems have become more complex, additional layers of abstraction have been inserted into the software to keep everything understandable and maintainable. Year by year, programmers have gotten further away from the bits, registers, and native machine code, through the introduction of languages, layered software architectures, object-oriented languages, visual programming, modeling, packages, and even models of models (metamodeling).

Today, programs can be routinely written, tested, and deployed without manually writing a single line of code or even understanding the basics of how a computer works. A cornucopia of techniques and technologies can insulate today's programmers from the specifics and complexities of their surrounding environments. Writing a program is so simple that we can even get a computer to do it. We get used to the idea of being insulated from the complexity of the real world.

Mirror, Mirror on the Wall, Which Is the Fairest Software of All?

Software engineering approaches the complexity and unpredictability of the real world by abstracting the detail to something more convenient and incrementally improving the abstraction over time.

Working out the levels of abstraction that solve the problem (and will continue to solve the problem) is the key concern of the software architect. IBM's chief scientist Grady Booch and other leaders of the software industry are convinced that the best software should be capable of dealing with great complexity but also should be inherently simple and aesthetically pleasing.1

Thus, over time, we should expect that increasing levels of abstraction will enable our software to deal with more aspects of the real world. This is most obviously noticeable in games and virtual worlds, where the sophistication of the representation of the virtual reality has increased as individual elements of the problem are abstracted. Figure 6.4 shows how games architectures have matured over the last 20 years.

Figure 6.4 Games architectures have matured immensely over the last 20 years.

The current sophisticated, shared online games of the early twenty-first century exhibit greater descriptive power compared to the basic 2D games of the 1970s. Hiding the complexity of the physics engine from the graphical rendering system, and hiding both of these from the user server and the system that stores the in-world objects, enables increasing levels of sophisticated behavior.
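
This kind of layering can be caricatured in a few lines of code. The sketch below is deliberately minimal and does not reflect any real game engine's API; all class and method names are invented for illustration. The point is that each layer talks only to the one below it, so a rewrite of the physics cannot ripple into rendering or storage code.

```python
class PhysicsEngine:
    """Knows about motion and forces; nothing about pixels or storage."""
    def step(self, positions, velocities, dt):
        # Naive integration: layers above see only the resulting positions.
        return [p + v * dt for p, v in zip(positions, velocities)]

class Renderer:
    """Consumes positions; the physics that produced them is hidden."""
    def draw(self, positions):
        return [f"sprite at {p:.1f}" for p in positions]

class WorldStore:
    """Persists in-world objects; hidden from both layers above."""
    def __init__(self):
        self._objects = {}
    def save(self, name, position):
        self._objects[name] = position

# Wire the layers together: physics feeds the renderer, and the store
# sees neither of them.
physics = PhysicsEngine()
positions = physics.step([0.0, 10.0], [1.0, -2.0], dt=0.5)
frames = Renderer().draw(positions)
```

Each abstraction here earns its keep by narrowing an interface: the renderer needs a list of positions, not a physics engine, and the store needs names and positions, not frames.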

Abstraction has its drawbacks, however. Each level of abstraction deliberately hides a certain amount of complexity. That's fine if you start with a complete description of the problem and work your way upward, but you must remember that this isn't the way today's systems integration and architecting methods work.

These methods start from the general and the abstract, and gradually refine the level of detail from there. Eventually, they drill down to reality. This sounds good. Superficially, it sounds almost like a scientific technique. For example, physicists conduct experiments in the real world, which has a lot of complexity, imperfection, and "noise" complicating their experiments. However, those experiments are designed to define or confirm useful and accurate abstractions of reality in the form of mathematical theories that will enable them to make successful predictions. Of course, the key difference between software engineering and physics is that the physicists are iteratively creating abstractions for something that already exists and refining the abstraction as more facts emerge. The architects, on the other hand, are abstracting first and then creating the detail to slot in behind the abstraction. Figure 6.5 should make the comparison clearer.

Figure 6.5 Who's right? Physicists or IT architects?

The IT approach should strike you as fundamentally wrong. If you need some convincing, instead of focusing on the rather abstract worlds of physics or IT, let's first take a look at something more down to earth: plumbing.

Plumbing the Depths

The IT and plumbing industries have much in common. Participants in both spend a great deal of time sucking their teeth, saying, "Well, I wouldn't have done it like that," or, "That'll cost a few dollars to put right." As in many other professions, they make sure that they shroud themselves in indecipherable private languages, acronyms, and anecdotes.

Imagine for a moment a heating engineer who has been asked to install a radiator in a new extension. He has looked at the plans and knows how he's going to get access to the pipes. From the specifications he's read, he knows what fixtures he needs. After doing some fairly easy calculations based on room size, window area, and wall type, he has even gotten hold of the right-size radiator to fit on the wall, one that will deliver the right amount of heat for the room. It's an hour's work, at most.

The job is done and he leaves a happy man. A few days later, the homeowner is complaining that the room is still cold. Only when the plumber arrives back on-site and investigates the boiler does he find out that the output of the boiler is now insufficient for the needs of the house. He recommends that the homeowner order a new 33-kilowatt boiler and arranges to come back in a week.

A week later, he's back to begin fitting the new boiler. Right at the start of the task, it becomes obvious that the old boiler was oil-fired and the new one is gas. This is slightly inconvenient because the property is not connected to the gas main, even though the main runs past the property.

Another few weeks pass while the homeowner arranges for the house to be connected to the gas supply. On the plumber's third visit, everything is going swimmingly. Then he notices that there are no free breaker slots on the electricity circuit board to attach the new boiler. A week later, he replaces the circuit board. The boiler is installed, but another problem arises: Although the heat output of the boiler is sufficient, a more powerful pump is required to distribute the heat throughout the house.

And that's when the problems really start.

Don't Abstract Until You See the Whole Elephant

Judging from the architect's top-level view, the solution seemed pretty obvious. Only when the job was almost done did it become clear that the solution hadn't worked. Those other aspects of the problem—the supply, the pump, and the circuit board—were invisible from the Level 0 perspective the plumber received, so he ignored them in his analysis.

After all, nothing was fundamentally wrong with the plumber's solution; he just didn't have a good specification of the problem. The process of abstracting the problem to the single architectural drawing of the new room meant that he had no visibility of the real problem, which was somewhat bigger and more complex. He simply couldn't see the hidden requirements—the environmental constraints—from his top-level, incorrectly abstracted view of the problem.

Unfortunately, abstractions, per se, always lack details of the underlying complexity. The radiator was a good theoretical solution to the problem, but it was being treated as a simple abstract component that, when connected to the central heating system, would deliver the right amount of heat. Behind that simple abstraction lay the real hidden complexity of the boiler, gas main, and circuit board, which leaked through and derailed this abstracted solution.2 Will such complexity always leak up through the pipe and derail simple abstract solutions?
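
The plumbing story can even be caricatured in code. In this toy model (every name and number is invented for illustration), the Radiator abstraction hides the boiler completely, so the hidden capacity constraint surfaces only when the whole system runs:

```python
class Boiler:
    def __init__(self, output_kw):
        self.output_kw = output_kw

class Radiator:
    """Looks self-contained, but silently depends on spare boiler
    capacity it can neither see nor negotiate for."""
    def __init__(self, demand_kw):
        self.demand_kw = demand_kw

def heat_delivered(boiler, radiators):
    """Fraction of total radiator demand the boiler can actually meet."""
    demand = sum(r.demand_kw for r in radiators)
    # The leak: each radiator was sized as if supply were unlimited.
    if demand > boiler.output_kw:
        return boiler.output_kw / demand
    return 1.0

house = [Radiator(2.0)] * 10   # the existing system: 20 kW of demand
house.append(Radiator(3.0))    # the "one hour" extension job
print(heat_delivered(Boiler(20.0), house))  # < 1.0: the room stays cold
```

Sizing the new radiator was correct at its own level of abstraction; the failure lives in a layer the abstraction hid.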

Well, imagine for a moment that the abstraction was absolute and that it was impossible to trace backward from the radiator to the source of the heat. Consider, for example, that the heat to each radiator was supplied from one of a huge number of central utilities via shared pipes. If the complexity of that arrangement was completely hidden, you would not know who to complain to if the radiator didn't work. Of course, on the positive side, the utility company supplying your heat wouldn't be able to bill you for adding a new radiator!

Is this such an absurd example? Consider today's IT infrastructures, with layers of software, each supposedly easier to maintain by hiding the complexities below. Who do you call when there is a problem? Is it in the application? The middleware? Maybe it is a problem with the database?

If you become completely insulated from the underlying complexity, or if you simply don't understand it, it becomes very difficult to know what is happening when something goes wrong. Such an approach also encourages naïve rather than robust implementations. Abstractions that fully hide complexity ultimately cause problems because, when something does fail, it is impossible to tell what went wrong.

Poorly formed abstractions can also create inflexibility in any complex software architecture. If the wrong elements are exposed to the layers above, people will have to find ways around the architecture, compromising its integrity. Establishing the right abstractions is more of an art than a science, but a point of pure generalization is not a good place to start—it is possibly the worst.

Successful Abstraction Does Not Come from a Lack of Knowledge

In summary, abstraction is probably the single most powerful tool for the architect. It works well when used with care and when there is a deep understanding of the problem.

However, today's methods work from the general to the specific, so they essentially encourage and impose a lack of knowledge. Not surprisingly, therefore, the initial abstractions and decompositions that are made at the start of a big systems integration or development project often turn out to be wrong. Today's methods tend to ignore complexity while purporting to hide it.

The Ripple Effect

Poor abstractions lead to underestimations and misunderstandings galore. Everything looks simple from 10,000 feet. On large projects, a saying goes that "All expensive mistakes are made on the first day." In our experience, that observation is very, very true.

Working with a lack of information makes abstraction easy but inaccurate.

All projects are most optimistic right at the start. These early stages lack detailed information; as a result, assumptions are made and the big abstractions are decided.

Assumptions are not dangerous in themselves—as long as they are tracked. Unfortunately, all too often they are made but not tracked, and their impact is not understood. In some ways, they are treated as "risks that will never happen." Assumptions must always be tracked and reviewed, and their potential impact, if they're untrue, must be understood. Chances are, some of them will turn out to be false assumptions—and, chances are, those will be the ones with expensive consequences.
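
An assumption log need not be elaborate. The sketch below (the fields and statuses are our own invention, not any standard) captures the minimum that "tracked" implies: every assumption carries its impact-if-false, and it stays visible until it has been confirmed or falsified:

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    statement: str
    impact_if_false: str
    status: str = "open"   # open -> confirmed / falsified

@dataclass
class AssumptionLog:
    entries: list = field(default_factory=list)

    def record(self, statement, impact_if_false):
        a = Assumption(statement, impact_if_false)
        self.entries.append(a)
        return a

    def open_risks(self):
        # Anything not yet confirmed still needs attention; a falsified
        # assumption is a live risk, not a closed item.
        return [a for a in self.entries if a.status != "confirmed"]

log = AssumptionLog()
log.record("Boiler output is sufficient", "new boiler and supply work")
gas = log.record("Property is on the gas main", "weeks of groundwork")
gas.status = "falsified"
print(len(log.open_risks()))  # both entries still demand attention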

We need to move away from this optimistic, pretty-diagram school of architecture, in which making the right decisions is an art form of second guessing based on years of accumulated instinct and heuristics.3 We need a more scientific approach with fewer assumptions and oversimplifications. A colleague, Bob Lojek, memorably said, "Once you understand the full problem, there is no problem."

Fundamentally, we need to put more effort into understanding the problem than prematurely defining the solution. As senior architects for IBM, we are often asked to intervene in client projects when things have gone awry. For example:

  • An Agile development method was being used to deliver a leading-edge, web-based, customer self-service solution for a world-leading credit card processor. The team had all the relevant skills, and the lead architect was a software engineering guru who knew the modern technology platform they were using and had delivered many projects in the past.
  • Given the new nature of the technology, the team had conformed strictly to the best-practice patterns for development and had created a technical prototype to ensure that the technology did what they wanted it to do. The design they had created was hugely elegant and was exactly in line with the customer requirement.
  • A problem arose, though. The project had run like a dream for 6 months, but it stalled in the final 3 months of the development. The reporting system for the project recorded correctly that 80 percent of the code had been written and was working, but the progress meter had stopped there and was not moving forward. IBM was asked to take a look and see what the problem was.
  • As usual, the answer was relatively straightforward. The levels of abstraction, or layering, of the system had been done according to theoretical best practice, but it was overly sophisticated for the job that needed to be done. The architecture failed the Occam's Razor test: The lead architect had introduced unnecessary complexity, and his key architectural decisions around abstraction (and, to some extent, decomposition) of the problem had been made in isolation from the actual customer problem.
  • Second, and more important, the architect had ignored the inherent complexity of the solution. Although the user requirements were relatively straightforward and the Level 0 architecture perspectives were easy to understand, he had largely ignored the constraints imposed by the other systems that surrounded the self-service solution.
  • Yes, the design successfully performed a beautiful and elegant abstraction of the core concepts it needed to deal with—it's just that it didn't look anything like the systems to which it needed to be linked. As a result, the core activity for the previous 3 months had been a frantic attempt to map the new solution onto the limitations of the transactions and data models of the old. The mind-bending complexity of trying to pull together two mutually incompatible views of these new and old systems had paralyzed the delivery team. They didn't want to think the unthinkable. They had defined an elegant and best-practice solution to the wrong problem. In doing so, they had ignored hundreds of constraints that needed to be imposed on the new system.
  • When the project restarted with a core understanding of these constraints, it became straightforward to define the right levels of abstraction and separation of concerns. This provided an elegant and simple solution with flexibility in all the right places—without complicating the solution's relationship with its neighbors.
  • —R.H.

As a final horror story, consider a major customer case system for an important government agency:

  • We were asked to intervene after the project (in the hands of another systems integrator) had made little progress after 2 years of investment.
  • At this point, the customer had chosen a package to provide its overarching customer care solution. After significant analysis, this package had been accepted as a superb fit to the business and user requirements. Pretty much everything that was needed to replace the hugely complex legacy systems would come out of a box.
  • However, it was thought that replacing a complete legacy system would be too risky. As a result, the decision was made to use half of the package for the end-user element of the strategic solution; the legacy systems the package was meant to replace would serve as its temporary back end (providing some of the complex logic and many of the interfaces that were necessary for an end-to-end solution).
  • The decision was to eat half the elephant. On paper, from 10,000 feet, it looked straightforward. The high-level analysis had not pointed out any glitches, and the layering of the architecture and the separation of concerns appeared clean and simple.
  • As the project progressed, however, it became apparent that the legacy system imposed a very different set of constraints on the package. Although they were highly similar from an end user and data perspective, the internal models of the new and old systems turned out to be hugely different—and these differences numbered in the thousands instead of the hundreds.
  • Ultimately, the three-way conflict between the user requirements (which were based on the promise of a full new system), the new package, and the legacy system meant that something had to give. The requirements were deemed to be strategic and the legacy system was immovable, so the package had to change. This decision broke the first rule of bottom-up implementations mentioned earlier.
  • Although the system was delivered on time and budget, and although it works to this day for thousands of users and millions of customers, the implementation was hugely complicated by the backflow of constraints from the legacy systems. As a result, it then proved uneconomic to move the system to subsequent major versions of the package. The desired strategic solution became a dead end.
  • —K.J. and R.H.

In each of these cases, a better and more detailed understanding of the overall problem was needed than standard top-down approaches could provide. Such an understanding would have prevented the problems these projects encountered.

Each of these three problems stems from a basic and incorrect assumption by stakeholders that they could build a Greenfield implementation. At the credit card processor, this assumption held firm until the team tried to integrate the new solution with the existing infrastructure. The government department failed to realize that its original requirements were based, in effect, on a survey of a completely different site (one from which the legacy system had been cleared away), resulting in large-scale customization of the original package that had supposedly been a perfect fit.

Fundamentally, today's large-scale IT projects need to work around the constraints of their existing environment. Today's IT architects should regard themselves as Brownfield redevelopers first, and exciting and visionary architects second.

Companies that try to upgrade their hardware or software to the latest levels experience the same ripple effect of contamination from the existing environment. Despite the abstraction and layering of modern software and the imposed rigor of enterprise architectures, making changes to the low levels of systems still has a major impact on today's enterprises.

As we mentioned before, no abstraction is perfect; to some extent, it will leak around the edges. This means there is no such thing as a nondisruptive change to any nontrivial environment. As a supposedly independent layer in the environment changes—perhaps a database, middleware, or operating system version—a ripple of change permeates the environment. Because only certain combinations of products are supported, the change can cascade like a chain of dominoes. Ultimately, these ripples can hit applications, resulting in retesting, application changes, or even reintegration.
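
The domino cascade can be made concrete with a toy dependency model. The stack and its supported-version sets below are entirely invented; the point is only that one low-level upgrade can invalidate each layer above it in turn:

```python
# component -> (its dependency, versions of that dependency it supports)
stack = {
    "database":    ("os",         {7}),
    "middleware":  ("database",   {11}),
    "application": ("middleware", {2}),
}
versions = {"os": 7, "database": 11, "middleware": 2, "application": 1}

def ripple(component, new_version):
    """List everything forced to change when `component` is upgraded."""
    v = dict(versions, **{component: new_version})
    forced = []
    changed = True
    while changed:
        changed = False
        for comp, (dep, supported) in stack.items():
            if v[dep] not in supported and comp not in forced:
                forced.append(comp)
                # The forced upgrade is itself a new version, which may
                # break the next layer up, continuing the ripple.
                v[comp] += 1
                changed = True
    return forced

print(ripple("os", 8))  # every layer above the OS is forced to change
```

Upgrading the operating system here breaks the database's supported combination, the database upgrade breaks the middleware's, and so on up to the application—exactly the chain of dominoes described above.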

Thus, to provide good and true architectures, we need to accept that we need a better understanding of the problem to engineer the right abstractions. Additionally, we need all the aspects of the problem definition (business, application, and infrastructure) to be interlinked so that we can understand when and where the ripple effect of discovered constraints or changes will impact the solution we are defining.
