Introduction to Model Driven Architecture
There's no doubt about it: Software is expensive. The United States alone devotes at least $250 billion each year to application development of approximately 175,000 projects involving several million people. For all of this investment of time and money, though, software's customers continue to be disappointed, because over 30 percent of the projects will be canceled before they're completed, and more than half of the projects will cost nearly twice their original estimates.1
The demand for software also continues to rise. The developed economies rely to a large extent on software for telecommunications, inventory control, payroll, word processing and typesetting, and an ever-widening set of applications. Only a decade ago, the Internet was text-based, known to relatively few scientists connected via the ARPANET and email. Nowadays, it seems as if everyone has his or her own website. Certainly, it's become difficult to conduct even non-computer-related business without email.
There's no end in sight. A Star Trek world of tiny communications devices, voice-recognition software, vast searchable databases of human (for the moment, anyway) knowledge, sophisticated computer-controlled sensing devices, and intelligent displays is now imaginable. (As software professionals, however, we know just how much ingenuity will be required to deliver these new technologies.)
Software practitioners, industrial experts, and academics have not been idle in the face of this need to improve productivity. There have been significant improvements in the ways in which we build software over the last fifty years, two of which are worthy of note in our attempts to make software an asset. First, we've raised the level of abstraction of the languages we use to express behavior; second, we've sought to increase the level of reuse in system construction.
These techniques have undoubtedly improved productivity, but as we bring more powerful tools to bear to solve more difficult problems, the size of each problem we're expected to tackle increases to the point at which we can, once again, barely solve it.
MDA takes the ideas of raising the level of abstraction and increasing reuse up a notch. It also introduces a new idea that ties them together into a greater whole: design-time interoperability.
Raising the Level of Abstraction2
The history of software development is a history of raising the level of abstraction. Our industry used to build systems by soldering wires together to form hard-wired programs. Machine code let us store programs by manipulating switches to enter each instruction. Data was stored on drums whose rotation time had to be taken into account so that the head would be able to read the next instruction at exactly the right time. Later, assemblers took on the tedious task of generating sequences of ones and zeroes from a set of mnemonics designed for each hardware platform.
At some point, programming languages, such as FORTRAN, were born and "formula translation" became a reality. Standards for COBOL and C enabled portability among hardware platforms, and the profession developed techniques for structuring programs so that they were easier to write, understand, and maintain. We now have languages like Smalltalk, C++, Eiffel, and Java, each with the notion of object-orientation, an approach for structuring data and behavior together into classes and objects.
As we moved from one language to another, generally we increased the level of abstraction at which the developer operates, which required the developer to learn a new, higher-level language that could then be mapped into lower-level ones, from C++ to C to assembly code to machine code and the hardware. At first, each higher layer of abstraction was introduced only as a concept. The first assembly languages were no doubt invented without the benefit of an (automated) assembler to turn mnemonics into bits, and developers were grouping functions together with the data they encapsulated long before there was any automatic enforcement of the concept. Similarly, the concepts of structured programming were taught before there were structured programming languages in widespread industrial use (for instance, Pascal).
Over time, however, the new layers of abstraction became formalized, and tools such as assemblers, preprocessors, and compilers were constructed to support the concepts. This had the effect of hiding the details of the lower layers so that only a few experts (compiler writers, for example) needed to concern themselves with the details of how those layers work. In turn, this raised concerns about the loss of control induced by, for example, eliminating the GOTO statement or writing in a high-level language at a distance from the "real machine." Indeed, sometimes the next level of abstraction has been too big a reach for the profession as a whole, of interest only to academics and purists, and the concepts did not capture a large enough mindshare to survive. (ALGOL-68 springs to mind. So does Eiffel, but it has too many living supporters to be a safe choice of example.)
As the profession has raised the level of abstraction at which developers work, we have developed tools to map from one layer to the next automatically. Developers now write in a high-level language that can be mapped to a lower-level language automatically, instead of writing in the lower-level language that can be mapped to assembly language, just as our predecessors wrote in assembly language and had that translated automatically into machine language.
Clearly, this forms a pattern: We formalize our knowledge of an application in as high a level a language as we can. Over time, we learn how to use this language and apply a set of conventions for its use. These conventions become formalized and a higher-level language is born that is mapped automatically into the lower-level language. In turn, this next-higher-level language is perceived as low level, and we develop a set of conventions for its use. These newer conventions are then formalized and mapped into the next level down, and so forth.
The next level of abstraction is the move, shown in Figure 1-1, to model-based development, in which we build software-platform-independent models.
Figure 1-1. Raising the level of abstraction
Software-platform independence is analogous to hardware-platform independence. A hardware-platform-independent language, such as C or Java, enables the writing of a specification that can execute on a variety of hardware platforms with no change. Similarly, a software-platform-independent language enables the writing of a specification that can execute on a variety of software platforms, or software architecture designs, with no change. So, a software-platform-independent specification could be mapped to a multiprocessor/multitasking CORBA environment, or a client-server relational database environment, with no change to the model.
In general, the organization of the data and processing implied by a conceptual model may not be the same as the organization of the data and processing in implementation. If we consider two concepts, those of "customer" and "account," modeling them as classes using the UML suggests that the software solution should be expressed in terms of software classes named Customer and Account. However, there are many possible software designs that can meet these requirements, many of which are not even object-oriented. Between concept and implementation, an attribute may become a reference; a class may be divided into sets of object instances according to some sorting criteria; classes may be merged or split; statecharts may be flattened, merged, or separated; and so on. A modeling language that enables such mappings is software-platform independent.
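The customer-and-account example above can be sketched in code. The following is a minimal, hypothetical illustration (the names, classes, and dictionary layouts are all invented for this sketch, not drawn from any MDA tool): one conceptual model mapped to two different software platforms, an object-oriented design in which the association becomes an object reference, and a relational-style design in which the same association becomes a foreign key.

```python
from dataclasses import dataclass

# Mapping 1: a straightforward object-oriented design.
# The concepts "customer" and "account" become classes, and the
# association between them is realized as an object reference.
@dataclass
class Customer:
    name: str

@dataclass
class Account:
    number: str
    owner: Customer  # the conceptual association, as a reference

# Mapping 2: a relational design, as for a client-server database.
# The same concepts become rows in tables, and the association is
# realized as a foreign key instead of a reference.
customers = {1: {"name": "Ada"}}              # customer_id -> row
accounts = {"A-100": {"customer_id": 1}}      # account number -> row

# Both implementations satisfy the same conceptual model: every
# account is owned by exactly one customer.
ada = Customer("Ada")
acct = Account("A-100", ada)
assert acct.owner.name == "Ada"

row = accounts["A-100"]
assert customers[row["customer_id"]]["name"] == "Ada"
```

Neither mapping is privileged by the model itself; choosing between them (or among many others, including non-object-oriented ones) is exactly the kind of decision a software-platform-independent modeling language defers to the mapping.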
Raising the level of abstraction changes the platform on which each layer of abstraction depends. Model-based development relies on the construction of models that are independent of their software platforms, which include the likes of CORBA, client-server relational database environments, and the very structure of the final code.