1.2 Comparing Software Development and Reuse Techniques
Object-oriented frameworks don't exist in isolation. Class libraries, components, patterns, and model-integrated computing are other techniques that are being applied to reuse software and increase productivity. This section compares frameworks with these techniques to illustrate their similarities and differences, as well as to show how the techniques can be combined to enhance systematic reuse for networked applications.
1.2.1 Comparing Frameworks and Class Libraries
A class is a general-purpose, reusable building block that specifies an interface and encapsulates the representation of its internal data and the functionality of its instances. A library of classes was the most common first-generation object-oriented development technique [Mey97]. Class libraries generally support reuse-in-the-small more effectively than function libraries since classes emphasize the cohesion of data and methods that operate on the data.
Although class libraries are often domain independent and can be applied widely, their effective scope of reuse is limited because they don't capture the canonical control flow, collaboration, and variability among families of related software artifacts. The total amount of reuse with class libraries is therefore relatively small, compared with the amount of application-defined code that must be rewritten for each application. The need to reinvent and reimplement the overall software architecture and much of the control logic for each new application is a prime source of cost and delay for many software projects.
The C++ standard library [Bja00] is a good case in point. It provides classes for strings, vectors, and other containers. Although these classes can be reused in many application domains, they are relatively low level. Application developers are therefore responsible for (re)writing much of the "glue code" that performs the bulk of the application control flow and class integration logic, as shown in Figure 1.2 (1).
Figure 1.2: Class Library versus Framework Architectures
Frameworks are a second-generation development technique [Joh97] that extends the benefits of class libraries in several ways. Most importantly, classes in a framework collaborate to provide a reusable architecture for a family of related applications. Class collaboration in a framework yields "semi-complete" applications that embody domain-specific object structures and functionality. Frameworks can be classified by various means, such as the blackbox and whitebox distinctions described in Sidebar 1 (page 6).
Sidebar 1: Overview of Whitebox and Blackbox Frameworks
Frameworks can be classified in terms of the techniques used to extend them, which range along a continuum from whitebox frameworks to blackbox frameworks [HJE95], as described below:
Whitebox frameworks. Extensibility is achieved in a whitebox framework via object-oriented language features, such as inheritance and dynamic binding. Existing functionality can be reused and customized by inheriting from framework base classes and overriding predefined hook methods [Pre95] using patterns such as Template Method [GoF], which defines an algorithm with some steps supplied by a derived class. To extend a whitebox framework, application developers must have some knowledge of its internal structure.
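The whitebox extension style described above can be illustrated with a small sketch. The class names below are hypothetical, not part of any real framework: the base class owns the invariant algorithm as a Template Method, and an application subclass customizes behavior by overriding hook methods.

```cpp
#include <string>

// Hypothetical whitebox framework base class. The framework fixes the
// overall request-handling algorithm in handle_request() (a Template
// Method) and exposes the variable steps as virtual hook methods.
class Service_Handler {
public:
  virtual ~Service_Handler() {}

  // Template Method: the sequence of steps is owned by the framework.
  std::string handle_request(const std::string &input) {
    std::string parsed = parse(input);     // hook with default behavior
    return format_reply(process(parsed));  // pure hook + default hook
  }

protected:
  // Hook methods: subclasses inherit the defaults or override them.
  virtual std::string parse(const std::string &in) { return in; }
  virtual std::string process(const std::string &in) = 0;
  virtual std::string format_reply(const std::string &out) { return out + "\n"; }
};

// Application-defined subclass: it reuses the framework's control flow
// and supplies only the domain-specific processing step.
class Echo_Handler : public Service_Handler {
protected:
  std::string process(const std::string &in) override { return "ECHO: " + in; }
};

std::string run_echo(const std::string &input) {
  Echo_Handler handler;
  return handler.handle_request(input);
}
```

Note that the subclass must know which hooks exist and when the framework calls them, which is exactly the internal knowledge whitebox reuse requires.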
Blackbox frameworks. Extensibility is achieved in a blackbox framework by defining interfaces that allow objects to be plugged into the framework via composition and delegation. Existing functionality can be reused by defining classes that conform to a particular interface and then integrating these classes into the framework using patterns such as Function Object [Kuh97], Bridge/Strategy [GoF], and Pluggable Factory [Vli98b, Vli99, Cul99], which provide a blackbox abstraction for selecting one of many implementations. Blackbox frameworks can be easier to use than whitebox frameworks since application developers need less knowledge of the framework's internal structure. Blackbox frameworks can also be harder to design, however, since framework developers must define crisp interfaces that anticipate a range of use cases.
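A blackbox extension point can be sketched with the Strategy pattern. The names below (a concurrency strategy plugged into an acceptor-like framework class) are illustrative assumptions, not a real framework API: the application implements a published interface and plugs the object in via composition rather than inheritance from framework internals.

```cpp
#include <memory>
#include <string>
#include <utility>

// Blackbox plug-in interface published by the (hypothetical) framework.
class Concurrency_Strategy {
public:
  virtual ~Concurrency_Strategy() {}
  virtual std::string activate(const std::string &service) = 0;
};

// Two interchangeable implementations conforming to the interface.
class Reactive_Strategy : public Concurrency_Strategy {
public:
  std::string activate(const std::string &service) override {
    return service + ": single-threaded reactive dispatch";
  }
};

class Thread_Per_Connection_Strategy : public Concurrency_Strategy {
public:
  std::string activate(const std::string &service) override {
    return service + ": thread per connection";
  }
};

// The framework class delegates to whichever strategy was plugged in;
// it never needs to know the concrete type.
class Acceptor {
  std::unique_ptr<Concurrency_Strategy> strategy_;
public:
  explicit Acceptor(std::unique_ptr<Concurrency_Strategy> s)
    : strategy_(std::move(s)) {}
  std::string accept(const std::string &service) {
    return strategy_->activate(service);
  }
};
```

Swapping `Reactive_Strategy` for `Thread_Per_Connection_Strategy` changes the framework's behavior without touching, or even seeing, its internal structure.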
Another way that class libraries differ from frameworks is that the classes in a library are typically passive since they perform their processing by borrowing the thread from so-called self-directed applications that invoke their methods. As a result, developers must continually rewrite much of the control logic needed to bind the reusable classes together to form complete networked applications. In contrast, frameworks are active since they direct the flow of control within an application via various callback-driven event handling patterns, such as Reactor [POSA2] and Observer [GoF]. These patterns invert the application's flow of control using the Hollywood Principle: "Don't call us, we'll call you" [Vli98a]. Since frameworks are active and manage the application's control flow, they can perform a broader range of activities on behalf of applications than is possible with passive class libraries.
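The inversion of control described above can be sketched with a deliberately simplified, reactor-style event loop. This is not the real Reactor pattern implementation or the ACE API, just an illustration of the Hollywood Principle: the application registers callbacks, and the framework decides when to invoke them.

```cpp
#include <functional>
#include <map>
#include <utility>
#include <vector>

// Minimal inversion-of-control sketch: applications register handlers,
// and the framework's dispatch loop calls them back when their
// (simulated) handles become ready.
class Mini_Reactor {
  std::map<int, std::function<void(int)>> handlers_;
public:
  void register_handler(int handle, std::function<void(int)> callback) {
    handlers_[handle] = std::move(callback);
  }

  // The framework, not the application, drives the control flow:
  // "Don't call us, we'll call you."
  void dispatch(const std::vector<int> &ready_handles) {
    for (int h : ready_handles) {
      auto it = handlers_.find(h);
      if (it != handlers_.end())
        it->second(h);
    }
  }
};

// Application code supplies callbacks but never owns the event loop.
int events_seen(const std::vector<int> &ready) {
  Mini_Reactor reactor;
  int count = 0;
  reactor.register_handler(3, [&count](int) { ++count; });
  reactor.register_handler(5, [&count](int) { ++count; });
  reactor.dispatch(ready);
  return count;
}
```

Contrast this with a passive class library, where the application would write the loop itself and call library methods at times of its own choosing.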
Frameworks and class libraries are complementary technologies in practice. Frameworks provide a foundational structure to applications. Since frameworks are focused on a specific domain, however, they aren't expected to satisfy the broadest range of application development needs. Class libraries are therefore often used in conjunction with frameworks and applications to implement commonly needed code artifacts, such as strings, files, and time/date classes.
For example, the ACE frameworks use the ACE wrapper facade classes to ensure their portability. Likewise, applications can use the ACE container classes described in [HJS] to help implement their event handlers. Whereas the ACE container classes and wrapper facades are passive, the ACE frameworks are active and provide inversion of control at run time. The ACE toolkit provides both frameworks and a library of classes to help programmers address a range of challenges that arise when developing networked applications.
1.2.2 Comparing Frameworks and Components
A component is an encapsulated part of a software system that implements a specific service or set of services. A component has one or more interfaces that provide access to its services. Components serve as building blocks for the structure of an application and can be reused based solely upon knowledge of their interface protocols.
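The interface-only reuse that components enable can be shown with a small sketch. The service names below are hypothetical: the point is that client code is written entirely against the interface protocol, so the implementation behind it can be replaced (for example, by a remote or file-backed component) without changing the client.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical component interface: clients reuse the component based
// solely on this interface, with no knowledge of its implementation.
class Logging_Service {
public:
  virtual ~Logging_Service() {}
  virtual void log(const std::string &msg) = 0;
  virtual std::size_t entries() const = 0;
};

// One possible implementation behind the interface.
class In_Memory_Logger : public Logging_Service {
  std::vector<std::string> entries_;
public:
  void log(const std::string &msg) override { entries_.push_back(msg); }
  std::size_t entries() const override { return entries_.size(); }
};

// Client code depends only on the interface protocol, never on
// In_Memory_Logger or any other concrete component.
std::size_t record_startup(Logging_Service &svc) {
  svc.log("service initialized");
  svc.log("listening for connections");
  return svc.entries();
}
```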
Components are a third-generation development technique [Szy98] that are widely used by developers of multitier enterprise applications. Common examples of components include ActiveX controls [Egr98] and COM objects [Box98], .NET web services [TL01], Enterprise JavaBeans [MH01], and the CORBA Component Model (CCM) [Obj01a]. Components can be plugged together or scripted to form complete applications, as shown in Figure 1.3.
Figure 1.3: A Component Architecture
Figure 1.3 also shows how a component implements the business application logic in the context of a container. A container allows its component to access resources and services provided by an underlying middleware platform. In addition, this figure shows how generic application servers can be used to instantiate and manage containers and execute the components configured into them. Metadata associated with components provide instructions that application servers use to configure and connect components.
Many interdependent components in enterprise applications can reside in multiple, possibly distributed, application servers. Each application server consists of some number of components that implement certain services for clients. These components in turn may include other collocated or remote services. In general, components help developers reduce their initial software development effort by integrating custom application components with reusable off-the-shelf components into generic application server frameworks. Moreover, as the requirements of applications change, components can help make it easier to migrate and redistribute certain services to adapt to new environments, while preserving key application properties, such as security and availability.
Components are generally less lexically and spatially coupled than frameworks. For example, applications can reuse components without having to subclass them from existing base classes. In addition, by applying common patterns, such as Proxy [GoF] and Broker [POSA1], components can be distributed to servers throughout a network and accessed by clients remotely. Modern application servers, such as JBoss and BEA Systems' WebLogic Server, use these types of patterns to facilitate an application's use of components.
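The Proxy pattern mentioned above can be sketched as follows. The names are illustrative and the "remote" hop is simulated in-process: a real proxy would marshal the request and forward it across the network via a broker, but the key property is that clients use the same interface whether the component is local or remote.

```cpp
#include <string>

// Common interface shared by the real component and its proxy.
class Quote_Service {
public:
  virtual ~Quote_Service() {}
  virtual double price(const std::string &symbol) = 0;
};

// Server-side component implementation (illustrative data).
class Quote_Service_Impl : public Quote_Service {
public:
  double price(const std::string &symbol) override {
    return symbol == "ACE" ? 42.0 : 0.0;
  }
};

// Client-side proxy: stands in for the remote component. Here the
// "transport" is a direct reference; a real proxy would perform
// marshaling and network I/O instead.
class Quote_Service_Proxy : public Quote_Service {
  Quote_Service &remote_;
public:
  explicit Quote_Service_Proxy(Quote_Service &remote) : remote_(remote) {}
  double price(const std::string &symbol) override {
    return remote_.price(symbol);  // forward the invocation
  }
};
```

Because the proxy conforms to `Quote_Service`, client code needs no changes when the component is redistributed to another server.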
The relationship between frameworks and components is highly synergistic, with neither subordinate to the other [Joh97]. For example, the ACE frameworks can be used to develop higher-level application components, whose interfaces then provide a facade [GoF] for the internal class structure of the frameworks. Likewise, components can be used as pluggable strategies in blackbox frameworks [HJE95]. Frameworks are often used to simplify the development of middleware component models [TL01, MH01, Obj01a], whereas components are often used to simplify the development and configuration of networked application software.
1.2.3 Comparing Frameworks and Patterns
Developers of networked applications must address design challenges related to complex topics, such as connection management, service initialization, distribution, concurrency control, flow control, error handling, event loop integration, and dependability. Since these challenges are often independent of specific application requirements, developers can resolve them by applying the following types of patterns [POSA1]:
Design patterns provide a scheme for refining the elements of a software system and the relationships between them, and describe a common structure of communicating elements that solves a general design problem within a particular context.
Architectural patterns express the fundamental, overall structural organization of software systems and provide a set of predefined subsystems, specify their responsibilities, and include guidelines for organizing the relationships between them.
Pattern languages define a vocabulary for talking about software development problems and provide a process for the orderly resolution of these problems.
Traditionally, patterns and pattern languages have been locked in the heads of expert developers or buried deep within the source code of software applications and systems. Allowing this valuable information to reside only in these locations is risky and expensive. Explicitly capturing and documenting patterns for networked applications helps to
Preserve important design information for programmers who enhance and maintain existing software. This information will be lost if it isn't documented, which can increase software entropy and decrease software maintainability and quality.
Guide design choices for developers who are building new applications. Since patterns document the common traps and pitfalls in their domain, they help developers to select suitable architectures, protocols, algorithms, and platform features without wasting time and effort (re)implementing solutions that are known to be inefficient or error prone.
Knowledge of patterns and pattern languages helps to reduce development effort and maintenance costs. Reuse of patterns alone, however, does not create flexible and efficient software. Although patterns enable reuse of abstract design and architecture knowledge, software abstractions documented as patterns don't directly yield reusable code. It's therefore essential to augment the study of patterns with the creation and use of frameworks. Frameworks help developers avoid costly reinvention of standard software artifacts by reifying common patterns and pattern languages and by refactoring common implementation roles.
ACE users can write networked applications quickly because the frameworks in ACE implement the core patterns associated with service access, event handling, concurrency, and synchronization [POSA2]. This knowledge transfer makes ACE more accessible and directly applicable compared to many other common knowledge transfer activities, such as seminars, conferences, or design and code reviews. Although these other activities are useful, they are limited because participants must learn from past work of others, and then try to apply it to their current and future projects. In comparison, ACE provides direct knowledge transfer by embodying framework usage patterns in a powerful toolkit containing both networked application domain experience and working code.
For example, JAWS [HS99] is a high-performance, open-source, adaptive Web server built using the ACE frameworks. Figure 1.4 (page 10) illustrates how the JAWS Web server is structured as a set of collaborating frameworks whose design is guided by the patterns listed along the borders of the figure. These patterns help resolve common design challenges that arise when developing concurrent servers, including encapsulating low-level operating system APIs, decoupling event demultiplexing and connection management from protocol processing, scaling up server performance via multithreading, minimizing server threading overhead, using asynchronous I/O effectively, and enhancing server configurability. More information on the patterns and design of JAWS appears in Chapter 1 of POSA2.
Figure 1.4: Patterns Forming the Architecture of JAWS
1.2.4 Comparing Frameworks and Model-Integrated Computing
Model-integrated computing (MIC) [SK97] is an emerging development paradigm that uses domain-specific modeling languages to systematically engineer software ranging from small-scale real-time embedded systems to large-scale enterprise applications. MIC development environments include domain-specific model analysis and model-based program synthesis tools. MIC models can capture the essence of a class of applications, as well as focus on a single, custom application. MIC also allows the modeling languages and environments themselves to be modeled by so-called meta-models [SKLN01], which help to synthesize domain-specific modeling languages that can capture subtle insights about the domains they are designed to model, making this knowledge available for reuse.
Popular examples of MIC being used today include the Generic Modeling Environment (GME) [LBM+01] and Ptolemy [BHLM94] (which are used primarily in the real-time and embedded domain) and UML/XML tools based on the OMG Model Driven Architecture (MDA) [Obj01b] (which are used primarily in the business domain thus far). When implemented properly, these MIC technologies help to
Free application developers from dependencies on particular software APIs, which ensures that the models can be reused for a long time, even as existing software APIs are obsoleted by newer ones.
Provide correctness proofs for various algorithms by analyzing the models automatically and offering refinements to satisfy various constraints.
Generate code that's highly dependable and robust since the modeling tools themselves can be synthesized from meta-models using provably correct technologies.
Rapidly prototype new concepts and applications that can be modeled quickly using this paradigm, compared to the effort required to prototype them manually.
Reuse domain-specific modeling insights, saving significant amounts of time and effort, while also reducing application time-to-market and improving consistency and quality.
As shown in Figure 1.5, the MIC development process uses a set of tools to analyze the interdependent features of the application captured in a model and determine the feasibility of supporting different QoS requirements in the context of the specified constraints. Another set of tools then translates models into executable specifications that capture the platform behavior, constraints, and interactions with the environment. These executable specifications in turn can be used to synthesize application software.
Figure 1.5: Steps in the Model-Integrated Computing Development Process
Earlier efforts at model-based development and code synthesis attempted by CASE tools generally failed to deliver on their potential for the following reasons [All02]:
They attempted to generate entire applications, including the infrastructure and the application logic, which led to inefficient, bloated code that was hard to optimize, validate, evolve, or integrate with existing code.
Due to the lack of sophisticated domain-specific languages and associated modeling tools, it was hard to achieve round-trip engineering, that is, moving back and forth seamlessly between model representations and the synthesized code.
Since CASE tools and early modeling languages dealt primarily with a restricted set of platforms (such as mainframes) and legacy programming languages (such as COBOL), they did not adapt well to the distributed computing paradigm that arose from advances in PC and Internet technology and newer object-oriented programming languages, such as Java, C++, and C#.
Many of the limitations with model-integrated computing outlined above can be overcome by integrating MIC tools and processes with object-oriented frameworks [GSNW02]. This integration helps to overcome problems with earlier-generation CASE tools since it does not require the modeling tools to generate all the code. Instead, large portions of applications can be composed from reusable, prevalidated framework classes. Likewise, integrating MIC with frameworks helps address environments where application requirements and functionality change at a rapid pace by synthesizing and assembling newer extended framework classes and automating the configuration of many QoS-critical aspects, such as concurrency, distribution, transactions, security, and dependability.
The combination of model-integrated computing with frameworks, components, and patterns is an area of active research [Bay02]. In the DOC group, for example, there are R&D efforts underway to develop a MIC tool suite called Component Synthesis with Model-Integrated Computing (CoSMIC) [GSNW02]. CoSMIC extends the popular GME modeling and synthesis tools [LBM+01] and the ACE ORB (TAO) [SLM98] to support the development, assembly, and deployment of QoS-enabled networked applications. To ensure the QoS requirements can be realized in the middleware layer, CoSMIC's model-integrated computing tools can specify and analyze the QoS requirements of application components in their accompanying metadata.