
3.10 Distributed Infrastructures

Earlier, the concept of middleware was introduced. Middleware provides the software infrastructure, layered over networking hardware, for integrating server platforms with computing clients, which may be complete platforms in their own right.

Distributed infrastructure is a broad term for the full array of object-oriented and other information technologies from which the software architect can select. Figure 3.20 shows the smorgasbord of technologies available on client platforms, server platforms, and middleware [Orfali 1996]. On the client platform, technologies include Internet Web browsers, graphical user interface development capabilities, system management capabilities, and operating systems. On the server platform, there is a similar array of technologies, including object services, groupware capabilities, transaction capabilities, and databases. As mentioned before, server capabilities are migrating to client platforms as client-server technologies evolve. The middleware arena also offers a fairly wide array of client-server capabilities, including a large selection of transport stacks, network operating systems, system management environments, and specific services. These technologies are described in significant detail by Robert Orfali, Dan Harkey, and Jeri Edwards in The Client/Server Survival Guide [Orfali 1996].

Figure 3.20. Infrastructure Reference Model

A key point about client-server technologies is that the ones worth adopting are those based upon standards. The great thing about standards is that there are so many to choose from: a typical application portability profile contains over 300 technology standards. Such a standards profile would be applicable to a typical large-enterprise information policy, and many profiles of this kind have been developed for the U.S. government and for commercial industry. The information technology market is quite large and growing. The object-oriented segment of this market is still relatively small, but it now comprises enough of the market to be a factor in most application systems environments.

As standards evolve, so do commercial technologies. Formal standards can take up to seven years to adopt, but consortia like the OMG complete specifications in as little as a year and a half. Commercial technologies are evolving at an even greater rate, trending down from the three-year cycle that characterized technologies in the late 1980s and early 1990s to the 18-month and one-year cycles that characterize technologies today. For example, many vendors now combine the year number with their product names, so that the obsolescence of the technology is obvious every time the program is invoked, and users feel increasingly compelled to upgrade their software on a regular yearly basis. Will vendors reduce innovation time to less than one year and perhaps start to bundle the month and year designation with their product names?

The management of compatibility between product versions is an increasingly difficult challenge, given that end-user enterprises can depend upon hundreds or even thousands of individual product releases within their corporate information technology environments. A typical medium-sized independent software vendor depends upon approximately 200 other software vendors to deliver its products and services, up from only about a dozen six years ago. Figure 3.21 shows in more detail how commercial technologies in the middleware market are evolving toward increasing application functionality. Starting with the origins of networking, protocol stacks such as the Transmission Control Protocol (TCP) provide basic capabilities for moving raw data across networks.

Figure 3.21. Evolution of Distributed Computing Technologies

The next level of technologies includes the socket services, which are available on most platforms and underlie many Internet technologies. These socket services resolve differences between platform dependencies. At the next layer are service interfaces such as the Transport Layer Interface (TLI), which enables the substitution of multiple transport services below application software. As each of these technologies improves upon its predecessors, additional functionality that would normally be programmed into application software is embodied in the underlying infrastructure. One consequence of this increasing level of abstraction is a loss of control over the underlying network details and qualities of service that were fully exposed at the more primitive levels. Beyond transport invisibility, remote-procedure-call technologies provide a natural high-level-language mechanism for network-based communications. The Distributed Computing Environment (DCE) represents the culmination of procedural technologies supporting distributed computing. Object-oriented extensions to DCE, including object-oriented DCE and Microsoft COM+, now provide mechanisms for using object-oriented programming languages with these infrastructures.
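The contrast between the raw transport level and the higher layers can be seen in a small sketch. The following Python fragment is purely an illustration, not part of any technology discussed here; it shows what programming at the bare socket level looks like, where the application sees only a byte stream and must supply all framing and interpretation itself.

```python
import socket

# A minimal sketch of the raw socket layer: bytes move across a transport,
# and the application is responsible for all message framing and meaning.
# socket.socketpair() gives two connected endpoints within one process,
# standing in for a real network connection.
a, b = socket.socketpair()
a.sendall(b"request")        # the sender must encode its own messages
data = b.recv(1024)          # the receiver must know how to decode them
print(data)                  # b'request'
a.close()
b.close()
```

Everything above the socket layer, from TLI to RPC to ORBs, exists to move this encoding and decoding burden out of application code and into the infrastructure.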

Finally, the CORBA object request broker abstracts above the remote-procedure-call mechanisms by unifying the way that object classes are referenced with the way that individual services are referenced. In other words, CORBA removes yet another level of networking detail, simplifying references to objects and services within a distributed computing environment. The progress of technology evolution is not always forward, however. Some significant technologies with architectural benefits did not succeed in the technology market. An example is OpenDoc, which in the opinion of many authorities had architectural benefits exceeding those of contemporary technologies like ActiveX and JavaBeans.

Standards groups have highly overlapping memberships, with big companies dominating most forums. Groups come and go with the fashions of technological innovation. Recently, Internet forums (W3C, IETF) have dominated, as have the open forums of JavaSoft and Microsoft.

Many networking and open-systems technologies, as well as other object-oriented standards, are the products of now-defunct consortia. The consortium picture is dynamic. Some former consortia, such as the Open Software Foundation and X/Open, have merged to form The Open Group. Other consortia, such as the Object Management Group and the Common Open Software Environment, are highly overlapping in membership. A recent addition to the consortium community is the Active Group, which is responsible for publishing technology specifications for already-released technologies developed by Microsoft (Figure 3.22). The Open Software Foundation originated the Distributed Computing Environment, which supports remote procedure calls as well as other distributed services. DCE is the direct predecessor of the Microsoft COM+ technologies and represents the consensus of a consortium of vendors outside Microsoft for procedural distributed computing.

Figure 3.22. Commercial Software Technology Consortia

Along with CORBA, the Distributed Computing Environment is a mainstream technology utilized by many large-scale enterprises (Figure 3.23). One important shortcoming of DCE is that it provides a single-protocol-stack implementation. As distributed computing technologies evolve, it becomes increasingly necessary to provide multiple network implementations to satisfy various quality-of-service requirements, which may include timeliness of message delivery, performance, throughput, reliability, security, and other nonfunctional requirements. With a single-protocol-stack implementation, application developers cannot provide the appropriate levels of service. The technology gap described here is properly called a lack of access transparency, a term defined by an international standards organization reference model covered in Chapter 9. Proper object-oriented distributed computing infrastructures do provide access transparency and give developers the freedom to select the appropriate protocol stacks to meet application quality-of-service requirements.

Figure 3.23. Distributed Computing Environment

Figure 3.24 shows the infrastructure technologies from the Microsoft COM+ and ActiveX product lines. The basis of these technologies for distributed computing came from the original OSF environment, but that technology was extended in various ways with proprietary interfaces that also support the use of C++ programs in addition to the C programs supported by DCE. The ActiveX technologies are partitioned between capabilities that support distributed computing and capabilities that are limited to a single desktop. The desktop-specific capabilities include the compound document facilities, which support the integration of data from multiple applications in a single office document. When moving a document from desktop to desktop, complications can arise because of the lack of complete integration with the distributed environment.

Figure 3.24. ActiveX Technology Elements

Figure 3.25 shows some of the underlying details of how the Component Object Model and COM+ interface with application software. Application software is exposed to Microsoft-generated function tables that are directly related to the runtime system of Microsoft Visual C++. The consequence of this close coupling between Visual C++ and application software is that the mapping to other programming languages is not standardized and in some cases is quite awkward (e.g., when ordinary C programs are used with the COM+ infrastructure). The CORBA technologies resolve some of these shortcomings.
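As a loose illustration of the function-table coupling described above, the following Python sketch models an interface as a table of functions dispatched by slot index. The class and function names are invented for this example; real COM additionally fixes the binary layout of the table to match the Visual C++ compiler's vtables, which is exactly why other languages struggle to map onto it.

```python
# Illustrative sketch (not real COM): an interface pointer refers to a
# table of function pointers, and every call dispatches through that
# table by slot index. Because the slot order and layout are fixed,
# every language binding must reproduce the same table shape.
class ObjectState:
    def __init__(self):
        self.ref_count = 0

def add_ref(this):           # occupies a fixed slot in the table
    this.ref_count += 1
    return this.ref_count

def release(this):           # occupies the next slot
    this.ref_count -= 1
    return this.ref_count

function_table = [add_ref, release]   # the shared function table

obj = ObjectState()
print(function_table[0](obj))    # dispatch through slot 0: add_ref -> 1
print(function_table[1](obj))    # dispatch through slot 1: release -> 0
```

A language that cannot lay out such a table with the expected slot order and calling convention cannot call the object at all, which is the awkwardness the text describes.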

Figure 3.25. Component Object Model

Figure 3.26 shows the basic concept behind an Object Request Broker (ORB). The purpose of an ORB is to provide communications between different elements of application software. The application software providing a service is represented by an object, which may encapsulate software that is not object oriented. An application client can request services from an object by sending the request through the ORB. The CORBA mechanism is defined to help simplify the role of a client within a distributed system. The benefit of this approach is that it reduces the amount of software that must be written for an application client to interoperate successfully in a distributed environment.
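The ORB concept can be sketched in a few lines of Python. The broker below is a toy with invented names, not a CORBA API; it shows only the essential idea that the client names an object and an operation, and the broker locates the implementation and forwards the request.

```python
# A toy object request broker: the client addresses a named object and an
# operation; the broker locates the implementation and forwards the call.
class Orb:
    def __init__(self):
        self._objects = {}

    def register(self, name, obj):
        self._objects[name] = obj

    def invoke(self, name, operation, *args):
        target = self._objects[name]            # locate the servant
        return getattr(target, operation)(*args)

class Thermometer:                              # may wrap non-OO code
    def read_celsius(self):
        return 21.5

orb = Orb()
orb.register("thermometer", Thermometer())
# The client knows only the object's name and the operation, not where
# or how the object is implemented.
print(orb.invoke("thermometer", "read_celsius"))   # 21.5
```

In a real ORB the `invoke` step also marshals arguments and crosses a network, but the client-side simplification is the same: the client never deals with transports or locations directly.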

Figure 3.26. Object Request Broker Concept

Figure 3.27 shows some of the finer-grained details of the CORBA model. Figure 3.27 relates to Figure 3.26 in that the client and object software interoperate through an ORB infrastructure. The part of the infrastructure standardized by CORBA is limited to the shaded interfaces between the application software and the ORB infrastructure; CORBA does not standardize the underlying mechanisms or protocol stacks. There are both benefits and consequences to this freedom of implementation. Because different implementers can supply different mechanisms and protocol stacks underneath the CORBA interfaces, a number of different products support this standard and provide various qualities of service. Some implementations, in fact, provide dynamic qualities of service that can vary between local and remote invocations. The consequence of this freedom is that the mechanisms selected may not be compatible across different vendors. An additional standard, the Internet Inter-ORB Protocol (IIOP), defines how different ORB mechanisms can interoperate transparently. The implementation of IIOP is required for all CORBA products.

Figure 3.27. Key Interfaces in CORBA Architecture

The CORBA infrastructure provides two different kinds of mechanisms on both the client and implementation sides of the communication services. On the client side, the developer has the option of using precompiled stub programs that resemble ordinary calls in the application software. The use of static stubs minimizes the special programming that is required because the application is potentially distributed. The stub programs appear as local objects in the application environment, but each stub is a proxy for a remote object.
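A minimal sketch of the static-stub idea, with invented names: the client calls what appears to be a local object, and the stub forwards each call through a stand-in for the marshalling and transport layer.

```python
# The real servant, which in practice would live in another process.
class RemoteAccount:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
        return self.balance

def channel_send(servant, op, args):
    # Stands in for argument marshalling plus network transport.
    return getattr(servant, op)(*args)

class AccountStub:
    """What an IDL compiler would generate for the client side."""
    def __init__(self, servant):
        self._servant = servant
    def deposit(self, amount):       # same signature as the real object
        return channel_send(self._servant, "deposit", (amount,))

account = AccountStub(RemoteAccount())
print(account.deposit(50))           # 50 -- reads like a local call
```

The client code is indistinguishable from a local call, which is precisely what makes static stubs the low-effort option for ordinary applications.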

Alternatively, the client developer can use dynamic invocation (Figure 3.27), an interface that enables the client to invoke arbitrary operations upon objects that it discovers at run time. Dynamic invocation gives the CORBA mechanism extensibility, which is required only in certain kinds of specialty applications, such as program debuggers, mobile agent programs, and operating systems. The implementer of object services in the CORBA environment likewise can choose static or dynamic invocation; the two options are realized as either static skeletons or dynamic skeletons.
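Dynamic invocation can be sketched as looking an operation up by name at run time rather than calling a precompiled stub. The names below are illustrative, not CORBA's Dynamic Invocation Interface, but the shape of the idea is the same.

```python
# Dynamic invocation: the client constructs the request itself from an
# operation name discovered at run time, with no compiled-in stub.
class Servant:
    def ping(self):
        return "pong"

def dynamic_invoke(target, operation, *args):
    method = getattr(target, operation)   # look the operation up by name
    return method(*args)

op = "ping"                # might come from an interface repository
print(dynamic_invoke(Servant(), op))     # pong
```

A debugger or mobile-agent host needs exactly this: it cannot know at compile time which interfaces it will encounter, so it must build requests on the fly.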

The skeletons provide the software that interfaces between the ORB's communication infrastructure and the application program, and they do so in a way that is natural to the software developer. Using dynamic skeletons together with dynamic invocation in the same program makes interesting capabilities possible. For example, software firewalls, which provide filtering between different groups of applications, can easily be implemented with these two dynamic capabilities.
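The firewall example can be sketched by combining the two dynamic capabilities: a dynamic-skeleton-style entry point receives any request by name, applies a filter, and forwards allowed requests by dynamic invocation. All names here are invented for illustration.

```python
# A toy software firewall built from the two dynamic capabilities:
# it accepts any operation by name (dynamic skeleton), filters it,
# and forwards allowed requests by name lookup (dynamic invocation).
ALLOWED = {"read_status"}

class Backend:
    def read_status(self):
        return "ok"
    def shutdown(self):
        return "halted"

class FirewallBridge:
    def __init__(self, target):
        self._target = target
    def handle(self, operation, *args):
        # Dynamic-skeleton entry point: no compiled-in interface knowledge.
        if operation not in ALLOWED:
            raise PermissionError(operation)
        # Dynamic invocation: forward the permitted request by name.
        return getattr(self._target, operation)(*args)

fw = FirewallBridge(Backend())
print(fw.handle("read_status"))          # ok
```

Because the bridge never needs compiled stubs or skeletons for the interfaces it mediates, one bridge can filter traffic for any group of applications.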

Figure 3.28 shows the CORBA technologies in the object management architecture and how these technologies relate to the Cargill model discussed earlier. The object management architecture shown in Figure 3.9 provides a reference model for all the CORBA technologies. CORBA and the related standards, such as CORBA services and CORBA facilities, are examples of industry standards that apply broadly across multiple domains.

Figure 3.28. Extensions of the Object Management Architecture

The CORBA domains comprise functional profiles in the Cargill model. In other words, the CORBA domain interface specifications represent domain-specific interoperability conventions for how to use the CORBA technologies to provide interoperability. Finally, the application objects in the object management architecture correspond directly with the application implementations in the Cargill model.

Other initiatives besides CORBA have attempted to specify comprehensive standards hierarchies. First Taligent and then IBM's San Francisco project attempted to define object standards frameworks, but neither garnered the expected popularity. Java 2 Enterprise Edition (J2EE) has come closest to achieving the vision and represents outstanding progress toward completing the standards picture.
