The Elements of eAI
Much of the confusion surrounding technologies for eAI arises because they take such apparently different approaches to solving the problem. In Figure 1, I've categorized the technologies in order to group together those that are similar. A number of categories exist, beginning with Data Exchange at the bottom and ending with Business-to-Business Integration at the top. In the diagram, each category depends on those below it; so, for example, the Data Integration and Application Integration categories depend on technologies from the Data Exchange category. Let's look at the categories in more detail, beginning at the bottom.
Figure 1. The taxonomy of eAI technologies.
Data Exchange
Communication lies at the heart of application integration. At its most fundamental level, communication is concerned with getting the correct data to the correct place in the correct format. This is the role of the Data Exchange category in Figure 1.
Included in this category are base-level technologies, such as networking. This includes, for example, use of programming interfaces to network protocols such as TCP/IP and IBM's Systems Network Architecture (SNA). These programming calls move packets over the network, a very low-level approach to provision of communications. Although all other forms of data exchange depend on these mechanisms, they are not usually used directly in application programs. Applications normally need more function than the raw networks provide.
At a slightly higher level, technologies such as remote procedure call (RPC) are available. RPC technology simplifies the use of networking for programmers by hiding all of the communications calls and reducing the amount of code that must be written. Most RPC implementations also carry out conversion among the different data representations used by different computer systems, an otherwise tedious and error-prone task. Examples of RPC technology include the Open Group's Distributed Computing Environment (DCE), an international standard from which Microsoft's DCOM was initially developed. Other vendors, including Sun and Novell, have RPC schemes within their networking support.
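As a concrete illustration, here is a minimal RPC sketch using Python's standard XML-RPC support. The convert_currency service is invented for the example; the point is that the client invokes the remote function as if it were local, with the RPC layer hiding the sockets and the data conversion.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# A hypothetical service function. The RPC layer handles the network
# transport and converts arguments between machine representations.
def convert_currency(amount, rate):
    return round(amount * rate, 2)

# Start a server on an ephemeral port, running in a background thread.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(convert_currency)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client calls the remote function as if it were local.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.convert_currency(100.0, 1.5)
print(result)
server.shutdown()
```

Notice that no socket code appears in either the service or the client; that is precisely the simplification RPC offers.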
Techniques such as RPC and direct use of networking still leave application programs open to the vagaries of the network. They also require developers to program for network failures and timeouts. These are very tricky techniques, and few application programmers have the skills and experience to appreciate the very real difficulties that are involved. The programming interfaces are deceptively simple. The true difficulties are associated with designs that protect applications from network failures without losing or duplicating data.
The difficulties that are associated with using the low-level technologies in critical applications have led to a steep rise in the popularity of more robust solutions, such as message queuing. This technology allows applications to operate asynchronously; its connectionless operation protects applications from network or server problems to the extent that failures simply delay processing until the failing resource has been recovered. This basic robustness has proven very attractive—especially in industries such as finance, where data in messages can have a high intrinsic value. Applications must be designed to exploit the asynchronous capabilities of the technology in order to benefit from it. However, this is usually much simpler and more flexible than explicitly coding to recover from the failures that are associated with techniques such as RPC.
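The decoupling that message queuing provides can be sketched with an in-memory queue standing in for a persistent product such as MQSeries or MSMQ; the payment messages are invented for illustration. The sender never waits for the receiver, so a receiver outage merely delays processing.

```python
import queue

# An in-memory queue stands in for a persistent, recoverable message
# queue; real products also survive network and server failures.
payments = queue.Queue()

# The sending application puts messages on the queue and carries on,
# even though no receiver is currently running.
for amount in (250.00, 75.50, 1200.00):
    payments.put({"type": "payment", "amount": amount})

# Later, perhaps after the receiving system has been recovered, the
# receiver drains the queue; the failure merely delayed processing.
received = []
while not payments.empty():
    received.append(payments.get())

print(len(received))
```

The asynchronous design is the key point: neither application ever needed the other to be running at the same moment.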
Another advantage of message queuing is that the data in transit can be processed during its journey. In RPC, for example, applications have no opportunity to interact with the data between the time one program makes a request and the time another receives it. The two applications are tightly coupled: They must agree on the nature of the data that they exchange. With the message-queuing approach, such tight coupling is removed. It is quite possible to introduce intermediate programs that process the data in transit. These programs have become known as message brokers. Their task is usually to interpret and transform messages coming from one program so that they can be understood at their destination. This type of capability is important because it can allow applications to communicate even though they were not designed to do so when they were written.
Transformation is described in terms of mapping rules that are defined via graphical user interfaces rather than being coded. This capability has become increasingly important in recent years as companies wrestle with the need to deliver new solutions more and more quickly in order to address market opportunities. Anything that can reduce the development effort can reduce time to market, and can therefore be an important commercial benefit.
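A broker-style transformation can be sketched as data-driven mapping rules rather than hand-written conversion code; the field names are hypothetical, echoing the kind of rules a graphical mapping tool would generate.

```python
# The mapping is data, not code: a graphical tool could generate or
# edit these rules without any reprogramming. Field names are invented.
mapping_rules = {
    "custName": "customer_name",
    "custAcct": "account_number",
    "amt":      "amount",
}

def transform(message, rules):
    """Rename fields from the source format to the destination format."""
    return {rules.get(field, field): value for field, value in message.items()}

source_msg = {"custName": "Acme Ltd", "custAcct": "0042", "amt": 99.95}
dest_msg = transform(source_msg, mapping_rules)
print(dest_msg)
```

Changing a destination format then means editing the rules, not the programs, which is exactly why this approach shortens time to market.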
Message broker products can also reroute messages, depending on their content. This is another useful function; it adds flexibility to applications without the need for additional coding.
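Content-based routing can be sketched in a few lines; the queue names and routing rules here are invented for illustration. The broker inspects each message and forwards it to a destination chosen by its content.

```python
from collections import defaultdict

# Hypothetical routing rules: high-value payments go to one queue,
# queries to another, everything else to a default queue.
def route(message):
    if message.get("amount", 0) >= 10000:
        return "high_value_payments"
    if message.get("type") == "query":
        return "inquiry_service"
    return "standard_payments"

# Each destination queue is represented by a simple list.
queues = defaultdict(list)
for msg in ({"type": "payment", "amount": 25000},
            {"type": "query"},
            {"type": "payment", "amount": 120}):
    queues[route(msg)].append(msg)

print(sorted(queues))
```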
Basic message-queuing products are available from a number of vendors, including IBM with MQSeries, and Microsoft with MSMQ. Transformation and routing technology is available in message broker products from vendors such as NEON, STC, Mercator, and IBM.

Distributed object technologies and distributed component models are a subject of great interest at the moment, especially as the intellectual battle between Microsoft's COM+ and Enterprise JavaBeans (EJB) rages. CORBA, the other major distributed object standard, seems to have been receiving much less focus recently. Although both COM+ and EJB are quite functional, their ability to provide integration between systems rests on modest data-exchange mechanisms. Distributed object systems effectively use a version of remote procedure call for synchronous invocation of methods. They may optionally use message queuing for asynchronous invocation. COM+ includes Microsoft's MSMQ message queuing product, and EJB includes the Java Message Service (JMS) within its specification. Sun and IBM, among others, have implementations as well.
As with other forms of data exchange technology, distributed object systems need schemes that allow objects to find one another within a network. This component is known as an object request broker (ORB) in distributed object systems. ORBs can normally also create new instances of objects to process requests, if necessary. As we'll see shortly, distributed object systems may provide additional integrity by layering transactional semantics on top of their data-exchange mechanisms. This effectively means that remote methods can be invoked with full transactional integrity, protecting vital data from loss or duplication.
Data-exchange mechanisms must often deal with specific formats and representations. In techniques such as RPC or remote method invocation, the format of the data being transmitted is hidden from the programmer. In other techniques, such as networking or message queuing, the data must be formatted correctly before being transmitted. Some standard data formats exist, particularly in specific industries.
Many banks belong to the international funds transfer organization SWIFT, for example. They process payment transactions with one another using messages that are formatted according to that organization's standards for financial messages. Similarly, electronic data interchange (EDI) formats have been used to exchange data in the manufacturing industry for many years.
Aside from these industry-specific standards, few, if any, have found favor across a broad spectrum of commercial use. However, with the huge rise in popularity of XML (eXtensible Markup Language), that is changing. Many vendors have quickly realized that XML is more than just an advanced markup language for Web documents—it is a framework for defining data exchange formats that has the potential to span industries. Efforts such as the Resource Description Framework (RDF) and Microsoft's BizTalk initiative are building sets of definitions on top of XML that will ultimately provide much more widely accepted data formats. The huge interest in Web technologies guarantees the availability of the tools and products that are needed to manipulate the data in applications and during transmission.
Despite the fervor with which XML is being promoted, one major issue exists regarding its use in integration solutions. The problem is that XML is actually a standard for defining the way that data representations are defined. It doesn't specify a particular representation of, say, an address. Instead, it defines how such a representation can be constructed. XML is a bit like an alphabet—it defines the pieces that are needed to construct words, but it does not define what those words are. If the words you use are different from those used by other people, communication will be impossible. Having XML support in an integration product is therefore just the first step: the applications being integrated must still be capable of understanding one another's data as it's represented in XML. When choosing an integration product, it is important not to overlook this aspect of XML. If an application with which you would like to integrate uses a different XML-based representation of a customer's name, for example, you'll still need to interpret—and possibly transform—it before it can be used. XML does not mark the end of the message broker.
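The point can be made concrete with two hypothetical XML vocabularies for the same customer name. Both are perfectly valid XML, yet a broker-style translation is still needed before one application can use the other's data.

```python
import xml.etree.ElementTree as ET

# Two invented vocabularies for the same customer name. Each is valid
# XML, but neither application can read the other's format directly.
ours = "<customer><name>Jane Smith</name></customer>"
theirs = "<cust><firstName>Jane</firstName><surname>Smith</surname></cust>"

def translate(their_xml):
    """Transform the partner's representation into ours."""
    src = ET.fromstring(their_xml)
    full_name = f"{src.findtext('firstName')} {src.findtext('surname')}"
    dest = ET.Element("customer")
    ET.SubElement(dest, "name").text = full_name
    return ET.tostring(dest, encoding="unicode")

print(translate(theirs) == ours)
```

The translation step is exactly the interpret-and-transform role the message broker plays; XML changes the notation, not the need.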
Finally, in the data exchange category are technologies associated with encryption. Apart from a few very security-conscious organizations, most integration technology customers need to protect data only when it travels over some medium that they do not control. This medium could be an external network that connects them to their suppliers or customers, for example. Increasingly, of course, this network is the Internet. Protection can be gained by encrypting or digitally signing the data that is transferred. Encrypting the data prevents it from being viewed by anyone as it traverses the network. Signing the data allows it to be seen but prevents it being modified. Either approach requires encryption technology. Data is signed by encrypting a relatively small amount of information; it is much less time-consuming than encrypting all of the data.
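The signing shortcut described above can be sketched as follows. The shared key is purely illustrative; real systems typically use public-key signatures rather than a shared secret, but the principle of protecting only a small digest is the same.

```python
import hashlib
import hmac

# Illustrative shared secret; real deployments use public-key signing.
key = b"shared-secret"
payload = b"PAY 10000.00 TO ACCOUNT 12345678" * 1000  # a large message

# Sign: hash the whole payload, then protect only the 32-byte digest,
# which is far cheaper than encrypting all of the data.
signature = hmac.new(key, hashlib.sha256(payload).digest(), "sha256").hexdigest()

def verify(data, sig):
    """Recompute the digest; any modification in transit breaks the match."""
    expected = hmac.new(key, hashlib.sha256(data).digest(), "sha256").hexdigest()
    return hmac.compare_digest(expected, sig)

tampered = payload.replace(b"12345678", b"87654321")
print(verify(payload, signature), verify(tampered, signature))
```

The data itself remains readable in transit; what the signature guarantees is that it has not been modified.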
It is possible to use data exchange technologies directly in integration solutions. Indeed, much of the client/server activity in the late 1980s and early 1990s was based on direct use of RPC and its object analog, remote method invocation. For small, simple integration projects without rigorous reliability requirements, these sorts of techniques can be suitable; however, as reliability becomes more of an issue, development costs and complexity rise quickly. The solutions also tend to be difficult and expensive to modify. For these reasons, major integration projects are now less likely to adopt these low-level approaches.
One exception does exist: The combination of message queuing and message brokering promises reliability, recoverability, and flexibility in integration solutions. Message queuing provides robust, recoverable communications, and the transformation capability in brokers protects applications from changes in one another's data formats. This reduces the overall impact of modification, speeds implementation, and safeguards the ability to make future changes easily. Indeed, many organizations have implemented successful business-critical integration projects using this kind of technology.
The only major drawback is the current shortage of the rather specialized skills needed to implement interfaces based on message queuing. Although this approach solves the communication problems associated with application integration, it does not address many other issues—we'll cover these shortly.
Batch Data Integration
Data integration is a very common approach to application integration. In data integration, the raw data used in one system is made directly available to other systems for processing. As a result, the approach relies on multiple applications on multiple systems accessing the same data in some way. Rather than having applications communicate with one another, they share data. Some of the oldest forms of integration, particularly the batch processes, are based on this approach. In the batch mechanisms, a copy of the data is made at some point in time and is then transferred to another machine. Real-time mechanisms allow multiple systems to share the same data concurrently. We'll look at that approach in the next section, but first we'll consider batch mechanisms.
File transfer is probably the oldest of all integration mechanisms. At some specific point in time, a data file is created on the source system. The creation itself may require some form of data adapter, which could be a database extract program, for example. The resulting file is transmitted over a network, using technologies from the data exchange category. On the destination system, the file is either used directly or transformed again using another data adapter. It might be used to load another database, for example.
File transfer is still widely used because of its apparent simplicity. Unfortunately, the approach is fraught with operational difficulties. Most obviously, this approach creates multiple copies of the data. Because file transfer is a batch mechanism, the copies are not guaranteed to be identical. They may be identical immediately after transfer, but they will subsequently diverge unless the copies are only ever read. For this reason, file transfer is best suited to distribution of data that is static for hours, or even days.
A second problem is that the file transfer process involves multiple steps, any of which may fail. Precautions must be taken to monitor the process and provide robust recovery procedures. The need for manual intervention is quite common. In practice, production-quality file transfer is much more complex than you might imagine. Fortunately, tools are available that reduce the complexity. For example, some products will reliably transmit files over networks, using checkpoints and restart capabilities to protect against transmission failures. Some use data exchange technologies such as message queuing to achieve their reliability.
As a third problem, the process involves bulk transmission of data: It must arrive in time to be used, and applications cannot start until the data files have been received. This imposes operational constraints. If the file transfer operation has been delayed, perhaps because the network was unavailable or busy, online applications may not be available when needed, affecting a company's end users and Web customers alike. This problem of making sure that the batch work is completed within a fixed period of time has been an issue for production systems since the early mainframe days. Operations departments have been working for years to minimize the amount of work that must be done this way. Adding to the problem by using file transfers can be counterproductive.
In database replication, the contents of one database are periodically copied to a number of others on different systems. Applications on the various systems then use their own replica of the data. As a general integration technique, database replication suffers from the same types of problems as file transfer. Again, multiple copies of the data exist and must be maintained. These copies will tend to diverge between replications, meaning that applications cannot rely on the data being identical. Database replication does overcome many of the operational difficulties that are associated with file transfer. Replication takes place in transactional environments and under controlled conditions. However, all of the problems associated with multiple copies of data still exist.
Batch data integration is a very traditional approach. It has been used for decades in environments where immediate access to the most up-to-date data is not a prime consideration. For modern systems that face pressure to continuously provide online applications that support Web access to live data, the opportunity for using these batch approaches is rapidly diminishing.
Real-Time Data Integration
In contrast with the batch approach, real-time data integration makes data available to multiple applications concurrently. Distributed databases (which are now commonplace) allow applications that run on multiple systems to access the same data with the transactional integrity that is essential to production systems. All major relational databases have this kind of capability and can even interoperate with one another under the appropriate conditions.
Distributed file systems can be used to access data when less rigorous recovery requirements apply. Again, this is commonplace technology that we take for granted. Whether it is the Network File System (NFS) on Unix systems, which permits file systems to be mounted on remote machines, or the capability to access disks on servers in a Windows network, file sharing is a widely used facility.
Both distributed databases and distributed file systems overcome the batch problems that are associated with the forms of data integration we discussed earlier. However, they share an additional problem: Because all of the applications use the same data at the same time, they must all understand its format and then access and update it appropriately.
Although this sounds obvious and unimportant, it actually masks some real difficulties when applications are written at different times and are maintained by different teams. Imagine that one of the applications using the data is a COBOL program from the early 1980s that runs on a mainframe. Now imagine that another is a brand-new JavaBean running in a Web application server, and you can start to see the problem. The initial implementation will be relatively straightforward. The JavaBean developers need to know the layout of the data as well as the proper way to access and update it. The JavaBean will probably need to reimplement some or all of the business logic of the COBOL program, too.
The real problems start, however, when one of the applications needs to be changed. Any modification that affects the database or the way that the data is used will cause changes for both the JavaBean and the COBOL programs. New versions must be installed at the same time. And, of course, the whole system must be tested together before it can be put into production. This is an example of very tight coupling between applications. It causes difficulties when only two applications are integrated. As the number of applications that share the data increases, the practical difficulties multiply; the cost and time to implement subsequent projects also rise significantly. In situations in which time to market is crucial, the delays caused by this kind of tight coupling rapidly become unacceptable.
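The byte-level agreement that both programs must maintain can be sketched with a hypothetical fixed-width record layout of the kind a COBOL copybook would define.

```python
import struct

# A hypothetical fixed-width record: 8-byte account number, 20-byte
# name, 10-byte balance. Every program sharing the data must agree on
# this layout, byte for byte.
RECORD_LAYOUT = struct.Struct("8s20s10s")

record = b"00004242" + b"JANE SMITH".ljust(20) + b"0000150.75"

account, name, balance = RECORD_LAYOUT.unpack(record)
print(account.decode(), name.decode().strip(), float(balance.decode()))

# If one team widens the name field to 30 bytes, every other program
# that reads this record breaks until it is changed, retested, and
# redeployed at the same time: tight coupling in action.
```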
Real-time access to databases and files is simple to understand and easy for vendors to implement. It appears in virtually every Web application server, for example. However, for many customers, the practical difficulties associated with using this approach on a wide scale lead them to seek other alternatives. Having exhausted the possibilities offered by data integration, it's time to turn our attention to the other major category at this level in the taxonomy.
Application Integration
Whereas data-integration techniques focus on raw access to data, application-integration techniques focus on access to application functions. Applications access one another's functions by making some kind of remote call. In a sense, each application encapsulates its raw data, protecting other applications from having to know too much about it and avoiding the difficulties that are associated with data integration.
Rather than needing to know about the data itself, applications must understand and invoke the interfaces that are provided for their use. A significant advantage that application integration gains over data integration is the reuse of business logic. By asking for application-level functions to be carried out, current business logic can be reused in new applications without being reimplemented. In addition to speeding up the implementation, reduction in the amount of business logic that needs to be reimplemented saves development cost, decreases testing cost, and reduces the effort in future maintenance.
In Figure 1, the Application Integration category contains a list of technologies that are commonly associated with eAI. Application-level transactions provide integrity protection, particularly when multiple resources—such as databases—must be updated together. Distributed transactions extend this capability across systems, allowing resources on different machines to be updated as a single unit. Examples of products that provide this kind of capability include BEA's Tuxedo and IBM's TXSeries. One disadvantage of distributed transactional applications is that they are synchronous: All of the systems involved must be running at the same time for the transaction to be completed. In some situations, this is neither possible nor desirable. For those cases, some combination of transactional processing and message queuing is often used. This provides the integrity associated with transactional operations, but with the added flexibility of asynchronous processing.
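The all-or-nothing contract that a transaction provides can be sketched in-process. Real coordinators such as Tuxedo or TXSeries use two-phase commit across machines; this simplified, single-process version only illustrates the contract itself.

```python
# A toy "resource" that can save and restore its state, standing in for
# a database participating in a transaction.
class Resource:
    def __init__(self, value):
        self.value = value
        self._saved = None
    def begin(self):
        self._saved = self.value
    def rollback(self):
        self.value = self._saved

def transfer(debit_db, credit_db, amount):
    """Update both resources together, or neither of them."""
    for r in (debit_db, credit_db):
        r.begin()
    try:
        debit_db.value -= amount
        if debit_db.value < 0:
            raise ValueError("insufficient funds")
        credit_db.value += amount
    except ValueError:
        for r in (debit_db, credit_db):
            r.rollback()

a, b = Resource(100), Resource(0)
transfer(a, b, 150)   # fails: both resources are restored
print(a.value, b.value)
transfer(a, b, 60)    # succeeds: both updates take effect
print(a.value, b.value)
```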
Component models provide modern development environments that ease the tasks associated with design and programming. Enterprise-level component models seamlessly integrate distributed object technologies with technologies such as transactions and message queuing in order to provide powerful integration capabilities. As we have already seen, the prime examples are Enterprise JavaBeans (EJB) and Microsoft's COM+. In and of themselves, component models don't add any new integration technologies. Rather, they provide a framework in which developers can exploit those technologies more easily. They also promote the creation of application components that can, at least in principle, be used and reused in multiple applications.
Application-integration projects frequently encounter the need to incorporate programs that cannot or must not be changed. There are many reasons why applications must be used unchanged. For example, the cost of reprogramming may be prohibitive, or the necessary skills may not be readily available. In this type of situation, application adapters have an important role. They provide links to applications through existing interfaces or capabilities, allowing those applications to be integrated into the rest of the system unchanged. One common form of adapter drives the existing application via its own user interface. To the application, the adapter appears to be a user. The adapter takes in requests, drives the application, and returns any results.
Adapters can be built using virtually any technology that has the appropriate communications capabilities. For example, an adapter could appear as an EJB to the applications that use it, while appearing as a user to the application it integrates. A wide range of vendors carry adapters—especially those that supply eAI product families. Even so, because a huge variety of different, custom-written applications are still in common use, the need still exists to build specific adapters in particular circumstances. To reduce the cost and complexity of the construction of these custom adapters, many eAI vendors now supply adapter construction toolkits as well as specific adapters.
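An adapter of the kind described above can be sketched as follows. The legacy application and its screen commands are simulated for the example: to callers, the adapter presents a clean programmatic interface, while behind the scenes it drives the legacy application through its existing text-based interface.

```python
class LegacyInventoryApp:
    """Stands in for an unchangeable application with a screen interface."""
    def screen_input(self, command):
        if command.startswith("QRY "):
            part = command[4:]
            return f"PART {part} QTY 17"   # canned response for the sketch
        return "ERR"

class InventoryAdapter:
    """Modern interface; translates to and from the legacy 'screens'."""
    def __init__(self, app):
        self._app = app

    def stock_level(self, part_number):
        # To the legacy application, the adapter looks like a user
        # typing a query and reading the reply off the screen.
        reply = self._app.screen_input(f"QRY {part_number}")
        return int(reply.rsplit(" ", 1)[-1])

adapter = InventoryAdapter(LegacyInventoryApp())
print(adapter.stock_level("AX-100"))
```

The legacy program runs completely unchanged; only the adapter knows about its screens, keeping that knowledge out of the rest of the system.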
Just like their data-integration counterparts, application-integration technologies rely on the underlying capabilities of the Data Exchange category. For example, component models such as EJB and COM+ rely on remote method invocation from distributed object technologies. Remote method invocation is itself a variant of the remote procedure call technology. Application adapters always use some data exchange technology to communicate with the applications they support. Distributed transaction systems use data exchange technologies in conjunction with units of work in order to coordinate transactional operations. For example, Transarc's Encina, now part of IBM's TXSeries product family, uses a transactional form of remote procedure call technology as its data exchange technology.
Application-level security—in particular, encryption and digital signature—also relies heavily on the technology available in the data exchange layer. Other security requirements usually need to be provided at the application itself. For example, authorization controls which functions and data specific users or other applications can access.
This is usually highly application-specific. Providing authorization checks within a database or within a communications product can help, but only the application can apply rules about access to specific combinations of data and function. As the trend toward allowing customers and business partners direct access to applications grows, security is becoming more of a focus within integration projects. It is no longer sufficient to allow rather broad categories of users to access a wide range of data. Indeed, it may be necessary in business-to-business integration projects to guarantee that one partner can never see another partner's data. Imagine the embarrassment to a supplier if one customer discovered that a competitor pays less for the same component. Such fine-grained access control is increasingly important in applications that have a wide user base.
Business Process Integration
The notion of modeling the operation of a business as a series of linked processes is almost as old as business itself. In the early days of information technology, the processes and the flow of data between them were enshrined in application programs as well as the huge, multistep batch jobs that made up a large part of the workload for mainframe machines. In more recent times, technology has made it possible for business analysts to build abstract models of business processes that involve applications and people. Work-flow management products allow a physical realization of these models, routing work to staff and applications at the appropriate times. These products tie together the individual processes that constitute the business activities of an enterprise.
Only when a company's applications are integrated into the work flow is the power of technologies (such as work-flow management) fully realized. This is the goal of the Business Process Integration category in Figure 1. The technologies within this category have more to do with the modeling than with the integration. Integration at this level uses techniques from the application and data integration categories. For example, a workflow management system may be capable of initiating a file transfer or a database replication activity. Most work-flow management products also have interfaces that can be exploited by application adapters. It is possible to build adapters that can trigger execution of specific applications in response to the work flow. Also, adapters can initiate work-flow activity in response to requests from applications.
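The way a work-flow engine ties staff and applications together can be sketched as follows; the process steps and adapters are invented for illustration. Each step in the modeled process is bound either to a human work queue or to an application adapter, and the engine routes the work item through them in order.

```python
# A hypothetical order process: each step is performed either by staff
# (via a work list) or by an application (via an adapter function).
process = [
    ("enter_order",  "staff"),        # routed to a person's work list
    ("check_credit", "application"),  # invoked via an application adapter
    ("ship_goods",   "application"),
]

def run_workflow(order, steps, app_adapters, work_list):
    for step, performer in steps:
        if performer == "application":
            order = app_adapters[step](order)   # trigger the application
        else:
            work_list.append((step, order))     # queue work for staff
    return order

# Invented adapters standing in for calls into real applications.
adapters = {
    "check_credit": lambda o: {**o, "credit_ok": True},
    "ship_goods":   lambda o: {**o, "shipped": True},
}
work_list = []
result = run_workflow({"order": 1}, process, adapters, work_list)
print(result["shipped"], len(work_list))
```

The model itself is just data (the process list), which is what lets business analysts change it without reprogramming the applications.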
Business process integration offers the promise that business analysts can model company processes using advanced modeling tools, and that the results can be implemented as sophisticated combinations of work that is carried out by staff and by the organization's applications.
Business-to-Business Integration
In terms of technology, there is little difference between eAI within a single business and eAI between businesses. All techniques of data exchange, data integration, application integration, and business process integration are equally applicable. However, business-to-business integration brings a new set of challenges, simply because the organizations that are involved are separate.
An immediate issue when two businesses are joined together concerns ownership and responsibility for the link that joins them. By definition, the link must involve systems in both businesses. If one business manages the link, the other business must surrender some level of control over at least one system on its premises. This may not be easy to arrange. Few, if any, new technical issues are involved in doing this. The problem is one of surrendering control.
An alternative approach that is usually much easier to arrange is the use of a third-party organization to operate a network to which businesses can connect. A long-standing example of this approach is the SWIFT organization that we have already mentioned. This interbank payments network is used extensively within Europe. Among other traffic, it carries monetary payment transactions between banks and other major financial institutions. Rather than connecting directly to one another, the banks connect to SWIFT. Each bank manages its own connection, and SWIFT takes responsibility for traffic when it has arrived at the interface to the network. It also takes responsibility for the network operation and configuration, especially when a new bank joins.
Networks operated by third parties are the basis for much commercial use of the Internet, of course. Internet service providers (ISPs) are the normal means by which individuals and companies access the network. As business-to-business traffic grows, additional levels of service providers are appearing. Using the basic fabric of the Internet, they offer virtual networks that serve communities of businesses with related interests. These might be parts suppliers and customers, for example.
Leading the way in this kind of network are communities that provide generic supplies to companies. More targeted supply chain operations are beginning to appear, however. For example, some are now starting to use Internet technologies to support communities that trade in the components for automobile construction. As this trend grows and as these communities become critical to the operation of major corporations, the pressure to integrate these marketplaces with existing core applications will grow.
All of the categories of integration can be expected to play a part in these future endeavors. The cost savings that both customers and suppliers can achieve from automation of the procurement process will be a major driver for increased focus on integration projects.
Two other aspects of Figure 1 have not yet been discussed here. One is security; the other is systems management. Both of these aspects can be important throughout the entire set of categories. For example, if an integration project is being implemented through the use of data exchange, then security must be applied there. However, if the project uses application integration, it may be more appropriate to apply it at that level, assuming that the underlying data exchange can support it.
Several aspects of security are involved, of course. We've already seen that encryption is normally associated with the data exchange layer. Indeed, it may even be a property of the underlying network itself. We've also seen that authorization is often directly associated with applications, especially where fine-grained control is required.
That leaves authentication: the check to ensure that a user is who he or she claims to be. If authentication is successful, the user is assigned a set of capabilities that define the tasks that he or she is able to perform. When a task is attempted, the user's set of capabilities is checked to see if he or she is allowed to perform it. This second process is authorization. Because a user may need to perform tasks that were implemented using application integration, data integration, and data exchange technologies, it would be helpful to have authentication and authorization schemes that apply across all of the technologies and all of the systems on which they run. Sadly, such schemes are not as readily available as many customers would like. In reality, the security of a set of integrated applications that span businesses will depend on a variety of technologies deployed in a number of the categories in the taxonomy. The exact nature of the technologies employed depends strongly on the specific computer systems and the software that's chosen.
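The two checks can be sketched as follows. The users, passwords, and capability names are all hypothetical, and a real system would never store passwords in clear text; the sketch only separates the question "who are you?" from the question "may you do this?".

```python
# Hypothetical user registry; real systems store hashed credentials.
USERS = {
    "alice": {"password": "s3cret",
              "capabilities": {"view_account", "make_payment"}},
}

def authenticate(user, password):
    """Is the user who he or she claims to be?"""
    entry = USERS.get(user)
    return entry is not None and entry["password"] == password

def authorize(user, capability):
    """Is the authenticated user allowed to perform this task?"""
    return capability in USERS.get(user, {}).get("capabilities", set())

print(authenticate("alice", "s3cret"),
      authorize("alice", "make_payment"),
      authorize("alice", "close_account"))
```

The difficulty described above is that each integration technology tends to bring its own version of these two checks, rather than sharing one scheme across all systems.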
The final topics in Figure 1 are systems management and monitoring. Any set of business-critical systems needs some level of support in both of these areas. The more complex the applications and the wider the set of machines spanned, the more critical the requirement becomes. Mature products are available to provide management of lower-level components, such as processors and networks. However, application-level management is more problematic. This is especially true for solutions that involve integrated applications running on a variety of systems.
Fortunately, vendors of application-integration and data-integration products have started to address the issues. In application-integration products, for example, data-exchange technologies can be instrumented, thus providing performance statistics in terms of the applications being served. This data is needed for monitoring the health of a distributed application. In addition to providing reports of performance and availability, this information can be used to drive alerts and escalation procedures when critical parameters fall outside of the predetermined levels. This kind of capability helps IT organizations deliver successfully against critical service-level agreement criteria.