13.3 Model Problem
We decided to implement a single use case that retrieved an image from the repository. We did this for several reasons. First, retrieving images is a key capability of the system. Second, image retrieval involves components in all three tiers of the DIRS architecture. Third, the problem was simple to model and communicate. There were also several drawbacks, however. This problem did not, for example, model the transactional properties of the system. Nevertheless, the model problem was deemed sufficient to answer the motivating question, and any further effort would have been inappropriate.
Implementing the image retrieval problem required that we consider components in all three tiers of the architecture. The middle tier included a component representing the business rule interpreter (BRI). The BRI encapsulated DIRS business rules, supported connections with multiple, simultaneous clients, and coordinated data flow throughout the system. The back-end server contained a component representing the storage manager (SM). This component maintained the actual image files. The client, middle, and back-end tiers of the architecture were each hosted on separate platforms. A high-bandwidth LAN connected the client to the middle tier, although we also considered a low-bandwidth WAN. The connection between the middle tier and the back-end tier was guaranteed to be a high-bandwidth LAN connection.
After brief discussion, three major options presented themselves. These are depicted in Figure 13-2. Each model solution provided an alternate approach for implementing control flow and data flow. In all three, control flow is via the IIOP connections. Our interest in data flow is principally with image retrieval, as movement of images is essential to both the model problem and the actual DIRS system. Additionally, the size of image files stored in DIRS could be extremely large, on the order of 10 to 200 megabytes. As a result, handling these images is a critical, if not overriding, consideration.
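A quick back-of-envelope calculation shows why image handling dominates the design. The sketch below computes transfer times for the largest stated image size; the 100 Mbit/s LAN and 1.5 Mbit/s WAN figures are our own illustrative assumptions, since the chapter characterizes the links only as "high" and "low" bandwidth.

```java
// Back-of-envelope transfer times for the largest DIRS images.
// Bandwidth figures (100 Mbit/s LAN, 1.5 Mbit/s WAN) are assumptions
// for illustration only.
public class TransferTime {

    // Time in seconds to move `megabytes` of data over a link of the
    // given bandwidth (8 bits per byte).
    static double seconds(double megabytes, double megabitsPerSecond) {
        return megabytes * 8 / megabitsPerSecond;
    }

    public static void main(String[] args) {
        double image = 200;  // megabytes, the upper end of the stated range
        System.out.printf("LAN (100 Mbit/s): %.0f s%n", seconds(image, 100));
        System.out.printf("WAN (1.5 Mbit/s): %.0f s%n", seconds(image, 1.5));
    }
}
```

Even under these generous assumptions, a 200-megabyte image takes roughly 16 seconds on the LAN and nearly 18 minutes on the WAN, so any ensemble that moves an image more than once pays a steep price.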
Figure 13-2 Three Ensembles
Each design alternative uses the same components but in a different way. The first ensemble transfers images over IIOP connections between the client and the BRI, and between the BRI and the SM. All communication in this solution travels through the BRI; there is no direct connection between the client and the SM. The second ensemble uses HTTP to transfer image files directly from the storage manager to the client. The third ensemble transfers images directly from the storage manager to the client over IIOP.
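The difference between the indirect and direct data paths can be sketched as follows. This is a hypothetical, in-process illustration only: plain method calls stand in for the IIOP and HTTP connections, byte arrays stand in for image files, and the class and method names are our own, not from the DIRS implementation.

```java
import java.util.HashMap;
import java.util.Map;

// In-process sketch of the ensembles' data paths. Method calls stand in
// for IIOP/HTTP connections; all names are illustrative assumptions.
public class EnsembleSketch {

    // Storage manager: maintains the image files (here, bytes in memory).
    static class StorageManager {
        private final Map<String, byte[]> images = new HashMap<>();
        void store(String id, byte[] data) { images.put(id, data); }
        byte[] fetch(String id) { return images.get(id); }
    }

    // Business rule interpreter: coordinates flow between client and SM.
    static class BusinessRuleInterpreter {
        private final StorageManager sm;
        long bytesRelayed = 0;  // image bytes that crossed the middle tier
        BusinessRuleInterpreter(StorageManager sm) { this.sm = sm; }

        // Ensemble 1: the image travels SM -> BRI -> client, so the
        // full image is transferred twice.
        byte[] retrieveIndirect(String id) {
            byte[] img = sm.fetch(id);   // first hop: SM to BRI
            bytesRelayed += img.length;
            return img;                  // second hop: BRI to client
        }

        // Ensembles 2 and 3: the BRI returns only a locator; the client
        // then fetches the image directly from the SM (one transfer).
        String locate(String id) { return id; }
    }

    public static void main(String[] args) {
        StorageManager sm = new StorageManager();
        sm.store("img-1", new byte[1024]);  // stand-in for a 10-200 MB image
        BusinessRuleInterpreter bri = new BusinessRuleInterpreter(sm);

        byte[] viaBri = bri.retrieveIndirect("img-1");      // ensemble 1
        byte[] direct = sm.fetch(bri.locate("img-1"));      // ensembles 2/3
        System.out.println("indirect relayed " + bri.bytesRelayed
                + " bytes through the BRI; direct relayed none");
    }
}
```

In the indirect ensemble every image byte crosses the middle tier; in the direct ensembles only the small locator does, which is the crux of the comparison that follows.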
The indirect IIOP ensemble had deficiencies that are readily apparent: it required images to be transferred twice, once between the SM and the BRI and again from the BRI to the client. This was unacceptable due to the potential size of these images and the performance overhead of transferring images a second time. On the other hand, we had recently developed an application using this ensemble to transfer large images. Thus, we were confident that we could produce another implementation. This ensemble could become an interim solution should one of the remaining ensembles appear promising but prove infeasible. In a sense, then, the indirect IIOP option was a contingency within a contingency.
Before proceeding, we worked with the architect to develop evaluation criteria. That is, how would we know whether an ensemble was feasible? We agreed that retrieval of an image from an image store would constitute success, at least at this stage of the design. Then, again working with the architect, we settled on implementation constraints, mainly concerning the selection of components.
Java applets were developed using version 1.1.3 of the Java Development Kit (JDK), with OrbixWeb 2.0.1 for communicating with CORBA servers on the middle tier and back-end servers. We often refer to Java applets that communicate with CORBA servers as orblets. These orblets were run from versions 3.0 and 4.0 of the Netscape browser, and from versions 3.0 and 4.0 of the Internet Explorer browser. The client platform operating system was Windows NT 4.0. The BRI and SM servers were coded in C++, compiled with the SPARCompiler C++, and run on Solaris 2.5. Implementing in C++ required installing an additional compiler and brushing up our C++ language skills. We felt this small, but additional, effort enabled us to more closely model the DIRS system. (After all, when looking for a reliable trail guide it is best to find someone who has been down the same trail, and as recently as possible.) Both the BRI and SM used Orbix 2.2 to communicate with each other and the client. Version 2.0.1 of the Netscape FastTrack Server was the HTTP server.
Some of these component selections were consistent with those made for the main design thread (see Figure 13-1), while others were intentionally varied to build team competence in the event that a switch-over would be required.