Web Services Part 2: Current Technologies

Continuing his discussion of current technologies that lead to full web services, Alex Nghiem explores the advantages and limitations of distributed computing, component-based development (CBD), HTML, and XML.
Placing special emphasis on a comprehensive approach combining organization, people, process, and technology, Harris Kern’s Enterprise Computing Institute is recognized as one of the world’s premier sources for CIOs and IT professionals concerned with managing information technology.

Part I of this series identified market conditions and some fundamental problems in enterprise software development (reuse and interoperability). We listed several existing technologies (object-oriented technology, distributed computing and thin clients, component-based development, HTML, and XML) and discussed the first one, object-oriented technology, to see how it addressed the issues. This and future articles will cover the remaining technologies in this list and progress to our primary topic, web services.

Distributed Computing and Thin Clients

Client/server development was at the forefront of the PC revolution. As user demands became more sophisticated and as programs grew in complexity, software developers devised more techniques for handling this complexity. Client/server development is characterized by a client application that handles much of the processing logic and a server program that processes database requests. Unfortunately, this architecture requires expensive client machines because so much of the processing happens on the client side. For programs to be deployed to a larger audience (that is, the web), it isn't feasible to expect every user to own a high-end client machine. Hence, alternative methods are needed for designing applications.

Distributed computing refers to the broad practice of implementing applications in an n-tier architecture (n being three or more). The new tier is often referred to as the middle tier and houses the business process logic that should ideally be independent of the underlying database logic and user interface. (The business logic is usually written in the form of classes or components; components are covered in the next section.)
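The separation described above can be sketched in a few lines of Java. This is a hypothetical illustration (the names OrderStore and OrderService are invented for this example, not taken from any product): the business-logic class in the middle tier depends only on an abstraction, so it knows nothing about the database engine or the user interface behind it.

```java
// Data-access abstraction: the business tier depends only on this
// interface, never on a particular database. (Illustrative names.)
interface OrderStore {
    double priceOf(String sku);
}

// Middle-tier business logic: computes an order total with a bulk
// discount, independent of both the UI and the persistence layer.
class OrderService {
    private final OrderStore store;

    OrderService(OrderStore store) {
        this.store = store;
    }

    double total(String sku, int quantity) {
        double base = store.priceOf(sku) * quantity;
        return quantity >= 10 ? base * 0.9 : base; // 10% bulk discount
    }
}
```

Because OrderService sees only the OrderStore interface, an in-memory store used for testing can later be swapped for a JDBC-backed implementation without touching the business logic.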

To fully leverage the investment in business logic, the functionality can be shared by multiple applications. To accomplish that goal, the middle tier provides a set of infrastructure capabilities to handle issues such as session management, resource management, concurrency management, messaging, and so on. Rather than implementing these capabilities for every business application, it's common practice to purchase them in the form of an application server; popular examples include BEA WebLogic Server and IBM WebSphere. With an application server, developers can then focus on writing business logic rather than infrastructure code.
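One of the infrastructure capabilities just mentioned, resource management, often takes the form of pooling: expensive resources such as database connections are created once and handed out repeatedly rather than re-created for every request. The following is a deliberately simplified sketch of the idea, not the API of any real application server:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified sketch of the resource pooling an application server
// provides. A real pool would also block when empty, grow on demand,
// and validate resources; this version only shows the core idea.
class ResourcePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();

    ResourcePool(Iterable<T> resources) {
        for (T r : resources) idle.push(r);
    }

    // Borrow a resource for the duration of one request.
    synchronized T acquire() {
        if (idle.isEmpty()) throw new IllegalStateException("pool exhausted");
        return idle.pop();
    }

    // Return the resource so the next request can reuse it.
    synchronized void release(T resource) {
        idle.push(resource);
    }

    synchronized int available() {
        return idle.size();
    }
}
```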


The language in which the business logic is written depends on which application server is used. As an example, Sun's JavaSoft division defines the J2EE standard; a J2EE-compliant application server requires that the business logic be written in Java (although Java code can call out to C or C++ libraries through the Java Native Interface).

The client program communicates with the application server, which in turn communicates with the database. Multiple copies of the application server replicated among many physical servers can provide load balancing and resource pooling, which prevent performance degradation and improve stability as the number of users grows.
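The simplest load-balancing strategy over such replicas is round-robin: each incoming request is routed to the next server instance in turn. A minimal sketch (server names and class names are illustrative, not from any real product):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical round-robin balancer over replicated application
// servers: requests are spread evenly across the instances.
class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Route the next request to the next replica in rotation.
    String pick() {
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }
}
```

Real application servers use more elaborate strategies (least-loaded, session affinity), but the goal is the same: no single replica becomes the bottleneck as traffic climbs.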

One of the central ideas of distributed computing is the concept of location independence. That is, if a client program makes a request, it shouldn't matter whether the receiving program is on the local machine or remote machine(s). This location independence is often provided by a naming service—a mechanism for the calling program to find and then bind to the receiving resource, which can be an object, program, page, and so on. The naming service may or may not be part of the application server.
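In Java environments this naming mechanism is typically JNDI, but the contract is easy to see in an in-memory sketch (all names here are invented for illustration): the caller looks up a resource by logical name and never learns whether the object behind that name lives in the same process or on a remote machine.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory naming service. In a real system (e.g. JNDI)
// lookup() might return a stub that forwards calls over the network;
// the caller's code is identical either way -- that is the point of
// location independence.
class NamingService {
    private final Map<String, Object> bindings = new HashMap<>();

    // Register a resource under a logical name.
    void bind(String name, Object resource) {
        bindings.put(name, resource);
    }

    // Find the resource; the caller cannot tell local from remote.
    Object lookup(String name) {
        Object r = bindings.get(name);
        if (r == null) throw new RuntimeException("name not bound: " + name);
        return r;
    }
}
```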

Because the business logic is centralized on one or more servers rather than distributed across many client machines (as in a client/server architecture), this approach provides the following benefits:

  • With the processing happening primarily on the server, the client machine is relegated mostly to a display device. Consequently, the client machine can be a lower-end machine because it's primarily responsible for displaying information rather than processing business logic (which is more CPU-intensive). This in turn reduces hardware purchases and OS dependencies (of course, we now have browser dependencies).

  • Deploying or updating an application doesn't require installing an application on each user's machine. The client machine often requires only a browser; the middle tier executes the business logic and the client machine simply displays the data. Hence, distributed computing often leads to a thin client architecture.

  • There are potentially fewer compatibility issues because many of the software installations and updates are on the server side, which can be handled in a centralized fashion.

Nothing in life is free, however; distributed computing and thin clients come with significant disadvantages as well:

  • Building distributed applications is notoriously more difficult than building client/server applications. Debugging and tuning these applications is not trivial because there are more points of failure.

  • The user interfaces of thin client applications are rarely as sophisticated as those of client/server applications. These UIs tend to be browser-based, and the browser limits what's possible, whereas a client/server application can interact directly with the native windowing environment and operating system. As an example, few (if any) thin client applications support sophisticated drag-and-drop, a metaphor common in client/server applications.

  • Performance tends to be slower on a thin client application than on a client/server application. With a client/server application, the client application can perform sophisticated operations without depending on the server, whereas a thin client application tends to be able to display only the results of processing that happened on a server. The processing is typically initiated by the client to the server over a network. The network latency tends to be the bottleneck of a thin client architecture, whereas the processing power of the client machine tends to be the bottleneck in a client/server architecture.

Even with these limitations, a thin client architecture is the more viable architecture for applications that have to be deployed to a large audience that wants minimal installations and hassles, as is the case with web applications.

As we'll show later, web services share many of the advantages of a thin client architecture (location independence and centralized management), but such challenges as performance tuning, debugging, and a consistent GUI are potentially magnified.
