
Parallel Computing and Business Applications

Cory Isaacson explains how Software Pipelines architecture enables you to scale your application to any size, maximize your resources, and, best of all, do all this while still maintaining critical business transaction integrity requirements.
If you own, manage, or work with a critical business application, you’re most likely dealing with performance problems. The application can’t handle the ever-increasing data volume, it can’t scale to meet new demand, or its performance is never good enough or fast enough. You need a higher level of performance; or even more daunting, you may need an order-of-magnitude increase so you can multiply the number of transactions your application can handle. In today’s computing environment, there’s really only one way to get there: Utilize a parallel architecture to run multiple tasks at the same time.

The fundamental concept of parallel architecture is this: Given a series of tasks to perform, divide those tasks into discrete elements, some or all of which can be processed at the same time on a set of computing resources. Figure 1.1 illustrates this process.

Figure 1.1 The fundamental concept of parallel architecture

To do this, you have to break the application into a series of steps, some of which can run in parallel. However, that’s really hard to do if you’re working with existing business applications that do not lend themselves to such decomposition. Whether monolithic or object-oriented, most modern applications are tightly coupled, and that makes it hard to decompose a given process into steps.
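The decomposition described above can be sketched in a few lines. This is a minimal illustration, not code from the book: the names `process_order` and `run_batch` are hypothetical stand-ins for a business task, and a thread pool stands in for whatever computing resources actually execute the parallel elements.

```python
# Sketch of the idea in Figure 1.1: a batch of independent work items
# is divided into discrete elements and processed at the same time on
# a pool of workers. All names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def process_order(order_id: int) -> str:
    # Stand-in for one discrete element of work
    return f"order {order_id} processed"

def run_batch(order_ids):
    # Elements with no dependencies on each other can run concurrently;
    # map() preserves the original order of results
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(process_order, order_ids))

print(run_batch(range(4)))
```

The hard part in a real business application is not the dispatch mechanics shown here but proving that the elements really are independent, which is exactly what tightly coupled code resists.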

Over the years, computer scientists have researched parallel architecture extensively and developed many techniques, but until now those techniques haven't lent themselves easily to business systems. At the same time, demand for greater performance began to outstrip the limits of most business applications, and the recent trend toward a service-oriented approach has made the challenge even greater. Parallel processing can fix the problem, but common existing techniques are either too complex to adapt to typical business transactions, or they simply don't apply to the business arena.

Before we show you the solution, let’s look at the existing techniques for parallel computing. The three main approaches are

  • Mechanical solutions used at the operating system level, such as symmetric multiprocessing (SMP) and clustering
  • Automated network routing, such as round-robin distribution of requests
  • Software-controlled grid computing
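To make the second approach concrete, round-robin distribution can be sketched as a rotation over a fixed list of targets. The server names below are hypothetical; real routers work at the network level, but the dispatch logic is essentially this:

```python
# Minimal sketch of round-robin request distribution: each incoming
# request goes to the next server in a fixed rotation, regardless of
# the request's content or the servers' current load.
from itertools import cycle

servers = ["app-server-1", "app-server-2", "app-server-3"]
next_server = cycle(servers)

def route(request_id: int) -> str:
    # The router ignores the request itself; it only rotates targets
    return next(next_server)

assignments = [route(i) for i in range(6)]
print(assignments)
```

Note that this scheme balances request counts, not actual work: two requests of very different cost land on whichever servers come next in the rotation.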

Mechanical Solutions: Parallel Computing at the Operating System Level

Symmetric Multiprocessing

SMP automatically distributes application tasks onto multiple processors inside a single physical computer; the tasks share memory and other hardware resources. This approach is highly efficient and easy to implement, because you don’t need specific, detailed knowledge of how SMP divides the workload.

Mechanical solutions such as SMP are very useful as generic one-size-fits-all techniques. To get the most out of SMP, however, you have to write applications with multi-threaded logic. This is a tricky job at best and is not, in general, the forte of most corporate IT developers. Plus, SMP is a black-box approach, which can make it very difficult to debug resource contention. For example, if you have shared software components and run into a problem, finding the cause of the bug may be very hard and time-consuming.
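A small sketch shows the kind of shared-state hazard that makes multi-threaded logic tricky. This is an illustrative Python example, not tied to any particular SMP system: two threads update one counter, and the lock is what keeps the read-modify-write from interleaving and losing updates — at the cost of serializing the threads on that shared resource.

```python
# Two threads increment a shared counter. Without the lock, the
# increment (read, add, write) can interleave across threads and drop
# updates; with the lock, the result is correct, but both threads
# contend for the same resource.
import threading

counter = 0
lock = threading.Lock()

def add_many(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:  # serializes access: correct, but a contention point
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000
```

When a bug like a missing lock hides inside a shared component, the symptom (an occasionally wrong total) appears far from the cause — which is why debugging resource contention in a black-box SMP environment is so time-consuming.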

There’s another drawback: Resource sharing between processors is tightly coupled and is not optimized for any particular application. This puts a lid on potential performance gain, and when you start scaling an application, shared resources will bottleneck at some point. So you might scale an application to eight processors with great results, but when you go to 16, you don’t see any real gain in performance.


Clustering

In clustering, another widely used mechanical solution, separate physical computers share the workload of an application over a network. This technique provides some capabilities for automatic parallel processing and is often used for fail-over and redundancy.

Clustering techniques are automated, but that automation carries some built-in inefficiency. If you're not using centralized resources, the system has to copy critical information (or in some cases, all information) from one node to another whenever a change in state occurs, which can become a serious bottleneck. As is the case with SMP, clustering is often effective up to a point—then adding hardware results in severely diminished returns.
