Using Interface Classes To Simplify Cluster (PVM and MPI) Application Programming

In a three-part series, Cameron and Tracey Hughes discuss how interface classes can be used to ease some of the pain of cluster and parallel programming. This first article focuses on the Parallel Virtual Machine (PVM) library, which supports cluster programming and parallel programming through a message-passing model.

Cluster programs and parallel programs are typically more complex to design, develop, and maintain than traditional sequential programs. The coding required to deal with the following issues is what makes cluster and parallel applications challenging:

  • Distribution of workload

  • Synchronization of tasks

  • Prevention of data racing

  • Protection of critical sections of shared data

  • Handling of exceptions due to partial or complete communication or process failure

Our goal is to reduce this complexity where it's possible and practical. One of the approaches that we use to mask complexity is to encapsulate it in an interface class—a class that adapts the interface of a data element, function(s), or another class.
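
As a concrete illustration, the following is a minimal sketch of such an interface class: it adapts PVM's pack/send and receive/unpack call sequences behind a two-member interface. The class name pvm_channel and its members are our own hypothetical names, not part of PVM or of any particular library; the sketch assumes a working PVM installation that provides pvm3.h.

#include "pvm3.h"

// Adapts PVM's buffer-oriented message passing to a simple
// point-to-point channel for single integers.
class pvm_channel {
   int peer_;   // task id of the task at the other end
   int tag_;    // message tag used for this channel
public:
   pvm_channel(int PeerTid, int MsgTag) : peer_(PeerTid), tag_(MsgTag) {}

   // Pack one int into a fresh send buffer and ship it to the peer.
   void send(int Value)
   {
      pvm_initsend(PvmDataDefault);
      pvm_pkint(&Value, 1, 1);
      pvm_send(peer_, tag_);
   }

   // Block until a matching message arrives, then unpack one int.
   int receive()
   {
      int Value;
      pvm_recv(peer_, tag_);
      pvm_upkint(&Value, 1, 1);
      return Value;
   }
};

With a wrapper like this, two cooperating tasks exchange integers through send() and receive() rather than repeating the pack/send/receive/unpack sequence at every exchange.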

Interface classes can be used to clarify and streamline some of the logic and components required for cluster application programming and parallel programming. By using interface classes, we can

  • Simplify message-passing schemes and synchronization components

  • Classify, organize, and better manage errors and exceptions (a sketch of this follows the list)
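
To illustrate the second point, the sketch below (with hypothetical class names) wraps PVM's status codes in a small exception hierarchy. PVM routines report failure by returning a negative code such as PvmBadParam or PvmSysErr; routing return values through one checking function converts those codes into typed C++ exceptions that calling code can catch selectively.

#include "pvm3.h"
#include <stdexcept>

// Base class for all PVM failures.
class pvm_error : public std::runtime_error {
public:
   explicit pvm_error(const char *What) : std::runtime_error(What) {}
};

// Narrower categories that calling code can catch selectively.
class pvm_parameter_error : public pvm_error {
public:
   explicit pvm_parameter_error(const char *What) : pvm_error(What) {}
};

class pvm_system_error : public pvm_error {
public:
   explicit pvm_system_error(const char *What) : pvm_error(What) {}
};

// Pass successful results through; translate PVM's negative
// return codes into typed exceptions.
int check(int Result)
{
   if (Result >= 0) {
      return Result;
   }
   switch (Result) {
   case PvmBadParam:
      throw pvm_parameter_error("PVM: bad parameter");
   case PvmSysErr:
      throw pvm_system_error("PVM: pvmd not responding");
   default:
      throw pvm_error("PVM: call failed");
   }
}

A call such as check(pvm_send(Tid, Tag)) then either succeeds or raises an exception whose type already classifies the failure.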

Several good libraries are available that support parallel programming for projects that use C++. In this series of articles, we focus on the PVM, MPI, and Pthreads libraries. The PVM (Parallel Virtual Machine) and MPI (Message Passing Interface) libraries are de facto standards that support cluster programming and parallel programming through a message-passing model. The Pthreads library is the POSIX standard for threads; it can be used on multiprocessor systems to achieve parallelism and on single-processor systems to simulate parallelism. Each of these libraries is in wide use and has open source or free versions readily available. We use these libraries because they're robust, stable, and practical in a C++ environment. These libraries support projects of all sizes, from small cluster-based applications to massively parallel programming requirements.
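
For a sense of what the message-passing model looks like without encapsulation, here is a sketch of a parent task that spawns a single worker and exchanges one integer with it, using only raw PVM calls. This is the kind of call sequence that the interface classes developed in this series are meant to hide; the executable name "worker" and the message tags are hypothetical.

#include "pvm3.h"

int main()
{
   int WorkerTid;
   int Work = 42;    // hypothetical unit of work
   int Answer;

   // Spawn one instance of a (hypothetical) "worker" executable
   // somewhere on the virtual machine.
   if (pvm_spawn("worker", (char **)0, PvmTaskDefault, "", 1, &WorkerTid) != 1) {
      pvm_exit();
      return 1;
   }
   pvm_initsend(PvmDataDefault);   // initialize a fresh send buffer
   pvm_pkint(&Work, 1, 1);         // pack the work item
   pvm_send(WorkerTid, 1);         // message tag 1: work
   pvm_recv(WorkerTid, 2);         // block for tag 2: result
   pvm_upkint(&Answer, 1, 1);      // unpack the worker's answer
   pvm_exit();                     // detach from the virtual machine
   return 0;
}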


Implementations of all these libraries are available for the major platforms; PVM and MPI, for example, have implementations for Windows NT.
