
Active Hardware Components

The cluster's compute slices are the most numerous of the active hardware elements, as they should be: the cluster exists to provide parallel compute resources. One reason you'll frequently see two-CPU compute slices in scientific and engineering clusters is the heavy floating-point activity that often characterizes those applications. More than two jobs doing heavy floating-point work (often even more than one) can saturate the system bus, yielding diminishing returns for each additional job. The memory-to-CPU bandwidth of the system chosen for the compute slice, coupled with the expected application mix, will dictate the best choice for a compute-intensive cluster.

Transaction- or I/O-intensive clusters may benefit from systems with more CPUs, although above four CPUs we start seeing SMP price creep, as I discussed in Part 1 of this series. There is a tradeoff between compute-slice cost, performance, and reduction of system administration effort for a given number of CPUs. Benchmarking and thoroughly characterizing your workload prior to designing the cluster is a terrific idea. Relying on information from someone who has already built a similar cluster is also a good approach.
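As a rough illustration of the kind of workload characterization meant here, a memory-bandwidth microbenchmark in the spirit of STREAM's triad kernel can hint at when a compute slice's memory bus will become the bottleneck. This is only a sketch; the array size, repeat count, and NumPy-based timing approach are my own choices, not anything prescribed by the article:

```python
import time
import numpy as np

def triad_bandwidth(n=10_000_000, repeats=5):
    """Estimate sustained memory bandwidth (GB/s) using a STREAM-style
    triad kernel, a[i] = b[i] + scalar * c[i].

    n and repeats are arbitrary illustrative defaults.
    """
    b = np.random.rand(n)
    c = np.random.rand(n)
    scalar = 3.0
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a = b + scalar * c  # one store and two loads per element
        best = min(best, time.perf_counter() - t0)
    # Three arrays of 8-byte doubles are touched per iteration.
    return 3 * 8 * n / best / 1e9

if __name__ == "__main__":
    print(f"approx. triad bandwidth: {triad_bandwidth():.1f} GB/s")
```

Running this with one copy per CPU (e.g., under a simple shell loop) and watching how the per-process figure drops gives a crude picture of the bus-saturation effect described above, before you commit to a compute-slice design.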
