Building a Linux Cluster, Part 2: What's Involved?
Editor's Note: Be sure to read the other articles in this series, Building a Linux Cluster, Part 1: Why Bother? and Building a Linux Cluster, Part 3: How To Get Started.
Part 1 of this series examined some of the whys of building Linux clusters. We shouldn't have to apply the "rule of five whys" (which states that you must ask "Why?" at least five times before you reach the real answer) to determine whether a Linux cluster is worth considering for your environment. If you're convinced—as you should be—that under the right circumstances a Linux cluster can substitute for an expensive SMP system, the next step is the physical engineering of the solution. In this installment, we consider the what of cluster building: the hardware and software components that make up a Linux cluster, and some ways to think about integrating them into a solution for your environment.
Before we get too deep into the what portion of the discussion, I need to mention that I don't have all of the answers about Linux cluster design or implementation. For one thing, the available software and hardware components change on what seems like an hourly basis. I monitor about 35 technology-related RSS feeds to keep up, and I follow cluster-specific software projects and other sites on a less frequent basis. Linux in general, and clusters in particular, can be rapidly moving targets. Blink twice and the world will have moved on a bit.
Before getting too worried about the pace of Linux cluster advancement, however, remember that you can still save money without trying to earn a spot on the TOP500 supercomputer list. The process of selecting the right hardware and software components is iterative—the only way to start is to produce an initial design and then improve it. We don't want our management and friends accusing us of "analysis paralysis," right?