
A Short Treatise on Distributed Computing

In a brief historical trip through the Computing Age, Alex Vrenios takes a look at the progression of technology, from the early calculating devices that culminated in the "electronic brains" of the late 1940s, to the latest software-defined architectures for cluster servers and supercomputers. He also speculates about some architectural changes on the horizon and a few possible applications in our future.

A distributed computing system is a set of internetworked computers that behaves as if it were a single computer. Specialized software makes them all work together as a team, presenting a single-system image to the user. This "specialized software" is the key: given a distributed system, custom software can instill it with the personality of a highly reliable cluster server, a supercomputer, or something completely different, as we shall see.
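To make the "single-system image" idea concrete, here is a toy sketch (my own illustration, not from the article) in which a pool of worker processes stands in for a set of networked machines. The caller sees one simple interface; the coordination software hides the fact that the work is spread across several workers. The function names are hypothetical.

```python
# Toy illustration of a single-system image: the caller uses one simple
# map-style interface, while the work is actually farmed out to several
# worker processes (standing in here for separate networked computers).
from multiprocessing import Pool

def square(x):
    # A stand-in for any unit of work a node might perform.
    return x * x

def distributed_map(func, values, workers=4):
    """Spread func over the values across several workers, but present
    the result as if a single computer had done all the work."""
    with Pool(workers) as pool:
        return pool.map(func, values)

if __name__ == "__main__":
    # The user sees one call and one answer; the distribution is invisible.
    print(distributed_map(square, range(5)))  # [0, 1, 4, 9, 16]
```

A real distributed system would replace the process pool with message passing between machines, but the design principle is the same: the coordination layer, not the user, decides where each piece of work runs.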

Early Computing: What Were They Thinking?

You can only imagine my delight at finding the book Giant Brains, or Machines That Think, by Edmund C. Berkeley (J. Wiley & Sons, 1949), in an old bookshop. Berkeley says, "A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think." While we may not all agree with his assessment, it's a look into the issues of his time. That was in the late 1940s, what many consider to be the birth of the Computer Age.

The earliest computing devices were mechanical. Berkeley points out that the words calculate and calculus both derive from the Latin word for lime or limestone, suggesting a link to pebbles used as counting stones in ancient times, and later in the abacus. The slide rule, with numbers cleverly arranged on a logarithmic scale, is perhaps the most familiar purely mechanical calculator. The ultimate mechanical calculator, however, was the differential analyzer, which solved differential equations with shafts, screws, wheels, and gears. Its solutions were used to create firing tables that let battlefield commanders accurately aim their long-range artillery.

Mechanical calculators evolved into electromechanical and later purely electronic devices. In the beginning, one entered a value, an operation, and another value, as one might do on any hand calculator today. Later, one entered a list of operations along with a list of values, leaving places where intermediate values were stored. The concept of a "stored program," or general-purpose computer, freed the operator from that setup process. Once written and stored on external media, the program was there to be used with any new input data.

Early computers filled entire rooms and kept their programmers warm at night, glowing softly in the darkness. Their thousands of vacuum tubes were often proudly compared to neurons inside the giant electronic brain, but they were also a primary point of failure. (The October 1996 issue of IEEE Computer magazine shows a moth that jammed a relay, presenting it as evidence of the first "bug" found in a system!) Technological advancements gave us the smaller and more reliable transistor, then integrated circuits, and later an entire computer system on a chip. Computers got faster, but computer architecture (the components inside, and how they all worked together) stayed pretty much the same over the decades. One day that all changed, splitting computer architecture into several directions at once.
