I've always been interested in the history of science. It's fascinating to see how theories are influenced by their cultural backdrops. One example that has filtered all the way down into everyday conversation is the number of colors in the spectrum. Most people, if they had to give a number of discrete colors visible in a rainbow, would say six. Isaac Newton, who first demonstrated the splitting of visible light, would have agreed. Unfortunately, he had a superstitious attachment to the number seven, so he decided that the purple he saw was really two colors—indigo and violet—in spite of the fact that they were more similar than various shades of other colors that he treated singly. Today, children are taught about the seven colors of the rainbow, a completely arbitrary number chosen by a superstitious alchemist.
Programming languages are even more interesting because they're affected by several things:
- As languages, their syntax is often driven by psycholinguistics, and even today their grammars are generally described in terms of the Chomsky hierarchy.
- Unlike natural languages, programming languages are designed to be run on machines, so they're limited by the capabilities of the computers of the era.
- They also tend to embody some of the latest thinking in software engineering and of theoretical models of computing.
Often those last two factors are at odds with each other.
The first high-level programming languages were designed over 50 years ago now, and we've had a lot of experience creating them. These days, most new programming languages tend to be designed by combining features from older languages—or, in some cases, by removing features. That's not to say that there are no new ideas; the entire field of esoteric languages contains a number of languages that diverge from the accepted norm. My favorite example is Piet, a language in which programs look like the artwork of Piet Mondrian.
Most popular languages, however, inherit a lot from their predecessors. In this series, we're going to take a look at where some of the concepts found in modern programming languages originate.
Although some of the languages that we'll examine in this series are relatively obscure, the first one, ALGOL, is not. For a short while, it enjoyed considerable success. Being able to run ALGOL programs was seen as a major selling point for computers in the 1960s and early '70s. ALGOL is rarely used anymore, but its derivatives account for a significant fraction of all software. The web browser you're using to read this article, along with the operating system on which it runs, were almost certainly written in a close relative of ALGOL.
Goto Considered Harmful
It's not often that an article can be said to define the shape of an industry. There are two famous examples in computing. One is Gordon Moore's "Cramming More Components onto Integrated Circuits," a paper misquoted by far more people than have read it, which defined Moore's Law and set expectations for how computing power would increase over the next half-century.
Although many people (mis)quote Moore's Law, few can name the paper from which it came. The same is not true of Edsger Dijkstra's famous letter to the Association for Computing Machinery (ACM): "Go To Statement Considered Harmful." This letter was written a decade after the first version of ALGOL, but it cites a remark from the meeting that eventually led to the creation of ALGOL 60 (the second version of the language).
Dijkstra's letter is often used to mark the beginning of the structured programming movement. Structured programming was the software engineering buzzword of the 1970s. All modern programming styles can be seen as either special cases or continuations of the ideas from structured programming.
ALGOL was designed to allow programs to be separated into subroutines, which could potentially be reused. This was the essence of structured programming—that interactions between parts of the program should be strictly controlled. With ALGOL subroutines, you could create libraries of routines that were easy to reuse in other contexts. More importantly, because interactions were strictly limited, you could often narrow down the part of the source code that contained a bug. Remember, this was an era when a single iteration through a compile-test-debug cycle could easily take an entire day. Compiling and running a program would take several hours. Having to go through the entire source code to track a bug was not fun at all.