
A Brief History of Programming, Part 1

In part 1 of his series on the history of programming, David Chisnall takes a look at some of the developments of the last few decades that have created the current crop of languages and discusses where they came from.

In the first half of the last century, Alan Turing proposed a theoretical mechanical programming engine, known as the Turing Machine. This machine had an infinitely long tape, an internal register storing its state, and a table of actions.

At each step, it would read the symbol from the current location on the tape and consult the table to find what it should do for that symbol and state pair. It would then perform some or all of the following actions:

  • Write a new symbol.
  • Change the state in the internal register.
  • Move the tape left or right.
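The table-driven step loop described above can be sketched in a few lines of Python. The machine below is purely illustrative (it is not Turing's notation, and the bit-flipping rules are invented for the example): each table entry maps a (state, symbol) pair to a symbol to write, a head movement, and a next state.

```python
# A minimal sketch of a Turing machine's step loop. The rule table
# *is* the machine's entire "program": (state, symbol) -> (write, move, next_state).
RULES = {
    # An illustrative machine that flips bits until it reads the blank '_'.
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),
}

def run(tape, state="flip", head=0):
    tape = list(tape)
    while state != "halt":
        # Consult the table for the current (state, symbol) pair...
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write   # ...write a new symbol,
        head += move         # ...move the tape,
                             # ...and the state was updated in the unpacking above.
    return "".join(tape)

print(run("1011_"))  # → 0100_
```

A real Turing machine's tape is infinite in both directions; a Python list stands in for it here only because the example never runs off either end.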

With the right entries in its table, this simple machine could compute any algorithm. A fundamental concept of information theory governs relationships between sets: any item in one set can be mapped uniquely to an item in another set of the same cardinality.

Turing realized that this meant you could represent a Turing Machine so that it could be read by another Turing Machine. You could then construct a Universal Turing Machine, which would take another Turing Machine (suitably encoded) as input and then run as if it were that machine.

This is the concept behind all programming: that a suitably general computing machine can emulate any specific one. A computer program is nothing more than a means of turning a general-purpose computing engine into a special-purpose one.

It’s All Bits and Bytes

The first computers were highly specialized machines. Due to the source of their funding, they were focused heavily on running a set of simple algorithms that were used for code breaking. Whenever the algorithm (or, in many cases, the input) changed, the computers needed to be rewired.

It was a little while later that stored program computers emerged, such as the Manchester Baby. Like the Universal Turing Machine, these computers stored the algorithms they were to compute in the same way they stored data.

These early machines were programmed in pure machine code. The operations that the computer would perform were represented by short binary sequences, and programmers would enter them either by flipping switches, making holes in punch cards or tapes, or pressing buttons.

Rather than raw binary sequences, most systems let programmers enter each short sequence as a single octal or hexadecimal digit, but this still wasn't ideal.

This binary system wasn’t very human-friendly, so the idea of a symbolic assembler arose. Rather than entering the binary codes directly, programmers would enter mnemonics that represented them. While an add operation might be 01101011, the programmer would enter ADD, which was much easier to remember.

These assembly language sequences had a simple one-to-one mapping with machine code instructions, so a simple program comprising a lookup table was all that was required to turn them into real code.
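That one-to-one mapping really is little more than a lookup table, as a short sketch shows. The mnemonics and opcode values below are invented for illustration; they don't belong to any real instruction set.

```python
# A sketch of a symbolic assembler's core: a lookup table from
# mnemonic to machine-code byte. Opcodes here are invented, not
# taken from any real machine.
OPCODES = {
    "ADD": 0b01101011,
    "SUB": 0b01101100,
    "JMP": 0b01110000,
}

def assemble(mnemonics):
    """Translate each mnemonic into its machine-code value."""
    return [OPCODES[m] for m in mnemonics]

print(assemble(["ADD", "SUB", "ADD"]))  # → [107, 108, 107]
```

Real assemblers also had to encode operands (registers, addresses) into each instruction word, but the principle is the same table-driven substitution.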

One of the biggest innovations introduced by symbolic assemblers was that of symbolic branch destinations. Most programs involve large numbers of conditional statements: do one thing if a value is in a certain range; otherwise, do something else.

At the machine-code level, they are translated into jumps, either relative or absolute, which move the place from which the next instruction is read, either to a specific location or to a certain offset from the current one.

A machine code programmer had to calculate these offsets and enter them in the program as fixed numbers. If the programmer wanted to add another instruction somewhere, every jump whose target fell after the new instruction (and every backward relative jump crossing it) needed to be updated.

With a symbolic assembler, jumps could be given symbolic names, and the assembler would convert these names into real addresses when it ran. If you added a new instruction somewhere, you still needed to run the assembler again, but it would take care of the jump updates for you. This made programs a lot more flexible. It also made them slightly more efficient.

Programmers previously worked around this limitation by inserting short sequences of no-operation instructions in places where they thought they might need to add code later (often with an unconditional jump to skip over them). With an assembly-language program, this was no longer required.
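The label mechanism described above is commonly implemented as a two-pass scheme, which a small sketch can illustrate. The "instruction set" and label syntax here are invented for the example: pass one records the address of each label, and pass two substitutes those addresses for the symbolic names.

```python
# A sketch of symbolic branch-destination resolution as a classic
# two-pass assembly. The mnemonics and "name:" label syntax are
# invented for illustration.
def assemble(lines):
    labels, code = {}, []
    # Pass 1: record each label's address; collect real instructions.
    for line in lines:
        if line.endswith(":"):
            labels[line[:-1]] = len(code)  # address of the next instruction
        else:
            code.append(line.split())
    # Pass 2: replace label operands with the addresses found above.
    return [[labels.get(tok, tok) for tok in instr] for instr in code]

program = [
    "loop:",
    "ADD r0 r1",
    "JNZ loop",    # the jump target is a name, not a hand-counted offset
    "HLT",
]
print(assemble(program))  # → [['ADD', 'r0', 'r1'], ['JNZ', 0], ['HLT']]
```

Because addresses are computed on every run, inserting a new instruction anywhere simply shifts the recorded label addresses; the programmer never recalculates a jump by hand.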
