In this sample chapter from Android Concurrency, author G. Blake Meike explains how concurrent programs are used in software and hardware, introduces threads as a way of executing sequential instructions concurrently, and discusses the Java memory model, the contract that connects the work of application programmers and hardware developers.
- We propose to use the delays τ as absolute units of time which can be relied upon to synchronize the functions of various parts of the device.
- John von Neumann
In order to build correct, concurrent Android programs, a developer needs a good model of concurrent processes, how they work, and what they are for. Concurrency actually isn’t a big deal for most normal humans. For any multi-celled animal—arguably even for viruses—it is just normal existence. It is only those of us obsessed with computers who give a second thought to the idea of walking and chewing gum at the same time.
Concurrency Made Hard
Walking and chewing gum isn’t easy in the strange world of Dr. John von Neumann. In his 1945 paper, “First Draft of a Report on the EDVAC” (von Neumann 1945), he describes the architecture of one of the very first electronic digital computers. In most ways, that architecture has changed very little in the seventy years since. Throughout their history, digital computers have been, roughly speaking, gigantic balls of state that are transformed, over time, by a sequence of well-defined operations. Time and order are intrinsic parts of the definition of the machine.
Most of computer science has been the discussion of clever sequences of operations that will transform one machine state into another, more desirable, state. As modern machines commonly have more than 10¹⁴ possible states, those discussions are already barely manageable. If the order in which transformations take place can vary, the discussion necessarily broadens to include all possible combinations of all possible states, and becomes utterly impossible. Sequential execution is the law of the land.
Concurrency in Software
Of course, computer languages are written for humans. They are intended to help us express an algorithm (the sequence of instructions that transforms the machine state) efficiently, correctly, and, perhaps, even in a way that future human readers can understand.
Early programming languages were, essentially, an extension of the hardware. Even today many are reflections of the machine architecture they were originally designed to control. Nearly all of them are procedural and consist of lists of instructions for changing (mutating) the state of memory. Because it is simply too difficult to reason about all of the possible states of that memory, languages have, over time, become more and more restrictive about the state changes they allow a developer to express. One way to look at the history of programming language design is as a quest for a system that allows developers to express correct algorithms easily, and not express incorrect ones at all.
The very first languages were machine languages—code that translated, one-for-one, into instructions for the computer. These languages were undesirable for two reasons. First, expressing even a very simple idea might take tens of lines of code. Second, it was much too easy to express errors.
Over time, in order to restrict and manage the ways in which a program could change state, languages have narrowed the choices. Most, for instance, restrict program execution from arbitrarily skipping around between instructions to using now-familiar conditionals, loops, and procedure calls. Modules and eventually OOP (Object-Oriented Programming) followed, as ways of separating a program into small, understandable pieces and then limiting the way those pieces can interact. This modularized, building-block approach makes modern languages more abstract and expressive. Some even have well-developed type systems that help prevent errors. Almost all of them, though, are still imperative: lists of instructions for changing machine state.
While most computer research and development focused on doing more and more complicated things, on bigger and faster hardware based on von Neumann architecture, a small but persistent contingent has pursued a completely different idea: functional programming.
A purely functional program differs from a procedural program in that it does not have mutable state. Instead of reasoning about successive changes to machine state, functional languages reason about evaluating functions at given parameters. This is a fairly radical idea and it takes some thinking to understand how it could work. If it were possible, though, it would have some very appealing aspects from a concurrency point of view. In particular, if there is no mutable state, there is no implicit time or order. If there is no implicit order, then concurrency is just an uninteresting tautology.
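The contrast can be sketched in Java, the book’s language. This is an illustrative fragment, not code from any library; the names `Counter` and `add` are invented for the example:

```java
// Procedural style: the result depends on mutable state, so the order
// in which calls happen matters.
class Counter {
    private int total = 0;
    int addAndGet(int n) { total += n; return total; }  // mutates state
}

// Functional style: the result depends only on the arguments, so two
// calls can run in any order -- or at the same time -- with no change
// in meaning.
final class Sums {
    static int add(int a, int b) { return a + b; }  // touches no state
}

public class PureVsImpure {
    public static void main(String[] args) {
        Counter c = new Counter();
        System.out.println(c.addAndGet(1));  // 1
        System.out.println(c.addAndGet(1));  // 2: same call, new answer
        System.out.println(Sums.add(1, 1));  // 2: always, in any order
    }
}
```

Because `Sums.add` has no hidden state, there is nothing for two concurrent callers to corrupt; `Counter.addAndGet`, by contrast, quietly depends on every call that came before it.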
John McCarthy introduced Lisp, the first functional language, in 1958, only a year or two after the creation of the first commonly accepted procedural language, Fortran. Since then, Lisp and its functional relatives (Scheme, ML, Haskell, Erlang, and so on) have been variously dismissed as brilliant but impractical, as educational tools, or as shibboleths for hipster developers. Now that Moore’s law (Moore, 1965) is more likely to predict the number of processors on a chip than the speed of a single processor, people are not dismissing functional languages anymore. (In 1975, Moore revised his original estimate, predicting that the number of integrated circuit (IC) components would double every two years.)
Programming in a functional style is an important strategy for concurrent programming and may become more important in the future. Java, the language of Android, does not qualify as a functional language and certainly does not support the complex type system associated with most functional languages.
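Even so, some functional habits translate directly into plain Java. A hedged sketch, using only the standard library (the value class `Celsius` is invented for illustration): immutable data plus pure transformations, rather than in-place mutation.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FunctionalStyle {
    // Immutable value: all fields final, no setters.
    static final class Celsius {
        final double degrees;
        Celsius(double degrees) { this.degrees = degrees; }
    }

    // Pure function: the output depends only on the input.
    static double toFahrenheit(Celsius c) {
        return c.degrees * 9.0 / 5.0 + 32.0;
    }

    public static void main(String[] args) {
        List<Celsius> readings =
                Arrays.asList(new Celsius(0.0), new Celsius(100.0));
        // map() builds a new list; the original list is untouched.
        List<Double> converted = readings.stream()
                .map(FunctionalStyle::toFahrenheit)
                .collect(Collectors.toList());
        System.out.println(converted);  // [32.0, 212.0]
    }
}
```

Nothing here is mutated after construction, so the conversions could be evaluated in any order, which is exactly the property that makes functional style attractive for concurrency.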
Language as Contract
Functional or procedural, a programming language is an abstraction. Only a tiny fraction of developers need to get anywhere near machine language these days. Even that tiny fraction is probably writing code in a virtual instruction set, implemented by a software virtual machine, or by chip firmware. The only developers likely to understand the precise behavior of the instruction set for a particular piece of hardware, in detail, are the ones writing compilers for it.
It follows that a developer, writing a program in some particular language, expects to understand the behavior of that program by making assertions in that language. A developer reasons in the language in which the program is written—the abstraction—and almost never needs to demonstrate that a program is correct (or incorrect) by examining the actual machine code. She might reason, for instance, that something happens 13 times because the loop counter is initialized to 13, decremented each time through the loop, and the loop is terminated when the counter reaches 0.
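In Java, that reasoning might correspond to a loop like this (a made-up fragment, not one of the book’s examples):

```java
public class LoopCount {
    public static void main(String[] args) {
        int passes = 0;
        // The developer reasons entirely in the language: the counter
        // starts at 13, drops by one each pass, and the loop exits when
        // it reaches 0 -- so the body must run exactly 13 times.
        // No machine code is ever consulted.
        for (int counter = 13; counter > 0; counter--) {
            passes++;  // "something happens" here
        }
        System.out.println(passes);  // 13
    }
}
```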
This is important because most of our languages are imperative (not functional) abstractions. Even though hardware registers, caches, instruction pipelines, and clock cycles typically don’t come up during program design, when we reason about our programs we are, nonetheless, reasoning about sequence.
Concurrency in Hardware
It is supremely ironic that procedural languages, originally reflections of the architecture they were designed to control, no longer represent the behavior of computer hardware. Although the CPU of an early computer might have been capable of only a single operation per tick of its internal clock, all modern processors are performing multiple tasks simultaneously. It would make no sense at all to idle even a quarter of the transistors on a 4-billion-gate IC while waiting for some single operation to complete.
Hardware is physical stuff. It is part of the real world, and the real world is most definitely not sequential. Modern hardware is very parallel.
In addition to running on parallel processors, modern programs are more and more frequently interacting with a wildly parallel world. The owners of even fairly ordinary feature-phones are constantly multitasking: listening to music while browsing the web, or answering the phone call that suddenly arrives. They expect the phone to keep up. At the same time, sensors, hardware buttons, touch screens, and microphones are all simultaneously sending data to programs. Maintaining the illusion of “sequentiality” is quite a feat.
A developer is in an odd position. As shown in Figure 1.1, she is building a set of instructions for a sequential abstraction that will run on a highly parallel processor for a program that will interact with a parallel world.
Figure 1.1 A sequential program in a concurrent world