Now that you have an understanding of what real-time means, it’s time to expand on it. Real-time computing is the study and practice of building applications with real-world, time-critical constraints. Real-time systems must respond to external, often physical, real-world events at a certain time, or by a deadline. A real-time system often includes both the hardware and the software in its entirety. Traditionally, real-time systems were purpose-built systems implemented for specific uses; it’s only recently that the real-time community has focused on general-purpose computing systems (hardware, software, or both) to solve real-time problems.
Today, the need for specialized, dedicated hardware for real-time systems has mostly disappeared. For instance, modern chipsets include programmable interrupt controllers with latency low enough for demanding real-time applications. As a result, support for real-time requirements has moved to software; i.e., specialized schedulers and resource controllers. Algorithms that were once etched into special circuitry are now implemented in software on general-purpose computers.
This is not to say that hardware support isn’t needed in a real-time system. For example, many real-time systems will likely require access to a programmable interrupt controller for low-latency interrupts and scheduling, a high-resolution clock for precise timing, direct physical memory access, or a high-speed memory cache. Most modern computer hardware, including servers, workstations, and even desktops and laptops, supports these requirements. The bottom line is whether the operating system software running on this hardware supports access to these hardware facilities.
The operating system may, in fact, support real-time tasks directly through its scheduling implementation, or may at least allow alternative scheduling algorithms to be put in place. However, many general-purpose operating systems schedule tasks to achieve different goals than a real-time system. Other factors, such as overall system throughput, foreground application performance, and GUI refresh rates, may be favored over an individual task’s latency requirements. In fact, in a general-purpose system, there may be no way to accurately specify or measure an application’s latency requirements and actual results.
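To make this concrete, here is a minimal sketch in standard-edition Java (not the RTSJ) illustrating the point: on a general-purpose platform, the most an application can express is a priority hint. There is no standard API to state a latency requirement such as “this task must finish within 5 ms,” and whether the underlying OS honors even the priority hint deterministically is outside the application’s control. The thread name and empty work body are purely illustrative.

```java
public class PriorityHint {
    public static void main(String[] args) throws InterruptedException {
        Thread urgent = new Thread(() -> {
            // time-critical work would go here
        }, "urgent-task");

        // On a general-purpose JVM/OS, setPriority() is merely a hint to the
        // scheduler; there is no way to specify or verify a deadline.
        urgent.setPriority(Thread.MAX_PRIORITY);
        urgent.start();
        urgent.join();

        System.out.println("priority used: " + urgent.getPriority());
    }
}
```

Contrast this with a real-time scheduler, where a task’s timing requirements can be declared up front and enforced by the system.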
However, it is still possible to achieve real-time behavior, and meet real-time tasks’ deadlines, on general-purpose operating systems. In fact, this is one of the charter goals that Java RTS, and the RTSJ, set out to achieve: real-time behavior in Java on general-purpose hardware and real-time operating systems. In reality, only a subset of general-purpose systems can be supported.
The remainder of this chapter provides an overview of the theory and mechanics involved in scheduling tasks in a real-time system. To be clear, real-time scheduling theory requires a great deal of math to describe and understand thoroughly. There is good reason for this: when a system has requirements to meet every deadline for actions that may have dire consequences if missed, you need to make assurances with the utmost precision. Characterizing and guaranteeing system behavior with mathematics is the only way to do it. However, we’ll attempt to discuss the subject without overburdening you with deep mathematical concepts. Instead, analogies, descriptions, and visuals will be used to bring the concepts down to earth, at a level where the average programmer should be comfortable. For those who are interested in the deeper math and science of the subject, references to further reading material are provided.
The Highway Analogy
One simple way to describe the dynamics of scheduling tasks in a real-time system is to use a highway analogy. When driving a car, we’ve all experienced the impact of high volume; namely, the unpredictable amount of time spent waiting in traffic instead of making progress towards a destination. This situation is strikingly similar to scheduling tasks in a real-time system, or any system, for that matter. In the case of automobile traffic, the items being scheduled are cars, and the resource that they’re all sharing is road space. Comparatively, a computer system schedules tasks, and the resource they share is CPU time. (Of course, they also share memory, IO, disk access, and so on, but let’s keep it simple for now.)
In the highway analogy, the lanes represent overall computer resources, or time available to process tasks. More capable computers can be loosely described as having more lanes available, while less capable systems have fewer. A car is equivalent to a task that has been released (eligible for execution). Looking at Figure 1-8, you can see tasks “traveling” down individual lanes, making forward progress over time. At moments when more tasks share the highway, the entire system is considered to be busy, and usually all tasks will execute more slowly. This is similar to the effects that high volumes of cars have on individual car speeds; they each slow down as they share the highway. Since, in this scenario, all tasks share the resources (the highway) equally, they are all impacted in a similar, but unpredictable, way. It’s impossible to deterministically know when an individual task will be able to complete.
Figure 1-8 As with cars on a highway, when there are more tasks executing, the system slows down, and execution times become unpredictable.
In the real world, engineers designing road systems have come up with a solution to this problem: a dedicated lane.
The Highway Analogy—Adding a Priority Lane
Figure 1-9 proposes a specialized solution to this problem: a dedicated high-priority lane (sometimes called a carpool, or HOV lane, on a real highway). We refer to it as specialized because it doesn’t help all tasks in the system (or all cars on the highway), only those that meet the requirements to enter the high-priority lane. Those tasks (or cars) receive precedence over all others, and move at a more predictable pace. Similarly, in a real-time system, dedicating system resources to high-priority tasks ensures that those tasks gain predictability, are less prone to traffic delay, and therefore complete more or less on time. Only the normal (lower-priority) tasks feel the effects of high system volume.
Figure 1-9 Introducing a high-priority lane to a highway ensures that the cars in that lane are less susceptible to traffic, and therefore travel more predictably towards their destinations.
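The dynamics of Figures 1-8 and 1-9 can be captured in a small, deterministic simulation. In this sketch (a toy model, not how any real scheduler is implemented), a single CPU runs its high-priority “lane” to completion first, while normal tasks share the remaining time round-robin, one time unit per turn. The task names and workloads are hypothetical; the point is that the high-priority task’s completion time is unchanged by system load, while the normal task’s completion time grows with it.

```java
import java.util.List;

public class PriorityLaneSim {
    // A task needs 'work' units of CPU time; 'high' marks the priority lane.
    record Task(String name, int work, boolean high) {}

    // Single CPU: the high-priority lane runs first, one task after another;
    // normal tasks then share the CPU round-robin, one time unit per turn.
    // Returns the time at which the task called 'name' finishes.
    static int completionTime(List<Task> tasks, String name) {
        int t = 0;
        for (Task task : tasks) {
            if (task.high()) {
                t += task.work();
                if (task.name().equals(name)) return t;
            }
        }
        List<Task> normals = tasks.stream().filter(x -> !x.high()).toList();
        int[] left = normals.stream().mapToInt(Task::work).toArray();
        boolean anyLeft = true;
        while (anyLeft) {
            anyLeft = false;
            for (int i = 0; i < normals.size(); i++) {
                if (left[i] > 0) {
                    left[i]--;   // this task gets one time unit
                    t++;
                    if (left[i] == 0 && normals.get(i).name().equals(name)) return t;
                    if (left[i] > 0) anyLeft = true;
                }
            }
        }
        throw new IllegalArgumentException("no such task: " + name);
    }

    public static void main(String[] args) {
        List<Task> light = List.of(
            new Task("control", 2, true),
            new Task("log", 5, false));
        List<Task> heavy = List.of(
            new Task("control", 2, true),
            new Task("log", 5, false),
            new Task("report", 5, false),
            new Task("backup", 5, false));
        // The high-priority task is unaffected by the extra traffic...
        System.out.println("control: " + completionTime(light, "control")
            + " -> " + completionTime(heavy, "control"));
        // ...while the normal task slows down as the highway fills up.
        System.out.println("log: " + completionTime(light, "log")
            + " -> " + completionTime(heavy, "log"));
    }
}
```

Running the simulation, the high-priority task finishes at time 2 under both loads, while the normal task’s completion time more than doubles when the extra tasks arrive.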
This analogy goes a long way towards describing, and modeling, the dynamics of a real-time system. For instance:
- Tasks in the high-priority lane gain execution precedence over other tasks.
- Tasks in the high-priority lane receive a dedicated amount of system resources to ensure they complete on time.
- When the system is busy, only normal tasks feel the impact; tasks in the high-priority lane are almost completely unaffected.
- Overall, the system loses throughput, as fewer lanes are available to execute tasks.
- Tasks only enter the high-priority lane at certain checkpoints.
- Some system overhead is required at the checkpoints. Just as cars need to cautiously (and slowly) enter and exit a carpool lane, tasks are slightly impacted.
- Tasks may be denied access to the high-priority lane if their entry would adversely affect the other tasks already running.
Additionally, metering lights are used at the on-ramps to many highways. These lights control the flow of additional cars (analogy: new tasks) onto the highway to ensure the cars already on the highway are impacted as little as possible. These lights are analogous to the admission control algorithm of the scheduler in a real-time system.
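A classic example of such a “metering light” is the utilization-based admission test from rate-monotonic scheduling theory: a set of n periodic tasks is guaranteed schedulable if total CPU utilization stays at or below n(2^(1/n) − 1). The sketch below (the class and method names are hypothetical, not a Java RTS API) admits a new periodic task only when that bound still holds.

```java
import java.util.ArrayList;
import java.util.List;

public class AdmissionControl {
    // A periodic task: needs 'cost' ms of CPU every 'period' ms.
    record PeriodicTask(double cost, double period) {
        double utilization() { return cost / period; }
    }

    private final List<PeriodicTask> admitted = new ArrayList<>();

    // "Metering light": admit the candidate only if total utilization stays
    // within the rate-monotonic bound n(2^(1/n) - 1) for n tasks.
    boolean tryAdmit(PeriodicTask candidate) {
        int n = admitted.size() + 1;
        double u = candidate.utilization();
        for (PeriodicTask t : admitted) u += t.utilization();
        double bound = n * (Math.pow(2, 1.0 / n) - 1);
        if (u > bound) return false;  // light stays red: deadlines at risk
        admitted.add(candidate);      // light turns green
        return true;
    }

    public static void main(String[] args) {
        AdmissionControl scheduler = new AdmissionControl();
        System.out.println(scheduler.tryAdmit(new PeriodicTask(20, 100))); // U = 0.20
        System.out.println(scheduler.tryAdmit(new PeriodicTask(30, 100))); // U = 0.50
        System.out.println(scheduler.tryAdmit(new PeriodicTask(40, 100))); // U would be 0.90
    }
}
```

The first two tasks are admitted, but the third is turned away: its addition would push utilization to 0.90, above the three-task bound of roughly 0.78.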
Most importantly, this analogy shows that there’s no magic involved in supporting a real-time system; it, too, has its limits. For instance, there’s a limit to the number of high-priority tasks that can execute and meet their deadlines without causing all tasks to miss their deadlines. Also, because of the need to dedicate resources to real-time tasks, the added checkpoints for acceptance of real-time tasks, the need to more tightly control access to shared resources, and the need to perform additional task monitoring, the system as a whole will assuredly lose some performance and/or throughput. However, in a real-time system, predictability trumps throughput, and lost throughput can be recovered by other, less-complicated means.
As we explore the details of common scheduling algorithms used in actual real-time systems, you will also see that simply “adding more lanes” doesn’t always resolve the problem effectively. There are practical limits to any solution. In fact, in some cases that we’ll explore, adding processors to a computer can cause previously feasible schedules to become infeasible. Task scheduling involves many system dynamics, where the varying combination of tasks and available resources at different points in time represents a difficult problem to solve deterministically. However, it can be done. Let’s begin to explore some of the common algorithms used, and the constraints they deal with.