The Linux world recently saw some controversy over the introduction of a new scheduler. While the politics behind this decision are somewhat interesting, they’re not the topic of this article, which seeks to explain exactly what a scheduler does, and how it works.
The basic task of a modern operating system is virtualization. Our friends over at Microsoft seem to have only just realized this fact, and are trying to re-brand old operating system concepts as "virtualization" in order to be fully buzzword-compliant. Virtualization of resources is important because it allows us to run more than one program on a single computer. Some of the earliest home computers didn't do this at all; each program came on its own tape and used the computer's resources to the full.
Even systems such as CP/M and MS-DOS did some basic virtualization by splitting the disk into files, thereby allowing different programs to share the same disk. Each file is, conceptually, a small virtual disk. DOS also performed some virtualization of the computer's memory, allowing small programs (.COM files) that fit into a single 8086 segment (64KB) to install themselves resident in memory that wouldn't be handed out to other programs, where they could handle certain interrupts (so-called terminate-and-stay-resident, or TSR, programs).
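The file-as-virtual-disk idea can be sketched with a toy model: one "disk" (here just a byte array) is carved into named, variable-sized regions, and each program sees only its own region, as if it had a tiny private disk. The `ToyDisk` class and its methods are invented for illustration; they don't correspond to any real filesystem API.

```python
class ToyDisk:
    """A toy disk spatially partitioned into named regions ("files")."""

    def __init__(self, size):
        self.blocks = bytearray(size)   # the raw storage
        self.files = {}                 # name -> (offset, length)
        self.next_free = 0

    def create(self, name, length):
        # Carve a variable-sized region out of the remaining free space.
        if self.next_free + length > len(self.blocks):
            raise OSError("disk full")
        self.files[name] = (self.next_free, length)
        self.next_free += length

    def write(self, name, data):
        offset, length = self.files[name]
        if len(data) > length:
            raise OSError("data larger than file region")
        self.blocks[offset:offset + len(data)] = data

    def read(self, name):
        offset, length = self.files[name]
        return bytes(self.blocks[offset:offset + length])

# Two programs share one disk, each writing only to its own region.
disk = ToyDisk(64)
disk.create("prog_a.dat", 16)
disk.create("prog_b.dat", 16)
disk.write("prog_a.dat", b"hello")
disk.write("prog_b.dat", b"world")
```

Each program gets the illusion of its own storage, while the operating system's bookkeeping (the `files` table) keeps the regions from colliding.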
Later, windowing systems helped to virtualize the screen; individual applications would draw in a virtual screen (window) and then the windowing system would arrange these screens.
All of these approaches use some kind of spatial partitioning: both disks and RAM are split into variable-sized blocks. The CPU is slightly different. With a modern multicore processor you could allocate one core to each process, but you would very quickly run out of cores. The solution is to make all the processes take turns using the CPU.
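The "take turns" idea can be sketched as a round-robin scheduler: each process gets a fixed time slice (quantum), and if it hasn't finished it goes to the back of the queue. The `Process` class, the `round_robin` function, and the abstract "units of work" are assumptions made for this sketch; a real scheduler preempts running code via timer interrupts rather than counting work units.

```python
from collections import deque

class Process:
    def __init__(self, name, work):
        self.name = name
        self.work = work            # remaining units of CPU work needed

def round_robin(processes, quantum=2):
    """Run each process for at most `quantum` units, cycling until all finish."""
    ready = deque(processes)        # the ready queue
    timeline = []                   # which process ran in each time slot
    while ready:
        proc = ready.popleft()      # pick the process at the head of the queue
        ran = min(quantum, proc.work)
        proc.work -= ran
        timeline.extend([proc.name] * ran)
        if proc.work > 0:           # not finished: back of the queue
            ready.append(proc)
    return timeline

timeline = round_robin([Process("A", 3), Process("B", 2), Process("C", 4)])
# With a quantum of 2: A, A, B, B, C, C, A, C, C
```

Every process makes progress, and no single process can monopolize the CPU for longer than one quantum at a time, which is the essential property a time-sharing scheduler provides.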