"Computers are never fast enough." That used to be the mantra of computer users everywhere. Then, CPU manufacturers hit the 1GHz barrier and, for most users, it stopped being true. Unless you are playing the latest games, performing scientific computation, editing video, or running a high-volume server for dynamic data, modern machines have far more power than what you need.
More interestingly, this is even true of a lot of server workloads. A small or midsize enterprise's (SME's) web server load, for example, could easily be handled by a 5- to 10-year-old machine. It seems somewhat wasteful to buy a new machine and use so little of it. Using old machines isn't a great solution either, as they are often unreliable (and difficult to find spare parts for), have high power requirements, and take up a lot of space.
The ideal solution would be to buy just a small bit of a new machine and let other people use the rest. This is where the idea of virtualization comes in. You would get a virtual machine that ran on a real machine. It would behave like a real machine, but could be shared between users.
The traditional solution to the "problem" of having too much CPU power is to run more complicated and bloated software. A more efficient solution is provided by virtualization and paravirtualization applications such as Xen.
What Is Virtualization?
The idea behind virtualization is an extension of that found in a modern operating system. A program running, for example, on a UNIX machine has its own virtual address space. From the program’s perspective, it has a large chunk (4GB on a 32-bit machine) of RAM that it can use. The operating system is responsible for multiplexing this with other programs. This large, contiguous space doesn’t exist in the real machine; some of it will be scattered around real memory, while the rest of it might be stored on a hard disk.
Memory is not the only thing virtualized with a modern OS. The CPU is usually allocated to different processes using some form of pre-emption. When a process has used its fair share of the CPU, it is interrupted and another is allowed to take its place. From the process’ perspective, it has a CPU of its own (or more than one, if it is multithreaded).
Some resources, however, are not shared so transparently. Access to the screen, for example, must be negotiated with other processes, often with the aid of a set of library functions that hides the negotiation from the developer.
The idea of virtualization is to take this concept to its logical conclusion. A physical computer is partitioned into several logical partitions, each of which looks like a real computer. Each of these partitions can have an operating system installed on it, and function as if it were a completely separate machine.
Virtualization is not a new concept. It has been in use in the mainframe world for some time, and has hardware support in high-end servers. It is quite difficult to accomplish on x86 hardware, however.
Complete virtualization requires two features:
- CPU virtualization
- Peripheral virtualization
The second of these is relatively easy, although time-consuming, to implement. It can be done either in specialized hardware or firmware, or by a virtualization system running on top of an existing operating system. Either way, the virtualization layer exposes interfaces that look like real hardware to each of the virtual machines and then multiplexes their accesses onto the real hardware.
The difficulty of CPU virtualization depends on the architecture. For most instruction sets, instructions fall into two categories: privileged and unprivileged. Unprivileged instructions are the only ones that "normal" applications can use, and they are easy to virtualize; almost every modern operating system already does so when it time-slices processes. The privileged instructions deal with access to the physical machine and are used by the operating system. These are not always easy to virtualize.
The DEC Alpha implemented all of its privileged instructions through a single mechanism: they were defined in replaceable firmware (known as PALcode) as sequences of standard instructions running with special privileges. Virtualizing such a CPU was comparatively trivial.
Other common RISC architectures have more complex privileged instruction mechanisms but, for the most part, they are easy to virtualize because all privileged instructions trap. This means that every time a privileged instruction is issued from an unprivileged mode, the processor generates an event that allows a virtual machine monitor to step in and emulate the instruction.
Intel’s x86 architecture, however, lacks this property. A small number of its instructions are sensitive (their behavior depends on the processor's privilege level) yet do not trap when executed from user mode, so a virtual machine monitor never gets a chance to intercept them. A common approach to virtualizing x86 therefore relies on dynamic recompilation: the instruction stream is scanned, and any occurrence of these instructions is replaced by something that the virtual machine software can intercept. This process is significantly slower than simple trap-based virtualization.