Windows Server Reference Guide

Windows Server Virtualization

The promise of virtualization is the ability to abstract the relationship between the processor and the operating system, so that the real purpose of a running OS is to deliver an application and its functionality to a designated set of clients. That may sound a little too ordinary at first, but I chose this phrasing carefully. Typically, a server OS delivers multiple applications serving a variable number of customers, and managing that business requires communication with any number of other servers. Until now, allocating power to application-serving resources has been a matter of finding or building the right processor box for the job.

With Windows Server 2008, all that changes. Processors in a network of servers become a unified ocean of power. Designating the proper amount of power to serve an application becomes a fluid matter; you're no longer annexing entire processor spaces in indivisible chunks.

In a very real sense, this could change the definition of "server," perhaps given a little more time. Until today, the term had two definitions: like "firewall," one pertained to hardware, the other to software. Today, there are (at least) three levels: hardware, physical software, and virtual software. But in some systems, the physical (host) OS's only purpose may be to facilitate the connection between the hardware and the virtual environment. Vista or XP could pull this off just as well as Longhorn.

#3: Windows Server Virtualization

In many of the early demonstrations of Windows Server Virtualization for Longhorn, code-named "Viridian," Microsoft maintained that the purpose of virtualization is so you can host virtual machines. That sounds like circular reasoning, but there is real sense behind the technology. Here's a list that might make a bit more common sense than what you've read thus far:

  1. In the physical world of servers, processor capacities are rarely used to their full extent, especially in the case of multicore servers. Engineers estimate that some server hardware whose workload appears to be taxing it to its maximum may actually be consuming only 15% of total CPU capacity; what's bogging down the system is all the I/O. Virtualization opens up the possibility of consolidation: using fewer processors, server components, or blades to do the same work by running multiple virtual servers on the same system. If four servers really consume only 15% of real-world CPU cycles apiece, there's a very good chance that virtual editions could share the same box, with smarter throughput compensating for the I/O rerouting. (A rough arithmetic sketch of this estimate appears after this list.)
  2. In the physical world, a virtual server is just a handful of files. These files can be backed up and replicated like any others, so if an active virtual server goes down or is taken down, it can be restored from the backup. The entire state of the machine, including the configuration under which hosted software runs, is recorded, literally for playback at a convenient time. (See the file-copy sketch after this list.)
  3. On Windows Server 2008, a single virtual server can be allocated among multiple processors simultaneously. At the time of this writing, the practical maximum was eight processors, with plans to encompass as many as 16, and hopefully 64 in a future release. Consider this "software-driven multicore." This is scaling in the opposite direction, and it's not just for kicks: whereas data centers used to construct intricate distribution schemes for forwarding calls to processors that had the free space and time to respond, a single server OS incorporating the same set of processors could reduce throughput time further by eliminating the distribution transactions altogether, while still achieving load balancing from a physical perspective.
  4. Virtual machines are more expendable. If a virtual machine is what's facing the Internet, then a typical hacking job could get the perpetrator nowhere. He could take down the virtual server, but where would that get him? A few seconds later, it's back online.
  5. The physical features of a dedicated PC, such as local hard drives and local network adapters, are convenient and even necessary for applications managed by an operating system to maintain themselves; but they become terribly redundant, costly, and even hot when multiplied to the extent necessary to manage a large enterprise. Virtualization cancels out this redundancy. Without completely remodeling the operating system to function in a more evolved world of hardware, which would arguably kick backward compatibility right out the window, virtualization gives software the resources it thinks it needs, with extra storage, networking, and memory capacity made available in mere seconds.
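
To put numbers on the consolidation estimate in item 1, here's a minimal sketch in Python. The utilization figures are invented for illustration; the point is only the arithmetic of adding up several lightly loaded machines.

    # Rough consolidation estimate; utilization figures are hypothetical.
    # Average CPU utilization of four physical servers, as fractions of one box.
    server_utilizations = [0.15, 0.12, 0.15, 0.10]

    combined = sum(server_utilizations)   # total demand once consolidated
    headroom = 0.25                       # reserve for I/O rerouting and spikes
    usable = 1.0 - headroom               # usable fraction of the host

    print(f"Combined demand: {combined:.0%} of one host")
    print(f"Fits with {headroom:.0%} headroom reserved: {combined <= usable}")

Even with a quarter of the host held back for the I/O rerouting the item mentions, the four workloads together claim just over half the box.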
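
Item 2 is just as easy to picture in code. Below is a minimal sketch, assuming a Virtual Server-style layout in which a VM's state lives in a .vmc configuration file plus one or more .vhd disk images; the paths and VM name are hypothetical.

    import shutil
    from pathlib import Path

    # Hypothetical paths; the VM should be shut down or saved before copying.
    vm_dir = Path(r"C:\VMs\WebServer01")        # holds the .vmc and .vhd files
    backup_dir = Path(r"D:\Backups\WebServer01-snapshot")

    # The whole machine state is a directory copy; restoring is the same
    # copy in the other direction.
    shutil.copytree(vm_dir, backup_dir)
    print(f"Backed up {sum(1 for p in backup_dir.rglob('*') if p.is_file())} files")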

Under WS2K8, virtualization becomes a key service of the operating system. You can utilize it through Virtual Server 2005 R2, though in a limited fashion; there are some new features that VS2005 can't fathom. One is 64-bit virtualization: the creation of a VM that hosts a 64-bit OS.

System Center Virtual Machine Manager

In beta at the time of this writing is an application with the typically dull and drawn-out name System Center Virtual Machine Manager. Although VS2005 is being distributed for free, its purpose is limited to running one or more VMs within the space of a single processor. Longhorn extends this concept by enabling scaling beyond a single processor. Those services, however, require an application to do the scaling and control the outcome, and that application is SCVMM.

Technically speaking, SCVMM is not a hypervisor itself but a management layer above one: an overlord of running virtual machines, accessible through one location. Remoting is possible here, so you can run multiple VMs on processors elsewhere in the network and administer them all through that single point of control.
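
Stripped of the product name, that single point of control amounts to a loop over remote hosts. The sketch below is purely illustrative: the host list, the VMStatus record, and query_host are hypothetical stand-ins, not SCVMM's actual interface, and the network call is faked with canned data.

    from dataclasses import dataclass

    @dataclass
    class VMStatus:
        host: str
        name: str
        state: str        # e.g. "Running", "Saved", "Off"
        cpu_percent: int

    def query_host(host: str) -> list:
        """Hypothetical stand-in for a remote-management call to one host."""
        sample = {
            "hostA": [VMStatus("hostA", "SQL01", "Running", 40)],
            "hostB": [VMStatus("hostB", "AD01", "Running", 10),
                      VMStatus("hostB", "Web01", "Saved", 0)],
        }
        return sample.get(host, [])

    # One console sees every VM on every host in the network.
    for host in ("hostA", "hostB"):
        for vm in query_host(host):
            print(f"{vm.host}/{vm.name}: {vm.state} ({vm.cpu_percent}% CPU)")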

After you wipe away the marketing hype around what Microsoft calls "self-managing dynamic systems" (virtualization isn't really about complete automation at all), the real principle of SCVMM is that it enables a virtualized datacenter. Here, the optimum layout and distribution of an enterprise's servers, were money not a factor, becomes possible. You can have one operating system devoted entirely to SQL Server, another to managing identity, and another to Active Directory. All of these VMs can be managed by a kind of laboratory monitor (SCVMM) that maintains the illusion for the services they run.
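
One way to read "optimum layout" is as a plan you can write down and hand to the manager, independent of whatever boxes happen to be in the rack. A hedged sketch, with role names and resource figures invented for illustration:

    # Hypothetical layout: one VM per role, each sized on its own terms
    # rather than rounded up to a whole physical server.
    layout = {
        "SQL01": {"role": "SQL Server",       "vcpus": 4, "ram_gb": 8},
        "IDM01": {"role": "Identity",         "vcpus": 2, "ram_gb": 4},
        "AD01":  {"role": "Active Directory", "vcpus": 1, "ram_gb": 2},
    }

    total_vcpus = sum(vm["vcpus"] for vm in layout.values())
    print(f"{len(layout)} VMs drawing {total_vcpus} virtual CPUs from the shared pool")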

In a sense, the SCVMM-monitored environment is actually more efficient than if all these applications had been distributed over multiple physical processors. In reality, you're using fewer processors (or "cores") and fewer network adapters, and possibly eliminating unnecessary local hard drive storage altogether, replacing it with the illusion of local storage maintained behind the storage network.
