

Choosing to Virtualize Servers

The section "Virtualization as an IT Organization Strategy" identified the basic reasons why organizations have chosen to virtualize their physical servers into virtual guest sessions. Beyond those, organizations benefit from server virtualization in several other areas: they can use virtualization in test and development environments, minimize the number of physical servers in an environment, and leverage the capabilities of simplified virtual server images in high-availability and disaster-recovery scenarios.

Virtualization for Test and Development Environments

Server virtualization got its start in the test and development environments of IT organizations. The simplicity of adding a single host server and loading up multiple guest virtual sessions to test applications or develop multiserver scenarios, without having to buy and manage multiple physical servers, was extremely attractive. Today, with 4-, 8-, or 16-core processors and significant performance capacity available in a single physical server, organizations can host dozens of test and development virtual server sessions on just one or two host servers.

With administrative tools built into the virtual server host systems, the guest sessions can be connected together or completely isolated from one another, providing virtual local area networks (LANs) that simulate a production environment. In addition, an administrator can create a single base virtual image with, for example, Windows Server 2003 Enterprise Edition on it, and save that base image as a template. To create a "new server" whenever desired, the administrator just makes a duplicate copy of the base template image and boots that new image. Creating a server system takes 5 minutes in a virtual environment. In the past, the administrator would have to acquire the hardware, configure it, insert the Windows Server CD, and wait 20 to 30 minutes for the base configuration to install. Then it was usually another 30 to 60 minutes to download and install the latest service packs and patches before the system was ready.
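The copy-the-template workflow described above amounts to duplicating a base disk image into a per-guest folder. The following Python sketch illustrates that idea only; the function name, paths, and naming convention are hypothetical stand-ins, not a Hyper-V or VMM API.

```python
import shutil
from pathlib import Path

def provision_from_template(template_vhd: Path, vm_store: Path, vm_name: str) -> Path:
    """Create a new guest image by duplicating a base template disk.

    Hypothetical sketch of the copy-the-template workflow: the
    layout (one folder per guest, disk named after the guest) is
    an illustrative convention, not a hypervisor requirement.
    """
    vm_dir = vm_store / vm_name
    vm_dir.mkdir(parents=True, exist_ok=False)  # one folder per new guest
    new_vhd = vm_dir / f"{vm_name}.vhd"
    shutil.copy2(template_vhd, new_vhd)         # duplicate the base image
    return new_vhd
```

In practice, a provisioning tool such as VMM 2008 automates this duplication from a template library, as Chapter 11 covers.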

Provisioning tools such as Microsoft System Center Virtual Machine Manager 2008 (VMM), covered in Chapter 11, "Using Virtual Machine Manager 2008 for Provisioning," further streamline the creation of new guest images from templates and make it possible to delegate provisioning to others, greatly simplifying the task of making virtual guest sessions available for test and development purposes.

Virtualization for Server Consolidation

Another common use of server virtualization is consolidating physical servers, as covered in the section "What Is Server Virtualization and Microsoft Hyper-V?" Organizations that have undertaken concerted server consolidation efforts have been able to decrease the number of physical servers by upward of 60% to 80%. It's usually very simple for an organization to decrease the number of physical servers by at least 25% to 35% simply by identifying low-usage, single-task systems.

Servers such as domain controllers, Dynamic Host Configuration Protocol (DHCP) servers, web servers, and the like are prime candidates for virtualization because they are typically running on simple "pizza box" servers (thin, 1-unit-high rack-mounted systems). Chapter 3, "Planning, Sizing, and Architecting a Hyper-V Environment," shows you how to identify servers that are prime candidates for virtualization and server consolidation.

Beyond just taking physical servers and doing a one-for-one replacement as virtual servers, many organizations are realizing they simply have too many servers doing the same thing, many of them underutilized for lack of demand. The excess capacity may have been projected based on organizational growth expectations that never materialized, or may have since been reduced by organizational consolidation.

Server consolidation also means that organizations can now decrease their number of sites and data centers to fewer, centralized data centers. When wide area network (WAN) connections were extremely expensive and not completely reliable, organizations distributed servers to branch offices and remote locations. Today, however, the need for a fully distributed data environment has greatly diminished because the cost of Internet connectivity has decreased, WAN performance has increased, WAN reliability has drastically improved, and applications now support full-feature robust web capabilities.

Don't think of server consolidation as just taking every physical server and making it a virtual server. Instead, spend a few moments thinking about how to decrease the number of physical (and virtual) systems in general, and then virtualize only the number of systems required. Because it is easy to provision a new virtual server, if additional capacity is required, it doesn't take long to spin up a new virtual server image to meet the demands of the organization. This ease contrasts starkly with the requirements of the past: purchasing hardware and spending the better part of a day configuring it and installing the base Windows operating system on the physical system.

Virtualization as a Strategy for Disaster Recovery and High Availability

Most organizations realize a positive spillover effect from virtualizing their environments: They create higher availability and enhance their disaster-recovery potential, and thus fulfill other IT initiatives. Disaster recovery and business continuity are on the minds of most IT professionals; specifically, how to quickly bring servers and systems back online in the event of a server failure or a disaster (natural or otherwise). Without virtualization, disaster-recovery plans generally require adding even more servers to create redundancy, both in a physical data center perhaps already bloated with too many servers and in a remote location.

Virtualization has greatly improved an organization's ability to actually implement a disaster-recovery plan. As physical servers are virtualized and the physical server count drops by 25%, 50%, or more, the organization can repurpose the spare systems as redundant servers or as hosts for redundant virtual images, both within the data center and at remote locations for redundant data sites. Some organizations have felt their consolidation effort was negated because, even though they virtualized half their servers, they then doubled the number of virtual servers to gain redundancy and fault tolerance. The net of the effort, however, is that the organization has been able to get disaster recovery in place without adding physical servers to the network.

After virtualizing servers as guest images, organizations are finding that a virtualized image is very simple to replicate; after all, it's typically nothing more than a single file sitting on a server. In its simplest form, an organization can just "pause" the guest session temporarily, "copy" the virtual guest session image, and then "resume" the guest session to bring it back online. The copy of the image has all the information of the server. The image can be used to re-create a scenario in a test lab environment, or it can be saved so that in the event the primary image fails, the copy can be booted to bring the server immediately back up and running. There are more elegant ways to replicate an image file, as covered in the section "Using Guest Clustering to Protect a Virtual Guest Session" in Chapter 12, "Application-Level Failover and Disaster Recovery in a Hyper-V Environment." However, the ability of an IT department to bring up a failed server, within a data center or remotely, has been greatly simplified through virtualization technologies.
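The pause/copy/resume sequence can be sketched in a few lines. In this hedged Python sketch, `pause` and `resume` are hypothetical callables standing in for whatever the hypervisor's management interface provides; the point the code makes is that, with the guest quiesced, replicating the server is just a file copy.

```python
import shutil
from pathlib import Path

def snapshot_guest(image: Path, backup_dir: Path, pause, resume) -> Path:
    """Pause-copy-resume backup of a running guest image.

    `pause` and `resume` are hypothetical hooks into the hypervisor's
    management interface; the guest image itself is a single file,
    so the replication step is an ordinary file copy.
    """
    backup_dir.mkdir(parents=True, exist_ok=True)
    target = backup_dir / image.name
    pause()                       # quiesce the guest so the file is consistent
    try:
        shutil.copy2(image, target)
    finally:
        resume()                  # bring the guest back online even if the copy fails
    return target
```

Pausing first ensures the image file isn't changing mid-copy, and the try/finally guarantees the guest is resumed even when the copy fails.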
