- What Are Business Critical Applications?
- Why Virtualize Business Critical Applications?
- Risks, Challenges, and Common Objections of Virtualizing Business Critical Applications
Why Virtualize Business Critical Applications?
Now that we’ve reviewed exactly what makes an application business critical, we can explore why organizations are looking to virtualize these applications. Many organizations have widely adopted virtualization and have already virtualized a significant portion of their infrastructure. Business critical applications are often the workloads that remain on physical servers, for a variety of reasons. Virtualizing them can be the next step on an organization’s journey to its own private cloud infrastructure.
Although virtualizing these critical applications brings many of the same benefits as virtualizing any application, it also carries extra risks and challenges that can make the process difficult. By carefully weighing those risks against the benefits an organization can realize, a solid business case can be built for virtualizing business critical applications.
The benefits of virtualization that apply to lower-tier or less-critical virtual machines, such as the reduction in cost of servers, power, and cooling, also apply to virtualizing business critical applications. Those benefits, in addition to others that are more specific to business critical applications, are discussed in the upcoming sections.
Business critical applications typically require high availability, because downtime to the application can be costly to the business. Some applications have native high-availability features built in, and even those can still achieve better availability when virtualized on the vSphere platform. High availability in this context can be defined as a system or application that is online and available for a high percentage of time, often approaching 100%. Highly available systems often have mechanisms to provide automatic and immediate resiliency to the application when a failure occurs. These systems often operate within the same physical site or location, though newer technologies can provide for high availability of applications across sites.
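To put “approaching 100%” in perspective, each availability percentage maps to a fixed budget of downtime per year. The short calculation below is purely illustrative; the percentages shown are common industry availability tiers, not figures from this chapter:

```python
# Annual downtime permitted at common availability levels ("nines").
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of downtime per year allowed at a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability allows "
          f"{downtime_minutes_per_year(pct):,.1f} minutes of downtime per year")
# 99.0%   -> 5,256.0 minutes (~3.7 days)
# 99.9%   -> 525.6 minutes (~8.8 hours)
# 99.99%  -> 52.6 minutes
# 99.999% -> 5.3 minutes
```

The jump from each tier to the next cuts the allowable downtime by a factor of ten, which is why “highly available” systems lean on automatic failover rather than manual recovery.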
Both Microsoft Exchange Server and SQL Server serve as good examples of applications that typically require high availability in most organizations. Each offers its own native high-availability features: Exchange provides Database Availability Groups, and SQL Server provides AlwaysOn Availability Groups (among other high-availability features). These features work at the application level and can offer protection in the event of a server, network, or even storage failure. So does it make sense to virtualize these applications when they already include native high-availability features?
Each of the native high-availability features found in Exchange Server and SQL Server has its own requirements and complexities. For some organizations, these complexities raise the cost of maintaining the environment to the point where the features are simply not worth it. By virtualizing these applications on vSphere, organizations can instead take advantage of vSphere High Availability (HA) to protect their business critical workloads. vSphere HA automatically restarts virtual machines if the host on which they are running fails for any reason. The virtual machines are restarted quickly on another host in the cluster, restoring service to the application and end users.
If an Exchange Mailbox server is running on an ESXi host that fails, the virtual machine fails along with it and is automatically restarted on another ESXi host. Organizations get this functionality without having to worry about the complexities of Exchange replication, active and passive database copies, extra storage requirements, and all the other configuration associated with Database Availability Groups. This availability model has its own challenges, such as single points of failure at the operating system and storage levels, but for some organizations the protection that vSphere HA provides is enough to meet their availability requirements.
For other organizations with more complex availability requirements, the native high-availability features provided by Exchange or SQL are necessary. To get the best possible availability, those organizations can combine the high-availability features of these applications with vSphere HA to provide better availability than would be easily possible with physical servers.
As an example, consider an organization that has virtualized its Exchange 2013 environment and utilizes Database Availability Groups to maintain three copies of each mailbox database. Let’s now say that the motherboard of an ESXi host running an Exchange 2013 Mailbox server virtual machine experiences a failure and the host powers off unexpectedly. In that scenario, the Database Availability Group will quickly detect the failure and activate database copies on a surviving Mailbox server (likely faster than it would take a Mailbox server virtual machine to restart if an organization were relying on vSphere HA alone). vSphere HA will then restart the failed Mailbox server on another ESXi host, restoring full availability to the Exchange environment quickly and automatically.
Just how quickly could full availability be restored? VMware conducted a test of this exact scenario using the previous version of Exchange, and found that the failed Mailbox VM booted up and resumed replication in approximately three minutes from the time when the first ESXi host experienced a failure. The full details of the test can be found here: http://www.vmware.com/files/pdf/using-vmware-HA-DRS-and-vmotion-with-exchange-2010-dags.pdf. The quick restoration of email service via native high availability in Exchange combined with the ability of vSphere HA to restart virtual machines greatly enhances the availability of the application.
Now consider that same scenario with physical Mailbox servers. After the physical Mailbox server fails, the surviving Mailbox servers would activate the database copies just as they did in the virtual environment, restoring Exchange services quickly and easily. Unfortunately, the physical server that failed would need a hardware replacement. Most enterprise support contracts have a two- to four-hour service-level agreement (SLA) for replacement parts, extending the period during which the Exchange environment is less protected from another failure. When you compare a two- to four-hour restoration time to a three-minute restoration time, you can start to see why many organizations are looking to virtualize their business critical applications in order to enhance availability for their applications. These support contracts are still critical because a failed ESXi host reduces the overall capacity of the infrastructure, but proper design considerations (discussed for each application in each chapter of the book) can help mitigate the risk.
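The gap between those two restoration windows can be made concrete. Using the figures above (the roughly 3-minute virtual restart from VMware’s test and the 2- to 4-hour hardware SLA), a trivial and purely illustrative calculation shows how much longer the environment runs with reduced protection in the physical case:

```python
# Window of reduced protection after a failure, in minutes.
virtual_restore_min = 3              # vSphere HA restart plus replication resume
physical_sla_min = (2 * 60, 4 * 60)  # typical 2- to 4-hour parts-replacement SLA

for sla in physical_sla_min:
    print(f"{sla} min physical vs {virtual_restore_min} min virtual: "
          f"{sla // virtual_restore_min}x longer exposure")
# 120 min physical vs 3 min virtual: 40x longer exposure
# 240 min physical vs 3 min virtual: 80x longer exposure
```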
Not all business critical applications have their own native high-availability features. Take multitier applications that utilize middle-tier servers and back-end SQL Server databases as an example. The SQL environment might have its own native high-availability features, but the middle-tier servers commonly do not. In that case, vSphere HA can protect the tiers that have no native protection of their own. The capability to provide high availability to applications that do not natively support it is a compelling reason to consider virtualizing business critical applications.
Availability is one of the key reasons why organizations should strongly consider virtualizing their business critical workloads on VMware vSphere. By providing high availability to applications that lack it natively, or by combining vSphere HA with an application’s own high-availability features, organizations can achieve higher levels of availability than either approach delivers alone. Less downtime for applications means more productive end users, a more agile (and relaxed) administrative staff, and a more successful business.
Disaster recovery is another key consideration for business critical applications. As recent history has shown us, hurricanes, earthquakes, and tsunamis are very real things that can have a significant impact on people and businesses. Protecting an organization’s most critical applications should be an important factor of any design.
We can define disaster recovery as the process of recovering a server or an application after a significant failure or disaster. Disaster recovery is often a manual process involving the relocation of servers or applications from a failed location to a secondary location that is unaffected by the disaster. Information technology (IT) folks often talk about disaster recovery in terms of major catastrophes like earthquakes, asteroid impacts, or the zombie apocalypse (you know who you are), but a disaster does not have to be so dramatic. It can be a water pipe bursting above your server room, a fire in the building, an extended loss of power, or even the failure of a critical piece of hardware. Organizations need to be prepared to deal with a disaster and recover from it in order to keep the business operating.
At its core, virtualization provides a means of disaster recovery simply by encapsulating virtual machines into individual files that are portable. Copying critical virtual machines to an external hard drive before a hurricane makes landfall is a simplistic (and certainly not ideal) example of how virtualization facilitates easier disaster recovery for all applications.
Copying virtual machine disk files to an external hard drive is likely not going to be an acceptable disaster recovery plan for most businesses. Instead, companies will want to consider a more robust solution that can provide disaster recovery in a simpler, more unified way. By virtualizing business critical applications on vSphere, organizations can take advantage of technologies like VMware Site Recovery Manager to aid in their disaster recovery plans. Site Recovery Manager can be used to create recovery plans for critical virtual machines; these plans control which applications are protected, how they are replicated between sites, and in which order they are restarted in the secondary site. In addition, Site Recovery Manager offers the capability to fully test a disaster recovery failover without actually declaring a disaster. The more workloads that are virtualized on vSphere, the more of an organization’s servers can be included in the disaster recovery plan.
Similar to the discussion around high availability, what if an application already includes native functionality that can provide disaster recovery? Both Exchange Server and SQL Server offer native functionality that can replicate data between sites for disaster recovery purposes. As before, for many organizations the burden of maintaining multiple replication technologies for multiple applications means higher cost and complexity, which can lead to extended outages and longer recovery times. When all recovery plans are unified in a single tool, Site Recovery Manager, a single team is capable of restoring multiple applications in the event of a disaster. That can significantly reduce complexity and speed recovery, because multiple application teams are not required to execute separate recovery plans for their individual applications.
The features that provide high availability in Exchange Server and SQL Server are the same features that also provide disaster recovery. By combining those features with Site Recovery Manager, you can still utilize tools like SQL Server 2012 AlwaysOn Availability Groups to maintain high availability within the primary site while using Site Recovery Manager for disaster-recovery purposes. The combination of technology can often provide a “best of both worlds” configuration for an organization.
Many applications offer no native means of disaster recovery at all. For those applications, virtualizing them on the vSphere platform and leveraging Site Recovery Manager can once again provide functionality that is simply not possible when the application runs on a physical server.
The disaster recovery capabilities of the vSphere platform and Site Recovery Manager present a compelling reason for organizations to virtualize their business critical applications. Even if the application already provides native disaster recovery functionality, businesses can still see additional benefits by virtualizing the application on vSphere.
With business critical applications, the ability to create an environment that is scalable to meet the demands of the business is of huge importance. Resource needs of an application can grow for various reasons, including differing business cycles, increases in demand, or normal end-of-month processing. An e-commerce company might see increased spikes in demand during the holiday season, when more orders are placed through their systems than during any other time of the year. Similarly, an accounting firm might experience huge increases in customer demand and activity as the deadline to file taxes approaches. For these and many other similar situations, it is important that an organization’s business critical applications are able to scale to meet demand.
If an organization deploys its critical applications on physical servers, scalability becomes much more difficult. Physical servers typically would have to be sized to meet the maximum expected demand despite the fact that this demand might occur only during brief periods every month, every quarter, or even every year. This raises the total cost of the environment and leaves the system largely underutilized for much of the year.
Virtualizing these business critical applications on VMware vSphere can aid in scalability in two key ways. First, vSphere provides a technology called Distributed Resource Scheduler (DRS) that can automatically balance workloads among the ESXi hosts in a cluster to even out resource demands. If an ESXi host becomes overloaded, for example because a SQL Server virtual machine processing a burst of transactions shares the host with a web server handling a large spike in customer requests, DRS can automatically perform vMotion migrations to balance out the load and return utilization to normal levels.
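DRS’s real placement logic is proprietary and considers far more than raw CPU demand, but the core idea, migrating virtual machines off an overloaded host until the cluster is balanced, can be sketched with a toy greedy balancer. The host names, VM names, demand figures, and threshold below are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    vms: dict = field(default_factory=dict)  # VM name -> CPU demand (MHz)

    @property
    def load(self) -> int:
        return sum(self.vms.values())

def rebalance(hosts, threshold):
    """Toy balancer: while some host exceeds `threshold`, migrate its
    smallest VM to the least-loaded host. Returns (vm, src, dst) moves."""
    moves = []
    while True:
        src = max(hosts, key=lambda h: h.load)
        dst = min(hosts, key=lambda h: h.load)
        if src.load <= threshold or src is dst or not src.vms:
            return moves
        vm = min(src.vms, key=src.vms.get)       # cheapest migration first
        if dst.load + src.vms[vm] >= src.load:   # move would not improve balance
            return moves
        dst.vms[vm] = src.vms.pop(vm)            # the "vMotion"
        moves.append((vm, src.name, dst.name))

cluster = [Host("esxi-01", {"sql01": 6000, "web01": 4000}),
           Host("esxi-02", {"app01": 2000})]
print(rebalance(cluster, threshold=8000))
# [('web01', 'esxi-01', 'esxi-02')]
```

Real DRS weighs CPU and memory demand, migration cost, affinity rules, and more; this sketch only illustrates why automatic rebalancing removes the need to size every host for its own peak load.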
The other key scalability benefit that vSphere provides is the ability to add CPU and memory resources to running virtual machines with no downtime. The operating system and the application must support this functionality for the new hardware to be recognized; for example, newer versions of Windows Server, starting with Windows Server 2008 R2 Enterprise Edition (and above), natively support it. In the scenario described previously, in which the CPU utilization on a SQL Server virtual machine is spiking due to high demand, a vSphere administrator can add CPU resources to the server and run a simple command (in SQL Server’s case, the Transact-SQL RECONFIGURE statement) to make the application aware of its new resources. By providing a way to scale resources on the fly when demand requires it, vSphere gives administrators a powerful scalability tool and a significant advantage over deploying business critical applications on physical servers.
Application owners and developers commonly need new servers to deploy new applications, test application updates, or for a variety of other reasons. New databases, or even entirely new SQL Server instances, are a particularly common request from developers testing their applications. If physical servers are used, these requests can take days or even weeks to fulfill.
By deploying these business critical resources on the vSphere platform, administrators can deploy new instances much more quickly than if they had to deploy them on physical servers. Administrators can take advantage of vSphere features like templates to create master images of particular applications and then deploy them quickly when needed.
Similarly, the provisioning of new resources to virtual machines is also done quickly and easily. In many cases, resources can be added on the fly without the need to shut down the virtual machine. If a virtual machine needs more space on its virtual hard drive, the drive can be expanded without the need to shut down the virtual machine. In a physical server, adding new local storage to a server typically requires downtime. Other resources, such as virtual network cards, can also be deployed on the fly without any downtime.
Administrators or application owners often need to test updates or changes to applications that are live and running in production. If the application is virtualized, they can simply make a clone of the virtual machine and have an identical copy of the server to use as a test server. Clone operations do not cause any downtime to the virtual machine being cloned and, when properly isolated, enable developers or application owners to perform tests against actual data rather than on development or test servers. This is another example of a capability that is not easily achieved with physical servers.
Another way virtual machines make testing applications much easier is with snapshots. Virtual machine snapshots provide a way to quickly and easily go back to a point-in-time copy of the virtual machine. This can make testing changes in a virtual machine much simpler, because a change that causes harm can be quickly undone by reverting to a clean state at the time the snapshot was taken. After it is verified that the change does not cause any undue harm, the snapshot can be deleted and the changes are committed to the virtual machine. This functionality would be difficult to duplicate with physical servers.
One of the “classic” benefits of virtualization has always been consolidation, or the capability to consolidate multiple physical servers into virtual machines running on fewer physical servers. This can provide an organization with potentially large savings on the cost of the servers, as well as the hard and soft costs for managing and maintaining those servers. There are also cost savings for the power and cooling that would be required to support those physical servers.
Business critical applications can also benefit from consolidation through virtualization. Many of these applications are complex, often broken up into multiple roles within the same application and requiring multiple servers. Or the application could have native clustering or high-availability features that require multiple servers deployed for one application.
Microsoft Exchange Server and SQL Server are both good examples of business critical applications that can benefit from consolidation. In Exchange 2010, the previous version of Exchange, there were five server roles that each served different purposes. If an organization wanted to separate those roles into separate physical servers, it would require a significant investment in hardware to accommodate that requirement. By virtualizing the roles of Exchange, the organization can achieve the same goal of role separation while still consolidating onto fewer physical servers. The new version, Exchange 2013, has reduced the server roles to two, still offering some benefits for consolidation.
SQL Server also has separate components as well as native high-availability features that can require multiple physical servers. By virtualizing SQL Server onto fewer physical servers, organizations can still use dedicated servers for specific components or utilize high-availability features while still reducing the physical footprint. In addition, consolidating SQL Server can have a significant impact on the cost of licensing. This topic is discussed further in Chapter 7, “Virtualizing Microsoft SQL Server 2012.”
Consolidation is typically not the main reason why organizations choose to virtualize a business critical application. That said, organizations can still realize the benefits of consolidation even with business critical applications.