Introduction to vSphere
VMware released the successor to Virtual Infrastructure 3 (VI3) in May 2009 with a new name, vSphere, and a new version number, 4.0. This release introduces many new features, both small and large, which we will cover in this chapter. However, don't be intimidated by all the new features. Overall, the core product is basically the same, so many of the things you know from VI3 will also apply to vSphere.
What's New in This Release
When it came time to release the successor to its VI3 datacenter virtualization product, VMware chose to change the name of the product family from VI3 to vSphere. In addition, VMware took the opportunity to sync up the version numbers between its ESX and ESXi products with that of its vCenter Server product to be more consistent and to avoid confusion. With VI3, vCenter Server was at version 2.x and ESX and ESXi were known as version 3.x. Now with vSphere, ESX, ESXi, and vCenter Server are at version 4.x, with the initial release of vSphere being 4.0. In this section, we will cover what is new in each major area and detail each new feature and enhancement so that you can understand the benefits and how to take advantage of them.
Storage, Backup, and Data Protection
vSphere offers many enhancements and new features related to storage, backups, and data protection, which is a compelling reason in and of itself to upgrade from VI3 to vSphere. From thin provisioning to Storage VMotion to the vStorage APIs, this area has greatly improved in terms of performance, usability, and vendor integration.
Thin Provisioning Enhancements
Thin provisioned disks are not new to vSphere, as they also existed in VI3; however, numerous changes have made them more usable in vSphere. The changes made to thin disks in vSphere include the following.
- In VI3, thin disks could only be created manually using the vmkfstools command-line utility. In vSphere, thin disks can be created using the vSphere client at the time a virtual machine (VM) is created.
- In VI3, thick disks could only be converted to thin disks using vmkfstools and only when a VM was powered off. In vSphere, existing thick disks can be easily converted to thin disks using the Storage VMotion feature while a VM is powered on.
- In VI3, the only way to see the actual current size of a thin disk was through the command line. In vSphere, new storage views, provided by a vCenter Server plug-in, let you see the actual size of thin disks directly in the vSphere Client.
- In VI3, there were no alarms for monitoring datastore usage. In vSphere, configurable alarms are built into vCenter Server that allow you to monitor datastore overallocation and space usage percentages.
- In VI3, if a thin disk could no longer grow because of insufficient datastore space, the VM would crash and its data could be corrupted. In vSphere, a new safety feature automatically suspends VMs with thin disks when datastore free space is critically low to prevent corruption and OS crashes.
These new improvements make thin disks more manageable and much easier to use in vSphere compared to VI3. We will cover thin disks in more detail in Chapter 3.
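To make the thin-versus-thick trade-off concrete, here is a toy sketch (plain Python, not VMware code) of the accounting that makes thin provisioning, and with it datastore overallocation, possible: thick disks consume their full provisioned size up front, while thin disks consume only what has actually been written.

```python
# Toy illustration (not VMware code): with thin provisioning, the sum of
# provisioned disk sizes can exceed a datastore's physical capacity.

def datastore_report(capacity_gb, disks):
    """disks: list of (provisioned_gb, written_gb) tuples. A thick disk
    would report written_gb equal to provisioned_gb up front; a thin disk
    reports only the space actually written so far."""
    provisioned = sum(p for p, _ in disks)
    used = sum(w for _, w in disks)
    return {
        "provisioned_gb": provisioned,
        "used_gb": used,
        "free_gb": capacity_gb - used,
        "overallocated": provisioned > capacity_gb,  # more promised than exists
    }

# Three 100GB thin disks with only 20GB written each, on a 250GB datastore:
report = datastore_report(250, [(100, 20), (100, 20), (100, 20)])
print(report)  # 300GB promised on a 250GB datastore, yet only 60GB consumed
```

The "overallocated" condition is exactly what the new vCenter Server datastore alarms watch for, and the shrinking "free_gb" figure is what triggers the automatic VM suspension described above.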
iSCSI Enhancements
iSCSI storage arrays have become an increasingly popular storage choice for virtual hosts due to their lower cost (compared to Fibre Channel storage area networks [FC SANs]) and decent performance. Using iSCSI software initiators has always carried a slight performance penalty compared to hardware initiators with TCP offload engines, because the host CPU handles the TCP/IP processing. In vSphere, VMware rewrote the entire iSCSI software initiator stack to make more efficient use of CPU cycles, resulting in significant efficiency gains (ranging from 7% to 52%) and throughput improvements compared to VI3.
VMware did this by enhancing the VMkernel TCP/IP stack, optimizing the cache affinity, and improving internal lock efficiency. Other improvements to iSCSI include easier provisioning and configuration, as well as support for bidirectional CHAP authentication, which provides better security by requiring both the initiator and the target to authenticate each other.
Storage VMotion Enhancements
Storage VMotion was introduced in version 3.5, but it was difficult to use because it could only be run from a command-line utility. VMware fixed this in vSphere by integrating it into the vSphere Client so that you can quickly and easily perform SVMotions. In addition to providing a GUI for SVMotion in vSphere, VMware also enhanced SVMotion to allow conversion of thick disks to thin disks and thin disks to thick disks. VMware also made some under-the-covers enhancements to SVMotion to make the migration process much more efficient. In VI3, SVMotion relied on snapshots when copying a disk to its new location, committing them once the operation was complete. In vSphere, SVMotion instead uses the new Changed Block Tracking (CBT) feature to keep track of blocks that change after the copy process starts, and copies them once it completes. We will cover Storage VMotion in more detail in Chapter 9.
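The CBT-based copy can be sketched in a few lines; this is an illustrative toy model of the idea, not VMware's implementation:

```python
# Toy sketch of the Changed Block Tracking idea behind Storage VMotion:
# do a bulk copy while the VM keeps writing, record which blocks changed
# during the copy, then re-copy only those blocks in a short final pass.

def svmotion_copy(source, writes_during_copy):
    dest = list(source)            # initial bulk copy of every block
    changed = set()
    for block, value in writes_during_copy:
        source[block] = value      # the VM keeps writing to the source disk...
        changed.add(block)         # ...while CBT records which blocks are dirty
    for block in changed:          # short final pass: re-copy only dirty blocks
        dest[block] = source[block]
    return dest

src = ["a", "b", "c", "d"]
dst = svmotion_copy(src, [(1, "B"), (3, "D")])
print(dst)  # destination converges on the live source without a full re-copy
```

Compared to the VI3 snapshot approach, the final pass touches only the blocks that actually changed, which is why the vSphere migration process is so much more efficient.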
Support for Fibre Channel over Ethernet and Jumbo Frames
vSphere adds support for newer storage and networking technologies which include the following.
- Fibre Channel over Ethernet (FCoE)—vSphere now supports FCoE via Converged Network Adapters (CNAs). FCoE encapsulates Fibre Channel frames in Ethernet, which allows for additional storage configuration options.
- Jumbo frames—Conventional Ethernet frames are 1,518 bytes in length. Jumbo frames are typically 9,000 bytes in length, which can improve network throughput and CPU efficiency. VMware added support for jumbo frames in ESX 3.5 but did not officially support jumbo frames for use with storage protocols. With the vSphere release, the company officially supports the use of jumbo frames with software iSCSI and NFS storage devices, with both 1Gbit and 10Gbit NICs to help improve their efficiency.
Both of these technologies can provide great increases in performance when using network-based storage devices such as iSCSI and NFS, and can bring them closer to the level of performance that the more expensive Fibre Channel storage provides.
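A rough back-of-the-envelope calculation shows why jumbo frames reduce per-frame overhead; the 40-byte header figure below is an assumption for illustration, not a number from this chapter:

```python
# Back-of-the-envelope sketch: a larger MTU means far fewer frames (and far
# fewer per-frame CPU interrupts) for the same transfer. We assume roughly
# 40 bytes of IP + TCP headers inside each frame; exact overhead varies.

def frames_needed(transfer_bytes, mtu):
    payload = mtu - 40                    # assumed usable payload per frame
    return -(-transfer_bytes // payload)  # ceiling division

transfer = 100 * 1024 * 1024              # a 100MB transfer
std = frames_needed(transfer, 1500)       # conventional Ethernet MTU
jumbo = frames_needed(transfer, 9000)     # jumbo frame MTU
print(std, jumbo)  # roughly 6x fewer frames with jumbo frames
```

Fewer frames per megabyte is precisely where the CPU efficiency gain for software iSCSI and NFS comes from.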
Ability to Hot-Extend Virtual Disks
Previously in VI3 you had to power down a VM before you could increase the size of its virtual disk. With vSphere you can increase the size of an existing virtual disk (vmdk file) while it is powered on as long as the guest operating system supports it. Once you increase the size of a virtual disk, the guest OS can then begin to use it to create new disk partitions or to extend existing ones. Supported operating systems include Windows Server 2008, Windows Server 2003 Enterprise and Datacenter editions, and certain Linux distributions.
Ability to Grow VMFS Volumes
With vSphere you can increase the size of Virtual Machine File System (VMFS) volumes without using extents and without disrupting VMs. In VI3, the only way to grow volumes was to join a separate LUN to the VMFS volume as an extent, which had some disadvantages. Now, with vSphere, you can grow the LUN of an existing VMFS volume using your SAN configuration tools and then expand the VMFS volume so that it uses the additional space.
Pluggable Storage Architecture
In vSphere, VMware has created a new modular storage architecture, called the Pluggable Storage Architecture (PSA), that allows third-party vendors to interface with certain storage functionality. The PSA allows vendors to create plug-ins for controlling storage I/O-specific functions such as multipathing. vSphere has built-in functionality that allows for fixed or round-robin path selection when multiple paths to a storage device are available. Vendors can expand on this and develop their own plug-in modules that allow for optimal performance through load balancing, and also provide more intelligent path selection. The PSA leverages the new capabilities provided by the vStorage APIs for multipathing to achieve this.
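To illustrate the kind of path selection policy a PSA plug-in implements, here is a toy round-robin selector in Python (real path selection plug-ins are VMkernel modules, so this is only a conceptual model):

```python
# Toy sketch of a round-robin path selection policy like the one built into
# the PSA: I/Os rotate across all live paths to a LUN, and a dead path is
# skipped until it recovers. Path names follow the vmhba naming convention
# but are illustrative.

class RoundRobinPSP:
    def __init__(self, paths):
        self.paths = paths
        self.dead = set()
        self.next = 0

    def mark_dead(self, path):
        self.dead.add(path)

    def select_path(self):
        for _ in range(len(self.paths)):
            path = self.paths[self.next % len(self.paths)]
            self.next += 1
            if path not in self.dead:
                return path
        raise IOError("all paths to the device are down")

psp = RoundRobinPSP(["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"])
print(psp.select_path(), psp.select_path())  # alternates between the two paths
psp.mark_dead("vmhba2:C0:T0:L0")
print(psp.select_path())                     # only the surviving path is used
```

A vendor plug-in would replace the simple rotation with load-aware logic, which is exactly the "more intelligent path selection" the PSA opens up.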
Paravirtualized SCSI Adapters
Paravirtualization is a technology that is available for certain Windows and Linux operating systems that utilize a special driver to communicate directly with the hypervisor. Without paravirtualization, the guest OS does not know about the virtualization layer and privileged calls are trapped by the hypervisor using binary translation. Paravirtualization allows for greater throughput and lower CPU utilization for VMs and is useful for disk I/O-intensive applications. Paravirtualized SCSI adapters are separate storage adapters that can be used for nonprimary OS partitions and can be enabled by editing a VM's settings and enabling the paravirtualization feature. This may sound similar to the VMDirectPath feature, but the key difference is that paravirtualized SCSI adapters can be shared by multiple VMs on host servers and do not require that a single adapter be dedicated to a single VM. We will cover paravirtualization in more detail in Chapter 5.
VMDirectPath for Storage I/O Devices
VMDirectPath is similar to paravirtualized SCSI adapters in that a VM can directly access host adapters and bypass the virtualization layer to achieve better throughput and reduced CPU utilization. It differs from paravirtualized SCSI adapters in that with VMDirectPath, you must dedicate an adapter to a VM and it cannot be used by any other VMs on that host. VMDirectPath is available for specific models of both network and storage adapters; however, currently only network adapters are fully supported in vSphere, and storage adapters have only experimental support (i.e., they are not ready for production use). Like pvSCSI adapters, VMDirectPath can be used for VMs that have very high storage or network I/O requirements, such as database servers. VMDirectPath enables virtualization of workloads that you previously might have kept physical. We will cover VMDirectPath in more detail in Chapter 3.
vStorage APIs
VMware introduced the vStorage APIs in vSphere, and they consist of a collection of interfaces that third-party vendors can leverage to seamlessly interact with storage in vSphere. They allow vSphere and its storage devices to come together for improved efficiency and better management. We will discuss the vStorage APIs in more detail in Chapter 5.
Storage Views and Alarms in vCenter Server
The storage view has selectable columns that will display various information, including the total amount of disk space that a VM is taking up (including snapshots, swap files, etc.), the total amount of disk space used by snapshots, the total amount of space used by virtual disks (showing the actual thin disk size), the total amount of space used by other files (logs, NVRAM, and config and suspend files), and much more. This is an invaluable view that will quickly show you how much space is being used in your environment for each component, as well as enable you to easily monitor snapshot space usage. The storage view also includes a map view so that you can see relationships among VMs, hosts, and storage components.
In VI3, alarms were very limited, and the only storage alarm in VI3 was for host or VM disk usage (in Kbps). With vSphere, VMware added hundreds of new alarms, many of them related to storage. Perhaps the most important alarm relates to percentage of datastore disk space used. This alarm will actually alert you when a datastore is close to running out of free space. This is very important when you have a double threat from both snapshots and thin disks that can grow and use up all the free space on a datastore. Also, alarms in vSphere appear in the status column in red, so they are more easily noticeable.
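The threshold behavior of a datastore space alarm can be sketched as follows; the 75%/85% thresholds here are illustrative assumptions rather than vCenter's exact defaults:

```python
# Toy sketch of the datastore space alarm logic described above: warn when
# used space crosses one threshold and alert when it crosses a higher one.
# Threshold percentages are illustrative, not necessarily vCenter defaults.

def datastore_alarm(capacity_gb, used_gb, warn_pct=75, alert_pct=85):
    pct = 100 * used_gb / capacity_gb
    if pct >= alert_pct:
        return "alert"        # shown in red in the vSphere Client status column
    if pct >= warn_pct:
        return "warning"
    return "ok"

print(datastore_alarm(500, 300))  # 60% used -> "ok"
print(datastore_alarm(500, 400))  # 80% used -> "warning"
print(datastore_alarm(500, 450))  # 90% used -> "alert"
```

Because both snapshots and thin disks grow silently, a percentage-based trigger like this catches the "double threat" well before the datastore actually fills.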
ESX and ESXi
The core architecture of ESX and ESXi has not changed much in vSphere. In fact, the biggest change was moving to a 64-bit architecture for the VMkernel. When ESXi was introduced in VI3, VMware announced that it would be its future architecture and that it would be retiring ESX and its Service Console in a future release. That didn't happen with vSphere, but this is still VMware's plan and it may unfold in a future major release. ESX and ESXi do feature a few changes and improvements in vSphere, though, and they include the following.
- Both the VMkernel and the Linux-based ESX Service Console are now 64-bit; in VI3, they were both 32-bit. VMware did this to provide better performance and greater physical memory capacity for the host server. Whereas many older servers only supported 32-bit hardware, most modern servers support 64-bit hardware, so this should no longer be an issue. Additionally, the ESX Service Console was updated in vSphere to a more current version of Red Hat Linux.
- Up to 1TB of physical memory is now supported in ESX and ESXi hosts, whereas previously in VI3, only 256GB of memory was supported. In addition, vSphere now supports 64 logical CPUs and a total of 320 VMs per host, with up to 512 virtual CPUs. This greatly increases the potential density of VMs on a host server.
- In VI3, VMware introduced a feature called Distributed Power Management (DPM) which enabled workloads to be redistributed so that host servers could be shut down during periods of inactivity to save power. However, in VI3, this feature was considered experimental and was not intended for production use, as it relied on the less reliable Wake on LAN technology. In vSphere, VMware added the Intelligent Platform Management Interface (IPMI) and iLO (HP's Integrated Lights-Out) as alternative, more reliable remote power-on methods, and as a result, DPM is now fully supported in vSphere.
- vSphere supports new CPU power management technologies called Enhanced SpeedStep by Intel and Enhanced PowerNow! by AMD. These technologies enable the host to dynamically switch CPU frequencies based on workload demands, which enables the processors to draw less power and create less heat, thereby allowing the fans to spin more slowly. This technique is called Dynamic Voltage and Frequency Scaling (DVFS), and is essentially CPU throttling; for example, a 2.6GHz CPU might be reduced to 1.2GHz because that is all that is needed to meet the current load requirements on a host. The use of DVFS with DPM can result in substantial energy savings in a datacenter. We will cover this feature in detail in Chapter 2.
The new 64-bit architecture that vSphere uses means that older 32-bit server hardware will not be able to run vSphere. We will cover this in detail in Chapter 2.
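The DVFS behavior described above amounts to choosing the lowest P-state that satisfies current demand; here is a toy sketch (the frequencies are illustrative):

```python
# Toy sketch of the DVFS idea: pick the lowest available CPU frequency
# (P-state) that still meets current demand, so the processor draws less
# power and produces less heat when the host is lightly loaded.

P_STATES_MHZ = [1200, 1600, 2000, 2600]   # available frequencies, ascending

def pick_frequency(demand_mhz):
    for freq in P_STATES_MHZ:
        if freq >= demand_mhz:
            return freq
    return P_STATES_MHZ[-1]               # demand exceeds the max: run flat out

print(pick_frequency(900))    # light load: throttle down to 1200MHz
print(pick_frequency(1900))   # moderate load: 2000MHz is enough
print(pick_frequency(3000))   # saturated: full 2600MHz
```

This is the 2.6GHz-to-1.2GHz throttling example from the text in miniature; combined with DPM powering off idle hosts entirely, it is where the datacenter energy savings come from.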
Virtual Machines
VMs received many enhancements in vSphere as the virtual hardware version went from version 4 (used in VI3) to version 7. These enhancements allow VMs to handle larger workloads than what they previously handled in VI3, and allow vSphere to handle almost any workload to help companies achieve higher virtualization percentages. The changes to VMs in vSphere include the following.
- Version 4 was the virtual hardware type used for VMs in VI3, and version 7 is the updated version that was introduced in vSphere. We'll cover virtual hardware in more detail in Chapter 3.
- In VI3, you could only assign up to four vCPUs and 64GB of RAM to a VM. In vSphere, you can assign up to eight vCPUs and 255GB of RAM to a VM.
- Many more guest operating systems are supported in vSphere compared to VI3, including more Linux distributions and Windows versions as well as new selections for Solaris, FreeBSD, and more.
- vSphere introduced a new virtual network adapter type called VMXNET3, which is the third generation of VMware's homegrown virtual NIC (vNIC). This new adapter provides better performance and lower I/O virtualization overhead than the previous VMXNET2 virtual network adapter.
- In VI3, only BusLogic and LSI Logic parallel SCSI storage adapter types were available. In vSphere, you have additional choices, including an LSI Logic SAS (serial attached SCSI) and a Paravirtual SCSI adapter. Additionally, you can optionally use an IDE adapter, which was not available in VI3.
- You can now add memory or additional vCPUs to a VM while it is powered on, as long as the guest operating system running on the VM supports this feature.
- In VI3, the display adapter of a VM was hidden and had no settings that could be modified. In vSphere, the display adapter is shown and has a number of settings that can be changed, including the memory size and the maximum number of displays.
- You can now add a USB controller to your VM, which allows it to access USB devices connected to the host server. However, although this option exists in vSphere, it is not supported yet, and is currently intended for hosted products such as VMware Workstation. VMware may decide to enable this support in vSphere in a future release as it is a much requested feature.
- vSphere introduced a new virtual device called Virtual Machine Communication Interface (VMCI) which enables high-speed communication between the VM and the hypervisor, as well as between VMs that reside on the same host. This is an alternative and much quicker communication method than using vNICs, and it improves the performance of applications that are integrated and running on separate VMs (i.e., web, application, and database servers).
As you can see, VMs are much more powerful and robust in vSphere. We will cover their many enhancements in detail in Chapter 3.
vCenter Server
vCenter Server has received numerous enhancements in vSphere that have made this management application for ESX and ESXi hosts much more usable and powerful. In addition to receiving a major overhaul, vCenter Server also has a simplified licensing scheme so that a separate license server is no longer required. Enhancements were made throughout the product, from alarms and performance monitoring, to configuration, reporting, and much more. Additionally, vCenter Server can scale better due to the addition of a new linked mode. The new features and enhancements to vCenter Server include the following.
- Host profiles enable centralized host configuration management using policies to specify the configuration of a host. Host profiles are almost like templates that you can apply to a host to easily change its configuration all at once, without having to manually change each setting one by one. This allows you to quickly configure a brand-new host and ensure that its settings are consistent with other hosts in the environment. You can use host profiles to configure network, storage, and security settings, and you can create profiles from scratch or copy them from an existing host that is already configured. Host profiles greatly simplify host deployment and can help to ensure compliance with datacenter standards. This feature is available only in the Enterprise Plus edition of vSphere.
- vCenter Server has limitations to the number of hosts and VMs that it can manage; therefore, multiple vCenter Servers are sometimes required. The new linked mode enables multiple vCenter Servers to be linked together so that they can be managed from a single vSphere client session, which enables easier and more centralized administration. Additionally, linked mode allows roles and licenses to be shared among multiple vCenter Servers.
- vApps create a resource container for multiple VMs that work together as part of a multitier application. vApps provide methods for setting power on options, IP address allocation, and resource allocation, and provide application-level customization for all the VMs in the vApp. vApps greatly simplify the management of an application that spans multiple VMs, and ensure that the interdependencies of the application are always met. vApps can be created in vCenter Server as well as imported and exported in the OVF format.
- A new licensing model was introduced in vSphere to greatly simplify license management. In VI3, you had a license server that ran as a separate application from vCenter Server and used long text files for license management. In vSphere, licensing is integrated into vCenter Server and all product and feature licenses are encapsulated in a 25-character license key that is generated by VMware's licensing portal.
- Alarms in vSphere are much more robust, and offer more than 100 triggers. In addition, a new Condition Length field can be defined when you are setting up triggers to help eliminate false alarms.
- More granular permissions can now be set when defining roles to grant users access to specific functionality in vSphere. This gives you much greater control and protection of your environment. You have many more permissions on datastores and networks as well, so you can control such actions as vSwitch configuration and datastore browser file controls.
- Performance reporting in vCenter Server using the built-in charts and statistics has improved so that you can look at all resources at once in a single overview screen. In addition, VM-specific performance counters are integrated into the Windows Perfmon utility when VMware Tools is installed to provide more accurate VM performance analysis.
- The Guided Consolidation feature, which analyzes physical servers in preparation for converting them to VMs, is now a plug-in to vCenter Server. This allows you to run the feature on servers other than the vCenter Server itself to reduce the resource load on the vCenter Server.
vCenter Server has many enhancements in vSphere that make it much more robust and scalable, and improve the administration and management of VMs. Also, many add-ons and plug-ins are available for vCenter Server that expand and improve its functionality. We will cover vCenter Server in more detail in Chapter 4.
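The host profile workflow described above—capture a reference configuration, check other hosts for drift, remediate—can be sketched with a toy model (the settings shown are illustrative, not a real profile schema):

```python
# Toy sketch of the host-profile idea: treat a reference host's settings as a
# profile, report which settings on another host drift from it, and apply the
# profile to bring the host into compliance. Real profiles cover far more.

reference = {"ntp": "10.0.0.1", "vswitch_mtu": 1500, "lockdown": False}

def check_compliance(host_config, profile):
    """Return only the settings that differ from the profile."""
    return {k: host_config.get(k) for k in profile
            if host_config.get(k) != profile[k]}

def apply_profile(host_config, profile):
    host_config.update(profile)   # remediate every profiled setting at once
    return host_config

host = {"ntp": "10.0.0.9", "vswitch_mtu": 1500}
print(check_compliance(host, reference))   # drift: wrong NTP, lockdown unset
apply_profile(host, reference)
print(check_compliance(host, reference))   # empty dict: host is now compliant
```

Replacing dozens of one-setting-at-a-time changes with a single apply step is exactly the time savings and consistency guarantee host profiles provide.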
Clients and Management
There are many different ways to manage and administer a VI3 environment, and VMware continued to improve and refine them in vSphere. Whether it is through the GUI client, web browser, command-line utilities, or scripting and APIs, vSphere offers many different ways to manage your virtual environment. Enhancements to management utilities in vSphere include the following.
- The VI3 Client is now called the vSphere Client and continues to be a Windows-only client developed using Microsoft's .NET Framework. The client is essentially the same in vSphere as it was in VI3, but it adds support for some of the latest Windows operating systems. The vSphere Client is backward compatible and can also be used to manage VI3 hosts.
- The Remote Command-Line Interface (RCLI) in VI3, which was introduced to manage ESXi hosts (but which can also manage ESX hosts), is now called the vSphere CLI and features a few new commands. The vSphere CLI is backward compatible and can also manage ESX and ESXi hosts at version 3.5 Update 2 or later.
- VMware introduced a command-line management virtual appliance in VI3, called the Virtual Infrastructure Management Assistant (VIMA), as a way to centrally manage multiple hosts at once. In vSphere, it goes by the name of vSphere Management Assistant (vMA). Whereas the vSphere CLI is the command-line counterpart of the vSphere Client, the vMA is essentially the command-line counterpart of vCenter Server. Most of the functionality of the vMA in vSphere is the same as in the previous release.
- VMware renamed its PowerShell API from VI Toolkit 1.5 in VI3 to PowerCLI 4.0 in vSphere. The PowerCLI is largely unchanged from the previous version, but it does include some bug fixes plus new cmdlets to interface with the new host profiles feature in vSphere.
- The web browser access method to connect to hosts or vCenter Server to manage VMs is essentially the same in vSphere. VMware did include official support for Firefox in vSphere, and made some cosmetic changes to the web interface, but not much else.
We will cover all of these features in more detail in Chapter 10.
Networking
Although networking in vSphere has not undergone substantial changes, VMware did make a few significant changes in terms of virtual switches (vSwitches). The most significant new networking features in vSphere are the introduction of the distributed vSwitch and support for third-party vSwitches. The new networking features in vSphere include the following.
- A new centrally managed vSwitch called the vNetwork Distributed Switch (vDS) was introduced in vSphere to simplify management of vSwitches across hosts. A vDS spans multiple hosts, and it needs to be configured and set up only once and then assigned to each host. Besides being a big time-saver, this can help to eliminate configuration inconsistencies that can make vMotion fail to work. Additionally, the vDS allows the network state of a VM to travel with it as it moves from host to host.
- VMware provided the means for third-party vendors to create vSwitches in vSphere. The first to be launched with vSphere is the Cisco Nexus 1000v. In VI3, the vSwitch was essentially a dumb, nonmanageable vSwitch with little integration with the physical network infrastructure. By allowing vendors such as Cisco to create vSwitches, VMware has improved the manageability of the vSwitch and helped to integrate it with traditional physical network management tools.
- Support for Private VLANs was introduced in vSphere to allow communication between VMs on the same VLAN to be controlled and restricted.
- As mentioned earlier, VMware also introduced a new third-generation vNIC, called the VMXNET3, which includes the following new features: VLAN offloading, large TX/RX ring sizes, IPv6 checksum and TSO over IPv6, receive-side scaling (supported in Windows 2008), and MSI/MSI-X support.
- Support for IP version 6 (IPv6) was enabled in vSphere; this includes the networking in the VMkernel, Service Console, and vCenter Server. Support for using IPv6 for network storage protocols is considered experimental (not recommended for production use). Mixed environments of IPv4 and IPv6 are also supported.
The networking enhancements in vSphere greatly improve networking performance and manageability, and by allowing third-party vendors to develop vSwitches, VMware can allow network vendors to continue to offer more robust and manageable alternatives to VMware's default vSwitch. We will cover the new networking features in more detail in Chapter 6.
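A toy model shows why the vDS matters for vMotion: with standard vSwitches, each host holds its own copy of the port-group configuration and any mismatch breaks migration, whereas a vDS is defined once and shared by every host (the configuration fields below are illustrative):

```python
# Toy sketch: per-host standard vSwitch configs can drift apart, and vMotion
# requires matching port groups on source and destination. A distributed
# vSwitch is a single shared definition, so drift cannot occur.

standard_hosts = [
    {"port_groups": {"Prod": {"vlan": 10}}},
    {"port_groups": {"Prod": {"vlan": 20}}},   # typo on host 2: wrong VLAN
]

def vmotion_ok(src, dst, port_group):
    """vMotion needs an identically configured port group on both hosts."""
    return src["port_groups"].get(port_group) == dst["port_groups"].get(port_group)

print(vmotion_ok(standard_hosts[0], standard_hosts[1], "Prod"))  # False

vds = {"Prod": {"vlan": 10}}                        # defined once on the vDS
vds_hosts = [{"port_groups": vds} for _ in range(2)]  # every host shares it
print(vmotion_ok(vds_hosts[0], vds_hosts[1], "Prod"))            # True
```

Because all hosts reference the same object, the configuration inconsistency that broke the first migration simply cannot arise with a vDS.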
Security
Security is always a concern in any environment, and VMware made some significant enhancements to an already pretty secure platform in vSphere. The biggest new feature is the new VMsafe API that allows third-party vendors to better integrate into the hypervisor to provide better protection and less overhead. The new security features in vSphere include the following.
- VMware created the VMsafe APIs as a means for third-party vendors to integrate with the hypervisor to gain better access to the virtualization layer so that they would not have to use less-efficient traditional methods to secure the virtual environment. For example, many virtual firewalls have to sit inline between vSwitches to be able to protect the VMs running on the vSwitch. All traffic must pass through the virtual firewall to get to the VM; this is both a bottleneck and a single point of failure. Using the VMsafe APIs you no longer have to do this, as a virtual firewall can leverage the hypervisor integration to listen in right at the VM's NIC and to set rules as needed to protect the VM.
- vShield Zones is a virtual firewall that can use rules to block or allow specific ports and IP addresses. It also does monitoring and reporting and can learn the traffic patterns of a VM to provide a basic rule set. Although not as robust as some of the third-party virtual firewalls available today, it does provide a good integrated method of protecting VMs. We will discuss vShield Zones in more detail in Chapter 6.
The security enhancements in vSphere are significant and make an already safe product even more secure. Protection of the hypervisor in any virtual environment is critical, and vSphere provides the comfort you need to know that your environment is well protected.
Availability
Availability is critical in virtual environments, and in VI3, VMware introduced some new features, such as High Availability (HA), that made recovery from host failures an easy and automated process. Many people are leery of putting a large number of VMs on a host because a failure can affect so many servers running on that host, so the HA feature was a good recovery method. However, HA is not continuous availability, and there is a period of downtime while VMs are restarted on other hosts. VMware took HA to the next level in vSphere with the new Fault Tolerance (FT) feature, which provides zero downtime for a VM in case a host fails. The new features available in vSphere include the following.
- FT provides true continuous availability for VMs that HA could not provide. FT uses a CPU technology called Lockstep that is built into certain newer models of Intel and AMD processors. It works by keeping a secondary copy of a VM running on another host server which stays in sync with the primary copy by utilizing a process called Record/Replay that was first introduced in VMware Workstation. Record/Replay works by recording the computer execution of the primary VM and saving it into a log file; it can then replay that recorded information on a secondary VM to have a replica copy that is a duplicate of the original VM. In case of a host failure, the secondary VM becomes the primary VM and a new secondary is created on another host. We will cover the FT feature in more detail in Chapter 9.
- VMware introduced another new product as part of vSphere, called VMware Data Recovery (VDR). Unlike vShield Zones, which was a product VMware acquired, VDR was developed entirely by VMware to provide a means of performing backup and recovery of VMs without requiring a third-party product. VDR creates hot backups of VMs to any virtual disk storage attached to an ESX/ESXi host or to any NFS/CIFS network storage server or device. An additional feature of VDR is its ability to reduce storage requirements through data de-duplication, using block-based in-line destination de-duplication technology that VMware developed. VDR is built to leverage the new vStorage APIs in vSphere and is not compatible with VI3 hosts and VMs. VDR can only do backups at the VM level (VM image) and does not do file-level backups; full backups are performed initially and subsequent backups are incremental. It can, however, restore individual files through its file-level restore (FLR) capability, available for both Windows (GUI) and Linux (CLI). We will cover VDR in more detail in Chapter 8.
- VMware made some improvements to HA in vSphere, and they include an improved admission control policy whereby you can specify the number of host failures that a cluster can tolerate, the percentage of cluster resources to reserve as failover capacity, and a specific failover host. Additionally, a new option is available to disable the host monitoring feature (heartbeat) when doing network maintenance to avoid triggering HA when hosts become isolated. We will cover HA in more detail in Chapter 9.
The FT feature is a big step forward for VMware in providing better availability for VMs. While FT is a great feature, it does have some strict limitations and requirements that restrict its use. We will cover the details in Chapter 10.
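The block-based in-line de-duplication VDR performs can be sketched conceptually as hash-and-reference storage (a toy model, not VDR's actual on-disk format):

```python
# Toy sketch of block-based in-line destination de-duplication: each incoming
# block is hashed as it arrives, and a block whose hash has been seen before
# is stored only once; later occurrences become cheap references.

import hashlib

def dedup_store(blocks):
    store = {}        # hash -> block data, stored only once
    manifest = []     # ordered hashes that reconstruct the backup
    for block in blocks:
        digest = hashlib.sha1(block).hexdigest()
        if digest not in store:
            store[digest] = block   # first time we see this data: keep it
        manifest.append(digest)     # duplicates cost only a manifest entry
    return store, manifest

blocks = [b"kernel", b"zeros", b"zeros", b"data", b"zeros"]
store, manifest = dedup_store(blocks)
print(len(blocks), len(store))     # 5 logical blocks, only 3 stored
restored = [store[h] for h in manifest]
print(restored == blocks)          # True: the backup reconstructs exactly
```

Since many VMs share identical OS blocks, this is why de-duplicated VM backups can be dramatically smaller than the sum of the source disks.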
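The improved HA admission control policy boils down to a capacity check; here is a simplified single-resource sketch of the "host failures tolerated" idea (all numbers are illustrative):

```python
# Toy sketch of an HA admission-control check: a new VM is admitted only if,
# after losing the N largest hosts, the surviving capacity still covers every
# powered-on VM's reservation. Simplified to one resource (GB of memory).

def can_power_on(host_capacities, vm_reservations, new_vm_reservation,
                 failures_tolerated=1):
    caps = sorted(host_capacities)                    # ascending order
    survivors = caps[:len(caps) - failures_tolerated] # assume the largest hosts fail
    return sum(survivors) >= sum(vm_reservations) + new_vm_reservation

hosts = [32, 32, 32]              # three hosts with 32GB each
vms = [10, 10, 10, 10]            # 40GB already reserved by running VMs
print(can_power_on(hosts, vms, 8))    # 48GB fits in the 64GB that survives
print(can_power_on(hosts, vms, 30))   # 70GB would not fit: power-on refused
```

Real HA also considers CPU and lets you reserve a percentage of cluster resources or a dedicated failover host instead, but the worst-case arithmetic is the same idea.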
Compatibility and Extensibility
VMware continually expands its support for devices, operating systems, and databases, as well as its API mechanisms that allow its products to integrate better with other software and hardware. With vSphere, VMware has done this again by way of the following new compatibility and extensibility features.
- In VI3, ESX and ESXi only supported the use of internal SCSI disks. vSphere now also supports the use of internal SATA disks to provide more cost-effective storage options.
- In addition to supporting more guest operating systems, vSphere also supports the ability to customize additional guest operating systems such as Windows Server 2008, Ubuntu 8, and Debian 4.
- vCenter Server supports additional operating systems and databases including Windows Server 2008, Oracle 11g, and Microsoft SQL Server 2008.
- vSphere Client is now supported on more Windows platforms, including Windows 7 and Windows Server 2008.
- As mentioned previously, the vStorage APIs allow for much better integration with storage, backup, and data protection applications.
- A new Virtual Machine Communication Interface (VMCI) API allows application vendors to take advantage of the fast communication channel between VMs that VMCI provides.
- A new Common Information Model (CIM)/Systems Management Architecture for Server Hardware (SMASH) API allows hardware vendors to integrate directly into the vSphere Client so that hardware information can be monitored and managed without requiring that special hardware drivers be installed on the host server. In addition, a new CIM interface for storage based on the Storage Management Initiative-Specification (SMI-S) is also supported in vSphere.
As you can see, the enhancements and improvements VMware has made in vSphere are compelling reasons to upgrade to it. From better performance to new features and applications, vSphere is much improved compared to VI3 and is a worthy successor to an already great virtualization platform.