Microsoft Hyper-V holds the promise of consolidation and better utilization, but how do you know if certain servers belong on a Hyper-V host?
There are plenty of opinions out there, largely from database administrators and developers concerned about the resources available to their applications. There are also plenty of stories about substandard performance from virtualized applications, but that is usually because the migration wasn't fully assessed beforehand.
Previously, we looked at some of the most common mistakes involved with deploying virtual machines, but that mainly covered what you shouldn't do. So what about what you need to do? Here are some of the key considerations to keep in mind regarding Hyper-V virtual machine deployments.
CPU considerations

If you are coming from the physical world, the mantra is "more is better." When sharing resources across several machines, however, be sure to take a closer look at your actual needs. Servers reported to have intensive CPU requirements should be scrutinized. While you can expect a 2% to 12% performance sacrifice when migrating to a Hyper-V virtual machine, that is not normally a breaking point for most systems, especially if you are moving to better processors on the virtual host.
The simple fact is that many physical systems are over-engineered. Take PerfMon readings from the physical server to determine how much utilization the server really sees. You'll have up to four virtual processors to assign to the virtual machine, but you'll need to balance that against the overall utilization on the host.
When judging the performance of a virtual machine, use the performance counters to determine how the machine is performing for that application. To determine its impact on the host, load PerfMon on the host machine and look at the following counters:
\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time
\Hyper-V Hypervisor Virtual Processor(*)\% Hypervisor Run Time
These counters show how much of the processor the guest is actually using and how much is going toward hypervisor management.
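To illustrate how those two counters combine (using made-up sample values, not real PerfMon output), the guest's own work and the hypervisor's management cost add up to the total load on a virtual processor:

```python
# Hypothetical sampled values for one virtual processor, in percent.
guest_run_time = 38.0       # % Guest Run Time: work done by the guest OS itself
hypervisor_run_time = 7.0   # % Hypervisor Run Time: hypervisor management cost

# Total busy time for this virtual processor, and the share lost to overhead.
total_busy = guest_run_time + hypervisor_run_time
overhead_share = hypervisor_run_time / total_busy
print(f"total busy: {total_busy:.0f}%, hypervisor share: {overhead_share:.0%}")
```

A consistently high hypervisor share relative to guest run time suggests the host, not the workload, is consuming the cycles.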
When assigning processors to virtual machines, a 1:1 assignment of physical cores to virtual processors provides the absolute best performance, but it usually leaves processors sitting idle. A 2:1 virtual CPU to physical core ratio is more common and makes better use of resources, but it can hurt performance for CPU-intensive systems. If you cannot give a machine the right CPU resources, or find them too expensive to dedicate to a single machine, that may be a reason to keep the machine out of your virtual environment.
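The ratio guidance above reduces to simple arithmetic; this hypothetical helper (not part of any Hyper-V tooling) shows the calculation:

```python
def vcpu_to_core_ratio(total_vcpus, physical_cores):
    """Return the virtual-CPU-to-physical-core ratio for a host."""
    return total_vcpus / physical_cores

# A host with 8 physical cores running VMs assigned 16 vCPUs in total
# sits at the common 2:1 ratio; 8 vCPUs total would be the 1:1 best case.
print(f"{vcpu_to_core_ratio(16, 8):.0f}:1")
```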
Memory considerations

While some might consider virtual memory management both a blessing and a curse, remember that Hyper-V doesn't have the same memory decision points as VMware, with its ability to over-commit memory. From a performance standpoint, the assigned memory is the available memory. Allow the host operating system at least 512 MB -- 2 GB is recommended -- and budget overhead of 32 MB per virtual machine for the first 1 GB of RAM assigned, plus 8 MB for each additional 1 GB.
The next step is to assign memory as you would a physical machine based on a simple calculation. If the physical server requires 8 GB of RAM, assign 8 GB to the virtual server. The option to assign additional memory will not be available until the release of Windows Server 2008 R2 Service Pack 1, which allows more memory management flexibility. For now, the decision to virtualize comes down to available memory in your virtual environment.
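The memory overhead rule above (32 MB for the first 1 GB assigned, 8 MB per additional 1 GB) can be sketched as a planning calculation; the helper below is a hypothetical sketch, not a Microsoft-supplied formula:

```python
def vm_memory_overhead_mb(assigned_gb):
    """Estimate per-VM host memory overhead in MB:
    32 MB for the first 1 GB of assigned RAM, plus
    8 MB for each additional 1 GB."""
    return 32 + 8 * max(assigned_gb - 1, 0)

# An 8 GB virtual machine costs the host about 88 MB of overhead,
# on top of the 8 GB it is assigned.
print(vm_memory_overhead_mb(8), "MB")
```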
Storage I/O considerations
When deploying Hyper-V virtual machines, storage I/O may draw objections because Hyper-V puts the hard drive inside a file by default. While convenient for management, this can translate to a loss of performance control. Just remember that storage requirements for performance don't change because you are on a virtual machine. For high-performance applications such as online transaction processing (OLTP) databases, it is still important to give the machine the I/O bandwidth it deserves.
Implementing the right I/O means presenting the right kind of storage to the application. In the SQL Server example, it is ideal to have a separate RAID 1 set for the transaction log files and a separate RAID 5 (or equivalent) set for the data files. Using storage reserved for the Hyper-V server without considering the ramifications of its configuration is bound to cause problems, so plan storage for each virtual machine just as you would for each physical machine.
For high-performance I/O, use a dedicated volume that holds a single fixed-size virtual hard disk (VHD) file, or use a pass-through disk for direct access to the storage system; the performance differences between the two are minor. With a pass-through disk you lose the ability to create snapshots or perform host-based backups, but the upside is that you are accessing existing volumes and using existing logical unit numbers (LUNs).
A larger LUN can be presented to a Hyper-V host by using striped metaLUNs to push the load across multiple RAID groups on the storage area network (SAN). Avoid using dynamically expanding disks when performance is of the utmost concern. With these considerations addressed, there is little standing in the way of virtualizing a server, even when high-performing disk I/O is required.
Other concerns with Hyper-V virtual machine deployments
On some servers, migrating to Hyper-V virtual machines will not be so easy, as specific hardware connections don't play well with Hyper-V. For example, accessing a SCSI interface for a directly connected tape library is generally not supported. Although it can be configured through some trickery with an iSCSI initiator, don't expect support from Microsoft or your backup software support line.
Legacy machines -- those running Windows 2000 or earlier -- may also be cause for concern. If you are virtualizing a legacy operating system, use a legacy network adapter. Note that this emulated adapter can be quite expensive for your host in CPU terms, depending on the other virtual machines sharing that host. Also consider leaving Integration Services disabled to avoid known stability problems with some Windows updates on older operating systems. If you are managing a larger number of virtual machines, however, this may be more trouble than it's worth. Instead of using Hyper-V to extend the life of legacy software, take the opportunity to move to software that runs on a supported operating system.
This also applies to non-Microsoft operating systems. For instance, be aware of driver issues with video, network and clock sync, as well as other concerns that may arise when running a flavor of Linux that is not on the Microsoft supported list. Managing a basic Linux installation that isn't fully compatible may not be a big deal due to the command-line nature of many installations and the light-weight quality of some Linux software, but test the system thoroughly before committing to hosting a Linux virtual machine that does not support Integration Services.
Some servers don't belong on a Hyper-V machine because of their function. For example, best practices dictate that at least one Active Directory domain controller should be a physical server, as the potential for mistakes is all too real when you have the ability to take snapshots and move VHD files. There is also the risk of hosting all of your DCs on a single host. It's not necessarily performance that's the problem, but the ability to cause site-wide corruption of the Active Directory database through VHD mistakes or single points of failure.
Also consider security requirements when performing Hyper-V virtual machine deployments. If you have an at-risk server, in the demilitarized zone (DMZ) for example, you'll want to avoid hosting that server on the same host as your database server. If you don't have a host to dedicate to servers that belong to a lower security level, leave them physically dedicated to avoid potential security issues from sharing the same host.
The bottom line
Although many people take a contrarian view of virtualizing their servers out of fear of the unknown, it is important to understand the performance and visibility opportunities virtualization provides. By assessing each system individually and addressing server concerns with the proper respect and information, you can sell admins on possibilities that don't exist with the systems they have today.
ABOUT THE AUTHOR
Eric Beehler has been working in the IT industry since the mid-'90s, and had been playing with computer technology well before that. He currently provides consulting and training through his co-ownership in Consortio Services, LLC.
This was first published in April 2010