In the enterprise world, virtualization has typically run on high-end, fiber-attached SAN storage. As a budget-constrained administrator, you may panic over SAN equipment's high price tag and conclude that shared storage is simply out of reach.
The purpose of iSCSI is to take commodity hardware and press it into service as shared storage. SCSI commands and data are wrapped into Ethernet frames and sent over common Ethernet networks. Ethernet is everywhere: even smaller businesses have gigabit switches, and high-end Ethernet cards are built into their servers. This approach has become a fairly high-performing alternative to the investment required for a proper SAN. Hyper-V needs shared storage to take advantage of high availability (HA), live migration and automatic failover in Windows Server 2008 R2 (the situation changes in Windows Server 2012, which includes several storage improvements).
When looking at iSCSI implementations, you'll want to be careful to implement the right options. Unlike SAN hardware, which is produced and priced as high-performance gear, iSCSI solutions are all over the map in price and performance. There are big, high-performance products from major players like EMC, Dell and NetApp, as well as offerings from smaller players such as QNAP and NETGEAR, which combine cheaper, off-the-shelf components like commodity hard drives and NAS-class x86 CPUs with Linux kernels. Then there is the option to turn an extra server, or any computer for that matter, into an iSCSI target. Using a typical file server or NAS with direct-attached storage as an iSCSI storage solution is popular in labs and other small environments. You do get what you pay for: an EMC option may start at $20,000, but you'll get genuine high-end performance, while a NAS product using desktop-class SATA hard drives will deliver significantly lower IOPS. Whether that level of performance is acceptable depends on your environment, so do some load testing with a tool such as IOMeter, an open-source storage testing suite.
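Before reaching for a full tool like IOMeter, you can get a crude feel for random-read throughput with a few lines of Python. This is only a sanity check, not a benchmark: the reads below will mostly hit the OS page cache, and real iSCSI testing should run against the mounted iSCSI volume with a proper tool. The file name and sizes here are arbitrary.

```python
import os
import random
import time

def rough_read_iops(path, block_size=4096, duration=2.0):
    """Count how many random block-sized reads complete in
    `duration` seconds. A rough sanity check only -- use IOMeter
    or a similar suite for real load testing."""
    blocks = os.path.getsize(path) // block_size
    ops = 0
    end = time.monotonic() + duration
    with open(path, "rb") as f:
        while time.monotonic() < end:
            f.seek(random.randrange(blocks) * block_size)
            f.read(block_size)
            ops += 1
    return ops / duration

# Build a small scratch file so the probe has something to read.
with open("scratch.bin", "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))  # 4 MiB

print(f"~{rough_read_iops('scratch.bin'):.0f} reads/sec (likely cached)")
os.remove("scratch.bin")
```

Numbers from a cached local file will be wildly optimistic compared with storage traffic that actually crosses the wire, which is exactly why measuring against the real iSCSI target matters.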
iSCSI drivers have classically been similar to SAN HBA drivers: custom and finicky. If you plan on using a single-vendor storage solution, ensure you are using the vendor's iSCSI software for your specific HBA. In addition, Microsoft has now released standard iSCSI target software that enables iSCSI for almost any storage on your network. This software was previously available only for Windows Storage Server but is now a free download, and it is a fine way to start testing your iSCSI target.
It's important to understand the differences in technology and terminology between iSCSI and a Fibre Channel SAN. Instead of using a World Wide Name (WWN) to identify nodes, as on a Fibre Channel SAN, iSCSI devices can be identified by IP address, DNS domain name, the MAC address of the network card or the iSCSI Qualified Name (IQN). When you walk through an iSCSI target software setup, you will be prompted for this type of information, so be sure you've diagrammed your implementation with your network interface cards (NICs) and IP addresses and specified how they will connect to your network switches.
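As a rough illustration of the IQN naming convention (defined in RFC 3720: `iqn.`, the year-month the naming authority registered its domain, the reversed domain name, then an optional colon-separated identifier), here is a loose structural check in Python. The target name in the example is hypothetical, though the `iqn.1991-05.com.microsoft` prefix is Microsoft's well-known default.

```python
import re

# Loose shape of an iSCSI Qualified Name per RFC 3720:
# "iqn." + yyyy-mm + reversed domain name + optional ":suffix".
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")

def is_valid_iqn(name: str) -> bool:
    """Structural check only; it does not verify the domain exists."""
    return bool(IQN_RE.match(name))

print(is_valid_iqn("iqn.1991-05.com.microsoft:target1"))  # True
print(is_valid_iqn("eui.02004567A425678D"))               # False (EUI-64 name, not an IQN)
```

Knowing the format helps when a target setup wizard asks you to type or confirm initiator and target names by hand.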
To provide high availability, you'll need to implement at least two Hyper-V hosts in a cluster. Although it sounds difficult, most of the clustering magic comes down to having the right systems and a bit of configuration. Ensure you have two like systems running the same OS version, service pack level and patches, and that those systems have enough NICs to support your regular client network, your iSCSI storage and your Live Migration traffic. You will need a separate network segment for each, so don't put your iSCSI traffic on the client LAN and expect good performance.
Although you can implement iSCSI with existing network gear, you'll want to give it as much bandwidth as it needs. If you are using existing equipment, ensure that the Ethernet switch you plan to use can provide plenty of bandwidth, at gigabit speed or better. Double-check everything to validate that the network doesn't have connectivity issues and that the NICs aren't logging errors on their interfaces. When pushing storage inside Ethernet frames, the rule is the bigger the packet, the better. It's best to use NICs and switches that support jumbo frames and configure the NICs for that larger frame size. You can also take advantage of NICs that feature a TCP/IP Offload Engine (TOE) to gain an extra 5% to 10% in CPU efficiency. This kind of NIC is not necessary in most cases, but if you have unpredictable high loads on your Hyper-V environment, it can only help.
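A quick back-of-the-envelope calculation shows why jumbo frames help: per-frame overhead (Ethernet header, FCS, preamble and inter-frame gap, plus the IP and TCP headers inside) is fixed, so a larger MTU means more of each frame carries actual payload. The sketch below is simplified: it ignores the iSCSI PDU header and TCP options.

```python
def payload_efficiency(mtu: int,
                       eth_overhead: int = 38,   # header + FCS + preamble + inter-frame gap
                       ip_tcp_headers: int = 40  # basic IPv4 + TCP headers, no options
                       ) -> float:
    """Fraction of on-wire bytes carrying payload for a full frame
    at the given MTU (simplified model)."""
    wire_bytes = mtu + eth_overhead
    payload = mtu - ip_tcp_headers
    return payload / wire_bytes

print(f"1500-byte MTU: {payload_efficiency(1500):.1%}")  # ~94.9%
print(f"9000-byte MTU: {payload_efficiency(9000):.1%}")  # ~99.1%
```

The efficiency gain looks modest on paper, but jumbo frames also cut the per-packet processing load roughly sixfold, which is where much of the practical benefit for storage traffic comes from.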
When setting up your storage with Hyper-V, consider going with Cluster Shared Volumes, which allow each host to access the shared storage simultaneously. Another common SAN feature is multipath I/O, which provides redundant paths to the same storage. This is not difficult to set up with Ethernet, and it is highly recommended: a few extra NIC ports and an extra switch keep your storage path from becoming a single point of failure.
Finally, get familiar with your iSCSI implementation, and don't place mission-critical virtual servers on that storage until you are ready to declare its speed and configuration ready for prime time. Since there are so many possible configuration differences, from the speed of the disks to the networking equipment, you'll want to validate both performance and failover capability. As with everything in the Microsoft universe, your servers and other network equipment need to be certified for Windows Server 2008 R2. Luckily, that qualifies many servers and peripherals.
ABOUT THE AUTHOR
Eric Beehler has been working in the IT industry since the mid-1990s, and had been playing with computer technology well before that. His experience includes over nine years with Hewlett-Packard's Managed Services division, working with Fortune 500 companies to deliver network and server solutions, and, most recently, IT experience in the insurance industry working on highly available solutions and disaster recovery. He currently provides consulting and training through his co-ownership in Consortio Services, LLC.
This was first published in June 2012