
How to build a bulletproof Hyper-V failover cluster

Before deploying a Hyper-V failover cluster, heed these design instructions to keep VM performance from lagging due to a faulty configuration.

Windows Server admins have enough uncertainty in their lives. One way to reduce the stress from an overflowing workload is to deploy a Hyper-V failover cluster.

Failover clusters ensure Hyper-V VMs continue to run when a problem knocks a host out of commission. But admins need to set up the cluster properly -- paying special attention to the network configuration -- to make sure the Hyper-V cluster and the apps inside the VMs perform at an optimal level in production.

Get to know the Hyper-V cluster traffic types

To optimize Hyper-V failover cluster performance, admins must understand the Hyper-V traffic types and configure Hyper-V networking around their requirements. Hyper-V carries several distinct traffic types over the host's physical network adapters: cluster communication, live migration, VM traffic, storage, Hyper-V Replica and Hyper-V management traffic.
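
A practical first step is to inventory what each host has to work with before deciding how to distribute these traffic types. A minimal PowerShell sketch -- adapter names and output will vary per host:

    # List the physical network adapters and their link state
    Get-NetAdapter -Physical | Sort-Object Name |
        Format-Table Name, InterfaceDescription, Status, LinkSpeed

    # List any virtual adapters already exposed to the management OS
    Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName, MacAddress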

The cluster service monitors the availability of all nodes in the cluster by sending health check packets -- known as cluster heartbeats -- over the physical network adapter. A node that doesn't respond to heartbeats is removed from active cluster membership, and its VMs fail over to the remaining nodes.
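
The heartbeat interval and the number of missed heartbeats a node may tolerate are tunable cluster properties. A quick sketch for inspecting and, if needed, loosening them -- the threshold value below is an illustrative assumption, not a recommendation:

    # View the current heartbeat tuning for the cluster
    Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold

    # Example only: tolerate 10 missed heartbeats between nodes on the same subnet
    (Get-Cluster).SameSubnetThreshold = 10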

Hyper-V's live migration feature moves a running VM to another Hyper-V node in the cluster without downtime, such as when a host must be drained for maintenance. To do this, Hyper-V uses the same physical network adapter that carries the other Hyper-V traffic types. If Scale-Out File Server (SOFS) or iSCSI Target Server is deployed, Hyper-V uses that same adapter to communicate with the SOFS cluster or iSCSI Target Server. Similarly, Hyper-V Replica and management traffic share the same physical network adapter.
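
Before reshuffling adapters, it helps to confirm how live migration is currently configured on each host. A small sketch with the built-in Hyper-V cmdlets:

    # Check whether live migration is enabled and how many migrations can run at once
    Get-VMHost | Format-List VirtualMachineMigrationEnabled, MaximumVirtualMachineMigrations, VirtualMachineMigrationAuthenticationType

    # Turn live migration on if it isn't already
    Enable-VMMigration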

While it's possible to run all Hyper-V network traffic types through a single network adapter, that configuration is rarely suitable for production environments. Some networking applications that run inside the VMs need a dedicated network path to avoid communication delays, and multiple physical network adapters help live-migrate VMs as quickly as possible without disrupting other traffic. Some IT shops also prefer a physical network adapter dedicated solely to management traffic.

To isolate Hyper-V network traffic, install the appropriate number of physical network adapters on the Hyper-V host, map each one to a Hyper-V virtual switch and then assign a unique subnet to each virtual network adapter. To complete the setup, use Failover Cluster Manager or Hyper-V Manager to configure the network settings to isolate network traffic.
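
As a sketch of that setup in PowerShell -- the adapter name, switch name and subnet below are placeholders, not values from this article:

    # Bind a dedicated physical adapter to a new external virtual switch
    New-VMSwitch -Name "LM-Switch" -NetAdapterName "NIC2" -AllowManagementOS $false

    # Expose a host virtual adapter on that switch and give it its own subnet
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "LM-Switch"
    New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 192.168.50.11 -PrefixLength 24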

For example, to isolate live migration traffic, open Failover Cluster Manager, right-click Networks in the left navigation pane, click Live Migration Settings and select the networks to use for live migration. Admins can isolate cluster traffic in a similar fashion: go to Networks, right-click a network and click Properties. On the Properties page, select Allow cluster network communication on this network and uncheck Allow clients to connect through this network. This dedicates the network to cluster-specific communication.
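
The same result is available from PowerShell through the cluster network's Role property, where 0 means no cluster use, 1 means cluster-only communication and 3 means cluster and client. The network name below is a placeholder:

    # List cluster networks and their current roles
    Get-ClusterNetwork | Format-Table Name, Role, Address

    # Role 1 = cluster communication allowed, client connections blocked
    (Get-ClusterNetwork "Cluster Network 2").Role = 1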

NIC teaming delivers resiliency

Microsoft introduced network interface card (NIC) teaming in Windows Server 2012 to let admins group several physical NICs into a single logical adapter to add redundancy and aggregate bandwidth.

NIC teaming in Windows Server 2012 provides a Hyper-V Port load balancing mode that distributes VM network traffic based on the VM's MAC address or virtual switch port. VMs are assigned to team members in round-robin fashion, and each VM's outbound traffic flows through a single active network adapter in the team.

NIC teaming supports two modes: switch-dependent and switch-independent. Switch-dependent mode requires the physical switch to participate in the team, so every team member must connect to the same switch. Switch-independent mode doesn't require any switch participation, so the team's adapters can connect to different physical switches. For Hyper-V clusters, the recommended configuration is switch-independent mode with Hyper-V Port load balancing.
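
A minimal sketch of that recommended configuration with the built-in NetLbfo cmdlets -- the team and adapter names are placeholders:

    # Create a switch-independent team with Hyper-V Port load balancing
    New-NetLbfoTeam -Name "HVTeam" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # Bind a Hyper-V virtual switch to the team's logical adapter
    New-VMSwitch -Name "VMSwitch" -NetAdapterName "HVTeam"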

VMQ avoids unnecessary congestion

If the Hyper-V host has network adapters that support the Virtual Machine Queue (VMQ) feature, admins should enable it.

VMQ establishes dedicated queues on the physical network adapter that deliver incoming packets directly to the virtual network adapters -- rather than routing them through the management OS first -- which prevents unnecessary processing overhead and communication delays.
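
Admins can check which adapters support VMQ and enable it where appropriate; the adapter name below is a placeholder:

    # Show VMQ capability and state for each adapter
    Get-NetAdapterVmq | Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors

    # Enable VMQ on a specific physical adapter
    Enable-NetAdapterVmq -Name "NIC1"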

High availability through a Hyper-V failover cluster offers some level of assurance to the business, but admins should learn the available settings and options to keep the applications in clustered VMs operating at the expected level.

Next Steps

How to construct a solid Hyper-V failover cluster

Key features in Hyper-V 2016 on Windows Server

Check on Hyper-V health with PowerShell
