
Digging deep in Windows Server storage QoS

In the newest preview of Windows Server, storage QoS helps administrators get the most out of their storage by setting minimum and maximum IOPS counts.

Resource management is a common theme among consolidation stories, and making sure workloads within virtual machines perform as you expect is part of the job. In the Windows Server technical preview, storage quality of service can help rein in rogue VMs to stabilize the entire infrastructure.

Most Hyper-V deployments have a shared storage unit -- either directly attached to the host running the virtual machines (VMs), provided through network-attached storage, or delivered by a big-iron-style storage area network. The more VMs you host on that infrastructure, the less control you have over which VMs consume resources such as disk capacity and IOPS.

Storage QoS (quality of service) lets administrators define a minimum -- a reserved amount -- and a maximum IOPS count that can be set across Hyper-V machines that use the same scale-out file server (SOFS) as storage for VMs. The SOFS cluster shows up as regular Windows shares and contains the intelligence needed to govern all of this storage traffic as well.

Microsoft first tried to address this in Windows Server 2012 R2 by letting administrators place a cap on the number of IOPS a single Hyper-V host could consume. Now that the scale-out file server in Windows Server is mature, and the performance and quality of Server Message Block version 3 are proven, the server team has paired SOFS with Hyper-V to create storage QoS in the 2015 Windows Server technical preview.

How storage QoS works

Suppose you have a bevy of VMs spread across a few Hyper-V hosts that are all members of a failover cluster. In the Windows Server 10 technical preview, the individual hypervisor hosts in the cluster have a rate limiter feature that spies on storage traffic and understands the quantity of traffic being sent from each VM relative to others. On the SOFS side, a policy manager oversees the cluster and speaks with the I/O schedulers on each of the SOFS members to regulate the traffic going to and from each VM.

When machines are spun up, the policy manager notes that activity and begins monitoring the traffic. Next, it enforces set minimum and maximum IOPS limits by contacting the Hyper-V host and instructing it to limit the VMs on its side. All governance and rate limiting happens on the Hyper-V cluster side; only monitoring and informing happens on the side of the storage cluster, to keep it operating at peak performance capacity.

You can set up the QoS policies and look at the monitoring reports right now through PowerShell cmdlets, although at this point it is unclear if there will also be a GUI component to this feature.
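As a sketch of what that PowerShell workflow looks like, the example below creates a policy on the storage cluster and applies it to a VM's virtual disks. Cmdlet and parameter names come from technical preview builds and may change before release, and the policy and VM names ("SilverVMs", "AppVM01") are hypothetical:

```powershell
# Run on a node of the scale-out file server cluster, where the
# policy manager lives. Reserve 100 IOPS and cap traffic at
# 500 IOPS for any virtual disk governed by this policy.
$policy = New-StorageQosPolicy -Name "SilverVMs" `
    -MinimumIops 100 -MaximumIops 500

# On the Hyper-V host, attach the policy to the VM's virtual
# hard disks by referencing the policy's ID.
Get-VM -Name "AppVM01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```

Because the policy is stored on the SOFS cluster and referenced by ID from the Hyper-V side, the same policy can be applied to disks across every host in the compute cluster.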

Who benefits from storage QoS?

Administrators who overprovision their deployments will be the ones most excited about this feature. While most consolidation efforts are oversubscribed at one point or another, shops with stretched IT budgets and scarce additional funds try to squeeze every last ounce of capacity out of their hosts. With storage QoS, admins can do so without worrying that one rogue VM will drag down the performance of the SOFS it is connected to -- and affect other machines.

And because the service has to know the storage performance of each relevant VM, it tracks that performance continuously. As a result, admins get a central location where they can monitor the storage of every workload attached to a given deployment. No messy agents or custom configuration of existing monitoring software is required to get this information.
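For example, those flow statistics can be pulled straight from the storage cluster with a single cmdlet. This is again a sketch against preview-era cmdlets, and the exact property names may vary by build:

```powershell
# From a scale-out file server node, list every VM flow the
# policy manager is tracking, showing the VM generating the
# traffic, its current IOPS, and whether it is meeting policy.
Get-StorageQosFlow |
    Sort-Object InitiatorName |
    Format-Table InitiatorName, InitiatorIOPS, Status -AutoSize
```

One line of PowerShell surveys storage consumption across every VM on the deployment -- the "central location" described above.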
