Windows server administrators can bump up utilization of I/O devices with single-root I/O virtualization.
Windows Server 2012 works with the hypervisor -- either Microsoft Hyper-V or a third-party hypervisor such as VMware ESXi -- to virtualize the provisioning, management and sharing of the server's I/O devices. Single-root I/O virtualization (SR-IOV) applies the concept established by server virtualization to peripherals such as network interface cards (NICs).

I/O virtualization aims to improve compute efficiency by easing the overhead imposed by the hypervisor. Without it, data moves from the physical I/O device to the host hypervisor, on to the VM and then back again, which introduces latency and takes processor cycles away from other tasks. SR-IOV cuts out this translation, allowing virtual machines to engage I/O peripheral devices directly.
Benefits of Windows Server SR-IOV
Single-root I/O virtualization aims to improve utilization and performance, but it isn't a golden ticket to better Windows servers. Learn the pros and cons of Windows Server SR-IOV before deciding to implement it.
Although Windows Server 2012 supports the technology, SR-IOV is not native to Microsoft or Windows Server. Instead, SR-IOV originated from the PCI Special Interest Group as an extension to the Peripheral Component Interconnect Express (PCIe) specification. SR-IOV specifically targets networking peripherals such as Ethernet NICs on data center servers.
How single-root I/O virtualization works
SR-IOV starts by defining the peripheral device's physical function (PF), which exposes the device's SR-IOV capabilities and interacts with the host hypervisor. The PF basically tells the hypervisor about the physical PCIe device that is available in the server.
The system can now create one or more virtual functions (VFs) underneath the overall PF. VFs share the peripheral device's physical resources (like NIC ports or memory space on the NIC) and are associated with specific VMs on the server. This allows the physical PCIe device to represent itself as multiple virtual PCIe devices.
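The PF-to-VF relationship above can be sketched in code. Per the PCIe SR-IOV specification, each VF receives its own routing ID derived from the PF's routing ID via two read-only fields in the PF's SR-IOV capability, First VF Offset and VF Stride. The toy Python below models that enumeration; the NIC address and field values are illustrative, not taken from real hardware.

```python
# Toy model of SR-IOV virtual function (VF) enumeration.
# Per the PCIe SR-IOV spec, VF n (1-based) gets routing ID:
#   PF RID + First VF Offset + (n - 1) * VF Stride
# All concrete values below are hypothetical examples.

def vf_routing_ids(pf_rid: int, first_vf_offset: int, vf_stride: int, num_vfs: int):
    """Compute the routing IDs of num_vfs virtual functions under one PF."""
    return [pf_rid + first_vf_offset + (n - 1) * vf_stride
            for n in range(1, num_vfs + 1)]

def fmt_rid(rid: int) -> str:
    """Render a 16-bit routing ID in the familiar bus:device.function form."""
    return f"{rid >> 8:02x}:{(rid >> 3) & 0x1f:02x}.{rid & 0x7}"

# Hypothetical NIC: PF at 03:00.0 (RID 0x0300), offset 8, stride 1, 4 VFs.
for rid in vf_routing_ids(0x0300, first_vf_offset=8, vf_stride=1, num_vfs=4):
    print(fmt_rid(rid))
```

Because each VF occupies its own routing ID, it appears to the system as an independent PCIe function that can be handed to a specific VM.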
Each PF and VF receives a unique PCIe identifier that allows the hypervisor's virtual memory manager to distinguish traffic streams and use direct memory access (DMA) remapping and address translation to move data directly between the peripheral device and the target VM. In this way, SR-IOV essentially bypasses the host hypervisor and the processing overhead and resulting latency involved with it.
SR-IOV lets VMs work directly with PCIe devices through DMA, avoiding virtualization transports, virtual switches and other translations through the host hypervisor. This direct interaction brings communication performance close to non-virtualized levels.
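The per-device address translation described above can be illustrated with a toy lookup. Real hardware IOMMUs (Intel VT-d, for example) keep per-device page tables in silicon; the Python sketch below stands in for that with a dictionary keyed by requester ID, using hypothetical VF IDs and page-frame numbers to show how each VF's DMA traffic is confined to its own VM's memory.

```python
# Toy sketch of IOMMU-style DMA remapping keyed by PCIe requester ID.
# The requester IDs (0x0308, 0x0309) and page frames are hypothetical.

PAGE = 4096

# Each VF's requester ID selects a translation table mapping
# device-visible page numbers to host-physical page frames
# belonging to the VM that owns that VF.
remap_tables = {
    0x0308: {0x0: 0x7F000, 0x1: 0x7F001},  # VF owned by VM A
    0x0309: {0x0: 0x22000},                # VF owned by VM B
}

def translate(requester_id: int, dev_addr: int) -> int:
    """Translate a device DMA address to a host-physical address.
    A VF can only reach pages listed in its own table, which is
    what isolates one VM's memory from another's."""
    table = remap_tables[requester_id]
    page, offset = divmod(dev_addr, PAGE)
    return table[page] * PAGE + offset

print(hex(translate(0x0308, 0x10)))  # lands in VM A's memory
```

A DMA from a requester ID with no table entry (or to an unmapped page) would fault in real hardware, which is how the platform keeps a misbehaving device from scribbling over another VM.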