Windows server administrators can bump up utilization of I/O devices with single-root I/O virtualization.
Windows Server 2012 works with the hypervisor -- either Microsoft Hyper-V or a third-party hypervisor such as VMware ESXi -- to virtualize the provisioning, management and sharing of the server's I/O devices. Single-root I/O virtualization (SR-IOV) applies the concept established by server virtualization to peripherals such as network interface cards (NICs).

I/O virtualization aims to improve compute efficiency by easing the overhead imposed by the hypervisor. Normally, data moves from the physical I/O device to the host hypervisor, on to the VM and then back again; this round trip introduces latency and takes processor cycles away from other tasks. SR-IOV cuts out this translation, allowing virtual machines to engage I/O peripheral devices directly.
Benefits of Windows Server SR-IOV
Single-root I/O virtualization aims to improve utilization and performance, but it isn't a golden ticket to better Windows servers. Learn the pros and cons of Windows Server SR-IOV before deciding to implement it.
Although Windows Server 2012 supports the technology, SR-IOV is not native to Microsoft or Windows Server. Instead, SR-IOV originated from the PCI Special Interest Group as an extension to the Peripheral Component Interconnect Express (PCIe) specification. SR-IOV specifically targets networking peripherals such as Ethernet NICs on data center servers.
How single-root I/O virtualization works
SR-IOV starts by defining the peripheral device's physical function (PF), which exposes the device's SR-IOV capabilities and interacts with the host hypervisor. The PF basically tells the hypervisor about the physical PCIe device that is available in the server.
The system can now create one or more virtual functions (VFs) underneath the overall PF. VFs share the peripheral device's physical resources (like NIC ports or memory space on the NIC) and are associated with specific VMs on the server. This allows the physical PCIe device to represent itself as multiple virtual PCIe devices.
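On a Windows Server 2012 or later host, administrators can inspect the PF and VF side of this arrangement with PowerShell. The sketch below uses real `NetAdapterSriov` cmdlets, but the adapter name "Ethernet 2" is a hypothetical placeholder; these are Windows-only configuration commands, and actual SR-IOV support also depends on the NIC firmware and server BIOS.

```shell
# PowerShell (Windows Server 2012+). "Ethernet 2" is a hypothetical adapter name.

# List physical NICs that expose an SR-IOV physical function,
# including how many virtual functions each PF can support.
Get-NetAdapterSriov

# Enable SR-IOV on a specific adapter (requires BIOS/firmware support).
Enable-NetAdapterSriov -Name "Ethernet 2"

# List the virtual functions currently allocated on that adapter.
Get-NetAdapterSriovVf -Name "Ethernet 2"
```

If `Get-NetAdapterSriov` returns nothing, the NIC either lacks SR-IOV support or the capability is disabled in firmware.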
Each PF and VF receives a unique PCIe identifier that allows the hypervisor's virtual memory manager to distinguish traffic streams and use direct memory access (DMA) remapping and address translation to move data directly between the peripheral device and the target VM. In this way, SR-IOV essentially bypasses the host hypervisor and the processing overhead and resulting latency involved with it.
SR-IOV lets VMs work directly with PCIe devices through DMA, avoiding virtual switches and other translation layers in the host hypervisor. This direct interaction brings communication performance close to non-virtualized levels.
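In Hyper-V, this direct path is wired up in two steps: the virtual switch must be created with IOV enabled, and the VM's network adapter must be given an IOV weight so the hypervisor assigns it a virtual function. A minimal PowerShell sketch, assuming hypothetical names "SRIOV-Switch", "Ethernet 2" and "WebVM" (Windows-only configuration commands):

```shell
# Create an external virtual switch with IOV enabled -- this must be
# set at creation time and cannot be toggled on an existing switch.
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "Ethernet 2" -EnableIov $true

# Set an IOV weight greater than 0 on the VM's network adapter so
# Hyper-V assigns it a virtual function instead of routing traffic
# through the software virtual switch.
Set-VMNetworkAdapter -VMName "WebVM" -IovWeight 100

# Confirm the setting took effect.
Get-VMNetworkAdapter -VMName "WebVM" | Format-List IovWeight, Status
```

Note that Hyper-V falls back to the software switch path transparently if no VF is available, so a VM configured this way still gets connectivity -- just without the SR-IOV performance benefit.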
Related Q&A from Stephen J. Bigelow