
How single-root I/O virtualization changes Windows Server performance

Single-root I/O virtualization is designed to improve computing efficiency by lessening the hypervisor's overhead.

Windows Server administrators can bump up utilization of I/O devices with single-root I/O virtualization.

Windows Server 2012 works with the hypervisor, whether Microsoft Hyper-V or a third-party product such as VMware ESXi, to virtualize the provisioning, management and sharing of the server's I/O devices. Single-root I/O virtualization (SR-IOV) applies the concept established by server virtualization to peripherals such as network interface cards (NICs).

I/O virtualization aims to improve compute efficiency by easing the overhead imposed by the hypervisor. In a conventional virtualized server, data moves from the physical I/O peripheral device to the host hypervisor, on to the VM and then back again. This round trip introduces latency and takes processor cycles away from other tasks. SR-IOV can cut out this translation, allowing virtual machines to engage I/O peripheral devices directly.

Benefits of Windows Server SR-IOV

Single-root I/O virtualization aims to improve utilization and performance, but it isn't a golden ticket to better Windows servers. Learn the pros and cons of Windows Server SR-IOV before deciding to implement it.

Although Windows Server 2012 supports the technology, SR-IOV is not native to Microsoft or Windows Server. Instead, SR-IOV originated from the PCI Special Interest Group as an extension to the Peripheral Component Interconnect Express (PCIe) specification. SR-IOV specifically targets networking peripherals, such as Ethernet NICs, on data center servers.

How single-root I/O virtualization works

SR-IOV starts by defining the peripheral device's physical function (PF), which exposes the device's SR-IOV capabilities and interacts with the host hypervisor. The PF basically tells the hypervisor about the physical PCIe device that is available in the server.

The system can now create one or more virtual functions (VFs) underneath the overall PF. VFs share the peripheral device's physical resources (like NIC ports or memory space on the NIC) and are associated with specific VMs on the server. This allows the physical PCIe device to represent itself as multiple virtual PCIe devices.
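
To make the PF/VF relationship easier to picture, here is a minimal, purely conceptual Python sketch. The class names, attributes and PCIe ID numbering are invented for illustration; they are not part of any Windows Server, Hyper-V or PCIe API. The idea is simply that the PF advertises how many VFs the device can expose, each VF gets its own PCIe identifier, and each VF is handed to exactly one VM.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VirtualFunction:
    """One VF: a lightweight PCIe function carved out of the physical device."""
    pci_id: str                        # its own PCIe identifier, e.g. "0000:03:10.1"
    assigned_vm: Optional[str] = None  # the VM this VF is bound to, if any

@dataclass
class PhysicalFunction:
    """The PF: the full-featured function the hypervisor manages."""
    pci_id: str
    total_vfs: int                     # advertised SR-IOV capability of the device
    vfs: list = field(default_factory=list)

    def create_vfs(self, count: int) -> None:
        """Carve out 'count' VFs, each with its own (invented) PCIe ID."""
        if count > self.total_vfs:
            raise ValueError("device cannot expose that many VFs")
        domain_bus, _ = self.pci_id.rsplit(":", 1)
        self.vfs = [
            VirtualFunction(pci_id=f"{domain_bus}:10.{i}")  # illustrative numbering
            for i in range(count)
        ]

    def assign_vf(self, vm_name: str) -> VirtualFunction:
        """Hand a free VF to a VM; that VM then talks to the device directly."""
        for vf in self.vfs:
            if vf.assigned_vm is None:
                vf.assigned_vm = vm_name
                return vf
        raise RuntimeError("no free VFs left on this physical function")

# Example: a NIC whose PF advertises support for eight VFs.
pf = PhysicalFunction(pci_id="0000:03:00.0", total_vfs=8)
pf.create_vfs(4)
vf = pf.assign_vf("WEB01")
print(vf)   # a VF with its own PCIe ID, bound to the VM "WEB01"
```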

Each PF and VF receives a unique PCIe identifier that allows the hypervisor's virtual memory manager to distinguish traffic streams and use direct memory access (DMA) remapping and address translation to move data directly between the peripheral device and the target VM. In this way, SR-IOV essentially bypasses the host hypervisor and the processing overhead and resulting latency involved with it.
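
The sketch below models that remapping step conceptually, again in Python with invented names rather than a real IOMMU or hypervisor interface. The point it illustrates is that each VF's PCIe identifier selects its own translation table, so addresses the device issues via DMA land only in the memory of the VM that owns that VF, with no copy made by the hypervisor.

```python
# Conceptual model of DMA remapping for SR-IOV; names are invented, not a real API.
# The remapping hardware keys its translation tables on the PCIe identifier, so
# each VF's DMA traffic is steered into the memory of the VM that owns it.

class DmaRemapper:
    def __init__(self):
        # PCIe identifier -> {guest-physical page -> host-physical page}
        self.tables = {}

    def attach(self, pcie_id: str, page_mappings: dict) -> None:
        """Program a translation table for one VF (done when the VF is assigned)."""
        self.tables[pcie_id] = dict(page_mappings)

    def translate(self, pcie_id: str, guest_addr: int, page_size: int = 4096) -> int:
        """Translate a device-issued (guest-physical) address to host-physical."""
        table = self.tables[pcie_id]    # an unknown ID would fault in real hardware
        page, offset = divmod(guest_addr, page_size)
        return table[page] * page_size + offset

# The hypervisor programs one table per VF when the VF is assigned to a VM...
iommu = DmaRemapper()
iommu.attach("0000:03:10.0", {0x10: 0x9F2, 0x11: 0x9F3})   # pages owned by "WEB01"

# ...after which the NIC can DMA a received packet straight into that VM's
# buffer without the hypervisor touching the data path.
host_addr = iommu.translate("0000:03:10.0", guest_addr=0x100C0)
print(hex(host_addr))   # 0x9f20c0
```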

SR-IOV lets VMs work directly with PCIe devices through DMA, avoiding the virtualization transports, virtual switches and other translations in the host hypervisor. This direct interaction brings communication performance close to non-virtualized levels.
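
As a rough illustration of why removing the hypervisor from the data path matters, the short sketch below compares simplified step lists for one received packet on a software-switch path and on an SR-IOV path. The steps are deliberately coarse and the naming is invented; real I/O stacks involve more stages.

```python
# Illustrative only: simplified step lists for one received network packet.
# Real stacks involve more stages; the point is where the hypervisor sits.

SOFTWARE_SWITCH_PATH = [
    "NIC receives the packet",
    "hypervisor (parent partition) is interrupted",
    "virtual switch inspects and routes the packet",
    "packet is copied into the target VM's buffer",
    "VM's virtual NIC driver is notified",
]

SRIOV_PATH = [
    "NIC receives the packet on the VM's virtual function",
    "VF DMAs the packet directly into the VM's buffer",
    "VM's VF driver is notified",
]

def summarize(name: str, path: list) -> None:
    # Count the steps that involve the hypervisor's software data path.
    hypervisor_steps = sum(
        "hypervisor" in step or "virtual switch" in step for step in path
    )
    print(f"{name}: {len(path)} steps, {hypervisor_steps} involving the hypervisor")

summarize("Software switch", SOFTWARE_SWITCH_PATH)
summarize("SR-IOV", SRIOV_PATH)
```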

Next Steps

What SR-IOV does for Hyper-V VMs.

Has virtualizing real-time applications become easier?
