
Memory methods still differ for Microsoft and VMware

Despite new features for Hyper-V R2 SP1, the two vendors still take very different approaches to memory management for virtualization.

Editor’s note: In a previous article, Mike Laverick argued that Microsoft’s Dynamic Memory technology has more in common with that of Citrix XenServer than VMware’s memory overcommit. Here he expands on some other differences between Microsoft and VMware memory management and outlines the areas where Hyper-V could still improve.

The general impression I get from Microsoft and Citrix is that dynamic memory is an attempt to answer the question, “How much memory should I allocate to a virtual machine?”

You could argue that this is less of a worry with VMware, since memory is delivered on-demand with ESX. A virtual machine only takes the memory it needs, and stops at the hard limit you set when you define it. It’s entirely possible to create a virtual machine with 8 GB of RAM as a limit but have it use only 1 GB of RAM, leaving the remaining 7 GB available for the kernel to allocate to other VMs. Historically, you would waste resources if you did this with Microsoft Hyper-V, as 8 GB would be needed to even power on the VM.

With VMware ESX, on the other hand, you would need to find the 8 GB on disk for the VM swap file by default. Why? Because at the end of the day these resources need to be guaranteed somewhere. The last thing you want is a situation where a VM demands blocks of memory (either physical or swap) and finds nothing there; it would be like when Wile E. Coyote goes off the edge of a cliff -- the VM would just crash.
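The swap-file math above can be sketched in a few lines. This is a simplified model of how ESX sizes the per-VM .vswp file at power-on (configured memory minus any reservation must be guaranteed on disk); the function name is mine, not VMware's:

```python
def vswp_size_gb(configured_gb: float, reservation_gb: float = 0.0) -> float:
    """Simplified model of the .vswp file ESX creates at power-on:
    whatever portion of configured memory is not backed by a physical
    RAM reservation must be guaranteed on disk instead."""
    if reservation_gb > configured_gb:
        raise ValueError("reservation cannot exceed configured memory")
    return configured_gb - reservation_gb

# An 8 GB VM with no reservation needs an 8 GB swap file...
print(vswp_size_gb(8))     # 8.0
# ...while reserving 3 GB of physical RAM shrinks it to 5 GB.
print(vswp_size_gb(8, 3))  # 5.0
```

Setting a full reservation drives the swap file to zero, which is exactly why reservations trade disk pressure for RAM pressure.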

Obviously it’s much easier to find free disk space than it is to find free RAM, and the VMware swap file could be relocated to cheaper storage to reduce this penalty. The worry with assigning memory in this way, however, is that some applications are designed to take as much memory as you assign them. If you give 8 GB of RAM to VMs with those types of memory-hungry applications, then they will use all 8 GB. That’s why I think VMware, while promoting memory overcommit, still wants customers to “right-size” the VMs correctly and monitor the environment properly. The result would have admins monitoring virtual memory in much the same way they would monitor virtual storage.

Different takes on the balloon driver
This philosophical difference has led the vendors to use the balloon driver in distinct ways. With VMware, the balloon driver only really kicks in when memory becomes scarce, so it’s best to have no ballooning activity at all. This matters because ballooning works by leveraging the guest operating system’s own memory management functions.

This is true of all balloon driver techniques, and is applied as such across all hypervisor vendors. But the downside to this sort of ballooning activity is that it will generate swap activity within the guest OS, which could potentially impact performance based on the amount of memory to recoup and/or the quality of the storage IOPS delivered to the VM.

With the Microsoft balloon driver, the intention is to have it allocate RAM to the VM on demand. This is similar to how VMware ESX would use the balloon driver in its hot-add RAM feature, but in a more dynamic way. So as memory needs change throughout the day, Microsoft says it can reallocate memory from one VM to another using its balloon driver, and the system that decides which VMs get RAM is dubbed the Memory Balancer.
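Microsoft hasn’t published the Memory Balancer’s exact algorithm, so the following is only a toy illustration of the general idea: divide spare host RAM among VMs in proportion to their current demand. A real implementation would also weight by each VM’s priority and buffer settings; every name here is my own invention:

```python
def balance(spare_mb: int, demands: dict[str, int]) -> dict[str, int]:
    """Toy rebalancer: split spare host RAM among VMs in proportion
    to each VM's reported memory demand. Purely illustrative -- the
    real Memory Balancer also honors priority and buffer settings."""
    total = sum(demands.values())
    if total == 0:
        return {vm: 0 for vm in demands}
    return {vm: spare_mb * d // total for vm, d in demands.items()}

# A busy web VM gets the lion's share of 1 GB of spare RAM,
# while an idle VM gets nothing.
grants = balance(1024, {"web": 600, "db": 200, "idle": 0})
print(grants)  # {'web': 768, 'db': 256, 'idle': 0}
```

The proportional split is the simplest policy that matches the article’s description of memory moving from quiet VMs to busy ones as demand shifts through the day.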

Figure 1. The Hyper-V Dynamic Memory architecture

So for Microsoft, the dynamic memory concept is about optimizing the system on the fly and taking away some of the anxiety about whether or not you’ve allocated the right amount of memory. This model depends on Microsoft identifying benign memory demands; otherwise, a bad guy could deliberately create a “leaky app” that just demanded more and more memory. This dynamic memory functionality is currently limited to these Windows guest operating systems:

  • Windows Server 2003, 2008 and 2008 R2
    • 32-bit & 64-bit versions
  • Windows Vista and Windows 7
    • Enterprise and Ultimate Editions only
    • 32-bit & 64-bit versions

The nice thing about VMware’s memory management is that much of it happens outside the guest OS, so it can be offered to a much wider range of guest operating systems. VMware also produces the balloon driver itself for a far more diverse range of operating systems.

Still room for improvement
As I’ve said before, Dynamic Memory is a welcome addition to Hyper-V, especially given Microsoft’s previous less-than-stellar showing in this critical aspect of virtualization. There are still some important limitations to be aware of, however.

One concern is that virtual machines with Hyper-V R2 SP1 can demand more memory than is physically present. Therefore, Microsoft had to change the way the parent partition is granted memory (sometimes referred to as the “root reserve”). Without these changes, the VMs could degrade the performance of managing the physical server.


Other hypervisor vendors like VMware have been dealing with this issue for some time. In ESX Classic, for example, the Service Console receives a memory reservation relative to the total amount of memory on the system. In Hyper-V R2 SP1, the reservation is made via the Windows registry, and the registry key allows Windows admins to adjust the default memory allocation.

Paradoxically, this could cause some problems for existing Hyper-V customers who apply SP1 to a server already in production: they might find that fewer VMs power on than before. This is because, after the reboot, the new registry setting reserves memory for the parent partition, which effectively reduces the total amount of RAM available to the VMs. It’s an issue that will affect the small number of customers who run Hyper-V hosts near maximum load.

Another issue is that while Hyper-V R2 SP1 allows the buffer and priority settings to be changed on the fly with the VM powered on, it’s still not possible to increase or decrease the startup or maximum memory values. Most cautious admins will set a maximum much lower than the default of 64 GB, which means that if a VM reaches its maximum and exhausts its buffer, they will have no choice but to power off the VM and add more memory manually.
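To see why the fixed maximum bites, consider how the buffer interacts with it. The formula below is my reading of the Dynamic Memory buffer setting (extra memory proportional to current demand), not an official specification, and the figures are hypothetical:

```python
def target_mb(demand_mb: int, buffer_pct: int, maximum_mb: int) -> int:
    """Simplified reading of Dynamic Memory: the host tries to give a VM
    its current demand plus the configured buffer percentage on top,
    but can never exceed the static maximum set at power-on."""
    wanted = demand_mb + demand_mb * buffer_pct // 100
    return min(wanted, maximum_mb)

# With a 4096 MB maximum, a 20% buffer is honored at low demand...
print(target_mb(2000, 20, 4096))  # 2400
# ...but once demand nears the cap there is no headroom left, and the
# maximum itself cannot be raised without powering the VM off.
print(target_mb(4000, 20, 4096))  # 4096
```

Changing `buffer_pct` on the fly does nothing once the `min()` clamps at the maximum, which is exactly the tail-chasing the next paragraph describes.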

This limitation somewhat taints the claim that Hyper-V and Citrix XenServer are “dynamic”. For me, it means that we are back to chasing our tails again in some respects. What exactly is the right amount to set for the maximum? That could only be decided by knowing your application well and right-sizing it, just as countless VMware admins have been doing since ESX 2.


Mike Laverick (VCP) is an award-winning expert and author who has been involved with the VMware community since 2003. He is a VMware forum moderator and member of the London VMware User Group Steering Committee. Laverick is the owner and author of the virtualization website and blog RTFM Education, where he publishes free guides and utilities aimed at VMware ESX/VirtualCenter users.
