
Server consolidation: Modeling your consolidation server

In performing a server consolidation, it's hard to figure out how many servers can be safely consolidated into one without running into diminishing returns. You should learn what each server's real needs and behaviors are, then model your consolidation server along those guidelines.

In performing a server consolidation, one of the hardest things is figuring out how many machines can be safely consolidated into one server without running into diminishing returns.

You may find that the machine you've set aside to perform the consolidation won't be enough to handle the load you plan to throw at it. Or you may find that you have a surplus of computing power that you can use to consolidate things even further.

One thing is clear when considering how to allocate a consolidation server's resources: What you think you need might not be what you actually need. You'll always be best served by learning what each server's real needs and behaviors are, then modeling your consolidation server along those guidelines.

For instance, virtualization used to be very "lossy" when it came to CPU utilization. Over time, this loss has been reduced a great deal, thanks to several improvements. One of the biggest of these is hardware-assisted virtualization, although it's only available on the "latest and greatest" CPUs. Consequently, you'll need to devote some thought to how much CPU you'll need to have available on your host system for each guest.

One way to calculate this is to take the speed rating for each CPU, total it and use that as an index for how much CPU you'll need. For instance, if you're consolidating five 766 MHz servers, that might seem to indicate that you'd need at least a pair of 2 GHz CPUs to handle such a load. However, this number is potentially misleading, since most servers don't spend their entire time peaked at 100% CPU.

The best way to determine how much CPU you'll really need is to get live statistics from the machines in question over a period of time—a week's duty cycle, perhaps—and then compute the average CPU usage for each server. If one of the 766 MHz servers in question averages about 15% CPU usage, then you can use 0.15 x 766, or roughly 115 MHz, as the server's actual usage.

You can even multiply this number by two, and use 230 MHz as a margin-of-safety figure. If all five machines averaged about that much usage, you could comfortably consolidate them into a single 2 GHz dual-core system with CPU to spare. You'll also need at least some CPU left over for the host to perform its own functions, so be sure to budget that back into your estimates—say, at least 20% of the CPU for native functions.
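The arithmetic above can be sketched in a few lines. This is strictly a back-of-the-envelope model: the `required_host_mhz` function is illustrative (not from any real capacity-planning tool), and the 2x safety factor and 20% host reserve are the figures from the discussion above, which you should tune to your own measurements.

```python
def required_host_mhz(guest_mhz_avgs, safety_factor=2.0, host_reserve=0.20):
    """Total host CPU (MHz) needed to cover the given per-guest averages."""
    guest_total = sum(guest_mhz_avgs) * safety_factor
    # Keep a share of the host's CPU free for the host OS / hypervisor itself.
    return guest_total / (1.0 - host_reserve)

# Five 766 MHz servers, each averaging 15% CPU (~115 MHz apiece):
needed = required_host_mhz([0.15 * 766] * 5)
```

By this estimate the five servers need well under 1.5 GHz of host CPU even with margin and reserve figured in, which is why a single 2 GHz dual-core system handles them with room to spare.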

Note: If you are consolidating systems that ran on multiple processors, you need to be aware of the level of support provided for multiple CPUs in your virtualization solution. For instance, Microsoft Virtual Server 2005 R2 can make use of SMP on the host, but emulated systems cannot run as more than one CPU. However, VMware ESX Server does support guest OSes that run as SMP systems. The more individual cores or sockets you can add to the host server, the better performance you'll get regardless of whether or not the individual guest systems can use SMP.

Network bandwidth

Planning for the network bandwidth used by consolidated systems is a little like planning for CPU utilization, and can be approached in much the same way. Not every server is going to have its network bandwidth saturated 100% of the time, so it's a good idea to derive some live usage statistics for the servers in question and see how they add up. Servers that saturate their network connections fairly aggressively ought to be given a dedicated physical network card; less heavily-trafficked servers can share a card and will probably never come close to using its available bandwidth.
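The NIC triage described above can be sketched as a simple sort on measured utilization. The 60% cutoff here is an assumption for illustration; substitute whatever threshold your own traffic statistics justify.

```python
def assign_nics(avg_utilization, dedicate_above=0.60):
    """Split servers into those that warrant a dedicated physical NIC
    and those that can safely share one, based on average utilization
    (a fraction of link capacity, 0.0 to 1.0)."""
    dedicated = [name for name, u in avg_utilization.items() if u >= dedicate_above]
    shared = [name for name, u in avg_utilization.items() if u < dedicate_above]
    return dedicated, shared

# Hypothetical week-long averages for three candidate servers:
dedicated, shared = assign_nics({"web1": 0.75, "file1": 0.10, "mail1": 0.20})
```

Here `web1` would get its own card, while `file1` and `mail1` together still come nowhere near saturating a shared one.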


Memory

Here's where things are a bit more carved in stone. In a server (and in computers in general), unused memory is wasted memory. To that end, when you're consolidating multiple servers into a single server, the host should have at least as much memory as the guests did when combined. This in itself is a strong argument for consolidating to a 64-bit host with 64-bit OS support (for the sake of being able to use more than 4 GB of memory), although that's probably a given for any newly provisioned system.

Start by devoting at least 512 MB of memory to the host itself, and figure in the same amount of physical memory used by each guest system. For instance, four 256 MB guests would mean you'd need at least 1.5 GB of RAM to run everything comfortably. Also be aware of any per-virtual-machine limits on memory, although if you're migrating older machines into your consolidation setup you probably won't encounter this issue.
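The memory budget above is straightforward addition, sketched below. The `host_ram_mb` helper and the 512 MB host overhead figure are illustrative, taken from the example in the text rather than from any vendor sizing guide.

```python
def host_ram_mb(guest_ram_mb, host_overhead_mb=512):
    """Minimum host RAM in MB: host overhead plus the sum of all
    guest memory allocations."""
    return host_overhead_mb + sum(guest_ram_mb)

# Four 256 MB guests, as in the example above: 512 + 4*256 = 1536 MB (1.5 GB)
total = host_ram_mb([256] * 4)
```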

Disk space

Disk space is another resource where you probably can't cheat: the host will need at least as much disk space as all of the guests combined. The good news is that disk space is cheap, and if you're consolidating systems that were provisioned back when it wasn't, you'll have little trouble meeting or exceeding the original specs.

An important corollary is how to handle separate OS and data disks on a virtualized guest. If a candidate machine has an OS drive and a data drive (or even just OS and data on separate partitions of the same drive), virtualize the OS drive but move the data directly to another physical disk whenever possible. That way you won't incur the additional overhead of virtualizing storage that doesn't need to be virtualized in the first place.

Whatever you do, don't try to save disk space by moving the OS partitions for several different machines onto the same physical disk. This not only slows things down but will negate another of the benefits of having things spread out across multiple physical drives: loss reduction. If that one shared drive goes out on you, you'll have five machines out of commission, not just one.

About the author: Serdar Yegulalp is editor of Windows Insight (formerly the Windows Power Users Newsletter), a blog site devoted to hints, tips, tricks and news for users and administrators of Windows NT, Windows 2000, Windows XP, Windows Server 2003 and Vista. He has more than 12 years of Windows experience under his belt.
