Scaling Windows server resources for virtualization

Before implementing virtualization, it's important to understand the effect it will have on server resources. Learn about the computing needs of virtualization and how to determine the needs of virtual workloads.

Servers provide vital computing resources -- the processor cycles, RAM, disk I/O and network connectivity needed to drive critical enterprise applications. Virtualization adds a new wrinkle to server resource considerations, and data center administrators must weigh server resources against the computing needs of numerous virtual machines (VMs).

Modest consolidation projects rarely strain server resources, simply drawing from a server's underutilized computing capacity. But modern enterprise applications can pose enormous demands, easily justifying CPU, memory, and other server upgrades rather than investing in a new server outright. The onus is on IT professionals to determine adequate server resources, measure actual resource utilization of VMs, and maintain the availability of virtualized servers.

The computing needs of virtualization

To appreciate the importance of server resources, it helps to understand how computing resources are affected by virtualization. A conventional (non-virtualized) application server typically runs at least one major enterprise application such as an email or database server. Each application demands a certain amount of processing (CPU) cycles, memory (RAM), local disk I/O and network bandwidth. In most cases, the physical server provides ample computing resources, and the application easily meets performance requirements. It's not uncommon to find non-virtualized application servers running at only 5% to 10% utilization.

Server virtualization changes the way that IT professionals need to think about computing resources. Each virtual machine added to a physical host will demand additional CPU, memory, I/O and network connectivity -- applications have the same demands that they would in a non-virtualized deployment. The demands on computing resources are cumulative, vastly increasing the server's total processing load.

The resource underutilization so commonplace in physical servers is usually a boon for virtualization, allowing multiple VMs to consume more of the available server resources that might otherwise be left idle. "The whole point of virtualization is that you drive up utilization of the things that you're spending money on," said Bob Plankers, technology consultant and blogger for The Lone Sysadmin.

However, while most servers have the available computing resources needed to accommodate additional VMs, the demands of each workload can vary considerably. For example, hosting processor-intensive workloads may strain the server's processor cycles while leaving plenty of memory and network bandwidth unused. Given the storage-intensive nature of virtual machines, disk I/O often emerges as the gating factor. Data center professionals must understand the computing requirements of each workload, distribute workloads to optimize consolidation, and implement any necessary upgrades to maintain availability.
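To make the idea of a gating factor concrete, the short Python sketch below totals the cumulative CPU, memory, disk I/O and network demands of a few hypothetical VMs and reports which host resource would run out first. The capacity and demand figures are invented for illustration, not sizing guidance.

```python
# Hypothetical example: find which host resource becomes the gating factor
# when several VM workloads are stacked onto one physical server.

HOST_CAPACITY = {          # illustrative numbers, not a sizing recommendation
    "cpu_ghz": 2.6 * 8,    # 8 cores at 2.6 GHz
    "ram_gb": 64,
    "disk_iops": 5000,
    "net_mbps": 1000,
}

VM_DEMANDS = [             # per-VM steady-state demands (made up for the example)
    {"name": "mail", "cpu_ghz": 4.0, "ram_gb": 16, "disk_iops": 1200, "net_mbps": 150},
    {"name": "db",   "cpu_ghz": 6.0, "ram_gb": 24, "disk_iops": 2500, "net_mbps": 100},
    {"name": "web",  "cpu_ghz": 2.0, "ram_gb": 8,  "disk_iops": 400,  "net_mbps": 300},
]

def utilization(capacity, vms):
    """Return the fraction of each host resource consumed by the combined VMs."""
    totals = {res: sum(vm[res] for vm in vms) for res in capacity}
    return {res: totals[res] / capacity[res] for res in capacity}

util = utilization(HOST_CAPACITY, VM_DEMANDS)
for resource, frac in sorted(util.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{resource:10s} {frac:6.0%}")

print("Gating factor:", max(util, key=util.get))
```

With these made-up figures the combined disk I/O reaches roughly 80% of the host's capacity before CPU, memory or network do, which is exactly the kind of hidden bottleneck the paragraph above warns about.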

Determining the needs of virtual workloads

One of the first steps in any server consolidation project is to determine the computing resources (CPU, RAM, disk I/O, and network bandwidth) needed for each workload. This is not a difficult process -- relevant requirements can easily be determined by reviewing ongoing performance logs and baseline data for each non-virtualized server operating under normal load conditions. If baseline data is not readily available, it can be generated on live systems or approximated in a lab environment. The underlying operating system and hypervisor also require computing resources, so be sure to include those requirements in your estimates as well.
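As a rough illustration of working from baseline data, the sketch below assumes performance counters have already been exported to a CSV file (the file name and column names are hypothetical) and derives a requirement for each counter from a high percentile of the samples, with an assumed allowance for hypervisor and operating system overhead.

```python
# Minimal sketch: derive per-workload resource requirements from exported
# performance logs. The CSV file and its column names are hypothetical --
# adapt them to however your monitoring tool exports counter data.
import csv

OVERHEAD = 1.10          # assumed ~10% allowance for hypervisor/OS overhead

def baseline(csv_path, columns, percentile=0.95):
    """Return the chosen percentile (plus overhead) for each counter column."""
    samples = {col: [] for col in columns}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            for col in columns:
                samples[col].append(float(row[col]))
    result = {}
    for col, values in samples.items():
        values.sort()
        idx = min(len(values) - 1, int(percentile * len(values)))
        result[col] = values[idx] * OVERHEAD
    return result

if __name__ == "__main__":
    needs = baseline(
        "mailserver_perfmon.csv",                      # hypothetical export
        ["cpu_percent", "ram_mb", "disk_iops", "net_mbps"],
    )
    for counter, value in needs.items():
        print(f"{counter:12s} {value:10.1f}")
```

Sizing to a high percentile rather than the average captures the peaks that actually matter under load, which is why the sketch sorts the samples instead of simply averaging them.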

As a rule, never determine workload needs based solely on the recommendations or anecdotal suggestions of staff. Bill Kleyman, director of technology at World Wide Fittings, a manufacturer and distributor of steel hydraulic tube and fittings headquartered in Niles, Ill., noted a recent case in which database programmers vastly over-specified the needs of a virtualized database server. "They only needed one CPU and about 2 GB of RAM, and the application is doing fine," he said.

Next, consider which workloads will be virtualized onto each physical server. This is an important step, since it is unlikely that every existing workload will be consolidated onto a single physical server. While that may be technically possible (especially in SMB environments), it is certainly not a best practice -- a single server becomes a single point of failure, potentially compromising all of the organization's workloads.

High-availability software

Virtual machines can easily be migrated from one physical server to another. This provides powerful flexibility when a server fails or requires routine maintenance, but the few minutes required to initialize ("spin up" or "stand up") the VM on another host may simply be too long for some mission-critical applications. When downtime must be avoided, high-availability (HA) software such as Marathon Technologies' everRunVM can pair duplicate instances of critical VMs between clustered servers.

One instance of the VM works normally and maintains constant synchronization with the duplicate. If the first server fails and the heartbeat signal between servers is disrupted, the redundant VM takes over seamlessly to maintain availability of the application. Once the troubled server returns to service and the original VM is restored, the instances resynchronize and control returns to the first server. "Even in the most basic of HA failover configurations, your maximum downtime is going to be [no more than] a minute," said Kleyman. If even faster response is called for, rely on more conventional server clustering techniques to maintain application availability.
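The heartbeat-and-failover pattern described above can be sketched conceptually in a few lines of Python. This is only an illustration of the general idea, not of how everRunVM or any particular HA product is implemented; the timeout value and state names are invented.

```python
# Conceptual sketch of heartbeat-based failover between a primary and a
# standby VM instance. Not a model of any specific HA product.
import time

HEARTBEAT_TIMEOUT = 5.0          # seconds of silence before failover (assumed)

class MirroredVM:
    def __init__(self):
        self.active = "primary"              # which instance serves the application
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Record the arrival of the primary's heartbeat signal."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Fail over to the standby if the heartbeat has gone silent."""
        if self.active == "primary" and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = "standby"          # standby takes over to maintain availability
        return self.active

    def primary_restored(self):
        """After the failed server returns and the instances resynchronize,
        control moves back to the primary."""
        self.heartbeat()
        self.active = "primary"
```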

In actual practice, virtualized workloads are distributed across two or more physical servers. This approach offers several benefits. Data center administrators can "mix and match" the demands of each workload against each server's available resources. For example, rather than hosting two processor-intensive VMs on one server and two memory-intensive workloads on another, the VMs can usually be distributed so that each server hosts one processor-intensive and one memory-intensive workload. This also allows incompatible workloads to be kept apart and related workloads to be kept together. Multiple servers also allow for the redundant handling of critical workloads. As an example, an Exchange Server VM can run on one server and be duplicated to another physical server through a tool like everRunVM from Marathon Technologies. If one instance of the VM fails, the redundant instance can take over seamlessly.
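One way to picture the "mix and match" approach is a simple greedy placement pass that assigns each workload to whichever host stays least loaded on its scarcest resource. The sketch below uses invented numbers and ignores real-world constraints such as affinity rules, licensing and failover capacity, so treat it as an illustration rather than a planning tool.

```python
# Illustrative greedy placement: assign each VM to the host whose most
# constrained resource stays lowest after the VM is added.

HOSTS = {
    "host-a": {"cpu_ghz": 20.0, "ram_gb": 64},
    "host-b": {"cpu_ghz": 20.0, "ram_gb": 64},
}

VMS = [  # hypothetical demands: two CPU-heavy and two memory-heavy workloads
    {"name": "analytics", "cpu_ghz": 12.0, "ram_gb": 8},
    {"name": "build",     "cpu_ghz": 10.0, "ram_gb": 6},
    {"name": "cache",     "cpu_ghz": 2.0,  "ram_gb": 40},
    {"name": "reporting", "cpu_ghz": 3.0,  "ram_gb": 32},
]

used = {h: {"cpu_ghz": 0.0, "ram_gb": 0.0} for h in HOSTS}
placement = {}

# Place the largest workloads first, then pick the host whose worst-case
# resource utilization stays lowest after the addition.
for vm in sorted(VMS, key=lambda v: max(v["cpu_ghz"], v["ram_gb"]), reverse=True):
    best_host, best_score = None, None
    for host, cap in HOSTS.items():
        score = max((used[host][r] + vm[r]) / cap[r] for r in cap)
        if score <= 1.0 and (best_score is None or score < best_score):
            best_host, best_score = host, score
    if best_host is None:
        raise RuntimeError(f"No host has room for {vm['name']}")
    placement[vm["name"]] = best_host
    for r in HOSTS[best_host]:
        used[best_host][r] += vm[r]

print(placement)
```

With these figures, each host ends up with one processor-intensive and one memory-intensive workload, which is the balance the paragraph above describes.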

Remember that it's never a best practice to max out a server's resources -- always allow some headroom so the server can host additional VMs if the need arises. That need can arise when a hardware fault or routine maintenance takes a particular server offline: the VMs from that server can then be re-initialized (or "spun up") from SAN storage onto the remaining servers while the affected server is repaired or maintained. Some organizations include a spare server for exactly this purpose (sometimes called an N+1 configuration). Headroom also accommodates future growth in the number and size of workloads.
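A quick way to sanity-check N+1 headroom is to simulate losing each host in turn and confirm that the surviving hosts could absorb its VMs. The figures and the 80% utilization ceiling in the sketch below are assumptions chosen for illustration.

```python
# Hypothetical N+1 headroom check: can the surviving hosts absorb the VMs of
# any single failed host without exceeding a chosen utilization ceiling?

CEILING = 0.80     # assumed target: keep every resource at or below 80% after failover

HOSTS = {
    "host-a": {"cpu_ghz": 20.0, "ram_gb": 64},
    "host-b": {"cpu_ghz": 20.0, "ram_gb": 64},
    "host-c": {"cpu_ghz": 20.0, "ram_gb": 64},
}

# Current per-host totals of the VMs each one runs (made-up numbers).
LOAD = {
    "host-a": {"cpu_ghz": 9.0, "ram_gb": 30.0},
    "host-b": {"cpu_ghz": 8.0, "ram_gb": 28.0},
    "host-c": {"cpu_ghz": 7.0, "ram_gb": 24.0},
}

def survives_failure_of(failed):
    """True if the failed host's load fits on the remaining hosts within CEILING."""
    survivors = [h for h in HOSTS if h != failed]
    spare = {
        h: {r: HOSTS[h][r] * CEILING - LOAD[h][r] for r in HOSTS[h]} for h in survivors
    }
    # Simplistic assumption: the failed host's VMs can be split freely.
    for resource in HOSTS[failed]:
        if LOAD[failed][resource] > sum(spare[h][resource] for h in survivors):
            return False
    return True

for host in HOSTS:
    ok = survives_failure_of(host)
    print(f"Lose {host}: {'OK' if ok else 'NOT enough headroom'}")
```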

Finally, once the workload and headroom requirements are known, it is a relatively simple matter to add up the processor, memory, I/O and network requirements for each workload and compare the totals to the available resources on each respective server. Any resource shortfalls can then be addressed with further workload distribution planning, appropriate server hardware upgrades or entirely new servers.
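That final tally can be as simple as the sketch below: sum each planned server's workload requirements, apply a headroom allowance, and flag any resource that comes up short. All of the numbers, including the 25% headroom figure, are invented for the example.

```python
# Final tally sketch: sum each planned server's workload requirements (plus a
# headroom allowance) and flag any resource shortfalls. All numbers are invented.

HEADROOM = 1.25    # assumed 25% headroom on top of measured workload needs

PLAN = {
    "server-1": {
        "capacity": {"cpu_ghz": 20.0, "ram_gb": 64, "disk_iops": 5000, "net_mbps": 1000},
        "workloads": [
            {"cpu_ghz": 6.0, "ram_gb": 24, "disk_iops": 2500, "net_mbps": 100},  # database
            {"cpu_ghz": 4.0, "ram_gb": 16, "disk_iops": 1200, "net_mbps": 150},  # email
        ],
    },
}

for server, plan in PLAN.items():
    totals = {
        r: sum(w[r] for w in plan["workloads"]) * HEADROOM for r in plan["capacity"]
    }
    shortfalls = {
        r: round(totals[r] - cap, 1) for r, cap in plan["capacity"].items() if totals[r] > cap
    }
    if shortfalls:
        print(f"{server}: shortfall -> {shortfalls}")
    else:
        print(f"{server}: fits with headroom")
```

Any resource that appears in the shortfall report is a candidate for redistributing a workload, upgrading the server or adding another host, as described above.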

 

ABOUT THE AUTHOR
Stephen J. Bigelow, senior features writer, has more than 15 years of technical writing experience in the PC/technology industry. He holds a BSEE, along with CompTIA A+, Network+, Security+ and Server+ certifications, and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow's PC Hardware Desk Reference and Bigelow's PC Hardware Annoyances. Contact him at sbigelow@techtarget.com.

This was first published in August 2009
