Capacity planning a key piece of the virtualization puzzle

While server virtualization is great for resource utilization, those who overlook capacity planning could still take a hit on performance.

One of the key benefits of virtualization is that it can vastly improve server resource utilization. But there's also a downside to this high utilization, as unmonitored workload migration, uncontrolled VM proliferation and unexpected jumps in resource demands can deplete even the most powerful servers and leave workloads starved for precious resources.

IT administrators in a virtual data center need to implement and follow comprehensive capacity planning protocols to ensure that vital resources will be available to the workloads that need them.

Capacity planning and server performance
Windows servers running Hyper-V are no more vulnerable to capacity issues than non-Windows servers or other hypervisors, but experts are clear that the move to any virtualization platform must include a greater awareness of server resources at several levels.

First, understand the peak needs of each application. This can be determined by monitoring resource use over time. Second, pay close attention to the virtual workloads running on each server so that the total resource demand (such as CPU cycles or memory) does not exceed the resources available. And third, continue monitoring the resource utilization and watch for changes that may require assigning more resources or rebalancing the workload distribution among multiple virtual servers.
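To make the arithmetic of the second step concrete, here is a minimal Python sketch that totals each VM's observed peak demand and compares it against a host's capacity. The host figures, VM names and peak numbers are all hypothetical placeholders for values you would gather by monitoring real workloads over time.

```python
# Minimal sketch: check that the combined peak demands of the VMs
# assigned to a host stay within that host's capacity. All names and
# numbers are hypothetical placeholders for monitored values.

HOST_CAPACITY = {"cpu_ghz": 24.0, "ram_gb": 96.0, "disk_iops": 20000.0}

# Peak demand per VM, as observed over a representative monitoring window.
VM_PEAKS = {
    "mail": {"cpu_ghz": 6.0, "ram_gb": 24.0, "disk_iops": 4000.0},
    "db":   {"cpu_ghz": 8.0, "ram_gb": 32.0, "disk_iops": 9000.0},
    "web":  {"cpu_ghz": 4.0, "ram_gb": 16.0, "disk_iops": 2000.0},
}

def check_host(capacity, vm_peaks):
    """Report each resource's total peak demand against host capacity."""
    for resource, available in capacity.items():
        demand = sum(vm[resource] for vm in vm_peaks.values())
        status = "OK" if demand <= available else "OVERCOMMITTED"
        print(f"{resource}: {demand:.1f} of {available:.1f} -- {status}")

check_host(HOST_CAPACITY, VM_PEAKS)
```

Note that the comparison uses peaks rather than averages; two workloads whose averages fit comfortably can still collide if their peaks coincide.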

"I don’t think that there's a perfect model where you can go in and read a book and run a tool and know exactly what [computing resources] you need," said Rand Morimoto, president of solution provider Convergent Computing in Oakland, Cali. "You get a good idea of what it 'should' be; you test it, you monitor it, and you tweak and tune as you need to. And that's the piece that many people miss."

Failure to monitor server resources and workload distribution can lead to resource shortages. In most cases, shortages starve one or more virtual machines and cause performance problems such as processing lags, long logon times, poor storage access and so on. Unless administrators are diligent about monitoring resource use, the user base will typically be the first to experience and complain about application performance problems. A spate of user complaints about particular applications (especially several applications on the same server) may indicate an overburdened server.

Workload distribution is an integral part of capacity planning -- by understanding the resource needs of each virtual machine, an administrator can mix applications with complementary resource needs on the same physical server. For example, assigning several CPU-intensive VMs to the same server will exhaust the available CPU resources far sooner and limit the total number of virtual machines that the server might otherwise support. By comparison, assigning a mix of CPU-, memory- and I/O-intensive workloads can improve the server's overall utilization without overtaxing any particular resource. This is another case where resource monitoring over time will pay dividends for an administrator attempting to optimize workload distribution in the virtual data center.
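One simple way to approximate complementary placement is a greedy heuristic: identify each VM's dominant resource and assign it to the host with the most headroom on that resource. The Python sketch below illustrates the idea with hypothetical hosts, VMs and figures; it is not how any particular hypervisor's placement engine actually works.

```python
# Sketch of a greedy placement heuristic: identify each VM's dominant
# resource (the one it stresses most, relative to host capacity) and
# place the VM on the host with the most headroom for that resource,
# so CPU-, memory- and I/O-heavy workloads spread out rather than
# piling onto one server. Hosts, VMs and all figures are hypothetical.

TOTAL = {"cpu": 24.0, "ram": 96.0, "io": 20000.0}   # per-host capacity
free = {"host-a": dict(TOTAL), "host-b": dict(TOTAL)}

vms = [
    ("analytics",  {"cpu": 10.0, "ram": 8.0,  "io": 1000.0}),   # CPU-heavy
    ("batch",      {"cpu": 9.0,  "ram": 8.0,  "io": 800.0}),    # CPU-heavy
    ("cache",      {"cpu": 2.0,  "ram": 48.0, "io": 500.0}),    # memory-heavy
    ("fileserver", {"cpu": 2.0,  "ram": 8.0,  "io": 12000.0}),  # I/O-heavy
]

def dominant(demand):
    """The resource this VM stresses most, as a fraction of host capacity."""
    return max(demand, key=lambda r: demand[r] / TOTAL[r])

for name, demand in vms:
    key = dominant(demand)
    # Consider only hosts that can still fit the whole VM, then pick the
    # one with the most headroom on the VM's dominant resource.
    fits = [h for h in free if all(free[h][r] >= demand[r] for r in demand)]
    target = max(fits, key=lambda h: free[h][key])
    for r in demand:
        free[target][r] -= demand[r]
    print(f"{name} ({key}-heavy) -> {target}")
```

Run as written, the two CPU-heavy VMs land on different hosts, with the memory-heavy and I/O-heavy workloads filling in around them -- the "complementary mix" described above.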

Capacity planning also affects failover and availability. One of the great benefits of virtualization is the ability to migrate virtual workloads dynamically between physical servers, but the destination server must have adequate computing resources available to accommodate the new workload. If not, the workload cannot be migrated, and if this occurs in the wake of a server failure, it may be impossible to restore the disrupted workloads, causing extended (and unnecessary) downtime. Capacity planning should always provide enough spare resources to accommodate failover tasks.
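A rough way to sanity-check failover headroom is an N+1 test: for each host, ask whether the remaining hosts have enough spare capacity to absorb its workloads. The sketch below reduces everything to a single abstract resource unit and uses made-up figures, so treat it as the shape of the check rather than a real implementation -- actual placement constraints are stricter than an aggregate comparison.

```python
# Sketch of an N+1 headroom check: can the surviving hosts absorb the
# load of any single failed host? Simplified to one abstract resource
# unit per host; all figures are hypothetical.

hosts = {
    "host-a": {"capacity": 200, "used": 150},
    "host-b": {"capacity": 100, "used": 40},
    "host-c": {"capacity": 100, "used": 40},
}

def survives_failure_of(failed, hosts):
    """True if the other hosts' combined spare capacity covers the loss."""
    displaced = hosts[failed]["used"]
    spare = sum(h["capacity"] - h["used"]
                for name, h in hosts.items() if name != failed)
    return spare >= displaced

for name in hosts:
    verdict = "can" if survives_failure_of(name, hosts) else "CANNOT"
    print(f"If {name} fails, the remaining hosts {verdict} absorb its load.")
```

With these numbers, the cluster can survive the loss of either small host but not the large one -- a reminder that concentrating workloads on one big server undermines failover planning even when total utilization looks healthy.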

Approaching capacity planning on virtual servers
The trick to capacity planning is to understand what resources you have, where those resources are located, what resources each workload needs and how those resource needs are changing over time. Tools like Iometer can be used to benchmark storage and network I/O performance, but one of the most popular tools for Windows capacity planning is the Microsoft Assessment and Planning (MAP) Toolkit, which supports both physical and virtual workloads. An administrator can run the tool to inventory server resources and track the resource demands of every workload over time.
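The core idea behind such tools -- sample utilization repeatedly and keep the observed peaks -- can be illustrated in a few lines of Python using the third-party psutil library. This is only the bare concept, not a substitute for the MAP Toolkit, and the sample count and interval are placeholder values.

```python
# Trivial illustration of the monitoring idea behind capacity-planning
# tools: sample host utilization at intervals and record the peaks.
# Requires the third-party psutil library (pip install psutil).
import psutil

samples = 10        # placeholder; a real survey runs for days or weeks
interval_s = 1      # placeholder sampling period, in seconds
peak = {"cpu_pct": 0.0, "ram_pct": 0.0}

for _ in range(samples):
    cpu = psutil.cpu_percent(interval=interval_s)  # % CPU over the interval
    ram = psutil.virtual_memory().percent          # % physical RAM in use
    peak["cpu_pct"] = max(peak["cpu_pct"], cpu)
    peak["ram_pct"] = max(peak["ram_pct"], ram)

print(f"Observed peaks: CPU {peak['cpu_pct']}%  RAM {peak['ram_pct']}%")
```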

"It really allows you to run various types of scenarios, looking at what the tool is going to recommend you provide for virtualization hosts and then what sort of resources they have available," said Scott Gorcester, president of solution provider Moose Logic located in Bothell, Washington. Gorcester added that the results of Microsoft's tool are reasonably accurate even with VMware and Citrix virtualization platforms.

Although capacity planning often relies on short-term data, the true benefit of planning is to ensure that servers can provide adequate resources as workload needs grow over time. There is no single formula or process to tell an administrator how far out to plan, so common sense should be your guide. How far into the future you plan should be dictated first by the number and nature of the workloads you're running. For example, an organization with several static workloads that are not expected to change can probably plan years ahead. Conversely, organizations that are rapidly adding new workloads or users should probably limit their planning to only several months out -- any further would simply be too inaccurate to be useful.
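For a back-of-the-envelope planning horizon, you can fit a trend line through historical peak-utilization readings and extrapolate to the point where headroom runs out. The Python sketch below uses hypothetical monthly CPU figures and simple least-squares math; real demand is rarely this linear, which is exactly why long-range extrapolations lose value.

```python
# Back-of-the-envelope growth projection: fit a straight line through
# monthly peak-utilization readings and estimate when the host runs out
# of headroom. Purely illustrative -- all figures are hypothetical.

history = [42, 45, 49, 51, 56, 58]  # peak CPU % observed over six months
limit = 85                          # % utilization considered "full"

n = len(history)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(history) / n

# Least-squares slope and intercept for y = slope * x + intercept.
num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
den = sum((x - x_mean) ** 2 for x in xs)
slope = num / den
intercept = y_mean - slope * x_mean

if slope <= 0:
    print("No growth trend; capacity horizon is effectively open-ended.")
else:
    months_left = (limit - intercept) / slope - (n - 1)
    print(f"Growing ~{slope:.1f} points/month; "
          f"~{months_left:.0f} months of headroom left at this rate.")
```

With these numbers the trend works out to roughly 3.3 points of utilization per month, or about eight months of remaining headroom -- the kind of figure that tells you whether to plan in months or in years.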

Gorcester suggests that it's best to build in extra capacity from the beginning because that's often more economical than upgrading servers later on. The idea is that virtualization eases many IT tasks, such as maintenance and machine provisioning, and those savings will almost always recoup the extra money spent on an "oversized" server. "If you overbuild a little bit, you're providing more stability, more usability, some reserve capacity, and that reserve capacity will stand there waiting to handle spikes in usage," he said. "You get optimal performance and the ability to easily add some workloads or servers without increasing costs much."

Don't forget to include business plans and consider the impact of technology refresh cycles on capacity planning. For example, the move to virtualization can reduce the number of servers, but the more powerful servers needed to support additional VMs cost more apiece. The ability to redeploy older servers is also leading many organizations to keep their hardware longer. The result is less frequent technology refreshes, which can make buying fewer, more powerful servers even more cost-effective.

You can follow SearchWindowsServer.com on Twitter @WindowsTT.

ABOUT THE AUTHOR
Stephen J. Bigelow, senior features writer, has more than 15 years of technical writing experience in the PC/technology industry. He holds a BSEE, CompTIA A+, Network+, Security+ and Server+ certifications and has written hundreds of articles and more than 15 feature books on computer troubleshooting. Contact him at sbigelow@techtarget.com.

This was first published in January 2011
