The downside of Windows application server consolidation

While server consolidation certainly has its benefits, it is not without its shortcomings. Expert Christa Anderson walks through the cost, memory and redundancy issues IT managers should be aware of when consolidating application servers.

Christa Anderson

Server consolidation is a popular -- and sensible -- idea for some very good reasons, including reduced administration costs. But when you're considering consolidating servers, you need to take reduced redundancy into account. Let's illustrate this with one server type ripe for consolidation: application servers.

Centralized application delivery is one of those overnight successes that took about fifteen years to come to fruition.

Citrix Systems Inc. has been adding capabilities to enable this model for the enterprise, and Microsoft has been pushing this delivery system for the small to medium-sized enterprise market. The introduction of offline application streaming from Citrix and Microsoft (through Microsoft's purchase of Softricity) even removes one of the real sticking points of the centralized delivery model: its dependence on network reliability -- or on the network being available at all when users need their applications but are offline.

The single application server bottleneck

When you deliver applications to remote users through the centralized model, the users typically connect to a load-balanced server farm but eventually make a connection to a single server, as brokered by the logic set up in the load balancer. One of the problems plaguing centralized application delivery from the beginning has been the number of users that a single application server can support.

Historically, the cause of the bottleneck generally hasn't been processor power or the network, but memory. Although a 32-bit operating system can address 4 GB of memory, this memory isn't available in a single large pool but is divided into discrete chunks devoted to certain tasks. The upshot is that if the space used to support remote user logins gets used up, the server can't accept any more logins, even if memory in other areas remains unused.
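To make that ceiling concrete, here is a toy back-of-the-envelope sketch -- the pool size and per-session figures are illustrative assumptions, not actual Windows kernel limits -- of how a fixed chunk of memory reserved for session bookkeeping can refuse new logins while most of the 4 GB address space sits idle:

```python
# Toy model: a fixed pool for session structures fills up long before total RAM does.
# All figures below are illustrative assumptions, not real Windows kernel limits.

TOTAL_RAM_MB = 4096          # what a 32-bit OS can address
SESSION_POOL_MB = 256        # hypothetical slice reserved for remote-session bookkeeping
PER_SESSION_POOL_KB = 4096   # hypothetical kernel memory consumed per logged-in session

max_sessions = (SESSION_POOL_MB * 1024) // PER_SESSION_POOL_KB
print(f"The session pool alone caps the server at {max_sessions} concurrent sessions,")
print(f"even though {TOTAL_RAM_MB - SESSION_POOL_MB} MB of the 4 GB address space is untouched.")
```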

As centralized application delivery has gained more traction inside Microsoft, the memory available to support remote users has increased, and the way that memory is used has become more efficient. That means that Windows Server 2003 can support many more users than did Windows Terminal Server -- the first version of Windows to enable this kind of remote access. Win2K3 improved the memory layout and usage so that a Win2K3 box can support more users than one running Win2K. Still, a 32-bit virtual memory space limits the number of users an application server can support. So long as memory restricts the number of concurrent connections to an application server, it potentially limits the number of users in an enterprise who can use this model to access their applications.

Sixty-four-bit operating systems remove the memory bottleneck. Where 32-bit servers count the concurrent connections they can support in dozens, 64-bit systems count the users they can support in hundreds. According to one IBM white paper, Running Citrix Presentation Server on X3 Architecture Servers, the company was able to support more than 600 concurrent connections on a single server. That means many fewer servers -- fewer servers to maintain, less space required to house them and less air conditioning required to cool them. Combined with the improvements to the remote application experience and some new technologies now available, this makes application servers a good choice for more workers.

Brace yourself for the cost

The downside of this consolidation is cost. Fellow MVP Benny Tritsch and I did a study for visionapp during the summer of 2006, called Performance Benefits of 64-bit Operating Systems in terminal server environments. The study compared performance on equivalent 32-bit and 64-bit servers, with each remote session running a few standard applications (Microsoft Word, Notepad and a command line) under Microsoft's Windows Terminal Services.

You can read the technical details here, but the upshot of the study was this: On a server with 16 GB of RAM, you are no better off with the 64-bit operating system. Although many users could connect to the 64-bit servers, after a certain point they couldn't actually run any applications -- our tests showed their sessions, once created, froze up due to lack of memory. The problem is partially that 64-bit operating systems use more memory than 32-bit operating systems, and we didn't add enough memory to compensate for the difference. According to Microsoft, Citrix and IBM, performance would have improved with 32 GB of RAM.

Memory problems

So why didn't we add more memory? The truth is that we couldn't afford it. RAM is very expensive. To explain what RAM has to do with it, I need to delve into the workings of memory management on Windows operating systems for a moment. A 64-bit operating system can address vastly more memory than a 32-bit one. However, that storage space can be backed either by RAM, which is very fast but expensive, or by disk, in an area called the page file, which is cheap but relatively slow. You have to have a certain amount of RAM for the system to work at all, and the more RAM you have relative to the amount of virtual memory in use, the more information the OS can keep in fast RAM and the better the system will perform.
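To see why that ratio of RAM to in-use memory matters so much, here's a rough sketch with made-up latency figures -- purely illustrative, not measurements from the study or from any real hardware -- showing how average memory access time degrades as more of the working set spills into the page file:

```python
# Rough illustration: the more of the in-use data that fits in RAM, the faster
# the average access. Latency figures are made up for illustration only.

RAM_ACCESS_NS = 100            # assumed cost of touching data already in RAM
PAGEFILE_ACCESS_NS = 5_000_000 # assumed cost of faulting a page in from disk

def avg_access_ns(fraction_in_ram: float) -> float:
    """Weighted average access time for a given RAM hit rate."""
    return fraction_in_ram * RAM_ACCESS_NS + (1 - fraction_in_ram) * PAGEFILE_ACCESS_NS

for hit_rate in (1.0, 0.99, 0.9, 0.5):
    print(f"{hit_rate:>4.0%} of working set in RAM -> ~{avg_access_ns(hit_rate):,.0f} ns per access")
```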

But adding a lot of RAM isn't cheap. The larger the module (the physical object that adds RAM to a computer), the more it costs. Computers only have so many slots, so to add a lot of memory to a computer you need to add it in big modules.

The upshot of all of this is that, to get IBM's 600 users on a single application server, you need a server with 64 GB of RAM, which will cost you about $35,000, according to a March 2007 quote I received. Supporting that many users with 32-bit systems with 4 GB of RAM each would cost you about $4,000 per server -- about $40,000 in all, if we assume 60 users per server and thus 10 servers. So far, then, server consolidation makes sense.
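The arithmetic behind those figures, using the assumptions already stated here (600 users, roughly 60 users per 32-bit box, and the 2007 price quotes), works out like this:

```python
# Back-of-the-envelope cost comparison using the figures quoted in the text.
USERS = 600

# Option A: one big 64-bit server (64 GB RAM, March 2007 quote)
big_server_cost = 35_000

# Option B: a farm of 32-bit servers (4 GB RAM each, ~60 users apiece)
users_per_small_server = 60
small_server_cost = 4_000
small_servers_needed = -(-USERS // users_per_small_server)  # ceiling division -> 10
farm_cost = small_servers_needed * small_server_cost

print(f"64-bit: 1 server,   ${big_server_cost:,}")
print(f"32-bit: {small_servers_needed} servers, ${farm_cost:,}")
```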

Reduced redundancy

But what happens if your huge server fails? If you use 10 load-balanced 32-bit servers to support 600 users and one server fails or has to come offline for maintenance, you've lost one-tenth of your capacity and the users likely won't notice any difference. If the huge server goes offline, you've lost everything and no one can work. And when a single server costs $35,000, keeping spare capacity on hand becomes very expensive.
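One way to see the redundancy penalty is to price a single spare server into each design -- a hypothetical "N+1" comparison for illustration, not something the study measured:

```python
# Illustrative 'N+1' comparison: add one spare server to each design and see
# how the loss of a single machine affects cost and capacity.

# 32-bit farm: 10 servers at $4,000, ~60 users each
farm_with_spare = (10 + 1) * 4_000          # $44,000
farm_loss_pct = 100 / 10                    # losing one box drops ~10% of capacity

# 64-bit consolidated: 1 server at ~$35,000
big_with_spare = (1 + 1) * 35_000           # $70,000
big_loss_pct = 100.0                        # losing the box drops 100% of capacity

print(f"Farm + spare:       ${farm_with_spare:,}, single failure costs ~{farm_loss_pct:.0f}% of capacity")
print(f"Big server + spare: ${big_with_spare:,}, single failure costs {big_loss_pct:.0f}% of capacity")
```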

For IT managers, this means that consolidating onto 64-bit servers reduces some administrative, hardware and logistical costs, but increases the up-front hardware investment if you're going to provide redundancy. Balancing the two will become easier as hardware prices drop, but it's still a consideration today, especially for smaller organizations.

Next time: What are companies that offer 64-bit servers doing to mitigate the risk of server failure?

Christa Anderson, a Terminal Server MVP and co-author of the best-selling Mastering Windows Server 2003, is the director of Strategic Technologies for visionapp GmbH, the experts in a practical approach to ITIL best practices for service support and service delivery. You can reach her at christa.anderson@visionapp.com.
