
Rethinking capacity planning with Windows Server 2008 R2

With the RTM of Windows Server 2008 R2, IT managers need to reconsider capacity planning. The server's 64-bit capability will call for new hardware.

With the recent release to manufacturing of Windows 7 and Windows Server 2008 R2, it is incumbent upon any IT manager to consider future capacity planning given that an eventual upgrade to the new OS is inevitable.


These new Microsoft operating system platforms are based on a different code tree than their predecessors, and this will impact the hardware they run on.

Table 1 shows the basic hardware requirements for Windows Server 2008 and 2008 R2. While these requirements are very similar, the most important difference is that 2008 R2 is limited to x64 and Itanium platforms – and that will require some planning on your part. Microsoft has said it will not produce any more x86 (32-bit) server products. Exchange 2007 and later products will only run on x64 platforms, so this is just another move in the 64-bit direction. Because of this new 64-bit requirement, IT managers working with Windows Server 2008 R2 need a hardware planning strategy.

Table 1: Windows Server 2008 and 2008 R2 hardware requirements

Windows Server 2008 R2
  Processor:   x64 – 1.4 GHz; IA64 – Itanium 2
  Memory:      512 MB minimum; maximums: 8 GB (Foundation), 32 GB (Standard), 2 TB (Enterprise, Datacenter, Itanium)
  Disk space:  32 GB minimum (10 GB for Foundation); more needed if RAM exceeds 16 GB

Windows Server 2008
  Processor:   x86 – 1 GHz; x64 – 1.4 GHz; IA64 – Itanium 2
  Memory:      512 MB minimum; x86 maximums: 4 GB (Standard), 64 GB (Enterprise, Datacenter); x64 maximums: 8 GB (Foundation), 32 GB (Standard), 2 TB (Enterprise, Datacenter, Itanium)
  Disk space:  20 GB minimum (x86), 32 GB (x64), 10 GB (Foundation); more needed for the page file if RAM exceeds 16 GB

New 64-bit technology has incredible potential to maximize hardware investment. Server hardware has outpaced the 32-bit architecture for several years now, because a 32-bit architecture can address only 2^32 bytes (4 GB) of physical memory. With servers easily configured with many times more than 4 GB of memory, it is the OS that limits the ability to use physical resources.

So, in spite of having, say, 16 GB of physical RAM, a 32-bit OS can address only 4 GB of it. When the code and data used by the OS and applications no longer fit in addressable physical memory, they are stored in virtual memory instead. This is accomplished with a "page file," a file that exists on disk, and physical memory plus the page file together are referred to as "virtual memory."

In order to execute instructions that are in the page file, the instructions must be moved to a physical address with existing instructions moved out to make room. This is referred to as paging and it degrades performance compared to executing instructions directly from physical memory. You could use some tricks to squeeze out a bit more addressable memory, but it's really robbing Peter to pay Paul and causes more paging.
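The performance cost described above can be illustrated with a toy simulation (a deliberately simplified sketch, not how the Windows memory manager actually works): a small set of "physical" frames served by FIFO replacement, where every access to a page not currently resident counts as a page fault that would require a slow trip to the page file on disk.

```python
from collections import deque

def count_page_faults(accesses, num_frames):
    """Count page faults for a reference string using simple FIFO replacement."""
    frames = deque()     # pages currently resident in "physical memory"
    resident = set()
    faults = 0
    for page in accesses:
        if page not in resident:
            faults += 1  # page must be fetched from the page file on disk
            if len(frames) == num_frames:
                evicted = frames.popleft()   # make room: evict the oldest page
                resident.remove(evicted)
            frames.append(page)
            resident.add(page)
    return faults

# The same workload faults far more often when physical memory is scarce.
workload = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4] * 10
print(count_page_faults(workload, num_frames=3))  # many faults: heavy paging
print(count_page_faults(workload, num_frames=5))  # all pages fit: 5 compulsory faults
```

Adding enough frames to hold the whole working set eliminates all but the initial faults, which is exactly the effect of giving a 64-bit OS enough physical memory to avoid paging.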

The 64-bit architecture, on the other hand, can address 2^64 bytes (16 exabytes) of physical memory. Very simply, this allows applications and OS functions to load in physical memory (assuming you have sufficient physical memory) and execute much faster.
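The arithmetic behind these two ceilings is straightforward and worth sanity-checking:

```python
# Maximum memory a flat 32-bit vs. 64-bit address space can reference.
addr_32 = 2 ** 32   # bytes addressable with 32-bit addresses
addr_64 = 2 ** 64   # bytes addressable with 64-bit addresses

print(addr_32 // 2 ** 30)   # 4  -> the 4 GB ceiling of 32-bit Windows
print(addr_64 // 2 ** 60)   # 16 -> 16 exabytes, far beyond any server today
```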

While x64 hardware allows a 32-bit Windows OS to run on it, you need 64-bit Windows to take advantage of the memory addressing. If you have 4 GB or less of RAM, you can get away with 32-bit Windows. It is important to note that 32-bit applications can run on x64 Windows in a compatibility layer called WOW64 (Windows on Windows 64). They might run a little faster than they do on a 32-bit platform, but it really depends on the application. Applications must be written for x64 to take advantage of the 64-bit technology. Note, too, that 16-bit applications will no longer run on x64 platforms.
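Whether a given process is actually getting 64-bit addressing can be checked at run time. Here is a small sketch using only the Python standard library (it inspects the current process's pointer width; the Windows-specific WOW64 query APIs are deliberately omitted):

```python
import platform
import struct

# Pointer width of the *current process* -- a 32-bit application running
# under WOW64 reports 4 bytes here even when the host OS is x64.
pointer_bytes = struct.calcsize("P")
process_bits = pointer_bytes * 8

print(f"Process is {process_bits}-bit")
print(f"Machine reports: {platform.machine()}")

# A 32-bit process is confined to the 2**32-byte address space regardless
# of how much physical RAM the x64 host has installed.
if process_bits == 32:
    print("Limited to 2**32 bytes of address space")
```

This is why simply moving a 32-bit application onto x64 Windows does not lift its memory ceiling; the binary itself must be built for x64.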

Of course Exchange Server, a notorious memory hog, will definitely take advantage of 64-bit architecture, and domain controllers and virtual server technology will be beneficiaries as well. You can configure 64-bit DCs to cache the entire NTDS.dit (the AD database) in memory, making a huge difference in DC performance in large AD deployments. Virtual server technology, which runs several servers on a single physical host, can now map large quantities of physical memory, which makes it possible to put more virtual machines on the host. This allows the physical machines to be used more efficiently.

That was the good; now for the bad and ugly

In spite of the advantages, x64 technology does not cure all performance issues; other factors such as CPU speed and disk I/O affect performance too. But Windows Server 2008 is the last version available in a 32-bit edition, so the move to x64 is unavoidable.

Here are the disadvantages of deploying 2008 R2 servers:

  • No upgrade from Windows Server 2008 32-bit. (Note: There is never an upgrade option from any x86 version of Windows to x64, which will pose a challenge in the migration of domain controllers.)
  • Active Directory will require a schema upgrade to deploy 2008 R2 DCs.
  • A new set of service packs and security updates is required. This will add to the complexity of change management, leaving you with Windows Server 2003, 2008 and 2008 R2 each requiring its own set of patches and service packs.
  • Legacy 16-bit apps will not run on 2008 R2 because of the x64 limitations. Maybe this is the time to get rid of these applications.
  • Not all applications will notice a significant difference just because it's x64 technology. You must do your own benchmarking.

The advantages of deploying 2008 R2 servers include the following:

  • All of the 64-bit memory addressing advantages noted previously. This allows Windows and applications to actually use all of the memory you buy for the servers.
  • Improvements in 2008 failover clustering
  • Improvements in Hyper-V virtualization features
  • PowerShell V2, including many new Active Directory cmdlets
  • Active Directory improvements such as the AD recycle bin and new management tools
  • Deployment of new features of BranchCache and DirectAccess (working with Windows 7). DirectAccess and VPN improvements have the potential to yield huge economic benefits to an organization's remote access deployment.

All in all, Windows Server 2008 R2 is a new operating system version and the road to future Windows deployments. When developing a capacity plan, be sure to examine your company's roadmap to 64-bit server technology. You'll get better application performance and hardware usage, particularly in virtualization scenarios.

Gary Olsen is a systems software engineer for Hewlett-Packard in Global Solutions Engineering. He authored Windows 2000: Active Directory Design and Deployment and co-authored Windows Server 2003 on HP ProLiant Servers. Gary is a Microsoft MVP for Directory Services and formerly for Windows File Systems.
