Making the right Windows Server 2012 R2 hardware decision

Server hardware selection can have a profound impact on operating system and workload performance.

Q: Are there hardware choices that can improve Windows Server 2012 R2 performance?

Every Windows administrator wants better performance from a server. Performance improvements are an excellent way for a business to get more value from its hardware investment -- and top productivity from each workload. Performance discussions often focus on hardware choices in the server itself, but it's not always possible to buy the latest and greatest name-brand systems. In many cases, readily available upgrades and tweaks can also bring measurable improvement, especially around CPUs, memory and storage.

There are certainly some well-known Windows Server 2012 R2 hardware performance improvements, such as selecting faster 64-bit processors, models with additional cores, or both -- but these basic choices affect raw computing capacity. The processors in virtualized servers should also support second-level address translation (SLAT), which lets the hardware map guest memory addresses to physical memory addresses directly and enhances hypervisor performance. SLAT is called extended page tables, or EPT, on Intel chips and nested page tables, or NPT, on AMD chips.
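If you're unsure whether an existing host supports SLAT, Windows' built-in systeminfo command lists it under the Hyper-V requirements section (that section only appears before the Hyper-V role is installed; Sysinternals Coreinfo is an alternative). The Python sketch below simply runs systeminfo and scans for that line -- a minimal illustration, not a supported management script.

    import subprocess

    def slat_reported_by_systeminfo():
        """Run Windows' built-in systeminfo command and look for the
        'Second Level Address Translation' entry it prints under the
        Hyper-V Requirements section."""
        output = subprocess.run(
            ["systeminfo"], capture_output=True, text=True, check=True
        ).stdout
        for line in output.splitlines():
            if "Second Level Address Translation" in line:
                return "Yes" in line
        return None  # section not shown (e.g., Hyper-V role already installed)

    if __name__ == "__main__":
        print("SLAT reported:", slat_reported_by_systeminfo())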

Processor cache selection is often overlooked as a performance tweak. Processors store recently accessed instructions and data in a relatively small area of fast memory within the processor itself. Any instruction or data fetch will check the cache first. When the required content is in cache (a cache hit), the processor does not need to fetch that content from system memory -- resulting in vastly enhanced performance. When the required content is not in cache (a cache miss), a normal memory access cycle takes place. Large L2, L3 or even L4 (last level cache or LLC) caches will typically benefit system performance. Current Intel Xeon E5 processors can provide up to 30 MB of cache depending on the specific CPU model.
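The effect is easy to see with the standard average-memory-access-time arithmetic. The latency figures in the sketch below are round, hypothetical numbers chosen only to illustrate the relationship; real values vary widely by processor and memory configuration.

    # Average memory access time (AMAT) = hit time + miss rate x miss penalty.
    # All latencies here are hypothetical round numbers for illustration only.

    def amat_ns(hit_time_ns, miss_rate, miss_penalty_ns):
        return hit_time_ns + miss_rate * miss_penalty_ns

    LLC_HIT_NS = 10        # assumed last-level cache hit latency
    DRAM_PENALTY_NS = 80   # assumed extra cost of fetching from system memory

    for miss_rate in (0.01, 0.05, 0.20):
        print(f"{miss_rate:4.0%} miss rate: "
              f"~{amat_ns(LLC_HIT_NS, miss_rate, DRAM_PENALTY_NS):.1f} ns average access")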

Select memory modules that match the memory bus speed the processor supports. Typical DDR3 memory modules are available at 1333 MHz, 1600 MHz and 1866 MHz. Be sure to select the speed that matches the processor's best supported bus speed. Faster memory doesn't hurt, but it costs more and doesn't help performance because the modules will simply be clocked down to the processor's speed. Slower memory will prevent the system from running at its optimum level. For example, if you select a processor that supports a 1600 MHz memory bus, select 1600 MHz memory modules too -- 1866 MHz modules will just waste money, and 1333 MHz modules will force a slower bus speed and degrade memory performance.
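To put rough numbers on that, peak bandwidth per memory channel is approximately the effective transfer rate multiplied by 8 bytes per transfer, so a DDR3-1600 module peaks near 12.8 GB/s per channel, while one forced down to 1333 MT/s tops out around 10.7 GB/s. A back-of-the-envelope sketch:

    # Approximate peak bandwidth per DDR3 channel:
    # effective transfer rate (MT/s) x 8 bytes per transfer.

    def peak_channel_bandwidth_gbs(transfers_mts):
        return transfers_mts * 8 / 1000.0  # GB/s

    for rate in (1333, 1600, 1866):
        print(f"DDR3-{rate}: ~{peak_channel_bandwidth_gbs(rate):.1f} GB/s per channel")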

The benefit of memory quantity is a similar argument: A memory surplus doesn't really help performance, but a memory shortage can devastate it. The trick with memory is to ensure that enough is present to support the operating system and all the workloads expected to reside on the system. Additional memory costs more and adds no performance benefit (unless you're providing extra capacity for possible virtual machine migration or failover support). However, if memory runs short, the system will use local disk storage as supplemental space. This "disk paging" can seriously degrade performance because disk access is far slower than memory access; in addition, carelessly located paging files may compete with other frequently accessed disk files.
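A simple way to size this is to add up the memory each expected workload needs plus the operating system and any failover headroom, then round up to the next practical module configuration. The figures in the sketch below are illustrative placeholders only, not sizing recommendations.

    # Hypothetical capacity-planning sketch -- every figure here is a
    # placeholder, not a recommendation for any real workload.

    os_gb = 4
    workloads_gb = {"database instance": 32, "file services": 8, "web tier VM": 16}
    failover_headroom_gb = 16   # assumed spare capacity for migrated VMs

    required_gb = os_gb + sum(workloads_gb.values()) + failover_headroom_gb
    print(f"Plan for at least ~{required_gb} GB of RAM to avoid paging under load")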

Today, most storage and network interface devices rely on the PCI Express (PCIe) interface, so be sure to select a server with an adequate number of PCIe slots for expansion. Extremely high-bandwidth devices, such as 10 Gigabit Ethernet (10 GigE) network adapters, can encounter bus contention when confined to just a few PCIe lanes, so opt for slots with at least eight lanes (PCIe x8).
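The arithmetic behind that guidance: a single 10 GigE port moves roughly 1.25 GB/s in each direction, while a PCIe 2.0 lane carries about 0.5 GB/s and a PCIe 3.0 lane about 1 GB/s after encoding overhead. The sketch below compares approximate slot bandwidth against a dual-port adapter; the figures are rounded, per-direction estimates.

    # Rough per-direction bandwidth comparison (GB/s), rounded figures.
    TEN_GIGE_GBS = 10 / 8                      # one 10 GigE port
    LANE_GBS = {"PCIe 2.0": 0.5, "PCIe 3.0": 0.985}

    for gen, per_lane in LANE_GBS.items():
        for lanes in (1, 4, 8):
            slot = per_lane * lanes
            ok = "yes" if slot >= 2 * TEN_GIGE_GBS else "no"   # dual-port adapter
            print(f"{gen} x{lanes}: ~{slot:.1f} GB/s -- handles dual 10 GigE: {ok}")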

If storage is local to the server (rather than on a storage area network or network-attached storage), you can improve storage performance by selecting small-form-factor disks with high rotational speeds and using several disks in a RAID group to spread read/write activity across multiple spindles and gain concurrency. For example, four relatively small 2.5-inch 15,000 RPM disks configured in a RAID 5 group will generally yield better storage performance than a single 3.5-inch 7,200 RPM disk of equal capacity. Some read-intensive or latency-sensitive applications may also benefit from solid-state drives (SSDs) instead of traditional mechanical disks.
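To see why, consider the commonly cited rule-of-thumb figures of roughly 175 IOPS for a 15,000 RPM spindle and 75 IOPS for a 7,200 RPM spindle, with RAID 5 imposing a write penalty of about four back-end I/Os per host write. The sketch below runs those approximate numbers for a 70% read workload; treat the results as order-of-magnitude estimates, not measured values.

    # Rule-of-thumb spindle IOPS (approximate): 15K RPM ~175, 7.2K RPM ~75.
    # RAID 5 write penalty: roughly 4 back-end I/Os per host write.

    def usable_iops(spindles, iops_per_disk, read_fraction, write_penalty):
        raw = spindles * iops_per_disk
        return raw * read_fraction + raw * (1 - read_fraction) / write_penalty

    print("4 x 15K RPM, RAID 5:", round(usable_iops(4, 175, 0.7, 4)), "IOPS")
    print("1 x 7.2K RPM, single disk:", round(usable_iops(1, 75, 0.7, 1)), "IOPS")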

This was first published in June 2014