Optimize your server's disk performance

How a server's hard disks are configured and maintained can have a big impact on server performance. Here's how to improve the efficiency of your server's disk subsystem.

Hard disks are the resources most heavily used by a server. The way a server's hard disks are configured and maintained usually has an enormous impact on the server's overall performance.

As such, it makes sense to spend a bit of time optimizing a server's disk usage. In this article, I'll share some techniques for improving the efficiency of your server's disk subsystem.

Note: Every server is configured differently. As such, disk configuration and optimization techniques that would be ideally suited to one organization might be detrimental to another. The only way to fully optimize a server's disk usage is to analyze which applications are running on the server, and how much of a workload those applications place on the server's disk subsystem.

Therefore, there is no one technique that will make your server run at peak performance. This article will provide you with some general techniques that will improve the performance of most types of servers.

File placement

One trick for optimizing a server's performance is to place files onto the appropriate hard drives. For example, the Windows Server operating system is almost always installed on the C drive. That being the case, I recommend using the C drive only for the operating system, although in some cases you can also install applications to the C drive without hurting performance much.

The OS component that consumes the most disk I/O time is the pagefile. I recommend placing the pagefile on a dedicated hard drive (not a dedicated partition). This prevents disk I/O related to virtual memory paging from disrupting other parts of the system.

Any data stored on a server should be stored on a dedicated volume. Since most of the server's disk I/O is usually related to reading and writing data, it usually makes sense to place the data on a RAID array. Which type of RAID to use depends on the level of performance and fault tolerance you need. RAID 10 arrays are a good choice because they combine the striping performance of a RAID 0 array with the fault tolerance of RAID 1 mirroring.
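To make the trade-offs concrete, here is a small illustrative sketch (not any vendor's tooling) that computes the usable capacity of a few common RAID layouts from the number and size of the member disks:

```python
# Illustrative only: usable capacity for common RAID levels, given n
# identical disks of a given size. The function name and figures are
# assumptions for this sketch, not a real storage API.

def usable_capacity(level: str, disks: int, size_gb: int) -> int:
    """Return usable capacity in GB for a simple RAID layout."""
    if level == "RAID0":    # striping only: all capacity, no redundancy
        return disks * size_gb
    if level == "RAID1":    # mirroring: half the raw capacity
        return disks * size_gb // 2
    if level == "RAID5":    # striping with parity: one disk's worth lost
        return (disks - 1) * size_gb
    if level == "RAID10":   # striped mirrors: half the raw capacity
        return disks * size_gb // 2
    raise ValueError(f"unknown level: {level}")

# Four 500 GB disks:
print(usable_capacity("RAID10", 4, 500))  # 1000 GB usable
print(usable_capacity("RAID5", 4, 500))   # 1500 GB usable
```

Notice that RAID 10 gives up half its raw capacity to mirroring, just like RAID 1, but spreads reads and writes across stripes for RAID 0-like speed.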

Memory

The amount of memory in your system makes a huge difference in disk subsystem performance. As mentioned above, placing the pagefile on a dedicated hard disk minimizes the impact that virtual memory paging has on the rest of the system.

The concept of virtual memory comes from a time when physical memory was extremely expensive. The idea was to compensate for a shortage of physical memory by allowing a portion of the hard disk to act as physical memory.

However, this technique is inefficient. For one thing, the hard disk is much slower than even the slowest physical memory. Hard disk access is measured in milliseconds; physical memory access is measured in nanoseconds.
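A quick back-of-the-envelope calculation shows just how wide that gap is. The figures below are illustrative assumptions (a roughly 5 ms disk seek, a roughly 100 ns memory access), not benchmarks of any particular hardware:

```python
# Compare the access times mentioned above: disk seeks are measured in
# milliseconds, physical memory access in nanoseconds.

disk_seek_s = 5e-3     # assumed ~5 ms average seek on a mechanical disk
ram_access_s = 100e-9  # assumed ~100 ns main-memory access

ratio = disk_seek_s / ram_access_s
print(f"Disk is roughly {ratio:,.0f}x slower than RAM")  # ~50,000x
```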

Another problem is that the OS cannot work directly with pages of memory stored in the pagefile on disk. If the OS needs to perform an action on such a page, that page must first be moved into physical memory. This process is called swapping or paging.

If a server's physical memory is full and it needs to access a page stored in the pagefile, the server must first move a currently unused page out of physical memory to the pagefile to make room, and then move the requested page from the pagefile into physical memory.

This paging process consumes not only disk resources but also a lot of CPU time and memory. Some of the server's physical memory is used to keep track of which pages are in RAM and which are in virtual memory. This is why the paging process is so inefficient. Some paging is normal, but excessive paging can lead to constant disk activity and kill a server's performance.
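The swap-out/swap-in cycle described above can be sketched in a few lines. This toy model uses a simple FIFO replacement policy; real operating systems use far more sophisticated algorithms (clock and LRU approximations), so treat it purely as an illustration of the mechanics:

```python
# Minimal sketch of paging: pages are faulted in from the pagefile, and
# when physical memory is full, the oldest resident page is swapped out.
from collections import deque

def access_pages(refs, frames):
    """Return the number of page faults for a page-reference sequence."""
    ram = deque()          # pages currently in physical memory
    faults = 0
    for page in refs:
        if page in ram:
            continue       # hit: no disk I/O needed
        faults += 1        # fault: page must come in from the pagefile
        if len(ram) == frames:
            ram.popleft()  # swap the oldest page out to make room
        ram.append(page)   # swap the requested page in
    return faults

# Too little RAM means near-constant faulting; more frames means fewer faults.
refs = [1, 2, 3, 1, 2, 3, 4, 1, 2, 3]
print(access_pages(refs, 2))  # 10 faults
print(access_pages(refs, 4))  # 4 faults
```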

The best way to reduce paging is to install as much memory as possible in your server. By default, Windows sizes the pagefile in proportion to the amount of RAM, so adding memory will actually grow the pagefile. The system's dependency on the pagefile, however, decreases, which frees up a considerable amount of disk time and CPU time.

Disk defragmentation

One way to keep hard disks performing well is to schedule a routine defragmentation. Hard disks are designed to write data in a linear fashion. But when files are deleted, gaps of empty space are left behind.

A hard disk will fill in this empty space before writing data to the empty space at the end of the drive. If a gap of empty space is not large enough to accommodate a file that's being written, the file is fragmented. This means that a portion of the file is inserted into the gap of empty space, and the rest of the file is placed either in the next gap of empty space or at the end of the drive, whichever comes first.
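The first-fit behavior just described can be modeled in a few lines. This is a toy allocator, not how NTFS actually works; it only shows how a file larger than any single gap ends up split across multiple extents:

```python
# Toy model of fragmentation: files fill gaps left by deletions, and a
# file larger than any single gap gets split. Offsets/sizes are in
# arbitrary "blocks"; all names here are assumptions for this sketch.

def write_file(gaps, size):
    """Allocate `size` blocks from a list of (offset, length) gaps.
    Returns the extents used; more than one extent means fragmentation."""
    extents = []
    for i, (offset, length) in enumerate(gaps):
        if size == 0:
            break
        used = min(size, length)
        extents.append((offset, used))
        gaps[i] = (offset + used, length - used)  # shrink the gap
        size -= used
    return extents

# Two small gaps left by deleted files, then free space at the end.
gaps = [(10, 4), (30, 4), (100, 1000)]
print(write_file(gaps, 10))  # [(10, 4), (30, 4), (100, 2)] -- fragmented
```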

Fragmentation is bad for system performance, because the disk heads need to move all over the surface of the hard disk to read or write files. Defragmenting a hard disk removes fragmentation and allows files to be stored in a linear fashion, which allows for much faster reads and writes.

Indexing

Depending on what your server is being used for, indexing can be good or bad. Indexing involves reading every file on a disk and building an index that tells the OS what each file contains. If users routinely search through large quantities of data looking for specific files, indexing can make those searches dramatically more efficient.

However, many administrators do not like indexing because the process consumes a lot of CPU time and memory. Some disk space is also used to store the index.

Here's my recommendation: Decide whether indexing is appropriate on a server-by-server basis. I index the file servers in my organization because I constantly have to search through my files to see if I've ever written about certain topics. However, my mail servers, database servers, DNS servers, domain controllers, etc., are not indexed. Indexing these types of servers would be pointless. In fact, if you want to improve performance, you might check your infrastructure servers to make sure they're not indexed.
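To see what an index buys you, here's a minimal word-to-files map in the spirit of an index. Windows' indexing service works very differently; this sketch (with made-up file names) only illustrates the trade-off: upfront CPU time and disk space in exchange for fast lookups instead of scanning every file per search:

```python
# Build a word -> files index once, then answer searches without
# re-reading any file contents. File names and contents are invented.
from collections import defaultdict

files = {
    "notes.txt": "raid arrays and pagefile tuning",
    "todo.txt": "defragment the file server",
    "mail.txt": "indexing the mail server is pointless",
}

index = defaultdict(set)
for name, text in files.items():   # upfront cost: read everything once
    for word in text.split():
        index[word].add(name)

# Each search is now a single dictionary lookup.
print(sorted(index["server"]))  # ['mail.txt', 'todo.txt']
```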

About the author:
Brien M. Posey, MCSE, is a Microsoft Most Valuable Professional for his work with Windows 2000 Server, Exchange Server and IIS. He previously served as CIO for a nationwide chain of hospitals and was in charge of IT security for Fort Knox. He writes regularly for SearchWinComputing.com and other TechTarget sites.

This was first published in March 2007
