
Best practices for setting up partitions for Exchange 5.5 on a Win2k server, part 2

This is a continuation of Best practices for setting up partitions for Exchange 5.5 on a Win2k server, an Active Directory and/or Migration Ask the Experts question.

Looking back at the last few paragraphs, you will note that we have identified three different types of disk access:

  • Random reads/writes
  • Sequential reads/writes
  • Random read-only
The various types of access should be grouped together on separate disk sets in order to improve performance. If you can also place them on separate controllers, you can enhance performance even further; however, measures like this are seldom required except in the most demanding situations.

Random reads/writes cause the mechanical portions of the hard drive to work very hard. Think about putting an encyclopedia set back on the shelf. If you were handed A, then Z, then Q, you would be moving back and forth placing each volume in the right spot. But if someone hands you J, K, L, you slide them in rapidly. The transaction logs are written and read much like the J, K, L volumes of our encyclopedia set. If you put the random reads on the same volume, the benefits of the sequential reads/writes are lost (you then have A, J, K, L, Z, Q -- slightly sequential, but still random enough to slow you down). Consider also that the system relies on the transaction logs being written as quickly and safely as possible: the transaction logs are the built-in recovery system for the database.
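The encyclopedia analogy can be made concrete with a toy model: add up how far the "head" must travel to service a sequence of requests. The positions and request lists below are illustrative stand-ins, not real drive geometry.

```python
# Toy model of seek overhead: total head travel (in abstract "positions")
# needed to service a sequence of requests. Short hops = sequential access;
# long jumps back and forth = random access.
def head_travel(requests):
    travel = 0
    position = requests[0]
    for target in requests[1:]:
        travel += abs(target - position)
        position = target
    return travel

sequential = [10, 11, 12, 13, 14]   # J, K, L... slide in back to back
random_mix = [1, 10, 26, 17, 5]     # A, J, Z, Q, E scattered across the shelf
print(head_travel(sequential))      # 4  -- short hops
print(head_travel(random_mix))      # 46 -- long seeks dominate
```

Even in this crude sketch, the random pattern costs more than ten times the head movement of the sequential one, which is why mixing random reads onto the transaction-log volume hurts so much.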

So you can see that there are also backup/recovery issues related to the type of access. The two-stage commit style of writing data to the database is part of a fault-tolerant process -- so why would you put the transaction logs and the database on the same device? If you lose the database disks to failure or corruption, you would restore the database and replay the transaction logs. However, if the transaction logs sit on the same disk that was corrupted, you have destroyed your recovery method.

For review, our general concepts for secure fast data access are:

  • There are three types of access (in our case): random reads, sequential reads/writes and random reads/writes.
  • Each type of access should be separated for performance.
  • Transaction logs are the built-in backup for databases and should be on a separate disk for performance.
Looking at databases a little more, we have to consider the growth factor. Databases are intended to store information and thus are expected to grow. Transaction logs are transient -- once the information is committed to the database, the transaction log can be removed. Growth of the logs over time is expected as the demands on the system increase, but the growth is gradual. The OS/application disks are not expected to grow at all (except for the occasional service pack). Using this information allows us to determine the type of fault-tolerant disk subsystem we will apply to each access type.

RAID 1 offers excellent read performance, as the system will read from both disks at once as long as both are healthy. RAID 1 is not expanded easily in most hardware/software configurations, so it is more suitable for slow-growing or stagnant file systems and applications -- for example, the OS and transaction logs. RAID 1 offers good fault tolerance, as the loss of one disk will not disable the system.

RAID 5 offers excellent expansion possibilities, as it can be grown fairly easily by most quality hardware RAID controllers. It is not a performance configuration, however: the overhead associated with the parity calculation makes it the slowest of the RAID configurations discussed here. The fault tolerance of RAID 5 is good -- you can lose one disk and still function. In addition, many systems allow you to configure a hot spare that is automatically rebuilt on the fly to replace the failed drive. Until the rebuild completes, you are exposed to a complete failure of the array. RAID 5 is nicely suited to offering economical fault tolerance for growing systems, such as database systems (or random reads/writes).
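The parity calculation that slows RAID 5 down is also what lets it survive a lost disk: each stripe stores an XOR parity block, and any single missing block can be rebuilt by XOR-ing the survivors. A minimal sketch, with made-up byte strings standing in for data blocks:

```python
# Sketch of the parity idea behind RAID 5: the parity block is the XOR
# of the data blocks in a stripe, so any one lost block can be rebuilt
# from the remaining blocks plus parity. Block contents are illustrative.
from functools import reduce

def parity(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

d0, d1, d2 = b"exch", b"ange", b"logs"   # data blocks in one stripe
p = parity([d0, d1, d2])                 # parity block written alongside

# Simulate losing the disk holding d1: rebuild it from the survivors.
rebuilt = parity([d0, d2, p])
print(rebuilt)  # b'ange'
```

Note that every write must update the parity block as well, which is exactly the overhead the paragraph above describes.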

RAID 0+1 (or RAID 10) offers superior performance and excellent expansion capabilities. RAID 0 is the striping of data across disks without parity; thus, RAID 0 by itself is NOT fault tolerant. However, spreading access across all drives increases performance. RAID 0+1 involves taking two RAID 0 stripe sets and mirroring them. Not all hardware controllers are capable of this particular configuration (Compaq and Adaptec server controllers come to mind as good choices). The result is an exceptionally fast and exceptionally fault-tolerant disk subsystem. For example, if you had two RAID 0 stripe sets of five disks each and mirrored them, you could potentially lose up to five hard drives before you had to worry. While this is a very fast solution, it is not for the financially squeamish: you will be paying for twice as many disks for the same amount of space.

Summarizing our RAID configuration choices:

  • RAID 1 for OS random reads
  • RAID 1 for transaction logs
  • RAID 5 (or RAID 0+1) for database
Here is a look at a few configurations.


Here we only have three to four hard drive slots in an internal system. You will make some definite concessions on performance for this configuration.

Use                     Access Type            RAID Type   Size
OS/Application/Logs     random read            1           9 GB minimum (18 GB suggested)
Database                random read/write      1           2x the projected database size

Here we have lumped the log files on the same drive as the application, breaking our rule of not mixing access types. Performance will be lost. If you selected a lower-end system (e.g., a PowerEdge 1650) because you are money conscious or your application demands are not strenuous, you will need to watch closely as your company grows -- you will most likely have to replace this entire system in a year or two.

If your system is down to three HDDs, you will be forced to either surrender fault tolerance or further compromise disk performance by creating a single RAID 5 array and carving logical volumes out of it. Even though the volumes are not on separate physical disks, creating logical volumes helps when you look to alter the configuration down the road for expansion purposes.


Here we only have six hard drive slots in an internal system. This is generally considered the minimum for an Exchange or database system.

Use                Access Type             RAID Type   Size
OS/Application     random read             1           9 GB minimum (18 GB suggested)
Log files          sequential read/write   1           25% of database size initially
Database           random read/write       1           2x the projected database size

The 2x in the database size column requires a little explanation. Generally, for database backups and recovery, you never want the drive on which the database is stored to be more than 50% full. If you cannot recover the files to the same drive, recovery times greatly increase. Also, regular maintenance activities such as offline defragmentation benefit from being able to write to the same drive.
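The sizing rules of thumb used in these tables can be captured in a small helper. This is a sketch of the rules stated in the text (database volume = 2x projected database size, log volume = 25% of database size initially); the 40 GB projection is an example figure, not from the article.

```python
# Hedged sizing helper for the rules of thumb in the tables above:
#   database volume = 2x the projected database size (keep it under 50% full)
#   log volume     = 25% of the database size initially
def volume_sizes_gb(projected_db_gb):
    return {
        "database": 2 * projected_db_gb,    # room for restores and offline defrag
        "logs": 0.25 * projected_db_gb,     # initial allocation; grows gradually
    }

print(volume_sizes_gb(40))  # {'database': 80, 'logs': 10.0}
```

So a database projected to reach 40 GB calls for an 80 GB database volume and roughly a 10 GB log volume to start with.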


Here we do not have restrictions on the number of drives; however, we try to remain economical in our approach.

Use                Access Type             RAID Type   Size
OS/Application     random read             1           9 GB minimum (18 GB suggested)
OS pagefile        random read/write       0           9 GB or less
Log files          sequential read/write   1           25% of database size initially
Database           random read/write       5 + HS      2x the projected database size

The primary changes here are that we have allowed for greater expansion of the database drives by using a RAID 5 configuration, which requires at least three drives. Optimally, you would want those drives in an external disk array to allow for the increasing size of the database. In addition, we have included a hot spare (HS). This gives you a backup for a failed hard drive in the form of an online spare. Of course, this drive represents unused space; you can do without it if you are confident in your failure-detection systems/processes and you keep spare drives on hand.

The pagefile for the operating system has been placed on a separate drive. Most server chassis have at least four drive bays, so the suggestion is to put the OS on two of the drives (mirrored) and the pagefile on an additional drive (no RAID required, or striping). The log files would be on a mirrored set of drives in the external array. This solution is the best for a growing company, as you can adjust the disk configuration as needed and expand it if you so desire. In addition, if the server itself completely fails, you hook the external storage unit up to a new server and away you go. Most external storage units come with an optional dual-controller configuration; if it is in the budget, you would be well advised to purchase this option.


Here is a serious solution for major Exchange implementations. We have disregarded any issues with monetary limitations.

Use                Access Type             RAID Type   Size
OS                 random read             1           9 GB minimum (18 GB suggested)
Application        random read             1           9 GB minimum (18 GB suggested)
OS pagefile        random read/write       0           9 GB or less
Log files          sequential read/write   1           25% of database size initially
Database           random read/write       0+1         2x the projected database size

Here we are most likely dealing with a system with five or six internal drives (e.g., Compaq DL380 G2, Dell PowerEdge 2650) and at least one, if not two, external storage units. The OS and the application have been separated into two mirror sets, using up four of the internal bays; the last bay is used for the pagefile. The external storage units are connected to the system, preferably via fiber connections. Dual fiber hubs and dual controllers would be used to provide full redundancy. The database is distributed between the two separate external arrays -- mirrored. The performance of this solution is remarkable even with nominal-quality hardware. However, even on the cheap side, this will be expensive.

Note: Part of this information is reproduced with permission of Paul Hinsberg and New Riders Publishing originally released as part of Windows NT Applications: Measuring and Optimizing Performance.

