I recently wrote about five administrative tasks that weren't worth the time and effort to perform. The article generated so much interest that I decided to follow it up with a list of such tasks related to storage and storage management.
1. Full-formatting a drive. Once upon a time, putting a drive into service without having every single sector physically tested was madness. Go back far enough and you can find drives that came from the factory with a defect list pasted on top of the drive, so you could pass along the appropriate instructions on which sectors to exclude during the format.
But with each new generation of hard drive technology, drives have become more thoroughly spec'd out and reliable, to the point where it's no longer as necessary to physically test a new drive. Most drives—especially those in high-end RAID arrays—have self-checking mechanisms that can automatically detect physical problems and relocate data at risk. If you're determined to waste several hours performing a disk test that might simply be redundant, it probably won't do any harm, but it'll be sure to slow you way down. (Check your drive array's documentation for the straight dope on how new drives are provisioned.)
Note: A major exception to this rule is if you're decommissioning a drive from service and don't want the data on it to be exposed to anyone outside your organization. In a case like that, don't even bother with formatting as a way to ensure the data's wiped: go with a third-party program like Darik's Boot and Nuke.
2. Defragmenting workstations more than once a week. I recently wrote a series of articles about the benefits of defragmentation, and one conclusion was that on a system less than three years old, defragmenting more than once a week in a workstation-type environment didn't provide any justifiable benefit.
The real performance killer is fragmentation plus low free space; workstations with drives that are more than 75% full need to either be cleaned off or upgraded. Servers, on the other hand, can benefit from being defragmented more aggressively, but only when it's not at the expense of performance. Defragment servers during off-peak hours (e.g., 4 a.m.) to keep the defrag process from slowing other things down.
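If you want to automate this, the built-in defrag.exe can be paired with the Task Scheduler to run during off-peak hours. A minimal sketch follows; the task name, drive letter, day, and time are illustrative only, and the exact schtasks syntax varies slightly between Windows versions:

```
rem Schedule a weekly defragmentation of C: for 4 a.m. on Sundays.
rem Task name, day, and time are examples -- adjust for your environment.
schtasks /create /tn "Weekly Defrag" /tr "defrag.exe C:" /sc weekly /d SUN /st 04:00:00
```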
3. Performing spurious offline defragmentation of Exchange databases. Exchange 2000 and 2003 defragment themselves internally once a day, by default at 2 a.m. Some Exchange administrators seem to get really twitchy about the amount of space used up by the Exchange database, since the only way to compact the database files is to run ESEUTIL (which essentially recreates the database in an entirely new file). ESEUTIL processes anywhere from 1GB to 7GB of database per hour (estimates vary widely), and you're going to have to take the whole database offline to do it.
Unless there's an overriding reason to run ESEUTIL—e.g., as part of a larger error-checking or crash-recovery operation, or when the database won't start—and unless you have a current backup of the database, it's a waste of time to run it just for the sake of reclaiming free space. One way to determine whether an offline defrag will be worth it is to inspect the application log on the Exchange server and look for event 1221, which contains an estimate of how much free space might be recovered.
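If you do decide an offline defrag is warranted, the invocation itself is simple; the database path below is an example only, since the location varies by installation:

```
rem Dismount the store first, and make sure you have a verified backup.
rem The path below is illustrative -- substitute your actual database location.
eseutil /d "D:\Exchsrvr\MDBDATA\priv1.edb"
```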
4. Running CHKDSK in read-only mode. CHKDSK needs to be run with the /F parameter to make any changes, and /F requires exclusive access to the volume: you can't run CHKDSK /F on a system drive without scheduling the check for the next reboot. Some people opt to run CHKDSK without the /F parameter—i.e., in read-only mode—to see whether or not a given partition needs to be error-checked in earnest.
The bad news is that CHKDSK in read-only mode doesn't give entirely accurate results about what might be wrong in the first place. It'll hint that something is wrong, but the only way to get a completely accurate picture of a system drive is to run it with /F and reboot. The disparity between the reports you get with and without the /F switch is due largely to files being locked for exclusive use.
If you're going to take the time out to run CHKDSK—whether because the dirty bit is set, or after a hard crash as a precautionary measure—do it right the first time. Also, on drives that have a lot of errors, CHKDSK /F may need to be run several times, until it no longer reports any problems. A suspect system should be taken offline entirely until it passes muster. (If you're forced to run CHKDSK /F more than three times in a row, try booting into Safe Mode and running it from there.)
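The commands involved are simple enough; the drive letter below is an example:

```
rem See whether Windows has flagged the volume as dirty.
fsutil dirty query C:

rem Run a full check; on the system drive, CHKDSK will offer to
rem schedule the check for the next reboot instead.
chkdsk C: /f
```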
5. Using CHKDSK /R to perform surface tests on some RAID arrays. The /R option in CHKDSK performs a surface test in an attempt to locate bad sectors and recover any data in them. It sounds like a good idea, but it's terribly slow, and on some RAID arrays it's both redundant and slow. For instance, the HP StorageWorks 1000 performs background surface tests on connected disks and relocates data away from any bad sectors it finds, to prevent future problems. Running surface tests through CHKDSK on such a drive is like polishing a no-wax floor. If the manufacturer has surface-test tools of its own, use those instead.
6. Using memory "optimizer" utilities. I shouldn't even have to discuss this one at this point, but I'm amazed at how often I hear people talking about it. There are a bunch of utilities—some freeware, some shareware, some commercial software—that claim to "defragment your system memory," mostly by allocating and then deallocating large blocks of physical memory. Do not use them! Why? For the simple reason that they try to second-guess the way the memory manager works in Windows, which simply trades one set of problems for another. If you need more physical memory, buy it. Memory is cheap.
About the author: Serdar Yegulalp is editor of the Windows Power Users Newsletter, which is devoted to hints, tips, tricks, news and goodies for Windows NT, Windows 2000 and Windows XP users and administrators. He has more than 10 years of Windows experience under his belt, and contributes regularly to SearchWinComputing.com and SearchSQLServer.com.