Improve NT file system performance

You can boost the performance of Windows NTFS, and Rick Cook's tip offers six suggestions to get you started.


Out of the box, the Windows NT file system (NTFS) is tuned for good performance with most applications. However, there are a number of things you can do to improve its performance -- under the right conditions.

  • Defragment regularly
    Of all the suggestions, this is the one that gives the most consistent performance improvement. Like any Windows file system, NTFS needs to be defragmented on a regular basis. "Regular" in this case means at least once a month for lightly used systems and at least weekly for systems that go through a substantial amount of file creation and deletion.

    While Windows 2000 and Windows XP include a built-in defragmentation utility, it is fairly rudimentary. You might want to invest in a more powerful defragmenter, such as Diskeeper, from Executive Software International Inc.

  • Set the indexing service appropriately
    The indexing service makes it faster for users to find documents and other files, but building the indexes takes a good deal of time and slows down the system. If the indexing service will be a significant benefit to the system's users, enable it. Otherwise, leave it disabled.

  • Use more partitions
    Partitioning your hard disk can significantly improve performance because it cuts down on the amount of territory the computer has to search to find files. Of course, this also helps organize your data more efficiently. If you put data in one set of partitions and applications in another, it can also speed backups since the data can be backed up more frequently than the applications.

  • Eliminate 8.3 file names
    As a holdover from earlier versions of Windows, NTFS creates an 8.3 file name (8 characters in the name and 3 characters in the extension after the period) in addition to long file names. In most cases, they're unnecessary, and you can use the fsutil command to prevent their creation. This saves a little time when each file is created.

    Like most of the rest of these techniques, this one only pays off if you have folders with a whole lot of files, particularly small files. Microsoft suggests 300,000 files per folder as a minimum for this technique to make any difference.

  • Make more folders
    Creating more folders not only makes files easier for users to find, it also helps the computer find them by cutting down the area that needs to be searched. While users will see the benefit of having more logically named folders with any number of files, increasing the number of folders generally only produces a performance improvement when you've got a lot of files in each folder. Again, Microsoft's magic number of 300,000 files applies.

    Of course, users aren't likely to stick 300,000 files in a folder, but software may. This will usually be something like a log file folder. It makes sense to move the files to another folder as they age, preferably automatically.

  • Adjust the cluster size
    This is a controversial topic, and some administrators will tell you it's more trouble than it's worth. In general, the default cluster size of 4K is perfectly adequate. However, in some cases, especially with very large files, you can improve performance and utilization by using larger clusters. Note that this only works if all the files, or at least the most frequently accessed ones, are very large.
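
The 8.3 and cluster-size suggestions above come down to a couple of commands. The drive letter below is a placeholder, and note that format erases the volume, so the second command only makes sense when setting up a new partition. Treat these as sketches to adapt, not commands to paste blindly:

```
rem Stop NTFS from generating 8.3 short names for newly created files
rem (existing short names are left in place).
fsutil behavior set disable8dot3 1

rem Format a data volume with 64K clusters instead of the default 4K.
rem WARNING: this destroys everything on the volume, so use it only
rem when creating a partition that will hold very large files.
format E: /FS:NTFS /A:64K
```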

How much of a performance improvement will you get by using these techniques? It can range from substantial to unnoticeable, depending on how many files you have on the system and how heavily the system is used. Of those on the list, only regular defragmentation is recommended for all systems in all circumstances.
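The automatic log-file aging suggested under "Make more folders" can be sketched in a few lines. The function name and parameters below are hypothetical, not part of any Windows tool; the script simply moves files older than a given age into an archive folder so the active folder never accumulates an unwieldy number of files:

```python
import os
import shutil
import time

def archive_old_files(src_dir, archive_dir, max_age_days):
    """Move files older than max_age_days from src_dir into archive_dir.

    Keeping the active folder small keeps NTFS lookups fast.
    """
    os.makedirs(archive_dir, exist_ok=True)
    cutoff = time.time() - max_age_days * 24 * 60 * 60
    moved = []
    for name in os.listdir(src_dir):
        path = os.path.join(src_dir, name)
        # Skip subfolders; only age out plain files.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            shutil.move(path, os.path.join(archive_dir, name))
            moved.append(name)
    return moved
```

Run on a schedule (for instance, via the Windows Task Scheduler) so the move happens automatically as files age.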

Rick Cook has been writing about mass storage since the days when the term meant an 80 K floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last 20 years he has been a freelance writer specializing in storage and other computer issues.
This was last published in June 2005
