On a regular basis, top Microsoft executives answer readers' toughest technical questions about Windows-based systems....
This installment of "Ask Microsoft" was answered by Mark Licata, Microsoft senior engineer at the Exchange Center of Excellence. To submit a technical question for consideration, send an e-mail to editor@SearchWinComputing.com.
Question: I have a well-performing server -- custom built -- with plenty of RAM. I have dual Xeon processors attached to an HP/Compaq MSA1000 Ultra3 SCSI drive array via a 2 Gb/s Fibre Channel controller, with 1 Gb/s networking attached to a decent network switch. I'm running Windows Server 2003 Standard. Performance is acceptable overall, except that deleting files slows the system down, and the slowdown can be felt by every user on the network using this server. Interestingly, copying the same data is much faster.
The server is current on all patches and updates. There was a hotfix months ago addressing performance when using a Fibre Channel controller, which appears to have been incorporated into SP1; it has not helped the problem. One tip I've received from my newsgroups is to try the same deletes locally on the server, to rule out the network. That tore up the system badly, slowing it enough that users lost their network connections to the server.
The server runs Diskeeper to keep the drive array defragmented. I've gutted the drive once, deleting and recreating it, and at that time used Diskeeper's utility to pre-expand the MFT. No help. Performance Monitor -- at least across the myriad counters I've watched -- shows the server working hard, but nothing points to why deletes and moves are slower than everything else. Any idea what's causing this problem?
Answer: You will need to use Sysinternals' Filemon (File Monitor) to see what is occurring.
A delete operation should be a simple change to the file's MFT record and to the volume free-space bitmap. But if there is heavy write activity, a background scrubber may be at work.
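Because a delete is mostly a metadata update (the MFT record and free-space bitmap on NTFS), it should take nowhere near as long as writing the data did. A minimal Python sketch of that expectation, run against whatever local filesystem is available rather than the questioner's NTFS array:

```python
import os
import tempfile
import time

# Create a 64 MB file, then time its deletion. On a healthy volume the
# delete is near-instant because only filesystem metadata is updated
# (on NTFS, the MFT record and the free-space bitmap); the data blocks
# are simply marked free, not overwritten.
path = os.path.join(tempfile.mkdtemp(), "big.bin")

t0 = time.perf_counter()
with open(path, "wb") as f:
    f.write(b"\0" * (64 * 1024 * 1024))
create_s = time.perf_counter() - t0

t0 = time.perf_counter()
os.remove(path)
delete_s = time.perf_counter() - t0

print(f"create: {create_s:.3f}s  delete: {delete_s:.3f}s")
```

If the delete instead takes time proportional to the amount of data, something below the filesystem (a scrubber, cache-bypass writes, or the array firmware) is doing per-block work.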
Also, some storage honors FUA (Force Unit Access), which bypasses the cache and goes directly to disk -- a poor choice for performance when the controller does this. The MSA is known to do this.
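The cost of bypassing the cache can be demonstrated without Windows-specific APIs. This is a hedged sketch: on Windows, FUA corresponds roughly to opening a file with FILE_FLAG_WRITE_THROUGH, and the nearest POSIX analogue is the O_SYNC open flag, which is what the example uses:

```python
import os
import tempfile
import time

# O_SYNC forces every write to reach stable storage before returning,
# roughly analogous to FUA / write-through on Windows storage stacks.
# Fall back to 0 (plain buffered I/O) on platforms without O_SYNC.
O_SYNC = getattr(os, "O_SYNC", 0)

def timed_writes(extra_flags, name, chunks=64, size=64 * 1024):
    """Write `chunks` buffers of `size` bytes and return elapsed seconds."""
    path = os.path.join(tempfile.mkdtemp(), name)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | extra_flags)
    buf = b"x" * size
    t0 = time.perf_counter()
    for _ in range(chunks):
        os.write(fd, buf)
    os.close(fd)
    elapsed = time.perf_counter() - t0
    os.remove(path)
    return elapsed

buffered_s = timed_writes(0, "buffered.bin")
synced_s = timed_writes(O_SYNC, "synced.bin")
print(f"buffered: {buffered_s:.4f}s  synchronous: {synced_s:.4f}s")
```

On spinning disks the synchronous run is typically far slower, which is why a controller that forces FUA on every request drags down operations that are otherwise cheap metadata updates.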
Benchmarking is also in order, to measure raw read/write times separately from NTFS allocate and de-allocate times. We need to establish that baseline performance is fine and that allocation/de-allocation is the problem.
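A minimal sketch of that split, again in Python against a local temp directory rather than the questioner's array: time sequential write and read of one large file (raw throughput), then time creating and deleting many small files (allocation/de-allocation, which is dominated by metadata work):

```python
import os
import shutil
import tempfile
import time

root = tempfile.mkdtemp()

# 1. Raw sequential write/read of one large file (~64 MB).
big = os.path.join(root, "big.bin")
data = os.urandom(4 * 1024 * 1024)
t0 = time.perf_counter()
with open(big, "wb") as f:
    for _ in range(16):
        f.write(data)
write_s = time.perf_counter() - t0

t0 = time.perf_counter()
with open(big, "rb") as f:
    while f.read(1024 * 1024):
        pass
read_s = time.perf_counter() - t0

# 2. Allocate/de-allocate: create then delete 500 small files, so the
# timing reflects metadata work rather than data transfer.
small_dir = os.path.join(root, "small")
os.mkdir(small_dir)
t0 = time.perf_counter()
for i in range(500):
    with open(os.path.join(small_dir, f"f{i}"), "wb") as f:
        f.write(b"x")
alloc_s = time.perf_counter() - t0

t0 = time.perf_counter()
for i in range(500):
    os.remove(os.path.join(small_dir, f"f{i}"))
dealloc_s = time.perf_counter() - t0

print(f"write {write_s:.3f}s  read {read_s:.3f}s  "
      f"alloc {alloc_s:.3f}s  dealloc {dealloc_s:.3f}s")
shutil.rmtree(root)
```

If the write/read numbers are healthy but the delete pass is disproportionately slow, the problem is in the allocation/de-allocation path, not raw disk throughput.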
We have no experience with Diskeeper, so you may need to remove it as a test. You can also benchmark deletions over the LAN as well as locally, to see what impact, if any, the network has.
Let me know what you discover. Capture the File Monitor data to see the access types and their origin. That should light the way.
-- Mark Licata, Microsoft senior engineer for the Exchange Center of Excellence