Despite the many improvements made to DFS in Windows Server 2003 R2, there are still limitations to the service, and you should keep them in mind to be certain that DFS is appropriate for your environment.
Perhaps the most important limitation is this: DFS Replication is suitable only for environments that can tolerate a degree of loose consistency between different copies of a particular document. Even with the performance enhancements of Remote Differential Compression (RDC), there will still be a window of inconsistency between servers in a replication group while a change is being replicated between them.
Consider the following situation. You've configured a replication schedule that only allows for DFS replication between 11 pm and 5 am for a site whose WAN link is nearly saturated during the day. If a user makes a change to a file stored on SERVERA at 1 pm on a Tuesday, and another user accesses the copy of that file that's stored on SERVERB at 2 pm on the same day, the second user will not see the changes that were made on SERVERA. The changes have not replicated yet.
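The scenario above can be sketched as a toy model. This is not DFS code; the two-server class, the hard-coded 11 pm to 5 am window and the file name are all illustrative assumptions, used only to show why a daytime change is invisible on the second server until the schedule reopens.

```python
# Toy model of schedule-limited replication between two servers.
# SERVERA/SERVERB and the 11 pm - 5 am window come from the example above;
# everything else is a simplifying assumption.

class Server:
    def __init__(self, name):
        self.name = name
        self.files = {}  # filename -> content

def in_window(hour, start=23, end=5):
    """True when replication is allowed; the window wraps past midnight."""
    return hour >= start or hour < end

def replicate(src, dst, hour):
    """Copy changes only while the replication schedule is open."""
    if in_window(hour):
        dst.files.update(src.files)

servera, serverb = Server("SERVERA"), Server("SERVERB")
servera.files["report.doc"] = "v1"
replicate(servera, serverb, hour=23)   # overnight sync: both hold "v1"

servera.files["report.doc"] = "v2"     # 1 pm Tuesday: change on SERVERA
replicate(servera, serverb, hour=13)   # window closed: nothing is copied
print(serverb.files["report.doc"])     # the 2 pm reader on SERVERB sees "v1"
```

The point of the sketch is simply that the 2 pm read on SERVERB returns the stale copy; the "v2" change cannot arrive before the window reopens at 11 pm.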
Even if you allow DFS replication to take place 24x7, limitations of network transmission speed mean that SERVERA's copy of a document might not be precisely in sync with SERVERB's copy at any given second. If you're working with documents that have zero tolerance for this type of loose consistency, you might want to consider a document collaboration solution (one that allows for document check-in and check-out) such as Microsoft SharePoint. But in most cases, the replication improvements offered by R2 DFS will provide an easy means of synchronizing files across multiple locations.
But what happens if two people create a replication conflict by managing to modify the same file at the same time while working from two separate servers? Similar to Active Directory replication, DFS Replication resolves conflicts by taking the "last writer wins" approach: whichever file was modified most recently wins the conflict and is replicated throughout the namespace. The "losing" file is renamed and stored in a ConflictAndDeleted folder on the server that processed the conflict, and details of the file are recorded in a ConflictAndDeletedManifest.xml file. This folder has a default quota of 660MB and is automatically purged when its size reaches 90% of that limit.
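The "last writer wins" behavior can be illustrated with a short sketch. To be clear, this is not the actual DFS Replication implementation: the function name, the (timestamp, content) tuples, and the loser's file name are all assumptions made for illustration. The only behavior borrowed from the article is that the most recently modified copy wins and the loser is stashed, renamed, in a ConflictAndDeleted folder rather than discarded.

```python
import os
import shutil
import tempfile

# Illustrative last-writer-wins resolution. Each replica of a file is
# modeled as a (mtime, content) pair; higher mtime means modified later.

def resolve_conflict(local, remote, conflict_dir):
    """Return the winning (mtime, content) pair and stash the loser."""
    winner, loser = (local, remote) if local[0] >= remote[0] else (remote, local)
    os.makedirs(conflict_dir, exist_ok=True)
    # Rename the losing copy into the conflict folder so nothing is
    # silently lost; a manifest would normally record these details.
    loser_name = "report.doc-{}".format(loser[0])
    with open(os.path.join(conflict_dir, loser_name), "w") as f:
        f.write(loser[1])
    return winner

local = (1300, "edit from SERVERA")    # saved at 1:00 pm
remote = (1305, "edit from SERVERB")   # saved at 1:05 pm: more recent
tmp = tempfile.mkdtemp()
winner = resolve_conflict(local, remote, os.path.join(tmp, "ConflictAndDeleted"))
print(winner[1])   # SERVERB's later edit wins and replicates everywhere
shutil.rmtree(tmp)
```

Note the consequence for users: the 1:00 pm edit from SERVERA is not merged with the winner, only preserved under a new name, which is exactly why check-in/check-out systems suit documents that cannot tolerate this.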
About the author: Laura E. Hunter (CISSP, MCSE: Security, MCDBA, Microsoft MVP) is a senior IT specialist with the University of Pennsylvania where she provides network planning, implementation and troubleshooting services for business units and schools within the university. Hunter is a two-time recipient of the prestigious Microsoft "Most Valuable Professional" award in the area of Windows Server-Networking. She is the author of the Active Directory Field Guide (APress Publishing).
More information on this topic:
- Tip: More replication improvements for DFS in R2
- Topics: Windows server management