Replication upgrades make DFS ideal for low-bandwidth branch offices

While the Distributed File System (DFS) has been around since the days of Windows NT 4.0, improvements made to DFS in Windows Server 2003 R2 make it much more attractive for Windows systems administrators. In particular, there have been advances in the areas of replication and file availability that can make DFS an ideal solution for low-bandwidth branch office environments.

In Windows Server 2003 R2, the DFS service has been split into two components: DFS-Namespace (DFS-N) and DFS-Replication (DFS-R). The DFS Namespaces feature in R2 offers three updated capabilities: target priority, client failback and delegated authority. While these three capabilities are quite useful, the most exciting advances for DFS in R2 are in the realm of DFS-Replication (DFS-R).

It is here that the new DFS really begins to shine, with a new replication algorithm that provides incredible performance gains for bandwidth-challenged environments such as a branch office separated from its corporate headquarters by a low-speed or heavily utilized WAN link.

Prior to R2, DFS used the File Replication Service (FRS) to replicate files between multiple link targets. FRS is the service used to replicate the information that's stored in the Active Directory SYSVOL share: logon/logoff scripts and Group Policy Objects. FRS uses RPC over TCP/IP to replicate files within a single site as well as between sites; FRS creates its own replication topology with its own schedule and connection objects that are controlled separately from Active Directory replication.

FRS triggers replication whenever a file is closed, with changes held in a 3-second aging cache to allow for files that are being changed frequently. Once this 3-second "waiting period" is up, the FRS service on the server hosting the changed file notifies its FRS replication partners, and the file is replicated across the FRS replication topology. For small files stored on lightly utilized servers, this process works quite well. But because FRS traffic is not compressed even when traversing site boundaries, replicating large files can be a tricky process that often results in replication errors or inconsistencies.
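To make that aging-cache behavior concrete, here is a minimal sketch in Python of the same debounce idea: a change is announced to replication partners only once the file has been quiet for three seconds. The function and variable names are invented for illustration; FRS itself is a Windows service, not a script.

```python
import time

AGING_SECONDS = 3  # FRS-style aging cache: wait for the file to settle

# Hypothetical in-memory cache of "last change seen" times, keyed by file path.
pending_changes = {}

def record_change(path):
    """Called whenever a file in the replica set is closed after a write."""
    pending_changes[path] = time.time()

def flush_aged_changes(notify_partner):
    """Notify replication partners only about files quiet for AGING_SECONDS."""
    now = time.time()
    for path, changed_at in list(pending_changes.items()):
        if now - changed_at >= AGING_SECONDS:
            notify_partner(path)   # pre-RDC FRS then replicates the whole file
            del pending_changes[path]

# Usage sketch:
# record_change(r"\\server\share\report.doc")
# time.sleep(AGING_SECONDS)
# flush_aged_changes(lambda path: print("replicate", path))
```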

DFS in R2 changes all of this by introducing a new replication algorithm called Remote Differential Compression (RDC). RDC functions by breaking files up into small "chunks"; it then replicates only the individual chunks of a file that have changed from one replication cycle to the next. For example, consider a Microsoft Word file that contains this line of text: "The quick brown fox jumps over the lazy white dog."

If someone changed the first few words of the sentence to read "The slow black fox…", RDC would replicate only that particular chunk, rather than sending the entire .DOC file across the wire. RDC does this by computing MD4 hashes of these small chunks, then comparing those hashes between the servers that are attempting to replicate.

If a particular chunk of a file has changed, then the MD4 hash of that chunk will change, while the hashes for the remaining chunks in the file remain the same. This allows an RDC-enabled server's replication partner to request only those chunks whose hashes have changed since the last time replication took place. For larger files that only need to replicate small changes, this provides a drastic reduction in replication time and improved performance for your users.
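The chunk-and-compare idea can be sketched in a few lines of Python. This is not the actual RDC implementation: real RDC computes MD4 signatures over content-defined chunk boundaries, whereas this illustration uses fixed-size chunks and SHA-256 (MD4 is often unavailable in modern hash libraries). It simply shows how per-chunk hashes let a replication partner request only the chunks that differ.

```python
import hashlib

CHUNK_SIZE = 64  # bytes; real RDC chunks are larger and boundaries are content-defined

def chunk_signatures(data, chunk_size=CHUNK_SIZE):
    """Hash each fixed-size chunk of the data (SHA-256 standing in for MD4)."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def changed_chunks(old_data, new_data, chunk_size=CHUNK_SIZE):
    """Return the indices of chunks whose hashes differ between two versions."""
    old_sigs = chunk_signatures(old_data, chunk_size)
    new_sigs = chunk_signatures(new_data, chunk_size)
    return [i for i, (o, n) in enumerate(zip(old_sigs, new_sigs)) if o != n]

old = b"The quick brown fox jumps over the lazy white dog. " * 100
# Change the first few words; the extra space keeps the length identical so the
# fixed-size chunk boundaries still line up in this simplified sketch (real RDC
# copes with insertions and deletions via its content-defined boundaries).
new = old.replace(b"The quick brown fox", b"The slow  black fox", 1)

to_send = changed_chunks(old, new)
print(f"{len(to_send)} of {len(chunk_signatures(old))} chunks need to be sent")
```

Running this against the example above reports that only the first of roughly 80 chunks needs to cross the wire.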

Imagine a 4MB Visio document where you need to change the title of one or two sections. In the FRS world, this would prompt the entire 4MB file to replicate, whereas RDC needs only a few seconds to replicate the changed sections of the file. For environments with branch offices to support, particularly where bandwidth is at a premium, DFS-R in R2 can more than justify making the move to the new operating system.
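As a rough back-of-envelope calculation (the chunk size and the number of changed chunks are assumptions for illustration, not measured RDC figures), the difference looks like this:

```python
# Back-of-envelope savings for the 4MB Visio example. The 64KB chunk size is
# an assumption for illustration, not a documented RDC parameter.
file_size = 4 * 1024 * 1024          # 4MB file
chunk_size = 64 * 1024               # assumed chunk size
changed = 2                          # two edited sections touch two chunks

frs_bytes = file_size                # FRS re-sends the entire file
rdc_bytes = changed * chunk_size     # RDC sends only the changed chunks

print(f"FRS: {frs_bytes // 1024} KB, RDC: {rdc_bytes // 1024} KB "
      f"({100 * (1 - rdc_bytes / frs_bytes):.0f}% less data on the wire)")
```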

Note: Because DFS Replication is triggered on file close, it's not well suited to replicating files that are always locked and in use, such as a database or any other file held open by an "always-on" service. Nor can you use DFS-R to replicate the Active Directory SYSVOL share: logon scripts and Group Policy Objects still need to be replicated via FRS. However, FRS and DFS-R can co-exist quite comfortably on the same server.

About the author:  Laura E. Hunter (CISSP, MCSE: Security, MCDBA, Microsoft MVP) is a senior IT specialist with the University of Pennsylvania where she provides network planning, implementation and troubleshooting services for business units and schools within the university. Hunter is a two-time recipient of the prestigious Microsoft "Most Valuable Professional" award in the area of Windows Server-Networking. She is the author of the Active Directory Field Guide (APress Publishing).
