|Brien M. Posey|
If you have ever used a protocol analyzer to look at the traffic that's flowing across your network, you might have been surprised by how much of it is related to Active Directory replication. Fortunately, there's an easy way to reduce the replication-related traffic and reclaim some of your lost bandwidth.
Why so much traffic?
As you may know, in Windows 2000 each domain controller within a domain holds a copy of the Active Directory database. Any time anyone makes even the most insignificant change to Active Directory, the domain controller on which the change occurred must replicate that change to every other domain controller in the domain. This means that if you have 10 domain controllers, then each change (a user account being created or deleted, someone changing his password, a computer being added to or removed from the domain, or any number of other common activities) must be replicated to the other nine. Furthermore, if 10 different changes are made within a short period of time, then roughly 90 separate Active Directory updates flood your network with traffic.
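To put numbers on this, here is a toy calculation following the article's simplified counting, where each change must reach every other domain controller in the domain. The function name and figures are illustrative only, not part of any real replication API.

```python
# Toy model: without sites, every change made on one domain controller
# must be replicated to every other domain controller in the domain.

def replication_updates(num_dcs: int, num_changes: int) -> int:
    """Each change is sent to every other domain controller."""
    return num_changes * (num_dcs - 1)

print(replication_updates(10, 1))   # one change, 10 DCs -> 9 updates
print(replication_updates(10, 10))  # ten changes in a row -> 90 updates
```

The traffic grows with the product of changes and domain controllers, which is why even "insignificant" changes add up on a busy domain.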
The idea behind sites
You can greatly reduce the Active Directory-related traffic by dividing your domain into sites in strategic locations. The idea behind sites is that while every domain controller does need to be made aware of Active Directory updates, they don't all necessarily have to be updated right away. For example, if the finance department has their own domain controller, does a domain controller in the marketing department, or even in another building, really need to be updated the instant someone in finance changes his password? It isn't likely that this would be a critical update because the chances of a domain controller in another department or in another building authenticating the login would be slim.
By dividing the domain controllers into sites, you can schedule the replication between the sites. This means that when a change is made to Active Directory, domain controllers within the site are updated immediately, while domain controllers outside the site are updated according to the replication schedule. By default, intersite replication occurs every 180 minutes, but the interval can be set as low as every 15 minutes, and the schedule can restrict replication to particular hours across the week.
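The effect of the schedule is that changes made between replication events are delivered to other sites in batches. The following sketch (no real AD APIs involved; the timestamps and the 15-minute interval are made-up examples) shows how ten changes spread over an hour collapse into a handful of inter-site transfers.

```python
# Toy illustration: changes made between scheduled replication events
# are carried to a remote site together, in a single batch.

def batches(change_times, interval):
    """Group change timestamps (in minutes) by the replication
    window that will pick them up."""
    grouped = {}
    for t in change_times:
        window = t // interval  # which scheduled replication event
        grouped.setdefault(window, []).append(t)
    return grouped

# Ten changes over an hour, replicated across the site link every 15 minutes:
changes = [1, 3, 7, 16, 18, 22, 31, 44, 50, 59]
print(len(batches(changes, 15)))  # 4 inter-site transfers instead of 10
```

Lengthening the interval trades freshness for fewer, larger transfers, which is exactly the knob the replication schedule gives you.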
Sending replication information in batches does save bandwidth, but there's another concept that saves even more. Each site designates one domain controller as its bridgehead server. The bridgehead server handles all inbound and outbound replication traffic for the site. This means that when a domain controller within a site receives an Active Directory update, the update is immediately replicated to the other domain controllers within that site. When the next scheduled replication event occurs, the site's bridgehead server passes the updates to the bridgehead servers in the other sites. The receiving bridgehead servers are then responsible for updating the domain controllers within their respective sites.
Therefore, if a remote site contains 10 domain controllers, updates cross the site link only once rather than 10 times, because the sending site only has to update the remote site's bridgehead server, not every domain controller in the remote site.
If there are 10 servers in a remote site, you could argue that there are still 10 bursts of replication traffic, because the local bridgehead server updates the remote bridgehead server, which then updates the other nine domain controllers in its site. This is only partially true. Microsoft recommends creating sites in a manner that matches your organization's subnetting: each subnet gets its own site, and no site spans multiple subnets (assuming there are enough domain controllers to support this arrangement). When a bridgehead server receives replication updates, the resulting traffic is therefore confined not only by the site structure but also by the subnet structure. This keeps the update traffic on a small portion of your network rather than letting it spread across the entire network, thus preserving bandwidth.
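To make the savings concrete, here is a back-of-the-envelope comparison of the two scenarios the article describes: every remote domain controller receiving its own copy of an update across the site link, versus one batched transfer per remote site to its bridgehead server. The site names and sizes are hypothetical.

```python
# Hypothetical model; "finance" and "marketing" are made-up site names.
sites = {"finance": 10, "marketing": 10}  # site name -> number of DCs

def link_traffic_without_bridgeheads(sites, changed_site):
    """Every DC in every remote site receives its own copy of the update."""
    return sum(n for name, n in sites.items() if name != changed_site)

def link_traffic_with_bridgeheads(sites, changed_site):
    """One transfer per remote site, addressed to its bridgehead server;
    fan-out to the other DCs happens inside the remote site's own subnet."""
    return sum(1 for name in sites if name != changed_site)

print(link_traffic_without_bridgeheads(sites, "finance"))  # 10 transfers
print(link_traffic_with_bridgeheads(sites, "finance"))     # 1 transfer
```

The inter-site (typically WAN) link carries one transfer per remote site either way you count it; the remaining fan-out stays on the remote site's local subnet, which is the point of matching sites to subnets.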
About the author:
Brien Posey, CEO of Posey Enterprises, is a freelance technical writer and has been working with computers for about 15 years. Before going freelance, Brien served as the Director of Information Systems for a large, nationwide healthcare company. He has also served as a network engineer/security consultant for the Department of Defense. You can access Brien's Web site, which contains hundreds of his articles and white papers, at www.brienposey.com.