Module 3: Designing an Active Directory Site Topology
Agenda
• Sites
• Replication Within Sites
• Replication Between Sites
• Replication Protocols
• Active Directory Branch Office Deployment
What Are Sites?
• The First Site Is Set Up Automatically, and Is Called Default-First-Site-Name
• Sites Can Consist of Zero, One, or More Subnets
• Sites Are Used to Control Replication Traffic and Logon Traffic
• Sites Contain Server Objects and Are Associated with IP Subnet Objects
Sites: Purpose and Function
• Definition:
  - A set of well-connected subnets
  - Contain only Server and Configuration objects
• Sites are used for…
  - Logging on
  - Group Policies
  - Replication topology
    - Intra-Site
    - Inter-Site
Site Boundaries
• Sites may span domains
• Domains may span sites
• OUs may span sites

[Diagram: Site A spans some components of dom.com; Site B spans all of sub.dom.com and some components of dom.com; some components of an OU exist in Site A and some in Site B]
Replication
• Intra-Site Replication
  - Automatic topology generation
  - Pull-only, based on update notification
  - Always RPC based
• Inter-Site Replication
  - Semi-automatic topology generation
  - Scheduled (no update notification)
  - RPC or SMTP (Domain NC: RPC only)
Intra-Site Replication
• Information that is replicated
  - Domain Naming Context (NC)
  - Configuration Naming Context (NC)
  - Schema Naming Context (NC)
• Replication topologies
  - The Domain NC has its own topology
  - The Schema and Configuration NCs always share the same topology
Intra-Site Replication

[Diagram: one site with domain A DCs A1-A4 in a replication ring; the legend distinguishes the Domain A topology/connections from the Schema/Configuration topology]

• Same site, single domain = one replication topology
• Each new DC (KCC) inserts itself into the ring
• Replication via RPC is based on pull
• Topology adjusts to ensure a maximum of three hops (edges added at 7 servers), as sketched below
• KCC runs every 15 minutes
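A minimal sketch of the three-hop rule (Python, illustrative only; this is not how the KCC is implemented): in a bidirectional ring of n DCs the worst-case path is floor(n/2) hops, so a plain ring stops being sufficient once the site grows past seven DCs.

    # Illustrative only: the intra-site ring and the three-hop rule.
    # In a bidirectional ring of n DCs, the worst-case replication path
    # is n // 2 hops; the KCC adds shortcut edges so that no DC is ever
    # more than 3 hops from any other.

    def max_ring_hops(n: int) -> int:
        """Worst-case hop count between two DCs in a bidirectional ring."""
        return n // 2

    for n in range(2, 10):
        hops = max_ring_hops(n)
        note = "shortcut edges required" if hops > 3 else "plain ring suffices"
        print(f"{n} DCs: max {hops} hops -> {note}")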
Intra-Site Replication

[Diagram: one site with domain A DCs A1-A4 and domain B DCs B1-B3; separate rings for the Domain A and Domain B topologies plus a shared Schema/Configuration topology]

• DCs within a site/domain will maintain distinct Domain NC connection objects
• Schema/Configuration replication is performed normally
• The Domain NC topologies are separate for Domain A and Domain B
Intra-Site Replication

[Diagram: the same site with a Global Catalog server and its connector alongside DCs A1-A4 and B1-B3]

• Global Catalog servers within a site will source from a DC
• The Global Catalog will establish a connection object to request the Domain NC from the other domain(s)
Inter-Site Replication
• Site Links
  - Two or more sites
  - Connected by a common transport
  - Cost associated with the link
  - Schedule determines the replication window
  - Frequency determines how often replication occurs within the window
• Site Link Bridges
  - Two or more site links
  - Transitivity between site links
Site Links
• Transport
  - IP (RPC) or SMTP
• Cost
  - Smaller number is cheaper
  - Based on network characteristics
• Schedule
  - Configurable
  - Schedule defines windows of replication
  - Frequency defines how often replication will happen
Site Links

[Diagram: sites NYC, BOS, and ATL (domains A and B), LAX (domain B), SEA (domain A), and RED connected over a FRAME cloud, with example link costs between 5 and 512]

• Describe the physical network
• Used for message route paths
• Defined by:
  - Two or more sites
  - Cost
  - Transport
  - Schedule
  - Frequency
Site Link Bridges

[Diagram: the same sites (NYC, BOS, ATL, LAX, SEA, RED) joined by site links over the FRAME cloud, with bridges spanning the links]

• Provide transitivity between site links
• Similar to network routers
• All sites bridged by default
• Defined by two or more site links (routing over bridged links is sketched below)
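Because a bridge makes site-link costs additive along a path, replication routing can be pictured as a shortest-path problem. A minimal Python sketch; the sites and costs below are illustrative stand-ins loosely based on the diagrams above, not exact values from them:

    # Illustrative only: with site links bridged, costs add up along a
    # path and replication routes over the cheapest total cost, much
    # like a network router choosing a route.
    import heapq

    # Hypothetical site links: (site, site, cost).
    links = [("SEA", "NYC", 128), ("NYC", "BOS", 128),
             ("SEA", "LAX", 256), ("LAX", "ATL", 512), ("ATL", "NYC", 512)]

    graph = {}
    for a, b, cost in links:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))

    def cheapest(src: str, dst: str) -> int:
        """Dijkstra: least total site-link cost from src to dst."""
        queue, seen = [(0, src)], set()
        while queue:
            total, site = heapq.heappop(queue)
            if site == dst:
                return total
            if site in seen:
                continue
            seen.add(site)
            for nxt, cost in graph[site]:
                heapq.heappush(queue, (total + cost, nxt))
        return -1  # unreachable: no site-link path (and no bridge)

    print(cheapest("SEA", "BOS"))  # 256: SEA -> NYC -> BOS beats the LAX/ATL route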
Replication Within Sites

[Diagram: Domain Controller A and Domain Controller B on IP subnets within one site, replicating with each other]

Replication within sites:
• Occurs Between Domain Controllers in the Same Site
• Assumes Fast and Highly Reliable Network Links
• Does Not Compress Replication Traffic
• Uses a Change Notification Mechanism
Replication Between Sites

[Diagram: two sites, each with IP subnets and a bridgehead server (one of them also the ISTG), replicating across the site link]

Replication between sites:
• Occurs on a Manually Defined Schedule
• Is Designed to Optimize Bandwidth
• One or More Replicas in Each Site Act As Bridgeheads
Replication Protocols

[Diagram: Domain Controller A and Domain Controller B replicating over RPC or SMTP]

• RPC for Replication Within and Between Sites
• SMTP for Replication Between Sites

ISM and KCC/ISTG
• Inter-Site Messaging Service (ISM)
  - Creates the cost matrix for Inter-Site replication
  - Sends and receives SMTP messages if SMTP replication is used
  - Runs only when
    - The ISM service starts up
    - Changes happen in the site configuration (new sites, site links, site link bridges)
  - Information is used by
    - Netlogon for auto site coverage
    - The load-balancing tool
    - Universal Group Caching
    - DFS
• KCC/ISTG
  - Computes the least-cost spanning tree Inter-Site replication topology
  - Inter-Site component of the KCC
  - Runs every 15 minutes by default
Bridgehead Server Selection
• Windows 2000
  - On a per-site basis, for each domain, one DC per NC is used as bridgehead
• Windows Server 2003
  - On a per-site basis, for each domain, all DCs per NC are used as bridgeheads
  - The KCC picks a DC randomly when a connection object is created
    - For both incoming and outgoing connection objects
Bridgehead Server Selection

[Diagram: a hub site with domain A DCs (A1-A3) and domain B DCs (B1-B4), connected by site links to branch sites holding DC pairs A11/B11, A12/B12, and A13/B13]
Bridgehead Server Selection
Windows 2000

[Diagram: the same hub-and-branch topology, with a single DC per domain NC in the hub acting as bridgehead for all branch connections]
Bridgehead Server Selection
Preferred Bridgehead Server List
• Some servers should not be used as bridgeheads
  - PDC FSMO
  - Weak hardware
• Solution: Preferred Bridgehead Server List
  - Allows the administrator to restrict which DCs can be used as bridgehead servers
  - If a Preferred Bridgehead Server List is defined for a site, the KCC/ISTG will only use members of the list as bridgeheads
• Warning:
  - If a Preferred Bridgehead Server List is defined, make sure the list contains DCs for every NC
  - If there is no DC for a specific NC in the list, replication will not occur out of the site for this NC (a coverage check is sketched below)
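The warning lends itself to a mechanical check. A hypothetical Python sketch (DC names and NC assignments invented for illustration; only domain NCs are modeled, for brevity): given which NCs each DC in the site holds and the preferred list, report any NC that could no longer replicate out of the site.

    # Hypothetical sketch: does a Preferred Bridgehead Server List
    # cover every naming context hosted in the site?

    def uncovered_ncs(dc_ncs: dict, preferred: set) -> set:
        """Return NCs hosted in the site that no listed bridgehead holds."""
        all_ncs = set().union(*dc_ncs.values())
        covered = set()
        for dc in preferred:
            covered |= dc_ncs[dc]
        return all_ncs - covered

    # Invented example: two domains sharing one hub site.
    dc_ncs = {"A1": {"DomainA"}, "A2": {"DomainA"},
              "B1": {"DomainB"}, "B2": {"DomainB"}}

    print(uncovered_ncs(dc_ncs, {"A1", "A2"}))  # {'DomainB'}: replication for B breaks
    print(uncovered_ncs(dc_ncs, {"A1", "B1"}))  # set(): every NC is covered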
Bridgehead Server Selection
Preferred Bridgehead Server List

[Diagram: the hub-and-branch topology with a Preferred Bridgehead Server List defined in the hub site]
Bridgehead Server Selection
Bad Preferred Bridgehead Server List

[Diagram: the same topology with a list that includes no DC for domain B's NC; replication to the B NC is broken]
Bridgehead Server Selection
Recommendations for Branch Office Deployments
• Always use a Preferred Bridgehead Server List in hub sites
  - Make sure that there are enough DCs in the list
  - Make sure that there are enough DCs that are not included in the list
    - Do not add the PDC Operations Master
    - Do not add DCs used for user logons
    - Do not add DCs used by Exchange servers
  - Make sure that all NCs are covered in the Preferred Bridgehead Server List
  - If there are GCs in the branches, make all bridgehead servers GCs
Best Practices
• Place at Least One Domain Controller in Every Site
• Place at Least One DNS Server in Each Site
• Schedule Site Links for Times When Network Traffic Is Slow
Agenda
• Sites
• Replication Within Sites
• Replication Between Sites
• Replication Protocols
• Active Directory Branch Office Deployment
Characteristics Of A Branch Office Deployment
• Large number of locations
• Small number of users per location
• Hub-and-spoke network topology
• Slow network connections and dial-on-demand links
  - WAN availability
  - Bandwidth available for Active Directory
  - Other services relying on the WAN
• Large number of domain controllers in remote locations
Branch Office Scenario
• TCP/IP & DNS Settings
• FSMO Role Placement
AD Branch Office Scenario

[Diagram, reconstructed as a table. DNS P = preferred DNS server, DNS A = alternate DNS server.]

Site-HUB, root domain corp.hay-buv.com:
  ROOT1 - GC (SM, DNM FSMO roles)        10.10.1.1    DNS P: 10.10.1.2    DNS A: 10.10.1.3
  ROOT2 - DC (IM, RID, PDC FSMO roles)   10.10.1.2    DNS P: 10.10.1.1    DNS A: 10.10.1.3
  ROOT3 - DC                             10.10.1.3    DNS P: 10.10.1.1    DNS A: 10.10.1.2

Site-HUB, branch domain branches.corp.hay-buv.com:
  HUBDC1 - DC (IM, RID, PDC FSMO roles)  10.10.20.99  DNS P: 10.10.20.99  DNS A: 10.10.20.1
  BH1 - GC                               10.10.20.1
  BH2 - GC                               10.10.20.2
  BH3 - GC                               10.10.20.3
  (BH1-BH3 each point at the other hub bridgeheads as preferred/alternate DNS servers; the exact pairing is not recoverable from the original diagram)

Site-Stage, branches.corp.hay-buv.com:
  Staging - GC                           10.10.30.1   DNS P: 10.10.30.1   DNS A: 10.10.20.1

Branch Office sites:
  Site-Branch1: BODC1 - DC               10.10.21.1   DNS P: 10.10.21.1
  Site-Branch2: BODC2 - DC               10.10.22.1   DNS P: 10.10.22.1   DNS A: 10.10.20.2
  Site-Branch3: BODC3 - DC               10.10.23.1   DNS P: 10.10.23.1   DNS A: 10.10.20.3
  Site-Branch4: BODC4 - DC               10.10.24.1   DNS P: 10.10.24.1   DNS A: 10.10.20.1
  Site-Branch5: BODC5 - DC               10.10.25.1   DNS P: 10.10.25.1   DNS A: 10.10.20.2
Design Considerations For Branch Offices
• User management and Group Policies
• Structural planning
  - Forest planning
  - Domain planning
• DNS configuration for branch offices
• Replication planning
Centralized User Management
• Advantages:
  - Good security control and policy enforcement
  - Easy automation of common management tasks from a single source point
  - Problems can be fixed quickly
  - Changes flow from hub to branch
• Disadvantages:
  - Success varies directly with the availability and speed of the local area network (LAN) or WAN
  - Propagating changes is time-consuming, depending on the replication infrastructure and the replication schedules
  - Time to react and to fix issues might be longer
  - The IT organization tends to be further away from the “customer”
• Recommendation:
  - Use the centralized model
Group Policy Management
• Management of Group Policies focuses on the PDC
  - Group Policies use both Active Directory and sysvol replication (NTFRS replication)
  - Sysvol replicates on a per-file level
  - Changes are performed on the PDC
• Always use the centralized Group Policy model for Branch Office deployments
  - Watch applications that change Group Policies (account security settings)
• Restrict administration of policies to a group that understands the impact of changes
  - Avoid last-writer-wins overwrite issues
SYSVOL Replication
• Follows the AD replication topology
  - Uses connection objects
• Different conflict resolution algorithm
  - Replicates on a per-file level
  - Last writer wins
• Avoid applications that create excessive sysvol replication
  - Do not create file system policy against replicated content
  - Check anti-virus software
  - Diskeeper
Forest Planning
• Deploy a single forest for Branch Offices
• Reasons for having multiple forests
  - Political/organizational reasons
    - Unlikely in branch office scenarios
  - Too many locations where domain controllers must be deployed
    - Complexity of deployment
  - Too many objects in the directory
    - Should be partitioned on the domain level
  - GCs too big?
    - Evaluate not deploying GCs to branch offices
    - “Whistler”: no-GC-logon feature
Domain Partitioning
• Recommendation for Branch Office Deployment
  - Use a single domain
    - Typically only one security area
    - Central administration (users and policies)
    - Replication traffic is higher, but the model is more flexible (roaming users, no GC dependencies)
    - Database size is no big concern
  - If a high number of users work in the central location
    - Create different domains for headquarters and branches
  - If the number of users is very high (> 50,000)
    - Create geographical partitions
Design Considerations For Domain Controller Placement
• Required services
  - File and Print, e-mail, database, mainframe
  - Most of them require Windows® logon
  - Logon requires DC and GC availability
• Logon locally or over the WAN
  - WAN logon requires acceptable speed and line availability
  - Cached credentials only work for local workstation logon
Design Considerations For Domain Controller Placement
• Replication versus client logon traffic
  - Replication traffic is more static and predictable
    - Affected by domain design and GC location
    - Applications using the GC can demand a local GC
  - Logon traffic is affected by the number of users in the branch and by services
    - Less predictable
• Security
• Management
• Alternative solutions
  - Terminal Servers
  - Local accounts
Design Considerations For Global Catalog Placement
• No factor in a single-domain deployment
• Multiple-domain deployments
  - GC needed for logon in native mode
    - Disable the GC requirement
    - “Whistler” has a Universal Group caching feature
  - Services might require a GC
    - Exchange 2000
• Recommendation:
  - If the WAN is unreliable or there are more than 50 users in the branch, deploy a GC to the branch
  - Always put a GC next to services that require a GC
    - E.g., if there are Exchange 2000 servers in the branch, deploy a GC to the branch
DNS Planning Considerations
• DNS - AD root domain
• Distributing forest-wide locator records
• The island problem
• Domain controller SRV record configuration
• Auto Site Coverage
• NS records
DNS Configuration Of Root Domain
• If DNS already exists
  - Delegate the AD root domain to a Windows 2000 DNS server (e.g., corp.microsoft.com)
  - Use Active Directory integrated DNS zones
• If not
  - Use the Windows 2000 DNS server on domain controllers
  - Use Active Directory integrated DNS zones
  - Create an internal root, or configure forwarders
Distributing Forest Wide Records
• CNAME records for replication and GC records are forest-wide records
• Stored in the _msdcs domain in the AD root domain
  - E.g., with two domains, corp.microsoft.com and sales.corp.microsoft.com:
    - A records for DCs in corp.microsoft.com are stored in the corp.microsoft.com DNS domain
    - A records for DCs in sales.corp.microsoft.com are stored in sales.corp.microsoft.com
    - CNAMEs for replication for DCs in corp.microsoft.com are stored in the _msdcs.corp.microsoft.com DNS domain
    - CNAMEs for replication for DCs in sales.corp.microsoft.com are also stored in the _msdcs.corp.microsoft.com DNS domain
• By default, this domain exists only on root domain controllers
• Create a separate zone for _msdcs.<ForestRootDomain> and transfer the zone to all DCs in child domains
The Island Problem
• A domain controller that is also a DNS server can isolate itself from replication
  - Can only happen if
    - The DC points to itself as preferred or alternate DNS server
    - The DC has a writeable copy of the _msdcs.<ForestRootDomain> DNS domain
• Recommendation
  - Domain controllers that are DNS servers AND are domain controllers in the forest root domain should point to another DC as preferred and alternate DNS server
  - All other domain controllers (especially child domain controllers) can point to themselves as preferred or alternate DNS server
Managing Service Records
• SRV records are published by netlogon in DNS
  - On the site level and the domain level
  - Clients search for services in the client site first, and fall back to the domain level
• Branch Office deployments require specific configuration
  - A large number of domain controllers creates a scalability problem for domain-level registration
    - If more than 850 branch office DCs try to register SRV records on the domain level, registration will fail
  - Registration on the domain level is in most cases meaningless
    - The DC cannot be contacted over the WAN / DOD link anyway
    - If the local look-up in the branch fails, the client should always fall back to the hub only
• Configure the netlogon service to register SRV records for Branch Office DCs on the site level only (a configuration sketch follows the table below)
  - Follow Q267855
  - Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
  - Registry value: DnsAvoidRegisterRecords
  - Data type: REG_MULTI_SZ
Mnemonic           Type    DNS record
Dc                 SRV     _ldap._tcp.dc._msdcs.<DnsDomainName>
DcAtSite           SRV     _ldap._tcp.<SiteName>._sites.dc._msdcs.<DnsDomainName>
DcByGuid           SRV     _ldap._tcp.<DomainGuid>.domains._msdcs.<DnsForestName>
Pdc                SRV     _ldap._tcp.pdc._msdcs.<DnsDomainName>
Gc                 SRV     _ldap._tcp.gc._msdcs.<DnsForestName>
GcAtSite           SRV     _ldap._tcp.<SiteName>._sites.gc._msdcs.<DnsForestName>
GenericGc          SRV     _gc._tcp.<DnsForestName>
GenericGcAtSite    SRV     _gc._tcp.<SiteName>._sites.<DnsForestName>
GcIpAddress        A       _gc._msdcs.<DnsForestName>
DsaCname           CNAME   <DsaGuid>._msdcs.<DnsForestName>
Kdc                SRV     _kerberos._tcp.dc._msdcs.<DnsDomainName>
KdcAtSite          SRV     _kerberos._tcp.<SiteName>._sites.dc._msdcs.<DnsDomainName>
Ldap               SRV     _ldap._tcp.<DnsDomainName>
LdapAtSite         SRV     _ldap._tcp.<SiteName>._sites.<DnsDomainName>
LdapIpAddress      A       <DnsDomainName>
Rfc1510Kdc         SRV     _kerberos._tcp.<DnsDomainName>
Rfc1510KdcAtSite   SRV     _kerberos._tcp.<SiteName>._sites.<DnsDomainName>
Rfc1510UdpKdc      SRV     _kerberos._udp.<DnsDomainName>
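A minimal sketch of the Q267855 configuration using Python's standard winreg module (Windows-only; run on the branch DC with administrative rights). The key path, value name, and data type come from the slide above; the particular set of mnemonics to suppress is illustrative here, and the authoritative list should be taken from the KB article:

    # Sketch: suppress domain-level (non-site) record registration on a
    # branch office DC, per Q267855. Mnemonic list is illustrative: the
    # domain-level entries from the table above, minus the *AtSite ones.
    import winreg

    SUPPRESS = ["Dc", "DcByGuid", "Gc", "GenericGc", "GcIpAddress",
                "Kdc", "Ldap", "LdapIpAddress",
                "Rfc1510Kdc", "Rfc1510UdpKdc"]

    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Services\Netlogon\Parameters",
        0, winreg.KEY_SET_VALUE)
    winreg.SetValueEx(key, "DnsAvoidRegisterRecords", 0,
                      winreg.REG_MULTI_SZ, SUPPRESS)
    winreg.CloseKey(key)
    # The DC keeps registering its site-level (*AtSite) records, so
    # clients in its own site still find it; only the domain-level
    # registrations are avoided.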
AutoSite Coverage
• AutoSite coverage lets a DC advertise for sites without DCs, if the DC is in the closest site to them
• Not practical for Branch Office deployments
  - Root DCs would advertise for all sites
  - If a client cannot connect to a local DC, it will fall back to the hub site anyway (configuration of SRV records)
Planning For Replication
• Concepts
  - Connection objects
  - KCC
  - Site links
  - Site link bridges
  - Sysvol replication
• Planning steps
  - Plan for Bridgehead Servers
  - Determine the number of sites
  - Decide whether to use the KCC or create the replication topology manually
  - Define the site structure of the hub site
  - Define the replication schedule
  - Create site links
  - Create connection objects (if the KCC is disabled)
Planning For Bridgehead Servers
• How many bridgehead servers do I need?
• How do I configure bridgehead servers?
• Things you need to know:
  - Centralized or decentralized change model
  - Data update requirements formulated by the customer
    - How many times a day do we need to replicate?
  - How many changes happen in a branch per day
  - Total number of domain controllers
  - Time needed to establish dial-on-demand network connectivity
Inbound Versus Outbound Replication
• Different threading model
  - Outbound replication is multi-threaded
    - A bridgehead server can have multiple replication partners
    - The bottleneck is most likely CPU (monitor!)
  - Inbound replication is single-threaded
    - Replication of changes from branches to the hub is serialized
    - The bottleneck is most likely the network
Replication Traffic
• Documented in Notes from the Field: Building Enterprise Active Directories
• Replication overhead for branch office deployments
  - Overhead if there are two domain controllers: 21 KB
    - 13 KB to set up the replication sequence
    - 5 KB to initiate replication of the domain naming context, including the changed password
    - 1.5 KB each for the schema and configuration naming contexts (where no changes occurred)
  - Each additional DC adds 24 bytes
  - Overhead for 1,002 DCs: 162 KB
Example – Outbound Replication Partners
• Requirements:
  - Replication twice a day (= K)
  - WAN available 8 hours a day (= H)
  - High-performance hardware (= 30 concurrent connections) (= O)
  - Outbound replication will always finish within 1 hour (= T)
• Applying the formula (sketched below):
  - OC = (H * O) / (K * T) = (8 * 30) / (2 * 1) = 120
  - Each bridgehead server can support 120 branch office DCs (outbound)
  - If the number is too high/low, change the parameters:
    - E.g., WAN available for 12 hours: 180 branches
    - E.g., replicating only once a day: 240 branches
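The same arithmetic as a tiny Python sketch (values straight from the slide):

    # Outbound capacity OC = (H * O) / (K * T), per the slide above.
    def outbound_capacity(h_wan_hours, o_connections, k_repls_per_day, t_hours):
        return (h_wan_hours * o_connections) / (k_repls_per_day * t_hours)

    print(outbound_capacity(8, 30, 2, 1))   # 120.0 branches per bridgehead
    print(outbound_capacity(12, 30, 2, 1))  # 180.0 with a 12-hour WAN window
    print(outbound_capacity(8, 30, 1, 1))   # 240.0 replicating once a day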
Example – Inbound Replication Partners
• Let’s assume a slow WAN with DOD lines
• Factors
  - Replication traffic (time to submit changes such as password changes)
  - Time to set up DOD connections
  - 4 minutes per branch is conservative
• IC = R / N = 480 / 4 = 120 branches
Example – Inbound Replication Partners
• The number of branches supported by one bridgehead server is the lower of the two results (see the sketch below)
  - Outbound: 120 branches
  - Inbound: 120 branches
  - Result: one bridgehead can support 120 branches
  - If you have 1,200 branches, you need 10 bridgehead servers
• Plan for disasters and special cases!
  - Leave headroom for hub outbound replication
  - Have a spare machine available
  - Create multiple connections from branch to hub DCs
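And the inbound side plus the sizing step, again with the slide's numbers (R = 480 minutes of WAN availability, N = 4 minutes per branch):

    # Inbound capacity IC = R / N, then size the hub, per the slides above.
    import math

    wan_minutes, minutes_per_branch = 480, 4
    inbound = wan_minutes // minutes_per_branch    # 120 branches

    outbound = 120                                 # from the previous sketch
    per_bridgehead = min(inbound, outbound)        # the lower value wins: 120

    print(math.ceil(1200 / per_bridgehead))        # 10 bridgeheads for 1,200 branches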
Bridgehead Server Overload
• Symptoms
  - The bridgehead cannot complete replication requests as fast as they come in
  - Replication queues are growing
  - Some DCs NEVER replicate from the bridgehead
    - Once a server has successfully replicated from the bridgehead, its requests are prioritized above requests from servers that have never successfully replicated
• Monitoring
  - Repadmin /showreps shows NEVER as the last successful replication
  - Repadmin /queue <DCName>
Bridgehead Server Overload
• Can be caused by
  - Unbalanced site links (if the ISTG is enabled)
  - Unbalanced connection objects
  - Too aggressive a replication schedule
  - Panic troubleshooting
    - Like changing the replication interval on all site links to a drastically shorter interval to accommodate applications
• Solution
  - If the ISTG is enabled
    - Turn off the ISTG (prevents new connections from being generated)
    - Delete all inbound connection objects
    - Correct the site-link balance and schedule
    - Enable the ISTG again
Bridgehead Server Hardware
• Processor
  - Dual/quad Pentium III or Xeon recommended for bridgehead servers and servers supporting large numbers of users
• Memory
  - Minimum of 512 MB
• Disks
  - Configure the operating system and logs on separate drives that are mirrored; configure the directory database on Redundant Array of Independent Disks (RAID) 5 or RAID 0+1
  - Use a larger number of smaller drives for maximum performance
  - Drive capacity will depend on your specific requirements
Determine Number Of Sites
• Rule for creating sites (sketched below):
  - For each physical location that has a WAN connection (less than 10 MBit) to the hub:
    - If there is a DC in the location, create a new site
    - If not, but there is a service that uses the site model (DFS shares), create a new site
    - If not, create a subnet for the location and add the subnet to the hub site (or the next closest site)
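The rule reads naturally as a small decision function; a minimal sketch (Python, names invented for illustration):

    # Sketch of the site-creation rule above, applied to one location
    # that has a (<10 MBit) WAN connection to the hub.
    def site_decision(has_dc: bool, uses_site_model: bool) -> str:
        if has_dc or uses_site_model:   # e.g., DFS shares use the site model
            return "create a new site"
        return "create a subnet and add it to the hub (or next closest) site"

    print(site_decision(has_dc=True, uses_site_model=False))   # create a new site
    print(site_decision(has_dc=False, uses_site_model=False))  # subnet only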
Use Of KCC For Inter-Site Replication Topology Generation
• Always disable transitivity
• Windows 2000:
  - Fewer than 500 sites: use the KCC
    - But test your hardware first
    - Follow the guidelines in KB article Q244368
  - More than 500 sites: create connection objects manually
    - The Branch Office deployment guide recommends a manual topology for more than 100 sites
• Windows Server 2003: use the KCC
Define Site Structure Of Hub Site
• If the KCC is disabled, create a single site
• If the KCC is enabled, create one site per Bridgehead Server
  - The KCC has no concept of Inter-Site load balancing between servers in one site
  - Create artificial sites in the hub to spread load between Bridgehead Servers
  - Create site links with staggered schedules between branches and hub sites
Load Balancing With Sites

[Diagram: hub sub-sites SiteA, SiteB, and SiteC joined by the hub site link (schedule: always, notification enabled, cost 1); branch sites BranchA1/A2, BranchB1/B2, and BranchC1/C2 connect to the hub sub-sites over site links with staggered schedules (2am-4am and 4am-6am, cost 100); a scheduling sketch follows]
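A hypothetical Python sketch of the staggered-schedule idea; the branch-to-site grouping follows the diagram, while the third window is an invented assumption since only two scheduled windows are legible in the original:

    # Hypothetical: spread branch replication windows across hub
    # sub-sites so the bridgeheads are never all busy at once.
    schedules = {"SiteA": "02:00-04:00",
                 "SiteB": "04:00-06:00",
                 "SiteC": "06:00-08:00"}   # third window is an assumption
    branches = {"BranchA1": "SiteA", "BranchA2": "SiteA",
                "BranchB1": "SiteB", "BranchB2": "SiteB",
                "BranchC1": "SiteC", "BranchC2": "SiteC"}

    for branch, hub in branches.items():
        print(f"{branch} -> {hub}: site-link cost 100, window {schedules[hub]}")
    # The hub sub-sites themselves share an always-on, cost-1 site link
    # with change notification enabled, per the diagram legend.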
Load Balancing Manually (With Redundancy)

[Diagram: a hub site with DC1, DC2, and DC3; branches Branch1-Branch6 replicate on an alternating schedule, with redundant connections into the hub DCs]
Building The Hub Site
• Building the root domain
  - Availability of the root domain
    - Only needed for special configuration tasks
      - Adding new domains
      - Schema changes
    - Kerberos trusts and dynamic registration of forest-wide resource records might depend on the root domain
  - Operations Masters
    - Typically not critical for the root domain
  - Server sizing
    - An empty root domain does not require high-end hardware
    - Kerberos referrals and dynamic DNS updates
  - Disaster recovery
    - The root is critical for the forest
    - Make sure that you perform regular backups
Building The Hub Site
• Building the Branch Office domain
  - Operations Masters
    - Off-load the PDC operations master
    - Move the Infrastructure Master off the GC
    - The RID master is the most critical operations master; monitor this machine very closely
  - Bridgehead Servers
    - If Branch Office DCs are GCs, then Bridgehead Servers should be GCs
    - If DNS runs on Branch Office DCs, don’t run DNS on Bridgehead Servers
  - Disaster recovery
    - State on a Bridgehead Server is not very interesting; not an ideal candidate for backup
    - Leave headroom on the Bridgehead, or have a spare machine in place
Staging Site
• Most companies use outsourcing partners to build servers and domain controllers
  - Machines are built at the factories
  - Servers are usually built from an image and promoted later
• Where to promote domain controllers
  - Staging site: less network traffic, better control of the process, opportunity to run QA scripts while the machine is accessible
  - In the branch: configuration is less complex (the domain controller finds its site)
Building The Staging Site
• The staging site needs to be permanently connected to the production environment
  - New DCs must be advertised
  - New DCs need a RID pool
• Fully control the replication topology in the staging site
  - This is the only case where the KCC should be disabled for Intra-Site replication topology generation
  - Reason: once machines are moved out, domain controllers that have not yet learned this will try to replicate with them or re-route (DOD lines)
• Capacity planning for the domain controller used as source
  - Usually not a high-end machine
  - Depends on how many DCs are installed in parallel
• Software installation
  - Add Service Packs and QFEs to the image
  - Include the Resource Kit, Support Tools, and scripts for management and monitoring
  - Document what is loaded on the DC before the machine is shipped
Domain Controller Build Process
• Use a dcpromo answer file to promote the domain controllers
• Do not turn off DCs before shipping them
  - Best practice is to build DCs when they are needed, not months before
  - If they are off-line for too long, they get out of sync with production
    - Tombstone lifetime
    - Domain controller passwords
• Install monitoring tools and make sure that monitoring processes are in place
• Configure the domain controller for its new site
• Clean up old connection objects before shipping the machine
• React if you find issues with domain controllers during the deployment
  - Don’t keep processes in place if they are broken
General Considerations For Branch Office Deployments
• Ensure That Your Hub Is a Robust Data Center
• Do Not Deploy All Branch Office Domain Controllers Simultaneously
  - Monitor the load on Bridgehead Servers as more and more branches come on-line
  - Verify DNS registrations and replication
• Balance Replication Load Between Bridgehead Servers
• Keep Track of Hardware and Software Inventory and Versions
• Include Operations in Your Planning Process
  - Monitoring plans and procedures
  - Disaster recovery and troubleshooting strategy
  - Personnel assignment and training