Title: Module 3: Designing an Active Directory Site Topology
Module 3: Designing an Active Directory Site Topology
Agenda
- Sites
- Replication Within Sites
- Replication Between Sites
- Replication Protocols
- Active Directory Branch Office Deployment
What Are Sites?
- The First Site Is Set Up Automatically, and Is Called Default-First-Site-Name
- Sites Can Consist of Zero, One, or More Subnets
- Sites Are Used to Control Replication Traffic and Logon Traffic
- Sites Contain Server Objects and Are Associated with IP Subnet Objects
Sites: Purpose and Function
- Definition
- A set of well-connected subnets
- Contain only Server and Configuration objects
- Sites are used for
- Logging on
- Group Policies
- Replication topology
- Intra-Site
- Inter-Site
Site Boundaries
- Sites may span domains
- Domains may span sites
- OUs may span sites
[Diagram: dom.com and sub.dom.com spanning Site A and Site B, with an OU spanning both sites]
Replication
- Intra-Site Replication
- Automatic topology generation
- Pull-only, based on update notification
- Always RPC based
- Inter-Site Replication
- Semi-automatic topology generation
- Scheduled (no update notification)
- RPC or SMTP (Domain NC: RPC only)
Intra-Site Replication
- Information that is replicated
- Domain Naming Context (NC)
- Configuration Naming Context (NC)
- Schema Naming Context (NC)
- Replication Topologies
- Domain NC
- Schema/Configuration NC always share the same topology
Intra-Site Replication
- Same site, single domain: one replication topology
- Each new DC (KCC) inserts itself into the ring
- Replication via RPC is based on pull
- Topology adjusts to ensure a maximum of three hops (edges added at 7 servers)
- KCC runs every 15 minutes
Intra-Site Replication
- DCs within a site/domain will maintain distinct Domain NC connection objects
- Schema/Configuration replication performed normally
- Domain NC topologies are separate for Domain A and Domain B
Intra-Site Replication
- Global Catalog Servers within a site will source from a DC
- Global Catalog will establish a connection object to request the Domain NC from the other domain(s)
[Diagram: a single site with DCs for domains A and B; the Global Catalog server creates server connectors to DCs of both domains to source each Domain NC]
Inter-Site Replication
- Site Links
- Two or more sites
- Connected by common transport
- Cost associated with link
- Schedule determines window
- Frequency determines how often replication occurs
- Site Link Bridges
- Two or more site links
- Transitiveness between site links
Site Links
- Transport
- IP (RPC) or SMTP
- Cost
- Smaller number is cheaper
- Based on network characteristics
- Schedule
- Configurable
- Schedule defines windows of replication
- Frequency defines how often replication will happen
Site Links
- Describe physical network
- Used for message route paths
- Defined by
- Two or more sites
- Cost
- Transport
- Schedule
- Frequency
[Diagram: site links among NYC, BOS, SEA, ATL, and LAX over frame relay and T-1 lines, with link costs 20, 128, 256, and 512]
Site Link Bridges
- Provide transitiveness between site links
- Similar to network routers
- All sites bridged by default
- Defined by two or more site links
[Diagram: the same site-link map, with a site link bridge providing transitivity through the hub]
Site Link Bridges
[Diagram: the same site-link map, illustrating the routes enabled by site link bridges]
Topology Creation
- Manual
- Disable auto-generation of KCC and manually define connection objects
- Automatic
- Allows site link transitiveness
- Bridges all sites
- Influenced
- Add site link bridges to enforce routes
Design Considerations
- How big to make a site
- Few locations: Site = LAN
- Many locations: Site = segments in a certain geographic area
- Factors affecting site scope
- Replication latency
- Network impact
- Client performance
Scoping Sites
- Small organizations will base sites on LANs
- Organizations with many sites will want to maximize site boundaries
- Factors affecting traffic
- Differential replication
- Schedule
- Compression
- Replication Scope
- Topology is configurable
Site Scopes
- Scope sites to
- Increase performance of client logon
- Map replication traffic to the network
- Use SMTP replication between sites when
- No domain spans the sites (SMTP cannot replicate the Domain NC)
- Spanning very slow links
Replication Within Sites
- Replication Within Sites
- Occurs Between Domain Controllers in the Same Site
- Assumes Fast and Highly Reliable Network Links
- Does Not Compress Replication Traffic
- Uses a Change Notification Mechanism
Replication Between Sites
- Replication Between Sites
- Occurs on a Manually Defined Schedule
- Is Designed to Optimize Bandwidth
- One or More Replicas in Each Site Act As Bridgeheads
[Diagram: two sites, each with IP subnets; in each site one DC acts as bridgehead server (one also as ISTG), and replication flows between the bridgeheads across the site link]
Replication Protocols
ISM and KCC/ISTG
- Inter-Site Messaging Service (ISM)
- Creates cost matrix for Inter-Site replication
- Sends and receives SMTP messages if SMTP replication is used
- Runs only when
- ISM service starts up
- Changes happen in site configuration (new sites, site-links, site-link-bridges)
- Information is used by
- Netlogon for auto-site coverage
- Load-balancing tool
- Universal Group Caching
- DFS
- KCC/ISTG
- Computes least-cost spanning tree Inter-Site replication topology
- Inter-Site component of KCC
- Runs every 15 minutes by default
Bridgehead Server Selection
- Windows 2000
- On a per-site basis, for each domain, one DC per NC used as Bridgehead
- Windows Server 2003
- On a per-site basis, for each domain, all DCs per NC used as Bridgeheads
- KCC picks DC randomly when connection object is created
- For both incoming and outgoing connection objects
Bridgehead Server Selection
[Diagram: two sites containing DCs for domains A and B (A1-A3, B1-B4, A11-A13, B11-B13)]
Bridgehead Server Selection: Windows 2000
[Diagram: the same two sites; one DC per domain NC in each site is selected as bridgehead]
Bridgehead Server Selection: Preferred Bridgehead Server List
- Some servers should not be used as Bridgeheads
- PDC FSMO
- Weak hardware
- Solution: Preferred Bridgehead Server List
- Allows administrator to restrict which DCs can be used as Bridgehead Servers
- If a Preferred Bridgehead Server List is defined for a site, KCC/ISTG will only use members of the list as Bridgeheads
- Warning
- If a Preferred Bridgehead Server List is defined, make sure that there is at least one DC per NC in the list
- If there is no DC for a specific NC in the list, replication will not occur out of the site for this NC
Bridgehead Server Selection: Preferred Bridgehead Server List
[Diagram: the same two sites with a Preferred Bridgehead Server List that covers both domain NCs]
Bridgehead Server Selection: A Bad Preferred Bridgehead Server List
[Diagram: the same two sites with a Preferred Bridgehead Server List containing no DC for domain B; replication of the B NC out of the site is broken]
Bridgehead Server Selection: Recommendations for Branch Office Deployments
- Always use Preferred Bridgehead Server List in hub sites
- Make sure that there are enough DCs in the list
- Make sure that there are enough DCs that are not included in the list
- Do not add PDC Operations Master
- Do not add DCs used for user logons
- Do not add DCs used by Exchange servers
- Make sure that all NCs are covered in the Preferred Bridgehead Server List
- If there are GCs in the branches, make all Bridgehead Servers GCs
Best Practices
Agenda
- Sites
- Replication Within Sites
- Replication Between Sites
- Replication Protocols
- Active Directory Branch Office Deployment
Characteristics Of A Branch Office Deployment
- Large number of locations
- Small number of users per location
- Hub-and-spoke network topology
- Slow network connections and dial-on-demand links
- WAN availability
- Bandwidth available for Active Directory
- Other services relying on the WAN
- Large number of domain controllers in remote locations
AD Branch Office Scenario
Design Considerations For Branch Offices
- User management and Group Policies
- Structural Planning
- Forest planning
- Domain planning
- DNS considerations
- Replication planning
- DNS configuration for branch offices
- Replication planning
Centralized User Management
- Advantages
- Good security control and policy enforcement
- Easy automation of common management tasks from a single source point
- Problems can be fixed quickly
- Changes flow from hub to branch
- Disadvantages
- Success varies directly with the availability and speed of the local area network (LAN) or WAN
- Propagating changes is time-consuming, depending on the replication infrastructure and the replication schedules
- Time to react and to fix issues might be longer
- IT organization tends to be further away from customer
- Recommendation
- Use centralized model
Group Policy Management
- Management of Group Policies focuses on PDC
- Group Policies use both Active Directory and sysvol replication (NTFRS replication)
- Sysvol replicates on a per-file level
- Changes are performed on PDC
- Always use centralized Group Policy model for Branch Office deployments
- Watch applications that change Group Policies (account security settings)
- Restrict administration of policies to a group that understands impact of changes
- Avoid last-writer-wins overwrite issues
SYSVOL Replication
- Follows AD replication topology
- Uses connection objects
- Different conflict resolution algorithm
- Replicates on a per-file level
- Last writer wins
- Avoid applications that create excessive sysvol replication
- Do not create file system policy against replicated content
- Check anti-virus software
- Diskeeper
Forest Planning
- Deploy single forest for Branch Offices
- Reasons for having multiple forests
- Political/organizational reasons
- Unlikely in branch office scenarios
- Too many locations where domain controllers must be deployed
- Complexity of deployment
- Too many objects in the directory
- Should be partitioned on domain level
- GCs too big?
- Evaluate not deploying GCs to branch offices
- Whistler No-GC-logon feature
Domain Partitioning
- Recommendation for Branch Office Deployment
- Use single domain
- Typically only one security area
- Central administration (users and policies)
- Replication traffic higher, but more flexible model (roaming users, no GC dependencies)
- Database size no big concern
- If high number of users work in central location
- Create different domains for headquarters and branches
- If number of users very high (> 50,000)
- Create geographical partitions
Design Considerations For Domain Controller Placement
- Required services
- File and Print, e-mail, database, mainframe
- Most of them require Windows logon
- Logon requires DC and GC availability
- Logon locally or over the WAN
- WAN logon requires acceptable speed and line availability
- Cached credentials only work for local workstation logon
Design Considerations For Domain Controller Placement
- Replication versus client logon traffic
- Replication traffic more static and predictable
- Affected by domain design and GC location
- Applications using the GC can demand local GC
- Logon traffic affected by number of users in the branch and services
- Less predictable
- Security
- Management
- Alternative solutions
- Terminal Servers
- Local accounts
Design Considerations For Global Catalog Placement
- No factor in single domain deployment
- Multiple domain deployments
- GC needed for logon in native mode
- Disable GC requirement
- Whistler has Universal Group caching feature
- Services might require GC
- Exchange 2000
- Recommendation
- If WAN unreliable or more than 50 users in branch, deploy GC to branch
- Always put GC next to services that require GC
- E.g., if there are Exchange 2000 servers in the branch, deploy GC to branch
DNS Planning Considerations
- DNS AD root domain
- Distributing forest wide locator records
- Island problem
- Domain controller SRV record configuration
- Auto Site Coverage
- NS records
DNS Configuration Of Root Domain
- If DNS already exists
- Delegate AD root domain to Windows 2000 DNS server (i.e., corp.microsoft.com)
- Use Active Directory integrated DNS zones
- If not
- Use Windows 2000 DNS server on domain controllers
- Use Active Directory integrated DNS zones
- Create internal root, or configure forwarders (a forwarder sketch follows this list)
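Where forwarders are used, they can be set from the command line. A minimal sketch using the dnscmd tool from the Support Tools; the forwarder addresses are illustrative placeholders:

    rem Point the local Windows 2000 DNS server at upstream corporate
    rem resolvers instead of the internet root hints.
    dnscmd . /ResetForwarders 10.0.0.53 10.0.1.53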
Distributing Forest-Wide Records
- CNAME records for replication and GC records are forest-wide records
- Stored in the _msdcs domain in the AD root domain
- I.e., two domains, corp.microsoft.com and sales.corp.microsoft.com
- A records for DCs in corp.microsoft.com are stored in the corp.microsoft.com DNS domain
- A records for DCs in sales.corp.microsoft.com are stored in sales.corp.microsoft.com
- CNAMEs for replication for DCs in corp.microsoft.com are stored in the _msdcs.corp.microsoft.com DNS domain
- CNAMEs for replication for DCs in sales.corp.microsoft.com are stored in the _msdcs.corp.microsoft.com DNS domain
- By default, this domain exists only on root domain controllers
- Create a separate zone for _msdcs.<ForestRootDomain> and transfer the zone to all DCs in child domains (a dnscmd sketch follows this list)
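One way to script this split with the dnscmd tool; a hedged sketch reusing corp.microsoft.com from the example above, with server names and the master IP as placeholders:

    rem On a root-domain DNS server: carve _msdcs out into its own zone.
    dnscmd rootdc1 /ZoneAdd _msdcs.corp.microsoft.com /DsPrimary
    rem On each child-domain DNS server: pull a secondary copy of the zone
    rem (10.0.0.10 stands in for a root-domain DNS server).
    dnscmd childdc1 /ZoneAdd _msdcs.corp.microsoft.com /Secondary 10.0.0.10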
The Island Problem
- A domain controller that is also a DNS server can isolate itself from replication
- Can only happen if
- DC points to itself as preferred or alternate DNS server
- DC has a writeable copy of the _msdcs.<ForestRootDomain> DNS domain
- Recommendation
- Domain controllers that are DNS servers AND are domain controllers in the forest root domain should point to another DC as preferred and alternate DNS server (a configuration sketch follows this list)
- All other domain controllers (especially child domain controllers) can point to themselves as preferred or alternate DNS server
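The DNS client setting itself can be scripted; a hedged sketch assuming the netsh interface ip context, with the interface name and the partner DC's address as placeholders:

    rem On a forest-root DC that runs DNS: point the DNS client at another
    rem root DC (10.0.0.11 is illustrative) rather than at itself.
    netsh interface ip set dns name="Local Area Connection" source=static addr=10.0.0.11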
Managing Service Records
- SRV records are published by netlogon in DNS
- On site level and domain level
- Clients search for services in the client site first, and fall back to domain level
- Branch Office deployments require specific configuration
- Large number of domain controllers creates scalability problem for domain-level registration
- If more than 850 branch office DCs try to register SRV records on domain level, registration will fail
- Registration on domain level is in most cases meaningless
- DC cannot be contacted over WAN / DOD link anyway
- If local look-up in branch fails, client should always fall back to hub only
- Configure netlogon service to register SRV records for Branch Office DCs on site level only
- Follow Q267855 (a registry sketch follows the record table below)
- HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
- Registry value: DnsAvoidRegisterRecords
- Data type: REG_MULTI_SZ
Mnemonic          Type   DNS record
Dc                SRV    _ldap._tcp.dc._msdcs.<DnsDomainName>
DcAtSite          SRV    _ldap._tcp.<SiteName>._sites.dc._msdcs.<DnsDomainName>
DcByGuid          SRV    _ldap._tcp.<DomainGuid>.domains._msdcs.<DnsForestName>
Pdc               SRV    _ldap._tcp.pdc._msdcs.<DnsDomainName>
Gc                SRV    _ldap._tcp.gc._msdcs.<DnsForestName>
GcAtSite          SRV    _ldap._tcp.<SiteName>._sites.gc._msdcs.<DnsForestName>
GenericGc         SRV    _gc._tcp.<DnsForestName>
GenericGcAtSite   SRV    _gc._tcp.<SiteName>._sites.<DnsForestName>
GcIpAddress       A      _gc._msdcs.<DnsForestName>
DsaCname          CNAME  <DsaGuid>._msdcs.<DnsForestName>
Kdc               SRV    _kerberos._tcp.dc._msdcs.<DnsDomainName>
KdcAtSite         SRV    _kerberos._tcp.<SiteName>._sites.dc._msdcs.<DnsDomainName>
Ldap              SRV    _ldap._tcp.<DnsDomainName>
LdapAtSite        SRV    _ldap._tcp.<SiteName>._sites.<DnsDomainName>
LdapIpAddress     A      <DnsDomainName>
Rfc1510Kdc        SRV    _kerberos._tcp.<DnsDomainName>
Rfc1510KdcAtSite  SRV    _kerberos._tcp.<SiteName>._sites.<DnsDomainName>
Rfc1510UdpKdc     SRV    _kerberos._udp.<DnsDomainName>
Rfc1510Kpwd       SRV    _kpasswd._tcp.<DnsDomainName>
Rfc1510UdpKpwd    SRV    _kpasswd._udp.<DnsDomainName>
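A hedged sketch of the Q267855 change using reg.exe: suppress the generic (non-site) registrations on a branch DC so only the site-specific *AtSite records remain. The exact mnemonics to suppress should be taken from Q267855; the selection below is illustrative only.

    rem "\0" separates the entries of the REG_MULTI_SZ value.
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters ^
        /v DnsAvoidRegisterRecords /t REG_MULTI_SZ ^
        /d Dc\0DcByGuid\0Gc\0GcIpAddress\0GenericGc\0Kdc\0Ldap\0LdapIpAddress\0Rfc1510Kdc\0Rfc1510UdpKdc\0Rfc1510Kpwd\0Rfc1510UdpKpwd
    rem Restart netlogon so the record set is re-registered.
    net stop netlogon && net start netlogon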
Auto Site Coverage
- Auto site coverage allows DCs to advertise for sites without DCs, if they are in the closest site to the DC
- Not practical for Branch Office deployments
- Root DCs would advertise for all sites
- If client cannot connect to a local DC, it will fall back to hub site anyway (configuration of SRV records)
Name Server Records
- DNS servers use NS records to advertise that they are authoritative for a zone
- Hidden name servers are not advertised
- Clients find DNS servers through DNS client configuration (preferred and alternate DNS servers)
- Configure Branch Office DNS servers to not add NS records (a registry sketch follows this list)
- Two registry keys
- HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters, REG_DWORD DisableNSRecordsAutoCreation
- Server automatically adds NS record (0 / 1)
- HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters\Zones\<ZoneName>, REG_SZ AllowNSRecordsAutoCreation
- Recommended; single point of administration
- Value is a list of space-separated IP addresses of the DNS servers that are allowed to add their NS records to the zone
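A hedged sketch of both registry options using reg.exe; the zone name and hub server addresses are placeholders:

    rem Option 1 - per server: this branch DNS server never adds its own NS record.
    reg add HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters ^
        /v DisableNSRecordsAutoCreation /t REG_DWORD /d 1
    rem Option 2 - per zone (recommended): only the listed hub DNS servers
    rem may auto-create NS records for the zone.
    reg add HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters\Zones\corp.microsoft.com ^
        /v AllowNSRecordsAutoCreation /t REG_SZ /d "10.0.0.10 10.0.0.11"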
Planning For Replication
- Concepts
- Connection objects
- KCC
- Site-links
- Site-link bridges
- Sysvol replication
- Planning steps
- Planning for Bridgehead Servers
- Determine number of Sites
- Decide whether to use the KCC or create replication topology manually
- Define replication schedule
- Create Site-Links
- Create connection objects (if KCC disabled)
Planning For Bridgehead Servers
- How many bridgehead servers do I need?
- How to configure bridgehead servers
- Things you need to know
- Centralized or decentralized change model
- Data update requirements formulated by customer
- How many times a day do we need to replicate?
- How many changes happen in a branch per day
- Total number of domain controllers
- Time needed to establish dial-on-demand network connectivity
Inbound Versus Outbound Replication
- Different threading model
- Outbound replication is multi-threaded
- Bridgehead server can have multiple replication partners
- Bottleneck is most likely CPU (monitor!)
- Inbound replication is single-threaded
- Replication of changes from branches to hub is serialized
- Bottleneck is most likely the network
Replication Traffic
- Documented in Notes from the Field: Building Enterprise Active Directories
- Replication overhead for branch office deployments
- Overhead if there are two domain controllers: 21 KB
- 13 KB to set up the replication sequence
- 5 KB to initiate replication of the domain naming context, including the changed password
- 1.5 KB for each schema and configuration naming context (where no changes occurred)
- Each DC will add 24 bytes
- Overhead for 1,002 DCs: 162 KB
Number Of Hub-Outbound Replication Partners
- Use the formula: OC = (H × O) / (K × T)
- H = sum of hours that outbound replication can occur per day
- O = number of concurrent connections per hour of replication (a realistic value is 30 on the reference server specified below)
- K = number of required replication cycles per day (this parameter is driven by replication latency requirements)
- T = time necessary for outbound replication (depending on assumed replication traffic, this should be one hour or a multiple of one hour)
Example: Outbound Replication Partners
- Requirements
- Replication twice a day (= K)
- WAN available 8 hours (= H)
- High-performance hardware (30 concurrent connections) (= O)
- Outbound replication will always finish within 1 hour (= T)
- Applying the formula
- OC = (H × O) / (K × T) = (8 × 30) / (2 × 1) = 120
- Each bridgehead server can support 120 branch office DCs (outbound)
- If the number is too high/low, change parameters
- I.e., WAN available for 12 hours: 180 branches
- I.e., replicating only once a day: 240 branches
Number Of Inbound Replication Partners
- Use the formula: IC = R / N
- R = length of replication window in minutes
- N = number of minutes a domain controller needs to replicate all changes
- Use the replication window defined for outbound replication
- Example was WAN available for 8 hours
- If customer wants to replicate hub-inbound only once a day, then R = 480 minutes
- If customer follows hub-outbound model (twice a day), then R = 240 minutes
Example: Inbound Replication Partners
- Let's assume a slow WAN with DOD lines
- Factors
- Replication traffic (time to submit changes like password changes)
- Time to set up DOD connections
- 4 minutes per branch is conservative
- IC = R / N = 480 / 4 = 120 branches (both calculations are sketched below)
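Both sizing formulas reduce to integer arithmetic, so they can be checked in a plain batch script; a minimal sketch with the example values hard-coded:

    @echo off
    rem Outbound: OC = (H * O) / (K * T), with H=8 hours, O=30 connections,
    rem K=2 cycles/day, T=1 hour  ->  120 branches per bridgehead.
    set /a OC=(8*30)/(2*1)
    rem Inbound: IC = R / N, with R=480-minute window and N=4 minutes per
    rem branch (DOD setup plus change transfer)  ->  120 branches.
    set /a IC=480/4
    echo Outbound capacity: %OC% branches, inbound capacity: %IC% branches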
Example: Inbound Replication Partners
- Number of branches supported by one bridgehead server is the lower of the two results
- Outbound: 120 branches
- Inbound: 120 branches
- Result: one bridgehead can support 120 branches
- If you have 1,200 branches, you need 10 bridgehead servers
- Plan for disasters and special cases!
- Leave headroom for hub outbound replication
- Have spare machine available
- Create multiple connections from branch to hub DCs
Bridgehead Server Overload
- Symptoms
- Bridgehead cannot accomplish replication requests as fast as they come in
- Replication queues are growing
- Some DCs NEVER replicate from the bridgehead
- Once a server has successfully replicated from the bridgehead, its requests are prioritized higher than a request from a server that has never successfully replicated
- Monitoring (a monitoring sketch follows this list)
- Repadmin /showreps shows NEVER on last successful replication
- Repadmin /queue <DCName>
Bridgehead Server Overload
- Can be caused by
- Unbalanced site-links (if ISTG is enabled)
- Unbalanced connection objects
- Replication schedule too aggressive
- Panic troubleshooting
- Like changing the replication interval on all site-links to a drastically shorter interval to accommodate applications
- Solution
- If ISTG is enabled
- Turn off ISTG (prevent new connections from being generated)
- Delete all inbound connection objects
- Correct site-link balance and schedule
- Enable ISTG again (a repadmin sketch follows this list)
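On Windows Server 2003 the ISTG toggle can be scripted; a hedged sketch using repadmin /siteoptions with a placeholder site name (on Windows 2000 the same bit is flipped on the options attribute of the NTDS Site Settings object):

    rem Disable inter-site topology generation for the hub site while the
    rem connection objects and site-link schedules are being rebalanced.
    repadmin /siteoptions /site:Hub +IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED
    rem ...clean up connection objects, fix site-links, then re-enable:
    repadmin /siteoptions /site:Hub -IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED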
Bridgehead Server Hardware
- Processor
- Dual/quad Pentium III or Xeon recommended for bridgehead servers and servers supporting large numbers of users
- Memory
- Minimum of 512 MB
- Disks
- Configure the operating system and logs on separate drives that are mirrored. Configure directory database on Redundant Array of Independent Disks (RAID) 5 or RAID 0+1
- Use a larger number of smaller drives for maximum performance
- Drive capacity will depend on your specific requirements
Determine Number Of Sites
- Rule for creating sites
- For each physical location that has a WAN connection (less than 10 Mbit) to the hub:
- If there is a DC in the location
- Create a new site
- If not, and there is a service that uses the site model (DFS shares)
- Create a new site
- If not, create a subnet for the location and add the subnet to the hub site (or next closest site)
Use Of KCC For Inter-Site Replication Topology Generation
- Always disable transitiveness
- Windows 2000
- Fewer than 500 sites: use KCC
- But test your hardware first
- Follow guidelines in KB article Q244368
- More than 500 sites: create connection objects manually
- Branch Office deployment guide recommends manual topology for more than 100 sites
- Windows Server 2003: use KCC
Define Site Structure Of Hub Site
- If KCC is disabled, create single site
- If KCC is enabled, create one site per Bridgehead Server
- KCC has no concept of Inter-Site load balancing between servers in one site
- Create artificial sites in hub site to spread load between Bridgehead Servers
- Create Site-Links with staggered schedules between branches and hub sites
Load Balancing With Sites
[Diagram: hub partitioned into per-bridgehead sites; branch site-links such as BranchA1 (schedule 2am-4am, cost 100) and BranchC2 (schedule 4am-6am, cost 100) connect to the hub, while the hub site-link is always available with notification enabled, cost 1]
Load Balancing Manually (With Redundancy)
- Replicate on alternating schedule
[Diagram: hub site with branches Branch1 through Branch6, each replicating with the hub on alternating schedules]
Creating Connection Objects Manually
- Considerations
- Connection objects must be balanced between Bridgehead Servers
- Schedule on connection objects must be balanced
- Branches should have multiple connection objects to Bridgehead Servers for fault tolerance
- Connection objects need to be created and managed on the local DC
- Sounds complex?
- Use our set of scripts to create a manual replication topology (details later)
Building The Hub Site
- Building the root domain
- Availability of root domain
- Only needed for special configuration tasks
- Adding new domains
- Schema changes
- Kerberos trusts and dynamic registration of forest-wide resource records might depend on root domain
- Operations Masters
- Typically not critical for root domain
- Server Sizing
- Empty root domain does not require high-end hardware
- Kerberos referrals and dynamic DNS updates
- Disaster Recovery
- Root is critical for forest
- Make sure that you perform regular backups
Building The Hub Site
- Building the Branch Office Domain
- Operations Master
- Off-load PDC operations master
- Move Infrastructure Master off GC
- RID Master is the most critical operations master; monitor this machine very closely
- Bridgehead Servers
- If Branch Office DCs are GCs, then Bridgehead Servers should be GCs
- If DNS runs on Branch Office DCs, don't run DNS on Bridgehead Servers
- Disaster recovery
- State on Bridgehead server not very interesting; not an ideal candidate for backup
- Leave headroom on Bridgehead, or have spare machine in place
Staging Site
- Most companies use outsource partners to build servers and domain controllers
- Machines are built at the factories
- Server usually built from image and promoted later
- Where to promote domain controllers
- Staging site: less network traffic, better control of process, opportunity to run QA scripts while machine is accessible
- In branch: configuration less complex (domain controller finds its site)
Building The Staging Site
- Staging site needs to be permanently connected to production environment
- New DCs must be advertised
- New DCs need RID pool
- Fully control replication topology in the staging site
- Only case where KCC should be disabled for Intra-Site replication topology generation
- Reason is that once machines are moved out, domain controllers that have not learned that will try to replicate or re-route (DOD lines)
- Capacity planning for domain controller used as source
- Usually not a high-end machine
- Depends on how many DCs are installed in parallel
- Software installation
- Add Service Packs and QFEs to image
- Include Resource Kit, Support Tools and scripts for management and monitoring
- Document what is loaded on DC before machine is shipped
Domain Controller Build Process
- Use a dcpromo answer file to promote the domain controllers (an answer-file sketch follows this list)
- Do not turn off DCs before shipping them
- Best practice is to build DCs when they are needed, not months before
- If they are off-line for too long, they get out of sync with production
- Tombstone lifetime
- Domain controller passwords
- Install monitoring tools and make sure that monitoring processes are in place
- Configure domain controller for new site
- Clean up old connection objects before shipping the machine
- React if you find issues with domain controllers during the deployment
- Don't keep processes in place if they are broken
General Considerations For Branch Office Deployments
- Ensure that Your Hub is a Robust Data Center
- Do Not Deploy All Branch Office Domain Controllers Simultaneously
- Monitor load on Bridgehead servers as more and more branches come on-line
- Verify DNS registrations and replication
- Balance Replication Load Between Bridgehead Servers
- Keep Track of Hardware and Software Inventory and Versions
- Include Operations in Your Planning Process
- Monitoring plans and procedures
- Disaster recovery and troubleshooting strategy
- Personnel assignment and training
Branch Office Deployment Guide
- Prescriptive documentation
- Planning, deploying, monitoring
- Includes scripts for all tasks
AD Branch Office Scenario
AD Branch Office Deployment Process
- Build the Forest Root Domain and Central Hub Site
- Build the Branch Office Domain and Bridgehead Servers
- Pre-Staging Configuration at the Hub
- Create and Configure the Staging Domain Controller
- Stage a Branch Office Domain Controller
- Pre-shipment Configuration of the Branch Office Domain Controller
- Quality Assurance of the Domain Controller at the Branch Office
Active Directory Branch Office Scripts
- Four types of scripts are included with the Branch Office guides
- Configuration scripts
- Configure the environment in preparation for deploying branch office domain controllers
- Branch Office Domain Controller Deployment scripts
- Make implementation easier
- Connection Object scripts
- Create connection objects between the hub site bridgehead servers and branch office domain controllers
- Quality Assurance scripts
- Monitor an Active Directory environment
Connection Object Scripts
- Build the Hub and Spoke Topology
- Create Connection Objects between Branch DCs and Bridgehead Servers
- 4 Core Files
- Topo.dat
- Mkhubbchtop.cmd
- Mkdsx.dat
- Mkdsx.cmd
Creating Connection Objects
Mkdsx.cmd
- Creates the Connection Objects for the Hub and Spoke Topology Specified in the Mkdsx.dat File
- Connects to
- The bridgehead server in Topo.dat to create Hub connection objects
- The branch office DC to create the connection objects on it
Quality Assurance Scripts
- Used to Validate a Domain Controller
- Not Specific to a Branch Office Deployment
- Must be Scheduled to Run Daily on Every DC
- Used throughout the AD Branch Office Deployment guide
- Three Core Scripts
- QA_Check.cmd
- QA_Parse.vbs
- CheckServers.vbs
Running QA Scripts
QA_Check.cmd
- Main Script of the Quality Assurance Process
- Records the Current State of a DC in a series of log files
- Log files stored in C:\ADResults
QA_Check.cmd
- Uses the Following Reskit Tools
- DCDiag.exe, NetDiag.exe, Ntfrsutl.exe, Regdmp.exe, Repadmin /showreps, and Repadmin /showconn
- Uses the Following Scripts
- Gpostat.vbs - verifies that each Group Policy object is in sync
- Connstat.cmd - processes the output of the ntfrsutl sets command to generate a summary of the FRS connections for a given domain controller
- QA_Parse.vbs
Results Of QA_Check.cmd
QA_Parse.vbs
- Called by the QA_Check.cmd Script
- Parses the Log Files to Locate Errors and Potential Issues
- Any Errors or Issues are Written to a Summary File
- Summary File is
- Stored in C:\ADResults\<ComputerName>
- Copied by the QA_Check.cmd script to a central server so there is a single location to examine the state of all DCs
Results Of QA_Parse.vbs
Contents Of QAShare
CheckServers.vbs
- Run on the Central Server that has the Summary Files from each DC
- Provides a Status Report File with the Health of the DCs
- The Status Report Consists of Three Lists of DCs
- DCs that are healthy and did not report any errors
- DCs that reported errors and require further investigation
- DCs that did not report and should be investigated
Results Of CheckServers.vbs
QA Using The Scripts
- Schedule Scripts to Run Every Night (a scheduling sketch follows this list)
- Check ServerReport.txt Every Morning
- If Any Errors are Reported in ServerReport.txt, Examine the DC's Summary File
- If an Error in the Summary File Requires Further Investigation, Examine Logs on the DC
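One way to put the nightly run in place; a hedged sketch using schtasks (available on Windows Server 2003; on Windows 2000 the at command plays the same role), with path and start time as placeholders:

    rem Run QA_Check.cmd on this DC every night at 02:00 under the SYSTEM
    rem account, so the logs land in C:\ADResults before the morning review.
    schtasks /create /tn "Nightly QA_Check" /tr C:\Scripts\QA_Check.cmd ^
        /sc daily /st 02:00:00 /ru SYSTEM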
Best Practices DNS Model: Server Configuration
[Diagram: forest root domain (corp.…) with child domain noam]
- Forest root also hosts the _msdcs.<ForestRoot> zone
- Child domain hosts a secondary copy of the _msdcs.<ForestRoot> zone through incremental zone transfer