Windows User Group - New York Enterprise Windows Users Group

Windows User Group
Active Directory
Objectives
• Where did Active Directory come from?
• Why is AD the way it is?
• What is AD fundamentally?
• What does this mean to you?
• Where is AD going?
Agenda
• Directory Services History
• What is Active Directory
• How to implement AD
• Active Directory Futures
  • Windows 2003 R2
  • Active Directory Federation Services
Security
• Identity – the catalog of what you have and who you are
• Authentication – how do you know that someone is who they claim to be?
  • What you are
  • What you have
  • What you know
• Authorization – what can they do?
• Auditing – who did what?
Directory Services
• External (Public) Directories
  • X.500 (de jure)
  • DNS (de facto)
  • RFC 2247
  • PKI (not a DS, but included here for discussion)
• Internal Directories
  • IBM Mainframe (e.g., RACF, NetBIOS)
  • UNIX (e.g., hosts file, NIS, YP)
  • Novell Bindery/NDS
  • Banyan StreetTalk
  • LDAP
Active Directory Design Goals
• Maintain downlevel compatibility with NetBIOS domains
• Utilize Kerberos realms as the primary native namespace
• Utilize LDAP as the access/query protocol
• Support PKI
• Dynamically extensible
• Performance/cost
RFC 2247 is the Key
• X.500 never achieved global operational stability
• DNS became the de facto global naming standard
• RFC 2247 mapped the X.500 naming standard into the DNS nomenclature (see the sketch below)
• Administrative boundaries moved from the OU (X.500) to the DC (DNS). This is a point of contention with X.500-based directory services to this day.
• The Domain Component mapped directly into the Kerberos realm and NetBIOS domain namespace model.
• NetBIOS short names became the Relative Distinguished Name (RDN)
• PKI security boundaries mapped into the DC authority level.
• PKI cross-signed trusts mapped into the inter-domain trust model.
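To make the RFC 2247 mapping concrete, the snippet below converts a DNS domain name into the domainComponent-style distinguished name that Active Directory uses for its naming contexts. A minimal sketch; the contoso.com name is taken from the example forest used later in this deck.

```python
# RFC 2247: express a DNS domain name as LDAP domainComponent (DC=) values,
# which is how an AD domain's default naming context is named.
def dns_to_dn(dns_name):
    """Map a DNS domain name to its RFC 2247-style distinguished name."""
    return ",".join(f"DC={label}" for label in dns_name.split("."))

print(dns_to_dn("branches.corp.contoso.com"))
# -> DC=branches,DC=corp,DC=contoso,DC=com
```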
Active Directory Functional Components
• Database
  • Optimized for queries
  • Efficient use of space (sparse data)
  • Replication engine
• Protocol Headers
  • NetBIOS
  • LDAP
  • DAP
  • Kerberos
  • PKI
  • Other
• Management Interfaces
AD Database Issues
• Database structure
  • Bootstrapping
  • Attribute granularity
  • Attribute-level permissioning
  • Multi-valued attributes
  • Linked value integrity
• Schema Extensibility
• Replication
  • Replication topology
  • Replication protocols
  • Collision detection/resolution
AD Namespaces
• Forest Common
  • Schema Context
    • Small and rarely changes
    • Common throughout the forest
  • Configuration Context
  • Global Catalog
    • Contains a subset of attributes (see the query sketch below)
    • Glues the forest together
• Domain
  • Domain Naming Context
    • Contains all details of each domain's objects
  • Application Namespaces
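Because the Global Catalog holds only that subset (the partial attribute set) for every object in the forest, forest-wide lookups go to the GC LDAP port, 3268, instead of the standard port 389. A minimal sketch using the third-party ldap3 package; the host name, credentials, user, and attribute list are illustrative, not taken from this deck.

```python
# Forest-wide lookup against a Global Catalog (port 3268) with ldap3.
# A GC only returns attributes that are in the partial attribute set.
from ldap3 import Server, Connection, SUBTREE

gc = Server("hqdc2.hq.corp.contoso.com", port=3268)   # 3268 = GC LDAP port
conn = Connection(gc, user="CORP\\reader", password="********", auto_bind=True)

conn.search("DC=corp,DC=contoso,DC=com",              # forest root covers child domains too
            "(sAMAccountName=jdoe)",
            search_scope=SUBTREE,
            attributes=["displayName", "mail"])       # both are in the PAS by default
print(conn.entries)
```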
Floating Single Master Operations
• Forest-Wide Roles
  • Schema Master
  • Domain Naming Master
• Domain-Wide Roles
  • Primary Domain Controller (PDC) Emulator
  • RID Master
  • Infrastructure Master
    • Updates user-group relationships
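As a concrete companion to this list, the sketch below reads the fSMORoleOwner attribute from the well-known objects where Active Directory records each role holder. It uses the third-party ldap3 package; the host name, credentials, and contoso naming contexts are illustrative assumptions, not values from the deck.

```python
# Where each FSMO role owner is recorded: the fSMORoleOwner attribute on
# a well-known object per role. Illustrative sketch, not a deployment tool.
from ldap3 import Server, Connection, BASE

DOMAIN_DN = "DC=corp,DC=contoso,DC=com"
CONFIG_DN = f"CN=Configuration,{DOMAIN_DN}"

ROLE_OBJECTS = {
    "Schema Master":         f"CN=Schema,{CONFIG_DN}",
    "Domain Naming Master":  f"CN=Partitions,{CONFIG_DN}",
    "PDC Emulator":          DOMAIN_DN,
    "RID Master":            f"CN=RID Manager$,CN=System,{DOMAIN_DN}",
    "Infrastructure Master": f"CN=Infrastructure,{DOMAIN_DN}",
}

conn = Connection(Server("hqdc1.corp.contoso.com"),
                  user="CORP\\reader", password="********", auto_bind=True)
for role, dn in ROLE_OBJECTS.items():
    conn.search(dn, "(objectClass=*)", search_scope=BASE,
                attributes=["fSMORoleOwner"])
    # fSMORoleOwner points at the NTDS Settings object of the role holder.
    print(role, "->", conn.entries[0].fSMORoleOwner if conn.entries else "n/a")
```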
What’s new with AD Branch Offices this year?
• Windows Server 2003 Branch Office Guide released to the web
  • 250 pages of proven and supported recommendations
  • New Branch Office monitoring tool (Brofmon)
  • V1.1 of the guide shipped
• Upcoming Windows 2003 SP1 changes:
  • ADLB.EXE and DCDIAG.EXE have fixes (both updates are in the Branch Office Guide)
• Ultrasound, an FRS monitoring tool, shipped in late 2003
What’s upcoming with AD Branch Offices?
• R2 – Branch Office team building a branch office solution for role deployment
• V2.0 of the AD Branch Office Guide should ship March ’05
  • New chapter on disaster recovery for branches
  • New tool and process for converting all manual connections to KCC-generated ones
• Longhorn Server – branch appliance for authentication/authorization
AD Branch Office Scenario
[Topology diagram. The Data-Center-Site hosts the forest root DCs ROOTDC1 (GC, 10.0.0.1) and ROOT2DC2 (FSMO, 10.0.0.2) for corp.contoso.com; the headquarters DCs HQDC1 (FSMO, 10.0.0.3) and HQDC2 (GC, 10.0.0.5) for hq.corp.contoso.com; the hub DCs HUBDC1 (FSMO, 10.0.0.10) and HUBDC2 (10.0.0.11) plus the bridgehead GCs BHDC1–BHDC4 (10.0.0.12–10.0.0.15) for branches.corp.contoso.com; and the management servers MOMSVR (MOM Server, 10.0.0.26) and TOOLMGRSVR (Monitoring Server, 10.0.0.4). The Staging-Site contains STAGINGDC1 (GC, 10.0.0.25, branches.corp.contoso.com) and TSDCSERVER (ADS Server, 10.0.0.20). The branch sites BOSite1 through BOSiten each contain a single branch DC (BODC1 … BODCn). The data-center and staging DCs also run DNS.]
What Makes a Branch Office Design Interesting?
• IP connectivity, incl. WAN, link speed, dial on demand, routers, firewalls, IPSec
• Name resolution, incl. DNS server, zone, and client configuration
• Active Directory replication to a large number of replication partners
• FRS replication
• Group Policy implementation
• Considerations
  • Proper care of DNS name resolution will guarantee replication success
  • IPSec is the preferred firewall solution
New Features in Windows 2003 for Branch Office Deployments
• KCC improvements
  • KCC/ISTG inter-site topology generation
  • Bridgehead server load-balancing and connection object load-balancing tool (ADLB.EXE)
  • KCC redundant connection object mode for branch offices
  • No more “keep connection objects” mode if the replication topology is not 100% closed
  • Better event logging to find disconnected sites
• Replication improvements
  • Linked-value replication
  • More replication priorities
    • Intra-site before inter-site
    • NC priorities: Schema -> Config -> Domain -> GC -> DNS
    • Notification clean-up after site move
  • Lingering object detection
New Features in Windows 2003 for Branch Office Deployments
• No GC full sync
  • In Windows 2000, schema changes that changed the PAS (partial attribute set) triggered a GC full sync
  • Removed in Windows 2003
• Universal Group Caching
• DNS improvements
• Install from media
• FRS improvements
• Plus many more…
Active Directory Deployment for Branch Offices
• Active Directory Design
  • Forest design
  • Decide on centralized or decentralized deployment
  • Domain design
  • DNS design
  • Site topology and replication design
  • Capacity planning
  • Monitoring design
• Active Directory deployment
  • Deploying and monitoring non-branch domains
  • Deploying the branches domain in the hub site
  • Deploying and monitoring a staging site
  • Deploying and monitoring the branch sites
Forest Design
• Follow the recommendations in the Windows 2003 Deployment Kit (Chapter 2)
  • http://www.microsoft.com/downloads/details.aspx?familyid=6cde6ee7-5df1-4394-92ed-2147c3a9ebbe&displaylang=en
• Reasons for having multiple forests
  • Political / organizational reasons
    • Unlikely in branch office scenarios
  • Complexity of deployment
    • Too many locations where domain controllers must be deployed
    • Too many objects in the directory
      • Should be partitioned at the domain level
  • GCs too big?
    • Evaluate not deploying GCs to branch offices
    • Windows 2003: Universal Group Caching
• Recommendation: deploy a single forest for branch offices
Active Directory Deployment for Branch Offices
Centralized vs. Decentralized Domain Controller Deployment
• The number of sites with domain controllers defines the scope of the deployment
• Deployment options
  • Centralized deployment
    • Domain controllers are located in datacenters / hub sites only
    • Users in branches log on over the WAN link
  • Decentralized deployment
    • All branches have domain controllers
    • Users can log on even if the WAN is down
  • Mixed model
    • Some branches have DCs, some don’t
• Centralized deployment has lower cost of ownership
  • Easier to operate, monitor, troubleshoot
Design Considerations for Domain Controller Placement
• A local DC requires physical security
• Domain controller management
  • Monitoring, auditing, SP deployment, etc. must be guaranteed
• Required services – business drivers
  • File & print, e-mail, database, mainframe
  • Most of them require Windows logon
  • Logon requires DC availability
  • Can the business still run even if the WAN is down?
    • Is the business in the branch focused on an LOB application that requires WAN access (mainframe)?
• Log on locally or over the WAN
  • WAN logon requires acceptable speed and line availability
  • WAN is only an option if the WAN is reliable
    • Cached credentials only work for local workstation logon
    • Terminal Services clients use local logon
• In many cases, network traffic is important
  • Client logon traffic – directory replication traffic
Design Considerations for Global Catalog Placement
• No factor in a single-domain deployment
  • Turn on the GC flag on all DCs
  • No extra cost associated
• GC no longer needed for user logon in multi-domain deployments
  • Universal Group Caching
• GC placement driven by application requirements in multi-domain deployments
  • Exchange 2000/2003 servers
  • Outlook
Active Directory Deployment for Branch Offices
Domain Design
Recommendation for Branch Office Deployment
• Use a single domain
• If a high number of users work in a central location
  • Typically only a single administration area
  • Central administration (users and policies)
  • Replication traffic is higher, but the model is more flexible (roaming users, no GC dependencies)
  • Database size is no big concern
  • Create different domains for headquarters and branches
• If the number of users is very high (> 50,000)
• A high number of domains is discouraged
  • Create geographical partitions
  • Examples: one domain per branch, one domain per state
  • Increases complexity of deployment
Active Directory Deployment for Branch Offices
DNS Design
Recommendations
• DNS server placement
  • Put a DNS server on all domain controllers
• DNS client (resolver) configuration
  • Primary DNS server: local machine
  • Secondary DNS server: same-site DNS server or hub DNS server
  • Windows 2000: different configuration for forest root DCs
• DNS zone configuration
  • Use AD-integrated zones (application partitions)
  • Use DNS forwarding
• No NS records for branch office DCs
  • Configure zones accordingly
DNS Design
Managing SRV (locator) records and autositecoverage
• SRV records are published by Netlogon in DNS
  • At the site level and at the domain/forest level
  • Clients search for services in the client site first, and fall back to the domain/forest level (see the lookup sketch below)
• Branch office deployments require specific configuration
  • A large number of domain controllers creates a scalability problem for domain-level registration
    • If more than 1,200 branch office DCs try to register SRV records at the domain level, registration will fail
  • Registration at the domain/forest level is in most cases meaningless
    • The DC cannot be contacted over the WAN / dial-on-demand link anyway
    • If the local lookup in the branch fails, the client should always fall back to the hub only
• Disable autositecoverage
• Use Group Policy for configuration
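The locator fallback described above can be illustrated with a plain DNS query: look for the site-specific SRV record first, then the domain-wide one. A minimal sketch using the dnspython package; the site name Branch01 is an illustrative assumption, and the domain comes from this deck's contoso example.

```python
# DC locator fallback: site-specific SRV record first, then domain-level.
import dns.resolver

def find_dcs(domain, site=None):
    """Return (host, port) tuples for LDAP SRV records, trying the
    site-specific record first and falling back to the domain-wide one."""
    names = []
    if site:
        names.append(f"_ldap._tcp.{site}._sites.dc._msdcs.{domain}")
    names.append(f"_ldap._tcp.dc._msdcs.{domain}")

    for name in names:
        try:
            answer = dns.resolver.resolve(name, "SRV")
            return [(str(r.target).rstrip("."), r.port) for r in answer]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue  # fall back to the next, less specific record
    return []

print(find_dcs("branches.corp.contoso.com", site="Branch01"))
```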
Using GPOs for DNS Settings
• Create a new global group for hub DCs
  • Add all non-branch-office DCs as group members
• Create a new GPO (BranchOfficeGPO)
  • Configure the DC locator records that are not registered by branch DCs
  • Configure the refresh interval
• In the BranchOfficeGPO properties, deny “Apply Group Policy” to the hub DC group
  • A negative list is easier to manage than a positive list
    • No damage if a DC is not added to the group
    • Smaller number of hub DCs than branch office DCs
• Edit the Default Domain Controllers Policy
  • Disable automated site coverage (the registry sketch below shows the underlying values)
  • Important: this must be configured for ALL DCs, not only branch office DCs
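A read-only sketch of the Netlogon registry values that the DNS-related policy settings above ultimately control on a DC (run on Windows). The value names DnsAvoidRegisterRecords and AutoSiteCoverage are the standard Netlogon parameters; treat the exact policy-to-value mapping as an assumption to verify against the Branch Office Guide.

```python
# Inspect the Netlogon parameters behind the DNS locator GPO settings.
import winreg

NETLOGON_PARAMS = r"SYSTEM\CurrentControlSet\Services\Netlogon\Parameters"

def read_value(name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NETLOGON_PARAMS) as key:
            value, _type = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return None  # value not set, i.e. default behavior applies

# Mnemonics of locator records this DC is told not to register (multi-string).
print("DnsAvoidRegisterRecords:", read_value("DnsAvoidRegisterRecords"))
# 0 disables automatic site coverage; unset/1 leaves it on.
print("AutoSiteCoverage:", read_value("AutoSiteCoverage"))
```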
Active Directory Deployment for Branch Offices
Replication Planning
Improvements in Windows 2003
• Windows 2000
  • Topology creation had scalability limits
  • Required managing connection objects manually
• Windows 2003 has many improvements to fully automate topology management
  • New KCC / ISTG algorithm
  • Bridgehead server load-balancing
  • KCC redundant connection object mode
    • Specifically developed for branch office deployments
Replication Planning
KCC/ISTG
• ISTG = Inter-Site Topology Generator
  • Computes a least-cost spanning tree for the inter-site replication topology
• Does not require the ISM service
  • Windows 2000: ISTG uses the ISM service
• Runs every 15 minutes by default
Replication Planning
KCC/ISTG
• Vastly improved inter-site topology generation (KCC/ISTG) scalability
  • Complexity: approximately O(d*s), where d = number of domains and s = number of sites (Windows 2000: approximately O(d*s²); see the comparison below)
• Scales to more than 5,000 sites
• Can generate a different topology than the Windows 2000 KCC/ISTG
  • Still single-threaded – uses only one CPU on SMP DCs
  • Performance: 4,000 sites in 10 seconds (700 MHz test system)
  • Ongoing tests in the scalability lab
  • Requires Windows 2003 forest functional level
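A quick back-of-the-envelope comparison of the two complexity figures above; the 3,000-site count is an arbitrary illustration and the units are relative work, not measured times.

```python
# Relative cost of inter-site topology generation for one domain.
d, s = 1, 3000                                 # one domain, 3,000 branch sites
print("Windows 2003 KCC/ISTG ~", d * s)        # O(d*s)   ->      3,000 units
print("Windows 2000 KCC/ISTG ~", d * s ** 2)   # O(d*s^2) ->  9,000,000 units
```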
Replication Planning
Bridgehead Server Selection
• Windows 2000
  • On a per-site basis, for each domain, one DC per NC is used as bridgehead
• Windows 2003
  • On a per-site basis, for each domain, all DCs per NC are used as bridgeheads
  • The KCC picks a DC randomly amongst the bridgehead candidates when a connection object is created
    • For both incoming and outgoing connection objects
Replication Planning
Bridgehead Server Load-Balancing
• The KCC/ISTG randomly chooses a bridgehead server
  • Both incoming and outgoing replication
• Once a connection object is established, it is not rebalanced when changes happen
  • Adding new servers does not affect existing connection objects
• Has to be used with care in branch office deployments
  • Necessary to control which servers are used as bridgehead servers
• Recommendation: use a preferred bridgehead server list and the load-balancing tool
Replication Planning
Preferred Bridgehead Server List
• Some servers should not be used as bridgeheads
  • PDC operations master, Exchange-facing GCs, authentication DCs
  • Weak hardware
• Solution: preferred bridgehead server list
  • Allows the administrator to restrict which DCs can be used as bridgehead servers
  • If a preferred bridgehead server list is defined for a site, the KCC/ISTG will only use members of the list as bridgeheads
• Warning:
  • If a preferred bridgehead server list is defined, make sure there are at least 2 DCs per NC in the list
  • If there is no DC for a specific NC in the list, replication for that NC will not occur out of the site
  • Don’t forget application partitions
  • If branches have GCs, all bridgeheads should be GCs
Replication Planning
Active Directory Load Balancing Tool (ADLB)
• ADLB complements the KCC/ISTG
  • Real load balancing of connection objects
  • Staggers schedules using a 15-minute interval
    • Hub-outbound replication only
    • Hub-inbound replication is serialized
  • Does not interfere with the KCC
    • The KCC is still needed / a prerequisite
    • The tool does not create manual connection objects, but modifies the “from-server” attribute on KCC-created connection objects
• Can create a preview
• Single EXE / command-line tool
• Not needed for fault tolerance, only as an optimization
  • Allows using the tool as an advisor
  • Runs on a single server / workstation
  • Uses the ISTG in the hub site to re-balance connection objects
  • Can be run on any schedule
Replication Planning
KCC Redundant Connection Objects Mode
• Goal
  • Create a stable, simple, and predictable replication topology
  • Like the mkdsx scripts for Windows 2000
• Enabled on a per-site level
• Implementation
  • Creates two redundant connection objects
    • Each branch site replicates from two different bridgehead servers
    • Two different bridgehead servers replicate from each site
    • The replication schedule is staggered between the connection objects
  • Fail-over is disabled
    • If replication from one bridgehead fails, the branch can still replicate from the other bridgehead
  • Schedule hashing is enabled
    • Inbound connections start replication at a random time inside the replication window
  • Only DCs in the same site are used for redundant connection objects
  • Demoting a DC causes the KCC to create a new connection object
Replication Planning
KCC Redundant Connection Objects Mode
• Schedule for redundant connection objects
  • Uses the schedule defined on the site link
    • Example: window open 8pm to 2am, replicate once every 180 minutes (= 2 replications)
  • Divide by 2 and stagger
    • Connection object 1 replicates once between 8pm and 11pm
    • Connection object 2 replicates once between 11pm and 2am
  • The second replication usually causes little network traffic
• Monitoring becomes even more critical
  • Important to act quickly if a hub DC becomes unavailable
Replication Planning
KCC Redundant Connection Objects Mode
[Diagram. A hub site with two bridgeheads (BH1, BH2) connects to Branch01 (BranchDC01) over Site Link 1 and to Branch02 (BranchDC02) over Site Link 2; each site link has a duration of 8 hours and replicates every 240 minutes, with the replication window open from 0:00 to 8:00 a.m. The staggered slots shown are 0:00–0:15, 2:00–2:15, 4:00–4:15, and 6:00–6:15 for BranchDC01, and 0:16–0:30, 2:16–2:30, 4:16–4:30, and 6:16–6:30 for BranchDC02. The sketch below reproduces the staggering rule.]
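The following is a minimal sketch of the "divide by two and stagger" rule from the previous two slides, not the actual KCC schedule-hashing code; it reuses the 8pm–2am / 180-minute example from the slide. Times are minutes from midnight.

```python
# Split a site-link replication window into equal sub-windows, one per
# redundant connection object, so each object replicates once per window.
def stagger(window_start, window_end, n_connections=2):
    length = (window_end - window_start) % (24 * 60) or 24 * 60
    part = length // n_connections
    return [((window_start + i * part) % (24 * 60),
             (window_start + (i + 1) * part) % (24 * 60))
            for i in range(n_connections)]

def hhmm(m):
    return f"{m // 60:02d}:{m % 60:02d}"

# 8pm to 2am window from the slide's example.
for i, (s, e) in enumerate(stagger(20 * 60, 2 * 60), start=1):
    print(f"connection object {i} replicates once between {hhmm(s)} and {hhmm(e)}")
# -> connection object 1: 20:00-23:00, connection object 2: 23:00-02:00
```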
Replication Planning
Recommendations: Sites, Site-Links and Topology
• Create a single site for the hub site
  • Leverage KCC load-balancing between bridgehead servers
• Create site links between branch office sites and the hub site
  • No redundant site links or connection objects are needed
• Disable transitivity of site links
  • Not only for performance, but also to avoid branch-to-branch fail-over connection objects
• Disable auto-site coverage
• Use the KCC/ISTG services
  • Use KCC redundant connection objects mode
• Use ADLB to load-balance connection objects
• Use Universal Group Caching to remove the requirement for a GC in the branch
  • Unless a branch application requires a GC
Active Directory Deployment for Branch Offices
Capacity Planning
Replication Planning
• Branch office DCs
  • Usually low load only
  • Use minimum hardware
• Datacenter DCs
  • Depends on usage
  • See the Windows 2003 Deployment Kit for DC capacity planning
• Bridgehead servers
  • Require planning
Capacity Planning
Formulas to compute the number of bridgeheads
• Hub outbound replication is multi-threaded
• Hub inbound replication is single-threaded
• Hub outbound: OC = (H * O) / (K * T)
  • OC = outbound connections
  • H = sum of hours available for outbound replication
  • O = concurrent connection objects
  • K = number of replications required per day
  • T = time necessary for outbound replication (usually one hour)
• Hub inbound: IC = R / N (see the calculator sketch below)
  • IC = inbound connections
  • R = length of replication in minutes
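A small calculator for the two bridgehead sizing formulas on this slide. The slide does not define N; it is read here as the minutes one inbound replication takes (inbound is serialized), and R as the length of the inbound replication window in minutes, so confirm both readings against the Branch Office Guide. The example numbers are illustrative, not from the deck.

```python
# Bridgehead sizing per the slide's formulas, with hedged semantics for R and N.
def hub_outbound_connections(hours_available, concurrent_conns,
                             replications_per_day, hours_per_replication=1.0):
    """OC = (H * O) / (K * T): connections one hub bridgehead can serve outbound per day."""
    return (hours_available * concurrent_conns) / (
        replications_per_day * hours_per_replication)

def hub_inbound_connections(window_minutes, minutes_per_replication):
    """IC = R / N: serialized inbound replications that fit in the window."""
    return window_minutes / minutes_per_replication

# Example: 6-hour nightly window, 10 concurrent outbound threads,
# 2 required replications per branch per day, 1 hour per replication.
print(hub_outbound_connections(6, 10, 2))   # -> 30.0 branches outbound
# Example: 360-minute window, 15 minutes per inbound replication.
print(hub_inbound_connections(360, 15))     # -> 24.0 branches inbound
```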
Capacity Planning
Bridgehead Server Overload
• Cause
  • Unbalanced site links
  • Unbalanced connection objects
  • Replication schedule too aggressive
  • Panic troubleshooting
• Symptoms
  • The bridgehead cannot accomplish replication requests as fast as they come in
  • Replication queues are growing
  • Some DCs NEVER replicate from the bridgehead
    • Once a server has successfully replicated from the bridgehead, its requests are prioritized higher than a request from a server that has never successfully replicated
• Monitoring
  • Repadmin /showreps shows NEVER for the last successful replication
  • Repadmin /queue <DCName>
Capacity Planning
Bridgehead Server Overload – Solution
• Turn off the ISTG
  • Prevents new connections from being generated
• Delete all inbound connection objects
• Correct the site-link balance and schedule
• Enable the ISTG again
• Monitor AD and FRS replication for recovery
Active Directory Deployment for Branch Offices
Monitoring Design
• Monitoring is a must for any Active Directory deployment
  • DCs that are not replicating will be quarantined
  • DCs might have stale data
  • Not finding issues early can lead to more problems later
    • E.g., a DC does not replicate because of name resolution problems, then its password expires
• Use MOM for the datacenter / hub site
  • Monitor replication, name resolution, performance
• The Windows Server 2003 Branch Office Guide ships with BrofMon
  • A system to push and run scripts on branch DCs
  • Results are copied to a central server
  • A web page presents a red/yellow/green state per server (see the roll-up sketch below)
• Evaluate available monitoring tools
  • MOM and third parties
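The snippet below is a minimal sketch of the red/yellow/green roll-up idea described above. It is not BrofMon; the check names, thresholds, and sample data are illustrative assumptions.

```python
# Classify per-DC check results into a traffic-light state for a status page.
from datetime import datetime, timedelta

def classify(dc_result):
    """Map one branch DC's collected check results to red/yellow/green."""
    age = datetime.utcnow() - dc_result["last_successful_replication"]
    if dc_result["dns_registration_ok"] is False or age > timedelta(days=2):
        return "red"        # quarantine risk: act immediately
    if age > timedelta(hours=8) or dc_result["replication_queue"] > 50:
        return "yellow"     # falling behind: investigate
    return "green"

results = {
    "BODC1": {"last_successful_replication": datetime.utcnow() - timedelta(hours=2),
              "dns_registration_ok": True, "replication_queue": 3},
    "BODC2": {"last_successful_replication": datetime.utcnow() - timedelta(days=3),
              "dns_registration_ok": True, "replication_queue": 120},
}
for dc, data in results.items():
    print(dc, classify(data))
```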
Active Directory Deployment for Branch Offices
Deploying Non-Branch Domains
• Not different from a normal deployment
  • Documented in the Windows 2003 Deployment Kit
• Build the forest root domain
• Create all sites (incl. branches)
• Build other non-branch domains as needed
Active Directory Deployment for Branch Offices
Deploying Branches Domain in Hub Site
• Install the operations master
• Install the bridgehead servers
• Install and configure ADLB
• Modify the domain GPO for DNS settings
  • Auto-site coverage
• Configure the DNS zone for NS records
• Create the branches DNS GPO
  • SRV record registration
Active Directory Deployment for Branch Offices
Deploying Staging Site
• The staging site has special characteristics
  • All replication topology must be created manually
    • The KCC is turned off inter- and intra-site
    • Scripts will be provided
  • Should not register DNS NS records
• Create manual connection objects between the staging site and production
  • The staging DC needs to be able to replicate 24/7
• Install Automated Deployment Services (ADS)
• Create an image for branch DCs pre-promotion
Active Directory Deployment for Branch Offices
Deploying Branch Sites
• Build branch DCs in the staging site from the image
• Run the quality assurance scripts (provided)
• Move the branch DC into the branch site
• Ship the DC
General Considerations for Branch Office Deployments
• Ensure that the hub is a robust data center
• Monitor the deployment
  • Use MOM for hub sites
• Do not deploy all branch office domain controllers simultaneously
  • Monitor the load on bridgehead servers as more and more branches come online
  • Verify DNS registrations and replication
• Balance the replication load between bridgehead servers
• Keep track of hardware and software inventory and versions
• Include operations in the planning process
  • Monitoring plans and procedures
  • Disaster recovery and troubleshooting strategy
  • Personnel assignment and training
Summary
• Windows 2003 has many improvements for branch office deployments
  • New KCC algorithm: no more scalability limit
  • KCC redundant connection object mode: provides stability
  • Less replication traffic through LVR replication and DNS in application partitions
• Deployments are much easier to manage
  • No manual connection object management
  • GPO for DNS locator settings
  • No more island problem
• Bridgehead servers are more scalable
• The Branch Office Guide will have step-by-step procedures for deployment and tools
• Total cost of deployment will be much lower
AD Futures
• Windows 2003 ‘R2’ Release
  • Caching
• AD Federation Services
User Group Future Topics
• Advanced AD architecture
  • Multi-forest issues
  • Exchange issues
  • Internet facing
• AD Operations
  • Provisioning systems
  • Monitoring systems
  • Deployment systems
• AD debugging
• AD programming
© 2004 Microsoft Corporation. All rights reserved.
This presentation is for informational purposes only. Microsoft makes no warranties, express or implied, in this summary.