Exchange Design Concepts and Best Practices

[Diagram: Evolution — from a 2-node design on shared SAN storage (FC connectivity, RAID 10) to a 4-node design with DAS per node using NL-SAS drives in JBOD.]
In the modern Exchange world, software, not hardware, powers and controls the solution
Reduce complexity, simplify the solution
Decrease the number of system dependencies to improve availability and lower risk
Use native capabilities where possible: they keep the design simpler
Deploy redundant solution components to increase availability and protect the solution
Avoid failure domains: do not group redundant solution components into blocks that could be impacted by a single failure
Enable and enhance the user experience
Provide the functionality and access that end users require or expect
Provide large, low cost mailboxes
Use Exchange as a single data repository
Increase value with Lync and SharePoint integration
Build a bridge to the cloud: ensure feature rich cloud integration and co-existence
Optimize people and process, not just technology
Decrease the complexity of team collaboration by leveraging solution / workload focused teams
Simplify and optimize the administration, monitoring, and troubleshooting processes
Reduce total cost of ownership (TCO) for the solution
Use commodity hardware and leverage native product capabilities
Implement a storage solution that minimizes cost, complexity, and administrative overhead
Failures *do* happen!
Availability principles: DAG beyond the "A"
http://blogs.technet.com/b/exchange/archive/2011/09/16/dag-beyond-the-a.aspx
BAD: critical system dependencies decrease availability
Classical shared infrastructure design introduces numerous critical dependency components
Relying on hardware requires expensive redundant components
Deploy multi-role servers; avoid intermediate and extra components (e.g. SAN, network teaming, archiving servers)
Simpler design is always better: KISS
GOOD: redundant components increase availability
Multiple database copies; multiple balanced servers
Software, not hardware, is driving the solution:
Exchange powered replication and managed availability
Redundant transport and Safety Net
Load balancing and proxying to the destination
BAD: failure domains combining redundant components decrease availability
Failure domains reduce availability and introduce significant extra complexity
Examples: SAN, blade chassis, virtualization hosts
[Diagram: classical SAN design — shelves populated with hundreds of 300 GB 15K rpm drives.]
[Diagram: failure domain examples — hp StorageWorks SAN frames with low-level FC replication between them; blade chassis full of HP ProLiant BL460c Gen8 servers; virtualization hosts running virtual servers across two sites. Combined workloads + shared spindles + dynamic disk provisioning + virtual LUN carving = very complex.]
[Diagram: hardware building block = Exchange architecture building block (DAG) — servers with Smart Array P600 controllers attached to HP StorageWorks MSA70 enclosures of 60 GB 5.4K rpm SATA drives in JBOD, with DAG replication between the building blocks.]
Scale the solution out, not in; more servers mean better availability
Nearline SAS storage: provide large mailboxes by using large, low cost drives
Exchange I/O has been reduced by 93% since Exchange 2003
An Exchange 2013 database needs ~10 IOPS; a single Nearline SAS disk provides ~60 IOPS; a single 2.5" 15K rpm SAS disk provides ~230 IOPS
3+ redundant database copies eliminate the need for RAID and backups
Redundant servers eliminate the need for redundant server components (e.g. NIC teaming or MPIO)
[Diagram: physical servers with JBOD storage, linked by DAG replication.]
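To make the IOPS math concrete, here is a minimal back-of-the-envelope sketch in PowerShell; the four-copies-per-disk density is an assumption borrowed from the building block examples later in this deck:

# Figures from this deck: ~10 IOPS per Exchange 2013 database,
# ~60 IOPS per Nearline SAS spindle; copies per disk is assumed
$iopsPerDatabase  = 10
$iopsPerNlSasDisk = 60
$copiesPerDisk    = 4
$required = $iopsPerDatabase * $copiesPerDisk
"Required: $required IOPS; available: $iopsPerNlSasDisk IOPS; headroom: $($iopsPerNlSasDisk - $required) IOPS"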
Google, Microsoft, Amazon, and Yahoo! have been using commodity hardware for 10+ years
Not only for messaging but for other technologies as well (it actually started with search)
Inexpensive commodity servers and storage as the building block
Easily replaceable, highly scalable, extremely cost efficient
Software, not hardware, is the brain of the solution
Team Dependencies and Solution Complexity
BAD: many infrastructure-focused teams, each owning a technology layer: Help Desk, Security Team, Load Balancing Team, Telephony Team, Network Team, Directory Team, Application Team, Windows Team, Server Platform Team, Storage Team
GOOD: solution / workload focused teams (UC Team, Messaging Team, Collaboration Team) that own the solution / technology, not the infrastructure area
Exchange PLA: special, tightly scoped reference architecture offering from Microsoft Consulting Services
Based on deployment best practices and collective customer experience
Structured design based on detailed rule sets to avoid common mistakes and misconceptions
Sits on the spectrum from Exchange On-Premises custom design, through Exchange On-Premises recommended best practices, to the Exchange On-Premises PLA design, and on to Exchange Online in the public cloud (Office 365)
Based on cornerstone design principles:
 4 database copies across 2 sites
 Unbound Service Site model (single namespace)
 Witness in the 3rd site
 Multi-role servers
 DAS storage with NL-SAS or SATA in JBOD configuration
 L7 load balancing (no session affinity)
 Large low cost mailboxes (25/50 GB standard mailbox size)
 Enable access for all internal / external clients
 System Center for monitoring
 Exchange Online Protection for messaging hygiene
Consolidated deployment: servers sit together in the datacenter, so server-to-server traffic (various protocols) stays local while client-to-server traffic (HTTPS) crosses the WAN; a firewall between clients and servers is OK
Distributed deployment: server-to-server traffic (various protocols) crosses the WAN alongside client-to-server HTTPS traffic; no firewalls between servers
[Diagram: unbound model — a single DAG spanning two datacenter sites behind one unified namespace (mail.company.com) serving all user sites; bound model — DAG1 and DAG2, each anchored to its own datacenter site with a site-specific namespace (mail1.company.com, mail2.company.com).]
DAG size is important because the DAG is the building block
General guidance is to prefer a larger DAG size
Larger DAGs provide better availability and load balancing
Larger DAGs, however, have disadvantages too (see http://aka.ms/partitioned-cluster-networks)
Scalability planning due to growth also impacts the decision
http://blogs.technet.com/b/exchange/archive/2010/09/10/3410995.aspx
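As a hedged illustration of scaling a DAG out (all server and DAG names here are hypothetical), members are simply added to the DAG as it grows:

# Add mailbox servers to an existing DAG; databases and copies
# are then distributed across the enlarged membership
'EXCH01','EXCH02','EXCH03','EXCH04' | ForEach-Object {
    Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer $_
}
Get-DatabaseAvailabilityGroup DAG01 -Status | Format-List Name,Servers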
[Diagram: symmetric database copy layout example — 60 databases with four copies each (DB01-DB18 shown), distributed across Server1-Server6 and spread over Disks 1-3 on each server. Each server is assigned 40 copies. With all servers healthy, each hosts 10 active databases (Active: 10 10 10 10 10 10); after one server failure the actives rebalance across the remaining five (Active: 12 12 0 12 12 12); after a second failure, across the remaining four (Active: 15 15 0 15 15 0).]
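A layout like the one sketched above is built by adding copies with staggered activation preferences so that active databases balance evenly; a minimal hypothetical example (server and database names invented):

Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer Server2 -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer Server4 -ActivationPreference 3
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer Server5 -ActivationPreference 4
Get-MailboxDatabaseCopyStatus -Identity DB01 | Format-Table Name,Status,CopyQueueLength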
New capability in Exchange 2013: DAG without a Cluster Administrative Access Point (a.k.a. IP-less DAG)
http://blogs.technet.com/b/scottschnoll/archive/2014/02/25/database-availability-groups-and-windows-server-2012-r2.aspx
http://blogs.technet.com/b/timmcmic/archive/2015/04/29/my-exchange-2013-dag-has-gone-commando.aspx
Recommended and preferred model, default in Exchange 2016
Advantages: reduced complexity (no cluster IP address or cluster name object to provision)
Disadvantages: tools that expect an administrative access point (e.g. Failover Cluster Manager) cannot connect to the cluster
Useful PowerShell cmdlets:
Get-Cluster -Name DAG01 | select *
Get-ClusterNode -Cluster DAG01 [-Name SVR01] | select *
Get-ClusterNetwork -Cluster DAG01 [-Name DAGNetwork01] | select *
Get-ClusterQuorum -Cluster DAG01 | fl
Get-ClusterGroup -Cluster DAG01
Move-ClusterGroup -Cluster DAG01 -Name "Cluster Group" -Node SVR01
Get-ClusterLog -Cluster DAG01
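For reference, a minimal sketch of creating an IP-less DAG (hypothetical names; requires Windows Server 2012 R2 and Exchange 2013 SP1 or later):

# No IP address and no CNO: pass IPAddress.None explicitly
New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer FS01 -WitnessDirectory C:\DAG01 -DatabaseAvailabilityGroupIpAddresses ([System.Net.IPAddress]::None)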
High Availability (HA) is redundancy of solution components within a datacenter
Site Resilience (SR) is redundancy across datacenters, providing a DR solution
Both HA and SR are based on native Exchange data replication
Each database exists in multiple copies, one of them active
Data is shipped to passive copies via transaction log replication over the network
It is possible to use a dedicated, isolated network for Exchange data replication
Network requirements for replication:
Each active → passive database replication stream generates X bandwidth
The more database copies, the more bandwidth is required
Exchange natively encrypts and compresses replication traffic
Pros and cons for a dedicated replication network => Not recommended
A replication network can help isolate client traffic from replication traffic
A replication network must be truly isolated along the entire data transfer path: having separate NICs but sharing the network path after the first switch is meaningless
A replication network requires configuring static routes and eliminating cross talk; this adds extra complexity and increases the risk of human error
If server NICs are 10Gbps capable, it's easier to have a single network for everything
No need for network teaming: think of a NIC as JBOD
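If a dedicated replication network is nonetheless deployed, DAG networks must be taken under manual control and replication scoped explicitly; a sketch using the default network names Exchange auto-creates:

Set-DatabaseAvailabilityGroup DAG01 -ManualDagNetworkConfiguration $true
Set-DatabaseAvailabilityGroupNetwork DAG01\MapiDagNetwork -ReplicationEnabled $false
Set-DatabaseAvailabilityGroupNetwork DAG01\ReplicationDagNetwork01 -ReplicationEnabled $true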
Virtualization introduces an additional critical solution component and associated performance and maintenance overhead
It reduces availability and introduces extra complexity
It could make sense for small deployments, helping consolidate workloads – but this introduces shared infrastructure
Consolidated roles have been the guidance since Exchange 2010 – and now there is only a single role in Exchange 2016!
Deploying multiple Exchange servers on the same host creates a failure domain
Hypervisor powered high availability is not needed with a proper Exchange DAG design
No real benefit from virtualization, as Exchange provides equivalent benefits natively at the application level
[Diagram: a consolidated server with collocated CAS and MBX roles has no extra overhead; separate CAS and MBX VMs add a virtualization layer; multiple multi-role VMs on a single virtualization host form a failure domain.]
[Diagram: storage design spectrum — connectivity from SAN to DAS, disk types from FC/SSD through SAS/SCSI and NL-SAS to SATA, and protection from RAID to JBOD (RBOD). The Exchange sweet spot is DAS with JBOD on SATA / NL-SAS.]
Conceptually similar to replication: the goal is to introduce a redundant copy of the data
Software, not hardware, powered → application aware replication
Enables each server and its associated storage to act as an independent, isolated building block
Exchange 2013 is capable of automatic reseed using a hot spare (no manual actions besides replacing the failed disk!); see the configuration sketch below
Finally, the cost factor: RAID 1/0 requires 2x storage, and you still want 4 database copies for Exchange availability!
BAD: 2 servers, 4 disks, 2 database copies
BETTER: 3 servers, 4 disks, 3 database copies
BEST: 4 servers, 4 disks, 4 database copies
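The automatic reseed (AutoReseed) capability mentioned above hangs off three DAG properties; a sketch with assumed paths and database density:

# Mount points for volumes/databases plus copies-per-volume let
# Exchange remap a spare volume and reseed a failed disk automatically
Set-DatabaseAvailabilityGroup DAG01 -AutoDagVolumesRootFolderPath 'C:\ExchangeVolumes' -AutoDagDatabasesRootFolderPath 'C:\ExchangeDatabases' -AutoDagDatabaseCopiesPerVolume 4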
Exchange mailboxes will grow, but they don't consume that much on Day 1
The desire not to pay for full storage capacity upfront is understandable
However, the inability to provision more storage and extend capacity quickly when needed is a big risk
Successful thin provisioning requires significant operational maturity and process excellence, rarely seen in the wild
Microsoft guidance and best practice is to use thick provisioning with low cost storage
An incremental provisioning model can be considered a reasonable compromise
[Charts: used mailbox storage growth vs. provisioned capacity, 2015-2019 — thick provisioning carries unused storage early on; thin provisioning risks underprovisioned storage as mailboxes grow; incremental provisioning tracks growth with periodic capacity additions.]
Exchange continuous replication is native transactional replication (based on transaction data shipping)
The database itself is not replicated (transaction logs are played back against the target database copy)
Each transaction is checked for consistency and integrity before replay (hence physical corruption cannot propagate)
Page patching is automatically activated for corrupted pages
The replication data stream can be natively encrypted and compressed (both settings are configurable; the default is cross site only; see the sketch below)
In case of data loss Exchange automatically reseeds or resynchronizes the database (depending on the type of loss)
If a hot spare disk is configured, Exchange automatically uses it for reseeding (like a RAID rebuild)
[Diagram: Exchange replication ships transaction logs only between mailbox servers, giving application aware resiliency (automatic replication; no physical corruption; page patching; auto failover; auto database mount; auto reseed; auto use of hot spare; etc.). Low level storage replication ships all data changed on disk, giving only low level resiliency (corruption can propagate; a consistency check with eseutil /k will take a long time; etc.).]
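The compression and encryption settings called out above are per-DAG properties; the values below are the product defaults (cross site only), shown here explicitly on a hypothetical DAG:

Set-DatabaseAvailabilityGroup DAG01 -NetworkCompression InterSubnetOnly
Set-DatabaseAvailabilityGroup DAG01 -NetworkEncryption InterSubnetOnly
# Valid values: Disabled, Enabled, InterSubnetOnly, SeedOnly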
[Diagrams: DAG building block hardware for the 3-node and 4-node DAG designs — HP DL380p G9, Dell R730xd, Cisco C240 M4 or similar servers with 12 or more 3.5" drives: a RAID1 system disk pair for the OS, a hot spare, and JBOD database disks. One design uses nine 8 TB database disks with four database copies each (e.g. Disk 1 holding DAG1-DB1/DB14/DB15/DB16); the other uses 4 TB disks in the server plus an HP D3600 / Dell MD1400 or similar external enclosure with its own hot spare and database disks.]
Client mix and per-client IO / CPU penalty factors (example):

Client type          | # of clients | IO Penalty | CPU Penalty
Outlook 2007         | 0            | 1          | 1
Outlook 2010         | 5000         | 1          | 1
Outlook 2013         | 5000         | 1          | 1
OWA                  | 1000         | 1          | 1
EWS                  | 100          | 1          | 1
ActiveSync Device 1  | 500          | 2          | 2
ActiveSync Device 2  | 1000         | 1.5        | 1.5
ActiveSync Device 3  | 500          | 1          | 1
Blackberry 5.x       | 1000         | 1.0        | 1.5
Blackberry 10        | 1000         | 1          | 1
GoodLink             | 100          | 2.16       | 2.38

Total mailboxes: 10000; Concurrency: 75%; Aggregate IO penalty: 1.22; Aggregate CPU penalty: 1.26
Penalty factor for Blackberry is based on data published at http://aka.ms/bes5performance
Penalty factor for Good is based on data published on the Good Portal at http://aka.ms/goodperformance
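One plausible way to aggregate the per-client penalties is a client-weighted average; the calculator behind this slide may weight differently (e.g. by concurrency), so treat this sketch as illustrative only:

$mix = @(
    [pscustomobject]@{ Clients = 5000; IoPenalty = 1.0  }   # Outlook 2010
    [pscustomobject]@{ Clients = 500;  IoPenalty = 2.0  }   # ActiveSync Device 1
    [pscustomobject]@{ Clients = 100;  IoPenalty = 2.16 }   # GoodLink
)
$totalClients = ($mix | Measure-Object Clients -Sum).Sum
$weightedSum  = ($mix | ForEach-Object { $_.Clients * $_.IoPenalty } | Measure-Object -Sum).Sum
'Aggregate IO penalty: {0:N2}' -f ($weightedSum / $totalClients)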
Continue to monitor system utilization closely
Four or more physical servers (collocated roles) in each DAG, split symmetrically between two datacenter sites
Four database copies (one lagged, with Replay Lag Manager) for HA and SR, on DAS storage with JBOD; minimized failure domains
Unbound Service site model with a single unified load balanced namespace and the Witness in a 3rd datacenter
[Diagram: reference architecture — two datacenter sites connected over a WAN, each with redundant routers, switches, and load balancers. Databases DB1-DB8 each have one active copy and three passive copies (including a lagged copy), with their logs distributed across the servers and sites and replicated over the DAG.]
Pre-Release Programs: be first in line!
Exchange & SharePoint On-Premises Programs
Customers get:
Early access to new features
Opportunity to shape features
Close relationship with the product teams
Opportunity to provide feedback
Technical conference calls with members of the product teams
Opportunity to review and comment on documentation
Get selected to be in a program:
Sign up at Ignite at the Preview Program desk
OR
Fill out a nomination: http://aka.ms/joinoffice
Questions:
Visit the Preview Program desk in the Expo Hall
Contact us at: ignite2015taps@microsoft.com
E-mail: borisl@microsoft.com
Profile: https://www.linkedin.com/in/borisl
Social: https://www.facebook.com/lokhvitsky
Download: http://myignite.microsoft.com