Exchange 2003 Scale-Out Performance on a Dell PowerEdge High Availability Cluster
Product Group - Enterprise
Dell White Paper
By Arrian Mehis, Ananda Sankaran, and Scott Stanford
September 2004
Contents
Executive Summary
Introduction
PowerEdge Cluster FE400 Architecture: Modular and Scalable Building Blocks
The Microsoft Exchange Server Benchmark Simulation and MMB3 Environment
Exchange Server 2003 Virtual Server: Scaling and Performance
Conclusion
Appendix A: The Test Environment
Section 1
Executive Summary
Dell PowerEdge™ clusters built with industry-standard components such as Dell
PowerEdge servers, Dell/EMC storage, and Windows® Server 2003, Enterprise
Edition provide scalable high availability by minimizing downtime through
application failover. The MAPI Messaging Benchmark Version 3 (MMB3) is an
industry-standard benchmark that simulates enterprise-level messaging
workloads in an Exchange Server 2003 environment. The MMB3 workload
places demands on the processor, memory, network, and disk subsystems. The
LoadSim 2003 tool, configured with the MMB3 profile, was used to simulate
MAPI-based workloads on a Dell cluster running Exchange 2003 Server in a 3+1
active/passive configuration. This paper discusses the Dell PowerEdge Cluster
FE400 architecture, the LoadSim 2003 and MMB3 test environment, and
key findings from tests that show how Dell PowerEdge clusters scale
well while running enterprise-level applications such as Exchange 2003.
Key points:
• Dell High Availability Clusters use industry-standard components
• Dell High Availability Clusters are designed and tested to ensure that single points of failure do not exist
• Dell High Availability Clusters are natural building blocks for scalable enterprise computing initiatives
• Dell High Availability Clusters provide scalability and enhance reliability for key mission-critical applications such as Microsoft Exchange 2003 Server
Section 2
Introduction
Scalable hardware that can meet increasing user and application demands while providing solid
performance and reliability is a crucial component of today's business operations. Scalability can
be achieved in two ways – scale-up and scale-out. Scale-up involves adding more system
resources (CPUs, memory, etc.) to an existing server to meet increasing requirements. Scale-out
involves adding more servers to satisfy the increasing application requirements. In the former
approach the application workload is concentrated on a single high-performance server and in
the latter approach the workload is distributed across two or more smaller servers. Currently the
computer industry focus is shifting towards techniques that help to achieve scale-out. Dell
believes that the approach of adding two- and four-processor servers in small increments to meet
increasing demands is cost effective and flexible.
A cluster is a collection of servers grouped together to achieve a common purpose. High
availability clusters are utilized for minimizing downtime of applications by performing
failovers. Downtime may be either planned (maintenance) or unplanned (server hardware,
operating system, or application failures) and failover helps ensure continuous availability of
application services and data. Windows Server 2003 clusters support up to eight server nodes,
and Exchange Server 2003 supports up to seven active Exchange virtual servers on such a
cluster. This makes Dell PowerEdge high
availability clusters a suitable platform for scaling out and supporting highly available Exchange
workloads.
This report provides an overview of scale-out performance for Exchange Server 2003 running on
a Dell PowerEdge Cluster FE400 in a 3+1 active/passive configuration. A performance baseline is
established for Exchange 2003 on one virtual server of the cluster, and then the performance
aspects of scaling the Exchange 2003 deployment out to three virtual servers on the cluster are
examined. First, however, the architecture of Dell PowerEdge Fibre Channel clusters is described,
followed by an overview of LoadSim 2003 and the MMB3 benchmark.
Section 3
PowerEdge Cluster FE400 Architecture: Modular and Scalable Building Blocks
The primary benefits of high availability clustering are availability, scalability, and
manageability. Availability of business-critical applications is provided by the cluster through
failover. If a server in the cluster fails, the applications running on it are failed over to the
remaining server(s) in the cluster. This helps ensure continuous availability of applications and
data and minimizes downtime. A server may fail and stop functioning because of failures in its
hardware, operating system, or applications; high availability clusters are therefore also referred
to as failover clusters. Scalability is achieved by incrementally adding nodes to the cluster.
Windows Server 2003 clusters support up to eight server nodes. Whenever the application load
exceeds current capacity, or more applications need to be deployed, additional nodes can be
added to the cluster to achieve scale-out. Manageability is brought about by the ability to perform
maintenance tasks on the server nodes in the cluster without incurring downtime. In a rolling
upgrade, the cluster continues to provide application service while software or hardware is
upgraded on each node in turn until all nodes have been upgraded. The cluster also provides a
single consistent view of all the server nodes for management purposes.
Windows 2003 Cluster Service
Windows 2003 Server Cluster, also referred to as Microsoft Cluster Service (MSCS), provides
support for up to eight server nodes. The cluster service on each node manages the cluster
resources that are required for providing application service. Cluster resources may be physical
entities, such as a disk or network adapter, or logical entities, such as an IP address or an
application service. The quorum resource is a special resource that maintains the database
reflecting the state of the cluster; it is required for crucial operations such as forming the cluster,
avoiding split-brain situations, and preserving cluster integrity. A resource group is a collection
of logically related resources managed by the cluster as a single unit, typically associated with
one application. MSCS follows a shared-nothing cluster architecture in which any common
cluster resource, including the quorum and devices such as shared drives, is owned by only one
cluster node at any point in time. A resource is moved to another node when the node owning it
fails or undergoes maintenance. All the nodes have a physical connection to the shared disks, but
only one node can access a given shared disk at a time. The cluster requires a shared storage
system for the shared disk resources, and the storage must be SCSI-based because the cluster uses
the SCSI protocol to manage the shared disk drives. Thus either multi-initiator SCSI storage or
Fibre Channel storage with the ability to serve multiple host servers with shared logical disks can
be used for clustering. Figure 1 below illustrates the physical architecture of a typical two-node
MSCS-based high availability server cluster.
Figure 1. Physical architecture of an MSCS-based HA cluster: two server nodes connected to clients over a public LAN, to each other over a private heartbeat network, and to shared storage over SCSI or SCSI over Fibre Channel.
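To make the resource-group and shared-nothing ownership concepts above more concrete, here is a minimal illustrative sketch in Python (not MSCS code); the node, group, and resource names are hypothetical and the failover logic is greatly simplified.

    # Illustrative model of MSCS-style resource groups in a shared-nothing cluster.
    # This is a simplified sketch, not the Microsoft Cluster Service API; names such
    # as "Node1" and "EVS1" are hypothetical examples.

    class ResourceGroup:
        """Logically related resources (disk, IP, application service) owned by
        exactly one node at a time -- the unit of failover."""
        def __init__(self, name, resources):
            self.name = name
            self.resources = resources     # e.g. shared disk, IP address, service
            self.owner = None              # only one node may own the group

    class Cluster:
        def __init__(self, nodes):
            self.online_nodes = set(nodes)
            self.groups = []

        def bring_online(self, group, node):
            assert node in self.online_nodes
            group.owner = node             # shared-nothing: a single owner per group

        def node_failed(self, failed_node):
            """Fail over every group owned by the failed node to a surviving node."""
            self.online_nodes.discard(failed_node)
            for group in self.groups:
                if group.owner == failed_node and self.online_nodes:
                    group.owner = next(iter(self.online_nodes))

    cluster = Cluster(["Node1", "Node2"])
    evs = ResourceGroup("EVS1", ["Shared Disk E:", "IP 10.0.0.5", "Exchange Information Store"])
    cluster.groups.append(evs)
    cluster.bring_online(evs, "Node1")
    cluster.node_failed("Node1")
    print(evs.owner)                       # -> Node2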
Clients connect to application services through virtual servers hosted on the cluster. Virtual
servers may be moved across cluster nodes because of failures or maintenance, and clients are not
aware of which physical node is hosting which virtual server. All of an application's resources,
including the virtual server resources, are grouped into a resource group, which becomes the unit
of failover within the cluster. The process of moving an application from one cluster node to
another is termed failover. Failover is largely transparent, and in practice the disruption
experienced by clients is negligible. The clustered servers are interconnected via a private LAN,
and a heartbeat on this network is used to determine whether each server is functional. In this
way the cluster provides high availability by failing over the application virtual servers to
functional physical servers, minimizing application downtime. Server clusters can exist in both
active/active and active/passive configurations. In an active/active configuration, all the server
nodes in the cluster are active and utilized for running virtual application servers. In an
active/passive configuration, one or more nodes in the cluster are left idle to take over the
applications (virtual servers) of the active nodes in case of failures. Depending upon application
requirements and best practices, passive nodes can also take on additional administrative or
backup duties in the cluster, thus minimizing idle time and gaining more effective utilization of
all nodes. Windows Server 2003 Cluster supports up to eight nodes, and Exchange Server 2003
supports a maximum of seven Exchange Virtual Servers (EVS) on a Windows Server 2003 cluster.
Figure 2. Illustration of active/active and active/passive "N+I" configurations: an active/active cluster with an Exchange virtual server on each of its two nodes, and a "2+1" active/passive cluster with two active Exchange virtual servers and one passive node, each cluster attached to shared storage containing the Exchange (EXCH) and quorum disks.
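As a rough way to reason about the "N+I" configurations illustrated in Figure 2, the short sketch below checks whether a cluster with a given number of active and passive nodes can absorb node failures without stacking two Exchange virtual servers on one node. It assumes the worst case in which failures strike active nodes first and that each node hosts at most one EVS; real capacity planning must also consider CPU, memory, and storage headroom on the surviving nodes.

    # Simplified "N+I" sizing check: each active node hosts one Exchange virtual
    # server (EVS), and each passive node can take over exactly one EVS.
    # Hypothetical illustration only, not a sizing rule from this paper.

    def can_absorb_failures(active_nodes: int, passive_nodes: int, failures: int) -> bool:
        """True if 'failures' failed nodes (active nodes failing first) can fail
        over without doubling up EVSs on any surviving node."""
        evs_to_move = min(failures, active_nodes)                 # only active nodes host an EVS
        spare_nodes = passive_nodes - max(0, failures - active_nodes)
        return evs_to_move <= spare_nodes

    # 3+1 active/passive cluster (as tested in this paper): tolerates one failure.
    print(can_absorb_failures(active_nodes=3, passive_nodes=1, failures=1))  # True
    print(can_absorb_failures(active_nodes=3, passive_nodes=1, failures=2))  # False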
Dell’s cluster model and PowerEdge Cluster FE400
As shown in Figure 2, business-critical applications such as corporate e-mail, databases, file and
print services, and ERP should be hosted on systems that are designed for high availability. High
availability clustering helps ensure that the physical server hosting an application is not a single
point of failure by providing the ability for that application to be restarted on one of multiple
servers. As discussed in the previous section, if the server running the application fails, another
designated server takes over the responsibility of running that application. Dell's high
availability (HA) cluster solutions build upon the MSCS model and are designed and tested to
make sure that no single point of failure (SPOF) exists. The components covered include the
standalone server, the storage system (including fabric switches and host adapters), the operating
system (OS), the cluster service, and the applications. Figure 3 below displays the architecture of
a typical Fibre Channel based HA cluster solution from Dell. As the figure shows, Dell's HA
cluster solutions are highly modular and scalable. Each cluster component (server, storage
subsystem, OS) acts as a module that integrates seamlessly with the other modules to form the
end-to-end cluster solution. Each component is also an industry-standard commodity component,
helping to ensure maximum interoperability with the other cluster components. With Dell HA
clusters, server scale-out is achieved by adding more server nodes seamlessly into the existing
Storage Area Network (SAN) of the cluster. The storage subsystem can also be scaled out by
adding storage systems to the existing SAN. Additionally, hardware components in the cluster
can be scaled up to meet increasing demands; for example, a server can be scaled up with more
processing power (CPUs) or memory, and the storage system can be scaled up with more storage
capacity.
Figure 3. Dell Fibre Channel cluster model: two Dell PowerEdge 1750 servers running Windows 2003 EE (one hosting the MS Exchange application, one passive), each with dual Emulex LP982 host bus adapters connected through redundant Brocade 3200 Fibre Channel switches to a Dell/EMC CX400 storage array with dual storage processors (SP A and SP B) and disk array enclosures (DAEs).
Dell servers are designed with a broad array of redundant features to maximize a server's
availability. HA clustering extends this availability from the system level to the application level.
The shared storage required for Fibre Channel clustering can be attached either via Fibre Channel
switches (SAN-attached configuration) or directly to the cluster servers (direct-attach
configuration). Dual redundant switches are required in SAN-attached configurations to avoid a
single point of failure. Each clustered server has dual host bus adapters connecting it to the
storage, either directly or via fabric switches. This provides each clustered server with multiple
redundant paths to the storage, which are used for path failover and load balancing. The
redundant paths are managed by path management software (e.g., EMC PowerPath) on the server
and provide an additional level of storage availability for the servers. Each of Dell's high
availability Fibre Channel cluster solutions encompasses a particular set of the cluster
components described above, based on their generation and component revisions. PowerEdge
Cluster FE400 is a cluster solution from Dell that encompasses the following matrix of
components:
Table 1. PowerEdge Cluster FE400 component matrix

Type: Fibre Channel Clusters
Model: PowerEdge Cluster FE400
Platforms: PowerEdge 1550, 1650, 1750, 2500, 2550, 2600, 2650, 4400, 4600, 6400, 6450, 6600, 6650, 8450
Host Bus Adapter: Emulex LP9802, LP982, or QLogic QLA2340 (dual redundant HBAs required per node)
Storage Enclosure: Dell/EMC CX600, CX400, or CX200
Fibre Channel Switch: Brocade SilkWorm 3200, Brocade SilkWorm 3800, or McData Sphereon 4500 (flexport); redundant switches required for SAN-attached configurations, none for direct attach
Cluster Interconnect: any Ethernet adapter supported by the platform
Operating System: Microsoft Windows 2003 Enterprise Edition (up to 8 nodes) or Windows 2000 Advanced Server with Service Pack 4 or higher (2 nodes)
Section 4
The Microsoft Exchange Server Benchmark
Simulation and MMB3 Environment
In previous sections, an overview of Microsoft Cluster Service and the Dell HA Cluster model
was presented. In this section the MMB3 benchmark workload and the LoadSim 2003
environment are explored. The MAPI Messaging Benchmark 3 (MMB3) was used as the
workload to characterize scalability of the FE400 cluster. The LoadSim 2003 application, which
uses the MMB3 workload, is developed and maintained by Microsoft, which also audits and
verifies the validity of performance submissions from competing companies. MMB3 is the only
standard benchmark for measuring the performance and scalability of servers running Exchange
Server 2003.
The MMB3 benchmark is modeled after a typical corporate e-mail usage profile. The higher the
MMB3 score, the more theoretical users the server can support; bear in mind, however, that the
MMB3 score does not correlate directly with the number of real-world users the server can
support. As a benchmark, MMB3 must be consistent, repeatable, and able to scale so that it
produces a standard metric across different servers and configurations. Given that each
real-world Exchange Server environment may have different requirements, workload sizes, and
dynamic peaks and valleys in messaging traffic, the MMB3 score serves only as a relative means
of platform comparison across different configurations of servers and storage.
The benchmark is inherently sensitive to more than just the back-end server(s) when obtaining
the maximum MMB3 score. Other major factors that affect Exchange Server performance include
the domain controller hosting Active Directory, the back-end storage housing the mail stores and
transaction logs for each storage group, and the network interconnects that tie the entire system
together. The MMB3 benchmark requires two storage groups, each containing two mailbox
stores. The public folder store must be placed on one of the logical disks housing a mailbox store.
Logical, physical, and application-level views of a typical MMB3 setup are depicted below in
Figures 4-6.
Figures 4-6. Logical, physical, and application-level views of a typical MMB3 setup.
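The MMB3 storage group requirements just described can be captured in a small configuration sketch. The drive letters and store names below are hypothetical placeholders, not the layout used in these tests (Table 3 in Appendix A lists the actual LUN configuration).

    # Hypothetical per-EVS layout satisfying the MMB3 requirement of two storage
    # groups with two mailbox stores each; the public folder store sits on one of
    # the logical disks that already houses a mailbox store.

    mmb3_layout = {
        "StorageGroup1": {
            "transaction_log_lun": "F:",                       # dedicated log LUN
            "mailbox_stores": {"MailStore1": "G:", "MailStore2": "G:"},
        },
        "StorageGroup2": {
            "transaction_log_lun": "H:",
            "mailbox_stores": {"MailStore3": "I:", "MailStore4": "I:"},
            "public_folder_store": "I:",                       # shares a mail-store LUN
        },
    }

    def validate(layout):
        groups = list(layout.values())
        assert len(groups) == 2, "MMB3 requires two storage groups"
        assert all(len(g["mailbox_stores"]) == 2 for g in groups), \
            "each storage group needs two mailbox stores"
        mail_luns = {lun for g in groups for lun in g["mailbox_stores"].values()}
        public = [g["public_folder_store"] for g in groups if "public_folder_store" in g]
        assert len(public) == 1 and public[0] in mail_luns, \
            "public store must be on a LUN that houses a mailbox store"

    validate(mmb3_layout)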
Section 5
Exchange Server 2003 Virtual Server:
Scaling and Performance
As discussed in earlier sections, one benefit of Exchange 2003 Server clusters running on Dell
hardware is the ability to easily add compute nodes as required to meet business needs. This
section covers how a 3+1 active/passive cluster is impacted as additional cluster nodes are
brought online to meet scalable computing requirements. Before proceeding, however, it is
important to understand that linear scaling, or incrementally increasing the workload while
simultaneously maintaining equivalent system latencies, is difficult to achieve. There are two key
reasons why linear scaling cannot be readily achieved in a clustered environment: shared storage
system latencies and intra-site messaging latencies.
In clustered messaging environments, which are typically dominated by highly randomized I/O
patterns, the building blocks of a shared SAN-based storage system - the storage processors (SPs),
SP cache memory, and the disk resources - are all subject to some latency inherent in random
read and write I/O. As additional cluster server nodes connect to the shared storage system, and
as those server nodes in turn introduce more work, the time required to seek out a specific data
block increases.
Exchange administrators familiar with the Single Instance Ratio concept will recognize the
latency implications intra-site messaging can have on message delivery times and message
queues. Single Instance Ratio, although often touted as one way to reduce the storage
requirements for a message destined to more than one user on the same server, just as
importantly reduces the amount of network traffic beyond a single Exchange server when large
distribution lists are present. Intra-site messaging latencies arise when messages travel from one
user to another whose mailbox resides on another server in the Exchange organization. For
example, target user names and properties often require resolution from an Active Directory
Domain controller before the message can be delivered. As an additional step, the resolution
process can introduce delays. In an Exchange Organization hosting thousands of mailboxes
spread across numerous physical servers that may or may not be in the same Active Directory
Site, some latency increases are to be expected. While there are numerous Active Directory and
Exchange best practice deployment methodologies available to help reduce and minimize
messaging communication latencies, those are beyond the scope of this paper.
With this overview of storage and messaging system latency factors in place, the focus now turns
to establishing a baseline and exploring the performance characteristics of some key subsystems
that are crucial for achieving a scalable clustered messaging system.
Establishing a Baseline
Before examining overall Exchange 2003 cluster scaling and performance data, it is important to
establish and explore a baseline system and understand how it is impacted by a random
messaging workload.
For these tests, the Exchange Information Store databases and transaction logs were stored on
RAID 0 LUNs.¹ While RAID 0 is not a recommended RAID type for mission-critical systems in a
production environment, its use in a lab testing environment helps minimize I/O bottlenecks,
thereby allowing other subsystems such as memory, network, and processor to reach higher
utilization levels.
Pushing server and storage building blocks to high utilization levels with tools like LoadSim 2003
gives performance engineers better insight into how individual building blocks impact and
contribute to overall system- and application-level performance and scaling. Engineers and
system administrators can then leverage this insight and apply those principles to the design of
scalable, reliable, high-performing production messaging systems.
Figure 7. EVS0 Baseline Performance Data at 5,000 MMB3s: bar chart of % Processor Time, SMTP Local Queue Length, Store Send Queue Size, Pages/sec, Network Mbytes/sec, DB Disk MBytes/sec, and DB Avg. Disk Queue Length.
Figure 7 shows a few key system- and application-level performance metrics for a single active
Exchange 2003 cluster node running 5,000 MMB3s. The first server in the cluster, EVS0
(Exchange Virtual Server 0), is under a moderate workload, with just over 50% processor
utilization and slightly over 9 MB/sec of random Information Store database throughput. Under
this load, the Information Store Send Queue Size and the SMTP Local Queue Length are less than
0.45% of the total simulated workload of 5,000 MMB3s.
¹ See Table 3 in Appendix A for configuration details.
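A quick arithmetic check of the queue bound quoted above (only the 0.45% figure and the 5,000-MMB3 workload come from the text):

    # The paper states the Store Send Queue Size and SMTP Local Queue Length stay
    # below 0.45% of the simulated 5,000-MMB3 workload; that bound works out to:
    workload_mmb3 = 5_000
    queue_bound = 0.45 / 100 * workload_mmb3
    print(queue_bound)   # 22.5 -- i.e. both queues averaged fewer than ~23 items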
Avg. Disk Queue Length is one latency-related indicator; this performance metric shows how
many I/Os are queued for a particular LUN. High disk queue lengths are one indication that the
disk subsystem cannot sustain the current messaging workload and that the service times
required to retire pending I/Os are high. Memory paging occurs when data must be read from or
written to disk; a high level of Pages/sec indicates an insufficient amount of system memory or a
heavily loaded disk subsystem.²
Thus far, brief overviews of some baseline system and application level performance metrics
have been presented. In the next section, these metrics will take on more relevance as the
baseline data are narrowed in focus to a few key areas and compared and contrasted as the
cluster scales out.
Scaling Out the Cluster
Two Active Exchange 2003 Virtual Servers
As additional active cluster nodes are brought online, processor utilization grows to
accommodate the additional overhead associated with processing messages bound for different
hosts within the cluster.
Note how the average processor time increases as the workload is incremented in steps of 5,000
MMB3s. The averages shown in Figure 8 are derived from each of the active Exchange cluster
nodes. At 10,000 MMB3s, there are two active Exchange virtual servers, with another cluster node
in passive (standby) mode. Under the MMB3 benchmarking guidelines, one server in the cluster
is required to host the Public Folder root. Although there is minimal public folder activity when
running LoadSim 2003 with the MMB3 benchmark profile, the Exchange server that hosts the
public folder root incurs additional overhead. This overhead surfaces primarily in the store.exe
process; as a result, average processor utilization for the two-node virtual server configuration is
slightly higher than it would be for two Exchange virtual servers not hosting a public folder
instance. To scale beyond 5,000 MMB3s, additional cluster virtual servers are brought online to
handle the increasing workload, up to 15,000 MMB3s, for a total of three active Exchange virtual
servers and one passive (standby) node.
² For more information on memory performance in an Exchange environment, see "The Effect of L3 Cache Size on MMB2 Workloads," by Scott Stanford, Dell Power Solutions, Feb. 2003. http://www1.us.dell.com/content/topics/global.aspx/power/en/ps1q03_stanford?c=us&cs=555&l=en&s=biz.
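The scale-out pattern described above, one additional active virtual server for each 5,000-MMB3 increment, can be expressed as a small sizing sketch. Note that 5,000 MMB3s per EVS is simply the load level used in these tests, not a general sizing rule.

    import math

    # Rough sizing sketch based on the test pattern in this paper: each active
    # Exchange virtual server (EVS) carries about 5,000 MMB3s, and Exchange 2003
    # allows at most seven active EVSs on a Windows 2003 cluster.
    MMB3_PER_EVS = 5_000     # load per active EVS as run in these tests
    MAX_ACTIVE_EVS = 7       # Exchange Server 2003 limit per cluster

    def active_evs_needed(target_mmb3: int) -> int:
        evs = math.ceil(target_mmb3 / MMB3_PER_EVS)
        if evs > MAX_ACTIVE_EVS:
            raise ValueError("workload exceeds a single cluster as configured here")
        return evs

    print(active_evs_needed(15_000))   # 3 active EVSs, as in the 3+1 tests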
Figure 8. EVS0-EVS1 Performance Data at 10,000 MMB3s: bar chart of % Processor Time, SMTP Local Queue Length, Store Send Queue Size, Pages/sec, Network Mbytes/sec, DB Disk MBytes/sec, and DB Avg. Disk Queue Length for the two active virtual servers.
As the workload tripled, average cluster host CPU utilization increased by less than 20 percentage
points as additional Exchange virtual servers were brought online.
Table 2. Processor-related Perfmon counters as the cluster scales out

                            1 x Active node     2 x Active nodes     3 x Active nodes
                            @ 5,000 MMB3s       @ 10,000 MMB3s       @ 15,000 MMB3s
% Processor Time                 51.621              62.977               70.292
Interrupts/sec                 3525.844            3767.684             3773.235
Context Switches/sec          10228.441           11625.880            12077.700
Processor Queue Length            1.826               3.559                5.020
Table 2 shows additional processor-related Perfmon counters that can be used to further examine
the gradual increase in overall average cluster processor utilization. Interrupts/sec tracks how
many times per second the processors are interrupted by hardware devices for servicing. Context
Switches/sec is the rate at which processors switch to and from waiting and ready process-level
threads. Similar to the Average Disk Queue Length metric, Processor Queue Length shows how
many ready threads are waiting in the processor queue. This metric is a single system-level total;
dividing it by the number of processors in the server gives the queue depth per processor.³
³ Perfmon counter Help.
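Using the Processor Queue Length values from Table 2, the per-processor queue depth mentioned above can be worked out directly. The test servers had two physical Xeon processors with Hyper-Threading enabled (four logical processors), and whether to divide by the physical or the logical count is a judgment call, so the sketch below shows both.

    # Worked example using the Processor Queue Length values from Table 2.
    # The test servers had 2 physical Xeons with Hyper-Threading (4 logical CPUs).
    queue_length = {"5,000 MMB3s": 1.826, "10,000 MMB3s": 3.559, "15,000 MMB3s": 5.02}

    for load, total_queue in queue_length.items():
        per_physical = total_queue / 2
        per_logical = total_queue / 4
        print(f"{load}: {per_physical:.2f} per physical CPU, "
              f"{per_logical:.2f} per logical CPU")
    # At 15,000 MMB3s the queue works out to roughly 2.5 ready threads per
    # physical processor (about 1.3 per logical processor).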
Figure 9. All Active Cluster Node Performance Data at 15,000 MMB3s: bar chart comparing % Processor Time, SMTP Local Queue Length, Store Send Queue Size, Pages/sec, Network Mbytes/sec, DB Disk MBytes/sec, and DB Avg. Disk Queue Length across the 5,000, 10,000, and 15,000 MMB3 workloads.
Looking back at the EVS0 baseline performance data and comparing it to the other workloads,
one can see an increase in utilization for several cluster building-block components. At 5,000
MMB3s, EVS0's Information Store database handled approximately nine megabytes per second of
random I/O. As shown in Figure 9, as the workload scaled to 15,000 MMB3s, the total aggregate
Information Store database workload for the cluster increased to over 29 megabytes per second.
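As a quick check on the aggregate figure above: scaling the single-EVS baseline linearly would predict roughly three times 9 MB/sec, and the observed total of just over 29 MB/sec sits slightly above that (a minimal arithmetic sketch using only the numbers quoted in the text):

    # Numbers quoted in the text: roughly 9 MB/sec of database I/O at 5,000 MMB3s
    # on one EVS, and just over 29 MB/sec aggregate at 15,000 MMB3s on three EVSs.
    baseline_mb_per_sec = 9.0
    aggregate_mb_per_sec = 29.0

    linear_prediction = 3 * baseline_mb_per_sec
    print(linear_prediction)                            # 27 MB/sec if scaling were perfectly linear
    print(aggregate_mb_per_sec / baseline_mb_per_sec)   # ~3.2x the single-EVS baseline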
Figure 10. Interrelationship Between Response Time and Message Throughput: 95th percentile response time (ms), Messages Delivered/min, and Messages Sent/min, averaged across the individual Exchange virtual servers, at 5,000, 10,000, and 15,000 MMB3s.
Even as the workload increased, the cluster maintained a relatively consistent level of messaging
throughput, as shown in Figure 10. The Messages Delivered/min and Messages Sent/min lines in
the graph show that average messaging throughput remained constant and was able to scale well
despite the increase in disk I/O subsystem latencies and Exchange server utilization. Messages
Delivered/min measures the rate at which messages are delivered to all recipients, and Messages
Sent/min is the rate at which messages are sent to the Store process. Figure 10 shows the average
across all individual Exchange virtual servers in the cluster.
Figure 11. Interrelationship Between Response Time and Message Throughput for All Cluster Nodes: 95th percentile response time (ms) with combined Messages Delivered/min and Messages Sent/min for all active virtual servers, at 5,000, 10,000, and 15,000 MMB3s.
Figure 11, on the other hand, shows the combined average messaging throughput for the three
active Exchange 2003 virtual server cluster nodes compared with the response time. As can be
seen, the Messages Delivered/min and Messages Sent/min metrics indicate that the cluster can
sustain a high level of messaging throughput even as overall server response time increases.
This section examined several factors that can impact linear scaling in a clustered Exchange
environment: I/O subsystem latency and intra-site messaging traffic. A baseline workload was
then explored on a single active Exchange 2003 cluster server. As the workload increased, I/O and
messaging subsystem behavior was discussed and compared. Finally, it was established that the
FE400 cluster was able to sustain consistent levels of messaging throughput even as overall server
and storage subsystem utilization levels increased.
Section 6
Conclusion
Dell PowerEdge FE400 clusters offer availability, scalability, reliability, and enhanced
manageability to an Exchange organization. Deploying Exchange 2003 on a Windows 2003 server
cluster not only helps achieve scalability but also brings the core benefits of a high availability
cluster: availability and improved management. This paper has examined the scalability of
Exchange 2003 on a Dell PowerEdge cluster and observed that Exchange 2003 scales well and is
able to provide consistent levels of messaging throughput on an FE400 cluster. Dell PowerEdge
clusters based on Windows 2003 Server clusters, such as the FE400, offer a standard messaging
platform for Exchange that is both scalable and reliable. The components of a Dell PowerEdge
cluster are based on industry standards and offer a truly end-to-end reliable cluster solution. The
ability to deploy up to seven active Exchange 2003 virtual server instances makes the cluster
scalable.
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN
TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS
PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.
Dell, PowerEdge and the Dell logo are trademarks of Dell Inc. Windows is a trademark of
Microsoft Corp. © 2004 Dell Inc. U.S. only.
Appendix A
The Test Environment
Table 3 shows the server and HA cluster configuration used for all tests.
Table 3: Server and HA Cluster Configurations for the MMB3 Tests

Server: Dell PowerEdge 1750
Processors: 2 x Xeon® 3.2 GHz processors with 2 MB L3 cache, Hyper-Threading enabled
Memory: 4 x 1 GB 266 MHz DDR SDRAM memory modules
Network: 2 x embedded Broadcom NetXtreme PCI-X Gigabit⁴ network adapters; Tx/Rx TCP/IP checksum enabled; MTU = 1500 (default)
Operating System: Windows Server 2003 Enterprise Edition with KB831464
Messaging Software: Exchange 2003 Server Enterprise Edition, Version 6.5 (Build 7226.6: Service Pack 1)
OS RAID LUN: RAID 1 mirror set of 2 x 15K U320 disk drives in the internal 1x3 bay, supporting one OS-level partition for the operating system, Exchange executables, Active Directory, and the paging file
Transaction Log RAID LUNs: 1 x RAID 0 LUN per storage group; each transaction log LUN is comprised of 8 x 15K FC2 drives on the CX600 SAN
Message Database LUNs: 2 x RAID 0 LUNs per Exchange virtual server; each database LUN is comprised of a 30-disk MetaLUN on the CX600 SAN

⁴ This term indicates compliance with IEEE standard 802.3ab for Gigabit Ethernet, and does not connote actual operating speed of 1 Gb/sec. For high-speed transmission, connection to a Gigabit Ethernet server and network infrastructure is required.