Quick Start Guide for Server Clusters

INSTALLATION OF FIX EDGE SERVER ON SERVER CLUSTERS
REQUIREMENTS AND GUIDELINES
REQUIREMENTS AND GUIDELINES FOR CONFIGURING SERVER CLUSTERS
Software requirements and guidelines
Hardware requirements and guidelines
Network requirements and guidelines
Storage requirements and guidelines
DOMAIN CONTROLLER INSTALLATION
Server Configuration Overview
Server Disk Configuration
Server Installation
CREATING A CLUSTER
Preparing to create a cluster
Installing the Windows Server 2003 operating system
Setting up networks
Configuring Remote Desktop
Setting up a Cluster service user account
Setting up disks
Creating a new server cluster
Validating the cluster installation
Configuring subsequent nodes
Configuring the server cluster after installation
Quorum disk configuration
SCSI DRIVE INSTALLATIONS
CONFIGURING SCSI DEVICES
TESTING THE SERVER CLUSTER
Testing whether group resources can fail over
FIX EDGE INSTALLATION
INSTALLATION OF FIX EDGE AS SERVICE
INSTALLATION OF FIX EDGE AS CONSOLE APPLICATION
RESOURCES
Requirements and guidelines
This guide provides system requirements, installation instructions, and step-by-step
procedures that you can use to deploy server clusters running FIX Edge on the
Microsoft® Windows Server™ 2003, Enterprise Edition operating system.
The server cluster technology in Windows Server 2003, Enterprise Edition helps ensure that you
have access to important server-based resources. You can use server cluster technology to
create several cluster nodes that appear to users as one server. If one of the nodes in the
cluster fails, another node begins to provide service. This is a process known as "failover." In
this way, server clusters can increase the availability of critical applications and resources.
This guide also explains how to install and configure Microsoft Windows Server 2003,
Enterprise Edition as a domain controller.
Full information about FIX Edge installation can be found in the FIX Edge Quick Start Guide.
Requirements and Guidelines for Configuring Server Clusters
This section lists requirements and guidelines that will help you set up a server cluster with
FIX Edge effectively.
Software requirements and guidelines

You must have Windows Server 2003, Enterprise Edition installed on all computers in the
cluster. We strongly recommend that you also install the latest service pack for Windows
Server 2003. If you install a service pack, the same service pack must be installed on all
computers in the cluster.

All nodes in the cluster must be of the same architecture. You cannot mix x86-based,
Itanium-based, and x64-based computers within the same cluster.

Your system must be using a name-resolution service, such as Domain Name System
(DNS), DNS dynamic update protocol, Windows Internet Name Service (WINS), or Hosts
file. Hosts file is supported as a local, static file method of mapping DNS domain names for
host computers to their Internet Protocol (IP) addresses. The Hosts file is provided in the
systemroot\System32\Drivers\Etc folder.

All nodes in the cluster must be in the same domain. As a best practice, all nodes should
have the same domain role (either member server or domain controller), and the
recommended role is member server. Exceptions that can be made to these domain role
guidelines are described later in this document.

When you first create a cluster or add nodes to it, you must be logged on to the domain
with an account that has administrator rights and permissions on all nodes in that cluster.
The account does not need to be a Domain Admin level account, but can be a Domain User
account with Local Admin rights on each node.
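If you rely on a local Hosts file for name resolution, its entries are plain-text lines that map an IP address to one or more names. A minimal sketch, using the example IP address and computer name that appear later in this guide:

```
# systemroot\System32\Drivers\Etc\hosts
# One mapping per line: IP address, fully qualified name, short name.
10.0.0.2    hq-con-dc-01.contoso.com    hq-con-dc-01
```

Lines beginning with # are comments; changes take effect without a restart.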
Hardware requirements and guidelines

An Intel processor–based server running Windows Server 2003 must have at least 128
megabytes (MB) of RAM. Microsoft also recommends that the server have several gigabytes
of disk storage. In addition, servers should be equipped with high-speed network interface
cards.

For Windows Server 2003, Enterprise Edition, Microsoft supports only complete server cluster
systems chosen from the Windows Catalog. To determine whether your system and
hardware components are compatible, including your cluster disks, see the Microsoft
Windows Catalog at the Microsoft Web site. For a geographically dispersed cluster, both the
hardware and software configuration must be certified and listed in the Windows Catalog.
For more information, see article 309395, "The Microsoft support policy for server clusters,
the Hardware Compatibility List, and the Windows Server Catalog," in the Microsoft
Knowledge Base.

If you are installing a server cluster on a storage area network (SAN), and you plan to have
multiple devices and clusters sharing the SAN with a cluster, your hardware components
must be compatible. For more information, see article 304415, "Support for Multiple Clusters
Attached to the Same SAN Device," in the Microsoft Knowledge Base.

You must have two mass-storage device controllers in each node in the cluster: one for the
local disk, one for the cluster storage. You can choose between SCSI, iSCSI, or Fibre
Channel for cluster storage on server clusters that are running Windows Server 2003,
Enterprise Edition, or Windows Server 2003, Datacenter Edition. You must have two
controllers because one controller has the local system disk for the operating system
installed, and the other controller has the shared storage installed.

You must have two Peripheral Component Interconnect (PCI) network adapters in each
node in the cluster.

You must have storage cables to attach the cluster storage device to all computers. Refer to
the manufacturer's instructions for configuring storage devices.

Ensure that all hardware is identical in all cluster nodes. This means that each hardware
component must be the same make, model, and firmware version. This makes configuration
easier and eliminates compatibility problems.
Network requirements and guidelines

Your cluster must have a unique NetBIOS name.

A WINS server must be available on your network.

You must use static IP addresses for each network adapter on each node.
Important: Server clusters do not support the use of IP addresses assigned from Dynamic
Host Configuration Protocol (DHCP) servers.

The nodes in the cluster must be able to access a domain controller. The Cluster service
requires that the nodes be able to contact the domain controller to function correctly. The
domain controller must be highly available. In addition, it should be on the same local area
network (LAN) as the nodes in the cluster. To avoid a single point of failure, the domain
must have at least two domain controllers.

Each node must have at least two network adapters. One adapter will be used exclusively
for internal node-to-node communication (the private network). The other adapter will
connect the node to the client public network. It should also connect the cluster nodes to
provide support in case the private network fails. (A network that carries both public and
private communication is called a mixed network.)

If you are using fault-tolerant network cards or teaming network adapters, you must ensure
that you are using the most recent firmware and drivers. Check with your network adapter
manufacturer to verify compatibility with the cluster technology in Windows Server 2003,
Enterprise Edition.
Note: Using teaming network adapters on all cluster networks concurrently is not supported. At
least one of the cluster private networks must not be teamed. However, you can use teaming
network adapters on other cluster networks, such as public networks.
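Because DHCP-assigned addresses are not supported, each adapter must be given a static IP address. On Windows Server 2003 this can also be done from the command line with netsh; the connection names and addresses below are illustrative examples, not values mandated by this guide:

```
rem Assign a static address, mask, and gateway to the public adapter.
netsh interface ip set address name="Public" source=static addr=10.0.0.2 mask=255.0.0.0 gateway=10.0.0.1 gwmetric=1

rem The private (heartbeat) adapter needs no default gateway.
netsh interface ip set address "Private" static 192.168.0.1 255.255.255.0
```

Run `netsh interface ip show config` afterward to confirm the settings.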
Storage requirements and guidelines

An external disk storage unit must be connected to all nodes in the cluster. This will be used
as the cluster storage. You should also use some type of hardware redundant array of
independent disks (RAID).

All cluster storage disks, including the quorum disk, must be physically attached to a shared
bus.

Cluster disks must not be on the same controller as the one that is used by the system
drive.

You should create multiple logical unit numbers (LUNs) at the hardware level in the RAID
configuration instead of using a single logical disk that is then divided into multiple
partitions at the operating system level. We recommend a minimum of two logical clustered
drives. This enables you to have multiple disk resources and also allows you to perform
manual load balancing across the nodes in the cluster.

You should set aside a dedicated LUN on your cluster storage for holding important cluster
configuration information. This information makes up the cluster quorum resource. The
recommended minimum size for the volume is 500 MB. You should not store user data on
any volume on the quorum LUN.

If you are using SCSI, ensure that each device on the shared bus (both SCSI controllers and
hard disks) has a unique SCSI identifier. If the SCSI controllers all have the same default
identifier (the default is typically SCSI ID 7), change one controller to a different SCSI ID,
such as SCSI ID 6. If more than one disk will be on the shared SCSI bus, each disk must
also have a unique SCSI identifier.

Software fault tolerance is not natively supported for disks in the cluster storage. For cluster
disks, you must use the NTFS file system and configure the disks as basic disks with all
partitions formatted as NTFS. They can be either compressed or uncompressed. Cluster
disks cannot be configured as dynamic disks. In addition, features of dynamic disks, such as
spanned volumes (volume sets), cannot be used without additional non-Microsoft software.

All disks on the cluster storage device must be partitioned as master boot record (MBR)
disks, not as GUID partition table (GPT) disks.
Brief
The following is a brief installation scenario. For more information about how to perform a
particular step, follow the corresponding link.
1. Install Windows Server 2003, Enterprise Edition on all nodes of the future cluster (Installing
   the Windows Server 2003 operating system)
2. Set up the networks on the nodes: public and private. (Setting up networks)
   a. The private network should support only the TCP protocol. File sharing should be turned
      off.
3. Install a domain controller if there is no domain controller in the network. (Domain
   Controller installation)
4. Add all nodes to the Active Directory
5. Create the cluster (Creating a Cluster)
6. Attach all nodes to the cluster (Configuring subsequent nodes)
   a. Correct the cluster network parameters (Setting up networks)
7. Attach the storage to any node
8. Prepare the storage (SCSI Drive Installations)
   a. Start the Storage Administrator (CTRL+M at startup)
   b. Configure the storage to work in cluster mode
      i. Objects > Adapter > Initiator ID = 6
      ii. Objects > Adapter > Cluster Mode = Enabled
      iii. Objects > Adapter > Emulation = Mass Storage
   c. Create the RAID array
      i. Configuration > New configuration
      ii. Select all hard drives (Space)
      iii. Mark the end of the RAID array (Enter)
      iv. Start the configuration (Enter)
      v. Select the array (Space)
      vi. Accept the configuration (F10)
      vii. Provide the following settings (RAID=5) and choose "Accept"
      viii. Initialize: (Space), (F10), Yes
9. Install FIX Edge on all nodes (FIX Edge installation)
10. Add the storage and the two services, FIX Edge and Apache, to the cluster resources.
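Step 10 above can also be scripted with the cluster.exe command-line tool that ships with Windows Server 2003. The sketch below is illustrative only: the group name, resource names, service names, and disk resource name are assumptions that must be adjusted to match your actual installation.

```
rem Create Generic Service resources for FIX Edge and Apache in an existing group.
cluster res "FIXEdge" /create /group:"FIX Group" /type:"Generic Service"
cluster res "FIXEdge" /priv ServiceName="FIXEdge"
cluster res "Apache" /create /group:"FIX Group" /type:"Generic Service"
cluster res "Apache" /priv ServiceName="Apache2"

rem Make each service depend on the shared disk so it starts only after storage is online.
cluster res "FIXEdge" /adddep:"Disk F:"
cluster res "Apache" /adddep:"Disk F:"

rem Bring the whole group online.
cluster group "FIX Group" /online
```

The same operations can be performed interactively in Cluster Administrator if you prefer the GUI.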
Domain Controller installation
If you already have a domain controller installed in your network, skip this section and continue
from the Creating a Cluster chapter.
Server Configuration Overview
Figure 1 shows the basic server configuration: computer name HQ-CON-DC-01, IP address
10.0.0.2, running the Active Directory and DNS services, with the system partition at \WINDOWS
and a reserved disk or partition.
Figure 1. The server configuration
Server Disk Configuration
To use a single server, you will need a server with either two disk drives or a single disk drive
with two partitions.
The first disk or partition holds Windows Server 2003 and other files for the common
infrastructure, such as the Windows Installer packages and application source files. The second
disk or partition is reserved for Active Directory log files and procedures required by other
step-by-step guides.
Each disk or partition must hold several gigabytes of information, and each disk or partition
must be formatted for the NT file system (NTFS). The steps for creating and formatting
partitions are contained in this guide.
Server Installation
If you already have Windows Server 2003 installed, go to the Configuring Your Server as a
Domain Controller section. To begin the installation procedure, boot directly from the Windows
Server 2003 CD. Your CD-ROM drive must support bootable CDs.
Note: When you configure partitions and format drives, all data on the server hard drive is
destroyed.
Beginning the Installation
Setup creates the disk partitions on the computer running Windows Server 2003, formats the
drive, and then copies installation files from the CD to the server.
Note: These instructions assume that you are installing Windows Server 2003 on a computer
that is not already running Windows. If you are upgrading from an older version of Windows,
some of the installation steps may differ.
To begin the installation
1. Insert the Windows Server 2003 CD in the CD-ROM drive.
2. Restart the computer. If prompted, press any key to boot from the CD. The Windows
   Server 2003 installation begins.
3. On the Welcome to Setup screen, press Enter.
4. Review and, if acceptable, agree to the license agreement by pressing F8.
   Note: If you had a previous version of Windows Server 2003 installed on this server,
   you might get a message asking if you want to repair the drive. Press Esc to continue
   and not repair the drive.
5. Follow the instructions to delete all existing disk partitions. The exact steps will differ
   based on the number and type of partitions already on the computer. Continue to delete
   partitions until all disk space is labeled as Unpartitioned space.
6. When all disk space is labeled as Unpartitioned space, press C to create a partition in
   the unpartitioned space on the first disk drive (as applicable).
7. If your server has a single disk drive, split the available disk space in half to create two
   equal-sized partitions. Delete the total space default value, type the value of half
   your total disk space at the Create partition of size (in MB) prompt, and then press
   Enter. (If your server has two disk drives, type the total size of the first drive at this
   prompt.)
8. After the New <Raw> partition is created, press Enter.
9. Select Format the partition using the NTFS file system <Quick>, and then press
   Enter.
Windows Server 2003 Setup formats the partition and then copies the files from the Windows
Server 2003 Server CD to the hard drive. The computer restarts and the Windows Server 2003
Installation Program continues.
Completing the Installation
To continue the installation with the Windows Server 2003 Setup Wizard
1. The Windows Server 2003 Setup Wizard detects and installs devices. This can
   take several minutes, and during the process your screen may flicker.
2. In the Regional and Language Options dialog box, make changes required for your
   locale (typically, none are required for the United States), and then click Next.
3. In the Personalize Your Software dialog box, type Mike Nash in the Name box and
   type Reskit in the Organization box. Click Next.
4. Type the Product Key (found on the back of your Windows Server 2003 CD case) in
   the text boxes provided, and then click Next.
5. In the Licensing Modes dialog box, select the appropriate licensing mode for your
   organization, and then click Next.
6. In the Computer Name and Administrator Password dialog box, type the new
   computer name HQ-CON-DC-01 in the computer name box, and then click Next.
   Best Practice: To facilitate the steps in these guides, the Administrator password is
   left blank. This is not an acceptable security practice; when installing a server for your
   production network, always set a password. Windows Server 2003 requires complex
   passwords by default.
7. When prompted by Windows Setup, click Yes to confirm a blank Administrator
   password.
8. In the Date and Time Settings dialog box, correct the current date and time if
   necessary, and then click Next.
9. In the Networking Settings dialog box, make sure Typical Settings is selected,
   and then click Next.
10. In the Workgroups or Computer Domain dialog box (No is selected by default),
    click Next.
    Note: A domain name could be specified at this point, but this guide uses the
    Configure Your Server Wizard to create the domain name at a later time.
    The Windows Server 2003 installation continues and configures the necessary
    components. This may take a few minutes.
11. The server restarts and the operating system loads from the hard drive.
Preparing a Secondary Partition or Secondary Disk Drive
The unpartitioned space from the installation of Windows Server 2003 requires formatting
before it can be accessed by the operating system. Management of disks and partitions occurs
through the Computer Management snap-in for Microsoft Management Console. The
following steps assume a second disk drive is in use; modify procedures accordingly for a
second partition.
To prepare a secondary partition or disk drive
Warning: Formatting a partition destroys all data on that partition. Make sure that you select
the correct partition.
1. Press Ctrl+Alt+Del and log on to the server as administrator. Leave the password
   blank.
2. Click the Start button, point to Administrative Tools, and then click
   Computer Management.
3. To define and format the unpartitioned space, click Disk Management.
4. Right-click Unallocated on Disk 1.
5. To define a partition, click New Partition, and then click Next to continue.
6. Select Primary Partition (default), and then click Next to continue.
7. Click Next, leaving the Partition size in MB set to the default.
8. For Assign the following drive letter, select L, and then click Next to continue.
9. Under Format this partition with the following settings, click Perform a quick
   format. Click Next, and then Finish to complete the configuration of the secondary
   disk drive. Once you have finished, your disk allocation should look similar to Figure 2.
   Figure 2. Disk Management
10. Close the Computer Management console.
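The same partitioning can be scripted with diskpart.exe, which is included with Windows Server 2003, followed by a quick NTFS format. This is a sketch under the assumptions used above: the secondary drive is Disk 1 and the drive letter is L; the script file name is hypothetical.

```
rem Contents of prepare-disk.txt (a diskpart script):
rem   select disk 1
rem   create partition primary
rem   assign letter=L

rem Run the script, then quick-format the new partition as NTFS.
diskpart /s prepare-disk.txt
format L: /FS:NTFS /Q /Y
```

As with the GUI procedure, formatting destroys all data on the partition, so verify the disk number before running the script.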
Configuring Your Server as a Domain Controller
Domain Name Service (DNS) and Active Directory can be installed manually with DCPromo (the
command-line tool that installs DNS and Active Directory) or by using the Windows Server 2003
Manage Your Server Wizard. This section uses the manual tools to complete the installation.
To install DNS and Active Directory using the manual tools
1. Click the Start button, click Run, type DCPROMO, and then click OK.
2. When the Active Directory Installation Wizard appears, click Next to begin the
   installation.
3. After reviewing the Operating System Compatibility information, click Next.
4. Select Domain controller for a new domain (default), and then click Next.
5. Select Domain in a new forest (default), and then click Next.
6. For Full DNS name, type contoso.com, and then click Next. (This represents a fully
   qualified domain name.)
7. Click Next to accept the default Domain NetBIOS name of CONTOSO. (NetBIOS
   names provide for down-level compatibility.)
8. On the Database and Log Folders screen, point the Active Directory Log Folder to
   L:\Windows\NTDS, and then click Next to continue.
9. Leave the default folder location for Shared System Volume, and then click Next.
10. On the DNS Registration Diagnostics screen, click Install and configure the DNS
    server on this computer. Click Next to continue.
11. Select Permissions compatible only with Windows 2000 or Windows Server
    2003 (default), and then click Next.
12. Type a password for Restore Mode Password and Confirm password, and then click
    Next to continue.
    Note: Production environments should employ complex passwords for Directory
    Services Restore Mode.
    Figure 3. Summary of the Active Directory installation options
13. Figure 3 represents a summary of the Active Directory installation options. Click
    Next to start the installation of Active Directory. If prompted, insert the Windows Server
    2003 installation CD.
14. Click OK to acknowledge the warning about having a dynamically assigned IP address for a
    DNS server.
15. If you have more than one network interface, select the 10.0.0.0 network interface
    from the Choose Connection drop-down list, and then click Properties.
16. Under the This connection uses the following items section, click
    Internet Protocol (TCP/IP), and then click Properties.
17. Select Use the following IP address, and then type 10.0.0.2 for the IP address.
    Press the Tab key twice, and then type 10.0.0.1 for the Default gateway. Type
    127.0.0.1 for the Preferred DNS server, and then click OK. Click Close to continue.
18. Click Finish once the Active Directory Installation Wizard is finished.
19. Click Restart Now to reboot the computer.
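The wizard steps in this section can also be driven unattended by passing an answer file to DCPromo (dcpromo /answer:<file>). The sketch below mirrors the choices made above; the [DCInstall] key names follow the Windows Server 2003 answer-file format as best recalled here, and the password value is a placeholder you must replace.

```
[DCInstall]
; Create the first domain controller of a new domain in a new forest.
ReplicaOrNewDomain = Domain
NewDomain = Forest
NewDomainDNSName = contoso.com
DomainNetBiosName = CONTOSO
; Place the Active Directory log files on the secondary partition.
LogPath = L:\Windows\NTDS
; Install and configure DNS on this computer.
AutoConfigDNS = Yes
; Directory Services Restore Mode password (placeholder only).
SafeModeAdminPassword = password
RebootOnSuccess = Yes
```

Treat this as a starting point and verify the key names against your Windows Server 2003 deployment documentation before use.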
Creating a Cluster
It is important to plan the details of your hardware and network before you create a cluster.
If you are using a shared storage device, ensure that when you turn on the computer and start
the operating system, only one node has access to the cluster storage. Otherwise, the cluster
disks can become corrupted.
In Windows Server 2003, Enterprise Edition, logical disks that are not on the same shared bus
as the boot partition are not automatically mounted and assigned a drive letter. This helps
prevent a server in a complex SAN environment from mounting drives that might belong to
another server. (This is different from how new disks are mounted in Microsoft Windows® 2000
Server operating systems.) Although the drives are not mounted by default, we still recommend
that you follow the procedures provided in the table later in this section to ensure that the
cluster disks will not become corrupted.
The table in this section can help you determine which nodes and storage devices should be
turned on during each installation step. The steps in the table pertain to a two-node cluster.
However, if you are installing a cluster with more than two nodes, the Node 2 column lists the
required state of all other nodes.
Step                      | Node 1 | Node 2 | Storage | Notes
Set up networks           | On     | On     | Off     | Verify that all storage devices on the shared bus are turned off. Turn on all nodes.
Set up cluster disks      | On     | Off    | On      | Shut down all nodes. Turn on the cluster storage, and then turn on the first node.
Verify disk configuration | Off    | On     | On      | Turn off the first node, and turn on the second node. Repeat for nodes three and four if necessary.
Configure the first node  | On     | Off    | On      | Turn off all nodes; then turn on the first node.
Configure the second node | On     | On     | On      | After the first node is successfully configured, turn on the second node. Repeat for nodes three and four as necessary.
Post-installation         | On     | On     | On      | All nodes should be turned on.
Preparing to create a cluster
Complete the following three steps on each cluster node before you install a cluster on the first
node.

Install Windows Server 2003, Enterprise Edition, on each node of the cluster. We strongly
recommend that you also install the latest service pack for Windows Server 2003. If you
install a service pack, the same service pack must be installed on all computers in the
cluster.

Set up networks.

Set up cluster disks.
All nodes must be members of the same domain. When you create a cluster or join nodes to a
cluster, you specify the domain user account under which the Cluster service runs. This account
is called the Cluster service account (CSA).
Installing the Windows Server 2003 operating system
Install Windows Server 2003, Enterprise Edition, on each node of the cluster. For information
about how to perform this installation, see the documentation you received with the operating
system or the Brief chapter earlier in this guide.
Before configuring the Cluster service, you must be logged on locally with a domain account
that is a member of the local administrators group.
Important: If you attempt to join a node to a cluster that has a blank password for the local
administrator account, the installation will fail. For security reasons, Windows Server 2003
operating systems prohibit blank administrator passwords.
Setting up networks
Each cluster node requires at least two network adapters and must be connected by two or
more independent networks. At least two LAN networks (or virtual LANs) are required to
prevent a single point of failure. A server cluster whose nodes are connected by only one
network is not a supported configuration. The adapters, cables, hubs, and switches for each
network must fail independently. This usually means that the components of any two networks
must be physically independent.
Two networks must be configured to handle either All communications (mixed network) or
Internal cluster communications only (private network). The recommended
configuration for two adapters is to use one adapter for the private (node-to-node only)
communication and the other adapter for mixed communication (node-to-node plus
client-to-cluster communication).
You must have two PCI network adapters in each node. They must be certified in the Microsoft
Windows Catalog and supported by Microsoft Product Support Services. Assign one network
adapter on each node a static IP address, and assign the other network adapter a static IP
address on a separate network on a different subnet for private network communication.
Because communication between cluster nodes is essential for smooth cluster operations, the
networks that you use for cluster communication must be configured optimally and follow all
hardware compatibility-list requirements. For additional information about recommended
configuration settings, see article 258750, "Recommended private heartbeat configuration on a
cluster server," in the Microsoft Knowledge Base.
You should keep all private networks physically separate from other networks. Specifically, do
not use a router, switch, or bridge to join a private cluster network to any other network. Do
not include other network infrastructure or application servers on the private network subnet.
To separate a private network from other networks, use a cross-over cable in a two-node
cluster configuration or a dedicated hub in a cluster configuration of more than two nodes.
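Once the private network is cabled and addressed, you can verify that each node can reach the other's heartbeat address with a simple ping. The addresses below are examples only; this guide does not mandate a specific private subnet.

```
rem From node 1, ping node 2's private (heartbeat) address.
ping -n 4 192.168.0.2

rem From node 2, ping node 1's private address.
ping -n 4 192.168.0.1
```

If the pings fail while the public network works, check the cross-over cable or dedicated hub and the private adapters' static IP settings.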
Additional network considerations
 All cluster nodes must be on the same logical subnet.
 If you are using a virtual LAN (VLAN), the one-way communication latency between any pair of cluster nodes on the VLAN must be less than 500 milliseconds.
 In Windows Server 2003 operating systems, cluster nodes exchange multicast heartbeats rather than unicast heartbeats. A heartbeat is a message that is sent regularly between cluster network drivers on each node. Heartbeat messages are used to detect communication failure between cluster nodes. Using multicast technology enables better node communication because it allows several unicast messages to be replaced with a single multicast message. Clusters that consist of fewer than three nodes will not send multicast heartbeats. For additional information about using multicast technology, see article 307962, "Multicast Support Enabled for the Cluster Heartbeat," in the Microsoft Knowledge Base.
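The 500-millisecond VLAN latency limit above lends itself to a simple automated check. The following sketch is illustrative only: the node names and latency figures are hypothetical, and how you actually measure one-way latency between nodes is outside the scope of this guide.

```python
# Sketch: verify that measured one-way latencies between node pairs stay
# under the 500 ms limit required for cluster nodes on a VLAN.
# The node names and latency figures below are hypothetical examples.
MAX_LATENCY_MS = 500  # requirement from the guideline above

def check_vlan_latencies(latencies_ms):
    """latencies_ms maps a (node_a, node_b) pair to a measured one-way
    latency in milliseconds; returns the pairs that violate the limit."""
    return {pair: ms for pair, ms in latencies_ms.items() if ms >= MAX_LATENCY_MS}

# Hypothetical measurements for a three-node cluster:
measured = {
    ("Node1", "Node2"): 12.5,
    ("Node1", "Node3"): 740.0,   # this link would disqualify the VLAN
    ("Node2", "Node3"): 9.8,
}
violations = check_vlan_latencies(measured)  # only the Node1-Node3 pair
```

Any pair reported in `violations` means the VLAN does not meet the requirement for cluster heartbeat traffic.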
Determine an appropriate name for each network connection. For example, you might want to
name the private network "Private" and the public network "Public." This will help you uniquely
identify a network and correctly assign its role.
Setting the order of the network adapter binding
One of the recommended steps for setting up networks is to ensure the network adapter
binding is set in the correct order. To do this, use the following procedure.
To set the order of the network adapter binding
1. To open Network Connections, click Start, click Control Panel, and then double-click
Network Connections.
2. On the Advanced menu, click Advanced Settings.
3. In Connections, click the connection that you want to modify.
4. Set the order of the network adapter binding as follows:
a. External public network
b. Internal private network (Heartbeat)
c. [Remote Access Connections]
5. Repeat this procedure for all nodes in the cluster.
Configuring the private network adapter
As stated earlier, the recommended configuration for two adapters is to use one adapter for
private communication, and the other adapter for mixed communication. To configure the
private network adapter, use the following procedure.
To configure the private network adapter
1. To open Network Connections, click Start, click Control Panel, and then double-click Network Connections.
2. Right-click the connection for the adapter you want to configure, and then click Properties. Local Area Properties opens and looks similar to the following figure:
3. On the General tab, verify that the Internet Protocol (TCP/IP) check box is selected, and that all other check boxes in the list are clear.
4. If you have network adapters that can transmit at multiple speeds and that allow you to specify the speed and duplex mode, manually configure the Duplex Mode, Link Speed, and Flow Control settings for the adapters to the same values and settings on all nodes. If the network adapters you are using do not support manual settings, contact your adapter manufacturer for specific information about appropriate speed and duplex settings for your network adapters. The amount of information that travels across the heartbeat network is small, but latency is critical for communication. Using the same speed and duplex settings on all nodes helps ensure reliable communication. If the adapters are connected to a switch, ensure that the port settings of the switch match those of the adapters. If you do not know the supported speed of your card and connecting devices, or if you run into compatibility problems, set all devices on that path to 10 megabits per second (Mbps) and Half Duplex.
Teaming network adapters on all cluster networks concurrently is not supported
because of delays that can occur when heartbeat packets are transmitted and received
between cluster nodes. For best results, when you want redundancy for the private
interconnect, you should disable teaming and use the available ports to form a second
private interconnect. This achieves the same end result and provides the nodes with
dual, robust communication paths.
You can use Device Manager to change the network adapter settings. To open Device Manager, click Start, click Control Panel, double-click Administrative Tools, double-click Computer Management, and then click Device Manager. Right-click the network adapter you want to change, and then click Properties. Click Advanced to manually change the speed and duplex mode for the adapter. The page that opens looks similar to the following figure:
5. On the General tab in Network Connections, select Internet Protocol (TCP/IP), and click Properties. Internet Protocol (TCP/IP) Properties opens and looks similar to the following figure:
6. On the General tab, verify you have selected a static IP address that is not on the same subnet or network as any other public network adapter. You should put the private network adapter in one of the following private network ranges:
 10.0.0.0 through 10.255.255.255 (Class A)
 172.16.0.0 through 172.31.255.255 (Class B)
 192.168.0.0 through 192.168.255.255 (Class C)
7. On the General tab, verify that no values are defined in Default Gateway under Use the following IP address, and no values are defined under Use the following DNS server addresses. After you have done so, click Advanced.
8. On the DNS tab, verify that no values are defined on the page and that the check boxes for Register this connection's addresses in DNS and Use this connection's DNS suffix in DNS registration are clear.
9. On the WINS tab, verify that no values are defined on the page, and then click Disable NetBIOS over TCP/IP. Advanced TCP/IP Settings opens and looks similar to the following figure:
10. After you have verified the information, click OK. You might receive the message "This connection has an empty primary WINS address. Do you want to continue?" To continue, click Yes.
11. Repeat this procedure for all additional nodes in the cluster. For each private network adapter, use a different static IP address.
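The three private ranges listed in step 6 correspond to the address blocks reserved for private networks by RFC 1918. As an illustrative sketch (not part of the original procedure), a candidate heartbeat address can be validated against those blocks with Python's standard ipaddress module:

```python
import ipaddress

# The three ranges from step 6, expressed as CIDR blocks (RFC 1918).
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # 10.0.0.0 - 10.255.255.255
    ipaddress.ip_network("172.16.0.0/12"),   # 172.16.0.0 - 172.31.255.255
    ipaddress.ip_network("192.168.0.0/16"),  # 192.168.0.0 - 192.168.255.255
]

def is_valid_heartbeat_address(addr: str) -> bool:
    """True if addr falls inside one of the recommended private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in PRIVATE_BLOCKS)

print(is_valid_heartbeat_address("10.1.1.1"))    # True
print(is_valid_heartbeat_address("172.32.0.1"))  # False: outside 172.16/12
```

Note that 172.32.0.1 is rejected: the Class B private block ends at 172.31.255.255, a common point of confusion.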
Configuring the public network adapter
If DHCP is used to obtain IP addresses, it might not be possible to access cluster nodes if the
DHCP server is inaccessible. For increased availability, static, valid IP addresses are required for
all interfaces on a server cluster. If you plan to put multiple network adapters in each logical
subnet, keep in mind that the Cluster service will recognize only one network interface per
subnet.
• Verifying connectivity and name resolution. To verify that the private and public networks are communicating properly, ping all IP addresses from each node. Pinging an address sends it a small network message and verifies that a reply comes back. You should be able to ping all IP addresses, both locally and on the remote nodes. To verify name resolution, ping each node from a client using the node's computer name instead of its IP address. It should return only the IP address for the public network. You might also want to try using the ping -a command to perform a reverse name resolution on the IP addresses.
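The full connectivity check is a mesh: every node pings every private and public address of every node. For clusters with more than a couple of nodes it can help to enumerate the required pings up front. This is a hypothetical sketch; the node names and addresses are made up for illustration.

```python
# Sketch: enumerate every ping that the connectivity check calls for --
# each node pings all private and public addresses of all nodes
# (including its own). Node names and addresses are hypothetical.
nodes = {
    "Node1": {"private": "10.1.1.1", "public": "192.168.0.11"},
    "Node2": {"private": "10.1.1.2", "public": "192.168.0.12"},
}

def ping_commands(cluster):
    """Return, per source node, the list of ping commands to run on it."""
    targets = [ip for ifaces in cluster.values() for ip in ifaces.values()]
    return {src: [f"ping {ip}" for ip in targets] for src in cluster}

plan = ping_commands(nodes)
# Each of the two nodes should run four pings:
# both addresses of both nodes.
```

Running every command in the plan from its source node, and confirming each one receives replies, completes the connectivity portion of the check.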
• Verifying domain membership. All nodes in the cluster must be members of the same domain, and they must be able to access a domain controller and a DNS server. They can be configured as member servers or domain controllers. You should have at least one domain controller on the same network segment as the cluster. To avoid having a single point of failure, another domain controller should also be available. In this guide, all nodes are configured as member servers, which is the recommended role.
In a two-node server cluster, if one node is a domain controller, the other node must also
be a domain controller. In a four-node cluster, it is not necessary to configure all four nodes
as domain controllers. However, when following a "best practices" model of having at least
one backup domain controller, at least one of the remaining three nodes should also be
configured as a domain controller. A cluster node must be promoted to a domain controller
before the Cluster service is configured.
The dependence in Windows Server 2003 on DNS requires that every node that is a domain
controller must also be a DNS server if another DNS server that supports dynamic updates
is not available.
You should consider the following issues if you are planning to deploy cluster nodes as
domain controllers:

If one cluster node in a two-node cluster is a domain controller, the other node
must also be a domain controller.

There are performance implications associated with the overhead of running a
computer as a domain controller. There is increased memory usage and additional
network traffic from replication because these domain controllers must replicate with
other domain controllers in the domain and across domains.

If the cluster nodes are the only domain controllers, they each must be DNS servers
as well. They should point to themselves for primary DNS resolution and to each
other for secondary DNS resolution.

The first domain controller in the forest or domain will assume all Operations Master
Roles. You can redistribute these roles to any node. However, if a node fails, the
Operations Master Roles assumed by that node will be unavailable. Because of this,
you should not run Operations Master Roles on any cluster node. This includes
Schema Master, Domain Naming Master, Relative ID Master, PDC Emulator, and
Infrastructure Master. These functions cannot be clustered for high availability with
failover.

Because of resource constraints, it might not be optimal to cluster other applications
such as Microsoft SQL Server™ in a scenario where the nodes are also domain
controllers. This configuration should be thoroughly tested in a lab environment
before deployment.
Because of the complexity and overhead involved when cluster nodes are domain
controllers, all nodes should be member servers.
•
Setting up a Cluster service user account. The Cluster service requires a domain user
account that is a member of the Local Administrators group on each node. This is the
account under which the Cluster service can run. Because Setup requires a user name and
password, you must create this user account before you configure the Cluster service. This
user account should be dedicated to running only the Cluster service and should not belong
to an individual.
Note: It is not necessary for the Cluster service account (CSA) to be a member of the
Domain Administrators group. For security reasons, domain administrator rights should not
be granted to the Cluster service account.
The Cluster service account requires the following rights to function properly on all nodes in
the cluster. The Cluster Configuration Wizard grants the following rights automatically:

Act as part of the operating system

Adjust memory quotas for a process

Back up files and directories

Restore files and directories

Increase scheduling priority

Log on as a service
You should ensure that the Local Administrator Group has access to the following user
rights:

Debug programs

Impersonate a client after authentication

Manage auditing and security log
You can use the following procedure to set up a Cluster service user account.
Configuring Remote Desktop
Remote Desktop for Administration can greatly reduce the overhead associated with remote
administration. Enabled by Terminal Services technology, Remote Desktop for Administration is
specifically designed for server management. Therefore, it does not install the application-sharing and multiuser capabilities or the process scheduling of the full Terminal Server component (formerly called Terminal Services in Application Server mode). As a result, Remote
Desktop for Administration can be used on an already busy server without noticeably affecting
CPU performance, which makes it a convenient and efficient service for remote management.
Remote Desktop for Administration does not require you to purchase special licenses for client
computers that access the server. It is not necessary to install Terminal Server Licensing when
using Remote Desktop for Administration.
Administrators can also fully administer computers running Windows Server 2003 family
operating systems from computers running earlier versions of Windows by installing Remote
Desktop Connection.
Remote Desktop for Administration is disabled by default in Windows Server 2003 family
operating systems.
To enable remote connections:
1. Open System in Control Panel
2. On the Remote tab, select the Allow users to connect remotely to your computer
check box. Click OK.
Notes:

You must be logged on as a member of the Administrators group to enable or disable
Remote Desktop for Administration.

To open a Control Panel item, click Start, click Control Panel, and then double-click the
appropriate icon.

Be aware of the security implications of remote logons. Users who log on remotely can
perform tasks as though they were sitting at the console. For this reason, you should ensure
that the server is behind a firewall. For more information, see VPN servers and firewall
configuration and Security information for IPSec.

You should require all users who make remote connections to use a strong password. For
more information, see Strong passwords.
To connect to Remote Desktop for Administration from a remote computer, use Remote
Desktop Connection. To open Remote Desktop Connection, click Start, point to
Programs or All Programs, point to Accessories, point to Communications, and then click
Remote Desktop Connection.
To set up a Cluster service user account
1. Open Active Directory Users and Computers.
2. In the console tree, right-click the folder to which you want to add a user account (for example, Active Directory Users and Computers/domain node/folder).
3. Point to New, and then click User.
4. New Object - User opens and looks similar to the following figure:
5. Type a first name and last name. (These should make sense but are usually not important for this account.)
6. In User logon name, type a name that is easy to remember, such as ClusterService1,
click the UPN suffix in the drop-down list, and then click Next.
7. In Password and Confirm password, type a password that follows your organization's
guidelines for passwords, and then select User Cannot Change Password and
Password Never Expires. Click Finish to create the account.
If your administrative security policy does not allow the use of passwords that never
expire, you must renew the password and update the Cluster service configuration on
each node before the passwords expire.
8. In the console tree of the Active Directory Users and Computers snap-in, right-click the user account you just created (for example, ClusterService1), and then click Properties.
9. Click Add Members to a Group.
10. Click Administrators, and then click OK. This gives the new user account administrative permissions on the computer.
Setting up disks
This section includes information and step-by-step procedures you can use to set up disks.
Important: To avoid possible corruption of cluster disks, ensure that both the Windows
Server 2003 operating system and the Cluster service are installed, configured, and running on
at least one node before you start the operating system on another node in the cluster.
Quorum resource
The quorum resource maintains the configuration data necessary for recovery of the cluster.
The quorum resource is generally accessible to other cluster resources so that any cluster node
has access to the most recent database changes. There can only be one quorum disk resource
per cluster.
The requirements and guidelines for the quorum disk are as follows:

The quorum disk should be at least 500 MB in size.

You should use a separate LUN as the dedicated quorum resource.

A disk failure could cause the entire cluster to fail. Because of this, we strongly recommend
that you implement a hardware RAID solution for your quorum disk to help guard against
disk failure. Do not use the quorum disk for anything other than cluster management.
When you configure a cluster disk, it is best to manually assign drive letters to the disks on the
shared bus. The drive letters should not start with the next available letter. Instead, leave
several free drive letters between the local disks and the shared disks. For example, start with
drive Q as the quorum disk and then use drives R and S for the shared disks. Another method is
to start with drive Z as the quorum disk and then work backward through the alphabet with
drives X and Y as data disks. You might also want to consider labeling the drives in case the
drive letters are lost. Using labels makes it easier to determine what the drive letter was. For
example, a drive label of "DriveR" makes it easy to determine that this drive was drive letter R.
We recommend that you follow these best practices when assigning drive letters because of the following issues:
 Adding disks to the local nodes can cause the drive letters of the cluster disks to shift up by one letter.

Adding disks to the local nodes can cause a discontinuous flow in the drive lettering and
result in confusion.

Mapping a network drive can conflict with the drive letters on the cluster disks.
The letter Q is commonly used as a standard for the quorum disk. Q is used in the next
procedure.
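The lettering and labeling scheme described above (start deep in the alphabet, and label each disk after its intended letter) can be sketched as a small helper. This is an illustrative sketch only; the function name and defaults are not part of the original guide.

```python
import string

def cluster_drive_letters(start="Q", count=3):
    """Assign letters for the quorum disk and data disks, starting at
    `start` and moving forward through the alphabet (e.g. Q, R, S).
    This leaves the earlier letters free for local and mapped drives.
    Returns a mapping of volume label -> drive letter, following the
    guide's suggestion of labels like "DriveR" for drive R."""
    letters = string.ascii_uppercase
    idx = letters.index(start)
    assigned = letters[idx:idx + count]
    return {f"Drive{ltr}": ltr for ltr in assigned}

print(cluster_drive_letters())  # {'DriveQ': 'Q', 'DriveR': 'R', 'DriveS': 'S'}
```

With labels recorded this way, a lost drive letter can be recovered from the volume label alone, as the guide recommends.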
The first step in setting up disks for a cluster is to configure the cluster disks you plan to use.
To do this, use the following procedure.
To configure cluster disks
1. Make sure that only one node in the cluster is turned on.
2. Open Computer Management (Local).
3. In the console tree, click Computer Management (Local), click Storage, and then click
Disk Management.
4. When you first start Disk Management after installing a new disk, a wizard appears that
provides a list of the new disks detected by the operating system. If a new disk is detected,
the Write Signature and Upgrade Wizard starts. Follow the instructions in the wizard.
5. Because the wizard automatically configures the disk as dynamic storage, you must
reconfigure the disk to basic storage. To do this, right-click the disk, and then click
Convert To Basic Disk.
6. Right-click an unallocated region of a basic disk, and then click New Partition.
7. In the New Partition Wizard, click Next, click Primary partition, and then click Next.
8. By default, the maximum size for the partition is selected. Using multiple logical drives is
better than using multiple partitions on one disk because cluster disks are managed at the
LUN level, and logical drives are the smallest unit of failover.
9. Change the default drive letter to one that is deeper into the alphabet. For example, start
with drive Q as the quorum disk, and then use drives R and S for the data disks.
10. Format the partition with the NTFS file system.
11. In Volume Label, enter a name for the disk; for example, "Drive Q." Assigning a drive
label for cluster disks reduces the time it takes to troubleshoot a disk recovery scenario.
After you have finished entering values for the new partition, it should look similar to the
following figure:
Important: Ensure that all disks are formatted as MBR; GPT disks are not supported as
cluster disks.
After you have configured the cluster disks, you should verify that the disks are accessible. To
do this, use the following procedure.
To verify that the cluster disks are accessible
1. Open Windows Explorer.
2. Right-click one of the cluster disks, such as "Drive Q," click New, and then click Text
Document.
3. Verify that the text document was created and written to the specified disk, and then delete
the document from the cluster disk.
4. Repeat steps 1 through 3 for all cluster disks to verify that they are all accessible from the
first node.
5. Turn off the first node, and then turn on the second node.
6. Repeat steps 1 through 3 to verify that the disks are all accessible from the second node.
7. Repeat again for any additional nodes in the cluster.
8. When finished, turn off the nodes and then turn on the first node again.
Creating a new server cluster
In the first phase of creating a new server cluster, you must provide all initial cluster
configuration information. To do this, use the New Server Cluster Wizard.
Important: Before configuring the first node of the cluster, make sure that all other nodes are
turned off. Also make sure that all cluster storage devices are turned on.
The following procedure explains how to use the New Server Cluster Wizard to configure the
first cluster node.
To configure the first node
1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click
Administrative Tools, and then double-click Cluster Administrator.
2. In the Open Connection to Cluster dialog box, in Action, select Create new cluster,
and then click OK.
3. The New Server Cluster Wizard appears. Verify that you have the necessary information to
continue with the configuration, and then click Next to continue.
4. In Domain, select the name of the domain in which the cluster will be created. In Cluster
name, enter a unique NetBIOS name. It is best to follow the DNS namespace rules when
entering the cluster name. For more information, see article 254680, "DNS Namespace
Planning," in the Microsoft Knowledge Base.
5. On the Domain Access Denied page, if you are logged on locally with an account that is
not a domain account with local administrative permissions, the wizard will prompt you to
specify an account. This is not the account the Cluster service will use to start the cluster.
Note: If you have the appropriate credentials, the Domain Access Denied screen will not
appear.
6. Since it is possible to configure clusters remotely, you must verify or type the name of the
computer you are using as the first node. On the Select Computer page, verify or type
the name of the computer you plan to use.
Note: The wizard verifies that all nodes can see the cluster disks. In some complicated
SANs, the target IDs for the disks might not match on all the cluster nodes. If this occurs,
the Setup program might incorrectly determine that the disk configuration is not valid. To
address this issue, click Advanced, and then click Advanced (minimum) configuration.
7. On the Analyzing Configuration page, Setup analyzes the node for possible hardware or
software issues that can cause installation problems. Review any warnings or error
messages that appear. Click Details to obtain more information about each warning or
error message.
8. On the IP Address page, type the unique, valid, cluster IP address, and then click Next.
The wizard automatically associates the cluster IP address with one of the public networks
by using the subnet mask to select the correct network. The cluster IP address should be
used for administrative purposes only, and not for client connections.
9. On the Cluster Service Account page, type the user name and password of the Cluster
service account that was created during pre-installation. In Domain, select the domain
name, and then click Next. The wizard verifies the user account and password.
10. On the Proposed Cluster Configuration page, review the information for accuracy. You
can use the summary information to reconfigure the cluster if a system recovery occurs.
You should keep a hard copy of this summary information with the change management log
at the server. To continue, click Next.
Note: If you want, you can click Quorum to change the quorum disk designation from the
default disk resource. To make this change, in the Quorum resource box, click a different
disk resource. If the disk has more than one partition, click the partition where you want the
cluster-specific data to be kept, and then click OK.
11. On the Creating the Cluster page, review any warnings or error messages that appear
while the cluster is being created. Click to expand each warning or error message for more
information. To continue, click Next.
12. Click Finish to complete the cluster configuration.
Note: To view a detailed summary, click View Log, or view the text file stored at the
following location:
%SystemRoot%\System32\LogFiles\Cluster\ClCfgSrv.Log
Validating the cluster installation
You should validate the cluster configuration of the first node before configuring the second
node. To do this, use the following procedure.
To validate the cluster configuration
1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click
Administrative Tools, and then double-click Cluster Administrator.
2. Verify that all cluster resources are successfully up and running. Under State, all resources
should be "Online."
Configuring subsequent nodes
After you install the Cluster service on the first node, it takes less time to install it on
subsequent nodes. This is because the Setup program uses the network configuration settings
configured on the first node as a basis for configuring the network settings on subsequent
nodes. You can also install the Cluster service on multiple nodes at the same time and choose
to install it from a remote location.
Note: The first node and all cluster disks must be turned on. You can then turn on all other
nodes. At this stage, the Cluster service controls access to the cluster disks, which helps
prevent disk corruption. You should also verify that all cluster disks have had resources
automatically created for them. If they have not, manually create them before adding any more
nodes to the cluster.
After you have configured the first node, you can use the following procedure to configure
subsequent nodes.
To configure the second node
1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click
Administrative Tools, and then double-click Cluster Administrator.
2. In the Open Connection to Cluster dialog box, in Action, select Add nodes to cluster.
Then, in Cluster or server name, type the name of an existing cluster, select a name
from the drop-down list box, or click Browse to search for an available cluster, and then
click OK to continue.
3. When the Add Nodes Wizard appears, click Next to continue.
4. If you are not logged on with the required credentials, you will be asked to specify a domain
account that has administrator rights and permissions on all nodes in the cluster.
5. In the Domain list, click the domain where the server cluster is located, make sure that the
server cluster name appears in the Cluster name box, and then click Next.
6. In the Computer name box, type the name of the node that you want to add to the
cluster. For example, to add Node2, you would type Node2.
7. Click Add, and then click Next.
8. When the Add Nodes Wizard has analyzed the cluster configuration successfully, click Next.
9. On the Cluster Service Account page, in Password, type the password for the Cluster
service account. Ensure that the correct domain for this account appears in the Domain
list, and then click Next.
10. On the Proposed Cluster Configuration page, view the configuration details to verify
that the server cluster IP address, the networking information, and the managed disk
information are correct, and then click Next.
11. When the cluster is configured successfully, click Next, and then click Finish.
Configuring the server cluster after installation
Heartbeat configuration
After the network and the Cluster service have been configured on each node, you should
determine the network's function within the cluster. Using Cluster Administrator, select the
Enable this network for cluster use check box and select from among the following
options.
The options are as follows:
 Client access only (public network). Select this option if you want the Cluster service to use this network adapter only for external communication with other clients. No node-to-node communication will take place on this network adapter.
 Internal cluster communications only (private network). Select this option if you want the Cluster service to use this network only for node-to-node communication.
 All communications (mixed network). Select this option if you want the Cluster service to use the network adapter for node-to-node communication and for communication with external clients. This option is selected by default for all networks.
This guide assumes that only two networks are in use. It explains how to configure these
networks as one mixed network and one private network. This is the most common
configuration.
Use the following procedure to configure the heartbeat.
To configure the heartbeat
1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click
Administrative Tools, and then double-click Cluster Administrator.
2. In the console tree, double-click Cluster Configuration, and then click Networks.
3. In the details pane, right-click the private network you want to enable, and then click
Properties. Private Properties opens and looks similar to the following figure:
4. Select the Enable this network for cluster use check box.
5. Click Internal cluster communications only (private network), and then click OK.
6. In the details pane, right-click the public network you want to enable, and then click
Properties. Public Properties opens and looks similar to the following figure:
7. Select the Enable this network for cluster use check box.
8. Click All communications (mixed network), and then click OK.
Prioritizing the heartbeat adapter order
After you have decided the roles in which the Cluster service will use the network adapters, you
must prioritize the order in which the adapters will be used for internal cluster communication.
To do this, use the following procedure.
To configure network priority
1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click
Administrative Tools, and then double-click Cluster Administrator.
2. In the console tree, click the cluster you want.
3. On the File menu, click Properties.
4. Click the Network Priority tab.
5. In Networks used for internal cluster communications, click a network.
6. To increase the network priority, click Move Up; to lower it, click Move Down.
7. When you are finished, click OK.
Note: If multiple networks are configured as private or mixed, you can specify which one
to use for internal node communication. It is usually best for private networks to have
higher priority than mixed networks.
Quorum disk configuration
The New Server Cluster Wizard and the Add Nodes Wizard automatically select the drive used for the quorum device. The wizard automatically uses the smallest partition it finds that is larger than 50 MB. If you want, you can change the automatically selected drive to a dedicated one that you have designated for use as the quorum. The following procedure explains what to do if you want to use a different disk for the quorum resource.
To use a different disk for the quorum resource
1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click
Administrative Tools, and then double-click Cluster Administrator.
2. If one does not already exist, create a physical disk or other storage-class resource for the
new disk.
3. In the console tree, click the cluster name.
4. On the File menu, click Properties, and then click the Quorum tab. The quorum
property page opens and looks similar to the following figure:
5. On the Quorum tab, click Quorum resource, and then select the new disk or storage-class resource that you want to use as the quorum resource for the cluster.
6. In Partition, if the disk has more than one partition, click the partition where you want the
cluster specific data kept.
7. In Root path, type the path to the folder on the partition; for example:
\MSCS
SCSI Drive Installations
This section of the guide provides a generic set of instructions for parallel SCSI drive
installations.
Important: If the SCSI hard disk vendor’s instructions differ from the instructions provided
here, follow the instructions supplied by the vendor.
The SCSI bus listed in the hardware requirements must be configured before you install the
Cluster service. This configuration applies to the following:

The SCSI devices.

The SCSI controllers and the hard disks. This is to ensure that they work properly on a
shared SCSI bus.

The termination of the shared bus. If a shared bus must be terminated, it must be done
properly. The shared SCSI bus must have a terminator at each end of the bus. It is
possible to have multiple shared SCSI buses between the nodes of a cluster.
In addition to the following information, refer to documentation from the manufacturer of your
SCSI device.
Configuring SCSI devices
Each device on the shared SCSI bus must have a unique SCSI identification number. Because
most SCSI controllers default to SCSI ID 7, configuring the shared SCSI bus includes changing
the SCSI ID number on one controller to a different number, such as SCSI ID 6. If there is more
than one disk that will be on the shared SCSI bus, each disk must have a unique SCSI ID
number.
Important: For the Dell PowerEdge 1850, open the RAID configuration utility (press Ctrl+M at
startup) and set the Objects > Adapter > Initiator ID parameter to 6.
Dell PowerEdge 1850 installation sample
1. Start the Storage Administrator (press Ctrl+M at startup).
2. Configure the storage to work in cluster mode:
   a. Main Menu > Objects > Adapter > Initiator ID = 6
   b. Main Menu > Objects > Adapter > Cluster Mode = Enabled
   c. Main Menu > Objects > Adapter > Emulation = Mass Storage
3. Create the RAID array:
   a. Main Menu > Configuration > New Configuration
   b. Select all hard drives (Space).
   c. Mark the end of the RAID array (Enter).
   d. Start the configuration (Enter).
   e. Select the array (Space).
   f. Accept the configuration (F10).
   g. Provide the following settings (RAID = 5) and choose "Accept".
   h. Main Menu > Initialize, then press Space, F10, and choose Yes.
Testing the Server Cluster
After Setup, there are several methods you can use to verify a cluster installation:
- Cluster Administrator. After Setup is run on the first node, open Cluster Administrator, and then try to connect to the cluster. If Setup was run on a second node, start Cluster Administrator on either the first or second node, attempt to connect to the cluster, and then verify that the second node is listed.
- Services snap-in. Use the Services snap-in to verify that the Cluster service is listed and started.
- Event log. Use Event Viewer to check for ClusSvc entries in the system log. You should see entries that confirm the Cluster service successfully formed or joined a cluster.
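These checks can also be run from a command prompt on Windows Server 2003, which is convenient for scripting. A sketch, assuming the default Cluster service name (ClusSvc) and a cluster reachable from the local node:

```
rem Verify the Cluster service is installed and running (STATE should be RUNNING)
sc query clussvc
rem List the cluster nodes and confirm each node's status is Up
cluster node /status
rem Confirm the resource groups are Online
cluster group /status
```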
Testing whether group resources can fail over
You might want to ensure that a new group is functioning correctly. To do this, use the
following procedure.
To test whether group resources can fail over
1. Open Cluster Administrator. To do this, click Start, click Control Panel, double-click
Administrative Tools, and then double-click Cluster Administrator.
2. In the console tree, double-click the Groups folder.
3. In the console tree, click a group.
4. On the File menu, click Move Group. On a multi-node cluster server, when using Move
Group, select the node to move the group to. Make sure the Owner column in the details
pane reflects a change of owner for all of the group's dependencies.
5. If the group resources successfully fail over, the group will be brought online on the second
node after a short period of time.
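The same failover test can be performed from the command line with cluster.exe. A sketch, assuming a group named "Group 0" and a second node named NODE2 (substitute your own group and node names):

```
rem Move the group to the other node, then confirm its state and new owner
cluster group "Group 0" /moveto:NODE2
cluster group "Group 0" /status
```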
FIX Edge installation
Note: Before you continue, read the FIX Edge Quick Start Guide.
There are two available scenarios for installing FIX Edge:
- Installation of FIX Edge as a service (recommended)
- Installation of FIX Edge as a console application
Installation of FIX Edge as a service
1. Extract the FIX Edge archive to a shared disk of the active node.
2. Install FIX Edge as a service named FixServer_1 on all nodes of the cluster.
3. Open Cluster Administrator.
4. On the File menu, point to New and click Resource. The resource wizard appears.
5. Provide a Name for the new resource, e.g. FixServer.
6. Set Resource Type to Generic Service and click Next.
7. Add the shared disk resource to which you installed FIX Edge, and the IP Address resource, to the dependencies and click Next.
8. Set Service name to FixServer_1 (the name under which you registered the service in step 2) and click Next.
9. Go to the last step and click Finish.
10. To start the service, on the File menu click Bring Online.
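For reference, the wizard steps above can also be performed with cluster.exe. A sketch, assuming the resource group is named "FIX Edge Group" and the shared disk resource is named "Disk R:" (both names are examples; substitute your own):

```
rem Create a Generic Service resource for the FIX Edge service
cluster res "FixServer" /create /group:"FIX Edge Group" /type:"Generic Service"
rem Point the resource at the service registered in step 2
cluster res "FixServer" /priv ServiceName=FixServer_1
rem Add dependencies on the shared disk and the IP Address resource
cluster res "FixServer" /adddep:"Disk R:"
cluster res "FixServer" /adddep:"IP Address"
rem Bring the new resource online
cluster res "FixServer" /online
```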
Installation of FIX Edge as a console application
1. Extract the FIX Edge archive to a shared disk of the active node.
2. Open Cluster Administrator.
3. On the File menu, point to New and click Resource. The resource wizard appears.
4. Provide a Name for the new resource, e.g. FixServer.
5. Set Resource Type to Generic Application and click Next.
6. Add the shared disk resource to which you installed FIX Edge, and the IP Address resource, to the dependencies and click Next.
7. Set the Command line and Current directory fields and click Next. For instance, if you have installed FIX Edge to R:\B2Bits\FixEdge, the executable file is in the bin subdirectory, and the config file is conf\FIXEdge.properties, then:
- Command line should be R:\B2Bits\FixEdge\bin\FixServer_71.exe R:\B2Bits\FixEdge\conf\FIXEdge.properties –console
- Current directory should be R:\B2Bits\FixEdge\bin
8. Go to the last step and click Finish.
9. To start the application, on the File menu click Bring Online.
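These steps, too, have a cluster.exe equivalent. A sketch, assuming the same example paths, a group named "FIX Edge Group", and a shared disk resource named "Disk R:" (the exact placement of the –console switch on the command line is an assumption; check the FIX Edge Quick Start Guide):

```
rem Create a Generic Application resource for FIX Edge
cluster res "FixServer" /create /group:"FIX Edge Group" /type:"Generic Application"
rem Set the command line and working directory (private properties of this resource type)
cluster res "FixServer" /priv CommandLine="R:\B2Bits\FixEdge\bin\FixServer_71.exe R:\B2Bits\FixEdge\conf\FIXEdge.properties -console"
cluster res "FixServer" /priv CurrentDirectory=R:\B2Bits\FixEdge\bin
rem Add the shared disk dependency and bring the resource online
cluster res "FixServer" /adddep:"Disk R:"
cluster res "FixServer" /online
```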
Notes:
- To perform this procedure, you must be a member of the Administrators group on the local computer, or you must have been delegated the appropriate authority. If the computer is joined to a domain, members of the Domain Admins group might be able to perform this procedure. As a security best practice, consider using Run as to perform this procedure.
- To open Cluster Administrator, click Start, click Control Panel, double-click Administrative Tools, and then double-click Cluster Administrator.
Resources
- Step-by-Step Guide to a Common Infrastructure for Windows Server 2003 Deployment
http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/directory/activedirectory/stepbystep/domcntrl.mspx
- Quick Start Guide for Server Clusters
http://technet2.microsoft.com/WindowsServer/en/library/dba487bf-61b9-45af-b927-e2333ec810b61033.mspx