Lenovo ThinkServer High-Availability Solutions white paper

Lenovo ThinkServer High-Availability Solutions
With Lenovo ThinkServer SA120 DAS Array, LSI Syncro® CS 9286-8e, and
Microsoft Windows Server 2012
Lenovo Enterprise Product Group
Version 1.0
June 2014
© Copyright Lenovo 2014
LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. This information could include technical
inaccuracies or typographical errors. Changes may be made to the information herein; these changes will
be incorporated in new editions of the publication. Lenovo may make improvements and/or changes in
the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not
in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part
of the materials for this Lenovo product, and use of those Web sites is at your own risk.
The following terms are trademarks of Lenovo in the United States, other countries, or both: Lenovo, and
ThinkServer.
Intel and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.
Microsoft, Windows Storage Server 2012, Windows Server 2012, and the Windows Logo are trademarks
of Microsoft Corporation in the United States and/or other countries.
LSI, the LSI & Design logo, and MegaRAID are trademarks or registered trademarks of LSI Corporation in
the United States and/or other countries.
Contents
1.0 Introduction .................................................................... 5
2.0 High-Availability Solutions ..................................................... 6
    2.1 Solution Architecture ....................................................... 8
    2.2 Network Architecture ........................................................ 9
    2.3 Virtual Disk Layout ........................................................ 10
    2.4 Storage Array Design ....................................................... 11
3.0 Solution Hardware Recommendations .............................................. 12
    3.1 ThinkServer Systems ........................................................ 12
    3.2 ThinkServer SA120 DAS Array ................................................ 14
    3.3 Scaling the Solution ....................................................... 15
4.0 Configuration Guide ............................................................ 17
    4.1 Pre-installation tasks ..................................................... 17
    4.2 Making the Physical Connections ............................................ 18
    4.3 Creating Virtual Drives on Each Server Node ................................ 19
    4.4 Operating System Installation and Configuration ............................ 20
    4.5 Configure Networking ....................................................... 23
    4.6 Create Virtual Drives on the SA120 ......................................... 28
    4.7 Expose Storage to the Cluster Nodes ........................................ 36
    4.8 Creating the Cluster ....................................................... 40
5.0 Creating the Highly Available Storage Cluster .................................. 56
    5.1 Install the File Services Role ............................................. 56
    5.2 Create the Highly Available File Server .................................... 57
    5.3 Create a File Share ........................................................ 60
    5.4 Mapping User Folders to the Highly Available File Server Share ............. 64
    5.5 Test a Cluster Failover .................................................... 65
6.0 Creating the Highly Available Application Cluster .............................. 65
    6.1 Install Hyper-V ............................................................ 66
    6.2 Create a Virtual Switch .................................................... 68
    6.3 Add a Disk as CSV to Store Virtual Machine Data ............................ 69
    6.4 Create a Highly Available Virtual Machine .................................. 71
    6.5 Test a Planned Failover .................................................... 76
    6.6 Test an Unplanned Failover ................................................. 76
7.0 References ..................................................................... 77
1.0 Introduction
Organizations expect their IT environments to operate with mission-critical reliability. End-users expect
that their key applications such as email, database, and transaction processing will always be available,
and that their data will be protected against loss in the event of a hardware or software failure.
Shared storage is essential to achieving many of the benefits of high availability. Storage area networks
(SANs) satisfy this need, but a SAN can also be very complex and expensive to deploy and manage.
Network-attached storage (NAS) can be more affordable, but adding reliability and data protection to
NAS can significantly increase the cost.
High-availability systems provide applications with a means of
continuance if a server on which they are running should fail.
In a high-availability solution, servers work together in a cluster
to provide redundancy to each other, maximizing uptime by
utilizing fault-tolerant components. When a server in the
cluster (a node) fails, the workload moves automatically to
other nodes in the cluster with little interruption – a process
known as failover. High-availability configurations can also
provide additional benefits by allowing CPU loads to be
balanced by moving applications to servers that have lower
CPU utilization in a way that is transparent to the clients.
Solution Benefits
• Enterprise-class, high-availability server application and storage at a fraction of the cost and complexity of existing HA solutions
• Storage resilience and performance similar to high-cost storage solutions, such as Fibre Channel or SAN devices
New virtualization and failover clustering capabilities of
Microsoft Windows Server 2012 make high-availability application and storage solutions easier to
configure and less expensive to deploy. The Windows Server platform provides high availability and
scalability to many types of server workloads including Microsoft Hyper-V hosts, SQL Server, and
Exchange, as well as file share storage for users and server applications.
This document describes a ThinkServer solution that provides a continuously available hardware and
software platform utilizing Microsoft Windows Server 2012 R2 Failover Clustering, which easily provides
transparent failover without data loss. The LSI Syncro adapter provides robust hardware RAID data
protection while supporting cluster failover, something that otherwise cannot be done natively within
Windows Server. The Lenovo ThinkServer SA120 direct-attached storage array (also known as JBOD)
completes the solution, enabling shared storage as reliable as a SAN at a fraction of the cost. This
solution rivals the functionality and scalability of advanced architectures using SANs, while reducing
capital and management costs and complexity. This solution is well suited to departments, workgroups,
mid-size enterprises, and especially customers with limited IT staff and constrained budgets.
This document provides guidance for installing, configuring, and supporting the solution. It is intended
for IT administrators and managers, as well as business partners planning to evaluate or deploy these
storage solutions using Lenovo servers. It assumes a working knowledge of Windows networking and
server software. Additional information beyond the scope of this document can be found in the
References section.
2.0 High-Availability Solutions
The high availability solutions described in this document can serve two primary purposes.
1. A highly available storage cluster provides continuously available networked storage for users
and server applications such as Microsoft SQL Server and Hyper-V virtual machines (see Figure
1).
2. A highly available application cluster enables Windows Server clustered roles to run on physical
servers or on virtual machines installed on the servers running Hyper-V (see Figure 2).
Both solutions can achieve similar levels of reliability, availability, manageability, and performance
expected of solutions using a SAN, but at a lower acquisition cost.
Highly Available Storage Cluster – This solution provides continuously available centralized networked
file services for general use — just like a SAN — to traditional information workers and server application
workloads. The solution enables continuous access to SMB and NFS file shares as well as iSCSI storage
targets with transparent failover for connections to those services. This capability is appropriate for
users who need access to the same files and applications, or if centralized backup and file management
is needed.
Figure 1 – Highly Available Storage Cluster Solution Stack
In addition, this solution can leverage failover clustering and capabilities of Windows SMB 3.0 to provide
file shares to store server application data, such as Hyper-V virtual machine files or SQL Server database
files. Microsoft calls file shares associated with this type of clustered file server scale-out file shares. In
this configuration, all file shares are simultaneously accessible through all nodes in the cluster, referred
to as an active-active configuration. This configuration provides better utilization of network bandwidth
by automatically aggregating the bandwidth available from multiple redundant network paths between
the application servers and the SMB 3.0 shares hosted on the storage server, and provides resiliency to a
network failure. Connections to the shares are also load balanced by redirecting clients to the cluster
node with the best access to the volume used by the file share. This is the recommended file server
type when deploying either Hyper-V or Microsoft SQL Server over SMB.
Other important storage server roles and features of Windows Server 2012 R2 that can be employed
include:
• Data Deduplication – Deduplication can significantly improve the efficiency of storage space utilization by storing a single copy of identical data on the volume. This can deliver storage optimization ratios of 2:1 for general file servers and up to 20:1 for virtualization data.
• DFS Namespaces and Replication – In a larger network, users can be given a centralized folder namespace, through which the underlying file shares on different servers and in different sites are made available to access and store files. DFS Namespaces map clients' logical file requests to physical server files without having to search or map multiple locations. If deployed in a distributed environment (e.g. a branch office), DFS Replication provides synchronization capabilities between the central and remote servers across limited-bandwidth network connections.
• BranchCache – BranchCache optimizes the usage of wide area network (WAN) links by locally caching the remote data based on predefined policies. When a user accesses content on remote servers, BranchCache copies content from the remote servers and caches the content on the branch office server, allowing compatible clients to access the content from the local server rather than over the WAN. Subsequent requests for the same data will be served from the local server until updates are required.
• Volume Shadow Copy Service (VSS) – VSS is used to create a point-in-time image (shadow copy) of one or more volumes. It provides enhanced data protection through high-fidelity backups, rapid data restores, and data transport. VSS for SMB file shares allows backup operations to be performed using snapshots of remote file shares that support SMB-based server applications (for example, SQL Server over SMB).
Highly Available Application Cluster – The highly available application cluster increases the availability of
applications and services running in the member nodes. If one or more of the cluster nodes fail, other
nodes begin to provide service (the failover process). In addition, the clustered roles are proactively
monitored to verify that they are working properly. If they are not working, they are restarted or moved
to another node. In this solution design, the shared storage is part of, and is managed by the clustered
nodes, although a highly available storage cluster as described above could provide it.
Figure 2 – Highly Available Application Cluster Solution Stack
2.1 Solution Architecture
This guide will focus on configuring both the storage cluster solution and the application cluster solution.
Steps to create the storage cluster are the same for the application cluster with the addition of enabling
Hyper-V or other operating system services for high availability in the application solution.
Windows Server 2012 R2 is installed on two ThinkServer systems deployed as a failover cluster. An LSI
Syncro RAID controller in each server in the cluster connects to an SA120 JBOD, providing shared storage with hardware RAID capability. Storage resiliency comes from redundant
connections from each cluster node to the SA120.
Syncro provides hardware RAID to guard against data loss in the event of a drive failure. Syncro also
mirrors the Input / Output (I/O) data cache in real time across the two controllers to support the failover
cluster functionality. Because the data cache in both controllers is completely mirrored, data is not lost
in the event of an unplanned failover.
The cluster connects to the public network using standard Ethernet, and network resiliency can be
provided by using multiple redundant Ethernet connections to redundant switches. A private network is
used for cluster internal communications. An optional separate management network can also be
configured for management of the servers.
To support the cluster, at least one Active Directory Domain Services controller is needed for centralized
security and management of the cluster member computers. DNS services are also required. It is
assumed that Active Directory and DNS are deployed at the customer site, and deployment of these
services is not in scope for this document.
Figure 3 shows the logical architecture of the solution.
Figure 3 – Logical Architecture
2.2 Network Architecture
The network architecture requires a minimum of two networks to be configured. The first provides a
private network for internal cluster communications. With only two nodes, the server-to-server
network connection can be made directly (using a crossover cable) without going through a switch;
otherwise, this network must be on a separate subnet from all other network communications.
The second network provides access to the high-availability cluster and to infrastructure services over
cost-efficient Ethernet connections (1 Gb or 10 Gb). The use of 1 Gb Ethernet versus 10 Gb Ethernet
networking can be selected based on the intended workload.
If resiliency against network failures is required, the solution must have redundant paths to each cluster
server. Additional network adapters can be added and each NIC connected to redundant switches to
provide continued access to the cluster in the event of a network component failure. When multiple
NICs are available, network path redundancy, failover, load balancing and the aggregation of available
bandwidth on the available NIC ports can be configured through the use of NIC teaming, or the SMB
multi-channel capability in Windows Server 2012.
Optionally, a third network can be configured for management of the servers. Dedicating a network to
this function prevents competition with guest traffic, and provides a degree of separation for security
and ease of management. Additionally, the server out-of-band management can be combined on this
network.
2.3 Virtual Disk Layout
The Syncro CS controllers work together to achieve file sharing, cache coherency, heartbeat monitoring,
and redundancy. To maintain data synchronization between the controllers, a particular virtual disk can only be accessed or owned by a single controller at any given point in time (a local virtual disk, as shown in Figure 4). The other Syncro controller is aware of the virtual disk,
but only has indirect access to it (a remote virtual disk).
Figure 4 – Local Virtual Disks
Access to a remote virtual disk is accomplished with “I/O shipping” which is a means of submitting I/O
requests from one controller to the controller that owns the virtual disk. As shown in Figure 5, when a
controller requires access to a remote virtual disk, the I/O is “shipped” to the remote controller, which
then processes the I/O locally. This preserves the active-active configuration of the cluster nodes;
however, I/O requests serviced by local virtual disks are much faster than those serviced by remote
virtual disks.
Figure 5 – Remote Virtual Disks
From a performance perspective, the situation shown in Figure 5 is non-optimal as there is an additional
command processing overhead associated with shipped I/O. The preferred configuration is to co-locate
the virtual disks with the server cluster node that is primarily driving the I/O load. Avoid configurations
with multiple virtual disks whose I/O load is split between the server nodes.
2.4 Storage Array Design
The storage array RAID level selected should be based on consideration of several factors, most
importantly performance, fault tolerance, and storage capacity. However, not all of these factors can be
optimized at the same time.
In general, a storage configuration such as RAID 10 is appropriate for virtual machine usage balancing
performance and capacity. RAID 5 can be used when more total drive capacity should be allocated to
storage. RAID 1 is sufficient for server boot volumes.
The examples shown in this document use a storage configuration as shown in Figure 6.
In the SA120, a total of 12 hot-swap 7,200 rpm, 6Gbps SAS drives are organized into two drive groups
(DG0 and DG1), each composed of five drives in a RAID 5 configuration (4 data + 1 parity). Two
additional drives are dedicated as global hot spares for the cluster. The first drive group (DG0) is divided
into two virtual disks. The first virtual disk (JBOD VD0) is used for the Quorum Drive, and the second
(VD1) is used as a shared virtual drive for application or file data. The second drive group (DG1) is
configured as a shared single virtual drive (VD2) for application or file data. The larger virtual drives can
be further subdivided into partitions within Windows, and ownership of the virtual disks can be
designated to a particular node of the cluster during cluster setup if desired.
Each server will have a single drive group composed of two drives in a RAID 1 configuration. The drive
group will be used for the operating system and its associated partitions, and is organized into a single
virtual drive (Server VD0). This configuration provides optimum performance as well as protection
against a drive failure in this group.
Figure 6 – Drive Configuration
3.0 Solution Hardware Recommendations
Recognizing that system results are highly dependent on the specific workload, this section describes
recommended hardware for the solutions that can be used as a starting point for larger or more feature-rich configurations.
3.1 ThinkServer Systems
Enterprise-class Lenovo ThinkServer systems are an ideal choice for customers seeking affordable
options that pack a punch. ThinkServer systems provide the performance, security, and reliability
needed to support any workload. The servers feature balanced designs, flexible configurations, and
expansive I/O to handle demanding deployments. Powerful new network adapter, storage controller,
and sophisticated RAID choices increase scalability, reliability, and I/O capacity to handle growing
requirements for large and compute-intensive, scale-out applications. With attractive price points,
built-in redundancy, high reliability components, and sophisticated cooling technology, enterprise-class
ThinkServer systems deliver outstanding value.
For the highly available storage cluster, Lenovo recommends two ThinkServer RD340 dual-CPU servers
connected to the ThinkServer SA120 JBOD for shared storage. A typical configuration for each of the
ThinkServer RD340 systems includes:
• Intel Xeon processors
  o Entry solutions: One 8-core CPU per node
  o Large-capacity solutions: Two 8-core CPUs per node
• Memory
  o Entry solutions: 32GB memory
  o Large-capacity solutions: 64GB memory (for large active datasets, e.g. greater than 1GBps throughput)
• LSI Syncro CS RAID adapter for connection to SA120 JBOD
• ThinkServer RAID 300 for the internal drives in a RAID 1 configuration for the operating system
• Two 500GB SATA HDDs for system boot drives
• Four 1 Gb Ethernet interfaces for the network connections for network resiliency and load balancing
  o One Heartbeat – 1GbE
  o Two External – 1GbE
  o One System Management – 1GbE
For the highly available application cluster, Lenovo recommends two ThinkServer RD640 systems be
used. A typical configuration for each of the RD640 servers includes:
• Intel Xeon processors
  o Entry solutions: One 8-core CPU per node
  o Large-capacity solutions: Two 8-core CPUs per node
• Memory
  o Entry solutions: 64GB memory (with 1 CPU)
  o Large-capacity solutions: 128GB memory (with 2 CPUs)
• LSI Syncro CS RAID adapter for connection to SA120 JBOD
• ThinkServer RAID 500 or ThinkServer RAID 700 controller for the internal drives in a RAID 1 configuration for the operating system
• Two 500GB SATA HDDs for system boot drives
• Four 1 Gb Ethernet interfaces for the network connections for network resiliency and load balancing
  o One Heartbeat – 1GbE
  o Two External – 1GbE
  o One System Management – 1GbE
Ordering information for these typical configurations is provided in Table 1 and Table 2. Two servers are required for each solution.
Table 1 – Storage Cluster Server Configuration

Part Number    Description                                                                Quantity
70AB001XUX     RD340 (1U rack server with 4 x 3.5-inch hot-swap HDD bays)                        1
               - 1 x Intel Xeon processor E5-2440 v2 (8-core, 20MB cache, 1.9GHz, 7.2GT/s QPI)
               - 1 x 8GB DDR3L-1600MHz (2Rx8) RDIMM
               - ThinkServer RAID 300 (RAID 0, 1, 10)
               - 2 x integrated 1 Gb Ethernet
               - ThinkServer Management Module
               - Slim DVD optical
               - 1 x 550W Gold hot-swap redundant power supply
               - ThinkServer tool-less rail kit
               - Next Business Day On-site Warranty, 3 Years Parts and Labor
0C19534        ThinkServer 8GB DDR3L-1600MHz (2Rx8) RDIMM                                        3
4XB0F28655     ThinkServer Syncro CS 9286-8e 6Gb High Availability Enablement Kit by LSI         1
               - Includes two ThinkServer 1 meter external mini-SAS cables
0A89473        ThinkServer 500GB 7.2K 3.5-inch enterprise 6Gbps SATA hot-swap hard drive         2
0C19506        ThinkServer 1Gbps Ethernet I350-T2 Server Adapter by Intel (Dual Port,            1
               1Gb BASE-T)
67Y2624        ThinkServer Management Module Premium for Remote iKVM                             1
4X20E54689     550W Gold hot-swap redundant power supply                                         1
82972SM        Windows Server 2012 R2 Standard                                                   1
Table 2 – Application Cluster Server Configuration

Part Number    Description                                                                Quantity
70B10007UX     RD640 (2U rack server with 8 x 2.5-inch hot-swap HDD bays)                        1
               - 1 x Intel Xeon processor E5-2640 v2 (8-core, 20MB cache, 2.00GHz, 7.20GT/s QPI)
               - 1 x 8GB DDR3L-1600MHz (2Rx8) RDIMM
               - 1 x ThinkServer RAID 700 Adapter II (RAID 0, 1, 5, 6, 10, 50, 60)
               - 2 x integrated 1 Gb Ethernet
               - ThinkServer Management Module
               - Slim DVD R/W optical
               - 1 x 800W Gold hot-swap redundant power supply
               - ThinkServer tool-less rail kit
               - Next Business Day On-site Warranty, 3 Years Parts and Labor
0C19534        ThinkServer 8GB DDR3L-1600MHz (2Rx8) RDIMM                                        7
4XB0F28655     ThinkServer Syncro CS 9286-8e 6Gb High Availability Enablement Kit by LSI         1
               - Includes two ThinkServer 1 meter external mini-SAS cables
0C19495        ThinkServer 500GB 7.2K 2.5-inch enterprise 6Gbps SATA hot-swap hard drive         2
0C19506        ThinkServer 1Gbps Ethernet I350-T2 Server Adapter by Intel (Dual Port,            1
               1G BASE-T)
67Y2624        ThinkServer Management Module Premium for Remote iKVM                             1
4X20E54690     800W Gold hot-swap redundant power supply                                         1
4XI0E51562     Windows Server 2012 R2 Datacenter                                                 1

3.2 ThinkServer SA120 DAS Array
The ThinkServer SA120 is a 2U rack-mountable storage enclosure that provides both 2.5-inch and 3.5-inch drive bays in a single enclosure. The SA120 is unique in that twelve 3.5-inch hard disk drives (HDDs) mount in the front while four 2.5-inch drives mount in the rear of the enclosure. The rear 2.5-inch bays are reserved exclusively for optional Intel enterprise solid-state drives (SSDs), providing an optimal tiered storage platform in one dense enclosure¹. The SA120 supports direct-attached 6Gbps SAS connectivity and integrates seamlessly with ThinkServer rack and tower models via supported ThinkServer LSI SAS and RAID adapters. The SA120 features hot-swap disk drives, SAS Input/Output Controller Cards (IOCCs), and redundant fans and power supplies. Drives and power supplies are common with other ThinkServer systems and can be shared, increasing convenience and reducing overall costs.
A typical configuration for the SA120 includes:
• Two IOCCs with dual SAS connections per controller
• Twelve 1 TB 7,200 rpm SAS 3.5-inch HDDs
Table 3 provides ordering information for the SA120 typical configuration.
Table 3 – SA120 Configuration

Part Number    Description                                                                Quantity
70F10001UX     SA120 (2U rack-mountable disk array with 12 x 3.5-inch hot-swap HDD bays)         1
               - Dual ThinkServer Storage Array I/O Module (6 Gbps)
               - Dual redundant 550W PSUs
               - Two ThinkServer 1 meter external mini-SAS cables
               - ThinkServer static rail kit
               - Next Business Day On-site Warranty, 3 Years Parts and Labor
0C19530        ThinkServer 3.5-inch 1TB 7.2K SAS 6Gbps hot-swap hard drive                      12

¹ The 2.5-inch SSD drives are not supported with the Syncro solutions.

3.3 Scaling the Solution
The servers and SA120 hardware can scale to optimize for cost and performance requirements. The
factors most likely to be modified to scale the solution include:
• Increase processing bandwidth for auxiliary processes (e.g. anti-virus, deduplication, backup for storage, or additional VMs for applications) by raising the performance and power rating of the processors, and increasing the amount of installed memory in each server cluster node.
• Increase network IOPs by increasing the number of NIC ports, or the bandwidth of the ports, in each server cluster node.
• Expand the storage array capacity by adding more, or higher capacity, drives. Capacity can also be increased by adding additional clusters (servers and JBOD shared storage clusters²).
• Enhance performance by adding additional drives (more spindles in a RAID virtual drive).

² The clusters described in this document are limited to two servers and one JBOD.
Table 4 provides recommended options to address capacity and performance requirements, and enable
connectivity to various Ethernet networks.
Table 4 – Server Expansion Options

Option      Description                                                                 Part Number
Memory      ThinkServer 4GB DDR3-1866MHz (1Rx8) RDIMM                                   4X70F28585
            ThinkServer 8GB DDR3-1866MHz (1Rx4) RDIMM                                   4X70F28586
            ThinkServer 16GB DDR3-1866MHz (2Rx4) RDIMM                                  4X70F28587
            ThinkServer 4GB DDR3L-1600MHz (1Rx8) RDIMM                                  0C19533
            ThinkServer 8GB DDR3L-1600MHz (2Rx8) RDIMM                                  0C19534
            ThinkServer 16GB DDR3L-1600MHz (2Rx4) RDIMM                                 0C19535
HDDs        ThinkServer 500GB 7.2K 3.5-inch Enterprise 6Gbps SATA Hot Swap Hard Drive   0A89473
            ThinkServer 1TB 7.2K 3.5-inch Enterprise 6Gbps SATA Hot Swap Hard Drive     0A89474
            ThinkServer 2TB 7.2K 3.5-inch Enterprise 6Gbps SATA Hot Swap Hard Drive     0A89475
            ThinkServer 3TB 7.2K 3.5-inch Enterprise 6Gbps SATA Hot Swap Hard Drive     0A89477
            ThinkServer 3.5-inch 4TB 7.2K Enterprise SATA 6Gbps Hot Swap Hard Drive     0C19520
            ThinkServer 3.5-inch 300GB 15K SAS 6Gbps Hot Swap Hard Drive                67Y2616
            ThinkServer 3.5-inch 600GB 15K SAS 6Gbps Hot Swap Hard Drive                4XB0F28644
            ThinkServer 3.5-inch 1TB 7.2K SAS 6Gbps Hot Swap Hard Drive                 0C19530
            ThinkServer 3.5-inch 2TB 7.2K SAS 6Gbps Hot Swap Hard Drive                 0C19531
            ThinkServer 3.5-inch 3TB 7.2K SAS 6Gbps Hot Swap Hard Drive                 0C19532
Network     ThinkServer 1Gbps Ethernet I350-T2 Server Adapter by Intel                  0C19506
Adapters    ThinkServer 1Gbps Ethernet I350-T4 Server Adapter by Intel                  0C19507
            Lenovo 10Gbps Ethernet X520-SR2 Server Adapter by Intel                     0C19487
            Lenovo 10Gbps Ethernet X520-DA2 Server Adapter by Intel                     0C19486
            Lenovo 10Gbps Ethernet X540-T2 Server Adapter by Intel                      0C19497
            Lenovo 10Gbps Ethernet Fibre Module by Intel                                0C19488
Table 5 – SA120 Expansion Options

Option      Description                                                    Part Number
HDDs        ThinkServer 3.5-inch 1TB 7.2K SAS 6Gbps Hot Swap Hard Drive    0C19530
            ThinkServer 3.5-inch 2TB 7.2K SAS 6Gbps Hot Swap Hard Drive    0C19531
            ThinkServer 3.5-inch 3TB 7.2K SAS 6Gbps Hot Swap Hard Drive    0C19532
            ThinkServer 3.5-inch 4TB 7.2K SAS 6Gbps Hot Swap Hard Drive    4XB0F28635
Cables      ThinkServer 1 meter External mini-SAS cable                    4X90F31495
            ThinkServer 2 meters External mini-SAS cable                   4X90F31496
            ThinkServer 4 meters External mini-SAS cable                   4X90F31497
            ThinkServer 6 meters External mini-SAS cable                   4X90F31498

4.0 Configuration Guide
This section explains how to set up the hardware components and configure the high availability cluster.
The basic steps are as follows:
1. Configure the hardware: ensure that firmware is up to date and that hardware settings are configured.
2. Install and make physical connections to the hardware.
3. Configure the drive groups and the virtual drives on each server and the SA120:
   • Configure the internal RAID and virtual drive for the ThinkServer OS boot drive
   • Configure the shared virtual drives in the SA120 with Syncro
4. Install and configure Windows Server 2012 R2 on both servers in the cluster
5. Install and configure the cluster feature on both servers.
6. Enable high-availability services for the storage cluster or the application cluster
7. Test the failover cluster
4.1 Pre-installation tasks
To prepare for installation of Windows Server 2012 R2, ensure the following tasks are completed:
1. Select and install the desired server storage and network connectivity options. Recommended
options are listed in Table 4, page 16.
2. Ensure that the server firmware is up to date. If necessary, update the system BIOS, ThinkServer
Management Module (TMM), and Syncro controller to the latest version. Server BIOS and TMM
updates can be installed using the ThinkServer Firmware Updater tool, available at
http://www.lenovo.com/support.
3. Configure BIOS settings including:
a. System date and time
b. Boot devices and boot order
c. TMM management interfaces
4.2 Making the Physical Connections
Hardware connections should be made as follows:
4.2.1 Storage Connections
Figure 7 shows the SAS cable connections from two ThinkServer nodes and a single SA120 enclosure.
Dual connections to the controllers provide redundant paths that safeguard against cable or controller
failure.
Figure 7 – SAS Connections
Table 6 summarizes the connections.
Table 6 – SAS Connections – Point to Point

Server Connection                     SA120 Connection
Server A – Syncro Top Connector       I/O Module 1 – A
Server A – Syncro Bottom Connector    I/O Module 2 – A
Server B – Syncro Top Connector       I/O Module 1 – B
Server B – Syncro Bottom Connector    I/O Module 2 – B

4.2.2 Network Connections
The servers have three integrated 1 GbE ports (one can be shared with, or dedicated to the TMM for
system management), and the server can support additional 1 GbE or 10 GbE ports with optional
Ethernet adapters. In the basic configuration, a two-port Ethernet adapter is used to connect to the
public local area network for access to the failover cluster.
In a basic configuration, connections to the public network are made with ports 1 and 2 (connection A1
and A2 in Figure 8) to redundant switches. When additional optional Ethernet adapters are used (in 2U
servers), additional aggregate bandwidth topologies are possible.
A private heartbeat network connection is required for the cluster, and it attaches to an isolated
network segment that is shared among the failover cluster nodes. There should be no other network
communication on this network segment. The most typical connection type for the heartbeat segment
between the nodes of a two-node failover cluster is a crossover network cable. This method is used in
this document (connection B in Figure 8). If you connect to the LAN infrastructure, the network segment
must be isolated.
The management port is typically connected to a separate Ethernet switch or VLAN dedicated to
management traffic (connection C in Figure 8).
Figure 8 – Network Connections
4.3 Creating Virtual Drives on Each Server Node
Before attempting to install the operating system on the server, the internal RAID subsystem on each
server must be configured. This can be accomplished by either using the EasyStartup configuration tool
to preconfigure the RAID subsystem and install the operating system, or it can be done manually.
Manual configuration can be done using either the pre-boot WebBIOS Configuration Utility or the
MegaRAID CLI interface, which is suitable for scripting. The WebBIOS Configuration Utility allows the
creation, management, and deletion of RAID arrays from the available physical drives attached to the
RAID adapter. If RAID volumes have already been configured, the Configuration Utility does not
automatically change their configuration.
To configure the internal server RAID subsystem:
1. Enter WebBIOS during system POST
2. Create a new RAID configuration where a drive group is created with the available HDDs, and a
RAID 1 virtual drive is created from the drive group.
3. Ensure that the new virtual drive is set as the boot drive.
Figure 9 shows the completed configuration in WebBIOS.
Figure 9 – Configure Server Virtual Drive
4.4 Operating System Installation and Configuration
Windows Server 2012 R2 can be installed manually, or by using EasyStartup. Both nodes should be
running the same version of the operating system, and be updated to the same level.
Configure basic OS settings including networking and other features before creating the failover cluster.
4.4.1 Install OS and Perform Initial Configuration
To install the OS manually, complete the following steps:
1. Depending on your server configuration, attach an external CD/DVD reader device.
2. Install the OS from the media and follow the prompts, completing the installation as directed by
the installation routine.
After the OS is successfully installed, log on to the system using the local administrator password
created during the installation process. After logging in, the Server Manager is displayed (see
Figure 10).
Figure 10 – Server Manager
3. In Server Manager, select Local Server to perform basic system configuration:
   • Change the Computer Name for each node. In our example, we use:
     Server node 1: csnode1
     Server node 2: csnode2
   • Configure System Date and Time / Time Zone
   • If desired, enable and configure Remote Desktop
   • Enable Remote Management (remote management of this server from other servers)
   • Ensure that all required hardware device drivers are installed and updated to the latest levels. In particular, the Syncro device driver should be at the current level. Use Device Manager to update the drivers as shown in Figure 11.
Figure 11 – Update Device Drivers
   • Configure and install Windows Updates
   • Add each server node to the same Active Directory Domain – a reboot will be required. Future logins to the servers should use the domain account.
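If you prefer to script these initial settings, the rename and domain join can be done in one step from an elevated PowerShell prompt. This is a minimal sketch using the example node name from this guide; the domain name shown is a placeholder for your own Active Directory domain.

    # Placeholder domain; use csnode1 or csnode2 as appropriate. The server reboots when the join completes.
    Add-Computer -DomainName 'corp.example.com' -NewName 'csnode1' -Credential (Get-Credential) -Restart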
4.4.2 Enabling Clustered RAID Controller Support
Support for clustered RAID controllers is not enabled by default in Microsoft Windows Server 2012. To
enable support for this feature, perform the following steps:
1. Open Registry Editor (regedit.exe).
2. Locate and then create the following registry subkey:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters
3. Right-click on the Parameters key and then choose New.
4. Select DWORD (32-bit) Value and give it the name AllowBusTypeRAID.
5. Once the value is created, set its data to 0x01.
Figure 12 – Clustered RAID Registry Key
6. Exit the Registry editor.
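The same registry change can be scripted from an elevated PowerShell prompt; apply it on each cluster node. A minimal sketch:

    # Create the Parameters key if it does not already exist, then add the AllowBusTypeRAID DWORD value.
    $key = 'HKLM:\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters'
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    New-ItemProperty -Path $key -Name 'AllowBusTypeRAID' -PropertyType DWord -Value 1 -Force | Out-Null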
4.5 Configure Networking
Naming the network connections will simplify management of the failover cluster. In addition, some
TCP/IP settings for the failover cluster must be configured exactly, while some permit choices to match
your network configuration. Specific recommendations are provided in this section and are shown in
Figure 13.
Figure 13 – Network Connections
4.5.1 Public Network
The public network attaches to the local area network for client access of the cluster. This is the
network that clients will use to access the failover cluster. Multiple ports on the same network enable
load balancing and redundancy.
The public network ports (named Public-1 and Public-2) can use either statically assigned TCP/IP
settings, or the default settings provided through the Dynamic Host Configuration Protocol (DHCP).
DHCP is the preferred method of assigning the addresses for the public interfaces, because this will
simplify the configuration of the cluster on the network. Use DHCP-assigned addresses for the physical
network adapter’s IP address, as well as all virtual IP addresses assigned to virtual servers configured
within the failover cluster if configuring an application cluster solution. No additional network
configuration is typically required if DHCP assignment is used, except for setting a reservation in the
DHCP scope if you want the cluster to have a consistent address.
4.5.2 Heartbeat Network
The Heartbeat network (named Heartbeat-1) is used only for the heartbeat communication between
failover cluster nodes, so most network services for this interface can be disabled. To modify network
connection properties:
1. Open Network Connections. Click Start, right-click Network, and then click Properties. In Network and Sharing Center, click Change adapter settings.
2. Open Properties for a network connection. Right-click the network connection and then click Properties.
Figure 14 – Heartbeat Network Settings
3. Uncheck the following unnecessary network features:
   • Client for Microsoft Networks
   • File and Printer Sharing for Microsoft Networks
   • QoS Packet Scheduler
   • Internet Protocol Version 6 (TCP/IPv6)
4. Double-click Internet Protocol Version 4 (TCP/IPv4) to modify its properties.
   • It is common to use an address range of 10.x.x.x for the private heartbeat network. Enter a different IP address for each server. A default gateway and DNS server are not necessary when using a crossover cable for this network and do not need to be entered.
Figure 15 – Heartbeat Network IP Address
   • Click the DNS tab. Uncheck Register this connection's addresses in DNS.
   • Click the WINS tab. Uncheck Enable LMHOSTS lookup and select Disable NetBIOS over TCP/IP.
5. Click OK to save the changes.
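The heartbeat addressing can also be applied from PowerShell. The sketch below assumes the connection has been renamed Heartbeat-1 as in Figure 13 and uses the example address range from this guide; use a different host address on the second node.

    # Static address for the private heartbeat network; no default gateway or DNS servers are configured.
    New-NetIPAddress -InterfaceAlias 'Heartbeat-1' -IPAddress 10.10.10.1 -PrefixLength 24
    # Do not register this connection in DNS, and unbind IPv6 as described above.
    Set-DnsClient -InterfaceAlias 'Heartbeat-1' -RegisterThisConnectionsAddress $false
    Disable-NetAdapterBinding -Name 'Heartbeat-1' -ComponentID ms_tcpip6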
4.5.3 Configure NIC Teaming
In order to use more than one Ethernet port together in the cluster, the adapters need to be teamed
prior to the creation of the cluster. In Windows Server 2012 R2, NICs can be teamed via software from
the NIC manufacturer (such as Intel), or through the built-in load balancing and failover option (LBFO)
within Windows Server 2012.
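If the built-in Windows LBFO teaming is chosen instead of the Intel utility, a single PowerShell cmdlet creates an equivalent team. This sketch assumes the two public ports were renamed Public-1 and Public-2 as in section 4.5:

    # Switch-independent team with dynamic load balancing; no special switch configuration is required.
    New-NetLbfoTeam -Name 'Public-Team' -TeamMembers 'Public-1','Public-2' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic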
To configure Teaming using the Intel software for the NIC shown in the base configuration, complete the
following steps:
1. Open Network Connections from the Control Panel, right-click the first adapter to be used in the NIC Team, and select Properties. Then click Configure. Click the Teaming tab and check the option to Team this adapter with other adapters. Then click the New Team button.
Figure 16 – NIC Teaming Control Panel
2. Create a name for the team. In this example, Public-Team is used. Click Next to continue.
Figure 17 – New Team Wizard (Name the Team)
3. Select the network adapter ports to be teamed. In the figure below, the Intel I350 network adapter ports are selected.
Figure 18 – New Team Wizard (Select Adapters)
4. Choose the type of teaming method. In the figure below, Adaptive Load Balancing is selected. This allows for both load balancing and fault tolerance on the port team. No special switch configuration is needed to use this mode.
Figure 19 – New Team Wizard (Select Teaming Method)
5. Click Finish to complete the New Team Wizard.
Figure 20 – New Team Wizard (Completed)
6. A new network adapter is created that represents the NIC Team, and the connections that make up the team are added to the available network connections.
Figure 21 – Teamed Network Connection
4.6 Create Virtual Drives on the SA120
Before the drives in the SA120 can be used, they must be configured into drive groups, which hold one
or more divisions known as virtual drives. The virtual drive will be assigned a RAID level, which is seen
by the host computer system as a single drive volume.
The high-availability cluster configuration requires that virtual disks used for storage must be shared;
otherwise, they are only visible to the controller node that created them. A minimum of one shared
virtual disk is required for use as a quorum disk to enable the operating system's failover cluster support.
This section explains how to configure the virtual disks using the WebBIOS pre-boot utility. This
procedure will configure the virtual drives as shown in section 2.4, Storage Array Design, on page 11.
To coordinate the configuration of the two controller nodes, both nodes must be booted into the
WebBIOS pre-boot utility simultaneously. After powering on the two nodes in the cluster, rapidly access
both consoles. One of the systems is used to create the virtual drives while the other system simply
remains in the pre-boot utility. This approach keeps the second system in a state that does not fail over
while the virtual drives are being created on the first system.
1. Simultaneously power on both servers.
2. On each system, when prompted during the POST, type CTRL-H for the Syncro controller to
access the WebBIOS pre-boot BIOS utility. Wait until both systems are running the WebBIOS
utility, and then proceed to the next step.
Figure 22 – Prompt to Enter Syncro WebBIOS
3. Select the LSI Syncro card from the menu if more than one LSI adapter is present. Click Start.
Figure 23 – WebBIOS Adapter Selection
4. On the WebBIOS main page, click Configuration Wizard, as shown in Figure 24.
Figure 24 – WebBIOS Main Page
5. The Configuration Wizard appears. Select New Configuration and click Next.
6. On the Select Configuration screen, select Virtual Drive Configuration and press Next.
Figure 25 – Select Configuration
7. On the Select Configuration Method screen, select Manual Configuration and click Next.
Figure 26 – Select Configuration Method
8. The Drive Group Definition screen appears. In the Drives panel on the left, select the drives to
be included in the drive group, and click Add To Array. Hold down the Ctrl key to select multiple
drives simultaneously. In this example, select drives in slots 0 through 4 for the first drive group
as shown in Figure 27.
Figure 27 – Drive Group Definition
9. After adding the drives to the drive group, click Accept DG and then click Next.
Figure 28 – Accept Drive Group 0 Definition
10. On the Span Definition screen, select the drive group just created and click Add to SPAN, then
click Next.
Figure 29 – Span Definition
11. On the Virtual Drive Definition screen, we will create the virtual drives as described in section
2.4, Storage Array Design on page 11. In this first drive group, we will create a virtual drive for
the Quorum and the remaining space will be used for shared data.
The quorum disk must be at least 50MB, but it does not require more than 1GB of space. In this
example, we recommend that 500MB be allocated, as shown in Figure 30.
Ensure that the Provide Shared Access checkbox is selected.
Figure 30 – Virtual Drive Definition for Quorum
The Provide Shared Access option enables a shared virtual drive that both controller nodes can
access. If this option is deselected, the virtual drive will be available exclusively for the node
that creates it.
After all settings have been configured, click Accept, and then click Next.
12. On the Confirmation Page, select Yes to confirm usage of Write Back with BBU mode.
13. Click Back to return to the Virtual Drive Definition page to create the second virtual drive in the
drive group for shared data. Settings for this virtual drive are shown in Figure 31. To use the
remaining space available, click Update Size to quickly enter the value in the Select Size field.
Figure 31 – Virtual Drive Definition for Shared Data
14. Repeat the previous steps to create the other drive groups and virtual drives as desired. As the
virtual drives are configured on the first controller node, the other controller node’s drive listing
is updated to reflect the use of the drives.
Figure 32 – Drive Group 1 Definition
15. When prompted, click Yes to save the configuration, and click Yes to confirm that you want to
initialize it.
16. Define hot-spare disks for the virtual drives to maximize the level of data protection. Syncro
supports global hot spares and dedicated hot spares. Global hot spares are global for the
cluster, not just for a controller.
Select Drives from the main menu and select the drives to configure as spares. Select
Properties, then press Go.
Figure 33 – Select Drive for Hot Spare
17. Select Make Global HSP and click Go.
Figure 34 – Configure Hot Spare
18. When configuration is complete, the drive groups, virtual drives, and hot spares can be viewed from the main
screen as shown in Figure 35.
Figure 35 – Syncro Configuration Logical View
19. When all virtual drives and spares are configured, exit WebBIOS, and reboot both systems.
4.7 Expose Storage to the Cluster Nodes
Before the failover cluster is created, verify that all cluster servers can see the shared disks.
1. To verify from one console that all servers can see the shared disks, make sure that you add all computers that you want to add as cluster nodes to Server Manager.
2. In Server Manager, click File and Storage Services, and then under Volumes, click Disks.
3. Under each server, verify that the shared disks are listed.
4. All shared disks must be formatted with one or more NTFS volumes.
5. One of the shared disks is used as the quorum disk. It must be formatted with an NTFS volume.
Storage can be configured for use with the cluster from either the Server Manager, or the Disk
Management plugin in Computer Management. This section demonstrates using Server Manager.
1. On one of the server nodes, open Server Manager. Select File and Storage Services, and then
Disks. Available disks will appear as unknown and online. Figure 36 shows the following drives:
Drive 0: Server Boot Drive (Windows Server 2012) – should
not be used for cluster storage
Drive 1: 500MB – for Quorum
Drive 2: 7.27TB – for Shared Data
Drive 3: 7.28TB – for Shared Data
Figure 36 – Server manager – Disks
2. For each disk to prepare, select the Disk, right click, and select New Volume from the context
menu.
Figure 37 – Create Volume
3. The New Volume Wizard will appear. Select the Disk to use to create the volume. If both server
nodes are known to the system, the volume can be configured to be controlled by that node.
Click Next.
Figure 38 – Select Server and Disk
4. Specify the volume size. In this example, allocate all available capacity for the Volume. Click
Next.
Figure 39 – Specify the Volume Size
5. Assign a drive letter to the volume. The drive letters will not necessarily be the same on every
node of the cluster. In Figure 40, the drive letter is assigned as Q to indicate the Quorum drive.
Figure 40 – Assign Drive Letter
6. Select the Format options as shown in Figure 41. Name the volume to match its intended
purpose. In this example we use the following volume names:
Quorum Drive: Quorum
Shared VD0: VD-0
Shared VD1: VD-1
Figure 41 – File System Settings
7. Confirm the settings and click Create.
Figure 42 – Confirm New Volume Settings
8. Verify that each server recognizes the disks by viewing the disks in Server Manager or in Disk
Management.
Figure 43 – Shared Storage in Server Manager
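As an alternative to the New Volume wizard, the shared disks can be prepared from PowerShell on the node that will own them. The disk number, drive letter, and label below are examples matching this section; confirm the disk numbers with Get-Disk before formatting.

    # List raw (uninitialized) disks, then initialize and format the quorum disk as NTFS.
    Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Format-Table Number, FriendlyName, Size
    Initialize-Disk -Number 1 -PartitionStyle GPT
    New-Partition -DiskNumber 1 -DriveLetter Q -UseMaximumSize
    Format-Volume -DriveLetter Q -FileSystem NTFS -NewFileSystemLabel 'Quorum' -Confirm:$false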
4.8 Creating the Cluster
The following section describes how to configure and validate the failover cluster in Windows Server
2012 R2.
4.8.1 Installing the Failover Clustering Feature
The Microsoft Windows Server 2012 R2 operating system installation does not enable the failover clustering feature by default. Follow these steps to enable it.
1. Launch the Server Manager dashboard and click Add roles and features.
Figure 44 – Server Manager Dashboard
2. If the Before you Begin box appears, click Next. Select Role-based or feature-based installation.
Figure 45 – Add Roles and Features Wizard
3. In the Select Destination Server box, select the local server and click Next.
Figure 46 – Select Destination Server
4. On the Select Server Roles screen, Click Next.
5. On the Select Features screen, select the Failover Clustering checkbox. Click Next.
Figure 47 – Select Features
6. Confirm the selection and click Install.
Figure 48 – Confirm Installation Selections
7. Close the Installation Wizard when the installation has completed.
Figure 49 – Installation Progress
8. Repeat these steps on the other server that will form the cluster.
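The feature can also be installed from PowerShell on each node, which avoids stepping through the wizard twice:

    # Install Failover Clustering along with Failover Cluster Manager and the cluster PowerShell cmdlets.
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools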
4.8.2 Validating the Failover Cluster Configuration
Microsoft recommends that the configuration be validated before the cluster is formed. Validation
verifies that network, storage, and system configuration requirements are met and that the nodes can
form an effective cluster. To do this, run the Validate a Configuration wizard. The tests in the validation
wizard include simulations of cluster actions and inspect the following aspects of the system:
• System – These tests analyze whether the two server nodes meet specific requirements, such as running the same version of the operating system with the same software updates.
• Network – These tests analyze whether the planned cluster networks meet specific requirements, such as requirements for network redundancy.
• Storage – These tests analyze whether the storage meets specific requirements, such as whether the storage correctly supports the required SCSI commands and handles simulated cluster actions correctly.
To validate the configuration, perform the following steps:
1. Launch the Failover Cluster Manager tool from Server Manager: Select Server Manager > Tools >
Failover Cluster Manager.
Figure 50 – Failover Cluster Manager
2. In the actions pane, click Validate Configuration. The Validate a Configuration Wizard starts.
3. In the Select Servers screen, enter the name of each server to be added to the cluster. Click Add
after each name is entered. After all nodes are listed, click Next.
Figure 51 – Select Cluster Servers
4. Select Run all tests, and click Next.
Figure 52 – Cluster Validation Test Options
5. Confirm the tests to run then click Next to begin.
Figure 53 – Cluster Validation Confirmation
6. When the tests complete, a summary of the results will be provided. The detailed results can be
viewed by clicking View Report.
Deselect Create the cluster now using the validated nodes… and click Finish.
Figure 54 – Cluster Validation Summary Report
If any of the validation tests fails or results in a warning, you should review the validation report
and resolve the issues before creating the cluster. Be sure to run the Validate a Configuration
Wizard again to verify that all issues have been resolved.
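The same validation can be run from PowerShell using the example node names from this guide; an HTML report corresponding to the wizard's View Report output is generated.

    # Run the full validation test suite against both prospective cluster nodes.
    Test-Cluster -Node csnode1, csnode2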
4.8.3 Creating the Failover Cluster
After successfully completing the cluster validation, create the Failover Cluster by performing the
following steps:
1. Launch the Failover Cluster Manager tool.
Figure 55 – Failover Cluster Manager
2. In the actions pane, click Create Cluster.... The Create Cluster Wizard starts.
3. In the Select Servers screen, enter the name of each server to be added to the cluster. Click Add
after each name is entered. After all nodes are listed, click Next.
Figure 56 – Select Servers
4. Enter the name that you want to assign to the cluster in the cluster name field. If the wizard
requests that an IP address be entered for the cluster, deselect all networks – the networks will
be configured later in section 4.8.4, “Set Cluster Network Properties,” page 49. Click Next.
Figure 57 – Cluster Name and IP Address
5. A confirmation page containing the cluster properties appears. If no other changes are
required, you have the option to specify available storage by selecting the Add all eligible
Storage to the cluster check box. Deselect this box – the storage will be added to the cluster
later in section 4.8.5, “Add Disks to the Cluster,” page 51. Click Next.
Figure 58 – Create Cluster Confirmation
6. After the cluster is created, a cluster creation report summary appears. This report includes any
errors or warnings encountered. Click on the View Report… button for additional details about
the report. In this case, a warning is generated because no storage has been added to the
cluster and, as a result, a quorum drive has not yet been configured. The Quorum drive will be
configured in section 4.8.6, “Create the Quorum Drive,” page 52. Click Finish.
Figure 59 – Create Cluster Summary
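A PowerShell sketch of the same operation is shown below. The cluster name is a placeholder for the name chosen in step 4, and -NoStorage mirrors the deselected storage check box; storage and the quorum are added in the following sections.

    # Create the failover cluster from the two validated nodes without adding storage yet.
    New-Cluster -Name 'ha-cluster' -Node csnode1, csnode2 -NoStorage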
4.8.4 Set Cluster Network Properties
After the failover cluster has been created, configure the network usage in Failover Cluster Manager.
This step tells the cluster which network connections are used by the cluster, and which are available for
network access by clients. To configure network connections in the failover cluster, perform the
following steps:
1. Open Failover Cluster Manager. Expand the cluster, and expand the Networks node.
Figure 60 – Cluster Networks
2. Select a network, and select Properties from the Action panel.
3. Under Name, type the corresponding network name for the connection. This should match the
network connection naming convention created earlier (see section 4.5, “Configure
Networking,” page 23).
4. Click the appropriate network options for the connection. Refer to Table 7 below for the
information used in this example.
Note that by default, only networks configured with a default gateway will be set automatically to
Allow Clients to connect through this network. The network connections you create for the public
network (that is, the connections clients use to connect to the cluster) will have a default gateway
address whether you statically assign the addresses or use DHCP. The isolated network segments
used for the heartbeat communication do not have default gateways assigned. When the failover
cluster is created, the wizard should correctly configure these networks based on the addressing
used.
Table 7 – Cluster Network Settings

Network Name                    Cluster Use                 IP address       Allow cluster network         Allow clients to connect
                                                                             communications on this        through this network
                                                                             network
Cluster Network 1 – Public      Client Access and Cluster   172.16.0.#/21    Yes                           Yes
Cluster Network 2 – Mgmt        None                        172.16.5.#/24    No                            No
Cluster Network 3 – Heartbeat   Cluster Only                10.10.10.#/24    Yes                           No
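The same names and roles can be applied from PowerShell with the Get-ClusterNetwork cmdlet. In the sketch below, the Role values correspond to the Cluster Use column in Table 7 (0 = None, 1 = Cluster Only, 3 = Client Access and Cluster); the default network names and the new short names are assumptions that should be adjusted to match your naming convention.

# Rename the cluster networks and set how the cluster may use them
(Get-ClusterNetwork "Cluster Network 1").Name = "Public"
(Get-ClusterNetwork "Public").Role = 3        # cluster and client traffic
(Get-ClusterNetwork "Cluster Network 2").Name = "Mgmt"
(Get-ClusterNetwork "Mgmt").Role = 0          # not used by the cluster
(Get-ClusterNetwork "Cluster Network 3").Name = "Heartbeat"
(Get-ClusterNetwork "Heartbeat").Role = 1     # cluster-only (heartbeat) traffic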
4.8.5 Add Disks to the Cluster
Storage that was previously created and exposed to the cluster nodes must now be made available for
the cluster to use.
1. Open Failover Cluster Manager. Expand the cluster, and expand the Disks node.
Figure 61 – Failover Cluster Disks
2. In the actions panel, click Add Disk.
3. Select the disk or disks to add, and click OK. The selected disks are brought online.
Figure 62 – Add Disks to Cluster
4. The disks added to the cluster appear in the Failover Cluster Manager. These disks will be used to create the Quorum drive, as well as shared storage for the failover cluster.
Figure 63 – Cluster Disks
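The eligible disks can also be added from PowerShell. A minimal sketch, assuming every disk visible to all nodes should be clustered:

# Add every disk that is eligible for clustering
Get-ClusterAvailableDisk | Add-ClusterDisk
# List the cluster disks to confirm they are online
Get-ClusterResource | Where-Object ResourceType -eq "Physical Disk"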
4.8.6 Create the Quorum Drive
The Quorum drive is required for the cluster to function correctly. To configure or change the Quorum
settings, perform the following steps:
1. Open Failover Cluster Manager, and select the cluster. With the cluster selected, under Actions,
click More Actions, and then click Configure Cluster Quorum Settings. The Configure Cluster
Quorum Wizard appears. Click Next.
Figure 64 – Configure Quorum
2. On the Select Quorum Configuration Option page, select Select the Quorum Witness. Click Next.
Figure 65 – Select Quorum Configuration Options
3. On the Select Quorum Witness page, select the option to configure a disk witness, and then click
Next.
Figure 66 – Select Quorum Witness
4. On the Configure Storage Witness page, select the storage volume that you want to assign as
the disk witness, and then click Next.
Figure 67 – Configure Storage Witness
5. Confirm your selections on the confirmation page that appears, and then click Next.
Figure 68 – Confirm Cluster Quorum Settings
6. After the wizard runs and the Summary page appears, if you want to view a report of the tasks
that the wizard performed, click View Report. Click Next to exit the wizard.
Figure 69 – Configure Cluster Quorum Summary Report
7. After completion of the wizard, the quorum witness will be listed in the Failover Cluster
Manager.
Figure 70 – Quorum in Failover Cluster Manager
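The disk witness can also be assigned from PowerShell with Set-ClusterQuorum. This is a sketch only; "Cluster Disk 1" is a placeholder for whichever cluster disk you chose as the witness.

# Use a node-and-disk-majority quorum with the chosen disk as witness (placeholder disk name)
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
# Verify the resulting quorum configuration
Get-ClusterQuorum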
5.0 Creating the Highly Available Storage Cluster
This section provides steps to configure and deploy the failover cluster as a high-availability file server.
Two types of file servers can be created.
The first is a File Server for General Use, which provides file shares to users and applications that open and close files frequently. It supports the NFS and SMB protocols and provides for Data Deduplication, DFS Replication, and other File Services roles, but it cannot use a Cluster Shared Volume (CSV) for storage.
The second is a Scale-Out File Server for Application Data that provides storage to server applications or
Hyper-V VMs that leave files open for extended periods. This server type supports SMB, but not NFS,
nor does it support the file services that the File Server for General Use provides. A Scale-Out File Server
uses a CSV for storage.
In this section, a High Availability File Server for General Use is created.
5.1 Install the File Services Role
The File Services role should already be installed on the nodes of the failover cluster. If it is not, or if you
want to verify that the role is installed, use the following steps.
1. Open Server Manager. Click Start, click Administrative Tools, and then click Server Manager.
2. Click the Roles node, and then click Add Roles. Click Next.
3. Click the File Services checkbox, if it is not already selected, and then click Next.
4. Click all the appropriate Role Services for your cluster to provide (such as DFS, FSRM, NFS, and so on), and then click Next.
5. Click Install.
6. When the wizard completes, click Close. Repeat these steps on each node of the cluster.
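If you prefer PowerShell, the role and its role services can be installed with Install-WindowsFeature. The sketch below assumes the standard Windows Server 2012 feature names; run Get-WindowsFeature FS-* on your system to confirm the exact names before using them.

# Install the file server role plus optional role services on the local node (assumed feature names)
Install-WindowsFeature -Name FS-FileServer, FS-Resource-Manager, FS-DFS-Namespace, FS-NFS-Service -IncludeManagementTools
# Repeat on the other cluster node, or target it directly, for example:
# Install-WindowsFeature -Name FS-FileServer -ComputerName CS-Node2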
5.2 Create the Highly Available File Server
1. Open Failover Cluster Manager. Expand the cluster, and select the Roles node. In the Actions
panel, click Configure Role.
Figure 71 – Configure Role
2. The High Availability Wizard starts. Click Next.
3. Click File Server from the list of available roles, and then click Next.
Figure 72 – Select Role
4. Select File Server for General Use and click Next.
Figure 73 – File Server Type
5. Enter the name of the file server (CS-Cluster-FS1 in this example). If you are prompted to specify
the networks to use, you should uncheck all statically assigned networks, because they
represent isolated networks that clients cannot access. Click Next.
Figure 74 – Client Access Point
6. Select one of the available disks to allocate to the file server cluster, and then click Next.
Figure 75 – Select Storage
7. Click Next to confirm the operation.
Figure 76 – Confirm File Server Settings
8. After the file server configuration has finished, the server and its assigned storage will be visible in Failover Cluster Manager.
Figure 77 – File Server in Failover Cluster Manager
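The clustered file server role can also be created from PowerShell. A minimal sketch; "Cluster Disk 2" and the static address are placeholders for the cluster disk and public-network address used in your environment.

# Create a clustered File Server for General Use named CS-Cluster-FS1 (placeholder storage and IP)
Add-ClusterFileServerRole -Name CS-Cluster-FS1 -Storage "Cluster Disk 2" -StaticAddress 172.16.0.50
# Confirm the role came online
Get-ClusterGroup CS-Cluster-FS1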
5.3 Create a File Share
Shared folders must be hosted by the clustered file server in order to provide failover capability. The following steps demonstrate how to create an SMB file share in the server cluster.
1. Open Failover Cluster Manager. Expand the cluster, and click on the Roles node to show the
highly available file server just created.
Figure 78 – File Share in Failover Manager
2. Select the file server to display resources. From the Actions panel, click Add File Share.
3. The New Share Wizard will appear. Follow the instructions in the wizard. These instructions will depend on the file services you selected when installing the File Services role.
In this example, we create a simple SMB share. Click Next.
Figure 79 – Create SMB Share
4. In the Share Location pane, enter a location for the file share on a disk that is available to the
cluster. Click Next.
Figure 80 – Select the Share Location
5. Enter a name for the file share in the Share Name field. The wizard displays the remote path to
the share that users of the file server will use to access their shared files. Click Next.
Figure 81 – Select Share Name
6. If the path entered does not exist, a warning is displayed with the option to create the path or to go back and correct the entry. Click OK to continue and create the share location.
Figure 82 - New Share Path Does Not Exist
7. In the Configure Share Settings panel, select Enable Continuous Availability at a minimum to
enable uninterrupted operation of the file share in the event of a system fault. Click Next.
Figure 83 – Configure Share Settings
8. Finally, specify permissions for the share. Click Next.
Figure 84 – Specify Share Permissions
9. A confirmation page appears. Click Create to create the share.
Figure 85 – Confirm Share Settings
10. At the completion of the wizard, the file share will be displayed in the file server role of the
cluster within the Failover Cluster Manager.
Figure 86 – Share in High Availability File Server
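The continuously available SMB share can also be created with New-SmbShare, run on the node that currently owns the file server role. A sketch only; the share path and the access list are placeholders for your environment.

# Create a continuously available share scoped to the clustered file server (placeholder path and ACL)
New-SmbShare -Name CS-GPFileShare -Path "E:\Shares\CS-GPFileShare" `
    -ScopeName CS-Cluster-FS1 -ContinuouslyAvailable $true `
    -FullAccess "CONTOSO\FileServerUsers"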
5.4 Mapping User Folders to the Highly Available File Server Share
Users can now access the highly available file server by manually mapping to the SMB share that was
created. The users should be directed to \\<highly available file server name>\<file share name>. In the
example above, this is:
\\CS-Cluster-FS1\CS-GPFileShare
Connecting to the file server you created in Failover Cluster Manager (instead of connecting to the
cluster name or to any of the nodes in the cluster) may not be intuitive for users. The purpose of the
highly available file server is to be online regardless of the specific server hosting the service, and so the
connection is made to the role rather than to a physical computer.
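Drives can be mapped to the share in the usual ways; for example, from PowerShell with New-SmbMapping. The drive letter below is arbitrary.

# Map a persistent drive letter to the highly available share
New-SmbMapping -LocalPath "Z:" -RemotePath "\\CS-Cluster-FS1\CS-GPFileShare" -Persistent $true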
5.5 Test a Cluster Failover
After the failover cluster has been created, and high availability roles have been configured, the cluster’s
failover ability can be tested in Failover Cluster Manager. Use the following steps to test failover by
moving a role to another node in the cluster.
1. Open Failover Cluster Manager and select the cluster. Expand the Roles node.
2. Select the role to move, in this case the high availability File Server just created.
3. Right-click the role, and click Move from the context menu. Select the cluster node to move the role to.
Figure 87 – Move Clustered Role
If the move operation completes successfully, there will be no errors or warnings and the summary view
of the service or application will update the Current Owner field to show the new node’s name.
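The same planned failover can be driven from PowerShell; the clustered role appears as a cluster group, and the destination node name below is a placeholder.

# Move the clustered file server role to the other node (placeholder node name)
Move-ClusterGroup -Name CS-Cluster-FS1 -Node CS-Node2
# Check the new owner
Get-ClusterGroup -Name CS-Cluster-FS1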
6.0 Creating the Highly Available Application Cluster
This section provides steps to configure and deploy the failover cluster with clustered Hyper-V virtual
machines. In this configuration, guest VMs are managed through Failover Cluster Manager. In a standalone Hyper-V environment, guest VMs are managed through Hyper-V Manager.
6.1 Install Hyper-V
To install the Hyper-V role perform the following steps:
1. Open Server Manager.
2. Click the Roles node, and then click Add Roles. Click Next.
3. Click the Hyper-V checkbox, if it is not already selected, and then click Next.
Figure 88 – Select Hyper-V Role
4. In the Create Virtual Switch panel, select the Public-Team network adapter to attach the virtual
switch. Click Next.
Figure 89 – Create Virtual Switch
5. In the Virtual Machine Migration panel, uncheck Allow this server to send and receive live migrations of virtual machines. Migration of VMs will be handled by the cluster. Click Next.
Figure 90 – Virtual Machine Migration
6. The Default Stores panel allows you to select the default location for virtual machine files. Accept the defaults for now. Click Next.
Figure 91 – Default Stores
7. The confirmation page is displayed. Click Install to install the Hyper-V role.
Figure 92 – Confirm Hyper-V Role Selections
8. When the wizard completes, click Close. Repeat these steps on each node of the cluster.
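The Hyper-V role can also be installed from PowerShell, as a sketch; run it on each node, and note that a restart is required.

# Install Hyper-V with its management tools and restart the node
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart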
6.2 Create a Virtual Switch
Perform this step on both physical computers if you did not create the virtual switch when you installed
the Hyper-V role. This virtual switch provides the highly available virtual machine with access to the
physical network.
1. Open Hyper-V Manager.
2. From the Actions menu, click Virtual Switch Manager.
3. Under Create virtual switch, select External.
4. Click Create Virtual Switch. The New Virtual Switch page appears.
Figure 93 – Virtual Switch Manager
5. Type a name for the new switch. Make sure you use exactly the same name on both servers
running Hyper-V.
6. Under Connection Type, click External network, and then select the physical network adapter.
7. Click OK to save the virtual network and close Virtual Switch Manager.
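The external virtual switch can also be created with New-VMSwitch. In the sketch below, the switch name is a placeholder, and Public-Team is the adapter name used in this example; use exactly the same switch name on both nodes.

# Create an external virtual switch bound to the Public-Team adapter (placeholder switch name)
New-VMSwitch -Name "Public-VSwitch" -NetAdapterName "Public-Team" -AllowManagementOS $true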
6.3 Add a Disk as CSV to Store Virtual Machine Data
To implement certain scenarios for clustered virtual machines, the virtual machine storage and virtual
hard disk file should be configured as Cluster Shared Volumes (CSV). CSV can enhance the availability
and manageability of virtual machines by enabling multiple nodes to concurrently access a single shared
storage volume. CSVs also support live migration of a Hyper-V virtual machine between nodes in a
failover cluster.
To configure a disk in clustered storage as a CSV volume, perform the following steps.
1. Open Failover Cluster Manager. Expand the cluster, and expand Storage and then click the Disks
node.
Figure 94 – Failover Cluster Manager Disks
2. Right-click a cluster disk, and then click Add to Cluster Shared Volumes.
Figure 95 – Create CSV
3. The Assigned To column changes to “Cluster Shared Volume.”
Figure 96 – CSV in Failover Cluster Manager
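A cluster disk can also be converted to a CSV from PowerShell; in this sketch, "Cluster Disk 3" is a placeholder for the disk intended to hold virtual machine storage.

# Add the cluster disk to Cluster Shared Volumes; it becomes available under C:\ClusterStorage
Add-ClusterSharedVolume -Name "Cluster Disk 3"
# List the CSVs to confirm
Get-ClusterSharedVolume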
6.4 Create a Highly Available Virtual Machine
As a best practice, a virtual machine created on a failover cluster node should be created as a highly available virtual machine. To do so, run the Hyper-V New Virtual Machine Wizard directly from Failover Cluster Manager. A virtual machine created in this way is automatically configured for high availability.
1. In Failover Cluster Manager, select or specify the cluster that you want. Ensure that the console
tree under the cluster is expanded. Click Roles.
Figure 97 – Failover Cluster Manager Roles
2. In the Actions pane, click Virtual Machines, and then click New Virtual Machine.
Figure 98 – Add HA VM
3. Select a cluster node on which to initially install the VM, and click OK.
Figure 99 – New Virtual machine node
4. The New Virtual Machine Wizard appears. Click Next.
5. On the Specify Name and Location page, specify a name for the virtual machine. In this example
we use CS-VM1. Click Store the virtual machine in a different location, and then type the full
path or click Browse and navigate to the CSV created earlier. Click Next.
Figure 100 – VM Name and Location
6. Specify the VM Generation. Click Next.
Figure 101 – VM Generation
7. On the Assign Memory page, specify the amount of memory required for the operating system
that will run on this virtual machine. In this example, specify 1024 MB. Click Next.
Figure 102 – Assign VM Memory
8. On the Configure Networking page, connect the VM to the virtual switch. You should specify the
virtual switch that you configured in section 6.1, “Install Hyper-V,” page 66. Click Next.
Figure 103 – Configure VM Networking
9. On the Connect Virtual Hard Disk page, click Create a virtual hard disk. Type the full path or click
Browse and navigate to the CSV created earlier. Click Next.
Figure 104 – Connect Virtual Hard Disk
10. On the Installation Options page, specify the location of the guest OS installation media, or defer
the installation to a later time. Click Finish.
Figure 105 – VM Installation Options
The virtual machine is created. The High Availability Wizard in Failover Cluster Manager then
automatically configures the virtual machine for high availability.
11. The High Availability VM is added to the cluster in the Failover Cluster Manager.
Figure 106 – VM in Failover Cluster Manager
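The same result can be sketched in PowerShell by creating the virtual machine on the CSV and then adding it to the cluster; the CSV path, VHD size, and switch name below are placeholders.

# Create the VM with its files and a new virtual hard disk on the CSV (placeholder paths and sizes)
New-VM -Name CS-VM1 -MemoryStartupBytes 1024MB `
    -Path "C:\ClusterStorage\Volume1" `
    -NewVHDPath "C:\ClusterStorage\Volume1\CS-VM1\CS-VM1.vhdx" -NewVHDSizeBytes 40GB `
    -SwitchName "Public-VSwitch"
# On Windows Server 2012 R2, add -Generation 2 if a Generation 2 VM is required.
# Make the VM highly available by adding it as a clustered role
Add-ClusterVirtualMachineRole -VMName CS-VM1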
6.5 Test a Planned Failover
To test a planned failover, you can move the clustered virtual machine that you created to another
node.
1. In Failover Cluster Manager, select or specify the cluster that you want. Ensure that the console
tree under the cluster is expanded.
2. To select the destination node for live migration of the clustered virtual machine, right-click CS-VM1 (the clustered virtual machine previously created), point to Move, point to Live Migration,
and then click Select Node.
As the virtual machine is moved, the status is displayed in the results pane (center pane).
3. Verify that the move succeeded by inspecting the details of each node.
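The live migration can also be started from PowerShell; the destination node name is a placeholder.

# Live-migrate the clustered VM to the other node (placeholder node name)
Move-ClusterVirtualMachineRole -Name CS-VM1 -Node CS-Node2 -MigrationType Live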
6.6 Test an Unplanned Failover
To test an unplanned failover of the clustered virtual machine, you can stop the Cluster service on the
node that owns the clustered virtual machine.
1. In Failover Cluster Manager, select or specify the cluster that you want. Ensure that the console
tree under the cluster is expanded.
2. Expand the console tree under Nodes.
3. Right-click the node that owns the VM CS-VM1 (the clustered virtual machine previously
created), point to More Actions, and then click Stop Cluster Service. The virtual machine moves
to the other node. There might be a short delay while this happens.
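The same unplanned failover can be simulated from PowerShell by stopping the Cluster service on the owning node; the node names below are placeholders.

# Stop the Cluster service on the node that currently owns CS-VM1 (placeholder name)
Stop-ClusterNode -Name CS-Node1
# The VM fails over to the surviving node; verify the new owner
Get-ClusterGroup | Where-Object Name -like "*CS-VM1*"
# Bring the stopped node back into the cluster when finished
Start-ClusterNode -Name CS-Node1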
7.0 References
For information about hardware systems used in these solutions, see the following websites:
User Guide and Hardware Maintenance Manual – ThinkServer RD340
http://download.lenovo.com/ibmdl/pub/pc/pccbbs/thinkservers/rd340_ug_en.pdf
User Guide and Hardware Maintenance Manual – ThinkServer RD640
http://download.lenovo.com/ibmdl/pub/pc/pccbbs/thinkservers/rd640ug_hmm_en.pdf
ThinkServer Management Module User Guide
http://download.lenovo.com/ibmdl/pub/pc/pccbbs/thinkservers/rd540rd640tmmug_en.pdf
LSI Syncro User Guide & Troubleshooting
http://www.lsi.com/downloads/Public/Syncro%20Shared%20Storage/docs/Syncro_CS_92868e_Solution_UG.pdf
MegaRAID SAS Software User Guide
http://download.lenovo.com/ibmdl/pub/pc/pccbbs/thinkservers/megaraid_swug_en.pdf
User Guide and Hardware Maintenance Manual – SA120
http://download.lenovo.com/ibmdl/pub/pc/pccbbs/thinkservers/sa120_hmm_ug_en.pdf
For more information about Windows Server 2012 R2 roles and features, see the following Microsoft website:
Server Roles and Technologies in Windows Server 2012
http://technet.microsoft.com/en-us/library/hh831669.aspx