Redpaper
Alex Osuna
William Luiz de Souza
IBM System Storage N series and
Microsoft Windows Server 2008 Hyper-V
Introduction
This IBM® Redpaper publication walks you through the steps required to set up Microsoft®
Windows® 2008 Hyper-V™ and failover clustering on IBM System Storage™ N series. This
configuration merges the high availability and data protection features of IBM System Storage
N series with the virtualization and clustering features of Windows 2008.
© Copyright IBM Corp. 2009. All rights reserved.
ibm.com/redbooks
Overview
Virtual infrastructures are a compelling solution to the challenges of a distributed server architecture. In recent years, nearly every company with an information systems department has begun some form of consolidation and virtualization effort, with the goal of increasing asset utilization while reducing management and infrastructure costs. The virtualization marketplace is filled with solutions from nearly every traditional vendor and a bevy of startups. However, the native storage virtualization capabilities shipped with Microsoft Hyper-V do not provide the same benefits and hardware reductions as those seen in the server space.
Many customers have experienced an increase in storage requirements after implementing their virtual infrastructure. The reasons for this increase include, but are not limited to, the requirement for a shared storage platform, inefficiencies in the multiple layers of storage virtualization, overprovisioning, and challenges with backups that can lead to inefficient disk-to-disk backup solutions.
This paper demonstrates how integrating N series technologies in a virtual infrastructure can solve the unique challenges inherent in Hyper-V deployments in the areas of storage utilization, fault tolerance, and backups. With N series virtualized storage and data management solutions, customers can make dramatic gains in these areas.
The Hyper-V role enables you to create a virtualized server computing environment using technology that is part of the Windows Server® 2008 operating system. You can use a virtualized computing environment to improve the efficiency of your computing resources by utilizing more of your hardware resources.
The failover clustering feature enables you to create and manage failover clusters. A failover
cluster is a group of independent computers that work together to increase the availability of
applications and services. The clustered servers (called nodes) are connected by physical
cables and by software. If one of the cluster nodes fails, another node begins to provide
service (a process known as failover). Users experience a minimum of disruptions in service.
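The failover behavior described above can be sketched in a few lines of Python. The class and node names here are illustrative stand-ins for the concept, not part of any Microsoft clustering API:

```python
# Minimal sketch of failover: a group of nodes hosts one resource (such as
# a virtual machine); when the owning node fails, ownership moves to a
# surviving node.

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

class FailoverCluster:
    def __init__(self, nodes):
        self.nodes = nodes
        self.owner = nodes[0]  # the first node initially hosts the resource

    def heartbeat(self):
        # If the current owner is down, fail the resource over to any
        # surviving node (the "failover" process described in the text).
        if not self.owner.healthy:
            for node in self.nodes:
                if node.healthy:
                    self.owner = node
                    break
        return self.owner

cluster = FailoverCluster([Node("node1"), Node("node2")])
cluster.nodes[0].healthy = False       # simulate a failure of node1
print(cluster.heartbeat().name)        # the resource now runs on node2
```

Users connected to the resource see only the brief interruption while ownership moves, which is the "minimum of disruptions" the text refers to.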
This paper shows you how to use these two technologies together to make a virtual machine highly available. You will do this by creating a simple two-node cluster and a virtual machine, and then failing over the virtual machine from one node to the other (Figure 1 and Figure 2).
Figure 1 Scenario before failure
Figure 2 Scenario after failure
Requirements for testing Hyper-V and failover clustering
To test the Hyper-V role on a failover cluster with two nodes, you must have the minimum
hardware, software, accounts, and network infrastructure described in the sections that follow.
Hardware requirements for Hyper-V
Hyper-V requires an x64-based processor, hardware-assisted virtualization, and hardware
data execution protection. You can identify systems that support the x64 architecture and
Hyper-V by searching the Windows Server catalog for Hyper-V as an additional qualification.
The Windows Server catalog is available at the Microsoft Web site:
http://go.microsoft.com/fwlink/?LinkId=111228
Hardware requirements for failover cluster
There are some requirements for the cluster service installation. The requirements are:
- Administrative rights are necessary on each cluster node.
- There should be enough disk space on the system drive and on the shared device for cluster service installation.
- The appropriate network interface card (NIC) drivers should be installed.
- The NICs should have the proper TCP/IP configurations.
- File and print sharing for Microsoft networks should be installed on each node.
- The nodes should have the same hardware and device driver levels.
- Each node should belong to the same Active Directory® domain.
- The domain accounts should be the same on each cluster node.
- A cluster must have a unique NetBIOS name.
- You should use a Microsoft Windows version that allows cluster installation.
- The system paging file should have enough space for performance.
- Analyze the system logs before and after the cluster service installation.
- Before adding any new nodes, verify that the current ones are working correctly.
- You can use Performance Monitor to troubleshoot virtual memory issues.
Figure 3 shows a typical cluster configuration.
Figure 3 Typical cluster configuration
Additional hardware-related information for the cluster service installation is listed below:
- If you are using Fibre Channel Protocol (FCP), all shared drives should be attached to each cluster node.
- If you are using Internet Small Computer System Interface (iSCSI), all shared drives should be mapped to each cluster node.
- The NTFS file system should be used to format the shared disks.
- The shared disks should be in basic mode.
- The SCSI adapters and shared drives must each use a unique SCSI ID.
- Each node should have a minimum of two NICs.
- A separate storage host adapter should be used for Small Computer System Interface (SCSI) or Fibre Channel.
- An external drive with multiple redundant array of independent disks (RAID) configured drives must be connected to the servers of the cluster.
- The N series storage system must belong to the same domain or Active Directory.
- The cluster nodes must belong to the same domain.
Note: For further information regarding Microsoft Cluster Service requirements and other
useful information see:
http://www.microsoft.com/windowsserver2003/technologies/clustering/resources.mspx
Disk layout
When you determine the disk storage layout, evaluate the type of data to be stored and the number of volumes that you want to create. For the quorum disk there is little to design. However, when you run Microsoft Hyper-V, there are several best practices for achieving high performance, manageability, and data protection.
Sizing
Before you install Microsoft Cluster Service, you must configure your N series storage system so that the operating system and the cluster service have two separate physical devices for cluster usage. At a minimum, create one LUN for the quorum disk. The drive must be formatted as NTFS.
Quorum configuration
The quorum resource plays a crucial role in the operation of the Microsoft Cluster. In every Microsoft Cluster a single resource is designated as the quorum resource. A quorum resource can be any resource with the following functionality:
- It offers a means of persistent arbitration. Persistent arbitration means that the quorum resource must allow a single node to gain physical control of the resource and defend its control. For example, SCSI disks can use reserve and release commands for persistent arbitration.
- It provides physical storage that can be accessed by any node in the cluster. The quorum resource stores data that is critical to recovery after a communication failure between cluster nodes.
We recommend that you configure the quorum disk size to be 500 MB. However, we use a 1024 MB partition for the quorum because this is the minimum N series LUN size. We also recommend that you configure some form of fault tolerance at the N series hardware level for the quorum drive. The N series uses two types of RAID:
- RAID-4
- RAID-DP™ (Double Parity)
Note: We recommend RAID-DP for better protection. Refer to IBM System Storage N
series Implementation of RAID Double Parity for Data Protection, REDP-4169-00, for
further information.
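The sizing rule above — request the recommended 500 MB, but provision at least the platform's minimum LUN size — amounts to a simple rounding step. A sketch in Python; the constant reflects the 1024 MB minimum stated in this paper, not a value queried from the storage system:

```python
# Minimum LUN size on the N series, as stated in the sizing discussion above.
N_SERIES_MIN_LUN_MB = 1024

def quorum_lun_size_mb(requested_mb):
    """Round a requested quorum disk size up to the smallest LUN that the
    storage system will actually provision."""
    return max(requested_mb, N_SERIES_MIN_LUN_MB)

print(quorum_lun_size_mb(500))   # the recommended 500 MB becomes a 1024 MB LUN
```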
The quorum device in a cluster is used to ensure that there is only a single management
process for the cluster. This is intended to prevent split-brain syndrome. Split-brain syndrome
is where more than one node claims ownership of some critical resource.
The quorum resource belongs to only a single node of a cluster at a time. The first node to
create the cluster takes ownership of the quorum resource. Since the clusters described in
this document make use of a shared disk as the quorum resource, the way in which the node
takes ownership and maintains ownership of the quorum resource is through SCSI
commands. When using a disk as a quorum resource, the drive must be a physical disk
resource and not a partition, since changing ownership of the quorum involves moving the
entire resource to another cluster node. Both nodes can access the drive but not at the same
time (Figure 4).
Figure 4 Quorum access diagram
In a Microsoft cluster the first node in the cluster becomes the initial quorum owner. The
quorum owner issues a reserve request for the quorum disk, and so long as it continues to be
the quorum owner, it will continue to issue a reserve request every three seconds. Should the
cluster enter a regroup event, the quorum owner will be forced to defend its ownership of the
quorum through a challenge/defense mechanism.
When a regroup event is initiated, all nodes issue a device or bus reset. This reset releases
the reservation held by the quorum owner. Once a non-owner has issued a reset request, it
waits 10 seconds before checking to see whether the quorum resource is available. If the
quorum owner is functioning correctly, it regains its reservation (through its regular
three-second reservation request) and thus defends its ownership of the quorum resource.
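The challenge/defense mechanism just described can be modeled as a short simulation. This is a simplified sketch of the timing logic, not a SCSI implementation: time is stepped in whole seconds rather than slept, the 3-second renewal and 10-second challenger wait come from the text, and everything else is illustrative.

```python
RENEW_INTERVAL = 3      # the owner re-reserves the quorum disk every 3 seconds
CHALLENGE_WAIT = 10     # a challenger waits 10 seconds after the bus reset

def quorum_owner_after_regroup(owner_alive):
    """Decide who holds the quorum after a regroup event.

    The bus reset clears the existing reservation, so the disk starts out
    free. A healthy owner issues its periodic reserve during the challenge
    window and wins the disk back; otherwise the challenger takes over.
    """
    reserved_by = None
    for t in range(CHALLENGE_WAIT):
        if owner_alive and t % RENEW_INTERVAL == 0:
            reserved_by = "owner"       # the owner defends its reservation
    if reserved_by is None:
        reserved_by = "challenger"      # the owner never defended
    return reserved_by

print(quorum_owner_after_regroup(True))    # owner
print(quorum_owner_after_regroup(False))   # challenger
```

Because the renewal interval is well inside the challenger's wait, a functioning owner always wins the challenge, which is exactly the defense behavior the text describes.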
Software requirements
Building a Microsoft Cluster involves a complex mix of hardware and software requirements and configurations. There are software requirements for both the N series and Windows Server portions. We cover these in the following sections.
Software requirements for Hyper-V and failover clustering
The following are the software requirements for testing Hyper-V and failover clustering:
- Windows Server 2008 Enterprise or Windows Server 2008 Datacenter must be used for the physical computers. These servers must run the same version of Windows Server 2008, including the same type of installation. That is, both servers must run either a full installation or a Server Core installation. The instructions in this paper assume that both servers are running a full installation of Windows Server 2008.
- You will need the installation media for the operating system that you install on the virtual machine. The instructions in this guide assume that you will install Windows Server 2008 on the virtual machine.
Microsoft iSCSI software initiator
For companies that do not have an FCP infrastructure in place, or for those that want to access storage using their existing Ethernet infrastructure and knowledge, iSCSI can be used as the access protocol for communication between the Microsoft Cluster Server and the IBM System Storage N series storage system.
If there are no iSCSI adapters in your planned environment, the iSCSI Initiator software can be used to provide the same connectivity to the IBM System Storage N series storage system. The use of multiple paths is also recommended when using a hardware-based or software-based iSCSI solution, as shown in Figure 5.
Figure 5 Multipathing configuration for Windows Server using iSCSI
In Figure 5, there are two interfaces on the server (either iSCSI hardware-based or Gigabit Ethernet cards) that connect to two different LAN switches. For performance and reliability reasons, we recommend that the LAN segments and switches be separate from the public ones.
The IBM System Storage N series storage system will have two of its adapters also
connected to both switches.
Assuming that all of the infrastructure is already in place and working, and that you are not using iSCSI-enabled HBAs, the Microsoft iSCSI Software Initiator must be installed on the server. After installing and configuring it, SnapDrive® should be installed and configured as well so that the LUNs can be created.
Note: Although the LUNs can be created directly on the IBM System Storage N series storage system, the recommended procedure is to create them from the Windows server using SnapDrive.
SnapDrive software
The IBM System Storage N series SnapDrive feature provides a number of storage features
that enable you to manage the entire storage hierarchy, from the host-side application-visible
file, down through the volume manager, to the storage-system-side logical unit numbers
providing the actual repository. In addition, it simplifies the backup of data and helps you
decrease the recovery time.
SnapDrive provides a layer of abstraction between an application running on the host operating system and the underlying IBM System Storage N series storage systems (Figure 6). Applications that are running on a server with SnapDrive use virtual disks (or LUNs) on IBM System Storage N series storage systems as though they were locally connected drives or mount points. This allows applications that require locally attached storage, as well as several other applications, to benefit from N series technologies, including Snapshot™, flexible volumes, cloning, and space management.
Figure 6 Example of a typical SnapDrive deployment
SnapDrive includes all of the necessary drivers and software to manage interfaces, protocols, storage, and Snapshot copies. Snapshot copies are nondisruptive to running applications and functions. Snapshot backups can also be mirrored across LAN or wide area network (WAN) links for centralized archiving and disaster recovery.
Benefits of SnapDrive
Most of today’s enterprises use business-critical applications, and their storage management
team faces a number of challenges. They must:
- Support new business initiatives with a minimal increase in operating budget.
- Protect data from corruption, disaster, and attacks.
- Back up data quickly and consistently, without errors or performance degradation.
SnapDrive addresses these challenges by providing simplified and intuitive storage
management and data protection from a host/server perspective. The following list highlights
some of the important benefits of SnapDrive:
- It allows host and application administrators to quickly create virtual disks from a dynamic pool of storage that can be reallocated, scaled, and enlarged in real time, even while systems are accessing data.
- Dynamic, on-the-fly file system expansion: new disks are usable within seconds.
- Snapshot copies provide rapid backup and recovery capability with minimal resource and capacity requirements.
- It supports multipath technology for high performance.
- It enables connections to existing Snapshot copies from the original host or a different host.
- It is independent of the underlying storage access media and protocol. SnapDrive supports FCP, iSCSI, and Network File System (NFS) as transport protocols. (NFS supports only Snapshot management.)
- It is a robust and easy-to-use data and storage management feature.
SnapDrive requirements
IBM System Storage N series SnapDrive is a licensed feature.
There are some general requirements for SnapDrive:
- Host operating system and appropriate patches
- Host file systems
- IP access between the host and storage system
- Storage system licenses
- FCP Host Utilities or iSCSI Host Utilities required software
Note: For security reasons, we recommend a separate user account on the IBM System Storage N series storage server.
The operating system requirements and additional information about SnapDrive can be found at the IBM Network-attached Storage (NAS) Support Web site:
http://www.ibm.com/storage/support/nas
Data ONTAP
The IBM N series storage system is a hardware-based and software-based data storage and
retrieval system. It responds to network requests from clients and fulfills them by writing data
to or retrieving data from its disk array. It provides a modular hardware architecture running
the Data ONTAP® operating system and Write Anywhere File Layout (WAFL®) software. With
a reduced operating system, many of the server operating system functions that you are
familiar with are not supported. The objective is to improve performance and reduce costs by
eliminating unnecessary functions normally found in the standard operating systems.
Figure 7 shows the Data ONTAP storage microkernel: WAFL protection (RAID and mirroring) with NVRAM journaling and WAFL virtualization (Snapshot copies and SnapMirror) support both file semantics (file services over NFS, CIFS, HTTP, and FTP on TCP/IP) and LUN semantics (block services over FCP and iSCSI), with system administration and monitoring alongside.

Figure 7 Data ONTAP storage microkernel
Data ONTAP provides a complete set of storage management tools through its command-line
interface, through the FilerView® interface, through the Operations Manager interface (which
requires a license), and—for storage systems with a Remote LAN Module (RLM) or a
Baseboard Management Controller (BMC) installed—through the RLM or the BMC Ethernet
connection to the system console.
Data ONTAP provides features for:
- Network file service
- Multiprotocol file and block sharing
- Data storage management
- Data organization management
- Data access management
- Data migration management
- Data protection system management
- AutoSupport
Network file service
Data ONTAP enables users on client workstations (or hosts) to create, delete, modify, and
access files or blocks stored on the storage system.
Storage systems can be deployed in network-attached storage and storage area network
(SAN) environments for accessing a full range of enterprise data for users on a variety of
platforms. Storage systems can be fabric-attached, network-attached, or direct-attached to
support NFS, Common Internet File System (CIFS), Hypertext Transfer Protocol (HTTP), and File Transfer Protocol (FTP) for file access, and Internet SCSI (iSCSI) for block-storage access, all over TCP/IP, as well as SCSI over FCP for block-storage access, depending on your specific data storage and data management needs.
Client workstations are connected to the storage system through direct-attached or TCP/IP
network-attached connections, or through FCP, fabric-attached connections.
For information about configuring a storage system in a network-attached storage network,
see the Data ONTAP Network Management Guide, GC52-1280.
For information about configuring a storage system in a storage area network fabric, see the
Data ONTAP Block Access Management Guide, GC52-1282.
Multiprotocol file and block sharing
Several protocols allow you to access data on the storage system (Figure 8):
- NFS: Used by UNIX® systems
- Personal Computer NFS (PC-NFS): Used by PCs to access NFS
- Common Internet File System: Used by Windows clients
- FTP: Used for file access and retrieval
- HTTP: Used by the World Wide Web and corporate intranets
- FCP: Used for block access in storage area networks
- iSCSI: Used for block access in storage area networks
Figure 8 shows the supported topology options for the N series Gateway: enterprise SAN (block) access over Fibre Channel (FCP) and over iSCSI on dedicated Ethernet, and departmental NAS (file) access over the corporate LAN, all served on the target side by the IBM N series Gateway.

Figure 8 N series protocols
Files written using one protocol are accessible to clients of any protocol, provided that system
licenses and permissions allow it. For example, an NFS client can access a file created by a
CIFS client, and a CIFS client can access a file created by an NFS client. Blocks written using
one protocol can also be accessed by clients using the other protocol.
For information about NAS file access protocols, see the Data ONTAP File Access and
Protocols Management Guide, GC27-2207-00.
For information about SAN block access protocols, see the Data ONTAP Block Access
Management Guide, GC52-1282-00.
Data storage management
Data ONTAP stores data on disks in disk shelves connected to storage systems. Disks are
organized into RAID groups. RAID groups are organized into plexes, and plexes are
organized into aggregates.
Data organization management
Data ONTAP organizes the data in user and system files and directories, in file systems
called volumes, and optionally in logical unit numbers (LUNs) in SAN environments.
Aggregates provide the physical storage to contain volumes.
For more information see the Data ONTAP Storage Management Guide and the Data ONTAP
Block Access Management Guide, GC52-1282-00.
When Data ONTAP is installed on a storage system at the factory, a root volume is configured
as /vol/vol0, which contains system files in the /etc directory.
Data access management
Data ONTAP enables you to manage access to data.
Data ONTAP performs the following operations for data access management:
򐂰 Checks file access permissions against file access requests.
򐂰 Checks write operations against file and disk usage quotas that you set. For more
information see the Data ONTAP File Access and Protocols Management Guide,
GC27-2207.
򐂰 Takes Snapshot copies and makes them available so that users can access deleted or
overwritten files. Snapshot copies are read-only copies of the entire file system. For more
information about Snapshot copies see the Data ONTAP Data Protection Online Backup
and Recovery Guide, GC27-2204.
Data migration management
Data ONTAP enables you to manage data migration. Data ONTAP offers the following features for data migration management:
- Snapshot copies
- Asynchronous mirroring
- Synchronous mirroring
- Backup to tape
- Aggregate copy
- Volume copy
- FlexClone®
Data protection
Storage systems provide a wide range of data protection features such as aggregate copy,
MetroCluster, NDMP, NVFAIL, SnapLock®, SnapMirror®, SnapRestore®, Snapshot,
SnapVault®, SyncMirror®, tape backup and restore, virus scan support, and volume copy.
System management
Data ONTAP provides a full suite of system management commands that allows you to
monitor storage system activities and performance.
You can use Data ONTAP to perform the following system management tasks:
- Manage network connections.
- Manage adapters.
- Manage protocols.
- Configure pairs of storage systems into active/active pairs for failover.
- Configure SharedStorage storage systems into a community.
- Manage storage.
- Dump data to tape and restore it to the storage system.
- Mirror volumes (synchronously and asynchronously).
- Create vFiler units. For information about vFiler units, see the Data ONTAP MultiStore® Management Guide, GC52-1281.
For information about all Data ONTAP commands, see the Data ONTAP Commands: Manual
Page Reference, Volume 1, GC27-2202, and the Data ONTAP Commands: Manual Page
Reference, Volume 2, GC27-2203.
AutoSupport
AutoSupport automatically sends AutoSupport Mail notifications about storage system
problems to technical support and designated recipients.
N series licenses
Several things must be done when preparing an IBM N series storage system to create a reliable system with optimal performance. You must license all of the necessary protocols and software on the storage system. NAS requires the NFS (UNIX) and CIFS (Windows) licenses to be activated, and SAN requires an FCP license with FCP services up and running. For a SAN configuration using iSCSI, an iSCSI license must be enabled and the iSCSI service must be running.
Verify that the licenses for SnapDrive, iSCSI, and CIFS are enabled and that the CIFS and iSCSI services are running on the IBM N series storage devices.
Before creating a network share, verify that a CIFS license is enabled and the CIFS setup is
complete. On our test setup, we used a clustered N5500.
Based on your company policy, you must prepare the storage. If the CIFS protocol is used,
configure the CIFS setup and have the necessary CIFS shares available.
Note: Refer to the latest SnapDrive and SnapManager® administration guides to ensure that the proper licenses and options are enabled on the IBM N series storage system.
Active Directory requirements
There are some domain requirements that must be checked to install the Microsoft Cluster.
Verifying domain membership
All nodes in the cluster must be members of the same domain and be able to access a
domain controller and a DNS server. They can be configured as member servers or domain
controllers. You should have at least one domain controller on the same network segment as
the cluster. For high availability another domain controller should also be available to remove
a single point of failure. In this paper all nodes are configured as member servers.
Setting up a cluster user account
The cluster service requires a domain user account that is a member of the local
administrators group on each node, under which the cluster service can run. Because setup
requires a user name and password, this user account must be created before configuring the
cluster service. This user account should be dedicated only to running the cluster service and
should not belong to an individual.
The cluster service account does not need to be a member of the domain administrators
group. For security reasons, we do not recommend granting domain administrator rights to
the cluster service account.
The cluster service account requires the following rights to function properly on all nodes in the cluster. The Cluster Configuration Wizard grants these rights automatically:
- Act as part of the operating system.
- Adjust memory quotas for a process.
- Back up files and directories.
- Increase scheduling priority.
- Log on as a service.
- Restore files and directories.
You can configure these settings on the Security Policy MMC Console (Figure 9).
Figure 9 Security settings console
Note: For additional information, see Microsoft Knowledge Base article 269229, How to Manually Re-Create the Cluster Service Account.
N series and Active Directory support
Microsoft's Active Directory service allows organizations to efficiently organize, manage, and
control resources. Active Directory is implemented as a distributed, scalable database
managed by Windows 2000, Windows 2003, and Windows 2008 domain controllers.
N series storage systems can join and participate in mixed-mode or native-mode Active
Directory domains. Mixed-mode domains support a mix of Windows NT® 4.0, Windows 2000
Server, and Windows 2003 Server domain controllers for directory lookups and
authentication. Native-mode domains consist of Active Directory domain controllers only, and
do not emulate Windows NT 4.0 domains for previous generation computers. N series
storage systems adhere to the environment in which they are installed and support both
Active Directory and previous generation computers.
Note: Both domain styles support previous generation computers. The difference lies in
how the previous generation computers interact with Active Directory.
Name resolution
Similar to Windows 2000, Windows 2003, and Windows 2008 computers in an Active
Directory environment, N series storage systems query Domain Name System (DNS) servers
to locate domain controllers. Because the Active Directory service relies on DNS to resolve
names and services to IP addresses, the DNS servers that are used with N series storage
systems in an Active Directory environment must support service location (SRV) resource
records (per RFC 2782).
Note: Microsoft recommends using DNS servers that support dynamic updates (per RFC
2136), so that important changes to SRV records about domain controllers are
automatically updated and available immediately to clients.
When using non-Windows 2000 DNS servers, such as Berkeley Internet Name Domain
(BIND) servers, verify that the version that you use supports SRV records or update it to a
version that supports SRV records.
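An SRV record carries its data in the RFC 2782 form "Priority Weight Port Target". As a quick illustration, the zone-file form can be parsed with a few lines of Python; the record below is a made-up example of a typical domain-controller entry, with a placeholder domain:

```python
def parse_srv(rdata):
    """Parse the RDATA portion of a DNS SRV record (RFC 2782):
    "Priority Weight Port Target"."""
    priority, weight, port, target = rdata.split()
    return {
        "priority": int(priority),   # lower values are tried first
        "weight": int(weight),       # load-balancing weight among equals
        "port": int(port),           # service port on the target host
        "target": target.rstrip("."),
    }

rec = parse_srv("0 100 389 dc1.example.com.")
print(rec["port"], rec["target"])   # 389 dc1.example.com
```

Dynamic updates (RFC 2136), mentioned in the note above, matter because these records change whenever domain controllers are added or moved.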
Locating domain controllers
An N series storage system attempts to automatically sense the type of domain that exists on the network when one of the following two events occurs:
- You run CIFS setup, the process that prepares the N series storage system for CIFS.
- CIFS restarts on an N series storage system.
The storage system accomplishes this by identifying the type of domain controllers that are available.
The N series storage system searches first for an Active Directory domain controller by
querying DNS for the SRV record of an Active Directory domain controller. (This is the same
method used by Microsoft Windows-based computers.) If the N series storage system cannot
locate an Active Directory domain controller, it switches to NT4 mode and then searches for a
Windows NT 4.0 domain controller using the Windows Internet Naming Service (WINS) and
NetBIOS protocol or by using b-node broadcasts.
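The discovery order described above — query DNS for an Active Directory SRV record first, then fall back to an NT4-style WINS/NetBIOS search — can be sketched as a small decision function. The lookup callables are stand-ins for the real DNS and WINS queries, and the `_ldap._tcp.dc._msdcs` query name is the conventional one for locating Active Directory domain controllers:

```python
def locate_domain_controller(dns_srv_lookup, wins_lookup, domain):
    """Return (mode, dc): try Active Directory first, then NT4 mode."""
    # Phase 1: query DNS for the SRV record of an AD domain controller.
    dc = dns_srv_lookup("_ldap._tcp.dc._msdcs." + domain)
    if dc:
        return ("active-directory", dc)
    # Phase 2: switch to NT4 mode and search through WINS/NetBIOS.
    dc = wins_lookup(domain)
    if dc:
        return ("nt4", dc)
    return (None, None)

# No SRV record is found, so the sketch falls back to the WINS answer.
mode, dc = locate_domain_controller(lambda name: None,
                                    lambda d: "PDC01",
                                    "example.com")
print(mode, dc)   # nt4 PDC01
```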
If the N series storage system can locate an Active Directory domain controller, the following
conditions apply:
- Clients obtain their session credentials by contacting a domain controller/Kerberos Key Distribution Center (DC/KDC).
- NetBIOS is not required to access an N series storage system in a native-mode domain where NetBIOS-over-TCP/IP has been disabled.
- CIFS/SMB is supported on TCP port 445.
- Registering with WINS servers is optional and can be turned on or off for each network interface.
If the N series storage system is configured in or switches to NT4 mode, the following conditions apply:
- N series storage systems can register each interface with WINS. (WINS registration can be turned on or off for each interface.)
- N series storage systems authenticate incoming sessions against a Windows domain controller using the Windows NT LAN Manager (NTLM) authentication protocol.
Active Directory site support
Active Directory sites are used to logically represent an underlying physical network. A site is
a collection of networks connected at local area network (LAN) speed. Slower and less
reliable wide area networks (WANs) are used between sites (locations) that are too far apart
to be connected by a LAN.
N series storage systems are Active Directory site-aware. Therefore, N series storage
systems attempt to communicate with a domain controller in the same site instead of
selecting a domain controller at a different location. It is important to place the N series
storage system in the proper Active Directory site so that it can use the resources that are
physically close to it.
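Site-aware selection amounts to preferring a domain controller in the storage system's own site. A minimal sketch; the list of (name, site) pairs is illustrative and not an actual Data ONTAP structure:

```python
# Toy illustration of Active Directory site-aware selection: prefer a
# domain controller in the local site, fall back to any known DC.

def pick_domain_controller(controllers, local_site):
    """controllers: list of (name, site) tuples."""
    same_site = [name for name, site in controllers if site == local_site]
    if same_site:
        return same_site[0]          # physically close: same site
    # No same-site DC found: use any available controller.
    return controllers[0][0] if controllers else None
```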
Authentication
N series storage systems can operate in a Windows workgroup mode or use Kerberos
authentication. Workgroup authentication allows local Windows client access and does not
rely on a domain controller. With Kerberos authentication, the client negotiates the highest
possible security level when a connection to the N series storage system is established.
During the session-setup sequence, Windows computers negotiate which authentication method to use. Standalone Windows NT 4.0, Windows 2000, Windows 2003, and Windows 2008 computers (those that are not part of an Active Directory domain) use only NTLM for authentication. By default, Windows 2003, Windows XP, and Windows 2000 computers that are part of an Active Directory domain try Kerberos authentication first and then fall back to NTLM. Windows NT 4.0, Windows NT 3.x, and Windows 95/98 clients always authenticate using NTLM.
Data ONTAP includes a native implementation of the NTLM and Kerberos protocols and therefore provides full support for Active Directory and existing authentication methods.
Kerberos authentication
The Kerberos server, or KDC service, stores and retrieves information about security principals in the Active Directory. Unlike the NTLM model, Active Directory clients that want to establish a session with another computer, such as an N series storage system, contact a KDC directly to obtain their session credentials.
Using Kerberos, clients contact the KDC service that runs on Windows 2000 or Windows
2003 domain controllers. Clients then pass the authenticator and encrypted Kerberos ticket to
the N series storage system, as shown in Figure 10.
Figure 10 Kerberos authentication workflow
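The exchange in Figure 10 can be modeled as a toy: the KDC issues a session key plus a ticket sealed with a secret it shares with the storage system, and the storage system validates the ticket it receives from the client. HMAC stands in for real Kerberos encryption here; nothing in this sketch is cryptographically faithful.

```python
import hashlib
import hmac
import os

# Toy model of the Kerberos exchange in Figure 10. HMAC stands in for
# encryption; this is purely illustrative.

SERVER_SECRET = b"nseries-machine-key"   # shared by KDC and storage system

def kdc_issue_ticket(client, server):
    """KDC returns a session key and a 'ticket' sealed with the server's key."""
    session_key = os.urandom(16)
    ticket = hmac.new(SERVER_SECRET, client + server + session_key,
                      hashlib.sha256).digest()
    return session_key, ticket

def server_verify(client, server, session_key, ticket):
    """The storage system recomputes the seal to validate the ticket."""
    expected = hmac.new(SERVER_SECRET, client + server + session_key,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, ticket)
```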
Network requirements
The network requirements are:
- A unique NetBIOS name.
- Static IP addresses for all network interfaces on each node.

Note: Server clustering does not support the use of IP addresses assigned from Dynamic Host Configuration Protocol (DHCP) servers.

- Access to a domain controller. If the cluster service cannot authenticate the user account used to start the service, the cluster can fail. We recommend that you have a domain controller on the same local area network as the cluster to ensure availability.
- Each node must have at least two network adapters: one for the connection to the public client network and one for the private node-to-node cluster network. A dedicated private network adapter is required for HCL certification.
- All nodes must have two physically independent LANs or virtual LANs for public and private communication.
- If you are using fault-tolerant network cards or network adapter teaming, verify that you are using the most recent firmware and drivers. Check with your network adapter manufacturer for cluster compatibility.
Note: Network adapter teaming is not recommended for the heartbeat NICs.
Setting up networks
Each cluster node requires at least two network adapters on two or more independent networks to avoid a single point of failure: one to connect to a public network and one to connect to a private network consisting only of cluster nodes.
Microsoft requires that you have two Hardware Compatibility List (HCL) signed Peripheral
Component Interconnect (PCI) network adapters in each node.
Communication between server cluster nodes is critical for smooth cluster operations, so to
eliminate possible communication issues, remove all unnecessary network traffic from the
network adapter that is set to Internal Cluster communications only.
Configure one of the network adapters on your production network with a static IP address
and configure the other network adapter on a separate network with another static IP address
on a different subnet for private cluster communication.
The private network adapter is used for node-to-node communication, cluster status
information, and cluster management. Each node's public network adapter connects the
cluster to the public network where clients reside and should be configured as a backup route
for internal cluster communication. To do so, configure the roles of these networks as either
internal cluster communications only or all communications for the cluster service. See a
configuration example in Figure 3 on page 5.
To verify that all network connections are correct, confirm that the private network adapter (also known as the heartbeat adapter) is on a different logical network from the public adapters. In a two-node configuration this can be accomplished with a crossover cable; in a configuration of more than two nodes, use a dedicated dumb hub. Do not use a switch, smart hub, or any other routing device for the heartbeat network.
Note: For additional information, see Microsoft Knowledge Base article 258750, Recommended private "Heartbeat" configuration on a cluster server.
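The subnet-separation rule above is easy to check mechanically. A small sketch using Python's standard ipaddress module; the CIDR strings are example values only:

```python
import ipaddress

# Sketch of the check described above: the private (heartbeat) adapter
# must be on a different logical subnet from the public adapter.

def on_separate_subnets(public_cidr, private_cidr):
    """Return True when the two interfaces are on non-overlapping subnets."""
    pub = ipaddress.ip_network(public_cidr, strict=False)
    priv = ipaddress.ip_network(private_cidr, strict=False)
    return not pub.overlaps(priv)
```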
SAN requirements
The IBM System Storage N series storage system must be configured prior to running the
Microsoft Cluster server on it. The aggregates, volumes, LUNs, and Snapshots must be
created and configured to support the Microsoft Cluster server environment.
Aggregates
An aggregate is a collection of physical disks from which space is allocated to volumes. When creating aggregates on the IBM System Storage N series storage system, keep the following considerations in mind:
- One or more flexible volumes can be created on each aggregate.
- Each aggregate has its own RAID configuration and set of assigned physical disks.
- The available space in an aggregate can be increased simply by adding disks to an existing RAID group or by adding new RAID groups.
- Performance is proportional to the number of disk spindles in the aggregate.
For detailed information about aggregates, the WAFL file system, and Data ONTAP V7.3,
refer to the document IBM System Storage N series Data ONTAP 7.3 Storage Management
Guide, GC27-2207.
Creating aggregates
When creating an aggregate, you must define a name. The aggregate name must:
- Begin with either a letter or an underscore.
- Contain only letters, digits, and underscores.
- Contain no more than 255 characters.
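These three rules can be expressed as a single check; the same conventions apply to volume names later in this paper. A sketch:

```python
import re

# Validate an aggregate (or volume) name against the three naming rules:
# starts with a letter or underscore, contains only letters, digits, and
# underscores, and is at most 255 characters long.

_NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_aggregate_name(name):
    return bool(_NAME_RE.match(name)) and len(name) <= 255
```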
After you have planned the name, size, and disk configuration, follow these steps to create an aggregate:
1. Open the FilerView for the IBM System Storage N series storage system where you want
to create the aggregate.
2. On the FilerView, select Aggregates → Add. This brings up the Add New Aggregate
window, as shown in Figure 11. Click Next.
Figure 11 Add new aggregate window
3. The aggregate name window appears (Figure 12). Type in the name for the aggregate that you are creating. Select whether this aggregate will be mirrored (select the Mirror check box) or unmirrored (clear it). The parity type is also defined in this window. As mentioned earlier, we are creating a RAID-DP-based aggregate, so select the Double Parity check box. If the Double Parity check box is cleared, the aggregate is created using RAID 4. Click Next.
Figure 12 Aggregate name window
4. In the RAID Parameters window (Figure 13), select the number of disks to be used in each RAID group created for the aggregate. The recommended number of disks per RAID group is 16. With fewer than 16 disks per RAID group, protection against disk failure increases, but performance decreases because fewer disk spindles serve the data. With more than 16 disks per RAID group, performance increases (more disk spindles serve the data), but protection against disk failure decreases. Click Next.
Figure 13 RAID parameters window
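The trade-off in step 4 can be quantified. RAID-DP reserves two parity disks per RAID group, so for a fixed pool of disks, smaller groups mean more groups and therefore more disks spent on parity. A rough sketch (it ignores spares and maximum group-size limits):

```python
import math

# For a fixed total disk count, smaller RAID-DP groups spend more disks
# on parity (better protection, fewer data spindles); larger groups do
# the opposite. RAID-DP uses 2 parity disks per group.

def raiddp_layout(total_disks, disks_per_group):
    """Return (groups, data_disks, parity_disks) for a RAID-DP aggregate."""
    groups = math.ceil(total_disks / disks_per_group)
    parity = 2 * groups
    data = total_disks - parity
    return groups, data, parity
```

For example, 32 disks in groups of 16 leave 28 data spindles, while groups of 8 leave only 24.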
5. In the Disk Selection Method window (Figure 14), select the method used to identify the disks for the aggregate. The default method is automatic: the IBM System Storage N series storage system selects the disks based on the size and number of disks you choose in the next windows. If for any reason you need to select specific disks to compose the RAID groups, click Manual and select the number and size of disks to include in the aggregate. Click Next.
Figure 14 Disk Selection Method window
6. If you selected the automatic disk selection method, the Disk Size window appears (Figure 15). Select the disk size from the available options, or select Any Size, and click Next.
Figure 15 Disk Size window
7. In the Number of Disks window (Figure 16), select the number of disks of the selected size to use in the aggregate. Click Next.
Figure 16 Number of Disks window
8. Review your selection in the Commit changes window (Figure 17) and click Commit.
Figure 17 Commit changes window
9. The last window is just a confirmation (Figure 18). In our test environment it took about 50 minutes to create the aggregate; the time required depends on how many disks you selected. Click Close.
Figure 18 Confirmation window
10. The aggregate is created. In the FilerView, select Aggregates → Manage to display a list of the existing aggregates, along with their status, RAID level, size, available size, and other information (see Figure 19).
Figure 19 Manage aggregates window
Volumes
Volumes on the IBM System Storage N series storage system can be designated as
traditional volumes or flexible volumes.
Traditional volumes are tied to the physical disks in the aggregate on which they are created, which means those disks cannot be used by any other volume, traditional or flexible. Traditional volumes offer little flexibility: the only way to increase the size of a traditional volume is to add disk spindles to its array, and a traditional volume cannot be shrunk.
Flexible volumes, on the other hand, are not tied to the physical disks on which they are created but to the aggregate's collection of disks, so multiple flexible volumes can share the same disks. For this reason, a flexible volume is almost always the better choice. Flexible volumes provide more management flexibility and allow the volume size to be expanded or shrunk dynamically without impact on the host client. For the Microsoft Cluster quorum disk this feature is of limited use, because what matters is disk availability rather than storage capacity; for the Microsoft Hyper-V disks, however, it is very useful.
In this scenario, a single volume on a single aggregate is sufficient, so that the Microsoft Cluster quorum files can be moved to different paths on the IBM System Storage N series storage system.
Creating volumes
Every volume on the IBM System Storage N series storage system must be created on an aggregate. The volume name must:
- Begin with either a letter or an underscore.
- Contain only letters, digits, and underscores.
- Contain no more than 255 characters.
To create the volume on the aggregate:
1. Open the FilerView for the IBM System Storage N series storage system where you want
to create the volume.
2. In the FilerView, select Volumes → Add. This brings up the Add New Volume window, as shown in Figure 20. Click Next.
Figure 20 Add new volume window
3. The Volume Type Selection window appears (Figure 21). Select Flexible for flexible
volumes or Traditional for traditional volumes. The recommended type for Microsoft
Cluster Server is FlexVol®. Click Next.
Figure 21 Volume Selection Type window
4. In the Volume Parameters window (Figure 22), type in the volume name and select the
language used on the volume. By default, the root volume language is selected. Click
Next.
Figure 22 Volume Parameters window
5. In the FlexVol parameters window (Figure 23), select the aggregate on which you want to create the volume, and select the type of space guarantee to be used. The default, which we recommend, is volume; this option pre-allocates the entire volume size on the aggregate. The other options are file and none. Click Next.
Figure 23 FlexVol parameters window
6. In the FlexVol volume size window (Figure 24), select the size type: Total Size for the entire volume size (including Snapshot reserve and other overhead) or Usable Size to ensure that the specified amount of space is available to the file system. Type in the volume size in KB, MB, GB, or TB, and set the Snapshot Reserve percentage. Click Next.
Figure 24 FlexVol size parameters window
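The relationship between Total Size, Usable Size, and the Snapshot reserve percentage is simple arithmetic: with a reserve of r percent, only (100 - r) percent of the volume is available to the file system. A sketch:

```python
# The arithmetic behind the Total Size / Usable Size choice in step 6:
# a Snapshot reserve of r percent leaves (100 - r)% of the volume
# available to the file system.

def usable_from_total(total_gb, reserve_pct):
    """Space left for data when Total Size is specified."""
    return total_gb * (100 - reserve_pct) / 100

def total_from_usable(usable_gb, reserve_pct):
    """Volume size needed to guarantee the requested Usable Size."""
    return usable_gb * 100 / (100 - reserve_pct)
```

For example, a 100 GB volume with the default 20% reserve leaves 80 GB usable.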
7. Review the selections in the Commit changes window (Figure 25) and click Commit.
Figure 25 Commit changes window
8. The last window is just a confirmation. Click Close.
Figure 26 Confirmation window
9. The volume is created. In the FilerView, select Volumes → Manage to display a list of the existing volumes, along with their status, RAID level, size, available size, and other information (Figure 27).
Figure 27 Manage volumes window
After the volume is created for the Microsoft Cluster server, it must be shared. This is done through the CIFS option in the FilerView. In the FilerView, select CIFS → Shares → Add. The Add a CIFS Share window appears (Figure 28). Type in the following information:
– Share Name: The name that will be used to access the volume when creating the LUN on the Microsoft Cluster server.
– Mount Point: The path used to connect to this volume on the N series storage system, such as /vol/Vol_clu_q.
– Share Description: A general description of the share.
– Max. Users: The maximum number of concurrent users allowed on the share.
– Force Group: Not used for volumes accessed by Windows hosts.
10. Click Add.
Figure 28 Add a CIFS share window
LUNs
Logical unit numbers (LUNs) are the logical units of storage. They are created on volumes and appear to host systems (in this case, the Microsoft Cluster server) as SAN disks: virtual disks that the hosts access. The recommended way to create LUNs is with the SnapDrive utility on the Microsoft Cluster server.
The team that wrote this IBM Redpaper publication
This paper was produced by a team of specialists from around the world working at the
International Technical Support Organization, Tucson Center.
Alex Osuna is a Project Leader at the International Technical Support Organization, Tucson
Center. He writes extensively and teaches IBM classes worldwide on all areas of storage.
Before joining the ITSO three years ago, Alex worked in the field as a Tivoli® Principal
Systems Engineer. Alex has over 30 years of experience in the IT industry and holds
certifications from IBM, RedHat, and Microsoft.
William Luiz de Souza is a System Management Engineer on Brazil's Wintel Global Resources Team, Brazil SDC. He provides third-level support for severity 1 incidents and infrastructure projects. Before joining the BR Wintel GR Team two years ago, he worked as Wintel Primary for Brazil's USF. William has more than eight years of experience in the IT industry, focused on Microsoft technologies. He holds certifications from IBM, Microsoft, Citrix, and ITIL®.
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
© Copyright International Business Machines Corporation 2009. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by
GSA ADP Schedule Contract with IBM Corp.
This document REDP-4496-00 was created or updated on February 5, 2009.
Send us your comments in one of the following ways:
- Use the online Contact us review Redbooks form found at:
  ibm.com/redbooks
- Send your comments in an email to:
  redbooks@us.ibm.com
- Mail your comments to:
  IBM Corporation, International Technical Support Organization
  Dept. HYTD Mail Station P099
  2455 South Road
  Poughkeepsie, NY 12601-5400 U.S.A.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
IBM®
Redbooks (logo)®
System Storage™
Tivoli®
The following terms are trademarks of other companies:
ITIL is a registered trademark, and a registered community trademark of the Office of Government
Commerce, and is registered in the U.S. Patent and Trademark Office.
Snapshot, RAID-DP, WAFL, SyncMirror, SnapVault, SnapRestore, SnapMirror, SnapManager, SnapLock,
SnapDrive, MultiStore, FlexVol, FlexClone, FilerView, Data ONTAP, and the NetApp logo are trademarks or
registered trademarks of NetApp, Inc. in the U.S. and other countries.
Active Directory, Hyper-V, Microsoft, Windows NT, Windows Server, Windows, and the Windows logo are
trademarks of Microsoft Corporation in the United States, other countries, or both.
"Microsoft product screen shot(s) reprinted with permission from Microsoft Corporation."
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.