

Front cover
Implementing the IBM
System Storage SAN
Volume Controller V5.1
Install, use, and troubleshoot the SAN Volume Controller
Learn about iSCSI hosts and how to attach them
Understand what solid-state drives have to offer
Jon Tate
Pall Beck
Angelo Bernasconi
Werner Eggli
ibm.com/redbooks
International Technical Support Organization
Implementing the IBM System Storage SAN Volume
Controller V5.1
March 2010
SG24-6423-07
Note: Before using this information and the product it supports, read the information in “Notices” on
page xvii.
Eighth Edition (March 2010)
This edition applies to Version 5 Release 1 Modification 0 of the IBM System Storage SAN Volume Controller
and is based on pre-GA versions of code.
Note: This book is based on a pre-GA version of a product and might not apply when the product becomes
generally available. We recommend that you consult the product documentation or follow-on versions of
this book for more current information.
© Copyright International Business Machines Corporation 2010. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
March 2010, Eighth Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Chapter 1. Introduction to storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Storage virtualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 User requirements that drive storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Chapter 2. IBM System Storage SAN Volume Controller. . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1 SVC history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Architectural overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.1 SVC virtualization concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.2 MDisk overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.3 VDisk overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.4 Image mode VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.5 Managed mode VDisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.6 Cache mode and cache-disabled VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.7 Mirrored VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.8 Space-Efficient VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.9 VDisk I/O governing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2.10 iSCSI overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2.11 Usage of IP addresses and Ethernet ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2.12 iSCSI VDisk discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.13 iSCSI authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.14 iSCSI multipathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2.15 Advanced Copy Services overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2.16 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.3 SVC cluster overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.3.1 Quorum disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.3.2 I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.3.3 Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.3.4 Cluster management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.3.5 User authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3.6 SVC roles and user groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.3.7 SVC local authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.3.8 SVC remote authentication and single sign-on . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.4 SVC hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.4.1 Fibre Channel interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.4.2 LAN interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.5 Solid-state drives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.5.1 Storage bottleneck problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.5.2 Solid-state drive solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.5.3 Solid-state drive market . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.6 Solid-state drives in the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.6.1 Solid-state drive configuration rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.6.2 SVC 5.1 supported hardware list, device driver, and firmware levels . . . . . . . . . . 55
2.6.3 SVC 4.3.1 features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.6.4 New with SVC 5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.7 Maximum supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.8 Useful SVC links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.9 Commonly encountered terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Chapter 3. Planning and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.1 General planning rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.2 Physical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.2.1 Preparing your uninterruptible power supply unit environment . . . . . . . . . . . . . . . 68
3.2.2 Physical rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.2.3 Cable connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.3 Logical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.3.1 Management IP addressing plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.3.2 SAN zoning and SAN connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.3.3 iSCSI IP addressing plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.3.4 Back-end storage subsystem configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.3.5 SVC cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.3.6 Managed Disk Group configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.3.7 Virtual disk configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.3.8 Host mapping (LUN masking) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.3.9 Advanced Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.3.10 SAN boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.3.11 Data migration from a non-virtualized storage subsystem . . . . . . . . . . . . . . . . . 99
3.3.12 SVC configuration backup procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.4 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.4.1 SAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.4.2 Disk subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.4.3 SVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.4.4 Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Chapter 4. SAN Volume Controller initial configuration . . . . . . . . . . . . . . . . . . . . . . . 103
4.1 Managing the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.1.1 TCP/IP requirements for SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . 104
4.2 System Storage Productivity Center overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.2.1 IBM System Storage Productivity Center hardware . . . . . . . . . . . . . . . . . . . . . . 108
4.2.2 SVC installation planning information for System Storage Productivity Center . . 109
4.2.3 SVC installation planning information for the HMC . . . . . . . . . . . . . . . . . . . . . . . 110
4.3 Setting up the SVC cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.3.1 Creating the cluster (first time) using the service panel . . . . . . . . . . . . . . . . . . . 111
4.3.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.3.3 Initial configuration using the service panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4 Adding the cluster to the SSPC or the SVC HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.4.1 Configuring the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
4.5 Secure Shell overview and CIM Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.5.1 Generating public and private SSH key pairs using PuTTY . . . . . . . . . . . . . . . . 126
4.5.2 Uploading the SSH public key to the SVC cluster. . . . . . . . . . . . . . . . . . . . . . . . 129
4.5.3 Configuring the PuTTY session for the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.5.4 Starting the PuTTY CLI session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.5.5 Configuring SSH for AIX clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.6 Using IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.6.1 Migrating a cluster from IPv4 to IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.6.2 Migrating a cluster from IPv6 to IPv4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.7 Upgrading the SVC Console software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Chapter 5. Host configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.1 SVC setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.1.1 Fibre Channel and SAN setup overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.1.2 Port mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.2 iSCSI overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.2.1 Initiators and targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.2.2 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.2.3 IQN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.3 VDisk discovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
5.4 Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.5 AIX-specific information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.5.1 Configuring the AIX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.5.2 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . . 162
5.5.3 HBAs for IBM System p hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.5.4 Configuring for fast fail and dynamic tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.5.5 Subsystem Device Driver (SDD) Path Control Module (SDDPCM) . . . . . . . . . . 165
5.5.6 Discovering the assigned VDisk using SDD and AIX 5L V5.3 . . . . . . . . . . . . . . 167
5.5.7 Using SDD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
5.5.8 Creating and preparing volumes for use with AIX 5L V5.3 and SDD . . . . . . . . . 172
5.5.9 Discovering the assigned VDisk using AIX V6.1 and SDDPCM . . . . . . . . . . . . . 172
5.5.10 Using SDDPCM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.5.11 Creating and preparing volumes for use with AIX V6.1 and SDDPCM. . . . . . . 177
5.5.12 Expanding an AIX volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5.5.13 Removing an SVC volume on AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
5.5.14 Running SVC commands from an AIX host system . . . . . . . . . . . . . . . . . . . . . 181
5.6 Windows-specific information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.6.1 Configuring Windows Server 2000, Windows 2003 Server, and Windows Server 2008 hosts . . . 182
5.6.2 Configuring Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.6.3 Hardware lists, device driver, HBAs, and firmware levels. . . . . . . . . . . . . . . . . . 183
5.6.4 Host adapter installation and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.6.5 Changing the disk timeout on Microsoft Windows Server. . . . . . . . . . . . . . . . . . 185
5.6.6 Installing the SDD driver on Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.6.7 Installing the SDDDSM driver on Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.7 Discovering assigned VDisks in Windows Server 2000 and Windows 2003 Server. . 190
5.7.1 Extending a Windows Server 2000 or Windows 2003 Server volume . . . . . . . . 195
5.8 Example configuration of attaching an SVC to a Windows Server 2008 host. . . . . . . 200
5.8.1 Installing SDDDSM on a Windows Server 2008 host . . . . . . . . . . . . . . . . . . . . . 200
5.8.2 Installing SDDDSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5.8.3 Attaching SVC VDisks to Windows Server 2008 . . . . . . . . . . . . . . . . . . . . . . . . 205
5.8.4 Extending a Windows Server 2008 volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.8.5 Removing a disk on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.9 Using the SVC CLI from a Windows host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5.10 Microsoft Volume Shadow Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5.10.1 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.10.2 System requirements for the IBM System Storage hardware provider . . . . . . . 216
5.10.3 Installing the IBM System Storage hardware provider . . . . . . . . . . . . . . . . . . . 216
5.10.4 Verifying the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5.10.5 Creating the free and reserved pools of volumes . . . . . . . . . . . . . . . . . . . . . . . 221
5.10.6 Changing the configuration parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.11 Specific Linux (on Intel) information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.11.1 Configuring the Linux host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.11.2 Configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.11.3 Disabling automatic Linux system updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.11.4 Setting queue depth with QLogic HBAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.11.5 Multipathing in Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.11.6 Creating and preparing the SDD volumes for use . . . . . . . . . . . . . . . . . . . . . . 231
5.11.7 Using the operating system MPIO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
5.11.8 Creating and preparing MPIO volumes for use. . . . . . . . . . . . . . . . . . . . . . . . . 233
5.12 VMware configuration information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.12.1 Configuring VMware hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.12.2 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . 238
5.12.3 Guest operating systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.12.4 HBAs for hosts running VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.12.5 Multipath solutions supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
5.12.6 VMware storage and zoning recommendations . . . . . . . . . . . . . . . . . . . . . . . . 240
5.12.7 Setting the HBA timeout for failover in VMware . . . . . . . . . . . . . . . . . . . . . . . . 241
5.12.8 Multipathing in ESX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
5.12.9 Attaching VMware to VDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
5.12.10 VDisk naming in VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
5.12.11 Setting the Microsoft guest operating system timeout . . . . . . . . . . . . . . . . . . 246
5.12.12 Extending a VMFS volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
5.12.13 Removing a datastore from an ESX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
5.13 SUN Solaris support information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.13.1 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . 249
5.13.2 SDD dynamic pathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.14 Hewlett-Packard UNIX configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5.14.1 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . 250
5.14.2 Multipath solutions supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5.14.3 Co-existence of SDD and PV Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5.14.4 Using an SVC VDisk as a cluster lock disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
5.14.5 Support for HP-UX with greater than eight LUNs . . . . . . . . . . . . . . . . . . . . . . . 251
5.15 Using SDDDSM, SDDPCM, and SDD Web interface . . . . . . . . . . . . . . . . . . . . . . . 251
5.16 Calculating the queue depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
5.17 Further sources of information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
5.17.1 Publications containing SVC storage subsystem attachment guidelines . . . . . 253
Chapter 6. Advanced Copy Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
6.1 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
6.1.1 Business requirement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
6.1.2 Moving and migrating data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
6.1.3 Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
6.1.4 Restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
6.1.5 Application testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
6.1.6 SVC FlashCopy features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
6.2 Reverse FlashCopy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
6.2.1 FlashCopy and Tivoli Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
6.3 How FlashCopy works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
6.4 Implementing SVC FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
6.4.1 FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
6.4.2 Multiple Target FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
6.4.3 Consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
6.4.4 FlashCopy indirection layer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
6.4.5 Grains and the FlashCopy bitmap. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
6.4.6 Interaction and dependency between Multiple Target FlashCopy mappings . . . 267
6.4.7 Summary of the FlashCopy indirection layer algorithm. . . . . . . . . . . . . . . . . . . . 269
6.4.8 Interaction with the cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
6.4.9 FlashCopy rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
6.4.10 FlashCopy and image mode disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
6.4.11 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
6.4.12 FlashCopy mapping states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
6.4.13 Space-efficient FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
6.4.14 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
6.4.15 Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
6.4.16 Serialization of I/O by FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
6.4.17 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
6.4.18 Asynchronous notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
6.4.19 Interoperation with Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . . 280
6.4.20 Recovering data from FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
6.5 Metro Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
6.5.1 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
6.5.2 Remote copy techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
6.5.3 SVC Metro Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
6.5.4 Multiple Cluster Mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
6.5.5 Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
6.5.6 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
6.5.7 How Metro Mirror works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
6.5.8 Metro Mirror process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
6.5.9 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
6.5.10 State overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
6.5.11 Detailed states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
6.5.12 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
6.5.13 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions. 302
6.5.14 Metro Mirror configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
6.6 Metro Mirror commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
6.6.1 Listing available SVC cluster partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
6.6.2 Creating the SVC cluster partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
6.6.3 Creating a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
6.6.4 Creating a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
6.6.5 Changing a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
6.6.6 Changing a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
6.6.7 Starting a Metro Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
6.6.8 Stopping a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
6.6.9 Starting a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
6.6.10 Stopping a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
6.6.11 Deleting a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
6.6.12 Deleting a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
6.6.13 Reversing a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
6.6.14 Reversing a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 308
6.6.15 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.7 Global Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.7.1 Intracluster Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.7.2 Intercluster Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.8 Remote copy techniques. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
6.8.1 Asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
6.8.2 SVC Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
6.9 Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
6.9.1 Global Mirror relationship between primary and secondary VDisks . . . . . . . . . . 313
6.9.2 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
6.9.3 Dependent writes that span multiple VDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
6.9.4 Global Mirror consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
6.10 Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6.10.1 Intercluster communication and zoning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6.10.2 SVC cluster partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6.10.3 Maintenance of the intercluster link. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6.10.4 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
6.10.5 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
6.10.6 Space-efficient background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6.11 Global Mirror process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6.11.1 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6.11.2 State overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6.11.3 Detailed states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
6.11.4 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
6.11.5 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions. 329
6.11.6 Global Mirror configuration limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
6.12 Global Mirror commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
6.12.1 Listing the available SVC cluster partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
6.12.2 Creating an SVC cluster partnership. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
6.12.3 Creating a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6.12.4 Creating a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6.12.5 Changing a Global Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6.12.6 Changing a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.12.7 Starting a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.12.8 Stopping a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.12.9 Starting a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
6.12.10 Stopping a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 336
6.12.11 Deleting a Global Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
6.12.12 Deleting a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 337
6.12.13 Reversing a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
6.12.14 Reversing a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . 337
Chapter 7. SAN Volume Controller operations using the command-line interface. . 339
7.1 Normal operations using CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
7.1.1 Command syntax and online help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
7.2 Working with managed disks and disk controller systems . . . . . . . . . . . . . . . . . . . . 340
7.2.1 Viewing disk controller details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
7.2.2 Renaming a controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
7.2.3 Discovery status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
7.2.4 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
7.2.5 Viewing MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
7.2.6 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
7.2.7 Including an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
7.2.8 Adding MDisks to a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
7.2.9 Showing the Managed Disk Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
7.2.10 Showing MDisks in a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
7.2.11 Working with Managed Disk Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
7.2.12 Creating a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
7.2.13 Viewing Managed Disk Group information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
7.2.14 Renaming a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
7.2.15 Deleting a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
7.2.16 Removing MDisks from a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . 349
7.3 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
7.3.1 Creating a Fibre Channel-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
7.3.2 Creating an iSCSI-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
7.3.3 Modifying a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
7.3.4 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
7.3.5 Adding ports to a defined host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
7.3.6 Deleting ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
7.4 Working with VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
7.4.1 Creating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
7.4.2 VDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
7.4.3 Creating a Space-Efficient VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
7.4.4 Creating a VDisk in image mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
7.4.5 Adding a mirrored VDisk copy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
7.4.6 Splitting a VDisk Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
7.4.7 Modifying a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
7.4.8 I/O governing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
7.4.9 Deleting a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
7.4.10 Expanding a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
7.4.11 Assigning a VDisk to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
7.4.12 Showing VDisk-to-host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
7.4.13 Deleting a VDisk-to-host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
7.4.14 Migrating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
7.4.15 Migrating a VDisk to an image mode VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
7.4.16 Shrinking a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
7.4.17 Showing a VDisk on an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
7.4.18 Showing VDisks using a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . 373
7.4.19 Showing which MDisks are used by a specific VDisk . . . . . . . . . . . . . . . . . . . . 374
7.4.20 Showing from which Managed Disk Group a VDisk has its extents . . . . . . . . . 374
7.4.21 Showing the host to which the VDisk is mapped . . . . . . . . . . . . . . . . . . . . . . . 375
7.4.22 Showing the VDisk to which the host is mapped . . . . . . . . . . . . . . . . . . . . . . . 376
7.4.23 Tracing a VDisk from a host back to its physical disk . . . . . . . . . . . . . . . . . . . . 376
7.5 Scripting under the CLI for SVC task automation . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
7.6 SVC advanced operations using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
7.6.1 Command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
7.6.2 Organizing on window content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
7.7 Managing the cluster using the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
7.7.1 Viewing cluster properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
7.7.2 Changing cluster settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
7.7.3 Cluster authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
7.7.4 iSCSI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
7.7.5 Modifying IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
7.7.6 Supported IP address formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
7.7.7 Setting the cluster time zone and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
7.7.8 Starting statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
7.7.9 Stopping statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
7.7.10 Status of copy operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
7.7.11 Shutting down a cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
7.8 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
7.8.1 Viewing node details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
7.8.2 Adding a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
7.8.3 Renaming a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
7.8.4 Deleting a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
7.8.5 Shutting down a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
7.9 I/O Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
7.9.1 Viewing I/O Group details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
7.9.2 Renaming an I/O Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
7.9.3 Adding and removing hostiogrp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
7.9.4 Listing I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
7.10 Managing authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
7.10.1 Managing users using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
7.10.2 Managing user roles and groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
7.10.3 Changing a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7.10.4 Audit log command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7.11 Managing Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
7.11.1 FlashCopy operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
7.11.2 Setting up FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
7.11.3 Creating a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
7.11.4 Creating a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
7.11.5 Preparing (pre-triggering) the FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . 401
7.11.6 Preparing (pre-triggering) the FlashCopy consistency group . . . . . . . . . . . . . . 402
7.11.7 Starting (triggering) FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
7.11.8 Starting (triggering) a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . 404
7.11.9 Monitoring the FlashCopy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
7.11.10 Stopping the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
7.11.11 Stopping the FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 406
7.11.12 Deleting the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
7.11.13 Deleting the FlashCopy consistency group. . . . . . . . . . . . . . . . . . . . . . . . . . . 407
7.11.14 Migrating a VDisk to a Space-Efficient VDisk . . . . . . . . . . . . . . . . . . . . . . . . . 407
7.11.15 Reverse FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
7.11.16 Split-stopping of FlashCopy maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
7.12 Metro Mirror operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
7.12.1 Setting up Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
7.12.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4 . . . . . . . . 415
7.12.3 Creating a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
7.12.4 Creating the Metro Mirror relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
7.12.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri. . . . . . . . . . 418
7.12.6 Starting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
7.12.7 Starting a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
7.12.8 Monitoring the background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
7.12.9 Stopping and restarting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
7.12.10 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 422
7.12.11 Stopping a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 423
7.12.12 Restarting a Metro Mirror relationship in the Idling state. . . . . . . . . . . . . . . . . 424
7.12.13 Restarting a Metro Mirror consistency group in the Idling state . . . . . . . . . . . 424
7.12.14 Changing copy direction for Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
7.12.15 Switching copy direction for a Metro Mirror relationship . . . . . . . . . . . . . . . . . 425
7.12.16 Switching copy direction for a Metro Mirror consistency group. . . . . . . . . . . . 426
7.12.17 Creating an SVC partnership among many clusters . . . . . . . . . . . . . . . . . . . . 427
7.12.18 Star configuration partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
7.13 Global Mirror operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
7.13.1 Setting up Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
7.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4 . . . . . . . . 436
7.13.3 Changing link tolerance and cluster delay simulation . . . . . . . . . . . . . . . . . . . . 437
7.13.4 Creating a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
7.13.5 Creating Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
7.13.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri. . . . . . . . 441
7.13.7 Starting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
7.13.8 Starting a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 441
7.13.9 Starting a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
7.13.10 Monitoring background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
7.13.11 Stopping and restarting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
7.13.12 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 444
7.13.13 Stopping a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 445
7.13.14 Restarting a Global Mirror relationship in the Idling state . . . . . . . . . . . . . . . . 446
7.13.15 Restarting a Global Mirror consistency group in the Idling state. . . . . . . . . . . 446
7.13.16 Changing copy direction for Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
7.13.17 Switching copy direction for a Global Mirror relationship . . . . . . . . . . . . . . . . 447
7.13.18 Switching copy direction for a Global Mirror consistency group . . . . . . . . . . . 448
7.14 Service and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
7.14.1 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
7.14.2 Running maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
7.14.3 Setting up SNMP notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
7.14.4 Setting up syslog event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
7.14.5 Configuring error notification using an e-mail server. . . . . . . . . . . . . . . . . . . . . 459
7.14.6 Analyzing the error log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
7.14.7 License settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
7.14.8 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
7.14.9 Backing up the SVC cluster configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
7.14.10 Restoring the SVC cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
7.14.11 Deleting configuration backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
7.15 SAN troubleshooting and data collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
7.16 T3 recovery process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Chapter 8. SAN Volume Controller operations using the GUI. . . . . . . . . . . . . . . . . . . 469
8.1 SVC normal operations using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
8.1.1 Organizing on window content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
8.1.2 Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
8.1.3 Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
8.1.4 General housekeeping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
8.1.5 Viewing progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
8.2 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
8.2.1 Viewing disk controller details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
8.2.2 Renaming a disk controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
8.2.3 Discovery status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
8.2.4 Managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
8.2.5 MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
8.2.6 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
8.2.7 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
8.2.8 Including an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
8.2.9 Showing a VDisk using a certain MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
8.3 Working with Managed Disk Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
8.3.1 Viewing MDisk group information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
8.3.2 Creating MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3.3 Renaming a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3.4 Deleting a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3.5 Adding MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3.6 Removing MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3.7 Displaying MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3.8 Showing MDisks in this group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.3.9 Showing the VDisks that are associated with an MDisk group . . . . . . . . . . . . . .
8.4 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.4.1 Host information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.4.2 Creating a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.4.3 Fibre Channel-attached hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.4.4 iSCSI-attached hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.4.5 Modifying a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.4.6 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.4.7 Adding ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.4.8 Deleting ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5 Working with VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.1 Using the Viewing VDisks using MDisk window . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.2 VDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.3 Creating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.4 Creating a Space-Efficient VDisk with autoexpand. . . . . . . . . . . . . . . . . . . . . . .
8.5.5 Deleting a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.6 Deleting a VDisk-to-host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.7 Expanding a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.8 Assigning a VDisk to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.9 Modifying a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.10 Migrating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.11 Migrating a VDisk to an image mode VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.12 Creating a VDisk Mirror from an existing VDisk . . . . . . . . . . . . . . . . . . . . . . . .
8.5.13 Creating a mirrored VDisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.14 Creating a VDisk in image mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.15 Creating an image mode mirrored VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.16 Migrating to a Space-Efficient VDisk using VDisk Mirroring . . . . . . . . . . . . . . .
8.5.17 Deleting a VDisk copy from a VDisk mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.18 Splitting a VDisk copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.19 Shrinking a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.20 Showing the MDisks that are used by a VDisk . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.21 Showing the MDG to which a VDisk belongs . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.22 Showing the host to which the VDisk is mapped . . . . . . . . . . . . . . . . . . . . . . .
8.5.23 Showing capacity information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.24 Showing VDisks mapped to a particular host . . . . . . . . . . . . . . . . . . . . . . . . . .
8.5.25 Deleting VDisks from a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.6 Working with solid-state drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.6.1 Solid-state drive introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.7 SVC advanced operations using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.7.1 Organizing on window content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.8 Managing the cluster using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.8.1 Viewing cluster properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.8.2 Modifying IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.8.3 Starting the statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.8.4 Stopping the statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.8.5 Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.8.6 iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.8.7 Setting the cluster time and configuring the Network Time Protocol server . . . .
8.8.8 Shutting down a cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.9 Manage authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.9.1 Modify current user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.9.2 Creating a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.9.3 Modifying a user role. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.9.4 Deleting a user role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.9.5 User groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.9.6 Cluster password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.9.7 Remote authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.10 Working with nodes using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.10.1 I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.10.2 Renaming an I/O Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.10.3 Adding nodes to the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.10.4 Configuring iSCSI ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.11 Managing Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.12 FlashCopy operations using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.13 Creating a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.13.1 Creating a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.13.2 Preparing (pre-triggering) the FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.13.3 Starting (triggering) FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.13.4 Starting (triggering) a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . .
8.13.5 Monitoring the FlashCopy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.13.6 Stopping the FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.13.7 Deleting the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.13.8 Deleting the FlashCopy consistency group. . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.13.9 Migrating between a fully allocated VDisk and a Space-Efficient VDisk . . . . . .
8.13.10 Reversing and splitting a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . .
8.14 Metro Mirror operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.14.1 Cluster partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.14.2 Setting up Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.14.3 Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . . .
8.14.4 Creating a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.14.5 Creating Metro Mirror relationships for MM_DB_Pri and MM_DBLog_Pri . . . .
8.14.6 Creating a stand-alone Metro Mirror relationship for MM_App_Pri. . . . . . . . . .
8.14.7 Starting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.14.8 Starting a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . .
8.14.9 Starting a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.14.10 Monitoring background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.14.11 Stopping and restarting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.14.12 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . .
8.14.13 Stopping a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . .
8.14.14 Restarting a Metro Mirror relationship in the Idling state. . . . . . . . . . . . . . . . .
8.14.15 Restarting a Metro Mirror consistency group in the Idling state . . . . . . . . . . .
8.14.16 Changing copy direction for Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.14.17 Switching copy direction for a Metro Mirror consistency group. . . . . . . . . . . .
8.14.18 Switching the copy direction for a Metro Mirror relationship . . . . . . . . . . . . . .
8.15 Global Mirror operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.15.1 Setting up Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.15.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . . . .
8.15.3 Global Mirror link tolerance and delay simulations . . . . . . . . . . . . . . . . . . . . . .
8.15.4 Creating a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.15.5 Creating Global Mirror relationships for GM_DB_Pri and GM_DBLog_Pri . . . .
8.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri. . . . . . . .
8.15.7 Starting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.15.8 Starting a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . .
8.15.9 Starting a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.15.10 Monitoring background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.15.11 Stopping and restarting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.15.12 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . .
8.15.13 Stopping a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . .
8.15.14 Restarting a Global Mirror relationship in the Idling state . . . . . . . . . . . . . . . .
8.15.15 Restarting a Global Mirror consistency group in the Idling state. . . . . . . . . . .
8.15.16 Changing copy direction for Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.15.17 Switching copy direction for a Global Mirror consistency group . . . . . . . . . . .
8.16 Service and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.1 Package numbering and version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.2 Upgrade status utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.3 Precautions before upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.4 SVC software upgrade test utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.5 Upgrade procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.6 Running maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.7 Setting up error notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.8 Setting syslog event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.9 Set e-mail features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.10 Analyzing the error log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.11 License settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.12 Viewing the license settings log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.13 Dumping the cluster configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.14 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.17.15 Setting up a quorum disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.18 Backing up the SVC configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.18.1 Backup procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.18.2 Saving the SVC configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.18.3 Restoring the SVC configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.18.4 Deleting the configuration backup files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.18.5 Fabrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.18.6 Common Information Model object manager log configuration. . . . . . . . . . . . .
Chapter 9. Data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.1 Migration overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2 Migration operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.1 Migrating multiple extents (within an MDG) . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.2 Migrating extents off of an MDisk that is being deleted. . . . . . . . . . . . . . . . . . . .
9.2.3 Migrating a VDisk between MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.4 Migrating the VDisk to image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.5 Migrating a VDisk between I/O Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.6 Monitoring the migration progress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.3 Functional overview of migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.3.1 Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.3.2 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.3.3 Migration algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.4 Migrating data from an image mode VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.4.1 Image mode VDisk migration concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.4.2 Migration tips. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.5 Data migration for Windows using the SVC GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.5.1 Windows Server 2008 host system connected directly to the DS4700. . . . . . . .
9.5.2 Adding the SVC between the host system and the DS4700. . . . . . . . . . . . . . . .
9.5.3 Putting the migrated disks onto an online Windows Server 2008 host . . . . . . . .
9.5.4 Migrating the VDisk from image mode to managed mode . . . . . . . . . . . . . . . . .
9.5.5 Migrating the VDisk from managed mode to image mode . . . . . . . . . . . . . . . . .
9.5.6 Migrating the VDisk from image mode to image mode . . . . . . . . . . . . . . . . . . . .
9.5.7 Free the data from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.5.8 Put the free disks online on Windows Server 2008. . . . . . . . . . . . . . . . . . . . . . .
9.6 Migrating Linux SAN disks to SVC disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.6.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.6.2 Preparing your SVC to virtualize disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.6.3 Move the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.6.4 Migrate the image mode VDisks to managed MDisks . . . . . . . . . . . . . . . . . . . .
9.6.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.6.6 Migrate the VDisks to image mode VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.6.7 Removing the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.7 Migrating ESX SAN disks to SVC disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.7.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.7.2 Preparing your SVC to virtualize disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.7.3 Move the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.7.4 Migrating the image mode VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.7.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.7.6 Migrating the managed VDisks to image mode VDisks . . . . . . . . . . . . . . . . . . .
9.7.7 Remove the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.8 Migrating AIX SAN disks to SVC disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.8.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.8.2 Preparing your SVC to virtualize disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.8.3 Moving the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.8.4 Migrating image mode VDisks to VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.8.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.8.6 Migrating the managed VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.8.7 Removing the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.9 Using SVC for storage migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.10 Using VDisk Mirroring and Space-Efficient VDisks together . . . . . . . . . . . . . . . . . . .
9.10.1 Zero detect feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.10.2 VDisk Mirroring With Space-Efficient VDisks . . . . . . . . . . . . . . . . . . . . . . . . . .
9.10.3 Metro Mirror and Space-Efficient VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Appendix A. Scripting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Scripting structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Automated virtual disk creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
SVC tree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Scripting alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Appendix B. Node replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Replacing nodes nondisruptively . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Expanding an existing SVC cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Moving VDisks to a new I/O Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Replacing nodes disruptively (rezoning the SAN) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Appendix C. Performance data and statistics gathering. . . . . . . . . . . . . . . . . . . . . . . . .
SVC performance overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
SVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Collecting performance statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Performance data collection and TotalStorage Productivity Center for Disk . . . . . . . .
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
How to get IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX 5L™
AIX®
developerWorks®
DS4000®
DS6000™
DS8000®
Enterprise Storage Server®
FlashCopy®
GPFS™
IBM Systems Director Active Energy
Manager™
IBM®
Power Systems™
Redbooks®
Redbooks (logo)®
Solid®
System i®
System p®
System Storage™
System Storage DS®
System x®
System z®
Tivoli®
TotalStorage®
WebSphere®
XIV®
z/OS®
The following terms are trademarks of other companies:
Emulex, and the Emulex logo are trademarks or registered trademarks of Emulex Corporation.
Novell, SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States
and other countries.
QLogic, and the QLogic logo are registered trademarks of QLogic Corporation. SANblade is a registered
trademark in the United States.
ACS, Red Hat, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the U.S.
and other countries.
VMotion, VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware,
Inc. in the United States and/or other jurisdictions.
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.
Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Intel Xeon, Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered
trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Summary of changes
This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for SG24-6423-07
for Implementing the IBM System Storage SAN Volume Controller V5.1
as created or updated on March 30, 2010.
March 2010, Eighth Edition
This revision reflects the addition, deletion, or modification of new and changed information
described next.
New information
Added iSCSI information
Added Solid® State Drive information
Changed information
Removed duplicate information
Consolidated chapters
Removed dated material
Preface
This IBM® Redbooks® publication is a detailed technical guide to the IBM System Storage™
SAN Volume Controller (SVC), a virtualization appliance solution that maps virtualized
volumes visible to hosts and applications to physical volumes on storage devices. Each
server within the storage area network (SAN) has its own set of virtual storage addresses,
which are mapped to physical addresses. If the physical addresses change, the server
continues running using the same virtual addresses that it had before. Therefore, volumes or
storage can be added or moved while the server is still running. The IBM virtualization
technology improves management of information at the “block” level in a network, enabling
applications and servers to share storage devices on a network. This book is intended to
allow you to implement the SVC at a 5.1.0 release level with a minimum of effort.
The team who wrote this book
This book was produced by a team of specialists from around the world working at Brocade
Communications, San Jose, and the International Technical Support Organization, San Jose
Center.
Jon Tate is a Project Manager for IBM System Storage SAN Solutions at the International
Technical Support Organization, San Jose Center. Before joining the ITSO in 1999, he
worked in the IBM Technical Support Center, providing Level 2 and 3 support for IBM storage
products. Jon has 24 years of experience in storage software and management, services,
and support, and is both an IBM Certified IT Specialist and an IBM SAN Certified Specialist.
He is also the UK Chairman of the Storage Networking Industry Association.
Pall Beck is a SAN Technical Team Lead in IBM Nordic. He has 12 years of experience
working with storage and joined the IBM ITD DK in 2005. Prior to working for IBM in Denmark,
he worked as an IBM service representative performing hardware installations and repairs for
IBM System i®, System p®, and System z® in Iceland. As a SAN Technical Team Lead for
ITD DK, he led a team of administrators running several of the largest SAN installations in
Europe. His current position involves the creation and implementation of operational
standards and aligning best practices throughout the Nordics. Pall has a diploma as an
Electronic Technician from Odense Tekniske Skole in Denmark and IR in Reykjavik, Iceland.
Angelo Bernasconi is a Certified ITS Senior Storage and SAN Software Specialist in IBM
Italy. He has 24 years of experience in the delivery of maintenance and professional services
for IBM Enterprise clients in z/OS® and open systems. He holds a degree in Electronics and
his areas of expertise include storage hardware, SAN, storage virtualization, de-duplication,
and disaster recovery solutions. He has written extensively about SAN and virtualization
products in three IBM Redbooks publications, and he is the Technical Leader of the Italian
Open System Storage Professional Services Community.
Werner Eggli is a Senior IT Specialist with IBM Switzerland. He has more than 25 years of
experience in Software Development, Project Management, and Consulting concentrating in
the Networking and Telecommunication Segment. Werner joined IBM in 2001 and works in
pre-sales as a Storage Systems Engineer for Open Systems. His expertise is the design and
implementation of IBM Storage Solutions. He holds a Dipl.-Informatiker (FH) degree from
Fachhochschule Konstanz, Germany.
We extend our thanks to the following people for their contributions to this project.
There are many people who contributed to this book. In particular, we thank the development
and PFE teams in Hursley. Matt Smith was also instrumental in moving any issues along and
ensuring that they maintained a high profile.
In particular, we thank the previous authors of this book:
Matt Amanat
Angelo Bernasconi
Steve Cody
Sean Crawford
Sameer Dhulekar
Katja Gebuhr
Deon George
Amarnath Hiriyannappa
Thorsten Hoss
Juerg Hossli
Philippe Jachimczyk
Kamalakkannan J Jayaraman
Dan Koeck
Bent Lerager
Craig McKenna
Andy McManus
Joao Marcos Leite
Barry Mellish
Suad Musovich
Massimo Rosati
Fred Scholten
Robert Symons
Marcus Thordal
Xiao Peng Zhao
We also want to thank the following people for their contributions to previous editions and to
those people who contributed to this edition:
John Agombar
Alex Ainscow
Trevor Boardman
Chris Canto
Peter Eccles
Carlos Fuente
Alex Howell
Colin Jewell
Paul Mason
Paul Merrison
Jon Parkes
Steve Randle
Lucy Raw
Bill Scales
Dave Sinclair
Matt Smith
Steve White
Barry Whyte
IBM Hursley
Bill Wiegand
IBM Advanced Technical Support
Dorothy Faurot
IBM Raleigh
Sharon Wang
IBM Chicago
Chris Saul
IBM San Jose
Sangam Racherla
IBM ITSO
A special mention must go to Brocade for their unparalleled support of this residency, in terms
of both equipment and assistance in many areas. Namely:
Jim Baldyga
Yong Choi
Silviano Gaona
Brian Steffler
Steven Tong
Brocade Communications Systems
Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author - all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base. Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us.
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review IBM Redbooks form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/pages/IBM-Redbooks/178023492563?ref=ts
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Chapter 1. Introduction to storage virtualization
This chapter defines storage virtualization, gives a short overview of today’s most critical
storage issues, and explains how storage virtualization can help you solve them.
1.1 Storage virtualization
Storage virtualization is an overused term. People often use it as a buzzword to claim that a
product is virtualized, and almost every storage hardware and software product can technically
claim to provide a form of block-level virtualization. So, where does actual storage
virtualization begin? Does the fact that a mobile computer has logical volumes that are created
from a single physical drive mean that the computer is virtual? Not really.
So, what is storage virtualization? The IBM explanation of storage virtualization is clear:
Storage virtualization is a technology that makes one set of resources look and feel like
another set of resources, preferably with more desirable characteristics.
It is a logical representation of resources not constrained by physical limitations:
– Hides part of the complexity
– Adds or integrates new function with existing services
– Can be nested or applied to multiple layers of a system
When discussing storage virtualization, it is important to understand that virtualization can be
implemented on separate layers in the I/O stack. We have to clearly distinguish between
virtualization on the file system layer and virtualization on the block layer, that is, the disk layer.
The focus of this book is block-level virtualization, that is, the block aggregation layer. File
system virtualization is out of the intended scope of this book.
If you are interested in file system virtualization, refer to IBM General Parallel File System
(GPFS™) or IBM scale out file services, which is based on GPFS. For more information and
an overview of the IBM General Parallel File System (GPFS) Version 3, Release 2 for AIX®,
Linux®, and Windows®, go to this Web site:
http://www-03.ibm.com/systems/clusters/software/whitepapers/gpfs_intro.html
For the IBM scale out file services, go to this Web site:
http://www-935.ibm.com/services/us/its/html/sofs-landing.html
The Storage Networking Industry Association’s (SNIA) block aggregation model (Figure 1-1
on page 3) provides a good overview of the storage domain and its layers.
Figure 1-1 on page 3 shows the three layers of a storage domain: the file, the block
aggregation, and the block subsystem layers. The model splits the block aggregation layer
into three sublayers. Block aggregation can be realized within hosts (servers), in the storage
network (storage routers and storage controllers), or in storage devices (intelligent disk
arrays).
The IBM implementation of a block aggregation solution is the IBM System Storage SAN
Volume Controller (SVC). The SVC is implemented as a clustered appliance in the storage
network layer. Chapter 2, “IBM System Storage SAN Volume Controller” on page 7 provides a
more in-depth discussion of why IBM has chosen to implement its IBM System Storage SAN
Volume Controller in the storage network layer.
Figure 1-1 SNIA block aggregation model
The key concept of virtualization is to decouple the storage (which is delivered by commodity
two-way Redundant Array of Independent Disks (RAID) controllers attaching physical disk
drives) from the storage functions that servers expect in today’s storage area network (SAN)
environment.
Decoupling is abstracting the physical location of data from the logical representation that an
application on a server uses to access data. The virtualization engine presents logical
entities, which are called volumes, to the user and internally manages the process of mapping
the volume to the actual physical location. The realization of this mapping depends on the
specific implementation. Another implementation-specific issue is the granularity of the
mapping, which can range from a small fraction of a physical disk, up to the full capacity of a
single physical disk. A single block of information in this environment is identified by its logical
unit number (LUN), which identifies the physical disk, and an offset within that LUN, which is
known as a logical block address (LBA).
Be aware that the term physical disk that is used in this context describes a piece of storage
that might be carved out of a RAID array in the underlying disk subsystem.
The address space is mapped between the logical entity, which is usually referred to as a
virtual disk (VDisk), and the physical disks, which are identified by their LUNs. We refer to
these LUNs, which are provided by the storage controllers to the virtualization layer, as
managed disks (MDisks) throughout this book.
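To make this mapping concrete, the following minimal Python sketch shows how a
virtualization layer can translate a VDisk LBA into an MDisk and an LBA on that MDisk
through an extent table. The extent size, the class names, and the striping layout are
assumptions that were chosen for illustration; this sketch is not the SVC’s internal
implementation.

EXTENT_SIZE = 16 * 1024 * 1024   # bytes per extent; an assumed value, not the SVC's
BLOCK_SIZE = 512                 # bytes per logical block

class Extent:
    """One contiguous slice of an MDisk that backs part of a VDisk."""
    def __init__(self, mdisk_id, mdisk_offset):
        self.mdisk_id = mdisk_id          # which managed disk
        self.mdisk_offset = mdisk_offset  # byte offset within that MDisk

class VDisk:
    """A virtual disk: an ordered table of extents."""
    def __init__(self, extents):
        self.extents = extents

    def map_lba(self, lba):
        """Translate a VDisk LBA into (mdisk_id, mdisk_lba)."""
        byte_addr = lba * BLOCK_SIZE
        extent = self.extents[byte_addr // EXTENT_SIZE]
        offset = byte_addr % EXTENT_SIZE
        return extent.mdisk_id, (extent.mdisk_offset + offset) // BLOCK_SIZE

# A VDisk whose extents alternate across two MDisks (a simple striping layout)
vdisk = VDisk([Extent("mdisk0", 0), Extent("mdisk1", 0),
               Extent("mdisk0", EXTENT_SIZE), Extent("mdisk1", EXTENT_SIZE)])
print(vdisk.map_lba(40000))   # ('mdisk1', 7232)

Because hosts address only VDisk LBAs, the extent table can be rewritten behind the
scenes, for example, during a migration, without the host ever noticing, which is exactly the
decoupling that is described here.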
Figure 1-2 on page 4 shows an overview of block-level virtualization.
Figure 1-2 Block level virtualization overview
The server and the application know only about logical entities, and they access these
logical entities through a consistent interface that is provided by the virtualization layer. Each
logical entity offers a common, well-defined set of functions that is independent of where its
physical representation is located.
The functionality of a VDisk that is presented to a server, such as expanding or reducing the
size of a VDisk, mirroring a VDisk to a secondary site, creating a FlashCopy/Snapshot, thin
provisioning/over-allocating, and so on, is implemented in the virtualization layer and does not
rely in any way on the functionality that is provided by the disk subsystems that deliver the
MDisks. Data that is stored in a virtualized environment is stored in a location-independent
way, which allows a user to move or migrate the data, or parts of it, to another location or
storage pool, that is, to the place where the data really belongs.
The logical entity can be resized, moved, replaced, replicated, over-allocated, mirrored,
migrated, and so on without any disruption to the server and the application. After you have
an abstraction layer in the SAN, you can perform almost any task.
We refer to the following capabilities as the cornerstones of block-level storage virtualization.
They are the core advantages that a product, such as the SVC, can provide over traditional
directly attached SAN storage:
The SVC provides online volume migration while applications are running, which is
possibly the greatest advantage of storage virtualization. With online migration while
applications are running, you can put your data where it belongs, and, if the requirements
change over time, move it to the right place or storage pool without impacting your server
or application. Implementing a tiered storage environment can provide various storage
classes for information life cycle management (ILM), can balance I/O across controllers,
can allow you to add, upgrade, and retire storage; in essence, it allows you to put your
data where it really belongs.
The SVC simplifies storage management by providing a single image for multiple
controllers and a consistent user interface for provisioning heterogeneous storage (after
the initial array setup).
The SVC provides enterprise-level copy services for existing storage. You can license a
function one time and use it everywhere. You can purchase new storage as low-cost RAID
“bricks.” The source and target of a copy relationship can be on separate controllers.
You can increase storage utilization by pooling storage across the SAN.
You have the potential to increase system performance by reducing hot spots, striping
disks across many arrays and controllers, and in certain implementations, providing
additional caching.
The ability to deliver these functions in a homogeneous way on a scalable and highly
available platform, over any attached storage and to every attached server, is the key
challenge for every block-level virtualization solution.
1.2 User requirements that drive storage virtualization
In today’s environment, with its emphasis on a smarter planet and a dynamic infrastructure,
you need a storage environment that is as flexible as your applications and servers are mobile.
Business
demands change quickly.
These key client concerns drive storage virtualization:
Growth in datacenter costs
Inability of IT organizations to respond quickly to business demands
Poor asset utilization
Poor availability or service levels
Lack of skilled staff for storage administration
You can see the importance of addressing the complexity of managing storage networks by
applying the total cost of ownership (TCO) metric to storage networks. Industry analyses
show that storage acquisition costs are only about 20% of the TCO. Most of the remaining
costs are related to managing the storage system.
How can multiple systems with separate interfaces be managed as a single entity? In a
non-virtualized storage environment, every system is an island. Even if you have a large
system that claims to virtualize, that system is an island that you will need to replace in the
future.
With the SVC, you can reduce the number of separate environments that you need to
manage, ideally to a single environment. Even in an installation with tens or thousands of
systems, merely reducing that number is a step in the right direction.
The SVC provides a single interface for storage management. Of course, there is an initial
effort to set up the disk subsystems; however, all of the day-to-day storage management can
be performed on the SVC. For example, you can use the data migration functionality of the
SVC as disk subsystems are phased out; the SVC moves the data online and without any
impact on your servers.
Also, the virtualization layer offers advanced functions, such as data mirroring or FlashCopy®,
so there is no need to purchase them again for each new disk subsystem.
Today, it is typical that open systems run at significantly less than 50% of the usable capacity
that the RAID disk subsystems provide. Measured against the installed raw capacity, and
depending on the RAID level that is used, utilization is often less than 35%. A block-level
virtualization solution, such as the SVC, can help you increase that utilization to
approximately 75 - 80%.
With the SVC, you do not need to keep and manage free space in each disk subsystem. You
do not need to worry whether there is sufficient free space on the right storage tier, or in a
single system.
Even if there is enough free space in one system, it might not be accessible in a
non-virtualized environment for a specific server or application due to multipath driver issues.
The SVC is able to handle the storage resources that it manages as a single storage pool.
Disk space allocation from this pool is a matter of minutes for every server connected to the
SVC, because you provision the capacity as needed, without disrupting applications.
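The following toy calculation in Python illustrates this effect. The capacities and demands are
invented for the example; in the siloed case, each server can draw only on the subsystem to
which it is attached, but in the pooled case, all free capacity is available to every server.

# Four 10 TB subsystems with uneven demand from the attached servers
capacities = [10, 10, 10, 10]   # TB of usable capacity per subsystem
demands    = [14, 12, 3, 2]     # TB that each attached server needs

siloed = sum(min(c, d) for c, d in zip(capacities, demands))  # per-box limit
pooled = min(sum(capacities), sum(demands))                   # one shared pool

print("siloed: %d TB allocated, %.0f%% utilization"
      % (siloed, 100.0 * siloed / sum(capacities)))
print("pooled: %d TB allocated, %.0f%% utilization"
      % (pooled, 100.0 * pooled / sum(capacities)))

In the siloed case, 25 TB is allocated (62% utilization) while 15 TB sits stranded on the two
lightly used boxes; in the pooled case, 31 TB is allocated (78% utilization), which is in line
with the 75 - 80% figure.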
1.3 Conclusion
Storage virtualization is no longer merely a concept or an unproven technology. All major
storage vendors offer storage virtualization products. Making use of storage virtualization as
the foundation for a flexible and reliable storage solution helps a company better align
business and IT by optimizing the storage infrastructure and storage management to meet
business demands.
The IBM System Storage SAN Volume Controller is a mature, fifth generation virtualization
solution, which uses open standards and is consistent with the Storage Networking Industry
Association (SNIA) storage model. The SVC is an appliance-based, in-band block
virtualization solution, in which intelligence, including advanced storage functions, is moved
from individual storage devices into the storage network.
We expect the use of SVC will improve the utilization of your storage resources, simplify the
storage management, and improve the availability of your applications.
Chapter 2. IBM System Storage SAN Volume Controller
This chapter describes the major concepts of the IBM System Storage SAN Volume
Controller (SVC). It covers not only the hardware architecture but also the software concepts.
We provide a brief history of the product, and we describe the additional functionality that
will be available with the newest release.
2.1 SVC history
The IBM implementation of block-level storage virtualization, the IBM System Storage SAN
Volume Controller (SVC), is based on an IBM project that was initiated in the second half of
1999 at the IBM Almaden Research Center. The project was called COMPASS (COMmodity
PArts Storage System). One of its goals was to build a system almost exclusively from
off-the-shelf standard parts. Like any enterprise-level storage control system, it had to deliver
high performance and availability that were comparable to the highly optimized storage
controllers of previous generations. The idea of building a storage control system on a
scalable cluster of lower-performance Pentium®-based servers, instead of on a monolithic
two-node architecture, remains compelling.
COMPASS also had to address a major challenge for the heterogeneous open systems
environment, namely to reduce the complexity of managing storage on block devices.
The first publications covering this project were released to the public in 2003 in the form of
the IBM SYSTEMS JOURNAL, VOL 42, NO 2, 2003, “The architecture of a SAN storage
control system”, by J. S. Glider, C. F. Fuente, and W. J. Scales, which you can read at this
Web site:
http://domino.research.ibm.com/tchjr/journalindex.nsf/e90fc5d047e64ebf85256bc80066
919c/b97a551f7e510eff85256d660078a12e?OpenDocument
The results of the COMPASS project defined the fundamentals for the product architecture.
The announcement of the first release of the IBM System Storage SAN Volume Controller
took place in July 2003.
The following releases brought new, more powerful hardware nodes, which approximately
doubled the I/O performance and throughput of their predecessors, provided new functionality,
and offered additional interoperability with new elements in host environments, disk
subsystems, and the storage area network (SAN).
Major steps in the product’s evolution were:
SVC Release 2, February 2005
SVC Release 3, October 2005
– New 8F2 node hardware (based on IBM X336, 8 GB cache, 4 x 2 Gb Fibre Channel (FC) ports)
SVC Release 4.1, May 2006
– New 8F4 node hardware (based on IBM X336, 8 GB cache, 4 x 4 Gb FC ports)
SVC Release 4.2, May 2007:
– New 8A4 entry-level node hardware (based on IBM X3250, 8 GB cache, 4 x 4 Gb FC ports)
– New 8G4 node hardware (based on IBM X3550, 8 GB cache, 4 x 4 Gb FC ports)
SVC Release 4.3, May 2008
In 2008, IBM shipped the 15,000th SVC engine. More than 5,000 SVC systems are in
operation worldwide.
With the new release of SVC that is introduced in this book, we will get a new generation of
hardware nodes. This hardware, which will approximately double the performance of its
predecessors, also provides solid-state drive (SSD) support. New software features are iSCSI
support (which will be available on all hardware nodes that support the new firmware) and
multiple SVC partnerships, which will support data replication between the members of a
group of up to four SVC clusters.
2.2 Architectural overview
The IBM System Storage SAN Volume Controller is a SAN block aggregation appliance that
is designed for attachment to a variety of host computer systems.
There are three major approaches in use today for implementing block-level aggregation:
Network-based: Appliance
The device is a SAN appliance that sits in the data path, and all I/O flows through the
device. This kind of implementation is also referred to as symmetric virtualization or
in-band. The device is both target and initiator. It is the target of I/O requests from the host
perspective and the initiator of I/O requests from the storage perspective. The redirection
is performed by issuing new I/O requests to the storage.
Switch-based: Split-path
The device is usually an intelligent SAN switch that intercepts I/O requests on the fabric
and redirects the frames to the correct storage location; the original I/O requests themselves
are redirected rather than reissued. This kind of implementation is also referred to as
asymmetric virtualization or out-of-band. The data path and the control path are separated,
and a specific (preferably highly available and disaster-tolerant) controller outside of the
switch holds the metadata and the configuration to manage the split data paths.
Controller-based
The device is a storage controller that provides an internal switch for external storage
attachment. In this approach, the storage controller intercepts and redirects I/O requests
to the external storage as it does for internal storage.
Figure 2-1 on page 10 shows the three approaches.
Figure 2-1 Overview of the block-level aggregation architectures
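To make the in-band data flow concrete, the following Python sketch shows the appliance
acting as a SCSI target toward the host and as an initiator toward the storage: each host
request is absorbed, translated through the virtualization mapping, and reissued as a
brand-new I/O to the back end. The queue-based transport, the function names, and the
trivial one-to-one mapping are simplifications that were invented for this example.

import queue

host_requests = queue.Queue()   # stands in for host I/Os arriving over the SAN

def backend_io(mdisk_id, mdisk_lba, op, data=None):
    """Placeholder for the initiator path to a back-end storage controller."""
    print("%s %s at LBA %d" % (op, mdisk_id, mdisk_lba))
    return b"\x00" * 512 if op == "read" else None

def appliance_loop(vdisk_map):
    """In-band appliance: target to the hosts, initiator to the storage."""
    while not host_requests.empty():
        req = host_requests.get()                    # absorb the host request
        mdisk_id, mdisk_lba = vdisk_map(req["lba"])  # virtualization lookup
        result = backend_io(mdisk_id, mdisk_lba, req["op"], req.get("data"))
        req["reply"](result)                         # complete the host I/O

host_requests.put({"op": "read", "lba": 40000,
                   "reply": lambda d: print("host received", len(d), "bytes")})
appliance_loop(lambda lba: ("mdisk0", lba))   # one-to-one mapping for brevity

The split-path approach differs in that the switch rewrites and forwards the original frames
instead of terminating them, and the controller-based approach performs the same
redirection inside a storage controller.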
While all of these approaches provide, in essence, the same cornerstones of virtualization,
several have interesting side effects. All three approaches can provide the required
functionality, although the implementation (especially the switch-based split I/O architecture)
can make parts of that functionality more difficult to deliver.
This challenge is especially true for FlashCopy services. Taking a point-in-time clone of a
device in a split I/O architecture means that all of the data has to be copied from the source to
the target first.
The drawback is that the target copy cannot be brought online until the entire copy has
completed, that is, minutes or hours later. Consider using this approach to implement a
sparse flash, which is a flash copy without a background copy, where the target disk is
populated only with the blocks or extents that are modified after the point in time at which
the flash copy was taken (or an incremental series of cascaded copies).
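By contrast, an in-band appliance can implement exactly such a sparse flash with
copy-on-write: a bitmap records which grains have already been preserved, the target is
usable immediately, and only the grains that are modified after the trigger consume space on
the target. The following Python sketch illustrates the technique only; the grain size and the
names are invented, and this sketch is not SVC code.

GRAIN = 256 * 1024   # bytes per grain; an assumed value for this example

class SparseFlash:
    """Copy-on-write point-in-time copy; target holds only changed grains."""
    def __init__(self, source):
        self.source = source   # bytearray standing in for the source disk
        self.saved = {}        # grain index -> preserved original contents
        self.copied = set()    # bitmap of grains preserved since the trigger

    def write_source(self, offset, data):
        g = offset // GRAIN
        if g not in self.copied:   # first write to this grain since the flash
            self.saved[g] = bytes(self.source[g * GRAIN:(g + 1) * GRAIN])
            self.copied.add(g)
        self.source[offset:offset + len(data)] = data

    def read_target(self, offset, length):
        """Read the point-in-time image (offset and length within one grain)."""
        g = offset // GRAIN
        grain = self.saved[g] if g in self.copied else \
            bytes(self.source[g * GRAIN:(g + 1) * GRAIN])
        return grain[offset % GRAIN:offset % GRAIN + length]

disk = bytearray(b"A" * (4 * GRAIN))
flash = SparseFlash(disk)         # the target is usable immediately
flash.write_source(10, b"BBBB")   # the old grain 0 is preserved first
print(flash.read_target(10, 4))   # b'AAAA': the point-in-time view
print(bytes(disk[10:14]))         # b'BBBB': the live source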
Scalability is another issue, because it is difficult to scale out to n-way clusters of intelligent
line cards. A multiway switch design is also difficult to code and implement because of the
challenge of maintaining fast metadata updates, keeping the metadata synchronized across
all processing blades; the updates must occur at wire speed, or the wire-speed claim is lost.
For the same reason, space-efficient copies and replication are also difficult to implement.
Both synchronous and asynchronous replication require a level of buffering of I/O requests.
While switches have buffering built in, the number of additional buffers that is required is
huge and grows as the link distance increases. Most of today’s intelligent line cards do not
provide anywhere near this level of local storage. The most common solution is to use an
external system to provide the replication services, which means another system to manage
and maintain, which conflicts with the concept of virtualization.
Also, remember that when you choose a split I/O architecture, your virtualization
implementation is tied to the specific switch type and hardware that you use, which makes it
hard to implement any future changes.
The controller-based approach has high functionality, but it falls short in terms of scalability
and upgradability. Because of the nature of its design, there is no true decoupling with this
approach, which becomes an issue at the end of the life cycle of the solution, that is, of the
controller. You will be challenged with data migration issues and questions, such as how to
reconnect the servers to the new controller, and how to reconnect them online without any
impact on your applications.
Be aware that you not only replace a controller in this scenario, but also, implicitly, replace
your entire virtualization solution. You not only have to replace your hardware, but you also
must update or repurchase the licenses for the virtualization feature, advanced copy
functions, and so on.
With a network-based appliance solution that is based on a scale-out cluster architecture, life
cycle management tasks, such as adding or replacing new disk subsystems or migrating data
between them, are extremely simple. Servers and applications remain online, data migration
takes place transparently on the virtualization platform, and licenses for virtualization and
copy services require no update, that is, cause no additional costs when disk subsystems
have to be replaced. Only the network-based appliance solution provides you with an
independent and scalable virtualization platform that can provide enterprise-class copy
services, is open for future interfaces and protocols, lets you choose the disk subsystems that
best fit your requirements, and does not lock you into specific SAN hardware.
For these reasons, IBM has chosen the network-based appliance approach for the
implementation of the IBM System Storage SAN Volume Controller.
The SVC has these key characteristics:
Highly scalable: Easy growth path from two to n nodes (the cluster grows in pairs of nodes)
SAN interface-independent: Currently supports FC and iSCSI, but is also open for future
enhancements, such as InfiniBand
Host-independent: For fixed block-based Open Systems environments
Storage (RAID controller)-independent: Ongoing plan to qualify additional types of
Redundant Array of Independent Disks (RAID) controllers
Able to utilize commodity RAID controllers: Also known as “low complexity RAID bricks”
Able to utilize node internal disks (solid state disks)
On the SAN storage that is provided by the disk subsystems, the SVC can offer the following
services:
The ability to create and manage a single pool of storage attached to the SAN
Block-level virtualization (logical unit virtualization)
Advanced functions to the entire SAN, such as:
– Large scalable cache
– Advanced Copy Services:
• FlashCopy (point-in-time copy)
• Metro Mirror and Global Mirror (remote copy, synchronous/asynchronous)
• Data migration
This feature list will grow in future releases. This additional layer can provide future features,
such as policy-based space management that maps your storage resources based on desired
performance characteristics, or the dynamic reallocation of entire virtual disks (VDisks), or
parts of a VDisk, according to user-definable performance policies. Extensive functionality
becomes possible as soon as the decoupling is set up properly, that is, as soon as an
additional layer is installed between the server and the storage.
You can configure SAN-based storage infrastructures using SVC with two or more SVC
nodes, which are arranged in a cluster. These nodes are attached to the SAN fabric, along
with RAID controllers and host systems. The SAN fabric is zoned to allow the SVC to “see”
the RAID controllers, and for the hosts to “see” the SVC. The hosts are not usually able to
directly “see” or operate on the RAID controllers unless a “split controller” configuration is in
use. You can use the zoning capabilities of the SAN switch to create these distinct zones. The
assumptions that are made about the SAN fabric will be limited to make it possible to support
a number of separate SAN fabrics with a minimum development effort. Anticipated SAN
fabrics include FC and iSCSI over Gigabit Ethernet; other types might follow in the future.
Figure 2-2 shows a conceptual diagram of a storage system utilizing the SVC. It shows a
number of hosts that are connected to a SAN fabric or LAN. In practical implementations that
have high availability requirements (the majority of the target clients for SVC), the SAN fabric
“cloud” represents a redundant SAN. A redundant SAN is composed of a fault-tolerant
arrangement of two or more counterpart SANs, therefore providing alternate paths for each
SAN-attached device.
Both scenarios (using a single network and using two physically separate networks) are
supported for iSCSI-based/LAN-based access networks to the SVC. Redundant paths to
VDisks can be provided for both scenarios.
Figure 2-2 SVC conceptual overview
A cluster of SVC nodes is connected to the same fabric and presents VDisks to the hosts.
These VDisks are created from MDisks that are presented by the RAID controllers. There are
two distinct zones shown in the fabric: a host zone, in which the hosts can see and address
the SVC nodes, and a storage zone, in which the SVC nodes can see and address the
MDisk/logical unit numbers (LUNs) presented by the RAID controllers. Hosts are not
permitted to operate on the RAID LUNs directly, and all data transfer happens through the
SVC nodes. This design is commonly described as symmetric virtualization. Figure 2-3
shows the SVC logical topology.
Figure 2-3 SVC topology overview
For simplicity, Figure 2-3 shows only one SAN fabric and two types of zones. In an actual
environment, we recommend using two redundant SAN fabrics. The SVC can be connected
to up to four fabrics. You set up zoning for each host, disk subsystem, and fabric. Learn about
zoning details in 3.3.2, “SAN zoning and SAN connections” on page 76.
For iSCSI-based access, using two networks and separating iSCSI traffic within the networks
by using a dedicated virtual local area network (VLAN) path for storage traffic will prevent any
IP interface, switch, or target port failure from compromising the host server’s access to the
VDisk LUNs.
2.2.1 SVC virtualization concepts
The SVC product provides block-level aggregation and volume management for disk storage
within the SAN. In simpler terms, SVC manages a number of back-end storage controllers
and maps the physical storage within those controllers into logical disk images that can be
seen by application servers and workstations in the SAN.
The SAN is zoned so that the application servers cannot see the back-end physical storage,
which prevents any possible conflict between the SVC and the application servers both trying
to manage the back-end storage. The SVC is based on the following virtualization concepts,
which are discussed more throughout this chapter.
A node is an SVC hardware unit that provides virtualization, cache, and copy services to the SAN. SVC
nodes are deployed in pairs to make up a cluster. A cluster can have between one and four
SVC node pairs in it, which is a product limit, not an architectural limit.
Each pair of SVC nodes is also referred to as an I/O Group. An SVC cluster can have
between one and four I/O Groups. A specific virtual disk, or VDisk, is always presented
to a host server by a single I/O Group of the cluster.
When a host server performs I/O to one of its VDisks, all the I/Os for a specific VDisk are
directed to one specific I/O Group in the cluster. During normal operating conditions, the I/Os
for a specific VDisk are always processed by the same node of the I/O Group. This node is
referred to as the preferred node for this specific VDisk.
Both nodes of an I/O Group act as the preferred node for its specific subset of the total
number of VDisks that the I/O Group presents to the host servers. But, both nodes also act as
failover nodes for their specific partner node in the I/O Group. A node will take over the I/O
handling from its partner node, if required.
In an SVC-based environment, the I/O handling for a VDisk can switch between the two
nodes of an I/O Group. Therefore, it is mandatory for servers that are connected through FC
to use multipath drivers to be able to handle these failover situations.
SVC 5.1 introduces iSCSI as an alternative means of attaching hosts. However, all
communications with back-end storage subsystems, and with other SVC clusters, are still
through FC. For iSCSI-attached hosts, node failover can be handled without a multipath driver
installed on the server: after a node failover, an iSCSI-attached server can simply reconnect
to the original target IP address, which is now presented by the partner node. However, to protect
the server against link failures in the network or host bus adapter (HBA) failures, a multipath
driver is mandatory.
The SVC I/O Groups are connected to the SAN so that all application servers accessing
VDisks from this I/O Group have access to this group. Up to 256 host server objects can be
defined per I/O Group; these host server objects can consume VDisks that are provided by
this specific I/O Group.
If required, host servers can be mapped to more than one I/O Group of an SVC cluster;
therefore, they can access VDisks from separate I/O Groups. You can move VDisks between
I/O Groups to redistribute the load between the I/O Groups. With the current release of SVC,
I/Os to the VDisk that is being moved have to be quiesced for the duration of the move.
The SVC cluster and its I/O Groups view the storage that is presented to the SAN by the
back-end controllers as a number of disks, known as managed disks or MDisks. Because the
SVC does not attempt to provide recovery from physical disk failures within the back-end
controllers, an MDisk is usually, but not necessarily, provisioned from a RAID array. The
application servers however do not see the MDisks at all. Instead, they see a number of
logical disks, which are known as virtual disks or VDisks, which are presented by the SVC I/O
Groups through the SAN (FC) or LAN (iSCSI) to the servers. A VDisk is storage that is
provisioned out of one Managed Disk Group (MDG), or if it is a mirrored VDisk, out of two
MDGs.
An MDG is a collection of up to 128 MDisks, which creates the storage pools out of which
VDisks are provisioned. A single cluster can manage up to 128 MDGs. The size of these
pools can be changed (expanded or shrunk) at run time without taking the MDG or the VDisks
that are provided by it offline. At any point in time, an MDisk can only be a member in one
MDG with one exception (image mode VDisk), which will be explained later in this chapter.
MDisks that are used in a specific MDG must have the following characteristics:
They must have the same hardware characteristics, for example, the same RAID type,
RAID array size, disk type, and disk revolutions per minute (RPMs). Be aware that it is
always the weakest element (MDisk) in a chain of elements that defines the maximum
strength of that chain (MDG).
The disk subsystems providing the MDisks must have similar characteristics, for example,
maximum input/output operations per second (IOPS), response time, cache, and
throughput.
We recommend that you use MDisks of the same size and MDisks that provide the same
number of extents, which you need to remember when adding MDisks to an existing MDG.
If that is not feasible, check the distribution of the VDisks’ extents in that MDG.
For further details, refer to SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521, at this Web site:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
VDisks can be mapped to a host to allow access for a specific server to a set of VDisks. A
host within the SVC is a collection of HBA worldwide port names (WWPNs) or iSCSI qualified
names (IQNs), defined on the specific server. Note that iSCSI names are internally identified
by “fake” WWPNs, or WWPNs that are generated by the SVC. VDisks might be mapped to
multiple hosts, for example, a VDisk that is accessed by multiple hosts of a server cluster.
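As a hedged illustration, defining a host object and mapping a VDisk to it might look as follows from the CLI (the host name, WWPN, and VDisk name are hypothetical; Chapter 7 describes the full syntax):
svctask mkhost -name ITSO_HOST1 -hbawwpn 210000E08B054CAA
svctask mkvdiskhostmap -host ITSO_HOST1 VDISK_001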
Figure 2-4 shows the relationships of these entities to each other.
Figure 2-4 SVC I/O Group overview
An MDisk can be provided by a SAN disk subsystem or by the solid state drives that are
provided by the SVC nodes themselves. Each MDisk is divided into a number of extents. The
extent size is selected by the user when the MDG is created and ranges from 16 MB (default)
up to 2 GB.
We recommend that you use the same extent size for all MDGs in a cluster, which is a
prerequisite for supporting VDisk migration between two MDGs. If the extent size does not fit,
you must use VDisk Mirroring (see 2.2.7, “Mirrored VDisk” on page 21) as a workaround. For
copying (not migrating) the data into another MDG to a new VDisk, you can use SVC
Advanced Copy Services.
Figure 2-5 shows the two most popular ways to provision VDisks out of an MDG. Striped
mode is the recommended method for most cases. Sequential extent allocation mode might
slightly increase the sequential performance for certain workloads.
Figure 2-5 MDG overview
You can allocate the extents for a VDisk in many ways. The process is under full user control
at VDisk creation time and can be changed at any time by migrating single extents of a VDisk
to another MDisk within the MDG. You can obtain details of how to create VDisks and migrate
extents via GUI or CLI in Chapter 7, “SAN Volume Controller operations using the
command-line interface” on page 339, Chapter 8, “SAN Volume Controller operations using
the GUI” on page 469, and Chapter 9, “Data migration” on page 675.
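As a hedged sketch, creating a VDisk with each of the two allocation modes from the CLI might look as follows (the MDG name, VDisk names, MDisk ID, and sizes are hypothetical):
svctask mkvdisk -mdiskgrp MDG_DS4K -iogrp 0 -size 50 -unit gb -vtype striped -name VDISK_STR01
svctask mkvdisk -mdiskgrp MDG_DS4K -iogrp 0 -size 50 -unit gb -vtype seq -mdisk mdisk4 -name VDISK_SEQ01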
SVC limits the number of extents in a cluster. The number is currently 2^22 (approximately
4 million) extents, and this number might change in future releases. Because the number of
addressable extents is limited, the total capacity of an SVC cluster depends on the extent size
that is chosen by the user. The capacity numbers that are specified in Table 2-1 for an SVC
cluster assume that all defined MDGs have been created with the same extent size.
Table 2-1 Extent size to addressability matrix

Extent size    Maximum cluster capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1024 MB        4 PB
2048 MB        8 PB
For most clusters, a capacity of 1 - 2 PB is sufficient. We therefore recommend that you use
256 MB or, for larger clusters, 512 MB as the standard extent size.
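For illustration, a hedged sketch of creating an MDG with the recommended 256 MB extent size from the CLI (the MDG name and MDisk IDs are hypothetical):
svctask mkmdiskgrp -name MDG_DS4K -ext 256 -mdisk mdisk0:mdisk1:mdisk2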
2.2.2 MDisk overview
The maximum size of an MDisk is 2 TB. An SVC cluster supports up to 4,096 MDisks. At any
point in time, an MDisk is in one of the following three modes:
Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any MDG. An unmanaged
MDisk is not associated with any VDisks and has no metadata stored on it. SVC does not
write to an MDisk that is in unmanaged mode, except when it attempts to change the
mode of the MDisk to one of the other modes. SVC can see the resource, but the resource
is not assigned to a pool, that is, an MDG.
Managed MDisk
Managed mode MDisks are always members of an MDG and contribute extents to the
pool of extents available in the MDG. Zero or more VDisks (if not operated in image mode,
which we discuss next) can use these extents. MDisks operating in managed mode might
have metadata extents allocated from them and can be used as quorum disks.
Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the VDisk by
using virtualization. This mode is provided to satisfy three major usage scenarios:
– Image mode allows virtualization of MDisks that already contain data that was written
directly, not through an SVC. It allows a client to insert the SVC into the data path of an
existing storage configuration with minimal downtime. Chapter 9, “Data migration” on
page 675 provides details of the data migration process.
– Image mode allows a VDisk that is managed by the SVC to be used with the copy
services that are provided by the underlying RAID controller. In order to avoid the loss
of data integrity when the SVC is used in this way, it is important that you disable the
SVC cache for the VDisk.
– SVC provides the ability to migrate to image mode, which allows data to be exported
from the SVC so that a server can access it directly, without the SVC in the data path.
An image mode MDisk is associated with exactly one VDisk. The last extent is partial if the
size of the (image mode) MDisk is not an exact multiple of the MDisk Group’s extent size (see Figure 2-6 on
page 18). An image mode VDisk is a pass-through one-to-one map of its MDisk. It cannot
be a quorum disk and will not have any SVC metadata extents allocated on it. Managed or
image mode MDisks are always members of an MDG.
Figure 2-6 Image mode MDisk overview
It is a best practice, if you work with image mode MDisks, to put them in a dedicated MDG
and to give it a distinctive name (for example, MDG_IMG_xxx). Also, remember that the extent
size chosen for this specific MDG has to be the same as the extent size into which you plan
to migrate the data. All SVC copy services can be applied to image mode disks.
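As a hedged sketch, taking an existing LUN under SVC control as an image mode VDisk in a dedicated MDG might look as follows (the names and the MDisk ID are hypothetical; the VDisk size defaults to the size of the MDisk):
svctask mkmdiskgrp -name MDG_IMG_DS4K -ext 256
svctask mkvdisk -mdiskgrp MDG_IMG_DS4K -iogrp 0 -vtype image -mdisk mdisk7 -name VDISK_IMG01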
2.2.3 VDisk overview
The maximum size of a VDisk is 256 TB. An SVC cluster supports up to 4,096 VDisks.
VDisks support the following services:
You can create and delete a VDisk.
You can change the size of a VDisk (expand or shrink).
VDisks can be migrated (full or partially) at run time to another MDisk or a storage pool
(MDG).
VDisks can be created as fully allocated or Space-Efficient VDisks. A conversion from a
fully allocated to a Space-Efficient VDisk and vice versa can be done at run time.
VDisks can be mirrored across two MDGs to make them resistant to disk subsystem
failures or to improve the read performance.
VDisks can be mirrored synchronously for distances up to 100 km or asynchronously for
longer distances. An SVC cluster can run active data mirrors to a maximum of three other
SVC clusters.
You can use FlashCopy on VDisks. Multiple snapshots and quick restore from snapshots
(reverse flash copy) are supported.
VDisks have two modes: image mode and managed mode. The following state diagram in
Figure 2-7 on page 19 shows the state transitions.
Figure 2-7 VDisk state transitions (a VDisk can be created in, and deleted from, managed mode or image mode; a migration to image mode passes through an intermediate “managed mode migrating” state until the migration completes)
Managed mode VDisks have two policies: the sequential policy and the striped policy. Policies
define how the extents of a VDisk are carved out of an MDG.
2.2.4 Image mode VDisk
Image mode provides a one-to-one mapping between the logical block addresses (LBAs)
of a VDisk and the LBAs of its MDisk. Image mode VDisks have a minimum size of one block (512
bytes) and always occupy at least one extent. An image mode MDisk is mapped to one and
only one image mode VDisk. The VDisk capacity that is specified must be less than or equal
to the size of the image mode MDisk. When you create an image mode VDisk, the specified
MDisk must be in “unmanaged” mode and must not be a member of an MDG. The MDisk is
made a member of the specified MDG (MDG_IMG_xxx) as a result of the creation of the
image mode VDisk. The SVC also supports the reverse process, in which a managed mode
VDisk can be migrated to an image mode VDisk. If a VDisk is migrated to another MDisk, it is
represented as being in managed mode during the migration and is only represented as an
image mode VDisk after it has reached the state where it is a straight-through mapping.
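For illustration, a hedged sketch of migrating a managed mode VDisk to image mode from the CLI (the VDisk name, target MDisk, and MDG are hypothetical):
svctask migratetoimage -vdisk VDISK_001 -mdisk mdisk9 -mdiskgrp MDG_IMG_DS4K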
2.2.5 Managed mode VDisk
VDisks operating in managed mode provide a full set of virtualization functions. Within an
MDG, SVC supports an arbitrary relationship between extents on (managed mode) VDisks
and extents on MDisks. Subject to the constraint that each MDisk extent is contained in, at
most, one VDisk, each VDisk extent maps to exactly one MDisk extent.
Figure 2-8 on page 20 represents this diagrammatically. It shows VDisk V, which is made up
of a number of extents. Each of these extents is mapped to an extent on one of the MDisks: A,
B, or C. The mapping table stores the details of this indirection. Note that several of the MDisk
extents are unused; no VDisk extent maps to them. These unused extents are
available for use in creating new VDisks, migration, expansion, and so on.
Figure 2-8 Simple view of block virtualization
A managed mode VDisk can have a size of zero blocks, in which case, it occupies zero
extents. This type of a VDisk cannot be mapped to a host or take part in any Advanced Copy
Services functions.
The allocation of a specific number of extents from a specific set of MDisks is performed by
the following algorithm: If the set of MDisks from which to allocate extents contains more than
one disk, extents are allocated from MDisks in a round-robin fashion. If an MDisk has no free
extents when its turn arrives, its turn is missed and the round-robin moves to the next MDisk
in the set that has a free extent.
Beginning with SVC 5.1, when creating a new VDisk, the first MDisk from which to allocate an
extent is chosen in a pseudo random way rather than simply choosing the next disk in a
round-robin fashion. The pseudo random algorithm avoids the situation whereby the “striping
effect” inherent in a round-robin algorithm places the first extent for a large number of VDisks
on the same MDisk. Placing the first extent of a number of VDisks on the same MDisk might
lead to poor performance for workloads that place a large I/O load on the first extent of each
VDisk or that create multiple sequential streams.
2.2.6 Cache mode and cache-disabled VDisks
Prior to SVC V3.1, enabling any copy services function in a RAID array controller for a LUN
that was being virtualized by SVC was not supported, because the behavior of the write-back
cache in the SVC led to data corruption. With the advent of cache-disabled VDisks, it
becomes possible to enable copy services in the underlying RAID array controller for LUNs
that are virtualized by the SVC.
Wherever possible, we recommend using SVC copy services in preference to the underlying
controller copy services.
2.2.7 Mirrored VDisk
Starting with SVC 4.3, the mirrored VDisk feature provides a simple RAID-1 function, which
allows a VDisk to remain accessible even when an MDisk on which it depends has become
inaccessible.
This function is achieved using two copies of the VDisk, which are typically allocated from
separate MDGs or using image-mode copies. The VDisk is the entity that participates in
FlashCopy and a Remote Copy relationship, is served by an I/O Group, and has a preferred
node. With VDisk Mirroring, it is the individual copy that carries the virtualization attributes,
such as the MDG and the allocation policy (striped, sequential, or image).
A copy is not a separate object and cannot be created or manipulated except in the context of
the VDisk. Copies are identified via the configuration interface by a copy ID within their parent
VDisk. This copy ID can be either 0 or 1; depending on the configuration history, a single
remaining copy can have an ID of either 0 or 1.
The feature does provide a “point-in-time” copy functionality that is achieved by “splitting” a
copy from the VDisk. The feature does not address other forms of mirroring based on Remote
Copy (sometimes called “Hyperswap”), which mirrors VDisks across I/O Groups or clusters,
nor is it intended to manage mirroring or remote copy functions in back-end controllers.
Figure 2-9 gives an overview of VDisk Mirroring.
Figure 2-9 VDisk Mirroring overview
A copy can be added to a VDisk with only one copy or removed from a VDisk with two copies.
Checks prevent the accidental removal of the sole copy of a VDisk. A newly created,
unformatted VDisk with two copies initially has its copies out of synchronization. The
primary copy is defined as “fresh” and the secondary copy as “stale”. The
synchronization process updates the secondary copy until it is synchronized; this
synchronization takes place at the default “synchronization rate” or at a rate defined when
creating or subsequently modifying the VDisk.
If a two-copy mirrored VDisk is created with the format parameter, both copies are formatted
in parallel and the VDisk comes online when both operations are complete with the copies in
sync.
If mirrored VDisks get expanded or shrunk, all of their copies also get expanded or shrunk.
If it is known that MDisk space, which will be used for creating copies, is already formatted, or
if the user does not require read stability, a “no synchronization” option can be selected which
declares the copies as “synchronized” (even when they are not).
The time for a copy, which has become unsynchronized, to resynchronize is minimized by
copying only those 256 KB grains that have been written to since synchronization was lost.
This approach is known as an “incremental synchronization”. Only those changed grains
need be copied to restore synchronization.
Important: An unmirrored VDisk can be migrated from a source to a destination by adding
a copy at the desired destination, waiting for the two copies to synchronize, and then
removing the original copy. This operation can be stopped at any time. The two copies can
be in separate MDGs with separate extent sizes.
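A hedged CLI sketch of this migration technique follows (the VDisk name and target MDG are hypothetical, and copy 0 is assumed to be the original copy). Wait for lsvdisksyncprogress to report that the new copy is fully synchronized before removing the original copy:
svctask addvdiskcopy -mdiskgrp MDG_TARGET VDISK_001
svcinfo lsvdisksyncprogress VDISK_001
svctask rmvdiskcopy -copy 0 VDISK_001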
Where there are two copies of a VDisk, one copy is known as the primary copy. If the primary
is available and synchronized, reads from the VDisk are directed to it. The user can select the
primary when creating the VDisk or can change it later. Selecting the copy allocated on the
higher-performance controller will maximize the read performance of the VDisk. The write
performance will be constrained by the lower-performance controller, because writes must
complete to both copies before the VDisk is considered to have been successfully written.
Remember that writes to both copies must complete to be considered successfully written
when VDisk Mirroring creates one copy in a solid-state drive MDG and the second copy in an
MDG populated with resources from a disk subsystem.
Note: SVC does not prevent you from creating the two copies in one or more solid-state
drive MDGs of the same node, although doing so means that you lose redundancy and
might therefore lose access to your VDisk if the node fails or restarts.
A VDisk with copies can be checked to see whether all of the copies are identical. If a medium
error is encountered while reading from any copy, it will be repaired using data from another
fresh copy. This process can be asynchronous but will give up if the copy with the error goes
offline.
Mirrored VDisks consume bitmap space at a rate of 1 bit per 256 KB grain, which translates to
1 MB of bitmap space supporting 2 TB-worth of mirrored VDisk. The default allocation of
bitmap space is 20 MB, which supports 40 TB of mirrored VDisk. If all 512 MB of variable
bitmap space is allocated to mirrored VDisks, 1 PB of mirrored VDisks can be supported.
The advent of the mirrored VDisk feature will inevitably lead clients to think about two-site
solutions for cluster and VDisk availability.
Generally, the advice is not to split a cluster, that is, its I/O Groups, across sites. But
there are certain configurations that are effective. Be careful to prevent a situation
that is referred to as a “split brain” scenario (caused, for example, by a power outage on the
SAN switches; the SVC nodes are protected by their own uninterruptible power supply units).
In this scenario, the connectivity between components is lost and a contest for the SVC
cluster quorum disk occurs. Which set of nodes wins is effectively arbitrary. If the set of nodes
that won the quorum disk then experiences a permanent power loss, the cluster is lost. The
way to prevent this split brain scenario is to use a configuration that provides effective
redundancy through the exact placement of system components in “fault domains”. You
can obtain the details of this configuration and the required prerequisites in Chapter 3,
“Planning and configuration” on page 65.
2.2.8 Space-Efficient VDisks
Starting with SVC 4.3, VDisks can be configured to either be “Space-Efficient” or “Fully
Allocated”. A Space-Efficient VDisk (SE VDisk) behaves, with respect to application reads
and writes, as though it were fully allocated, including meeting the requirements of Read Stability
and Write Atomicity. When an SE VDisk is created, the user will specify two capacities: the
real capacity of the VDisk and its virtual capacity.
The real capacity will determine the quantity of MDisk extents that will be allocated for the
VDisk. The virtual capacity will be the capacity of the VDisk reported to other SVC
components (for example, FlashCopy, Cache, and Remote Copy) and to the host servers.
The real capacity will be used to store both the user data and the metadata for the SE VDisk.
The real capacity can be specified as an absolute value or a percentage of the virtual
capacity.
The Space-Efficient VDisk feature can be used on its own to create over-allocated or
late-allocation VDisks, or it can be used in conjunction with FlashCopy to implement
Space-Efficient FlashCopy. SE VDisks can also be used in conjunction with the mirrored VDisk
feature, which we refer to as Space-Efficient Copies of VDisks.
When an SE VDisk is initially created, a small amount of the real capacity will be used for
initial metadata. Write I/Os to grains of the SE VDisk that have not previously been written to
will cause grains of the real capacity to be used to store metadata and user data. Write I/Os to
grains that have previously been written to will update the grain where data was previously
written. The grain is defined when the VDisk is created and can be 32 KB, 64 KB, 128 KB, or
256 KB.
Figure 2-10 on page 24 provides an overview.
Figure 2-10 Overview SE VDisk
SE VDisks store both user data and metadata. Each grain requires metadata. The overhead
will never be greater than 0.1% of the user data. The overhead is independent of the virtual
capacity of the SE VDisk. If you are using SE VDisks in a FlashCopy map, use the same grain
size as the map grain size for the best performance. If you are using the Space-Efficient
VDisk directly with a host system, use a small grain size.
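As a hedged illustration, an SE VDisk with 10% initially allocated real capacity, autoexpansion, a 32 KB grain, and a warning threshold might be created as follows (the MDG name, VDisk name, and sizes are hypothetical):
svctask mkvdisk -mdiskgrp MDG_DS4K -iogrp 0 -size 100 -unit gb -rsize 10% -autoexpand -grainsize 32 -warning 80% -name VDISK_SE01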
SE VDisk format: SE VDisks do not need formatting. A read I/O, which requests data from
unallocated data space, will return zeroes. When a write I/O causes space to be allocated,
the grain will be zeroed prior to use. Consequently, an SE VDisk will always be formatted
regardless of whether the format flag is specified when the VDisk is created. The
formatting flag will be ignored when an SE VDisk is created or when the real capacity is
expanded; the virtualization component will never format the real capacity for an SE VDisk.
The real capacity of an SE VDisk can be changed provided that the VDisk is not in image
mode. Increasing the real capacity allows a larger amount of data and metadata to be stored
on the VDisk. SE VDisks use the real capacity of a VDisk in ascending order as new data is
written to the VDisk. Consequently, if the user initially assigns too much real capacity to an SE
VDisk, the real capacity can be reduced to free up storage for other uses. It is not possible to
reduce the real capacity of an SE VDisk to be less than the capacity that is currently in use
other than by deleting the VDisk.
An SE VDisk can be configured to autoexpand, which causes SVC to automatically expand
the real capacity of an SE VDisk as its real capacity is used. Autoexpand attempts to maintain
a fixed amount of unused real capacity on the VDisk. This amount is known as the
“contingency capacity”.
The contingency capacity is initially set to the real capacity that is assigned when the VDisk is
created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.
A VDisk that is created with a zero contingency capacity goes offline as soon as it needs to
expand, whereas a VDisk with a non-zero contingency capacity stays online until the
contingency capacity has been used up.
Autoexpand will not cause space to be assigned to the VDisk that can never be used.
Autoexpand will not cause the real capacity to grow much beyond the virtual capacity. The
real capacity can be manually expanded to more than the maximum that is required by the
current virtual capacity, and the contingency capacity will be recalculated.
To support the autoexpansion of SE VDisks, the MDGs from which they are allocated have a
configurable warning capacity. When the used capacity of the group exceeds the warning
capacity, a warning is logged. To allow for capacity used by quorum disks and partial extents
of image mode VDisks, the calculation is based on the free capacity. For example, if a warning
level of 80% has been specified, the warning is logged when only 20% of the free capacity remains.
SE VDisks: SE VDisks require additional I/O operations to read and write metadata to
back-end storage, and they generate additional load on the SVC nodes. We therefore do not
recommend the use of SE VDisks for high-performance applications.
An SE VDisk can be converted to a fully allocated VDisk using VDisk Mirroring.
SVC 5.1.0 introduces the ability to convert a fully allocated VDisk to an SE VDisk, by using
the following procedure:
1. Start with a VDisk that has one fully allocated copy.
2. Add a Space-Efficient copy to the VDisk.
3. Allow VDisk Mirroring to synchronize the copies.
4. Remove the fully allocated copy.
This procedure uses a zero-detection algorithm. Note that as of 5.1.0, this algorithm is used
only for I/O that is generated by the synchronization of mirrored VDisks; I/O from other
components (for example, FlashCopy) is written using normal procedures.
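A hedged CLI sketch of this conversion procedure (the VDisk name and MDG are hypothetical, and copy 0 is assumed to be the fully allocated copy):
svctask addvdiskcopy -mdiskgrp MDG_DS4K -rsize 10% -autoexpand VDISK_001
svcinfo lsvdisksyncprogress VDISK_001
svctask rmvdiskcopy -copy 0 VDISK_001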
Note: Consider SE VDisks as targets in FlashCopy relationships. Using them as a target
in Metro Mirror or Global Mirror relationships makes no sense, because during the initial
synchronization, the target will become fully allocated.
2.2.9 VDisk I/O governing
It is possible to govern I/O operations so that a host is limited in the amount of I/O
that it can perform to a VDisk in a period of time. You can use this governing to satisfy a
quality of service constraint or a contractual obligation (for example, a customer agrees to
pay for I/Os performed, but will not pay for I/Os beyond a certain rate). Only commands that
access the medium (Read (6/10), Write (6/10), or Write and Verify) are subject to I/O
governing.
I/O governing: I/O governing is applied to remote copy secondaries, as well as primaries.
If an I/O governing rate has been set on a VDisk, which is a remote copy secondary, this
governing rate will also be applied to the primary. If governing is in use on both the primary
and the secondary VDisks, each governed quantity will be limited to the lower of the two
specified values. Governing has no effect on FlashCopy or data migration I/O.
An I/O budget is expressed as a number of I/Os, or a number of MBs, over a minute. The
budget is evenly divided between all SVC nodes that service that VDisk, that is, between the
nodes that form the I/O Group of which that VDisk is a member.
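As a hedged illustration, a governing rate might be set on a VDisk from the CLI as follows (the VDisk name and rates are hypothetical; -rate specifies I/Os per second by default, or MBps when -unitmb is specified):
svctask chvdisk -rate 2000 VDISK_001
svctask chvdisk -rate 40 -unitmb VDISK_001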
The algorithm operates two levels of policing. While a VDisk on each SVC node has been
receiving I/O at a rate lower than the governed level, no governing is performed. A check is
made every minute that the VDisk on each node is continuing to receive I/O at a rate lower
than the threshold level. Where this check shows that the host has exceeded its limit on one
or more nodes, policing begins for new I/Os.
The following conditions exist while policing is in force:
A budget allowance is calculated for a 1 second period.
I/Os are counted over a period of a second.
If I/Os are received in excess of the one second budget on any node in the I/O Group,
those I/Os and later I/Os are pended.
When the second expires, a new budget is established, and any pended I/Os are redriven
under the new budget.
This algorithm might cause I/O to backlog in the front end, which might eventually cause
“Queue Full Condition” to be reported to hosts that continue to flood the system with I/O. If a
host stays within its 1 second budget on all nodes in the I/O Group for a period of 1 minute,
the policing is relaxed, and monitoring takes place over the 1 minute period as before.
2.2.10 iSCSI overview
SVC 4.3.1 and earlier support Fibre Channel (FC) as the sole transport protocol for
communicating with hosts, storage, and other SVC clusters. SVC 5.1.0 introduces iSCSI as
an alternative means of attaching hosts. However, all communications with back-end storage
subsystems, and with other SVC clusters, still occur via FC.
New iSCSI feature: The new iSCSI feature is a software feature that is provided by the
new SVC 5.1 code. This feature will be available on any SVC hardware node that supports
SVC 5.1 code. It is not restricted to the new 2145-CF8 nodes.
In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP
network, based on IP routers and Ethernet switches. iSCSI is a block-level protocol that
encapsulates SCSI commands into TCP/IP packets and thereby leverages an existing IP
network, instead of requiring expensive FC HBAs and a SAN fabric infrastructure.
A pure SCSI architecture is based on the client/server model. A client (for example, server or
workstation) initiates read or write requests for data from a target server (for example, a data
storage system). Commands, which are sent by the client and processed by the server, are
put into the Command Descriptor Block (CDB). The server executes a command, and
completion is indicated by a special signal alert.
The major functions of iSCSI include encapsulation and the reliable delivery of CDB
transactions between initiators and targets through the TCP/IP network, especially over a
potentially unreliable IP network.
The concepts of names and addresses have been carefully separated in iSCSI:
An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An
iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms
“initiator name” and “target name” also refer to an iSCSI name.
An iSCSI Address specifies not only the iSCSI name of an iSCSI node, but also a location
of that node. The address consists of a host name or IP address, a TCP port number (for
the target), and the iSCSI name of the node. An iSCSI node can have any number of
addresses, which can change at any time, particularly if they are assigned by way of
Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node
and provides statically allocated IP addresses.
Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN),
which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted
for Internet nodes.
The iSCSI qualified name format is defined in RFC3720 and contains (in order) these
elements:
The string “iqn”.
A date code specifying the year and month in which the organization registered the
domain or sub-domain name used as the naming authority string.
The organizational naming authority string, which consists of a valid, reversed domain or a
subdomain name.
Optionally, a colon (:), followed by a string of the assigning organization’s choosing, which
must make each assigned iSCSI name unique.
For SVC, the IQN for its iSCSI target is specified as:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
On a Windows server, the IQN, that is, the name for the iSCSI Initiator, can be defined as:
iqn.1991-05.com.microsoft:<computer name>
You can abbreviate IQNs by a descriptive name, known as an alias. An alias can be assigned
to an initiator or a target. The alias is independent of the name and does not have to be
unique. Because it is not unique, the alias must be used in a purely informational way. It
cannot be used to specify a target at login or used during authentication. Both targets and
initiators can have aliases.
An iSCSI name provides the correct identification of an iSCSI device irrespective of its
physical location. Remember, the IQN is an identifier, not an address.
Be careful: Before changing cluster or node names for an SVC cluster that has servers
connected to it by way of iSCSI, be aware that because the cluster and node names are part
of the SVC’s IQN, you can lose access to your data by changing these names. The SVC
GUI displays a specific warning; the CLI does not.
The iSCSI session, which consists of a login phase and a full feature phase, is completed with
a special command.
The login phase of iSCSI is identical to the FC port login process (PLOGI). It is used to
adjust various parameters between two network entities and to confirm the access rights of
an initiator.
If the iSCSI login phase is completed successfully, the target confirms the login for the
initiator; otherwise, the login is not confirmed and the TCP connection breaks.
As soon as the login is confirmed, the iSCSI session enters the full feature phase. If more
than one TCP connection was established, iSCSI requires that each command/response pair
goes through one TCP connection. Thus, each separate read or write command is
carried out over a single connection, without the need to trace each request across separate flows. However,
separate transactions can be delivered through separate TCP connections within one
session.
Figure 2-11 shows an overview of the various block-level storage protocols and where the
iSCSI layer is positioned.
Figure 2-11 Overview of block-level protocol stacks
2.2.11 Usage of IP addresses and Ethernet ports
The addition of iSCSI changes the manner in which you configure Ethernet access to an SVC
cluster. The SVC 5.1 releases of the GUI and the command-line interface (CLI) show these
changes.
The existing SVC node hardware has two Ethernet ports. Until now, only one Ethernet port
has been used for cluster configuration. With the introduction of iSCSI, you can now use a
second port. The configuration details of the two Ethernet ports can be displayed by the GUI
or CLI, but they will also be displayed on the node’s panel.
There are now two kinds of IP addresses:
A cluster management IP address is used for access to the SVC CLI, as well as to the
Common Information Model Object Manager (CIMOM) that runs on the SVC configuration
node. As before, only a single configuration node presents a cluster management IP
address at any one time, and failover of the configuration node is unchanged. However,
there can now be two cluster management IP addresses, one for each of the two Ethernet
ports.
A port IP address is used to perform iSCSI I/O to the cluster. Each node can have a port IP
address for each of its ports.
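As a hedged example, assigning a port IP address for iSCSI I/O to Ethernet port 2 of node 1 might look as follows (the addresses are hypothetical; Chapter 7 and Chapter 8 show the configuration in detail):
svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 2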
In the case of an upgrade to the SVC 5.1 code, the original cluster IP address will be retained
and will always be found on the eth0 interface on the configuration node. A second, new
cluster IP address can be optionally configured in SVC 5.1. This second cluster IP address
will always be on the eth1 interface on the configuration node. When the configuration node
fails, both configuration IP addresses will move to the new configuration node.
Figure 2-12 shows an overview of the new IP addresses on an SVC node port and the rules
regarding how these IP addresses are moved between the nodes of an I/O Group.
The management IP addresses and the iSCSI target IP addresses will fail over to the partner
node N2 if node N1 restarts (and vice versa). The iSCSI target IP addresses will fail back to their
corresponding ports on node N1 when node N1 is up and running again.
Figure 2-12 SVC 5.1 IP address overview
In an SVC cluster running 5.1 code, an eight node cluster with full iSCSI coverage (maximum
configuration) therefore has the following number of IP addresses:
Two IPV4 configuration addresses (one configuration address is always associated with
the eth0:0 alias for the eth0 interface of the configuration node, and the other configuration
address goes with eth1:0).
One IPV4 service mode fixed address (although many DHCP addresses can also be
used). This address is always associated with the eth0:0 alias for the eth0 interface of the
configuration node.
Two IPV6 configuration addresses (one address is always associated with the eth0:0 alias
for the eth0 interface of the configuration node, and the other address goes with eth1:0).
One IPV6 service mode fixed address (although many DHCP addresses can also be
used). This address is always associated with the eth0:0 alias for the eth0 interface of the
configuration node.
Sixteen IPV4 addresses used for iSCSI access, two per node (these addresses are
associated with the eth0:1 or eth1:1 alias for the eth0 or eth1 interface on each node).
Sixteen IPV6 addresses used for iSCSI access, two per node (these addresses are
associated with the eth0 and eth1 interfaces on each node).
We show the configuration of the SVC ports in great detail in Chapter 7, “SAN Volume
Controller operations using the command-line interface” on page 339 and in Chapter 8, “SAN
Volume Controller operations using the GUI” on page 469.
2.2.12 iSCSI VDisk discovery
The iSCSI target implementation on the SVC nodes makes use of the off-load
features that are provided by the node’s hardware. This implementation results in minimal
impact on the node’s CPU load for handling iSCSI traffic and simultaneously delivers
excellent throughput (up to 95 MBps user data) on each of the two 1 Gbps LAN ports. The
plan is to support jumbo frames (maximum transmission unit (MTU) sizes greater than 1,500
bytes) in future SVC releases.
Hosts can discover VDisks through one of the following three mechanisms:
Internet Storage Name Service (iSNS): SVC can register itself with an iSNS name server;
you set the IP address of this server by using the svctask chcluster command (a hedged
sketch follows this list). A host can then query the iSNS server for available iSCSI targets.
Service Location Protocol (SLP): The SVC node runs an SLP daemon, which responds to
host requests. This daemon reports the available services on the node, such as the
CIMOM service that runs on the configuration node; the iSCSI I/O service can now also be
reported.
iSCSI Send Target request: The host can also send a Send Target request using the iSCSI
protocol to the iSCSI TCP/IP port (port 3260).
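As a hedged sketch of the iSNS registration mentioned in the first item, and assuming that the 5.1 CLI accepts an -isnsip parameter (the server address is hypothetical):
svctask chcluster -isnsip 10.10.10.50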
2.2.13 iSCSI authentication
Authentication of the host server toward the SVC cluster is optional and is disabled by default.
The user can choose to enable Challenge Handshake Authentication Protocol (CHAP)
authentication, which involves sharing a CHAP secret between the SVC cluster and the host.
After the successful completion of the link establishment phase, the SVC as authenticator
sends a challenge message to the specific server (peer). The server responds with a value
that is calculated by using a one-way hash function on the index/secret/challenge, such as an
MD5 checksum hash.
The response is checked by the SVC against its own calculation of the expected hash value.
If there is a match, the SVC acknowledges the authentication. If not, the SVC will terminate
the connection and will not allow any I/O to VDisks. At random intervals, the SVC might send
new challenges to the peer to recheck the authentication.
You can assign a CHAP secret to each SVC host object. The host must then use CHAP
authentication in order to begin a communications session with a node in the cluster. You can
also assign a CHAP secret to the cluster if two-way authentication is required. While creating
an iSCSI host within an SVC cluster, you will get the initiator’s IQN, for example, for a
Windows server:
iqn.1991-05.com.microsoft:ITSO_W2008
In addition, you must specify an (optional) CHAP secret.
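As a hedged sketch, defining an iSCSI host and assigning it a CHAP secret might look as follows (the host name, IQN, and secrets are hypothetical; the -chapsecret and -iscsiauthmethod parameters are given under the assumption that the 5.1 mkhost, chhost, and chcluster commands accept them):
svctask mkhost -name ITSO_W2008 -iscsiname iqn.1991-05.com.microsoft:ITSO_W2008
svctask chhost -chapsecret mysecret01 ITSO_W2008
For two-way authentication, a cluster-wide secret can also be set, for example:
svctask chcluster -iscsiauthmethod chap -chapsecret clustersecret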
You add a VDisk to a host, or perform LUN masking, in the same way that you connect hosts
by way of FC to the SVC.
Because you can use iSCSI in networks where data can be accessed illegally, the
specification allows separate security methods. You can set up security, for example, via a
method, such as IPSec, which is transparent for higher levels, such as iSCSI, because it is
implemented at the IP level. You can obtain details about securing iSCSI in RFC3723,
Securing Block Storage Protocols over IP, which is available at this Web site:
http://tools.ietf.org/html/rfc3723
2.2.14 iSCSI multipathing
A multipathing driver means that the host can send commands down multiple paths to the
same VDisk on the SVC. A fundamental multipathing difference exists between FC and iSCSI
environments.
If an FC-attached host sees its FC target and VDisks go offline, for example, due to a problem
in the target node, its ports, or the network, the host has to use a separate SAN path to
continue I/O. A multipathing driver is therefore always required on the host.
iSCSI-attached hosts see a pause in I/O when a (target) node is reset, but (and this behavior
is the key difference) the host is reconnected to the same IP target, which reappears after a
short period of time, and its VDisks continue to be available for I/O.
Be aware: With the iSCSI implementation in SVC, an IP address failover/failback between
partner nodes of an I/O Group will only take place in cases of a planned or unplanned node
restart. In the case of a problem in the network link (switches, ports, or links), no such
failover takes place.
A host multipathing driver for iSCSI is required if you want these capabilities:
To protect a server from network link failures
To protect a server from network failures, if the server is connected via two HBAs to two
separate networks
To protect a server from a server HBA failure (if two HBAs are in use)
To provide load balancing on the server’s HBA and the network links
2.2.15 Advanced Copy Services overview
The SVC supports the following copy services:
Synchronous remote copy
Asynchronous remote copy
FlashCopy with a full target
Block virtualization and data migration
Copy services are implemented between VDisks within a single SVC or multiple SVC
clusters. They are therefore independent of the functionalities of the underlying disk
subsystems that are used to provide storage resources to an SVC cluster.
Synchronous/Asynchronous remote copy
The general application of remote copy seeks to maintain two copies of a data set. Often the
two copies will be separated by distance, but not necessarily.
The remote copy can be maintained in one of two modes: synchronous or asynchronous. The
definition of an asynchronous remote copy needs to be supplemented by describing the
maximum degree of asynchronicity.
With the SVC, Metro Mirror and Global Mirror are the IBM branded terms for the functions that
are synchronous remote copy and asynchronous remote copy.
Synchronous remote copy ensures that updates are committed at both the primary and the
secondary before the application considers the updates complete; therefore, the secondary is
fully up-to-date if it is needed in a failover. However, the application is fully exposed to the
latency and bandwidth limitations of the communication link to the secondary. In a truly
remote situation, this extra latency can have a significant adverse effect on application
performance.
SVC assumes that the FC fabric to which it is attached contains hardware that achieves the
long distance requirement for the application. This hardware makes distant storage
accessible as though it were local storage. Specifically, it enables a group of up to four SVC
clusters to connect (FC login) to each other and establish communications in the same way
as though they were located nearby on the same fabric. The only differences are in the
expected latency of that communication, the bandwidth capability of the links, and the
availability of the links as compared with the local fabric. Special configuration guidelines exist
for SAN fabrics that are used for data replication. Issues to consider are the distance and the
bandwidth of the site interconnections.
In asynchronous remote copy, the application considers an update complete before that
update has necessarily been committed at the secondary. Hence, on a failover, certain
updates might be missing at the secondary. The application must have an external
mechanism for recovering the missing updates and reapplying them. This mechanism can
involve user intervention. Asynchronous remote copy provides comparable functionality to a
continuous backup process that is missing the last few updates. Recovery on the secondary
site involves bringing up the application on this recent “backup” and, then, reapplying the
most recent updates to bring the secondary up-to-date.
The asynchronous remote copy must present at the secondary a view to the application that
might not contain the latest updates, but is always consistent. If consistency has to be
guaranteed at the secondary, applying updates in an arbitrary order is not an option. At the
primary side, the application is enforcing an ordering implicitly by not scheduling an I/O until a
previous dependent I/O has completed. We do not know the actual ordering constraints of the
application; the best approach is to choose an ordering that the application might see if I/O at
the primary was stopped at a suitable point. One example is to apply I/Os at the secondary in
the order in which they were completed at the primary. Thus, the secondary always reflects a state
that could have been seen at the primary if I/O had been frozen there.
The SVC Global Mirror protocol operates to identify small groups of I/Os, which are known to
be active concurrently in the primary cluster. The process to identify these groups of I/Os
does not significantly contribute to the latency of these I/Os when they execute at the primary.
These groups are applied at the secondary in the order in which they were executed at the
primary. By identifying groups of I/Os that can be applied concurrently at the secondary, the
protocol maintains good throughput as the system size grows.
The relationship between the two copies is not symmetrical. One copy of the data set is
considered the primary copy, which is sometimes also known as the source. This copy
provides the reference for normal runtime operation. Updates to this copy are shadowed to a
secondary copy, which is sometimes known as the destination or even the target. The
secondary copy is not normally referenced for performing I/O. If the primary copy fails, the
secondary copy can be enabled for I/O operation. A typical use of this function might involve
two sites where the first site provides service during normal operations and the second site is
only activated when a failure of the first site is detected.
The secondary copy is not accessible for application I/O other than the I/Os that are
performed for the remote copy process. The SVC allows read-only access to the secondary
storage when it contains a consistent image. This capability is only intended to allow boot
time operating system discovery to complete without error so that any hosts at the secondary
site can be ready to start up the applications with minimum delay, if required. For instance,
many operating systems need to read logical block address (LBA) 0 to configure a logical
unit.
“Enabling” the secondary copy for active operation will require SVC, operating system, and
possibly application-specific work, which needs to be performed as part of the entire failover
process. The SVC software at the secondary must be instructed to stop the relationship,
which makes the secondary logical unit accessible for normal I/O access. The operating
system might need to mount file systems or perform similar work, which can typically only
happen when the logical unit is accessible for writes. The application might have a log of
work to recover.
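For example, assuming a hypothetical relationship named rel1, the secondary VDisk can be
made accessible for host I/O by stopping the relationship with the access parameter:
svctask stoprcrelationship -access rel1
The exact sequence depends on the failover procedure that is in place; this sketch illustrates
only the SVC part of enabling the secondary copy.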
Note that this property of remote copy, the requirement to enable the secondary copy,
differentiates it from RAID-1 mirroring. The latter aims to emulate a single, reliable disk,
regardless of what system accesses it. Remote copy retains the property that there are two
volumes in existence, but it suppresses one volume while the copy is being maintained.
The underlying storage at the primary or secondary of a remote copy will normally be RAID
storage, but it can be any storage that can be managed by the SVC.
Making use of a secondary copy involves a conscious policy decision by a user that a failover
is required. The application work involved in establishing operation on the secondary copy is
substantial. The goal is to make this process rapid, though not seamless; rapid is still much
faster than recovering from a backup copy.
Most clients will aim to automate this remote copy through failover management software.
SVC provides Simple Network Management Protocol (SNMP) traps and interfaces to enable
this automation. IBM Support for automation is provided by IBM Tivoli® Storage Productivity
Center for Replication.
You can also access the documentation online at the IBM Tivoli Storage Productivity Center
information center:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp
2.2.16 FlashCopy
FlashCopy makes a copy of a source VDisk to a target VDisk. The original content of the
target VDisk is lost. After the copy operation has started, the target VDisk has the contents of
the source VDisk as it existed at a single point in time. Although the copy operation takes
time, the resulting data at the target appears as though the copy was made instantaneously.
You can run FlashCopy on multiple source and target VDisks. FlashCopy permits the
management operations to be coordinated so that a common single point in time is chosen
for copying target VDisks from their respective source VDisks. This capability allows a
consistent copy of data, which spans multiple VDisks.
SVC also permits multiple target VDisks to be FlashCopied from each source VDisk. You can
use this capability to create images from separate points in time for each source VDisk, and
you can also create multiple images from a source VDisk at a common point in time. Source
and target VDisks can be SE VDisks.
Starting with SVC 5.1, Reverse FlashCopy is supported. It enables target VDisks to become
restore points for the source without breaking the FlashCopy relationship and without having
to wait for the original copy operation to complete. SVC supports multiple targets and thus
multiple rollback points.
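As an illustration of these operations, the following sketch creates and triggers a FlashCopy
mapping and then starts a mapping in the reverse direction as a restore point (the VDisk and
mapping names are hypothetical):
svctask mkfcmap -source vdisk_prod -target vdisk_bkup -name map1
svctask startfcmap -prep map1
svctask mkfcmap -source vdisk_bkup -target vdisk_prod -name map1_rev
svctask startfcmap -restore map1_rev
The -restore parameter indicates that the mapping is being started to restore the original
source without breaking the existing FlashCopy relationship.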
FlashCopy is sometimes described as an instance of a Time-Zero copy (T0) or a Point in
Time (PiT) copy technology. Although the FlashCopy operation takes a finite time, this time is
several orders of magnitude less than the time that is required to copy the data using
conventional techniques.
Most clients aim to integrate the FlashCopy feature for point in time copies and quick recovery
of their applications and databases. IBM Support is provided by Tivoli Storage FlashCopy
Manager:
http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/
You can read a detailed description of Data Mirroring and FlashCopy copy services in
Chapter 7, “SAN Volume Controller operations using the command-line interface” on
page 339. We discuss data migration in Chapter 6, “Advanced Copy Services” on page 255.
2.3 SVC cluster overview
In simple terms, a cluster is a collection of servers that, together, provide a set of resources to
a client. The key point is that the client has no knowledge of the underlying physical hardware
of the cluster. The client is isolated and protected from changes to the physical hardware,
which offers many benefits, most significantly, high availability.
Resources on clustered servers act as highly available versions of unclustered resources. If a
node (an individual computer) in the cluster is unavailable, or too busy to respond to a request
for a resource, the request is transparently passed to another node capable of processing it,
so that clients are unaware of the exact locations of the resources they are using.
For example, a client can request the use of an application without being concerned about
either where the application resides or which physical server is processing the request. The
user simply gains access to the application in a timely and reliable manner. Another benefit is
scalability. If you need to add users or applications to your system and want performance to
be maintained at existing levels, additional systems can be incorporated into the cluster.
The SVC is a collection of up to eight cluster nodes, which are added in pairs. In future
releases, the cluster size might be increased to permit further performance scalability. These
nodes are managed as a set (cluster) and present a single point of control to the
administrator for configuration and service activity.
The current eight-node limit within an SVC cluster is a limitation of the product, not an
architectural one. Larger clusters are possible without changing the underlying architecture.
SVC demonstrated its ability to scale during a recently run project:
http://www-03.ibm.com/press/us/en/pressrelease/24996.wss
Based on a 14-node cluster, coupled with solid-state drive controllers, the project achieved a
data rate of over one million IOPS with a response time of under 1 millisecond (ms).
Although the SVC code is based on a purpose-optimized Linux kernel, the clustering feature
is not based on Linux clustering code. The cluster software used within SVC, that is, the event
manager cluster framework, is based on the outcome of the COMPASS research project. It is
the key element to isolate the SVC application from the underlying hardware nodes. The
cluster software makes the code portable and provides the means to keep the single
instances of the SVC code running on separate cluster nodes in sync. Node restarts (during a
code upgrade), the addition of new nodes, the removal of old nodes from a cluster, and node
failures therefore do not impact the SVC's availability.
It is key for all active nodes of a cluster to know that they are members of the cluster.
Especially in situations such as a split-brain scenario, where individual nodes lose contact
with other nodes and cannot determine whether those nodes are still reachable, it is key to
have a solid mechanism to decide which nodes form the active cluster. A worst case scenario
is a cluster that splits into two separate clusters.
Within an SVC cluster, the voting set and an optional quorum disk are responsible for the
integrity of the cluster. If nodes are added to a cluster, they get added to the voting set; if
nodes are removed, they will also quickly be removed from the voting set. Over time, the
voting set, and hence the nodes in the cluster, can completely change so that the cluster has
migrated onto a completely separate set of nodes from the set on which it started.
Within an SVC cluster, the quorum is defined in one of these ways:
More than half the nodes in the voting set
Exactly half of the nodes in the voting set and the quorum disk from the voting set
When there is no quorum disk in the voting set, exactly half of the nodes in the voting set,
if that half includes the node that appears first in the voting set (a node is entered into the
voting set in the first available free slot)
These rules guarantee that there is only ever at most one group of nodes able to operate as
the cluster, so the cluster never splits into two. The SVC cluster implements a dynamic
quorum. Following a loss of nodes, if the cluster can continue operation, the cluster will adjust
the quorum requirement, so that further node failure can be tolerated.
The node with the lowest Node Unique ID in a cluster becomes the boss node for the group of nodes and
proceeds to determine (from the quorum rules) whether the nodes can operate as the cluster.
This node also presents the maximum two cluster IP addresses on one or both of its node’s
Ethernet ports to allow access for cluster management.
2.3.1 Quorum disks
The cluster uses the quorum disk for two purposes: as a tie breaker in the event of a SAN
fault, when exactly half of the nodes that were previously members of the cluster are present,
and to hold a copy of important cluster configuration data. Just over 256 MB is reserved for
this purpose on each quorum disk candidate. There is only one active quorum disk in a
cluster; however, the cluster uses three MDisks as quorum disk candidates. The cluster
automatically selects the actual active quorum disk from the pool of assigned quorum disk
candidates.
If a tiebreaker condition occurs, the half of the cluster nodes that is able to reserve the
quorum disk after the split has occurred locks the disk and continues to operate. The other
half stops its operation. This design prevents both sides from becoming inconsistent with
each other.
When MDisks are added to the SVC cluster, the SVC cluster checks the MDisk to see if it can
be used as a quorum disk. If the MDisk fulfills the requirements, the SVC will assign the first
three MDisks added to the cluster as quorum candidates. One of them is selected as the
active quorum disk.
Note: To be considered eligible as a quorum disk, a LUN must meet the following criteria:
It must be presented by a disk subsystem that is supported to provide SVC quorum
disks.
It cannot be allocated on one of the node’s internal flash disks.
It has been manually allowed to be a quorum disk candidate using the svctask
chcontroller -allow_quorum yes command.
It must be in managed mode (no image mode disks).
It must have sufficient free extents to hold the cluster state information, plus the stored
configuration metadata.
It must be visible to all of the nodes in the cluster.
If possible, the SVC will place the quorum candidates on separate disk subsystems. After the
quorum disk has been selected, however, no attempt is made to ensure that the other quorum
candidates are presented through separate disk subsystems.
With SVC 5.1, quorum disk candidates and the active quorum disk in a cluster can be listed
by the svcinfo lsquorum command. When the set of quorum disk candidates has been
chosen, it is fixed.
A new quorum disk candidate will only be chosen in one of these conditions:
The administrator requests that a specific MDisk becomes a quorum disk by using the
svctask setquorum command.
An MDisk that is a quorum disk is deleted from an MDG.
An MDisk that is a quorum disk changes to image mode.
An offline MDisk will not be replaced as a quorum disk candidate.
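For example, you can list the quorum disk candidates and request that a specific MDisk (the
hypothetical mdisk8 in this sketch) takes over quorum index 2:
svcinfo lsquorum
svctask setquorum -quorum 2 mdisk8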
A cluster needs to be regarded as a single entity for disaster recovery purposes. The cluster
and the quorum disk need to be colocated.
There are special considerations concerning the placement of the active quorum disk for a
stretched cluster and stretched I/O Group configurations. Details are available at this Web
site:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311
Important: Running an SVC cluster without a quorum disk can seriously affect your
operation. A lack of available quorum disks for storing metadata will prevent any migration
operation (including a forced MDisk delete). Mirrored VDisks might be taken offline if there
is no quorum disk available. This behavior occurs, because synchronization status for
mirrored VDisks is recorded on the quorum disk.
During the normal operation of the cluster, the nodes communicate with each other. If a node
is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the cluster. If a
node fails for any reason, the workload that is intended for it is taken over by another node
until the failed node has been restarted and readmitted to the cluster (which happens
automatically). In the event that the microcode on a node becomes corrupted, resulting in a
failure, the workload is transferred to another node. The code on the failed node is repaired,
and the node is readmitted to the cluster (again, all automatically).
2.3.2 I/O Groups
For I/O purposes, the SVC nodes within the cluster are grouped into pairs, called I/O Groups,
with a single pair being responsible for serving I/O on a given VDisk. One node within the I/O
Group represents the preferred path for I/O to a given VDisk. The other node provides the
failover path. This preference alternates between nodes as each VDisk is created within an
I/O Group, which is an approach to balance the workload evenly between the two nodes.
Preferred node: The preferred node does not signify absolute ownership. The data can
still be accessed by the partner node in the I/O Group in the event of a failure.
2.3.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a
magnetic disk drive suffer from both seek and latency time at the drive level, which can result
in response times of 1 to 10 ms (for an enterprise-class disk).
The new 2145-CF8 nodes combined with SVC 5.1 provide 24 GB of memory per node (48
GB per I/O Group, 192 GB per SVC cluster). The SVC provides a flexible cache model, and
the node's memory can be used as read or write cache. The size of the write cache is limited
to a maximum of 12 GB of the node's memory. Depending on the current I/O situation on a
node, the free part of the memory (maximum 24 GB) can be fully used as read cache.
Cache is allocated in 4 KB pages. A page belongs to one track. A track is the unit of locking
and destage granularity in the cache. It is 32 KB in size (eight pages). A track might only be
partially populated with valid pages. The SVC coalesces writes up to the 32 KB track size if
the writes reside in the same track prior to destage; for example, if 4 KB is written into a
track and another 4 KB is written to another location in the same track, they are destaged
together. Therefore, the blocks written from the SVC to the disk subsystem can be any size
from 512 bytes up to 32 KB.
When data is written by the host, the preferred node within the I/O Group saves the data in its
cache. Before the cache returns completion to the host, the write must be mirrored to (that is,
copied into the cache of) the partner node for availability reasons. Only after a copy of the
written data exists on both nodes does the cache return completion to the host.
Write data that is held in cache has not yet been destaged to disk; therefore, if only one copy
of the data were kept, you would risk losing data. Write cache entries without updates during
the last two minutes are automatically destaged to disk.
If one node of an I/O Group is missing, due to a restart or a hardware failure, the remaining
node empties all of its write cache and proceeds in an operation mode, which is referred to as
write-through mode. A node operating in write-through mode writes data directly to the disk
subsystem before sending an “I/O complete” status message back to the host. Running in
this mode can degrade the performance of the specific I/O Group.
Starting with SVC Version 4.2.1, write cache partitioning was introduced to the SVC. This
feature restricts the maximum amount of write cache that a single MDG can allocate in a
cluster. Table 2-2 shows the upper limit of write cache data that a single MDG in a cluster can
occupy.
Table 2-2   Upper limit of write cache per MDG

One MDG    Two MDGs    Three MDGs    Four MDGs    More than four MDGs
100%       66%         40%           33%          25%
For in-depth information about SVC cache partitioning, we strongly recommend IBM SAN
Volume Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this Web site:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
An SVC node can treat part or all of its physical memory as non-volatile. Non-volatile means
that its contents are preserved across power losses and resets. Besides the bitmaps for
FlashCopy and Remote Mirroring relationships, the Virtualization Table and the Write Cache are
the most important items in the non-volatile memory. The actual amount that can be treated
as non-volatile is dependent on the hardware.
In the event of a disruption or external power loss, the physical memory is copied to a file in
the file system on the node’s internal disk drive, so that the contents can be recovered when
external power is restored. The uninterruptible power supply units, which are delivered with
each node’s hardware, ensure that there is sufficient internal power to keep a node
operational to perform this dump when external power is removed. After dumping the content
of the non-volatile part of the memory to disk, the SVC node shuts down.
2.3.4 Cluster management
The SVC can be managed by one of the following three interfaces:
A textual command-line interface (CLI) accessed via a Secure Shell (SSH) connection.
A Web browser-based graphical user interface (GUI) written as a CIM Client (ICAT) using
the SVC CIMOM. It supports flexible and rapid access to storage management
information.
A CIMOM, which can be used to write alternative CIM Clients (such as IBM System Storage
Productivity Center).
Starting with SVC release 4.3.1, the SVC Console (ICAT) can use the CIM Agent that is
embedded in the SVC cluster. With release 5.1 of the code, using the embedded CIMOM is
mandatory. This CIMOM will support the Storage Management Initiative Specification (SMI-S)
Version 1.3 standard.
User account migration
During the upgrade from SAN Volume Controller Console Version 4.3.1 to Version 5.1, the
installation program attempts to migrate user accounts that are currently defined to the
CIMOM on the cluster. If the migration of those accounts fails with the installation program,
you can manually migrate the user accounts with the help of a script. You can obtain details in
the SVC Software Installation and Configuration Guide, SC23-6628-04.
Hardware Management Console
The management console for SVC is referred to as the IBM System Storage Productivity
Center. IBM System Storage Productivity Center is a hardware and software solution that
includes a suite of storage infrastructure management software that can centralize, automate,
and simplify the management of complex and heterogeneous storage environments.
IBM System Storage Productivity Center
IBM System Storage Productivity Center is based on server hardware (IBM System
x®-based) and a set of pre-installed and optional software modules. Several of these
pre-installed modules provide base functionality only, or are not activated. You can activate
these modules, or the enhanced functionalities, by adding separate licenses.
IBM System Storage Productivity Center contains these functions:
Tivoli Integrated Portal: IBM Tivoli Integrated Portal is a standards-based architecture for
Web administration. The installation of Tivoli Integrated Portal is required to enable single
sign-on (SSO) for Tivoli Storage Productivity Center. Tivoli Storage Productivity Center
now installs Tivoli Integrated Portal along with Tivoli Storage Productivity Center.
Tivoli Storage Productivity Center: IBM Tivoli Storage Productivity Center Basic Edition
4.1.0 is pre-installed on the IBM System Storage Productivity Center server. There are
several other commercially available products of Tivoli Storage Productivity Center that
provide additional functionality beyond Tivoli Storage Productivity Center Basic Edition.
You can activate these packages by adding the specific licenses to the pre-installed Basic
Edition:
– Tivoli Storage Productivity Center for Disk allows you to monitor storage systems for
performance.
– Tivoli Storage Productivity Center for Data allows you to collect and monitor file
systems and databases.
– Tivoli Storage Productivity Center Standard Edition is a bundle that includes all of the
other packages, along with SAN planning tools that make use of information that is
collected from the Tivoli Storage Productivity Center components.
Tivoli Storage Productivity Center for Replication: The functions of Tivoli Storage
Productivity Center for Replication provide the management of the IBM FlashCopy, Metro
Mirror, and Global Mirror capabilities for the IBM Enterprise Storage Server® Model 800,
IBM DS6000™, DS8000®, and IBM SAN Volume Controller. You can activate this
package by adding the specific licenses.
SVC GUI (ICAT)
SSH Client (PuTTY)
Windows Server 2008 Enterprise Edition
Several base software packages that are required for Tivoli Productivity Center
Optional software packages, such as anti-virus software or DS3000/4000/5000 Storage
Manager, can be installed on the IBM System Storage Productivity Center server by the
client.
Figure 2-13 on page 40 provides an overview of the SVC management components. We
describe the details in Chapter 4, “SAN Volume Controller initial configuration” on page 103.
You can obtain details about the IBM System Storage Productivity Center in IBM System
Storage Productivity Center User’s Guide Version 1 Release 4, SC27-2336-03.
Figure 2-13 SVC management overview
2.3.5 User authentication
With SVC 5.1, several changes concerning user authentication for an SVC cluster have been
introduced to make user authentication simpler.
Earlier SVC releases authenticated all users locally. SVC 5.1 has two authentication
methods:
Local authentication: Local authentication is similar to the existing method and will be
described next.
Remote authentication: Remote authentication supports the use of a remote
authentication server, which for SVC is the Tivoli Embedded Security Services, to validate
the passwords. The Tivoli Embedded Security Services is part of the Tivoli Integrated
Portal, which is one of the three components that come with Tivoli Productivity Center 4.1
(Tivoli Productivity Center, Tivoli Productivity Center for Replication, and Tivoli Integrated
Portal) that are pre-installed on the IBM System Storage Productivity Center 1.4. The IBM
System Storage Productivity Center 1.4 is the management console for SVC 5.1 clusters.
Each SVC cluster can have multiple users defined. The cluster maintains an audit log of
successfully executed commands, indicating which users performed what actions at what times.
User names can contain only printable ASCII characters:
Forbidden characters are single quotation mark (‘), colon (:), percent symbol (%), asterisk
(*), comma (,), and double quotation marks (“).
A user name cannot begin or end with a blank.
Passwords for local users do not have any forbidden characters, but passwords cannot begin
or end with blanks.
SVC superuser
There is a special local user called the superuser that always exists on every cluster. It cannot
be deleted. Its password is set by the user during cluster initialization. The superuser
password can be reset from the node’s front panel, and this function can be disabled,
although doing this makes the cluster inaccessible if all of the users forget their passwords or
lose their SSH keys. The superuser’s password supersedes the cluster administrator
password that was present in previous software releases.
To register an SSH key for the superuser to provide command-line access, you use the GUI,
usually at the end of the cluster initialization process. But, you can also add it later.
The superuser is always a member of user group 0, which has the most privileged role within
the SVC.
2.3.6 SVC roles and user groups
Each user group is associated with a single role. The role for a user group cannot be
changed, but additional new user groups (with one of the defined roles) can be created.
User groups are used for local and remote authentication. Because SVC knows of five roles,
there are, by default, five user groups defined in an SVC cluster (see Table 2-3).
Table 2-3   User groups

User group ID   User group      Role
0               SecurityAdmin   SecurityAdmin
1               Administrator   Administrator
2               CopyOperator    CopyOperator
3               Service         Service
4               Monitor         Monitor
The access rights for a user belonging to a specific user group are defined by the role that is
assigned to the user group. It is the role that defines what a user can do (or cannot do) on an
SVC cluster.
Table 2-4 on page 42 shows the roles, ordered from the least privileged Monitor role at the
top down to the most privileged SecurityAdmin role.
Table 2-4   Commands permitted for each role

Role            Allowed commands
Monitor         All svcinfo commands, plus:
                svctask: finderr, dumperrlog, dumpinternallog, chcurrentuser
                svcconfig: backup
Service         All commands allowed for the Monitor role, plus:
                svctask: applysoftware, setlocale, addnode, rmnode, cherrstate,
                writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps,
                settimezone, stopcluster, startstats, stopstats, settime
CopyOperator    All commands allowed for the Monitor role, plus:
                svctask: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp,
                chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap,
                startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp,
                chrcconsistgrp, startrcrelationship, stoprcrelationship,
                switchrcrelationship, chrcrelationship, chpartnership
Administrator   All commands, except:
                svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp,
                chusergrp, setpwdreset
SecurityAdmin   All commands
2.3.7 SVC local authentication
Local users are those users managed entirely on the cluster without the intervention of a
remote authentication service. Local users must have either a password, an SSH public key,
or both. The password is used for authentication and the SSH key is used for command-line
or file transfer (SecureCopy) access. Therefore, a local user can access the SVC cluster via
the GUI only if a password is specified.
Local users: Be aware that local users are created per SVC cluster. Each user has a
name, which must be unique across all users in one cluster. If you want to allow access for
a user on multiple clusters, you have to define the user in each cluster with the same name
and the same privileges.
A local user always belongs to only one user group.
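A minimal sketch of creating a local user, assuming a hypothetical user name and an SSH
public key file that was previously copied to the cluster (exact parameters can vary by
release):
svctask mkuser -name jane -usergrp Monitor -password Passw0rd -keyfile /tmp/jane.pub
With both a password and a key defined, this user can log in to the GUI and use the CLI.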
Figure 2-14 on page 43 shows an overview of local authentication within the SVC.
Figure 2-14 Simplified overview of SVC local authentication
2.3.8 SVC remote authentication and single sign-on
You can configure an SVC cluster to use a remote authentication service. Remote users are
those users that are managed by the remote authentication service and require
command-line or file-transfer access.
Remote users only have to be defined in the SVC if command-line access is required. In that
case, the remote authentication flag has to be set, and an SSH key and its password have to
be defined for this user. Remember that for users requiring CLI access with remote
authentication, defining the password locally for this user is mandatory.
Remote users cannot belong to any user group, because the remote authentication service,
for example, a Lightweight Directory Access Protocol (LDAP) directory server, such as IBM
Tivoli Directory Server or Microsoft® Active Directory, will deliver the user group information.
The upgrade from SVC 4.3.1 is seamless. Existing users and roles are migrated without
interruption. Remote authentication can be enabled after the upgrade is complete.
Figure 2-15 on page 44 gives an overview of SVC remote authentication.
Figure 2-15 Simplified overview of SVC 5.1 remote authentication
The authentication service supported by SVC is the Tivoli Embedded Security Services server
component level 6.2.
The Tivoli Embedded Security Services server provides the following two key features:
Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in
use, which means that the SVC communicates only with Tivoli Embedded Security
Services to get its authentication information. The type of protocol that is used to access
the central directory, and the kind of directory system that is used, are transparent to SVC.
Tivoli Embedded Security Services provides a secure token facility that is used to enable
single sign-on (SSO). SSO means that users do not have to log in multiple times when
using what appears to them to be a single system. It is used within Tivoli Productivity
Center. When the SVC Console is launched from within Tivoli Productivity Center, the user
will not have to log on to the SVC Console, because the user has already logged in to
Tivoli Productivity Center.
With reference to Figure 2-16 on page 45, the user starts application A with a user name and
password (1), which are authenticated using the Tivoli Embedded Security Services server
(2). The server returns a token (3), which is an opaque string that can only be interpreted by
the Tivoli Embedded Security Services server. The server also supplies the user’s groups and
an expiry time stamp for the token. The client device (SVC in our case) is responsible for
mapping a Tivoli Embedded Security Services user group to roles.
Application A needs to launch application B. Instead of getting the user to enter a new
password to authenticate to application B, A passes B the Tivoli Embedded Security Services
token (4). Application B passes the Tivoli Embedded Security Services token to the Tivoli
Embedded Security Services server (5), which decodes the token and returns the user’s ID
and groups to application B (6) along with an expiry time stamp.
(Figure 2-16 depicts this flow: 1: login(u, p) to Application A; 2: auth(u, p) from Application A
to the Tivoli Embedded Security Services server, which consults the LDAP server; 3:
auth_ok(tk, ts, g); 4: launch(tk) from Application A to Application B; 5: auth(tk); 6:
auth_ok(tk, ts, u, g).)
Figure 2-16 SSO with Tivoli Embedded Security Services
The token expiry time stamp is advice to the Tivoli Embedded Security Services client
applications A and B about credential caching. The applications are permitted to cache and
use a token or user name-password combination until the time stamp that is returned by the
server expires.
So, in our example, application B can cache the fact that a particular token maps to a
particular user ID and groups, which is a performance boost, because it saves the latency of
querying the Tivoli Embedded Security Services server on each interaction between A and B.
After the lifetime of the token has expired, application A must query the server again and
obtain a new time stamp to rejuvenate the token (or alternatively discover that the credentials
are now invalid).
The Tivoli Embedded Security Services server administrator can configure the length of time
that is used to set expiry timestamps. This system is only effective if the Tivoli Embedded
Security Services server and the applications have synchronized clocks.
Using a remote authentication service
Use the following steps to use SVC with a remote authentication service:
1. Configure the cluster with the location of the remote authentication server.
You can change the settings with this command:
svctask chauthservice.......
You can view settings with this command:
svcinfo lscluster.......
SVC supports either an HTTP or HTTPS connection to the Tivoli Embedded Security
Services server. If the HTTP option is used, the user and password information is
transmitted in clear text over the IP network.
2. Configure user groups on the cluster matching those user groups that are used by the
authentication service. For each group of interest that is known to the authentication
service, there must be an SVC user group with the same name and the remote setting
enabled.
For example, you can have a group called sysadmins, whose members require the SVC
Administrator role. Configure this group by using the command:
svctask mkusergrp -name sysadmins -remote -role Administrator
If none of a user’s groups match any of the SVC user groups, the user is not permitted to
access the cluster.
3. Configure users that do not require SSH access. Any SVC users that are to be used with
the remote authentication service and do not require SSH access need to be deleted from
the system. The superuser cannot be deleted; it is a local user and cannot use the remote
authentication service.
4. Configure users that do require SSH access. Any SVC users that are to be used with the
remote authentication service and do require SSH access must have their remote setting
enabled and the same password set on the cluster and the authentication service. The
remote setting instructs SVC to consult the authentication service for group information
after the SSH key authentication step to determine the user’s role. The need to configure
the user’s password on the cluster in addition to the authentication service is due to a
limitation in the Tivoli Embedded Security Services server software.
5. Configure the system time. For correct operation, both the SVC cluster and the system
running the Tivoli Embedded Security Services server must have the exact same view of
the current time; the easiest way is to have them both use the same Network Time
Protocol (NTP) server.
Failure to follow this step can lead to poor interactive performance of the SVC user
interface or incorrect user-role assignments.
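As a sketch of this step, assuming a hypothetical NTP server address (the exact parameter
name depends on the SVC release), the cluster can be pointed at the same time source that
the Tivoli Embedded Security Services server uses:
svctask chcluster -ntpip 9.180.0.1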
Also, Tivoli Productivity Center 4.1 leverages the Tivoli Integrated Portal infrastructure and its
underlying WebSphere® Application Server capabilities to make use of an LDAP registry and
enable single sign-on (SSO).
You can obtain more information about implementing SSO within Tivoli Productivity Center
4.1 in Chapter 6 (LDAP authentication support and single sign-on) of the IBM Tivoli Storage
Productivity Center V4.1 Release Guide, SG247725, at this Web site:
http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html?Open
2.4 SVC hardware overview
The SVC 5.1 release also provides new, more powerful hardware nodes. As defined in the
underlying COMPASS architecture, these new nodes are based on Intel® processors with
standard PCI Express adapters to interface with the SAN and the LAN.
The new SVC 2145-CF8 Storage Engine has the following key hardware features:
New SVC engine based on Intel Core i7 2.4 GHz quad-core processor
24 GB memory, with future growth possibilities
Four 8 Gbps FC ports
Up to four solid-state drives, enabling scale-out high performance solid-state drive support
with SVC
Two power supplies
Double bandwidth compared to its predecessor node (2145-8G4)
Up to double IOPS compared to its predecessor node (2145-8G4)
A 19-inch rack-mounted enclosure
IBM Systems Director Active Energy Manager™-enabled
The new nodes can be smoothly integrated within existing SVC clusters. New nodes can be
intermixed in pairs within existing SVC clusters. Mixing engine types in a cluster results in
VDisk throughput characteristics of the engine type in that I/O Group. The cluster
nondisruptive upgrade capability can be used to replace older engines with new 2145-CF8
engines.
They are 1U high, fit into 19-inch racks, and use the same uninterruptible power supply unit
models as previous models. Integration into existing clusters requires that the cluster runs
SVC 5.1 code. The only node that does not support SVC 5.1 code is the 2145-4F2-type node.
An upgrade scenario for SVC clusters based on, or containing, these first-generation nodes
will be available later this year. Figure 2-17 shows the front-side view of the new SVC 2145-CF8
node.
Figure 2-17 The SVC 2145-CF8 storage engine
Remember that several of the new features in the new SVC 5.1 release, such as iSCSI, are
software features and are therefore available on all nodes supporting this release.
2.4.1 Fibre Channel interfaces
The IBM SAN Volume Controller provides the following FC interfaces on the node types:
Supported link speed of 2/4/8 Gbps on SVC 2145-CF8 nodes
Supported link speed of 1/2/4 Gbps on SVC 2145-8G4, SVC 2145-8A4, and SVC
2145-8F4 nodes
The nodes come with a 4-port HBA. The FC ports on these node types autonegotiate the link
speed that is used with the FC switch. The ports normally operate at the maximum speed that
is supported by both the SVC port and the switch. However, if a large number of link errors
occur, the ports might operate at a lower speed than what is supported.
The actual port speed for each of the four ports can be displayed via the GUI, the CLI, the
node’s front panel, and also by light-emitting diodes (LEDs) that are placed at the rear of the
node. For details, consult the node-specific SVC hardware installation guides:
IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation
Guide, GC52-1356
IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation
Guide, GC27-2219
IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation
Guide, GC27-2220
IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware
Installation Guide, GC27-2221
The SVC imposes no limit on the FC optical distance between SVC nodes and host servers.
FC standards, along with small form-factor pluggable optics (SFP) capabilities and cable type,
dictate the maximum FC distances that are supported.
If you use longwave SFPs in the SVC node itself, the longest supported FC link between the
SVC and switch is 10 km (6.21 miles).
Table 2-5 shows the actual cable length that is supported with shortwave SFPs.
Table 2-5   Overview of supported cable length

FC link speed        OM1 (M6) standard    OM2 (M5) standard    OM3 (M5E) optimized
                     62.5/125 µm          50/125 µm            50/125 µm-300
2 Gbps FC            150 m                300 m                500 m
4 Gbps FC            70 m                 150 m                380 m
8 Gbps FC limiting   21 m                 50 m                 150 m
Table 2-6 shows the rules that apply with respect to the number of inter-switch link (ISL) hops
allowed in a SAN fabric between SVC nodes or the cluster.
Table 2-6   Number of supported ISL hops

Between nodes in an     Between nodes in        Between nodes and      Between nodes and
I/O Group               separate I/O Groups     the disk subsystem     the host server
0                       1                       1                      Maximum 3
(connect to the same    (recommended: 0,        (recommended: 0,
switch)                 connect to the same     connect to the same
                        switch)                 switch)
2.4.2 LAN interfaces
The 2145-CF8 node supports (as its predecessor nodes did) two 1 Gbps LAN ports. In SVC
4.3.1 and before, the SVC cluster presented a single IP interface, which was used by the SVC
configuration interfaces (CLI and CIMOM). Although multiple physical nodes were present in
the SVC cluster, only a single node (the configuration node) was active on the IP network.
This configuration IP address was presented from the eth0 port of the configuration node.
If the configuration node failed, a separate node in the cluster took over the duties of the
configuration node, and the IP address for the cluster was then presented at the eth0 port of
that new configuration node. The configuration node supported concurrent access on the IPv4 and
IPv6 configuration addresses on the eth0 port from SVC 4.3 onward.
Starting with SVC 5.1, the cluster configuration node can now be accessed on either eth0 or
eth1. The cluster can have two IPv4 and two IPv6 addresses that are used for configuration
purposes (CLI or CIMOM access). The cluster can therefore be managed by SSH clients or
GUIs on System Storage Productivity Centers on separate physical IP networks. This
capability provides redundancy in the event of a failure of one of these IP networks.
Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each
SVC node port; these IP addresses are independent of the cluster configuration IP
addresses.
Figure 2-12 on page 29 shows an overview.
2.5 Solid-state drives
You can use solid-state drives, or more specifically, single-level cell (SLC) or multi-level cell
(MLC) NAND Flash-based disks (for the sake of simplicity, we call them solid-state drives in
the following chapters), to overcome a growing problem that is known as the memory/storage
bottleneck.
2.5.1 Storage bottleneck problem
The memory/storage bottleneck describes the steadily growing gap between the time
required for a CPU to access data located in its cache/memory (typically in nanoseconds)
and data located on external storage (typically in milliseconds).
While CPUs and cache/memory devices continually improve their performance, this is not
true in general for mechanical disks that are used as external storage.
Figure 2-18 shows these access time differences.
Figure 2-18 The memory/storage bottleneck
The individual times that are shown are not that important; instead, look at the time
differences between accessing data that is located in cache and data that is located on
external disk.
We have added a second scale to Figure 2-18, which gives you an idea of how long it takes to
access the data in a scenario where a single CPU cycle takes 1 second. This scale gives you
an idea of the importance of future storage technologies closing or reducing the gap between
access times for data stored in cache/memory and access times for data stored on an
external medium.
Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown
remarkable progress regarding capacity growth, form factor/size reduction, price decrease
($/GB), and reliability.
However, the number of I/Os that a disk can handle and the response time that it takes to
process a single I/O have not increased at the same rate, although they have certainly
increased. In real environments, we can expect from today's enterprise-class FC or
serial-attached SCSI (SAS) disks up to 200 IOPS per disk with an average response time (a
latency) of approximately 7 ms per I/O.
To simplify: rotating disks are getting, and will continue to get, bigger in capacity (several
TB), smaller in form factor/footprint (3.5 inches, 2.5 inches, and 1.8 inches), and less
expensive ($/GB), but not necessarily faster.
The limiting factor is the number of revolutions per minute (rpm) that a disk can perform
(currently 15,000). This factor defines the time that is required to access a specific data block
on a rotating device. There might be small improvements in the future, but a big step, such
as doubling the number of revolutions, if technically even possible, would inevitably cause a
massive increase in power consumption and price.
2.5.2 Solid-state drive solution
Solid-state drives can provide a solution for this dilemma. No rotating parts means
improved robustness and lower power consumption. A remarkable improvement in I/O
performance and a massive reduction in the average I/O response times (latency) are the
compelling reasons to use solid-state drives in today's storage subsystems.
Enterprise-class solid-state drives typically deliver 50,000 read and 20,000 write IOPS with
latencies of typically 50 µs for reads and 800 µs for writes. Their form factors (2.5 inches/3.5
inches) and their interfaces (FC/SAS/Serial Advanced Technology Attachment (SATA)) make
them easy to integrate into existing disk shelves.
Adding solid-state drives: Specific performance problems might be solved by carefully
adding solid-state drives to an existing disk subsystem. But be aware that solving performance
problems by using solid-state drives excessively in existing disk subsystems will inevitably
create performance bottlenecks on the underlying RAID controllers.
2.5.3 Solid-state drive market
The solid-state drive storage market is rapidly evolving. The key differentiator among today’s
solid-state drive products that are available on the market is not the storage medium, but the
logic in the disks' internal controllers. Optimally handling what is referred to as wear
leveling, which defines the controller’s capability to ensure a device’s durability, and closing
the remarkable gap between read and write I/O performance are the top priorities in today’s
controller development.
Today’s solid-state drive technology is only a first step into the world of high performance
persistent semiconductor storage. A group of the approximately 10 most promising
technologies are collectively referred to as Storage Class Memory (SCM).
Storage Class Memory
SCM promises a massive improvement in performance (IOPS), areal density, cost, and
energy efficiency compared to today’s solid-state drive technology. IBM Research is actively
engaged in these new technologies.
You can obtain details of nanoscale devices at this Web site:
http://www.almaden.ibm.com/st/nanoscale_st/nano_devices/
You can obtain details of Storage Class Memory at this Web site:
http://tinyurl.com/plk7as
You can read a comprehensive and worthwhile overview of the solid-state drive technology in
a subset of the well-known Spring 2009 SNIA Technical Tutorials, which are available on the
SNIA Web site:
http://www.snia.org/education/tutorials/2009/spring/solid
When these technologies become a reality, it will fundamentally change the architecture of
today’s storage infrastructures.
The next topic describes integrating the first releases of this new technology into the SVC.
2.6 Solid-state drives in the SVC
The solid-state drives in the new 2145-CF8 nodes provide a new ultra-high-performance
storage option. They are available in the 2145-CF8 nodes only. Solid-state drives can be
pre-installed in the new nodes or installed as a field hardware upgrade on a per-disk basis at
a later point in time without interrupting service.
Solid-state drives include the following features:
Up to four solid-state drives can be installed on each SVC 2145-CF8 node.
An IBM PCIe SAS HBA is required on each node that contains a solid-state drive.
Each solid-state drive is a 2.5-inch Serial Attached SCSI (SAS) drive.
Each solid-state drive provides up to 140 GB of capacity.
Solid-state drives are hot-pluggable and hot-swappable.
Up to four solid-state drives are supported per node, which will provide up to 560 GB of
usable solid-state drive capacity per node. Always install the same amount of solid-state drive
capacity in both nodes of an I/O Group.
In a cluster running 5.1 code, node pairs with solid-state drives can be mixed with older node
pairs, either with or without local solid-state drives installed.
This scalable architecture enables clients to take advantage of the throughput capabilities of
the solid-state drive. The following performance exists per I/O Group (from solid-state drives
only):
IOPS: 200 K reads, 80 K writes, and 56 K 70/30 mix
MBps: 800 MBps reads and 400 MBps writes
SSDs are local drives in an SVC node and are presented as MDisks to the SVC cluster. They
belong to an SVC internal controller. These controller objects will have the worldwide node
name (WWNN) of the node in question, but they will be reported as standard controller
objects that can be renamed by the user. SVC reserves eight of these controller objects for
the internal SSD controllers.
MDisks based on SSD can be identified by showing their attributes via GUI/CLI. For these
MDisks, the attributes Node ID and Node Name are set. In all other MDisk views, these
attributes are blank.
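For example, showing the details of a hypothetical MDisk reveals whether it is backed by an
internal solid-state drive:
svcinfo lsmdisk mdisk4
If the node ID and node name fields in the output are populated, the MDisk is a solid-state
drive within that SVC node.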
2.6.1 Solid-state drive configuration rules
You must follow the SVC solid-state drive configuration rules for nodes, I/O Groups, and
clusters:
Nodes that contain solid-state drives can coexist in a single SVC cluster with any other
supported nodes.
Do not combine nodes that contain solid-state drives and nodes that do not contain
solid-state drives in a single I/O Group. It is acceptable to temporarily mix node types in an
I/O Group while upgrading SVC node hardware from an older model to the 2145-CF8.
Nodes that contain solid-state drives in a single I/O Group must share the same solid-state
drive capacities.
Quorum functionality is not supported on solid-state drives within SVC nodes.
You must follow the SVC solid-state drive configuration rules for MDisks and MDisk groups:
Each solid-state drive is recognized by the cluster as a single MDisk.
For each node that contains solid-state drives, create a single MDisk group that includes
only the solid-state drives that are installed in that node.
Terminology: An MDG using solid-state drives contained within an SVC node will be
referenced as SVC solid-state drive storage throughout this book. The configuration rules
given in this book apply to SVC solid-state drive storage. Do not confuse this term with
solid-state drive storage that is contained in SAN-attached storage controllers, such as the
IBM DS8000 or DS5000.
When you add a new solid-state drive to an MDisk group (move it from unmanaged to
managed mode), the solid-state drive is automatically formatted and set to a block size of 512
bytes.
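A sketch of this rule, assuming that the solid-state drives in one node were detected as
mdisk10 and mdisk11 (hypothetical names) and using a 256 MB extent size:
svctask mkmdiskgrp -name SSD_node1 -ext 256 -mdisk mdisk10:mdisk11
Repeat the command with the solid-state drive MDisks of the partner node so that each node
has its own MDG.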
You must follow these configuration rules for VDisks using storage from solid-state drives
within SVC nodes:
VDisks using SVC solid-state drive storage must be created in the I/O Group where the
solid-state drives physically reside.
VDisks using SVC solid-state drive storage must be mirrored to another MDG to provide
fault tolerance. There are two supported mirroring configurations:
– For the highest performance, the two VDisk copies must be created in the two MDGs
that correspond to the SVC solid-state drive storage in two nodes in the same I/O
Group. The recommended solid-state drive configuration for highest performance is
shown in Figure 2-19 on page 54.
– For the best utilization of the solid-state drive capacity, the primary VDisk copy must be
placed on SVC solid-state drive storage and the secondary copy can be placed on Tier
1 storage, such as an IBM DS8000. Under certain failure scenarios, the performance
of the VDisk will degrade to the performance of the non-solid-state drive storage. All
read I/Os are sent to the primary copy of a mirrored VDisk; therefore, reads will
experience solid-state drive performance. Write I/Os are mirrored to both locations, so
performance will match the speed of the slowest copy. The recommended solid-state
drive configuration for the best solid-state drive capacity utilization is shown in
Figure 2-20 on page 55.
To balance the read workload, evenly split the primary and secondary VDisk copies on
each node that contains solid-state drives.
The preferred node of the VDisk must be the same node that contains the solid-state
drives that are used by the primary VDisk copy.
Important: For VDisks that are provisioned out of SVC solid-state drive storage, VDisk
Mirroring is mandatory to maintain access to the data that is stored on solid-state drives if
one of the nodes in the I/O Group is being serviced or fails.
Remember that VDisks that are based on SVC solid-state drive storage must always be
presented by the I/O Group and, during normal operation, by the node to which the solid-state
drive belongs. These rules are designed to direct all host I/O to the node containing the
relevant solid-state drives.
Existing VDisks can be migrated while online to SVC solid-state drive storage. It might be
necessary to move the VDisk into the correct I/O Group first, which requires quiescing I/O to
this VDisk during the move.
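As an illustration of the high-performance configuration, a mirrored VDisk with one copy in
each node's solid-state drive MDG of I/O Group 0 can be created in a single step (the MDG
and VDisk names are hypothetical):
svctask mkvdisk -iogrp 0 -mdiskgrp SSD_node1:SSD_node2 -copies 2 -size 100 -unit gb -name vdisk_ssd1
Alternatively, a second copy can be added to an existing VDisk with the svctask
addvdiskcopy command.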
Figure 2-19 on page 54 shows the recommended solid-state drive configuration for the
highest performance.
Figure 2-19 Solid-state drive configuration for highest performance
For a read-intensive application, mirrored VDisks can keep their secondary copy on a
SAN-based MDG, such as an IBM DS8000 providing Tier 1 storage resources to an SVC
cluster.
Because all read I/Os are sent to the primary copy (which is set as the solid-state drive),
reasonable performance occurs as long as the Tier 1 storage can sustain the write I/O rate.
Performance will decrease if the primary copy fails. Ensure that the node on which the
primary VDisk copy resides is also the preferred node for the VDisk. Figure 2-20 on page 55
shows the recommended solid-state drive configuration for the best capacity utilization.
Figure 2-20 Recommended solid-state drive configuration for best solid-state drive capacity utilization
Remember these considerations when using SVC solid-state drive storage:
I/O requests to solid-state drives that are in other nodes are automatically forwarded.
However, this forwarding introduces additional delays. Try to avoid these configurations by
following the configuration rules.
Be careful migrating image mode VDisks to SVC solid-state drive storage or deleting a
copy of a mirrored VDisk based on SVC solid-state drive storage. In all of the scenarios
where your data is stored in one single solid-state drive-based MDG, your data is not
protected against node or disk failures any longer.
If you delete or replace nodes containing local solid-state drives from a cluster, remember
that the data stored on their solid-state drives might have to be decommissioned.
If you shut down a node that contains SVC solid-state drive storage containing VDisks
without mirrors on another node or storage system, you will lose access to any VDisks that
are associated with that SVC solid-state drive storage. A force option is provided to
prevent an unintended loss of access.
SVC 5.1 provides the functionality to upgrade the solid-state drive’s firmware and pre-GA
code.
For details, see IBM System Storage SAN Volume Controller Software Installation and
Configuration Guide, SC23-6628.
2.6.2 SVC 5.1 supported hardware list, device driver, and firmware levels
With the SVC 5.1 release, as in every release, IBM offers functional enhancements and new
hardware that can be integrated into existing or new SVC clusters and also interoperability
enhancements or new support for servers, SAN switches, and disk subsystems. See the most
current information at this Web site:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277
2.6.3 SVC 4.3.1 features
Before we introduce the new features of SVC 5.1, we review the features that were added
with Release 4.3.1:
New node type: 2145-8A4: The Entry Edition hardware comes with identical functionality
to the 2145-8G4 nodes: 8 GB memory and four 4 Gbps FC interfaces. The 2145-8A4
nodes provide approximately 60% of the performance of the 2145-8G4 nodes. The
2145-8A4 is an ideal choice for entry-level solutions with reduced performance
requirements, but without any functional restrictions. It uses physical disk-based licensing.
Embedded CIMOM
The CIMOM, and the associated SVC CIM Agent, is the software component that provides
the industry standard CIM protocol as a management interface to SVC. Up to SVC 4.3.0,
the CIMOM ran on the SVC Master Console, which was replaced in SVC 4.2.0 by the
System Storage Productivity Center-based management console. The System Storage
Productivity Center is an integrated package of hardware and software that provides all of
the management software (SVC CIMOM and SVC GUI) that is required to manage the
SVC, as well as components for managing other storage systems.
Clients can continue to use either the Master Console or IBM System Storage Productivity
Center to manage SVC 4.3.1. In addition, the software components required to manage
the SVC (SVC CIMOM and SVC GUI) are provided by IBM in software form, allowing
clients that have a suitable hardware platform to build their own Master Console.
Note: With SVC 5.1, the usage of the embedded CIMOM is mandatory. We therefore
recommend, when upgrading, that you switch the existing configurations from the
Master Console/IBM System Storage Productivity Center-based CIMOM to the
embedded CIMOM (remember to update the Tivoli Productivity Center configuration if it is
in use). Then, upgrade the Master Console/IBM System Storage Productivity Center,
and finally, upgrade the SVC cluster.
Windows Server 2008 support for the SVC GUI and Master Console
IBM System Storage Productivity Center 1.3 support
NTP synchronization
The SVC cluster time operates in one of two exclusive modes:
– Default mode in which the cluster uses the configuration node’s system clock
– NTP mode in which the cluster uses an NTP time server as its time source and adjusts
the configuration node’s system clock according to time values obtained from the NTP
server. When operating in NTP mode, the SVC cluster will log an error if an NTP server
is unavailable. A configuration sketch follows this list.
Performance enhancement for overlapped Global Mirror writes
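Referring back to the NTP mode described above, the following sketch shows how it might be enabled from the CLI. The server address is hypothetical, and we assume that the svctask chcluster command accepts the -ntpip parameter at this code level; consult the Command-Line Interface User’s Guide for the exact syntax:
svctask chcluster -ntpip 10.11.12.5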
2.6.4 New with SVC 5.1
We have already described most of the new features that are available with SVC Release 5.1.
This list summarizes the new features:
New hardware nodes (CF8)
SVC 5.1 offers a new SVC engine that is based on the IBM System x3550 M2 server with
an Intel Core i7 2.4 GHz quad-core processor. It provides 24 GB of cache (with future
growth possibilities) and four 8 Gbps FC ports.
It provides support for solid-state drives (up to four per SVC node) enabling scale-out high
performance solid-state drive support with SVC. The new nodes can be intermixed in pairs
with other engines in SVC clusters. We describe the details in 2.4, “SVC hardware
overview” on page 46.
64-bit kernel in Model 8F2 and later
The SVC software kernel has been upgraded to take advantage of the 64-bit hardware on
SVC nodes. Model 4F2 is not supported with SVC 5.1 software, but it is supported with
SVC 4.3.x software. The 2145-8A4 is an effective replacement for the 4F2, and it doubles
the performance of the 4F2.
Going to 64-bit mode will improve performance capability. It allows for a cache increase
(24 GB) in the 2145-CF8 and will be used in future SVC releases for cache increases and
other expansion options.
Solid-state disk support
Optional solid-state drives in SVC engines provide a new ultra-high-performance storage
option. Up to four solid-state drives per node (140 GB each, larger in the future) can be
added to a node. This capability provides up to 540 GB of usable solid-state drive capacity
per I/O Group, or more than 2 TB in an 8-node SVC cluster. The SVC’s scalable
architecture enables clients to take advantage of the throughput capabilities of the
solid-state drive. The solid-state drives are fully integrated into the SVC architecture.
VDisks can be migrated to and from solid-state drive VDisks without application disruption.
FlashCopy can be used for backup or to copy data to solid-state drive VDisks.
We describe details in 2.5, “Solid-state drives” on page 49.
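As a hedged sketch of such a migration, assuming an MDG named SSD_MDG that is built from solid-state drive MDisks and a VDisk with ID 5, the svctask migratevdisk command moves the VDisk without application disruption:
svctask migratevdisk -vdisk 5 -mdiskgrp SSD_MDG -threads 4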
iSCSI support
SVC 5.1 provides native attachment to SVC for host systems using the iSCSI protocol.
This iSCSI support is a software feature. It will be supported on older SVC nodes that
support SVC 5.1. iSCSI is not used for storage attachment, for SVC cluster-to-cluster
communication, or for communication between the SVC engines in a cluster. These
functions will still be performed via FC.
We describe the details in 2.2.10, “iSCSI overview” on page 26.
Multiple relationships for synchronous data mirroring (Metro Mirror)
Multiple cluster mirroring enables Metro Mirror (MM) and Global Mirror (GM) relationships
to exist between a maximum of four SVC clusters. Remember that a VDisk can be in only
one MM/GM relationship.
The creation of up to 8,192 Metro Mirror and Global Mirror relationships is supported.
Each relationship is individually controllable (create/delete and start/stop).
We describe the details in “Synchronous/Asynchronous remote copy” on page 31.
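For illustration only (the cluster, VDisk, and relationship names are hypothetical), the following commands sketch the creation of a partnership and one Metro Mirror relationship; the mkpartnership command must also be run on the remote cluster to complete the partnership:
svctask mkpartnership -bandwidth 100 ITSO_SVC2
svctask mkrcrelationship -master MM_SRC_VD1 -aux MM_TGT_VD1 -cluster ITSO_SVC2 -name MMREL1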
Enhancements to FlashCopy and support for reverse FlashCopy
SVC 5.1 enables FlashCopy targets to become restore points for the source without
breaking the FlashCopy relationship and without having to wait for the original copy
operation to complete. Multiple targets and thus multiple rollback points are supported.
We describe the details in 2.2.16, “FlashCopy” on page 33.
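As a sketch of this restore capability (the mapping and VDisk names are hypothetical, and we assume the -restore parameter of svctask startfcmap at this code level), a target can be copied back to the original source by defining a second mapping in the reverse direction and starting it with the restore option:
svctask mkfcmap -source FC_TGT_VD1 -target FC_SRC_VD1 -name REVMAP1
svctask startfcmap -prep -restore REVMAP1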
Zero detection
Zero detection provides the means to reclaim unused allocated disk space (zeros) when
converting a fully allocated VDisk to a Space-Efficient VDisk using VDisk Mirroring. To
migrate from a fully allocated to a Space-Efficient VDisk, add the target space-efficient
copy, wait for synchronization to complete, and then remove the source fully allocated
copy.
We describe the details in 2.2.7, “Mirrored VDisk” on page 21.
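A minimal sketch of this procedure, assuming a fully allocated VDisk named VD1 whose fully allocated copy is copy 0 and a target MDG named MDG2, follows. Wait for synchronization to complete (which you can monitor with svcinfo lsvdisksyncprogress) before removing the original copy:
svctask addvdiskcopy -mdiskgrp MDG2 -rsize 2% -autoexpand -grainsize 32 VD1
svctask rmvdiskcopy -copy 0 VD1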
User authentication changes
SVC 5.1 will support remote authentication and SSO by using an external service running
on the IBM System Storage Productivity Center. The external service will be the Tivoli
Embedded Security Services installed on the IBM System Storage Productivity Center.
Current local authentication methods will still be supported.
We describe the details in 2.3.5, “User authentication” on page 40.
Reliability, availability, and serviceability (RAS) enhancements
In addition to the existing SVC e-mail and SNMP trap facilities, SVC 5.1 adds syslog error
event logging for those clients that are already using syslog in their configurations. This
feature enables optional transmission over a syslog interface to a remote syslog daemon
when parsing the Error Event Log. The format and content of messages sent to a syslog
server are identical to the format and content of messages that are transmitted in a SNMP
trap message.
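As an illustration (the address and name are hypothetical, and we assume the svctask mksyslogserver command introduced at this level), a remote syslog daemon might be defined as follows:
svctask mksyslogserver -ip 10.11.12.99 -name syslogsrv1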
2.7 Maximum supported configurations
For a list of the maximum supported configurations, visit the SVC support site at this Web
site:
http://www.ibm.com/storage/support/2145
Several limits have been raised with SVC 5.1, but not all of them. The following list gives an
overview of the most important limits. For details, always consult the SVC support site:
iSCSI support
All host iSCSI names are converted to an internally generated WWPN (one per iSCSI
name per I/O Group). Each iSCSI name in an I/O Group consumes one WWPN that
otherwise is available for a “real” FC WWPN.
So, the limits for ports per I/O Group/cluster/host object remain the same, but these limits
are now shared between FC WWPNs and iSCSI names.
The number of cluster partnerships has been lifted from one to a maximum of three
partnerships, which means that a single SVC cluster can have partnerships with up to
three other clusters at the same time.
Remote Copy (RC):
– The number of RC relationships has increased from 1,024 to 8,192. Remember that a
single VDisk at a single point of time can be a member of exactly one RC relationship.
– The number of RC relationships per RC consistency group has also increased to
8,192.
VDisk
A VDisk can contain a maximum of 2^17 (or 131,072) extents. With an extent size of 2 GB,
the maximum VDisk size is 256 TB.
2.8 Useful SVC links
The SVC Support Page is at this Web site:
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
SVC online documentation is at this Web site:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
You can see the IBM Redbooks publications about SVC at this Web site:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
2.9 Commonly encountered terms
Channel extender
A channel extender is a device for long distance communication connecting other SAN fabric
components. Generally, channel extenders can involve protocol conversion to asynchronous
transfer mode (ATM), Internet Protocol (IP), or another long distance communication protocol.
Cluster
A cluster is a group of 2145 nodes that presents a single configuration and service interface
to the user.
Consistency group
A consistency group is a group of VDisks that has copy relationships that need to be
managed as a single entity.
Copied
Copied is a FlashCopy state that indicates that a copy has been triggered after the copy
relationship was created. The copy process is complete, and the target disk has no further
dependence on the source disk. The time of the last trigger event is normally displayed with
this status.
Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide
configuration and service functions over the network interface. This node is termed the
configuration node. This configuration node manages a cache of the configuration
information that describes the cluster configuration and provides a focal point for configuration
commands. If the configuration node fails, another node in the cluster will assume the role.
Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN
provides all of the connectivity of the redundant SAN, but without the 100% redundancy. An
SVC node is typically connected to a redundant SAN made out of two counterpart SANs. A
counterpart SAN is often called a SAN fabric.
Error code
An error code is a value used to identify an error condition to a user. This value might map to
one or more error IDs or to values that are presented on the service panel. This value is used
to report error conditions to IBM and to provide an entry point into the service guide.
Error ID
An error ID is a value that is used to identify a unique error condition detected by the 2145
cluster. An error ID is used internally in the cluster to identify the error.
Excluded
Excluded is a status condition that describes an MDisk that the 2145 cluster has decided is
no longer sufficiently reliable to be managed by the cluster. The user must issue a command
to include the MDisk in the cluster-managed storage.
Extent
A fixed size unit of data that is used to manage the mapping of data between MDisks and
VDisks.
FC port logins
FC port logins is the number of hosts that can see any one SVC node port. Certain disk
subsystems, such as the IBM DS8000, recommend limiting the number of hosts that use
each port, to prevent excessive queuing at that port. Clearly, if the port fails or the path to that
port fails, the host might fail over to another port and the fan-in criteria might be exceeded in
this degraded mode.
Front end and back end
The SVC takes MDisks and presents these MDisks to application servers (hosts). The
MDisks are looked after by the “back-end” application of the SVC. The VDisks presented to
hosts are looked after by the “front-end” application in the SVC.
Field replaceable units
Field replaceable units (FRUs) are individual parts, which are held as spares by the service
organization.
Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KB or
256 KB) in the SVC. It is also the unit by which the real size of a Space-Efficient VDisk is
extended (32, 64, 128, or 256 KB).
Host bus adapter
A host bus adapter (HBA) is an interface card that connects between a host bus, such as a
Peripheral Component Interconnect (PCI), and the SAN.
Host ID
A numeric identifier assigned to a group of host FC ports or iSCSI host names for the
purposes of LUN mapping. For each host ID, there is a separate mapping of SCSI IDs to
VDisks. The intent is to have a one-to-one relationship between hosts and host IDs, although
this relationship cannot be policed.
IQN (iSCSI qualified name)
IQNs are special names that refer to both iSCSI initiators and targets. IQN is one of the three
name formats that iSCSI provides. The format is iqn.yyyy-mm.{reversed domain name}; for
example, the default for an SVC node is: iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
iSNS (Internet storage name service)
The Internet storage name service (iSNS) protocol allows automated discovery,
management, and configuration of iSCSI and FC devices. It has been defined in RFC 4171.
Image mode
Image mode is a configuration mode similar to the router mode but with the addition of cache
and copy functions. SCSI commands are not forwarded directly to the MDisk.
I/O Group
An I/O Group is a collection of VDisk and node relationships, that is, an SVC node pair that
presents a common interface to host systems. Each SVC node is associated with exactly one
I/O Group. The two nodes in the I/O Group provide access to the VDisks in the I/O Group.
ISL hop
An inter-switch link (ISL) is a connection between two switches and is counted as an “ISL
hop.” The number of “hops” is always counted on the shortest route between two N-ports
(device connections). In an SVC environment, the number of ISL hops is counted on the
shortest route between the pair of nodes farthest apart. It measures distance only in terms of
ISLs in the fabric.
Local fabric
Because the SVC supports remote copy, there might be significant distances between the
components in the local cluster and those components in the remote cluster. The local fabric
is composed of those SAN components (switches, cables, and so on) that connect the
components (nodes, hosts, and switches) of the local cluster together.
Local and remote fabric interconnect
The local fabric interconnect and the remote fabric interconnect are the SAN components that
are used to connect the local and remote fabrics. They can be single-mode optical fibers that
are driven by high-power gigabit interface converters (GBICs) or SFPs, or more sophisticated
components, such as channel extenders or special SFP modules that are used to extend the
distance between SAN components.
LU and LUN
LUN is formally defined by the SCSI standards as a logical unit number. It is used as an
abbreviation for an entity that exhibits disk-like behavior, for example, a VDisk or an MDisk.
Managed disk (MDisk)
An MDisk is a SCSI disk that is presented by a RAID controller and that is managed by the
cluster. The MDisk is not visible to host systems on the SAN.
Managed Disk Group (MDiskgrp or MDG)
A collection of MDisks that jointly contains all of the data for a specified set of VDisks.
Managed space mode
The managed space mode is a configuration mode that is similar to image mode but with the
addition of space management functions.
Master Console (MC)
The Master Console is the platform on which the software used to manage the SVC runs.
With Version 4.3, it is being replaced by the System Storage Productivity Center. However,
V4.3 GUI console code is supported on existing Master Consoles.
Node
A node is a single processing unit, which provides virtualization, cache, and copy services for
the SAN. SVC nodes are deployed in pairs called I/O Groups. One node in the cluster is
designated the configuration node.
Oversubscription
Oversubscription is the ratio of the sum of the traffic on the initiator N-port connection or
connections to the traffic on the most heavily loaded ISL, where more than one connection is
used between these switches. Oversubscription assumes a symmetrical network, and a
specific workload applied evenly from all initiators and directed evenly to all targets. A
symmetrical network means that all the initiators are connected at the same level, and all the
controllers are connected at the same level.
Prepare
Prepare is a configuration command that is used to cause cached data to be flushed in
preparation for a copy trigger operation.
RAS
RAS stands for reliability, availability, and serviceability.
RAID
RAID stands for a redundant array of independent disks.
Redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF), so
no matter what component fails, data traffic will continue. Connectivity between the devices
within the SAN is maintained, although possibly with degraded performance, when an error
has occurred. A redundant SAN design is normally achieved by splitting the SAN into two
independent counterpart SANs (two SAN fabrics), so that if one counterpart SAN is
destroyed, the other counterpart SAN keeps functioning.
Remote fabric
Because the SVC supports remote copy, there might be significant distances between the
components in the local cluster and those components in the remote cluster. The remote
fabric is composed of those SAN components (switches, cables, and so on) that connect the
components (nodes, hosts, and switches) of the remote cluster together.
SAN
SAN stands for storage area network.
SAN Volume Controller
The IBM System Storage SAN Volume Controller is a SAN-based appliance designed for
attachment to a variety of host computer systems, which carries out block-level virtualization
of disk storage.
SCSI
SCSI stands for Small Computer Systems Interface.
Service Location Protocol
The Service Location Protocol (SLP) is a service discovery protocol that allows computers
and other devices to find services in a local area network without prior configuration. It has
been defined in RFC 2608.
IBM System Storage Productivity Center
IBM System Storage Productivity Center replaces the Master Console for new installations of
SAN Volume Controller Version 4.3.0. For IBM System Storage Productivity Center planning,
installation, and configuration information, see the following Web site:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp
Virtual disk (VDisk)
A virtual disk (VDisk) is an SVC device that appears to host systems attached to the SAN as
a SCSI disk. Each VDisk is associated with exactly one I/O Group.
Chapter 3. Planning and configuration
In this chapter, we describe the steps that are required when planning the installation of an
IBM System Storage SAN Volume Controller (SVC) in your storage network. We look at the
implications for your storage network and discuss performance considerations.
3.1 General planning rules
To achieve the most benefit from the SVC, pre-installation planning must include several
important steps. These steps ensure that SVC provides the best possible performance,
reliability, and ease of management for your application needs. Proper configuration also
helps minimize downtime by avoiding changes to the SVC and the storage area network
(SAN) environment to meet future growth needs.
Tip: The IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551,
contains comprehensive information that goes into greater depth regarding the topics that
we discuss here.
We also go into much more depth about these topics in SAN Volume Controller Best
Practices and Performance Guidelines, SG24-7521, which is available at this Web site:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Planning the SVC requires that you follow these steps:
1. Collect and document the number of hosts (application servers) to attach to the SVC, the
traffic profile activity (read or write, sequential or random), and the performance
requirements (I/O per second (IOPS)).
2. Collect and document the storage requirements and capacities:
– The total back-end storage already present in the environment to be provisioned on the
SVC
– The total back-end new storage to be provisioned on the SVC
– The required virtual storage capacity that is used as a fully managed virtual disk
(VDisk) and used as a Space-Efficient VDisk
– The required storage capacity for local mirror copy (VDisk Mirroring)
– The required storage capacity for point-in-time copy (FlashCopy)
– The required storage capacity for remote copy (Metro and Global Mirror)
– Per host: Storage capacity, the host logical unit number (LUN) quantity, and sizes
3. Define the local and remote SAN fabrics and clusters, if a remote copy or a secondary site
is needed.
4. Define the number of clusters and the number of pairs of nodes (between 1 and 4) for
each cluster. Each pair of nodes (an I/O Group) is the container for the VDisks. The
number of necessary I/O Groups depends on the overall performance requirements.
5. Design the SAN according to the requirement for high availability and best performance.
Consider the total number of ports and the bandwidth needed between the host and the
SVC, the SVC and the disk subsystem, between the SVC nodes, and for the inter-switch
link (ISL) between the local and remote fabric.
6. Design the iSCSI network according to the requirements for high availability and best
performance. Consider the total number of ports and the bandwidth needed between the
host and the SVC.
7. Determine the IP addresses for the SVC service interface and for the IBM System
Storage Productivity Center (SVC Console).
8. Determine the IP addresses for the SVC cluster and for the host that is connected via
iSCSI connections.
9. Define a naming convention for the SVC nodes, the host, and the storage subsystem.
66
Implementing the IBM System Storage SAN Volume Controller V5.1
10.Define the managed disks (MDisks) in the disk subsystem.
11.Define the Managed Disk Groups (MDGs). The MDGs depend on the disk subsystem in
place and the data migration needs.
12.Plan the logical configuration of the VDisks between the I/O Groups and the MDGs in such
a way as to optimize the I/O load between the hosts and the SVC. You can set up an equal
repartition of all of the VDisks between the nodes or a repartition that takes into account
the expected load from the hosts.
13.Plan for the physical location of the equipment in the rack.
SVC planning can be categorized into two types:
Physical planning
Logical planning
3.2 Physical planning
There are several key factors to consider when performing the physical planning of an SVC
installation. The physical site must have the following characteristics:
Power, cooling, and location requirements are present for the SVC and the uninterruptible
power supply units.
SVC nodes and their uninterruptible power supply units must be in the same rack.
We suggest that you place SVC nodes belonging to the same I/O Group in separate racks.
Plan for two separate power sources if you have ordered a redundant AC power switch
(available as an optional feature).
An SVC node is one Electronic Industries Association (EIA) unit high.
Each uninterruptible power supply unit that comes with SVC V5.1 is one EIA unit high. The
uninterruptible power supply unit shipped with the earlier version of the SVC is two EIA
units high.
The IBM System Storage Productivity Center (SVC Console) is two EIA units high: one
unit for the server and one unit for the keyboard and monitor.
Other hardware devices can be in the same SVC rack, such as IBM System Storage
DS4000®, IBM System Storage DS6000, SAN switches, Ethernet switch, and other
devices.
Consider the maximum power rating of the rack; it must not be exceeded.
In Figure 3-1, we show two 2145-CF8 SVC nodes.
Figure 3-1 2145-CF8 SVC nodes
3.2.1 Preparing your uninterruptible power supply unit environment
Ensure that your physical site meets the installation requirements for the uninterruptible
power supply unit.
Uninterruptible power supply unit: The 2145 UPS-1U is a Powerware 5115.
2145 UPS-1U
The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high and is
shipped with, and can operate only with, the following node types:
SAN Volume Controller 2145-CF8
SAN Volume Controller 2145-8A4
SAN Volume Controller 2145-8G4
SAN Volume Controller 2145-8F2
SAN Volume Controller 2145-8F4
It was also shipped and will operate with the SVC 2145-4F2.
When configuring the 2145 UPS-1U, the voltage that is supplied to it must be 200 – 240 V,
single phase.
Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external
protection.
3.2.2 Physical rules
The SVC must be installed in pairs to provide high availability, and each node in the cluster
must be connected to a separate uninterruptible power supply unit. Figure 3-2 shows an
example of power connections for the 2145-8G4.
Figure 3-2 Node uninterruptible power supply unit setup
Be aware of these considerations:
Each SVC node of an I/O Group must be connected to a separate uninterruptible power
supply unit.
Each uninterruptible power supply unit pair that supports a pair of nodes must be
connected to a separate power domain (if possible) to reduce the chances of input power
loss.
The uninterruptible power supply units, for safety reasons, must be installed in the lowest
positions in the rack. If necessary, move lighter units toward the top of the rack to make
way for the uninterruptible power supply units.
The power and serial connection from a node must be connected to the same
uninterruptible power supply unit; otherwise, the node will not start.
The 2145-CF8, 2145-8A4, 2145-8G4, 2145-8F2, and 2145-8F4 hardware models must be
connected to a 5115 uninterruptible power supply unit. They will not start with a 5125
uninterruptible power supply unit.
Important: Do not share the SVC uninterruptible power supply unit with any other devices.
Figure 3-3 on page 70 shows ports for the 2145-CF8.
Figure 3-3 Ports for the 2145-CF8
Figure 3-4 on page 71 shows a power cabling example for the 2145-CF8.
Figure 3-4 2145-CF8 power cabling
There are guidelines to follow for Fibre Channel (FC) cable connections. Occasionally, the
introduction of a new SVC hardware model brings internal changes; one example is the
worldwide port name (WWPN) to physical port mapping. The 2145-8G4 and 2145-CF8 have
the same mapping.
Figure 3-5 on page 72 shows the WWPN mapping.
Figure 3-5 WWPN mapping
Figure 3-6 on page 73 shows a sample layout within a separate rack.
Figure 3-6 Sample rack layout
We suggest that you place the racks in separate rooms, if possible, in order to gain protection
against critical events (fire, water, power loss, and so on) that might affect one room only.
Remember the maximum distance that is supported between the nodes in one I/O Group
(100 m (328 ft. 1 in.)). You can extend this distance by submitting a formal SCORE request
to increase the limit, following the rules that will be specified in any SCORE approval.
3.2.3 Cable connections
Create a cable connection table or documentation following your environment’s
documentation procedure to track all of the connections that are required for the setup:
Nodes
Uninterruptible power supply unit
Ethernet
iSCSI connections
FC ports
IBM System Storage Productivity Center (SVC Console)
3.3 Logical planning
For logical planning, we intend to cover these topics:
Management IP addressing plan
SAN zoning and SAN connections
iSCSI IP addressing plan
Back-end storage subsystem configuration
SVC cluster configuration
MDG configuration
VDisk configuration
Host mapping (LUN masking)
Advanced copy functions
SAN start-up support
Data migration from non-virtualized storage subsystems
SVC configuration backup procedure
3.3.1 Management IP addressing plan
For management, remember these rules:
In addition to an FC connection, each node has an Ethernet connection for configuration
and error reporting.
Each SVC cluster needs at least two IP addresses.
The first IP address is used for management, and the second IP address is used for
service. The service IP address will become usable only when the SVC cluster is in
service mode, and remember that service mode is a disruptive operation. Both IP
addresses must be in the same IP subnet.
Example 3-1 Management IP address sample
management IP add. 10.11.12.120
service IP add. 10.11.12.121
Each node in an SVC cluster needs to have at least one Ethernet connection.
IBM supports multiple console access, using the traditional SVC hardware management
console (HMC) or the IBM System Storage Productivity Center console. Multiple Master
Consoles or IBM System Storage Productivity Center consoles can access a single
cluster, but when multiple Master Consoles access one cluster, you cannot concurrently
perform configuration and service tasks.
The Master Console can be supplied on either pre-installed hardware, or just software
supplied to and subsequently installed by the user.
With SVC 5.1, the cluster configuration node can now be accessed on both Ethernet ports,
and this capability means that the cluster can have two IPv4 addresses and two IPv6
addresses that are used for configuration purposes.
Figure 3-7 on page 75 shows the IP configuration possibilities.
Figure 3-7 IP configuration possibilities
The cluster can therefore be managed by IBM System Storage Productivity Centers on
separate networks, which provides redundancy in the event of a failure of one of these
networks.
Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each
Ethernet port on every node; these IP addresses are independent of the cluster configuration
IP addresses. The command-line interface (CLI) commands for managing the cluster IP
addresses have therefore been moved from svctask chcluster to svctask chclusterip in
SVC 5.1, and new commands have been introduced to manage the iSCSI IP addresses.
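For example (the addresses are hypothetical, and the parameter names are assumptions based on this code level), the cluster IP address on Ethernet port 1 might be changed as follows:
svctask chclusterip -clusterip 10.11.12.120 -gw 10.11.12.1 -mask 255.255.255.0 -port 1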
When connecting to the SVC with Secure Shell (SSH), choose one of the available IP
addresses to connect to. There is no automatic failover capability, so if one network is down,
use the other IP address.
Clients might be able to use intelligence in domain name servers (DNS) to provide partial
failover.
When using the GUI, clients can add the cluster to the SVC Console multiple times (one time
per IP address). Failover is achieved by using the functional IP address when launching the
SVC Console interface.
3.3.2 SAN zoning and SAN connections
SAN storage systems using the SVC can be configured with two, or up to eight, SVC nodes,
arranged in an SVC cluster. These SVC nodes are attached to the SAN fabric, along with disk
subsystems and host systems. The SAN fabric is zoned to allow the SVCs to “see” each
other’s nodes and the disk subsystems, and for the hosts to “see” the SVCs. The hosts are
not able to directly “see” or operate LUNs on the disk subsystems that are assigned to the
SVC cluster. The SVC nodes within an SVC cluster must be able to see each other and all of
the storage that is assigned to the SVC cluster.
The zoning capabilities of the SAN switch are used to create these distinct zones. SVC 5.1
supports 2 Gbps, 4 Gbps, or 8 Gbps FC fabric, which depends on the hardware platform and
on the switch where the SVC is connected.
In an environment where you have a fabric with multiple-speed switches, we recommend
connecting the SVC and the disk subsystem to the switch operating at the highest speed.
All SVC nodes in the SVC cluster are connected to the same SANs, and they present VDisks
to the hosts. These VDisks are created from MDGs that are composed of MDisks presented
by the disk subsystems. There must be three distinct zones in the fabric:
SVC cluster zone: Create one zone per fabric with all of the SVC ports cabled to this fabric
to allow SVC intracluster node communication.
Host zones: Create an SVC host zone for each server that receives storage from the SVC
cluster.
Storage zone: Create one SVC storage zone for each storage subsystem that is
virtualized by the SVC.
Zoning considerations for Metro Mirror and Global Mirror
Ensure that you are familiar with the constraints for zoning a switch to support the Metro
Mirror and Global Mirror feature.
SAN configurations that use intracluster Metro Mirror and Global Mirror relationships do not
require additional switch zones.
SAN configurations that use intercluster Metro Mirror and Global Mirror relationships require
the following additional switch zoning considerations:
A cluster can be configured so that it can detect all of the nodes in all of the remote
clusters. Alternatively, a cluster can be configured so that it detects only a subset of the
nodes in the remote clusters.
Use of inter-switch link (ISL) trunking in a switched fabric.
Use of redundant fabrics.
For intercluster Metro Mirror and Global Mirror relationships, you must perform the following
steps to create the additional required zones:
1. Configure your SAN so that FC traffic can be passed between the two clusters. To
configure the SAN this way, you can connect the clusters to the same SAN, merge the
SANs, or use routing technologies.
2. (Optional) Configure zoning to allow all of the nodes in the local fabric to communicate
with all of the nodes in the remote fabric.
McData Eclipse routers: If you use McData Eclipse routers, Model 1620, only 64 port
pairs are supported, regardless of the number of iFCP links that is used.
3. (Optional) As an alternative to Step 2, choose a subset of nodes in the local cluster to be
zoned to the nodes in the remote cluster. Minimally, you must ensure that one whole I/O
Group in the local cluster has connectivity to one whole I/O Group in the remote cluster.
I/O between the nodes in each cluster is then routed to find a path that is permitted by the
configured zoning.
Reducing the number of nodes that are zoned together can reduce the complexity of the
intercluster zoning and might reduce the cost of the routing hardware that is required for
large installations. Reducing the number of nodes also means that I/O must make extra
hops between the nodes in the system, which increases the load on the intermediate
nodes and can increase the performance impact, in particular, for Metro Mirror.
4. Optionally, modify the zoning so that the hosts that are visible to the local cluster can
recognize the remote cluster. This capability allows a host to examine data in both the
local and remote clusters.
5. Verify that cluster A cannot recognize any of the back-end storage that is owned by cluster
B. A cluster cannot access logical units (LUs) that a host or another cluster can also
access.
Figure 3-8 shows the SVC zoning topology.
Figure 3-8 SVC zoning topology
Figure 3-9 on page 78 shows an example of SVC, host, and storage subsystem connections.
Figure 3-9 Example of SVC, host, and storage subsystem connections
You must also apply the following guidelines:
Hosts are not permitted to operate on the disk subsystem LUNs directly if the LUNs are
assigned to the SVC. All data transfer happens through the SVC nodes. Under certain
circumstances, a disk subsystem can present LUNs to both the SVC (as MDisks, which it
then virtualizes to hosts) and to other hosts in the SAN.
Mixed speeds are permitted within the fabric, but not for intracluster communication. You
can use lower speeds to extend the distance.
Uniform SVC port speed for 2145-4F2 and 2145-8F2 nodes: The optical fiber connections
between FC switches and all 2145-4F2 or 2145-8F2 SVC nodes in a cluster must run at
one speed, either 1 Gbps or 2 Gbps. Running the node-to-switch connections of
2145-4F2 or 2145-8F2 nodes at mixed speeds within a single cluster is an unsupported
configuration (and is impossible to configure anyway). This rule does not apply to
2145-8F4, 2145-8G4, 2145-8A4, and 2145-CF8 nodes, because the FC ports on these
nodes auto-negotiate their speed independently of one another and can run at 2 Gbps,
4 Gbps, or 8 Gbps.
Each of the local or remote fabrics must not contain more than three ISL hops within each
fabric. An operation with more ISLs is unsupported. When a local and a remote fabric are
connected together for remote copy purposes, there must only be one ISL hop between
the two SVC clusters. Therefore, certain ISLs can be used in a cascaded switch link
between local and remote clusters, provided that the local and remote cluster internal ISL
counts do not exceed three. This approach gives a maximum of seven ISL hops in an SVC
environment with both local and remote fabrics.
The switch configuration in an SVC fabric must comply with the switch manufacturer’s
configuration rules, which can impose restrictions on the switch configuration. For
example, a switch manufacturer might limit the number of supported switches in a SAN.
Operation outside of the switch manufacturer’s rules is not supported.
The SAN contains only supported switches; operation with other switches is unsupported.
Host bus adapters (HBAs) in dissimilar hosts or dissimilar HBAs in the same host need to
be in separate zones. For example, if you have AIX and Microsoft hosts, they need to be in
separate zones. Here, “dissimilar” means that the hosts are running separate operating
systems or are using separate hardware platforms. Therefore, various levels of the same
operating system are regarded as similar. This requirement is a SAN interoperability issue
rather than an SVC requirement.
We recommend that the host zones contain only one initiator (HBA) each, and as many
SVC node ports as you need, depending on the high availability and performance that you
want to have from your configuration.
Note: In SVC Version 3.1 and later, the command svcinfo lsfabric generates a
report that displays the connectivity between nodes and other controllers and hosts.
This report is particularly helpful in diagnosing SAN problems.
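For example, to produce a delimited connectivity report that is easier to process in scripts, you might run the command as follows (the -delim parameter is common to svcinfo listing commands):
svcinfo lsfabric -delim :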
Zoning examples
Figure 3-10 shows an SVC cluster zoning example.
Figure 3-10 SVC cluster zoning example
Figure 3-11 on page 80 shows a storage subsystem zoning example.
Figure 3-11 Storage subsystem zoning example
Figure 3-12 shows a host zoning example.
Figure 3-12 Host zoning example
3.3.3 iSCSI IP addressing plan
SVC 5.1 supports host access via iSCSI (as an alternative to FC), and the following
considerations apply:
SVC uses the built-in Ethernet ports for iSCSI traffic.
All node types, which can run SVC 5.1, can use the iSCSI feature.
SVC supports the Challenge Handshake Authentication Protocol (CHAP) authentication
methods for iSCSI.
iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This
design reduces the need for multipathing support in the iSCSI host.
iSCSI IP addresses can be configured for one or more nodes.
iSCSI Simple Name Server (iSNS) addresses can be configured in the SVC.
The iSCSI qualified name (IQN) for an SVC node will be:
iqn.1986-03.com.ibm:2145.<cluster_name>.<node_name>. Because the IQN contains the
cluster name and the node name, it is important not to change these names after iSCSI is
deployed.
Each node can be given an iSCSI alias, as an alternative to the IQN.
The IQN of the host to an SVC host object is added in the same way that you add FC
WWPNs.
Host objects can have both WWPNs and IQNs.
Standard iSCSI host connection procedures can be used to discover and configure SVC
as an iSCSI target.
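As a hedged sketch of these steps (the node name, addresses, and IQN are hypothetical), an iSCSI IP address can be assigned to an Ethernet port with svctask cfgportip, and an iSCSI host object can be created with svctask mkhost:
svctask cfgportip -node node1 -ip 10.11.12.130 -mask 255.255.255.0 -gw 10.11.12.1 1
svctask mkhost -name LINUXHOST1 -iscsiname iqn.1994-05.com.redhat:linuxhost1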
Next, we show several ways that SVC 5.1 can be configured.
Figure 3-13 shows the use of IPv4 management and iSCSI addresses in the same subnet.
Figure 3-13 Use of IPv4 addresses
You can set up the equivalent configuration with only IPv6 addresses.
Figure 3-14 shows the use of IPv4 management and iSCSI addresses in two separate
subnets.
Figure 3-14 IPv4 address plan with two subnets
Figure 3-15 shows the use of redundant networks.
Figure 3-15 Redundant networks
Figure 3-16 on page 83 shows the use of a redundant network and a third subnet for
management.
Figure 3-16 Redundant network with third subnet for management
Figure 3-17 shows the use of a redundant network for both iSCSI data and management.
Figure 3-17 Redundant network for iSCSI and management
Be aware of these considerations:
All of the examples are valid using IPv4 and IPv6 addresses.
It is valid to use IPv4 addresses on one port and IPv6 addresses on the other port.
It is valid to have separate subnet configurations for IPv4 and IPv6 addresses.
3.3.4 Back-end storage subsystem configuration
Back-end storage subsystem configuration planning must be applied to all of the storage that
will supply disk space to an SVC cluster. See the following Web site for the currently
supported storage subsystems:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
Apply the following general guidelines for back-end storage subsystem configuration
planning:
In the SAN, disk subsystems that are used by the SVC cluster are always connected to
SAN switches and nothing else.
Other disk subsystem connections out of the SAN are possible.
Multiple connections are allowed from the redundant controllers in the disk subsystem to
improve data bandwidth performance. It is not mandatory to have a connection from each
redundant controller in the disk subsystem to each counterpart SAN, but it is
recommended. Therefore, controller A in the DS4000 can be connected to SAN A only, or
to SAN A and SAN B, and controller B in the DS4000 can be connected to SAN B only, or
to SAN B and SAN A.
Split controller configurations are supported with certain rules and configuration
guidelines. See IBM System Storage SAN Volume Controller Planning Guide, GA32-0551,
for more information.
All SVC nodes in an SVC cluster must be able to see the same set of disk subsystem
ports on each disk subsystem controller. Operation in a mode where two nodes see a
separate set of ports on the same controller becomes degraded. This degradation can
occur if inappropriate zoning was applied to the fabric. It can also occur if inappropriate
LUN masking is used. This guideline has important implications for a disk subsystem,
such as DS3000, DS4000, or DS5000, which imposes exclusivity rules on which HBA
worldwide names (WWNs) a storage partition can be mapped to.
In general, configure disk subsystems as though there is no SVC; however, we recommend
the following specific guidelines:
Disk drives:
– Be careful with large disk drives so that you do not have too few spindles to handle the
load.
– RAID-5 is suggested, but RAID-10 is viable and useful.
Array sizes:
– 8+P or 4+P is recommended for the DS4000 and DS5000 families, if possible.
– Use the DS4000 segment size of 128 KB or larger to help the sequential performance.
– Avoid Serial Advanced Technology Attachment (SATA) disk unless running SVC 4.2.1.x
or later
– Upgrade to EXP810 drawers, if possible.
– Create LUN sizes that are equal to the RAID array/rank if it does not exceed 2 TB.
– Create a minimum of one LUN per FC port on a disk controller zoned with the SVC.
– When adding more disks to a subsystem, consider adding the new MDisks to existing
MDGs versus creating additional small MDGs.
– Use a Perl script to restripe VDisk extents evenly across all MDisks in the MDG.
Go to this Web site, http://www.ibm.com/alphaworks, and search for “svctools”.
Maximum of 64 worldwide node names (WWNNs):
– EMC DMX/SYMM, All HDS, and SUN/HP HDS clones use one WWNN per port; each
WWNN appears as a separate controller to the SVC.
– Upgrade to SVC 4.2.1 or later so that you can map LUNs through up to 16 FC ports,
which results in 16 WWNNs/WWPNs used out of the maximum of 64.
– IBM, EMC Clariion, and HP use one WWNN per subsystem; each WWNN appears as
a single controller with multiple ports/WWPNs, for a maximum of 16 ports/WWPNs per
WWNN using one out of the maximum of 64.
DS8000 using four or eight 4 port HA cards:
– Use port 1 and 3 or 2 and 4 on each card.
– This setup provides 8 or 16 ports for SVC use.
– Use 8 ports minimum up to 40 ranks.
– Use 16 ports, which is the maximum, for 40 or more ranks.
Upgrade to SVC 4.2.1.9 or later to drive more workload to DS8000.
Increased queue depth for DS4000, DS5000, DS6000, DS8000, or EMC DMX
DS4000/DS5000 – EMC Clariion/CX:
– Both systems have the preferred controller architecture, and SVC supports this
configuration.
– Use a minimum of 4 ports, and preferably 8 or more ports up to maximum of 16 ports,
so that more ports equate to more concurrent I/O that is driven by the SVC.
– Support for mapping controller A ports to Fabric A and controller B ports to Fabric B or
cross connecting ports to both fabrics from both controllers. The latter approach is
preferred to avoid AVT/Trespass occurring if a fabric or all paths to a fabric fail.
– Upgrade to SVC 4.3.1 or later for an SVC queue depth change for CX models, because
it drives more I/O per port per MDisk.
DS3400:
– Use a minimum of 4 ports.
– Upgrade to SVC 4.3.x or later for better resiliency if the DS3400 controllers reset.
XIV® requirements and restrictions:
– The SVC cluster must be running Version 4.3.0.1 or later to support the XIV.
– The use of certain XIV functions on LUNs presented to the SVC is not supported.
– You cannot perform snaps, thin provisioning, synchronous replication, or LUN
expansion on XIV MDisks.
– A maximum of 511 LUNs from one XIV system can be mapped to an SVC cluster.
Full 15 module XIV recommendations – 79 TB usable:
– Use two interface host ports from each of the six interface modules.
– Use ports 1 and 3 from each interface module and zone these 12 ports with all SVC
node ports.
– Create 48 LUNs of equal size, each of which is a multiple of 17 GB; each LUN will be
approximately 1,632 GB if you use the entire full frame XIV with the SVC.
– Map LUNs to the SVC as 48 MDisks, and add all of them to the one XIV MDG so that
the SVC will drive the I/O to four MDisks/LUNs for each of the 12 XIV FC ports. This
design provides a good queue depth on the SVC to drive XIV adequately.
Six module XIV recommendations – 27 TB usable:
– Use two interface host ports from each of the two active interface modules.
– Use ports 1 and 3 from interface modules 4 and 5. (Interface module 6 is inactive.)
And, zone these four ports with all SVC node ports.
– Create 16 LUNs of equal size, each of which is a multiple of 17 GB; each LUN will be
approximately 1,632 GB if you use the entire XIV with the SVC.
– Map LUNs to the SVC as 16 MDisks, and add all of them to the one XIV MDG so that
the SVC will drive I/O to four MDisks/LUNs per each of the four XIV FC ports. This
design provides a good queue depth on the SVC to drive XIV adequately.
Nine module XIV recommendations – 43 TB usable:
– Use two interface host ports from each of the four active interface modules.
– Use ports 1 and 3 from interface modules 4, 5, 7, and 8. (Interface modules 6 and 9 are
inactive.) And, zone these eight ports with all of the SVC node ports.
– Create 26 LUNs of equal size, each of which is a multiple of 17 GB; each LUN will be
approximately 1,632 GB if you use the entire XIV with the SVC.
– Map LUNs to the SVC as 26 MDisks, and add all of them to the one XIV MDG, so that
the SVC will drive I/O to three MDisks/LUNs on each of six ports and four
MDisks/LUNs on the other two XIV FC ports. This design provides a good queue depth
on SVC to drive XIV adequately.
Configure XIV host connectivity for the SVC cluster:
– Create one host definition on XIV, and include all SVC node WWPNs.
– You can create clustered host definitions (one per I/O Group), but the preceding
method is easier.
– Map all LUNs to all SVC node WWPNs.
3.3.5 SVC cluster configuration
To ensure high availability in SVC installations, consider the following guidelines when you
design a SAN with the SVC:
The 2145-4F2 and 2145-8F2 SVC nodes contain two HBAs, each of which has two FC
ports. If an HBA fails, this configuration remains valid, and the node operates in degraded
mode. If an HBA is physically removed from an SVC node, the configuration is
unsupported. The 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 models have one HBA
with four ports.
All nodes in a cluster must be in the same LAN segment, because the nodes in the cluster
must be able to assume the same cluster, or service IP, address. Make sure that the
network configuration will allow any of the nodes to use these IP addresses. Note that if
you plan to use the second Ethernet port on each node, it is possible to have two LAN
segments. However, port 1 of every node must be in one LAN segment, and port 2 of
every node must be in the other LAN segment.
To maintain application uptime in the unlikely event of an individual SVC node failing, SVC
nodes are always deployed in pairs (I/O Groups). If a node fails or is removed from the
configuration, the remaining node operates in a degraded mode, but it is still a valid
configuration. The remaining node operates in write-through mode, meaning that the data
is written directly to the disk subsystem (the cache is disabled for the write).
The uninterruptible power supply unit must be in the same rack as the node to which it
provides power, and each uninterruptible power supply unit can only have one node
connected.
The FC SAN connections between the SVC node and the switches are optical fiber. These
connections can run at either 2 Gbps, 4 Gbps, or 8 Gbps, depending on your SVC and
switch hardware. The 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 SVC nodes
auto-negotiate the connection speed with the switch. The 2145-4F2 and 2145-8F2 nodes
are capable of a maximum of 2 Gbps, which is determined by the cluster speed.
The SVC node ports must be connected to the FC fabric only. Direct connections between
the SVC and the host, or the disk subsystem, are unsupported.
Two SVC clusters cannot share the same LUNs in a subsystem; sharing the same LUNs
can result in data loss. If the same MDisk becomes visible on two separate SVC clusters,
this error can cause data corruption.
The two nodes within an I/O Group can be co-located (within the same set of racks) or can
be located in separate racks and separate rooms to deploy a simple business continuity
solution.
If a split node cluster (split I/O Group) solution is implemented, observe the maximum
distance allowed (100 m (328 ft. 1 in.)) between the nodes in an I/O Group. Otherwise,
you will require a SCORE request in order to be supported for longer distances. Ask your
IBM service representative for more detailed information about the SCORE process.
If a split node cluster (split I/O Group) solution is implemented, we recommend using a
business continuity solution for the storage subsystem using the VDisk Mirroring option.
Note the SVC cluster quorum disk location, as shown in Figure 3-18 on page 88, where
the quorum disk is located separately in a third site or room.
The SVC uses three MDisks as quorum disks for the cluster. For redundancy purposes,
we recommend that you locate, if possible, the three MDisks in three separate storage
subsystems.
If a split node cluster (split I/O Group) solution is implemented, two of the three quorum
disks can be co-located in the same room where the SVC nodes are located, but the
active quorum disk (as displayed in the lsquorum output) must be in a separate room.
Figure 3-18 on page 88 shows a schematic split I/O Group solution.
Figure 3-18 Split I/O Group solution
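To verify the quorum disk placement shown in Figure 3-18, list the quorum disks and, if required, reassign one of them to an MDisk in another subsystem or site. This sketch assumes the svctask setquorum command at this code level and a hypothetical candidate MDisk ID of 7:
svcinfo lsquorum
svctask setquorum -quorum 2 7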
3.3.6 Managed Disk Group configuration
The Managed Disk Group (MDG) is at the center of the many-to-many relationship between
the MDisks and the VDisks. It acts as a container into which managed disks contribute
chunks of disk blocks, which are known as extents, and from which VDisks consume these
extents of storage.
MDisks in the SVC are LUNs assigned from the underlying disk subsystems to the SVC and
can be either managed or unmanaged. A managed MDisk is an MDisk that is assigned to an
MDG:
MDGs are collections of MDisks. An MDisk is contained within exactly one MDG.
An SVC supports up to 128 MDGs.
There is no limit to the number of VDisks that can be in an MDG other than the limit per
cluster.
MDGs are collections of VDisks. Under normal circumstances, a VDisk is associated with
exactly one MDG. The exception to this rule is when a VDisk is migrated, or mirrored,
between MDGs.
SVC supports extent sizes of 16, 32, 64, 128, 256, 512, 1,024, and 2,048 MB. The extent size
is a property of the MDG, which is set when the MDG is created. It cannot be changed, and
all MDisks, which are contained in the MDG, have the same extent size, so all VDisks that are
associated with the MDG must also have the same extent size.
Table 3-1 on page 89 shows all of the extent sizes that are available in an SVC.
Table 3-1   Extent size and maximum cluster capacities

Extent size     Maximum cluster capacity
16 MB           64 TB
32 MB           128 TB
64 MB           256 TB
128 MB          512 TB
256 MB          1 PB
512 MB          2 PB
1,024 MB        4 PB
2,048 MB        8 PB
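For example (the MDG name and MDisk IDs are hypothetical), an MDG with a 256 MB extent size might be created as follows:
svctask mkmdiskgrp -name MDG1_DS45 -ext 256 -mdisk 0:1:2:3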
There are several additional MDG considerations:
Maximum cluster capacity is related to the extent size:
– 16 MB extent = 64 TB, and the capacity doubles for each increment in extent size, for
example, 32 MB = 128 TB. We strongly recommend a minimum of 128 MB or 256 MB.
The Storage Performance Council (SPC) benchmarks used a 256 MB extent.
– Pick the extent size and use that size for all MDGs.
– You cannot migrate VDisks between MDGs with various extent sizes.
MDG reliability, availability, and serviceability (RAS) considerations:
– It might make sense to create multiple MDGs if you ensure a host only gets its VDisks
built from one of the MDGs. If the MDG goes offline, it impacts only a subset of all of
the hosts using the SVC; however, creating multiple MDGs can cause a high number of
MDGs, approaching the SVC limits.
– If you do not isolate hosts to MDGs, create one large MDG. Creating one large MDG
assumes that the physical disks are all the same size, speed, and RAID level.
– The MDG goes offline if an MDisk is unavailable even if the MDisk has no data on it. Do
not put MDisks into an MDG until needed.
– Create at least one separate MDG for all the image mode VDisks.
– Make sure that the LUNs that are given to the SVC have any host persistent reserves
removed.
MDG performance considerations
It might make sense to create multiple MDGs if attempting to isolate workloads to separate
disk spindles. MDGs with too few MDisks cause an MDisk overload, so it is better to have
more spindle counts in an MDG to meet workload requirements.
The MDG and SVC cache relationship
SVC 4.2.1 first introduced cache partitioning to the SVC code base. The decision was
made to provide flexible partitioning, rather than hard-coding a specific number of
partitions. This flexibility is provided on an MDG boundary. That is, the cache will
automatically partition the available resources on a per MDG basis. Most users create a
single MDG from the LUNs provided by a single disk controller, or a subset of a
controller/collection of the same controllers, based on the characteristics of the LUNs
themselves. Characteristics are, for example, RAID-5 versus RAID-10, 10,000 revolutions
per minute (RPM) versus 15,000 RPM, and so on. The overall strategy is provided to
protect from individual controller overloading or faults. If many controllers (or in this case
MDGs) are overloaded, the overreached controllers can still suffer.
Table 3-2 shows the limit of the write cache data.
Table 3-2   Limit of the cache data

Number of MDGs     Upper limit
1                  100%
2                  66%
3                  40%
4                  30%
5 or more          25%
Think of the rule as no single partition can occupy more than its upper limit of cache
capacity with write data. These limits are upper limits, and they are the points at which the
SVC cache will start to limit incoming I/O rates for VDisks created from the MDG. If a
particular partition reaches this upper limit, the net result is the same as a global cache
resource that is full. That is, the host writes will be serviced on a one-out-one-in basis as
the cache destages writes to the back-end disks. However, only writes targeted at the
full partition are limited; all I/O destined for other (non-limited) MDGs will continue as
normal. Read I/O requests for the limited partition will also continue as normal. However,
because the SVC is destaging write data at a rate that is obviously greater than the
controller can actually sustain (otherwise, the partition does not reach the upper limit),
reads are likely to be serviced equally slowly.
3.3.7 Virtual disk configuration
An individual virtual disk (VDisk) is a member of one MDG and one I/O Group. Before you
create a VDisk, you have to know its intended purpose. Based on that information, you can
decide which MDG to select to fit your requirements in terms of cost, performance, and
availability:
The MDG defines which MDisks provided by the disk subsystem make up the VDisk.
The I/O Group (two nodes make an I/O Group) defines which SVC nodes provide I/O
access to the VDisk.
Note: There is no fixed relationship between I/O Groups and MDGs.
Therefore, you can define the VDisks using the following considerations:
Optimize the performance between the hosts and the SVC by distributing the VDisks
between the various nodes of the SVC cluster, which means spreading the load equally on
the nodes in the SVC cluster.
Get the level of performance, reliability, and capacity you require by using the MDG that
corresponds to your needs (you can access any MDG from any node), that is, choose the
MDG that fulfils the demands for your VDisk, with respect to performance, reliability, and
capacity.
I/O Group considerations:
– When you create a VDisk, it is associated with one node of an I/O Group. By default,
every time that you create a new VDisk, it is associated with the next node using a
round-robin algorithm. You can specify a preferred access node, which is the node
through which you send I/O to the VDisk instead of using the round-robin algorithm. A
VDisk is defined for an I/O Group.
– Even if you have eight paths for each VDisk, all I/O traffic flows only toward one node
(the preferred node). Therefore, only four paths are really used by the IBM Subsystem
Device Driver (SDD). The other four paths are used only in the case of a failure of the
preferred node or when concurrent code upgrade is running.
Creating image mode VDisks:
– Use image mode VDisks when an MDisk already has data on it, from a non-virtualized
disk subsystem. When an image mode VDisk is created, it directly corresponds to the
MDisk from which it is created. Therefore, VDisk logical block address (LBA) x = MDisk
LBA x. The capacity of image mode VDisks defaults to the capacity of the supplied
MDisk.
– When you create an image mode disk, the MDisk must have a mode of unmanaged
and therefore does not belong to any MDG. A capacity of 0 is not allowed. Image mode
VDisks can be created in sizes with a minimum granularity of 512 bytes, and they must
be at least one block (512 bytes) in size.
Creating managed mode VDisks with sequential or striped policy
When creating a managed mode VDisk with a sequential or striped policy, you must use a
number of MDisks containing free extents whose combined size is equal to or greater
than the size of the VDisk that you want to create. There might be sufficient extents
available on the MDisk, but there might not be a contiguous block large enough to satisfy
the request.
Space-Efficient VDisk considerations:
– When creating the space-efficient volume, it is necessary to understand the utilization
patterns of the applications or group users accessing this volume. Items, such as the
actual size of the data, the rate of creation of new data, and the modification or deletion
of existing data, all need to be taken into consideration.
– There are two operating modes for Space-Efficient VDisks. Autoexpand VDisks
allocate storage from an MDG on demand with minimal user intervention required, but
a misbehaving application can cause a VDisk to expand until it has consumed all of the
storage in an MDG. Non-autoexpand VDisks have a fixed amount of storage assigned.
In this case, the user must monitor the VDisk and assign additional capacity if or when
required. A misbehaving application can only cause the VDisk that it is using to fill up.
– Depending on the initial size for the real capacity, the grain size and a warning level can
be set. If a disk goes offline, either through a lack of available physical storage on
autoexpand, or because a disk marked as non-expand has not been expanded, there
is a danger of data being left in the cache until storage is made available. This situation
is not a data integrity or data loss issue, but you must not rely on the SVC cache as a
backup storage mechanism.
Recommendations:
We highly recommend that you keep a warning level on the used capacity so that it
provides adequate time for the provision of more physical storage.
Warnings must not be ignored by an administrator.
Use the autoexpand feature of the Space-Efficient VDisks.
– The grain size allocation unit for the real capacity in the VDisk can be set as 32 KB,
64 KB, 128 KB, or 256 KB. A smaller grain size utilizes space more effectively, but it
results in a larger directory map, which can reduce performance.
– Space-Efficient VDisks require more I/Os because of directory accesses. For truly
random workloads with 70% read and 30% write, a Space-Efficient VDisk requires
approximately one directory I/O for every user I/O, so performance can be up to 50%
less than that of a normal VDisk.
– The directory is two-way write-back-cached (just like the SVC fastwrite cache), so
certain applications will perform better.
– Space-Efficient VDisks require more CPU processing, so the performance per I/O
Group will be poorer.
– Starting with SVC 5.1, we have Space-Efficient VDisks - zero detect. This feature
enables clients to reclaim unused allocated disk space (zeros) when converting a fully
allocated VDisk to a Space-Efficient VDisk (SEV) using VDisk Mirroring.
VDisk Mirroring: If you are planning to use the VDisk Mirroring option, apply the following
guidelines:
– Create or identify two separate MDGs to allocate space for your mirrored VDisk.
– If possible, use MDGs whose MDisks share the same characteristics; otherwise, the
VDisk performance can be limited by the poorest performing MDisk.
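To illustrate the VDisk types described above, the following hedged CLI sketch shows one
possible way to create an image mode VDisk and an autoexpanding Space-Efficient VDisk.
The object names and sizes are examples only; verify the exact syntax against the CLI
reference for your code level:

Create an image mode VDisk from an existing unmanaged MDisk:
  svctask mkvdisk -mdiskgrp IMAGE_MDG -iogrp 0 -vtype image -mdisk mdisk20 -name legacy_vd01

Create a Space-Efficient VDisk with autoexpand, a 32 KB grain, and a warning threshold:
  svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp 0 -size 100 -unit gb -rsize 20% -autoexpand -grainsize 32 -warning 80% -name sev_vd01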
3.3.8 Host mapping (LUN masking)
For the host and application servers, the following guidelines apply:
Each SVC node presents a VDisk to the SAN through four paths. Because two nodes are
used in normal operations to provide redundant paths to the same storage, a host with two
HBAs can see eight paths to each LUN that is presented by the SVC. We suggest using
zoning to limit the pathing from a minimum of two paths to the maximum available of eight
paths, depending on the kind of high availability and performance that you want to have in
your configuration.
We recommend using zoning to limit the pathing to four paths. The hosts must run a
multipathing device driver to resolve this back to a single device. The multipathing driver
supported and delivered by SVC is the IBM Subsystem Device Driver (SDD). Native
multipath I/O (MPIO) drivers on selected hosts are supported. For operating system
specific information about MPIO support, see this Web site:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
The number of paths to a VDisk from a host to the nodes in the I/O Group that owns the
VDisk must not exceed eight, even if eight is not the maximum number of paths supported
by the multipath driver (SDD supports up to 32). To restrict the number of paths to a host
VDisk, the fabrics must be zoned so that each host FC port is zoned with one port from
each SVC node in the I/O Group that owns the VDisk.
VDisk paths: The recommended number of VDisk paths is four.
If a host has multiple HBA ports, each port must be zoned to a separate set of SVC ports
to maximize high availability and performance.
In order to configure greater than 256 hosts, you will need to configure the host to iogrp
mappings on the SVC. Each iogrp can contain a maximum of 256 hosts, so it is possible to
create 1,024 host objects on an eight-node SVC cluster. VDisks can only be mapped to a
host that is associated with the I/O Group to which the VDisk belongs.
Port masking. You can use a port mask to control the node target ports that a host can
access, which satisfies two requirements:
– As part of a security policy, to limit the set of WWPNs that are able to obtain access to
any VDisks through a given SVC port
– As part of a scheme to limit the number of logins with mapped VDisks visible to a host
multipathing driver (such as SDD) and thus limit the number of host objects configured
without resorting to switch zoning
The port mask is an optional parameter of the svctask mkhost and chhost commands.
The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to
1111 (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The
default value is 1111 (all ports enabled).
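As an illustration of the port mask, the following hedged CLI sketch creates a host object
that can log in only through SVC ports 1 and 2, and later re-enables all four ports. The
host name and WWPN are examples only:

  svctask mkhost -name AIX_HOST01 -hbawwpn 210100E08B251DD4 -mask 0011
  svctask chhost -mask 1111 AIX_HOST01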
The SVC supports connection to the Cisco MDS family and Brocade family. See the
following Web site for the latest support information:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
3.3.9 Advanced Copy Services
The SVC offers these Advanced Copy Services:
FlashCopy
Metro Mirror
Global Mirror
Apply the following guidelines when implementing SVC Advanced Copy Services.
FlashCopy guidelines
Consider these FlashCopy guidelines:
Identify each application that must have a FlashCopy function implemented for its VDisk.
FlashCopy is a relationship between VDisks. Those VDisks can belong to separate MDGs
and separate storage subsystems.
You can use FlashCopy for backup purposes by interacting with the Tivoli Storage
Manager Agent, or for cloning a particular environment.
Define which FlashCopy best fits your requirements: NO copy, Full copy, Space Efficient,
or Incremental.
Define which FlashCopy rate best fits your requirement in terms of performance and time
to get the FlashCopy completed. The relationship of the background copy rate value to the
attempted number of grains to be split per second is shown in Table 3-3 on page 94.
Define the grain size that you want to use. Larger grain sizes can cause a longer
FlashCopy elapsed time and a higher space usage in the FlashCopy target VDisk. Smaller
grain sizes can have the opposite effect. Remember that the data structure and the source
data location can modify those effects. In an actual environment, check the results of your
FlashCopy procedure in terms of the data copied at every run and in terms of elapsed
time, compare them to the new SVC FlashCopy results, and, if necessary, adapt the
grains per second and the copy rate parameters to fit your environment’s requirements.
Table 3-3   Grain splits per second

  User percentage   Data copied per second   256 KB grain per second   64 KB grain per second
  1 - 10            128 KB                   0.5                       2
  11 - 20           256 KB                   1                         4
  21 - 30           512 KB                   2                         8
  31 - 40           1 MB                     4                         16
  41 - 50           2 MB                     8                         32
  51 - 60           4 MB                     16                        64
  61 - 70           8 MB                     32                        128
  71 - 80           16 MB                    64                        256
  81 - 90           32 MB                    128                       512
  91 - 100          64 MB                    256                       1,024
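The progression in Table 3-3 is regular: the data rate starts at 128 KBps for the 1 - 10
band and doubles with each higher band, and the grains per second value is that data rate
divided by the grain size. The following small Python sketch (our own illustration, not SVC
code) reproduces the table's arithmetic:

  def background_copy_kbps(rate_percent):
      """Data copied per second, in KBps, for a copy rate of 1 - 100 (Table 3-3)."""
      if not 1 <= rate_percent <= 100:
          raise ValueError("copy rate must be between 1 and 100")
      band = (rate_percent - 1) // 10     # 0 for 1-10, 1 for 11-20, and so on
      return 128 * (2 ** band)            # 128 KBps, doubling per band

  def grains_per_second(rate_percent, grain_kb):
      """Attempted grain splits per second for a 64 KB or 256 KB grain."""
      return background_copy_kbps(rate_percent) / grain_kb

  print(background_copy_kbps(50))     # 2048 KBps, that is, 2 MB per second
  print(grains_per_second(50, 256))   # 8.0
  print(grains_per_second(50, 64))    # 32.0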
Metro Mirror and Global Mirror guidelines
SVC supports both intracluster and intercluster Metro Mirror and Global Mirror. From the
intracluster point of view, any single cluster is a reasonable candidate for a Metro Mirror or
Global Mirror operation. Intercluster operation, however, needs at least two clusters, which
are separated by a number of moderately high bandwidth links.
Figure 3-19 shows a schematic of Metro Mirror connections.
Figure 3-19 Metro Mirror connections
Figure 3-19 contains two redundant fabrics. Part of each fabric exists at the local cluster and
at the remote cluster. There is no direct connection between the two fabrics.
Technologies for extending the distance between two SVC clusters can be broadly divided
into two categories:
FC extenders
SAN multiprotocol routers
Due to the more complex interactions involved, IBM explicitly tests products of this class for
interoperability with the SVC. The current list of supported SAN routers can be found in the
supported hardware list on the SVC support Web site:
http://www.ibm.com/storage/support/2145
IBM has tested a number of FC extenders and SAN router technologies with the SVC, which
must be planned, installed, and tested so that the following requirements are met:
For SVC 4.1.0.x, the round-trip latency between sites must not exceed 68 ms (34 ms one
way) for FC extenders, or 20 ms (10 ms one-way) for SAN routers.
For SVC 4.1.1.x and later, the round-trip latency between sites must not exceed 80 ms
(40 ms one-way). For Global Mirror, this limit allows a distance between the primary and
secondary sites of up to 8,000 km (4,970.96 miles) using a planning assumption of 100
km (62.13 miles) per 1 ms of round-trip link latency.
The latency of long distance links depends upon the technology that is used to implement
them. A point-to-point dark fiber-based link will typically provide a round-trip latency of
1 ms per 100 km (62.13 miles) or better. Other technologies will provide longer round-trip
latencies, which will affect the maximum supported distance.
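The distance rule above reduces to simple arithmetic. The following minimal Python sketch
(our own illustration) applies the 80 ms round-trip ceiling and the planning assumption of
100 km per 1 ms of round-trip latency; the link_overhead_ms parameter is a hypothetical
addition representing latency contributed by active link equipment:

  MAX_RTT_MS = 80        # round-trip limit for SVC 4.1.1.x and later
  KM_PER_MS_RTT = 100    # planning assumption: 100 km per 1 ms round trip

  def max_distance_km(link_overhead_ms=0):
      """Maximum site separation, given extra round-trip latency from equipment."""
      budget_ms = max(MAX_RTT_MS - link_overhead_ms, 0)
      return budget_ms * KM_PER_MS_RTT

  print(max_distance_km())      # 8000 km over an ideal dark fiber link
  print(max_distance_km(20))    # 6000 km if the equipment adds 20 ms round trip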
The configuration must be tested with the expected peak workloads.
When Metro Mirror or Global Mirror is used, a certain amount of bandwidth will be required
for SVC intercluster heartbeat traffic. The amount of traffic depends on how many nodes
are in each of the two clusters.
Figure 3-20 shows the amount of heartbeat traffic, in megabits per second, that is
generated by various sizes of clusters.
Figure 3-20 Amount of heartbeat traffic
These numbers represent the total traffic between the two clusters, when no I/O is taking
place to mirrored VDisks. Half of the data is sent by one cluster, and half of the data is sent
by the other cluster. The traffic will be divided evenly over all available intercluster links;
therefore, if you have two redundant links, half of this traffic will be sent over each link,
during fault free operation.
The bandwidth between sites must be, at the least, sized to meet the peak workload
requirements while maintaining the maximum latency specified previously. The peak
workload requirement must be evaluated by considering the average write workload over a
period of one minute or less, plus the required synchronization copy bandwidth. With no
synchronization copies active and no write I/O disks in Metro Mirror or Global Mirror
relationships, the SVC protocols will operate with the bandwidth indicated in Figure 3-20,
but the true bandwidth required for the link can only be determined by considering the
peak write bandwidth to VDisks participating in Metro Mirror or Global Mirror relationships
and adding to it the peak synchronization copy bandwidth.
If the link between the sites is configured with redundancy so that it can tolerate single
failures, the link must be sized so that the bandwidth and latency statements continue to
be true even during single failure conditions.
The configuration is tested to simulate the failure of the primary site (to test the recovery
capabilities and procedures), including eventual failback to the primary site from the
secondary.
The configuration must be tested to confirm that any failover mechanisms in the
intercluster links interoperate satisfactorily with the SVC.
The FC extender must be treated as a normal link.
The bandwidth and latency measurements must be made by, or on behalf of the client,
and are not part of the standard installation of the SVC by IBM. IBM recommends that
these measurements are made during installation and that records are kept. Testing must
be repeated following any significant changes to the equipment providing the intercluster
link.
Global Mirror guidelines
Consider these guidelines:
When using SVC Global Mirror, all components in the SAN must be capable of sustaining
the workload that is generated by application hosts, as well as the Global Mirror
background copy workload. If they are not, Global Mirror can automatically stop your
relationships to protect your application hosts from increased response times. Therefore, it
is important to configure each component correctly.
In addition, use a SAN performance monitoring tool, such as IBM System Storage
Productivity Center, which will allow you to continuously monitor the SAN components for
error conditions and performance problems. This tool will assist you to detect potential
issues before they impact your disaster recovery solution.
The long-distance link between the two clusters must be provisioned to allow for the peak
application write workload to the Global Mirror source VDisks, plus the client-defined level
of background copy.
The peak application write workload must ideally be determined by analyzing the SVC
performance statistics.
Statistics must be gathered over a typical application I/O workload cycle, which might be
days, weeks, or months depending on the environment on which the SVC is used. These
statistics must be used to find the peak write workload that the link must be able to
support.
Characteristics of the link can change with use, for example, the latency might increase as
the link is used to carry an increased bandwidth. The user must be aware of the link’s
behavior in such situations and ensure that the link remains within the specified limits. If
the characteristics are not known, testing must be performed to gain confidence of the
link’s suitability.
Users of Global Mirror must consider how to optimize the performance of the
long-distance link, which will depend upon the technology that is used to implement the
link. For example, when transmitting FC traffic over an IP link, it might be desirable to
enable jumbo frames to improve efficiency.
Using Global Mirror and Metro Mirror between the same two clusters is supported.
It is not supported for cache-disabled VDisks to participate in a Global Mirror relationship.
The gmlinktolerance parameter of the remote copy partnership must be set to an
appropriate value. The default value is 300 seconds (5 minutes), which will be appropriate
for most clients.
During SAN maintenance, the user must either reduce the application I/O workload for the
duration of the maintenance (so that the degraded SAN components are capable of the
new workload), disable the gmlinktolerance feature, increase the gmlinktolerance value
(meaning that application hosts might see extended response times from Global Mirror
VDisks), or stop the Global Mirror relationships. If the gmlinktolerance value is increased
for maintenance lasting x minutes, it must only be reset to the normal value x minutes after
the end of the maintenance activity. If gmlinktolerance is disabled for the duration of the
maintenance, it must be re-enabled after the maintenance is complete.
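For reference, on this code level, the gmlinktolerance value is normally read and changed
through the cluster settings. The following sketch uses an example cluster name; because
CLI syntax can vary by release, verify the exact command, and the use of a value of 0 to
disable the feature, against the CLI reference for your code level:

  svcinfo lscluster ITSO-CLS1               (shows the current gm_link_tolerance value)
  svctask chcluster -gmlinktolerance 400    (raise the value before maintenance)
  svctask chcluster -gmlinktolerance 300    (restore the default afterward)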
Global Mirror VDisks must have their preferred nodes evenly distributed between the
nodes of the clusters. Each VDisk within an I/O Group has a preferred node property that
can be used to balance the I/O load between nodes in that group.
Figure 3-21 shows the correct relationship between VDisks in a Metro Mirror or Global Mirror
solution.
Figure 3-21 Correct VDisk relationship
The capabilities of the storage controllers at the secondary cluster must be provisioned to
allow for the peak application workload to the Global Mirror VDisks, plus the client-defined
level of background copy, plus any other I/O being performed at the secondary site.
Otherwise, the performance of applications at the primary cluster can be limited by the
performance of the back-end storage controllers at the secondary cluster, which restricts
the amount of I/O that applications can perform to Global Mirror VDisks.
We do not recommend using SATA for Metro Mirror or Global Mirror secondary VDisks
without complete review. Be careful using a slower disk subsystem for the secondary
VDisks for high performance primary VDisks, because SVC cache might not be able to
buffer all the writes, and flushing cache writes to SATA might slow I/O at the production
site.
Global Mirror VDisks at the secondary cluster must be in dedicated MDisk groups (which
contain no non-Global Mirror VDisks).
Storage controllers must be configured to support the Global Mirror workload that is
required of them. Either dedicate storage controllers to only Global Mirror VDisks,
configure the controller to guarantee sufficient quality of service for the disks being used
by Global Mirror, or ensure that physical disks are not shared between Global Mirror
VDisks and other I/O (for example, by not splitting an individual RAID array).
MDisks within a Global Mirror MDisk group must be similar in their characteristics (for
example, RAID level, physical disk count, and disk speed). This requirement is true of all
MDisk groups, but it is particularly important to maintain performance when using Global
Mirror.
When a consistent relationship is stopped, for example, by a persistent I/O error on the
intercluster link, the relationship enters the consistent_stopped state. I/O at the primary
site continues, but the updates are not mirrored to the secondary site. Restarting the
relationship will begin the process of synchronizing new data to the secondary disk. While
this synchronization is in progress, the relationship will be in the inconsistent_copying
state. Therefore, the Global Mirror secondary VDisk will not be in a usable state until the
copy has completed and the relationship has returned to a Consistent state. For this
reason, it is highly advisable to create a FlashCopy of the secondary VDisk before restarting the
relationship. When started, the FlashCopy will provide a consistent copy of the data, even
while the Global Mirror relationship is copying. If the Global Mirror relationship does not
reach the Synchronized state (if, for example, the intercluster link experiences further
persistent I/O errors), the FlashCopy target can be used at the secondary site for disaster
recovery purposes.
If you are planning to use an FCIP intercluster link, it is extremely important to design and
size the pipe correctly.
Example 3-2 shows a best-guess bandwidth sizing formula.
Example 3-2 WAN link calculation example
Take the amount of write data within 24 hours and multiply it by 4 to allow for peaks.
Translate the result into MB/s to determine the WAN link that is needed.
Example:
250 GB a day
250 GB * 4 = 1 TB
24 hours * 3,600 secs/hr = 86,400 secs
1,000,000,000,000 / 86,400 = approximately 12 MB/s
Therefore, an OC3 link or higher is needed (155 Mbps or higher)
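The same estimate can be expressed as a small Python sketch (our own illustration). The
factor of 4 for peaks and the 250 GB a day figure follow Example 3-2; they are planning
rules of thumb, not measured values:

  def wan_link_mb_per_sec(write_gb_per_day, peak_factor=4):
      """Rough WAN bandwidth estimate, in MB/s, using the rule from Example 3-2."""
      bytes_per_day = write_gb_per_day * 10**9 * peak_factor
      return bytes_per_day / 86400 / 10**6    # bytes per second, scaled to MB/s

  print(round(wan_link_mb_per_sec(250)))      # approximately 12 MB/s (about 96 Mbps),
                                              # so an OC3 (155 Mbps) link or higher fits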
If compression is available on routers or WAN communication devices, smaller pipelines
might be adequate. Note that workload is probably not evenly spread across 24 hours. If
there are extended periods of high data change rates, you might want to consider
suspending Global Mirror during that time frame.
If the network bandwidth is too small to handle the traffic, application write I/O response
times might be elongated. For the SVC, Global Mirror must support short term “Peak
Write” bandwidth requirements. Remember that SVC Global Mirror is much more sensitive
to a lack of bandwidth than the DS8000.
You will need to consider the initial sync and re-sync workload, as well. The Global Mirror
partnership’s background copy rate must be set to a value that is appropriate to the link
and secondary back-end storage. Remember, the more bandwidth that you give to the
sync and re-sync operation, the less workload can be delivered by the SVC for the regular
data traffic.
The Metro Mirror or Global Mirror background copy rate is predefined: the per VDisk limit
is 25 MBps, and the maximum per I/O Group is roughly 250 MBps.
Be careful using space-efficient secondary VDisks at the disaster recovery site, because a
Space-Efficient VDisk can have up to 50% lower performance than a normal VDisk, which
can affect the performance of the VDisks at the primary site.
Do not propose Global Mirror if the data change rate will exceed the communication
bandwidth or if the round-trip latency exceeds 80 - 120 ms. Greater than 80 ms round-trip
latency requires SCORE/RPQ submission.
3.3.10 SAN boot support
The SVC supports SAN boot or startup for AIX, Windows 2003 Server, and other operating
systems. SAN boot support can change from time to time, so we recommend regularly
checking the following Web site:
http://www.ibm.com/systems/storage/software/virtualization/svc/interop.html
3.3.11 Data migration from a non-virtualized storage subsystem
Data migration is an extremely important part of an SVC implementation. So, a data migration
plan must be accurately prepared. You might need to migrate your data because of one of
these reasons:
Redistributing workload within a cluster across the disk subsystem
Moving workload onto newly installed storage
Moving workload off old or failing storage, ahead of decommissioning it
Moving workload to rebalance a changed workload
Migrating data from an older disk subsystem to SVC-managed storage
Migrating data from one disk subsystem to another disk subsystem
Because there are multiple data migration methods, we suggest that you choose the data
migration method that best fits your environment, your operating system platform, your kind of
data, and your application’s service level agreement.
We can define data migration as belonging to three groups:
Based on operating system Logical Volume Manager (LVM) or commands
Based on special data migration software
Based on the SVC data migration feature
With data migration, we recommend that you apply the following guidelines:
Choose which data migration method best fits your operating system platform, your kind of
data, and your service level agreement.
Check the interoperability matrix for the storage subsystem to which your data is being
migrated:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
Choose where you want to place your data after migration in terms of the MDG related to
a specific storage subsystem tier.
Check if a sufficient amount of free space or extents are available in the target MDG.
Decide if your data is critical and must be protected by a VDisk Mirroring option or if it has
to be replicated in a remote site for disaster recovery.
Prepare offline all of the zone and LUN masking/host mappings that you might need in
order to minimize downtime during the migration.
Prepare a detailed operation plan so that you do not overlook anything at data migration
time.
Execute a data backup before you start any data migration. Data backup must be part of
the regular data management process.
You might want to use the SVC as a data mover to migrate data from a non-virtualized
storage subsystem to another non-virtualized storage subsystem. In this case, you might
have to add additional checks that are related to the specific storage subsystem to which
you want to migrate. Be careful using slower disk subsystems for the secondary VDisks for
high performance primary VDisks, because SVC cache might not be able to buffer all the
writes and flushing cache writes to SATA might slow I/O at the production site.
3.3.12 SVC configuration backup procedure
We recommend that you save the configuration externally when changes, such as adding
new nodes, disk subsystems, and so on, have been performed on the cluster. Configuration
saving is a crucial part of the SVC management, and various methods can be applied to back
up your SVC configuration. We suggest that you implement an automatic configuration
backup by applying the configuration backup command. We describe this command for the
CLI and the GUI in Chapter 7, “SAN Volume Controller operations using the command-line
interface” on page 339 and in Chapter 8, “SAN Volume Controller operations using the GUI”
on page 469.
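For reference, the following minimal command-line sketch shows one way to run such a
backup from a management workstation and retrieve the resulting file. The cluster address,
user ID, and target path are examples only; see Chapter 7 for the full procedure:

  ssh admin@svcclusterip "svcconfig backup"
  scp admin@svcclusterip:/tmp/svc.config.backup.xml /backup/svc/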
3.4 Performance considerations
While storage virtualization with the SVC improves flexibility and provides simpler
management of a storage infrastructure, it can also provide a substantial performance
advantage for a variety of workloads. The SVC’s caching capability and its ability to stripe
VDisks across multiple disk arrays are the reasons why performance improvement is
significant when implemented with midrange disk subsystems, because this technology is
often only provided with high-end enterprise disk subsystems.
Tip: Technically, almost all storage controllers provide both striping (for example, RAID-5
or RAID-10) and a form of caching. The real advantage is the degree to which you can
stripe the data, that is, across all MDisks in a group, and therefore have the maximum
number of spindles active at one time. The caching is secondary. The SVC provides
additional caching to what midrange controllers provide (usually a couple of GB), whereas
enterprise systems have much larger caches.
To ensure the desired performance and capacity of your storage infrastructure, we
recommend that you do a performance and capacity analysis to reveal the business
requirements of your storage environment. When this is done, you can use the guidelines in
this chapter to design a solution that meets the business requirements.
When discussing performance for a system, it always comes down to identifying the
bottleneck, and thereby the limiting factor of a given system. At the same time, you must take
into consideration the component for whose workload you identify a limiting factor, because it
might not be the same component that is identified as the limiting factor for other workloads.
When designing a storage infrastructure using SVC, or implementing SVC in an existing
storage infrastructure, you must therefore take into consideration the performance and
capacity of the SAN, the disk subsystems, the SVC, and the known/expected workload.
3.4.1 SAN
The SVC now has many models: 2145-4F2, 2145-8F2, 2145-8F4, 2145-8G4, 2145-8A4, and
2145-CF8. All of them can connect to 2 Gbps, 4 Gbps, or 8 Gbps switches. From a
performance point of view, it is better to connect the SVC to 8 Gbps switches.
Correct zoning on the SAN switch will bring security and performance together. We
recommend that you implement a dual HBA approach at the host to access the SVC.
3.4.2 Disk subsystems
From a performance perspective, there are a few guidelines in connecting to an SVC:
Connect all storage ports to the switch, and zone them to all of the SVC ports. Zone all
ports on the disk back-end storage to all ports on the SVC nodes in a cluster. Also, make
sure to configure the storage subsystem LUN masking settings to map all LUNs to all of
the SVC WWPNs in the cluster. The SVC is designed to handle large quantities of multiple
paths from the back-end storage.
Using as many 15,000 RPM disks as possible will improve performance considerably.
Creating one LUN per array will help in a sequential workload environment.
In most cases, the SVC will be able to improve the performance, especially on middle to low
end disk subsystems, older disk subsystems with slow controllers, or uncached disk systems,
for these reasons:
The SVC has the capability to stripe across disk arrays, and it can do so across the entire
set of supported physical disk resources.
The SVC has a 4 GB, 8 GB, or, in the latest 2145-CF8 model, 24 GB cache, and it has an
advanced caching mechanism.
The SVC’s large cache and advanced cache management algorithms also allow it to improve
upon the performance of many types of underlying disk technologies. The SVC’s capability to
manage, in the background, the destaging operations incurred by writes (while still supporting
full data integrity) has the potential to be particularly important in achieving good database
performance.
Depending upon the size, age, and technology level of the disk storage system, the total
cache available in the SVC can be larger, smaller, or about the same as that associated with
the disk storage. Because hits to the cache can occur in either the upper (SVC) or the lower
(disk controller) level of the overall system, the system as a whole can take advantage of the
larger amount of cache wherever it is located. Thus, if the storage control level of cache has
the greater capacity, expect hits to this cache to occur, in addition to hits in the SVC cache.
Also, regardless of their relative capacities, both levels of cache will tend to play an important
role in allowing sequentially organized data to flow smoothly through the system. The SVC
cannot increase the throughput potential of the underlying disks in all cases. Its ability to do
so depends upon both the underlying storage technology, as well as the degree to which the
workload exhibits “hot spots” or sensitivity to cache size or cache algorithms.
IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, shows the SVC’s cache
partitioning capability:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
3.4.3 SVC
The SVC cluster is scalable up to eight nodes, and the performance is almost linear when
adding more nodes into an SVC cluster, until it becomes limited by other components in the
storage infrastructure. While virtualization with the SVC provides a great deal of flexibility, it
does not diminish the necessity to have a SAN and disk subsystems that can deliver the
desired performance. Essentially, SVC performance improvements are gained by having as
many MDisks as possible, therefore creating a greater level of concurrent I/O to the back end
without overloading a single disk or array.
Assuming that there are no bottlenecks in the SAN or on the disk subsystem, remember that
specific guidelines must be followed when you are performing these tasks:
Creating an MDG
Creating VDisks
Connecting or configuring hosts that must receive disk space from an SVC cluster
You can obtain more detailed information about performance and best practices for the SVC
in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
3.4.4 Performance monitoring
Performance monitoring must be an integral part of the overall IT environment. For the SVC,
just as for the other IBM storage subsystems, the official IBM tool to collect performance
statistics and supply a performance report is the TotalStorage® Productivity Center.
You can obtain more information about using the TotalStorage Productivity Center to monitor
your storage subsystem in Monitoring Your Storage Subsystems with TotalStorage
Productivity Center, SG24-7364:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
See Chapter 8, “SAN Volume Controller operations using the GUI” on page 469 for detailed
information about collecting performance statistics.
Chapter 4. SAN Volume Controller initial configuration
In this chapter, we discuss these topics:
Managing the cluster
System Storage Productivity Center overview
SAN Volume Controller (SVC) Hardware Management Console
SVC initial configuration steps
SVC ICA application upgrade
4.1 Managing the cluster
There are three ways to manage the SVC:
Using the System Storage Productivity Center (SSPC)
Using an SVC Management Console
Using a PuTTY-based SVC command-line interface
Figure 4-1 shows the three ways to manage an SVC cluster.
Figure 4-1 SVC cluster management (each access point, whether the HMC, the SSPC, or an
OEM desktop, offers the icat application, a Web browser, and a PuTTY client; the SSPC
additionally provides TPC-SE)
You still have full management control of the SVC no matter which method you choose. IBM
System Storage Productivity Center is supplied by default when you purchase your SVC
cluster.
If you already have a previously installed SVC cluster in your environment, it is possible that
you are using the SVC Console (Hardware Management Console (HMC)). You can still use it
together with IBM System Storage Productivity Center, but you can only log in to your SVC
from one of them at a time.
If you decide to manage your SVC cluster with the SVC CLI, it does not matter if you are
using the SVC Console or IBM System Storage Productivity Center, because the SVC CLI is
located on the cluster and accessed via Secure Shell (SSH), which can be installed
anywhere.
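For example, after an SSH key pair has been set up (see 4.5, “Secure Shell overview and
CIM Agent” on page 125), any workstation with an SSH client can reach the CLI. The
following hedged sketch uses OpenSSH (PuTTY’s plink is the Windows equivalent); the key
path and cluster address are examples only:

  ssh -i /path/to/privatekey admin@svcclusterip svcinfo lscluster -delim :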
4.1.1 TCP/IP requirements for SAN Volume Controller
To plan your installation, consider the TCP/IP address requirements of the SVC cluster and
the requirements for the SVC to access other services. You must also plan the address
allocation and the Ethernet router, gateway, and firewall configuration to provide the required
access and network security.
Figure 4-2 shows the TCP/IP ports and services that are used by the SVC.
Figure 4-2 TCP/IP ports
For more information about TCP/IP prerequisites, see Chapter 3, “Planning and
configuration” on page 65 and also the IBM System Storage Productivity Center: Introduction
and Planning Guide, SC23-8824.
In order to start an SVC initial configuration, Figure 4-3 shows a common flowchart that
covers all of the types of management.
Figure 4-3 SVC initial configuration flowchart
In the next sections, we describe each of the steps shown in Figure 4-3.
4.2 System Storage Productivity Center overview
The System Storage Productivity Center (SSPC) is an integrated hardware and software
solution that provides a single management console for managing IBM SVC, IBM DS8000,
and other components of your data storage infrastructure.
The current release of System Storage Productivity Center consists of the following
components:
IBM Tivoli Storage Productivity Center Basic Edition 4.1.1
IBM Tivoli Storage Productivity Center Basic Edition 4.1.1 is preinstalled on the System
Storage Productivity Center server.
Tivoli Storage Productivity Center for Replication is preinstalled. An additional license
is required.
IBM SAN Volume Controller Console 5.1.0
IBM SAN Volume Controller Console 5.1.0 is preinstalled on the System Storage
Productivity Center server. Because this level of the console no longer requires a
Common Information Model (CIM) agent to communicate with the SVC, a CIM Agent is
not installed with the console. Instead, you can use the CIM Agent that is embedded in the
SVC hardware. To manage prior levels of the SVC, install the corresponding CIM Agent on
the IBM System Storage Productivity Center server. PuTTY remains installed on the
System Storage Productivity Center and is available for key generation.
IBM System Storage DS® Storage Manager 10.60 is available for you to optionally
install on the System Storage Productivity Center server, or on a remote server. The DS
Storage Manager 10.60 can manage the IBM DS3000, IBM DS4000, and IBM DS5000.
With DS Storage Manager 10.60, when you use Tivoli Storage Productivity Center to add
and discover a DS CIM Agent, you can launch the DS Storage Manager from the topology
viewer, the Configuration Utility, or the Disk Manager of the Tivoli Storage Productivity
Center.
IBM Java™ 1.5 is preinstalled and supports DS Storage
Manager 10.60. You do not need to download Java from Sun Microsystems.
DS CIM Agent management commands. The DS CIM Agent management commands
(DSCIMCLI) for 5.4.3 are preinstalled on the System Storage Productivity Center.
Figure 4-4 shows the product stack in the IBM System Storage Productivity Center Console
1.4.
Figure 4-4 IBM System Storage Productivity Center 1.4 product stack
The IBM System Storage Productivity Center Console replaces the functionality of the SVC
Master Console (MC), which was a dedicated management console for the SVC. The Master
Console is still supported and will run the latest code levels of the SVC Console software
components.
IBM System Storage Productivity Center has all of the software components preinstalled and
tested on a System x™ machine (the 2805-MC4) with Windows installed on it.
All the software components installed on the IBM System Storage Productivity Center can be
ordered and installed on hardware that meets or exceeds minimum requirements. The SVC
Console software components are also available on the Web.
When using the IBM System Storage Productivity Center with the SVC, you have to install it
and configure it before configuring the SVC. For a detailed guide to the IBM System Storage
Productivity Center, we recommend that you refer to the IBM System Storage Productivity
Center Software Installation and User’s Guide, SC23-8823.
For information pertaining to physical connectivity to the SVC, see Chapter 3, “Planning and
configuration” on page 65.
4.2.1 IBM System Storage Productivity Center hardware
The hardware used by the IBM System Storage Productivity Center solution is the IBM
System Storage Productivity Center 2805-MC4. It is a 1U rack-mounted server. It has the
following initial configuration:
One Intel Xeon® quad-core central processing unit, with speed of 2.4 GHz, cache of
8 MB, and power consumption of 80 W
8 GB of RAM (eight 1-inch dual inline memory modules of double-data-rate 3 (DDR3)
memory, with a data rate of 1,333 MHz)
Two 146 GB hard disk drives, each with a speed of 15,000 RPM
One Broadcom 6708 Ethernet card
One CD/DVD bay with read and write capability
Microsoft Windows 2008 Enterprise Edition
It is designed to perform System Storage Productivity Center functions. If you plan to upgrade
System Storage Productivity Center for more functions, you can purchase the Performance
Upgrade Kit to add more capacity to your hardware.
4.2.2 SVC installation planning information for System Storage Productivity
Center
Consider the following steps when planning the System Storage Productivity Center
installation:
Verify that the hardware and software prerequisites have been met.
Determine the location of the rack where the System Storage Productivity Center is to be
installed.
Verify that the System Storage Productivity Center will be installed in line of sight to the
SVC nodes.
Verify that you have a keyboard, mouse, and monitor available to use.
Determine the cabling required.
Determine the network IP address.
Determine the System Storage Productivity Center host name.
For detailed installation guidance, see the IBM System Storage Productivity Center:
Introduction and Planning Guide, SC23-8824:
https://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5356448
Also, see the IBM Tivoli Storage Productivity Center for Replication Installation and
Configuration Guide, SC27-2337:
http://www-01.ibm.com/support/docview.wss?rs=1181&uid=ssg1S7002597
Figure 4-5 shows the front view of the System Storage Productivity Center Console based on
the 2805-MC4 hardware.
Figure 4-5 System Storage Productivity Center 2805-MC4 front view
Figure 4-6 shows a rear view of System Storage Productivity Center Console based on the
2805-MC4 hardware.
Figure 4-6 System Storage Productivity Center 2805-MC4 rear view
4.2.3 SVC installation planning information for the HMC
Consider the following steps when planning for HMC installation:
Verify that the hardware and software prerequisites have been met.
Determine the location of the rack where the HMC is to be installed.
Verify that the HMC will be installed in line of sight to the SVC nodes.
Verify that you have a keyboard, mouse, and monitor available to use.
Determine the cabling required.
Determine the network IP address.
Determine the HMC host name.
For detailed installation guidance, see the IBM System Storage SAN Volume Controller:
Master Console Guide, SC27-2223:
http://www-01.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH&dc=DA400&q1=english&q2=-Japanese&uid=ssg1S7002609&loc=en_US&cs=utf-8&lang=en
4.3 Setting up the SVC cluster
This section provides step-by-step instructions for building the SVC cluster initially.
4.3.1 Creating the cluster (first time) using the service panel
This section provides the step-by-step instructions that are needed to create the cluster for
the first time using the service panel.
Use Figure 4-7 as a reference for the SVC 2145-8F2 and 2145-8F4 node model buttons to be
pushed in the steps that follow. Use Figure 4-8 for the SVC Node 2145-8G4 and 2145-8A4
models. And, use Figure 4-9 as a reference for the SVC Node 2145-CF8 model.
Figure 4-7 SVC 8F2 node and SVC 8F4 node front and operator panel
Figure 4-8 SVC 8G4 node front and operator panel
Figure 4-9 shows the CF8 model front panel.
Figure 4-9 CF8 front panel
4.3.2 Prerequisites
Ensure that the SVC nodes are physically installed. Prior to configuring the cluster, ensure
that the following information is available:
License: The license indicates whether the client is permitted to use FlashCopy,
MetroMirror, or both. It also indicates how much capacity the client is licensed to virtualize.
For IPv4 addressing:
– Cluster IPv4 addresses: These addresses include one address for the cluster and
another address for the service address.
– IPv4 subnet mask.
– Gateway IPv4 address.
For IPv6 addressing:
– Cluster IPv6 addresses: These addresses include one address for the cluster and
another address for the service address.
– IPv6 prefix.
– Gateway IPv6 address.
4.3.3 Initial configuration using the service panel
After the hardware is physically installed into racks, complete the following steps to initially
configure the cluster through the service panel:
1. Choose any node that is to become a member of the cluster being created.
2. At the service panel of that node, press and release the up or down navigation button
continuously until Node: is displayed.
Important: If a time-out occurs when entering the input for the fields during these
steps, you must begin again from step 2. All of the changes are lost, so be sure to have
all of the information available before beginning again.
3. Press and release the left or right navigation button continuously until Create Cluster? is
displayed. Press the select button.
4. If IPv4 Address: is displayed on line 1 of the service display, go to step 5. If Delete
Cluster? is displayed on line 1 of the service display, this node is already a member of a
cluster. Either the wrong node was selected, or this node was already used in a previous
cluster. The ID of this existing cluster is displayed on line 2 of the service display:
a. If the wrong node was selected, this procedure can be exited by pressing the left, right,
up, or down button (it cancels automatically after 60 seconds).
b. If you are certain that the existing cluster is not required, follow these steps:
i. Press and hold the up button.
ii. Press and release the select button. Then, release the up button, which deletes the
cluster information from the node. Go back to step 1 and start again.
Important: When a cluster is deleted, all of the client data that is contained in that
cluster is lost.
5. If you are creating the cluster with IPv4, then, press the select button; otherwise for IPv6,
press the down arrow to display IPv6 Address:, and press the select button.
6. Use the up or down navigation buttons to change the value of the first field of the IP
address to the value that has been chosen.
Note: For IPv4, pressing and holding the up or down buttons will increment or
decrease the IP address field by units of 10. The field value rotates from 0 to 255 with
the down button, and from 255 to 0 with the up button.
For IPv6, you do the same steps except that it is a 4-digit hexadecimal field, and the
individual characters will increment.
7. Use the right navigation button to move to the next field. Use the up or down navigation
buttons to change the value of this field.
8. Repeat step 7 for each of the remaining fields of the IP address.
9. When the last field of the IP address has been changed, press the select button.
10.Press the right arrow button:
a. For IPv4, IPv4 Subnet: is displayed.
b. For IPv6, IPv6 Prefix: is displayed.
11.Press the select button.
12.Change the fields for IPv4 Subnet in the same way that the IPv4 IP address fields were
changed. There is only a single field for IPv6 Prefix.
13.When the last field of IPv4 Subnet/IPv6 Mask has been changed, press the select button.
14.Press the right navigation button:
a. For IPv4, IPv4 Gateway: is displayed.
b. For IPv6, IPv6 Gateway: is displayed.
15.Press the select button.
16.Change the fields for the appropriate Gateway in the same way that the IPv4/IPv6 address
fields were changed.
17.When the changes to all of the Gateway fields have been made, press the select button.
18.Press the right navigation button:
a. For IPv4, IPv4 Create Now? is displayed.
b. For IPv6, IPv6 Create Now? is displayed.
19.When the settings have all been verified as accurate, press the select button.
To review the settings before creating the cluster, use the right and left buttons. Make any
necessary changes, return to Create Now?, and press the select button.
If the cluster is created successfully, Password: is displayed on line 1 of the service display
panel. Line 2 contains a randomly generated password, which is used to complete the
cluster configuration in the next section.
Important: Make a note of this password now. It is case sensitive. The password is
displayed only for approximately 60 seconds. If the password is not recorded, the
cluster configuration procedure must be started again from the beginning.
20.When Cluster: is displayed on line 1 of the service display and the Password: display has
timed out, the cluster was created successfully. Also, the cluster IP address is displayed
on line 2 when the initial creation of the cluster is completed.
If the cluster is not created, Create Failed: is displayed on line 1 of the service display.
Line 2 contains an error code. Refer to the error codes that are documented in IBM
System Storage SAN Volume Controller: Service Guide, GC26-7901, to identify the
reason why the cluster creation failed and the corrective action to take.
Important: At this time, do not repeat this procedure to add other nodes to the cluster.
Adding nodes to the cluster is accomplished in 7.8.2, “Adding a node” on page 388 and in
8.10.3, “Adding nodes to the cluster” on page 560.
4.4 Adding the cluster to the SSPC or the SVC HMC
After you have performed the activities in 4.3, “Setting up the SVC cluster” on page 111,
complete the cluster setup using the SVC Console. Follow 4.4.1, “Configuring the GUI” on
page 117 to create the cluster and complete the configuration.
Important: Make sure that the SVC cluster IP address (svcclusterip) can be reached
successfully with a ping command from the SVC Console.
4.4.1 Configuring the GUI
If this is the first time that the SVC administration GUI is being used, you must configure it:
1. Open the GUI using one of the following methods:
– Double-click the icon marked SAN Volume Controller Console on the SVC Console’s
desktop.
– Open a Web browser on the SVC Console and point to this address:
http://localhost:9080/ica (We accessed the SVC Console using this method.)
– Open a Web browser on a separate workstation and point to this address:
http://svcconsoleipaddress:9080/ica
Figure 4-10 shows the SVC 5.1 Welcome window.
Figure 4-10 Welcome window
2. Click Add SAN Volume Controller Cluster, and you will be presented with the window
that is shown in Figure 4-11.
Figure 4-11 Adding the SVC cluster IP address
Important: Do not forget to select Create Initialize Cluster. Without this flag, you will
not be able to initialize the cluster and you will get the error message CMMVC5753E.
Figure 4-12 shows the CMMVC5753E error.
Figure 4-12 CMMVC5753E error
3. Click OK and a pop-up window opens and prompts for the user ID and the password of the
SVC cluster, as shown in Figure 4-13. Enter the user ID admin and the cluster admin
password that was set earlier in 4.3.1, “Creating the cluster (first time) using the service
panel” on page 111, and click OK.
Figure 4-13 SVC cluster user ID and password sign-on window
4. The browser accesses the SVC and displays the Create New Cluster wizard window, as
shown in Figure 4-14. Click Continue.
Figure 4-14 Create New Cluster wizard
5. At the Create New Cluster window (Figure 4-15), fill in the following details:
– A new superuser password to replace the random one that the cluster generated: The
password is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore.
It cannot start with a number and has a minimum of one character and a maximum of
15 characters.
Users: The Admin user that was previously used will no longer be needed. It will be
replaced by the superuser user that will be created at the cluster initialization time.
Starting from SVC 5.1, the CIM Agent has been moved inside the SVC cluster.
– A service password to access the cluster for service operation: The password is case
sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start
with a number and has a minimum of one character and a maximum of 15 characters.
– A cluster name: The cluster name is case sensitive and can consist of A to Z, a to z, 0
to 9, and the underscore. It cannot start with a number and has a minimum of one
character and a maximum of 15 characters.
– A service IP address to access the cluster for service operations. Choose between an
automatically assigned IP address from Dynamic Host Configuration Protocol (DHCP)
or a static IP address.
Tip: The service IP address differs from the cluster IP address. However, because
the service IP address is configured for the cluster, it must be on the same IP
subnet.
– The fabric speed of the FC network.
– The Administrator Password Policy check box, if selected, enables a user to reset the
password from the service panel (this reset is helpful, for example, if the password is
forgotten). This check box is optional.
Important: The SVC must be in a secure room if this function is enabled, because
anyone who knows the correct key sequence can reset the admin password:
Use this key sequence:
a. From the Cluster: menu item displayed on the service panel, press the left or
right button until Recover Cluster? is displayed.
b. Press the select button. Service Access? is displayed.
c. Press and hold the up button, and then press and release the select button.
This step generates a new random password. Write it down.
Important: Be careful, because pressing and holding the down button, and
pressing and releasing the select button, places the node in service mode.
6. After you have filled in the details, click Create New Cluster (Figure 4-15).
Figure 4-15 Cluster details
Important: Make sure that you confirm the Administrator and Service passwords and
retain them in a safe place for future use.
7. A Creating New Cluster window opens, as shown in Figure 4-16. Click Continue each
time when prompted.
Figure 4-16 Creating New Cluster
8. A Created New Cluster window opens, as shown in Figure 4-17. Click Continue.
Figure 4-17 Created New Cluster
9. A Password Changed window will confirm that the password has been modified, as shown
in Figure 4-18. Click Continue.
Figure 4-18 Password Changed
Note: By this time, the service panel display on the front of the configured node
displays the cluster name that was entered previously (for example, ITSO-CLS3).
10.Then, you are redirected to the License setting window, as shown in Figure 4-19. Choose
the type of license that is appropriate for your purchase, and click GO to continue.
Figure 4-19 License Settings
11.Next, the Capacity Licensing Settings window is displayed, as shown in Figure 4-20. To
continue, fill out the fields for Virtualization Limit, FlashCopy Limit, and Global and Metro
Mirror Limit for the number of Terabytes that are licensed. If you do not have a license for
any of these features, leave the value at 0. Click Set License Settings.
Figure 4-20 Capacity Licensing Settings
12.A confirmation window will confirm the settings for the features, as shown in Figure 4-21.
Click Continue.
Figure 4-21 Capacity Licensing Settings confirmation
13.A window confirming that you have successfully created the initial settings for the cluster
opens, as shown in Figure 4-22.
Figure 4-22 Cluster successfully created
14.Closing the previous task window by clicking X in the upper-right corner will redirect you to
the Viewing Clusters window (the cluster will appear as unauthenticated). After selecting
your cluster and clicking Go, you will be asked to authenticate your access by inserting
your predefined superuser user ID and password.
Figure 4-23 shows the Viewing Clusters window.
Figure 4-23 Viewing Clusters window
15.Perform the following steps to complete the SVC cluster configuration:
a. Add an additional node to the cluster.
b. Configure SSH keys for the command line user, as shown in 4.5, “Secure Shell
overview and CIM Agent” on page 125.
c. Configure user authentication and authorization.
d. Set up the call home options.
e. Set up event notifications and inventory reporting.
f. Create the MDGs.
g. Add an MDisk to the MDG.
h. Identify and create VDisks.
i. Create and map host objects.
j. Identify and configure FlashCopy mappings and Metro Mirror relationships.
k. Back up configuration data.
We describe all of these steps in Chapter 7, “SAN Volume Controller operations using the
command-line interface” on page 339, and in Chapter 8, “SAN Volume Controller operations
using the GUI” on page 469.
4.5 Secure Shell overview and CIM Agent
Prior to SVC Version 5.1, Secure Shell (SSH) was used to secure data flow between the SVC
cluster configuration node (SSH server) and a client, either a command-line client through the
command-line interface (CLI) or the Common Information Model object manager (CIMOM).
The connection is secured by means of a public and private key pair:
1. A public key and a private key are generated together as a pair.
2. A public key is uploaded to the SSH server.
3. A private key identifies the client and is checked against the public key during the
connection. The private key must be protected.
4. The SSH server must also identify itself with a specific host key.
5. If the client does not have that host key yet, it is added to a list of known hosts.
Secure Shell is the communication vehicle between the management system (usually the
System Storage Productivity Center) and the SVC cluster.
SSH is a client/server network application. The SVC cluster acts as the SSH server in this
relationship. The SSH client provides a secure environment from which to connect to a
remote machine. It uses the principles of public and private keys for authentication.
The communication interfaces prior to SVC version 5.1 are shown in Figure 4-24.
Figure 4-24 Communication interfaces
SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the cluster, and a private key that is kept private to the
workstation that is running the SSH client. These keys authorize specific users to access the
administration and service functions on the cluster. Each key pair is associated with a
user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored
on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.
To use the CLI (or, prior to SVC 5.1, the SVC graphical user interface (GUI)), an SSH client
must be installed on the client system, an SSH key pair must be generated on that system,
and the client's SSH public key must be stored on the SVC cluster or clusters.
The System Storage Productivity Center and the HMC must have the freeware
implementation of SSH-2 for Windows called PuTTY preinstalled. This software provides the
SSH client function for users logged into the SVC Console who want to invoke the CLI or GUI
to manage the SVC cluster.
Starting with SVC 5.1, the management design has been changed, and the CIM Agent has
been moved into the SVC cluster.
With SVC 5.1, SSH key authentication is no longer needed for the GUI; it is required only for
the SVC command-line interface.
Figure 4-25 shows the SVC management design.
Figure 4-25 SVC management design
4.5.1 Generating public and private SSH key pairs using PuTTY
Perform the following steps to generate SSH keys on the SSH client system:
Note: These keys will be used in the step documented in 4.6, “Using IPv6” on page 136.
1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client
desktop, select Start → Programs → PuTTY → PuTTYgen.
2. On the PuTTY Key Generator GUI window (Figure 4-26), generate the keys:
a. Select SSH2 RSA.
b. Leave the number of bits in a generated key value at 1024.
c. Click Generate.
Figure 4-26 PuTTY key generator GUI
3. Move the cursor over the blank area to generate the keys.
To generate keys: The blank area indicated by the message is the large blank
rectangle on the GUI inside the section of the GUI labelled Key. Continue to move the
mouse pointer over the blank area until the progress bar reaches the far right. This
action generates random characters to create a unique key pair.
4. After the keys are generated, save them for later use:
a. Click Save public key, as shown in Figure 4-27.
Figure 4-27 Saving the public key
b. You are prompted for a name (for example, pubkey) and a location for the public key
(for example, C:\Support Utils\PuTTY). Click Save.
If another name or location is chosen, ensure that a record of the name or location is
kept, because the name and location of this SSH public key must be specified in the
steps that are documented in 4.5.2, “Uploading the SSH public key to the SVC cluster”
on page 129.
Tip: The PuTTY Key Generator saves the public key with no extension, by default.
We recommend that you use the string “pub” in naming the public key, for example,
“pubkey”, to easily differentiate the SSH public key from the SSH private key.
c. In the PuTTY Key Generator window, click Save private key.
d. You are prompted with a warning message, as shown in Figure 4-28. Click Yes to save
the private key without a passphrase.
Figure 4-28 Saving the private key without a passphrase
e. When prompted, enter a name (for example, icat) and location for the private key (for
example, C:\Support Utils\PuTTY). Click Save.
If you choose another name or location, ensure that you keep a record of it, because
the name and location of the SSH private key must be specified when the PuTTY
session is configured in the steps that are documented in 4.6, “Using IPv6” on
page 136.
We suggest that you use the default name icat.ppk, because, in SVC clusters running
on versions prior to SVC 5.1, this key has been used for icat application authentication
and must have this default name.
Private key extension: The PuTTY Key Generator saves the private key with the
PPK extension.
5. Close the PuTTY Key Generator GUI.
6. Navigate to the directory where the private key was saved (for example, C:\Support
Utils\PuTTY).
7. Copy the private key file (for example, icat.ppk) to the C:\Program
Files\IBM\svcconsole\cimom directory.
Important: If the private key was named something other than icat.ppk, make sure that
you rename it to the icat.ppk file in the C:\Program Files\IBM\svcconsole\cimom
folder. The GUI (which will be used later) expects the file to be called icat.ppk and for it
to be in this location. This key is no longer used in SVC 5.1, but it is still valid for the
previous version.
4.5.2 Uploading the SSH public key to the SVC cluster
After you have created your SSH key pair, you need to upload your SSH public key to the
SVC cluster:
1. From your browser, open:
http://svcconsoleipaddress:9080/ica
Select Users, and then on the next window, select Create a User from the list, as shown
in Figure 4-29, and click Go.
Figure 4-29 Create a user
2. From the Create a User window, enter the user ID name that you want to create and the
password. At the bottom of the window, select the access level that you want to assign to
your user (remember that Security Administrator is the maximum level) and choose the
SSH public key file that you created for this user to upload, as shown in Figure 4-30.
Click OK.
Figure 4-30 Create user and password
3. You have completed the user creation process and uploaded the user's SSH public key,
which will be paired later with the user's private .ppk key, as described in 4.5.3, "Configuring
the PuTTY session for the CLI" on page 130. Figure 4-31 shows the successful upload of
the SSH admin key.
Figure 4-31 Adding the SSH admin key successfully
4. You have now completed the basic setup requirements for the SVC cluster using the SVC
cluster Web interface.
4.5.3 Configuring the PuTTY session for the CLI
Before the CLI can be used, the PuTTY session must be configured using the SSH keys that
were generated earlier in 4.5.1, “Generating public and private SSH key pairs using PuTTY”
on page 126.
Perform these steps to configure the PuTTY session on the SSH client system:
1. From the System Storage Productivity Center Windows desktop, select Start →
Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 4-32), from the Category pane on the left,
click Session, if it is not selected.
Tip: The items selected in the Category pane affect the content that appears in the right
pane.
Figure 4-32 PuTTY Configuration window
3. In the right pane, under the “Specify the destination you want to connect to” section, select
SSH. Under the “Close window on exit” section, select Only on clean exit, which ensures
that if there are any connection errors, they will be displayed on the user’s window.
4. From the Category pane on the left side of the PuTTY Configuration window, click
Connection → SSH to display the PuTTY SSH Configuration window, as shown in
Figure 4-33.
Figure 4-33 PuTTY SSH connection configuration window
5. In the right pane, in the “Preferred SSH protocol version” section, select 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select
Connection → SSH → Auth.
7. In the right pane (Figure 4-34), in the "Private key file for authentication:" field under the
Authentication Parameters section, either browse to or type the fully qualified directory
path and file name of the SSH client private key file created earlier (for example,
C:\Support Utils\PuTTY\icat.PPK).
Figure 4-34 PuTTY Configuration: Private key location
8. From the Category pane on the left side of the PuTTY Configuration window, click
Session.
9. In the right pane, follow these steps, as shown in Figure 4-35:
a. Under the “Load, save, or delete a stored session” section, select Default Settings,
and click Save.
b. For the Host Name (or IP address), type the IP address of the SVC cluster.
c. In the Saved Sessions field, type a name (for example, SVC) to associate with this
session.
d. Click Save.
Figure 4-35 PuTTY Configuration: Saving a session
You can now either close the PuTTY Configuration window or leave it open to continue.
Tip: Normally, output that comes from the SVC is wider than the default PuTTY window
size. We recommend that you change your PuTTY window appearance to use a font with a
character size of 8. To change, click the Appearance item in the Category tree, as shown
in Figure 4-35, and then, click Font. Choose a font with a character size of 8.
4.5.4 Starting the PuTTY CLI session
The PuTTY application is required for all CLI tasks. If it was closed for any reason, restart the
session as detailed here:
1. From the SVC Console desktop, open the PuTTY application by selecting Start →
Programs → PuTTY.
2. On the PuTTY Configuration window (Figure 4-36), select the session saved earlier (in our
example, ITSO-SVC1), and click Load.
3. Click Open.
Figure 4-36 Open PuTTY command-line session
4. If this is the first time that the PuTTY application has been used since generating and
uploading the SSH key pair, a PuTTY Security Alert window opens, because the cluster's
SSH host key is not yet known to the client, as shown in Figure 4-37. Click Yes, which
invokes the CLI.
Figure 4-37 PuTTY Security Alert
5. At the Login as: prompt, type admin and press Enter (the user ID is case sensitive). As
shown in Example 4-1, the private key used in this PuTTY session is now authenticated
against the public key that was uploaded to the SVC cluster.
Example 4-1 Authenticating
login as: admin
Authenticating with public key "rsa-key-20080617"
Last login: Wed Aug 18 03:30:21 2009 from 10.64.210.240
IBM_2145:ITSO-CL1:admin>
You have now completed the tasks that are required to configure the CLI for SVC
administration from the SVC Console. You can close the PuTTY session.
4.5.5 Configuring SSH for AIX clients
To configure SSH for AIX clients, follow these steps:
1. Ensure that the SVC cluster IP address can be successfully reached by using the ping
command from the AIX workstation from which cluster access is desired.
2. OpenSSL must be installed for OpenSSH to work. Install OpenSSH on the AIX client:
a. The installation images can be found at this Web site:
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp
http://sourceforge.net/projects/openssh-aix
b. Follow the instructions carefully, because OpenSSL must be installed before using
SSH.
3. Generate an SSH key pair:
a. Run the cd command to go to the /.ssh directory.
b. Run the ssh-keygen -t rsa command.
c. The following message is displayed:
Generating public/private rsa key pair. Enter file in which to save the key
(//.ssh/id_rsa)
d. Pressing Enter will use the default file that is shown in parentheses; otherwise, enter a
file name (for example, aixkey), and press Enter.
e. The following prompt is displayed:
Enter a passphrase (empty for no passphrase)
We recommend entering a passphrase when the CLI will be used interactively,
because there is no other authentication when connecting through the CLI. After typing
in the passphrase, press Enter.
f. The following prompt is displayed:
Enter same passphrase again:
Type the passphrase again, and then, press Enter again.
g. A message is displayed indicating that the key pair has been created. The private key
file will have the name entered previously (for example, aixkey). The public key file will
have the name entered previously with an extension of .pub (for example, aixkey.pub).
Using a passphrase: If you are generating an SSH keypair so that you can
interactively use the CLI, we recommend that you use a passphrase so you will need to
authenticate every time that you connect to the cluster. It is possible to have a
passphrase-protected key for scripted usage, but you will have to use the expect
command or a similar command to have the passphrase passed to the ssh command.
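As a minimal sketch of scripted usage, assuming the key file /.ssh/aixkey from the previous
steps and a placeholder cluster IP address, a single SVC CLI command can be run
non-interactively over OpenSSH:

ssh -i /.ssh/aixkey admin@<cluster_ip> svcinfo lscluster

If the key is passphrase-protected, ssh prompts for the passphrase before the command
runs, which is why a passphrase-free key (or an expect wrapper) is needed for unattended
scripts.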
4.6 Using IPv6
SVC V4.3 introduced IPv6 functionality to the console and clusters. You can use IPv4 or
IPv6, or both in a dual-stack configuration. Migrating to (or from) IPv6 can be done remotely
and is nondisruptive, except that you need to remove and redefine the cluster to the SVC
Console.
Using IPv6: To remotely access the SVC Console and clusters running IPv6, you are
required to run Internet Explorer 7 and have IPv6 configured on your local workstation.
4.6.1 Migrating a cluster from IPv4 to IPv6
As a prerequisite, have IPv6 already enabled and configured on the System Storage
Productivity Center/Windows server running the SVC Console. We have configured an
interface with IPv4 and IPv6 addresses on the System Storage Productivity Center, as shown
in Example 4-2.
Example 4-2 Output of ipconfig on System Storage Productivity Center
C:\Documents and Settings\Administrator>ipconfig
Windows IP Configuration
Ethernet adapter IPv6:
   Connection-specific DNS Suffix  . :
   IP Address. . . . . . . . . . . . : 10.0.1.115
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   IP Address. . . . . . . . . . . . : 2001:610::115
   IP Address. . . . . . . . . . . . : fe80::214:5eff:fecd:9352%5
   Default Gateway . . . . . . . . . :
To migrate a cluster, follow these steps:
1. Select Manage Cluster → Modify IP Addresses, as shown in Figure 4-38.
Figure 4-38 Modify IP Addresses window
2. In the IPv6 section that is shown in Figure 4-38, select an IPv6 interface, and click Modify.
3. Then, in the window that is shown in Figure 4-39:
a. Type an IPv6 prefix in the IPv6 Network Prefix field. The Prefix field can have a value of
0 to 127.
b. Type an IPv6 address in the Cluster IP field.
c. Type an IPv6 address in the Service IP address field.
d. Type an IPv6 gateway in the Gateway field.
e. Click Modify Settings.
Figure 4-39 Modify IP Addresses: Adding IPv6 addresses
4. A confirmation window displays (Figure 4-40). Click X in the upper-right corner to close
this tab.
Figure 4-40 Modify IP Addresses window
5. Before you remove the cluster from the SVC Console, test the IPv6 connectivity using the
ping command from a cmd.exe session on the System Storage Productivity Center (as
shown in Example 4-3 on page 139).
Example 4-3 Testing IPv6 connectivity to the SVC cluster
C:\Documents and Settings\Administrator>ping
2001:0610:0000:0000:0000:0000:0000:119
Pinging 2001:610::119 from 2001:610::115 with 32 bytes of data:
Reply from 2001:610::119: time=3ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Ping statistics for 2001:610::119:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 3ms, Average = 0ms
6. In the Viewing Clusters pane, in the GUI Welcome window, select the cluster that you want
to remove. Select Remove a Cluster from the list, and click Go.
7. The Viewing Clusters window reopens, without the cluster that you have removed. Select
Add a Cluster from the list, and click OK (Figure 4-41).
Figure 4-41 Adding a cluster
8. The Adding a Cluster window opens. Enter your IPv6 address, as shown in Figure 4-42,
and click OK.
Figure 4-42 IPv6 address
9. You will be asked to enter your CIM user ID (superuser) and your password
(default = passw0rd), as shown in Figure 4-43.
Figure 4-43 Insert CIM user ID and password
10.The Viewing Clusters window reopens with the cluster displaying an IPv6 address, as
shown in Figure 4-44. Click Launch the SAN Volume Controller Console for the cluster,
and go back to modifying IP addresses, as you did in step 1.
Figure 4-44 Viewing Clusters window: Displaying the new cluster using the IPv6 address
11.In the Modify IP Addresses window, select the IPv4 address port, select Clear Port
Settings, and click GO, as shown in Figure 4-45.
Figure 4-45 Clear Port Settings
12.A confirmation message appears, as shown in Figure 4-46. Click OK.
Figure 4-46 Confirmation of IP address change
13.A second window (Figure 4-47) opens, confirming that the IPv4 stack has been disabled
and the associated addresses have been removed. Click Return.
Figure 4-47 IPv4 stack has been removed
4.6.2 Migrating a cluster from IPv6 to IPv4
The process of migrating a cluster from IPv6 to IPv4 is identical to the process described in
4.6.1, “Migrating a cluster from IPv4 to IPv6” on page 137, except that you add IPv4
addresses and remove the IPv6 addresses.
4.7 Upgrading the SVC Console software
This section takes you through the steps to upgrade your existing SVC Console GUI. You can
also use these steps to install a new SVC Console on another server.
Follow these steps:
1. Download the latest available version of the ICA application and check for compatibility
with your running version from the following Web site:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002888
2. Save your account definitions, documenting all defined users, passwords, and SSH keys,
because you might need to reuse these users, passwords, and keys if you encounter any
problems during the GUI upgrade process.
Example 4-4 shows you how to list the defined accounts using the CLI.
Example 4-4 Accounts list
IBM_2145:ITSO-CLS3:admin>svcinfo lsuser
id name      password ssh_key remote usergrp_id usergrp_name
0  superuser yes      no      no     0          SecurityAdmin
1  admin     yes      yes     no     0          SecurityAdmin
IBM_2145:ITSO-CLS3:admin>svcinfo lsuser 0
id 0
name superuser
password yes
ssh_key no
remote no
usergrp_id 0
usergrp_name SecurityAdmin
IBM_2145:ITSO-CLS3:admin>svcinfo lsuser 1
id 1
name admin
password yes
ssh_key yes
remote no
usergrp_id 0
usergrp_name SecurityAdmin
IBM_2145:ITSO-CLS3:admin>
3. Execute the setup.exe file from the location where you have saved and unzipped the
latest SVC Console file.
Figure 4-48 shows the location of the setup.exe file on our system.
Figure 4-48 Location of the setup.exe file
4. The installation wizard starts. The first window asks you to shut down any running
Windows programs, stop all SVC services, and review the readme file.
5. Figure 4-49 shows how to stop the SVC services.
Figure 4-49 Stop CIMOM service
6. Figure 4-50 shows the wizard Welcome window.
Figure 4-50 Wizard welcome window
After you have reviewed the installation instructions and the readme file, click Next.
7. The installation wizard asks you to read and accept the terms of the license agreement,
as shown in Figure 4-51. Click Next.
Figure 4-51 License agreement window
8. The installation detects your existing SVC Console installation (if you are upgrading). If it
does detect your existing SVC Console installation, it will ask you to perform these steps:
– Select Preserve Configuration if you want to keep your existing configuration. (You
must make sure that this option is checked.)
– Manually shut down the SVC Console services:
• IBM System Storage SAN Volume Controller Pegasus Server
• Service Location Protocol
• IBM WebSphere Application Server V6 - SVC
There might be differences in the existing services, depending on which version you
are upgrading from. Follow the instructions on the dialog wizard for which services to
shut down, as shown in Figure 4-52. Click Next.
Figure 4-52 Product Installation Check
Important: If you want to keep your SVC configuration, make sure that you select
Preserve Configuration. If you omit this selection, you will lose your entire SVC
Console setup, and you will have to reconfigure your console as though it were a new
installation.
9. The installation wizard then checks that the appropriate services are shut down, removes
the previous version, and shows the Installation Confirmation window, as shown in
Figure 4-53. If the wizard detects any problems, it first shows you a page detailing the
possible problems, giving you time to fix them before proceeding.
Figure 4-53 Installation Confirmation
10.Figure 4-54 shows the progress of the installation. For our environment, it took
approximately 10 minutes to complete.
Figure 4-54 Installation Progress
11.The installation process now starts the migration of the cluster user accounts. Starting
with SVC 5.1, the CIMOM has been moved into the cluster, and it is no longer present in
the SVC Console or the System Storage Productivity Center. The CIMOM authentication
login process is performed in the ICA application when the SVC management application
is launched.
As part of the migration input, Figure 4-55 shows where to enter the "admin" password for
each of the clusters that you already own.
This password was defined during the initial creation of the SVC cluster and must be
carefully saved.
Figure 4-55 Migration Input
12.At the end of the user accounts migration process, you might get the error that is shown in
Figure 4-56.
Figure 4-56 SVC cluster user account migration error
This message indicates normal behavior, because, in our environment, we have
implemented only the superuser user ID. The GUI upgrade wizard is intended to work only
for user accounts; it is not intended to migrate the superuser user.
If you get this error, when you try to access your SVC cluster using the GUI, you will be
required to enter the default CIMOM user ID (superuser) and password (passw0rd),
because the superuser account has not been migrated; you will have to use the default in
the meantime.
13.Click Next. The wizard will either restart all of the appropriate SVC Console processes or
inform you that you need to reboot, and then give you a summary of the installation. In this
case, we were told that we needed to reboot, as shown in Figure 4-57.
Figure 4-57 Installation summary
14.The wizard requires us to restart our computer (Figure 4-58).
Figure 4-58 Installation finished: Requesting reboot
15.And finally, to see the new interface, you can launch the SVC Console by using the icon on
the desktop. Log in and confirm that the upgrade was successful by noting the Console
Version number on the right side of the window under the graphic. See Figure 4-59.
Figure 4-59 Launching the upgraded SVC Console
You have completed the upgrade of your SVC Console.
To access the SVC, you must click Clusters on the left pane. You will be redirected to the
Viewing Clusters window, as shown in Figure 4-60.
Figure 4-60 Viewing Clusters
As you can see, the cluster’s availability status is “Unauthenticated”, which is to be expected.
Select the cluster, click GO, and launch the SAN Volume Controller Application. You will be
required to enter your CIMOM user ID (superuser) and your password (passw0rd), as shown
in Figure 4-61.
Figure 4-61 Sign on to cluster
Finally, you can manage your SVC cluster, as shown in Figure 4-62.
Figure 4-62 Cluster management window
Chapter 5. Host configuration
In this chapter, we describe the basic host configuration procedures that are required to
connect supported hosts to the IBM System Storage SAN Volume Controller (SVC).
5.1 SVC setup
Traditionally in IBM SAN Volume Controller (SVC) environments, hosts were connected to an
SVC via a storage area network (SAN). In actual implementations that have high availability
requirements (the majority of the target clients for SVC), the SAN is implemented as two
separate fabrics providing a fault tolerant arrangement of two or more counterpart SANs. For
the hosts, each SAN provides alternate paths to the resources (virtual disks (VDisks)) that are
provided by the SVC.
Starting with SVC 5.1, iSCSI is introduced as an alternative protocol for attaching hosts to the
SVC via a LAN. However, within the SVC, all communications with back-end storage
subsystems, and with other SVC clusters, take place via Fibre Channel (FC).
For iSCSI/LAN-based access to the SVC, using a single network or two physically separated
networks is supported. The iSCSI feature is a software feature that is
provided by the SVC 5.1 code. It will be available on the new CF8 nodes and also on the
existing nodes that support the SVC 5.1 release. The existing SVC node hardware has
multiple 1 Gbps Ethernet ports. Until now, only one 1 Gbps Ethernet port has been used, and
it has been used for cluster configuration. With the introduction of iSCSI, both ports can now
be used.
Redundant paths to VDisks can be provided for the SAN, as well as for the iSCSI
environment.
Figure 5-1 shows the attachments that are supported with the SVC 5.1 release.
Figure 5-1 SVC host attachment overview
5.1.1 Fibre Channel and SAN setup overview
Hosts using Fibre Channel (FC) as the connection to an SVC are always connected to a SAN
switch. For SVC configurations, we strongly recommend the use of two redundant SAN
fabrics. Therefore, each server is equipped with a minimum of two host bus adapters (HBAs),
with each of the HBAs connected to a SAN switch in one of the two fabrics (assuming one
port per HBA).
SVC imposes no special limit on the FC optical distance between the SVC nodes and the
host servers. A server can therefore be attached to an edge switch in a core-edge
configuration while the SVC cluster is at the core. SVC supports up to three inter-switch link
(ISL) hops in the fabric. Therefore, the server and the SVC node can be separated by up to
five actual FC links, four of which can be 10 km (6.2 miles) long if longwave small form-factor
pluggables (SFPs) are used. For high performance servers, the rule is to avoid ISL hops, that
is, connect the servers to the same switch to which the SVC is connected, if possible.
Remember these limits when connecting host servers to an SVC:
• Up to 256 hosts per I/O Group, which results in a total of 1,024 hosts per cluster. Note that
if the same host is connected to multiple I/O Groups of a cluster, it counts as a host in
each of these groups.
• A total of 512 distinct configured host worldwide port names (WWPNs) are supported per
I/O Group. This limit is the sum of the FC host ports and the host iSCSI names (an internal
WWPN is generated for each iSCSI name) that are associated with all of the hosts that are
associated with a single I/O Group.
The access from a server to an SVC cluster via the SAN fabrics is defined by the use of
zoning. Consider these rules for host zoning with the SVC:
• For configurations of fewer than 64 hosts per cluster, the SVC supports a simple set of
zoning rules that enables the creation of a small set of host zones for various
environments. Switch zones containing HBAs must contain fewer than 40 initiators in total,
including the SVC ports that act as initiators. Thus, a valid zone is 32 host ports, plus eight
SVC ports. This restriction exists because the order N² scaling of the number of
Registered State Change Notification (RSCN) messages with the number of initiators per
zone (N) can cause problems. We recommend that you zone using single HBA port
zoning, as described in the next paragraph.
• For configurations of more than 64 hosts per cluster, the SVC supports a more restrictive
set of host zoning rules. Each HBA port must be placed in a separate zone. Also included
in this zone is exactly one port from each SVC node in the I/O Groups that are associated
with this host. We recommend that hosts are zoned this way in smaller configurations, too,
but it is not mandatory.
• Switch zones containing HBAs must contain HBAs from similar hosts or similar HBAs in
the same host. For example, AIX and Windows NT® hosts must be in separate zones, and
QLogic and Emulex adapters must be in separate zones.
• To obtain the best performance from a host with multiple FC ports, ensure that each FC
port of a host is zoned with a separate group of SVC ports.
• To obtain the best overall performance of the subsystem and to prevent overloading, the
workload to each SVC port must be equal, typically by zoning approximately the same
number of host FC ports to each SVC FC port.
• For any given VDisk, the number of paths through the SAN from the SVC nodes to a host
must not exceed eight. For most configurations, four paths to an I/O Group (four paths to
each VDisk that is provided by this I/O Group) are sufficient.
Figure 5-2 on page 156 shows an overview for a setup with servers that have two single port
HBAs each. Follow this method to connect them:
• Try to distribute the actual hosts equally between two logical sets per I/O Group. Always
connect hosts from each set to the same group of SVC ports. This “port group” includes
exactly one port from each SVC node in the I/O Group. The zoning defines the correct
connections.
• The “port groups” are defined this way:
– Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both
nodes, for example, N1/N2 of I/O Group zero.
– Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both
nodes of an I/O Group.
• You can create aliases for these “port groups” (per I/O Group):
– Fabric A: IOGRP0_PG1 → N1_P1;N2_P1, IOGRP0_PG2 → N1_P3;N2_P3
– Fabric B: IOGRP0_PG1 → N1_P4;N2_P4, IOGRP0_PG2 → N1_P2;N2_P2
• Create host zones by always using the host port WWPN, plus the PG1 alias for hosts in
the first host set. Always use the host port WWPN, plus the PG2 alias for hosts from the
second host set. If a host has to be zoned to multiple I/O Groups, simply add the PG1 or
PG2 aliases from the specific I/O Groups to the host zone.
Using this schema provides four paths to one I/O Group for each host. It helps to maintain an
equal distribution of host connections on the SVC ports. Figure 5-2 shows an overview of this
host zoning schema.
Figure 5-2 Overview of four path host zoning
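If your fabric uses Brocade switches, the aliases and zones can be created from the Fabric
OS CLI. The following lines are only a sketch: the alias, zone, and configuration names are
illustrative, and the WWPN placeholders must be replaced with the actual SVC node port
and host HBA WWPNs:

alicreate "IOGRP0_PG1", "<N1_P1_WWPN>; <N2_P1_WWPN>"
zonecreate "z_host1_IOGRP0", "<HOST1_HBA1_WWPN>; IOGRP0_PG1"
cfgadd "ITSO_CFG", "z_host1_IOGRP0"
cfgsave
cfgenable "ITSO_CFG"

Equivalent zoning dialogs exist in the management tools of the other switch vendors.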
Whenever possible, we recommend using the minimum number of paths that are necessary
to achieve sufficient redundancy in the SAN environment; for SVC environments, this means
no more than four paths per I/O Group or VDisk.
Remember that all paths have to be managed by the multipath driver on the host side. If we
assume a server is connected via four ports to the SVC, each VDisk is seen via eight paths.
With 125 VDisks mapped to this server, the multipath driver has to support handling up to
1,000 active paths (8 x 125). You can obtain details and current limitations for the IBM
Subsystem Device Driver (SDD) in Storage Multipath Subsystem Device Driver User’s Guide,
GC52-1309-01, at this Web site:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1
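As a quick way to see the paths that SDD is actually managing on a host, you can use the
SDD datapath utility (shown here as a sketch; the exact output columns depend on the SDD
level that is installed):

datapath query adapter
datapath query device

The first command summarizes the state of each HBA, and the second command lists each
vpath device together with the state of every underlying path.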
For hosts using four HBAs/ports with eight connections to an I/O Group, use the zoning
schema that is shown in Figure 5-3. You can combine this schema with the previous four path
zoning schema.
Figure 5-3 Overview of eight path host zoning
5.1.2 Port mask
SVC V4.1 added the concept of a port mask. With prior releases, any particular host saw the
same set of SCSI logical unit numbers (LUNs) from each of the four FC ports in each node in
a particular I/O Group.
The port mask is associated with a host object. The port mask controls which SVC (target)
ports any particular host can access. The port mask applies to logins from any of the host
(initiator) ports associated with the host object in the configuration model. The port mask
consists of four binary bits, represented in the command-line interface (CLI) as 0 or 1. The
rightmost bit is associated with FC port 1 on each node. The leftmost bit is associated with
port 4. A 1 in any particular bit position allows access to that port and a zero denies access.
The default port mask is 1111, preserving the behavior of the product prior to the introduction
of this feature.
For each login between an HBA port and an SVC node port, SVC decides whether to allow
access or to deny access by examining the port mask that is associated with the host object
to which the HBA belongs. If access is denied, SVC responds to SCSI commands as though
the HBA port is unknown to the SVC.
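As a sketch of how a port mask is applied, assuming the -mask parameter of the svctask
chhost command and the host name Kanaga that is used later in this chapter, the following
command restricts the host to SVC ports 1 and 2 (the two rightmost bits):

svctask chhost -mask 0011 Kanaga

After this command, logins from the Kanaga HBAs to ports 3 and 4 of each node are denied,
and those ports behave as though the HBA is unknown to the SVC.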
5.2 iSCSI overview
iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and,
thereby, leverages an existing IP network instead of requiring FC HBAs and SAN fabric
infrastructure.
5.2.1 Initiators and targets
An iSCSI client, which is known as an (iSCSI) initiator, sends SCSI commands over an IP
network to an iSCSI target. We refer to a single iSCSI initiator or iSCSI target as an iSCSI
node. An iSCSI target refers to a storage resource that is located on an iSCSI server, or, to be
more precise, to one of potentially many instances of iSCSI nodes running on that server as a
“target.”
5.2.2 Nodes
There are one or more iSCSI nodes within a network entity. The iSCSI node is accessible via
one or more network portals. A network portal is a component of a network entity that has a
TCP/IP network address and that can be used by an iSCSI node.
An iSCSI node is identified by its unique iSCSI name, which is referred to as an iSCSI
qualified name (IQN). Remember that this name serves only for the identification of the node;
it is not the node’s address, and in iSCSI, the name is separated from the addresses. This
separation allows multiple iSCSI nodes to use the same addresses, or, as implemented in
the SVC, the same iSCSI node to use multiple addresses.
5.2.3 IQN
An SVC cluster can provide up to eight iSCSI targets, one per node. Each SVC node has its
own IQN, which by default will be in this form:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
An iSCSI host is defined in the SVC by specifying its iSCSI initiator names. For example, the
IQN of a Windows server takes this form:
iqn.1991-05.com.microsoft:itsoserver01
During the configuration of an iSCSI host in the SVC, you must specify the host’s initiator
IQNs. You can read about host creation in detail in Chapter 7, “SAN Volume Controller
operations using the command-line interface” on page 339, and in Chapter 8, “SAN Volume
Controller operations using the GUI” on page 469.
An alias string can also be associated with an iSCSI node. The alias allows an organization to
associate a user friendly string with the iSCSI name. However, the alias string is not a
substitute for the iSCSI name.
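As a sketch, the Windows server with the initiator IQN shown above can be defined as an
iSCSI host with a single CLI command (the host name is illustrative):

svctask mkhost -name itsoserver01 -iscsiname iqn.1991-05.com.microsoft:itsoserver01

Additional initiator names can be added to an existing host object later with the svctask
addhostport command.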
Figure 5-4 on page 159 shows an overview of iSCSI implementation in the SVC.
Figure 5-4 SVC iSCSI overview
A host that is using iSCSI as the communication protocol to access its VDisks on an SVC
cluster uses its single or multiple Ethernet adapters to connect to an IP LAN. The nodes of
the SVC cluster are connected to the LAN by the existing 1 Gbps Ethernet ports on the node.
For iSCSI, both ports can be used.
Note that Ethernet link aggregation (port trunking) or “channel bonding” for the SVC nodes’
Ethernet ports is not supported for the 1 Gbps ports in this release. The support for Jumbo
Frames, that is, support for MTU sizes greater than 1,500 bytes, is planned for future SVC
releases.
For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, two
IPv4 and two IPv6 addresses or iSCSI network portals can be defined. Figure 2-12 on
page 29 shows one IPv4 and one IPv6 address per Ethernet port.
5.3 VDisk discovery
Hosts can discover VDisks through one of the following three mechanisms (a brief sketch
follows this list):
• Internet Storage Name Service (iSNS): SVC can register itself with an iSNS name server;
the IP address of this server is set using the svctask chcluster command. A host can
then query the iSNS server for available iSCSI targets.
• Service Location Protocol (SLP): The SVC node runs an SLP daemon, which responds to
host requests. This daemon reports the available services on the node. One service is the
CIMOM, which runs on the configuration node; the iSCSI I/O service can now also be
reported.
• iSCSI SendTargets request: The host can also send a SendTargets request using the
iSCSI protocol to the iSCSI TCP/IP port (port 3260). You must define the network portal IP
addresses of the iSCSI targets before a discovery can be started.
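As a sketch of the third mechanism, a Linux host running the open-iscsi initiator can discover
the SVC targets with a SendTargets request (the portal IP address is a placeholder):

iscsiadm -m discovery -t sendtargets -p <portal_ip>:3260

The command returns the IQNs of the SVC iSCSI target nodes that are reachable through
that network portal.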
5.4 Authentication
Authentication of hosts is optional; by default, it is disabled. The user can choose to enable
Challenge Handshake Authentication Protocol (CHAP) authentication, which involves
sharing a CHAP secret between the cluster and the host. If the correct key is not provided by
the host, the SVC will not allow it to perform I/O to VDisks. The cluster can also be assigned
a CHAP secret.
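The following lines sketch how CHAP can be configured from the CLI; the parameter names
shown here (-chapsecret and -iscsiauthmethod) are our assumption for the 5.1 command
set, and the secrets are illustrative:

svctask chhost -chapsecret host_secret_01 itsoserver01
svctask chcluster -iscsiauthmethod chap -chapsecret cluster_secret_01

The first command sets the secret that the host must present; the second command sets a
cluster-wide secret that the SVC can present back to the host for two-way authentication.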
A new feature with iSCSI is that the IP addresses that are used to address an iSCSI target on
an SVC node can move between the nodes of an I/O Group. IP addresses will only be moved
from one node to its partner node if a node goes through a planned or unplanned restart. If
the Ethernet link to the SVC cluster fails due to a cause outside of the SVC (such as the cable
being disconnected or the Ethernet router failing), the SVC makes no attempt to fail over an
IP address to restore IP access to the cluster. To enable validation of the Ethernet access to
the nodes, each node responds to ping at the standard one-per-second rate without frame
loss.
The SVC 5.1 release introduced a new concept, which is used for handling the iSCSI IP
address failover, that is called a “clustered Ethernet port”. A clustered Ethernet port consists
of one physical Ethernet port on each node in the cluster and contains configuration settings
that are shared by all of these ports. These clustered ports are referred to as Port 1 and Port
2 in the CLI or GUI on each node of an SVC cluster. Clustered Ethernet ports can be used for
iSCSI or management ports.
Figure 5-5 on page 161 shows an example of an iSCSI target node failover. It gives a
simplified overview of what happens during a planned or unplanned node restart in an SVC
I/O Group:
1. During normal operation, one iSCSI node target node instance is running on each SVC
node. All of the IP addresses (IPv4/IPv6) belonging to this iSCSI target, including the
management addresses if the node acts as the configuration node, are presented on the
two ports (P1/P2) of a node.
2. During a restart of an SVC node (N1), the iSCSI target node, including all of its network
portal (IPv4/IPv6) IP addresses defined on Port1/Port2 and the management (IPv4/IPv6)
IP addresses (if N1 acted as the configuration node), will fail over to Port1/Port2 of the
partner node within the I/O Group, that is, node N2. An iSCSI initiator running on a server
will execute a reconnect to its iSCSI target, that is, the same IP addresses presented now
by a new node of the SVC cluster.
3. As soon as the node (N1) has finished its restart, the iSCSI target node (including its IP
addresses) running on N2 will fail back to N1. Again, the iSCSI initiator running on a server
will execute a reconnect to its iSCSI target. The management addresses will not fail back.
N2 will remain in the role of the configuration node for this cluster.
Figure 5-5 iSCSI node failover scenario
From the server’s point of view, it is not required to have a multipathing driver (MPIO) in place
to be able to handle an SVC node failover. In the case of a node restart, the server simply
reconnects to the IP addresses of the iSCSI target node that will reappear after several
seconds on the ports of the partner node.
A host multipathing driver for iSCSI is required in these situations:
• To protect a server from network link failures, including port failures on the SVC nodes
• To protect a server from a server HBA failure (if two HBAs are in use)
• To protect a server from network failures, if the server is connected via two HBAs to two
separate networks
• To provide load balancing on the server’s HBAs and the network links
The commands for the configuration of the iSCSI IP addresses have been separated from the
configuration of the cluster IP addresses.
The following commands are new commands for managing iSCSI IP addresses:
• The svcinfo lsportip command lists the iSCSI IP addresses assigned for each port on
each node in the cluster.
• The svctask cfgportip command assigns an IP address to each node’s Ethernet port for
iSCSI I/O.
The following commands are new commands for managing the cluster IP addresses:
• The svcinfo lsclusterip command returns a list of the cluster management IP
addresses configured for each port.
• The svctask chclusterip command modifies the IP configuration parameters for the
cluster.
You can obtain a detailed description of how to use these commands in Chapter 7, “SAN
Volume Controller operations using the command-line interface” on page 339.
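As a sketch of assigning an iSCSI portal address, the following commands configure
Ethernet port 1 of node 1 for iSCSI I/O and then list the result (all addresses are
placeholders):

svctask cfgportip -node 1 -ip <ipv4_address> -mask <subnet_mask> -gw <gateway> 1
svcinfo lsportip

The trailing 1 on the cfgportip command is the clustered Ethernet port ID (Port 1 or Port 2).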
The parameters for remote services (ssh and Web services) will remain associated with the
cluster object. During a software upgrade from 4.3.1, the configuration settings for the cluster
will be used to configure clustered Ethernet Port1.
For iSCSI-based access, using two separate networks and separating iSCSI traffic within the
networks by using a dedicated VLAN path for storage traffic will prevent any IP interface,
switch, or target port failure from compromising the host server’s access to the VDisk LUNs.
5.5 AIX-specific information
The following section details specific information that relates to the connection of AIX-based
hosts into an SVC environment.
AIX-specific information: In this section, the IBM System p information applies to all AIX
hosts that are listed on the SVC interoperability support site, including IBM System i
partitions and IBM JS blades.
5.5.1 Configuring the AIX host
To configure the AIX host, follow these steps:
1. Install the HBAs in the AIX host system.
2. Ensure that you have installed the correct operating systems and version levels on your
host, including any updates and Authorized Program Analysis Reports (APARs) for the
operating system.
3. Connect the AIX host system to the FC switches.
4. Configure the FC switches (zoning) if needed.
5. Install and configure the 2145 and IBM Subsystem Device Driver (SDD) drivers.
6. Configure the host, VDisks, and host mapping on the SAN Volume Controller.
7. Run the cfgmgr command to discover the VDisks created on the SVC.
The following sections detail the current support information. It is vital that you check the Web
sites that are listed regularly for any updates.
5.5.2 Operating system versions and maintenance levels
At the time of writing, the following AIX levels are supported:
• AIX V4.3.3
• AIX 5L™ V5.1
• AIX 5L V5.2
• AIX 5L V5.3
• AIX V6.1.3
For the latest information, and device driver support, always refer to this site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_AIX
5.5.3 HBAs for IBM System p hosts
Ensure that your IBM System p AIX hosts use the correct host bus adapters (HBAs).
The following IBM Web site provides current interoperability information about supported
HBAs and firmware:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_pSeries
Note: The maximum number of FC ports that are supported in a single host (or logical
partition) is four. These ports can be four single-port adapters or two dual-port adapters or
a combination, as long as the maximum number of ports that are attached to the SAN
Volume Controller does not exceed four.
Installing the host attachment script on IBM System p hosts
To attach an IBM System p AIX host, you must install the AIX host attachment script.
Perform the following steps to install the host attachment scripts:
1. Access the following Web site:
http://www.ibm.com/servers/storage/support/software/sdd/downloading.html
2. Select Host Attachment Scripts for AIX.
3. Select either Host Attachment Script for SDDPCM or Host Attachment Scripts for
SDD from the options, depending on your multipath device driver.
4. Download the AIX host attachment script for your multipath device driver.
5. Follow the instructions that are provided on the Web site or any readme files to install the
script.
5.5.4 Configuring for fast fail and dynamic tracking
For host systems that run an AIX 5L V5.2 or later operating system, you can achieve the
best results by using the fast fail and dynamic tracking attributes.
Perform the following steps to configure your host system to use the fast fail and dynamic
tracking attributes:
1. Issue the following command to set the FC SCSI I/O Controller Protocol Device to each
Adapter:
chdev -l fscsi0 -a fc_err_recov=fast_fail
The previous command was for adapter fscsi0. Example 5-1 shows the command for both
adapters on our test system running AIX 5L V5.3.
Example 5-1 Enable fast fail
#chdev -l fscsi0 -a fc_err_recov=fast_fail
fscsi0 changed
#chdev -l fscsi1 -a fc_err_recov=fast_fail
fscsi1 changed
2. Issue the following command to enable dynamic tracking for each FC device:
chdev -l fscsi0 -a dyntrk=yes
The previous example command was for adapter fscsi0. Example 5-2 on page 164 shows
the command for both adapters on our test system running AIX 5L V5.3.
Example 5-2 Enable dynamic tracking
#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed
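You can verify that both attributes are now active on each adapter by listing the device
attributes (a quick check; output formatting varies by AIX level):

lsattr -El fscsi0 | egrep "dyntrk|fc_err_recov"

The output must show dyntrk set to yes and fc_err_recov set to fast_fail.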
Host adapter configuration settings
You can check the availability of the FC host adapters by using the command shown in
Example 5-3.
Example 5-3 FC host adapter availability
#lsdev -Cc adapter |grep fcs
fcs0    Available 1Z-08    FC Adapter
fcs1    Available 1D-08    FC Adapter
You can find the worldwide port name (WWPN) of your FC host adapter and check the
firmware level, as shown in Example 5-4. The Network Address field shows the WWPN of
the FC adapter.
Example 5-4 FC host adapter settings and WWPN
#lscfg -vpl fcs0
fcs0    U0.1-P2-I4/Q1    FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number..................
00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: [email protected]
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
5.5.5 Subsystem Device Driver (SDD) Path Control Module (SDDPCM)
SDD is a pseudo device driver that is designed to support the multipath configuration
environments within IBM products. It resides on a host system along with the native disk
device driver and provides the following functions:
• Enhanced data availability
• Dynamic I/O load balancing across multiple paths
• Automatic path failover protection
• Concurrent download of licensed internal code
SDD works by grouping each physical path to an SVC logical unit number (LUN), represented
by an individual hdisk device within AIX, into a vpath device. For example, if you have four
physical paths to an SVC LUN, this design produces four new hdisk devices within AIX. From
this point forward, AIX uses this vpath device to route I/O to the SVC LUN. Therefore, when
making a Logical Volume Manager (LVM) Volume Group using mkvg, we specify the vpath
device as the destination and not the hdisk device.
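As a sketch (the volume group name is illustrative), the following commands list the
configured vpath devices and then create a volume group directly on a vpath:

lsvpcfg
mkvg -y itsovg vpath0

The lsvpcfg utility, which is part of SDD, shows each vpath device together with the hdisk
devices (paths) that it groups.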
The SDD support matrix for AIX is available at this Web site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_AIX
SDD/SDDPCM installation
After downloading the appropriate version of SDD, install it using the standard AIX installation
procedure. The currently supported SDD Levels are available at:
http://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5329528&taskind=2
Check the driver readme file and make sure that your AIX system fulfills all of the prerequisites.
SDD installation
In Example 5-5, we show the appropriate version of SDD downloaded into the /tmp/sdd
directory. From here, we extract it and initiate the inutoc command, which generates a
dot.toc (.toc) file that is needed by the installp command prior to installing SDD. Finally,
we initiate the installp command, which installs SDD onto this AIX host.
Example 5-5 Installing SDD on AIX
#ls -l
total 3032
-rw-r-----   1 root   system   1546240 Jun 24 15:29 devices.sdd.53.rte.tar
#tar -tvf devices.sdd.53.rte.tar
-rw-r----- 0 0 1536000 Oct 06 11:37:13 2006 devices.sdd.53.rte
#tar -xvf devices.sdd.53.rte.tar
x devices.sdd.53.rte, 1536000 bytes, 3000 media blocks.
# inutoc .
#ls -l
total 6032
-rw-r--r--   1 root   system       476 Jun 24 15:33 .toc
-rw-r-----   1 root   system   1536000 Oct 06 2006  devices.sdd.53.rte
-rw-r-----   1 root   system   1546240 Jun 24 15:29 devices.sdd.53.rte.tar
# installp -ac -d . all
Example 5-6 checks the installation of SDD.
Example 5-6 Checking SDD device driver
#lslpp -l | grep -i sdd
devices.sdd.53.rte   1.7.0.0   COMMITTED   IBM Subsystem Device Driver
devices.sdd.53.rte   1.7.0.0   COMMITTED   IBM Subsystem Device Driver
The 2145 devices.fcp file: A specific “2145” devices.fcp file no longer exists. The
standard devices.fcp file now has combined support for SVC/Enterprise Storage
Server/DS8000/DS6000.
We can also check that the SDD server is operational, as shown in Example 5-7.
Example 5-7 SDD server is operational
#lssrc -s sddsrv
Subsystem        Group        PID      Status
sddsrv                        168430   active
#ps -aef | grep sdd
    root 135174  41454   0 15:38:20  pts/1  0:00 grep sdd
    root 168430 127292   0 15:10:27    -    0:00 /usr/sbin/sddsrv
Enabling the SDD or SDDPCM Web interface is shown in 5.15, “Using SDDDSM, SDDPCM,
and SDD Web interface” on page 251.
SDDPCM installation
In Example 5-8, we show the appropriate version of SDDPCM downloaded into the
/tmp/sddpcm directory. From here, we extract it and initiate the inutoc command, which
generates a dot.toc (.toc) file that is needed by the installp command prior to installing
SDDPCM. Finally, we initiate the installp command, which installs SDDPCM onto this AIX
host.
Example 5-8 Installing SDDPCM on AIX
# ls -l
total 3232
-rw-r-----   1 root    system   1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r----- 271001 449628 1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r--   1 root    system       531 Jul 15 13:25 .toc
-rw-r-----   1 271001  449628   1638400 Oct 31 2007  devices.sddpcm.61.rte
-rw-r-----   1 root    system   1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# installp -ac -d . all
Example 5-9 checks the installation of SDDPCM.
Example 5-9 Checking SDDPCM device driver
# lslpp -l | grep sddpcm
devices.sddpcm.61.rte   2.2.0.0   COMMITTED   IBM SDD PCM for AIX V61
devices.sddpcm.61.rte   2.2.0.0   COMMITTED   IBM SDD PCM for AIX V61
Enabling the SDD or SDDPCM Web interface is shown in 5.15, “Using SDDDSM, SDDPCM,
and SDD Web interface” on page 251.
5.5.6 Discovering the assigned VDisk using SDD and AIX 5L V5.3
Before adding a new volume from the SVC, the AIX host system Kanaga had a simple,
typical configuration, as shown in Example 5-10.
Example 5-10 Status of AIX host system Kanaga
#lspv
hdisk0   0009cddaea97bf61   rootvg   active
hdisk1   0009cdda43c9dfd5   rootvg   active
hdisk2   0009cddabaef1d99   rootvg   active
#lsvg
rootvg
In Example 5-11, we show SVC configuration information relating to our AIX host, specifically,
the host definition, the VDisks created for this host, and the VDisk-to-host mappings for this
configuration.
Using the SVC CLI, we can check that the host WWPNs, which are listed in Example 5-4 on
page 164, are logged into the SVC for the host definition “Kanaga”, by entering:
svcinfo lshost Kanaga
We can also find the serial numbers of the VDisks using the following command:
svcinfo lshostvdiskmap
Example 5-11 SVC definitions for host system Kanaga
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Kanaga
id 2
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 10000000C932A7FB
node_logged_in_count 2
state active
WWPN 10000000C932A800
node_logged_in_count 2
state active
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Kanaga
id name   SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
2  Kanaga 0       13       Kanaga0001 10000000C932A7FB 60050768018301BF2800000000000015
2  Kanaga 1       14       Kanaga0002 10000000C932A7FB 60050768018301BF2800000000000016
2  Kanaga 2       15       Kanaga0003 10000000C932A7FB 60050768018301BF2800000000000017
2  Kanaga 3       16       Kanaga0004 10000000C932A7FB 60050768018301BF2800000000000018
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0001
id 13
name Kanaga0001
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 5.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000015
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status offline
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskhostmap Kanaga0001
id name       SCSI_id host_id host_name wwpn             vdisk_UID
13 Kanaga0001 0       2       Kanaga    10000000C932A7FB 60050768018301BF2800000000000015
13 Kanaga0001 0       2       Kanaga    10000000C932A800 60050768018301BF2800000000000015
We need to run cfgmgr on the AIX host to discover the new disks and to enable us to start the
vpath configuration. Running the configuration manager (cfgmgr) on each FC adapter alone
will not create the vpaths, only the new hdisks; to configure the vpaths, we need to run the
cfallvpath command after issuing the cfgmgr command on each of the FC adapters:
# cfgmgr -l fcs0
# cfgmgr -l fcs1
# cfallvpath
Alternatively, use the cfgmgr -vS command to check the complete system. This command
will probe the devices sequentially across all FC adapters and attached disks; however, it is
extremely time intensive:
# cfgmgr -vS
The raw SVC disk configuration of the AIX host system now appears, as shown in
Example 5-12. We can see the multiple hdisk devices, representing the multiple routes to the
same SVC LUN, and we can see the vpath devices available for configuration.
Example 5-12 VDisks from SVC added with multiple separate paths for each VDisk
#lsdev -Cc disk
hdisk0  Available 1S-08-00-8,0   16 Bit LVD SCSI Disk Drive
hdisk1  Available 1S-08-00-9,0   16 Bit LVD SCSI Disk Drive
hdisk2  Available 1S-08-00-10,0  16 Bit LVD SCSI Disk Drive
hdisk3  Available 1Z-08-02       SAN Volume Controller Device
hdisk4  Available 1Z-08-02       SAN Volume Controller Device
hdisk5  Available 1Z-08-02       SAN Volume Controller Device
hdisk6  Available 1Z-08-02       SAN Volume Controller Device
hdisk7  Available 1D-08-02       SAN Volume Controller Device
hdisk8  Available 1D-08-02       SAN Volume Controller Device
hdisk9  Available 1D-08-02       SAN Volume Controller Device
hdisk10 Available 1D-08-02       SAN Volume Controller Device
hdisk11 Available 1Z-08-02       SAN Volume Controller Device
hdisk12 Available 1Z-08-02       SAN Volume Controller Device
hdisk13 Available 1Z-08-02       SAN Volume Controller Device
hdisk14 Available 1Z-08-02       SAN Volume Controller Device
hdisk15 Available 1D-08-02       SAN Volume Controller Device
hdisk16 Available 1D-08-02       SAN Volume Controller Device
hdisk17 Available 1D-08-02       SAN Volume Controller Device
hdisk18 Available 1D-08-02       SAN Volume Controller Device
vpath0  Available                Data Path Optimizer Pseudo Device Driver
vpath1  Available                Data Path Optimizer Pseudo Device Driver
vpath2  Available                Data Path Optimizer Pseudo Device Driver
vpath3  Available                Data Path Optimizer Pseudo Device Driver
To make a Volume Group (for example, itsoaixvg) to host the vpath1 device, we use the mkvg
command passing the vpath device as a parameter instead of the hdisk device, which is
shown in Example 5-13 on page 170.
Example 5-13 Running the mkvg command
#mkvg -y itsoaixvg vpath1
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
Now, by running the lspv command, we can see that vpath1 has been assigned into the
itsoaixvg Volume Group, as shown in Example 5-14.
Example 5-14 Showing the vpath assignment into the Volume Group
#lspv
hdisk0
hdisk1
hdisk2
vpath1
0009cddaea97bf61
0009cdda43c9dfd5
0009cddabaef1d99
0009cddabce27ba5
rootvg
rootvg
rootvg
itsoaixvg
active
active
active
active
The lsvpcfg command also displays the new relationship between vpath1 and the itsoaixvg
Volume Group, as well as each hdisk that is associated with vpath1, as shown in
Example 5-15.
Example 5-15 Displaying the vpath to hdisk to Volume Group relationship
#lsvpcfg
vpath0 (Avail ) 60050768018301BF2800000000000015 = hdisk3 (Avail ) hdisk7 (Avail )
vpath1 (Avail pv itsoaixvg) 60050768018301BF2800000000000016 = hdisk4 (Avail ) hdisk8 (Avail )
vpath2 (Avail ) 60050768018301BF2800000000000017 = hdisk5 (Avail ) hdisk9 (Avail )
vpath3 (Avail ) 60050768018301BF2800000000000018 = hdisk6 (Avail ) hdisk10 (Avail )
In Example 5-16, running the lspv vpath1 command shows a more verbose output for
vpath1.
Example 5-16 Verbose details of vpath1
#lspv vpath1
PHYSICAL VOLUME:    vpath1                    VOLUME GROUP:      itsoaixvg
PV IDENTIFIER:      0009cddabce27ba5 VG IDENTIFIER 0009cdda00004c000000011abce27c89
PV STATE:           active
STALE PARTITIONS:   0                         ALLOCATABLE:       yes
PP SIZE:            8 megabyte(s)             LOGICAL VOLUMES:   0
TOTAL PPs:          639 (5112 megabytes)      VG DESCRIPTORS:    2
FREE PPs:           639 (5112 megabytes)      HOT SPARE:         no
USED PPs:           0 (0 megabytes)           MAX REQUEST:       256 kilobytes
FREE DISTRIBUTION:  128..128..127..128..128
USED DISTRIBUTION:  00..00..00..00..00
5.5.7 Using SDD
Within SDD, we are able to check the status of the adapters and devices now under SDD
control with the use of the datapath command set. In Example 5-17 on page 171, we can see
the status of both HBA cards as NORMAL and ACTIVE.
Example 5-17 SDD commands used to check the availability of the adapters
#datapath query adapter

Active Adapters :2

Adpt#    Name     State     Mode      Select    Errors    Paths    Active
    0    fscsi0   NORMAL    ACTIVE         0         0        4         1
    1    fscsi1   NORMAL    ACTIVE        56         0        4         1
In Example 5-18, we see detailed information about each vpath device. Initially, we see that
vpath1 is the only vpath device in an open status. It is open, because it is the only vpath that
is currently assigned to a Volume Group. Additionally, for vpath1, we see that only path 1 and
path 2 have been selected (used) by SDD. These paths are the two physical paths that
connect to the preferred node of the I/O Group of this SVC cluster. The remaining two paths
within this vpath device are only accessed in a failover scenario.
Example 5-18 SDD commands that are used to check the availability of the devices
#datapath query device

Total Devices : 4

DEV#:   0  DEVICE NAME: vpath0  TYPE: 2145        POLICY: Optimized
SERIAL: 60050768018301BF2800000000000015
==========================================================================
Path#      Adapter/Hard Disk      State     Mode      Select      Errors
    0          fscsi0/hdisk3      CLOSE     NORMAL         0           0
    1          fscsi1/hdisk7      CLOSE     NORMAL         0           0
    2         fscsi0/hdisk11      CLOSE     NORMAL         0           0
    3         fscsi1/hdisk15      CLOSE     NORMAL         0           0

DEV#:   1  DEVICE NAME: vpath1  TYPE: 2145        POLICY: Optimized
SERIAL: 60050768018301BF2800000000000016
==========================================================================
Path#      Adapter/Hard Disk      State     Mode      Select      Errors
    0          fscsi0/hdisk4      OPEN      NORMAL         0           0
    1          fscsi1/hdisk8      OPEN      NORMAL        28           0
    2         fscsi0/hdisk12      OPEN      NORMAL        32           0
    3         fscsi1/hdisk16      OPEN      NORMAL         0           0

DEV#:   2  DEVICE NAME: vpath2  TYPE: 2145        POLICY: Optimized
SERIAL: 60050768018301BF2800000000000017
==========================================================================
Path#      Adapter/Hard Disk      State     Mode      Select      Errors
    0          fscsi0/hdisk5      CLOSE     NORMAL         0           0
    1          fscsi1/hdisk9      CLOSE     NORMAL         0           0
    2         fscsi0/hdisk13      CLOSE     NORMAL         0           0
    3         fscsi1/hdisk17      CLOSE     NORMAL         0           0

DEV#:   3  DEVICE NAME: vpath3  TYPE: 2145        POLICY: Optimized
SERIAL: 60050768018301BF2800000000000018
==========================================================================
Path#      Adapter/Hard Disk      State     Mode      Select      Errors
    0          fscsi0/hdisk6      CLOSE     NORMAL         0           0
    1         fscsi1/hdisk10      CLOSE     NORMAL         0           0
    2         fscsi0/hdisk14      CLOSE     NORMAL         0           0
    3         fscsi1/hdisk18      CLOSE     NORMAL         0           0
5.5.8 Creating and preparing volumes for use with AIX 5L V5.3 and SDD
The itsoaixvg Volume Group is created using vpath1. A logical volume is created using the
Volume Group. Then, the teslv1 and teslv2 file systems are created and mounted on the
/teslv1 and /teslv2 mount points, as shown in Example 5-19.
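The creation commands themselves are not repeated here; as a minimal sketch (assuming the jfs2 options that are used for the SDDPCM host in Example 5-29 and the 2 GB sizes that df -g reports in Example 5-19), the file systems can be created and mounted as follows:
#crfs -v jfs2 -g itsoaixvg -a size=2G -m /teslv1 -p rw -a agblksize=4096
#crfs -v jfs2 -g itsoaixvg -a size=2G -m /teslv2 -p rw -a agblksize=4096
#mount /teslv1
#mount /teslv2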
Example 5-19 Host system new Volume Group and file system configuration
#lsvg -o
itsoaixvg
rootvg
#lsvg -l itsoaixvg
itsoaixvg:
LV NAME     TYPE      LPs  PPs  PVs  LV STATE    MOUNT POINT
loglv01     jfs2log   1    1    1    open/syncd  N/A
fslv00      jfs2      128  128  1    open/syncd  /teslv1
fslv01      jfs2      128  128  1    open/syncd  /teslv2
#df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           0.03      0.01   62%     1357    31% /
/dev/hd2           9.06      4.32   53%    17341     2% /usr
/dev/hd9var        0.03      0.03   10%      137     3% /var
/dev/hd3           0.12      0.12    7%       31     1% /tmp
/dev/hd1           0.03      0.03    2%       11     1% /home
/proc                 -         -    -         -      - /proc
/dev/hd10opt       0.09      0.01   86%     1947    38% /opt
/dev/lv00          0.41      0.39    4%       19     1% /usr/sys/inst.images
/dev/fslv00        2.00      2.00    1%        4     1% /teslv1
/dev/fslv01        2.00      2.00    1%        4     1% /teslv2
5.5.9 Discovering the assigned VDisk using AIX V6.1 and SDDPCM
Before adding a new volume from the SVC, the AIX host system Atlantic had a simple, typical
configuration, as shown in Example 5-20.
Example 5-20 Status of AIX host system Atlantic
# lspv
hdisk0          0009cdcaeb48d3a3    rootvg    active
hdisk1          0009cdcac26dbb7c    rootvg    active
hdisk2          0009cdcab5657239    rootvg    active
# lsvg
rootvg
In Example 5-22 on page 174, we show the SVC configuration information relating to our AIX
host, specifically the host definition, the VDisks that were created for this host, and the
VDisk-to-host mappings for this configuration.
Our example host is named Atlantic. Example 5-21 shows the HBA information for our
example host.
Example 5-21 Example of HBA information for the host Atlantic
# lsdev -Cc adapter | grep fcs
fcs1 Available 1H-08  FC Adapter
fcs2 Available 1D-08  FC Adapter
# lscfg -vpl fcs1
  fcs1             U0.1-P2-I4/Q1  FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A644
Manufacturer................001E
Customer Card ID Number.....2765
FRU Number..................00P4495
Network Address.............10000000C932A865
ROS Level and ID............02C039D0
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401411
Device Specific.(Z5)........02C039D0
Device Specific.(Z6)........064339D0
Device Specific.(Z7)........074339D0
Device Specific.(Z8)........20000000C932A865
Device Specific.(Z9)........CS3.93A0
Device Specific.(ZA)........C1D3.93A0
Device Specific.(ZB)........C2D3.93A0
Device Specific.(ZC)........00000000
Hardware Location Code......U0.1-P2-I4/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: [email protected]
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
# lscfg -vpl fcs2
  fcs2             U0.1-P2-I5/Q1  FC Adapter
Part Number.................80P4383
EC Level....................A
Serial Number...............1F5350CD42
Manufacturer................001F
Customer Card ID Number.....2765
FRU Number..................80P4384
Network Address.............10000000C94C8C1C
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C94C8C1C
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(ZC)........00000000
Hardware Location Code......U0.1-P2-I5/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: [email protected]
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
#
Using the SVC CLI, we can check that the host WWPNs, which are listed in Example 5-21, are
logged into the SVC for the host definition Atlantic, by entering this command:
svcinfo lshost Atlantic
We can also discover the serial numbers of the VDisks by using the following command:
svcinfo lshostvdiskmap Atlantic
Example 5-22 SVC definitions for host system Atlantic
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Atlantic
id name     SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
8  Atlantic 0       14       Atlantic0001 10000000C94C8C1C 6005076801A180E90800000000000060
8  Atlantic 1       22       Atlantic0002 10000000C94C8C1C 6005076801A180E90800000000000061
8  Atlantic 2       23       Atlantic0003 10000000C94C8C1C 6005076801A180E90800000000000062
IBM_2145:ITSO-CLS2:admin>
We need to run the cfgmgr command on the AIX host to discover the new disks and to enable
us to use the disks:
# cfgmgr -l fcs1
# cfgmgr -l fcs2
Alternatively, use the cfgmgr -vS command to check the complete system. This command
will probe the devices sequentially across all FC adapters and attached disks; however, it is
extremely time intensive:
# cfgmgr -vS
The raw SVC disk configuration of the AIX host system now appears, as shown in
Example 5-23. We can see the MPIO FC 2145 devices, with one device representing each
SVC LUN.
Example 5-23 VDisks from SVC added with multiple paths for each VDisk
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0   16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0   16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0  16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02       MPIO FC 2145
hdisk4 Available 1D-08-02       MPIO FC 2145
hdisk5 Available 1D-08-02       MPIO FC 2145
To make a Volume Group (for example, itsoaixvg) to host the LUNs, we use the mkvg
command passing the device as a parameter. This action is shown in Example 5-24.
Example 5-24 Running the mkvg command
# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2
Now, by running the lspv command, we can see the disks and the assigned Volume Groups,
as shown in Example 5-25.
Example 5-25 Showing the hdisk assignment into the Volume Groups
# lspv
hdisk0          0009cdcaeb48d3a3    rootvg      active
hdisk1          0009cdcac26dbb7c    rootvg      active
hdisk2          0009cdcab5657239    rootvg      active
hdisk3          0009cdca28b589f5    itsoaixvg   active
hdisk4          0009cdca28b87866    itsoaixvg1  active
hdisk5          0009cdca28b8ad5b    itsoaixvg2  active
Running the lspv hdisk3 command shows more verbose output for one of the SVC LUNs, as
shown in Example 5-26 on page 176.
Example 5-26 Verbose details of hdisk3
# lspv hdisk3
PHYSICAL VOLUME:    hdisk3                    VOLUME GROUP:      itsoaixvg
PV IDENTIFIER:      0009cdca28b589f5 VG IDENTIFIER 0009cdca00004c000000011b28b58ae2
PV STATE:           active
STALE PARTITIONS:   0                         ALLOCATABLE:       yes
PP SIZE:            8 megabyte(s)             LOGICAL VOLUMES:   0
TOTAL PPs:          511 (4088 megabytes)      VG DESCRIPTORS:    2
FREE PPs:           511 (4088 megabytes)      HOT SPARE:         no
USED PPs:           0 (0 megabytes)           MAX REQUEST:       256 kilobytes
FREE DISTRIBUTION:  103..102..102..102..102
USED DISTRIBUTION:  00..00..00..00..00
#
5.5.10 Using SDDPCM
Within SDDPCM, we are able to check the status of the adapters and devices that are now
under SDDPCM control with the use of the pcmpath command set. In Example 5-27, we can
see the status and mode of both HBA cards as NORMAL and ACTIVE.
Example 5-27 SDDPCM commands that are used to check the availability of the adapters
# pcmpath query adapter

Active Adapters :2

Adpt#    Name     State     Mode      Select    Errors    Paths    Active
    0    fscsi1   NORMAL    ACTIVE       407         0        6         6
    1    fscsi2   NORMAL    ACTIVE       425         0        6         6
From Example 5-28, we see detailed information about each MPIO device. An asterisk (*)
next to a path number marks a nonpreferred path, that is, a path to the node that is not the
preferred node for this VDisk. The two paths without an asterisk are the physical paths that
connect to the preferred node of the I/O Group of this SVC cluster and service most of the
I/O; the two nonpreferred paths within this MPIO device are primarily accessed in a failover
scenario.
Example 5-28 SDDPCM commands that are used to check the availability of the devices
# pcmpath query device

Total Devices : 3

DEV#:   3  DEVICE NAME: hdisk3  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path#      Adapter/Path Name      State     Mode      Select      Errors
    0           fscsi1/path0      OPEN      NORMAL       152           0
    1*          fscsi1/path1      OPEN      NORMAL        48           0
    2*          fscsi2/path2      OPEN      NORMAL        48           0
    3           fscsi2/path3      OPEN      NORMAL       160           0

DEV#:   4  DEVICE NAME: hdisk4  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path#      Adapter/Path Name      State     Mode      Select      Errors
    0*          fscsi1/path0      OPEN      NORMAL        37           0
    1           fscsi1/path1      OPEN      NORMAL        66           0
    2           fscsi2/path2      OPEN      NORMAL        71           0
    3*          fscsi2/path3      OPEN      NORMAL        38           0

DEV#:   5  DEVICE NAME: hdisk5  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path#      Adapter/Path Name      State     Mode      Select      Errors
    0           fscsi1/path0      OPEN      NORMAL        66           0
    1*          fscsi1/path1      OPEN      NORMAL        38           0
    2*          fscsi2/path2      OPEN      NORMAL        38           0
    3           fscsi2/path3      OPEN      NORMAL        70           0
#
5.5.11 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The itsoaixvg Volume Group is created using hdisk3. A logical volume is created using the
Volume Group. Then, a file system is created and mounted on the /itsoaixvg mount point, as
shown in Example 5-29.
Example 5-29 Host system new Volume Group and file system configuration
# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME     TYPE      LPs  PPs  PVs  LV STATE      MOUNT POINT
loglv00     jfs2log   1    1    1    closed/syncd  N/A
fslv00      jfs2      384  384  1    closed/syncd  /itsoaixvg
#
5.5.12 Expanding an AIX volume
It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Certain
operating systems, such as AIX 5L Version 5.2 and later, can handle a volume being
expanded even if the host has applications running. In the following examples, we show the
procedure with AIX 5L V5.3 and SDD, but the procedure is the same when using AIX V6.1
and SDDPCM. The Volume Group to which the VDisk is assigned, if it is assigned to any
Volume Group, must not be a concurrent-accessible Volume Group. A VDisk that is defined in
a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless
the mapping is removed, which means that the FlashCopy, Metro Mirror, or Global Mirror
relationship on that VDisk has to be stopped before it is possible to expand the VDisk.
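A quick way to verify that a VDisk is free of FlashCopy mappings before you expand it is to list the mappings in which it participates; empty output, together with blank FC_id, FC_name, RC_id, and RC_name fields in the svcinfo lsvdisk output, indicates that the VDisk can be expanded. This check is a suggested precaution rather than part of the documented procedure:
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskfcmappings Kanaga0002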
The following steps show how to expand a volume on an AIX host, where the volume is a
VDisk from the SVC:
1. To list a VDisk size, use the svcinfo lsvdisk <VDisk_name> command. Example 5-30
shows the Kanaga0002 VDisk that we have allocated to our AIX server before we expand it.
Here, the capacity is 5 GB, and the vdisk_UID is 60050768018301BF2800000000000016.
Example 5-30 Expanding a VDisk on AIX
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0002
id 14
name Kanaga0002
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 5.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000016
throttling 0
preferred_node_id 2
fast_write_state not_empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
2. To identify the vpath with which this VDisk is associated on the AIX host, we use the
datapath query device SDD command, as shown in Example 5-18 on page 171. Here, we
can see that the VDisk with vdisk_UID 60050768018301BF2800000000000016 is associated
with vpath1, because the vdisk_UID matches the SERIAL field on the AIX host.
3. To see the size of the volume on the AIX host, we use the lspv command, as shown in
Example 5-31. This command shows that the volume size is 5,112 MB, equal to 5 GB, as
shown in Example 5-30 on page 178.
Example 5-31 Finding the size of the volume in AIX
#lspv vpath1
PHYSICAL VOLUME:    vpath1                    VOLUME GROUP:      itsoaixvg
PV IDENTIFIER:      0009cddabce27ba5 VG IDENTIFIER 0009cdda00004c000000011abce27c89
PV STATE:           active
STALE PARTITIONS:   0                         ALLOCATABLE:       yes
PP SIZE:            8 megabyte(s)             LOGICAL VOLUMES:   2
TOTAL PPs:          639 (5112 megabytes)      VG DESCRIPTORS:    2
FREE PPs:           0 (0 megabytes)           HOT SPARE:         no
USED PPs:           639 (5112 megabytes)      MAX REQUEST:       256 kilobytes
FREE DISTRIBUTION:  00..00..00..00..00
USED DISTRIBUTION:  128..128..127..128..128
4. To expand the volume on the SVC, we use the svctask expandvdisksize command to
increase the capacity on the VDisk. In Example 5-32, we expand the VDisk by 1 GB.
Example 5-32 Expanding a VDisk
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 1 -unit gb Kanaga0002
5. To check that the VDisk has been expanded, use the svcinfo lsvdisk command. Here,
we can see that the Kanaga0002 VDisk has been expanded to a capacity of 6 GB
(Example 5-33).
Example 5-33 Verifying that the VDisk has been expanded
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0002
id 14
name Kanaga0002
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 6.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000016
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 6.00GB
real_capacity 6.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
6. AIX has not yet recognized a change in the capacity of the vpath1 volume, because no
dynamic mechanism exists within the operating system to provide a configuration update
communication. Therefore, to encourage AIX to recognize the extra capacity on the
volume without stopping any applications, we use the chvg -g fc_source_vg command,
where fc_source_vg is the name of the Volume Group to which vpath1 belongs.
If AIX does not return any messages, the command was successful, and the volume
changes in this Volume Group have been saved. If AIX cannot see any changes in the
volumes, it will return an explanatory message.
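For the itsoaixvg Volume Group that is used in these examples, the command is:
#chvg -g itsoaixvg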
7. To verify that the size of vpath1 has changed, we use the lspv command again, as shown
in Example 5-34.
Example 5-34 Verify that AIX can see the newly expanded VDisk
#lspv vpath1
PHYSICAL VOLUME:    vpath1                    VOLUME GROUP:      itsoaixvg
PV IDENTIFIER:      0009cddabce27ba5 VG IDENTIFIER 0009cdda00004c000000011abce27c89
PV STATE:           active
STALE PARTITIONS:   0                         ALLOCATABLE:       yes
PP SIZE:            8 megabyte(s)             LOGICAL VOLUMES:   2
TOTAL PPs:          767 (6136 megabytes)      VG DESCRIPTORS:    2
FREE PPs:           128 (1024 megabytes)      HOT SPARE:         no
USED PPs:           639 (5112 megabytes)      MAX REQUEST:       256 kilobytes
FREE DISTRIBUTION:  00..00..00..00..128
USED DISTRIBUTION:  154..153..153..153..26
Here, we can see that the volume now has a size of 6,136 MB, equal to 6 GB. Now, we can
expand the file systems in this Volume Group to use the new capacity.
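As a minimal sketch (assuming the jfs2 file system /teslv1 from Example 5-19), a file system in this Volume Group can be grown into the new capacity with the chfs command:
#chfs -a size=+1G /teslv1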
5.5.13 Removing an SVC volume on AIX
Before we remove a VDisk that is assigned to an AIX host, we have to make sure that there is
no data on it and that no applications depend on the volume. This procedure is a standard
AIX procedure: we move all data off the volume, remove the volume from its Volume Group,
and delete the vpath and the hdisks that are associated with the vpath. Next, we remove the
VDisk-to-host mapping on the SVC. If the VDisk is no longer needed, we then delete it so that
its extents become available when we create a new VDisk on the SVC.
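As a hedged sketch of this sequence (assuming the vpath1, itsoaixvg, Kanaga, and Kanaga0002 names from the previous examples, SDD, and that all data has already been moved off the volume), the AIX-side and SVC-side commands are:
#umount /teslv1
#reducevg -d itsoaixvg vpath1
#rmdev -dl vpath1
#rmdev -dl hdisk4
(repeat rmdev -dl for each hdisk that is associated with the vpath)
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Kanaga Kanaga0002
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk Kanaga0002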
5.5.14 Running SVC commands from an AIX host system
To issue CLI commands, you must install and prepare the SSH client system on the AIX host
system. For AIX 5L V5.1 and later, you can get OpenSSH from the Bonus Packs. You also
need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for Power
Systems™. For AIX V4.3.3, the software is available from the AIX toolbox for Linux
applications.
The AIX installation images from IBM developerWorks® are available at this Web site:
http://sourceforge.net/projects/openssh-aix
Perform the following steps:
1. To generate the key files on AIX, issue the following command:
ssh-keygen -t rsa -f filename
The -t parameter specifies the type of key to generate: rsa1, rsa2, or dsa. To generate an
rsa2 key, specify the value rsa; for an rsa1 key, the type must be rsa1. When creating the key
for the SVC, use type rsa2 (that is, -t rsa). The -f parameter specifies the file names of the
private and public keys on the AIX server (the public key gets the extension .pub appended to
the file name).
2. Next, you have to install the public key on the SVC, which can be done by using the Master
Console. Copy the public key to the Master Console, and install the key to the SVC, as
described in Chapter 4, “SAN Volume Controller initial configuration” on page 103.
3. On the AIX server, make sure that the private key and the public key are in the .ssh
directory and in the home directory of the user.
4. To connect to the SVC and use a CLI session from the AIX host, issue the following
command:
ssh -l admin -i filename svc
5. You can also issue the commands directly on the AIX host, which is useful when making
scripts. To do this, add the SVC commands to the previous command. For example, to list
the hosts that are defined on the SVC, enter the following command:
ssh -l admin -i filename svc svcinfo lshost
In this command, -l admin is the user on the SVC to which we will connect, -i filename is
the filename of the private key generated, and svc is the name or IP address of the SVC.
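As a short illustration (the key file name /home/admin/.ssh/svc_key and the cluster address ITSO-CLS1 are assumptions in this sketch), these steps can be combined in a ksh script:
#!/usr/bin/ksh
# List the hosts that are defined on the SVC cluster
ssh -l admin -i /home/admin/.ssh/svc_key ITSO-CLS1 svcinfo lshost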
5.6 Windows-specific information
In the following sections, we detail specific information about the connection of
Windows-based hosts to the SVC environment.
5.6.1 Configuring Windows Server 2000, Windows 2003 Server, and Windows
Server 2008 hosts
This section provides an overview of the requirements for attaching the SVC to a host running
Windows Server 2000, Windows 2003 Server, or Windows Server 2008.
Before you attach the SVC to your host, make sure that all of the following requirements are
fulfilled:
For Windows Server 2003 x64 Edition operating system, you must install the Hotfix from
KB 908980. If you do not install it before operation, preferred pathing is not available. You
can find the Hotfix at this Web site:
http://support.microsoft.com/kb/908980
Check LUN limitations for your host system. Ensure that there are enough FC adapters
installed in the server to handle the total LUNs that you want to attach.
5.6.2 Configuring Windows
To configure the Windows hosts, follow these steps:
1. Make sure that the latest OS Hotfixes are applied to your Microsoft server.
2. Use the latest firmware and driver levels on your host system.
3. Install the HBA or HBAs on the Windows server, as shown in 5.6.4, “Host adapter
installation and configuration” on page 183.
4. Connect the Windows 2000/2003/2008 server FC host adapters to the switches.
5. Configure the switches (zoning).
6. Install the FC host adapter driver, as described in 5.6.3, “Hardware lists, device driver,
HBAs, and firmware levels” on page 183.
7. Configure the HBA for hosts running Windows, as described in 5.6.4, “Host adapter
installation and configuration” on page 183.
8. Check the HBA driver readme file for the required Windows registry settings, as described
in 5.6.3, “Hardware lists, device driver, HBAs, and firmware levels” on page 183.
9. Check the disk timeout on Microsoft Windows Server, as described in 5.6.5, “Changing the
disk timeout on Microsoft Windows Server” on page 185.
10.Install and configure SDD/Subsystem Device Driver Device Specific Module (SDDDSM).
11.Restart the Windows 2000/2003/2008 host system.
12.Configure the host, VDisks, and host mapping in the SVC.
13.Use Rescan disk in Computer Management of the Windows server to discover the VDisks
that were created on the SAN Volume Controller.
5.6.3 Hardware lists, device driver, HBAs, and firmware levels
The latest information about supported hardware, device driver, and firmware is available at
this Web site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_Windows
At this Web site, you will also find the hardware list for supported HBAs and the driver levels
for Windows. Check the supported firmware and driver level for your HBA and follow the
manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA. In
most manufacturers’ driver readme files, you will find instructions for the Windows registry
parameters that have to be set for the HBA driver:
For the Emulex HBA driver, SDD requires the port driver, not the miniport driver.
For the QLogic HBA driver, SDDDSM requires the storport version of the miniport driver.
For the QLogic HBA driver, SDD requires the scsiport version of the miniport driver.
5.6.4 Host adapter installation and configuration
Install the host adapters into your system. Refer to the manufacturer’s instructions for
installation and configuration of the HBAs.
In IBM System x servers, the HBA must always be installed in the first slots. If you install, for
example, two HBAs and two network cards, the HBAs must be installed in slot 1 and slot 2,
and the network cards can be installed in the remaining slots.
Configure the QLogic HBA for hosts running Windows
After you have installed the HBA in the server, and have applied the HBA firmware and device
driver, you have to configure the HBA. Perform the following steps:
1. Restart the server.
2. When you see the QLogic banner, press the Ctrl+Q keys to open the FAST!UTIL menu
panel.
3. From the Select Host Adapter menu, select the Adapter Type QLA2xxx.
4. From the Fast!UTIL Options menu, select Configuration Settings.
5. From the Configuration Settings menu, click Host Adapter Settings.
6. From the Host Adapter Settings menu, select the following values:
a. Host Adapter BIOS: Disabled
b. Frame size: 2048
c. Loop Reset Delay: 5 (minimum)
d. Adapter Hard Loop ID: Disabled
e. Hard Loop ID: 0
f. Spinup Delay: Disabled
g. Connection Options: 1 - point to point only
h. Fibre Channel Tape Support: Disabled
i. Data Rate: 2
7. Press the Esc key to return to the Configuration Settings menu.
8. From the Configuration Settings menu, select Advanced Adapter Settings.
9. From the Advanced Adapter Settings menu, set the following parameters:
a. Execution throttle: 100
b. Luns per Target: 0
c. Enable LIP Reset: No
d. Enable LIP Full Login: Yes
e. Enable Target Reset: No
   Note: If you are using a subsystem device driver (SDD) lower than 1.6, set Enable
   Target Reset to Yes.
f. Login Retry Count: 30
g. Port Down Retry Count: 15
h. Link Down Timeout: 30
i. Extended error logging: Disabled (might be enabled for debugging)
j. RIO Operation Mode: 0
k. Interrupt Delay Timer: 0
10.Press Esc to return to the Configuration Settings menu.
11.Press Esc.
12.From the Configuration settings modified window, select Save changes.
13.From the Fast!UTIL Options menu, select Select Host Adapter if more than one QLogic
adapter is installed in your system.
14.Select the other host adapter, and repeat steps 4 to 12.
15.Repeat this process for all of the QLogic adapters that are installed in your system. When
you are done, press Esc to exit the QLogic BIOS and restart the server.
Configuring the Emulex HBA for hosts running Windows
After you have installed the Emulex HBA and driver, you must configure your HBA.
For the Emulex HBA StorPort driver, accept the default settings and set the topology to 1 (1 =
F Port Fabric). For the Emulex HBA FC Port driver, use the default settings and change the
parameters to the parameters that are provided in Table 5-1.
Table 5-1 FC port driver changes

Parameters                                                 Recommended settings
Query name server for all N-ports (BrokenRSCN)             Enabled
LUN mapping (MapLuns)                                      Enabled (1)
Automatic LUN mapping (MapLuns)                            Enabled (1)
Allow multiple paths to SCSI target (MultipleSCSIClaims)   Enabled
Scan in device ID order (ScanDeviceIDOrder)                Disabled
Translate queue full to busy (TranslateQueueFull)          Enabled
Retry timer (RetryTimer)                                   2000 milliseconds
Maximum number of LUNs (MaximumLun)                        Equal to or greater than the number of
                                                           the SVC LUNs that are available to the HBA
Note: The parameters that are shown in Table 5-1 correspond to the parameters in
HBAnyware.
5.6.5 Changing the disk timeout on Microsoft Windows Server
This section describes how to change the disk I/O timeout value on Windows Server 2000,
Windows 2003 Server, and Windows Server 2008 operating systems.
On your Windows server hosts, change the disk I/O timeout value to 60 in the Windows
registry:
1. In Windows, click Start, and select Run.
2. In the dialog text box, type regedit and press Enter.
3. In the registry browsing tool, locate the
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.
4. Confirm that the value for the key is 60 (decimal value), and, if necessary, change the
value to 60, as shown in Figure 5-6.
Figure 5-6 Regedit
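Alternatively, the same change can be scripted with the standard reg command; this one-line sketch assumes that it is run from a command prompt with administrative rights:
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f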
5.6.6 Installing the SDD driver on Windows
At the time of writing, the SDD levels in Table 5-2 are supported.
Table 5-2 Currently supported SDD levels

Windows operating system                                        SDD level
NT 4                                                            1.5.1.1
Windows 2000 Server and Windows 2003 Server service pack
(SP2) (32-bit)/2003 SP2 (IA-64)                                 1.6.3.0-2
Windows 2000 Server with Microsoft Cluster Server (MSCS) and
Veritas Volume Manager/Windows 2003 Server SP2 (32-bit) with
MSCS and Veritas Volume Manager                                 Not available
See the following Web site for the latest information about SDD for Windows:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en
SDD: We recommend that you use SDD only on existing systems where you do not want
to change from SDD to SDDDSM. New operating systems will only be supported with
SDDDSM.
Before installing the SDD driver, the HBA driver has to be installed on your system. SDD
requires the HBA SCSI port driver.
After downloading the appropriate version of SDD from the Web site, extract the file and run
setup.exe to install SDD. A command line will appear. Answer Y (Figure 5-7) to install the
driver.
Figure 5-7 Confirm SDD installation
After the setup has completed, answer Y again to reboot your system (Figure 5-8).
Figure 5-8 Reboot system after installation
To check if your SDD installation is complete, open the Windows Device Manager, expand
SCSI and RAID Controllers, right-click Subsystem Device Driver Management, and click
Properties (see Figure 5-9 on page 187).
Figure 5-9 Subsystem Device Driver Management
The Subsystem Device Driver Management Properties window opens. Select the Driver tab,
and make sure that you have installed the correct driver version (see Figure 5-10).
Figure 5-10 Subsystem Device Driver Management Properties Driver tab
5.6.7 Installing the SDDDSM driver on Windows
The following sections show how to install the SDDDSM driver on Windows.
Windows 2003 Server, Windows Server 2008, and MPIO
Microsoft Multi Path Input Output (MPIO) solutions are designed to work in conjunction with
device-specific modules (DSMs) written by vendors, but the MPIO driver package does not,
by itself, form a complete solution. This joint solution allows the storage vendors to design
device-specific solutions that are tightly integrated with the Windows operating system.
MPIO is not shipped with the Windows operating system; storage vendors must pack the
MPIO drivers with their own DSM. IBM Subsystem Device Driver DSM (SDDDSM) is the IBM
multipath I/O solution that is based on Microsoft MPIO technology; it is a device-specific
module specifically designed to support IBM storage devices on Windows 2003 Server and
Windows Server 2008 servers.
The intention of MPIO is to provide better integration of a multipath storage solution with the
operating system, and it allows the use of multiple paths in the SAN infrastructure during the
boot process for SAN boot hosts.
Subsystem Device Driver Device Specific Module (SDDDSM) for SVC
Subsystem Device Driver Device Specific Module (SDDDSM) installation is a package for the
SVC device for the Windows 2003 Server and Windows Server 2008 operating systems.
SDDDSM is the IBM multipath I/O solution that is based on Microsoft MPIO technology, and it
is a device-specific module that is specifically designed to support IBM storage devices.
Together with MPIO, it is designed to support the multipath configuration environments in the
IBM System Storage SAN Volume Controller. It resides in a host system with the native disk
device driver and provides the following functions:
Enhanced data availability
Dynamic I/O load-balancing across multiple paths
Automatic path failover protection
Concurrent download of licensed internal code
Path-selection policies for the host system
No SDDDSM support for Windows Server 2000
For the HBA driver, SDDDSM requires the StorPort version of HBA miniport driver
Table 5-3 shows, at the time of writing, the supported SDDDSM driver levels.
Table 5-3 Currently supported SDDDSM driver levels

Windows operating system                                          SDD level
Windows 2003 Server SP2 (32-bit)/Windows 2003 Server SP2 (x64)    2.2.0.0-11
Windows Server 2008 (32-bit)/Windows Server 2008 (x64)            2.2.0.0-11
To check which levels are available, go to the Web site:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en#WindowsSDDDSM
To download SDDDSM, go to the Web site:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000350&loc=en_US&cs=utf-8&lang=en
The installation procedures for SDDDSM and SDD are the same, but remember that you have
to use the StorPort HBA driver instead of the SCSI driver. We describe the SDD installation in
5.6.6, “Installing the SDD driver on Windows” on page 185. After completing the installation,
you will see the Microsoft MPIO device in Device Manager (Figure 5-11 on page 190).
Figure 5-11 Windows Device Manager: MPIO
We describe the SDDDSM installation for Windows Server 2008 in 5.8, “Example
configuration of attaching an SVC to a Windows Server 2008 host” on page 200.
5.7 Discovering assigned VDisks in Windows Server 2000 and
Windows 2003 Server
In this section, we describe how to discover assigned VDisks in Windows Server 2000 and
Windows 2003 Server. The screen captures show a Windows 2003 Server host with
SDDDSM installed. Discovering the disks in Windows Server 2000 or with SDD is the same
procedure.
Before adding a new volume from the SVC, the Windows 2003 Server host system had the
configuration that is shown in Figure 5-12 on page 191, with only local disks.
Figure 5-12 Windows 2003 Server host system before adding a new volume from SVC
We can check that the WWPN is logged into the SVC for the host named Senegal by entering
the following command (Example 5-35):
svcinfo lshost Senegal
Example 5-35 Host information for Senegal
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Senegal
id 1
name Senegal
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89B9C0
node_logged_in_count 2
state active
WWPN 210000E08B89CCC2
node_logged_in_count 2
state active
The configuration of the Senegal host, the Senegal_bas0001 VDisk, and the mapping
between the host and the VDisk are defined in the SVC, as described in Example 5-36. In our
example, the Senegal_bas0002 and Senegal_bas0003 VDisks have the same configuration
as the Senegal_bas0001 VDisk.
Example 5-36 VDisk mapping: Senegal
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 0       7        Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 10.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
We can also obtain the serial number of the VDisks by entering the following command
(Example 5-37):
svcinfo lsvdiskhostmap Senegal_bas0001
Example 5-37 VDisk serial number: Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdiskhostmap Senegal_bas0001
id name            SCSI_id host_id host_name wwpn             vdisk_UID
7  Senegal_bas0001 0       1       Senegal   210000E08B89B9C0 6005076801A180E9080000000000000F
7  Senegal_bas0001 0       1       Senegal   210000E08B89CCC2 6005076801A180E9080000000000000F
After installing the necessary drivers and the rescan disks operation completes, the new disks
are found in the Computer Management window, as shown in Figure 5-13.
Figure 5-13 Windows 2003 Server host system with three new volumes from SVC
In Windows Device Manager, the disks are shown as IBM 2145 SCSI Disk Device
(Figure 5-14 on page 194). The number of IBM 2145 SCSI Disk Devices that you see is equal
to:
(number of VDisks) x (number of paths per I/O Group per HBA) x (number of HBAs)
The IBM 2145 Multi-Path Disk Devices are the devices that are created by the multipath driver
(Figure 5-14 on page 194). The number of these devices is equal to the number of VDisks
that are presented to the host.
Figure 5-14 Windows 2003 Server Device Manager with assigned VDisks
When following the SAN zoning recommendation, this calculation gives us, for one VDisk and
a host with two HBAs:
(number of VDisks) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x
2 = 4 paths
You can check if all of the paths are available if you select Start  All Programs 
Subsystem Device Driver (DSM)  Subsystem Device Driver (DSM). The SDD (DSM)
command-line interface will appear. Enter the following command to see which paths are
available to your system (Example 5-38).
Example 5-38 Datapath query device
Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2145       POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002A
============================================================================
Path#             Adapter/Hard Disk      State  Mode       Select     Errors
    0     Scsi Port2 Bus0/Disk1 Part0    OPEN   NORMAL         47          0
    1     Scsi Port2 Bus0/Disk1 Part0    OPEN   NORMAL          0          0
    2     Scsi Port3 Bus0/Disk1 Part0    OPEN   NORMAL          0          0
    3     Scsi Port3 Bus0/Disk1 Part0    OPEN   NORMAL         28          0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2145       POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#             Adapter/Hard Disk      State  Mode       Select     Errors
    0     Scsi Port2 Bus0/Disk2 Part0    OPEN   NORMAL          0          0
    1     Scsi Port2 Bus0/Disk2 Part0    OPEN   NORMAL        162          0
    2     Scsi Port3 Bus0/Disk2 Part0    OPEN   NORMAL        155          0
    3     Scsi Port3 Bus0/Disk2 Part0    OPEN   NORMAL          0          0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2145       POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#             Adapter/Hard Disk      State  Mode       Select     Errors
    0     Scsi Port2 Bus0/Disk3 Part0    OPEN   NORMAL         51          0
    1     Scsi Port2 Bus0/Disk3 Part0    OPEN   NORMAL          0          0
    2     Scsi Port3 Bus0/Disk3 Part0    OPEN   NORMAL          0          0
    3     Scsi Port3 Bus0/Disk3 Part0    OPEN   NORMAL         25          0

C:\Program Files\IBM\SDDDSM>
Note: All path states have to be OPEN. The path state can be OPEN or CLOSE. If one
path state is CLOSE, it means that the system is missing a path that it saw during startup.
If you restart your system, the CLOSE paths are removed from this view.
5.7.1 Extending a Windows Server 2000 or Windows 2003 Server volume
It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Certain
operating systems, such as Windows Server 2000 and Windows 2003 Server, can handle the
volumes being expanded even if the host has applications running. A VDisk that is defined to
be in a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded
unless the mapping is removed, which means that the FlashCopy, Metro Mirror, or Global
Mirror on that VDisk has to be stopped before it is possible to expand the VDisk.
Important:
For VDisk expansion to work on Windows Server 2000, apply Windows Server 2000
Hotfix Q327020, which is available from the Microsoft Knowledge Base at this Web site:
http://support.microsoft.com/kb/327020
If you want to expand a logical drive in an extended partition in Windows 2003 Server,
apply the Hotfix from KB 841650, which is available from the Microsoft Knowledge Base
at this Web site:
http://support.microsoft.com/kb/841650/en-us
Use the updated Diskpart version for Windows 2003 Server, which is available from the
Microsoft Knowledge Base at this Web site:
http://support.microsoft.com/kb/923076/en-us
If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends that you shut
down all nodes except one node, and that applications in the resource that use the volume
that is going to be expanded are stopped before expanding the volume. Applications running
in other resources can continue. After expanding the volume, start the application and the
resource, and then restart the other nodes in the MSCS.
To expand a volume in use on Windows Server 2000 and Windows 2003 Server, we used
Diskpart. The Diskpart tool is part of Windows 2003 Server; for other Windows versions, you
can download it free of charge from Microsoft. Diskpart is a tool that was developed by
Microsoft to ease administration of storage. It is a command-line interface where you can
manage disks, partitions, and volumes, by using scripts or direct input on the command line.
You can list disks and volumes, select them, and after selecting them, get more detailed
information, create partitions, extend volumes, and more. For more information, see the
Microsoft Web site:
http://www.microsoft.com
or
http://support.microsoft.com/default.aspx?scid=kb;en-us;304736&sd=tech
An example of how to expand a volume on a Windows 2003 Server host, where the volume is
a VDisk from the SVC, is shown in the following discussion.
To list a VDisk size, use the svcinfo lsvdisk <VDisk_name> command. This command gives
this information for Senegal_bas0001 before expanding the VDisk (Example 5-36 on
page 191). Here, we can see that the capacity is 10 GB, and also what the vdisk_UID is. To
find which disk this VDisk corresponds to on the Windows 2003 Server host, we use the
datapath query device SDD command on the Windows host. We can see that the serial
6005076801A180E9080000000000000F of Disk1 on the Windows host matches the
vdisk_UID of Senegal_bas0001 (Example 5-36 on page 191). To see the size of the volume
on the Windows host, we use Disk Management, as shown in Figure 5-15.
Figure 5-15 Windows 2003 Server: Disk Management
This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use
the svctask expandvdisksize command to increase the capacity on the VDisk. In this
example, we expand the VDisk by 1 GB (Example 5-39).
Example 5-39 svctask expandvdisksize command
IBM_2145:ITSO-CLS2:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 11.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
To check that the VDisk has been expanded, we use the svcinfo lsvdisk command. In
Example 5-39, we can see that the Senegal_bas0001 VDisk has been expanded to 11 GB in
capacity.
After performing a “Disk Rescan” in Windows, you will see the new unallocated space in
Windows Disk Management, as shown in Figure 5-16.
Figure 5-16 Expanded volume in Disk Manager
This window shows that Disk1 now has 1 GB of unallocated new capacity. To make this
capacity available for the file system, use the following commands, as shown in Example 5-40:
diskpart        Starts DiskPart in a DOS prompt
list volume     Shows you all available volumes
select volume   Selects the volume to expand
detail volume   Displays details for the selected volume, including the unallocated capacity
extend          Extends the volume to the available unallocated space
Example 5-40 Using Diskpart
C:\>diskpart

Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: SENEGAL

DISKPART> list volume

  Volume ###  Ltr  Label        Fs    Type       Size   Status   Info
  ----------  ---  -----------  ----  ---------  -----  -------  ------
  Volume 0    C                 NTFS  Partition  75 GB  Healthy  System
  Volume 1    S    SVC_Senegal  NTFS  Partition  10 GB  Healthy
  Volume 2    D                       DVD-ROM    0 B    Healthy

DISKPART> select volume 1

Volume 1 is the selected volume.

DISKPART> detail volume

  Disk ###  Status  Size   Free     Dyn  Gpt
  --------  ------  -----  -------  ---  ---
* Disk 1    Online  11 GB  1020 MB

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

  Disk ###  Status  Size   Free     Dyn  Gpt
  --------  ------  -----  -------  ---  ---
* Disk 1    Online  11 GB  0 B

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No
After extending the volume, the detail volume command shows that there is no free capacity
on the volume anymore. The list volume command shows the file system size. The Disk
Management window also shows the new disk size, as shown in Figure 5-17.
Figure 5-17 Disk Management after extending disk
The example here is referred to as a Windows Basic Disk. Dynamic disks can be expanded
by expanding the underlying SVC VDisk. The new space will appear as unallocated space at
the end of the disk.
In this case, you do not need to use the DiskPart tool; you can use Windows Disk
Management functions to allocate the new space. Expansion works irrespective of the volume
type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded
without stopping I/O in most cases.
Important: Never try to upgrade your Basic Disk to Dynamic Disk or vice versa without
backing up your data, because this operation is disruptive for the data, due to a change in
the position of the logical block address (LBA) on the disks.
5.8 Example configuration of attaching an SVC to a Windows
Server 2008 host
This section describes an example configuration that shows the attachment of a Windows
Server 2008 host system to the SVC. We discuss more details about Windows Server 2008
and the SVC in 5.6, “Windows-specific information” on page 182.
5.8.1 Installing SDDDSM on a Windows Server 2008 host
Download the HBA driver and the SDDDSM package and copy them to your host system. We
describe information about the recommended SDDDSM package in 5.6.7, “Installing the
SDDDSM driver on Windows” on page 188. We list the HBA driver details in 5.6.3, “Hardware
lists, device driver, HBAs, and firmware levels” on page 183. We perform the steps that are
described in 5.6.2, “Configuring Windows” on page 182 to achieve this task.
As a prerequisite for this example, we have already performed steps 1 to 5 for the hardware
installation, SAN configuration is done, and the hotfixes are applied. The Disk timeout value is
set to 60 seconds (see 5.6.5, “Changing the disk timeout on Microsoft Windows Server” on
page 185), and we will start with the driver installation.
Installing the HBA driver
Perform these steps to install the HBA driver:
1. Extract the QLogic driver package to your hard drive.
2. Select Start  Run.
3. Enter the devmgmt.msc command, click OK, and the Device Manager will appear.
4. Expand Storage Controllers.
5. Right-click the HBA, and select Update Driver Software (Figure 5-18).
Figure 5-18 Windows Server 2008 driver update
6. Click Browse my computer for driver software (Figure 5-19).
Figure 5-19 Windows Server 2008 driver update
7. Enter the path to the extracted QLogic driver, and click Next (Figure 5-20 on page 202).
Figure 5-20 Windows Server 2008 driver update
8. Windows installs the driver (Figure 5-21).
Figure 5-21 Windows Server 2008 driver installation
9. When the driver update is complete, click Close to exit the wizard (Figure 5-22).
Figure 5-22 Windows Server 2008 driver installation
10.Repeat steps 1 to 8 for all of the HBAs that are installed in the system.
5.8.2 Installing SDDDSM
To install the SDDDSM driver on your system, perform the following steps:
1. Extract the SDDDSM driver package to a folder on your hard drive.
2. Open the folder with the extracted files.
3. Run the setup.exe command, and a DOS command prompt will appear.
4. Type Y and press Enter to install SDDDSM (Figure 5-23).
Figure 5-23 Installing SDDDSM
5. After the SDDDSM Setup is finished, type Y and press Enter to restart your system.
After the reboot, the SDDDSM installation is complete. You can verify the installation
completion in Device Manager, because the SDDDSM device will appear (Figure 5-24 on
page 204), and the SDDDSM tools will have been installed (Figure 5-25 on page 204).
Figure 5-24 SDDDSM installation
Figure 5-25 SDDDSM installation
5.8.3 Attaching SVC VDisks to Windows Server 2008
Create the VDisks on the SVC and map them to the Windows Server 2008 host.
In this example, we have mapped three SVC disks to the Windows Server 2008 host named
Diomede, as shown in Example 5-41.
Example 5-41 SVC host mapping to host Diomede
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Diomede
id name    SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
0  Diomede 0       20       Diomede_0001 210000E08B0541BC 6005076801A180E9080000000000002B
0  Diomede 1       21       Diomede_0002 210000E08B0541BC 6005076801A180E9080000000000002C
0  Diomede 2       22       Diomede_0003 210000E08B0541BC 6005076801A180E9080000000000002D
Perform the following steps to use the devices on your Windows Server 2008 host:
1. Click Start, and click Run.
2. Enter the diskmgmt.msc command, and click OK. The Disk Management window opens.
3. Select Action, and click Rescan Disks (Figure 5-26).
Figure 5-26 Windows Server 2008: Rescan disks
4. The SVC disks will now appear in the Disk Management window (Figure 5-27 on
page 206).
Figure 5-27 Windows Server 2008 Disk Management window
After you have assigned the SVC disks, they are also available in Device Manager. The three
assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices in
the Device Manager (Figure 5-28).
Figure 5-28 Windows Server 2008 Device Manager
5. To check that the disks are available, select Start → All Programs → Subsystem Device
Driver DSM, and click Subsystem Device Driver DSM (Figure 5-29). The SDDDSM
Command Line Utility will appear.
Figure 5-29 Windows Server 2008 Subsystem Device Driver DSM utility
6. Enter the datapath query device command and press Enter (Example 5-42). This
command will display all of the disks and the available paths, including their states.
Example 5-42 Windows Server 2008 SDDDSM command-line utility
Microsoft Windows [Version 6.0.6001]
Copyright (c) 2006 Microsoft Corporation.
All rights reserved.
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 3
DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0   OPEN    NORMAL        0       0
    1    Scsi Port2 Bus0/Disk1 Part0   OPEN    NORMAL     1429       0
    2    Scsi Port3 Bus0/Disk1 Part0   OPEN    NORMAL     1456       0
    3    Scsi Port3 Bus0/Disk1 Part0   OPEN    NORMAL        0       0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL     1520       0
    1    Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL     1517       0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0   OPEN    NORMAL       27       0
    1    Scsi Port2 Bus0/Disk3 Part0   OPEN    NORMAL     1396       0
    2    Scsi Port3 Bus0/Disk3 Part0   OPEN    NORMAL     1459       0
    3    Scsi Port3 Bus0/Disk3 Part0   OPEN    NORMAL        0       0

C:\Program Files\IBM\SDDDSM>
SAN zoning recommendation: When following the SAN zoning recommendation, with one
VDisk and a host with two HBAs, we get (number of VDisks) x (number of paths per I/O
Group per HBA) x (number of HBAs) = 1 x 2 x 2 = 4 paths.
7. Right-click the disk in Disk Management, and select Online to place the disk online
(Figure 5-30).
Figure 5-30 Windows Server 2008: Place disk online
8. Repeat step 7 for all of your attached SVC disks.
9. Right-click one disk again, and select Initialize Disk (Figure 5-31).
Figure 5-31 Windows Server 2008: Initialize Disk
10. Mark all of the disks that you want to initialize, and click OK (Figure 5-32).
Figure 5-32 Windows Server 2008: Initialize Disk
11. Right-click the unallocated disk space, and select New Simple Volume (Figure 5-33).
Figure 5-33 Windows Server 2008: New Simple Volume
12. The New Simple Volume Wizard window opens. Click Next.
13. Enter a disk size, and click Next (Figure 5-34).
Figure 5-34 Windows Server 2008: New Simple Volume
14. Assign a drive letter, and click Next (Figure 5-35 on page 210).
Figure 5-35 Windows Server 2008: New Simple Volume
15. Enter a volume label, and click Next (Figure 5-36).
Figure 5-36 Windows Server 2008: New Simple Volume
16. Click Finish, and repeat this procedure for every SVC disk on your host system (Figure 5-37).
Figure 5-37 Windows Server 2008: Disk Management
5.8.4 Extending a Windows Server 2008 volume
Using SVC and Windows Server 2008 gives you the ability to extend volumes while they are
in use. We describe the steps to extend a volume in 5.7.1, “Extending a Windows Server
2000 or Windows 2003 Server volume” on page 195.
Windows Server 2008 also uses the DiskPart utility to extend volumes. To start it, select
Start → Run, and enter DiskPart. The DiskPart utility will appear. The procedure is exactly
the same as the procedure in Windows 2003 Server. Follow the Windows 2003 Server
description to extend your volume.
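As an illustration, a typical DiskPart session to extend a volume follows. This sketch is
hedged: the volume number (here, volume 2) is hypothetical, and you must select the volume
that corresponds to the VDisk that you expanded on the SVC. By default, the extend
command uses all of the contiguous unallocated space that follows the volume:

C:\> diskpart
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend
DISKPART> exit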
5.8.5 Removing a disk on Windows
When we want to remove a disk from Windows, and the disk is an SVC VDisk, we follow the
standard Windows procedure to make sure that there is no data that we want to preserve on
the disk, that no applications are using the disk, and that no I/O is going to the disk. After
completing this procedure, we remove the VDisk mapping on the SVC. We must make sure
that we are removing the correct VDisk. To verify, we use SDD to find the serial number for the
disk, and on the SVC, we use lshostvdiskmap to find the VDisk name and number. We also
check that the SDD serial number on the host matches the UID on the SVC for the VDisk.
When the VDisk mapping is removed, we perform a rescan for the disk. Disk Management on
the server removes the disk, and the vpath goes into the CLOSE state on the server. We can
verify these actions by using the datapath query device SDD command, but the closed vpath
will only be removed after a reboot of the server.
In the following sequence of examples, we show how to remove an SVC VDisk from a
Windows server. We show it on a Windows 2003 Server operating system, but the steps also
apply to Windows Server 2000 and Windows Server 2008.
Figure 5-15 on page 196 shows the Disk Manager before removing the disk.
We will remove Disk 1. To find the correct VDisk information, we find the Serial/UID number
using SDD (Example 5-43).
Example 5-43 Removing SVC disk from the Windows server
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 3
DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0   OPEN    NORMAL     1471       0
    1    Scsi Port2 Bus0/Disk1 Part0   OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk1 Part0   OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk1 Part0   OPEN    NORMAL     1324       0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL       20       0
    1    Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL       94       0
    2    Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL       55       0
    3    Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL        0       0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0   OPEN    NORMAL      100       0
    1    Scsi Port2 Bus0/Disk3 Part0   OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk3 Part0   OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk3 Part0   OPEN    NORMAL       69       0
Knowing the Serial/UID of the VDisk and the host name Senegal, we find the VDisk mapping
to remove by using the lshostvdiskmap command on the SVC, and then, we remove the
actual VDisk mapping (Example 5-44).
Example 5-44 Finding and removing the VDisk mapping
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 0       7        Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Senegal Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
Here, we can see that the VDisk mapping has been removed. On the server, we then perform
a disk rescan in Disk Management, and we now see that the correct disk (Disk1) has been
removed, as shown in Figure 5-38.
Figure 5-38 Disk Management: Disk has been removed
SDD also shows us that the status for all paths to Disk1 has changed to CLOSE, because the
disk is not available (Example 5-45 on page 214).
Example 5-45 SDD: Closed path
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 3
DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0   CLOSE   NORMAL     1471       0
    1    Scsi Port2 Bus0/Disk1 Part0   CLOSE   NORMAL        0       0
    2    Scsi Port3 Bus0/Disk1 Part0   CLOSE   NORMAL        0       0
    3    Scsi Port3 Bus0/Disk1 Part0   CLOSE   NORMAL     1324       0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL       20       0
    1    Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL      124       0
    2    Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL       72       0
    3    Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL        0       0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0   OPEN    NORMAL      134       0
    1    Scsi Port2 Bus0/Disk3 Part0   OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk3 Part0   OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk3 Part0   OPEN    NORMAL       82       0
The disk (Disk1) is now removed from the server. However, to remove the SDD information
for the disk, we need to reboot the server; this reboot can wait until a more suitable time.
5.9 Using the SVC CLI from a Windows host
To issue CLI commands, we must install and prepare the SSH client system on the Windows
host system.
We can install the PuTTY SSH client software on a Windows host by using the PuTTY
installation program. This program is in the SSHClient\PuTTY directory of the SAN Volume
Controller Console CD-ROM, or you can download PuTTY from the following Web site:
http://www.chiark.greenend.org.uk/~sgtatham/putty/
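If you use PuTTY, the package also includes the plink command-line connection tool, which
you can use to run single CLI commands from a Windows command prompt. The following
sketch is hedged: the private key file name and the cluster IP address are hypothetical, so
substitute the values for your own environment:

C:\> plink -i C:\keys\icat.ppk admin@9.43.86.117 svcinfo lscluster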
The following Web site offers SSH client alternatives for Windows:
http://www.openssh.com/windows.html
Cygwin software has an option to install an OpenSSH client. You can download Cygwin from
the following Web site:
http://www.cygwin.com/
We discuss more information about the CLI in Chapter 7, “SAN Volume Controller operations
using the command-line interface” on page 339.
5.10 Microsoft Volume Shadow Copy
The SVC provides support for the Microsoft Volume Shadow Copy Service. The Microsoft
Volume Shadow Copy Service can provide a point-in-time (shadow) copy of a Windows host
volume while the volume is mounted and the files are in use.
In this section, we discuss how to install support for the Microsoft Volume Shadow Copy Service.
The following operating system versions are supported:
Windows 2003 Server Standard Server Edition, 32-bit and 64-bit (x64) versions
Windows 2003 Server Enterprise Edition, 32-bit and 64-bit (x64) versions
Windows 2003 Server Standard Server R2 Edition, 32-bit and 64-bit (x64) versions
Windows 2003 Server Enterprise R2 Edition, 32-bit and 64-bit (x64) versions
Windows Server 2008 Standard
Windows Server 2008 Enterprise
The following components are used to provide support for the service:
SAN Volume Controller
SAN Volume Controller Master Console
IBM System Storage hardware provider, known as the IBM System Storage Support for
Microsoft Volume Shadow Copy Service
Microsoft Volume Shadow Copy Service
The IBM System Storage provider is installed on the Windows host.
To provide the point-in-time shadow copy, the components complete the following process:
1. A backup application on the Windows host initiates a snapshot backup.
2. The Volume Shadow Copy Service notifies the IBM System Storage hardware provider
that a copy is needed.
3. The SAN Volume Controller prepares the volume for a snapshot.
4. The Volume Shadow Copy Service quiesces the software applications that are writing
data on the host and flushes file system buffers to prepare for a copy.
5. The SAN Volume Controller creates the shadow copy using the FlashCopy Service.
6. The Volume Shadow Copy Service notifies the writing applications that I/O operations can
resume and notifies the backup application that the backup was successful.
The Volume Shadow Copy Service maintains a free pool of VDisks for use as a FlashCopy
target and a reserved pool of VDisks. These pools are implemented as virtual host systems
on the SAN Volume Controller.
Chapter 5. Host configuration
215
5.10.1 Installation overview
The steps for implementing the IBM System Storage Support for Microsoft Volume Shadow
Copy Service must be completed in the correct sequence.
Before you begin, you must have experience with, or knowledge of, administering both a
Windows operating system and a SAN Volume Controller.
You will need to complete the following tasks:
Verify that the system requirements are met.
Install the SAN Volume Controller Console if it is not already installed.
Install the IBM System Storage hardware provider.
Verify the installation.
Create a free pool of volumes and a reserved pool of volumes on the SAN Volume
Controller.
5.10.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install the IBM
System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service
software on the Windows operating system:
SAN Volume Controller and Master Console Version 2.1.0 or later with FlashCopy
enabled. You must install the SAN Volume Controller Console before you install the IBM
System Storage Hardware provider.
IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk
Service software Version 3.1 or later.
5.10.3 Installing the IBM System Storage hardware provider
This section includes the steps to install the IBM System Storage hardware provider on a
Windows server. You must satisfy all of the system requirements before starting the
installation.
During the installation, you will be prompted to enter information about the SAN Volume
Controller Master Console, including the location of the truststore file. The truststore file is
generated during the installation of the Master Console. You must copy this file to a location
that is accessible to the IBM System Storage hardware provider on the Windows server.
When the installation is complete, the installation program might prompt you to restart the
system. Complete the following steps to install the IBM System Storage hardware provider on
the Windows server:
1. Download the installation program files from the IBM Web site, and place a copy on the
Windows server where you will install the IBM System Storage hardware provider:
http://www-1.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH&dc=D400&uid=ssg1S4000663&loc=en_US&cs=utf-8&lang=en
2. Log on to the Windows server as an administrator, and navigate to the directory where the
installation program is located.
3. Run the installation program by double-clicking IBMVSS.exe.
4. The Welcome window opens, as shown in Figure 5-39. Click Next to continue with the
installation. You can click Cancel at any time to exit the installation. To move back to
previous windows while using the wizard, click Back.
Figure 5-39 IBM System Storage Support for Microsoft Volume Shadow Copy installation
5. The License Agreement window opens (Figure 5-40). Read the license agreement
information, select whether you accept the terms of the license agreement, and click
Next. If you do not accept the terms, you cannot continue with the installation.
Figure 5-40 IBM System Storage Support for Microsoft Volume Shadow Copy installation
6. The Choose Destination Location window opens (Figure 5-41). Click Next to accept the
default directory where the setup program will install the files, or click Change to select
another directory, and then click Next.
Figure 5-41 IBM System Storage Support for Microsoft Volume Shadow Copy installation
7. Click Install to begin the installation (Figure 5-42).
Figure 5-42 IBM System Storage Support for Microsoft Volume Shadow Copy installation
8. From the next window, select the required CIM server, or select “Enter the CIM Server
address manually”, and click Next (Figure 5-43).
Figure 5-43 IBM System Storage Support for Microsoft Volume Shadow Copy installation
9. The Enter CIM Server Details window opens. Enter the following information in the fields
(Figure 5-44):
a. In the CIM Server Address field, type the name of the server where the SAN Volume
Controller Console is installed.
b. In the CIM User field, type the user name that the IBM System Storage Support for
Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to
gain access to the server where the SAN Volume Controller Console is installed.
c. In the CIM Password field, type the password for the user name that the IBM System
Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service
software will use to gain access to the SAN Volume Controller Console.
d. Click Next.
Figure 5-44 IBM System Storage Support for Microsoft Volume Shadow Copy installation
10. In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to
restart the system (Figure 5-45 on page 220).
Figure 5-45 IBM System Storage Support for Microsoft Volume Shadow Copy installation
Additional information:
If these settings change after installation, you can use the ibmvcfg.exe tool to update
the Microsoft Volume Shadow Copy and Virtual Disk Services software with the new
settings.
If you do not have the CIM Agent server, port, or user information, contact your CIM
Agent administrator.
5.10.4 Verifying the installation
Perform the following steps to verify the installation:
1. Select Start → All Programs → Administrative Tools → Services from the Windows
server task bar.
2. Ensure that the service named “IBM System Storage Support for Microsoft Volume
Shadow Copy Service and Virtual Disk Service” appears, that its Status is set to Started,
and that its Startup Type is set to Automatic.
3. Open a command prompt window, and issue the following command:
vssadmin list providers
This command ensures that the service named IBM System Storage Support for Microsoft
Volume Shadow Copy Service and Virtual Disk Service software is listed as a provider
(Example 5-46).
Example 5-46 Microsoft Software Shadow copy provider
C:\Documents and Settings\Administrator>vssadmin list providers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001 Microsoft Corp.
Provider name: 'Microsoft Software Shadow Copy provider 1.0'
Provider type: System
Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
Version: 1.0.0.7
Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware
Provider'
Provider type: Hardware
Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
Version: 3.1.0.1108
If you are able to successfully perform all of these verification tasks, the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was
successfully installed on the Windows server.
5.10.5 Creating the free and reserved pools of volumes
The IBM System Storage hardware provider maintains a free pool of volumes and a reserved
pool of volumes. Because these objects do not exist on the SAN Volume Controller, the free
pool of volumes and the reserved pool of volumes are implemented as virtual host systems.
You must define these two virtual host systems on the SAN Volume Controller.
When a shadow copy is created, the IBM System Storage hardware provider selects a
volume in the free pool, assigns it to the reserved pool, and then removes it from the free
pool. This process protects the volume from being overwritten by other Volume Shadow Copy
Service users.
To successfully perform a Volume Shadow Copy Service operation, there must be enough
VDisks mapped to the free pool. The VDisks must be the same size as the source VDisks.
Use the SAN Volume Controller Console or the SAN Volume Controller command-line
interface (CLI) to perform the following steps:
1. Create a host for the free pool of VDisks. You can use the default name VSS_FREE or
specify another name. Associate the host with the worldwide port name (WWPN)
5000000000000000 (15 zeroes) (Example 5-47).
Example 5-47 Creating an mkhost for the free pool
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000
-force
Host, id [2], successfully created
2. Create a virtual host for the reserved pool of volumes. You can use the default name
VSS_RESERVED or specify another name. Associate the host with the WWPN
5000000000000001 (14 zeroes) (Example 5-48 on page 222).
Example 5-48 Creating an mkhost for the reserved pool
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_RESERVED -hbawwpn
5000000000000001 -force
Host, id [3], successfully created
3. Map the logical units (VDisks) to the free pool of volumes. The VDisks cannot be mapped
to any other hosts. If you already have VDisks created for the free pool of volumes, you
must assign the VDisks to the free pool.
4. Create VDisk-to-host mappings between the VDisks selected in step 3 and the
VSS_FREE host to add the VDisks to the free pool. Alternatively, you can use the ibmvcfg
add command to add VDisks to the free pool (Example 5-49).
Example 5-49 Host mappings
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0001
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0002
Virtual Disk to Host map, id [1], successfully created
5. Verify that the VDisks have been mapped. If you do not use the default WWPNs
5000000000000000 and 5000000000000001, you must configure the IBM System
Storage hardware provider with the WWPNs (Example 5-50).
Example 5-50 Verify hosts
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap VSS_FREE
id name     SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
2  VSS_FREE 0       10       msvc0001   5000000000000000 6005076801A180E90800000000000012
2  VSS_FREE 1       11       msvc0002   5000000000000000 6005076801A180E90800000000000013
5.10.6 Changing the configuration parameters
You can change the parameters that you defined when you installed the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software.
To change them, use the ibmvcfg.exe utility, which is a command-line utility that is located in
the C:\Program Files\IBM\Hardware Provider for VSS-VDS directory (Example 5-51).
Example 5-51 Using ibmvcfg.exe utility help
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg.exe
IBM System Storage VSS Provider Configuration Tool Commands
----------------------------------------
ibmvcfg.exe <command> <command arguments>
Commands:
   /h | /help | -? | /?
   showcfg
   listvols <all|free|unassigned>
   add <volume serial number list> (separated by spaces)
   rem <volume serial number list> (separated by spaces)
Configuration:
   set user <CIMOM user name>
   set password <CIMOM password>
   set trace [0-7]
   set trustpassword <trustpassword>
   set truststore <truststore location>
   set usingSSL <YES | NO>
   set vssFreeInitiator <WWPN>
   set vssReservedInitiator <WWPN>
   set FlashCopyVer <1 | 2> (only applies to ESS)
   set cimomPort <PORTNUM>
   set cimomHost <Hostname>
   set namespace <Namespace>
   set targetSVC <svc_cluster_ip>
   set backgroundCopy <0-100>
Table 5-4 shows the available commands.
Table 5-4 Available ibmvcfg.exe commands

ibmvcfg showcfg: Lists the current settings. Example: ibmvcfg showcfg

ibmvcfg set username <username>: Sets the user name to access the SAN Volume
Controller Console. Example: ibmvcfg set username Dan

ibmvcfg set password <password>: Sets the password of the user name that will access
the SAN Volume Controller Console. Example: ibmvcfg set password mypassword

ibmvcfg set targetSVC <ipaddress>: Specifies the IP address of the SAN Volume Controller
on which the VDisks are located when VDisks are moved to and from the free pool with the
ibmvcfg add and ibmvcfg rem commands. The IP address is overridden if you use the -s flag
with the ibmvcfg add and ibmvcfg rem commands. Example: ibmvcfg set targetSVC
9.43.86.120

ibmvcfg set backgroundCopy <0-100>: Sets the background copy rate for FlashCopy.
Example: ibmvcfg set backgroundCopy 80

ibmvcfg set usingSSL: Specifies whether to use the Secure Sockets Layer protocol to
connect to the SAN Volume Controller Console. Example: ibmvcfg set usingSSL yes

ibmvcfg set cimomPort <portnum>: Specifies the SAN Volume Controller Console port
number. The default value is 5999. Example: ibmvcfg set cimomPort 5999

ibmvcfg set cimomHost <server name>: Sets the name of the server where the SAN
Volume Controller Console is installed. Example: ibmvcfg set cimomHost cimomserver

ibmvcfg set namespace <namespace>: Specifies the namespace value that the Master
Console is using. The default value is \root\ibm. Example: ibmvcfg set namespace \root\ibm

ibmvcfg set vssFreeInitiator <WWPN>: Specifies the WWPN of the free pool host. The
default value is 5000000000000000. Modify this value only if a host with the WWPN
5000000000000000 already exists in your environment. Example: ibmvcfg set
vssFreeInitiator 5000000000000000

ibmvcfg set vssReservedInitiator <WWPN>: Specifies the WWPN of the reserved pool
host. The default value is 5000000000000001. Modify this value only if a host with the
WWPN 5000000000000001 already exists in your environment. Example: ibmvcfg set
vssReservedInitiator 5000000000000001

ibmvcfg listvols: Lists all VDisks, including information about the size, location, and
VDisk-to-host mappings. Example: ibmvcfg listvols

ibmvcfg listvols all: Lists all VDisks, including information about the size, location, and
VDisk-to-host mappings. Example: ibmvcfg listvols all

ibmvcfg listvols free: Lists the volumes that are currently in the free pool. Example:
ibmvcfg listvols free

ibmvcfg listvols unassigned: Lists the volumes that are currently not mapped to any hosts.
Example: ibmvcfg listvols unassigned

ibmvcfg add [-s ipaddress]: Adds one or more volumes to the free pool of volumes. Use the
-s parameter to specify the IP address of the SAN Volume Controller where the VDisks are
located. The -s parameter overrides the default IP address that is set with the ibmvcfg set
targetSVC command. Examples: ibmvcfg add vdisk12; ibmvcfg add
600507680187000350000000000000BA -s 66.150.210.141

ibmvcfg rem [-s ipaddress]: Removes one or more volumes from the free pool of volumes.
Use the -s parameter to specify the IP address of the SAN Volume Controller where the
VDisks are located. The -s parameter overrides the default IP address that is set with the
ibmvcfg set targetSVC command. Examples: ibmvcfg rem vdisk12; ibmvcfg rem
600507680187000350000000000000BA -s 66.150.210.141
5.11 Specific Linux (on Intel) information
The following sections describe specific information pertaining to the connection of Linux on
Intel-based hosts to the SVC environment.
5.11.1 Configuring the Linux host
Follow these steps to configure the Linux host:
1. Use the latest firmware levels on your host system.
2. Install the HBA or HBAs on the Linux server, as described in 5.6.4, “Host adapter
installation and configuration” on page 183.
3. Install the supported HBA driver/firmware and upgrade the kernel if required, as described
in 5.11.2, “Configuration information” on page 225.
4. Connect the Linux server FC host adapters to the switches.
5. Configure the switches (zoning) if needed.
6. Install SDD for Linux, as described in 5.11.5, “Multipathing in Linux” on page 226.
7. Configure the host, VDisks, and host mapping in the SAN Volume Controller.
8. Rescan for LUNs on the Linux server to discover the VDisks that were created on the
SVC.
5.11.2 Configuration information
The SAN Volume Controller supports hosts that run the following Linux distributions:
Red Hat Enterprise Linux
SUSE Linux Enterprise Server
For the latest information, always refer to this site:
http://www.ibm.com/storage/support/2145
For SVC Version 4.3, the following support information was available at the time of writing:
Software supported levels:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278
Hardware supported levels:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277
At this Web site, you will find the hardware list for supported HBAs and the device driver
levels for Linux. Check the supported firmware and driver level for your HBA, and follow the
manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA.
5.11.3 Disabling automatic Linux system updates
Many Linux distributions give you the ability to configure your systems for automatic system
updates. Red Hat provides this ability in the form of a program called up2date, while Novell
SUSE provides the YaST Online Update utility. These features periodically query for updates
that are available for each host and can be configured to automatically install any new
updates that they find.
Often, the automatic update process also upgrades the system to the latest kernel level.
Hosts running SDD must turn off the automatic update of kernel levels, because certain
drivers that are supplied by IBM, such as SDD, are dependent on a specific kernel and will
cease to function on a new kernel. Similarly, HBA drivers need to be compiled against specific
kernels in order to function optimally. By allowing automatic updates of the kernel, you risk
affecting your host systems unexpectedly.
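As a hedged illustration, the following commands disable the automatic update agent at
startup on a Red Hat system. This sketch assumes that the Red Hat Network daemon
(rhnsd) is the agent that applies updates on your host; the equivalent on Novell SUSE is to
disable the YaST Online Update automatic mode:

# service rhnsd stop
# chkconfig rhnsd off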
5.11.4 Setting queue depth with QLogic HBAs
The queue depth is the number of I/O operations that can be run in parallel on a device.
Configure your host running the Linux operating system by using the formula that is specified
in 5.16, “Calculating the queue depth” on page 252.
Perform the following steps to set the maximum queue depth:
1. Add the following line to the /etc/modules.conf file:
– For the 2.4 kernel (SUSE Linux Enterprise Server 8 or Red Hat Enterprise Linux):
options qla2300 ql2xfailover=0 ql2xmaxqdepth=new_queue_depth
– For the 2.6 kernel (SUSE Linux Enterprise Server 9, or later, or Red Hat Enterprise
Linux 4, or later):
options qla2xxx ql2xfailover=0 ql2xmaxqdepth=new_queue_depth
2. Rebuild the RAM disk that is associated with the kernel being used by using one of the
following commands:
– If you are running on a SUSE Linux Enterprise Server operating system, run the
mk_initrd command.
– If you are running on a Red Hat Enterprise Linux operating system, run the mkinitrd
command, and then restart.
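After the restart, you can optionally confirm that the new queue depth is in effect. This sketch
assumes a 2.6 kernel with the qla2xxx driver, where declared module parameters are
exposed under sysfs, and a configured queue depth of 32:

# cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
32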
5.11.5 Multipathing in Linux
Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide
their own multipath support in the operating system. On older systems, it is necessary to
install the IBM SDD multipath driver.
Installing SDD
This section describes how to install SDD for older distributions. Before performing these
steps, always check for the currently supported levels, as described in 5.11.2, “Configuration
information” on page 225.
The cat /proc/scsi/scsi command in Example 5-52 shows the devices that the SCSI driver
has probed. In our configuration, we have two HBAs installed in our server, and we configured
the zoning to access our VDisk from four paths.
Example 5-52 cat /proc/scsi/scsi command example
[[email protected] sdd]# cat /proc/scsi/scsi
Attached devices:
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145     Rev: 0000
  Type:   Unknown                  ANSI SCSI revision: 04
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145     Rev: 0000
  Type:   Unknown                  ANSI SCSI revision: 04
[[email protected] sdd]#
The rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm command installs the package, as shown
in Example 5-53.
Example 5-53 rpm command example
[[email protected] sdd]# rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm
Preparing...                ########################################### [100%]
   1:IBMsdd                 ########################################### [100%]
Added following line to /etc/inittab:
srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1
[[email protected] sdd]#
To manually load and configure SDD on Linux, use the service sdd start command (SUSE
Linux users can use the sdd start command). If you are not running a supported kernel, you
will get an error message.
If your kernel is supported, you see an OK success message, as shown in Example 5-54.
Example 5-54 Supported kernel for SDD
[[email protected] sdd]# sdd start
Starting IBMsdd driver load:                               [  OK  ]
Issuing killall sddsrv to trigger respawn...
Starting IBMsdd configuration:                             [  OK  ]
Issue the cfgvpath query command to view the name and serial number of the VDisk that is
configured in the SAN Volume Controller, as shown in Example 5-55.
Example 5-55 cfgvpath query example
[[email protected] ~]# cfgvpath query
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sda df_ctlr=0
/dev/sda ( 8,  0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035
ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdb df_ctlr=0
/dev/sdb ( 8, 16) host=0 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035
ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdc df_ctlr=0
/dev/sdc ( 8, 32) host=1 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035
ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdd df_ctlr=0
/dev/sdd ( 8, 48) host=1 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035
ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
[[email protected] ~]#
The cfgvpath command configures the SDD vpath devices, as shown in Example 5-56.
Example 5-56 cfgvpath command example
[[email protected] ~]# cfgvpath
c--------- 1 root root 253, 0 Jun 5 09:04 /dev/IBMsdd
WARNING: vpatha path sda has already been configured.
WARNING: vpatha path sdb has already been configured.
WARNING: vpatha path sdc has already been configured.
WARNING: vpatha path sdd has already been configured.
Writing out new configuration to file /etc/vpath.conf
[[email protected] ~]#
The configuration information is saved by default in the /etc/vpath.conf file. You can save
the configuration information to a specified file name by entering the following command:
cfgvpath -f file_name.cfg
Issue the chkconfig command to enable SDD to run at system startup:
chkconfig sdd on
To verify the setting, enter the following command:
chkconfig --list sdd
This verification is shown in Example 5-57.
Example 5-57 sdd run level example
[[email protected] sdd]# chkconfig --list sdd
sdd             0:off   1:off   2:on    3:on    4:on    5:on    6:off
[[email protected] sdd]#
If necessary, you can disable the startup option by entering this command:
chkconfig sdd off
Run the datapath query commands to display the online adapters and the paths to the
adapters. Notice that the preferred paths are used from one of the nodes, that is, path 0 and
path 2. Path 1 and path 3 connect to the other node and are used as alternate or backup
paths for high availability, as shown in Example 5-58.
Example 5-58 datapath query command example
[[email protected] ~]# datapath query adapter

Active Adapters :2

Adpt#           Name    State    Mode     Select  Errors  Paths  Active
    0  Host0Channel0   NORMAL   ACTIVE         1       0      2       0
    1  Host1Channel0   NORMAL   ACTIVE         0       0      2       0
[[email protected] ~]#
[[email protected] ~]# datapath query device

Total Devices : 1

DEV#:   0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#    Adapter/Hard Disk   State    Mode     Select  Errors
    0    Host0Channel0/sda   CLOSE    NORMAL        1       0
    1    Host0Channel0/sdb   CLOSE    NORMAL        0       0
    2    Host1Channel0/sdc   CLOSE    NORMAL        0       0
    3    Host1Channel0/sdd   CLOSE    NORMAL        0       0
[[email protected] ~]#
SDD has three path-selection policy algorithms:
Failover only (fo): All I/O operations for the device are sent to the same (preferred) path
unless the path fails because of I/O errors. Then, an alternate path is chosen for
subsequent I/O operations.
Load balancing (lb): The path to use for an I/O operation is chosen by estimating the load
on the adapter to which each path is attached. The load is a function of the number of I/O
operations currently in process. If multiple paths have the same load, a path is chosen at
random from those paths. Load-balancing mode also incorporates failover protection. The
load-balancing policy is also known as the optimized policy.
Round robin (rr): The path to use for each I/O operation is chosen at random from paths
that were not used for the last I/O operation. If a device has only two paths, SDD
alternates between the two paths.
You can dynamically change the SDD path-selection policy algorithm by using the datapath
set device policy SDD command.
You can see which SDD path-selection policy algorithm is active on a device when you use
the datapath query device command. Example 5-58 shows a POLICY value of Optimized
Sequential, which means that the optimized (load-balancing) policy is active in its sequential
variant.
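As a brief illustration, the following hedged command switches device 0 to the round robin
policy; the device number is hypothetical and varies by system, and the policy keywords
match the abbreviations listed previously (fo, lb, and rr):

# datapath set device 0 policy rr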
Example 5-59 shows the VDisk information from the SVC command-line interface.
Example 5-59 svcinfo redhat1
IBM_2145:ITSOSVC42A:admin>svcinfo lshost linux2
id 6
name linux2
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 2
state active
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lshostvdiskmap linux2
id name   SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
6  linux2 0       33       linux_vd1  210000E08B89C1CD 60050768018201BEE000000000000035
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lsvdisk linux_vd1
id 33
name linux_vd1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0
capacity 1.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018201BEE000000000000035
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
IBM_2145:ITSOSVC42A:admin>
5.11.6 Creating and preparing the SDD volumes for use
Follow these steps to create and prepare the volumes:
1. Create a partition on the vpath device, as shown in Example 5-60.
Example 5-60 fdisk example
[[email protected] ~]# fdisk /dev/vpatha
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-1011, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011):
Using default value 1011

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[[email protected] ~]#
2. Create a file system on the vpath, as shown in Example 5-61.
Example 5-61 mkfs command example
[[email protected] ~]# mkfs -t ext3 /dev/vpatha
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[[email protected] ~]#
3. Create the mount point, and mount the vpath drive, as shown in Example 5-62.
Example 5-62 Mount point
[[email protected] ~]# mkdir /itsosvc
[[email protected] ~]# mount -t ext3 /dev/vpatha /itsosvc
4. The drive is now ready for use. The df command shows us the mounted disk /itsosvc, and
the datapath query command shows that four paths are available (Example 5-63).
Example 5-63 Display mounted drives
[[email protected] ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      74699952   2564388  68341032   4% /
/dev/hda1               101086     13472     82395  15% /boot
none                   1033136         0   1033136   0% /dev/shm
/dev/vpatha            1032088     34092    945568   4% /itsosvc
[[email protected] ~]#

[[email protected] ~]# datapath query device

Total Devices : 1

DEV#:   0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#    Adapter/Hard Disk   State    Mode     Select  Errors
    0    Host0Channel0/sda   OPEN     NORMAL        1       0
    1    Host0Channel0/sdb   OPEN     NORMAL     6296       0
    2    Host1Channel0/sdc   OPEN     NORMAL     6178       0
    3    Host1Channel0/sdd   OPEN     NORMAL        0       0
[[email protected] ~]#
5.11.7 Using the operating system MPIO
Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide
their own multipath support for the operating system. Therefore, you do not have to install an
additional device driver. Always check whether your operating system includes one of the
supported multipath drivers.
You will find this information in the links that are provided in 5.11.2, “Configuration
information” on page 225. In SLES10, the multipath drivers and tools are installed by default,
but for RHEL5, the user has to explicitly choose the multipath components during the OS
installation to install them.
Each of the attached SAN Volume Controller LUNs has a special device file in the Linux /dev
directory.
Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as the SVC
allows. The following Web site provides the most current information about the maximum
configuration for the SAN Volume Controller:
http://www.ibm.com/storage/support/2145
5.11.8 Creating and preparing MPIO volumes for use
First, you have to start the MPIO daemon on your system. Run the following commands on
your host system:
1. Enable MPIO for SLES10 by running the following commands:
a. /etc/init.d/boot.multipath {start|stop}
b. /etc/init.d/multipathd
{start|stop|status|try-restart|restart|force-reload|reload|probe}
Tip: Run insserv boot.multipath multipathd to automatically load the multipath driver
and multipathd daemon during startup.
2. Enable MPIO for RHEL5 by running the following commands:
a. modprobe dm-multipath
b. modprobe dm-round-robin
c. service multipathd start
d. chkconfig multipathd on
Example 5-64 on page 234 shows the commands issued on a Red Hat Enterprise Linux 5.1
operating system.
Example 5-64 Starting MPIO daemon on Red Hat Enterprise Linux
[[email protected] ~]# modprobe dm-round-robin
[[email protected] ~]# multipathd start
[[email protected] ~]# chkconfig multipathd on
[[email protected] ~]#
3. Open the multipath.conf file, and follow the instructions to enable multipathing for IBM
devices. The file is located in the /etc directory. Example 5-65 shows editing using vi.
Example 5-65 Editing the multipath.conf file
[[email protected] etc]# vi multipath.conf
4. Add the following entry to the multipath.conf file:
device {
vendor "IBM"
product "2145"
path_grouping_policy group_by_prio
prio_callout "/sbin/mpath_prio_alua /dev/%n"
}
5. Restart the multipath daemon (Example 5-66).
Example 5-66 Stopping and starting the multipath daemon
[[email protected] ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[[email protected] ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
6. Type the multipath -dl command to see the MPIO configuration. You will see two groups
with two paths each. All paths must have the state [active][ready], and one group will be
[enabled]. A hedged sketch of the output follows.
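The following output is a sketch only: the WWID, device names, and priority values are
hypothetical and vary by system, but the structure of two path groups with two paths each
matches what the previous step describes:

mpath0 (360050768018201bee000000000000035) dm-2 IBM,2145
[size=4.0G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=200][active]
 \_ 0:0:1:0 sdc 8:32  [active][ready]
 \_ 1:0:1:0 sdg 8:96  [active][ready]
\_ round-robin 0 [prio=40][enabled]
 \_ 0:0:0:0 sda 8:0   [active][ready]
 \_ 1:0:0:0 sde 8:64  [active][ready]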
7. Use the fdisk command to create a partition on the SVC disk, as shown in Example 5-67.
Example 5-67 fdisk
[[email protected] scsi]# fdisk -l

Disk /dev/hda: 80.0 GB, 80032038912 bytes
255 heads, 63 sectors/track, 9730 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        9730    78051802+  8e  Linux LVM

Disk /dev/sda: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/dm-2: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-3 doesn't contain a valid partition table

[[email protected] scsi]# fdisk /dev/dm-2
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-516, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-516, default 516):
Using default value 516

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
[[email protected] scsi]# shutdown -r now
8. Create a file system using the mkfs command (Example 5-68).
Example 5-68 mkfs command
[[email protected] ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[[email protected] ~]#
9. Create a mount point, and mount the drive, as shown in Example 5-69.
Example 5-69 Mount point
[[email protected] ~]# mkdir /svcdisk_0
[[email protected] ~]# cd /svcdisk_0/
[[email protected] svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[[email protected] svcdisk_0]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      73608360   1970000  67838912   3% /
/dev/hda1               101086     15082     80785  16% /boot
tmpfs                   967984         0    967984   0% /dev/shm
/dev/dm-2              4080064     73696   3799112   2% /svcdisk_0
5.12 VMware configuration information
This section explains the requirements and additional information for attaching the SAN
Volume Controller to a variety of guest host operating systems running on the VMware
operating system.
5.12.1 Configuring VMware hosts
To configure the VMware hosts, follow these steps:
1. Install the HBAs in your host system, as described in 5.12.4, “HBAs for hosts running
VMware” on page 238.
2. Connect the server FC host adapters to the switches.
3. Configure the switches (zoning), as described in 5.12.6, “VMware storage and zoning
recommendations” on page 240.
4. Install the VMware operating system (if not already done) and check the HBA timeouts, as
described in 5.12.7, “Setting the HBA timeout for failover in VMware” on page 241.
5. Configure the host, VDisks, and host mapping in the SVC, as described in 5.12.9,
“Attaching VMware to VDisks” on page 242.
5.12.2 Operating system versions and maintenance levels
For the latest information about VMware support, refer to this Web site:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
At the time of writing, the following versions are supported:
ESX V3.5
ESX V3.51
ESX V3.02
ESX V2.5.3
ESX V2.5.2
ESX V2.1 with Virtual Machine File System (VMFS) disks
Important: If you are running the VMware V3.01 build, you are required to move to a
minimum VMware level of V3.02 for continued support.
5.12.3 Guest operating systems
Also, make sure that you are using supported guest operating systems. The latest information
is available at this Web site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_VMWare
5.12.4 HBAs for hosts running VMware
Ensure that your hosts that are running on VMware operating systems use the correct HBAs
and firmware levels.
Install the host adapters in your system. Refer to the manufacturer’s instructions for
installation and configuration of the HBAs.
In IBM System x servers, the HBA must always be installed in the first slots. Therefore, if you
install, for example, two HBAs and two network cards, the HBAs must be installed in slot 1
and slot 2 and the network cards can be installed in the remaining slots.
For older ESX versions, you will find the supported HBAs at the IBM Web Site:
http://www.ibm.com/storage/support/2145
The interoperability matrixes for ESX V3.02, V3.5, and V3.51 are available at the VMware
Web site (clicking this link opens or downloads the PDF):
V3.02
http://www.vmware.com/pdf/vi3_io_guide.pdf
V3.5
http://www.vmware.com/pdf/vi35_io_guide.pdf
The supported HBA device drivers are already included in the ESX server build.
After installing the HBAs, load the default configuration of your FC HBAs. We recommend
using the same model of HBA with the same firmware in one server. Having Emulex and
QLogic HBAs that access the same target in one server is not supported.
5.12.5 Multipath solutions supported
Only single path is supported in ESX V2.1, and multipathing is supported in ESX V2.5.x.
The VMware operating system provides multipathing support, so installing multipathing
software is not required.
VMware multipathing software dynamic pathing
VMware multipathing software does not support dynamic pathing. Preferred paths that are set
in the SAN Volume Controller are ignored. The VMware multipathing software performs static
load balancing for I/O, based upon a host setting that defines the preferred path for a given
volume.
Multipathing configuration maximums
When you configure, remember the maximum configuration for the VMware multipathing
software: 256 is the maximum number of SCSI devices supported by the VMware software
and the maximum number of paths to each VDisk is four, giving you a total number of paths,
on a server, of 1,024.
Paths: Each path to a VDisk equates to a single SCSI device.
Clustering support for hosts running VMware
The SVC provides cluster support on VMware guest operating systems. The following Web
site provides the current interoperability information:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_VMware
SAN boot support
SAN boot of any guest OS is supported under VMware. The very nature of VMware means
that SAN boot is a requirement on any guest OS. The guest OS must reside on a SAN disk.
If you are unfamiliar with the VMware environments and the advantages of storing virtual
machines and application data on a SAN, we recommend that you get an overview about the
VMware products before continuing.
VMware documentation is available at this Web site:
http://www.vmware.com/support/pubs/
5.12.6 VMware storage and zoning recommendations
The VMware ESX server can use a Virtual Machine File System (VMFS), which is a file
system that is optimized to run multiple virtual machines as one workload to minimize disk
I/O. It is also able to handle concurrent access from multiple physical machines, because it
enforces the appropriate access controls. Therefore, multiple ESX hosts can share the same
set of LUNs (Figure 5-46).
Figure 5-46 VMware: SVC zoning example
Theoretically, you can run all of your virtual machines on one LUN, but for performance
reasons, in more complex scenarios, it can be better to load balance virtual machines over
separate HBAs, storage subsystems, or arrays.
For example, if you run an ESX host, with several virtual machines, it makes sense to use one
“slow” array, for example, for Print and Active Directory Services guest operating systems
without high I/O, and another fast array for database guest operating systems.
Using fewer VDisks has the following advantages:
More flexibility to create virtual machines without creating new space on the SVC
More possibilities for taking VMware snapshots
Fewer VDisks to manage
Using more and smaller VDisks has the following advantages:
Separate I/O characteristics of the guest operating systems
More flexibility (the multipathing policy and disk shares are set per VDisk)
Microsoft Cluster Service requires its own VDisk for each cluster disk resource
More documentation about designing your VMware infrastructure is provided at one of these
Web sites:
http://www.vmware.com/vmtn/resources/
http://www.vmware.com/resources/techresources/1059
Guidelines:
ESX Server hosts that use shared storage for virtual machine failover or load balancing
must be in the same zone.
You can have only one VMFS volume per VDisk.
5.12.7 Setting the HBA timeout for failover in VMware
The timeout for failover for ESX hosts must be set to 30 seconds:
For QLogic HBAs, the timeout depends on the PortDownRetryCount parameter. The
timeout value is 2 x PortDownRetryCount + 5 sec. It is recommended to set the
qlport_down_retry parameter to 14.
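For example, with qlport_down_retry set to the recommended value of 14, the effective
timeout is 2 x 14 + 5 = 33 seconds.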
For Emulex HBAs, the lpfc_linkdown_tmo and the lpfc_nodev_tmo parameters must be
set to 30 seconds.
To make these changes on your system, perform the following steps (Example 5-70):
1. Back up the /etc/vmware/esx.conf file.
2. Open the /etc/vmware/esx.conf file for editing.
3. The file includes a section for every installed SCSI device.
4. Locate your SCSI adapters, and edit the previously described parameters.
5. Repeat this process for every installed HBA.
Example 5-70 Setting the HBA timeout
[root@Nile svc]# cp /etc/vmware/esx.conf /etc/vmware/esx.confbackup
[root@Nile svc]# vi /etc/vmware/esx.conf
5.12.8 Multipathing in ESX
The ESX Server performs multipathing. You do not need to install a multipathing driver, such
as SDD, either on the ESX server or on the guest operating systems.
5.12.9 Attaching VMware to VDisks
First, we make sure that the VMware host is logged into the SAN Volume Controller. In our
examples, we use the VMware ESX server V3.5 and the host name Nile.
Enter the following command to check the status of the host:
svcinfo lshost <hostname>
Example 5-71 shows that the host Nile is logged into the SVC with two HBAs.
Example 5-71 lshost Nile
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
Then, we have to set the SCSI Controller Type in VMware. By default, ESX Server disables
the SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS
file at the same time (Figure 5-47 on page 243).
But in many configurations, such as those configurations for high availability, the virtual
machines have to share the same VMFS file to share a disk.
To set the SCSI Controller Type in VMware:
1. Log on to your Infrastructure Client, shut down the virtual machine, right-click it, and select
Edit settings.
2. Highlight the SCSI Controller, and select one of the three available settings, depending on
your configuration:
– None: Disks cannot be shared by other virtual machines.
– Virtual: Disks can be shared by virtual machines on the same server.
– Physical: Disks can be shared by virtual machines on any server.
Click OK to apply the setting.
Figure 5-47 Changing SCSI bus settings
3. Create your VDisks on the SVC, and map them to the ESX hosts.
Tips:
If you want to use features, such as VMotion, the VDisks that own the VMFS file have to
be visible to every ESX host that will be able to host the virtual machine. In SVC, select
Allow the virtual disks to be mapped even if they are already mapped to a host.
The VDisk has to have the same SCSI ID on each ESX host.
For this example configuration, we have created one VDisk and have mapped it to our ESX
host, as shown in Example 5-72.
Example 5-72 Mapped VDisk to ESX host Nile
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Nile
id name SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Nile 0       12       VMW_pool        210000E08B892BCD 60050768018301BF2800000000000010
ESX does not automatically scan for SAN changes (except when rebooting the entire ESX
server). If you have made any changes to your SVC or SAN configuration, perform the
following steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.
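If you prefer the service console over the GUI, a rescan can also be triggered with the
esxcfg-rescan command (the adapter name here is an assumption; check yours in the
Storage Adapters view):
esxcfg-rescan vmhba1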
To configure a storage device to use it in VMware, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host for which you want to see the assigned VDisks, and click the
Configuration tab.
3. In the Hardware window on the left side, click Storage.
4. To create a new storage pool, select click here to create a datastore or Add storage if
the yellow field does not appear (Figure 5-48).
Figure 5-48 VMWare add datastore
5. The Add storage wizard will appear.
6. Select Create Disk/LUN, and click Next.
7. Select the SVC VDisk that you want to use for the datastore, and click Next.
8. Review the disk layout and click Next.
9. Enter a datastore name and click Next.
10.Select a block size, enter the size of the new partition, and then, click Next.
11.Review your selections, and click Finish.
Now, the created VMFS datastore appears in the Storage window (Figure 5-49). You will see
the details for the highlighted datastore. Check whether all of the paths are available and that
the Path Selection is set to Most Recently Used.
Figure 5-49 VMWare storage configuration
If not all of the paths are available, check your SAN and storage configuration. After fixing the
problem, select Refresh to perform a path rescan. The view will be updated to the new
configuration.
The recommended Multipath Policy for SVC is Most Recently Used. If you have to edit this
policy, perform the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change (see Figure 5-50).
5. Select Most Recently Used.
6. Click OK.
7. Click Close.
Now, your VMFS datastore has been created, and you can start using it for your guest
operating systems.
5.12.10 VDisk naming in VMware
In the Virtual Infrastructure Client, a VDisk is displayed as a sequence of three or four
numbers, separated by colons (Figure 5-50):
<SCSI HBA>:<SCSI target>:<SCSI VDisk>:<disk partition>
where:
SCSI HBA        The number of the SCSI HBA (can change).
SCSI target     The number of the SCSI target (can change).
SCSI VDisk      The number of the VDisk (never changes).
disk partition  The number of the disk partition (never changes). If the last number is not
displayed, the name stands for the entire VDisk.
Figure 5-50 VDisk naming in VMware
5.12.11 Setting the Microsoft guest operating system timeout
For a Microsoft Windows 2000 Server or Windows 2003 Server installed as a VMware guest
operating system, the disk timeout value must be set to 60 seconds.
We provide the instructions to perform this task in 5.6.5, “Changing the disk timeout on
Microsoft Windows Server” on page 185.
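As a quick reference, the timeout is controlled by the TimeOutValue registry value inside the
guest; a minimal sketch using reg.exe follows (verify it against the referenced section before
applying it):
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f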
5.12.12 Extending a VMFS volume
It is possible to extend VMFS volumes while virtual machines are running. First, you have to
extend the VDisk on the SVC, and then, you are able to extend the VMFS volume. Before
performing these steps, we recommend having a backup of your data.
Perform the following steps to extend a volume:
1. The VDisk can be expanded with the svctask expandvdisksize -size 5 -unit gb
<VDiskname> command (Example 5-73).
Example 5-73 Expanding a VDisk in SVC
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Make sure that the Scan for new Storage Devices check box is marked, and click OK.
After the scan has completed, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume and click Properties.
10.Click Add Extent.
11.Select the new free space, and click Next.
12.Click Next.
13.Click Finish.
The VMFS volume has now been extended, and the new space is ready for use.
5.12.13 Removing a datastore from an ESX host
Before you remove a datastore from an ESX host, you have to migrate or delete all of the
virtual machines that reside on this datastore.
To remove it, perform the following steps:
1. Back up the data.
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage.
6. Highlight the datastore that you want to remove.
7. Click Remove.
8. Read the warning, and if you are sure that you want to remove the datastore and delete all
of the data on it, click Yes.
9. Remove the host mapping on the SVC, or delete the VDisk (as shown in Example 5-74).
10.In the VI Client, select Storage Adapters.
11.Click Rescan.
12.Make sure that the Scan for new Storage Devices check box is marked, and click OK.
13.After the scan completes, the disk disappears from the view.
Your datastore has been successfully removed from the system.
Example 5-74 Remove VDisk host mapping: Delete VDisk
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk VMW_pool
5.13 Sun Solaris support information
For the latest information about supported software and driver levels, always refer to this site:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
5.13.1 Operating system versions and maintenance levels
At the time of writing, Sun Solaris 8, Sun Solaris 9, and Sun Solaris 10 are supported in 64-bit
only.
5.13.2 SDD dynamic pathing
Solaris supports dynamic pathing when you either add more paths to an existing VDisk, or if
you present a new VDisk to a host. No user intervention is required. SDD is aware of the
preferred paths that SVC sets per VDisk.
SDD will use a round-robin algorithm when failing over paths, that is, it will try the next known
preferred path. If this method fails and all preferred paths have been tried, it will use a
round-robin algorithm on the non-preferred paths until it finds a path that is available. If all
paths are unavailable, the VDisk will go offline. Therefore, it can take time to perform path
failover when multiple paths go offline.
SDD under Solaris performs load balancing across the preferred paths where appropriate.
Veritas Volume Manager with dynamic multipathing
Veritas Volume Manager (VM) with dynamic multipathing (DMP) automatically selects the
next available I/O path for I/O requests without action from the administrator. VM with DMP is
also informed when you repair or restore a connection, and when you add or remove devices
after the system has been fully booted (provided that the operating system recognizes the
devices correctly). The new JNI HBA drivers support the mapping of new VDisks without
rebooting the Solaris host.
Note the following support characteristics:
Veritas VM with DMP does not support preferred pathing with SVC.
Veritas VM with DMP does support load balancing across multiple paths with SVC.
Co-existence of SDD and Veritas VM with DMP
Veritas Volume Manager with DMP will coexist in “pass-through” mode with SDD. DMP will
use the vpath devices that are provided by SDD.
OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA and SFRAC V4.1/5.0, and Solaris with
Sun Cluster V3.1/3.2 are supported at the time of writing.
SAN boot support
Note the following support characteristics:
Boot from SAN is supported under Solaris 9 running Symantec Volume Manager.
Boot from SAN is not supported when SDD is used as the multipathing software.
5.14 Hewlett-Packard UNIX configuration information
For the latest information about Hewlett-Packard UNIX® (HP-UX) support, refer to this Web
site:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
5.14.1 Operating system versions and maintenance levels
At the time of writing, HP-UX V11.0 and V11i v1/v2/v3 are supported (64-bit only).
5.14.2 Multipath solutions supported
At the time of writing, SDD V1.6.3.0 for HP-UX is supported. Multipathing Software PV Link
and Cluster Software Service Guard V11.14/11.16/11.17/11.18 are also supported, but in a
cluster environment, we recommend SDD.
SDD dynamic pathing
HP-UX supports dynamic pathing when you either add more paths to an existing VDisk or if
you present a new VDisk to a host.
SDD is aware of the preferred paths that SVC sets per VDisk. SDD will use a round-robin
algorithm when failing over paths, that is, it will try the next known preferred path. If this
method fails and all preferred paths have been tried, it will use a round-robin algorithm on the
non-preferred paths until it finds a path that is available. If all paths are unavailable, the VDisk
will go offline. It can take time, therefore, to perform path failover when multiple paths go
offline.
SDD under HP-UX performs load balancing across the preferred paths where appropriate.
Physical volume links (PVLinks) dynamic pathing
Unlike SDD, PVLinks does not load balance and is unaware of the preferred paths that SVC
sets per VDisk. Therefore, we strongly recommend SDD, except when in a clustering
environment or when using an SVC VDisk as your boot disk.
When creating a Volume Group, specify the primary path that you want HP-UX to use when
accessing the Physical Volume that is presented by SVC. This path, and only this path, will be
used to access the PV as long as it is available, no matter what the SVC’s preferred path to
that VDisk is. Therefore, be careful when creating Volume Groups so that the primary links to
the PVs (and the load) are balanced across HBAs, FC switches, SVC nodes, and so on.
When extending a Volume Group to add alternate paths to the PVs, the order in which you
add these paths is HP-UX’s order of preference if the primary path becomes unavailable.
Therefore, when extending a Volume Group, the first alternate path that you add must be from
the same SVC node as the primary path, to avoid unnecessary node failover due to an HBA,
FC link, or FC switch failure.
5.14.3 Co-existence of SDD and PV Links
If you want to multipath a VDisk with PVLinks while SDD is installed, you need to make sure
that SDD does not configure a vpath for that VDisk. To do this, you need to put the serial
number of any VDisks that you want SDD to ignore in the /etc/vpathmanualexcl.cfg
file. In the case of SAN boot, if you are booting from an SVC VDisk, when you install
SDD (from Version 1.6 onward), SDD will automatically ignore the boot VDisk.
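As an illustration, and assuming that the file takes one serial number per line (verify this
format against the SDD documentation for your level), you might append the vdisk_UID of
the VDisk to be excluded:
echo 60050768018301BF2800000000000010 >> /etc/vpathmanualexcl.cfg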
SAN boot support
SAN boot is supported on HP-UX by using PVLinks as the multipathing software on the boot
device. You can use PVLinks or SDD to provide the multipathing support for the other devices
that are attached to the system.
5.14.4 Using an SVC VDisk as a cluster lock disk
ServiceGuard does not provide a way to specify alternate links to a cluster lock disk. When
using an SVC VDisk as your lock disk, if the path to FIRST_CLUSTER_LOCK_PV becomes
unavailable, the HP node will not be able to access the lock disk if a 50-50 split in quorum
occurs.
To ensure redundancy, when editing your Cluster Configuration ASCII file, make sure that the
variable FIRST_CLUSTER_LOCK_PV has a separate path to the lock disk for each HP node
in your cluster. For example, when configuring a two-node HP cluster, make sure that
FIRST_CLUSTER_LOCK_PV on HP server A goes through a separate SVC node and a
separate FC switch from the FIRST_CLUSTER_LOCK_PV on HP server B.
5.14.5 Support for HP-UX with greater than eight LUNs
HP-UX will not recognize more than eight LUNs per port using the generic SCSI behavior.
To accommodate this behavior, SVC supports a “type” associated with a host. This type can
be set using the svctask mkhost command and modified using the svctask chhost
command. The type can be set to hpux for HP-UX hosts; generic is the default.
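As a sketch (the host name and WWPN are hypothetical), the type can be supplied when the
host is created, or changed afterward:
svctask mkhost -name HPUX_Host -hbawwpn 210000E08B000001 -type hpux
svctask chhost -type hpux HPUX_Host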
When an initiator port, which is a member of a host of type HP-UX, accesses an SVC, the
SVC will behave in the following way:
Flat Space Addressing mode is used rather than the Peripheral Device Addressing Mode.
When an inquiry command for any page is sent to LUN 0 using Peripheral Device
Addressing, it is reported as Peripheral Device Type 0Ch (controller).
When any command other than an inquiry is sent to LUN 0 using Peripheral Device
Addressing, SVC will respond as an unmapped LUN 0 normally responds.
When an inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral
Device Type 00h (Direct Access Device) if a LUN is mapped at LUN 0, or as Peripheral
Device Type 1Fh (unknown device type) otherwise.
When an inquiry is sent to an unmapped LUN that is not LUN 0 using Peripheral Device
Addressing, the Peripheral qualifier returned is 001b and the Peripheral Device type is 1Fh
(unknown or no device type). This response is in contrast to the behavior for generic hosts,
where peripheral Device Type 00h is returned.
5.15 Using SDDDSM, SDDPCM, and SDD Web interface
After installing the SDDDSM or SDD driver, there are specific commands available. To open a
command window for SDDDSM or SDD, from the desktop, select Start → Programs →
Subsystem Device Driver → Subsystem Device Driver Management.
The command documentation for the various operating systems is available in the Multipath
Subsystem Device Driver User Guides:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7000303&loc=en_US&cs=utf-8&lang=en
It is also possible to configure the multipath driver so that it offers a Web interface to run the
commands. Before this configuration can work, we need to configure the Web interface.
Sddsrv does not bind to any TCP/IP port by default, but it allows port binding to be
dynamically enabled or disabled.
For all platforms except Linux, the multipath driver package ships an sddsrv.conf template
file named sample_sddsrv.conf. On all UNIX platforms except Linux, the
sample_sddsrv.conf file is located in the /etc directory. On Windows platforms, the
sample_sddsrv.conf file is in the directory where SDD is installed.
Create the sddsrv.conf file by copying the sample_sddsrv.conf file into the same directory
and naming the copy sddsrv.conf. You can then dynamically change the port binding by
modifying the parameters in the sddsrv.conf file, changing the values of enableport and
loopbackbind to true.
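For example, on a UNIX host, the sequence might look like this sketch (the parameter names
are taken from the template file; verify them in your copy):
cp /etc/sample_sddsrv.conf /etc/sddsrv.conf
vi /etc/sddsrv.conf    (set enableport = true and, if required, loopbackbind = true)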
Figure 5-51 shows the start window of the multipath driver Web interface.
Figure 5-51 SDD Web interface
5.16 Calculating the queue depth
The queue depth is the number of I/O operations that can be run in parallel on a device. It is
usually possible to set a limit on the queue depth on the SDD paths (or equivalent) or the
HBA. Ensure that you configure the servers to limit the queue depth on all of the paths to the
SAN Volume Controller disks in configurations that contain a large number of servers or
VDisks.
You might have a number of servers in the configuration that are idle, or do not initiate the
calculated quantity of I/O operations. If so, you might not need to limit the queue depth.
5.17 Further sources of information
For more information about host attachment and configuration to the SVC, refer to the IBM
System Storage SAN Volume Controller: Host Attachment Guide, SC26-7563.
For more information about SDDDSM or SDD configuration, refer to the IBM TotalStorage
Multipath Subsystem Device Driver User’s Guide, SC30-4096.
When looking for information about certain storage subsystems, this link is usually helpful:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
5.17.1 Publications containing SVC storage subsystem attachment guidelines
It is beyond the intended scope of this book to describe the attachment to each and every
subsystem that the SVC supports. Here is a short list of what we found especially useful in
the writing of this book, and in the field:
SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521,
describes in detail how you can tune your back-end storage to maximize your performance
on the SVC:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247521.pdf
Chapter 14 in DS8000 Performance Monitoring and Tuning, SG24-7146, describes the
guidelines and procedures to make the most of the performance that is available from your
DS8000 storage subsystem when attached to the IBM SAN Volume Controller:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247146.pdf
DS4000 Best Practices and Performance Tuning Guide, SG24-6363, explains how to
connect and configure your storage for optimized performance on the SVC:
http://www.redbooks.ibm.com/redbooks/pdfs/sg246363.pdf
IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659,
discusses specific considerations for attaching the XIV Storage System to a SAN Volume
Controller:
http://www.redbooks.ibm.com/redpieces/pdfs/sg247659.pdf
Chapter 6. Advanced Copy Services
In this chapter, we describe the IBM System Storage SAN Volume Controller (SVC)
Advanced Copy Services: FlashCopy, Metro Mirror, and Global Mirror.
In Chapter 7, “SAN Volume Controller operations using the command-line interface” on
page 339, we describe how to use the command-line interface and Advanced Copy Services.
In Chapter 8, “SAN Volume Controller operations using the GUI” on page 469, we describe
how to use the GUI and Advanced Copy Services.
6.1 FlashCopy
The FlashCopy function of the IBM System Storage SAN Volume Controller (SVC) provides
the capability to perform a point-in-time copy of one or more virtual disks (VDisks).
In the topics that follow, we describe how FlashCopy works on the SVC, and we present
examples of configuring and utilizing FlashCopy.
FlashCopy is also known as point-in-time copy. You can use the FlashCopy technique to help
solve the challenge of making a consistent copy of a data set that is constantly being
updated. The FlashCopy source is frozen for a few seconds or less during the point-in-time
copy process. It will be able to accept I/O when the point-in-time copy bitmap is set up and the
FlashCopy function is ready to intercept read/write requests in the I/O path. Although the
background copy operation takes time, the resulting data at the target appears as though the
copy were made instantaneously.
SVC’s FlashCopy service provides the capability to perform a point-in-time copy of one or
more VDisks. Because the copy is performed at the block level, it operates underneath the
operating system and application caches. The image that is presented is “crash-consistent”:
that is to say, it is similar to an image that is seen in a crash event, such as an unexpected
power failure.
6.1.1 Business requirement
The business applications for FlashCopy are many and various. An important use is
facilitating consistent backups of constantly changing data, and, in these instances, a
FlashCopy is created to capture a point-in-time copy. The resulting image can be backed up
to tertiary storage, such as tape. After the copied data is on tape, the FlashCopy target is
redundant.
Various tasks can benefit from the use of FlashCopy. In the following sections, we describe
the most common situations.
6.1.2 Moving and migrating data
When you need to move a consistent data set from one host to another host, FlashCopy can
facilitate this action with a minimum of downtime for the host application that is dependent on
the source VDisk.
It might be beneficial to quiesce the application on the host and flush the application and OS
buffers so that the new VDisk contains data that is “clean” to the application. Without this
step, the newly created VDisk data is still usable by the application, but recovery procedures
(such as log replay) will be required before it can be used. Quiescing the application ensures
that the startup time against the mirrored copy is minimized.
The cache on the SVC is also flushed by the FlashCopy prestartfcmap command (see
“Preparing” on page 275) prior to performing the FlashCopy.
The data set that has been created on the FlashCopy target is immediately available, as well
as the source VDisk.
6.1.3 Backup
FlashCopy does not affect your backup time, but it allows you to create a point-in-time
consistent data set (across VDisks), with a minimum of downtime for your source host. The
FlashCopy target can then be mounted on another host (or the backup server) and backed
up. Using this procedure, the backup speed becomes less important, because the backup
time does not require downtime for the host that is dependent on the source VDisks.
6.1.4 Restore
You can keep periodically created FlashCopy targets online to provide extremely fast restore
of specific files from the point-in-time consistent data set revealed on the FlashCopy targets.
You simply copy the specific files to the source VDisk in case a restore is needed.
6.1.5 Application testing
You can test new applications and new operating system releases against a FlashCopy of
your production data. The risk of data corruption is eliminated, and your application does not
need to be taken offline for an extended period of time to perform the copy of the data.
Data mining is a good example of an area where FlashCopy can help you. Data mining can
now extract data without affecting your application.
6.1.6 SVC FlashCopy features
The FlashCopy function in SVC supports these features:
The target is the time-zero copy of the source (known as FlashCopy mapping targets).
The source VDisk and target VDisk are available (almost) immediately.
One source VDisk can have up to 256 target VDisks at the same or various points in time.
Consistency groups are supported to enable FlashCopy across multiple VDisks.
The target VDisk can be updated independently of the source VDisk.
Bitmaps governing I/O redirection (I/O indirection layer) are maintained in both nodes of
the SVC I/O Group to prevent a single point of failure.
FlashCopy mapping can be automatically withdrawn after the completion of background
copy.
FlashCopy consistency groups can be automatically withdrawn after the completion of
background copy.
Multiple Target FlashCopy: FlashCopy now supports up to 256 target copies from a single
source VDisk.
Space-Efficient FlashCopy: Space-Efficient FlashCopy uses disk space only for changes
between source and target data and not for the entire capacity of a VDisk copy.
FlashCopy licensing: The FlashCopy previously was licensed by the source and target
virtual capacity. It will now be licensed only by source virtual capacity.
Incremental FlashCopy: A mapping created with the “incremental” flag copies only the
data that has been changed on the source or the target since the previous copy
completed. This incremental FlashCopy can substantially reduce the time that is required
to recreate an independent image.
Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without having to wait for the original
copy operation to complete.
Cascaded FlashCopy: The target VDisk of a FlashCopy mapping can be the source VDisk
in a future FlashCopy mapping.
6.2 Reverse FlashCopy
With SVC Version 5.1.x, Reverse FlashCopy support is available. Reverse FlashCopy
enables FlashCopy targets to become restore points for the source without breaking the
FlashCopy relationship and without having to wait for the original copy operation to complete.
It supports multiple targets and thus multiple rollback points.
A key advantage of SVC Multiple Target Reverse FlashCopy function is that the reverse
FlashCopy does not destroy the original target. Thus, any process using the target, such as a
tape backup process, will not be disrupted. Multiple recovery points can be tested.
SVC is also unique in that an optional copy of the source VDisk can be made before starting
the reverse copy operation in order to diagnose problems.
When a user suffers a disaster and needs to restore from an on-disk backup, the user follows
this procedure:
1. (Optional) Create a new target VDisk (VDisk Z) and FlashCopy the production VDisk
(VDisk X) onto the new target for later problem analysis.
2. Create a new FlashCopy map with the backup to be restored (VDisk Y) or (VDisk W) as
the source VDisk and VDisk X as the target VDisk, if this map does not already exist.
3. Start the FlashCopy map (VDisk Y → VDisk X) with the new -restore option to copy the
backup data onto the production disk.
4. The production disk is instantly available with the backup data.
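A minimal CLI sketch of steps 2 and 3, assuming hypothetical VDisk names VDisk_Y
(backup) and VDisk_X (production):
svctask mkfcmap -source VDisk_Y -target VDisk_X -name Restore_Map
svctask startfcmap -prep -restore Restore_Map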
Figure 6-1 on page 259 shows an example of Reverse FlashCopy.
Figure 6-1 Reverse FlashCopy
Regardless of whether the initial FlashCopy map (VDisk X → VDisk Y) is incremental, the
reverse operation only copies the modified data.
Consistency groups are reversed by creating a set of new “reverse” FlashCopy maps and
adding them to a new “reverse” consistency group. A consistency group cannot contain more
than one FlashCopy map with the same target VDisk.
6.2.1 FlashCopy and Tivoli Storage Manager
The management of many large Reverse FlashCopy consistency groups is a complex task,
without a tool for assistance.
IBM Tivoli FlashCopy Manager V2.1 is a new product that improves the interlock between
SVC and Tivoli Storage Manager for Advanced Copy Services.
Figure 6-2 on page 260 shows the Tivoli Storage Manager for Advanced Copy Services
features.
Figure 6-2 Tivoli Storage Manager for Advanced Copy Services features
Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for
Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli
FlashCopy Manager, you can coordinate and automate host preparation steps before issuing
FlashCopy start commands to ensure that a consistent backup of the application is made.
You can put databases into hot backup mode, and before starting FlashCopy, you flush the
filesystem cache.
FlashCopy Manager also allows for easier management of on-disk backups using FlashCopy
and provides a simple interface to the “reverse” operation.
Figure 6-3 on page 261 shows the FlashCopy Manager feature.
Figure 6-3 Tivoli Storage Manager FlashCopy Manager features
It is beyond the intended scope of this book to describe Tivoli Storage Manager FlashCopy
Manager.
6.3 How FlashCopy works
FlashCopy works by defining a FlashCopy mapping that consists of one source VDisk
together with one target VDisk. You can define multiple FlashCopy mappings, and
point-in-time consistency can be observed across multiple FlashCopy mappings using
consistency groups. See “Consistency group with Multiple Target FlashCopy” on page 265.
When FlashCopy is started, it makes a copy of a source VDisk to a target VDisk, and the
original contents of the target VDisk are overwritten. When the FlashCopy operation is
started, the target VDisk presents the contents of the source VDisk as they existed at the
single point-in-time of FlashCopy starting. This operation is also referred to as a time-zero
copy (T0 ).
When a FlashCopy is started, the source and target VDisks are instantaneously available.
When FlashCopy starts, bitmaps are created to govern and redirect I/O to the source or target
VDisk, depending on where the requested block is located, while the blocks are copied in the
background from the source VDisk to the target VDisk.
For more details about background copy, see 6.4.5, “Grains and the FlashCopy bitmap” on
page 266.
Figure 6-4 on page 262 illustrates the redirection of the host I/O toward the source VDisk and
the target VDisk.
Figure 6-4 Redirection of host I/O
6.4 Implementing SVC FlashCopy
In the topics that follow, we describe how FlashCopy is implemented in the SVC.
6.4.1 FlashCopy mappings
In the SVC, FlashCopy occurs between a source VDisk and a target VDisk. The source and
target VDisks must be the same size. The minimum granularity that SVC supports for
FlashCopy is an entire VDisk; it is not possible to use FlashCopy to copy only part of a VDisk.
The source and target VDisks must both belong to the same SVC cluster, but they can be in
separate I/O Groups within that cluster. SVC FlashCopy associates a source VDisk to a target
VDisk in a FlashCopy mapping.
VDisks, which are members of a FlashCopy mapping, cannot have their size increased or
decreased while they are members of the FlashCopy mapping. The SVC supports the
creation of enough FlashCopy mappings to allow every VDisk to be a member of a FlashCopy
mapping.
A FlashCopy mapping is the act of creating a relationship between a source VDisk and a
target VDisk. FlashCopy mappings can be either stand-alone or a member of a consistency
group. You can perform the act of preparing, starting, or stopping on either the stand-alone
mapping or the consistency group.
Rule: After a mapping is in a consistency group, you can only operate on the group, and
you can no longer prepare, start, or stop the individual mapping.
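As a sketch of the basic CLI flow for a stand-alone mapping (the VDisk and mapping names
are hypothetical; see Chapter 7 for full details):
svctask mkfcmap -source VDisk_Source -target VDisk_Target -name FCMap_1
svctask prestartfcmap FCMap_1
svctask startfcmap FCMap_1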
Figure 6-5 on page 263 illustrates the concept of FlashCopy mapping.
Figure 6-5 FlashCopy mapping
6.4.2 Multiple Target FlashCopy
SVC supports copying up to 256 target VDisks from a single source VDisk. Each copy is
managed by a unique mapping. In general, each mapping acts independently and is not
affected by other mappings sharing the same source VDisk. Figure 6-6 illustrates a view
of a Multiple Target FlashCopy implementation.
Figure 6-6 Multiple Target FlashCopy implementation
Figure 6-6 shows four targets and mappings taken from a single source. It also shows that
there is an ordering to the targets: Target 1 is the oldest (as measured from the time it was
started) through to Target 4, which is the newest. The ordering is important because of the
way in which data is copied when multiple target VDisks are defined and because of the
dependency chain that results. A write to the source VDisk does not cause its data to be
copied to all of the targets; instead, it is copied to the newest target VDisk only (Target 4 in
Figure 6-6). The older targets will refer to new targets first before referring to the source.
From the point of view of an intermediate target disk (neither the oldest or the newest), it
treats the set of newer target VDisks and the true source VDisk as a type of composite
source.
It treats all older VDisks as a kind of target (and behaves like a source to them). If the
mapping for an intermediate target VDisk shows 100% progress, its target VDisk contains a
complete set of data. In this case, mappings treat the set of newer target VDisks, up to and
including the 100% progress target, as a form of composite source. A dependency
relationship exists between a particular target and all newer targets (up to and including a
target that shows 100% progress) that share the same source until all data has been copied
to this target and all older targets.
You can read more information about Multiple Target FlashCopy in 6.4.6, “Interaction and
dependency between Multiple Target FlashCopy mappings” on page 267.
6.4.3 Consistency groups
Consistency groups address the issue where the objective is to preserve data consistency
across multiple VDisks, because the applications have related data that spans multiple
VDisks. A requirement for preserving the integrity of data that is being written is to ensure that
“dependent writes” are executed in the application’s intended sequence. Because the SVC
provides point-in-time semantics, a self-consistent data set is obtained.
FlashCopy mappings can be members of a consistency group, or they can be operated in a
stand-alone manner, not as part of a consistency group.
FlashCopy commands can be issued to a FlashCopy consistency group, which affects all
FlashCopy mappings in the consistency group, or to a single FlashCopy mapping if it is not
part of a defined FlashCopy consistency group.
Figure 6-7 illustrates a consistency group consisting of two FlashCopy mappings.
Figure 6-7 FlashCopy consistency group
Dependent writes
To illustrate why it is crucial to use consistency groups when a data set spans multiple VDisks,
consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is to be
performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that the database update
has completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next step. However, if the database log (updates 1 and 3) and the
database itself (update 2) are on separate VDisks and a FlashCopy mapping is started during
this update, you need to exclude the possibility that the database itself is copied slightly
before the database log. This will result in the target VDisks seeing writes (1) and (3) but not
(2), because the database was copied before the write was completed.
In this case, if the database was restarted using the backup that was made from the
FlashCopy target disks, the database log indicates that the transaction had completed
successfully when, in fact, that is not the case, because the FlashCopy of the VDisk with the
database file was started (bitmap was created) before the write was on the disk. Therefore,
the transaction is lost, and the integrity of the database is in question.
To overcome the issue of dependent writes across VDisks and to create a consistent image of
the client data, it is necessary to perform a FlashCopy operation on multiple VDisks as an
atomic operation. To achieve this condition, the SVC supports the concept of consistency
groups.
A FlashCopy consistency group can contain up to 512 FlashCopy mappings (up to the
maximum number of FlashCopy mappings supported by the SVC cluster). FlashCopy
commands can then be issued to the FlashCopy consistency group and thereby
simultaneously for all of the FlashCopy mappings that are defined in the consistency group.
For example, when issuing a FlashCopy start command to the consistency group, all of the
FlashCopy mappings in the consistency group are started at the same time, resulting in a
point-in-time copy that is consistent across all of the FlashCopy mappings that are contained
in the consistency group.
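As a sketch of this flow on the CLI (the group, VDisk, and mapping names are hypothetical),
the mappings are created in the group, and the group is then prepared and started as one unit:
svctask mkfcconsistgrp -name FCCG_1
svctask mkfcmap -source DB_Data -target DB_Data_T -consistgrp FCCG_1
svctask mkfcmap -source DB_Log -target DB_Log_T -consistgrp FCCG_1
svctask prestartfcconsistgrp FCCG_1
svctask startfcconsistgrp FCCG_1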
Consistency group with Multiple Target FlashCopy
It is important to note that a consistency group aggregates FlashCopy mappings, not VDisks.
Thus, where a source VDisk has multiple FlashCopy mappings, they can be in the same or
separate consistency groups. If a particular VDisk is the source VDisk for multiple FlashCopy
mappings, you might want to create separate consistency groups to separate each mapping
of the same source VDisk. If the source VDisk with multiple target VDisks is in the same
consistency group, the result is that when the consistency group is started, multiple identical
copies of the VDisk will be created. However, this result might be what the user wants. For
example, the user might want to run multiple simulations on the same set of source data. If
so, this approach is one way of obtaining identical sets of source data.
Maximum configurations
Table 6-1 shows the FlashCopy properties and maximum configurations.
Table 6-1 FlashCopy properties and maximum configuration
FlashCopy targets per source: 256. This maximum is the maximum number of FlashCopy
mappings that can exist with the same source VDisk.
FlashCopy mappings per cluster: 4,096. The number of mappings is no longer limited by
the number of VDisks in the cluster, and so, the FlashCopy component limit applies.
FlashCopy consistency groups per cluster: 127. This maximum is an arbitrary limit that is
policed by the software.
FlashCopy VDisk capacity per I/O Group: 1,024 TB. This maximum is a limit on the quantity
of FlashCopy mappings using bitmap space from this I/O Group. This maximum
configuration will consume all 512 MB of bitmap space for the I/O Group and allow no Metro
and Global Mirror bitmap space. The default is 40 TB.
FlashCopy mappings per consistency group: 512. This limit is due to the time that is taken
to prepare a consistency group with a large number of mappings.
6.4.4 FlashCopy indirection layer
The FlashCopy indirection layer governs the I/O to both the source and target VDisks when a
FlashCopy mapping is started, which is done using a FlashCopy bitmap. The purpose of the
FlashCopy indirection layer is to enable both the source and target VDisks for read and write
I/O immediately after the FlashCopy has been started.
To illustrate how the FlashCopy indirection layer works, we look at what happens when a
FlashCopy mapping is prepared and subsequently started.
When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush the write data in the cache onto the source VDisk or VDisks that are part of a
consistency group.
2. Put cache into write-through on the source VDisks.
3. Discard cache for the target VDisks.
4. Establish a sync point on all of the source VDisks in the consistency group (creating the
FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source VDisks and target
VDisks.
6. Enable cache on both the source VDisks and target VDisks.
FlashCopy provides the semantics of a point-in-time copy, using the indirection layer, which
intercepts the I/Os that are targeted at either the source VDisks or target VDisks. The act of
starting a FlashCopy mapping causes this indirection layer to become active in the I/O path,
which occurs as an atomic command across all FlashCopy mappings in the consistency
group. The indirection layer makes a decision about each I/O. This decision is based upon
these factors:
The VDisk and the logical block address (LBA) to which the I/O is addressed
Its direction (read or write)
The state of an internal data structure, the FlashCopy bitmap
The indirection layer either allows the I/O to go through the underlying storage, redirects the
I/O from the target VDisk to the source VDisk, or stalls the I/O while it arranges for data to be
copied from the source VDisk to the target VDisk. To explain in more detail which action is
applied for each I/O, we first look at the FlashCopy bitmap.
6.4.5 Grains and the FlashCopy bitmap
When data is copied between VDisks by FlashCopy, either from source to target or from
target to target, it is copied in units of address space known as grains. The grain size is
256 KB or 64 KB. The FlashCopy bitmap contains one bit for each grain. The bit records
whether the associated grain has yet been split by copying the grain from the source to the
target.
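As a rough worked example (for illustration only, and not a statement of SVC’s exact internal
accounting): with a 256 KB grain size, a 1 TB VDisk contains 1 TB / 256 KB = 4,194,304
grains, so its FlashCopy bitmap requires 4,194,304 bits, or 512 KB of bitmap memory.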
Source reads
Reads of the source are always passed through to the underlying source disk.
Target reads
In order for FlashCopy to process a read from the target disk, FlashCopy must consult its
bitmap. If the data being read has already been copied to the target, the read is sent to the
target disk. If it has not, the read is sent to the source VDisk or possibly to another target
VDisk if multiple FlashCopy mappings exist for the source VDisk. Clearly, this algorithm
requires that while this read is outstanding, no writes are allowed to execute that change the
data being read. The SVC satisfies this requirement by using a cluster-wide locking scheme.
Writes to the source or target
Where writes occur to source or target to an area (grain), which has not yet been copied,
these writes will usually be stalled while a copy operation is performed to copy data from the
source to the target, to maintain the illusion that the target contains its own copy. A specific
optimization is performed where an entire grain is written to the target VDisk. In this case, the
new grain contents are written to the target VDisk. If this write succeeds, the grain is marked
as split in the FlashCopy bitmap without a copy from the source to the target having been
performed. If the write fails, the grain is not marked as split.
The rate at which the grains are copied across from the source VDisk to the target VDisk is
called the copy rate. By default, the copy rate is 50, although you can alter this rate. For more
information about copy rates, see 6.4.13, “Space-efficient FlashCopy” on page 276.
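For example, the copy rate of an existing mapping can be altered nondisruptively with the
svctask chfcmap command (the mapping name is hypothetical):
svctask chfcmap -copyrate 80 FCMap_1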
The FlashCopy indirection layer algorithm
Imagine the FlashCopy indirection layer as the I/O traffic cop when a FlashCopy mapping is
active. The I/O is intercepted and handled according to whether it is directed at the source
VDisk or at the target VDisk, depending on the nature of the I/O (read or write) and the state
of the grain (whether it has been copied).
In Figure 6-8, we illustrate how the background copy runs while I/Os are handled according to
the indirection layer algorithm.
Figure 6-8 I/O processing with FlashCopy
6.4.6 Interaction and dependency between Multiple Target FlashCopy
mappings
Figure 6-9 on page 268 represents a set of four FlashCopy mappings that share a common
source. The FlashCopy mappings will target VDisks Target 0, Target 1, Target 2, and Target 3.
Figure 6-9 Interactions between MTFC mappings
Target 0 is not dependent on a source, because it has completed copying. Target 0 has two
dependent mappings (Target 1 and Target 2).
Target 1 is dependent upon Target 0. It will remain dependent until all of Target 1 has been
copied. Target 2 is dependent on it, because Target 2 is 20% copy complete. After all of
Target 1 has been copied, it can then move to the idle_copied state.
Target 2 is dependent upon Target 0 and Target 1 and will remain dependent until all of Target
2 has been copied. No target is dependent on Target 2, so when all of the data has been
copied to Target 2, it can move to the Idle_copied state.
Target 3 has actually completed copying, so it is not dependent on any other maps.
Write to target VDisk
A write to an intermediate or newest target VDisk must consider the state of the grain within
its own mapping, as well as that of the grain of the next oldest mapping:
If the grain of the next oldest mapping has not yet been copied, it must be copied before
the write is allowed to proceed in order to preserve the contents of the next oldest
mapping. The data written to the next oldest mapping comes from a target or source.
If the grain in the target being written has not yet been copied, the grain is copied from the
oldest already copied grain in the mappings that are newer than it, or the source if none
are already copied. After this copy has been done, the write can be applied to the target.
Read to target VDisk
If the grain being read has been split, the read simply returns data from the target being read.
If the read is to an uncopied grain on an intermediate target VDisk, each of the newer
mappings is examined in turn to see if the grain has been split. The read is surfaced from the
first split grain found or from the source VDisk if none of the newer mappings has a split grain.
Stopping the copy process
An important scenario arises when a stop command is delivered to a mapping for a target that
has dependent mappings.
After a mapping is in the Stopped state, it can be deleted or restarted, which must not be
allowed if there are still grains that hold data upon which other mappings depend. To avoid
this situation, when a mapping receives a stopfcmap or stopfcconsistgrp command, rather
than immediately moving to the Stopped state, it enters the Stopping state. An automatic copy
process is driven that will find and copy all of the data that is uniquely held on the target VDisk
of the mapping that is being stopped, to the next oldest mapping that is in the Copying state.
Stopping the copy process: The stopping copy process can be ongoing for several
mappings sharing the same source at the same time. At the completion of this process, the
mapping will automatically make an asynchronous state transition to the Stopped state or
the idle_copied state if the mapping was in the Copying state with progress = 100%.
For example, if the mapping associated with Target 0 was issued a stopfcmap or
stopfcconsistgrp command, Target 0 enters the Stopping state while a process copies the
data of Target 0 to Target 1. After all of the data has been copied, Target 0 enters the Stopped
state, and Target 1 is no longer dependent upon Target 0, but Target 1 remains dependent on
Target 2.
6.4.7 Summary of the FlashCopy indirection layer algorithm
Table 6-2 summarizes the indirection layer algorithm.
Table 6-2 Summary table of the FlashCopy indirection layer algorithm
Source VDisk, grain not yet split (copied):
Read: Read from the source VDisk.
Write: Copy the grain to the most recently started target for this source, then write to the
source.
Source VDisk, grain split:
Read: Read from the source VDisk.
Write: Write to the source VDisk.
Target VDisk, grain not yet split:
Read: If any newer targets exist for this source in which this grain has already been copied,
read from the oldest of these targets; otherwise, read from the source.
Write: Hold the write. Check the dependency target VDisks to see whether the grain is split.
If the grain is not already copied to the next oldest target for this source, copy the grain to
the next oldest target. Then, write to the target.
Target VDisk, grain split:
Read: Read from the target VDisk.
Write: Write to the target VDisk.
6.4.8 Interaction with the cache
This copy-on-write process can introduce significant latency into write operations. In order to
isolate the active application from this latency, the FlashCopy indirection layer is placed
logically beneath the cache.
Therefore, the copy latency is typically seen only when data is destaged from the cache,
rather than on write operations from an application, which might otherwise be blocked
waiting for the copy operation to complete.
In Figure 6-10, we illustrate the logical placement of the FlashCopy indirection layer.
Figure 6-10 Logical placement of the FlashCopy indirection layer
6.4.9 FlashCopy rules
With SVC 5.1, the maximum number of supported FlashCopy mappings has been improved
to 8,192 per SVC cluster. Consider the following rules when defining FlashCopy mappings:
There is a one-to-one mapping of the source VDisk to the target VDisk.
One source VDisk can have 256 target VDisks.
The source VDisks and target VDisks can be in separate I/O Groups of the same cluster.
The minimum FlashCopy granularity is the entire VDisk.
The source and target must be exactly equal in size.
The size of the source VDisk and the target VDisk cannot be altered (increased or
decreased) after the FlashCopy mapping is created.
There is a per I/O Group limit of 1,024 TB on the quantity of the source VDisk and target
VDisk capacity that can participate in FlashCopy mappings.
6.4.10 FlashCopy and image mode disks
You can use FlashCopy with an image mode VDisk. Because the source and target VDisks
must be exactly the same size when creating a FlashCopy mapping, you must create a VDisk
with the exact same size as the image mode VDisk. To accomplish this task, use the svcinfo
lsvdisk -bytes VDiskName command. The size in bytes is then used to create the VDisk to
use in the FlashCopy mapping.
In Example 6-1 on page 271, we list the size of the Image_VDisk_A VDisk. Subsequently, the
VDisk_A_copy VDisk is created, specifying the same size.
Example 6-1 Listing the size of a VDisk in bytes and creating a VDisk of equal size
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Image_VDisk_A
id 8
name Image_VDisk_A
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG_Image
capacity 36.0GB
type image
.
.
.
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -size 36 -unit gb -name VDisk_A_copy
-mdiskgrp MDG_DS47 -vtype striped -iogrp 1
Virtual Disk, id [19], successfully created
Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize VDisk
commands to modify the size of the VDisk. See 7.4.10, “Expanding a VDisk” on page 367
and 7.4.16, “Shrinking a VDisk” on page 372 for more information.
You can use an image mode VDisk as either a FlashCopy source VDisk or target VDisk.
6.4.11 FlashCopy mapping events
In this section, we explain the series of events that modify the states of a FlashCopy. In
Figure 6-11 on page 272, the FlashCopy mapping state diagram shows an overview of the
states that apply to a FlashCopy mapping. We describe the mapping events in Table 6-3 on
page 272.
Overview of a FlashCopy sequence of events:
1. Associate the source data set with a target location (one or more source and target
VDisks).
2. Create a FlashCopy mapping for each source VDisk to the corresponding target VDisk.
The target VDisk must be equal in size to the source VDisk.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
a. Flush cache for the source.
b. Discard cache for the target.
5. Start (trigger) the FlashCopy:
a. Pause I/O (briefly) on the source.
b. Resume I/O on the source.
c. Start I/O on the target.
Figure 6-11 FlashCopy mapping state diagram
Table 6-3 Mapping events

Create: A new FlashCopy mapping is created between the specified source VDisk and the specified target VDisk. The operation fails if any of the following conditions is true:
- For SAN Volume Controller software Version 4.1.0 or earlier, the source or target VDisk is already a member of a FlashCopy mapping.
- For SAN Volume Controller software Version 4.2.0 or later, the source or target VDisk is already a target VDisk of a FlashCopy mapping.
- For SAN Volume Controller software Version 4.2.0 or later, the source VDisk is already a member of 16 FlashCopy mappings.
- For SAN Volume Controller software Version 4.3.0 or later, the source VDisk is already a member of 256 FlashCopy mappings.
- The node has insufficient bitmap memory.
- The source and target VDisk sizes differ.

Prepare: The prestartfcmap or prestartfcconsistgrp command is directed either to a consistency group for FlashCopy mappings that are members of a normal consistency group or to the mapping name for FlashCopy mappings that are stand-alone mappings. The prestartfcmap or prestartfcconsistgrp command places the FlashCopy mapping into the Preparing state.
Important: The prestartfcmap or prestartfcconsistgrp command can corrupt any data that previously resided on the target VDisk, because cached writes are discarded. Even if the FlashCopy mapping is never started, the data from the target might have logically changed during the act of preparing to start the FlashCopy mapping.

Flush done: The FlashCopy mapping automatically moves from the Preparing state to the Prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.

Start: When all of the FlashCopy mappings in a consistency group are in the Prepared state, the FlashCopy mappings can be started. To preserve the cross volume consistency group, the start of all of the FlashCopy mappings in the consistency group must be synchronized correctly with respect to I/Os that are directed at the VDisks by using the startfcmap or startfcconsistgrp command. The following actions occur during the startfcmap or startfcconsistgrp command's run:
- New reads and writes to all source VDisks in the consistency group are paused in the cache layer until all ongoing reads and writes beneath the cache layer are completed.
- After all FlashCopy mappings in the consistency group are paused, the internal cluster state is set to allow FlashCopy operations.
- After the cluster state is set for all FlashCopy mappings in the consistency group, read and write operations continue on the source VDisks.
- The target VDisks are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for both the source and target VDisks.

Modify: You can modify the following FlashCopy mapping properties:
- FlashCopy mapping name
- Clean rate
- Consistency group
- Copy rate (for background copy)
- Automatic deletion of the mapping when the background copy is complete

Stop: There are two separate mechanisms by which a FlashCopy mapping can be stopped:
- You have issued a command.
- An I/O error has occurred.

Delete: This command requests that the specified FlashCopy mapping be deleted. If the FlashCopy mapping is in the Stopped state, the force flag must be used.

Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the Stopped state.

Copy complete: After all of the source data has been copied to the target and there are no dependent mappings, the state is set to Copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is automatically deleted. If this option is not specified, the FlashCopy mapping is not automatically deleted and can be reactivated by preparing and starting again.

Bitmap online/offline: The node has failed.
6.4.12 FlashCopy mapping states
In this section, we explain the states of a FlashCopy mapping in more detail.
Idle_or_copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping
exists between the source and target, but the source and the target behave as independent
VDisks in this state.
Copying
The FlashCopy indirection layer governs all I/O to the source and target VDisks while the
background copy is running.
Reads and writes are executed on the target as though the contents of the source were
instantaneously copied to the target during the startfcmap or startfcconsistgrp command.
The source and target can be independently updated. Internally, the target depends on the
source for certain tracks.
Read and write caching is enabled on the source and the target.
Stopped
The FlashCopy was stopped either by a user command or by an I/O error.
When a FlashCopy mapping is stopped, any useful data in the target VDisk is lost. Therefore,
while the FlashCopy mapping is in this state, the target VDisk is in the Offline state. To regain
access to the target, the mapping must be started again (the previous point-in-time will be
lost) or the FlashCopy mapping must be deleted. The source VDisk is accessible, and
read/write caching is enabled for the source. In the Stopped state, a mapping can be
prepared again or it can be deleted.
Stopping
The mapping is in the process of transferring data to a dependent mapping. The behavior
of the target VDisk depends on whether the background copy process had completed while
the mapping was in the Copying state. If the copy process had completed, the target VDisk
remains online while the stopping copy process completes. If the copy process had not
completed, data in the cache is discarded for the target VDisk. The target VDisk is taken
offline, and the stopping copy process runs. After the data has been copied, a stop complete
asynchronous event notification is issued. The mapping will move to the Idle/Copied state if
the background copy has completed or to the Stopped state if the background copy has not
completed.
The source VDisk remains accessible for I/O.
Suspended
The target has been “flashed” from the source and was in the Copying or Stopping state.
Access to the metadata has been lost, and as a consequence, both the source and target
VDisks are offline. The background copy process has been halted.
When the metadata becomes available again, the FlashCopy mapping will return to the
Copying or Stopping state, the access to the source and target VDisks will be restored, and
the background copy or stopping process will be resumed. Unflushed data that was written to
the source or target before the FlashCopy was suspended is pinned in the cache, consuming
resources, until the FlashCopy mapping leaves the Suspended state.
Preparing
Because the FlashCopy function is placed logically beneath the cache to anticipate any write
latency problem, it requires that the cache holds no read or write data for the target and no
write data for the source at the time that the FlashCopy operation is started. This design
ensures that the resulting copy is consistent.
Performing the necessary cache flush as part of the startfcmap or startfcconsistgrp
command would unnecessarily delay the I/Os that are received after the startfcmap or
startfcconsistgrp command is executed, because these I/Os must wait for the cache flush
to complete.
To overcome this problem, SVC FlashCopy supports the prestartfcmap or
prestartfcconsistgrp command, which prepares for a FlashCopy start while still allowing
I/Os to continue to the source VDisk.
In the Preparing state, the FlashCopy mapping is prepared by the following steps:
1. Flushing any modified write data associated with the source VDisk from the cache. Read
data for the source will be left in the cache.
2. Placing the cache for the source VDisk into write-through mode, so that subsequent writes
wait until data has been written to disk before completing the write command that is
received from the host.
3. Discarding any read or write data that is associated with the target VDisk from the cache.
While in this state, writes to the source VDisk will experience additional latency, because the
cache is operating in write-through mode.
While the FlashCopy mapping is in this state, the target VDisk is reported as online, but it will
not perform reads or writes. These reads and writes are failed by the SCSI front end.
Before starting the FlashCopy mapping, it is important that any cache at the host level, for
example, the buffers in the host OSs or applications, are also instructed to flush any
outstanding writes to the source VDisk.
Prepared
When in the Prepared state, the FlashCopy mapping is ready to perform a start. While the
FlashCopy mapping is in this state, the target VDisk is in the Offline state. In the Prepared
state, writes to the source VDisk experience additional latency because the cache is
operating in write-through mode.
Summary of FlashCopy mapping states
Table 6-4 on page 276 lists the various FlashCopy mapping states and the corresponding
states of the source and target VDisks.
Table 6-4 FlashCopy mapping state summary

State           Source online/offline   Source cache state   Target online/offline          Target cache state
Idling/Copied   Online                  Write-back           Online                         Write-back
Copying         Online                  Write-back           Online                         Write-back
Stopped         Online                  Write-back           Offline                        N/A
Stopping        Online                  Write-back           Online if copy complete;       N/A
                                                             Offline if copy not complete
Suspended       Offline                 Write-back           Offline                        N/A
Preparing       Online                  Write-through        Online but not accessible      N/A
Prepared        Online                  Write-through        Online but not accessible      N/A
6.4.13 Space-efficient FlashCopy
You can have a mix of space-efficient and fully allocated VDisks in FlashCopy mappings. One
common combination is a fully allocated source with a space-efficient target, which allows the
target to consume a smaller amount of real storage than the source.
For the best performance, the grain size of the Space-Efficient VDisk must match the grain
size of the FlashCopy mapping. However, if the grain sizes differ, the mapping still proceeds.
Consider the following information when you create your FlashCopy mappings:
- If you are using a fully allocated source with a space-efficient target, disable the background copy and cleaning mode on the FlashCopy map by setting both the background copy rate and cleaning rate to zero. Otherwise, if these features are enabled, all of the source is copied onto the target VDisk, which causes the Space-Efficient VDisk to either go offline or to grow as large as the source.
- If you are using only a space-efficient source, only the space that is used on the source VDisk is copied to the target VDisk. For example, if the source VDisk has a virtual size of 800 GB and a real size of 100 GB, of which 50 GB has been used, only the used 50 GB is copied.
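For example, a mapping from a fully allocated source to a space-efficient target with background copy and cleaning disabled can be created as shown in this sketch (the VDisk and mapping names are purely illustrative):

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source VDisk_Prod -target VDisk_SE_Target -copyrate 0 -cleanrate 0 -name FCMap_SE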
Multiple space-efficient targets for FlashCopy
The SVC implementation of Multiple Target FlashCopy ensures that when new data is written
to a source or target, that data is copied to at most one other target. A consequence of this
implementation is that Space-Efficient VDisks can be used in conjunction with Multiple Target
FlashCopy without causing allocations to occur on multiple targets when data is written to the
source.
Space-efficient incremental FlashCopy
The implementation of Space-Efficient VDisks does not preclude the use of incremental
FlashCopy on the same VDisks. It does not make sense to have a fully allocated source
VDisk and to use incremental FlashCopy to copy this fully allocated source VDisk to a
space-efficient target VDisk; however, this combination is possible.
Two more interesting combinations of incremental FlashCopy and Space-Efficient VDisks are:
- A space-efficient source VDisk can be incrementally copied using FlashCopy to a space-efficient target VDisk. Whenever the FlashCopy is retriggered, only data that has been modified is recopied to the target. Note that if space is allocated on the target because of I/O to the target VDisk, this space is not reclaimed when the FlashCopy is retriggered.
- A fully allocated source VDisk can be incrementally copied using FlashCopy to another fully allocated VDisk at the same time as being copied to multiple space-efficient targets (taken at separate points in time). This combination allows a single full backup to be kept for recovery purposes and separates the backup workload from the production workload, while at the same time allowing older space-efficient backups to be retained.
Migration from and to a Space-Efficient VDisk
There are various scenarios to migrate a non-Space-Efficient VDisk to a Space-Efficient
VDisk. We describe migration fully in Chapter 9, “Data migration” on page 675.
6.4.14 Background copy
The FlashCopy background copy feature enables you to copy all of the data in a source VDisk to
the corresponding target VDisk. Without background copy, only data that has changed
on the source VDisk is copied to the target VDisk. The benefit of using a
FlashCopy mapping with background copy enabled is that the target VDisk becomes a real
clone (independent from the source VDisk) of the FlashCopy mapping source VDisk.
The background copy rate is a property of a FlashCopy mapping that is expressed as a value
between 0 and 100. It can be changed in any FlashCopy mapping state and can differ in the
mappings of one consistency group. A value of 0 disables background copy.
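For example, the background copy rate of an existing mapping can be changed with the chfcmap command; this sketch assumes a mapping named FCMap_1:

IBM_2145:ITSO-CLS1:admin>svctask chfcmap -copyrate 50 FCMap_1

Based on Table 6-5, a value of 50 attempts a copy rate of 2 MB (eight 256 KB grains) per second.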
The relationship of the background copy rate value to the attempted number of grains to be
split (copied) per second is shown in Table 6-5.
Table 6-5 Background copy rate

Value      Data copied per second   Grains per second
1 - 10     128 KB                   0.5
11 - 20    256 KB                   1
21 - 30    512 KB                   2
31 - 40    1 MB                     4
41 - 50    2 MB                     8
51 - 60    4 MB                     16
61 - 70    8 MB                     32
71 - 80    16 MB                    64
81 - 90    32 MB                    128
91 - 100   64 MB                    256
The grains per second numbers represent the maximum number of grains that the SVC will
copy per second, assuming that the bandwidth to the managed disks (MDisks) can
accommodate this rate.
If the SVC is unable to achieve these copy rates because of insufficient bandwidth from the
SVC nodes to the MDisks, background copy I/O contends for resources on an equal basis
with the I/O that is arriving from the hosts. Both background copy I/O and I/O that is arriving
from the hosts tend to see an increase in latency and a consequential reduction in
throughput. Both background copy and foreground I/O continue to make forward progress,
and do not stop, hang, or cause the node to fail. The background copy is performed by both
nodes of the I/O Group in which the source VDisk resides.
6.4.15 Synthesis
The FlashCopy functionality in SVC simply creates an exact copy of a VDisk. All of the data in
the source VDisk is copied to the destination VDisk, including operating system control
information, as well as application data and metadata.
Certain operating systems are unable to use FlashCopy without an additional step, which is
termed synthesis. In summary, synthesis performs a type of transformation on the operating
system metadata in the target VDisk so that the operating system can use the disk.
6.4.16 Serialization of I/O by FlashCopy
In general, the FlashCopy function in the SVC introduces no explicit serialization into the I/O
path. Therefore, many concurrent I/Os are allowed to the source and target VDisks.
However, there is a lock for each grain. The lock can be in shared or exclusive mode. For
multiple targets, a common lock is shared among all of the mappings that are derived from a
particular source VDisk. The lock is used in the following modes under the following conditions:
- The lock is held in shared mode for the duration of a read from the target VDisk, which touches a grain that is not split.
- The lock is held in exclusive mode during a grain split, which happens prior to FlashCopy starting any destage (or write-through) from the cache to a grain that is going to be split (the destage waits for the grain to be split). The lock is held during the grain split and released before the destage is processed.
If the lock is held in shared mode, and another process wants to use the lock in shared mode,
this request is granted unless a process is already waiting to use the lock in exclusive mode.
If the lock is held in shared mode and it is requested to be exclusive, the requesting process
must wait until all holders of the shared lock free it.
Similarly, if the lock is held in exclusive mode, a process wanting to use the lock in either
shared or exclusive mode must wait for it to be freed.
6.4.17 Error handling
When a FlashCopy mapping is not copying or stopping, the FlashCopy function does not
affect the error handling or the reporting of errors in the I/O path. Error handling and reporting
are only affected by FlashCopy when a FlashCopy mapping is copying or stopping.
We describe these scenarios in the following sections.
Node failure
Normally, two copies of the FlashCopy bitmaps are maintained; one copy of the FlashCopy
bitmaps is on each of the two nodes making up the I/O Group of the source VDisk. When a
node fails, one copy of the bitmaps, for all FlashCopy mappings whose source VDisk is a
member of the failing node’s I/O Group, will become inaccessible. FlashCopy will continue
with a single copy of the FlashCopy bitmap being stored as non-volatile in the remaining node
in the source I/O Group. The cluster metadata is updated to indicate that the missing node no
longer holds up-to-date bitmap information.
When the failing node recovers, or a replacement node is added to the I/O Group, up-to-date
bitmaps will be reestablished on the new node, and it will again provide a redundant location
for the bitmaps:
- When the FlashCopy bitmap becomes available again (at least one of the SVC nodes in the I/O Group is accessible), the FlashCopy mapping will return to the Copying state, access to the source and target VDisks will be restored, and the background copy process will be resumed. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in the cache until the FlashCopy mapping leaves the Suspended state.
- Normally, two copies of the FlashCopy bitmaps are maintained (in non-volatile memory), one copy on each of the two SVC nodes making up the I/O Group of the source VDisk. If only one of the SVC nodes in the I/O Group to which the source VDisk belongs goes offline, the FlashCopy mapping will continue in the Copying state, with a single copy of the FlashCopy bitmap. When the failed SVC node recovers, or a replacement SVC node is added to the I/O Group, up-to-date FlashCopy bitmaps will be reestablished on the resuming SVC node and again provide a redundant location for the FlashCopy bitmaps.

If both nodes in the I/O Group become unavailable: If both nodes in the I/O Group to
which the target VDisk belongs become unavailable, the host cannot access the target
VDisk.
Path failure (Path Offline state)
In a fully functioning cluster, all of the nodes have a software representation of every VDisk in
the cluster within their application hierarchy.
Because the storage area network (SAN) that links the SVC nodes to each other and to the
MDisks is made up of many independent links, it is possible for a subset of the nodes to be
temporarily isolated from several of the MDisks. When this situation happens, the managed
disks are said to be Path Offline on certain nodes.
Other nodes: Other nodes might see the managed disks as Online, because their
connection to the managed disks is still functioning.
When an MDisk enters the Path Offline state on an SVC node, all of the VDisks that have any
extents on the MDisk also become Path Offline. Again, this situation happens only on the
affected nodes. When a VDisk is Path Offline on a particular SVC node, the host access to
that VDisk through the node will fail with the SCSI sense data indicating Offline.
Path Offline for the source VDisk
If a FlashCopy mapping is in the Copying state and the source VDisk goes Path Offline, this
Path Offline state is propagated to all target VDisks up to but not including the target VDisk for
the newest mapping that is 100% copied but remains in the Copying state. If no mappings are
100% copied, all of the target VDisks are taken offline. Again, note that Path Offline is a state
that exists on a per-node basis. Other nodes might not be affected. If the source VDisk comes
Online, the target and source VDisks are brought back Online.
Path Offline for the target VDisk
If a target VDisk goes Path Offline, but the source VDisk is still Online, and if there are any
dependent mappings, those target VDisks will also go Path Offline. The source VDisk will
remain Online.
6.4.18 Asynchronous notifications
FlashCopy raises informational error logs when mappings or consistency groups make
certain state transitions.
These state transitions occur as a result of configuration events that complete
asynchronously, and the informational errors can be used to generate Simple Network
Management Protocol (SNMP) traps to notify the user. Other configuration events complete
synchronously, and no informational errors are logged as a result of these events:
- PREPARE_COMPLETED: This state transition is logged when the FlashCopy mapping or consistency group enters the Prepared state as a result of a user request to prepare. The user can now start (or stop) the mapping or consistency group.
- COPY_COMPLETED: This state transition is logged when the FlashCopy mapping or consistency group enters the Idle_or_copied state when it was previously in the Copying or Stopping state. This state transition indicates that the target disk now contains a complete copy and no longer depends on the source.
- STOP_COMPLETED: This state transition is logged when the FlashCopy mapping or consistency group has entered the Stopped state as a result of a user request to stop. It will be logged after the automatic copy process has completed. This state transition includes mappings where no copying needed to be performed. This state transition differs from the error that is logged when a mapping or group enters the Stopped state as a result of an I/O error.
6.4.19 Interoperation with Metro Mirror and Global Mirror
FlashCopy can work together with Metro Mirror and Global Mirror to provide better protection
of the data. For example, we can perform a Metro Mirror copy to duplicate data from Site_A to
Site_B and then perform a daily FlashCopy to copy the data elsewhere.
Table 6-6 lists which combinations of FlashCopy and Remote Copy are supported. In the
table, remote copy refers to Metro Mirror and Global Mirror.
Table 6-6 FlashCopy and remote copy interaction

Component              Remote copy primary   Remote copy secondary
FlashCopy source       Supported             Supported (see the latency note)
FlashCopy destination  Not supported         Not supported

Latency: When the FlashCopy relationship is in the Preparing and Prepared states, the
cache at the remote copy secondary site operates in write-through mode. This process
adds additional latency to the already latent remote copy relationship.
6.4.20 Recovering data from FlashCopy
You can use FlashCopy to recover data if a form of corruption has occurred. For
example, if a user deletes data by mistake, you can map the FlashCopy target VDisks to the
application server, import all of the logical volume-level configurations, start the application,
and restore the data back to a given point in time.
Tip: It is better to map a FlashCopy target VDisk to a backup machine with the same
application installed. We do not recommend that you map a FlashCopy target VDisk to the
same application server to which the FlashCopy source VDisk is mapped, because the
FlashCopy target and source VDisks have the same signature, pvid, vgda, and so on.
Special steps are necessary to handle the conflict at the OS level. For example, you can
use the recreatevg command in AIX to generate separate vg, lv, file system, and so on,
names in order to avoid a naming conflict.
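To illustrate the tip, a hedged sketch of the AIX approach follows. The volume group name fc_restore_vg and the disk name hdisk5 are purely illustrative, and the recreatevg options can vary by AIX level, so verify them against your AIX documentation:

recreatevg -y fc_restore_vg -Y NEW -L /newfs hdisk5

This command builds a new volume group from the FlashCopy target disk and renames the contained logical volumes and file system mount points with the supplied prefixes, which avoids a conflict with the original names.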
FlashCopy backup is a disk-based backup copy that can be used to restore service more
quickly than other backup techniques. This application is further enhanced by the ability to
maintain multiple backup targets, spread over a range of time, allowing the user to choose a
backup from before the time of the corruption.
6.5 Metro Mirror
In the following topics, we describe the Metro Mirror copy service, which is a synchronous
remote copy function. Metro Mirror in SVC is similar to Metro Mirror in the IBM System
Storage DS family.
SVC provides a single point of control when enabling Metro Mirror in your SAN, regardless of
the disk subsystems that are used.
The general application of Metro Mirror is to maintain two real-time synchronized copies of a
disk. Often, two copies are geographically dispersed to two SVC clusters, although it is
possible to use Metro Mirror in a single cluster (within an I/O Group). If the primary copy fails,
you can enable a secondary copy for I/O operation.
Tips: Intracluster Metro Mirror will consume more resources for a specific cluster,
compared to an intercluster Metro Mirror relationship. We recommend using intercluster
Metro Mirror when possible.
A typical application of this function is to set up a dual-site solution using two SVC clusters.
The first site is considered the primary or production site, and the second site is considered
the backup site or failover site, which is activated when a failure at the first site is detected.
6.5.1 Metro Mirror overview
Metro Mirror works by establishing a Metro Mirror relationship between two VDisks of equal
size. To maintain data integrity for dependent writes, you can use consistency groups to
group a number of Metro Mirror relationships together, similar to FlashCopy consistency
groups. SVC provides both intracluster and intercluster Metro Mirror.
Intracluster Metro Mirror
You can apply intracluster Metro Mirror within a single I/O Group.
Applying Metro Mirror across I/O Groups in the same SVC cluster is not supported, because
intracluster Metro Mirror can only be performed between VDisks in the same I/O Group.
Intercluster Metro Mirror
Intercluster Metro Mirror operations require a pair of SVC clusters that are separated by a
number of moderately high-bandwidth links. The two SVC clusters must be defined in an SVC
partnership, which must be configured on both SVC clusters to establish a fully functional
Metro Mirror partnership.
Using standard single-mode fibre connections, the supported distance between two SVC clusters
in a Metro Mirror partnership is 10 km (6.2 miles), although greater distances can be achieved
by using extenders. For extended distance solutions, contact your IBM representative.
Limit: When a local and a remote fabric are connected together for Metro Mirror purposes,
the inter-switch link (ISL) hop count between a local node and a remote node cannot
exceed seven.
6.5.2 Remote copy techniques
Metro Mirror is a synchronous remote copy, which we briefly explain next. To illustrate the
differences between synchronous and asynchronous remote copy, we also explain
asynchronous remote copy.
Synchronous remote copy
Metro Mirror is a fully synchronous remote copy technique that ensures that, as long as writes
to the secondary VDisks are possible, writes are committed at both the primary and
secondary VDisks before the application is given an acknowledgement of the completion of a
write.
Errors, such as a loss of connectivity between the two clusters, can mean that it is not
possible to replicate data from the primary VDisk to the secondary VDisk. In this case, Metro
Mirror operates to ensure that a consistent image is left at the secondary VDisk, and then
continues to allow I/O to the primary VDisk, so as not to affect the operations at the
production site.
Figure 6-12 on page 283 illustrates how a write to the master VDisk is mirrored to the cache
of the auxiliary VDisk before an acknowledgement of the write is sent back to the host that
issued the write. This process ensures that the secondary is synchronized in real time, in
case it is needed in a failover situation.
However, this process also means that the application is fully exposed to the latency and
bandwidth limitations (if any) of the communication link to the secondary site. This process
might lead to unacceptable application performance, particularly when placed under peak
load. Therefore, using Metro Mirror has distance limitations.
Figure 6-12 Write on VDisk in Metro Mirror relationship
6.5.3 SVC Metro Mirror features
SVC Metro Mirror supports the following features:
- Synchronous remote copy of VDisks dispersed over metropolitan-scale distances is supported.
- SVC implements Metro Mirror relationships between VDisk pairs, with each VDisk in a pair managed by an SVC cluster.
- SVC supports intracluster Metro Mirror, where both VDisks belong to the same cluster (and I/O Group).
- SVC supports intercluster Metro Mirror, where each VDisk belongs to a separate SVC cluster. You can configure a specific SVC cluster for partnership with another cluster. All intercluster Metro Mirror processing takes place between two SVC clusters that are configured in a partnership.
- Intercluster and intracluster Metro Mirror can be used concurrently within a cluster for separate relationships.
- SVC does not require that a control network or fabric is installed to manage Metro Mirror. For intercluster Metro Mirror, SVC maintains a control link between two clusters. This control link is used to control the state and coordinate updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Metro Mirror I/O.
- SVC implements a configuration model that maintains the Metro Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
- SVC maintains and polices a strong concept of consistency and makes this concept available to guide configuration activity.
- SVC implements flexible resynchronization support, enabling it to resynchronize VDisk pairs that have suffered write I/O to both disks and to resynchronize only those regions that are known to have changed.
6.5.4 Multiple Cluster Mirroring
With the introduction of Multiple Cluster Mirroring in SVC 5.1, you can configure a cluster with
multiple partner clusters.
Multiple Cluster Mirroring enables Metro Mirror and Global Mirror relationships to exist
between a maximum of four SVC clusters.
The SVC clusters can take advantage of the maximum number of remote mirror relationships
because Multiple Cluster Mirroring enables clients to copy from several remote sites to a
single SVC cluster at a disaster recovery (DR) site. It supports implementation of
consolidated DR strategies and helps clients that are moving or consolidating data centers.
Figure 6-13 shows an example of a Multiple Cluster Mirroring configuration.
Figure 6-13 Multiple Cluster Mirroring configuration example
Supported Multiple Cluster Mirroring topologies
Prior to SVC 5.1, you used one of the two cluster topologies that were allowed:
- A (no partnership configured)
- A ↔ B (one partnership configured)
With Multiple Cluster Mirroring, there is a wider range of possible topologies. You can connect
a maximum of four clusters, directly or indirectly. Therefore, a cluster can never have more
than three partners.
For example, these topologies are allowed:
- A ↔ B, A ↔ C, and A ↔ D (star topology)
Figure 6-14 shows a star topology.
Figure 6-14 SVC star topology
Figure 6-14 shows four clusters in a star topology, with cluster A at the center. Cluster A can
be a central DR site for the three other locations.
Using a star topology, you can migrate separate applications at separate times by using a
process, such as this example:
1. Suspend the application at A.
2. Remove the A ↔ B relationship.
3. Create the A ↔ C relationship (or alternatively, the B ↔ C relationship).
4. Synchronize to cluster C, and ensure that A ↔ C is established.
These topologies are also allowed:
- A ↔ B, A ↔ C, and B ↔ C (triangle topology)
- A ↔ B, A ↔ C, A ↔ D, B ↔ C, B ↔ D, and C ↔ D (fully connected topology)
Figure 6-15 on page 286 shows a triangle topology.
Figure 6-15 SVC triangle topology
There are three clusters in a triangle topology.
Figure 6-16 shows a fully connected topology.
Figure 6-16 SVC fully connected topology
Figure 6-16 is a fully connected mesh where every cluster has a partnership to each of the
three other clusters. Therefore, VDisks can be replicated between any pair of clusters. Note
that this topology is not required, unless relationships are needed between every pair of
clusters.
The other option is a daisy-chain topology between four clusters:
- A ↔ B, B ↔ C, and C ↔ D
A daisy chain gives a cascading solution; however, a VDisk must be in only one relationship,
such as A ↔ B, for example. At the time of writing, a three-site solution, such as DS8000
Metro Global Mirror, is not supported.
Figure 6-17 on page 287 shows a daisy-chain topology.
Figure 6-17 SVC daisy-chain topology
Unsupported topology
As an illustration of what is not supported, we show this example:
A ↔ B, B ↔ C, C ↔ D, and D ↔ E
Figure 6-18 shows this unsupported topology.
Figure 6-18 SVC unsupported topology
This topology is unsupported, because five clusters are indirectly connected. If the cluster can
detect this topology at the time of the fourth mkpartnership command, the command will be
rejected.
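To illustrate, a partnership is established by running the mkpartnership command on each cluster against the other. This sketch assumes cluster names ITSO-CLS1 and ITSO-CLS2 and a background copy bandwidth of 200 MBps; your names and bandwidth will differ:

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 200 ITSO-CLS2
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 200 ITSO-CLS1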
Upgrade restrictions: The introduction of Multiple Cluster Mirroring necessitates upgrade
restrictions:
- Concurrent code upgrade to 5.1.0 is supported from 4.3.1.x only.
- If the cluster is in a partnership, the partnered cluster must meet a minimum software level to allow concurrent I/O; the partnered cluster must be running 4.2.1 or higher.
6.5.5 Metro Mirror relationship
A Metro Mirror relationship is composed of two VDisks that are equal in size. The master
VDisk and the auxiliary VDisk can be in the same I/O Group, within the same SVC cluster
(intracluster Metro Mirror), or they can be on separate SVC clusters that are defined as SVC
partners (intercluster Metro Mirror).
Rules:
- A VDisk can only be part of one Metro Mirror relationship at a time.
- A VDisk that is a FlashCopy target cannot be part of a Metro Mirror relationship.
Figure 6-19 illustrates the Metro Mirror relationship.
Figure 6-19 Metro Mirror relationship
Metro Mirror relationship between primary and secondary VDisks
When creating a Metro Mirror relationship, you must define one VDisk as the master and the
other VDisk as the auxiliary. The relationship between two copies is symmetric. When a Metro
Mirror relationship is created, the master VDisk is initially considered the primary copy (often
referred to as the source), and the auxiliary VDisk is considered the secondary copy (often
referred to as the target). The initial copy direction mirrors the master VDisk to the auxiliary
VDisk. After the initial synchronization is complete, you can change the copy direction, if
appropriate.
In the most common applications of Metro Mirror, the master VDisk contains the production
copy of the data and is used by the host application, while the auxiliary VDisk contains a
mirrored copy of the data and is used for failover in DR scenarios. The terms master and
auxiliary describe this use. However, if Metro Mirror is applied differently, the terms master
VDisk and auxiliary VDisk need to be interpreted appropriately.
6.5.6 Importance of write ordering
Many applications that use block storage must survive failures, such as the loss of power or a
software crash, and not lose the data that existed prior to the failure. Because many
applications need to perform large numbers of update operations in parallel with storage,
maintaining write ordering is key to ensuring the correct operation of applications following a
disruption.
An application that performs a high volume of database updates is usually designed with the
concept of dependent writes. With dependent writes, it is important to ensure that an earlier
write has completed before a later write is started. Reversing the order of dependent writes
can undermine an application’s algorithms and can lead to problems, such as detected, or
undetected, data corruption.
Dependent writes that span multiple VDisks
The following scenario illustrates a simple example of a sequence of dependent writes, and in
particular, what can happen if they span multiple VDisks.
Consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update will be
performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that a database update has
completed successfully.
Figure 6-20 shows the write sequence.
Figure 6-20 Dependent writes for a database
The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next step.
Database logs: All databases have logs associated with them. These logs keep records of
database changes. If a database needs to be restored to a point beyond the last full, offline
backup, logs are required to roll the data forward to the point of failure.
But imagine if the database log and the database itself are on separate VDisks and a Metro
Mirror relationship is stopped during this update. In this case, you need to consider the
possibility that the Metro Mirror relationship for the VDisk with the database file is stopped
slightly before the VDisk containing the database log. If this situation occurs, it is possible that
the secondary VDisks see writes (1) and (3), but not (2).
Then, if the database was restarted using data available from secondary disks, the database
log will indicate that the transaction had completed successfully, when it did not. In this
scenario, the integrity of the database is in question.
Metro Mirror consistency groups
Metro Mirror consistency groups address the issue of dependent writes across VDisks, where
the objective is to preserve data consistency across multiple Metro Mirrored VDisks.
Consistency groups ensure a consistent data set, because applications have relational data
spanning across multiple VDisks.
A Metro Mirror consistency group can contain an arbitrary number of relationships up to the
maximum number of Metro Mirror relationships that is supported by the SVC cluster. Metro
Mirror commands can be issued to a Metro Mirror consistency group and, therefore,
simultaneously for all Metro Mirror relationships defined within that consistency group, or to a
single Metro Mirror relationship that is not part of a Metro Mirror consistency group. For
example, when issuing a Metro Mirror startrcconsistgrp command to the consistency
group, all of the Metro Mirror relationships in the consistency group are started at the same
time.
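As a sketch of this workflow (the consistency group and relationship names are illustrative), a consistency group can be created, an existing relationship added to it, and the group started as one entity:

IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS2 -name CG_W2K_MM
IBM_2145:ITSO-CLS1:admin>svctask chrcrelationship -consistgrp CG_W2K_MM MM_Relationship_1
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K_MM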
Figure 6-21 illustrates the concept of Metro Mirror consistency groups.
Because MM_Relationship 1 and MM_Relationship 2 are part of the consistency group, they
can be handled as one entity, while the stand-alone MM_Relationship 3 is handled separately.
Figure 6-21 Metro Mirror consistency group
Certain uses of Metro Mirror require manipulation of more than one relationship. Metro Mirror
consistency groups can provide the ability to group relationships, so that they are
manipulated in unison. Metro Mirror relationships within a consistency group can be in any
form:
- Metro Mirror relationships can be part of a consistency group, or they can be stand-alone and therefore handled as single instances.
- A consistency group can contain zero or more relationships. An empty consistency group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.
- All of the relationships in a consistency group must have matching master and auxiliary SVC clusters.
Although it is possible to use consistency groups to manipulate sets of relationships that do
not need to satisfy these strict rules, this manipulation can lead to undesired side effects. The
rules behind a consistency group mean that certain configuration commands are prohibited.
These configuration commands are not prohibited if the relationship is not part of a
consistency group.
For example, consider the case of two applications that are completely independent, yet they
are placed into a single consistency group. In the event of an error, there is a loss of
synchronization, and a background copy process is required to recover synchronization.
While this process is in progress, Metro Mirror rejects attempts to enable access to secondary
VDisks of either application.
If one application finishes its background copy much more quickly than the other application,
Metro Mirror still refuses to grant access to its secondary VDisks even though it is safe in this
case, because Metro Mirror policy is to refuse access to the entire consistency group if any
part of it is inconsistent.
Stand-alone relationships and consistency groups share a common configuration and state
model. All of the relationships in a non-empty consistency group have the same state as the
consistency group.
6.5.7 How Metro Mirror works
In the sections that follow, we describe how Metro Mirror works.
Intercluster communication and zoning
All intercluster communication is performed over the SAN. Prior to creating intercluster Metro
Mirror relationships, you must create a partnership between the two clusters.
SVC node ports on each SVC cluster must be able to access each other to facilitate the
partnership creation. Therefore, you must define a zone in each fabric for intercluster
communication (see Chapter 3, “Planning and configuration” on page 65).
SVC cluster partnership
Each SVC cluster can be in a partnership with up to three other SVC
clusters. When an SVC cluster partnership has been defined on both clusters of a pair of
clusters, further communication facilities between the nodes in each of the clusters are
established:
- A single control channel, which is used to exchange and coordinate configuration information
- I/O channels between each of these nodes in the clusters
These channels are maintained and updated as nodes appear and disappear and as links
fail, and they are repaired to maintain operation where possible. If communication between
SVC clusters is interrupted or lost, an error is logged (and consequently, Metro Mirror
relationships will stop).
To handle error conditions, you can configure SVC to raise Simple Network Management
Protocol (SNMP) traps to the enterprise monitoring system.
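As a hedged sketch, assuming the SVC 5.1 notification commands (the IP address and community name here are illustrative), an SNMP server can be defined with the mksnmpserver command:

IBM_2145:ITSO-CLS1:admin>svctask mksnmpserver -ip 9.43.86.30 -community SVC

Verify the event-level options that your code level supports in the SVC CLI documentation.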
Maintenance of the intercluster link
All SVC nodes maintain a database of other devices that are visible on the fabric. This
database is updated as devices appear and disappear.
Devices that advertise themselves as SVC nodes are categorized according to the SVC
cluster to which they belong. SVC nodes that belong to the same cluster establish
communication channels between themselves and begin to exchange messages to
implement clustering and the functional protocols of SVC.
Nodes that are in separate clusters do not exchange messages after initial discovery is
complete, unless they have been configured together to perform Metro Mirror.
The intercluster link carries control traffic to coordinate activity between two clusters. It is
formed between one node in each cluster. The traffic between the designated nodes is
distributed among logins that exist between those nodes.
If the designated node fails (or all of its logins to the remote cluster fail), a new node is chosen
to carry control traffic. This node change causes the I/O to pause, but it does not put the
relationships in a Consistent Stopped state.
6.5.8 Metro Mirror process
Several major steps exist in the Metro Mirror process:
1. An SVC cluster partnership is created between two SVC clusters (for intercluster Metro
Mirror).
2. A Metro Mirror relationship is created between two VDisks of the same size.
3. To manage multiple Metro Mirror relationships as one entity, relationships can be made
part of a Metro Mirror consistency group, which ensures data consistency across multiple
Metro Mirror relationships and provides ease of management.
4. When a Metro Mirror relationship is started, and when the background copy has
completed, the relationship becomes consistent and synchronized.
5. After the relationship is synchronized, the secondary VDisk holds a copy of the production
data at the primary, which can be used for DR.
6. To access the auxiliary VDisk, the Metro Mirror relationship must be stopped with the
access option enabled before write I/O is submitted to the secondary.
7. The remote host server is mapped to the auxiliary VDisk, and the disk is available for I/O.
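Steps 6 and 7 can be sketched with commands similar to the following (the relationship, host, and VDisk names are illustrative):

IBM_2145:ITSO-CLS2:admin>svctask stoprcrelationship -access MM_Rel1
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Remote_Host MM_Aux_VDisk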
6.5.9 Methods of synchronization
This section describes three methods that can be used to establish a relationship.
Full synchronization after creation
The full synchronization after creation method is the default method. It is the simplest in that it
requires no administrative activity apart from issuing the necessary commands. However, in
certain environments, the available bandwidth can make this method unsuitable.
Use this command sequence for a single relationship:
1. Run mkrcrelationship without specifying the -sync option.
2. Run startrcrelationship without specifying the -clean option.
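As a concrete sketch of this default method (the VDisk, cluster, and relationship names are illustrative):

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_Master_VDisk -aux MM_Aux_VDisk -cluster ITSO-CLS2 -name MM_Rel1
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship MM_Rel1

The relationship then enters the Inconsistent copying state while the background copy runs.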
Synchronized before creation
In this method, the administrator must ensure that the master and auxiliary VDisks contain
identical data before creating the relationship. There are two ways to ensure that the master
and auxiliary VDisks contain identical data:
- Both disks are created with the security delete feature so as to make all data zero.
- A complete tape image (or other method of moving data) is copied from one disk to the other disk.
In either technique, no write I/O must take place to either the master or the auxiliary before
the relationship is established.
Then, the administrator must run these commands:
1. Run mkrcrelationship with the -sync flag.
2. Run startrcrelationship without the -clean flag.
If these steps are performed incorrectly, Metro Mirror will report the relationship as being
consistent when it is not, which is likely to make any secondary disk useless. This method
has an advantage over full synchronization, because it does not require all of the data to be
copied over a constrained link. However, if data needs to be copied, the master and auxiliary
disks cannot be used until the copy is complete, which might be unacceptable.
Quick synchronization after creation
In this method, the administrator must still copy data from the master to the auxiliary, but the
administrator can use this method without stopping the application at the master. The
administrator must ensure that these steps are taken:
1. A mkrcrelationship command is issued with the -sync flag.
2. A stoprcrelationship command is issued with the -access flag.
3. A tape image (or other method of transferring data) is used to copy the entire master disk to the auxiliary disk.
4. After the copy is complete, the administrator must ensure that a startrcrelationship command is issued with the -clean flag.
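As a sketch of this method (the names are illustrative, and the bulk data transfer itself happens outside of the SVC CLI):

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_Master_VDisk -aux MM_Aux_VDisk -cluster ITSO-CLS2 -sync -name MM_Rel2
IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MM_Rel2
(copy the entire master disk to the auxiliary disk, for example, by tape image)
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -clean MM_Rel2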
With this technique, only data that has changed since the relationship was created, including
all regions that were incorrect in the tape image, is copied from the master to the auxiliary. As
with “Synchronized before creation” on page 293, the copy step must be performed correctly
or the auxiliary will be useless, although the copy operation will report it as being
synchronized.
Metro Mirror states and events
In this section, we explain the various states of a Metro Mirror relationship and the series of
events that modify these states.
In Figure 6-22 on page 294, the Metro Mirror relationship state diagram shows an overview of
states that can apply to a Metro Mirror relationship in a connected state.
Figure 6-22 Metro Mirror mapping state diagram
When creating the Metro Mirror relationship, you can specify whether the auxiliary VDisk is already
in sync with the master VDisk, and the background copy process is then skipped. This
capability is especially useful when creating Metro Mirror relationships for VDisks that have
been created with the format option.
The numbers in Figure 6-22 relate to the following steps. To create the relationship:
Step 1:
a. The Metro Mirror relationship is created with the -sync option, and the Metro Mirror
relationship enters the Consistent stopped state.
b. The Metro Mirror relationship is created without specifying that the master and auxiliary
VDisks are in sync, and the Metro Mirror relationship enters the Inconsistent stopped
state.
Step 2:
a. When starting a Metro Mirror relationship in the Consistent stopped state, the Metro
Mirror relationship enters the Consistent synchronized state, provided that no updates
(write I/O) have been performed on the primary VDisk while in the Consistent stopped
state. Otherwise, the -force option must be specified, and the Metro Mirror relationship
then enters the Inconsistent copying state, while the background copy is started.
b. When starting a Metro Mirror relationship in the Inconsistent stopped state, the Metro
Mirror relationship enters the Inconsistent copying state, while the background copy is
started.
Step 3:
When the background copy completes, the Metro Mirror relationship transits from the
Inconsistent copying state to the Consistent synchronized state.
Step 4:
a. When stopping a Metro Mirror relationship in the Consistent synchronized state,
specifying the -access option, which enables write I/O on the secondary VDisk, the
Metro Mirror relationship enters the Idling state.
b. To enable write I/O on the secondary VDisk, when the Metro Mirror relationship is in
the Consistent stopped state, issue the command svctask stoprcrelationship
specifying the -access option, and the Metro Mirror relationship enters the Idling state.
Step 5:
a. When starting a Metro Mirror relationship that is in the Idling state, you must specify the
-primary argument to set the copy direction. Given that no write I/O has been
performed (to either the master or auxiliary VDisk) while in the Idling state, the Metro
Mirror relationship enters the Consistent synchronized state.
b. If write I/O has been performed to either the master or the auxiliary VDisk, the -force
option must be specified, and the Metro Mirror relationship then enters the Inconsistent
copying state, while the background copy is started.
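For example, to restart an Idling relationship with the auxiliary VDisk as the new primary after write I/O has occurred (the relationship name is illustrative):

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary aux -force MM_Rel1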
Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an
error), a state transition is applied:
For example, the Metro Mirror relationships in the Consistent synchronized state enter the
Consistent stopped state, and the Metro Mirror relationships in the Inconsistent copying
state enter the Inconsistent stopped state.
If the connection is broken between the SVC clusters in a partnership, all
(intercluster) Metro Mirror relationships enter a Disconnected state. For further
information, refer to “Connected versus disconnected” on page 295.
Common states: Stand-alone relationships and consistency groups share a common
configuration and state model. All Metro Mirror relationships in a consistency group that is
not empty have the same state as the consistency group.
6.5.10 State overview
SVC-defined concepts of state are key to understanding configuration concepts. We explain
them in more detail next.
Connected versus disconnected
This distinction can arise when a Metro Mirror relationship is created with the two VDisks in
separate clusters.
Under certain error scenarios, communications between the two clusters might be lost. For
example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric
connection between the two clusters might fail, leaving the two clusters running but unable to
communicate with each other.
When the two clusters can communicate, the clusters and the relationships spanning them
are described as connected. When they cannot communicate, the clusters and the
relationships spanning them are described as disconnected.
In this scenario, each cluster is left with half of the relationship and has only a portion of the
information that was available to it before. Limited configuration activity is possible and is a
subset of what was possible before.
The disconnected relationships are portrayed as having a changed state. The new states
describe what is known about the relationship and what configuration commands are
permitted.
When the clusters can communicate again, the relationships become connected again. Metro
Mirror automatically reconciles the two state fragments, taking into account any configuration
or other event that took place while the relationship was disconnected. As a result, the
relationship can either return to the state that it was in when it became disconnected or it can
enter another connected state.
Relationships that are configured between VDisks in the same SVC cluster (intracluster) will
never be described as being in a disconnected state.
Consistent versus inconsistent
Relationships that contain VDisks that are operating as secondaries can be described as
being consistent or inconsistent. Consistency groups that contain relationships can also be
described as being consistent or inconsistent. The consistent or inconsistent property
describes the relationship of the data on the secondary to the data on the primary VDisk. It
can be considered a property of the secondary VDisk itself.
A secondary is described as consistent if it contains data that might have been read by a host
system from the primary if power had failed at an imaginary point in time while I/O was in
progress, and power was later restored. This imaginary point in time is defined as the
recovery point. The requirements for consistency are expressed with respect to activity at the
primary up to the recovery point:
- The secondary VDisk contains the data from all of the writes to the primary for which the host received successful completion and that data had not been overwritten by a subsequent write (before the recovery point).
- For writes for which the host did not receive a successful completion (that is, it received bad completion or no completion at all), and the host subsequently performed a read from the primary of that data and that read returned successful completion and no later write was sent (before the recovery point), the secondary contains the same data as that returned by the read from the primary.
From the point of view of an application, consistency means that a secondary VDisk contains
the same data as the primary VDisk at the recovery point (the time at which the imaginary
power failure occurred).
If an application is designed to cope with unexpected power failure, this guarantee of
consistency means that the application will be able to use the secondary and begin operation
just as though it had been restarted after the hypothetical power failure.
Again, the application is dependent on the key properties of consistency:
- Write ordering
- Read stability for correct operation at the secondary
If a relationship, or set of relationships, is inconsistent and an attempt is made to start an
application using the data in the secondaries, a number of outcomes are possible:
- The application might decide that the data is corrupt and crash or exit with an error code.
- The application might fail to detect that the data is corrupt and return erroneous data.
- The application might work without a problem.
Because of the risk of data corruption, and in particular undetected data corruption, Metro
Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.
Consistency as a concept can be applied to a single relationship or a set of relationships in a
consistency group. Write ordering is a concept that an application can maintain across a
number of disks accessed through multiple systems; therefore, consistency must operate
across all those disks.
When deciding how to use consistency groups, the administrator must consider the scope of
an application’s data, taking into account all of the interdependent systems that communicate
and exchange information.
If two programs or systems communicate and store details as a result of the information
exchanged, one of the following actions must be taken:
All of the data accessed by the group of systems must be placed into a single consistency
group.
The systems must be recovered independently (each within its own consistency group).
Then, each system must perform recovery with the other applications to become
consistent with them.
Consistent versus synchronized
A copy that is consistent and up-to-date is described as synchronized. In a synchronized
relationship, the primary and secondary VDisks only differ in regions where writes are
outstanding from the host.
Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at a point in time in the past. Write I/O might have continued to a
primary and not have been copied to the secondary. This state arises when it becomes
impossible to keep up-to-date and maintain consistency. An example is a loss of
communication between clusters when writing to the secondary.
When communication is lost for an extended period of time, Metro Mirror tracks the changes
that happen at the primary, but not the order of such changes, or the details of such changes
(write data). When communication is restored, it is impossible to synchronize the secondary
without sending write data to the secondary out-of-order and, therefore, losing consistency.
Two policies can be used to cope with this situation:
Make a point-in-time copy of the consistent secondary before allowing the secondary to
become inconsistent. In the event of a disaster before consistency is achieved again, the
point-in-time copy target provides a consistent, although out-of-date, image.
Accept the loss of consistency and the loss of a useful secondary, while synchronizing the
secondary.
6.5.11 Detailed states
The following sections detail the states that are portrayed to the user, for either consistency
groups or relationships. They also detail the extra information that is available in each state. The
major states are designed to provide guidance about the configuration commands that are
available.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the primary is accessible for read and
write I/O, but the secondary is not accessible for either read or write I/O. A copy process
needs to be started to make the secondary consistent.
This state is entered when the relationship or consistency group was InconsistentCopying
and has either suffered a persistent error or received a stop command that has caused the
copy process to stop.
A start command causes the relationship or consistency group to move to the
InconsistentCopying state. A stop command is accepted, but it has no effect.
If the relationship or consistency group becomes disconnected, the secondary side transits to
InconsistentDisconnected. The primary side transits to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the primary is accessible for read and
write I/O, but the secondary is not accessible for either read or write I/O.
This state is entered after a start command is issued to an InconsistentStopped relationship
or a consistency group. It is also entered when a forced start is issued to an Idling or
ConsistentStopped relationship or consistency group.
In this state, a background copy process runs that copies data from the primary to the
secondary VDisk.
In the absence of errors, an InconsistentCopying relationship is active, and the copy progress
increases until the copy process completes. In certain error situations, the copy progress
might freeze or even regress.
A persistent error or stop command places the relationship or consistency group into an
InconsistentStopped state. A start command is accepted, but it has no effect.
If the background copy process completes on a stand-alone relationship, or on all
relationships for a consistency group, the relationship or consistency group transits to the
ConsistentSynchronized state.
If the relationship or consistency group becomes disconnected, the secondary side transits to
InconsistentDisconnected. The primary side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the secondary contains a consistent
image, but it might be out-of-date with respect to the primary.
This state can arise when a relationship was in the ConsistentSynchronized state and suffers
an error that forces a consistency freeze. It can also arise when a relationship is created with
the CreateConsistentFlag set to TRUE.
Normally, following an I/O error, subsequent write activity causes updates to the primary, and
the secondary is no longer synchronized (the synchronized attribute is set to false). In this case, to re-establish
synchronization, consistency must be given up for a period. You must use a start command
with the -force option to acknowledge this situation, and the relationship or consistency group
transits to InconsistentCopying. Enter this command only after all of the outstanding errors
are repaired.
In the unusual case where the primary and the secondary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual
case, you can enter a switch command that moves the relationship or consistency group to
ConsistentSynchronized and reverses the roles of the primary and the secondary.
If the relationship or consistency group becomes disconnected, the secondary transits to
ConsistentDisconnected. The primary transitions to IdlingDisconnected.
An informational status log is generated every time that a relationship or consistency group
enters the ConsistentStopped state with a status of Online. You can configure this situation to
enable an SNMP trap and provide a trigger to automation software to consider issuing a
start command following a loss of synchronization.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the primary VDisk is accessible for
read and write I/O, and the secondary VDisk is accessible for read-only I/O.
Writes that are sent to the primary VDisk are sent to both the primary and secondary VDisks.
Before a write is completed to the host, either successful completion must be received for
both writes, the write must be failed to the host, or the relationship must transit out of the
ConsistentSynchronized state.
A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.
A switch command leaves the relationship in the ConsistentSynchronized state, but it
reverses the primary and secondary roles.
A start command is accepted, but it has no effect.
If the relationship or consistency group becomes disconnected, the same transitions are
made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary disks operate in the primary role.
Consequently, both master and auxiliary are accessible for write I/O.
In this state, the relationship or consistency group accepts a start command. Metro Mirror
maintains a record of regions on each disk that received write I/O while idling. This record is
used to determine what areas need to be copied following a start command.
The start command must specify the new copy direction. A start command can cause a
loss of consistency if either VDisk in any relationship has received write I/O; this situation is
reflected by the synchronized attribute. If the start command leads to loss of consistency, you must
specify the -force parameter.
Following a start command, the relationship or consistency group transits to
ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is
a loss of consistency.
Also, while in this state, the relationship or consistency group accepts a -clean option on the
start command. If the relationship or consistency group becomes disconnected, both sides
change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The VDisk or disks in this half of the relationship
or consistency group are all in the primary role and accept read or write I/O.
The major priority in this state is to recover the link and make the relationship or consistency
group connected again.
No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship transits to a connected state. The
exact connected state that is entered depends on the state of the other half of the relationship
or consistency group, which depends on these factors:
The state when it became disconnected
The write activity since it was disconnected
The configuration activity since it was disconnected
If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected.
While IdlingDisconnected, if a write I/O is received that causes loss of synchronization
(synchronized attribute transits from true to false) and the relationship was not already
stopped (either through a user stop or a persistent error), an error log is raised to notify you of
this situation. This error log is the same error log that occurs when the same situation arises
for ConsistentSynchronized.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The VDisks in this half of the relationship
or consistency group are all in the secondary role and do not accept read or write I/O.
No configuration activity, except for deletes, is permitted until the relationship becomes
connected again.
When the relationship or consistency group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either of the following conditions is true:
The relationship was InconsistentStopped when it became disconnected.
The user issued a stop command while disconnected.
In either case, the relationship or consistency group becomes InconsistentStopped.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or
consistency group are all in the secondary role and accept read I/O but not write I/O.
This state is entered from ConsistentSynchronized or ConsistentStopped when the
secondary side of a relationship becomes disconnected.
In this state, the relationship or consistency group displays an attribute of FreezeTime, which
is the point in time that Consistency was frozen. When entered from ConsistentStopped, it
retains the time that it had in that state. When entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or consistency group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the other
cluster.
A stop command with the -access flag set to true transits the relationship or consistency
group to the IdlingDisconnected state. This state allows write I/O to be performed to the
secondary VDisk and is used as part of a DR scenario.
When the relationship or consistency group becomes connected again, the relationship or
consistency group becomes ConsistentSynchronized only if this action does not lead to a loss
of consistency. These conditions must be true:
The relationship was ConsistentSynchronized when it became disconnected.
No writes received successful completion at the primary while disconnected.
Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.
Empty
This state only applies to consistency groups. It is the state of a consistency group that has
no relationships and no other state information to show.
It is entered when a consistency group is first created. It is exited when the first relationship is
added to the consistency group, at which point, the state of the relationship becomes the
state of the consistency group.
Background copy
Metro Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy takes place on relationships that are in the
InconsistentCopying state with a status of Online.
The quota of background copy (configured on the intercluster link) is divided evenly between
all of the nodes that are performing background copy for one of the eligible relationships. This
allocation is made irrespective of the number of disks for which the node is responsible. Each
node in turn divides its allocation evenly between the multiple relationships performing a
background copy.
For intracluster relationships, each node is assigned a static quota of 25 MBps.
6.5.12 Practical use of Metro Mirror
The master VDisk is the production VDisk and updates to this copy are mirrored in real time
to the auxiliary VDisk. The contents of the auxiliary VDisk that existed when the relationship
was created are destroyed.
Switching copy direction: The copy direction for a Metro Mirror relationship can be
switched so the auxiliary VDisk becomes the primary, and the master VDisk becomes the
secondary.
While the Metro Mirror relationship is active, the secondary copy (VDisk) is not accessible for
host application write I/O at any time. The SVC allows read-only access to the secondary
VDisk when it contains a “consistent” image. This read-only access is intended only to allow
boot-time operating system discovery to complete without error, so that any hosts at the
secondary site can be ready to start up the applications with minimum delay, if required.
For example, many operating systems must read logical block address (LBA) zero to
configure a logical unit. Although read access is allowed at the secondary in practice, the data
on the secondary volumes cannot be read by a host, because most operating systems write a
“dirty bit” to the file system when it is mounted. Because this write operation is not allowed on
the secondary volume, the volume cannot be mounted.
This access is only provided where consistency can be guaranteed. However, there is no way
in which coherency can be maintained between reads that are performed at the secondary
and later write I/Os that are performed at the primary.
To enable access to the secondary VDisk for host operations, you must stop the Metro Mirror
relationship by specifying the -access parameter.
While access to the secondary VDisk for host operations is enabled, the host must be
instructed to mount the VDisk and perform related tasks before the application can be
started, or it must be instructed to perform a recovery process.
The Metro Mirror requirement to explicitly enable the secondary copy for access
differentiates it from third-party mirroring software on the host, which aims to emulate a
single, reliable disk regardless of which system is accessing it. Metro Mirror retains the
property that there are two volumes in existence, but it suppresses one volume while the copy
is being maintained.
Using a secondary copy demands a conscious policy decision by the administrator that a
failover is required, and the tasks to be performed on the host to establish operation on the
secondary copy are substantial. The goal is to make this process rapid (much faster than
recovering from a backup copy) but not seamless.
The failover process can be automated through failover management software. The SVC
provides Simple Network Management Protocol (SNMP) traps and programming (or
scripting) for the command-line interface (CLI) to enable this automation.
6.5.13 Valid combinations of FlashCopy and Metro Mirror or Global Mirror
functions
Table 6-7 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions
that are valid for a single VDisk.
Table 6-7   Valid combinations for a single VDisk

FlashCopy           Metro Mirror or Global     Metro Mirror or Global
                    Mirror primary             Mirror secondary
FlashCopy source    Supported                  Supported
FlashCopy target    Not supported              Not supported
6.5.14 Metro Mirror configuration limits
Table 6-8 lists the Metro Mirror configuration limits.
Table 6-8   Metro Mirror configuration limits

Parameter                                                     Value
Number of Metro Mirror consistency groups per cluster         256
Number of Metro Mirror relationships per cluster              8,192
Number of Metro Mirror relationships per consistency group    8,192
Total VDisk size per I/O Group                                There is a per I/O Group limit of
                                                              1,024 TB on the quantity of primary
                                                              and secondary VDisk address space
                                                              that can participate in Metro Mirror
                                                              and Global Mirror relationships. This
                                                              maximum configuration will consume
                                                              all 512 MB of bitmap space for the
                                                              I/O Group and allow no FlashCopy
                                                              bitmap space.
6.6 Metro Mirror commands
For comprehensive details about Metro Mirror Commands, refer to the IBM System Storage
SAN Volume Controller Command-Line Interface User’s Guide, SC26-7903.
The command set for Metro Mirror contains two broad groups:
Commands to create, delete, and manipulate relationships and consistency groups
Commands to cause state changes
Where a configuration command affects more than one cluster, Metro Mirror performs the
work to coordinate configuration activity between the clusters. Certain configuration
commands can only be performed when the clusters are connected and fail with no effect
when they are disconnected.
Other configuration commands are permitted even though the clusters are disconnected. The
state is reconciled automatically by Metro Mirror when the clusters become connected again.
For any given command, with one exception, a single cluster actually receives the command
from the administrator. This design is significant for defining the context of a
CreateRelationship (mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp)
command; in these cases, the cluster receiving the command is called the local cluster.
The exception mentioned previously is the command that sets clusters into a Metro Mirror
partnership. The mkpartnership command must be issued to both the local and remote
clusters.
The commands here are described as an abstract command set and are implemented as
either method:
A command-line interface (CLI), which can be used for scripting and automation
A graphical user interface (GUI), which can be used for one-off tasks
6.6.1 Listing available SVC cluster partners
To create an SVC cluster partnership, use the svcinfo lsclustercandidate command.
svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for
setting up a two-cluster partnership. This command is a prerequisite for creating Metro Mirror
relationships.
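For example, issued with no parameters from the local cluster, the command lists any
candidate remote clusters that are visible on the SAN (the output depends on your
configuration):

   svcinfo lsclustercandidate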
6.6.2 Creating the SVC cluster partnership
To create an SVC cluster partnership, use the svctask mkpartnership command.
svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Metro Mirror
partnership between the local cluster and a remote cluster.
To establish a fully functional Metro Mirror partnership, you must issue this command to both
clusters. This step is a prerequisite to creating Metro Mirror relationships between VDisks on
the SVC clusters.
When creating the partnership, you can specify the bandwidth to be used by the background
copy process between the local and the remote SVC cluster, and if it is not specified, the
bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or
equal to the bandwidth that can be sustained by the intercluster link.
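As a brief sketch, assuming illustrative cluster names ITSO_CLS1 (local) and ITSO_CLS2
(remote) and a background copy bandwidth of 50 MBps, the partnership is formed by issuing
the command once from each side:

   From ITSO_CLS1:   svctask mkpartnership -bandwidth 50 ITSO_CLS2
   From ITSO_CLS2:   svctask mkpartnership -bandwidth 50 ITSO_CLS1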
Background copy bandwidth effect on foreground I/O latency
The background copy bandwidth determines the rate at which the background copy for the
SVC will be attempted. The background copy bandwidth can affect the foreground I/O latency
in one of three ways:
The following results can occur if the background copy bandwidth is set too high for the
Metro Mirror intercluster link capacity:
– The background copy I/Os can back up on the Metro Mirror intercluster link.
– There is a delay in the synchronous secondary writes of foreground I/Os.
– The foreground I/O latency will increase as perceived by applications.
If the background copy bandwidth is set too high for the storage at the primary site, the
background copy read I/Os overload the primary storage and delay foreground I/Os.
If the background copy bandwidth is set too high for the storage at the secondary site,
background copy writes at the secondary overload the secondary storage and again delay
the synchronous secondary writes of foreground I/Os.
In order to set the background copy bandwidth optimally, make sure that you consider all
three resources (the primary storage, the intercluster link bandwidth, and the secondary
storage). Provision the most restrictive of these three resources between the background
copy bandwidth and the peak foreground I/O workload. This provisioning can be done by a
calculation (as previously described) or alternatively by determining experimentally how much
background copy can be allowed before the foreground I/O latency becomes unacceptable,
and then backing off to allow for peaks in workload and a safety margin.
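As an illustration with purely hypothetical figures: if the intercluster link can sustain
100 MBps, the primary storage can absorb 150 MBps of additional reads, the secondary
storage can absorb 80 MBps of additional writes, and the peak foreground write workload is
50 MBps, the most restrictive resource is the secondary storage. Setting the background
copy bandwidth to no more than 80 - 50 = 30 MBps leaves headroom for the peak foreground
workload and a safety margin.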
svctask chpartnership
If you need to change the bandwidth that is available for background copy in an SVC cluster
partnership, use the svctask chpartnership command to specify the new bandwidth.
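For example, to raise the background copy bandwidth for the partnership with the illustrative
remote cluster ITSO_CLS2 to 100 MBps:

   svctask chpartnership -bandwidth 100 ITSO_CLS2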
6.6.3 Creating a Metro Mirror consistency group
To create a Metro Mirror consistency group, use the svctask mkrcconsistgrp command.
svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new empty Metro Mirror
consistency group.
The Metro Mirror consistency group name must be unique across all of the consistency
groups that are known to the clusters owning this consistency group. If the consistency group
involves two clusters, the clusters must be in communication throughout the creation process.
The new consistency group does not contain any relationships and will be in the Empty state.
Metro Mirror relationships can be added to the group either upon creation or afterward by
using the svctask chrcrelationship command.
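For example, assuming an illustrative group name CG_W2K and a remote cluster named
ITSO_CLS2, an intercluster consistency group can be created as follows:

   svctask mkrcconsistgrp -cluster ITSO_CLS2 -name CG_W2K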
6.6.4 Creating a Metro Mirror relationship
To create a Metro Mirror relationship, use the command svctask mkrcrelationship.
svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Metro Mirror relationship.
This relationship persists until it is deleted.
The auxiliary VDisk must be equal in size to the master VDisk or the command will fail, and if
both VDisks are in the same cluster, they must both be in the same I/O Group. The master
and auxiliary VDisk cannot be in an existing relationship and cannot be the target of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.
When creating the Metro Mirror relationship, it can be added to an already existing
consistency group, or it can be a stand-alone Metro Mirror relationship if no consistency
group is specified.
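As a sketch, assuming illustrative VDisk names MM_Master and MM_Aux, a remote cluster
ITSO_CLS2, and an existing consistency group CG_W2K, a relationship can be created and
placed in the group in one step:

   svctask mkrcrelationship -master MM_Master -aux MM_Aux -cluster ITSO_CLS2 -consistgrp CG_W2K -name MMREL1

Omitting the -consistgrp parameter creates a stand-alone relationship instead.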
To check whether the master or auxiliary VDisks comply with the prerequisites to participate
in a Metro Mirror relationship, use the svcinfo lsrcrelationshipcandidate command.
svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list available VDisks that are
eligible for a Metro Mirror relationship.
When issuing the command, you can specify the master VDisk name and auxiliary cluster to
list candidates that comply with prerequisites to create a Metro Mirror relationship. If the
command is issued with no flags, all VDisks that are not disallowed by another configuration
state, such as being a FlashCopy target, are listed.
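For example, issued with no parameters, the command simply lists every eligible VDisk:

   svcinfo lsrcrelationshipcandidate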
6.6.5 Changing a Metro Mirror relationship
To modify the properties of a Metro Mirror relationship, use the command svctask
chrcrelationship.
svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a
Metro Mirror relationship:
Change the name of a Metro Mirror relationship.
Add a relationship to a group.
Remove a relationship from a group using the -force flag.
Adding a Metro Mirror relationship: When adding a Metro Mirror relationship to a
consistency group that is not empty, the relationship must have the same state and copy
direction as the group in order to be added to it.
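For example, to add the illustrative stand-alone relationship MMREL2 to the consistency
group CG_W2K:

   svctask chrcrelationship -consistgrp CG_W2K MMREL2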
6.6.6 Changing a Metro Mirror consistency group
To change the name of a Metro Mirror consistency group, use the svctask chrcconsistgrp
command.
svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Metro Mirror
consistency group.
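For example, to rename the illustrative consistency group CG_W2K to CG_W2K_DR:

   svctask chrcconsistgrp -name CG_W2K_DR CG_W2K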
6.6.7 Starting a Metro Mirror relationship
To start a stand-alone Metro Mirror relationship, use the svctask startrcrelationship
command.
svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Metro
Mirror relationship.
When issuing the command, you can set the copy direction if it is undefined, and you can
optionally mark the secondary VDisk of the relationship as clean. The command fails if it is
used to attempt to start a relationship that is part of a consistency group.
This command can only be issued to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (primary and secondary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
either by a stop command or by an I/O error.
If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force flag when restarting the relationship. This situation can arise if, for
example, the relationship was stopped, and then, further writes were performed on the
original primary of the relationship. The use of the -force flag here is a reminder that the data
on the secondary will become inconsistent while resynchronization (background copying)
occurs, and therefore, the data is not usable for DR purposes before the background copy has
completed.
In the Idling state, you must specify the primary VDisk to indicate the copy direction. In other
connected states, you can provide the -primary argument, but it must match the existing
setting.
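For example, to start the illustrative idling relationship MMREL1 with the master VDisk as the
primary:

   svctask startrcrelationship -primary master MMREL1

Add the -force parameter if the start will lead to a period of inconsistency, as described
previously.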
6.6.8 Stopping a Metro Mirror relationship
To stop a stand-alone Metro Mirror relationship, use the svctask stoprcrelationship
command.
svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a
relationship. It can also be used to enable write access to a consistent secondary VDisk by
specifying the -access flag.
This command applies to a stand-alone relationship. It is rejected if it is addressed to a
relationship that is part of a consistency group. You can issue this command to stop a
relationship that is copying from primary to secondary.
If the relationship is in an Inconsistent state, any copy operation stops and does not resume
until you issue a svctask startrcrelationship command. Write activity is no longer copied
from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized
state, this command causes a consistency freeze.
When a relationship is in a Consistent state (that is, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access
parameter with the stoprcrelationship command to enable write access to the secondary
VDisk.
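For example, to stop the illustrative relationship MMREL1 and enable write access to its
consistent secondary VDisk:

   svctask stoprcrelationship -access MMREL1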
6.6.9 Starting a Metro Mirror consistency group
To start a Metro Mirror consistency group, use the svctask startrcconsistgrp command.
The svctask startrcconsistgrp command is used to start a Metro Mirror consistency group.
This command can only be issued to a consistency group that is connected.
For a consistency group that is idling, this command assigns a copy direction (primary and
secondary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped either by a stop command or by an I/O error.
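For example, to start the illustrative idling consistency group CG_W2K with the master
VDisks as the primaries:

   svctask startrcconsistgrp -primary master CG_W2K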
6.6.10 Stopping a Metro Mirror consistency group
To stop a Metro Mirror consistency group, use the svctask stoprcconsistgrp command.
svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Metro
Mirror consistency group. It can also be used to enable write access to the secondary VDisks
in the group if the group is in a Consistent state.
If the consistency group is in an Inconsistent state, any copy operation stops and does not
resume until you issue the svctask startrcconsistgrp command. Write activity is no longer
copied from the primary to the secondary VDisks belonging to the relationships in the group.
For a consistency group in the ConsistentSynchronized state, this command causes a
consistency freeze.
When a consistency group is in a Consistent state (for example, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), the -access argument can be
used with the svctask stoprcconsistgrp command to enable write access to the secondary
VDisks within that group.
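For example, to stop the illustrative consistency group CG_W2K and enable write access to
its secondary VDisks:

   svctask stoprcconsistgrp -access CG_W2K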
6.6.11 Deleting a Metro Mirror relationship
To delete a Metro Mirror relationship, use the svctask rmrcrelationship command.
svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified.
Deleting a relationship only deletes the logical relationship between the two VDisks. It does
not affect the VDisks themselves.
If the relationship is disconnected at the time that the command is issued, the relationship is
only deleted on the cluster on which the command is being run. When the clusters reconnect,
then the relationship is automatically deleted on the other cluster.
Alternatively, if the clusters are disconnected, and you still want to remove the relationship on
both clusters, you can issue the rmrcrelationship command independently on both of the
clusters.
If you delete an inconsistent relationship, the secondary VDisk becomes accessible even
though it is still inconsistent. This situation is the one case in which Metro Mirror does not
inhibit access to inconsistent data.
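For example, to delete the illustrative relationship MMREL1:

   svctask rmrcrelationship MMREL1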
6.6.12 Deleting a Metro Mirror consistency group
To delete a Metro Mirror consistency group, use the svctask rmrcconsistgrp command.
svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Metro Mirror consistency group.
This command deletes the specified consistency group. You can issue this command for any
existing consistency group.
If the consistency group is disconnected at the time that the command is issued, the
consistency group is only deleted on the cluster on which the command is being run. When
the clusters reconnect, the consistency group is automatically deleted on the other cluster.
Alternatively, if the clusters are disconnected, and you still want to remove the consistency
group on both clusters, you can issue the svctask rmrcconsistgrp command separately on
both of the clusters.
If the consistency group is not empty, the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.
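For example, to delete the illustrative consistency group CG_W2K:

   svctask rmrcconsistgrp CG_W2K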
6.6.13 Reversing a Metro Mirror relationship
To reverse a Metro Mirror relationship, use the svctask switchrcrelationship command.
svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of the primary and
secondary VDisks when a stand-alone relationship is in a Consistent state. When issuing the
command, the desired primary is specified.
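For example, to make the auxiliary VDisk the primary in the illustrative relationship MMREL1:

   svctask switchrcrelationship -primary aux MMREL1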
6.6.14 Reversing a Metro Mirror consistency group
To reverse a Metro Mirror consistency group, use the svctask switchrcconsistgrp
command.
svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of the primary and
secondary VDisks when a consistency group is in a Consistent state. This change is applied
to all of the relationships in the consistency group, and when issuing the command, the
desired primary is specified.
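For example, to make the auxiliary VDisks the primaries for all of the relationships in the
illustrative consistency group CG_W2K:

   svctask switchrcconsistgrp -primary aux CG_W2K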
6.6.15 Background copy
Metro Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy takes place on relationships that are in the
InconsistentCopying state with a status of Online.
The quota of background copy (configured on the intercluster link) is divided evenly between
the nodes that are performing background copy for one of the eligible relationships. This
allocation is made without regard for the number of disks for which the node is responsible.
Each node in turn divides its allocation evenly between the multiple relationships performing a
background copy.
For intracluster relationships, each node is assigned a static quota of 25 MBps.
6.7 Global Mirror overview
In the following topics, we describe the Global Mirror copy service, which is an asynchronous
remote copy service. It provides and maintains a consistent mirrored copy of a source VDisk
to a target VDisk. Data is written from the source VDisk to the target VDisk asynchronously.
This method was previously known as Asynchronous Peer-to-Peer Remote Copy.
Global Mirror works by defining a Global Mirror relationship between two VDisks of equal size
and maintains the data consistency in an asynchronous manner. Therefore, when a host
writes to a source VDisk, the data is copied from the source VDisk cache to the target VDisk
cache. At the initiation of that data copy, the confirmation of I/O completion is transmitted back
to the host.
Minimum firmware requirement: The minimum firmware requirement for Global Mirror
functionality is V4.1.1. Any cluster or partner cluster that is not running this minimum level
will not have Global Mirror functionality available. Even if you have a Global Mirror
relationship running on a down-level partner cluster and you only want to use intracluster
Global Mirror, the functionality will not be available to you.
SVC provides both intracluster and intercluster Global Mirror.
6.7.1 Intracluster Global Mirror
Although Global Mirror is available for intracluster, it has no functional value for production
use. Intracluster Metro Mirror provides the same capability with less overhead. However,
leaving this functionality in place simplifies testing and allows for client experimentation and
testing (for example, to validate server failover on a single test cluster).
6.7.2 Intercluster Global Mirror
Intercluster Global Mirror operations require a pair of SVC clusters that are commonly
separated by a number of moderately high bandwidth links. The two SVC clusters must be
defined in an SVC cluster partnership to establish a fully functional Global Mirror relationship.
Limit: When a local and a remote fabric are connected together for Global Mirror
purposes, the ISL hop count between a local node and a remote node must not exceed
seven hops.
6.8 Remote copy techniques
Global Mirror is an asynchronous remote copy, which we explain next. To illustrate the
differences between synchronous and asynchronous remote copy, we also explain
synchronous remote copy.
6.8.1 Asynchronous remote copy
Global Mirror is an asynchronous remote copy technique. In asynchronous remote copy, write
operations are completed on the primary site and the write acknowledgement is sent to the
host before it is received at the secondary site. An update of this write operation is sent to the
secondary site at a later stage, which provides the capability to perform remote copy over
distances exceeding the limitations of synchronous remote copy.
The Global Mirror function provides the same function as Metro Mirror Remote Copy, but over
long distance links with higher latency, without requiring the hosts to wait for the full round-trip
delay of the long distance link.
Figure 6-23 shows that a write operation to the master VDisk is acknowledged back to the
host issuing the write before the write operation is mirrored to the cache for the auxiliary
VDisk.
Figure 6-23 Global Mirror write sequence
The Global Mirror algorithms maintain a consistent image at the secondary at all times. They
achieve this consistent image by identifying sets of I/Os that are active concurrently at the
primary, assigning an order to those sets, and applying those sets of I/Os in the assigned
order at the secondary. As a result, Global Mirror maintains the features of Write Ordering
and Read Stability that are described in this chapter.
The multiple I/Os within a single set are applied concurrently. The process that marshals the
sequential sets of I/Os operates at the secondary cluster and, so, is not subject to the latency
of the long distance link. These two elements of the protocol ensure that the throughput of the
total cluster can be grown by increasing cluster size, while maintaining consistency across a
growing data set.
In a failover scenario, where the secondary site needs to become the primary source of data,
certain updates might be missing at the secondary site. Therefore, any applications that will
use this data must have an external mechanism for recovering the missing updates and
reapplying them, for example, by replaying a transaction log.
6.8.2 SVC Global Mirror features
SVC Global Mirror supports the following features:
Asynchronous remote copy of VDisks dispersed over metropolitan scale distances is
supported.
SVC implements the Global Mirror relationship between a VDisk pair, with each VDisk in
the pair being managed by an SVC cluster.
SVC supports intracluster Global Mirror, where both VDisks belong to the same cluster
(and I/O Group), although, as stated earlier, this use is better suited to Metro Mirror.
SVC supports intercluster Global Mirror, where each VDisk belongs to its separate SVC
cluster. A given SVC cluster can be configured for partnership with between one and three
other clusters.
Intercluster and intracluster Global Mirror can be used concurrently within a cluster for
separate relationships.
SVC does not require a control network or fabric to be installed to manage Global Mirror.
For intercluster Global Mirror, the SVC maintains a control link between the two clusters.
This control link is used to control the state and to coordinate the updates at either end.
The control link is implemented on top of the same FC fabric connection that the SVC
uses for Global Mirror I/O.
SVC implements a configuration model that maintains the Global Mirror configuration and
state through major events, such as failover, recovery, and resynchronization, to minimize
user configuration action through these events.
SVC maintains and polices a strong concept of consistency and makes this concept
available to guide configuration activity.
SVC implements flexible resynchronization support, enabling it to resynchronize VDisk
pairs that have experienced write I/Os to both disks and to resynchronize only those
regions that are known to have changed.
Colliding writes are supported.
An optional feature for Global Mirror permits a delay simulation to be applied on writes that
are sent to secondary VDisks.
SVC 5.1 introduces Multiple Cluster Mirroring.
Colliding writes
Prior to V4.3.1, the Global Mirror algorithm required that only a single write is active on any
given 512-byte LBA of a VDisk. If a further write is received from a host while the secondary
write is still active, even though the primary write might have completed, the new host write
will be delayed until the secondary write is complete. This restriction is needed in case a
series of writes to the secondary have to be retried (called “reconstruction”). Conceptually,
the data for reconstruction comes from the primary VDisk.
If multiple writes were allowed to be applied to the primary for a given sector, only the most
recent write would have the correct data for reconstruction; if reconstruction were interrupted
for any reason, the intermediate state of the secondary would be inconsistent.
Applications that deliver such write activity will not achieve the performance that Global Mirror
is intended to support. A VDisk statistic is maintained about the frequency of these collisions.
From V4.3.1 onward, an attempt is made to allow multiple writes to a single location to be
outstanding in the Global Mirror algorithm. There is still a need for primary writes to be
serialized, and the intermediate states of the primary data must be kept in a non-volatile
journal while the writes are outstanding to maintain the correct write ordering during
reconstruction. Reconstruction must never overwrite data on the secondary with an earlier
version. The VDisk statistic monitoring colliding writes is now limited to those writes that are
not affected by this change.
Figure 6-24 shows a colliding write sequence example.
Figure 6-24 Colliding writes example
These numbers correspond to the numbers in Figure 6-24:
(1) Original Global Mirror write in progress
(2) Second write to same sector and in-flight write logged to the journal file
(3 and 4) Second write to the secondary cluster
(5) Initial write completes
Delay simulation
An optional feature for Global Mirror permits a delay simulation to be applied on writes that
are sent to secondary VDisks. This feature allows testing to be performed that detects
colliding writes, and therefore, this feature can be used to test an application before the full
deployment of the feature. The feature can be enabled separately for each of the intracluster
or intercluster Global Mirror. You specify the delay setting by using the chcluster command
and view it by using the lscluster command. The gm_intra_delay_simulation field
expresses the amount of time that intracluster secondary I/Os are delayed. The
gm_inter_delay_simulation field expresses the amount of time that intercluster secondary
I/Os are delayed. A value of zero disables the feature.
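As a sketch (verify the exact parameter names against your code level; the
-gminterdelaysimulation and -gmintradelaysimulation parameters are assumed here), a
20 ms delay on intercluster secondary writes might be set and then verified as follows, where
ITSO_CLS1 is an illustrative cluster name:

   svctask chcluster -gminterdelaysimulation 20
   svcinfo lscluster ITSO_CLS1

Setting the value back to zero disables the simulation.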
Multiple Cluster Mirroring
SVC 5.1 introduces Multiple Cluster Mirroring. The rules for a Global Mirror Multiple Cluster
Mirroring environment are the same as the rules in a Metro Mirror environment. For more
detailed information, see 6.5.4, “Multiple Cluster Mirroring” on page 284.
6.9 Global Mirror relationships
Global Mirror relationships are similar to FlashCopy mappings. They can be stand-alone or
combined in consistency groups. You can issue the start and stop commands either against
the stand-alone relationship or the consistency group.
Figure 6-25 illustrates the Global Mirror relationship.
Figure 6-25 Global Mirror relationship
A Global Mirror relationship is composed of two VDisks that are equal in size. The master
VDisk and the auxiliary VDisk can be in the same I/O Group, within the same SVC cluster
(intracluster Global Mirror), or can be on separate SVC clusters that are defined as SVC
partners (intercluster Global Mirror).
Rules:
A VDisk can only be part of one Global Mirror relationship at a time.
A VDisk that is a FlashCopy target cannot be part of a Global Mirror relationship.
6.9.1 Global Mirror relationship between primary and secondary VDisks
When creating a Global Mirror relationship, the master VDisk is initially assigned as the
primary, and the auxiliary VDisk is initially assigned as the secondary. This design implies that
the initial copy direction is mirroring the master VDisk to the auxiliary VDisk. After the initial
synchronization is complete, the copy direction can be changed, if appropriate.
In the most common applications of Global Mirror, the master VDisk contains the production
copy of the data and is used by the host application, while the auxiliary VDisk contains the
mirrored copy of the data and is used for failover in DR scenarios. The terms master and
auxiliary help explain this use. If Global Mirror is applied differently, the terms master and
auxiliary VDisks need to be interpreted appropriately.
6.9.2 Importance of write ordering
Many applications that use block storage have a requirement to survive failures, such as loss
of power or a software crash, without losing data that existed prior to the failure. Because
many applications must perform large numbers of update operations in parallel to that block
storage, maintaining write ordering is key to ensuring the correct operation of applications
following a disruption.
An application that performs a high volume of database updates is usually designed with the
concept of dependent writes. With dependent writes, it is important to ensure that an earlier
write has completed before a later write is started. Reversing the order of dependent writes
can undermine the application’s algorithms and can lead to problems, such as detected or
undetected data corruption.
6.9.3 Dependent writes that span multiple VDisks
The following scenario illustrates a simple example of a sequence of dependent writes and, in
particular, what can happen if they span multiple VDisks. Consider the following typical
sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is to be
performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that the database update
has completed successfully.
Figure 6-26 illustrates the write sequence.
Figure 6-26 Dependent writes for a database
The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next step.
Database logs: All databases have logs associated with them. These logs keep records of
database changes. If a database needs to be restored to a point beyond the last full, offline
backup, logs are required to roll the data forward to the point of failure.
But imagine if the database log and the database are on separate VDisks and a Global Mirror
relationship is stopped during this update. In this case, you must consider the possibility that
the Global Mirror relationship for the VDisk with the database file is stopped slightly before the
VDisk containing the database log.
If this happens, it is possible that the secondary VDisks see writes (1) and (3) but not write
(2). Then, if the database was restarted using the data available from the secondary disks,
the database log indicates that the transaction had completed successfully, when it did not. In
this scenario, the integrity of the database is in question.
6.9.4 Global Mirror consistency groups
Global Mirror consistency groups address the issue of dependent writes across VDisks,
where the objective is to preserve data consistency across multiple Global Mirrored VDisks.
Consistency groups ensure a consistent data set when applications have relational data that
spans multiple VDisks.
A Global Mirror consistency group can contain an arbitrary number of relationships up to the
maximum number of Global Mirror relationships that is supported by the SVC cluster. Global
Mirror commands can be issued to a Global Mirror consistency group, and thereby
simultaneously for all Global Mirror relationships that are defined within that consistency
group, or to a single Global Mirror relationship if it is not part of a consistency group.
For example, when issuing a Global Mirror start command to the consistency group, all of
the Global Mirror relationships in the consistency group are started at the same time.
Figure 6-27 on page 316 illustrates the concept of Global Mirror consistency groups. Because
GM_Relationship 1 and GM_Relationship 2 are part of the consistency group, they can be
handled as one entity, while the stand-alone GM_Relationship 3 is handled separately.
Figure 6-27 Global Mirror consistency group
Certain uses of Global Mirror require the manipulation of more than one relationship. Global
Mirror consistency groups can provide the ability to group relationships so that they are
manipulated in unison. Global Mirror relationships within a consistency group can be in any
form:
Global Mirror relationships can be part of a consistency group, or be stand-alone and
therefore handled as single instances.
A consistency group can contain zero or more relationships. An empty consistency group,
with zero relationships in it, has little purpose until it is assigned its first relationship, except
that it has a name.
All of the relationships in a consistency group must have matching master and auxiliary
SVC clusters.
Although it is possible to use consistency groups to manipulate sets of relationships that do
not need to satisfy these strict rules, such manipulation can lead to undesired side effects.
The rules behind a consistency group mean that certain configuration commands are
prohibited. These specific configuration commands are not prohibited if the relationship is not
part of a consistency group.
For example, consider the case of two applications that are completely independent, yet they
are placed into a single consistency group. In the event of an error, there is a loss of
synchronization, and a background copy process is required to recover synchronization.
While this process is in progress, Global Mirror rejects attempts to enable access to the
secondary VDisks of either application.
If one application finishes its background copy much more quickly than the other application,
Global Mirror still refuses to grant access to its secondary VDisk. Even though it is safe in this
case, Global Mirror policy refuses access to the entire consistency group if any part of it is
inconsistent.
Stand-alone relationships and consistency groups share a common configuration and state
model. All of the relationships in a consistency group that is not empty have the same state as
the consistency group.
6.10 Global Mirror
This section discusses how Global Mirror works.
6.10.1 Intercluster communication and zoning
All intercluster communication is performed through the SAN. Prior to creating intercluster
Global Mirror relationships, you must create a partnership between the two clusters.
SVC node ports on each SVC cluster must be able to access each other to facilitate the
partnership creation. Therefore, you must define a zone in each fabric for intercluster
communication; see Chapter 3, “Planning and configuration” on page 65 for more
information.
6.10.2 SVC cluster partnership
When the SVC cluster partnership has been defined on both clusters, further communication
facilities between the nodes in each of the clusters are established. The communication
facilities consist of these components:
A single control channel, which is used to exchange and coordinate configuration
information
I/O channels between each of the nodes in the clusters
These channels are maintained and updated as nodes appear and disappear and as links
fail, and are repaired to maintain operation where possible. If communication between the
SVC clusters is interrupted or lost, an error is logged (and, consequently, Global Mirror
relationships will stop).
To handle error conditions, you can configure the SVC to raise SNMP traps or send e-mail.
Alternatively, if Tivoli Storage Productivity Center for Replication is in place, it can monitor the
link’s status and issue an alert by using SNMP traps or e-mail.
6.10.3 Maintenance of the intercluster link
All SVC nodes maintain a database of the other devices that are visible on the fabric. This
database is updated as devices appear and disappear.
Devices that advertise themselves as SVC nodes are categorized according to the SVC
cluster to which they belong. SVC nodes that belong to the same cluster establish
communication channels between themselves and begin to exchange messages to
implement the clustering and functional protocols of SVC.
Nodes that are in separate clusters do not exchange messages after the initial discovery is
complete unless they have been configured together to perform Global Mirror.
The intercluster link carries control traffic to coordinate activity between two clusters. It is
formed between one node in each cluster. The traffic between the designated nodes is
distributed among logins that exist between those nodes.
If the designated node fails (or if all of its logins to the remote cluster fail), a new node is
chosen to carry control traffic. This event causes I/O to pause, but it does not cause
relationships to become ConsistentStopped.
6.10.4 Distribution of work among nodes
Global Mirror VDisks must have their preferred nodes evenly distributed among the nodes of
the clusters. Each VDisk within an I/O Group has a preferred node property that can be used
to balance the I/O load between nodes in that group. Global Mirror also uses this property to
route I/O between clusters.
Figure 6-28 shows the best relationship between VDisks and their preferred nodes in order to
get the best performance.
Figure 6-28 Preferred VDisk Global Mirror relationship
6.10.5 Background copy performance
Background copy resources for intercluster remote copy are available within two nodes of an
I/O Group to perform background copy at a maximum of 200 MBps (each data read and data
written) total. The background copy performance is subject to sufficient RAID controller
bandwidth. Performance is also subject to other potential bottlenecks (such as the intercluster
fabric) and possible contention from host I/O for the SVC bandwidth resources.
Background copy I/O will be scheduled to avoid bursts of activity that might have an adverse
effect on system behavior. An entire grain of tracks on one VDisk will be processed at around
the same time but not as a single I/O. Double buffering is used to try to take advantage of
sequential performance within a grain. However, the next grain within the VDisk might not be
scheduled for a while. Multiple grains might be copied simultaneously, which is typically
enough to satisfy the requested rate unless the available resources cannot sustain it.
Background copy proceeds from the low LBA to the high LBA in sequence to avoid convoying
conflicts with FlashCopy, which operates in the opposite direction. Background copy is not
expected to create convoy conflicts with sequential applications, because it tends to vary the
disks that it accesses more often.
6.10.6 Space-efficient background copy
Prior to SVC 4.3.1, if a primary VDisk was space-efficient, the background copy process
caused the secondary to become fully allocated. When both primary and secondary clusters
are running SVC 4.3.1 or higher, Metro Mirror and Global Mirror relationships can preserve
the space-efficiency of the primary.
Conceptually, the background copy process detects an unallocated region of the primary and
sends a special “zero buffer” to the secondary. If the secondary VDisk is space-efficient, and
the region is unallocated, the special buffer prevents a write (and, therefore, an allocation). If
the secondary VDisk is not space-efficient, or the region in question is an allocated region of
a Space-Efficient VDisk, a buffer of “real” zeros is synthesized on the secondary and written
as normal.
If the secondary cluster is running code prior to SVC 4.3.1, this version of the code is
detected by the primary cluster and a buffer of “real” zeros is transmitted and written on the
secondary. The background copy rate controls the rate at which the virtual capacity is being
copied.
6.11 Global Mirror process
There are several steps in the Global Mirror process:
1. An SVC cluster partnership is created between two SVC clusters (for intercluster Global
Mirror).
2. A Global Mirror relationship is created between two VDisks of the same size.
3. To manage multiple Global Mirror relationships as one entity, the relationships can be
made part of a Global Mirror consistency group to ensure data consistency across multiple
Global Mirror relationships, or simply for ease of management.
4. The Global Mirror relationship is started, and when the background copy has completed,
the relationship is consistent and synchronized.
5. When synchronized, the secondary VDisk holds a copy of the production data at the
primary that can be used for DR.
6. To access the auxiliary VDisk, the Global Mirror relationship must be stopped with the
access option enabled, before write I/O is submitted to the secondary.
7. The remote host server is mapped to the auxiliary VDisk, and the disk is available for I/O.
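The following CLI sketch summarizes this flow end to end. The cluster, VDisk, consistency group, and relationship names are hypothetical, and the individual commands are described in detail in 6.12, “Global Mirror commands” on page 329:

# Create the partnership on the local cluster; repeat on the remote cluster,
# naming the local cluster there (mkpartnership must be issued on both clusters)
IBM_2145:ITSO_SVC_1:admin>svctask mkpartnership -bandwidth 50 ITSO_SVC_2
# Create a consistency group and a Global Mirror relationship between two equally sized VDisks
IBM_2145:ITSO_SVC_1:admin>svctask mkrcconsistgrp -name CG_GM -cluster ITSO_SVC_2
IBM_2145:ITSO_SVC_1:admin>svctask mkrcrelationship -master GM_VD_M -aux GM_VD_A -cluster ITSO_SVC_2 -global -consistgrp CG_GM
# Start the group; when the background copy completes, the secondary is usable for DR
IBM_2145:ITSO_SVC_1:admin>svctask startrcconsistgrp CG_GM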
6.11.1 Methods of synchronization
This section describes three methods that can be used to establish a relationship.
Full synchronization after creation
Full synchronization after creation is the default method. It is the simplest method, and it
requires no administrative activity apart from issuing the necessary commands. However, in
certain environments, the bandwidth that is available makes this method unsuitable.
Use this sequence for a single relationship:
A new relationship is created (mkrcrelationship is issued) without specifying the -sync
flag.
A new relationship is started (startrcrelationship is issued) without the -clean flag.
Synchronized before creation
In this method, the administrator must ensure that the master and auxiliary VDisks contain
identical data before creating the relationship. There are two ways to ensure that the master
and auxiliary VDisks contain identical data:
Both disks are created with the security delete (-fmtdisk) feature to make all data zero.
A complete tape image (or other method of moving data) is copied from one disk to the
other disk.
With either technique, no write I/O can take place on either the master or the auxiliary before
the relationship is established.
Then, the administrator must ensure that these commands are issued:
A new relationship is created (mkrcrelationship is issued) with the -sync flag.
A new relationship is started (startrcrelationship is issued) without the -clean flag.
If these steps are not performed correctly, the relationship is reported as being consistent,
when it is not. This situation most likely makes any secondary disk useless. This method has
an advantage over full synchronization: It does not require all of the data to be copied over a
constrained link. However, if the data must be copied, the master and auxiliary disks cannot
be used until the copy is complete, which might be unacceptable.
Quick synchronization after creation
In this method, the administrator must still copy data from the master to the auxiliary, but the
data can be used without stopping the application at the master. The administrator must
ensure that these commands are issued:
A new relationship is created (mkrcrelationship is issued) with the -sync flag.
A new relationship is stopped (stoprcrelationship is issued) with the -access flag.
A tape image (or other method of transferring data) is used to copy the entire master disk
to the auxiliary disk.
After the copy is complete, the administrator must ensure that a new relationship is started
(startrcrelationship is issued) with the -clean flag.
With this technique, only the data that has changed since the relationship was created,
including all regions that were incorrect in the tape image, is copied from the master to the
auxiliary.
As with “Synchronized before creation” on page 320, the copy step must be performed
correctly, or else the auxiliary is useless, although the copy reports it as being synchronized.
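As a hedged sketch of this method (the relationship and VDisk names are hypothetical), the command sequence might look like this:

IBM_2145:ITSO_SVC_1:admin>svctask mkrcrelationship -master GM_VD_M -aux GM_VD_A -cluster ITSO_SVC_2 -global -sync -name GM_REL1
IBM_2145:ITSO_SVC_1:admin>svctask stoprcrelationship -access GM_REL1
(copy the entire master disk to the auxiliary disk by tape image or another transfer method)
IBM_2145:ITSO_SVC_1:admin>svctask startrcrelationship -primary master -clean GM_REL1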
Global Mirror states and events
In this section, we explain the states of a Global Mirror relationship and the series of events
that modify these states.
Figure 6-29 on page 321 shows an overview of the states that apply to a Global Mirror
relationship in the connected state.
Figure 6-29 Global Mirror state diagram
When creating the Global Mirror relationship, you can specify whether the auxiliary VDisk is
already in sync with the master VDisk, and the background copy process is then skipped.
This capability is especially useful when creating Global Mirror relationships for VDisks that
have been created with the format option. The following steps explain the Global Mirror state
diagram (these numbers correspond to the numbers in Figure 6-29):
Step 1:
a. The Global Mirror relationship is created with the -sync option, and the Global Mirror
relationship enters the Consistent stopped state.
b. The Global Mirror relationship is created without specifying that the master and
auxiliary VDisks are in sync, and the Global Mirror relationship enters the Inconsistent
stopped state.
Step 2:
a. When starting a Global Mirror relationship in the Consistent stopped state, it enters the
Consistent synchronized state. This state implies that no updates (write I/O) have been
performed on the primary VDisk while in the Consistent stopped state. Otherwise, you
must specify the -force option, and the Global Mirror relationship then enters the
Inconsistent copying state, while the background copy is started.
b. When starting a Global Mirror relationship in the Inconsistent stopped state, it enters
the Inconsistent copying state, while the background copy is started.
Step 3:
a. When the background copy completes, the Global Mirror relationship transits from the
Inconsistent copying state to the Consistent synchronized state.
Step 4:
a. When stopping a Global Mirror relationship in the Consistent synchronized state,
where specifying the -access option enables write I/O on the secondary VDisk, the
Global Mirror relationship enters the Idling state.
b. To enable write I/O on the secondary VDisk, when the Global Mirror relationship is in
the Consistent stopped state, issue the command svctask stoprcrelationship,
specifying the -access option, and the Global Mirror relationship enters the Idling state.
Step 5:
a. When starting a Global Mirror relationship that is in the Idling state, you must specify
the -primary argument to set the copy direction. Because no write I/O has been
performed (to either the master or auxiliary VDisk) while in the Idling state, the Global
Mirror relationship enters the Consistent synchronized state.
b. In case write I/O has been performed to either the master or the auxiliary VDisk, then
you must specify the -force option. The Global Mirror relationship then enters the
Inconsistent copying state, while the background copy is started.
If the Global Mirror relationship is intentionally stopped or experiences an error, a state
transition is applied. For example, Global Mirror relationships in the Consistent synchronized
state enter the Consistent stopped state, and Global Mirror relationships in the Inconsistent
copying state enter the Inconsistent stopped state.
In a case where the connection is broken between the SVC clusters in a partnership, all of the
(intercluster) Global Mirror relationships enter a Disconnected state. For further information,
refer to “Connected versus disconnected” on page 322.
Common configuration and state model: Stand-alone relationships and consistency
groups share a common configuration and state model. All of the Global Mirror
relationships in a consistency group that is not empty have the same state as the
consistency group.
6.11.2 State overview
The SVC-defined concepts of state are key to understanding the configuration concepts. We
explain them in more detail next.
Connected versus disconnected
This distinction can arise when a Global Mirror relationship is created with the two VDisks in
separate clusters.
Under certain error scenarios, communications between the two clusters might be lost. For
example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric
connection between the two clusters might fail, leaving the two clusters running but unable to
communicate with each other.
When the two clusters can communicate, the clusters and the relationships spanning them
are described as connected. When they cannot communicate, the clusters and the
relationships spanning them are described as disconnected.
In this scenario, each cluster is left with half of the relationship, and each cluster has only a
portion of the information that was available to it before. Only a subset of the normal
configuration activity is available.
The disconnected relationships are portrayed as having a changed state. The new states
describe what is known about the relationship and which configuration commands are
permitted.
When the clusters can communicate again, the relationships become connected again.
Global Mirror automatically reconciles the two state fragments, taking into account any
configuration activity or other event that took place while the relationship was disconnected.
As a result, the relationship can either return to the state that it was in when it became
disconnected or it can enter another connected state.
Relationships that are configured between VDisks in the same SVC cluster (intracluster) will
never be described as being in a disconnected state.
Consistent versus inconsistent
Relationships or consistency groups that contain relationships can be described as being
consistent or inconsistent. The consistent or inconsistent property describes the state of the
data on the secondary VDisk in relation to the data on the primary VDisk. Consider the
consistent or inconsistent property to be a property of the secondary VDisk.
A secondary is described as consistent if it contains data that might have been read by a host
system from the primary if power had failed at an imaginary point in time while I/O was in
progress, and power was later restored. This imaginary point in time is defined as the
recovery point. The requirements for consistency are expressed with respect to activity at the
primary up to the recovery point:
The secondary VDisk contains the data from all writes to the primary for which the host
had received successful completion and that data has not been overwritten by a
subsequent write (before the recovery point).
For writes on the secondary for which the host did not receive successful completion (that is,
the host received bad completion or no completion at all), if the host subsequently performed
a read from the primary of that data, and if that read returned successful completion and no
later write was sent (before the recovery point), the secondary contains the same data as the
data that was returned by the read from the primary.
From the point of view of an application, consistency means that a secondary VDisk contains
the same data as the primary VDisk at the recovery point (the time at which the imaginary
power failure occurred).
If an application is designed to cope with an unexpected power failure, this guarantee of
consistency means that the application will be able to use the secondary and begin operation
just as though it had been restarted after the hypothetical power failure.
Again, the application is dependent on the key properties of consistency:
Write ordering
Read stability for correct operation at the secondary
If a relationship, or a set of relationships, is inconsistent and if an attempt is made to start an
application using the data in the secondaries, a number of outcomes are possible:
The application might decide that the data is corrupt and crash or exit with an error code.
The application might fail to detect that the data is corrupt and return erroneous data.
The application might work without a problem.
Because of the risk of data corruption, and, in particular, undetected data corruption, Global
Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.
You can apply consistency as a concept to a single relationship or to a set of relationships in
a consistency group. Write ordering is a concept that an application can maintain across a
number of disks that are accessed through multiple systems, and therefore, consistency must
operate across all of those disks.
When deciding how to use consistency groups, the administrator must consider the scope of
an application’s data, taking into account all of the interdependent systems that communicate
and exchange information.
If two programs or systems communicate and store details as a result of the information
exchanged, one of the following approaches must be taken:
All of the data that is accessed by the group of systems must be placed into a single
consistency group.
The systems must be recovered independently (each within its own consistency group).
Then, each system must perform recovery with the other applications to become
consistent with them.
Consistent versus synchronized
A copy that is consistent and up-to-date is described as synchronized. In a synchronized
relationship, the primary and secondary VDisks only differ in the regions where writes are
outstanding from the host.
Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at an earlier point in time. Write I/O might have continued to a
primary and not have been copied to the secondary. This state arises when it becomes
impossible to keep up-to-date and maintain consistency. An example is a loss of
communication between clusters when writing to the secondary.
When communication is lost for an extended period of time, Global Mirror tracks the changes
that happen at the primary, but not the order of these changes, or the details of these
changes (write data). When communication is restored, it is impossible to make the
secondary synchronized without sending write data to the secondary out-of-order and,
therefore, losing consistency.
You can use two policies to cope with this situation:
Make a point-in-time copy of the consistent secondary before allowing the secondary to
become inconsistent. In the event of a disaster, before consistency is achieved again, the
point-in-time copy target provides a consistent, though out-of-date, image.
Accept the loss of consistency, and the loss of a useful secondary, while making it
synchronized.
6.11.3 Detailed states
The following sections detail the states that are portrayed to the user, for either consistency
groups or relationships. It also details the extra information that is available in each state. We
described the various major states to provide guidance regarding the available configuration
commands.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the primary is accessible for read and
write I/O, but the secondary is inaccessible for either read or write I/O. A copy process needs
to be started to make the secondary consistent.
This state is entered when the relationship or consistency group was InconsistentCopying
and has either suffered a persistent error or received a stop command that has caused the
copy process to stop.
A start command causes the relationship or consistency group to move to the
InconsistentCopying state. A stop command is accepted, but it has no effect.
If the relationship or consistency group becomes disconnected, the secondary side transits to
InconsistentDisconnected. The primary side transits to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the primary is accessible for read and
write I/O, but the secondary is inaccessible for either read or write I/O.
This state is entered after a start command is issued to an InconsistentStopped relationship
or consistency group. It is also entered when a forced start is issued to an Idling or
ConsistentStopped relationship or consistency group.
In this state, a background copy process runs, which copies data from the primary to the
secondary VDisk.
In the absence of errors, an InconsistentCopying relationship is active, and the copy progress
increases until the copy process completes. In certain error situations, the copy progress
might freeze or even regress.
A persistent error or stop command places the relationship or consistency group into the
InconsistentStopped state. A start command is accepted, but it has no effect.
If the background copy process completes on a stand-alone relationship, or on all
relationships for a consistency group, the relationship or consistency group transits to the
ConsistentSynchronized state.
If the relationship or consistency group becomes disconnected, the secondary side transits to
InconsistentDisconnected. The primary side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the secondary contains a consistent
image, but it might be out-of-date with respect to the primary.
This state can arise when a relationship is in the Consistent Synchronized state and
experiences an error that forces a Consistency Freeze. It can also arise when a relationship is
created with a CreateConsistentFlag set to true.
Normally, following an I/O error, subsequent write activity causes updates to the primary, and
the secondary is no longer synchronized (the synchronized attribute is set to false). In this
case, to re-establish
synchronization, consistency must be given up for a period. A start command with the -force
option must be used to acknowledge this situation, and the relationship or consistency group
transits to InconsistentCopying. Issue this command only after all of the outstanding errors
are repaired.
In the unusual case where the primary and secondary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual
case, a switch command is permitted that moves the relationship or consistency group to
ConsistentSynchronized and reverses the roles of the primary and the secondary.
If the relationship or consistency group becomes disconnected, then the secondary side
transits to ConsistentDisconnected. The primary side transitions to IdlingDisconnected.
An informational status log is generated every time that a relationship or consistency group
enters the ConsistentStopped state with a status of Online. This log can be configured to
enable an SNMP trap, which provides a trigger for automation software to consider issuing a
start command following a loss of synchronization.
ConsistentSynchronized
This is a connected state. In this state, the primary VDisk is accessible for read and write I/O.
The secondary VDisk is accessible for read-only I/O.
Writes that are sent to the primary VDisk are sent to both primary and secondary VDisks.
Either successful completion must be received for both writes, the write must be failed to the
host, or a state must transit out of the ConsistentSynchronized state before a write is
completed to the host.
A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.
A switch command leaves the relationship in the ConsistentSynchronized state, but it
reverses the primary and secondary roles.
A start command is accepted, but it has no effect.
If the relationship or consistency group becomes disconnected, the same transitions are
made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary disks are operating in the primary role.
Consequently, both master and auxiliary disks are accessible for write I/O.
In this state, the relationship or consistency group accepts a start command. Global Mirror
maintains a record of regions on each disk that received write I/O while Idling. This record is
used to determine what areas need to be copied following a start command.
The start command must specify the new copy direction. A start command can cause a
loss of consistency if either VDisk in any relationship has received write I/O, which is indicated
by the synchronized status. If the start command leads to loss of consistency, you must
specify a -force parameter.
Following a start command, the relationship or consistency group transits to
ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is
a loss of consistency.
Also, while in this state, the relationship or consistency group accepts a -clean option on the
start command. If the relationship or consistency group becomes disconnected, both sides
change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The VDisk or disks in this half of the relationship
or consistency group are all in the primary role and accept read or write I/O.
The major priority in this state is to recover the link and reconnect the relationship or
consistency group.
No configuration activity is possible (except for deletes or stops) until the relationship is
reconnected. At that point, the relationship transits to a connected state. The exact connected
state that is entered depends on the state of the other half of the relationship or consistency
group, which depends on these factors:
The state when it became disconnected
The write activity since it was disconnected
The configuration activity since it was disconnected
If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected.
While IdlingDisconnected, if a write I/O is received that causes loss of synchronization
(synchronized attribute transits from true to false) and the relationship was not already
stopped (either through a user stop or a persistent error), an error log is raised. This error log
is the same error log that is raised when the same situation arises in the
ConsistentSynchronized state.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The VDisks in this half of the relationship
or consistency group are all in the secondary role and do not accept read or write I/O.
No configuration activity, except for deletes, is permitted until the relationship reconnects.
When the relationship or consistency group reconnects, the relationship becomes
InconsistentCopying automatically unless either of these conditions exist:
The relationship was InconsistentStopped when it became disconnected.
The user issued a stop while disconnected.
In either case, the relationship or consistency group becomes InconsistentStopped.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or
consistency group are all in the secondary role and accept read I/O but not write I/O.
This state is entered from ConsistentSynchronized or ConsistentStopped when the
secondary side of a relationship becomes disconnected.
In this state, the relationship or consistency group displays an attribute of FreezeTime, which
is the point in time that Consistency was frozen. When entered from ConsistentStopped, it
retains the time that it had in that state. When entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or consistency group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the other
cluster.
A stop command with the -access flag set to true transits the relationship or consistency
group to the IdlingDisconnected state. This state allows write I/O to be performed to the
secondary VDisk and is used as part of a DR scenario.
When the relationship or consistency group reconnects, the relationship or consistency group
becomes ConsistentSynchronized only if this state does not lead to a loss of consistency.
This is the case provided that these conditions are true:
The relationship was ConsistentSynchronized when it became disconnected.
No writes received successful completion at the primary while disconnected.
Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.
Empty
This state only applies to consistency groups. It is the state of a consistency group that has
no relationships and no other state information to show.
It is entered when a consistency group is first created. It is exited when the first relationship is
added to the consistency group, at which point, the state of the relationship becomes the
state of the consistency group.
6.11.4 Practical use of Global Mirror
To use Global Mirror, you must define a relationship between two VDisks.
When creating the Global Mirror relationship, one VDisk is defined as the master, and the
other VDisk is defined as the auxiliary. The relationship between the two copies is
asymmetric. When the Global Mirror relationship is created, the master VDisk is initially
considered the primary copy (often referred to as the source), and the auxiliary VDisk is
considered the secondary copy (often referred to as the target).
The master VDisk is the production VDisk, and updates to this copy are real-time mirrored to
the auxiliary VDisk. The contents of the auxiliary VDisk that existed when the relationship was
created are destroyed.
Switching the copy direction: The copy direction for a Global Mirror relationship can be
switched so the auxiliary VDisk becomes the primary and the master VDisk becomes the
secondary.
While the Global Mirror relationship is active, the secondary copy (VDisk) is inaccessible for
host application write I/O at any time. The SVC allows read-only access to the secondary
VDisk when it contains a “consistent” image. This read-only access is only intended to allow
boot time operating system discovery to complete without error, so that any hosts at the
secondary site can be ready to start up the applications with minimal delay, if required.
For example, many operating systems need to read logical block address (LBA) 0 (zero) to
configure a logical unit. Although read access is allowed at the secondary, in practice the
data on the secondary volumes cannot be read by a host, because most operating systems
write a “dirty bit” to the file system when it is mounted. Because this write operation is not
allowed on the secondary volume, the volume cannot be mounted.
This access is only provided where consistency can be guaranteed. However, there is no way
in which coherency can be maintained between reads that are performed at the secondary
and later write I/Os that are performed at the primary.
To enable access to the secondary VDisk for host operations, you must stop the Global Mirror
relationship by specifying the -access parameter.
While access to the secondary VDisk for host operations is enabled, you must instruct the
host to mount the VDisk and other related tasks, before the application can be started or
instructed to perform a recovery process.
Using a secondary copy demands a conscious policy decision by the administrator that a
failover is required, and the tasks to be performed on the host that is involved in establishing
operation on the secondary copy are substantial. The goal is to make this failover rapid (much
faster than recovering from a backup copy), but it is not seamless.
You can automate the failover process by using failover management software. The SVC
provides Simple Network Management Protocol (SNMP) traps and programming (or
scripting) for the command-line interface (CLI) to enable this automation.
6.11.5 Valid combinations of FlashCopy and Metro Mirror or Global Mirror
functions
Table 6-9 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions
that are valid for a VDisk.
Table 6-9 VDisk valid combinations

FlashCopy          Metro Mirror or Global Mirror   Metro Mirror or Global Mirror
                   Primary                         Secondary
FlashCopy Source   Supported                       Supported
FlashCopy Target   Not supported                   Not supported
6.11.6 Global Mirror configuration limits
Table 6-10 lists the Global Mirror configuration limits.
Table 6-10 Global Mirror configuration limits

Parameter                                                     Value
Number of Metro Mirror consistency groups per cluster         256
Number of Metro Mirror relationships per cluster              8,192
Number of Metro Mirror relationships per consistency group    8,192
Total VDisk size per I/O Group                                A per I/O Group limit of 1,024 TB exists on the quantity
                                                              of primary and secondary VDisk address spaces that can
                                                              participate in Metro Mirror and Global Mirror relationships.
                                                              This maximum configuration will consume all 512 MB of
                                                              bitmap space for the I/O Group and allow no FlashCopy
                                                              bitmap space.
6.12 Global Mirror commands
Here, we summarize several of the most important Global Mirror commands. For complete
details about all of the Global Mirror commands, see IBM System Storage SAN Volume
Controller: Command-Line Interface User's Guide, SC26-7903.
The command set for Global Mirror contains two broad groups:
Commands to create, delete, and manipulate relationships and consistency groups
Commands that cause state changes
Where a configuration command affects more than one cluster, Global Mirror performs the
work to coordinate configuration activity between the clusters. Certain configuration
commands can only be performed when the clusters are connected, and those commands
fail with no effect when the clusters are disconnected.
Other configuration commands are permitted even though the clusters are disconnected. The
state is reconciled automatically by Global Mirror when the clusters are reconnected.
For any given command, with one exception, a single cluster actually receives the command
from the administrator. This action is significant for defining the context for a
CreateRelationship (mkrcrelationship) command or a CreateConsistencyGroup
(mkrcconsistgrp) command, in which case, the cluster receiving the command is called the
local cluster.
The exception is the command that sets clusters into a Global Mirror partnership. The
administrator must issue the mkpartnership command to both the local and to the remote
cluster.
The commands are described here as an abstract command set. You can implement these
commands in one of two ways:
A command-line interface (CLI), which can be used for scripting and automation
A graphical user interface (GUI), which can be used for one-off tasks
6.12.1 Listing the available SVC cluster partners
To create an SVC cluster partnership, we use the svcinfo lsclustercandidate command.
svcinfo lsclustercandidate
Use the svcinfo lsclustercandidate command to list the clusters that are available for
setting up a two-cluster partnership. This command is a prerequisite for creating Global Mirror
relationships.
To display the characteristics of the cluster, use the svcinfo lscluster command, specifying
the name of the cluster.
svctask chcluster
The svctask chcluster command has three parameters that apply to Global Mirror:
-gmlinktolerance link_tolerance
This parameter specifies the maximum period of time that the system will tolerate delay
before stopping Global Mirror relationships. Specify values between 60 and 86400
seconds in increments of 10 seconds. The default value is 300. Do not change this value
except under the direction of IBM Support.
-gminterdelaysimulation inter_cluster_delay_simulation
This parameter specifies the number of milliseconds that I/O activity (intercluster copying
to a secondary VDisk) is delayed. This parameter permits you to test performance
implications before deploying Global Mirror and obtaining a long distance link. Specify a
value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use
this argument to test each intercluster Global Mirror relationship separately.
-gmintradelaysimulation intra_cluster_delay_simulation
This parameter specifies the number of milliseconds that I/O activity (intracluster copying
to a secondary VDisk) is delayed. This parameter permits you to test performance
implications before deploying Global Mirror and obtaining a long distance link. Specify a
value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use
this argument to test each intracluster Global Mirror relationship separately.
Use the svctask chcluster command to adjust these values:
svctask chcluster -gmlinktolerance 300
You can view all of these parameter values with the svcinfo lscluster <clustername>
command.
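For example, to simulate 20 milliseconds of additional intercluster latency before committing to a long distance link (the value is illustrative only), and then to disable the simulation again:

IBM_2145:ITSO_SVC_1:admin>svctask chcluster -gminterdelaysimulation 20
IBM_2145:ITSO_SVC_1:admin>svctask chcluster -gminterdelaysimulation 0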
gmlinktolerance
The gmlinktolerance parameter warrants a particular and detailed note.
If poor response extends past the specified tolerance, a 1920 error is logged and one or more
Global Mirror relationships are automatically stopped, which protects the application hosts at
the primary site. During normal operation, application hosts experience a minimal effect from
the response times, because the Global Mirror feature uses asynchronous replication.
However, if Global Mirror operations experience degraded response times from the
secondary cluster for an extended period of time, I/O operations begin to queue at the
primary cluster. This queue results in an extended response time to application hosts. In this
situation, the gmlinktolerance feature stops Global Mirror relationships and the application
host’s response time returns to normal. After a 1920 error has occurred, the Global Mirror
auxiliary VDisks are no longer in the consistent_synchronized state until you fix the cause of
the error and restart your Global Mirror relationships. For this reason, ensure that you monitor
the cluster to track when this 1920 error occurs.
You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0 (zero).
However, the gmlinktolerance feature cannot protect applications from extended response
times if it is disabled. It might be appropriate to disable the gmlinktolerance feature in the
following circumstances:
During SAN maintenance windows where degraded performance is expected from SAN
components and application hosts can withstand extended response times from Global
Mirror VDisks.
During periods when application hosts can tolerate extended response times and it is
expected that the gmlinktolerance feature might stop the Global Mirror relationships. For
example, if you test using an I/O generator, which is configured to stress the back-end
storage, the gmlinktolerance feature might detect the high latency and stop the Global
Mirror relationships. Disabling the gmlinktolerance feature prevents this result at the risk of
exposing the test host to extended response times.
We suggest using a script to monitor the Global Mirror status periodically.
Example 6-2 shows a sample ksh script that checks the Global Mirror status.
Example 6-2 Script example

[root@host] /usr/GMC > cat checkSVCgm
#!/bin/ksh
#
# Description
#
# Check the Global Mirror consistency group status every 600 seconds and
# restart the group with -force if it has stopped (for example, after a
# 1920 error).
#
# GM_STATUS        Global Mirror status variable
# HOSTsvcNAME      SVC cluster IP address
# PARA_TEST        Consistent synchronized variable
# PARA_TESTSTOPIN  Inconsistent stopped variable
# PARA_TESTSTOP    Consistent stopped variable
# IDCONS           Consistency Group ID variable

# Variable definition
HOSTsvcNAME="128.153.3.237"
IDCONS=255
PARA_TEST="consistent_synchronized"
PARA_TESTSTOP="consistent_stopped"
PARA_TESTSTOPIN="inconsistent_stopped"
FLOG="/usr/GMC/log/gmtest.log"
VAR=0

# Start program: loop forever when the script is started without arguments
if [[ $1 == "" ]]
then
   CICLI="true"
fi
while $CICLI
do
   GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
   echo "`date` Global Mirror STATUS <$GM_STATUS> " >> $FLOG
   if [[ $GM_STATUS = $PARA_TEST ]]
   then
      sleep 600
   else
      # Wait before acting, then recheck; a stopped state probably indicates
      # a 1920 error, so restart the consistency group with -force
      sleep 600
      GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
      if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
      then
         ssh -l admin $HOSTsvcNAME svctask startrcconsistgrp -force $IDCONS
         TESTEX=`echo $?`
         echo "`date` Global Mirror RESTARTED.......... with RC=$TESTEX " >> $FLOG
      fi
      # Verify the restart: the group is only deemed restarted if it has
      # left the stopped states
      GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
      if [[ $GM_STATUS != $PARA_TESTSTOP && $GM_STATUS != $PARA_TESTSTOPIN ]]
      then
         echo "`date` Global Mirror restarted <$GM_STATUS>"
      else
         echo "`date` ERROR Global Mirror failed <$GM_STATUS>"
      fi
      sleep 600
   fi
   ((VAR+=1))
done
The script in Example 6-2 on page 331 performs these functions:
Checks the Global Mirror status every 600 seconds.
If the status is Consistent_Synchronized, waits another 600 seconds and tests again.
If the status is Consistent_Stopped or Inconsistent_Stopped, waits another 600 seconds
and then tries to restart Global Mirror. If the status is still Consistent_Stopped or
Inconsistent_Stopped, we probably have a 1920 error scenario, which means that we
might have a performance problem. Waiting 600 seconds before restarting Global Mirror
can give the SVC enough time to deliver the high workload that is requested by the server.
Because Global Mirror has been stopped for 10 minutes (600 seconds), the secondary
copy is now out-of-date by this amount of time and must be resynchronized.
Sample script: The script that is described in Example 6-2 on page 331 is supplied as-is.
A 1920 error indicates that one or more of the SAN components are unable to provide the
performance that is required by the application hosts. This situation can be temporary (for
example, a result of a maintenance activity) or permanent (for example, a result of a hardware
failure or an unexpected host I/O workload).
If you experience 1920 errors, we suggest that you install a SAN performance analysis tool,
such as the IBM Tivoli Storage Productivity Center, and make sure that the tool is correctly
configured and monitoring statistics to look for problems and to try to prevent them.
6.12.2 Creating an SVC cluster partnership
To create an SVC cluster partnership, use the svctask mkpartnership command.
svctask mkpartnership
Use the svctask mkpartnership command to establish a one-way Global Mirror partnership
between the local cluster and a remote cluster.
To establish a fully functional Global Mirror partnership, you must issue this command on both
clusters. This step is a prerequisite for creating Global Mirror relationships between VDisks on
the SVC clusters.
When creating the partnership, you can specify the bandwidth to be used by the background
copy process between the local and the remote SVC cluster, and if it is not specified, the
bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or
equal to the bandwidth that can be sustained by the intercluster link.
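For example, assuming clusters named ITSO_SVC_1 and ITSO_SVC_2 (hypothetical names) and an intercluster link that can sustain 40 MBps of background copy, issue the command once on each cluster, naming the other cluster:

IBM_2145:ITSO_SVC_1:admin>svctask mkpartnership -bandwidth 40 ITSO_SVC_2
IBM_2145:ITSO_SVC_2:admin>svctask mkpartnership -bandwidth 40 ITSO_SVC_1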
Background copy bandwidth effect on foreground I/O latency
The background copy bandwidth determines the rate at which the background copy will be
attempted for Global Mirror. The background copy bandwidth can affect foreground I/O
latency in one of three ways:
The following result can occur if the background copy bandwidth is set too high compared
to the Global Mirror intercluster link capacity:
– The background copy I/Os can back up on the Global Mirror intercluster link.
– There is a delay in the synchronous secondary writes of foreground I/Os.
– The foreground I/O latency will increase as perceived by applications.
If the background copy bandwidth is set too high for the storage at the primary site,
background copy read I/Os overload the primary storage and delay foreground I/Os.
If the background copy bandwidth is set too high for the storage at the secondary site,
background copy writes at the secondary overload the secondary storage and again delay
the synchronous secondary writes of foreground I/Os.
In order to set the background copy bandwidth optimally, make sure that you consider all
three resources (the primary storage, the intercluster link bandwidth, and the secondary
storage). Provision the most restrictive of these three resources between the background
copy bandwidth and the peak foreground I/O workload. Perform this provisioning by
calculation or, alternatively, by determining experimentally how much background copy can
be allowed before the foreground I/O latency becomes unacceptable and then reducing the
background copy to accommodate peaks in workload and an additional safety margin.
svctask chpartnership
To change the bandwidth that is available for background copy in an SVC cluster partnership,
use the svctask chpartnership command to specify the new bandwidth.
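For example, to lower the background copy bandwidth to 30 MBps during production hours (the value and cluster name are illustrative):

IBM_2145:ITSO_SVC_1:admin>svctask chpartnership -bandwidth 30 ITSO_SVC_2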
6.12.3 Creating a Global Mirror consistency group
To create a Global Mirror consistency group, use the svctask mkrcconsistgrp command.
svctask mkrcconsistgrp
Use the svctask mkrcconsistgrp command to create a new, empty Global Mirror consistency
group.
The Global Mirror consistency group name must be unique across all consistency groups that
are known to the clusters owning this consistency group. If the consistency group involves two
clusters, the clusters must be in communication throughout the creation process.
The new consistency group does not contain any relationships and will be in the Empty state.
You can add Global Mirror relationships to the group, either upon creation or afterward, by
using the svctask chrcrelationship command.
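For example, to create an empty group for intercluster Global Mirror relationships (the group and cluster names are hypothetical):

IBM_2145:ITSO_SVC_1:admin>svctask mkrcconsistgrp -name CG_GM -cluster ITSO_SVC_2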
6.12.4 Creating a Global Mirror relationship
To create a Global Mirror relationship, use the svctask mkrcrelationship command.
Optional parameter: If you do not use the -global optional parameter, a Metro Mirror
relationship will be created instead of a Global Mirror relationship.
svctask mkrcrelationship
Use the svctask mkrcrelationship command to create a new Global Mirror relationship. This
relationship persists until it is deleted.
The auxiliary VDisk must be equal in size to the master VDisk or the command will fail, and if
both VDisks are in the same cluster, they must both be in the same I/O Group. The master
and auxiliary VDisk cannot be in an existing relationship, and they cannot be the target of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.
When creating the Global Mirror relationship, you can add it to a consistency group that
already exists, or it can be a stand-alone Global Mirror relationship if no consistency group is
specified.
To check whether the master or auxiliary VDisks comply with the prerequisites to participate
in a Global Mirror relationship, use the svcinfo lsrcrelationshipcandidate command, as
shown in “svcinfo lsrcrelationshipcandidate” on page 334.
svcinfo lsrcrelationshipcandidate
Use the svcinfo lsrcrelationshipcandidate command to list the available VDisks that are
eligible to form a Global Mirror relationship.
When issuing the command, you can specify the master VDisk name and auxiliary cluster to
list candidates that comply with the prerequisites to create a Global Mirror relationship. If the
command is issued with no parameters, all VDisks that are not disallowed by another
configuration state, such as being a FlashCopy target, are listed.
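A hedged sketch of the two commands together (the VDisk, cluster, and group names are hypothetical):

IBM_2145:ITSO_SVC_1:admin>svcinfo lsrcrelationshipcandidate -master GM_VD_M -aux ITSO_SVC_2
IBM_2145:ITSO_SVC_1:admin>svctask mkrcrelationship -master GM_VD_M -aux GM_VD_A -cluster ITSO_SVC_2 -global -consistgrp CG_GM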
6.12.5 Changing a Global Mirror relationship
To modify the properties of a Global Mirror relationship, use the svctask chrcrelationship
command.
svctask chrcrelationship
Use the svctask chrcrelationship command to modify the following properties of a Global
Mirror relationship:
Change the name of a Global Mirror relationship.
Add a relationship to a group.
Remove a relationship from a group using the -force flag.
Adding a Global Mirror relationship: When adding a Global Mirror relationship to a
consistency group that is not empty, the relationship must have the same state and copy
direction as the group in order to be added to it.
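For example, to rename a relationship and then add it to an existing consistency group (the names are hypothetical):

IBM_2145:ITSO_SVC_1:admin>svctask chrcrelationship -name GM_REL1 rcrel0
IBM_2145:ITSO_SVC_1:admin>svctask chrcrelationship -consistgrp CG_GM GM_REL1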
6.12.6 Changing a Global Mirror consistency group
To change the name of a Global Mirror consistency group, use the following command.
svctask chrcconsistgrp
Use the svctask chrcconsistgrp command to change the name of a Global Mirror
consistency group.
6.12.7 Starting a Global Mirror relationship
To start a stand-alone Global Mirror relationship, use the following command.
svctask startrcrelationship
Use the svctask startrcrelationship command to start the copy process of a Global Mirror
relationship.
When issuing the command, you can set the copy direction if it is undefined, and, optionally,
you can mark the secondary VDisk of the relationship as clean. The command fails if it is
used as an attempt to start a relationship that is already a part of a consistency group.
You can only issue this command to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (primary and secondary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
either by a stop command or by an I/O error.
If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force parameter when restarting the relationship. This situation can
arise if, for example, the relationship was stopped and then further writes were performed on
the original primary of the relationship. The use of the -force parameter here is a reminder
that the data on the secondary will become inconsistent while resynchronization (background
copying) takes place and, therefore, is unusable for DR purposes before the background copy
has completed.
In the Idling state, you must specify the primary VDisk to indicate the copy direction. In other
connected states, you can provide the primary argument, but it must match the existing
setting.
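For example, to start an idling stand-alone relationship with the master as the primary, forcing the start when writes have occurred and consistency will temporarily be lost (the relationship name is hypothetical):

IBM_2145:ITSO_SVC_1:admin>svctask startrcrelationship -primary master -force GM_REL1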
6.12.8 Stopping a Global Mirror relationship
To stop a stand-alone Global Mirror relationship, use the svctask stoprcrelationship
command.
svctask stoprcrelationship
Use the svctask stoprcrelationship command to stop the copy process for a relationship.
You can also use this command to enable write access to a consistent secondary VDisk by
specifying the -access parameter.
This command applies to a stand-alone relationship. It is rejected if it is addressed to a
relationship that is part of a consistency group. You can issue this command to stop a
relationship that is copying from primary to secondary.
If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue an svctask startrcrelationship command. Write activity is no longer copied
from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized
state, this command causes a Consistency Freeze.
When a relationship is in a consistent state (that is, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access
parameter with the svctask stoprcrelationship command to enable write access to the
secondary VDisk.
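For example, to stop a consistent stand-alone relationship and enable write access to the secondary VDisk (the relationship name is hypothetical):

IBM_2145:ITSO_SVC_1:admin>svctask stoprcrelationship -access GM_REL1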
6.12.9 Starting a Global Mirror consistency group
To start a Global Mirror consistency group, use the svctask startrcconsistgrp command.
svctask startrcconsistgrp
Use the svctask startrcconsistgrp command to start a Global Mirror consistency group.
You can only issue this command to a consistency group that is connected.
For a consistency group that is idling, this command assigns a copy direction (primary and
secondary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped either by a stop command or by an I/O error.
6.12.10 Stopping a Global Mirror consistency group
To stop a Global Mirror consistency group, use the svctask stoprcconsistgrp command.
svctask stoprcconsistgrp
Use the svctask stoprcconsistgrp command to stop the copy process for a Global Mirror
consistency group. You can also use this command to enable write access to the secondary
VDisks in the group if the group is in a consistent state.
If the consistency group is in an inconsistent state, any copy operation stops and does not
resume until you issue the svctask startrcconsistgrp command. Write activity is no longer
copied from the primary to the secondary VDisks, which belong to the relationships in the
group. For a consistency group in the ConsistentSynchronized state, this command causes a
Consistency Freeze.
When a consistency group is in a consistent state (for example, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access
parameter with the svctask stoprcconsistgrp command to enable write access to the
secondary VDisks within that group.
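Hedged examples of both group-level commands (the group name is hypothetical):

IBM_2145:ITSO_SVC_1:admin>svctask startrcconsistgrp -primary master CG_GM
IBM_2145:ITSO_SVC_1:admin>svctask stoprcconsistgrp -access CG_GM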
6.12.11 Deleting a Global Mirror relationship
To delete a Global Mirror relationship, use the svctask rmrcrelationship command.
svctask rmrcrelationship
Use the svctask rmrcrelationship command to delete the relationship that is specified.
Deleting a relationship only deletes the logical relationship between the two VDisks. It does
not affect the VDisks themselves.
If the relationship is disconnected at the time that the command is issued, the relationship is
only deleted on the cluster on which the command is being run. When the clusters reconnect,
the relationship is automatically deleted on the other cluster.
Alternatively, if the clusters are disconnected, and you still want to remove the relationship on
both clusters, you can issue the rmrcrelationship command independently on both of the
clusters.
A relationship cannot be deleted if it is part of a consistency group. You must first remove the
relationship from the consistency group.
If you delete an inconsistent relationship, the secondary VDisk becomes accessible even
though it is still inconsistent. This situation is the one case in which Global Mirror does not
inhibit access to inconsistent data.
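For example (the relationship name is hypothetical; remember that a relationship that is still in a consistency group must first be removed from the group, as described in 6.12.5):

IBM_2145:ITSO_SVC_1:admin>svctask rmrcrelationship GM_REL1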
6.12.12 Deleting a Global Mirror consistency group
To delete a Global Mirror consistency group, use the svctask rmrcconsistgrp command.
svctask rmrcconsistgrp
Use the svctask rmrcconsistgrp command to delete a Global Mirror consistency group. This
command deletes the specified consistency group. You can issue this command for any
existing consistency group.
If the consistency group is disconnected at the time that the command is issued, the
consistency group is only deleted on the cluster on which the command is being run. When
the clusters reconnect, the consistency group is automatically deleted on the other cluster.
Alternatively, if the clusters are disconnected, and you still want to remove the consistency
group on both clusters, you can issue the svctask rmrcconsistgrp command separately on
both of the clusters.
If the consistency group is not empty, the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.
6.12.13 Reversing a Global Mirror relationship
To reverse a Global Mirror relationship, use the svctask switchrcrelationship command.
svctask switchrcrelationship
Use the svctask switchrcrelationship command to reverse the roles of the primary VDisk
and the secondary VDisk when a stand-alone relationship is in a consistent state; when
issuing the command, the desired primary needs to be specified.
6.12.14 Reversing a Global Mirror consistency group
To reverse a Global Mirror consistency group, use the svctask switchrcconsistgrp
command.
svctask switchrcconsistgrp
Use the svctask switchrcconsistgrp command to reverse the roles of the primary VDisk and
the secondary VDisk when a consistency group is in a consistent state. This change is
applied to all of the relationships in the consistency group, and when issuing the command,
the desired primary needs to be specified.
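For example, to make the auxiliary VDisks the primary copies for a stand-alone relationship and for a consistency group (the names are hypothetical):

IBM_2145:ITSO_SVC_1:admin>svctask switchrcrelationship -primary aux GM_REL1
IBM_2145:ITSO_SVC_1:admin>svctask switchrcconsistgrp -primary aux CG_GM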
Chapter 7. SAN Volume Controller operations using the command-line interface
In this chapter, we describe operational management. We use the command-line interface
(CLI) to demonstrate normal operation first and advanced operation afterward.
You can use either the CLI or GUI to manage IBM System Storage SAN Volume Controller
(SVC) operations. We prefer to use the CLI in this chapter. You might want to script these
operations, and we think it is easier to create the documentation for the scripts using the CLI.
This chapter assumes a fully functional SVC environment.
7.1 Normal operations using CLI
In the following topics, we describe those commands that best represent normal operational
commands.
7.1.1 Command syntax and online help
Two major command sets are available:
The svcinfo command set allows us to query the various components within the SVC
environment.
The svctask command set allows us to make changes to the various components within
the SVC.
When the command syntax is shown, you will see certain parameters in square brackets, for
example, [parameter], indicating that the parameter is optional in most, if not all, instances.
Any information that is not in square brackets is required information. You can view the syntax
of a command by entering one of the following commands:
svcinfo -?: Shows a complete list of information commands.
svctask -?: Shows a complete list of task commands.
svcinfo commandname -?: Shows the syntax of information commands.
svctask commandname -?: Shows the syntax of task commands.
svcinfo commandname -filtervalue?: Shows the filters that you can use to reduce the
output of the information commands.
Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask
commandname -h command.
If you look at the syntax of a command by typing svcinfo commandname -?, you often see
-filter listed as a parameter. Be aware that the correct parameter is -filtervalue.
Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were recently issued. Then, you can use the left and right, backspace, and delete keys to
edit commands before you resubmit them.
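For example, a hedged illustration of filtering (the VDisk naming convention is hypothetical):

IBM_2145:ITSO_SVC_1:admin>svcinfo lsvdisk -filtervalue "name=GM_VD*"

This command lists only the VDisks whose names begin with GM_VD.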
7.2 Working with managed disks and disk controller systems
This section details the various configuration and administration tasks that you can perform
on the managed disks (MDisks) within the SVC environment and the tasks that you can
perform at a disk controller level.
7.2.1 Viewing disk controller details
Use the svcinfo lscontroller command to display summary information about all available
back-end storage systems.
To display more detailed information about a specific controller, run the command again and
append the controller name parameter, for example, controller id 0, as shown in Example 7-1
on page 341.
Example 7-1 svcinfo lscontroller command
IBM_2145:ITSO_SVC_4:admin>svcinfo lscontroller 0
id 0
controller_name ITSO_XIV_01
WWNN 50017380022C0000
mdisk_link_count 10
max_mdisk_link_count 10
degraded no
vendor_id IBM
product_id_low 2810XIV
product_id_high LUN-0
product_revision 10.1
ctrl_s/n
allow_quorum yes
WWPN 50017380022C0170
path_count 2
max_path_count 4
WWPN 50017380022C0180
path_count 2
max_path_count 2
WWPN 50017380022C0190
path_count 4
max_path_count 6
WWPN 50017380022C0182
path_count 4
max_path_count 12
WWPN 50017380022C0192
path_count 4
max_path_count 6
WWPN 50017380022C0172
path_count 4
max_path_count 6
7.2.2 Renaming a controller
Use the svctask chcontroller command to change the name of a storage controller. To
verify the change, run the svcinfo lscontroller command. Example 7-2 shows both of
these commands.
Example 7-2 svctask chcontroller command
IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name DS4500 controller0
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller -delim ,
id,controller_name,ctrl_s/n,vendor_id,product_id_low,product_id_high
0,DS4500,,IBM,1742-900,
1,DS4700,,IBM,1814,FAStT
This command renames the controller named controller0 to DS4500.
Choosing a new name: The chcontroller command specifies the new name first. You
can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new
name can be between one and 15 characters in length. However, the new name cannot
start with a number, dash, or the word “controller” (because this prefix is reserved for SVC
assignment only).
7.2.3 Discovery status
Use the svcinfo lsdiscoverystatus command, as shown in Example 7-3, to determine if a
discovery operation is in progress. The output of this command is the status of active or
inactive.
Example 7-3 lsdiscoverystatus command
IBM_2145:ITSO-CLS1:admin>svcinfo lsdiscoverystatus
status
inactive
7.2.4 Discovering MDisks
In general, the cluster detects the MDisks automatically when they appear in the network.
However, certain Fibre Channel (FC) controllers do not send the required Small Computer
System Interface (SCSI) primitives that are necessary to automatically discover the new
MDisks.
If new storage has been attached and the cluster has not detected it, it might be necessary to
run this command before the cluster can detect the new MDisks.
Use the svctask detectmdisk command to scan for newly added MDisks (Example 7-4).
Example 7-4 svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
To check whether any newly added MDisks were successfully detected, run the svcinfo
lsmdisk command and look for new unmanaged MDisks.
If the disks do not appear, check that the disk is appropriately assigned to the SVC in the disk
subsystem, and that the zones are set up properly.
Note: If you have assigned a large number of logical unit numbers (LUNs) to your SVC, the
discovery process can take time. Run the svcinfo lsmdisk command several times to
check whether all of the MDisks that you are expecting are present.
When all of the disks allocated to the SVC are seen from the SVC cluster, the following
procedure is a good way to verify which MDisks are unmanaged and ready to be added to the
Managed Disk Group (MDG).
Perform the following steps to display MDisks:
1. Enter the svcinfo lsmdiskcandidate command, as shown in Example 7-5. This command
displays all detected MDisks that are not currently part of an MDG.
Example 7-5 svcinfo lsmdiskcandidate command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskcandidate
id
0
1
2
.
.
Alternatively, you can list all MDisks (managed or unmanaged) by issuing the svcinfo
lsmdisk command, as shown in Example 7-6.
Example 7-6 svcinfo lsmdisk command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
0,mdisk0,online,unmanaged,,,36.0GB,0000000000000000,controller0,600a0b8000174431000000eb
47139cca00000000000000000000000000000000
1,mdisk1,online,unmanaged,,,36.0GB,0000000000000001,controller0,600a0b8000174431000000ef
47139e1c00000000000000000000000000000000
2,mdisk2,online,unmanaged,,,36.0GB,0000000000000002,controller0,600a0b8000174431000000f1
47139e7200000000000000000000000000000000
3,mdisk3,online,unmanaged,,,36.0GB,0000000000000003,controller0,600a0b8000174431000000e4
4713575400000000000000000000000000000000
4,mdisk4,online,unmanaged,,,36.0GB,0000000000000004,controller0,600a0b8000174431000000e6
4713576000000000000000000000000000000000
5,mdisk5,online,unmanaged,,,36.0GB,0000000000000000,controller1,600a0b800026b28200003ea3
4851577c00000000000000000000000000000000
6,mdisk6,online,unmanaged,,,36.0GB,0000000000000005,controller0,600a0b8000174431000000e7
47139cb600000000000000000000000000000000
7,mdisk7,online,unmanaged,,,36.0GB,0000000000000001,controller1,600a0b80002904de00004188
485157a400000000000000000000000000000000
8,mdisk8,online,unmanaged,,,36.0GB,0000000000000006,controller0,600a0b8000174431000000ea
47139cc400000000000000000000000000000000
From this output, you can see additional information about each MDisk (such as the
current status). For the purpose of our current task, we are only interested in the
unmanaged disks, because they are candidates for MDGs (all MDisks, in our case).
Tip: The -delim parameter collapses output instead of wrapping text over multiple lines.
2. If not all of the MDisks that you expected are visible, rescan the available FC network by
entering the svctask detectmdisk command, as shown in Example 7-7.
Example 7-7 svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
3. If you run the svcinfo lsmdiskcandidate command again and your MDisk or MDisks are
still not visible, check that the LUNs from your subsystem have been properly assigned to
the SVC and that appropriate zoning is in place (for example, the SVC can see the disk
subsystem). See Chapter 3, “Planning and configuration” on page 65 for details about
setting up your storage area network (SAN) fabric.
7.2.5 Viewing MDisk information
Use the svcinfo lsmdisk command to display summary information about all available
MDisks (managed or unmanaged). To display more detailed information about a specific
MDisk, run the command again and append the MDisk name or ID parameter (for example,
mdisk0).
The overview command is svcinfo lsmdisk -delim , as shown in Example 7-8 on page 344.
The detailed view for an individual MDisk is svcinfo lsmdisk followed by the name or ID of
the MDisk from which you want the information, as shown in Example 7-9 on page 344.
Example 7-8 svcinfo lsmdisk command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam
e,UID
0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b80004
86a6600000ae94a89575900000000000000000000000000000000
1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000
000e134a895d6e00000000000000000000000000000000
2,mdisk2,online,managed,0,MDG_DS47,16.0GB,0000000000000002,controller0,600a0b80004
858a000000e144a895d9400000000000000000000000000000000
3,mdisk3,online,managed,0,MDG_DS47,16.0GB,0000000000000003,controller0,600a0b80004
858a000000e154a895db000000000000000000000000000000000
Example 7-9 shows a summary for a single MDisk.
Example 7-9 Usage of the command svcinfo lsmdisk (ID)
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 2
id 2
name mdisk2
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 16.0GB
quorum_index 0
block_size 512
controller_name controller0
ctrl_type 4
ctrl_WWNN 200600A0B84858A0
controller_id 0
path_count 2
max_path_count 2
ctrl_LUN_# 0000000000000002
UID 600a0b80004858a000000e144a895d9400000000000000000000000000000000
preferred_WWPN 200600A0B84858A2
active_WWPN 200600A0B84858A2
7.2.6 Renaming an MDisk
Use the svctask chmdisk command to change the name of an MDisk. Be aware that the new
name comes first, followed by the ID or name of the MDisk being renamed, in this format:
svctask chmdisk -name new_name current_id_or_name. Use the svcinfo lsmdisk command
to verify the change. Example 7-10 shows both of these commands.
Example 7-10 svctask chmdisk command
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdisk_6 mdisk6
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam
e,UID
6,mdisk_6,online,managed,0,MDG_DS45,36.0GB,0000000000000005,DS4500,600a0b800017443
1000000e747139cb600000000000000000000000000000000
This command renamed the MDisk named mdisk6 to mdisk_6.
The chmdisk command: The chmdisk command specifies the new name first. You can
use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new
name can be between one and 15 characters in length. However, the new name cannot
start with a number, dash, or the word “mdisk” (because this prefix is reserved for SVC
assignment only).
7.2.7 Including an MDisk
If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These
errors can result from a hardware problem, a SAN problem, or poorly planned maintenance.
If it is a hardware fault, you receive Simple Network Management Protocol (SNMP) alerts
about the state of the disk subsystem (before the disk was excluded), and you can undertake
preventive maintenance. If not, the hosts that were using virtual disks (VDisks) that used the
excluded MDisk now receive I/O errors.
By running the svcinfo lsmdisk command, you can see that mdisk9 is excluded in
Example 7-11.
Example 7-11 svcinfo lsmdisk command: Excluded MDisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam
e,UID
8,mdisk8,online,managed,0,MDG_DS45,36.0GB,0000000000000006,DS4500,600a0b8000174431
000000ea47139cc400000000000000000000000000000000
9,mdisk9,excluded,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b2
8200003ed6485157b600000000000000000000000000000000
After taking the necessary corrective action to repair the MDisk (for example, replace the
failed disk, repair the SAN zones, and so on), we need to include the MDisk again by issuing
the svctask includemdisk command (Example 7-12), because the SVC cluster does not
include the MDisk automatically.
Example 7-12 svctask includemdisk
IBM_2145:ITSO-CLS1:admin>svctask includemdisk mdisk9
Running the svcinfo lsmdisk command again shows mdisk9 online again, as shown in
Example 7-13.
Example 7-13 svcinfo lsmdisk command: Verifying that MDisk is included
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam
e,UID
8,mdisk8,online,managed,0,MDG_DS45,36.0GB,0000000000000006,DS4500,600a0b8000174431
000000ea47139cc400000000000000000000000000000000
9,mdisk9,online,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b282
00003ed6485157b600000000000000000000000000000000
7.2.8 Adding MDisks to a managed disk group
If you have created an empty MDG, or if you simply want to assign additional MDisks to an
already configured MDG, you can use the svctask addmdisk command to populate the MDG
(Example 7-14).
Example 7-14 svctask addmdisk command
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk mdisk6 MDG_DS45
You can only add unmanaged MDisks to an MDG. This command adds the MDisk named
mdisk6 to the MDG named MDG_DS45.
Important: Do not add this MDisk to an MDG if you want to create an image mode VDisk
from the MDisk that you are adding. As soon as you add an MDisk to an MDG, it becomes
managed, and extent mapping is not necessarily one-to-one anymore.
7.2.9 Showing the Managed Disk Group
Use the svcinfo lsmdiskgrp command to display summary information about the defined
MDGs, as shown in Example 7-15.
Example 7-15 svcinfo lsmdiskgrp command
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_
capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,4,468.0GB,512,355.0GB,140.00GB,100.00GB,112.00GB,29,0
1,MDG_DS47,online,8,3,288.0GB,512,217.5GB,120.00GB,20.00GB,70.00GB,41,0
7.2.10 Showing MDisks in a managed disk group
Use the svcinfo lsmdisk -filtervalue command, as shown in Example 7-16, to see which
MDisks are part of a specific MDG. This command shows all of the MDisks that are part of the
MDG named MDG2.
Example 7-16 svcinfo lsmdisk -filtervalue: Mdisks in MDG
IBM_2145:ITSOSVC42A:admin>svcinfo lsmdisk -filtervalue mdisk_grp_name=MDG2 -delim
:
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam
e:UID
6:mdisk6:online:managed:2:MDG2:3.0GB:0000000000000006:DS4000:600a0b800017423300000
044465c0a2700000000000000000000000000000000
7:mdisk7:online:managed:2:MDG2:6.0GB:0000000000000007:DS4000:600a0b800017443100000
06f465bf93200000000000000000000000000000000
21:mdisk21:online:image:2:MDG2:2.0GB:0000000000000015:DS4000:600a0b800017443100000
0874664018600000000000000000000000000000000
7.2.11 Working with Managed Disk Groups
Before we can create any volumes on the SVC cluster, we need to virtualize the allocated
storage that is assigned to the SVC. After volumes have been assigned to the SVC as
managed disks, we cannot start using them until they are members of an MDG. Therefore,
one of our first operations is to create an MDG where we can place our MDisks.
This section describes the operations using MDisks and MDGs. It explains the tasks that we
can perform at an MDG level.
7.2.12 Creating a managed disk group
After successfully logging in to the SVC CLI, we create the MDG.
Using the svctask mkmdiskgrp command, create an MDG, as shown in Example 7-17.
Example 7-17 svctask mkmdiskgrp
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_DS47 -ext 512
MDisk Group, id [0], successfully created
This command creates an MDG called MDG_DS47. The extent size that is used within this
group is 512 MB, which is the most commonly used extent size.
We have not added any MDisks to the MDG yet, so it is an empty MDG.
There is a way to add unmanaged MDisks and create the MDG in the same command. Using
the command svctask mkmdiskgrp with the -mdisk parameter and entering the IDs or names
of the MDisks adds the MDisks immediately after the MDG is created.
So, prior to the creation of the MDG, enter the svcinfo lsmdisk command, as shown in
Example 7-18, where we list all of the available MDisks that are seen by the SVC cluster.
Example 7-18 Listing available MDisks
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam
e,UID
0,mdisk0,online,unmanaged,,,16.0GB,0000000000000000,controller0,600a0b8000486a6600
000ae94a89575900000000000000000000000000000000
1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000
000e134a895d6e00000000000000000000000000000000
2,mdisk2,online,managed,0,MDG_DS83,16.0GB,0000000000000002,controller1,600a0b80004
858a000000e144a895d9400000000000000000000000000000000
3,mdisk3,online,managed,0,MDG_DS83,16.0GB,0000000000000003,controller1,600a0b80004
858a000000e154a895db000000000000000000000000000000000
Using the same command as before (svctask mkmdiskgrp) and knowing the MDisk IDs that
we are using, we can add multiple MDisks to the MDG at the same time. We now add the
unmanaged MDisks, as shown in Example 7-18, to the MDG that we created, as shown in
Example 7-19.
Example 7-19 Creating an MDG and adding available MDisks
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_DS47 -ext 512 -mdisk 0:1
MDisk Group, id [0], successfully created
This command creates an MDG called MDG_DS47. The extent size that is used within this
group is 512 MB, and two MDisks (0 and 1) are added to the group.
MDG name: The -name and -mdisk parameters are optional. If you do not enter a -name,
the default is MDiskgrpx, where x is the ID sequence number that is assigned by the SVC
internally. If you do not enter the -mdisk parameter, an empty MDG is created.
If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and the
underscore. The name can be between one and 15 characters in length, but it cannot start
with a number or the word “MDiskgrp” (because this prefix is reserved for SVC assignment
only).
By running the svcinfo lsmdisk command, you now see the MDisks as “managed” and as
part of the MDG_DS47, as shown in Example 7-20.
Example 7-20 svcinfo lsmdisk command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam
e,UID
0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b80004
86a6600000ae94a89575900000000000000000000000000000000
1,mdisk1,online,managed,0,MDG_DS47,16.0GB,0000000000000001,controller0,600a0b80004
858a000000e134a895d6e00000000000000000000000000000000
2,mdisk2,online,managed,0,MDG_DS83,16.0GB,0000000000000002,controller1,600a0b80004
858a000000e144a895d9400000000000000000000000000000000
3,mdisk3,online,managed,0,MDG_DS83,16.0GB,0000000000000003,controller1,600a0b80004
858a000000e154a895db000000000000000000000000000000000
You have completed the tasks that are required to create an MDG.
7.2.13 Viewing Managed Disk Group information
Use the svcinfo lsmdiskgrp command, as shown in Example 7-21, to display information
about the MDGs that are defined in the SVC.
Example 7-21 svcinfo lsmdiskgrp command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim ,
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_
capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0
1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0
7.2.14 Renaming a managed disk group
Use the svctask chmdiskgrp command to change the name of an MDG. To verify the
change, run the svcinfo lsmdiskgrp command. Example 7-22 shows both of these
commands.
Example 7-22 svctask chmdiskgrp command
IBM_2145:ITSO-CLS1:admin>svctask chmdiskgrp -name MDG_DS81 MDG_DS83
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim ,
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_
capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0
1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0
2,MDG_DS81,online,0,0,0,512,0,0.00MB,0.00MB,0.00MB,0,85
This command renamed the MDG from MDG_DS83 to MDG_DS81.
Changing the MDG name: The chmdiskgrp command specifies the new name first. You
can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new
name can be between one and 15 characters in length. However, the new name cannot
start with a number, dash, or the word “mdiskgrp” (because this prefix is reserved for SVC
assignment only).
7.2.15 Deleting a managed disk group
Use the svctask rmmdiskgrp command to remove an MDG from the SVC cluster
configuration (Example 7-23).
Example 7-23 svctask rmmdiskgrp
IBM_2145:ITSO-CLS1:admin>svctask rmmdiskgrp MDG_DS81
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim ,
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_
capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0
1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0
This command removes MDG_DS81 from the SVC cluster configuration.
Removing an MDG from the SVC cluster configuration: If there are MDisks within the
MDG, you must use the -force flag to remove the MDG from the SVC cluster configuration,
for example:
svctask rmmdiskgrp MDG_DS81 -force
Ensure that you definitely want to use this flag, because it destroys all mapping information
and data held on the VDisks, which cannot be recovered.
7.2.16 Removing MDisks from a managed disk group
Use the svctask rmmdisk command to remove an MDisk from an MDG (Example 7-24).
Example 7-24 svctask rmmdisk command
IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk 6 -force MDG_DS45
This command removes the MDisk called mdisk6 from the MDG named MDG_DS45. The
-force flag is set, because VDisks are using extents on this MDisk.
Sufficient space: The removal only takes place if there is sufficient space to migrate the
VDisk data to other extents on other MDisks that remain in the MDG. After you remove the
MDisk from the MDG, it can take time for its mode to change from managed to unmanaged.
7.3 Working with hosts
This section explains the tasks that can be performed at a host level.
When we create a host in our SVC cluster, we need to define the connection method. Starting
with SVC 5.1, we can now define our host as iSCSI-attached or FC-attached, and we
describe these connection methods in detail in Chapter 2, “IBM System Storage SAN Volume
Controller” on page 7.
7.3.1 Creating a Fibre Channel-attached host
We show creating an FC-attached host under various circumstances in the following sections.
Host is powered on, connected, and zoned to the SVC
When you create your host on the SVC, it is good practice to check whether the host bus
adapter (HBA) worldwide port names (WWPNs) of the server are visible to the SVC. By doing
that, you ensure that zoning is done and that the correct WWPN will be used. Issue the
svcinfo lshbaportcandidate command, as shown in Example 7-25.
Example 7-25 svcinfo lshbaportcandidate command
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
After you know that the WWPNs that are displayed match your host (use host or SAN switch
utilities to verify), use the svctask mkhost command to create your host.
Name: If you do not provide the -name parameter, the SVC automatically generates the
name hostx (where x is the ID sequence number that is assigned by the SVC internally).
You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the
underscore (_). The name can be between one and 15 characters in length. However, the
name cannot start with a number, dash, or the word “host” (because this prefix is reserved
for SVC assignment only).
The command to create a host is shown in Example 7-26.
Example 7-26 svctask mkhost
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn
210000E08B89C1CD:210000E08B054CAA
Host, id [0], successfully created
This command creates a host called Palau using the WWPNs 21:00:00:E0:8B:89:C1:CD and
21:00:00:E0:8B:05:4C:AA.
Ports: You can define from one up to eight ports per host, or you can use the addport
command, which we show in 7.3.5, “Adding ports to a defined host” on page 354.
Host is not powered on or not connected to the SAN
If you want to create a host on the SVC without seeing your target WWPN by using the
svcinfo lshbaportcandidate command, add the -force flag to your mkhost command, as
shown in Example 7-27. This option is more prone to human error than choosing the WWPN
from a list, but it is typically used when many host definitions are created at the same time,
such as through a script.
In this case, you can type the WWPN of your HBA or HBAs and use the -force flag to create
the host, regardless of whether they are connected, as shown in Example 7-27.
Example 7-27 mkhost -force
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Guinea -hbawwpn 210000E08B89C1DC
-force
Host, id [4], successfully created
This command forces the creation of a host called Guinea using WWPN
210000E08B89C1DC.
Note: WWPNs are not case sensitive in the CLI.
If you run the svcinfo lshost command again, you now see your host named Guinea under
host ID 4.
7.3.2 Creating an iSCSI-attached host
Now, we can create a host definition for a host that is not connected to the SAN but that has
LAN access to our SVC nodes. Before we create the host definition, we configure our SVC
clusters to use the new iSCSI connection method. You can find additional information about
configuring your nodes to use iSCSI in 7.7.4, “iSCSI configuration” on page 382.
The iSCSI functionality allows the host to access volumes through the SVC without being
attached to the SAN. Back-end storage and node-to-node communication still need the FC
network to communicate, but the host does not necessarily need to be connected to the SAN.
When we create a host that is going to use iSCSI as a communication method, iSCSI initiator
software must be installed on the host to initiate the communication between the SVC and the
host. This installation creates an iSCSI qualified name (IQN) identifier that is needed before
we create our host.
Before we start, we check our server’s IQN address. We are running Windows Server 2008.
We select Start → Programs → Administrative Tools, and we select iSCSI Initiator. In our
example, our IQN, as shown in Figure 7-1 on page 352, is:
iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
Figure 7-1 IQN from the iSCSI initiator tool
We create the host by issuing the mkhost command, as shown in Example 7-28. When the
command completes successfully, we display our newly created host.
It is important to know that when the host is initially configured, the default authentication
method is set to no authentication and no Challenge Handshake Authentication Protocol
(CHAP) secret is set. To set a CHAP secret for authenticating the iSCSI host with the SVC
cluster, use the svctask chhost command with the chapsecret parameter.
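For example, a minimal sketch of setting a CHAP secret (the secret string mysecret is a placeholder, and the host name assumes the Baldur host that we create next) is:
svctask chhost -chapsecret mysecret Baldur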
Example 7-28 mkhost command
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Baldur -iogrp 0 -iscsiname
iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
Host, id [4], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 0
state offline
We have now created our host definition. We map a VDisk to our new iSCSI server, as shown
in Example 7-29. We have already created the VDisk, as shown in 7.4.1, “Creating a VDisk”
on page 356. In our scenario, our VDisk has ID 21 and the host name is Baldur. We map it to
our iSCSI host.
Example 7-29 Mapping VDisk to iSCSI host
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Baldur 21
Virtual Disk to Host map, id [0], successfully created
After the VDisk has been mapped to the host, we display the host information again, as
shown in Example 7-30.
Example 7-30 svcinfo lshost
IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 1
state online
Note: FC hosts and iSCSI hosts are handled in the same way operationally after they have
been created.
If you need to display a CHAP secret for an already defined server, use the svcinfo
lsiscsiauth command.
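For example, assuming that iSCSI hosts are already defined on the cluster, a minimal invocation (output omitted here) is:
svcinfo lsiscsiauth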
7.3.3 Modifying a host
Use the svctask chhost command to change the name of a host. To verify the change, run
the svcinfo lshost command. Example 7-31 shows both of these commands.
Example 7-31 svctask chhost command
IBM_2145:ITSO-CLS1:admin>svctask chhost -name Angola Guinea
IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name   port_count iogrp_count
0  Palau  2          4
1  Nile   2          1
2  Kanaga 2          1
3  Siam   2          2
4  Angola 1          4
This command renamed the host from Guinea to Angola.
Note: The chhost command specifies the new name first. You can use letters A to Z and a
to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between
one and 15 characters in length. However, it cannot start with a number, dash, or the word
“host” (because this prefix is reserved for SVC assignment only).
Note: If you use Hewlett-Packard UNIX (HP-UX), you use the -type option. See the IBM
System Storage Open Software Family SAN Volume Controller: Host Attachment Guide,
SC26-7563, for more information about the hosts that require the -type parameter.
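For example, a sketch of setting the HP-UX host type (the type value hpux and the reuse of the Angola host name are assumptions for illustration; check the Host Attachment Guide for the exact value that your hosts require) is:
svctask chhost -type hpux Angola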
7.3.4 Deleting a host
Use the svctask rmhost command to delete a host from the SVC configuration. If your host is
still mapped to VDisks and you use the -force flag, the host and all of the mappings with it are
deleted. The VDisks are not deleted, only the mappings to them.
The command that is shown in Example 7-32 deletes the host called Angola from the SVC
configuration.
Example 7-32 svctask rmhost Angola
IBM_2145:ITSO-CLS1:admin>svctask rmhost Angola
Deleting a host: If there are any VDisks assigned to the host, you must use the -force flag,
for example: svctask rmhost -force Angola.
7.3.5 Adding ports to a defined host
If you add an HBA or a network interface controller (NIC) to a server that is already defined
within the SVC, you can use the svctask addhostport command to add the new port
definitions to your host configuration.
If your host is currently connected through SAN with FC and if the WWPN is already zoned to
the SVC cluster, issue the svcinfo lshbaportcandidate command, as shown in
Example 7-33, to compare with the information that you have from the server administrator.
Example 7-33 svcinfo lshbaportcandidate
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B054CAA
If the WWPN matches your information (use host or SAN switch utilities to verify), use the
svctask addhostport command to add the port to the host.
Example 7-34 shows the command to add a host port.
Example 7-34 svctask addhostport
IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA Palau
This command adds the WWPN of 210000E08B054CAA to the Palau host.
Adding multiple ports: You can add multiple ports all at one time by using a colon (:) as the
separator between WWPNs, for example:
svctask addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau
If the new HBA is not connected or zoned, the svcinfo lshbaportcandidate command does
not display your WWPN. In this case, you can manually type the WWPN of your HBA or HBAs
and use the -force flag to add the port, as shown in Example 7-35.
Example 7-35 svctask addhostport
IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA -force
Palau
This command forces the addition of the WWPN named 210000E08B054CAA to the host
called Palau.
WWPNs: WWPNs are not case sensitive within the CLI.
If you run the svcinfo lshost command again, you see your host with an updated port count
of 2 in Example 7-36.
Example 7-36 svcinfo lshost command: Port count
IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name       port_count iogrp_count
0  Palau      2          4
1  ITSO_W2008 1          4
2  Thor       3          1
3  Frigg      1          1
4  Baldur     1          1
If your host currently uses iSCSI as a connection method, you must have the new iSCSI IQN
ID before you add the port. Unlike FC-attached hosts, you cannot check for available
candidates with iSCSI.
After you have acquired the additional iSCSI IQN, use the svctask addhostport command,
as shown in Example 7-37.
Example 7-37 Adding an iSCSI port to an already configured host
IBM_2145:ITSO-CLS1:admin>svctask addhostport -iscsiname
iqn.1991-05.com.microsoft:baldur 4
7.3.6 Deleting ports
If you make a mistake when adding a port, or if you remove an HBA from a server that is
already defined within the SVC, you can use the svctask rmhostport command to remove
WWPN definitions from an existing host.
Before you remove the WWPN, be sure that it is the correct WWPN by issuing the svcinfo
lshost command, as shown in Example 7-38.
Example 7-38 svcinfo lshost command
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
WWPN 210000E08B89C1CD
node_logged_in_count 2
state offline
When you know the WWPN or iSCSI IQN, use the svctask rmhostport command to delete a
host port, as shown in Example 7-39.
Example 7-39 svctask rmhostport
For removing a WWPN:
IBM_2145:ITSO-CLS1:admin>svctask rmhostport -hbawwpn 210000E08B89C1CD Palau
For removing an iSCSI IQN:
IBM_2145:ITSO-CLS1:admin>svctask rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur
This command removes the WWPN of 210000E08B89C1CD from the Palau host and the
iSCSI IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.
Removing multiple ports: You can remove multiple ports at one time by using a colon (:) as
the separator between the port names, for example:
svctask rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola
7.4 Working with VDisks
This section details the various configuration and administration tasks that can be performed
on the VDisks within the SVC environment.
7.4.1 Creating a VDisk
The mkvdisk command creates sequential, striped, or image mode VDisk objects. When they
are mapped to a host object, these objects are seen as disk drives with which the host can
perform I/O operations.
When creating a VDisk, you must enter several parameters at the CLI. There are both
mandatory and optional parameters.
See the full command string and detailed information in the Command-Line Interface User’s
Guide, SC26-7903-05.
Creating an image mode disk: If you do not specify the -size parameter when you create
an image mode disk, the entire MDisk capacity is used.
When you are ready to create a VDisk, you must know the following information before you
start creating the VDisk:
- The MDG in which the VDisk will have its extents
- The I/O Group from which the VDisk will be accessed
- The size of the VDisk
- The name of the VDisk
When you are ready to create your striped VDisk, use the svctask mkvdisk command
(we discuss sequential and image mode VDisks later). In Example 7-40 on page 357, this
command creates a 10 GB striped VDisk with VDisk ID 0 within the MDG_DS47 MDG and
assigns it to the io_grp0 I/O Group.
Example 7-40 svctask mkvdisk command
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS47 -iogrp io_grp0 -size 10 -unit
gb -name Tiger
Virtual Disk, id [0], successfully created
To verify the results, you can use the svcinfo lsvdisk command, as shown in Example 7-41.
Example 7-41 svcinfo lsvdisk command
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 0
id 0
name Tiger
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F1000000000000000
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00MB
real_capacity 10.00MB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
You have completed the required tasks to create a VDisk.
7.4.2 VDisk information
Use the svcinfo lsvdisk command to display summary information about all VDisks defined
within the SVC environment. To display more detailed information about a specific VDisk, run
the command again and append the VDisk name parameter (for example, VDisk_D).
Example 7-42 shows both of these commands.
Example 7-42 svcinfo lsvdisk command
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC
_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk_A,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,,,60050768018301BF2800000000000008,0
,1
1,vdisk_B,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,
0,1
2,vdisk_C,1,io_grp1,online,0,MDG_DS45,40.0GB,striped,,,,,60050768018301BF2800000000000002,0
,1
3,vdisk_D,1,io_grp1,online,0,MDG_DS45,80.0GB,striped,,,,,60050768018301BF2800000000000003,0
,1
4,MM_DBLog_Pri,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,4,MMREL2,60050768018301BF280000
0000000004,0,1
5,MM_DB_Pri,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,5,MMREL1,60050768018301BF280000000
0000005,0,1
6,MM_App_Pri,1,io_grp1,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF280000000000000
6,0,1
7.4.3 Creating a Space-Efficient VDisk
Example 7-43 shows an example of creating a Space-Efficient VDisk. It is important to know
that, in addition to the normal parameters, you must use the following parameters:
-rsize: This parameter makes the VDisk space-efficient; otherwise, the VDisk is fully
allocated.
-autoexpand: This parameter specifies that space-efficient copies automatically expand
their real capacities by allocating new extents from their MDG.
-grainsize: This parameter sets the grain size (KB) for a Space-Efficient VDisk.
Example 7-43 Usage of the command svctask mkvdisk
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp 1 -vtype
striped -size 10 -unit gb -rsize 50% -autoexpand -grainsize 32
Virtual Disk, id [7], successfully created
This command creates a space-efficient 10 GB VDisk. The VDisk belongs to the MDG named
MDG_DS45 and is owned by the io_grp1 I/O Group. The real capacity automatically expands
until the VDisk size of 10 GB is reached. The grain size is set to 32 KB, which is the default.
Disk size: When using the -rsize parameter, you have the following options: disk_size,
disk_size_percentage, and auto.
- Specify the disk_size_percentage value using an integer, or an integer immediately
followed by the percent character (%).
- Specify the units for a disk_size integer using the -unit parameter; the default is MB.
- The -rsize value can be greater than, equal to, or less than the size of the VDisk.
- The auto option creates a VDisk copy that uses the entire size of the MDisk; if you
specify the -rsize auto option, you must also specify the -vtype image option (see the
sketch after this list).
- An entry of 1 GB uses 1,024 MB.
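As an illustrative sketch only (the MDG name, MDisk name, and VDisk name are placeholders), a space-efficient image mode VDisk that uses the auto option might be created as follows:
svctask mkvdisk -mdiskgrp MDG_Image -iogrp 0 -mdisk mdisk20 -vtype image -rsize auto -name SE_Image_A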
7.4.4 Creating a VDisk in image mode
This virtualization type allows image mode VDisks to be created when an MDisk already has
data on it, perhaps from a pre-virtualized subsystem. When an image mode VDisk is created,
it directly corresponds to the previously unmanaged MDisk from which it was created.
Therefore, with the exception of space-efficient image mode VDisks, VDisk logical block
address (LBA) x equals MDisk LBA x.
You can use this command to bring a non-virtualized disk under the control of the cluster.
After it is under the control of the cluster, you can migrate the VDisk from the single managed
disk. When it is migrated, the VDisk is no longer an image mode VDisk. You can add image
mode VDisks to an already populated MDG with other types of VDisks, such as a striped or
sequential VDisk.
Size: An image mode VDisk must be at least 512 bytes (the capacity cannot be 0). That is,
the minimum size that can be specified for an image mode VDisk must be the same as the
MDisk group extent size to which it is added, with a minimum of 16 MB.
You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The
-fmtdisk parameter cannot be used to create an image mode VDisk.
Capacity: If you create a mirrored VDisk from two image mode MDisks without specifying
a -capacity value, the capacity of the resulting VDisk is the smaller of the two MDisks, and
the remaining space on the larger MDisk is inaccessible.
If you do not specify the -size parameter when you create an image mode disk, the entire
MDisk capacity is used.
Use the svctask mkvdisk command to create an image mode VDisk, as shown in
Example 7-44.
Example 7-44 svctask mkvdisk (image mode)
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Image -iogrp 0 -mdisk
mdisk20 -vtype image -name Image_Vdisk_A
Virtual Disk, id [8], successfully created
This command creates an image mode VDisk called Image_Vdisk_A using the mdisk20
MDisk. The VDisk belongs to the MDG_Image MDG and is owned by the io_grp0 I/O Group.
If we run the svcinfo lsmdisk command again, notice that mdisk20 now has a mode of
image, as shown in Example 7-45.
Example 7-45 svcinfo lsmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam
e,UID
19,mdisk19,online,managed,1,MDG_DS47,36.0GB,0000000000000006,DS4700,600a0b800026b2
8200003f9f4851588700000000000000000000000000000000
20,mdisk20,online,image,2,MDG_Image,36.0GB,0000000000000007,DS4700,600a0b80002904d
e00004282485158aa00000000000000000000000000000000
7.4.5 Adding a mirrored VDisk copy
You can create a mirrored copy of a VDisk, which keeps a VDisk accessible even when the
MDisk on which it depends has become unavailable. You can create a copy of a VDisk either
on separate MDGs or by creating an image mode copy of the VDisk. Copies increase the
availability of data; however, they are not separate objects. You can only create or change
mirrored copies from the VDisk.
In addition, you can use VDisk Mirroring as an alternative method of migrating VDisks
between MDGs.
For example, if you have a non-mirrored VDisk in one MDG and want to migrate that VDisk to
another MDG, you can add a new copy of the VDisk and specify the second MDG. After the
copies are synchronized, you can delete the copy on the first MDG. The VDisk is migrated to
the second MDG while remaining online during the migration.
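For example, a sketch of that final step, assuming that copy 0 is the original copy in the first MDG and that the copies are already synchronized, is:
svctask rmvdiskcopy -copy 0 vdisk_C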
To create a mirrored copy of a VDisk, use the addvdiskcopy command. This command adds
a copy of the chosen VDisk to the selected MDG, which changes a non-mirrored VDisk into a
mirrored VDisk.
In the following scenario, we show creating a VDisk copy mirror from one MDG to another
MDG.
As you can see in Example 7-46, the VDisk has a copy with copy_id 0.
Example 7-46 svcinfo lsvdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_C
id 2
name vdisk_C
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
capacity 45.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000002
virtual_disk_throttling (MB) 20
preferred_node_id 3
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 12.00GB
free_capacity 12.00GB
overallocation 375
autoexpand off
warning 23
grainsize 32
In Example 7-47, we add the VDisk copy mirror by using the svctask addvdiskcopy
command.
Example 7-47 svctask addvdiskcopy
IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp MDG_DS45 -vtype striped
-rsize 20 -autoexpand -grainsize 64 -unit gb vdisk_C
Vdisk [2] copy [1] successfully created
During the synchronization process, you can see the status by using the svcinfo
lsvdisksyncprogress command. As shown in Example 7-48, the first time that the status is
checked, the synchronization progress is at 86%, and the estimated completion time is
19:16:54. The second time that the command is run, the progress status is at 100%, and the
synchronization is complete.
Example 7-48 Synchronization
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress -copy 1 vdisk_C
vdisk_id vdisk_name copy_id progress estimated_completion_time
2        vdisk_C    1       86       080710191654
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress -copy 1 vdisk_C
vdisk_id vdisk_name copy_id progress estimated_completion_time
2        vdisk_C    1       100
As you can see in Example 7-49, the new VDisk copy mirror (copy_id 1) has been added and
can be seen by using the svcinfo lsvdisk command.
Example 7-49 svcinfo lsvdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_C
id 2
name vdisk_C
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 45.0GB
type many
formatted no
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000002
virtual_disk_throttling (MB) 20
preferred_node_id 3
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 12.00GB
free_capacity 12.00GB
overallocation 375
autoexpand off
warning 23
grainsize 32
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.44MB
real_capacity 20.02GB
free_capacity 20.02GB
overallocation 224
autoexpand on
warning 80
grainsize 64
Notice that the new VDisk copy (copy_id 1) does not have the same values as the original
copy. When adding a VDisk copy, you can define it with parameters that differ from those of
the original copy. Therefore, you can define a Space-Efficient copy for a non-Space-Efficient
VDisk and vice versa, which is one way to migrate a non-Space-Efficient VDisk to a
Space-Efficient VDisk.
Note: To change the parameters of a VDisk copy mirror, you must delete the VDisk copy
mirror and redefine it with the new values.
7.4.6 Splitting a VDisk Copy
The splitvdiskcopy command creates a new VDisk in the specified I/O Group from a copy of
the specified VDisk. If the copy that you are splitting is not synchronized, you must use the
-force parameter. The command fails if you are attempting to remove the only synchronized
copy. To avoid this failure, wait for the copy to synchronize, or split the unsynchronized copy
from the VDisk by using the -force parameter. You can run the command when either VDisk
copy is offline.
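For example, a sketch of splitting a copy that has not yet synchronized (reusing the names from Example 7-50) is:
svctask splitvdiskcopy -copy 1 -iogrp 0 -force -name vdisk_N vdisk_B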
Example 7-50 shows the svctask splitvdiskcopy command, which is used to split a VDisk
copy. It creates a new vdisk_N from the copy of vdisk_B.
Example 7-50 Split VDisk
IBM_2145:ITSO-CLS1:admin>svctask splitvdiskcopy -copy 1 -iogrp 0 -name vdisk_N
vdisk_B
Virtual Disk, id [2], successfully created
As you can see in Example 7-51, the new VDisk, vdisk_N, has been created as an
independent VDisk.
Example 7-51 svcinfo lsvdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_N
id 2
name vdisk_N
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 100.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF280000000000002F
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 84.75MB
real_capacity 20.10GB
free_capacity 20.01GB
overallocation 497
autoexpand on
warning 80
grainsize 64
The vdisk_B VDisk has now lost its mirrored copy, which has become the new, independent
VDisk vdisk_N.
7.4.7 Modifying a VDisk
Executing the svctask chvdisk command will modify a single property of a VDisk. Only one
property can be modified at a time. So, changing the name and modifying the I/O Group
require two invocations of the command.
You can specify a new name or label. The new name can be used subsequently to reference
the VDisk. The I/O Group with which this VDisk is associated can be changed. Note that this
requires a flush of the cache within the nodes in the current I/O Group to ensure that all data
is written to disk. I/O must be suspended at the host level before performing this operation.
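For example, a sketch of renaming a VDisk and then moving it to another I/O Group (the VDisk names here are placeholders) therefore requires two invocations:
svctask chvdisk -name vdisk_E vdisk_D
svctask chvdisk -iogrp io_grp1 vdisk_E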
Tips: If the VDisk has a mapping to any hosts, it is not possible to move the VDisk to an I/O
Group that does not include any of those hosts.
This operation will fail if there is not enough space to allocate bitmaps for a mirrored VDisk
in the target I/O Group.
If the -force parameter is used and the cluster is unable to destage all write data from the
cache, the contents of the VDisk are corrupted by the loss of the cached data.
If the -force parameter is used to move a VDisk that has out-of-sync copies, a full
resynchronization is required.
7.4.8 I/O governing
You can set a limit on the amount of I/O transactions that is accepted for a VDisk. It is set in
terms of I/Os per second or MB per second. By default, no I/O governing rate is set when a
VDisk is created.
Base the choice between I/O and MB as the I/O governing throttle on the disk access profile
of the application. Database applications generally issue large amounts of I/O, but they only
transfer a relatively small amount of data. In this case, setting an I/O governing throttle based
on MBs per second does not achieve much. It is better to use an I/Os per second throttle.
At the other extreme, a streaming video application generally issues a small amount of I/O,
but transfers large amounts of data. In contrast to the database example, setting an I/O
governing throttle based on I/Os per second does not achieve much, so it is better to use an
MB per second throttle.
I/O governing rate: An I/O governing rate of 0 (displayed as throttling in the CLI output of
the svcinfo lsvdisk command) does not mean that zero I/Os per second (or MBs per
second) can be achieved. It means that no throttle is set.
An example of the chvdisk command is shown in Example 7-52.
Example 7-52 svctask chvdisk (rate/warning Space-Efficient VDisk)
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -rate 20 -unitmb vdisk_C
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -warning 85% vdisk7
New name: The chvdisk command specifies the new name first. The name can consist of
letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). It can be
between one and 15 characters in length. However, it cannot start with a number, the dash,
or the word “vdisk” (because this prefix is reserved for SVC assignment only).
The first command changes the I/O governing throttle of vdisk_C to 20 MBps, while the
second command changes the Space-Efficient VDisk warning threshold of vdisk7 to 85%.
If you want to verify the changes, issue the svcinfo lsvdisk command, as shown in
Example 7-53.
Example 7-53 svcinfo lsvdisk command: Verifying throttling
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk7
id 7
name vdisk7
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 10.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF280000000000000A
virtual_disk_throttling (MB) 20
preferred_node_id 6
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 5.02GB
free_capacity 5.02GB
overallocation 199
autoexpand on
warning 85
grainsize 32
7.4.9 Deleting a VDisk
When you execute the svctask rmvdisk command on an existing managed mode VDisk, any
data that remains on it is lost. The extents that made up this VDisk are returned to the pool of
free extents that are available in the MDG.
If any Remote Copy, FlashCopy, or host mappings still exist for this VDisk, the delete fails
unless the -force flag is specified. This flag ensures the deletion of the VDisk and any VDisk
to host mappings and copy mappings.
If the VDisk is currently the subject of a migrate to image mode, the delete fails unless the
-force flag is specified. This flag halts the migration and then deletes the VDisk.
If the command succeeds (without the -force flag) for an image mode disk, the underlying
back-end controller logical unit will be consistent with the data that a host might previously
have read from the image mode VDisk. That is, all fast write data has been flushed to the
underlying LUN. If the -force flag is used, there is no guarantee.
If there is any nondestaged data in the fast write cache for this VDisk, the deletion of the
VDisk fails unless the -force flag is specified. Now, any nondestaged data in the fast write
cache is deleted.
Use the svctask rmvdisk command to delete a VDisk from your SVC configuration, as shown
in Example 7-54.
Example 7-54 svctask rmvdisk
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk vdisk_A
This command deletes the vdisk_A VDisk from the SVC configuration. If the VDisk is
assigned to a host, you need to use the -force flag to delete the VDisk (Example 7-55).
Example 7-55 svctask rmvdisk (-force)
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk -force vdisk_A
7.4.10 Expanding a VDisk
Expanding a VDisk presents a larger capacity disk to your operating system. Although this
expansion can be easily performed using the SVC, you must ensure that your operating
systems support expansion before using this function.
Assuming your operating systems support it, you can use the svctask expandvdisksize
command to increase the capacity of a given VDisk.
Example 7-56 shows a sample of this command.
Example 7-56 svctask expandvdisksize
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb vdisk_C
This command expands the vdisk_C VDisk, which was 35 GB before, by another 5 GB to give
it a total size of 40 GB.
To expand a Space-Efficient VDisk, you can use the -rsize option, as shown in Example 7-57
on page 368. This command changes the real size of the vdisk_B VDisk to a real capacity of
55 GB. The capacity of the VDisk remains unchanged.
Example 7-57 svcinfo lsvdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_B
id 1
name vdisk_B
capacity 100.0GB
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 50.00GB
free_capacity 50.00GB
overallocation 200
autoexpand off
warning 40
grainsize 32
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -rsize 5 -unit gb vdisk_B
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_B
id 1
name vdisk_B
capacity 100.0GB
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 55.00GB
free_capacity 55.00GB
overallocation 181
autoexpand off
warning 40
grainsize 32
Important: If a VDisk is expanded, its type will become striped even if it was previously
sequential or in image mode. If there are not enough extents to expand your VDisk to the
specified size, you receive the following error message:
CMMVC5860E Ic_failed_vg_insufficient_virtual_extents
7.4.11 Assigning a VDisk to a host
Use the svctask mkvdiskhostmap command to map a VDisk to a host. When executed, this
command creates a new mapping between the VDisk and the specified host, which
essentially presents this VDisk to the host, as though the disk was directly attached to the
host. It is only after this command is executed that the host can perform I/O to the VDisk.
Optionally, a SCSI LUN ID can be assigned to the mapping.
When the HBA on the host scans for devices that are attached to it, it discovers all of the
VDisks that are mapped to its FC ports. When the devices are found, each one is allocated an
identifier (SCSI LUN ID).
For example, the first disk found is generally SCSI LUN 1, and so on. You can control the
order in which the HBA discovers VDisks by assigning the SCSI LUN ID as required. If you do
not specify a SCSI LUN ID, the cluster automatically assigns the next available SCSI LUN ID,
given any mappings that already exist with that host.
Using the VDisk and host definition that we created in the previous sections, we assign
VDisks to hosts that are ready for their use. We use the svctask mkvdiskhostmap command
(see Example 7-58).
Example 7-58 svctask mkvdiskhostmap
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Tiger vdisk_B
Virtual Disk to Host map, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Tiger vdisk_C
Virtual Disk to Host map, id [1], successfully created
This command assigns vdisk_B and vdisk_C to host Tiger as shown in Example 7-59.
Example 7-59 svcinfo lshostvdiskmap -delim, command
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim ,
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
1,Tiger,2,1,vdisk_B,210000E08B892BCD,60050768018301BF2800000000000001
1,Tiger,1,2,vdisk_C,210000E08B892BCD,60050768018301BF2800000000000002
Assigning a specific LUN ID to a VDisk: The optional -scsi scsi_num parameter can help
assign a specific LUN ID to a VDisk that is to be associated with a given host. The default
(if nothing is specified) is to increment based on what is already assigned to the host.
Be aware that certain HBA device drivers stop when they find a gap in the SCSI LUN IDs. For
example:
VDisk 1 is mapped to Host 1 with SCSI LUN ID 1.
VDisk 2 is mapped to Host 1 with SCSI LUN ID 2.
VDisk 3 is mapped to Host 1 with SCSI LUN ID 4.
When the device driver scans the HBA, it might stop after discovering VDisks 1 and 2,
because there is no SCSI LUN mapped with ID 3. Be careful to ensure that the SCSI LUN ID
allocation is contiguous.
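For example, a sketch of mapping a VDisk with an explicitly chosen SCSI LUN ID (the VDisk name and the ID value of 3 are assumptions for illustration) is:
svctask mkvdiskhostmap -host Tiger -scsi 3 vdisk_D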
It is not possible to map a VDisk to a host more than once, even at separate SCSI LUN IDs
(Example 7-60).
Example 7-60 svctask mkvdiskhostmap
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Siam vdisk_A
Virtual Disk to Host map, id [0], successfully created
This command maps the VDisk called vdisk_A to the host called Siam.
You have completed all of the tasks that are required to assign a VDisk to an attached host.
7.4.12 Showing VDisk-to-host mapping
Use the svcinfo lshostvdiskmap command to show which VDisks are assigned to a specific
host (Example 7-61 on page 370).
Example 7-61 svcinfo lshostvdiskmap
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,vdisk_A,210000E08B18FF8A,60050768018301BF280000000000000C
From this command, you can see that the host Siam has only one assigned VDisk called
vdisk_A. The SCSI LUN ID is also shown, which is the ID by which the VDisk is presented to
the host. If no host is specified, all defined host to VDisk mappings will be returned.
Specifying the flag before the host name: Although the -delim flag normally comes at
the end of the command string, in this case, you must specify this flag before the host
name. Otherwise, it returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or
incorrect argument sequence has been detected. Ensure that the input is as per
the help.
7.4.13 Deleting a VDisk-to-host mapping
When deleting a VDisk mapping, you are not deleting the volume itself, only the connection
from the host to the volume. If you mapped a VDisk to a host by mistake, or you simply want
to reassign the volume to another host, use the svctask rmvdiskhostmap command to unmap
a VDisk from a host (Example 7-62).
Example 7-62 svctask rmvdiskhostmap
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Tiger vdisk_D
This command unmaps the VDisk called vdisk_D from the host called Tiger.
7.4.14 Migrating a VDisk
From time to time, you might want to migrate VDisks from one set of MDisks to another set of
MDisks to decommission an old disk subsystem, to have better balanced performance across
your virtualized environment, or simply to migrate data into the SVC environment
transparently using image mode.
You can obtain further information about migration in Chapter 9, “Data migration” on
page 675.
Important: After migration is started, it continues until completion unless it is stopped or
suspended by an error condition or unless the VDisk being migrated is deleted.
As you can see from the parameters in Example 7-63, before you can migrate your VDisk,
you must know the name of the VDisk that you want to migrate and the name of the MDG to
which you want to migrate it. To discover these names, run the svcinfo lsvdisk and svcinfo
lsmdiskgrp commands.
When you know these details, you can issue the migratevdisk command, as shown in
Example 7-63.
Example 7-63 svctask migratevdisk
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -mdiskgrp MDG_DS47 -vdisk vdisk_C
This command moves vdisk_C to MDG_DS47.
Tips: If insufficient extents are available within your target MDG, you receive an error
message. Make sure that the source and target MDisk group have the same extent size.
The optional -threads parameter allows you to assign a priority to the migration process.
The default is 4, which is the highest priority setting. If you want the process to take a
lower priority relative to other types of I/O, you can specify 3, 2, or 1.
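As a sketch, a migration that runs at a lower priority, leaving more bandwidth for host I/O,
might be started like this:
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -mdiskgrp MDG_DS47 -threads 2 -vdisk vdisk_C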
You can run the svcinfo lsmigrate command at any time to see the status of the migration
process (Example 7-64).
Example 7-64 svcinfo lsmigrate command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 12
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 16
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
Progress: The progress is given as percent complete. When the svcinfo lsmigrate command
returns no further output, the migration process has finished.
7.4.15 Migrate a VDisk to an image mode VDisk
Migrating a VDisk to an image mode VDisk allows the SVC to be removed from the data path,
which might be useful where the SVC is used as a data mover appliance. You can use the
svctask migratetoimage command.
To migrate a VDisk to an image mode VDisk, the following rules apply:
The destination MDisk must be greater than or equal to the size of the VDisk.
The MDisk that is specified as the target must be in an unmanaged state.
Regardless of the mode in which the VDisk starts, it is reported as managed mode during
the migration.
Both of the MDisks involved are reported as being image mode during the migration.
If the migration is interrupted by a cluster recovery or by a cache problem, the migration
resumes after the recovery completes.
Example 7-65 shows an example of the command.
Example 7-65 svctask migratetoimage
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk vdisk_A -mdisk mdisk8
-mdiskgrp MDG_Image
In this example, the data from vdisk_A is migrated onto mdisk8, and the MDisk is placed
into the MDG named MDG_Image.
7.4.16 Shrinking a VDisk
The shrinkvdisksize command reduces the capacity that is allocated to the particular VDisk
by the amount that you specify. You cannot shrink the real size of a space-efficient volume to
less than its used size. All capacities, including changes, must be in multiples of 512 bytes.
An entire extent is reserved even if it is only partially used. The default capacity units are MB.
The command can be used to shrink the physical capacity that is allocated to a particular
VDisk by the specified amount. The command can also be used to shrink the virtual capacity
of a Space-Efficient VDisk without altering the physical capacity assigned to the VDisk:
For a non-Space-Efficient VDisk, use the -size parameter.
For a Space-Efficient VDisk real capacity, use the -rsize parameter.
For the Space-Efficient VDisk virtual capacity, use the -size parameter.
When the virtual capacity of a Space-Efficient VDisk is changed, the warning threshold is
automatically scaled to match. The new threshold is stored as a percentage.
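As an illustrative sketch (the Space-Efficient VDisk name SEV_A is hypothetical), reducing
the real capacity of a Space-Efficient VDisk by 10 GB might look like this:
IBM_2145:ITSO-CLS1:admin>svctask shrinkvdisksize -rsize 10 -unit gb SEV_A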
The cluster arbitrarily reduces the capacity of the VDisk by removing a partial extent, one
extent, or multiple extents from those extents that are allocated to the VDisk. You cannot
control which extents are removed, and so, you cannot assume that it is unused space that is
removed.
Reducing disk size: Image mode VDisks cannot be reduced in size. They must first be
migrated to Managed Mode. To run the shrinkvdisksize command on a mirrored VDisk,
all of the copies of the VDisk must be synchronized.
Important:
If the VDisk contains data, do not shrink the disk.
Certain operating systems or file systems use what they consider to be the outer edge
of the disk for performance reasons. This command can shrink FlashCopy target
VDisks to the same capacity as the source.
Before you shrink a VDisk, validate that the VDisk is not mapped to any host objects. If
the VDisk is mapped, data is displayed. You can determine the exact capacity of the
source or master VDisk by issuing the svcinfo lsvdisk -bytes vdiskname command.
Shrink the VDisk by the required amount by issuing the svctask shrinkvdisksize
-size disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id
command.
Assuming your operating system supports it, you can use the svctask shrinkvdisksize
command to decrease the capacity of a given VDisk.
Example 7-66 on page 373 shows an example of this command.
Example 7-66 svctask shrinkvdisksize
IBM_2145:ITSO-CLS1:admin>svctask shrinkvdisksize -size 44 -unit gb vdisk_A
This command shrinks the volume called vdisk_A from a total size of 80 GB, by 44 GB, to a
new total size of 36 GB.
7.4.17 Showing a VDisk on an MDisk
Use the svcinfo lsmdiskmember command to display information about the VDisks that use
space on a specific MDisk, as shown in Example 7-67.
Example 7-67 svcinfo lsmdiskmember command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskmember mdisk1
id copy_id
0  0
2  0
3  0
4  0
5  0
This command displays a list of all of the VDisk IDs that correspond to the VDisk copies that
use mdisk1.
To correlate the IDs displayed in this output to VDisk names, we can run the svcinfo lsvdisk
command, which we discuss in more detail in 7.4, “Working with VDisks” on page 356.
7.4.18 Showing VDisks using a managed disk group
Use the svcinfo lsvdisk -filtervalue command, as shown in Example 7-68, to see which
VDisks are part of a specific MDG. This command shows all of the VDisks that are part of the
MDG called MDG_DS47.
Example 7-68 svcinfo lsvdisk -filtervalue: VDisks in the MDG
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=MDG_DS47 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1
7.4.19 Showing which MDisks are used by a specific VDisk
Use the svcinfo lsvdiskmember command, as shown in Example 7-69, to show which
MDisks a specific VDisk’s extents are from.
Example 7-69 svcinfo lsvdiskmember command
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember vdisk_D
id
0
1
2
3
4
6
10
11
13
15
16
17
If you want to know more about these MDisks, you can run the svcinfo lsmdisk command,
as explained in 7.2, “Working with managed disks and disk controller systems” on page 340
(using the ID displayed in Example 7-69 rather than the name).
7.4.20 Showing from which Managed Disk Group a VDisk has its extents
Use the svcinfo lsvdisk command, as shown in Example 7-70, to show to which MDG a
specific VDisk belongs.
Example 7-70 svcinfo lsvdisk command: MDG name
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_D
id 3
name vdisk_D
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 80.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000003
throttling 0
preferred_node_id 6
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 80.00GB
real_capacity 80.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
If you want to know more about these MDGs, you can run the svcinfo lsmdiskgrp command,
as explained in 7.2.11, “Working with Managed Disk Groups” on page 346.
7.4.21 Showing the host to which the VDisk is mapped
To show the hosts to which a specific VDisk has been assigned, run the svcinfo
lsvdiskhostmap command, as shown in Example 7-71.
Example 7-71 svcinfo lsvdiskhostmap command
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskhostmap -delim , vdisk_B
id,name,SCSI_id,host_id,host_name,wwpn,vdisk_UID
1,vdisk_B,2,1,Nile,210000E08B892BCD,60050768018301BF2800000000000001
1,vdisk_B,2,1,Nile,210000E08B89B8C0,60050768018301BF2800000000000001
This command shows the host or hosts to which the vdisk_B VDisk was mapped. It is normal
to see duplicate entries, because there are multiple paths between the cluster and the
host. To ensure that the operating system on the host sees the disk only one time, you must
install and configure a multipath software application, such as the IBM Subsystem Device
Driver (SDD).
Specifying the -delim flag: Although the optional -delim flag normally comes at the end of
the command string, in this case, you must specify this flag before the VDisk name.
Otherwise, the command does not return any data.
7.4.22 Showing the VDisk to which the host is mapped
To show the VDisk to which a specific host has been assigned, run the svcinfo
lshostvdiskmap command, as shown in Example 7-72.
Example 7-72 lshostvdiskmap command example
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006
This command shows which VDisks are mapped to the host called Siam.
Specifying the -delim flag: Although the optional -delim flag normally comes at the end of
the command string, in this case, you must specify this flag before the host name.
Otherwise, the command does not return any data.
7.4.23 Tracing a VDisk from a host back to its physical disk
In many cases, you must verify exactly which physical disk is presented to the host, for
example, to determine from which MDG a specific volume comes. From the host side, the
server administrator cannot see on which physical disks the volumes reside. Instead,
perform the following steps, starting from your multipath command prompt:
1. On your host, run the datapath query device command. You see a long disk serial
number for each vpath device, as shown in Example 7-73.
Example 7-73 datapath query device
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path#    Adapter/Hard Disk              State    Mode      Select   Errors
    0    Scsi Port2 Bus0/Disk1 Part0    OPEN     NORMAL        20        0
    1    Scsi Port3 Bus0/Disk1 Part0    OPEN     NORMAL      2343        0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000004
============================================================================
Path#    Adapter/Hard Disk              State    Mode      Select   Errors
    0    Scsi Port2 Bus0/Disk2 Part0    OPEN     NORMAL      2335        0
    1    Scsi Port3 Bus0/Disk2 Part0    OPEN     NORMAL         0        0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000006
============================================================================
Path#    Adapter/Hard Disk              State    Mode      Select   Errors
    0    Scsi Port2 Bus0/Disk3 Part0    OPEN     NORMAL      2331        0
    1    Scsi Port3 Bus0/Disk3 Part0    OPEN     NORMAL         0        0
State: In Example 7-73, the state of each path is OPEN. Sometimes, you will see the state
CLOSED, which does not necessarily indicate a problem, because it might be a result of the
path’s processing stage.
2. Run the svcinfo lshostvdiskmap command to return a list of all assigned VDisks
(Example 7-74).
Example 7-74 svcinfo lshostvdiskmap
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006
Look for the disk serial number that matches your datapath query device output. This
host was defined in our SVC as Siam.
3. Run the svcinfo lsvdiskmember vdiskname command for a list of the MDisk or MDisks
that make up the specified VDisk (Example 7-75).
Example 7-75 svcinfo lsvdiskmember
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember MM_DBLog_Pri
id
0
1
2
3
4
10
11
13
15
16
17
4. Query the MDisks with the svcinfo lsmdisk mdiskID command to find their controller and
LUN number information, as shown in Example 7-76. The output displays the controller name
and the controller LUN ID, which help you (provided that you gave your controller a unique
name, such as its serial number) to track back to a LUN within the disk subsystem.
Example 7-76 svcinfo lsmdisk command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 3
id 3
name mdisk3
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 36.0GB
quorum_index
block_size 512
controller_name DS4500
ctrl_type 4
ctrl_WWNN 200400A0B8174431
controller_id 0
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000003
UID 600a0b8000174431000000e44713575400000000000000000000000000000000
preferred_WWPN 200400A0B8174433
active_WWPN 200400A0B8174433
7.5 Scripting under the CLI for SVC task automation
Scripting is well suited to automating regular operational jobs, and you can use any
available shell to develop scripts. To run scripts on an SVC Console where the operating
system is Windows 2000 or later, you can either purchase licensed shell emulation software
or download Cygwin from this Web site:
http://www.cygwin.com
Scripting enhances the productivity of SVC administrators and the integration of their storage
virtualization environment.
We show an example of scripting in Appendix A, “Scripting” on page 785.
You can create your own customized scripts to automate a large number of tasks for
completion at a variety of times and run them through the CLI.
We recommend that, in large SAN environments where scripting with svctask commands is
used, you keep the scripts as simple as possible. In a large SAN environment, it is harder
to manage fallback, to maintain documentation, and to verify that a script is correct prior
to execution.
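As a minimal sketch of this principle, the following shell script reports the VDisk
mappings for every defined host. It assumes SSH key authentication to the cluster as the
admin user; the cluster IP address is taken from our scenario and must be adapted to your
environment:

#!/bin/sh
# Minimal sketch: report the VDisk mappings for every defined host.
# Assumes SSH key authentication as user admin; the cluster IP is illustrative.
SVC=9.64.210.64
for host in $(ssh admin@$SVC "svcinfo lshost -nohdr -delim :" | cut -d: -f2)
do
    echo "=== VDisks mapped to $host ==="
    ssh admin@$SVC "svcinfo lshostvdiskmap -delim : $host"
done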
7.6 SVC advanced operations using the CLI
In the following topics, we describe the commands that we think best represent advanced
operational commands.
7.6.1 Command syntax
Two major command sets are available:
The svcinfo command set allows us to query the various components within the SVC
environment.
The svctask command set allows us to make changes to the various components within
the SVC.
When the command syntax is shown, you see several parameters in square brackets, for
example, [parameter], which indicates that the parameter is optional in most if not all
instances. Any parameter that is not in square brackets is required information. You can view
the syntax of a command by entering one of the following commands:
svcinfo -?: Shows a complete list of information commands.
svctask -?: Shows a complete list of task commands.
svcinfo commandname -?: Shows the syntax of information commands.
svctask commandname -?: Shows the syntax of task commands.
svcinfo commandname -filtervalue?: Shows which filters you can use to reduce the
output of the information commands.
Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname
-h.
If you look at the syntax of a command by typing svcinfo commandname -?, you often see
-filter listed as a parameter. Be aware that the correct parameter is -filtervalue.
Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were recently issued. Then, you can use the left and right, backspace, and delete keys to
edit commands before you resubmit them.
7.6.2 Organizing window content
Sometimes the output of a command can be long and difficult to read in the window. In cases
where you need information about a subset of the total number of available items, you can
use filtering to reduce the output to a more manageable size.
Filtering
To reduce the output that is displayed by an svcinfo command, you can specify a number of
filters, depending on which svcinfo command you are running. To see which filters are
available, type the command followed by the -filtervalue? flag, as shown in Example 7-77.
Example 7-77 svcinfo lsvdisk -filtervalue? command
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue?
Filters for this view are :
name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id
vdisk_UID
fc_map_count
copy_count
When you know the filters, you can be more selective in generating output:
Multiple filters can be combined to create specific searches.
You can use an asterisk (*) as a wildcard when using names.
When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb.
For example, if we issue the svcinfo lsvdisk command with no filters, we see the output that
is shown in Example 7-78 on page 380.
Example 7-78 svcinfo lsvdisk command: No filters
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,typ
e,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000
000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF280000000
0000001,0,1
2,vdisk2,1,io_grp1,online,0,MDG_DS45,40.0GB,striped,,,,,60050768018301BF2800000000
000002,0,1
3,vdisk3,1,io_grp1,online,0,MDG_DS45,80.0GB,striped,,,,,60050768018301BF2800000000
000003,0,1
Tip: The -delim parameter condenses the output and separates the data fields with the
specified delimiter character, as opposed to wrapping text over multiple lines. It is
normally used in cases where you need to obtain reports during script execution.
If we now add a filter (such as mdisk_grp_name) to our svcinfo command, we can reduce the
output, as shown in Example 7-79.
Example 7-79 svcinfo lsvdisk command: With a filter
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=*7 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type
,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000
000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF280000000
0000001,0,1
This command shows all of the VDisks whose mdisk_grp_name ends with a 7. You can use the
wildcard asterisk character (*) when names are used.
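Similarly, when filtering on capacity, the units must be supplied with -u; a sketch of such
a query (the capacity value is illustrative) follows:
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue capacity=100 -u gb -delim ,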
7.7 Managing the cluster using the CLI
In these sections, we show cluster administration.
7.7.1 Viewing cluster properties
Use the svcinfo lscluster command to display summary information about all of the
clusters that are configured to the SVC, as shown in Example 7-80 on page 381.
Example 7-80 svcinfo lscluster command
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                               000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   fully_configured 20        0000020063E03A38
0000020061006FCA ITSO-CLS2 remote   fully_configured 50        0000020061006FCA
7.7.2 Changing cluster settings
Use the svctask chcluster command to change the settings of the cluster. This command
modifies the specific features of a cluster. You can change multiple features by issuing a
single command.
If the cluster IP address is changed, the open command-line shell closes during the
processing of the command. You must reconnect to the new IP address. The service IP
address is not used until a node is removed from the cluster. If this node cannot rejoin the
cluster, you can bring the node up in service mode. In this mode, the node can be accessed
as a stand-alone node using the service IP address.
All command parameters are optional; however, you must specify at least one parameter.
Note: Only a user with administrator authority can change the password.
After the cluster IP address is changed, you lose the open shell connection to the cluster.
You must reconnect with the newly specified IP address.
Important: Changing the speed on a running cluster breaks I/O service to the attached
hosts. Before changing the fabric speed, stop I/O from the active hosts and force these
hosts to flush any cached data by unmounting volumes (for UNIX host types) or by
removing drive letters (for Windows host types). Specific hosts might need to be rebooted
to detect the new fabric speed. The fabric speed setting applies only to the 4F2 and 8F2
model nodes in a cluster. The 8F4 nodes automatically negotiate the fabric speed on a
per-port basis.
7.7.3 Cluster authentication
An important point with respect to authentication in SVC 5.1 is that the superuser password
replaces the previous cluster admin password. The superuser is a member of the
SecurityAdmin user group. If this password is not known, you can reset it from the cluster
front panel.
We describe the authentication method in detail in Chapter 2, “IBM System Storage SAN
Volume Controller” on page 7.
Tip: If you do not want the password to display as you enter it on the command line, omit
the new password. The command line then prompts you to enter and confirm the password
without the password being displayed.
The only authentication that can be changed from the chcluster command is the Service
account user password, and to be able to change that, you need to have administrative rights.
The Service account user password is changed in Example 7-81.
Example 7-81 svctask chcluster -servicepwd (for the Service account)
IBM_2145:ITSO-CLS1:admin>svctask chcluster -servicepwd
Enter a value for -password :
Enter password:
Confirm password:
IBM_2145:ITSO-CLS1:admin>
See 7.10.1, “Managing users using the CLI” on page 394 for more information about
managing users.
7.7.4 iSCSI configuration
Starting with SVC 5.1, iSCSI is introduced as a supported method of communication between
the SVC and hosts. All back-end storage and intracluster communication still uses FC and the
SAN, so iSCSI cannot be used for that communication.
In Chapter 2, “IBM System Storage SAN Volume Controller” on page 7, we described in detail
how iSCSI works. In this section, we show how to configure our cluster for use with iSCSI.
We will configure our nodes to use both the primary and secondary Ethernet ports for iSCSI;
the primary port also carries the cluster IP address. Configuring the nodes for iSCSI does
not affect the cluster IP address, which is changed as shown in 7.7.2, “Changing cluster
settings” on page 381.
It is important to know that the relationship between IP addresses and physical connections
is not limited to one to one. Up to four IP addresses (two IPv4 plus two IPv6) can be
assigned to one physical connection per port per node, which is a four to one (4:1)
relationship.
We describe this function in Chapter 2, “IBM System Storage SAN Volume Controller” on
page 7.
Tip: When reconfiguring IP ports, be aware that already configured iSCSI connections will
need to reconnect if changes are made to the IP addresses of the nodes.
There are two ways to perform iSCSI authentication or CHAP, either for the whole cluster or
per host connection. Example 7-82 shows configuring CHAP for the whole cluster.
Example 7-82 Setting a CHAP secret for the entire cluster to “passw0rd”
IBM_2145:ITSO-CLS1:admin>svctask chcluster -iscsiauthmethod chap -chapsecret
passw0rd
IBM_2145:ITSO-CLS1:admin>
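Alternatively, CHAP can be configured for a single host connection rather than for the
whole cluster; a sketch of such a command (reusing our host Tiger and an illustrative
secret) might look like this:
IBM_2145:ITSO-CLS1:admin>svctask chhost -chapsecret passw0rd Tiger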
In our scenario, we have our cluster IP of 9.64.210.64, which is not affected during our
configuration of the node’s IP addresses.
We start by listing our ports using the svcinfo lsportip command. We can see that we have
two ports per node with which to work. Both ports can have two IP addresses that can be
used for iSCSI.
In our example, we configure the secondary port in both nodes in our I/O Group, which is
shown in Example 7-83.
Example 7-83 Configuring secondary Ethernet port on SVC nodes
IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask
255.255.255.0 2
IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask
255.255.255.0 2
While both nodes are online, each node will be available to iSCSI hosts on the IP address that
we have configured. Because iSCSI failover between nodes is enabled automatically, if a
node goes offline for any reason, its partner node in the I/O group will become available on
the failed node’s port IP address, ensuring that hosts will continue to be able to perform I/O.
The svcinfo lsportip command will display which port IP addresses are currently active on
each node.
7.7.5 Modifying IP addresses
Starting with SVC 5.1, we can use both IP ports of the nodes. However, the first time that you
configure a second port, all IP information is required, because port 1 on the cluster must
always have one stack fully configured.
There are now two active cluster ports on the configuration node. If the cluster IP address is
changed, the open command-line shell closes during the processing of the command. You
must reconnect to the new IP address if connected through that port.
List the IP address of the cluster by issuing the svcinfo lsclusterip command. Modify the IP
address by issuing the svctask chclusterip command. You can either specify a static IP
address or have the system assign a dynamic IP address, as shown in Example 7-84.
Example 7-84 svctask chclusterip -clusterip
IBM_2145:ITSO-CLS1:admin>svctask chclusterip -clusterip 10.20.133.5 -gw
10.20.135.1 -mask 255.255.255.0 -port 1
This command changes the current IP address of the cluster to 10.20.133.5.
Important: If you specify a new cluster IP address, the existing communication with the
cluster through the CLI is broken and the PuTTY application automatically closes. You
must relaunch the PuTTY application and point to the new IP address, but your SSH key
will still work.
7.7.6 Supported IP address formats
Table 7-1 on page 384 shows the IP address formats.
Table 7-1 ip_address_list formats

IP type                                             ip_address_list format
IPv4 (no port set, SVC uses default)                1.2.3.4
IPv4 with specific port                             1.2.3.4:22
Full IPv6, default port                             1234:1234:0001:0123:1234:1234:1234:1234
Full IPv6, default port, leading zeros suppressed   1234:1234:1:123:1234:1234:1234:1234
Full IPv6 with port                                 [2002:914:fc12:848:209:6bff:fe8c:4ff6]:23
Zero-compressed IPv6, default port                  2002::4ff6
Zero-compressed IPv6 with port                      [2002::4ff6]:23
We have completed the tasks that are required to change the IP addresses (cluster and
service) of the SVC environment.
7.7.7 Setting the cluster time zone and time
Use the -timezone parameter to specify the numeric ID of the time zone that you want to set.
Issue the svcinfo lstimezones command to list the time zones that are available on the
cluster; this command displays a list of valid time zone settings.
Tip: If you have changed the time zone, you must clear the error log dump directory before
you can view the error log through the Web application.
Setting the cluster time zone
Perform the following steps to set the cluster time zone and time:
1. Find out for which time zone your cluster is currently configured. Enter the svcinfo
showtimezone command, as shown in Example 7-85.
Example 7-85 svcinfo showtimezone command
IBM_2145:ITSO-CLS1:admin>svcinfo showtimezone
id  timezone
522 UTC
2. To find the time zone code that is associated with your time zone, enter the svcinfo
lstimezones command, as shown in Example 7-86. A truncated list is provided for this
example. If this setting is correct (for example, 522 UTC), you can go to Step 4. If not,
continue with Step 3.
Example 7-86 svcinfo lstimezones command
IBM_2145:ITSO-CLS1:admin>svcinfo lstimezones
id  timezone
.
.
507 Turkey
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
.
.
3. Now that you know which time zone code is correct for you, set the time zone by issuing
the svctask settimezone (Example 7-87) command.
Example 7-87 svctask settimezone command
IBM_2145:ITSO-CLS1:admin>svctask settimezone -timezone 520
4. Set the cluster time by issuing the svctask setclustertime command (Example 7-88).
Example 7-88 svctask setclustertime command
IBM_2145:ITSO-CLS1:admin>svctask setclustertime -time 061718402008
The format of the time is MMDDHHmmYYYY.
You have completed the necessary tasks to set the cluster time zone and time.
7.7.8 Start statistics collection
Statistics are collected at the end of each sampling period (as specified by the -interval
parameter). These statistics are written to a file. A new file is created at the end of each
sampling period. Separate files are created for MDisks, VDisks, and node statistics.
Use the svctask startstats command to start the collection of statistics, as shown in
Example 7-89.
Example 7-89 svctask startstats command
IBM_2145:ITSO-CLS1:admin>svctask startstats -interval 15
The interval that we specify (minimum 1, maximum 60) is in minutes. This command starts
statistics collection and gathers data at 15-minute intervals.
Statistics collection: To verify that statistics collection is set, display the cluster properties
again, as shown in Example 7-90.
Example 7-90 Statistics collection status and frequency
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
statistics_status on
statistics_frequency 15
-- Note that the output has been shortened for easier reading. --
We have completed the required tasks to start statistics collection on the cluster.
7.7.9 Stopping a statistics collection
Use the svctask stopstats command to stop the collection of statistics within the cluster
(Example 7-91).
Example 7-91 svctask stopstats command
IBM_2145:ITSO-CLS1:admin>svctask stopstats
This command stops the statistics collection. Do not expect any prompt message from this
command.
To verify that the statistics collection is stopped, display the cluster properties again, as
shown in Example 7-92.
Example 7-92 Statistics collection status and frequency
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
statistics_status off
statistics_frequency 15
-- Note that the output has been shortened for easier reading. --
Notice that the interval parameter is not changed, but the status is now off. We have
completed the required tasks to stop statistics collection on our cluster.
7.7.10 Status of copy operation
Use the svcinfo lscopystatus command, as shown in Example 7-93, to determine if a file
copy operation is in progress. Only one file copy operation can be performed at a time. The
output of this command is a status of active or inactive.
Example 7-93 lscopystatus command
IBM_2145:ITSO-CLS1:admin>svcinfo lscopystatus
status
inactive
7.7.11 Shutting down a cluster
If all input power to an SVC cluster is to be removed for more than a few minutes (for example,
if the machine room power is to be shut down for maintenance), it is important to shut down
the cluster before removing the power. If the input power is removed from the uninterruptible
power supply units without first shutting down the cluster and the uninterruptible power supply
units, the uninterruptible power supply units remain operational and eventually become
drained of power.
When input power is restored to the uninterruptible power supply units, they start to recharge.
However, the SVC does not permit any I/O activity to be performed to the VDisks until the
uninterruptible power supply units are charged enough to enable all of the data on the SVC
nodes to be destaged in the event of a subsequent unexpected power loss. Recharging the
uninterruptible power supply can take as long as two hours.
Shutting down the cluster prior to removing input power to the uninterruptible power supply
units prevents the battery power from being drained. It also makes it possible for I/O activity to
be resumed as soon as input power is restored.
You can use the following procedure to shut down the cluster:
1. Use the svctask stopcluster command to shut down your SVC cluster (Example 7-94).
Example 7-94 svctask stopcluster
IBM_2145:ITSO-CLS1:admin>svctask stopcluster
Are you sure that you want to continue with the shut down?
This command shuts down the SVC cluster. All data is flushed to disk before the power is
removed. At this point, you lose administrative contact with your cluster, and the PuTTY
application automatically closes.
2. You will be presented with the following message:
Warning: Are you sure that you want to continue with the shut down?
Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy)
relationships, data migration operations, and forced deletions before continuing. Entering
y executes the command, and entering anything other than y(es) or Y(ES) results in the
command not executing. In either case, no feedback is displayed.
Important: Before shutting down a cluster, ensure that all I/O operations are stopped
that are destined for this cluster, because you will lose all access to all VDisks being
provided by this cluster. Failure to do so can result in failed I/O operations being
reported to the host operating systems.
Begin the process of quiescing all I/O to the cluster by stopping the applications on the
hosts that are using the VDisks that are provided by the cluster.
3. We have completed the tasks that are required to shut down the cluster. To shut down the
uninterruptible power supply units, press the power on button on the front panel of each
uninterruptible power supply unit.
Restarting the cluster: To restart the cluster, you must first restart the uninterruptible
power supply units by pressing the power button on their front panels. Then, press the
power on button on the service panel of one of the nodes within the cluster. After the
node is fully booted up (for example, displaying Cluster: on line 1 and the cluster name
on line 2 of the panel), you can start the other nodes in the same way.
As soon as all of the nodes are fully booted, you can reestablish administrative contact
using PuTTY, and your cluster is fully operational again.
7.8 Nodes
This section details the tasks that can be performed at an individual node level.
7.8.1 Viewing node details
Use the svcinfo lsnode command to view the summary information about the nodes that are
defined within the SVC environment. To view more details about a specific node, append the
node name (for example, SVCNode_1) to the command.
Example 7-95 shows both of these commands.
Tip: The -delim parameter truncates the content in the window and separates data fields
with colons (:) as opposed to wrapping text over multiple lines.
Example 7-95 svcinfo lsnode command
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_un
ique_id,hardware
1,node1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,node2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,node3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
4,node4,100066C108,50050768010027E2,online,1,io_grp1,no,20400001864C1008,8G4
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode node1
id 1
name node1
UPS_serial_number 1000739007
WWNN 50050768010037E5
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node yes
UPS_unique_id 20400001C3240007
port_id 50050768014037E5
port_status active
port_speed 4Gb
port_id 50050768013037E5
port_status active
port_speed 4Gb
port_id 50050768011037E5
port_status active
port_speed 4Gb
port_id 50050768012037E5
port_status active
port_speed 4Gb
hardware 8G4
7.8.2 Adding a node
After cluster creation is completed through the service panel (the front panel of one of the
SVC nodes) and cluster Web interface, only one node (the configuration node) is set up.
To have a fully functional SVC cluster, you must add a second node to the configuration.
To add a node to a cluster, gather the necessary information, as explained in these steps:
Before you can add a node, you must know which unconfigured nodes you have as
“candidates”. Issue the svcinfo lsnodecandidate command (Example 7-96).
You must specify to which I/O Group you are adding the node. If you enter the svcinfo
lsnode command, you can easily identify the I/O Group ID of the group to which you are
adding your node, as shown in Example 7-97.
Example 7-96 svcinfo lsnodecandidate command
IBM_2145:ITSO-CLS1:admin>svcinfo lsnodecandidate
id               panel_name UPS_serial_number UPS_unique_id    hardware
50050768010027E2 108283     100066C108        20400001864C1008 8G4
50050768010037DC 104603     1000739004        20400001C3240004 8G4
Tip: The node that you want to add must have a separate uninterruptible power supply unit
serial number from the uninterruptible power supply unit on the first node.
Example 7-97 svcinfo lsnode command
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_un
ique_id,hardware,iscsi_name,iscsi_alias
1,ITSO_CLS1_0,100089J040,50050768010059E7,online,0,io_grp0,yes,2040000209680100,8G
4,iqn.1986-03.com.ibm:2145.ITSO_CLS1_0.ITSO_CLS1_0_N0,
Now that we know the available nodes, we can use the svctask addnode command to add the
node to the SVC cluster configuration.
Example 7-98 shows the command to add a node to the SVC cluster.
Example 7-98 svctask addnode (wwnodename) command
IBM_2145:ITSO-CLS1:admin>svctask addnode -wwnodename 50050768010027E2 -name Node2
-iogrp io_grp0
Node, id [2], successfully added
This command adds the candidate node with the wwnodename of 50050768010027E2 to the
I/O Group called io_grp0.
We used the -wwnodename parameter (50050768010027E2). However, we can also use the
-panelname parameter (108283) instead (Example 7-99). If you are standing in front of the
node, it is easier to read the panel name than it is to get the WWNN.
Example 7-99 svctask addnode (panelname) command
IBM_2145:ITSO-CLS1:admin>svctask addnode -panelname 108283 -name Node2 -iogrp
io_grp0
We also used the optional -name parameter (Node2). If you do not provide the -name
parameter, the SVC automatically generates the name nodex (where x is the ID sequence
number that is assigned internally by the SVC).
Name: If you want to provide a name, you can use letters A to Z and a to z, numbers 0 to
9, the dash (-), and the underscore (_). The name can be between one and 15 characters
in length. However, the name cannot start with a number, dash, or the word “node”
(because this prefix is reserved for SVC assignment only).
If the svctask addnode command returns no information, check that the second node is
powered on and that the zones are correctly defined. Preexisting cluster configuration data
that is stored in the node can also prevent it from being added. If you are sure that this
node is not part of another active SVC cluster, you can use the service panel to delete the
existing cluster information. After this action is complete, reissue the svcinfo
lsnodecandidate command, and you will see the node listed.
7.8.3 Renaming a node
Use the svctask chnode command to rename a node within the SVC cluster configuration
(Example 7-100).
Example 7-100 svctask chnode -name command
IBM_2145:ITSO-CLS1:admin>svctask chnode -name ITSO_CLS1_Node1 4
This command renames node ID 4 to ITSO_CLS1_Node1.
Name: The chnode command specifies the new name first. You can use letters A to Z and
a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one
and 15 characters in length. However, the name cannot start with a number, dash, or the
word “node” (because this prefix is reserved for SVC assignment only).
7.8.4 Deleting a node
Use the svctask rmnode command to remove a node from the SVC cluster configuration
(Example 7-101).
Example 7-101 svctask rmnode command
IBM_2145:ITSO-CLS1:admin>svctask rmnode node4
This command removes node4 from the SVC cluster.
Because node4 was also the configuration node, the SVC transfers the configuration node
responsibilities to a surviving node within the I/O Group. Unfortunately, the PuTTY session
cannot be dynamically passed to the surviving node. Therefore, the PuTTY application loses
communication and closes automatically.
We must restart the PuTTY application to establish a secure session with the new
configuration node.
Important: If this node is the last node in an I/O Group, and there are VDisks still assigned
to the I/O Group, the node is not deleted from the cluster.
If this node is the last node in the cluster, and the I/O Group has no VDisks remaining, the
cluster is destroyed and all virtualization information is lost. Any data that is still required
must be backed up or migrated prior to destroying the cluster.
7.8.5 Shutting down a node
On occasion, it can be necessary to shut down a single node within the cluster to perform
tasks, such as scheduled maintenance, while leaving the SVC environment up and running.
Use the svctask stopcluster -node command, as shown in Example 7-102 on page 391, to
shut down a single node.
Example 7-102 svctask stopcluster -node command
IBM_2145:ITSO-CLS1:admin>svctask stopcluster -node n4
Are you sure that you want to continue with the shut down?
This command shuts down node n4 in a graceful manner. When this node has been shut
down, the other node in the I/O Group will destage the contents of its cache and will go into
write-through mode until the node is powered up and rejoins the cluster.
Important: There is no need to stop FlashCopy mappings, Remote Copy relationships,
and data migration operations. The other node in the I/O Group will handle these
activities, but be aware that this node is now a single point of failure.
If this is the last node in an I/O Group, all access to the VDisks in the I/O Group will be
lost. Verify that shutting down this node is really what you want to do before executing
this command; in that case, you must specify the -force flag.
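As a sketch, a forced shutdown of the last node in an I/O Group, acceptable only when the
resulting loss of VDisk access has been planned for, might look like this:
IBM_2145:ITSO-CLS1:admin>svctask stopcluster -force -node n4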
By reissuing the svcinfo lsnode command (Example 7-103), we can see that the node is
now offline.
Example 7-103 svcinfo lsnode command
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_un
ique_id,hardware
1,n1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,n2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,n3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
6,n4,100066C108,0000000000000000,offline,1,io_grp1,no,20400001864C1008,unknown
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode n4
CMMVC5782E The object specified is offline.
Restart: To restart the node manually, press the power on button from the service panel of
the node.
We have completed the tasks that are required to view, add, delete, rename, and shut down a
node within an SVC environment.
7.9 I/O Groups
This section explains the tasks that you can perform at an I/O Group level.
7.9.1 Viewing I/O Group details
Use the svcinfo lsiogrp command, as shown in Example 7-104 on page 392, to view
information about the I/O Groups that are defined within the SVC environment.
Example 7-104 I/O Group details
IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrp
id name            node_count vdisk_count host_count
0  io_grp0         2          3           3
1  io_grp1         2          4           3
2  io_grp2         0          0           2
3  io_grp3         0          0           2
4  recovery_io_grp 0          0           0
As we can see, the SVC predefines five I/O Groups. In a four-node cluster (such as our
example), only two I/O Groups are actually in use. The other I/O Groups (io_grp2 and
io_grp3) are for a six-node or eight-node cluster.
The recovery I/O Group is a temporary home for VDisks when all nodes in the I/O Group that
normally owns them have suffered multiple failures. This design allows us to move the VDisks
to the recovery I/O Group and, then, into a working I/O Group. Of course, while temporarily
assigned to the recovery I/O Group, I/O access is not possible.
7.9.2 Renaming an I/O Group
Use the svctask chiogrp command to rename an I/O Group (Example 7-105).
Example 7-105 svctask chiogrp command
IBM_2145:ITSO-CLS1:admin>svctask chiogrp -name io_grpA io_grp1
This command renames the I/O Group io_grp1 to io_grpA.
Name: The chiogrp command specifies the new name first.
If you want to provide a name, you can use letters A to Z, letters a to z, numbers 0 to 9, the
dash (-), and the underscore (_). The name can be between one and 15 characters in
length. However, the name cannot start with a number, dash, or the word “iogrp” (because
this prefix is reserved for SVC assignment only).
To see whether the renaming was successful, issue the svcinfo lsiogrp command again to
see the change.
We have completed the tasks that are required to rename an I/O Group.
7.9.3 Adding and removing hostiogrp
To map or unmap a specific host object to a specific I/O Group to reach the maximum number
of hosts supported by an SVC cluster, use the svctask addhostiogrp command to map a
specific host to a specific I/O Group, as shown in Example 7-106 on page 393.
Example 7-106 svctask addhostiogrp command
IBM_2145:ITSO-CLS1:admin>svctask addhostiogrp -iogrp 1 Kanaga
Parameters:
-iogrp iogrp_list -iogrpall
Specifies a list of one or more I/O Groups that must be mapped to the host. This
parameter is mutually exclusive with -iogrpall. The -iogrpall option specifies that all the I/O
Groups must be mapped to the specified host. This parameter is mutually exclusive with
-iogrp.
-host host_id_or_name
Identify the host either by ID or name to which the I/O Groups must be mapped.
Use the svctask rmhostiogrp command to unmap a specific host from a specific I/O Group, as
shown in Example 7-107.
Example 7-107 svctask rmhostiogrp command
IBM_2145:ITSO-CLS1:admin>svctask rmhostiogrp -iogrp 0 Kanaga
Parameters:
-iogrp iogrp_list -iogrpall
Specifies a list of one or more I/O Groups that must be unmapped from the host. This
parameter is mutually exclusive with -iogrpall. The -iogrpall option specifies that all of
the I/O Groups must be unmapped from the specified host. This parameter is mutually
exclusive with -iogrp.
-force
If the removal of a host to I/O Group mapping will result in the loss of VDisk to host
mappings, the command fails if the -force flag is not used. The -force flag, however,
overrides this behavior and forces the deletion of the host to I/O Group mapping.
host_id_or_name
Identify the host, either by ID or name, from which the I/O Groups must be unmapped.
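As a sketch, a forced removal that also deletes the affected VDisk-to-host mappings (use
with care) might look like this:
IBM_2145:ITSO-CLS1:admin>svctask rmhostiogrp -force -iogrp 0 Kanaga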
7.9.4 Listing I/O Groups
To list all of the I/O Groups that are mapped to the specified host and vice versa, use the
svcinfo lshostiogrp command, specifying the host name Kanaga, as shown in
Example 7-108.
Example 7-108 svcinfo lshostiogrp command
IBM_2145:ITSO-CLS1:admin>svcinfo lshostiogrp Kanaga
id name
1  io_grp1
To list all of the host objects that are mapped to the specified I/O Group, use the svcinfo
lsiogrphost command, as shown in Example 7-109 on page 394.
Example 7-109 svcinfo lsiogrphost command
IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrphost io_grp1
id name
1  Nile
2  Kanaga
3  Siam
In Example 7-109, io_grp1 is the I/O Group name.
7.10 Managing authentication
In the following topics, we show authentication administration.
7.10.1 Managing users using the CLI
In this section, we demonstrate operating and managing authentication using the CLI.
All users must now be a member of a predefined user group. You can list those groups by
using the svcinfo lsusergrp command, as shown in Example 7-110.
Example 7-110 svcinfo lsusergrp command
IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no
Example 7-111 is a simple example of creating a user. User John is added to the user group
Monitor with the password m0nitor.
Example 7-111 svctask mkuser called John with password m0nitor
IBM_2145:ITSO-CLS1:admin>svctask mkuser -name John -usergrp Monitor -password
m0nitor
User, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>
Local users are those users that are not authenticated by a remote authentication server.
Remote users are those users that are authenticated by a remote central registry server.
The user groups already have a defined authority role, as shown in Table 7-2 on page 395.
Table 7-2 Authority roles

User group: Security admin
Role: All commands
User: Superusers

User group: Administrator
Role: All commands except these svctask commands: chauthservice, mkuser, rmuser, chuser,
mkusergrp, rmusergrp, chusergrp, and setpwdreset
User: Administrators that control the SVC

User group: Copy operator
Role: All svcinfo commands and the following svctask commands: prestartfcconsistgrp,
startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap,
chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp,
startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and
chpartnership
User: Users that control all of the copy functionality of the cluster

User group: Service
Role: All svcinfo commands and the following svctask commands: applysoftware, setlocale,
addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog,
cleardumps, settimezone, stopcluster, startstats, stopstats, and settime
User: Users that perform service maintenance and other hardware tasks on the cluster

User group: Monitor
Role: All svcinfo commands, the following svctask commands: finderr, dumperrlog,
dumpinternallog, and chcurrentuser, and the svcconfig backup command
User: Users only needing view access
7.10.2 Managing user roles and groups
Role-based security commands are used to restrict the administrative abilities of a user. We
cannot create new user roles, but we can create new user groups and assign a predefined
role to our group.
To view the user roles on your cluster, use the svcinfo lsusergrp command, as shown in
Example 7-112 on page 396, to list all of the users.
Example 7-112 svcinfo lsusergrp command
IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no
To view our currently defined users and the user groups to which they belong, we use the
svcinfo lsuser command, as shown in Example 7-113.
Example 7-113 svcinfo lsuser command
IBM_2145:ITSO-CLS2:admin>svcinfo lsuser -delim ,
id,name,password,ssh_key,remote,usergrp_id,usergrp_name
0,superuser,yes,no,no,0,SecurityAdmin
1,admin,no,yes,no,0,SecurityAdmin
2,Pall,yes,no,no,1,Administrator
7.10.3 Changing a user
To change user passwords, issue the svctask chuser command. To change the Service
account user password, see 7.7.3, “Cluster authentication” on page 381.
The chuser command allows you to modify a user that has already been created. You can
rename a user, assign a new password (if you are logged on with administrative privileges),
or move a user from one user group to another. Be aware that a user can be a member of only
one group at a time.
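As a sketch, moving the user John (created in Example 7-111) from the Monitor user group to
the Administrator user group might look like this:
IBM_2145:ITSO-CLS1:admin>svctask chuser -usergrp Administrator John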
7.10.4 Audit log command
The audit log can be extremely helpful to see which commands have been entered on our
cluster.
Most action commands that are issued by the old or new CLI are recorded in the audit log:
The native GUI performs actions by using the CLI programs.
The SVC Console performs actions by issuing Common Information Model (CIM)
commands to the CIM object manager (CIMOM), which then runs the CLI programs.
Actions performed by using both the native GUI and the SVC Console are recorded in the
audit log.
Certain commands are not audited:
svctask cpdumps
svctask cleardumps
svctask finderr
svctask dumperrlog
svctask dumpinternallog
The audit log contains approximately 1 MB of data, which can contain about 6,000 average
length commands. When this log is full, the cluster copies it to a new file in the /dumps/audit
directory on the config node and resets the in-memory audit log.
To display entries from the audit log, use the svcinfo catauditlog -first 5 command to
return a list of five in-memory audit log entries, as shown in Example 7-114.
Example 7-114 catauditlog command
IBM_2145:ITSO-CLS1:admin>svcinfo catauditlog -first 5 -delim ,
291,090904200329,superuser,10.64.210.231,0,,svctask mkvdiskhostmap -host 1 21
292,090904201238,admin,10.64.210.231,0,,svctask chvdisk -name swiss_cheese 21
293,090904204314,superuser,10.64.210.231,0,,svctask chhost -name ITSO_W2008 1
294,090904204314,superuser,10.64.210.231,0,,svctask chhost -mask 15 1
295,090904204410,admin,10.64.210.231,0,,svctask chvdisk -name SwissCheese 21
If you need to dump the contents of the in-memory audit log to a file on the current
configuration node, use the svctask dumpauditlog command. This command does not
provide any feedback, only the prompt. To obtain a list of the audit log dumps, use the svcinfo
lsauditlogdumps command, as described in Example 7-115.
Example 7-115 svctask dumpauditlog/svcinfo lsauditlogdumps command
IBM_2145:ITSO-CLS1:admin>svctask dumpauditlog
IBM_2145:ITSO-CLS1:admin>svcinfo lsauditlogdumps
id auditlog_filename
0  auditlog_0_80_20080619134139_0000020060c06fca
7.11 Managing Copy Services
In these topics, we show how to manage copy services.
7.11.1 FlashCopy operations
In this section, we use a scenario to illustrate how to use commands with PuTTY to perform
FlashCopy. See the IBM System Storage Open Software Family SAN Volume Controller:
Command-Line Interface User’s Guide, SC26-7544, for more commands.
Scenario description
We use the following scenario in both the command-line section and the GUI section. In the
following scenario, we want to FlashCopy the following VDisks:
– DB_Source: database files
– Log_Source: database log files
– App_Source: application files
We create consistency groups to handle the FlashCopy of DB_Source and Log_Source,
because data integrity must be kept on DB_Source and Log_Source.
In our scenario, the application files are independent of the database, so we create a single
FlashCopy mapping for App_Source. We will make two FlashCopy targets for DB_Source and
Log_Source and, therefore, two consistency groups. Figure 7-2 shows the scenario.
Figure 7-2 FlashCopy scenario
7.11.2 Setting up FlashCopy
We have already created the source and target VDisks, and the source and target VDisks are
identical in size, which is a requirement of the FlashCopy function:
DB_Source, DB_Target1, and DB_Target2
Log_Source, Log_Target1, and Log_Target2
App_Source and App_Target1
To set up the FlashCopy, we performed the following steps:
1. Create two FlashCopy consistency groups:
– FCCG1
– FCCG2
2. Create FlashCopy mappings for the source VDisks, each with a copy rate of 50:
– DB_Source FlashCopy to DB_Target1; the mapping name is DB_Map1
– DB_Source FlashCopy to DB_Target2; the mapping name is DB_Map2
– Log_Source FlashCopy to Log_Target1; the mapping name is Log_Map1
– Log_Source FlashCopy to Log_Target2; the mapping name is Log_Map2
– App_Source FlashCopy to App_Target1; the mapping name is App_Map1
7.11.3 Creating a FlashCopy consistency group
To create a FlashCopy consistency group, we use the command svctask mkfcconsistgrp to
create a new consistency group. The ID of the new group is returned. If you have created
several FlashCopy mappings for a group of VDisks that contain elements of data for the same
application, it might be convenient to assign these mappings to a single FlashCopy
consistency group. Then, you can issue a single prepare or start command for the whole
group, so that, for example, all of the files for a particular database are copied at the same
time.
In Example 7-116, the FCCG1 and FCCG2 consistency groups are created to hold the
FlashCopy maps of DB and Log. This step is extremely important for FlashCopy on database
applications. It helps to keep data integrity during FlashCopy.
Example 7-116 Creating two FlashCopy consistency groups
IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG2
FlashCopy Consistency Group, id [2], successfully created
In Example 7-117, we checked the status of consistency groups. Each consistency group has
a status of empty.
Example 7-117 Checking the status
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 empty
2  FCCG2 empty
If you want to change the name of a consistency group, you can use the svctask
chfcconsistgrp command. Type svctask chfcconsistgrp -h for help with this command.
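For example, renaming the FCCG1 consistency group might look like the following sketch
(the new name is our own choice):
svctask chfcconsistgrp -name FCCG1_DB FCCG1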
7.11.4 Creating a FlashCopy mapping
To create a FlashCopy mapping, we use the svctask mkfcmap command. This command
creates a new FlashCopy mapping, which maps a source VDisk to a target VDisk to prepare
for subsequent copying.
When executed, this command creates a new FlashCopy mapping logical object. This
mapping persists until it is deleted. The mapping specifies the source and destination VDisks.
The destination must be identical in size to the source, or the mapping will fail. Issue the
svcinfo lsvdisk -bytes command to find the exact size of the source VDisk for which you
want to create a target disk of the same size.
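As a sketch with our scenario names, you can list the source size in bytes and then create a
target of exactly that size (the target name and the 1 GB size are assumptions based on this
scenario):
svcinfo lsvdisk -bytes DB_Source
svctask mkvdisk -mdiskgrp MDG_DS47 -iogrp 0 -size 1073741824 -unit b -name DB_Target_3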
In a single mapping, the source and destination cannot be the same VDisk. A mapping is
triggered at the point in time when the copy is required. The mapping can optionally be given
a name and assigned to a consistency group. These groups of mappings can be triggered at
the same time, enabling multiple VDisks to be copied at the same time, which creates a
consistent copy of multiple disks. A consistent copy of multiple disks is required for database
products in which the database and log files reside on separate disks.
If no consistency group is defined, the mapping is assigned to the default group 0, which is a
special group that cannot be started as a whole. Mappings in this group can only be started
on an individual basis.
The background copy rate specifies the priority that must be given to completing the copy. If 0
is specified, the copy will not proceed in the background. The default is 50.
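For instance, a mapping that performs no background copy (NOCOPY) can be created by
specifying a copy rate of zero (a sketch; the VDisk names are hypothetical):
svctask mkfcmap -source SRC_VDisk -target TGT_VDisk -copyrate 0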
Tip: There is a parameter to delete FlashCopy mappings automatically after completion of
a background copy (when the mapping gets to the idle_or_copied state). Use the
command:
svctask mkfcmap -autodelete
Automatic deletion does not occur for a mapping in a cascade with dependent mappings,
because such a mapping cannot get to the idle_or_copied state in this situation.
In Example 7-118, the first FlashCopy mapping for DB_Source and Log_Source is created.
Example 7-118 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target_1
-name DB_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target_1
-name Log_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Target_1
-name App_Map1
FlashCopy Mapping, id [2], successfully created
Example 7-119 shows the command to create a second FlashCopy mapping for VDisk
DB_Source and Log_Source.
Example 7-119 Create additional FlashCopy mappings
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target2
-name DB_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [3], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target2
-name Log_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [4], successfully created
Example 7-120 shows the result of these FlashCopy mappings. The status of each mapping
is idle_or_copied.
Example 7-120 Check the result of Multiple Target FlashCopy mappings
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name     source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status         progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  DB_Map1  0               DB_Source         6               DB_Target_1       1        FCCG1      idle_or_copied 0        50        100            off                                       no
1  Log_Map1 1               Log_Source        4               Log_Target_1      1        FCCG1      idle_or_copied 0        50        100            off                                       no
2  App_Map1 2               App_Source        3               App_Target_1                          idle_or_copied 0        50        100            off                                       no
3  DB_Map2  0               DB_Source         7               DB_Target_2       2        FCCG2      idle_or_copied 0        50        100            off                                       no
4  Log_Map2 1               Log_Source        5               Log_Target_2      2        FCCG2      idle_or_copied 0        50        100            off                                       no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 idle_or_copied
2  FCCG2 idle_or_copied
If you want to change the FlashCopy mapping, you can use the svctask chfcmap command.
Type svctask chfcmap -h to get help with this command.
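For example, to raise the background copy rate of an existing mapping, a command of the
following form can be used (a sketch using one of our mappings):
svctask chfcmap -copyrate 80 DB_Map1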
7.11.5 Preparing (pre-triggering) the FlashCopy mapping
At this point, the mapping has been created, but the cache still accepts data for the source
VDisks. You can only trigger the mapping when the cache does not contain any data for
FlashCopy source VDisks. You must issue an svctask prestartfcmap command to prepare a
FlashCopy mapping to start. This command tells the SVC to flush the cache of any content for
the source VDisk and to pass through any further write data for this VDisk.
When the svctask prestartfcmap command is executed, the mapping enters the Preparing
state. After the preparation is complete, it changes to the Prepared state. At this point, the
mapping is ready for triggering. Preparing and the subsequent triggering are usually
performed on a consistency group basis. Only mappings belonging to consistency group 0
can be prepared on their own, because consistency group 0 is a special group, which
contains the FlashCopy mappings that do not belong to any consistency group. A FlashCopy
must be prepared before it can be triggered.
In our scenario, App_Map1 is not in a consistency group. In Example 7-121, we show how we
initialize the preparation for App_Map1.
Another option is that you add the -prep parameter to the svctask startfcmap command,
which first prepares the mapping and then starts the FlashCopy.
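As a sketch, preparing and starting App_Map1 in a single step would look like the following
command (we do not run it here, because we prepare the mapping separately):
svctask startfcmap -prep App_Map1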
In the example, we also show how to check the status of the current FlashCopy mapping.
App_Map1’s status is prepared.
Example 7-121 Prepare a FlashCopy without a consistency group
IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
7.11.6 Preparing (pre-triggering) the FlashCopy consistency group
We use the svctask prestartfcconsistgrp command to prepare a FlashCopy consistency
group. As with 7.11.5, “Preparing (pre-triggering) the FlashCopy mapping” on page 401, this
command flushes the cache of any data that is destined for the source VDisks and forces the
cache into the write-through mode until the mapping is started. The difference is that this
command prepares a group of mappings (at a consistency group level) instead of one
mapping.
When you have assigned several mappings to a FlashCopy consistency group, you only have
to issue a single prepare command for the whole group to prepare all of the mappings at one
time.
Example 7-122 shows how we prepare the consistency groups for DB and Log and check the
result. After the command has executed, all of the FlashCopy maps that we created are in the
prepared status, and both consistency groups are in the prepared status, too. Now, we are
ready to start the FlashCopy.
Example 7-122 Prepare a FlashCopy consistency group
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 prepared
2  FCCG2 prepared
7.11.7 Starting (triggering) FlashCopy mappings
The svctask startfcmap command is used to start a single FlashCopy mapping. When
invoked, a point-in-time copy of the source VDisk is created on the target VDisk.
402
Implementing the IBM System Storage SAN Volume Controller V5.1
When the FlashCopy mapping is triggered, it enters the Copying state. The way that the copy
proceeds depends on the background copy rate attribute of the mapping. If the mapping is set
to 0 (NOCOPY), only data that is subsequently updated on the source will be copied to the
destination. We suggest that you use this scenario only for a temporary backup copy while the
mapping exists in the Copying state. If the copy is stopped, the destination is unusable. If you want to end up
with a duplicate copy of the source at the destination, set the background copy rate greater
than 0. This way, the system copies all of the data (even unchanged data) to the destination
and eventually reaches the idle_or_copied state. After this data is copied, you can delete the
mapping and have a usable point-in-time copy of the source at the destination.
In Example 7-123, after the FlashCopy is started, App_Map1 changes to copying status.
Example 7-123 Start App_Map1
IBM_2145:ITSO-CLS1:admin>svctask startfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name     source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status   progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  DB_Map1  0               DB_Source         6               DB_Target_1       1        FCCG1      prepared 0        50        100            off                                       no
1  Log_Map1 1               Log_Source        4               Log_Target_1      1        FCCG1      prepared 0        50        100            off                                       no
2  App_Map1 2               App_Source        3               App_Target_1                          copying  0        50        100            off                                       no
3  DB_Map2  0               DB_Source         7               DB_Target_2       2        FCCG2      prepared 0        50        100            off                                       no
4  Log_Map2 1               Log_Source        5               Log_Target_2      2        FCCG2      prepared 0        50        100            off                                       no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status copying
progress 29
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
7.11.8 Starting (triggering) FlashCopy consistency group
We execute the svctask startfcconsistgrp command, as shown in Example 7-124, and
afterward, the database can be resumed. We have created two point-in-time consistent
copies of the DB and Log VDisks. After execution, the consistency group and the FlashCopy
maps are all in the copying status.
Example 7-124 Start FlashCopy consistency group
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status copying
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 copying
2  FCCG2 copying
7.11.9 Monitoring the FlashCopy progress
To monitor the background copy progress of the FlashCopy mappings, we issue the svcinfo
lsfcmapprogress command for each FlashCopy mapping.
Alternatively, you can also query the copy progress by using the svcinfo lsfcmap command.
As shown in Example 7-125, DB_Map1, Log_Map1, Log_Map2, and DB_Map2 each return
information that the background copy is 23% completed, and App_Map1 returns information
that the background copy is 53% completed.
Example 7-125 Monitoring background copy progress
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map1
id progress
0  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map1
id progress
1  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map2
id progress
4  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map2
id progress
3  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress App_Map1
id progress
2  53
When the background copy has completed, the FlashCopy mapping enters the
idle_or_copied state, and when all FlashCopy mappings in a consistency group enter this
status, the consistency group will be at idle_or_copied status.
When in this state, the FlashCopy mapping can be deleted, and the target disk can be used
independently, if, for example, another target disk is to be used for the next FlashCopy of the
particular source VDisk.
7.11.10 Stopping the FlashCopy mapping
The svctask stopfcmap command is used to stop a FlashCopy mapping. This command
allows you to stop an active (copying) or suspended mapping. When executed, this command
stops a single FlashCopy mapping.
When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by
the SVC. The FlashCopy mapping needs to be prepared again or retriggered to bring the
target VDisk online again.
Tip: In a Multiple Target FlashCopy environment, if you want to stop a mapping or group,
consider whether you want to keep any of the dependent mappings. If not, issue the stop
command with the force parameter, which will stop all of the dependent maps and negate
the need for the stopping copy process to run.
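As a sketch (we do not run it in our scenario), stopping a mapping together with all of its
dependent mappings would look like this:
svctask stopfcmap -force DB_Map1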
Important: Only stop a FlashCopy mapping when the data on the target VDisk is not in
use, or when you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped, the target VDisk becomes invalid and is set offline by the SVC, unless the
mapping is in the Copying state and progress=100.
Example 7-126 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 has
changed to idle_or_copied.
Example 7-126 Stop APP_Map1 FlashCopy
IBM_2145:ITSO-CLS1:admin>svctask stopfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
7.11.11 Stopping the FlashCopy consistency group
The svctask stopfcconsistgrp command is used to stop any active FlashCopy consistency
group. It stops all mappings in a consistency group. When a FlashCopy consistency group is
stopped, the target VDisks of all mappings that are not 100% copied become invalid and are
set offline by the SVC. The FlashCopy consistency group needs to be prepared again and
restarted to bring the target VDisks online again.
Important: Only stop a FlashCopy mapping when the data on the target VDisk is not in
use, or when you want to modify the FlashCopy consistency group. When a consistency
group is stopped, the target VDisk might become invalid and set offline by the SVC,
depending on the state of the mapping.
As shown in Example 7-127, we stop the FCCG1 and FCCG2 consistency groups. The status
of the two consistency groups has changed to stopped. As you can see in the lsfcmap
output, all of the FlashCopy mappings had already completed the copy operation and are
now in a status of idle_or_copied.
Example 7-127 Stop FCCG1 and FCCG2 consistency groups
IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 stopped
2  FCCG2 stopped
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_
id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,p
artner_FC_name,restoring
0,DB_Map1,0,DB_Source,6,DB_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
1,Log_Map1,1,Log_Source,4,Log_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
2,App_Map1,2,App_Source,3,App_Target_1,,,idle_or_copied,100,50,100,off,,,no
3,DB_Map2,0,DB_Source,7,DB_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
4,Log_Map2,1,Log_Source,5,Log_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
7.11.12 Deleting the FlashCopy mapping
To delete a FlashCopy mapping, we use the svctask rmfcmap command. When the
command is executed, it attempts to delete the specified FlashCopy mapping. If the
FlashCopy mapping is stopped, the command fails unless the -force flag is specified. If the
mapping is active (copying), it must first be stopped before it can be deleted.
Deleting a mapping only deletes the logical relationship between the two VDisks. However,
when issued on an active FlashCopy mapping using the -force flag, the delete renders the
data on the FlashCopy mapping target VDisk as inconsistent.
Tip: If you want to use the target VDisk as a normal VDisk, monitor the background copy
progress until it is complete (100% copied) and, then, delete the FlashCopy mapping.
Another option is to set the -autodelete option when creating the FlashCopy mapping.
As shown in Example 7-128, we delete App_Map1.
Example 7-128 Delete App_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap App_Map1
7.11.13 Deleting the FlashCopy consistency group
The svctask rmfcconsistgrp command is used to delete a FlashCopy consistency group.
When executed, this command deletes the specified consistency group. If there are mappings
that are members of the group, the command fails unless the -force flag is specified.
If you want to delete all of the mappings in the consistency group, as well, you must first
delete the mappings and, then, delete the consistency group.
As shown in Example 7-129, we delete all of the maps and consistency groups, and then, we
check the result.
Example 7-129 Remove fcmaps and fcconsistgrp
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
IBM_2145:ITSO-CLS1:admin>
7.11.14 Migrating a VDisk to a Space-Efficient VDisk
Use the following scenario to migrate a VDisk to a Space-Efficient VDisk:
1. Create a space-efficient target VDisk with exactly the same size as the VDisk that you
want to migrate.
Example 7-130 on page 408 shows the VDisk 8 details. It has been created as a
Space-Efficient VDisk with the same size as the App_Source VDisk.
Example 7-130 svcinfo lsvdisk 8 command
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 8
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 462
autoexpand on
warning 80
grainsize 32
2. Define a FlashCopy mapping in which the non-Space-Efficient VDisk is the source and the
Space-Efficient VDisk is the target. Specify a copy rate as high as possible, and activate
the -autodelete option for the mapping. See Example 7-131.
Example 7-131 svctask mkfcmap
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target
App_Source_SE -name MigrtoSEV -copyrate 100 -autodelete
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap 0
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
3. Run the svctask prestartfcmap command and the svcinfo lsfcmap MigrtoSEV
command, as shown in Example 7-132.
Example 7-132 svctask prestartfcmap
IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap MigrtoSEV
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
4. Run the svctask startfcmap command, as shown in Example 7-133.
Example 7-133 svctask startfcmap command
IBM_2145:ITSO-CLS1:admin>svctask startfcmap MigrtoSEV
5. Monitor the copy process using the svcinfo lsfcmapprogress command, as shown in
Example 7-134.
Example 7-134 svcinfo lsfcmapprogress command
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress MigrtoSEV
id
progress
0
63
6. When the background copy completes, the FlashCopy mapping is deleted automatically, as shown in Example 7-135.
Example 7-135 svcinfo lsfcmap command
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 73
copy_rate 100
start_time 090827095354
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
CMMVC5754E The specified object does not exist, or the name supplied does not
meet the naming rules.
An independent copy of the source VDisk (App_Source) has been created. The migration
has completed, as shown in Example 7-136 on page 411.
Example 7-136 svcinfo lsvdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk App_Source_SE
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.77MB
overallocation 99
autoexpand on
warning 80
grainsize 32
Real size: Regardless of what you defined as the real size of the target Space-Efficient
VDisk, after the copy completes, the real size will be at least the capacity of the source VDisk.
To migrate a Space-Efficient VDisk to a fully allocated VDisk, you can follow the same
scenario.
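As a sketch of the reverse direction (the names are our own; we did not run these
commands), you create a fully allocated target of the same size and FlashCopy the
Space-Efficient VDisk onto it:
svctask mkvdisk -mdiskgrp MDG_DS47 -iogrp 0 -size 1 -unit gb -name App_Source_FA
svctask mkfcmap -source App_Source_SE -target App_Source_FA -name MigrtoFA -copyrate 100 -autodelete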
7.11.15 Reverse FlashCopy
Starting with SVC 5.1, you can have a reverse FlashCopy mapping without having to remove
the original FlashCopy mapping, and without restarting a FlashCopy mapping from the
beginning.
In Example 7-137, FCMAP0 is the forward FlashCopy mapping, and FCMAP0_rev is a
reverse FlashCopy mapping. Its source is FCMAP0’s target, and its target is FCMAP0’s
source. When starting a reverse FlashCopy mapping, you must use the -restore option to
indicate that the user wants to overwrite the data on the source disk of the forward mapping.
Example 7-137 Reverse FlashCopy
IBM_2145:ITSO-CLS1:admin> svctask mkfcmap -source vdsk0 -target vdsk1 -name FCMAP0
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin> svctask startfcmap -prep FCMAP0
IBM_2145:ITSO-CLS1:admin> svctask mkfcmap -source vdsk1 -target vdsk0 -name
FCMAP0_rev
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO-CLS1:admin> svctask startfcmap -prep -restore FCMAP0_rev
IBM_2145:ITSO-CLS1:admin> svcinfo lsfcmap -delim :
id:name:source_vdisk_id:source_vdisk_name:target_vdisk_id:target_vdisk_name:group_id:group_name:status:progress:copy_rate:clean_progress:incremental:partner_FC_id:partner_FC_name:restoring
0:FCMAP0:75:vdsk0:76:vdsk1:::copying:0:10:99:off:1:FCMAP0_rev:no
1:FCMAP0_rev:76:vdsk1:75:vdsk0:::copying:99:50:100:off:0:FCMAP0:yes
FCMAP0_rev will show a restoring value of yes while the FlashCopy mapping is copying.
After it has finished copying, the restoring value field will change to no.
7.11.16 Split-stopping of FlashCopy maps
The stopfcmap command now has a -split option. This option allows the source VDisk of a
map, which is 100% complete, to be removed from the head of a cascade when the map is
stopped.

For example, if we have four VDisks in a cascade (A → B → C → D), and the map A → B is
100% complete, using the stopfcmap -split mapAB command results in mapAB becoming
idle_or_copied, and the remaining cascade becomes B → C → D.

Without the -split option, VDisk A remains at the head of the cascade (A → C → D). Consider
this sequence of steps:
1. User takes a backup using the mapping A → B. A is the production VDisk; B is a backup.
2. At a later point, the user experiences corruption on A and, so, reverses the mapping
B → A.
3. The user then takes another backup from the production disk A, resulting in the cascade
B → A → C.

Stopping A → B without the -split option results in the cascade B → C. Note that the backup
disk B is now at the head of this cascade.

When the user next wants to take a backup to B, the user can still start mapping A → B (using
the -restore flag), but the user cannot then reverse the mapping to A (B → A or C → A).
Stopping A → B with the -split option results in the cascade A → C. This action does not result
in the same problem, because production disk A is at the head of the cascade instead of the
backup disk B.
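A minimal sketch of the split-stop itself, assuming a mapping named mapAB that is 100%
complete:
svctask stopfcmap -split mapAB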
7.12 Metro Mirror operation
Note: This example is for intercluster operations only. If you want to set up intracluster
operations, we highlight those parts of the following procedure that you do not need to
perform.
In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC
cluster ITSO-CLS1 primary site and the SVC cluster ITSO-CLS4 at the secondary site.
Table 7-3 shows the details of the VDisks.
Table 7-3 VDisk details
Content of VDisk     VDisks at primary site   VDisks at secondary site
Database files       MM_DB_Pri                MM_DB_Sec
Database log files   MM_DBLog_Pri             MM_DBLog_Sec
Application files    MM_App_Pri               MM_App_Sec
Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri VDisks, a
CG_W2K3_MM consistency group is created to handle Metro Mirror relationships for them.
Because, in this scenario, application files are independent of the database, a stand-alone
Metro Mirror relationship is created for the MM_App_Pri VDisk. Figure 7-3 on page 414
illustrates the Metro Mirror setup.
Figure 7-3 Metro Mirror scenario
7.12.1 Setting up Metro Mirror
In the following section, we assume that the source and target VDisks have already been
created and that the inter-switch links (ISLs) and zoning are in place, enabling the SVC
clusters to communicate.
To set up the Metro Mirror, perform the following steps:
1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS4, on both SVC clusters.
2. Create a Metro Mirror consistency group:
Name CG_W2K3_MM
3. Create the Metro Mirror relationship for MM_DB_Pri:
– Master MM_DB_Pri
– Auxiliary MM_DB_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name MMREL1
– Consistency group CG_W2K3_MM
4. Create the Metro Mirror relationship for MM_DBLog_Pri:
– Master MM_DBLog_Pri
– Auxiliary MM_DBLog_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name MMREL2
– Consistency group CG_W2K3_MM
5. Create the Metro Mirror relationship for MM_App_Pri:
– Master MM_App_Pri
– Auxiliary MM_App_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name MMREL3
In the following section, we perform each step by using the CLI.
7.12.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4
We create the SVC partnership on both clusters.
Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform
the next step; instead, go to 7.12.3, “Creating a Metro Mirror consistency group” on
page 416.
Pre-verification
To verify that both clusters can communicate with each other, use the svcinfo
lsclustercandidate command.
As shown in Example 7-138, ITSO-CLS4 is an eligible SVC cluster candidate at ITSO-CLS1
for the SVC cluster partnership, and vice versa. Therefore, both clusters are communicating
with each other.
Example 7-138 Listing the available SVC cluster for partnership
IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
0000020063E03A38 no         ITSO-CLS4
0000020061006FCA no         ITSO-CLS2
IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
000002006AE04FC4 no         ITSO-CLS1
0000020061006FCA no         ITSO-CLS2
Example 7-139 shows the output of the svcinfo lscluster command before setting up the
Metro Mirror partnership. We show it so that you can compare it with the output of the same
command after the partnership has been set up.
Example 7-139 Pre-verification of cluster configuration
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                          000002006AE04FC4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local                          0000020063E03A38
Partnership between clusters
In Example 7-140, a partnership is created between ITSO-CLS1 and ITSO-CL4, specifying
50 MBps bandwidth to be used for the background copy.
To check the status of the newly created partnership, issue the svcinfo lscluster
command. Also, notice that the new partnership is only partially configured. It remains
partially configured until the partnership is also created from the other cluster.
Example 7-140 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                               000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   fully_configured 50        0000020063E03A38
In Example 7-141, the partnership is created between ITSO-CLS4 back to ITSO-CLS1,
specifying the bandwidth to be used for a background copy of 50 MBps.
After creating the partnership, verify that the partnership is fully configured on both clusters
by reissuing the svcinfo lscluster command.
Example 7-141 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local                               0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote   fully_configured 50        000002006AE04FC4
7.12.3 Creating a Metro Mirror consistency group
In Example 7-142, we create the Metro Mirror consistency group using the svctask
mkrcconsistgrp command. This consistency group will be used for the Metro Mirror
relationships of the database VDisks named MM_DB_Pri and MM_DBLog_Pri. The
consistency group is named CG_W2K3_MM.
Example 7-142 Creating the Metro Mirror consistency group CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name
CG_W2K3_MM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name       master_cluster_id master_cluster_name aux_cluster_id   aux_cluster_name primary state relationship_count copy_type
0  CG_W2K3_MM 000002006AE04FC4  ITSO-CLS1           0000020063E03A38 ITSO-CLS4                empty 0                  empty_group
7.12.4 Creating the Metro Mirror relationships
In Example 7-143, we create the Metro Mirror relationships MMREL1 and MMREL2, for
MM_DB_Pri and MM_DBLog_Pri. Also, we make them members of the Metro Mirror
consistency group CG_W2K3_MM. We use the svcinfo lsvdisk command to list all of the
VDisks in the ITSO-CLS1 cluster, and we then use the svcinfo lsrcrelationshipcandidate
command to show the VDisks in the ITSO-CLS4 cluster.
By using this command, we check the possible candidates for MM_DB_Pri. After checking all
of these conditions, use the svctask mkrcrelationship command to create the Metro Mirror
relationship.
To verify the newly created Metro Mirror relationships, list them with the svcinfo
lsrcrelationship command.
Example 7-143 Creating Metro Mirror relationships MMREL1 and MMREL2
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=MM*
id name       IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count fast_write_state
13 MM_DB_Pri  0           io_grp0       online 0            MDG_DS47       1.00GB   striped                             6005076801AB813F1000000000000010 0            1          empty
14 MM_Log_Pri 0           io_grp0       online 0            MDG_DS47       1.00GB   striped                             6005076801AB813F1000000000000011 0            1          empty
15 MM_App_Pri 0           io_grp0       online 0            MDG_DS47       1.00GB   striped                             6005076801AB813F1000000000000012 0            1          empty
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate
id vdisk_name
0  DB_Source
1  Log_Source
2  App_Source
3  App_Target_1
4  Log_Target_1
5  Log_Target_2
6  DB_Target_1
7  DB_Target_2
8  App_Source_SE
9  FC_A
13 MM_DB_Pri
14 MM_Log_Pri
15 MM_App_Pri
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master MM_DB_Pri
id vdisk_name
0  MM_DB_Sec
1  MM_Log_Sec
2  MM_App_Sec
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster
ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [13], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster
ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [14], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name   master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id   aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state                bg_copy_priority progress copy_type
13 MMREL1 000002006AE04FC4  ITSO-CLS1           13              MM_DB_Pri         0000020063E03A38 ITSO-CLS4        0            MM_DB_Sec      master  0                    CG_W2K3_MM             inconsistent_stopped 50               0        metro
14 MMREL2 000002006AE04FC4  ITSO-CLS1           14              MM_Log_Pri        0000020063E03A38 ITSO-CLS4        1            MM_Log_Sec     master  0                    CG_W2K3_MM             inconsistent_stopped 50               0        metro
7.12.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri
In Example 7-144, we create the stand-alone Metro Mirror relationship MMREL3 for
MM_App_Pri. After it is created, we check the status of this Metro Mirror relationship.
Notice that the state of MMREL3 is consistent_stopped. MMREL3 is in this state, because it
was created with the -sync option. The -sync option indicates that the secondary (auxiliary)
VDisk is already synchronized with the primary (master) VDisk. Initial background
synchronization is skipped when this option is used, even though the VDisks are not actually
synchronized in this scenario. We want to illustrate the option of pre-synchronized master and
auxiliary VDisks, before setting up the relationship. We have created the new relationship for
MM_App_Sec using the -sync option.
Tip: The -sync option is only used when the target VDisk has already mirrored all of the
data from the source VDisk. By using this option, there is no initial background copy
between the primary VDisk and the secondary VDisk.
MMREL2 and MMREL1 are in the inconsistent_stopped state, because they were not created
with the -sync option, so their auxiliary VDisks need to be synchronized with their primary
VDisks.
Example 7-144 Creating a stand-alone relationship and verifying it
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_App_Pri -aux
MM_App_Sec -sync -cluster ITSO-CLS4 -name MMREL3
RC Relationship, id [15], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship 15
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro
7.12.6 Starting Metro Mirror
Now that the Metro Mirror consistency group and relationships are in place, we are ready to
use Metro Mirror relationships in our environment.
When implementing Metro Mirror, the goal is to reach a consistent and synchronized state
that can provide redundancy for a dataset if a failure occurs that affects the production site.
In the following section, we show how to stop and start stand-alone Metro Mirror relationships
and consistency groups.
Starting a stand-alone Metro Mirror relationship
In Example 7-145, we start a stand-alone Metro Mirror relationship named MMREL3.
Because the Metro Mirror relationship was in the Consistent stopped state and no updates
have been made to the primary VDisk, the relationship quickly enters the Consistent
synchronized state.
Example 7-145 Starting the stand-alone Metro Mirror relationship
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>
7.12.7 Starting a Metro Mirror consistency group
In Example 7-146, we start the Metro Mirror consistency group CG_W2K3_MM. Because the
consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying
state until the background copy has completed for all of the relationships in the consistency
group.
Upon completion of the background copy, it enters the Consistent synchronized state.
Example 7-146 Starting the Metro Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>
7.12.8 Monitoring the background copy progress
To monitor the background copy progress, we can use the svcinfo lsrcrelationship
command. This command shows us all of the defined Metro Mirror relationships if it is used
without any arguments. In the command output, progress indicates the current background
copy progress.
Our Metro Mirror relationship is shown in Example 7-147 on page 421.
Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Metro Mirror consistency groups or relationships change state.
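A minimal sketch of adding an SNMP server (the IP address and community string are
hypothetical; check the CLI guide for the exact syntax at your code level):
svctask mksnmpserver -ip 10.64.210.5 -community public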
Example 7-147 Monitoring background copy progress example
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL1
id 13
name MMREL1
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 13
master_vdisk_name MM_DB_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 0
aux_vdisk_name MM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state consistent_synchronized
bg_copy_priority 50
progress 35
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL2
id 14
name MMREL2
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 14
master_vdisk_name MM_Log_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 1
aux_vdisk_name MM_Log_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state consistent_synchronized
bg_copy_priority 50
progress 37
freeze_time
status online
sync
copy_type metro
When all Metro Mirror relationships have completed the background copy, the consistency
group enters the Consistent synchronized state, as shown in Example 7-148.
Example 7-148 Listing the Metro Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
7.12.9 Stopping and restarting Metro Mirror
Now that the Metro Mirror consistency group and relationships are running, we describe in
this section and the following sections how to stop, restart, and change the direction of the
stand-alone Metro Mirror relationships, as well as the consistency group.
7.12.10 Stopping a stand-alone Metro Mirror relationship
Example 7-149 shows how to stop the stand-alone Metro Mirror relationship, while enabling
access (write I/O) to both the primary and secondary VDisks. It also shows the relationship
entering the Idling state.
Example 7-149 Stopping stand-alone Metro Mirror relationship and enabling access to the secondary
IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro
7.12.11 Stopping a Metro Mirror consistency group
Example 7-150 shows how to stop the Metro Mirror consistency group without specifying the
-access flag. The consistency group enters the Consistent stopped state.
Example 7-150 Stopping a Metro Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
If, afterwards, we want to enable access (write I/O) to the secondary VDisk, we reissue the
svctask stoprcconsistgrp command, specifying the -access flag, and the consistency group
transitions to the Idling state, as shown in Example 7-151.
Example 7-151 Stopping a Metro Mirror consistency group and enabling access to the secondary
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
7.12.12 Restarting a Metro Mirror relationship in the Idling state
When restarting a Metro Mirror relationship in the Idling state, we must specify the copy
direction.
If any updates have been performed on either the master or the auxiliary VDisk, consistency
will be compromised. Therefore, we must issue the command with the -force flag to restart a
relationship, as shown in Example 7-152.
Example 7-152 Restarting a Metro Mirror relationship after updates in the Idling state
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
7.12.13 Restarting a Metro Mirror consistency group in the Idling state
When restarting a Metro Mirror consistency group in the Idling state, we must specify the
copy direction.
If any updates have been performed on either the master or the auxiliary VDisk in any of the
Metro Mirror relationships in the consistency group, the consistency is compromised.
Therefore, we must use the -force flag to start a relationship. If the -force flag is not used, the
command fails.
In Example 7-153, we change the copy direction by specifying the auxiliary VDisks to become
the primaries.
Example 7-153 Restarting a Metro Mirror relationship while changing the copy direction
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -force -primary aux CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
7.12.14 Changing copy direction for Metro Mirror
In this section, we show how to change the copy direction of the stand-alone Metro Mirror
relationship and the consistency group.
7.12.15 Switching copy direction for a Metro Mirror relationship
When a Metro Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship using the svctask switchrcrelationship command,
specifying the primary VDisk.
If the specified VDisk, when you issue this command, is already a primary, the command has
no effect.
In Example 7-154, we change the copy direction for the stand-alone Metro Mirror relationship
by specifying the auxiliary VDisk to become the primary.
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the VDisk that transitions from the primary to the secondary, because all of the I/O will
be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is
required prior to using the svctask switchrcrelationship command.
Example 7-154 Switching the copy direction for a Metro Mirror relationship
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
7.12.16 Switching copy direction for a Metro Mirror consistency group
When a Metro Mirror consistency group is in the Consistent synchronized state, we can
change the copy direction for the consistency group, by using the svctask
switchrcconsistgrp command and specifying the primary VDisk.
If the specified VDisk is already a primary when you issue this command, the command has
no effect.
In Example 7-155, we change the copy direction for the Metro Mirror consistency group by
specifying the auxiliary VDisk to become the primary.
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the VDisk that transitions from primary to secondary, because all of the I/O will be
inhibited when that VDisk becomes the secondary. Therefore, careful planning is required
prior to using the svctask switchrcconsistgrp command.
Example 7-155 Switching the copy direction for a Metro Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
7.12.17 Creating an SVC partnership among many clusters
Starting with SVC 5.1, you can have a cluster partnership among many SVC clusters. This
capability allows you to create four configurations using a maximum of four connected
clusters:
– Star configuration
– Triangle configuration
– Fully connected configuration
– Daisy-chain configuration
In this section, we describe how to configure the SVC cluster partnership for each
configuration.
Important: In order to have a supported and working configuration, all of the SVC clusters
must be at level 5.1 or higher.
In our scenarios, we configure the SVC partnership by referring to the clusters as A, B, C, and
D:
– ITSO-CLS1 = A
– ITSO-CLS2 = B
– ITSO-CLS3 = C
– ITSO-CLS4 = D
Example 7-156 shows the available clusters for a partnership using the lsclustercandidate
command on each cluster.
Example 7-156 Available clusters
IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
0000020063E03A38 no         ITSO-CLS4
0000020061006FCA no         ITSO-CLS2
IBM_2145:ITSO-CLS2:admin>svcinfo lsclustercandidate
id               configured cluster_name
000002006AE04FC4 no         ITSO-CLS1
0000020069E03A42 no         ITSO-CLS3
0000020063E03A38 no         ITSO-CLS4
IBM_2145:ITSO-CLS3:admin>svcinfo lsclustercandidate
id               configured name
000002006AE04FC4 no         ITSO-CLS1
0000020063E03A38 no         ITSO-CLS4
0000020061006FCA no         ITSO-CLS2
IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
000002006AE04FC4 no         ITSO-CLS1
0000020061006FCA no         ITSO-CLS2
7.12.18 Star configuration partnership
Figure 7-4 shows the star configuration.
Figure 7-4 Star configuration
Example 7-157 shows the sequence of mkpartnership commands to execute to create a star
configuration.
Example 7-157 Creating a star configuration using the mkpartnership command
From ITSO-CLS1 to multiple clusters
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
From ITSO-CLS2 to ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
From ITSO-CLS3 to ITSO-CLS1
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
From ITSO-CLS4 to ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
From ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38
From ITSO-CLS2
IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38
From ITSO-CLS3
IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38
From ITSO-CLS4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
After the SVC partnership has been configured, you can configure any rcrelationship or
rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.
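If you are not sure whether a VDisk already participates in a relationship, you can check before creating a new one. This sketch assumes that svcinfo lsrcrelationship accepts the -filtervalue parameter in the same way as the other svcinfo listing commands used in this chapter; if a row is returned for the VDisk, it is already in a relationship:
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship -filtervalue master_vdisk_name=MM_App_Pri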
Triangle configuration
Figure 7-5 shows the triangle configuration.
Figure 7-5 Triangle configuration
Example 7-158 shows the sequence of mkpartnership commands to execute to create a
triangle configuration.
Example 7-158 Creating a triangle configuration
From ITSO-CLS1 to ITSO-CLS2 and ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
From ITSO-CLS3 to ITSO-CLS1 and ITSO-CLS2
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
From ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
From ITSO-CLS2
IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
From ITSO-CLS3
IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
After the SVC partnership has been configured, you can configure any rcrelationship or
rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.
Fully connected configuration
Figure 7-6 shows the fully connected configuration.
Figure 7-6 Fully connected configuration
Example 7-159 shows the sequence of mkpartnership commands to execute to create a fully
connected configuration.
Example 7-159 Creating a fully connected configuration
From ITSO-CLS1 to ITSO-CLS2, ITSO-CLS3 and ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
From ITSO-CLS2 to ITSO-CLS1, ITSO-CLS3 and ITSO-CLS4
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
From ITSO-CLS3 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS4
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
From ITSO-CLS4 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS3
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
From ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38
From ITSO-CLS2
IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38
From ITSO-CLS3
IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38
From ITSO-CLS4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
After the SVC partnership has been configured, you can configure any rcrelationship or
rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.
Daisy-chain configuration
Figure 7-7 shows the daisy-chain configuration.
Figure 7-7 Daisy-chain configuration
Example 7-160 shows the sequence of mkpartnership commands to execute to create a
daisy-chain configuration.
Example 7-160 Creating a daisy-chain configuration
From ITSO-CLS1 to ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
From ITSO-CLS3 to ITSO-CLS2 and ITSO-CLS4
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
From ITSO-CLS4 to ITSO-CLS3
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
From ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
From ITSO-CLS2
IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
From ITSO-CLS3
IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
From ITSO-CLS4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38
After the SVC partnership has been configured, you can configure any rcrelationship or
rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.
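If you later need to adjust or dismantle a partnership, you can use the svctask chpartnership and svctask rmpartnership commands. The following sketch, which reuses the cluster names from our scenarios, assumes that you stop the partnership before removing it and that rmpartnership must be run on both clusters to remove the partnership completely:
IBM_2145:ITSO-CLS3:admin>svctask chpartnership -bandwidth 100 ITSO-CLS4
IBM_2145:ITSO-CLS3:admin>svctask chpartnership -stop ITSO-CLS4
IBM_2145:ITSO-CLS3:admin>svctask rmpartnership ITSO-CLS4
IBM_2145:ITSO-CLS4:admin>svctask rmpartnership ITSO-CLS3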
7.13 Global Mirror operation
In the following scenario, we set up an intercluster Global Mirror relationship between the
SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS4 at the secondary
site.
Note: This example is for an intercluster Global Mirror operation only. If you want to set up
an intracluster operation instead, we highlight the parts of the following procedure that you
do not need to perform.
Table 7-4 shows the details of the VDisks.
Table 7-4 Details of VDisks for Global Mirror relationship scenario
Content of VDisk     VDisks at primary site  VDisks at secondary site
Database files       GM_DB_Pri               GM_DB_Sec
Database log files   GM_DBLog_Pri            GM_DBLog_Sec
Application files    GM_App_Pri              GM_App_Sec
Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a
consistency group to handle Global Mirror relationships for them. Because, in this scenario,
the application files are independent of the database, we create a stand-alone Global Mirror
relationship for GM_App_Pri. Figure 7-8 on page 435 illustrates the Global Mirror relationship
setup.
(The figure shows the primary site, SVC cluster ITSO-CLS1, and the secondary site, SVC cluster ITSO-CLS4. GM Relationship 1 copies GM_DB_Pri to GM_DB_Sec and GM Relationship 2 copies GM_DBLog_Pri to GM_DBLog_Sec, both within the consistency group CG_W2K3_GM; GM Relationship 3 copies GM_App_Pri to GM_App_Sec as a stand-alone relationship.)
Figure 7-8 Global Mirror scenario
7.13.1 Setting up Global Mirror
In the following section, we assume that the source and target VDisks have already been
created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate.
To set up the Global Mirror, perform the following steps:
1. Create an SVC partnership between ITSO_CLS1 and ITSO_CLS4, on both SVC clusters:
Bandwidth 10 MBps
2. Create a Global Mirror consistency group:
Name CG_W2K3_GM
3. Create the Global Mirror relationship for GM_DB_Pri:
– Master GM_DB_Pri
– Auxiliary GM_DB_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name GMREL1
– Consistency group CG_W2K3_GM
4. Create the Global Mirror relationship for GM_DBLog_Pri:
– Master GM_DBLog_Pri
– Auxiliary GM_DBLog_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name GMREL2
– Consistency group CG_W2K3_GM
5. Create the Global Mirror relationship for GM_App_Pri:
– Master GM_App_Pri
– Auxiliary GM_App_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name GMREL3
In the following sections, we perform each step by using the CLI.
7.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4
We create an SVC partnership between both clusters.
Note: If you are creating an intracluster Global Mirror, do not perform the next step;
instead, go to 7.13.3, “Changing link tolerance and cluster delay simulation” on page 437.
Pre-verification
To verify that both clusters can communicate with each other, use the svcinfo
lsclustercandidate command. Example 7-161 confirms that our clusters can communicate:
ITSO-CLS4 is an eligible SVC cluster candidate at ITSO-CLS1 for the SVC cluster
partnership, and vice versa.
Example 7-161 Listing the available SVC clusters for partnership
IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id               configured cluster_name
0000020068603A42 no         ITSO-CLS4
IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id               configured cluster_name
0000020060C06FCA no         ITSO-CLS1
In Example 7-162, we show the output of the svcinfo lscluster command, before setting up
the SVC clusters’ partnership for Global Mirror. We show this output for comparison after we
have set up the SVC partnership.
Example 7-162 Pre-verification of cluster configuration
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020060C06FCA:ITSO-CLS1:local:::10.64.210.240:10.64.210.241:::0000020060C06FCA
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020063E03A38:ITSO-CLS4:local:::10.64.210.246:10.64.210.247:::0000020063E03A38
Partnership between clusters
In Example 7-163, we create the partnership from ITSO-CLS1 to ITSO-CLS4, specifying a
10 MBps bandwidth to use for the background copy.
To verify the status of the newly created partnership, we issue the svcinfo lscluster
command. Notice that the new partnership is only partially configured. It will remain partially
configured until we run the mkpartnership command on the other cluster.
Example 7-163 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership                bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                                         000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   partially_configured_local 10        0000020063E03A38
In Example 7-164, we create the partnership from ITSO-CLS4 back to ITSO-CLS1, specifying
a 10 MBps bandwidth to be used for the background copy.
After creating the partnership, verify that the partnership is fully configured by reissuing the
svcinfo lscluster command.
Example 7-164 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local                               0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote   fully_configured 10        000002006AE04FC4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                               000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   fully_configured 10        0000020063E03A38
7.13.3 Changing link tolerance and cluster delay simulation
The gm_link_tolerance parameter defines the sensitivity of the SVC to intercluster link
overload conditions. The value is the number of seconds of continuous link difficulties that
will be tolerated before the SVC stops the remote copy relationships to prevent affecting
host I/O at the primary site. To change the value, use the following command:
svctask chcluster -gmlinktolerance link_tolerance
The link_tolerance value is between 60 and 86,400 seconds in increments of 10 seconds.
The default value for the link tolerance is 300 seconds. A value of 0 disables link tolerance.
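For example, to return the link tolerance to its default value, set it back to 300 seconds:
svctask chcluster -gmlinktolerance 300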
Recommendation: We strongly recommend that you use the default value. If the link is
overloaded for a period, which affects host I/O at the primary site, the relationships will be
stopped to protect those hosts.
Intercluster and intracluster delay simulation
This Global Mirror feature permits the simulation of a delayed write to a remote VDisk. The
simulation lets you detect colliding writes, so you can use it to test an application before fully
deploying Global Mirror. The delay simulation can be enabled separately for intracluster or
intercluster Global Mirror. To enable this feature, run one of the following commands:
For intercluster:
svctask chcluster -gminterdelaysimulation <inter_cluster_delay_simulation>
For intracluster:
svctask chcluster -gmintradelaysimulation <intra_cluster_delay_simulation>
The inter_cluster_delay_simulation and intra_cluster_delay_simulation values express the
amount of time (in milliseconds) by which secondary I/Os are delayed for intercluster and
intracluster relationships, respectively. That is, they specify the number of milliseconds by
which the copying of a primary VDisk to a secondary VDisk is delayed. You can set a value
from 0 to 100 milliseconds in 1 millisecond increments. A value of zero (0) disables the
feature.
To check the current settings for the delay simulation, use the following command:
svcinfo lscluster <clustername>
In Example 7-165, we show the modification of the delay simulation value and a change of
the Global Mirror link tolerance parameters. We also show the changed values of the Global
Mirror link tolerance and delay simulation parameters.
Example 7-165 Delay simulation and link tolerance modification
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gminterdelaysimulation 20
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmintradelaysimulation 40
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmlinktolerance 200
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster 000002006AE04FC4
id 000002006AE04FC4
name ITSO-CLS1
location local
partnership
bandwidth
total_mdisk_capacity 160.0GB
space_in_mdisk_grps 160.0GB
space_allocated_to_vdisks 19.00GB
total_free_space 141.0GB
statistics_status off
statistics_frequency 15
required_memory 8192
cluster_locale en_US
time_zone 520 US/Pacific
code_level 5.1.0.0 (build 17.1.0908110000)
FC_port_speed 2Gb
console_IP
id_alias 000002006AE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_state invalid
inventory_mail_interval 0
total_vdiskcopy_capacity 19.00GB
total_used_capacity 19.00GB
total_overallocation 11
total_vdisk_capacity 19.00GB
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
relationship_bandwidth_limit 25
7.13.4 Creating a Global Mirror consistency group
In Example 7-166, we create the Global Mirror consistency group using the svctask
mkrcconsistgrp command. We will use this consistency group for the Global Mirror
relationships for the database VDisks. The consistency group is named CG_W2K3_GM.
Example 7-166 Creating the Global Mirror consistency group CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name
CG_W2K3_GM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name       master_cluster_id master_cluster_name aux_cluster_id   aux_cluster_name primary state relationship_count copy_type
0  CG_W2K3_GM 000002006AE04FC4  ITSO-CLS1           0000020063E03A38 ITSO-CLS4                empty 0                  empty_group
7.13.5 Creating Global Mirror relationships
In Example 7-167, we create the GMREL1 and GMREL2 Global Mirror relationships for the
GM_DB_Pri and GM_DBLog_Pri VDisks. We also make them members of the
CG_W2K3_GM Global Mirror consistency group.
We use the svcinfo lsvdisk command to list all of the VDisks in the ITSO-CLS1 cluster and,
then, use the svcinfo lsrcrelationshipcandidate command to show the possible VDisk
candidates for GM_DB_Pri in ITSO-CLS4.
After checking all of these conditions, use the svctask mkrcrelationship command to create
the Global Mirror relationship.
To verify the newly created Global Mirror relationships, list them with the svcinfo
lsrcrelationship command.
Example 7-167 Creating GMREL1 and GMREL2 Global Mirror relationships
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=GM*
id name         IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count fast_write_state
16 GM_App_Pri   0           io_grp0       online 0            MDG_DS47       1.00GB   striped                             6005076801AB813F1000000000000013 0            1          empty
17 GM_DB_Pri    0           io_grp0       online 0            MDG_DS47       1.00GB   striped                             6005076801AB813F1000000000000014 0            1          empty
18 GM_DBLog_Pri 0           io_grp0       online 0            MDG_DS47       1.00GB   striped                             6005076801AB813F1000000000000015 0            1          empty
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master GM_DB_Pri
id vdisk_name
0  MM_DB_Sec
1  MM_Log_Sec
2  MM_App_Sec
3  GM_App_Sec
4  GM_DB_Sec
5  GM_DBLog_Sec
6  SEV
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO-CLS4
-consistgrp CG_W2K3_GM -name GMREL1 -global
RC Relationship, id [17], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO-CLS4
-consistgrp CG_W2K3_GM -name GMREL2 -global
RC Relationship, id [18], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name   master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id   aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state                bg_copy_priority progress copy_type
17 GMREL1 000002006AE04FC4  ITSO-CLS1           17              GM_DB_Pri         0000020063E03A38 ITSO-CLS4        4            GM_DB_Sec      master  0                    CG_W2K3_GM             inconsistent_stopped 50               0        global
18 GMREL2 000002006AE04FC4  ITSO-CLS1           18              GM_DBLog_Pri      0000020063E03A38 ITSO-CLS4        5            GM_DBLog_Sec   master  0                    CG_W2K3_GM             inconsistent_stopped 50               0        global
7.13.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri
In Example 7-168, we create the stand-alone Global Mirror relationship GMREL3 for
GM_App_Pri. After it is created, we will check the status of each of our Global Mirror
relationships.
Notice that the status of GMREL3 is consistent_stopped, because it was created with the
-sync option. The -sync option indicates that the secondary (auxiliary) VDisk is already
synchronized with the primary (master) VDisk. The initial background synchronization is
skipped when this option is used.
GMREL1 and GMREL2 are in the inconsistent_stopped state, because they were not created
with the -sync option, so their auxiliary VDisks need to be synchronized with their primary
VDisks.
Example 7-168 Creating a stand-alone Global Mirror relationship and verifying it
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO-CLS4
-sync -name GMREL3 -global
RC Relationship, id [16], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_
name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority
:progress:copy_type
16:GMREL3:000002006AE04FC4:ITSO-CLS1:16:GM_App_Pri:0000020063E03A38:ITSO-CLS4:3:GM_App_Sec:master:::consist
ent_stopped:50:100:global
17:GMREL1:000002006AE04FC4:ITSO-CLS1:17:GM_DB_Pri:0000020063E03A38:ITSO-CLS4:4:GM_DB_Sec:master:0:CG_W2K3_G
M:inconsistent_stopped:50:0:global
18:GMREL2:000002006AE04FC4:ITSO-CLS1:18:GM_DBLog_Pri:0000020063E03A38:ITSO-CLS4:5:GM_DBLog_Sec:master:0:CG_
W2K3_GM:inconsistent_stopped:50:0:global
7.13.7 Starting Global Mirror
Now that we have created the Global Mirror consistency group and relationships, we are
ready to use the Global Mirror relationships in our environment.
When implementing Global Mirror, the goal is to reach a consistent and synchronized state
that can provide redundancy in case a hardware failure occurs that affects the SAN at the
production site.
In this section, we show how to start the stand-alone Global Mirror relationships and the
consistency group.
7.13.8 Starting a stand-alone Global Mirror relationship
In Example 7-169, we start the stand-alone Global Mirror relationship named GMREL3.
Because the Global Mirror relationship was in the Consistent stopped state and no updates
have been made to the primary VDisk, the relationship quickly enters the Consistent
synchronized state.
Example 7-169 Starting the stand-alone Global Mirror relationship
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
7.13.9 Starting a Global Mirror consistency group
In Example 7-170, we start the CG_W2K3_GM Global Mirror consistency group.
Because the consistency group was in the Inconsistent stopped state, it enters the
Inconsistent copying state until the background copy has completed for all of the relationships
that are in the consistency group.
Upon completion of the background copy, the CG_W2K3_GM Global Mirror consistency
group enters the Consistent synchronized state (see Example 7-170).
Example 7-170 Starting the Global Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
7.13.10 Monitoring background copy progress
To monitor the background copy progress, use the svcinfo lsrcrelationship command.
Used without any parameters, this command shows all of the defined Global Mirror
relationships. In the command output, progress indicates the current background copy
progress. Example 7-171 shows our Global Mirror relationships.
Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Global Mirror consistency groups or relationships change state.
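For example, you can define an SNMP manager with the svctask mksnmpserver command, which we describe in 7.14.3, “Setting up SNMP notification”; the IP address in this sketch is from our lab setup:
svctask mksnmpserver -error on -warning on -info on -ip 9.43.86.160 -community SVC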
Example 7-171 Monitoring background copy progress example
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL1
id 17
name GMREL1
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 17
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 4
aux_vdisk_name GM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 38
freeze_time
status online
sync
copy_type global
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL2
id 18
name GMREL2
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 18
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 5
aux_vdisk_name GM_DBLog_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 40
freeze_time
status online
sync
copy_type global
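You can also poll the copy progress from your management workstation instead of from an interactive session. This sketch assumes a saved PuTTY session named ITSOCL1, as used with pscp elsewhere in this chapter, and uses plink (the PuTTY command-line client) to run the query remotely:
C:\>plink -load ITSOCL1 svcinfo lsrcrelationship -delim : GMREL1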
When all of the Global Mirror relationships complete the background copy, the consistency
group enters the Consistent synchronized state, as shown in Example 7-172.
Example 7-172 Listing the Global Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
7.13.11 Stopping and restarting Global Mirror
Now that the Global Mirror consistency group and relationships are running, we describe
how to stop, restart, and change the direction of the stand-alone Global Mirror
relationships, as well as the consistency group.
First, we show how to stop and restart the stand-alone Global Mirror relationships and the
consistency group.
7.13.12 Stopping a stand-alone Global Mirror relationship
In Example 7-173, we stop the stand-alone Global Mirror relationship while enabling access
(write I/O) to both the primary and the secondary VDisk; as a result, the relationship enters
the Idling state.
Example 7-173 Stopping the stand-alone Global Mirror relationship
IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type global
7.13.13 Stopping a Global Mirror consistency group
In Example 7-174, we stop the Global Mirror consistency group without specifying the
-access parameter; therefore, the consistency group enters the Consistent stopped state.
Example 7-174 Stopping a Global Mirror consistency group without specifying -access
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
If, afterwards, we want to enable access (write I/O) for the secondary VDisk, we can reissue
the svctask stoprcconsistgrp command, specifying the -access parameter, and the
consistency group transitions to the Idling state, as shown in Example 7-175.
Example 7-175 Stopping a Global Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
7.13.14 Restarting a Global Mirror relationship in the Idling state
When restarting a Global Mirror relationship in the Idling state, we must specify the copy
direction.
If any updates have been performed on either the master or the auxiliary VDisk, consistency
will be compromised. Therefore, we must specify the -force parameter to restart the
relationship; if the -force parameter is not used, the command will fail. Example 7-176 shows
the relationship being restarted with the -force parameter.
Example 7-176 Restarting a Global Mirror relationship after updates in the Idling state
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
7.13.15 Restarting a Global Mirror consistency group in the Idling state
When restarting a Global Mirror consistency group in the Idling state, we must specify the
copy direction.
If any updates have been performed on either the master or the auxiliary VDisk in any of the
Global Mirror relationships in the consistency group, consistency will be compromised.
Therefore, we must specify the -force parameter to start the consistency group; if the -force
parameter is not used, the command will fail.
In Example 7-177, we restart the consistency group and change the copy direction by
specifying the auxiliary VDisks to become the primaries.
Example 7-177 Restarting a Global Mirror relationship while changing the copy direction
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
7.13.16 Changing direction for Global Mirror
In this section, we show how to change the copy direction of the stand-alone Global Mirror
relationships and the consistency group.
7.13.17 Switching copy direction for a Global Mirror relationship
When a Global Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship by using the svctask switchrcrelationship command and
specifying which VDisk (master or auxiliary) is to become the primary.
If the VDisk that is specified as the primary when issuing this command is already a primary,
the command has no effect.
In Example 7-178, we change the copy direction for the stand-alone Global Mirror
relationship, specifying the auxiliary VDisk to become the primary.
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the VDisk that transitions from primary to secondary, because all I/O will be inhibited to
that VDisk when it becomes the secondary. Therefore, careful planning is required prior to
using the svctask switchrcrelationship command.
Example 7-178 Switching the copy direction for a Global Mirror relationship
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
7.13.18 Switching copy direction for a Global Mirror consistency group
When a Global Mirror consistency group is in the Consistent synchronized state, we can
change the copy direction for the consistency group by using the svctask switchrcconsistgrp
command and specifying which side (master or auxiliary) is to become the primary.
If the specified side is already the primary when you issue this command, the command has
no effect.
In Example 7-179, we change the copy direction for the Global Mirror consistency group,
specifying the auxiliary to become the primary.
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the VDisk that transitions from primary to secondary, because all I/O will be inhibited
when that VDisk becomes the secondary. Therefore, careful planning is required prior to
using the svctask switchrcconsistgrp command.
Example 7-179 Switching the copy direction for a Global Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
7.14 Service and maintenance
This section details the various service and maintenance tasks that you can execute within
the SVC environment.
7.14.1 Upgrading software
This section explains how to upgrade the SVC software.
Package numbering and version
The format for software upgrade packages is four positive integers that are separated by
periods. For example, a software upgrade package number looks similar to 5.1.0.0, and
each software package is given a unique number.
Requirement: It is mandatory that the cluster runs SVC 4.3.1.7 code before you upgrade
to SVC 5.1.0.0 code.
Check the recommended software levels at this Web site:
http://www.ibm.com/storage/support/2145
SVC software upgrade test utility
The SAN Volume Controller Software Upgrade Test Utility, which resides on the Master
Console, will check software levels in the system against the recommended levels, which will
be documented on the support Web site. You will be informed if the software levels are
up-to-date, or if you need to download and install newer levels. You can download the utility
and installation instructions from this link:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585
After the software file has been uploaded to the cluster (to the /home/admin/upgrade
directory), you can select the software and apply it to the cluster by using the Web interface
or the svctask applysoftware command. When a new code level is applied, it is automatically
installed on all of the nodes within the cluster.
The underlying command-line tool runs the sw_preinstall script, which checks the validity of
the upgrade file, and whether it can be applied over the current level. If the upgrade file is
unsuitable, the pre-install script deletes the files, which prevents the buildup of invalid files on
the cluster.
Precaution before upgrade
Software installation is normally considered to be a client’s task. The SVC supports
concurrent software upgrade. You can perform the software upgrade concurrently with I/O
user operations and certain management activities, but only limited CLI commands will be
operational from the time that the install command starts until the upgrade operation has
either terminated successfully or been backed out. Certain commands will fail with a message
indicating that a software upgrade is in progress.
Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs
are working. Otherwise, the applications might have I/O failures during the software upgrade.
Ensure that all I/O paths between all hosts and SANs are working by using the Subsystem
Device Driver (SDD) command. Example 7-180 shows the output.
Example 7-180 Query adapter
#datapath query adapter

Active Adapters :2

Adpt#  Name    State   Mode    Select  Errors  Paths  Active
    0  fscsi0  NORMAL  ACTIVE    1445       0      4       4
    1  fscsi1  NORMAL  ACTIVE    1888       0      4       4

#datapath query device

Total Devices : 2

DEV#: 0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path#  Adapter/Hard Disk  State  Mode    Select  Errors
    0  fscsi0/hdisk3      OPEN   NORMAL       0       0
    1  fscsi1/hdisk7      OPEN   NORMAL     972       0

DEV#: 1  DEVICE NAME: vpath1  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path#  Adapter/Hard Disk  State  Mode    Select  Errors
    0  fscsi0/hdisk4      OPEN   NORMAL     784       0
    1  fscsi1/hdisk8      OPEN   NORMAL       0       0
Write-through mode: During a software upgrade, there are periods when not all of the
nodes in the cluster are operational; as a result, the cache operates in write-through
mode. Write-through mode affects the throughput, latency, and bandwidth aspects of
performance.
Verify that your uninterruptible power supply unit configuration is also set up correctly (even if
your cluster is running without problems). Specifically, make sure that the following conditions
are true:
Your uninterruptible power supply units are all getting their power from an external source,
and they are not daisy chained. Make sure that each uninterruptible power supply unit is
not supplying power to another node’s uninterruptible power supply unit.
The power cable and the serial cable, which comes from each node, go back to the same
uninterruptible power supply unit. If the cables are crossed and go back to separate
uninterruptible power supply units, during the upgrade, while one node is shut down,
another node might also be mistakenly shut down.
Important: Do not share the SVC uninterruptible power supply unit with any other devices.
You must also ensure that all I/O paths are working for each host that runs I/O operations to
the SAN during the software upgrade. You can check the I/O paths by using the datapath
query commands.
You do not need to check for hosts that have no active I/O operations to the SAN during the
software upgrade.
Procedure
To upgrade the SVC cluster software, perform the following steps:
1. Before starting the upgrade, you must back up the configuration (see 7.14.9, “Backing up
the SVC cluster configuration” on page 466) and save the backup config file in a safe
place.
2. Also, save the data collection for support diagnosis in case of problems, as shown in
Example 7-181 on page 452.
Example 7-181 svc_snap command
IBM_2145:ITSO-CLS1:admin>svc_snap
Collecting system information...
Copying files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Dumping error log...
Creating snap package...
Snap data collected in /dumps/snap.104643.080617.002427.tgz
3. List the dump that was generated by the previous command, as shown in Example 7-182.
Example 7-182 svcinfo ls2145dumps command
IBM_2145:ITSO-CLS1:admin>svcinfo ls2145dumps
id 2145_filename
0  svc.config.cron.bak_node3
1  svc.config.cron.bak_SVCNode_2
2  svc.config.cron.bak_node1
3  dump.104643.070803.015424
4  dump.104643.071010.232740
5  svc.config.backup.bak_ITSOCL1_N1
6  svc.config.backup.xml_ITSOCL1_N1
7  svc.config.backup.tmp.xml
8  svc.config.cron.bak_ITSOCL1_N1
9  dump.104643.080609.202741
10 104643.080610.154323.ups_log.tar.gz
11 104643.trc.old
12 dump.104643.080609.212626
13 104643.080612.221933.ups_log.tar.gz
14 svc.config.cron.bak_Node1
15 svc.config.cron.log_Node1
16 svc.config.cron.sh_Node1
17 svc.config.cron.xml_Node1
18 dump.104643.080616.203659
19 104643.trc
20 ups_log.a
21 snap.104643.080617.002427.tgz
22 ups_log.b
4. Save the generated dump in a safe place using the pscp command, as shown in
Example 7-183.
Example 7-183 pscp -load command
C:\>pscp -load ITSOCL1 [email protected]:/dumps/snap.104643.080617.002427.tgz c:\
snap.104643.080617.002427 | 597 kB | 597.7 kB/s | ETA: 00:00:00 | 100%
5. Upload the new software package using PuTTY Secure Copy. Enter the command, as
shown in Example 7-184 on page 453.
Example 7-184 pscp -load command
C:\>pscp -load ITSOCL1 IBM2145_INSTALL_4.3.0.0
[email protected]:/home/admin/upgrade
IBM2145_INSTALL_4.3.0.0-0 | 103079 kB | 9370.8 kB/s | ETA: 00:00:00 | 100%
6. Upload the SAN Volume Controller Software Upgrade Test Utility by using PuTTY Secure
Copy. Enter the command, as shown in Example 7-185.
Example 7-185 Upload utility
C:\>pscp -load ITSOCL1 IBM2145_INSTALL_svcupgradetest_1.11
[email protected]:/home/admin/upgrade
IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100%
7. Verify that the packages were successfully delivered through the PuTTY command-line
application by entering the svcinfo lssoftwaredumps command, as shown in
Example 7-186.
Example 7-186 svcinfo lssoftwaredumps command
IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwaredumps
id software_filename
0  IBM2145_INSTALL_4.3.0.0
1  IBM2145_INSTALL_svcupgradetest_1.11
8. Now that the packages are uploaded, first install the SAN Volume Controller Software
Upgrade Test Utility, as shown in Example 7-187.
Example 7-187 svctask applysoftware command
IBM_2145:ITSO-CLS1:admin>svctask applysoftware -file
IBM2145_INSTALL_svcupgradetest_1.11
CMMVC6227I The package installed successfully.
9. Using the following command, test the upgrade for known issues that might prevent a
software upgrade from completing successfully, as shown in Example 7-188.
Example 7-188 svcupgradetest command
IBM_2145:ITSO-CLS1:admin>svcupgradetest
svcupgradetest version 1.11. Please wait while the tool tests
for issues that may prevent a software upgrade from completing
successfully. The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.
Important: If the svcupgradetest command produces any errors, troubleshoot the errors
using the maintenance procedures before continuing further.
10.Now, use the svctask command set to apply the software upgrade, as shown in
Example 7-189.
Example 7-189 Apply upgrade command example
IBM_2145:ITSOSVC42A:admin>svctask applysoftware -file IBM2145_INSTALL_4.3.0.0
While the upgrade runs, you can check the status, as shown in Example 7-190.
Example 7-190 Check update status
IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwareupgradestatus
status
upgrading
11.The new code is distributed and applied to each node in the SVC cluster. After installation,
each node is automatically restarted one at a time. If a node does not restart automatically
during the upgrade, you must repair it manually.
Solid-state drives: If you use solid-state drives, the data of the solid-state drive within
the restarted node will not be available during the reboot.
12.Eventually both nodes display Cluster: on line one on the SVC front panel and the name
of your cluster on line two of the SVC front panel. Be prepared for a wait (in our case, we
waited approximately 40 minutes).
Performance: During this process, your CLI and GUI response varies from sluggish
(slow) to unresponsive. The important point is that host I/O continues throughout this
process.
13.To verify that the upgrade was successful, you can perform either of the following options:
– Run the svcinfo lscluster and svcinfo lsnodevpd commands, as shown in
Example 7-191. We have truncated the lscluster and lsnodevpd information for this
example.
Example 7-191 svcinfo lscluster and svcinfo lsnodevpd commands
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
id 0000020060806FCA
name ITSO-CLS1
location local
partnership
bandwidth
cluster_IP_address 9.43.86.117
cluster_service_IP_address 9.43.86.118
total_mdisk_capacity 756.0GB
space_in_mdisk_grps 756.0GB
space_allocated_to_vdisks 156.00GB
total_free_space 600.0GB
statistics_status off
statistics_frequency 15
required_memory 8192
cluster_locale en_US
SNMP_setting none
SNMP_community
SNMP_server_IP_address 0.0.0.0
subnet_mask 255.255.252.0
default_gateway 9.43.85.1
time_zone 522 UTC
email_setting
email_id
code_level 4.3.0.0 (build 8.15.0806110000)
FC_port_speed 2Gb
console_IP 9.43.86.115:9080
id_alias 0000020060806FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
email_server 127.0.0.1
email_server_port 25
email_reply [email protected]
email_contact ITSO User
email_contact_primary 555-1234
email_contact_alternate
email_contact_location ITSO
email_state running
email_user_count 1
inventory_mail_interval 0
cluster_IP_address_6
cluster_service_IP_address_6
prefix_6
default_gateway_6
total_vdiskcopy_capacity 156.00GB
total_used_capacity 156.00GB
total_overallocation 20
total_vdisk_capacity 156.00GB
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsnodevpd 1
id 1
system board: 24 fields
part_number 31P0906
system_serial_number 13DVT31
number_of_processors 4
number_of_memory_slots 8
number_of_fans 6
number_of_FC_cards 1
number_of_scsi/ide_devices 2
BIOS_manufacturer IBM
BIOS_version -[GFE136BUS-1.09]
BIOS_release_date 02/08/2008
system_manufacturer IBM
system_product IBM System x3550 -[21458G4]
...
software: 6 fields
code_level 4.3.0.0 (build 8.15.0806110000)
node_name Node1
ethernet_status 1
WWNN 0x50050768010037e5
id 1
– Copy the error log to your management workstation, as explained in 7.14.2, “Running
maintenance procedures” on page 456. Open the error log in WordPad and search for
Software Install completed.
You have now completed the required tasks to upgrade the SVC software.
7.14.2 Running maintenance procedures
Use the svctask finderr command to generate a list of any unfixed errors in the system. This
command analyzes the last generated log that resides in the /dumps/elogs/ directory on the
cluster.
If you want to generate a new log before analyzing unfixed errors, run the svctask
dumperrlog command (Example 7-192).
Example 7-192 svctask dumperrlog command
IBM_2145:ITSO-CLS2:admin>svctask dumperrlog
This command generates an errlog_timestamp file, such as errlog_100048_080618_042419,
where:
errlog is part of the default prefix for all error log files.
100048 is the panel name of the current configuration node.
080618 is the date (YYMMDD).
042419 is the time (HHMMSS).
You can add the -prefix parameter to your command to change the default prefix of errlog to
something else (Example 7-193).
Example 7-193 svctask dumperrlog -prefix command
IBM_2145:ITSO-CLS2:admin>svctask dumperrlog -prefix svcerrlog
This command creates a file called svcerrlog_timestamp.
To see the file name, you must enter the following command (Example 7-194).
Example 7-194 svcinfo lserrlogdumps command
IBM_2145:ITSO-CLS2:admin>svcinfo lserrlogdumps
id filename
0  errlog_100048_080618_042049
1  errlog_100048_080618_042128
2  errlog_100048_080618_042355
3  errlog_100048_080618_042419
4  errlog_100048_080618_175652
5  errlog_100048_080618_175702
6  errlog_100048_080618_175724
7  errlog_100048_080619_205900
8  errlog_100048_080624_170214
9  svcerrlog_100048_080624_170257
Maximum number of error log dump files: A maximum of ten error log dump files per
node will be kept on the cluster. When the eleventh dump is made, the oldest existing
dump file for that node will be overwritten. Note that the directory might also hold log files
retrieved from other nodes. These files are not counted. The SVC will delete the oldest file
(when necessary) for this node in order to maintain the maximum number of files. The SVC
will not delete files from other nodes unless you issue the cleardumps command.
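If you want to clear old dump files yourself, you can use the svctask cleardumps command. This sketch assumes the directory-prefix form of the command:
IBM_2145:ITSO-CLS2:admin>svctask cleardumps -prefix /dumps/elogs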
After you generate your error log, you can issue the svctask finderr command to scan the
error log for any unfixed errors, as shown in Example 7-195.
Example 7-195 svctask finderr command
IBM_2145:ITSO-CLS2:admin>svctask finderr
Highest priority unfixed error code is [1230]
As you can see, we have one unfixed error on our system. To analyze this error in more
detail, copy the error log from the cluster to your local management workstation by using
PuTTY Secure Copy, as shown in Example 7-196.
Example 7-196 pscp command: Copy error logs off of the SVC
In Windows: select Start → Run, and enter cmd
C:\Program Files\PuTTY>pscp -load SVC_CL2 [email protected]:/dumps/elogs/svcerrlog_100048_080624_170257 c:\temp\svcerrlog.txt
svcerrlog.txt | 6390 kB | 3195.1 kB/s | ETA: 00:00:00 | 100%
In order to use the Run option, you must know where your pscp.exe is located. In this case, it
is in the C:\Program Files\PuTTY\ folder.
This command copies the file called svcerrlog_100048_080624_170257 to the C:\temp
directory on our local workstation and calls the file svcerrlog.txt.
Open the file in WordPad (Notepad does not format the window as well). You will see
information similar to what is shown in Example 7-197. We truncated this list for the purposes
of this example.
Example 7-197 errlog in WordPad
Error Log Entry 400
Node Identifier       : Node2
Object Type           : device
Object ID             : 0
Copy ID               :
Sequence Number       : 37404
Root Sequence Number  : 37404
First Error Timestamp : Sat Jun 21 00:08:21 2008
                      : Epoch + 1214006901
Last Error Timestamp  : Sat Jun 21 00:11:36 2008
                      : Epoch + 1214007096
Error Count           : 2
Error ID              : 10013 : Login Excluded
Error Code            : 1230 : Login excluded
Status Flag           : UNFIXED
Type Flag             : TRANSIENT ERROR
...
Scrolling through, or searching for the term unfixed, you can find more detail about the
problem. You might see more entries in the error log that have the status of unfixed.
After you take the necessary steps to rectify the problem, you can mark the error as fixed in
the log by issuing the svctask cherrstate command against its sequence numbers
(Example 7-198).
Example 7-198 svctask cherrstate command
IBM_2145:ITSO-CLS2:admin>svctask cherrstate -sequencenumber 37404
If you accidentally mark the wrong error as fixed, you can mark it as unfixed again by entering
the same command and appending the -unfix flag to the end, as shown in Example 7-199.
Example 7-199 unfix flag
IBM_2145:ITSO-CLS2:admin>svctask cherrstate -sequencenumber 37406 -unfix
7.14.3 Setting up SNMP notification
To set up error notification, use the svctask mksnmpserver command.
Example 7-200 shows an example of the mksnmpserver command.
Example 7-200 svctask mksnmpserver command
IBM_2145:ITSO-CLS2:admin>svctask mksnmpserver -error on -warning on -info on -ip
9.43.86.160 -community SVC
SNMP Server id [1] successfully created
This command sends all error, warning, and informational events to the SVC community
on the SNMP manager with the IP address 9.43.86.160.
7.14.4 Setting up syslog event notification
Starting with SVC 5.1, you can save a syslog to a defined syslog server. The SVC now
provides support for syslog in addition to e-mail and SNMP traps.
The syslog protocol is a client-server standard for forwarding log messages from a sender to
a receiver on an IP network. You can use syslog to integrate log messages from various types
of systems into a central repository. You can configure SVC 5.1 to send information to six
syslog servers.
You use the svctask mksyslogserver command to configure the SVC using the CLI, as
shown in Example 7-201.
Using this command with the -h parameter gives you information about all of the available
options. In our example, we only configure the SVC to use the default values for our syslog
server.
Example 7-201 Configuring the syslog
IBM_2145:ITSO-CLS2:admin>svctask mksyslogserver -ip 10.64.210.231 -name
Syslogserv1
Syslog Server id [1] successfully created
When we have configured our syslog server, we can display the current syslog server
configurations in our cluster, as shown in Example 7-202.
Example 7-202 svcinfo lssyslogserver command
IBM_2145:ITSO-CLS2:admin>svcinfo lssyslogserver
id name        IP_address    facility error warning info
0  Syslogsrv   10.64.210.230 4        on    on      on
1  Syslogserv1 10.64.210.231 0        on    on      on
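For illustration, a fully parameterized definition might look like the following sketch. The
parameter names correspond to the columns in the svcinfo lssyslogserver output, but the IP
address, server name, and facility value here are hypothetical; verify the available options
with the -h parameter before use:
IBM_2145:ITSO-CLS2:admin>svctask mksyslogserver -ip 10.64.210.232 -name Syslogserv2 -facility 4 -error on -warning on -info off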
7.14.5 Configuring error notification using an e-mail server
The SVC can use an e-mail server to send event notification and inventory e-mails to e-mail
users. It can transmit any combination of error, warning, and informational notification types.
The SVC supports up to six e-mail servers to provide redundant access to the external e-mail
network. The SVC uses the e-mail servers in sequence until the e-mail is successfully sent
from the SVC.
Important: Before the SVC can start sending e-mails, we must run the svctask
startemail command, which enables this service.
The attempt is successful when the SVC gets a positive acknowledgement from an e-mail
server that the e-mail has been received by the server.
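A minimal sketch of enabling the service follows; no parameters are required:
IBM_2145:ITSO-CLS1:admin>svctask startemail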
If no port is specified, port 25 is the default port, as shown in Example 7-203.
Example 7-203 The mkemailserver command syntax
IBM_2145:ITSO-CLS1:admin>svctask mkemailserver -ip 192.168.1.1
Email Server id [0] successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsemailserver 0
id 0
name emailserver0
IP_address 192.168.1.1
port 25
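If your e-mail server listens on another port, we expect that it can be supplied explicitly. The
following sketch assumes a -port parameter that corresponds to the port field in the
lsemailserver output; the IP address and port value here are illustrative:
IBM_2145:ITSO-CLS1:admin>svctask mkemailserver -ip 192.168.1.2 -port 587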
We can configure e-mail users that will receive e-mail notifications from the SVC cluster.
We can define up to 12 users to receive e-mails from our SVC.
Using the svcinfo lsemailuser command, we can verify who is already registered and what
type of information is sent to that user, as shown in Example 7-204.
Example 7-204 svcinfo lsemailuser command
IBM_2145:ITSO-CLS2:admin>svcinfo lsemailuser
id name               address           user_type error warning info inventory
0  IBM_Support_Center [email protected] support   on    off     off  on
We can also create a new user, in this case for a SAN administrator, as shown in Example 7-205.
Example 7-205 svctask mkemailuser command
IBM_2145:ITSO-CLS2:admin>svctask mkemailuser -address [email protected] -error on
-warning on -info on -inventory on
User, id [1], successfully created
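If a recipient changes or must be removed later, the svctask chemailuser and svctask
rmemailuser commands can modify or delete the entry. A minimal removal sketch, using the
user ID reported by svcinfo lsemailuser (verify the exact syntax with -h):
IBM_2145:ITSO-CLS2:admin>svctask rmemailuser 1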
7.14.6 Analyzing the error log
The following types of events and errors are logged in the error log:
Events: State changes are detected by the cluster software and are logged for
informational purposes. Events are recorded in the cluster error log.
Errors: Hardware or software problems are detected by the cluster software and require
repair. Errors are recorded in the cluster error log.
Unfixed errors: Errors were detected and recorded in the cluster error log and have not yet
been corrected or repaired.
Fixed errors: Errors were detected and recorded in the cluster error log and have
subsequently been corrected or repaired.
To display the error log, use the svcinfo lserrlog command or the svcinfo caterrlog
command, as shown in Example 7-206 (the output is the same).
Example 7-206 svcinfo caterrlog command
IBM_2145:ITSOSVC42A:admin>svcinfo caterrlog -delim :
id:type:fixed:SNMP_trap_raised:error_type:node_name:sequence_number:root_sequence_
number:first_timestamp:last_timestamp:number_of_errors:error_code
0:cluster:no:no:5:SVCNode_1:0:0:070606094909:070606094909:1:00990101
0:cluster:no:no:5:SVCNode_1:0:0:070606094909:070606094909:1:00990101
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094858:070606094858:1:00990145
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094539:070606094539:1:00990173
0:internal:no:no:5:SVCNode_1:0:0:070606094507:070606094507:1:00990219
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094208:070606094208:1:00990148
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094139:070606094139:1:00990145
.........
IBM_2145:ITSO-CLS1:admin>svcinfo caterrlog -delim ,
id,type,fixed,SNMP_trap_raised,error_type,node_name,sequence_number,root_sequence_
number,first_timestamp,last_timestamp,number_of_errors,error_code,copy_id
0,cluster,no,yes,6,n4,171,170,080624115947,080624115947,1,00981001,
0,cluster,no,yes,6,n4,170,170,080624115932,080624115932,1,00981001,
0,cluster,no,no,5,n1,0,0,080624105428,080624105428,1,00990101,
0,internal,no,no,5,n1,0,0,080624095359,080624095359,1,00990219,
0,internal,no,no,5,n1,0,0,080624094301,080624094301,1,00990220,
0,internal,no,no,5,n1,0,0,080624093355,080624093355,1,00990220,
11,vdisk,no,no,5,n1,0,0,080623150020,080623150020,1,00990183,
4,vdisk,no,no,5,n1,0,0,080623145958,080623145958,1,00990183,
5,vdisk,no,no,5,n1,0,0,080623145934,080623145934,1,00990183,
11,vdisk,no,no,5,n1,0,0,080623145017,080623145017,1,00990182,
6,vdisk,no,no,5,n1,0,0,080623144153,080623144153,1,00990183,
.
This command displays the most recently generated error log. Use the method that is
described in 7.14.2, “Running maintenance procedures” on page 456 to upload and analyze
the error log in more detail.
To clear the error log, you can issue the svctask clearerrlog command, as shown in
Example 7-207.
Example 7-207 svctask clearerrlog command
IBM_2145:ITSO-CLS1:admin>svctask clearerrlog
Do you really want to clear the log? y
Using the -force flag will stop any confirmation requests from appearing.
When executed, this command will clear all of the entries from the error log. This process will
proceed even if there are unfixed errors in the log. It also clears any status events that are in
the log.
This command is a destructive command for the error log. Only use this command when you
have either rebuilt the cluster, or when you have fixed a major problem that has caused many
entries in the error log that you do not want to fix manually.
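For example, a sketch of a non-interactive invocation, suitable for scripted use:
IBM_2145:ITSO-CLS1:admin>svctask clearerrlog -force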
7.14.7 License settings
To change the licensing feature settings, use the svctask chlicense command.
Before you change the licensing, you can display the licenses that you already have by
issuing the svcinfo lslicense command, as shown in Example 7-208.
Example 7-208 svcinfo lslicense command
IBM_2145:ITSO-CLS1:admin>svcinfo lslicense
used_flash 0.00
used_remote 0.00
used_virtualization 0.74
license_flash 50
license_remote 20
license_virtualization 80
The current license settings for the cluster are displayed in the viewing license settings log
window. These settings show whether you are licensed to use the FlashCopy, Metro Mirror,
Global Mirror, or Virtualization features. They also show the storage capacity that is licensed
for virtualization. Typically, the license settings log contains entries, because feature options
must be set as part of the Web-based cluster creation process.
Consider, for example, that you have purchased an additional 5 TB of licensing for the Metro
Mirror and Global Mirror feature. Example 7-209 on page 462 shows the command that you
enter.
Example 7-209 svctask chlicense command
IBM_2145:ITSO-CLS1:admin>svctask chlicense -remote 25
To turn a feature off, add 0 TB as the capacity for the feature that you want to disable.
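For example, the following sketch disables the FlashCopy feature, assuming the -flash
parameter that pairs with the license_flash value shown by svcinfo lslicense:
IBM_2145:ITSO-CLS1:admin>svctask chlicense -flash 0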
To verify that the changes you have made are reflected in your SVC configuration, you can
issue the svcinfo lslicense command as before (see Example 7-210).
Example 7-210 svcinfo lslicense command: Verifying changes
IBM_2145:ITSO-CLS1:admin>svcinfo lslicense
used_flash 0.00
used_remote 0.00
used_virtualization 0.74
license_flash 50
license_remote 25
license_virtualization 80
7.14.8 Listing dumps
Several commands are available for you to list the dumps that were generated over a period
of time. You can use the lsxxxxdumps commands, where xxxx identifies the object type, to
return a list of dumps in the appropriate directory.
These object dumps are available:
lserrlogdumps
lsfeaturedumps
lsiotracedumps
lsiostatsdumps
lssoftwaredumps
ls2145dumps
If no node is specified, the command lists the dumps that are available on the configuration
node.
Error or event dump
The dumps that are contained in the /dumps/elogs directory are dumps of the contents of the
error and event log at the time that the dump was taken. You create an error or event log
dump by using the svctask dumperrlog command. This command dumps the contents of the
error or event log to the /dumps/elogs directory. If you do not supply a file name prefix, the
system uses the default errlog_ file name prefix. The full, default file name is
errlog_NNNNNN_YYMMDD_HHMMSS. In this file name, NNNNNN is the node front panel name. If
the command is used with the -prefix option, the value that is entered for the -prefix is used
instead of errlog.
The svcinfo lserrlogdumps command lists all of the dumps in the /dumps/elogs directory
(Example 7-211).
Example 7-211 svcinfo lserrlogdumps command
IBM_2145:ITSO-CLS1:admin>svcinfo lserrlogdumps
id filename
0  errlog_104643_080617_172859
1  errlog_104643_080618_163527
2  errlog_104643_080619_164929
3  errlog_104643_080619_165117
4  errlog_104643_080624_093355
5  svcerrlog_104643_080624_094301
6  errlog_104643_080624_120807
7  errlog_104643_080624_121102
8  errlog_104643_080624_122204
9  errlog_104643_080624_160522
Featurization log dump
The dumps that are contained in the /dumps/feature directory are dumps of the featurization
log. A featurization log dump is created by using the svctask dumpinternallog command.
This command dumps the contents of the featurization log to a file called feature.txt in the
/dumps/feature directory. Only one of these files exists, so every time that the svctask
dumpinternallog command is run, this file is overwritten.
The svcinfo lsfeaturedumps command lists all of the dumps in the /dumps/feature
directory (Example 7-212).
Example 7-212 svcinfo lsfeaturedumps command
IBM_2145:ITSO-CLS1:admin>svcinfo lsfeaturedumps
id feature_filename
0  feature.txt
I/O trace dump
Dumps that are contained in the /dumps/iotrace directory are dumps of I/O trace data. The
type of data that is traced depends on the options that are specified by the svctask settrace
command. The collection of the I/O trace data is started by using the svctask starttrace
command. The I/O trace data collection is stopped when the svctask stoptrace command is
used. When the trace is stopped, the data is written to the file.
The file name is prefix_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel
name, and prefix is the value that is entered by the user for the -filename parameter in the
svctask settrace command.
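Putting these commands together, a sketch of a complete trace cycle follows. The -filename
value is illustrative, and any data-selection options for svctask settrace are omitted here;
consult the command help for them:
IBM_2145:ITSO-CLS1:admin>svctask settrace -filename mytrace
IBM_2145:ITSO-CLS1:admin>svctask starttrace
IBM_2145:ITSO-CLS1:admin>svctask stoptrace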
The command to list all of the dumps in the /dumps/iotrace directory is the svcinfo
lsiotracedumps command (Example 7-213).
Example 7-213 svcinfo lsiotracedumps command
IBM_2145:ITSO-CLS1:admin>svcinfo lsiotracedumps
id iotrace_filename
0  tracedump_104643_080624_172208
1  iotrace_104643_080624_172451
I/O statistics dump
The dumps that are contained in the /dumps/iostats directory are the dumps of the I/O
statistics for the disks on the cluster. An I/O statistics dump is created by using the svctask
startstats command. As part of this command, you can specify a time interval at which you
want the statistics to be written to the file (the default is 15 minutes). Every time that the time
interval is encountered, the I/O statistics that are collected up to this point are written to a file
in the /dumps/iostats directory.
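For example, a sketch that starts statistics collection at the default 15-minute interval,
assuming the -interval parameter takes the number of minutes:
IBM_2145:ITSO-CLS1:admin>svctask startstats -interval 15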
The file names that are used for storing I/O statistics dumps are
m_stats_NNNNNN_YYMMDD_HHMMSS or v_stats_NNNNNN_YYMMDD_HHMMSS, depending on whether
the statistics are for MDisks or VDisks. In these file names, NNNNNN is the node front panel
name.
The command to list all of the dumps that are in the /dumps/iostats directory is the svcinfo
lsiostatsdumps command (Example 7-214).
Example 7-214 svcinfo lsiostatsdumps command
IBM_2145:ITSO-CLS1:admin>svcinfo lsiostatsdumps
id iostat_filename
0  Nm_stats_104603_071115_020054
1  Nn_stats_104603_071115_020054
2  Nv_stats_104603_071115_020054
3  Nv_stats_104603_071115_022057
........
Software dump
The svcinfo lssoftwaredumps command lists the contents of the /home/admin/upgrade
directory. Any files in this directory are copied there at the time that you perform a software
upgrade. Example 7-215 shows the command.
Example 7-215 svcinfo lssoftwaredumps
IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwaredumps
id software_filename
0  IBM2145_INSTALL_4.3.0.0
Other node dumps
All of the svcinfo lsxxxxdumps commands can accept a node identifier as input (for example,
append the node name to the end of any of the node dump commands). If this identifier is not
specified, the list of files on the current configuration node is displayed. If the node identifier is
specified, the list of files on that node is displayed.
However, files can only be copied from the current configuration node (using PuTTY Secure
Copy). Therefore, you must issue the svctask cpdumps command to copy the files from a
non-configuration node to the current configuration node. Subsequently, you can copy them to
the management workstation using PuTTY Secure Copy.
For example, you discover a dump file and want to copy it to your management workstation
for further analysis. In this case, you must first copy the file to your current configuration node.
To copy dumps from other nodes to the configuration node, use the svctask cpdumps
command.
In addition to the directory, you can specify a file filter. For example, if you specified
/dumps/elogs/*.txt, all of the files in the /dumps/elogs directory that end in .txt are copied.
Wildcards: The following rules apply to the use of wildcards with the SAN Volume
Controller CLI:
The wildcard character is an asterisk (*).
The command can contain a maximum of one wildcard.
When you use a wildcard, you must surround the filter entry with double quotation
marks (""), for example:
>svctask cleardumps -prefix "/dumps/elogs/*.txt"
Example 7-216 shows an example of the cpdumps command.
Example 7-216 svctask cpdumps command
IBM_2145:ITSO-CLS1:admin>svctask cpdumps -prefix /dumps/configs n4
Now that you have copied the configuration dump file from Node n4 to your configuration
node, you can use PuTTY Secure Copy to copy the file to your management workstation for
further analysis.
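The wildcard rules described earlier also apply here. For example, a sketch that copies only
the text-format error logs from node n4:
IBM_2145:ITSO-CLS1:admin>svctask cpdumps -prefix "/dumps/elogs/*.txt" n4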
To clear the dumps, you can run the svctask cleardumps command. Again, you can append
the node name if you want to clear dumps off of a node other than the current configuration
node (the default for the svctask cleardumps command).
The commands in Example 7-217 clear all logs or dumps from the SVC Node n1.
Example 7-217 svctask cleardumps command
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/iostats n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/iotrace n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/feature n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/config n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/elog n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /home/admin/upgrade n1
Application abends dump
The dumps that are contained in the /dumps directory result from application abends
(abnormal ends). These dumps are written to the /dumps directory. The default file names
are dump.NNNNNN.YYMMDD.HHMMSS, where NNNNNN is the node front panel name.
In addition to the dump file, trace files can be written to this directory. These trace files are
named NNNNNN.trc.
The command to list all of the dumps in the /dumps directory is the svcinfo ls2145dumps
command (Example 7-218).
Example 7-218 svcinfo ls2145dumps command
IBM_2145:ITSO-CLS1:admin>svcinfo ls2145dumps
id 2145_filename
0  svc.config.cron.bak_node3
1  svc.config.cron.bak_SVCNode_2
2  dump.104643.070803.015424
3  dump.104643.071010.232740
4  svc.config.backup.bak_ITSOCL1_N1
7.14.9 Backing up the SVC cluster configuration
You can back up your cluster configuration by using the Backing Up a Cluster Configuration
window or the CLI svcconfig command. In this section, we describe the overall procedure for
backing up your cluster configuration and the conditions that must be satisfied to perform a
successful backup.
The backup command extracts configuration data from the cluster and saves it to the
svc.config.backup.xml file in the /tmp directory. This process also produces an
svc.config.backup.sh file. You can study this file to see what other commands were issued
to extract information.
An svc.config.backup.log file is also produced. You can study this log for the details of what
was done and when it was done. This log also includes information about the other
commands that were issued.
Any pre-existing svc.config.backup.xml file is archived as the svc.config.backup.bak file.
The system only keeps one archive. We recommend that you immediately move the .XML file
and related KEY files (see the following limitations) off of the cluster for archiving. Then, erase
the files from the /tmp directory using the svcconfig clear -all command. We also
recommend that you change all of the objects having default names to non-default names.
Otherwise, a warning is produced for each object with a default name, and that object is
restored with “_r” appended to its original name. The underscore (_) prefix is reserved for
backup and restore command usage; do not use this prefix in any object names.
Important: The tool backs up logical configuration data only, not client data. It does not
replace a traditional data backup and restore tool, but this tool supplements a traditional
data backup and restore tool with a way to back up and restore the client’s configuration.
To provide a complete backup and disaster recovery solution, you must back up both user
(non-configuration) data and configuration (non-user) data. After the restoration of the SVC
configuration, you must fully restore user (non-configuration) data to the cluster’s disks.
Prerequisites
You must have the following prerequisites in place:
All nodes must be online.
No object name can begin with an underscore.
All objects must have non-default names, that is, names that are not assigned by the SVC.
Although we recommend that objects have non-default names at the time that the backup is
taken, this prerequisite is not mandatory. Objects with default names are renamed when they
are restored.
Example 7-219 shows an example of the svcconfig backup command.
Example 7-219 svcconfig backup command
IBM_2145:ITSO-CLS1:admin>svcconfig backup
......
CMMVC6130W Inter-cluster partnership fully_configured will not be restored
...................
CMMVC6112W io_grp io_grp0 has a default name
CMMVC6112W io_grp io_grp1 has a default name
CMMVC6112W mdisk mdisk18 has a default name
CMMVC6112W mdisk mdisk19 has a default name
CMMVC6112W mdisk mdisk20 has a default name
................
CMMVC6136W No SSH key file svc.config.admin.admin.key
CMMVC6136W No SSH key file svc.config.admincl1.admin.key
CMMVC6136W No SSH key file svc.config.ITSOSVCUser1.admin.key
.......................
CMMVC6112W vdisk vdisk7 has a default name
...................
CMMVC6155I SVCCONFIG processing completed successfully
Example 7-220 shows the pscp command.
Example 7-220 pscp command
C:\Program Files\PuTTY>pscp -load SVC_CL1
[email protected]:/tmp/svc.config.backup.xml c:\temp\clibackup.xml
clibackup.xml             | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%
The following scenario illustrates the value of configuration backup:
1. Use the svcconfig command to create a backup file on the cluster that contains details
about the current cluster configuration.
2. Store the backup configuration on a form of tertiary storage. You must copy the backup file
from the cluster or it becomes lost if the cluster crashes.
3. If a sufficiently severe failure occurs, the cluster might be lost. Both the configuration data
(for example, the cluster definitions of hosts, I/O Groups, MDGs, and MDisks) and the
application data on the virtualized disks are lost. In this scenario, it is assumed that the
application data can be restored from normal client backup procedures. However, before
you can perform this restoration, you must reinstate the cluster as it was configured at the
time of the failure. Therefore, you restore the same MDGs, I/O Groups, host definitions,
and VDisks that existed prior to the failure. Then, you can copy the application data back
onto these VDisks and resume operations.
4. Recover the hardware: hosts, SVCs, disk controller systems, disks, and SAN fabric. The
hardware and SAN fabric must physically be the same as the hardware and SAN fabric
that were used before the failure.
5. Re-initialize the cluster with the configuration node; the other nodes will be recovered
when restoring the configuration.
6. Restore your cluster configuration using the backup configuration file that was generated
prior to the failure.
7. Restore the data on your VDisks using your preferred restoration solution or with help from
IBM Service.
8. Resume normal operations.
7.14.10 Restoring the SVC cluster configuration
It is extremely important that you always consult IBM Support before you restore the SVC
cluster configuration from the backup. IBM Support can assist you in analyzing the root
cause of why the cluster configuration was lost.
After the svcconfig restore -execute command is started, consider any prior user data on
the VDisks destroyed. The user data must be recovered through your usual application data
backup and restore process.
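As a sketch of the flow (again, only with IBM Support involvement), the restore is typically
run in two phases, a validation pass followed by the execution pass:
IBM_2145:ITSO-CLS1:admin>svcconfig restore -prepare
IBM_2145:ITSO-CLS1:admin>svcconfig restore -execute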
See IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line
Interface User’s Guide, SC26-7544, for more information about this topic.
For a detailed description of the SVC configuration backup and restore functions, see IBM
TotalStorage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543.
7.14.11 Deleting configuration backup
In this section, we describe in detail the tasks that you can perform to delete the configuration
backup that is stored in the configuration file directory on the cluster. Never clear this
configuration without having a backup of your configuration stored in a separate, secure
place.
When using the clear command, you erase the files in the /tmp directory. This command
does not clear the running configuration or prevent the cluster from working; it only clears
the configuration backup files that are stored in the /tmp directory (Example 7-221).
Example 7-221 svcconfig clear command
IBM_2145:ITSO-CLS1:admin>svcconfig clear -all
.
CMMVC6155I SVCCONFIG processing completed successfully
7.15 SAN troubleshooting and data collection
When we encounter a SAN issue, the SVC is often extremely helpful in troubleshooting the
SAN, because the SVC sits at the center of the environment through which the
communication travels.
Chapter 14 in SAN Volume Controller Best Practices and Performance Guidelines,
SG24-7521, contains a detailed description of how to troubleshoot and collect data from the
SVC:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
7.16 T3 recovery process
A procedure called “T3 recovery” has been tested and used in select cases where the cluster
has been completely destroyed. (One example is simultaneously pulling power cords from all
nodes to their uninterruptible power supply units; in this case, all nodes boot up to node error
578 when the power is restored.)
This procedure, in certain circumstances, is able to recover most user data. However, this
procedure is not to be used by the client or IBM service representative without direct
involvement from IBM level 3 technical support. This procedure is not published, but we refer
to it here only to indicate that the loss of a cluster can be recoverable without total data loss,
but it requires a restoration of application data from the backup. It is an extremely sensitive
procedure, which is only to be used as a last resort, and cannot recover any data that was
unstaged from cache at the time of the total cluster failure.
Chapter 8. SAN Volume Controller operations using the GUI
In this chapter, we show IBM System Storage SAN Volume Controller (SVC) operational
management by using the SVC GUI. We have divided this chapter into normal operations and
advanced operations.
We describe the basic configuration procedures that are required to get your SVC
environment up and running as quickly as possible using the Master Console and its
associated GUI.
Chapter 2, “IBM System Storage SAN Volume Controller” on page 7 describes the features in
greater depth. In this chapter, we focus on the operational aspects.
8.1 SVC normal operations using the GUI
In this topic, we discuss several of the operations that we have defined as normal, day-to-day
activities.
It is possible for many users to be logged into the GUI at any given time. However, no locking
mechanism exists, so if two users change the same object at the same time, the last action
entered from the GUI is the one that will take effect.
Important: Data entries made through the GUI are case sensitive.
8.1.1 Organizing window content
In the following sections, there are several windows within the SVC GUI where you can
perform filtering (to minimize the amount of data that is shown on the window) and sorting (to
organize the content on the window). This section provides a brief overview of these
functions.
The SVC Welcome window (Figure 8-1) is an important window and will be referred to as the
Welcome window throughout this chapter. We expect users to be able to locate this window
without us having to show it each time.
Figure 8-1 The Welcome window
From the Welcome window, select Work with Virtual Disks, and select Virtual Disks.
Table filtering
When you are in the Viewing Virtual Disks list, you can use the table filter option to filter the
visible list, which is useful if the list of entries is too large to work with. You can change the
filtering here as many times as you like, to further reduce the lists or for separate views.
Perform these steps to use table filtering:
1. Use the Show Filter Row icon, as shown in Figure 8-2 on page 471, or select Show
Filter Row in the list, and click Go.
Figure 8-2 Show Filter Row icon
2. This function enables you to filter based on the column names, as shown in Figure 8-3.
The Filter under each column name shows that no filter is in effect for that column.
Figure 8-3 Show Filter Row
3. If you want to filter on a column, click the word Filter, which opens up a filter window, as
shown in Figure 8-4 on page 472.
Figure 8-4 Filter option on Name
After you enter a filter value (in our example, 01), a list of virtual disks (VDisks) whose
names contain 01 is displayed, as shown in Figure 8-5. (Notice the filter line under each
column heading, showing that our filter is in place.) If you want, you can perform additional
filtering on the other columns to further narrow your view.
Figure 8-5 Filtered on Name containing 01 in the name
4. The option to reset the filters is shown in Figure 8-6 on page 473. Use the Clear All
Filters icon or use the Clear All Filters option in the list, and click Go.
Figure 8-6 Clear All Filter options
Sorting
Regardless of whether you use the pre-filter or additional filter options, when you are in the
Viewing Virtual Disks window, you can sort the displayed data by selecting Edit Sort from the
list and clicking Go, or you can click the small Edit Sort icon highlighted by the mouse pointer
in Figure 8-7.
Figure 8-7 Selecting Edit Sort icon
As shown in Figure 8-8 on page 474, you can sort based on up to three criteria, including
Name, State, I/O Group, Managed Disk Group (MDisk Group), Capacity (MB),
Space-Efficient, Type, Hosts, FlashCopy Pair, FlashCopy Map Count, Relationship Name,
UID, and Copies.
Sort criteria: The actual sort criteria differ based on the information that you are sorting.
Figure 8-8 Sorting criteria
When you finish making your choices, click OK to regenerate the display based on your
sorting criteria. Look at the icons next to each column name to see the sort criteria currently
in use, as shown in Figure 8-9.
If you want to clear the sort, simply select Clear All Sorts from the list and click Go, or click
the Clear All Sorts icon that is highlighted by the mouse pointer in Figure 8-9.
Figure 8-9 Selecting to clear all sorts
8.1.2 Documentation
If you need to access the online documentation, click the information icon in the upper right
corner of the window. This action opens the Help Assistant pane on the right side of the
window, as shown in Figure 8-10.
Figure 8-10 Online help using the i icon
8.1.3 Help
If you need to access the online help, click the question mark icon in the upper right corner
of the window. This action opens a new window called the information center. Here, you can
search on any item for which you want help (see Figure 8-11 on page 476).
Figure 8-11 Online help using the ? icon
8.1.4 General housekeeping
If, at any time, the content in the right side of the frame is abbreviated, you can collapse the
My Work column by clicking the small arrow icon at the top of the My Work column. When
collapsed, the small arrow changes from pointing to the left to pointing to the right. Clicking
the small arrow that points right expands the My Work column back to its original size.
In addition, each time that you open a configuration or administration window using the GUI in
the following sections, it creates a link for that window along the top of your Web browser
beneath the banner graphic. As a general housekeeping task, we recommend that you close
each window when you finish using it by clicking the close icon to the right of the window
name, beneath the banner. Be careful not to close the entire browser.
8.1.5 Viewing progress
With this view, you can see the status of activities, such as VDisk Migration, MDisk Removal
(Figure 8-12 on page 477), Image Mode Migration, Extend Migration, FlashCopy, Metro
Mirror and Global Mirror, VDisk Formatting, Space Efficient copy repair, VDisk copy
verification, and VDisk copy synchronization.
You can see detailed information about the item by clicking the underlined (progress) number
in the Progress column.
Figure 8-12 Showing possible processes to view where the MDisk is being removed from the MDG
8.2 Working with managed disks
This section describes the various configuration and administration tasks that you can
perform on the managed disks (MDisks) within the SVC environment.
This section details the tasks that you can perform at a disk controller level.
8.2.1 Viewing disk controller details
Perform the following steps to view information about a back-end disk controller in use by the
SVC environment:
1. Select Work with Managed Disks, and then, select Disk Controller Systems.
2. The Viewing Disk Controller Systems window (Figure 8-13) opens. For more detailed
information about a specific controller, click its ID (highlighted by the mouse cursor in
Figure 8-13).
Figure 8-13 Disk controller systems
3. When you click the controller Name (Figure 8-13), the Viewing General Details for Name
window (Figure 8-14 on page 478) opens for the controller (where Name is the controller
that you selected). Review the details, and click Close to return to the previous window.
Figure 8-14 Viewing general details about a disk controller
8.2.2 Renaming a disk controller
Perform the following steps to rename a disk controller that is used by the SVC cluster:
1. Select the controller that you want to rename. Then, select Rename a Disk Controller
System from the list, and click Go.
2. In the Renaming Disk Controller System controllername window (where controllername is
the controller that you selected in the previous step), type the new name that you want to
assign to the controller, and click OK. See Figure 8-15.
Figure 8-15 Renaming a controller
3. You return to the Disk Controller Systems window. You now see the new name of your
controller displayed.
Controller name: The name can consist of the letters A to Z and a to z, the numbers 0
to 9, the dash (-), and the underscore (_). The name can be between one and 15
characters in length. However, the name cannot start with a number, the dash, or the
word “controller” (because this prefix is reserved for SVC assignment only).
8.2.3 Discovery status
You can view the status of a managed disk (MDisk) discovery from the Viewing Discovery
Status window. This status tells you if there is an ongoing MDisk discovery. A running MDisk
discovery will be displayed with a status of Active.
Perform the following steps to view the status of an MDisk discovery:
1. Select Work with Managed Disks → Discovery Status. The Viewing Discovery Status
window is displayed, as shown in Figure 8-16.
Figure 8-16 Discovery status view
2. Click Close to close this window.
8.2.4 Managed disks
This section details the tasks that can be performed at an MDisk level. You perform each of
the following tasks from the Viewing Managed Disks window (Figure 8-17). To access this
window, from the SVC Welcome window, click Work with Managed Disks, and then, click
Managed Disks.
Figure 8-17 Viewing Managed Disks window
8.2.5 MDisk information
To retrieve information about a specific MDisk, perform the following steps:
1. In the Viewing Managed Disks window (Figure 8-18 on page 480), click the underlined
name of any MDisk in the list to reveal more detailed information about the specified
MDisk.
Figure 8-18 Managed disk details
Tip: If, at any time, the content in the right side of the frame is abbreviated, you can
minimize the My Work column by clicking the arrow to the right of the My Work heading
at the top right of the column (highlighted with the mouse pointer in Figure 8-17 on
page 479).
After you minimize the column, you see an arrow in the far left position in the same
location where the My Work column formerly appeared.
2. Review the details, and then, click Close to return to the previous window.
8.2.6 Renaming an MDisk
Perform the following steps to rename an MDisk that is controlled by the SVC cluster:
1. Select the MDisk that you want to rename in the window that is shown in Figure 8-17 on
page 479. Select Rename an MDisk from the list, and click Go.
2. On the Renaming Managed Disk MDiskname window (where MDiskname is the MDisk
that you selected in the previous step), type the new name that you want to assign to the
MDisk, and click OK. See Figure 8-19 on page 481.
MDisk name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9,
the dash (-), and the underscore (_). The name can be between one and 15 characters
in length. However, the name cannot start with a number, the dash, or the word “MDisk”
(because this prefix is reserved for SVC assignment only).
Figure 8-19 Renaming an MDisk
8.2.7 Discovering MDisks
Perform the following steps to discover newly assigned MDisks:
1. Select Discover MDisks from the drop-down list that is shown in Figure 8-17 on
page 479, and click Go.
2. Any newly assigned MDisks are displayed in the window that is shown in Figure 8-20.
Figure 8-20 Newly discovered managed disks
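For reference, the CLI equivalent of this GUI action is the svctask detectmdisk command,
which rescans the Fibre Channel network for newly assigned MDisks:
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk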
8.2.8 Including an MDisk
If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These
errors can result from a hardware problem, a storage area network (SAN) zoning problem, or
poorly planned maintenance. If it is a hardware fault, you will have received Simple Network
Management Protocol (SNMP) alerts about the state of the hardware before the disk was
excluded, and preventive maintenance can be undertaken. If not, the hosts that were using
VDisks on the excluded MDisk now have I/O errors.
After you take the necessary corrective action to repair the MDisk (for example, replace the
failed disk and repair the SAN zones), you can tell the SVC to include the MDisk again.
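For reference, the CLI equivalent is the svctask includemdisk command; the MDisk name in
this sketch is illustrative:
IBM_2145:ITSO-CLS1:admin>svctask includemdisk mdisk5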
8.2.9 Showing a VDisk using a certain MDisk
To display information about VDisks that reside on an MDisk, perform the following steps:
1. As shown in Figure 8-21, select the MDisk about which you want to obtain VDisk
information. Select Show VDisks using this MDisk from the list, and click Go.
Figure 8-21 Show VDisk using an MDisk
2. You now see a subset (specific to the MDisk that you chose in the previous step) of the
Viewing VDisks using MDisk window in Figure 8-22. We cover the Viewing VDisks window
in more detail in 8.4, “Working with hosts” on page 493.
Figure 8-22 VDisk list from a selected MDisk
8.3 Working with Managed Disk Groups
In this section, we describe the tasks that can be performed with the Managed Disk Group
(MDG). From the Welcome window that is shown in Figure 8-1 on page 470, select Work
with Managed Disks.
8.3.1 Viewing MDisk group information
We perform each of the following tasks from the Viewing Managed Disk Groups window
(Figure 8-23). To access this window, from the SVC Welcome window, click Work with
Managed Disks, and then, click Managed Disk Groups.
Figure 8-23 Viewing Managed Disk Groups window
To retrieve information about a specific MDG, perform the following steps:
1. In the Viewing Managed Disk Groups window (Figure 8-23), click the underlined name of
any MDG in the list.
2. In the View Managed Disk Group Details for MDGname window (where MDGname is the
MDG that you selected in the previous step), as shown in Figure 8-24, you see more
detailed information about the specified MDG. Here, you see information pertaining to the
number of MDisks and VDisks, as well as the capacity (both total and free space) within
the MDG. When you finish viewing the details, click Close to return to the previous
window.
Figure 8-24 MDG details
8.3.2 Creating MDGs
Perform the following steps to create an MDG:
1. From the SVC Welcome window (Figure 8-1 on page 470), select Work with Managed
Disks, and then, select Managed Disk Groups.
2. The Viewing Managed Disk Groups window opens (see Figure 8-25). Select Create an
MDisk Group from the list, and click Go.
Figure 8-25 Selecting the option to create an MDisk group
3. In the Create a Managed Disk Group window, the wizard provides an overview of the
steps that will be performed. Click Next.
4. While in the Name the group and select the managed disks window (Figure 8-26 on
page 485), follow these steps:
a. Type a name for the MDG.
MDG name: If you do not provide a name, the SVC automatically generates the
name MDiskgrpx, where x is the ID sequence number that is assigned by the SVC
internally.
If you want to provide a name (as we have done), you can use the letters A to Z and
a to z, the numbers 0 to 9, and the underscore (_). The name can be between one
and 15 characters in length and is case sensitive, but it cannot start with a number
or the word “MDiskgrp” (because this prefix is reserved for SVC assignment only).
b. From the MDisk Candidates box, as shown in Figure 8-26 on page 485, one at a time,
select the MDisks that you want to put into the MDG. Click Add to move them to the
Selected MDisks box. More than one page of disks might exist; you can navigate
between the windows (the MDisks that you have selected will be preserved).
c. You can specify a threshold to send a warning to the error log when the capacity is first
exceeded. The threshold can either be a percentage or a specific amount.
d. Click Next.
Figure 8-26 Name the group and select the managed disks window
5. From the list that is shown in Figure 8-27, select the extent size to use; the typical value
is 512 (MB). When you select a specific extent size, the corresponding total cluster size is
shown in TB. Click Next.
Figure 8-27 Select Extent Size window
6. In the Verify Managed Disk Group window (Figure 8-28 on page 486), verify that the
information that you have specified is correct. Click Finish.
Figure 8-28 Verify Managed Disk Group wizard
7. Return to the Viewing Managed Disk Groups window (Figure 8-29) where the new MDG is
displayed.
Figure 8-29 A new MDG was added successfully
You have now completed the tasks that are required to create an MDG.
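For reference, a CLI sketch that creates an equivalent MDG in one step; the group name and
MDisk names are illustrative, and -ext sets the extent size in MB:
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Example -ext 512 -mdisk mdisk0:mdisk1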
8.3.3 Renaming a managed disk group
To rename an MDG, perform the following steps:
1. In the Viewing Managed Disk Groups window (Figure 8-30), select the MDG that you want
to rename. Select Modify an MDisk Group from the list, and click Go.
Figure 8-30 Renaming an MDG
From the Modifying Managed Disk Group MDisk Group Name window (where the MDisk
Group Name is the MDG that you selected in the previous step), type the new name that you
want to assign and click OK (see Figure 8-31).
You can also set or change the usage threshold from this window.
MDG name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, a
dash (-), and the underscore (_). The new name can be between one and 15 characters in
length, but it cannot start with a number, a dash, or the word “mdiskgrp” (because this
prefix is reserved for SVC assignment only).
Figure 8-31 Renaming an MDG
It is considered a best practice to enable the capacity warning for your MDGs. Decide on the
threshold range to use during the planning phase of the SVC installation, although this range
can always be changed later without interruption.
8.3.4 Deleting a managed disk group
To delete an MDG, perform the following steps:
1. Select the MDG that you want to delete. Select Delete an MDisk Group from the list, and
click Go.
2. In the Deleting a Managed Disk Group MDGname window (where MDGname is the MDG
that you selected in the previous step), click OK to confirm that you want to delete the
MDG (see Figure 8-32).
Figure 8-32 Deleting an MDG
3. If there are MDisks and VDisks within the MDG that you are deleting, you are required to
click Forced delete for the MDG (Figure 8-33 on page 488).
Important: If you delete an MDG with the Forced Delete option, and VDisks were
associated with that MDG, you will lose the data on your VDisks, because they are
deleted before the MDG. If you want to save your data, migrate or mirror the VDisks to
another MDG before you delete the MDG previously assigned to the VDisks.
Figure 8-33 Confirming forced deletion of an MDG
8.3.5 Adding MDisks
If you created an empty MDG or you simply assign additional MDisks to your SVC
environment later, you can add MDisks to existing MDGs by performing the following steps:
Note: You can only add unmanaged MDisks to an MDG.
1. In Figure 8-34, select the MDG to which you want to add MDisks. Select Add MDisks
from the list, and click Go.
Figure 8-34 Adding an MDisk to an existing MDG
2. From the Adding Managed Disks to Managed Disk Group MDGname window (where
MDGname is the MDG that you selected in the previous step), select the desired MDisk or
MDisks from the MDisk Candidates list (Figure 8-35 on page 489). After you select all of
the desired MDisks, click OK.
Figure 8-35 Adding MDisks to an MDG
8.3.6 Removing MDisks
To remove an MDisk from an MDG, perform the following steps:
1. In Figure 8-36, select the MDG from which you want to remove an MDisk. Select Remove
MDisks from the list, and click Go.
Figure 8-36 Viewing MDGs
2. From the Deleting Managed Disks from Managed Disk Group MDGname window (where
MDGname is the MDG that you selected in the previous step), select the desired MDisk or
MDisks from the list (Figure 8-37 on page 490). After you select all of the desired MDisks,
click OK.
Figure 8-37 Removing MDisks from an MDG
3. If VDisks are using the MDisks that you are removing from the MDG, you are required to
click Forced Delete to confirm the removal of the MDisk, as shown in Figure 8-38.
4. An error message is displayed if there is insufficient space to migrate the VDisk data to
other extents on other MDisks in that MDG.
Figure 8-38 Confirming forced deletion of MDisks from an MDG
8.3.7 Displaying MDisks
If you want to view the MDisks that are configured on your system, perform the following steps
to display MDisks.
From the SVC Welcome window (Figure 8-1 on page 470), select Work with Managed
Disks, and then, select Managed Disks. In the Viewing Managed Disks window (Figure 8-39
on page 491), if your MDisks are not displayed, rescan the Fibre Channel (FC) network.
Select Discover MDisks from the list, and click Go.
Figure 8-39 Discover MDisks
Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers
(LUNs) from your subsystem are properly assigned to the SVC (for example, using storage
partitioning with a DS4000) and that appropriate zoning is in place (for example, the SVC
can see the disk subsystem).
8.3.8 Showing MDisks in this group
To show a list of MDisks within an MDG, perform the following steps:
1. Select the MDG from which you want to retrieve MDisk information (Figure 8-40). Select
Show MDisks in This Group from the list, and click Go.
Figure 8-40 Viewing Managed Disk Groups
2. You now see a subset (specific to the MDG that you chose in the previous step) of the
Viewing Managed Disks window (Figure 8-41 on page 492) that was shown in 8.2.4,
“Managed disks” on page 479.
Figure 8-41 Viewing MDisks in an MDG
Note: Remember, you can collapse the column entitled My Work at any time by clicking the
arrow to the right of the My Work column heading.
8.3.9 Showing the VDisks that are associated with an MDisk group
To show a list of the VDisks that are associated with MDisks within an MDG, perform the
following steps:
1. In Figure 8-42, select the MDG from which you want to retrieve VDisk information. Select
Show VDisks using this group from the list, and click Go.
Figure 8-42 Viewing Managed Disk Groups
2. You see a subset (specific to the MDG that you chose in the previous step) of the Viewing
Virtual Disks window in Figure 8-43 on page 493. We describe the Viewing Virtual Disks
window in more detail in “VDisk information” on page 505.
Figure 8-43 VDisks belonging to selected MDG
You have now completed the required tasks to manage the disk controller systems, MDisks,
and MDGs within the SVC environment.
8.4 Working with hosts
In this section, we describe the various configuration and administration tasks that you can
perform on the host that is connected to your SVC.
For more details about connecting hosts to an SVC in a SAN environment, see IBM System
Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905-05.
Starting with SVC 5.1, iSCSI is introduced as an additional method for connecting your host
to the SVC. With this option, the host can now choose between FC or iSCSI as the
connection method. After the connection type has been selected, all further work with the
host is identical for the FC-attached host and the iSCSI-attached host.
To access the Viewing Hosts window from the SVC Welcome window on Figure 8-1 on
page 470, click Work with Hosts, and then, click Hosts. The Viewing Hosts window opens,
as shown in Figure 8-44. You perform each task that is shown in the following sections from
the Viewi