Managing HP 3PAR StoreServ II
Student guide
HK904S D.00
Use of this material to deliver training without prior written permission from HP is prohibited.
© Copyright 2014 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP
products and services are set forth in the express warranty statements accompanying such products and
services. Nothing herein should be construed as constituting an additional warranty. HP shall not be
liable for technical or editorial errors or omissions contained herein.
This is an HP copyrighted work that may not be reproduced without the written permission of HP. You
may not use these materials to deliver training to any person outside of your organization without the
written permission of HP.
Microsoft®, Windows®, Windows NT® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.
All other product names mentioned herein may be trademarks of their respective companies.
Export Compliance Agreement
Export Requirements. You may not export or re-export products subject to this agreement in violation of
any applicable laws or regulations.
Without limiting the generality of the foregoing, products subject to this agreement may not be
exported, re-exported, otherwise transferred to or within (or to a national or resident of) countries under
U.S. economic embargo and/or sanction including the following countries:
Cuba, Iran, North Korea, Sudan and Syria.
This list is subject to change.
In addition, products subject to this agreement may not be exported, re-exported, or otherwise
transferred to persons or entities listed on the U.S. Department of Commerce Denied Persons List; U.S.
Department of Commerce Entity List (15 CFR 744, Supplement 4); U.S. Treasury Department
Designated/Blocked Nationals exclusion list; or U.S. State Department Debarred Parties List; or to parties
directly or indirectly involved in the development or production of nuclear, chemical, or biological
weapons, missiles, rocket systems, or unmanned air vehicles as specified in the U.S. Export
Administration Regulations (15 CFR 744); or to parties directly or indirectly involved in the financing,
commission or support of terrorist activities.
By accepting this agreement you confirm that you are not located in (or a national or resident of) any
country under U.S. embargo or sanction; not identified on any U.S. Department of Commerce Denied
Persons List, Entity List, US State Department Debarred Parties List or Treasury Department Designated
Nationals exclusion list; not directly or indirectly involved in the development or production of nuclear,
chemical, biological weapons, missiles, rocket systems, or unmanned air vehicles as specified in the U.S.
Export Administration Regulations (15 CFR 744), and not directly or indirectly involved in the financing,
commission or support of terrorist activities.
Printed in US
Managing HP 3PAR StoreServ II
Student guide
August 2014
Contents
Module 1 — Course Overview ................................................................................................. 1-1
Module objectives ...................................................................................................... 1-2
Course introduction ................................................................................................... 1-3
Prerequisites .............................................................................................................. 1-4
Course agenda ........................................................................................................... 1-5
Module 2 — Remote Copy ...................................................................................................... 2-1
Module objectives ...................................................................................................... 2-2
Remote Copy introduction ......................................................................................... 2-3
FC-based Remote Copy .............................................................................................. 2-4
Native IP-based Remote Copy ................................................................................... 2-5
Cost effective: Active/Active links ............................................................................. 2-6
Link connectivity ........................................................................................................ 2-7
Flexible, modular architecture .................................................................................. 2-8
Shared HBA ports: RCFC and host .............................................................................. 2-9
Thin Provisioning aware ..........................................................................................2-10
Zero detection with Remote Copy ...........................................................................2-11
Remote Copy terminology .......................................................................................2-12
HP 3PAR Remote Copy data integrity ......................................................................2-13
Intervolume group fast resync ................................................................................2-14
Autonomic replication groups .................................................................................2-15
Remote Copy: Synchronous mode...........................................................................2-16
Synchronous mode operation .................................................................................2-17
Synchronous mode benefits ....................................................................................2-18
Synchronous mode summary ..................................................................................2-19
Remote Copy: Asynchronous periodic mode ...........................................................2-20
Asynchronous periodic mode operation..................................................................2-21
Asynchronous periodic mode operation details......................................................2-22
Asynchronous periodic mode benefits ....................................................................2-23
Asynchronous periodic mode summary ..................................................................2-24
HP 3PAR Remote Copy topologies and latency .......................................................2-25
Remote Copy: Synchronous long distance (SLD) mode ..........................................2-26
Supported topologies: Synchronous long distance (1 of 3) ...................................2-27
Supported topologies: Synchronous long distance (2 of 3) ...................................2-28
Supported topologies: Synchronous long distance (3 of 3) ...................................2-29
Remote Copy topologies..........................................................................................2-30
Topology features....................................................................................................2-31
Supported topologies: One-to-one (1:1) .................................................................2-32
Supported topologies: Many-to-one (4:1) ..............................................................2-33
Supported topologies: One-to-many (1:2)..............................................................2-34
Supported topologies: M-to-N.................................................................................2-35
Remote Copy: Failure scenarios ..............................................................................2-36
Failure scenario ........................................................................................................2-37
Failover concept .......................................................................................................2-38
Remote Copy operations .........................................................................................2-39
Stop replication of Remote Copy group ..................................................................2-40
Fail over a Remote Copy group ................................................................................2-41
Recover a Remote Copy group ................................................................................2-42
Remote Copy CLI (1 of 3) ..........................................................................................2-43
Remote Copy CLI (2 of 3) ..........................................................................................2-44
Remote Copy CLI (3 of 3) ..........................................................................................2-45
Remote Copy: Summary of features and benefits ..................................................2-46
HP 3PAR Storage Replication Adapter for VMware Site Recovery Manager...........2-47
Peer Persistence ......................................................................................................2-48
HP 3PAR Peer Persistence (1 of 2) ...........................................................................2-49
HP 3PAR Peer Persistence (2 of 2) ...........................................................................2-50
Definitions ................................................................................................................2-51
Use cases—Automatic failover ...............................................................................2-52
Use cases—Manual failover ....................................................................................2-53
Quorum Witness.......................................................................................................2-54
Automating failover with Quorum Witness .............................................................2-55
How does a Quorum Witness help automate failover? ...........................................2-56
Quorum Witness.......................................................................................................2-57
Peer Persistence High-Availability Solution for Federated Storage .......................2-58
Peer Persistence supported environments and requirements ...............................2-60
Lab activity ...............................................................................................................2-61
Module 3 — System Reporter ................................................................................................ 3-1
Module objectives ...................................................................................................... 3-2
HP 3PAR System Reporter introduction .................................................................... 3-3
External System Reporter ......................................................................................... 3-4
On-Node System Reporter......................................................................................... 3-5
External System Reporter ......................................................................................... 3-6
HP 3PAR External System Reporter introduction ..................................................... 3-7
HP 3PAR System Reporter requirements .................................................................. 3-9
Installing External System Reporter .......................................................................3-10
External System Reporter Installation prerequisites .............................................3-11
External System Reporter interface ........................................................................3-12
External System Reporter: Adding an InServ system (1 of 2) .................................3-13
External System Reporter: Adding an InServ system (2 of 2) .................................3-14
External System Reporter: Sampling Policies .........................................................3-15
External System Reporter: Sampling Status ...........................................................3-16
External System Reporter reports ..........................................................................3-17
External System Reporter: Quick Reports ...............................................................3-18
External System Reporter: Scheduled Reports (1 of 3) ..........................................3-19
External System Reporter: Scheduled Reports (2 of 3) ..........................................3-20
External System Reporter: Scheduled Reports (3 of 3) ..........................................3-21
External System Reporter: Custom Reports (1 of 2) ...............................................3-22
External System Reporter: Custom Reports (2 of 2) ...............................................3-23
External SR Report example: Daily VLUN Performance ..........................................3-24
External SR Report example: Hourly VV Cache Performance .................................3-25
External SR Report example: Hourly PD Performance at Time ..............................3-26
External SR Report example: Hourly Port Performance .........................................3-27
External System Reporter: Email alerts (1 of 2)......................................................3-28
External System Reporter: Email alerts (2 of 2)......................................................3-29
On-Node System Reporter.......................................................................................3-30
On-Node System Reporter introduction..................................................................3-31
On-Node System Reporter benefits ........................................................................3-32
On-Node SR Management Console Quick Reports ..................................................3-33
On-Node SR Quick Report example: Physical Disks – I/O Time and Size Distribution IOPS ...............3-35
On-Node SR Quick Report example: Physical Disks – Space ...................................3-36
On-Node SR Quick Report example: Disk Ports – Total Throughput ......................3-37
On-Node SR Quick Report example: Ports – Performance Statistics .....................3-38
On-Node SR Quick Report example: Virtual Volumes – Space ................................3-39
On-Node SR Quick Report example: Controller Node – Cache Performance .........3-40
On-Node SR Custom Reports (1 of 2) ......................................................................3-41
On-Node SR Custom Reports (2 of 2) ......................................................................3-42
On-Node SR Custom Report example: Virtual Volumes - Space .............................3-43
On-Node SR Management Console Charts: Template .............................................3-44
On-Node Template Chart example: PD Usage – Total IOPS ....................................3-45
On-Node SR Management Console Charts: Custom ................................................3-46
On-Node Custom Chart example: VLUNs.................................................................3-47
On-Node System Reporter: CLI (1 of 2) ...................................................................3-48
On-Node System Reporter: CLI (2 of 2) ...................................................................3-49
On-Node SR CLI example: srcpgspace .....................................................................3-50
On-Node SR CLI Example: srstatpd..........................................................................3-52
On-Node SR Alerting (1 of 2)....................................................................................3-54
On-Node SR Alerting (2 of 2)....................................................................................3-55
System Reporter considerations .............................................................................3-56
External System Reporter versus On-Node System Reporter ................................3-57
Module 4 — QoS and Priority Optimization............................................................................ 4-1
Module objectives ...................................................................................................... 4-2
Quality of Service and Priority Optimization introduction ........................................ 4-3
Quality of Service (QoS) introduction ........................................................................ 4-4
Why Quality of Service? (1 of 2) ................................................................................. 4-5
Why Quality of Service? (2 of 2) ................................................................................. 4-6
HP 3PAR Priority Optimization advantages .............................................................. 4-8
HP 3PAR Priority Optimization .................................................................................. 4-9
HP 3PAR StoreServ Virtualization ...........................................................................4-10
QoS controls .............................................................................................................4-11
HP 3PAR Priority Optimization example .................................................................4-12
How it works.............................................................................................................4-13
How it works (1 of 4) ................................................................................................4-15
How it works (2 of 4) ................................................................................................4-16
How it works (3 of 4) ................................................................................................4-17
How it works (4 of 4) ................................................................................................4-18
Priority Optimization example (1 of 2) ....................................................................4-19
Priority Optimization example (2 of 2) ....................................................................4-20
PO performance considerations ..............................................................................4-21
QoS features available only from the CLI ................................................................4-22
Dynamic cap .............................................................................................................4-23
Latency goal defined................................................................................................4-24
Service Level Objective (SLO)...................................................................................4-25
Priority Optimization configuration.........................................................................4-26
QoS configuration: MC Virtual Volume Sets area ....................................................4-27
QoS configuration: Management Console ...............................................................4-28
QoS configuration: MC QoS area ..............................................................................4-29
QoS configuration: all_others rule ..........................................................................4-30
QoS configuration: Using the CLI .............................................................................4-31
QoS arguments ........................................................................................................4-32
QoS configuration: Using the CLI .............................................................................4-33
QoS configuration: Using the CLI, examples............................................................4-34
Online upgrade .........................................................................................................4-35
Priority Optimization monitoring ............................................................................4-36
QoS monitoring: statqos command example ..........................................................4-37
QoS monitoring: srstatqos command example.......................................................4-39
QoS monitoring: Management Console Chart (1 of 2).............................................4-41
QoS monitoring: Management Console Chart (2 of 2).............................................4-42
QoS monitoring: Using External System Reporter ..................................................4-43
QoS Performance Report example ..........................................................................4-44
Alerts from External System Reporter ....................................................................4-46
Priority Optimization summary ...............................................................................4-47
Module 5 — Data Migration: Peer Motion and EVA to HP 3PAR Online Import ..................... 5-1
Module objectives ...................................................................................................... 5-2
Peer Motion: HP 3PAR to HP 3PAR ............................................................................. 5-3
Converged migration: HP 3PAR Peer Motion ............................................................. 5-4
Peer Motion introduction ........................................................................................... 5-5
Peer Motion management ......................................................................................... 5-6
Migration types .......................................................................................................... 5-7
What is Single VV Migration? ..................................................................................... 5-8
How does Single VV Migration work? ........................................................................ 5-9
Requirements for Single VV Migration ....................................................................5-10
Recommended personas for various hosts .............................................................5-11
3PAR to 3PAR Peer Motion Support Matrix: Host OS ..............................................5-12
Peer Motion Migration phases (1 of 2) ....................................................................5-13
Peer Motion Migration phases (2 of 2) ....................................................................5-14
Configuring PM using Management Console (1 of 9) ..............................................5-15
Configuring PM using Management Console (2 of 9) ..............................................5-16
Configuring PM using Management Console (3 of 9) ..............................................5-17
Configuring PM using Management Console (4 of 9) ..............................................5-18
Configuring PM using Management Console (5 of 9) ..............................................5-19
Configuring PM using Management Console (6 of 9) ..............................................5-20
Configuring PM using Management Console (7 of 9) ..............................................5-21
Configuring PM using Management Console (8 of 9) ..............................................5-22
Configuring PM using Management Console (9 of 9) ..............................................5-23
Peer Motion Availability Matrix ................................................................................5-24
Peer Motion CLI ........................................................................................................5-25
Peer Motion Command Line Interface (PMCLI) overview ........................................5-26
Supported platforms ...............................................................................................5-27
Installation ...............................................................................................................5-28
HP 3PAR PMCLI commands ......................................................................................5-29
How it works (1 of 2) ................................................................................................5-30
How it works (2 of 2) ................................................................................................5-31
Online Windows Cluster Migration...........................................................................5-32
What is Online Windows Cluster Migration? ............................................................5-33
Introducing NPIV peer ports ....................................................................................5-34
Online Windows Cluster Migration (1 of 4) ..............................................................5-35
Online Windows Cluster Migration (2 of 4) ..............................................................5-36
Online Windows Cluster Migration (3 of 4) ..............................................................5-37
Online Windows Cluster Migration (4 of 4) ..............................................................5-38
Supported configurations ........................................................................................5-39
Administration and configuration ...........................................................................5-40
EVA to HP 3PAR Online Import .................................................................................5-41
EVA to HP 3PAR Online Import introduction............................................................5-42
Peer Motion versus EVA to HP 3PAR Online Import ................................................5-43
EVA to HP 3PAR Online Import Support Matrix .......................................................5-44
Online Import: Models and OS .................................................................................5-45
EVA to HP 3PAR Online Import in Command View ...................................................5-46
EVA to HP 3PAR Online Import zoning .....................................................................5-47
EVA to HP 3PAR Online Import operations ..............................................................5-48
Adding arrays in Command View (1 of 3).................................................................5-49
Adding arrays in Command View (2 of 3).................................................................5-50
Adding arrays in Command View (3 of 3).................................................................5-51
Migrating data in Command View (1 of 8) ...............................................................5-52
Migrating data in Command View (2 of 8) ...............................................................5-53
Migrating data in Command View (3 of 8) ...............................................................5-54
Migrating data in Command View (4 of 8) ...............................................................5-55
Migrating data in Command View (5 of 8) ...............................................................5-56
Migrating data in Command View (6 of 8) ...............................................................5-57
Migrating data in Command View (7 of 8) ...............................................................5-58
Migrating data in Command View (8 of 8) ...............................................................5-59
Peer port performance in Management Console ....................................................5-60
HP 3PAR terms versus EVA/P6000 terms ...............................................................5-61
HP 3PAR Online Import Utility for EMC Storage ......................................................5-62
HP 3PAR Online Import Utility for EMC Storage (1 of 2) .........................................5-63
HP 3PAR Online Import Utility for EMC Storage (2 of 2) ..........................................5-64
HP 3PAR Online Import Utility for EMC Storage architecture..................................5-65
HP 3PAR Online Import Utility for EMC Storage deployment ..................................5-66
Pre-migration checks (1 of 2) ..................................................................................5-67
Pre-migration checks (2 of 2) ..................................................................................5-68
Best practices ...........................................................................................................5-69
Example CLI commands ...........................................................................................5-70
Lab activity ...............................................................................................................5-71
Module 6 — Tools for Performance Troubleshooting and Balancing a 3PAR Array ............. 6-1
Module objectives ...................................................................................................... 6-2
3PAR I/O behavior ..................................................................................................... 6-3
HP 3PAR space allocation (1 of 2).............................................................................. 6-4
HP 3PAR space allocation (2 of 2).............................................................................. 6-5
Write request (1 of 4) ................................................................................................. 6-6
Write request (2 of 4) ................................................................................................. 6-7
Write request (3 of 4) ................................................................................................. 6-8
Write request (4 of 4) ................................................................................................. 6-9
Local write request (1 of 8) ......................................................................................6-10
Local write request (2 of 8) ......................................................6-11
Local write request (3 of 8) ......................................................6-12
Local write request (4 of 8) ......................................................6-13
Local write request (5 of 8) ......................................................6-14
Local write request (6 of 8) ......................................................6-15
Local write request (7 of 8) ......................................................6-16
Local write request (8 of 8) ......................................................6-17
Remote write request (1 of 10) ...............................................6-18
Remote write request (2 of 10) ...............................................6-19
Remote write request (3 of 10) ...............................................................................6-20
Remote write request (4 of 10) ...............................................................................6-21
Remote write request (5 of 10) ...............................................................................6-22
Remote write request (6 of 10) ...............................................................................6-23
Remote write request (7 of 10) ...............................................................................6-24
Remote write request (8 of 10) ...............................................................................6-25
Remote write request (9 of 10) ...............................................................................6-26
Remote write request (10 of 10) .............................................................................6-27
Local read request (1 of 6) .......................................................................................6-28
Local read request (2 of 6) .......................................................................................6-29
Local read request (3 of 6) .......................................................................................6-30
Local read request (4 of 6) .......................................................................................6-31
Local read request (5 of 6) .......................................................................................6-32
Local read request (6 of 6) .......................................................................................6-33
Remote read request (1 of 7)...................................................................................6-34
Remote read request (2 of 7)...................................................................................6-35
Remote read request (3 of 7)...................................................................................6-36
Remote read request (4 of 7)...................................................................................6-37
Remote read request (5 of 7)...................................................................................6-38
Remote read request (6 of 7)...................................................................................6-39
Remote read request (7 of 7)...................................................................................6-40
Performance difference: Local versus remote I/O ..................................................6-41
High-level caching algorithm (1 of 2) ......................................................................6-42
High-level caching algorithm (2 of 2) ......................................................................6-43
Cache read-ahead ....................................................................................................6-44
Performance basics .................................................................................................6-45
Understanding performance ...................................................................................6-46
Servicing I/O—Array basics .....................................................................................6-47
3PAR component block diagram .............................................................................6-48
FC host port considerations .....................................................................................6-49
Hardware component limits—FC links ....................................................................6-50
Hardware component limits ....................................................................................6-51
Hardware component limits ....................................................................................6-52
Front-end to back-end ratio ....................................................................................6-53
RAID random write back-end overhead ..................................................................6-54
Hardware component limits—CPU..........................................................................6-55
Hardware component limits—Cache ......................................................................6-56
Performance monitoring .........................................................................................6-57
Capturing performance ............................................................................................6-58
CLI stat commands...................................................................................................6-59
Common stat options ..............................................................................................6-60
statvlun ....................................................................................................................6-61
statvlun ....................................................................................................................6-62
statcmp ....................................................................................................................6-63
Delayed ack mode ....................................................................................................6-64
Performance & Reports: Charts area .......................................................................6-65
Performance Chart example ....................................................................................6-66
Example of troubleshooting (1 of 2) .......................................................................6-67
Example of troubleshooting (2 of 2) .......................................................................6-68
Array tuning .............................................................................................................6-69
Introduction to tunesys ...........................................................................................6-70
Phase 1—Internode Tunes ......................................................................................6-71
Phase 2—Intranode tunes.......................................................................................6-72
Intranode tuning ......................................................................................................6-73
Phase 3—Logical disk relayout tunes .....................................................................6-74
tunesys considerations ............................................................................................6-75
System Tuner: Tune system ....................................................................................6-76
System Tuner: Tune CPGs ........................................................................................6-77
System Tuner: Tune physical disk chunklets ..........................................................6-78
System Tuner Task window .....................................................................................6-79
System Tuner: tunesys CLI command.....................................................................6-80
System Tuner: tunepd CLI command .....................................................................6-81
CPG compaction .......................................................................................................6-82
Training from HP Education Services .....................................................................6-83
Module 1 — Course Overview

Module 2 — Remote Copy
Remote Copy introduction
Many remote replication solutions are complex to implement. It is typical, even for large enterprises (which
usually have a dedicated staff of storage experts), to require professional services to implement remote
replication solutions. For arrays that lack a native IP solution, costly solutions such as Fibre Channel over IP
(FCIP) routers or FC-to-IP channel converters are needed at both sites to facilitate long-haul connections. This
additional, third-party distance extension hardware makes both the topology and the actual installation and
configuration more complex. Some array-based solutions have complex command lines that are rarely managed
by the storage administrator directly. Because of the complexity of the back-end storage, administrators might
spend hours planning the volume mapping in conjunction with vendor or third-party consultants.
After implementation, array-based replication is often complex to manage. Arcane command lines are difficult to
decipher, and graphical user interfaces (GUIs) are not sufficient for repetitive actions that must be coordinated
with applications. Administrators must often budget for ongoing professional services engagements to enable
their replication solutions to keep up with changing business needs. Alternatively, enterprises might take a “don’t
touch” approach, resisting the addition of new applications to their replication solution for fear that they might
break what is working today.
Remote Copy supports replication only between HP 3PAR Storage arrays.
Remote Copy can be managed using the CLI or the Management Console GUI; both are simple to configure.
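As a quick illustration, the showrcopy CLI command displays the current Remote Copy configuration and status (output is abbreviated and system-specific):
cli% showrcopy
The output lists the configured Remote Copy targets, the state of the links, and each Remote Copy group with its mode, role, and synchronization status.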
FC-based Remote Copy
For customers who choose Fibre Channel connectivity between their arrays, HP 3PAR offers Remote Copy over
Fibre Channel (RCFC) connectivity. RCFC is most often used when distances are short and there is no requirement
for distance extension. This solution is often used for campus-distance solutions where latency and bandwidth
are important factors. Asynchronous periodic mode can be used over longer distances by using FCIP with suitable
switches.
RCFC uses dual Fibre Channel links for availability and to increase total available bandwidth for replication. The
customer has the flexibility of using a direct, point-to-point implementation between 3PAR StoreServ arrays or
any approved Fibre Channel fabric to create multiple hops between arrays. These hops can include any fabric
vendor-approved connectivity, such as fabric extension using long-wavelength links, Dense Wavelength Division
Multiplexing (DWDM), or FC-to-IP bridging or routing.
In summary, RCFC supports:
• Load balancing across available links
• Direct connect between StoreServ arrays, or through a supported SAN infrastructure
• Synchronous and asynchronous periodic modes (when using FCIP, only asynchronous periodic mode is supported)
Native IP-based Remote Copy
Remote Copy IP (RCIP) is a native IP implementation of Remote Copy over Gigabit Ethernet. This solution,
available with synchronous and asynchronous periodic modes, can be used for short-, medium-, or long-haul
replication. It is most often used by organizations that require disaster recovery over greater distances or
between buildings for which they have no native Fibre Channel connectivity. When used in asynchronous periodic
mode, replication to arrays up to 3,000 miles away is supported.
RCIP uses dual links between arrays for maximizing bandwidth and ensuring availability. By offering native IP
connectivity and using the site’s existing IP infrastructure, 3PAR Remote Copy is simpler to implement than
solutions that do not offer native IP replication. Furthermore, RCIP is compatible with partner solutions that
encrypt IP network traffic and optimize wide area network (WAN) bandwidth such as Riverbed’s Steelhead
Appliance.
In summary, RCIP supports:
• Load balancing across available links
• Direct connection between StoreServ arrays (through a cross-over cable) or through LAN/MAN/WAN switches
• Synchronous and asynchronous periodic modes
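As a sketch of the initial RCIP setup (the target name and IP addresses below are placeholders, and the exact syntax should be verified against the HP 3PAR CLI reference for your OS version), the target definition pairs local controller nodes with the peer array's RCIP addresses:
cli% creatercopytarget 3parB IP 0:192.168.10.21 1:192.168.10.22
This defines the array 3parB as an IP Remote Copy target, reached through the RCIP port on node 0 and on node 1.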
Cost effective: Active/Active links
Two or four inter-site links are supported between the arrays. When all links are up, data is replicated across all links in an active/active fashion. If a link goes down and is later restored, replication is automatically rebalanced across the links.
Shared HBA ports: RCFC and host
Challenge: Sharing HBA cards with both host and RCFC connections
Solution: Shared host and RCFC HBA
• Cost savings—Fewer HBA cards are required.
• Spare capacity is used.
• Different ports on the HBA must be used.
NOTE: As of HP 3PAR OS 3.1.3, both the HP 3PAR StoreServ 7000 Storage series and the HP 3PAR StoreServ 10000 Storage systems allow individual ports on a front-end HBA to be set either for RCFC or for host connectivity. This applies to the F-Class and T-Class systems as well.
Thin Provisioning aware
Because HP 3PAR Remote Copy is built on HP 3PAR Thin Copy technology, it allows customers to purchase disks
at the remote site equivalent to the amount of data they are protecting through Remote Copy.
For a 100 GB usable scenario with 20% utilization (20 GB of written data), a thinly provisioned RAID 1 volume at
the local site consumes 40 GB raw capacity versus 200 GB raw capacity on a traditional array. The remote site
consumes 40 GB of raw capacity if using RAID 1 protection, and less if using RAID 5.
Rather than copying “a bunch of zeros” and maintaining a mirror of unwritten disks, HP 3PAR Remote Copy
allows administrators to budget their disk costs against the data being protected.
Zero detection with Remote Copy
HP 3PAR Remote Copy software now leverages the HP 3PAR Thin Built-In ASIC to detect and eliminate the remote replication of zero pages during initial synchronization and ongoing replication. As a result, precious Remote Copy link bandwidth is used efficiently by replicating only non-zero written data. This capability also enables near-instant initial synchronization of new volumes that are added to a Remote Copy group.
Additional capacity savings result when Remote Copy bandwidth optimization is coupled with HP 3PAR Thin Conversion and Thin Persistence Software. Fully provisioned, “fat” source volumes can be converted simply and rapidly to higher-efficiency, “thin” target volumes during remote replication while making efficient use of Remote Copy bandwidth. Capacity associated with zero pages on the source volume is eliminated on the replication target volume.
Remote Copy terminology
Remote Copy volume groups (RCGs) are pairs of virtual volume sets that are logically related and that are used
when data needs to be consistent between specified sets of virtual volumes.
Most of the Remote Copy operations you perform are on sets of virtual volumes that have been formed into
RCGs.
You form volume groups in order to maintain consistent data across a set of virtual volumes. For example,
applications might issue dependent writes to a number of virtual volumes. If you add these related virtual
volumes to one volume group, the database and other applications can correctly process the data.
You can also form volume groups to simplify administration. Even if a set of virtual volumes is unrelated and does not need write consistency, you can add the volumes to a single volume group to reduce the number of commands that you need to enter.
For example, a single start/stop/setrcopygroup command applies to all the volumes in the specified volume
group.
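For instance, with placeholder names, one command operates on every volume in the group:
cli% startrcopygroup GroupA
cli% stoprcopygroup GroupA
The first command starts replication for all volumes in GroupA; the second stops replication for the whole group.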
How Remote Copy volume groups work
Remote Copy functions as if the virtual volumes in a volume group are related and therefore ensures that the
data in the virtual volumes within a volume group maintain write consistency.
When you start or stop Remote Copy operations, Remote Copy starts and stops operations for the whole volume
group.
When you (or an automated process) create a point-in-time snapshot of virtual volumes in a volume group,
Remote Copy blocks writes to all volumes in the group in order to ensure a consistent point-in-time virtual copy
of the whole volume group.
HP 3PAR Remote Copy data integrity
RCGs are the HP 3PAR implementation of consistency groups for Remote Copy operations. Volume groups are
collections of volumes that are required to have write order consistency.
Use cases include multiple volumes written to by a database application. Unless write order consistency is
maintained across all volumes associated with a database, recovery from the copy cannot be guaranteed. HP
3PAR Remote Copy volume groups enforce this write order consistency.
Intervolume group fast resync
This feature, introduced with HP 3PAR OS 3.1.2, enables the movement of Remote Copy volumes between different RC volume groups without needing to fully resynchronize a volume.
This makes very efficient use of precious bandwidth because a full resync of the volume is avoided entirely; therefore, there is no impact on RC performance. It gives you greater flexibility to modify existing volume groups when configuration changes are required and volumes must be moved between volume groups.
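A minimal sketch of such a move, using placeholder volume and group names (the snapshot-retention option shown is an assumption and should be confirmed in the CLI reference for your HP 3PAR OS version):
cli% dismissrcopyvv -keepsnap vv01 GroupA
cli% admitrcopyvv vv01 GroupB 3parB:vv01.r
The volume is removed from GroupA while a resynchronization snapshot is retained, then admitted to GroupB; the retained snapshot allows a delta resync rather than a full copy.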
Autonomic replication groups
3PAR Autonomic Replication is software designed to enhance data center agility and efficiency when provisioning
storage that is to be replicated on 3PAR Storage. Building on the foundation of 3PAR Rapid Provisioning and
3PAR Autonomic Groups, Autonomic Replication enables users to simplify, automate, and expedite the storage
replication configuration.
Autonomic Replication also increases agility because changes are handled autonomically at the subsystem level. When the source volume is grown online, the target volume is automatically grown to match.
Using the CLI, you can associate a pair of CPGs with a Remote Copy group. When a VV is added to the group, the target VV can be created automatically on the second array, with the right size and type. This applies to the initial admission of the VV to the RC group; growing the VV on the primary array likewise grows it on the second array.
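A sketch of this workflow, with placeholder CPG, group, and volume names (the -usr_cpg option name is an assumption here; verify the exact form against the CLI reference for your OS version):
cli% creatercopygroup -usr_cpg FC_r1 3parB:FC_r5 GroupA 3parB:sync
cli% admitrcopyvv vv01 GroupA 3parB:vv01.r
The first command creates GroupA replicating synchronously to 3parB and associates local CPG FC_r1 with target CPG FC_r5; when vv01 is then admitted, the target volume vv01.r can be created automatically on 3parB from the associated CPG.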
Remote Copy: Synchronous mode

Synchronous mode operation
In synchronous mode, a host-initiated write (1) is performed first on the primary storage array (2). The write request is concurrently forwarded (3) to the secondary or backup storage array (4), which acknowledges the forwarded write back to the primary array (5). Finally, the primary array acknowledges to the host server (6) that the data write has been completed.
Synchronous mode adds steps, and therefore latency: on both the primary and secondary storage arrays, data is written to the caches of two nodes, and the round trip to forward the write request to the secondary array takes time. Writing the data to cache on two nodes at each storage array provides additional redundancy in case one node fails before the write can be copied to the physical disk at either site. The host server write is acknowledged after the active cache update completes and the backup acknowledgement is received.
Remote Copy: Asynchronous periodic mode

Asynchronous periodic mode operation
In the asynchronous periodic mode, host writes (1) are performed only on the primary storage array (2), and the
host write is acknowledged (3) as soon as the data is written into the cache of two nodes (2) on the primary
storage array.
This additional redundancy is in place in case one node fails before the write can be copied to a disk. At this point,
no copy of this data is sent to the secondary or backup storage array. The primary and backup volumes are
resynchronized periodically, for example, when scheduled or when resynchronization is manually initiated
through the syncrcopy command from the CLI. The deltas (using snapshots/virtual copies) of the volume or
volume group are copied to the secondary or backup array. If, between two resynchronizations, an area of the
volume is written to multiple times, only the last or most recent write needs to be sent over to the backup
storage array. Therefore, when using Remote Copy in asynchronous periodic mode, less data is transferred than
would be the case when using synchronous mode.
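For example, with a placeholder group name, an ad hoc resynchronization is triggered from the CLI with:
cli% syncrcopy GroupA
This starts a delta resynchronization of all volumes in GroupA to the backup array.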
Asynchronous periodic mode operation details
For initial copy of the data from primary to target, a snapshot is taken (A) and then copied as the base volume to
the secondary site (AR). How long will this take? Many factors can affect this, such as the bandwidth available
between the sites, the amount of data to be transferred, how busy the destination array is, and so on. As has
been mentioned previously, if the link bandwidth is restricted or there is a lot of data to transfer, tape seeding
might be an option for the customer to consider. After the original full copy of the base volume has been
completed, we can commence the snapshot schedule.
The snapshot schedule starts, or an ad hoc resynchronization is initiated by the customer. First, a snapshot of the primary site volume is taken (B), and a snapshot of the secondary site base volume (AR) is also taken, before the B-A delta is resynchronized into the secondary base volume (BR). Why is this done? If a failure occurs during transmission and the base volume at the secondary site (BR) does not resynchronize correctly, the customer can fall back to the last complete resynchronization of the secondary base volume, which in this case would be AR.
When the resynchronization is complete on the secondary site, base volume BR, the original snapshots of the
primary site base volume (A) and the secondary site base volume (AR) can now be deleted because they are no
longer required. Base volume snapshot B at the primary site is now synchronized with the secondary site base
snapshot BR. The primary site base volume is now ready for the next resynchronization operation to take place.
By deleting snapshots A and AR, the storage arrays are saving space at both the primary and secondary sites. It is
possible for the customer to have separate snapshot schedules, not Remote Copy schedules, in place at either
the primary or secondary site if they want additional flexibility or security.
Asynchronous periodic mode benefits
For asynchronous RCGs, a Sync Period option can be set. The minimum time is 5 minutes. If a Sync Period is not
set, a manual resync can be performed.
Asynchronous periodic mode is ideal for organizations that need to replicate data over distances that are too
great for synchronous replication.
As distances increase, so does the latency between the host write and the array acknowledgement. This puts a
practical limitation on the distances that can be used for synchronous mode, with each application’s performance
having a greater or lesser dependency on latency of the writes to disk. However, the replication distance for
asynchronous periodic replication is effectively limitless.
What is it and how does it work? It is a point-in-time (PIT) synchronization based on snapshots: the latest snapshot generates time-based delta updates relative to the previous snapshot.
There are several advantages to using asynchronous periodic mode over synchronous mode. You can have
greater distances between replication sites because latency is not such a key issue, and your bandwidth
requirements are typically less. This can mean that a less expensive link can be used between the sites, or an
existing link can be more efficiently used. For example, it can be used for regular traffic during the day but during
quieter periods (overnight), it can be used for Remote Copy operations. If one area of a volume is written to
multiple times between two resynchronizations, Remote Copy only needs to send the last write to the backup
system, whereas with synchronous, it would be sent for every write. Therefore, Remote Copy asynchronous
periodic mode transfers less data overall than synchronous mode, thereby saving bandwidth. It is the most space-efficient, bandwidth-friendly write mode because it uses differential snapshots (deltas), which reduce the amount of data transferred and the time the link must be used to transfer it. For some customers,
bandwidth pricing might be based on the amount of data, or “traffic,” that gets transmitted. This means that
even with asynchronous periodic mode, multiple “delta-copy resynchronizations” can require lots of writes,
which can take bandwidth use to premium pricing. Fortunately, asynchronous periodic mode is flexible enough
such that the timing and frequency of delta-copy resynchronizations can be tailored to meet the customer’s
specific bandwidth requirements.
Asynchronous periodic mode summary
For Remote Copy to automatically resynchronize asynchronous periodic mode volume groups, system
administrators must configure the synchronization period for the volume group. There is no default
synchronization period, but the minimum that you can set is 5 minutes for a volume group up to a maximum of 1
year.
To set the synchronization period, you can use either the CLI or MC interface. The synchronization period can be
modified after RCG creation, and the RCG does not have to be stopped or suspended to change the sync period.
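For example, a 15-minute synchronization period might be set from the CLI as follows (a sketch only; System2 and Group1 are hypothetical target and group names, so verify the exact setrcopygroup syntax in the CLI reference for your HP 3PAR OS level):
cli% setrcopygroup period 15m System2 Group1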
The information in the main table shows the maximum latencies for all currently supported HP 3PAR StoreServ
RC configuration links.
Note that all values are round-trip times and that distance itself is not the limiting factor; latency is the key value that you must adhere to when setting up RC between sites and arrays.
With SLD, instead of the traditional source array replication of a virtual volume to another remote target array in
what could be called a one-to-one relationship, SLD enables customers to replicate the same source virtual
volume to two remote target arrays simultaneously, one synchronously and the other asynchronously.
An SLD configuration is composed of two targets: one long-distance periodic group and one short-distance
synchronous group. One primary array uses two backup arrays and participates in two pairs, one for each backup
array.
This is an advanced feature for remote replication where the same volume is protected on two arrays, one in
synchronous mode and the other in asynchronous mode. In the event of failover, customers can fail over to the
synchronous site, which would be the majority of the time, or to the asynchronous site.
To recover from a particular failover, customers only need to replicate the delta changes from one of the DR
sites, and therefore, customers do not require a full sync of a volume.
In the event of a primary array failure, one of the backup arrays (typically the backup/secondary1 array sharing a
synchronous Remote Copy connection with the primary array) assumes the role of the primary array. The
additional backup/secondary2 array in the SLD configuration then serves as the backup/secondary of the new
primary array. The volume on the new primary array is updated periodically (deltas) on the backup/secondary2
array using the standby IP link. For asynchronous mode, full resynchronization is required. For synchronous and
periodic, a full resynchronization is not required, but is highly recommended because the volumes on the old
primary site might have been affected by the disaster.
You are not allowed to have two separate SLD configurations at the same time, but must use a combination of
SLD and bidirectional replication.
• Bidirectional support on sync link only
• No Peer Persistence support on SLD configurations
• SLD is exclusive to all other Remote Copy configurations
• SLD requires InForm OS 2.3.1 or higher
As of HP 3PAR OS 3.1.2, One-to-One (1:1) Remote Copy configurations support mixed mode, allowing synchronous and asynchronous periodic replication to run simultaneously.
To do this, use a dedicated pair of links for synchronous and another pair of dedicated links for the asynchronous
periodic replication. In bidirectional Remote Copy, the backup array (for example, System 2) also contains
primary volume groups and, therefore, is also considered a primary array for the purposes of the primary volume
groups it contains.
Both synchronous and asynchronous periodic mode can be configured as well as bidirectional support.
The HP 3PAR StoreServ Remote Copy supports Many-to-One topologies, including up to a 4-to-1 storage array
configuration. In this configuration, multiple HP 3PAR StoreServ arrays use asynchronous periodic mode to
replicate their primary volumes to a single, consolidated replication array. This particular topology is useful when
an organization is attempting to build a DR site for multiple remote sites. Similarly, multiple StoreServ arrays at a
single site, each with some volumes of their storage considered for replication, can be consolidated into a single
replication target at the remote site. In the example shown in the main diagram, four sites (A, B, C, and D)
replicate to a single central data center or DR site. This topology allows the central DR site to use bidirectional
replication with one other site, Site D in the example, ensuring that critical applications at the central site can be
protected offsite as well.
An N-to-1 Remote Copy configuration is composed of multiple Remote Copy pairs. A maximum of four primary
storage arrays can use the same backup or DR site storage array.
HK904S D.00
2 – 33
Managing HP 3PAR StoreServ II
Remote Copy
In a One-to-Many (also known as 1-to-N) Remote Copy configuration, a single primary storage array can use multiple HP 3PAR StoreServ arrays as backup arrays. For the current HP 3PAR OS release (3.1.2), a One-to-Many Remote Copy configuration has a maximum of two secondary or backup targets. Just like Many-to-One Remote Copy configurations, One-to-Many configurations can operate bidirectionally.
A 1-to-N Remote Copy configuration is composed of multiple Remote Copy pairs. The single primary storage
array can use a maximum of two backup storage arrays; therefore, the primary array participates in two Remote
Copy pairs, one for each backup or secondary array.
In an M-to-N Remote Copy configuration, bidirectional data replication takes place in a 4x4 fan-in, fan-out
configuration.
Data replication occurs without the need for dedicated Remote Copy pairs. The transport layer can be RCFC, RCIP,
or FCIP, or a mixture of these, with up to five links per node. Only one RCIP link per node is possible; the other
links can be RCFC or FCIP.
To change the transport layer between the members of a Remote Copy pair, you must first remove the targets
and set up all of the groups again.
Replication modes can be either synchronous or asynchronous periodic, or a mixture of these.
Up to 6,000 virtual volumes (VVs) can be replicated.
M-to-N configurations are supported on the following HP 3PAR StoreServ systems:
• HP 3PAR StoreServ 10800 Storage
• HP 3PAR StoreServ 10400 Storage
• HP 3PAR StoreServ 7450 Storage
• HP 3PAR StoreServ 7400 Storage
• HP 3PAR StoreServ 7200 Storage
• HP 3PAR F-Class Storage
• HP 3PAR T-Class Storage
If a remote/target site is unreachable and replication cannot continue, writes can continue to the primary
volumes at the primary site, and snapshots are taken at the primary site for later resynchronization. The
resynchronization can be done (after the failure causing the initial behavior is corrected) either manually or
automatically.
If a primary system or primary volume group fails, you can change secondary volume groups to primary volume
groups to aid in disaster recovery. This change:
• Reverses the direction of data replication
• Enables I/O to the volume groups
• Allows you to export the volume groups as read/write as part of the disaster-recovery process (if you do not change secondary volume groups to primary groups, volumes are exported as read-only)
To stop or suspend the replication of a Remote Copy group, use the stoprcopygroup command or use the MC.
Snapshots will be maintained at the primary site for resynchronization by default.
An RCG must be stopped before certain actions can be performed on it; examples include failover and changing the mode of the RCG.
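For example, stopping a group named Group1 (a hypothetical name) might look like this:
cli% stoprcopygroup Group1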
When a disaster occurs that requires failover to the remote site, it is important that the failover be easy to
manage. A failover scenario could be at the level of the entire physical site, or it might be limited to a single
server. HP 3PAR Remote Copy allows for failure of entire Remote Copy targets, which results in failover of all
volumes that are being replicated between the StoreServ arrays. Similarly, the same command can be used to
trigger failover of individual Remote Copy volume groups for cases where a single server has failed.
Changing one volume group
To change a single secondary volume group to a primary group, enter:
setrcopygroup failover <group_name>
• <group_name> is the name of the secondary volume group (for example, Group1.r96) to change to a primary group
Changing all volume groups on a system
To change all secondary volume groups on the backup system to primary groups, enter:
setrcopygroup failover -t <target_name>
• <target_name> is the name of the primary (failed) system (for example, System1)
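Using the example names above, the two forms would look like this:
cli% setrcopygroup failover Group1.r96
cli% setrcopygroup failover -t System1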
After failover, replication by default is not resumed in the other direction. To do so, the RCG must be recovered
(resumed) using the setrcopygroup recover command or the MC.
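A minimal sketch, reusing the hypothetical failed-system name from the failover example (check the setrcopygroup reference for the exact recover syntax):
cli% setrcopygroup recover -t System1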
New CLI commands in HP 3PAR OS 3.1.3
Users can monitor Remote Copy progress with two new commands.
• srstatrcvv—System Reporter reports on the performance of the Remote Copy volumes
• srstatrcopy—System Reporter performance reports for Remote Copy links
Each of these commands provides greater insight for the user when Remote Copy operations have been created
and are running.
The following example displays aggregate hourly performance statistics for all nodes and Remote Copy volumes
beginning 24 hours ago:
cli% srstatrcvv -hourly -btsecs -24h
The following example displays daily Remote Copy volume performance for volumes grouped by
port_n,port_s,port_p:
cli% srstatrcopy -daily -attime -groupby port_n,port_s,port_p
The HP 3PAR Storage Replication Adapter (SRA) software for VMware vCenter Site Recovery Manager (SRM)
integrates VMware SRM with HP 3PAR StoreServ Storage and replication software to provide a complete and
integrated business continuity solution.
This solution offers centralized management of recovery plans, nondisruptive testing and automated site
recovery, failback, and migration processes. The unique HP 3PAR StoreServ SRA software combines HP 3PAR
Remote Copy Software and HP 3PAR Virtual Copy Software with VMware SRM to ensure the highest performing
and most reliable disaster protection for all virtualized applications.
HP 3PAR Recovery Manager for VMware integrates directly into the VMware vCenter Server virtualization management console to give
vSphere system administrators superior granularity and control. This plug-in powers the creation of hundreds of
VM-aware, point-in-time, disk-based snapshots that can be used for rapid online recovery.
When used together, HP 3PAR Recovery Manager Software and HP 3PAR Virtual Copy Software give VMware
administrators a simple, automated, integrated process for protecting and recovering virtual machine disks
(VMDKs), whole VMFS volumes, individual virtual machines, and even individual files within VMware vSphere
environments.
NOTE: HP 3PAR Recovery Manager for VMware vSphere requires a separate license to enable the functionality.
Two Remote Copy systems with synchronous remote copy, with a third site acting as a Quorum Witness, allow
you to choose which system should be servicing requests. In this illustration, the primary site has experienced a
failure, and the systems are failed over to the secondary site, with the storage already set up at the remote site.
Automatic means no user interaction is required.
Transparent means that the host does not recognize that there has been a failover, applications continue to
function, and there is no interruption of the I/O.
• Automatic—Failover from one 3PAR StoreServ array to another can occur without user intervention, as long as the failover conditions are met.
• Transparent—Failover is nondisruptive to the hosts and applications running on them.
• Failover—An operation that reverses the relationship between volume groups. The secondary volume group becomes primary, and the primary volume group becomes secondary (if the primary array is available).
Configuration is key in these scenarios. Note that the Quorum Witness is on a third site that is acting as a record
of what is happening on both the primary and secondary sites.
Users might also want to do a manual failover:
• When they perform a vMotion and migrate a VM from the primary to the secondary site
• When a VM dies and the vSphere cluster brings up that VM on the remote site
In both cases, a manual failover of the associated volume group ensures that the migrated VM can continue its
I/Os to the StoreServ system local to it.
The failover only occurs in a scenario where the target becomes unavailable.
• Phase 2 of Peer Persistence (available with 3.1.2 MU2) supports a Quorum Witness (QW).
− A quorum is a pair of StoreServ systems in a Peer Persistence configuration.
− QW enables automatic transparent failover between sites.
• The HP 3PAR Quorum Witness software is updated regularly with data from both primary and secondary arrays.
• In the event of a failure of a primary StoreServ array, the secondary StoreServ array detects that the Quorum Witness is not getting updated from the primary StoreServ and initiates a failover operation.
• Quorum Witness is typically on a third site where events that might affect site 1 or site 2 cannot affect the Quorum Witness site at the same time.
• Additionally, the QW connects to arrays on both sites using non-RC links, so issues with RC links do not affect the QW connection.
• With these two configuration characteristics (site and link independence), QW helps determine the nature of the failure.
− Link failure: Are the two arrays running but not communicating because of a link failure? QW would still be receiving updates from both of the arrays.
− Array/site failure: Has one of the arrays or sites failed? The QW would not be receiving updates from one of the arrays that has failed.
• The StoreServ arrays store their individual Quorum state.
• Quorum states can be:
− Not-started
− Started
− Failover
− Failsafe
• The failure of StoreServ A results in a loss of RC links. The QW state becomes “Failover.”
• StoreServ B becomes primary (the switchover command is automatically initiated).
• When StoreServ A recovers, and RC links are re-established, it notices that B is now primary.
• StoreServ A can be manually restored to primary.
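As a sketch, the manual restore might look like this from the CLI, assuming a group named Group1 (hypothetical); showrcopy is used first to confirm the current group states:
cli% showrcopy groups
cli% setrcopygroup switchover Group1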
The HP 3PAR Peer Persistence software allows customers to use both their primary and secondary sites in an “active-active” mode, thereby putting the secondary site to much more active use than just serving as an expensive insurance policy against disaster.
As shown in the diagram, each host is connected to each HP 3PAR StoreServ array on both sites using a
redundant fabric. Additionally, each volume must use 3PAR Remote Copy to maintain a synchronous copy of
itself at the other site.
While the primary volume A on Site 1 is exported in a read/write mode, its corresponding secondary volume A on
Site 2 is exported in a read-only mode. For example, in the main figure, Volume A (primary) and Volume A
(secondary) are being exported to hosts on both sites with a common WWN (LUN A -123). However, volume
paths for a given volume are “active” only on the HP 3PAR StoreServ array where the “primary” copy of the
volume resides.
In the diagram, for Volume A (primary), the path is active on HP 3PAR StoreServ A on Site 1 whereas for Volume B
(primary), the path is active on HP 3PAR StoreServ B on Site 2.
In a managed switchover scenario when hosts from Site 1 fail over to Site 2, the paths marked passive for their
secondary volumes become active, and the hosts continue to access the same volumes (with the same WWN) as
they were accessing before the failover. This transparent failover capability, enabled by the Peer Persistence
software, protects customers from unplanned host and application outages.
General notes for HP 3PAR StoreServ Peer Persistence:
• Remote Copy synchronous must be configured between the two HP 3PAR StoreServ arrays.
• The ESX host should be running ALUA (persona 11), which causes storage to have an active path (primary HP 3PAR StoreServ array) and a passive path (secondary HP 3PAR StoreServ array) to the ESX host.
• The creation of RC volumes on the secondary site happens automatically.
• RC volumes will have the same WWN.
• If the primary HP 3PAR StoreServ array fails, the customer must do an RC switchover to the secondary 3PAR StoreServ array for the passive path to become active.
Currently supported environments:
• ESX vSphere 5.x
• HP 3PAR Synchronous Remote Copy
• Round-trip (RTT) latency up to the Remote Copy supported maximum of 2.6 ms
Requirements:
• HP 3PAR Peer Persistence requires an individual license or is part of the Replication Suite, which includes the following functionality and features:
− HP 3PAR StoreServ arrays
− HP 3PAR Remote Copy License (both arrays)
− HP 3PAR Peer Persistence License (both arrays)
System Reporter
Before HP 3PAR OS 3.1.2, External SR was the only option users had to collect and report historical performance
data from one or more HP 3PAR arrays. The External System Reporter is installed on a standalone server and
configured to collect and store performance data for all the storage servers in the environment in a local
database. The External System Reporter was also used to manage the Adaptive Optimization (AO) functions for all storage servers monitored (pre-HP 3PAR OS 3.1.2).
With the introduction of HP 3PAR OS version 3.1.2, an instance of the System Reporter software is by default
installed on the HP 3PAR StoreServ system, and it automatically collects performance data from the local
system. The collected information is available through the command line interface (CLI) using the sr* commands.
The On-Node System Reporter is intended to manage Adaptive Optimization for the local system, and
configuring AO is made available in the Management Console (MC) interface. MC lets you view collected data in
report form and graphs for the system beginning with Management Console 4.4.
The External System Reporter is installed separately on a server that communicates with the HP 3PAR array through the IP network interface. External System Reporter is a licensed product and allows the system
administrator to access various performance metrics. The External System Reporter tool allows you to examine
performance point-in-time data, and generate and schedule reports measuring the performance on the HP 3PAR
array. Reports are generated by using data that is captured and stored using various supported databases.
All AO operations with 3.1.2 have moved to the On-Node System Reporter. The External System Reporter,
however, can continue to be used for all non-AO functionalities like advanced performance/space analysis and
reporting.
On-node System Reporter for HP 3PAR OS 3.1.2 runs on the nonmaster nodes, and it stores performance and
space data in SQLite data files. If the node it is running on fails, it simply moves to another node and continues to
run. NOTE: For this move to another node to be possible, the system must have more than two nodes. On-node System Reporter can perform
Adaptive Optimization on the controller node.
Configuration and setup
System Reporter is part of HP 3PAR OS 3.1.2, and it does not need to be installed or configured. The only setup is to create the .srdata volume, which happens automatically as part of the out-of-the-box (OOTB) process or by running the admithw command. No configuration
is required to set up sampling. You can stop and start System Reporter, but there is no way to change the
sampling or intervals.
To upgrade from 3.1.1 or earlier, customers only need to create the .srdata VV, which is created as part of the
out-of-the-box (OOTB) process or by running admithw.
Considerations
If the system is very busy, or if the disks that the .srdata VV uses are very busy, sampling might be slower than
expected. For example, high-resolution samples could take longer than five minutes.
NOTE: If only one node is online, it will be the master node and will not mount the .srdata virtual volume. This is
intentional so that the master node does not get overloaded.
System Reporter monitors performance and the usage of storage resources, and it allows you to generate charts
and graphs that report useful statistics for planning and configuring the operation of HP 3PAR Storage Systems.
The highly customizable, robust reporting offers straightforward report sharing and report scheduling, which
simplifies performance monitoring and assists in gathering data for optimization and planning. System Reporter
enables quick troubleshooting and isolation of performance issues, minimizing business impact. System Reporter is particularly useful for service providers and enterprises that need detailed information for service-level administration. The ability to create reports by user group supports chargeback and meeting service-level agreements.
System Reporter provides the following main features:
• Convenient access to configuration options for selecting systems to include for reporting, specifying sampling
parameters, scheduling reports, and generating alerts
• Extensive selection of reports for obtaining performance and storage utilization statistics on selected objects
(such as hosts, ports, nodes, physical disks, virtual volumes, and so on)
• Quick access to predefined reports that contain useful statistics for most common types of installations
• Scheduling of reports with predefined parameters that are initiated at predetermined times and can then be
accessed when needed
• Customization of reports using the standard web interface (or through the provided Excel client), which provides specifically selected and formatted reports for specified systems
• Options for choosing the time and duration for the collection of reporting statistics, which can be initiated at a
specific time, collected over a period of time, and compared between a range of times
• Options for viewing and comparing report information in a variety of formats through a selection of
charts and tables
• Alerts that can be configured to send email notifications to a specified address when certain reporting criteria are met
• Support for customized formatting of report data using the comma separated values (CSV) file
formatting standard for inclusion in web or Excel applications.
• Support for creating custom web or Excel reports using well-documented web queries.
• Access to the database schema for direct queries to the reporting data stored in the database used by
System Reporter.
System Reporter 3.1 MU2 supports the following databases:
• SQLite (default, included with System Reporter). SQLite databases have significant limitations that you should
consider before opting to use them. The main limitation is that because it uses very coarse-grain locking, it is
only suited for situations requiring smaller database size and limited concurrency. If the database size is
expected to grow to more than about 1 GB, or if reports are run very frequently, we recommend using either
MySQL or Oracle databases. SQLite is not supported when sampling more than one HP 3PAR StoreServ
Storage array.
• Microsoft SQL Server (not included) must be obtained and installed separately from HP 3PAR System Reporter
Software and can be run on a remote host. HP 3PAR System Reporter Software uses Open Database
Connectivity (ODBC) to connect to the Microsoft SQL Server. HP 3PAR System Reporter Software does not
support Microsoft SQL Server through ODBC on Linux platforms.
• MySQL (not included) must be obtained and installed separately from System Reporter. MySQL version 5.5 or
later is supported. System Reporter uses MyISAM tables in MySQL. These tables do not support transactions
(which are not needed by System Reporter) and, as a result, have very good performance even when they
grow large. For very large databases and large sample sizes, MySQL is the recommended database. The
MySQL database can be installed on a remote database server host, not necessarily on the machine on which
System Reporter is installed.
• Oracle (not included) must be obtained and installed separately from HP 3PAR System Reporter 3.1 MU1
Software. Oracle version 11g is supported. Performance of sample insertion with Oracle is a little lower than
that of MySQL, but Oracle is also a good choice for large databases. The Oracle database can be installed on a
remote database server host, not necessarily on the machine on which HP 3PAR System Reporter Software is
installed. Note that 64-bit Red Hat Linux is not supported if using Oracle because it requires a 32-bit Oracle
client to be installed, and this is not supported on 64-bit Linux.
• To begin using HP 3PAR External System Reporter, you must install the following components, which are supplied on the installation CD:
− HP 3PAR OS CLI
− Web server (Apache HTTP Server)
− System Reporter tools (sampler, default SQLite database, and web server tools for generating reports)
• To install on Windows:
− installer.exe
• To install on Linux:
− sysrptwebsrv-2.9-1.i386.rpm
− sampleloop-2.9-1.i386.rpm
• To install and use External SR, you must have a valid license.
System Reporter
Before installing HP 3PAR External System Reporter, select a system on which to run the System Reporter
sampler and Web server. This system must use Windows Server 2003, Windows Server 2008, Windows Server
2012, or Red Hat Enterprise Linux 5.
You can access the HP 3PAR External System Reporter main features using any standard web browser. A web browser is not provided on the HP 3PAR External System Reporter CD.
To access HP 3PAR System Reporter using a web browser, use the URL below, where <host_name> is the name or IP address of your System Reporter host server:
http://<host_name>/3par/
Consider creating a user account just for External SR; this will ensure that the arrays are continually monitored
and being sampled.
On this screen you can add arrays that you want the External SR management server to sample. You can also
remove arrays and change the sampling information for existing arrays from this screen.
Best practice
• If you are running External System Reporter against an array with HP 3PAR OS 3.1.2 or newer, HP recommends skipping LD Performance Data collection because Adaptive Optimization now runs as part of the 3PAR OS. If the System Reporter server and database are running on the same host, double the values found by the sizing tool. The sizing tool is an Excel sheet provided with the HP 3PAR System Reporter installer. Running System Reporter in virtual machines is acceptable as long as the VMs are sized correctly.
• Do not increase the default retention times for high-resolution and hourly data. If a longer history is required than the default allows, schedule automatic reports to be created every day. This allows the data to be viewed for a long time without requiring a lot of CPU, memory, and disk resources.
Sampling Status displays arrays being sampled and the time since the most recent high-resolution sample for
each array.
For additional information on External SR and detailed explanation of reports, refer to the HP 3PAR System
Reporter User Guide or “HP 3PAR System Reporter Technical Whitepaper.”
The Quick Reports option allows you to immediately access a variety of predefined reports (created through CGI
programs) that are deemed useful for most installations. The reports are made available through a menu tree
that you expand and collapse to select the systems and options of interest. For instance, one of the reports
provides a comparison of the peak hourly system CPU utilization for the past seven days for all systems, while
another compares the utilization of the most active physical disks for a given system.
The Scheduled Reports option allows you to view reports that were created at scheduled times based on
preselected parameters and stored in a reserved directory structure. You can either view the reports using the
web interface provided through External SR, or you can copy the report subdirectories to another area and view
them there.
Scheduled Reports offers the following advantages:
• Allows quick access to preconfigured reports on a scheduled basis instead of keeping active tabs in a web browser
• Allows the generation of reports to take place in the background at off-peak times to minimize impact on system performance
• Allows distribution of scheduled reports to users from a selected directory without giving access to policy configuration or system information outside a particular user's authority
When you schedule a report, a subdirectory is created with the specified name in the “scheduledreports”
directory according to the following structure:
scheduledreports/<report directory>/<report name>/<YYYYMMDDHH>
Each time the report runs, a new subdirectory is created for that instance of the report with the timestamp as the
name. All of the PNG image files, the .CSV file, and the .HTML file will be placed in that subdirectory.
You have the choice of making the scheduled reports accessible through the External SR Main Menu window, or
you can copy the directory structure to some other location with whatever permissions are appropriate for the
users who access the reports. The benefit of having a report directory structure is that you can limit the users who have access depending on the permissions that are assigned (for example, by creating .htaccess files).
One example where this can be useful is when multiple departments share an array. You can schedule various
reports specific to each department, place them in different report directories, and then allow each department
access only to their respective report directory. By default, the scheduled report process can run 10 reports at a
time. This can be reconfigured if required.
When scheduling a report, you can have an email sent to a specified address that provides a link when a
scheduled report is generated.
NOTE: When making scheduled reports available, the entire report directory should be published along with all
the associated subdirectories.
Custom Reports are more complex to generate because they allow for a great deal of customization, but
they are also very useful.
You can select the resolution and the type of report, and then specify which systems and domains are to be
monitored for the reporting information that is collected. You also have the choice of specifying how the
information is to be formatted for presentation through a selection of tables and graphs. In general, there
are two steps involved in generating a report.
The first step (shown on the slide) is to choose a report, select the resolution, select arrays or domains, and
click the Build Report Menu button. This generates the report menu for that report in a new window or tab.
In the second step, set the appropriate controls in the report menu and click the Generate Report button. The
report is created and can be viewed in a new window or tab.
When a report has been generated, it can be viewed graphically, or you can select the blue icon at the top of the page to generate a complete detail report in Excel spreadsheet format.
The comma-separated-values (CSV) file can be opened in an application such as an Excel spreadsheet and
charted or formatted as the user desires. If the browser is set to open files with a .CSV extension in a
spreadsheet application, this process can be performed automatically.
The CPU and VV Cache performance reports provide different metrics; they are:
• Read Hits—Number of reads that hit in the cache.
• Read Misses—Number of reads that miss in the cache.
• Read Total—Total number of reads. Not shown in charts.
• Write Hits—Number of writes for which the page is already in cache and is dirty (has previously written data that has not yet been flushed to disk).
• Write Misses—Number of writes that miss in the cache. A write is considered a miss if the page is not in the cache or if the page is not dirty in the cache (see above).
• Write Total—Total number of writes. Not shown in charts.
• Read Hit%—Percentage of reads (out of total reads) that hit in the cache.
• Write Hit%—Percentage of writes (out of total writes) that hit in the cache.
• Total—Number of accesses (reads + writes). Not shown in charts.
The slide shows an hourly bar graph report of physical disk performance.
A histogram of hourly Port IOPs performance is shown here. The reads are in red and the writes are in green.
You can configure System Reporter to send email alerts when certain metrics meet specified conditions. For
example, suppose you want to receive an email alert when any VLUN has an average read service time of more
than 100 ms in any high-resolution sampling interval. To do this, all you need to do is fill in a form with the
specified details and then submit the query.
To add an alert rule:
1.
Point your browser at the web server where Apache HTTP Server and the HP 3PAR System Reporter Web
server scripts are installed. The 3PAR System Reporter main window appears.
2.
Click Policy Settings in the Extras menu area. The 3PAR System Reporter Policies window appears.
3.
Choose the Alert Rules tab.
4.
Click Add Alert.
1. Choose the data table to which the rule applies from the drop-down list.
2. Choose the resolution of the samples to which the rule applies from the drop-down list. The rule will be evaluated for each sample of the chosen resolution.
3. Choose the system to which the rule applies from the drop-down list. Leave this blank if you want the rule to apply to all systems.
4. Choose the metric that the rule should calculate from the drop-down list. The available metrics depend on the chosen data table, and changing the data table will reset the selected metric.
5. Choose the direction that determines how the metric is compared to the limit value from the drop-down list. The available values are > (greater than) and < (less than).
6. Enter the limit value as a number. The metric is compared against this number.
7. Enter the limit count as an integer (zero or larger). For each sample interval, an alert email will only be generated if the metric exceeds the limit value (as compared by direction) for more than limit count objects.
8. Enter the condition (min_read_iops, min_write_iops, or min_total_iops) to indicate the type of condition that is to be monitored.
9. Enter the condition value to specify the minimum amount that is to be met for the associated condition.
10. Enter the recipient email address to whom the alert for this rule should be sent.
11. Click Submit Query. An alert window appears confirming that the alert rule was added.
12. Click OK to return to the Sampling Policies window.
On-Node (or embedded) System Reporter is integrated with the Management Console (MC) as of release 4.4. Using the HP 3PAR Management Console with the On-Node System Reporter, the
administrator can manage the HP 3PAR storage systems from a single management window, which
includes reporting on performance data. A single screen provides a dashboard view of all your connected
HP 3PAR storage systems, regardless of model, as well as graphical charts and tables representing
important system data and remote replication configurations.
To use On-Node System Reporter, the HP 3PAR OS must be version 3.1.2 or later and Management Console
4.4 or later.
In the Management Console, the Performance and Reports manager provides I/O Time and Size Distribution, Space, and Performance Statistics reports as follows:
• Region I/O Density reports for Adaptive Optimization (AO), common provisioning groups (CPGs), physical
disks (PDs), ports, virtual logical unit numbers (VLUNs), and logical disks (LDs)
• Space reports for AO, CPGs, PDs, virtual volumes (VVs), and LDs
• Performance statistics for physical disks, ports, VLUNs, and LDs
• Virtual volume set reports for QoS
• CPU and cache performance reports for controller nodes
On-node SR is part of HP 3PAR OS 3.1.2 or later and does not need to be installed or configured.
Setup of the On-Node System Reporter is part of the upgrade process (the requisite VV is created when admithw
runs as part of the upgrade) and as part of the out-of-the-box (OOTB) process. There is no manual setup or
configuration as there is with External System Reporter.
HP 3PAR continues to move more functionality from the External System Reporter to On-Node SR. Quick Reports
have become the default standard in gaining access to easy-to-use predefined reports that are designed to
report on the most common areas of array performance.
Quick Reports have been added as part of the Management Console starting with MC 4.4, and they allow quick
and easy access to some of the more common performance reports. The major sections covered under the Quick
Reports are:
• Physical Disks
• Ports (Data)
• AO Configurations
• CPGs
• VLUNs
• Virtual Volumes
• Virtual Volume Set
• Controller Node
By default, Quick Reports use 14 days of retention and low resolution to display captured data. On-Node SR data
is limited to the following sizes for data retention, and high resolution data is limited by the amount of data
stored within the SR database (.srdata):
• 7200: 60 GB
• 7400/7450/10400: 80 GB
• 10800: 100 GB
You can monitor the collection usage and the size of the .srdata volume’s growth by entering the hidden showsr
command. The .srdata file is dynamic and constantly changes. The showsr command is useful for monitoring the
On-Node SR functionality and shows:
• The controller node number on which the .srdata volume is mounted
• Capacity that is used on the volume and the size of the volume
• Number of files for each type of stat contained in the file
One graph or multiple reports can be displayed in the viewer pane. As a best practice, display only one graph at a time for better visibility of the performance metric being monitored.
Along with each of the reports displayed are the collection metrics that were captured during the collections
(which display at the bottom of the report).
To access On-Node SR Custom Reports from the Common Actions area, select New Report…
The Select Report window appears, allowing the administrator to select the report. A default name and
description are assigned (which can be edited).
For Custom Reports, the Object Selection window appears, allowing the administrator to provide Custom Report
specifics including:
• Chart Type
− Values at a Specified Time
− Values over a Time Interval
• Sampling Resolution
− Low (daily)
− Medium (hourly)
− High (every 5 minutes)
• Time Interval—Depending on the chart type selected, data selections in this field will be different
• Values Data—Depending on the chart type data selected, the choices in this box will be different
• Show Charts—The chart choices for the selected graph to be displayed
• Export Data—You can export data as a comma-separated values (CSV) or HTML file from all HP 3PAR Management Console displays (except for Performance)
In Management Console in the Performance & Reports area, the New Chart option (in the Common Actions area)
lets you create template and custom charts.
There are two chart types to choose from: Template and Custom.
Template chart selection:
1. Select Template, and then select the type of objects to plot by choosing PD Usage under Physical Disk, or Disk Ports, Peer Ports, Host Ports, RCFC Ports, or RCIP Ports under Ports (Data).
2. Click Next.
3. Select a system from the System list.
4. By default, all objects will be plotted in the chart. If you want to choose objects, deselect the All checkbox and then select objects to plot in the objects list.
5. Click Next to view the summary, or click Finish to display the chart.
At any time, you can use the controls at the upper right corner of each chart to pause or stop the generation of
the performance chart.
Pausing the chart stops the plotting of data, but data collection still occurs in the background.
Stopping the chart stops both data collection and plotting.
On-Node SR includes a large set of CLI commands. Each command displays captured data from the last 24-hour
collection in 5-minute increments as the default display.
The sr* CLI commands can, in general, be broken down into areas for investigation. The general areas are as follows:
• Space—Displays data space consumed or free in different categories: srcpgspace, srpdspace, srvvspace, srldspace
• History—Displays historical histograms for various performance collections: srhistld, srhistpd, srhistport, srhistvlun
• Remote Copy—Displays performance statistics for Remote Copy over a measured period: srstatrcvv, srstatrcopy
For a complete list of the sr* CLI commands, consult the HP 3PAR OS 3.1.3 Command Line Interface Reference
manual.
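For example, the following sketches (using the common sr* options described later in this module) might display daily virtual volume space history for the last week and hourly VLUN histogram data for the last day:
cli% srvvspace -daily -btsecs -7d
cli% srhistvlun -hourly -btsecs -24h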
The sr* CLI commands can, in general, be broken down into areas for investigation. The general areas are as follows:
• Stat—Displays historical performance statistics over a measured period: srstatcmp, srstatcpu, srstatld, srstatlink, srstatpd, srstatport, srstatqos, srstatvlun
• Remote Copy—Displays performance statistics for Remote Copy over a measured period: srstatrcvv, srstatrcopy
• AO—Several commands report on AO movement: sraomoves, srrgiodensity
For a complete list of the sr* CLI commands, consult the HP 3PAR OS 3.1.3 Command Line Interface Reference
manual.
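For example, the following sketch might display hourly controller node CPU statistics for the last 24 hours:
cli% srstatcpu -hourly -btsecs -24h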
srcpgspace
DESCRIPTION
The srcpgspace command displays historical space data reports for common provisioning groups (CPGs).
SYNTAX
srcpgspace [options] [<CPG_name>|<pattern>...]
AUTHORITY
Any role in the system.
SUBCOMMANDS/OPTIONS
-attime
Performance is shown at a particular time interval, specified by the -etsecs option, with one row per object group
described by the -groupby option. Without this option, performance is shown versus time with a row per time
interval.
-btsecs <secs>
Select the begin time in seconds for the report. The value can be specified as either the absolute epoch time
(time in seconds since 1 January 1970 00:00:00 Universal Time) or as a negative number indicating the number
of seconds before the current time. If it is not specified, the report starts with the earliest available sample.
Instead of a number representing seconds, <secs> can be specified with a suffix of m, h, or d to represent time in
minutes (-30m), hours ( -1.5h), or days (-7d).
-etsecs <secs>
Select the end time in seconds for the report. The value can be specified as either the absolute epoch time or as a
negative number indicating the number of seconds before the current time. If it is not specified, the report ends
with the most recent sample. Instead of a number representing seconds, <secs> can be specified with a suffix of
m, h, or d to represent time in minutes (-30m), hours ( -1.5h), or days (-7d)
-hires
Select high-resolution samples (5-minute intervals) for the report. This is the default setting.
-hourly
Select hourly samples for the report.
-daily
Select daily samples for the report.
-groupby <groupby>[,<groupby>...]
For -attime reports, generate a separate row for <groupby> items. Each <groupby> must be different and one of the following:
• DOM_NAME (domain name)
• CPGID (common provisioning group ID)
• CPG_NAME (common provisioning group name)
• DISK_TYPE (disk type of the physical disks used by the CPG)
• RAID_TYPE (RAID type of the CPG)
-disk_type <type>[,<type>...]
Limit the data to disks of the types specified:
− FC (Fibre Channel)
− NL (Nearline)
− SSD (Solid State Drive)
-raid_type <type>[,<type>...]
Limit the data to RAID of the specified types. Allowed types are 0, 1, 5, and 6.
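As usage sketches based on the options above, the first command shows CPG space at the most recent sample grouped by CPG name, and the second shows daily space history for Fibre Channel disks over the past week:
cli% srcpgspace -attime -groupby CPG_NAME
cli% srcpgspace -daily -btsecs -7d -disk_type FC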
srstatpd
DESCRIPTION
The srstatpd command displays historical performance data reports for physical disks.
SYNTAX
srstatpd [options] [<PDID|pattern>...]
AUTHORITY
Any role in the system
OPTIONS
-attime
Performance is shown at a particular time interval, specified by the -etsecs option, with one row per object group
described by the -groupby option. Without this option, performance is shown versus time with a row per time
interval.
-btsecs <secs>
Select the begin time in seconds for the report. The value can be specified as either the absolute epoch time
(time in seconds since 1 January 1970 00:00:00 Universal Time) or as a negative number indicating the number of
seconds before the current time. If it is not specified, the report starts with the earliest available sample. Instead
of a number representing seconds, <secs> can be specified with a suffix of m, h, or d to represent time in minutes
(-30m), hours ( -1.5h), or days (-7d).
-etsecs <secs>
Select the end time in seconds for the report. The value can be specified as either the absolute epoch time or as a
negative number indicating the number of seconds before the current time. If it is not specified, the report ends
with the most recent sample. Instead of a number representing seconds, <secs> can be specified with a suffix of
m, h, or d to represent time in minutes (-30m), hours ( -1.5h), or days (-7d).
-hires
Select high-resolution samples (5-minute intervals) for the report. This is the default setting.
-hourly
Select hourly samples for the report.
-daily
Select daily samples for the report.
-groupby <groupby>[,<groupby>...]
For -attime reports, generate a separate row for <groupby> items. Each <groupby> must be different and one of the following:
• PDID (physical disk ID)
• PORT_N (node number for the primary port for the PD)
• PORT_S (PCI slot number for the primary port for the PD)
• PORT_P (port number for the primary port for the PD)
• DISK_TYPE (disk type of the PD)
• SPEED (speed of the PD)
-disk_type <type>[,<type>...]
Limit the data to disks of the types specified. Allowed types are:
− FC (Fibre Channel)
− NL (Nearline)
− SSD (Solid State Drive)
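As usage sketches based on the options above, the first command shows hourly performance for Nearline disks over the last day, and the second shows a point-in-time report grouped by disk type:
cli% srstatpd -hourly -btsecs -24h -disk_type NL
cli% srstatpd -attime -groupby DISK_TYPE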
Several commands, as documented in the System Reporter User’s Guide, can help users create and monitor
critical performance alerts on the array. These commands do not report on data collection and therefore do not
start with “sr”; rather, they generate alerts based on thresholds set during the data collections.
Alert generation
Alerts are generated using the srsampler (system reporter sampler) command. Each time the sampler takes a sample, it checks all of the current criteria created with the createsralertcrit command. The data sample collected is compared against each alert string that the user generated in the creation of the alerts. If the criteria for an alert are met, it generates an alert using existing mechanisms, including SNMP traps and email alerts. An alert is generated for each of the objects in which the alert conditions are met.
NOTE: Each time a sample is taken, all existing alerts are cleared for each object on which alerts had already been set. If the condition for an alert is then met again, System Reporter regenerates a new alert. This is done so that alerts can be cleared automatically when the criteria are no longer satisfied.
A new feature that was introduced in HP 3PAR OS 3.1.3 allows you to set up custom alerting with the On-Node System Reporter. The feature is only enabled if a valid System Reporter license is installed on the array.
Using the CLI, the administrator can create customizable alerts to identify performance issues that the array might be experiencing. There are no predefined values or alerts within the tool, so you must define your own thresholds to alert on. You should base these values on observations made while monitoring the array using the performance reporting.
An example would be an array that sporadically experiences high service times throughout the work day, causing performance issues.
Example
Create an alert with criterion called busydrive. An alert is generated if a daily sample discovers at least 5 Fibre
Channel drives that have write service times over 10 ms and total iops over 1000.
cli% createsralertcrit pd -daily -count 5 -disk_type FC total_iops>1000,write_svcms>10 busydrive
QoS and Priority Optimization
Over the last decade, the storage industry has undergone a wave of consolidations. Enterprise organizations have moved their locally attached storage to large-scale, SAN-based storage systems that can hold thousands of individual disk drives. Thanks to this consolidation, it is common to see hundreds of physical and virtual servers,
each running one or more workloads connected to a single storage system. This consolidation has reduced the
complexity of data storage and helps make management, occupied floor space, and energy consumption more
efficient. It has also successfully lowered the total cost of ownership for storage in organizations.
The immediate consequence of consolidating data from many disparate workloads onto a single storage system
is contention for shared system resources. Examples of such shared resources include front-end host Fibre
Channel (FC), iSCSI, and Fibre Channel over Ethernet (FCoE) ports and adapters, back-end FC and SAS disk
connections, physical disks, data and control cache, ASICs, CPUs, and backplane interconnections. I/O requests
arriving at the front-end FC HBA ports are handled on a first-come, first-served basis. On the surface, this might
sound fair, but it is unlikely to provide equal or consistent throughput for multiple concurrent workloads. By
applying a priority policy to the incoming I/O requests, you can maintain the quality of service (QoS) and
performance level for a particular workload when competition surfaces for the shared resources of an array.
HP 3PAR Priority Optimization has a number of use cases that demonstrate its value.
Prioritize production workloads over testing and development
In all environments, production workloads take precedence over testing and development activities. Software
performance and scale-up test scenarios can easily deplete resources inside the array and affect production
applications running on the same storage system. Purchasing two separate arrays or partitioning a single array
to obtain isolation between these workloads has long been the recommended approach. HP 3PAR StoreServ
Storage systems accommodate multiple workloads, thanks to their wide striping architecture and tiered storage
implementation. By complementing these features with HP 3PAR Priority Optimization, storage administrators
can control and balance the distribution of array resources across multiple production and test and development
workloads on a single HP 3PAR StoreServ Storage system. Snapshots of the production database for analysis,
backup, and development purposes can be made available to the test and development organization with a
lower I/O priority using QoS. Additionally, HP 3PAR Priority Optimization can control the impact of bursty
workloads like large-scale source code compilations or batch database updates during working hours.
Receive protection from rogue processes
Even production-quality software can spawn rogue processes that attempt to consume an excessive number of
system resources. Database queries running full table scans during business hours can interfere heavily with
transaction-based workloads and other interactive applications, resulting in high response times for them. Using
HP 3PAR Priority Optimization can prevent a process or an application from over-consuming resources during the
day while lifting the limit during nighttime hours.
Manage application performance expectation levels
When HP sizes a storage system, it does so with the full expected workload for that array in mind. When
deploying the system with only a fraction of the intended workloads in operation, the delivered performance for these workloads will excel but will not be sustained when the other workloads eventually come online. To avoid support calls from users about reduced performance when the anticipated workloads are added to the system over time, use HP 3PAR Priority Optimization to define a QoS level for the initial workloads and set the right user expectation from the start.
Manage periodic variations in workloads
Some I/O workloads occur periodically, such as daily backups and database consolidations or monthly payroll
and billing cycles. Such workloads can require significant I/O resources starting at a well-defined time for a
known duration. Accommodating these workloads during business hours depletes the resources of interactive
workloads and reduces user productivity. By using HP 3PAR Priority Optimization, periodic workloads are
confined within an IOPS or bandwidth limit during periods of time when their impact on production and
interactive workloads would be detrimental. As an example, backups could get their full complement of
throughput only during off hours. This prevents a backup that is accidentally executed during the workday from
affecting daily production workloads.
Accommodate guest workloads
Guest visitors from remote offices might run applications intermittently for hours, days, or weeks and consume
large amounts of I/O. Using HP 3PAR Priority Optimization allows you to accommodate the storage requirements
for guest users’ applications on an HP 3PAR StoreServ system by limiting the number of IOPS and the amount of
bandwidth they can consume.
Fine-grained control of HP 3PAR Peer Motion software
HP 3PAR Peer Motion software migrates volumes online from one HP 3PAR StoreServ Storage system to another.
The migration data traffic runs over a dedicated pair of Fibre Channel connections called the peer links. With up to nine
virtual volumes (VVs) migrating concurrently, an HP 3PAR Peer Motion operation can contend with other
workloads for internal array resources on the source and destination system. Furthermore, the Fibre Channel
host ports used by the peer links on the source system can be shared with other hosts. The large sequential
reads on the source system from an HP 3PAR Peer Motion operation can impact the traffic to and from other
hosts using these same array host ports. HP 3PAR Priority Optimization can be used on the source array to
control the throughput of HP 3PAR Peer Motion data transfer over the peer links in a fine-grained way by
enabling a QoS rule limiting I/O bandwidth for the VVs under migration. In this way, internal array resources and
host port bandwidth are made available on the source array for competing workloads.
HP 3PAR PO requirements
HP 3PAR Priority Optimization is a feature of HP 3PAR OS, starting with version 3.1.2 MU2, and is supported on all
HP 3PAR StoreServ Storage systems that are certified for this version of HP 3PAR OS. This includes the HP 3PAR
F-Class and T-Class systems and HP 3PAR StoreServ 7000, 7450 and 10000 Storage systems.
A valid license for HP 3PAR Priority Optimization is required on the HP 3PAR StoreServ Storage system. The license is spindle-based and is available separately and as part of the HP 3PAR 7000, 7450, and 10000 Data Optimization suites.
Creating and managing QoS definitions requires HP 3PAR Management Console 4.4 or later. To use the command line, you must install HP 3PAR Command Line Interface 3.1.2 MU2 or higher.
Reports on HP 3PAR Priority Optimization are available through HP 3PAR System Reporter 3.1 MU1 or higher and
HP 3PAR Management Console 4.4 or higher. The reports on the QoS definitions require a license for HP 3PAR
System Reporter.
All major operating systems and connectivity modes are supported with HP 3PAR Priority Optimization. Any
incompatibilities are listed on the HP Single Point of Connectivity Knowledge (SPOCK) website.
HP 3PAR Priority Optimization software for HP 3PAR StoreServ Storage systems implements and manages a priority policy per virtual volume set (VVset), which serves as a proxy for an application, or per Virtual Domain. HP 3PAR Priority Optimization places limits on I/O requests with lower priority policies to help ensure that workloads with higher priority achieve their performance targets. HP 3PAR Priority Optimization is flexible and easy to configure and monitor, and it requires minimal supervision from storage system administrators. In contrast, some competitive QoS implementations require that you assign a workload to a predefined priority level. Those solutions are neither flexible nor capable of real-time enforcement.
HP 3PAR Priority Optimization is resident on the storage system and runs completely on the HP 3PAR StoreServ
Storage system. There are no host-based components to install, which means that enabling the software is as
simple as installing the license key.
This graphic shows the levels of virtualization on an HP 3PAR system.
Virtual volumes can be members of multiple VVsets. In the example from the showqos output, the VV(s) in the QoS_APP1 VVset can also be members of the QoS_Tenant_1 VVset, allowing for both an individual application limit and an overall tenant limit.
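As a sketch of this layered setup (the VVset and volume names and the limits here are hypothetical), one volume can be placed in both an application VVset and a tenant VVset, each with its own rule:
cli% createvvset QoS_APP1 app1.vv0
cli% createvvset QoS_Tenant_1 app1.vv0
cli% setqos vvset:QoS_APP1 -io 1000-5000
cli% setqos vvset:QoS_Tenant_1 -io 4000-20000
The volume is then throttled by whichever cap it reaches first: its own application limit or the shared tenant limit.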
HP 3PAR Priority Optimization operates by applying an upper-limit control on I/O traffic to and from hosts
connected through Fibre Channel, iSCSI, or FCoE to an HP 3PAR StoreServ Storage system. These limits, called
QoS rules, are defined for front-end IOPS and bandwidth, and are applied using Autonomic Groups (VVsets).
Every QoS rule is associated with a single target object. The smallest target object to which a QoS rule can be
applied is a VVset. Because a VVset can contain a single VV, a QoS rule can target one volume. A QoS rule
operates on all volumes inside a VVset.
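Because the smallest target object is a VVset, scoping a rule to a single volume means wrapping that volume in a VVset of its own first. A minimal sketch, with hypothetical volume and set names:
cli% createvvset vvset_db01 db01
cli% setqos vvset:vvset_db01 -io 2000-4000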
In HP 3PAR OS 3.1.3, every QoS rule has these eight attributes:
• System—Name of the HP 3PAR StoreServ system the rule operates on
• Name—Name of the QoS rule (fixed to the name of the VVset or the Virtual Domain)
• Scope—QoS rule governs a VVset, a Virtual Domain, or the “system”
• State—QoS rule can be active or disabled
• IOPS range—Min goal and max limit on IOPS for the target object
• Bandwidth range—Min goal and max limit on transfer rates for the target object
• Latency goal—Target maximum service time for the target object
• Priority—Priority level for the target object
At creation time of a rule, the default state is “enabled” and the default priority is “normal.”
If the upper limit for IOPS or bandwidth for a particular VVset has been reached, HP 3PAR Priority Optimization
delays I/O request responses for the Virtual Volumes (VVs) contained in that VVset. These delayed I/O requests
are pushed onto an outstanding I/O queue for the VVs in the VVset experiencing the limit breach.
Every QoS rule maintains its own queue for delayed I/O requests. These queues are constructed in the control
cache of the HP 3PAR StoreServ Storage controller node receiving the I/O request that needs to be delayed.
Only the I/O request descriptions (SCSI read and SCSI write requests) are queued, not the actual data, which leads to very fast handling of delay operations and no data cache consumption. When I/O requests reside longer than 200 ms in a QoS queue, any further incoming I/O requests to the volumes in the VVset are rejected and a QFULL response is returned to the host using the volumes.
If the workload for the VVset does not decline as a result of the delayed I/O requests, QFULL is returned to the
server in an attempt to get the server to throttle its workload to the storage array. The QFULL response prevents
delayed I/O from holding all system resources such as buffers and queues on the host, host bus adapter (HBA),
and VV layer.
Hosts should respond to the QFULL message appropriately and throttle I/O. The I/O delay and the eventual
QFULL response applies to all members of the VVset, even if only one of the VVs caused the QoS threshold
breach.
During operation, HP 3PAR Priority Optimization samples IOPS, bandwidth, and latency values per VVset and Virtual Domain every 8 ms and assembles 625 of these periods into a moving five-second window to adjust
the ingress of the I/O requests to a given QoS rule on a periodic basis. The migration toward a new IOPS or
bandwidth setting is driven by an exponentially weighted average of the five-second period and is completed in a
few seconds. The impact of tens of QoS rules has been determined to be negligible for the control cache and the
usage of the CPUs in the controller nodes.
When an I/O request reaches the HP 3PAR StoreServ Storage controllers, HP 3PAR Priority Optimization takes one of the following three actions:
• Pass the I/O request to the virtual volume
• Delay the I/O request by placing it in a private QoS queue that is processed periodically
• Return a SCSI Queue Full (QFULL) message to the server that sent the request
If the upper limit for IOPS or bandwidth for a particular VVset has been reached, HP 3PAR Priority Optimization
delays I/O request responses for the volumes contained in that VVset. These delayed I/O requests are pushed
onto an outstanding I/O queue for the VV(s) in the VVset experiencing the limit breach. Every QoS rule maintains
its own queue for delayed I/O requests. These queues are constructed inside each HP 3PAR StoreServ Storage
controller node receiving an I/O request that needs to be delayed. The controller node’s cache is not impacted
because the QoS rules are applied before write I/O data hits the cache. Only the I/O request descriptions are
queued, not the actual data. The size of a request queue varies by maximum delay time and QoS limits. When I/O
requests reside longer than 200 ms in a QoS queue, any more incoming I/O requests to the volumes in the VVset
are rejected and a QFULL response is returned to the host using the volumes. QFULL prevents delayed I/O from
holding all system resources such as buffers and queues on the host, HBA, and VV layer. Hosts should respond to
the QFULL message appropriately and throttle I/O. The I/O delay and the eventual QFULL response applies to all
members of the VVset, even if only one of the VVs caused the QoS threshold breach.
QoS rules are persistent across the reboot of an HP 3PAR StoreServ Storage system. A VVset cannot be removed
unless all QoS rules defined for it are removed first.
HP 3PAR Priority Optimization calculates the average of the total IOPS and bandwidth of all VVs in a VVset. When the total IOPS or bandwidth rises above the QoS ceiling, it delays the requests; when the maximum number of delayed requests (20% of the rule threshold) is reached, it sends QFULL to the hosts.
To detect rejected I/O requests, you can monitor the Rej column in the output of the srstatqos command or search the debug event logs for QFULL messages using the showeventlog -debug -msg Qfull command. The following is an example of a QFULL event from the debug event log:
Time : 2014-01-03 14:04:23 CEST
Severity : Debug
Type : Host error
Message : Port 0:1:1 -- SCSI status 0x28 (Qfull) Host:machine.bel.hp.com
(WWN 50060B0000C21639) LUN:2 LUN WWN:60002AC000000000000000AC000005D7 VV:0
CDB:28002692AAF000009000 (Read10) Skey:0x00 (No sense) asc/q:0x00/00 (No additional
sense information) VVstat:0x00 (TE_PASS -- Success) after 0.000s (-) toterr:9720,
lunerr:2
With no QoS rule in place, the total IOPS on the system is around 16k; however, the critical volume is not getting the IOPS it needs, because the less critical application is consuming system IOPS and inflating the critical volume's service times.
As soon as the QoS rules are in place, we see an increase in IOPS and a decrease in service times for the critical volume.
To create a new QoS rule for a VVset using the HP 3PAR Management Console, right-click the VVset name in the Virtual Volume Sets tree node of the Provisioning pane. A pop-up menu with configuration options appears. If this is a new rule, only the Configure QoS option is available. The same menu, with only the four QoS options, appears when you click the large QoS button.
QoS rules are subject to five distinct actions:
• Create—A QoS rule is created.
• Enable—A disabled QoS rule is made active.
• Disable—An enabled QoS rule is made inactive.
• Change—The limit values, latency, or priority for a QoS rule are modified.
• Clear—The QoS rule is removed from the system.
Fill out this window with non-negative integer values for the min goal and max limit fields for IOPS or bandwidth within their respective ranges (0 to 2^31-1 for IOPS, 0 to 2^63-1 for bandwidth) and click the OK button to create and activate the QoS rule. A QoS rule can have IOPS limits, bandwidth limits, or both. A min goal cannot be specified without a max limit. When only a max limit is specified, the min goal is made equal to it. The max limit must be equal to or higher than the min goal value. The priority level for the rule can be changed from the default normal to low or high.
The latency goal is expressed in milliseconds. Click the OK button to configure the QoS rule and enable it
immediately.
When values for I/O limits, bandwidth limits, priority, or latency were previously defined, right-clicking the VVset
name provides the menu option to configure, clear, or disable the rule. When disabled, the IOPS and bandwidth
used by the VVs in the VVset will go up if they were being limited by the rule.
In the same way as for VVsets, we can define a QoS rule and its parameters for a “Target Type” of “Domain” and
for “System.”
The Target Type of Domain enforces the QoS rule created onto all current and future members of the domain
specified for “Target Name.” The “System” target type rule has Target Name all_others and governs all VVsets
and VVs that are not subject to an explicit QoS rule.
Be careful when defining the limits for the “System” target type because the number of volumes managed by this rule can be large. To ensure that at least some IOPS and bandwidth are available to all the volumes under this rule, HP 3PAR Priority Optimization enforces a minimum min goal of 1,000 I/O requests per second and 100,000 KB/s of bandwidth. These limits might be far too low for some environments. If lower values are entered, a pop-up displays information about the minimum values and closes the configuration window. If the domain or system QoS rule was already configured, its defined values appear in the configuration window, and changes to the rule can be made.
QoS rules can be set in the Provisioning, QoS, or Virtual Volume Sets areas.
HP 3PAR Priority Optimization features a system-wide, built-in system rule called all_others that is disabled by
default. This rule is used to specify an IOPS or bandwidth limit for all volumes and VVsets that are not subject to a
named rule. Enabling this rule averts the need to define a named rule for all volumes on the storage system. HP
highly recommends that you enable the all_others rule to help ensure that volumes not placed explicitly into
named QoS rules will not disrupt the system by using an inordinate amount of IOPS or bandwidth. Care should be
taken to help ensure that the all_others rule is allocated enough IOPS and bandwidth resources to satisfy the
requirements of the volumes it governs.
Limits for all_others cannot be set lower than 1,000 IOPS or 100,000 KB/s.
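As a sketch of enabling and sizing this rule from the CLI (the limits shown are arbitrary values above the enforced minimums; the -on form is described in the CLI section later in this module):
cli% setqos sys:all_others -io 1000-50000
cli% setqos -on sys:all_others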
Refer to the Help page of each CLI command for details about the command syntax and the columns in its output. The integer value for bandwidth in the setqos command can optionally be followed with k or K to indicate a multiple of 1,000; m or M to indicate a multiple of 1,000,000; or g or G to indicate a multiple of 1,000,000,000. If none is specified as the value, there is no limit on IOPS or bandwidth.
The CLI offers a number of features that are not available in the HP 3PAR Management Console. For example, a QoS rule can be created with setqos and kept inactive with the -off option. All named rules can be switched off simultaneously with the setqos -off vvset:* command. The setqos -on vvset:* command enables all of the rules again.
Rules can be shown in ordered form with key column X using the showqos -sortcol X,[inc|dec] command, where X is the column number starting with the left column numbered 0; inc and dec sort on increasing or decreasing values in column X.
An option of particular interest for setqos is -vv <name>|<pattern>. This option changes the QoS rule for all VVsets that include virtual volumes whose names match any of the names or patterns specified in the setqos command. If the VV is in a VVset that had no QoS rule yet defined, a new rule is created for the VVset with the limits specified. The built-in rule all_others is switched on and off with the setqos [-on|-off] sys:all_others command. The showqos and statqos commands provide a centralized view of the QoS rules in the system and how the workloads conform to them.
Executing the QoS CLI commands requires a logon to an account with Super or Edit rights. Any role that is granted
the qos_set right can set QoS configurations. QoS rules can be scheduled using the createsched command for
automated QoS policy changes based on time of the day, day of the week, and other parameters. In this way, QoS
rules can adapt to day/night or weekday/weekend workload conditions.
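As an illustrative sketch of such a schedule (the VVset name, limits, and times are hypothetical; the schedule string follows the crontab-style convention of createsched, so check the command's Help page for exact quoting):
cli% createsched "setqos vvset:Backup -io 1000-20000" "0 20 * * *" qos_backup_night
cli% createsched "setqos vvset:Backup -io 1000-2000" "0 6 * * *" qos_backup_day
The first schedule raises the IOPS cap for the Backup VVset at 20:00 each evening; the second lowers it again at 06:00 before the business day begins.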
The CLI srstatqos command creates historical performance data reports on the QoS rules. This command is
integrated in HP 3PAR System Reporter on HP 3PAR StoreServ Storage controller nodes. HP 3PAR System
Reporter 3.1 MU1, installed on a separate server, also includes statistics on QoS. For more information on QoS
reports, refer to the HP 3PAR System Reporter 3.1 MU1 User’s Guide.
QoS rules in HP 3PAR Priority Optimization can also be created and managed from the HP 3PAR CLI. The three CLI commands to create and manage QoS rules are:
• setqos
• showqos
• statqos
Refer to the Help page for each of the commands in the HP 3PAR CLI and the HP 3PAR Command Line Interface reference guide for details about the command syntax and an explanation of the different columns in their output.
The integer value for bandwidth in the setqos command can optionally be followed with k or K to indicate a
multiple of 1,000; m or M to indicate a multiple of 1,000,000; or g or G to indicate a multiple of 1,000,000,000.
Following is an example of the creation of a QoS rule for a VVset called ESXi55:
setqos vvset:ESXi55 -io 6000-12000 -bw 140M-320M -lt 12 -pri high
If none is specified as the value, there is no limit on IOPS, bandwidth, or latency. The graphic on the slide displays
the output of a showqos command.
The example shows a QoS rule being created using the setqos command with the following options:
• -io—Setting an upper IO limit of 2000 IOs/Sec
• -bw—Setting an upper bandwidth limit of 5000 KBs/Sec
• vvset:—Targeting the virtual volume set with the name 3p74-QoS-vvset
To disable a QoS rule, use the setqos command with the -off option. The -on option can be used to enable the QoS rule again, and the -clear option can be used to clear (delete) the QoS rule.
NOTE: The wildcard (*) can be used with the vvset: option for pattern matching.
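Putting the example and these options together, a sketch of the full lifecycle of this rule follows (the single-value forms of -io and -bw set only the max limit, as described earlier; the last line uses the wildcard):
cli% setqos vvset:3p74-QoS-vvset -io 2000 -bw 5000
cli% setqos -off vvset:3p74-QoS-vvset
cli% setqos -on vvset:3p74-QoS-vvset
cli% setqos -clear vvset:3p74-QoS-vvset
cli% setqos -off vvset:3p74*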
The HP 3PAR CLI statqos command displays run-time statistics of active QoS rules. The command produces output every 2 seconds (this interval can be changed). The example on the slide shows a sample output for this command.
Per active QoS rule, the following values are shown:
• IOPS
• Bandwidth
• Service time (Svt_ms)
• Wait time (Wtt_ms)
• Size of the I/O requests (IOSz_KB)
• Number of rejected I/O requests (Rej)
• Average QoS queue length (Qlen)
• Average wait queue length (WQlen)
Following is the explanation for the column headers of the statqos command:
• Type: QoS target type (vvset or sys)
• Name: QoS target name, which is also the name of the VVset the rule is defined on
• I/O_per_second:
− Qt: IOPS cap set by user
− Cur: Current IOPS
− Avg: Average IOPS over all iterations of the statqos command so far
− Max: Maximum IOPS over all iterations of the statqos command so far
• Kbytes_per_sec:
− Qt: Bandwidth cap set by user
− Cur: Current bandwidth
− Avg: Average bandwidth over all iterations of the statqos command so far
− Max: Maximum bandwidth over all iterations of the statqos command so far
• Svt_ms: Total service time of I/O requests processed by QoS (including wait time and the real service time)
• Wtt_ms: Wait time of I/O requests delayed by QoS
• IOSz_KB: I/O block size in KB (1 KB = 1,000 bytes)
• Rej: Number of I/O requests rejected by QoS
• WQlen: Number of I/O requests delayed by QoS
• Qlen: Number of I/O requests processed by QoS (including the number of I/O requests delayed by QoS and the number processed without delay)
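As a usage sketch (assuming statqos takes the same interval option as the other 3PAR stat commands; verify against the command's Help page):
cli% statqos -d 5
This refreshes the statistics every 5 seconds instead of the default 2 seconds.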
The On-Node HP 3PAR System Reporter tool samples QoS statistics periodically using the statqos interface and
stores this information for all active QoS rules.
srstatqos
DESCRIPTION
The srstatqos command displays historical performance data reports for QoS rules.
SYNTAX
srstatqos [options]
AUTHORITY
Any role in the system.
OPTIONS
-attime
Performance is shown at a particular time interval, specified by the -etsecs option, with one row per object group
described by the -groupby option. Without this option, performance is shown versus time, with a row per time
interval.
-btsecs <secs>
Select the begin time in seconds for the report. The value can be specified as either:
• Absolute epoch time (for example 1351263600)
• Absolute time as a text string in one of the following formats:
− Full time string including time zone: “2012-10-26 11:00:00 PDT”
− Full time string excluding time zone: “2012-10-26 11:00:00”
− Date string: “2012-10-26” or 2012-10-26
− Time string: “11:00:00” or 11:00:00
• A negative number indicating the number of seconds before the current time. Instead of a number representing seconds, <secs> can be specified with a suffix of m, h, or d to represent time in minutes (e.g. -30m), hours (e.g. -1.5h), or days (e.g. -7d). If it is not specified, the report begins at the earliest sample.
-etsecs <secs>
Select the end time in seconds for the report. If -attime is specified, select the time for the report. The value can be specified as either:
• Absolute epoch time (for example 1351263600)
• Absolute time as a text string in one of the following formats:
− Full time string including time zone: “2012-10-26 11:00:00 PDT”
− Full time string excluding time zone: “2012-10-26 11:00:00”
− Date string: “2012-10-26” or 2012-10-26
− Time string: “11:00:00” or 11:00:00
• A negative number indicating the number of seconds before the current time. Instead of a number representing seconds, <secs> can be specified with a suffix of m, h, or d to represent time in minutes (e.g. -30m), hours (e.g. -1.5h), or days (e.g. -7d). If it is not specified, the report ends with the most recent sample.
-hires
Select high resolution samples (5-minute intervals) for the report. This is the default.
-hourly
Select hourly samples for the report.
-daily
Select daily samples for the report.
-vvset <VVSet_name|pattern>[,<VVSet_name|pattern>...]
Limit the data to VVSets with names that match one or more of the specified names or glob-style patterns.
-all_others
Display statistics for all other I/O not regulated by a QoS rule.
-groupby <groupby>[,<groupby>...]
For -attime reports, generate a separate row for each combination of <groupby> items. Each <groupby> must be different and one of the following:
• DOM_NAME (domain name)
• TARGET_TYPE (type of QoS rule target, i.e. vvset)
• TARGET_NAME (name of QoS rule target)
• IOPS_LIMIT (I/O per second limit)
• BW_LIMIT_KBPS (KB per second bandwidth limit)
EXAMPLE
The following example displays aggregate hourly performance statistics for QoS rules beginning 24 hours ago:
cli% srstatqos -hourly -btsecs -24h
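A second, hypothetical invocation (built only from the options documented above; actual output depends on the rules configured) shows one row per rule at the most recent sample:
cli% srstatqos -attime -groupby TARGET_NAME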
Create a custom chart to monitor QoS. Select Performance and Reports and click New Chart. On the Chart
Selection screen, select Custom and type in a name. On the Object Selection screen, select QoS in the Category
field.
The QoS chart in the example displays read (in red), write (in blue), and total read/write counts (in green) in
various time buckets. Charts can be generated for values over a time interval or at a specified time. In this
example, the chart was run for a 24-hour period (14:00 to 14:00).
The X-axis represents time, and the Y-axis (value) indicates an access count or access time, depending on the
type of chart.
In this example, a QoS rule/limit was set to 10000 IOs/Sec, shown in the top chart (IOPS) as a straight dashed line. At approximately 16:00 hours, the total IOs/Sec approached the 10000 IOs/Sec limit and an internal queue was initiated. Looking at the Wait Time chart, the wait times increased between 16:00 and 10:00 as the IOs/Sec stayed at the 10000 level. At 10:00, when the IOs/Sec dropped (top IOPS chart), the wait time (third chart: Wait Time) dropped as well.
Because of the internal queuing for QoS, no rejections (I/O requests that were not serviced because of a full queue) occurred during the period, as shown in the final Reject chart.
When using External System Reporter to generate QoS reports and configure alerts, the following applies:
• HP 3PAR OS 3.1.2 MU2 or later must be installed on the array in order to use PO features in external SR.
• External System Reporter 3.1 MU1 or later must be installed.
• External SR checks whether the PO license is installed. The PO license must be installed.
• A QoS rule is an SLA on IOPS, bandwidth, or both, per VVSet.
• External SR uses the getqos command and stores QoS rule-related data in the database.
• A QoS Perf metrics chart can be generated using Custom Reports/Excel Client/Schedule Reports.
− This report can be either for all QoS rules or for an individual QoS rule.
− Only one system can be selected for QoS Perf metrics.
• Alerts can be created on the following metrics: read_waittms, write_waittms, total_waittms, io_rej, bw_rej, and d_qlen.
• The example shows the selection screen in the Custom Reports area of external System Reporter. This report is only available in System Reporter 3.1 MU1 or later.
The example shows a Daily QoS Performance report using external System Reporter 3.1 MU1. An explanation of the report metrics is outlined below.
Total IOPS: Total (read + write) operations per sec
IO Delay: Delay in IO per sec
IO Rejection: IO rejection per sec
Total Bandwidth KBytes/sec: Total (read + write) bandwidth in KBytes/s
Bandwidth Delay: Bandwidth delay per sec
Bandwidth Rejection: Bandwidth rejection per sec
Read Svct ms: Average read service time in millisec
Write Svct ms: Average write service time in millisec
Total Svct ms: Average total (read + write) service time in millisec
Read Wait Time ms: Read wait time at sample time
Write Wait Time ms: Write wait time at sample time
Total Wait Time ms: Average total (read + write) wait time at sample time
Read IOS KBytes: Average size of read operations in KBytes
Write IOS KBytes: Average size of write operations in KBytes
Total IOS KBytes: Average size of read and write operations in KBytes
Queue Length: Queue length at the sample time. NOTE: Unlike the other metrics above, the queue length is an instantaneous measure at the sample time, not an average over the sample interval. Because of the way that RCFC ports process data, the Queue Length might not be a valid measure.
Wait Queue Length: Wait queue length at sample time
Latency: Latency at sample time
Latency Delay: Delay in latency per sec
The following options apply when adding a QoS alert rule:
• Data Table: statqos
• Resolution:
− Hires (data table that contains the high resolution samples)
− Hourly (data table that contains the hourly samples)
− Daily (data table that contains the daily samples)
• System: If System is specified, the alert rule is calculated only for objects in the specified system. If System is left blank, the alert rule is calculated for all systems that are set up to be sampled.
• Metric:
− read_waittms (read wait time in millisec averaged over the sample time)
− write_waittms (write wait time in millisec averaged over the sample time)
− total_waittms (total wait time in millisec averaged over the sample time)
− io_rej (IO rejection)
− bw_rej (bandwidth rejection in KiB)
− d_qlen (number of IOs delayed by the QoS rule)
• Direction: Specifies whether the Metric should be less than (<) or greater than (>) the Limit Value for the alert to be generated.
• Limit Value: Specifies the value that the Metric is compared to. This should be a number.
• Limit Count: The alert is only generated if the number of objects for which the Metric exceeds the Limit Value, in the direction specified by Direction, is greater than Limit Count in any given sample. This should be a number.
• Condition: Specifies the condition that should be monitored.
• Condition Value: Specifies the condition value.
• Recipient: Email address to which the alert email should be sent.
It is important that the workload characteristics of the various applications on a storage system be fully
understood before setting up and applying QoS rules for them. HP 3PAR System Reporter can be used to help
make these determinations.
Best practices on the collection and interpretation of HP 3PAR System Reporter data can be found in the HP 3PAR
System Reporter Software User’s Guide.
Data Migration: Peer Motion and EVA to HP 3PAR Online Import
HP 3PAR Peer Motion Software is the first nondisruptive, do-it-yourself data migration tool for mid-range and enterprise block storage. Peer Motion delivers simple, fool-proof, nondisruptive data mobility between HP 3PAR Storage Systems. With Peer Motion, HP 3PAR Storage System customers can load balance I/O workloads across systems at will, perform technology refreshes seamlessly, cost-optimize asset life cycle management, and lower technology refresh capital expenditure.
Unlike traditional block migration approaches, Peer Motion enables you to migrate storage volumes between any HP 3PAR Storage Systems online, nondisruptively, and without complex planning or dependency on extra tools. The Peer Motion architecture is designed with heterogeneous migration in mind, but it currently supports only HP 3PAR to HP 3PAR data migration.
Peer Motion leverages HP 3PAR Thin Built In technology and HP 3PAR Thin Conversion Software to power the simple and rapid inline conversion of inefficient, fat volumes on source arrays to more efficient, higher-utilization thin volumes on the destination HP 3PAR Storage System.
Peer Motion Manager orchestrates all stages of the data migration lifecycle to ensure data migration is simple and fool-proof. Peer Motion is separately orderable software.
Beginning with HP 3PAR OS 3.1.2 and Management Console Version 4.3, Peer Motion functionality is integrated
into Management Console. The CLI can also be used.
There are three types of data migration:
• With online migration, the volumes copied from the source system are exported to the hosts, and host I/O can continue during the migration process.
• With offline migration, the volumes are not exported to the host. After migration, the volumes from the destination array can be exported to the hosts.
• A minimally disruptive migration involves exported virtual volumes (VLUNs) and a reboot of the host.
For the latest Peer Motion Support Matrix, consult SPOCK: go to http://h20272.www2.hp.com > Array SW: 3PAR > HP 3PAR Peer Motion Online Migration Host Support.
On the source array, two FC ports from two different controller nodes must be configured as host ports. These
will be used for the migration link from the source to the destination array.
Ports can be configured using the Management Console or, on the CLI, with the controlport command. The ports must be taken offline before changing the connection mode.
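A CLI sketch of this step (the port position 0:1:2 is hypothetical; verify the exact controlport options against your HP 3PAR OS version):
cli% controlport offline 0:1:2
cli% controlport config host -ct point 0:1:2
cli% controlport rst 0:1:2
The port is taken offline, its connection mode is set to host with a point-to-point topology, and the port is then reset to bring the change into effect.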
If using the Management Console, you must be connected to both arrays involved in the migration process.
To build a Peer Motion relationship between the two arrays, select Peer Motion in the Management panel. In the Common Actions panel, select Create PM Configuration.
In the Select Systems window, select which array will be the source and which array will be the destination.
In the Setup Connections window, on the graphic for the destination array, select a port that will act as a peer port.
You will be prompted to select a second port to act as a peer port for redundancy. The second peer port must be configured on a second controller node so that a single controller node is not a single point of failure. During the migration, both migration links are used to migrate the data, offering an active/active solution. Two migration links must be configured.
The physical connection between the arrays (host port to peer port) can be direct-connected between the two arrays or SAN attached.
Management Console performs a test between the links, verifying physical connectivity and correct zoning.
If all tests pass, select OK.
To migrate settings and configuration from the source storage array to the destination array (such as VVSets,
Host Sets, LDAP configuration), use the Copy Storage Settings and Configuration option.
To begin data migration, select Migrate Data from the Common Actions area.
1. Select Online Migration, Minimally Disruptive Migration, or Offline Migration.
2. Select the volume(s) to be migrated.
3. Select Finish to start data migration, taking all defaults, or Next to go to the Import Volumes window.
For each source VV, PM allows you to select User CPG, Copy CPG, and Provisioning type for destination VVs.
Select Finish to begin the migration process.
After migration is complete, the configuration can be deleted using the Remove PM Configuration option in the Common Actions area.
On the Remove PM Configuration confirmation window, select the Unconfigure the Peer Ports option to switch the peer ports back to their original connection mode.
• Use the addsource command to add a source array to the HP 3PAR Peer Motion Utility Database, then use the showsource command to verify that it was added correctly.
• Use the adddestination command to add a destination array to the HP 3PAR Peer Motion Utility Database, then use the showdestination command to verify that it was added correctly.
• Use the createmigration command to create a data migration job, then use the showmigration command to list all migrations.
• Use startmigration to start a data migration job, and use showmigration to view the migration status.
• When a migration job is complete, use the removemigration command to remove the migration entry from the HP 3PAR Peer Motion Utility Database.
HP EVA to HP 3PAR StoreServ Online Import manages the migration of data from a source EVA storage system to
a destination HP 3PAR storage system. Using HP EVA to HP 3PAR StoreServ Online Import, you can migrate EVA
virtual disks and host configuration information to an HP 3PAR destination storage system without changing
host configurations or interrupting data access.
HP EVA to HP 3PAR StoreServ Online Import coordinates the movement of data from the source while servicing
I/O requests from the hosts. During the data migration, host I/O is serviced from the destination HP 3PAR storage
system. The host/virtual disk presentation implemented on the EVA is maintained on the HP 3PAR destination.
HP 3PAR Online Import migrates data from HP EVA P6000 to HP 3PAR StoreServ systems without the use of host-based mirroring or an external appliance. Online Import is orchestrated from within P6000 Command View 10.2 or higher. The granularity of a migrated object is an entire vdisk. The import of data into the destination system happens either online, with a short disruption to the host I/O, or in an offline fashion. The EVA P6000 and the HP 3PAR StoreServ systems are interconnected via dual FC links and SAN switches serving as the dedicated path for the data transfer between the source and the destination system.
The coordination between the host and the storage systems for a migration is managed by a software wizard
running inside P6000 Command View. The wizard walks the administrator through the steps to define the source
and destination systems, select the volumes to migrate and start the migration. The wizard informs the
administrator when to make SAN zone changes if any are needed. Zone/unzone operations are executed
manually and outside of P6000 Command View using vendor-specific SAN switch management tools.
At the start of the actual data migration, the selected vdisks on the source EVA P6000 receive a Management Lock to prevent them from being migrated twice or mounted to another host. Online Import does not support the reverse migration of volumes from HP 3PAR StoreServ to EVA P6000.
The destination HP 3PAR array must be running HP 3PAR OS 3.1.2 or higher to use Online Import.
Data migration can be done by selecting a host or a virtual disk on the source EVA. In addition to the host or
virtual disk explicitly selected for migration, other objects may be included in the migration using implicit
selection. A pre-migration process identifies the relationship between hosts and presented virtual disks and
selects all necessary objects to completely migrate the hosts. Consequently, the explicit selection can lead to
many implicit selections and a large amount of data being migrated. For example, selecting a single host results
in migration of the host, any virtual disks presented to it, any hosts to which those virtual disks are presented,
and any virtual disks presented to those hosts.
The migration process selects objects to migrate using the following rules:
• Host—When selecting a single host or group of hosts with virtual disk presentations, all the virtual disks presented to the hosts are migrated. In addition, any presentations the source virtual disks have to other hosts will include those hosts and all of their presented virtual disks in the migration.
• Presented virtual disks—When selecting a virtual disk or group of virtual disks with host presentations, the selected virtual disks and the hosts to which they are presented are migrated. In addition, any presentations the source hosts have with other virtual disks will include those virtual disks in the migration.
• Unpresented virtual disks—Only the selected virtual disks are migrated offline.
The limit for active offline migrations is 25. The limit for online migrations is 255.
For further details on migrating from an EVA to an HP 3PAR array, consult the HP EVA to 3PAR StoreServ Online
Import Migration Guide.
The vdisks to migrate from the source EVA P6000 to the destination HP 3PAR StoreServ system are selected. A
software wizard helps the administrator pick the vdisks for the migration using explicit and implicit selection
rules. At first, the administrator selects a single vdisk or host.
Click the Next button to open the wizard, which runs the rules engine for adding any extra vdisks or hosts implicitly. The example on the slide shows that vdisk Vdisk001 was added implicitly to the explicit selection because it is presented to host Blade-2, as is the explicitly selected Vdisk002. By hovering the mouse cursor over the name of the virtual disk, the full name of the vdisk is shown as a pop-up.
HP EVA to 3PAR StoreServ Online Import supports three types of data migration. The appropriate type of data migration is selected automatically based on the objects being migrated:
• Online: Selected when migrating a non-Windows host or a virtual disk presented to a non-Windows host. During online migration, all presentation relationships between hosts and virtual disks being migrated are maintained. Host I/O to the data is not disrupted during an online migration.
• Offline: Selected when migrating one or more unpresented virtual disks. During offline migration, only the selected virtual disks are migrated. No hosts are migrated in this situation.
• Minimally disruptive: Selected when migrating a Windows host or a virtual disk presented to a Windows host. The host DSM used to access the storage system must be reconfigured from the EVA DSM to a DSM that will communicate with the destination 3PAR storage system. Host I/O is interrupted only during the time it takes to reconfigure the host.
In the example on the slide, a minimally disruptive migration is done. Two vdisks called Vdisk001 and Vdisk002,
located on an EVA P6000 and presented to a Windows host named Blade-2, will be migrated.
NOTE: The wizard can be stopped at any time by clicking the Cancel button on the screen.
By clicking the Next button, the wizard enters a screen where it does some more checking. It verifies the
presence of at least two peer ports on the destination HP 3PAR StoreServ, and of two distinct paths from the
source EVA P6000 to the destination HP 3PAR StoreServ.
The wizard also determines whether the source and the destination system are engaged in other peer
relationships over the same peer ports. The example on the slide shows the outcome of a successful verification
of the destination HP 3PAR StoreServ system.
After selecting the vdisks, you must determine the characteristics of the VVs on the HP 3PAR StoreServ system to
which the vdisks are migrated. You do not need to specify the size of a destination VV on HP 3PAR StoreServ
system because this value will be equal to that of its corresponding vdisk on the EVA P6000 source system. You
do need to specify its provisioning type (full or thin) and the CPG for the VV on the destination system.
The example on the slide shows the wizard screen for selecting the provisioning type and the CPG for every
destination VV. The wizard proposes a CPG for every VV that is migrated, with a RAID level and physical disk type
that are identical or as close as possible to the characteristics of the vdisk on the source system. The choice for
the CPG by the wizard can be overruled per destination volume individually or for all of them at once. Changing
the RAID protection level and the physical disk type (SSD, FC, NL) for the VV compared to the vdisk on the EVA is
possible by selecting the appropriate CPG for it.
NOTE: The HP 3PAR StoreServ system does not actively create CPGs that imitate the characteristics of the vdisks selected for migration; the administrator must create appropriate CPGs before starting the Online Import wizard.
The selected CPG also determines the availability level (Port, Cage, or Magazine) of the destination VV.
When done, the Next button moves the administrator to a summary screen, which ends the section on selecting
the vdisks for migration on the EVA P6000 source system and their corresponding VVs on the destination HP
3PAR StoreServ system.
The example on the slide shows a screenshot of the Summary screen. You can see that the vdisk name is used by
default, and it is the same name used to create the corresponding HP 3PAR virtual volume on the destination
array.
Also listed is the provisioning type and some other characteristics per source vdisk and destination VV.
Click the Add Migration button to submit the migration for data transfer to Online Import.
When the migration is submitted, the wizard enters a short preparation phase in which it creates on the source
EVA a new host that is in reality the destination HP 3PAR StoreServ system made visible to it over the
migration/peer links. Also, the peer volumes are created on the destination HP 3PAR StoreServ and exported to
the source EVA P6000 in this phase. After this phase ends, the wizard enters a pending state indicating that SAN
zoning changes and a potential reboot must be executed. The example on the slide shows the screen for this
pending state.
The next step in the migration sequence differs for Windows and non-Windows systems. For a minimally
disruptive migration, Windows hosts must be shut down for adapting the SAN zoning: the SAN administrator adds
the zone for [host ↔ HP 3PAR StoreServ], removes the zone for [host ↔ EVA P6000], and activates the new SAN
configuration.
For online migrations on non-Windows systems, the same zone changes must be made, but you do not need to shut down the host. Ensure the zone changes are made in the order specified above for an online migration or
the host will lose contact with the application’s disks. For offline migrations, no zone changes are required for
Windows and non-Windows hosts.
The host is now zoned properly to start the actual data transfer.
When you click the Start button, the data transfer initiates between the source and the destination system. This
is the last occasion for the administrator to cancel the migration by clicking the Abort button. Note that the
migration must be started before the host is booted when executing a minimally disruptive Windows migration.
This is particularly critical for hosts in a Windows cluster.
Whether the destination VVs come online after the boot of a Windows host depends on the SAN Policy defined on
the host. If the SAN Policy is set to offline, the destination VVs must be brought online manually using Disk
Manager or diskpart. For non-Windows systems, a SCSI bus rescan on the host is required after the zone changes
to discover the new peer volumes.
On Red Hat Enterprise Linux, use this command to rescan the SCSI bus:
echo "- - -" > /sys/class/scsi_host/hostX/scan
Where X is an integer number referring to the HBA FC controller connecting to the destination HP 3PAR StoreServ
system.
For SUSE SLES, use the rescan-scsi-bus.sh script that is included with the Linux distribution. Alternatively, the
hp_rescan Perl script included in the ProLiant Support Pack (PSP) can be used on Linux and also on other
operating systems.
After the data migration has started, the HP 3PAR Online Import wizard shows the progress of the migration in
bar graphs, one per vdisk and one for the entire migration. The bar graphs are updated every 60 seconds.
The progress of the data transfer can be monitored from within the HP 3PAR Management Console. Graphical
historical information about the throughput speed over the Peer links is available in the Peer Ports performance
chart on the destination HP 3PAR StoreServ system.
The Recent Tasks section at the bottom of the Management Console lists all migration tasks scheduled with a
type of Import Virtual Volume. Each task manages the migration of one vdisk. A maximum of nine migration
tasks can be active simultaneously; additional tasks remain pending and start when an active migration task
ends. The percentage of data transferred per active task is shown in a bar graph. Detailed information about the
progress of an active migration task, down to the level of the selected 256 MB block, can be obtained by
right-clicking the task name in the Recent Tasks section and selecting the Show Details option. This information
remains available after the migration task finishes.
The HP 3PAR CLI statport -peer command delivers the same information in numerical format and includes
information about the queue length on the peer ports. This information updates every 2 seconds.
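For example, assuming the standard sampling options of the stat commands, the peer-port counters can be captured for a fixed number of 2-second samples:

statport -peer -iter 10    (display ten samples of peer-port statistics, then stop)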
Tools for Performance Troubleshooting and Balancing a 3PAR Array
The goal of this module is to explain the tools that can be used to troubleshoot and respond to 3PAR
performance issues. To understand the tools, it helps to have background knowledge on how the 3PAR performs
I/O.
Performance problems can arise when a 3PAR becomes out of balance, which can happen when additional
hardware is added. System Tuner software can be used to rebalance volumes, and this module also covers how
to use the System Tuner commands.
This section of the module is intended as background information to help you better understand how the 3PAR
array services I/O requests.
In order to understand how the 3PAR handles read and write requests, it is helpful to review how space for a
virtual volume is allocated on a 3PAR installation.
A virtual volume is composed of data and pointers to that data. When creating a virtual volume you specify a set
of policies that are used to determine where space for the volume can be drawn from, by associating the virtual
volume with a CPG.
When virtual volumes are created in a CPG, logical disk space is assigned to the virtual volume. For fully
provisioned virtual volumes, dedicated logical disks are created when the volume is created and are the size of
the fully provisioned virtual volume. The 3PAR creates logical disks that sit behind each node in order to actively
use all the nodes to service the virtual volume.
Each CPG keeps a pool of logical disk space that is used for thinly provisioned volumes and snapshot data. The
pool is made up of logical disks that sit behind each of the nodes in the 3PAR. The logical disks that form this
pool are created when the first TPVV or snapshot is created for the CPG. As more space is required, additional
logical disk space is acquired for the CPG. The initial size of the pool and the growth size are determined by the
CPG Growth Increment.
Thinly provisioned volumes are assigned regions within the logical disks allocated for the CPG. Pages within the
reserved regions are assigned as writes to new areas of the volume are received. Additional regions from the
CPG logical disk space are reserved when previously allocated regions begin to fill.
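As a hedged illustration of where this growth increment is set (the CPG name and size values here are hypothetical), the value can be specified when a CPG is created and adjusted afterward:

createcpg -t r5 -sdgs 32g FC_r5_cpg    (create a RAID 5 CPG with a 32 GiB LD growth increment)
setcpg -sdgs 48g FC_r5_cpg    (raise the growth increment later)
showcpg FC_r5_cpg    (verify the CPG settings)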
A simple configuration of LDs balanced across the four nodes will be used to explain the concept of cache
writes and reads in the next few slides.
For data protection, cache is mirrored across the nodes to ensure data will not be lost in the event of a node
failure.
The algorithm is self-adapting, determining in real time the allocation for writes versus reads:
• Under intensive reads, up to 100% of the cache per node can be used.
• Under intensive writes, up to 50% of the cache per node can be used.
The read cache is tracked, and frequently accessed data is retained in the cache of the node. This results in
lower latency for frequently accessed data, because it does not have to be fetched from disk every time.
The HP 3PAR architecture uses an intelligent flusher algorithm that flushes dirty cache data out to disk at a rate
that maximizes cache hits yet ensures there are adequate free pages for new host writes.
This provides the following advantages:
• Merging multiple writes to the same page
• Coalescing small block writes into large block writes
• Combining multiple RAID 5 and RAID 6 writes into full-stripe writes
For spinning disks, the HP 3PAR array reads a full 16K cache page from disk. For SSDs, only the required amount
of data is read into cache (variable caching).
If a host read is for less than 16K, the entire page is read in. If a write is for less than a full page, the partial page
is written with valid data. The algorithm combines multiple subpage-size host writes into a single dirty page.
Initiators per FC host port:
• Per port: 128
• Per array: 8192
Queue depth per FC host HBA:
• 4 Gb: 1638
• 8 Gb: 3276
NOTE: Several variables can affect these numbers.
• Limits are covered separately.
• The IOPS rating is related to the I/O size, such as a 16K I/O size.
• Rule of thumb: for the IOPS of physical drives, look at the block size.
Each 3PAR controller node allows a maximum number of cache pages for each type of disk, based on the number
of disks of each type.
When 85% of this maximum number of allowed cache pages is reached, the system starts delaying the
acknowledgement of I/Os in order to throttle down the hosts until some cache pages have been freed by having
their data destaged to disk (a condition known as "delayed ack").
This destaging happens at a fixed speed that also depends on the number of disks of each type.
Because the destaging happens at a fixed speed, the maximum write bandwidth of the hosts is limited to the
speed of the destaging.
Several tools can help with performance monitoring.
System Reporter (SR) enables you to view historical 3PAR performance information. SR is a great tool for
viewing how the 3PAR is delivering service over time. SR can be helpful for spotting bottlenecks; however, in
some situations the SR data might not be granular enough to catch a bottleneck. The minimum resolution for SR
is 1 minute and the default is 5 minutes. System Reporter is covered in a separate module.
In a performance troubleshooting situation where it is necessary to view performance information with a smaller
granularity, use either the real-time performance graphs built into Management Console or the 3PAR CLI stat*
and hist* commands.
The stat commands allow you to view performance counters at each layer involved in servicing I/O; a short
usage sketch follows this list.
• statvlun measures the round-trip time of an I/O as seen by the system. You can view IOPS, bandwidth, and
service times.
• statport -host enables you to view the port statistics for host-facing ports.
• statcmp displays cache statistics. This command can help you see whether I/O flow is even across the
nodes. You can also use this command to see the number of delayed acknowledgements.
• statcpu shows the node CPU usage.
• statvv displays the internal system response to the I/O. By comparing statvlun to statvv, you can get a
good idea of whether the array is having a performance issue.
• statport -disk enables you to view the port statistics for disk-facing ports.
• statpd displays the performance of the physical spindles.
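As a hedged example of walking down these layers (the interval and iteration values are arbitrary; -d and -iter are the standard sampling options of the stat commands):

statvlun -ni -rw -d 5 -iter 6    (host-visible service times, reads and writes reported separately)
statvv -rw -d 5 -iter 6          (internal VV response times for comparison)

If statvlun shows high service times while statvv looks healthy, the delay likely lies outside the array (fabric or host side); if both are high, continue down the stack with statport -disk and statpd.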
statvlun -ni shows the round trip of I/O on each path for exported volumes. This is helpful for determining
whether there might be a multipathing issue. The -ni option eliminates any inactive volumes.
Example: statvlun -vvsum -ni -rw shows the round trip of I/O to each volume with the paths consolidated.
Example: statvlun -hostsum -ni -rw organizes the output at the host level.
statcmp displays the cache statistics. Look for an even I/O flow through the nodes, and check the number of
delayed acknowledgements.
The following example shows a fairly even distribution of I/O across the nodes. Notice the high numbers in the
DelAck column. This indicates that the array is not able to destage cache quickly enough to keep up with
incoming I/O; therefore, new writes are being throttled.
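To collect a comparable sample at the CLI (the interval and iteration values are illustrative), the standard sampling options can be used:

statcmp -d 5 -iter 12    (report cache statistics every 5 seconds, 12 samples)

Watch the DelAck column across samples; steadily climbing values indicate the throttling described above.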
The maximum cache pages allowed per node for each type of disk is a function of the type of disk and the
number of physical disks of that type.
The Performance Manager provides predefined performance charts for physical disks, host ports, and disk ports.
In addition, the Performance Manager allows you to create, edit, and save your own customized performance
charts.
To view a predefined performance chart:
1. In the Manager pane, click Performance & Reports.
2. In the Common Actions area, click New Chart.
3. In the Management tree, click either the Physical Disks or Ports (Data) node under the system for which you
want to view performance.
4. Click PD Usage - Total IOPS, Disk Ports - Total Throughput, or Hosts Ports - Total Throughput.
5. Repeat steps 2 and 3 for any additional performance charts you want to view.
A performance chart for each selected chart type appears in the Management window, and data collection and
chart generation begin.
Viewing performance over time
The Disk Ports and Host Ports charts display line graphs, which show performance over time.
Each started chart is tabbed at the top of the Management window. Simply click the tab for the chart you want to
view.
At any time, you can use the controls at the upper right corner of each chart to pause or stop the generation of
the performance chart.
• Pausing the chart stops the plotting of data, but data collection continues in the background.
• Stopping the chart stops both data collection and plotting.
The lower pane of the chart provides a legend indicating color/plot association. Click a row in the legend to
highlight the corresponding plot. If you want to change the color of a plot, click the color under the Color column
and then choose a new color in the menu that appears.
Viewing instantaneous statistics
The PD Usage performance chart displays the latest physical disk IOPS statistics. Instead of a line graph, a bar
graph is shown, displaying the physical disk performance at that moment.
This slide and the next are an example of identifying the culprit of a latency issue by looking at the metrics we
have discussed in this lesson.
The storage consumer is complaining about high latency at approximately 12:00. The data in question is stored
on NL drives. This 3PAR installation has 24 NL drives.
Looking at the delayed acknowledgement data, we see that the delayed ack count is high at 12:00.
The physical disk report shows us that at the time of the complaint, the total IOPS for the NL tier is around 6000.
The configuration is not sized properly to support this level of I/O, resulting in high latencies.
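A rough sizing check makes the mismatch clear: assuming a common rule-of-thumb figure of roughly 75 random IOPS per NL spindle, 24 NL drives sustain only about 24 × 75 ≈ 1,800 back-end IOPS, far below the roughly 6000 IOPS being demanded.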
The 3PAR array does an excellent job of distributing data over as many spindles and nodes as possible. However,
over time, the distribution can become imbalanced. This portion of the module discusses some tools that can
help.
HP 3PAR Operating System Software spreads volumes evenly and widely across all available resources so that
HP 3PAR Storage systems can deliver balanced performance. However, as the storage system scales and new
applications come online, new access patterns might emerge that result in suboptimal performance levels.
HP 3PAR System Tuner Software autonomically and nondisruptively detects potential bottlenecks and hotspots,
and then rebalances volumes to maintain peak performance without affecting service levels or changing preexisting service level characteristics such as RAID level.
tunesys is a system tuning utility designed to be used after node, cage, and disk upgrades.
tunesys can be scheduled. To run tunesys once a week on Saturdays at 10 pm:
createsched "tunesys -f" "0 22 * * 6" tunesys_sched
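To verify the schedule, or to remove it later, the standard schedule commands can be used (the schedule name matches the one created above):

showsched    (list the defined schedules)
removesched tunesys_sched    (delete the schedule when it is no longer needed)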
If new disks are added to a two-node system with two partially populated cages, intranode tuning moves used
space from over-allocated disks to the new disks, spreading the data across the larger set of spindles. The more
disks of the same device type that are added, the more efficient the new layout.
Intranode tuning is performed at the chunklet layer, moving blocks of data at a granularity of 256 MB to 1 GB.
NOTE: No I/O pauses occur during chunklet-level tuning.
With HP 3PAR StoreServ 7000 Storage, the rebalancing of data after hardware upgrades is now an integrated
feature, not requiring a Dynamic Optimization license. On the StoreServ 10000 model, this is a separately
licensed feature. After purchasing and installing new hardware, the rebalancing can be started by selecting the
3PAR StoreServ in the GUI and selecting Tune System.
Tune System detects physical disk space usage imbalance between nodes and rebalances virtual volumes
between nodes; detects newly added disks and rebalances chunklets from a node's existing physical disks to its
under-allocated ones; and performs a re-layout of LDs whose characteristics do not match their parent CPG.
• Volume Allocation Imbalance Threshold: the percentage above the average space allocated across nodes at
which a volume is considered out of balance.
• Chunklet Allocation Imbalance Threshold: the percentage above the average (for physical disks of one device
type on a single node) at which the number of chunklets allocated to a physical disk is considered out of
balance.
• Max Chunklets Moved Simultaneously: the maximum number of chunklets to be moved from one physical disk
to another during the reallocation operation. The value must be in the range of 1 to 8.
  − As new nodes, cages, or disks are added to a system, the new component is under-allocated. Therefore, the
system looks for under-allocated space or chunklets to trigger the tune.
• Maximum Simultaneous Tasks: the maximum number of individual tuning tasks allowed to run at the same
time. The value must be in the range of 1 to 8.
• Analyze only: by selecting this checkbox, you can view an analysis of the tuning tasks that would be performed
according to your specified values, without running the tuning task. This analysis is displayed in the Tasks
management window.
Tune CPGs performs a similar function to Tune System, but it limits tuning to the selected CPGs and does not
rebalance chunklets if new HDDs were added.
When you select Tune CPGs, you can select specific CPGs to tune, and specify the Volume Allocation Imbalance
Threshold and Maximum Simultaneous Tasks that can be performed.
As with the Tune System task, you can also choose an analysis only.
This task is available only on systems running HP 3PAR OS 3.1.2 or newer. Its sole purpose is to rebalance the
physical disks at the chunklet level in an attempt to balance usage between existing and newly added physical
disks.
Go to the Task Manager window to see the status and results of the tune operations.
The tunesys command detects poor layout and uneven disk utilization and performs a low-level rebalance
across the system.
For specifics on options to the command and output explanation, consult the HP 3PAR OS Command Line
Interface Reference Guide.
The tunepd command identifies physical disks with high service times and optionally executes load balancing.
For specifics on options to the command and output explanation, consult the HP 3PAR OS Command Line
Interface Reference Guide.
When using Thin Provisioning, schedule regular CPG compactions during low activity periods.
CPG compaction allows capacity that is allocated to a CPG but no longer being used to hold data to be returned to
the pool of free chunklets. This is not necessary if only one CPG per tier is used because the system automatically
reuses the reserved space for new volume allocations.
This should be scheduled during periods of low activity to reduce the potential performance impact of chunklet
initialization (zeroing), which happens automatically when chunklets are freed.
To schedule a CPG compaction at midnight every Saturday, execute the following CLI command:
cli% createsched "compactcpg -f -pat *" "0 0 * * 6" compactcpg
NOTE: If using Adaptive Optimization on all the CPGs, scheduling a CPG compaction is not required. CPG
compaction is part of the Adaptive Optimization process.