HP 3PAR StoreServ 7000 Storage Service
Guide
Service Edition
Abstract
This guide provides information about maintaining and upgrading HP 3PAR StoreServ 7000 Storage system hardware
components for authorized technicians.
HP Part Number: QR482-96907
Published: September 2014
© Copyright 2014 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall
not be liable for technical or editorial errors or omissions contained herein.
Acknowledgments
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Warranty
WARRANTY STATEMENT: To obtain a copy of the warranty for this product, see the warranty information website:
http://www.hp.com/go/storagewarranty
Printed in the US
Contents
1 Understanding LED Indicator Status...............................................................7
Enclosure LEDs.........................................................................................................................7
Bezels LEDs.........................................................................................................................7
Disk Drive LEDs....................................................................................................................7
Storage System Component LEDs................................................................................................8
Power Cooling Module LEDs..................................................................................................8
Drive PCM LEDs............................................................................................................10
I/O Modules LEDs.............................................................................................................10
Controller Node and Internal Component LEDs...........................................................................11
Ethernet LEDs....................................................................................................................13
Node FC and CNA Port LEDs..............................................................................................13
Fibre Channel (FC) Adapter LEDs.........................................................................................14
Converged Network Adapter (CNA) LEDs.............................................................................14
Node FC and CNA Port Numbering....................................................................................15
SAS Port LEDs....................................................................................................................16
Interconnect Port LEDs.........................................................................................................16
Verifying Service Processor LEDs...............................................................................................17
2 Servicing the Storage System......................................................................20
Service Processor Onsite Customer Care ...................................................................................20
Accessing Guided Maintenance..........................................................................................21
Accessing SPMAINT ..........................................................................................................21
Accessing the HP 3PAR Management Console.......................................................................21
Identifying a Replaceable Part..................................................................................................21
Swappable Components.....................................................................................................21
Getting Recommended Actions.................................................................................................22
Powering Off/On the Storage System........................................................................................23
Powering Off.....................................................................................................................23
Powering On.....................................................................................................................24
Disengaging the PDU Pivot Brackets..........................................................................................24
Replacing an Interconnect Link Cable........................................................................................25
Repairing a Disk Drive.............................................................................................................25
Removing a 2.5 inch Disk ..................................................................................................28
Removing a 3.5 inch Disk...................................................................................................28
Installing a Disk Drive.........................................................................................................29
Verifying Disk Drives...........................................................................................................31
Controller Node Replacement Procedure....................................................................................31
Preparation.......................................................................................................................31
Node Identification and Shutdown.......................................................................................32
Node Identification and Preparation.....................................................................................32
Node Removal..................................................................................................................36
Node Installation...............................................................................................................36
Node Verification .............................................................................................................37
SFP Repair.............................................................................................................................38
Replacing an SFP...............................................................................................................42
Replacing a Drive Enclosure.....................................................................................................42
Replacing an I/O Module.......................................................................................................43
Removing an I/O Module...................................................................................................44
Installing an I/O Module....................................................................................................45
Replacing a Power Cooling Module..........................................................................................46
Removing a Power Cooling Module......................................................................................48
Replacing a Battery inside a Power Cooling Module...............................................................49
Installing a Power Cooling Module ......................................................................................51
Controller Node Internal Component Repair...............................................................................52
Node Cover Removal and Replacement................................................................................53
Controller Node (Node) Clock Battery Replacement Procedure................................................53
Preparation..................................................................................................................53
Node Identification and Shutdown..................................................................................53
Node Removal..............................................................................................................54
Node Clock Battery Replacement....................................................................................55
Node Replacement........................................................................................................55
Node and Clock Battery Verification................................................................................55
Controller Node (Node) DIMM Replacement Procedure..........................................................56
Preparation..................................................................................................................56
Node and DIMM Identification and Node Shutdown.........................................................56
Node Removal..............................................................................................................58
DIMM Replacement.......................................................................................................58
Node Replacement........................................................................................................58
Node and DIMM Verification.........................................................................................59
Controller Node (Node) PCIe Adapter Procedure...................................................................60
Controller Node (Node) Drive Assembly Replacement Procedure..............................................62
Preparation..................................................................................................................62
Node Identification and Shutdown..................................................................................62
Node Removal..............................................................................................................64
Node Drive Assembly Replacement.................................................................................64
Node Replacement........................................................................................................64
Node Verification .........................................................................................................65
CLI Procedures.......................................................................................................................66
Node Identification and Preparation ....................................................................................66
Node Verification .............................................................................................................66
The Startnoderescue Command............................................................................................67
Node and PCIe Adapter Identification and Preparation ..........................................................67
Node and PCIe Adapter Verification ...................................................................................68
Controller Node (Node) PCIe Adapter Riser Card Replacement Procedure.................................69
PCIe Adapter Identification and Node Shutdown...............................................................69
Node Removal..............................................................................................................70
PCIe Adapter Riser Card Replacement.............................................................................70
Node Replacement........................................................................................................71
Node PCM Identification....................................................................................................71
Drive PCM Identification ....................................................................................................71
PCM Location...............................................................................................................72
PCM and Battery Verification...............................................................................................73
SFP Identification...............................................................................................................74
SFP Verification.............................................................................................................74
Disk Drive Identification......................................................................................................75
Disk Drive (Magazine) Location...........................................................................................76
Disk Drive Verification.........................................................................................................76
3 Upgrading the Storage System...................................................................77
Installing Rails for Component Enclosures...................................................................................77
Controller Node Upgrade .......................................................................................................78
Upgrading a 7400 Storage System......................................................................................79
Installing the Enclosures.................................................................................................91
Drive Enclosures and Disk Drives Upgrade ................................................................................93
Adding an Expansion Drive Enclosure..................................................................................93
Upgrade Drive Enclosures...................................................................................................94
Check Initial Status........................................................................................................95
Install Drive Enclosures and Disk Drives............................................................................96
Power up enclosures and check status..............................................................................97
Chain Node 0 Loop DP-2 (B Drive Enclosures and the solid red lines)...................................97
Chain Node 0 Loop DP-1 (A Drive Enclosures and the dashed red lines)...............................98
Check Pathing..............................................................................................................99
Move Node 1 DP-1 and DP-2 to farthest drive enclosures..................................................100
Check Pathing............................................................................................................101
Chain Node 1 Loop DP-2 (B Drive Enclosures and the solid green lines).............................102
Chain Node 1 Loop DP-1 (A Drive Enclosures and the dashed green lines)..........................103
Check Pathing............................................................................................................105
Execute admithw.........................................................................................................106
Verify Pathing.............................................................................................................107
Verify Cabling............................................................................................................108
Upgrade Disk Drives.............................................................................................................108
Check Initial Status...........................................................................................................109
Inserting Disk Drives ........................................................................................................109
Check Status...................................................................................................................109
Check Progress................................................................................................................110
Upgrade Completion........................................................................................................110
Upgrading PCIe Adapters......................................................................................................111
Upgrading the HP 3PAR OS and Service Processor...................................................................111
4 Support and Other Resources...................................................................112
Contacting HP......................................................................................................................112
HP 3PAR documentation........................................................................................................112
Typographic conventions.......................................................................................................116
HP 3PAR branding information...............................................................................................116
5 Documentation feedback.........................................................................117
A Installing Storage Software Manually........................................................118
Connecting to the Laptop.......................................................................................................118
Connecting the Laptop to the Controller Node.....................................................................118
Connecting the Laptop to the HP 3PAR Service Processor......................................................118
Serial Cable Connections..................................................................................................118
Maintenance PC Connector Pin-outs .............................................................................118
Service Processor Connector Pin-outs .............................................................................119
Manually Initializing the Storage System Software.....................................................................119
Manually Setting up the Storage System..............................................................................119
Storage System Console – Out Of The Box.....................................................................122
Adding a Storage System to the Service Processor....................................................................127
Exporting Test LUNs..............................................................................................................128
Defining Hosts.................................................................................................................129
Creating and Exporting Test Volumes..................................................................................129
B Service Processor Moment Of Birth (MOB).................................................131
C Connecting to the Service Processor.........................................................143
Using a Serial Connection.....................................................................................................143
D Node Rescue.........................................................................................145
Automatic Node-to-Node Rescue............................................................................................145
Service Processor-to-Node Rescue...........................................................................................146
Virtual Service Processor-to-Node Rescue.................................................................................148
E Illustrated Parts Catalog...........................................................................152
Drive Enclosure Components..................................................................................................152
Storage System Components..................................................................................................155
Controller Node and Internal Components...............................................................................157
Service Processor..................................................................................................................160
Miscellaneous Cables and Parts.............................................................................................160
F Disk Drive Numbering.............................................................................163
Numbering Disk Drives..........................................................................................................163
G Uninstalling the Storage System...............................................................165
Storage System Inventory.......................................................................................................165
Removing Storage System Components from an Existing or Third Party Rack.................................165
1 Understanding LED Indicator Status
Storage system components have LEDs to indicate status of the hardware and whether it is
functioning properly. These indicators help diagnose basic hardware problems. You can quickly
identify hardware problems by examining the LEDs on all components using the tables and
illustrations in this chapter.
Enclosure LEDs
Bezels LEDs
The bezels are located at the front of the system on each side of the drive enclosure and include
three LEDs.
Figure 1 Bezel LEDs
Table 1 Bezel LEDs
Callout | LED | Appearance | Indicates
1 | System Power | Green | On – System power is available.
1 | System Power | Amber | On – System is running on battery power.
2 | Module Fault | Amber | On – System hardware fault to I/O modules or PCMs within the enclosure. At the rear of the enclosure, identify whether the PCM or I/O module LED is also amber.
3 | Disk Drive Status | Amber | On – Specific disk drive LED identifies the affected disk. This LED applies to disk drives only.
NOTE: Prior to running the installation scripts, the numeric display located under the Disk Drive
Status LED on the bezels may not display the proper numeric order in relation to their physical
locations. The correct sequence will be displayed after the installation script completes.
Disk Drive LEDs
The LEDs are located on the front of the disk drives:
Figure 2 Disk Drive LEDs
Table 2 Disk drive LEDs
LED | Appearance | Status | Indicates
1 - Fault | Amber | On | Disk failed and is ready to be replaced.
1 - Fault | Amber | Flashing | The locatecage command has been issued. Fault LEDs for failed disk drives do not flash. The I/O module Fault LEDs at the rear of the enclosure also blink.
2 - Activity | Green | On | Normal operation
2 - Activity | Green | Flashing | Activity
Storage System Component LEDs
The storage system includes the following components in the enclosure at the rear of the system.
Power Cooling Module LEDs
The PCM has four or six LEDs, depending on the PCM model; all are located in the corner of the module.
Figure 3 PCM LEDs
The following table describes the LED states.
Table 3 PCM LED Descriptions
LED | Appearance | State | Indicates
AC Input Fail | Amber | On | No AC power or PCM fault
AC Input Fail | Amber | Flashing | Firmware download
PCM OK | Green | On | AC present and PCM On / OK
PCM OK | Green | Flashing | Standby mode
Fan Fail | Amber | On | PCM fail or PCM fault
Fan Fail | Amber | Flashing | Firmware download
DC Output Fail | Amber | On | No AC power or fault or out of tolerance
DC Output Fail | Amber | Flashing | Firmware download
Battery Fail | Amber | On | Hard fault (not recoverable)
Battery Fail | Amber | Flashing | Soft fault (recoverable)
Battery Good | Green | On | Present and charged
Battery Good | Green | Flashing | Charging or disarmed
Drive PCM LEDs
The following figure shows the drive enclosure PCM LEDs.
Figure 4 Drive PCM LEDs
The next table describes the drive PCM LED states.
Table 4 Drive PCM LED Descriptions
LED | Appearance | State | Indicates
AC Input Fail | Amber | On | No AC power or PCM fault
AC Input Fail | Amber | Flashing | Partner PCM faulty/off or firmware download
PCM OK | Green | On | AC present and PCM On / OK
PCM OK | Green | Flashing | Standby mode
Fan Fail | Amber | On | PCM fail or PCM fault
Fan Fail | Amber | Flashing | Firmware download
DC Output Fail | Amber | On | No AC power or fault or out of tolerance
DC Output Fail | Amber | Flashing | Firmware download
I/O Modules LEDs
I/O modules are located on the back of the system. Each I/O module has two mini-SAS universal ports, which can be connected to HBAs or other ports, and each port includes External Port Activity LEDs, labeled 0–3. The I/O module also includes a Power and a Fault LED.
Figure 5 I/O Module
Table 5 I/O module LEDs
Function | Appearance | State | Meaning
Power | Green | On | Power is on
Power | Green | Off | Power is off
Fault | Amber | On | Fault
Fault | Amber | Off | Normal operation
Fault | Amber | Flashing | Locate command issued
Figure 6 External Port Activity LEDs
Function | Appearance | State | Meaning
External Port Activity (4 LEDs for Data Ports 0 through 3) | Green | On | Ready, no activity
External Port Activity | Green | Off | Not ready or no power
External Port Activity | Green | Flashing | Activity
Controller Node and Internal Component LEDs
Controller node LEDs are shown in the following table.
Figure 7 Controller Node LEDs
NOTE: Issue the locatenode command to flash the UID LED blue.
Figure 8 Controller Node Indicator LEDs
Table 6 Controller Node LEDs
LED | Appearance | State | Meaning
Status | Green | On | Not a cluster member
Status | Green | Rapid flashing | Boot
Status | Green | Slow flashing | Cluster member
Unit ID | Blue | On | OK to remove
Unit ID | Blue | Off | Not OK to remove
Unit ID | Blue | Flashing | Locate command issued
Fault | Amber | On | Fault
Fault | Amber | Off | No fault
Fault | Amber | Flashing | Node in cluster and there is a fault
Ethernet LEDs
The controller node has two built-in Ethernet ports and each includes two LEDs:
• MGMT — Eth0 port provides connection to the public network
• RC-1 — designated port for Remote Copy functionality
Figure 9 Ethernet LEDs
Table 7 Ethernet LEDs
LED | Function | Appearance | State | Indicates
Left LED | Link Up Speed | Green | On | 1 GbE link
Left LED | Link Up Speed | Amber | On | 100 Mb link
Left LED | Link Up Speed | – | Off | No link established or 10 Mb link
Right LED | Activity | Green | On | No link activity
Right LED | Activity | Green | Off | No link established
Right LED | Activity | Green | Flashing | Link activity
Node FC and CNA Port LEDs
The controller node has two onboard FC ports; each includes two LEDs. The arrow head-shaped
LEDs point to the port they are associated with.
NOTE: Incorrectly configured interconnect cables illuminate amber port LEDs.
Figure 10 FC Port LEDs
Table 8 FC Port LEDs
LED | Appearance | State | Meaning
All ports | No light | Off | Wake up failure (dead device) or power is not applied
FC-1 | Amber | Off | Not connected
FC-1 | Amber | 3 fast blinks | Connected at 4 Gb/s
FC-1 | Amber | 4 fast blinks | Connected at 8 Gb/s
FC-2 | Green | On | Normal/Connected – link up
FC-2 | Green | Flashing | Link down or not connected
Fibre Channel (FC) Adapter LEDs
Figure 11 FC Adapter LEDs
Table 9 FC Adapter LEDs
LED | Appearance | State | Meaning
All ports | No light | Off | Wake up failure (dead device) or power is not applied
Port speed | Amber | Off | Not connected
Port speed | Amber | 3 fast blinks | Connected at 4 Gb/s
Port speed | Amber | 4 fast blinks | Connected at 8 Gb/s
Link status | Green | On | Normal/Connected – link up
Link status | Green | Flashing | Link down or not connected
Converged Network Adapter (CNA) LEDs
Figure 12 CNA LEDs
Table 10 CNA LEDs
LED | Function | Appearance | State | Meaning
Upper | Link | Green | Off | Link down
Upper | Link | Green | On | Link up
Lower | ACT (Activity) | Green | Off | No activity
Lower | ACT (Activity) | Green | On | Activity
Node FC and CNA Port Numbering
Port position is displayed as Node:Slot:Port (N:S:P) in the Management Console or CLI.
Figure 13 FC Ports
Table 11 FC Ports
Port | Slot:Port
FC-1 | 1:1
FC-2 | 1:2
Figure 14 FC Adapter Ports
Table 12 FC Adapter Ports
Port | Slot:Port
1 | 2:1
2 | 2:2
3 | 2:3
4 | 2:4
Figure 15 CNA Ports
Table 13 CNA Ports
Port | Slot:Port
1 | 2:1
2 | 2:2
SAS Port LEDs
The controller node has two SAS ports and each includes four LEDs, numbered 0–3:
Figure 16 SAS port LEDs
Table 14 SAS port LEDs
Appearance | State | Indicates
Green | Off | No activity on the port. This LED does not indicate a Ready state with a solid On as the I/O Module External Port Activity LEDs do.
Green | Flashing | Activity on the port
Interconnect Port LEDs
The controller node has two interconnect ports and each includes two LEDs.
NOTE: Incorrectly configured interconnect cables illuminate amber port LEDs.
Figure 17 7200 Interconnect Ports LEDs
Figure 18 7400 Interconnect Ports LEDs
Table 15 Interconnect port LEDs
7200: A 7200 does not use any external interconnect links. Interconnect port LEDs should always be off.
7400:
LED | Appearance | State | Indicates
Fault | Amber | On | Failed to establish link connection
Fault | Amber | Off | No error currently on link
Fault | Amber | Flashing | Interconnect cabling error, controller node in wrong slot, or serial number mismatch between controller nodes
Status | Green | On | Link established
Status | Green | Off | Link not yet established
Verifying Service Processor LEDs
The HP 3PAR SP (ProLiant DL320e) LEDs are located at the front and rear of the SP.
Figure 19 Front Panel LEDs
Table 16 Front panel LEDs
Item | LED | Appearance | Description
1 | UID LED/button | Blue | Active
1 | UID LED/button | Flashing Blue | System is being managed remotely
1 | UID LED/button | Off | Deactivated
2 | Power On/Standby button and system power | Green | System is on
2 | Power On/Standby button and system power | Flashing Green | Waiting for power
2 | Power On/Standby button and system power | Amber | System is on standby, power still on
2 | Power On/Standby button and system power | Off | Power cord is not attached or power supply has failed
3 | Health | Green | System is on and system health is normal
3 | Health | Flashing Amber | System health is degraded
3 | Health | Flashing Red | System health is critical
3 | Health | Off | System power is off
4 | NIC status | Green | Linked to network
4 | NIC status | Flashing Green | Network activity
4 | NIC status | Off | No network link
Figure 20 Rear Panel LEDs
Table 17 Rear panel LEDs
Item | LED | Appearance | Description
1 | NIC link | Green | Link
1 | NIC link | Off | No link
2 | NIC status | Green or Flashing Green | Activity
2 | NIC status | Off | No activity
3 | UID LED/button | Blue | Active
3 | UID LED/button | Flashing Blue | System is being managed remotely
3 | UID LED/button | Off | Deactivated
4 | Power supply | Green | Normal
4 | Power supply | Off | One or more of the following conditions: power is unavailable, power supply has failed, power supply is in standby mode, or power supply error
2 Servicing the Storage System
Use this chapter to perform removal and replacement procedures on the HP 3PAR StoreServ 7000
Storage systems.
CAUTION: Before servicing any component in the storage system, prepare an Electrostatic
Discharge-safe (ESD) work surface by placing an antistatic mat on the floor or table near the storage
system. Attach the ground lead of the mat to an unpainted surface of the rack. Always use a
wrist-grounding strap provided with the storage system. Attach the grounding strap clip directly to
an unpainted surface of the rack.
For more information on part numbers for storage system components listed in this chapter, see
the “Illustrated Parts Catalog” (page 152).
Service Processor Onsite Customer Care
Use the Service Processor Onsite Customer Care (SPOCC) interface to access Guided Maintenance or SPMAINT (Service Processor Maintenance) in the Command Line Interface (CLI), where you can perform various administrative and diagnostic tasks to support both the storage system and the SP.
To open SPOCC, enter the SP IP address in a web browser and enter your user name and password.
Figure 21 SPOCC – Support page
Accessing Guided Maintenance
To access Guided Maintenance:
1. On the left side of the SPOCC homepage, click Support.
2. On the Service Processor - Support page, under InServs, click Guided Maintenance in the
Action column.
Use Guided Maintenance when servicing the following hardware components:
• Controller node
• HBA/CNA
• Node disk
• DIMMs
• Time of day battery
Accessing SPMAINT
Use SPMAINT if you are servicing a storage system component or when you need to run a CLI
command.
To access SPMAINT:
1. On the left side of the SPOCC homepage, click Support.
2. On the Service Processor - Support page, under Service Processor, click SPMAINT on the Web in the Action column.
3. Select option 7 Interactive CLI for a StoreServ and then select the desired system.
Accessing the HP 3PAR Management Console
To access the HP 3PAR Management console:
1. Double-click the HP 3PAR Management Console executable (.exe) to open the console.
2. Enter your user name and password.
3. Under the Systems tree in the left panel, select the storage system to be serviced to connect.
Identifying a Replaceable Part
Parts have a nine-character spare part number on their labels. For some spare parts, the part
number is available in the system. Alternatively, the HP call center can assist in identifying the
correct spare part number.
Figure 22 Product label with HP Spare part number
Swappable Components
Colored touch points on a storage system component (such as a lever or latch) identify whether
the system should be powered on or off during a part replacement:
• Hot-swappable – Parts are identified by red-colored touch points. The system can remain powered on and active during replacement.
NOTE: Disk drives are hot-swappable, even though they are yellow and do not have red
touch points.
• Warm-swappable – Parts are identified by gray touch points. The system does not fail if the part is removed, but data loss may occur if the replacement procedure is not followed correctly.
• Cold-swappable – Parts are identified by blue touch points. The system must be powered off or otherwise suspended before replacing the part.
CAUTION:
• Do not replace cold-swappable components while power is applied to the product. Power off the device and then disconnect all AC power cords.
• Power off the equipment and disconnect power to all AC power cords before removing any access covers for cold-swappable areas.
• When replacing hot-swappable components, allow approximately 30 seconds between removing the failed component and installing the replacement. This time is needed to ensure that configuration data about the removed component is cleared from the system registry. To prevent overheating due to an empty enclosure or bay, use a blank or leave the old component slightly disengaged in the enclosure until the replacement can be made. Drives must be replaced within 10 minutes, nodes within 30 minutes, and all other parts within 6 minutes.
• Before replacing a hot-swappable component, ensure that steps have been taken to prevent loss of data.
Getting Recommended Actions
This section explains the steps required to get from an alert message to the action associated with
the alert.
The Component line in the right column lists the cage number, magazine number, and drive number
(cage:magazine:disk). The first and second numbers are sufficient to identify the exact disk in a
StoreServ system, since there is always only a single disk (disk 0) in a single magazine. The
information displayed in the Component line depends on the type of components causing the alert.
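As an alternative to working from the alert view, the problem disks can also be listed from a CLI session. The showpd options below are standard, but the output is only an illustrative, trimmed sketch rather than output from a real system:

cli% showpd -failed -degraded
 Id CagePos Type -State--
 24 2:5:0   FC   failed

The CagePos column uses the same cage:magazine:disk format as the Component line in the alert.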
1. Follow the link to alert actions under Recommended Actions (see Figure 23 (page 22)).
Figure 23 Verify Drive Failure Alert
2. At the HP Storage Systems Guided Troubleshooting web site, follow the link for your product.
3. At the bottom of the HP 3PAR product page, click the link for HP 3PAR Alert Messages.
4. At the bottom of the Alert Messages page, choose the correct message code series based on the first four characters of the alert message code.
5. Choose the next digit in the code to narrow the message code series.
6. On the next page, select the message code that matches the one that appeared in the alert. The next page shows the message type based on the message code selected and provides a link to the suggested action.
7. Follow the link.
8. On the suggested actions page, scroll through the list to find the message state listed in the alert message. The recommended action is listed next to the message state.
Powering Off/On the Storage System
The following describes how to power the storage system on and off.
WARNING! Do not power off the system unless a service procedure requires the system to be
powered off. Before you power off the system to perform maintenance procedures, first verify with
a system administrator. Powering off the system will result in loss of access to the data from all
attached hosts.
Powering Off
Before you begin, use either SPMAINT or SPOCC to shut down and power off the system. For
information about SPOCC, see “Service Processor Onsite Customer Care ” (page 20).
NOTE: PDUs in any expansion cabinets connected to the storage system may need to be shut
off. Use the locatesys command to identify all connected cabinets before shutting down the
system. The command blinks all node and drive enclosure LEDs.
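For example, a brief CLI session such as the following can be used to flash the enclosure LEDs before shutdown. The -t duration option shown here is an assumption; confirm the exact syntax in the HP 3PAR CLI Reference for your OS level:

cli% locatesys -t 120
(All node and drive enclosure LEDs in the system blink for about two minutes, identifying every connected cabinet.)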
The system can be shut down before powering off by any of the following three methods:
Using SPOCC
1. Select StoreServ Product Maintenance.
2. Select Halt a StoreServ cluster/node.
3. Follow the prompts to shut down a cluster. Do not shut down individual nodes.
4. Turn off power to the node PCMs.
5. Turn off power to the drive enclosure PCMs.
6. Turn off all PDUs in the rack.
Using SPMAINT
1. Select option 4 (StoreServ Product Maintenance).
2. Select Halt a StoreServ cluster/node.
3. Follow the prompts to shut down a cluster. Do not shut down individual nodes.
4. Turn off power to the node PCMs.
5. Turn off power to the drive enclosure PCMs.
6. Turn off all PDUs in the rack.
Using CLI Directly on the Controller Node if the SP is Inaccessible
1. Enter the CLI command shutdownsys halt. Confirm all prompts.
CAUTION: Failure to wait until all controller nodes are in a halted state can cause the system
to view the shutdown as uncontrolled. The system will undergo a check-state when powered
on if the nodes are not fully halted before power is removed and can seriously impact host
access to data.
2. Allow 2-3 minutes for the node to halt, then verify that the node Status LED is flashing green and the node hotplug LED is blue, indicating that the node has been halted. For information about LED status, see “Understanding LED Indicator Status” (page 7).
3. Turn off power to the node PCMs.
4. Turn off power to the drive enclosure PCMs.
5. Turn off all PDUs in the rack.
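A minimal CLI session for this shutdown might look like the following; the confirmation prompts are paraphrased rather than quoted from a live system:

cli% shutdownsys halt
(Answer the confirmation prompts, then wait 2-3 minutes and check that every node Status LED is flashing green before removing power.)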
Powering On
1. Set the circuit breakers on the PDUs to the ON position.
2. Set the switches on the power strips to the ON position.
3. Power on the drive enclosure PCMs.
NOTE: To avoid cabling errors, all drive enclosures must have at least one hard drive installed before powering on the enclosure.
4. Power on the node enclosure PCMs.
5. Verify the status of the LEDs; see “Understanding LED Indicator Status” (page 7).
Disengaging the PDU Pivot Brackets
To access the vertically mounted power distribution units (PDU) or servicing area, the PDUs can
be lowered out of the rack.
1. Remove the two top mounting screws.
2. Pull down on the PDU to lower it.
NOTE: If necessary, loosen the two bottom screws to easily pull down the PDU.
3. Ensure the PDUs are in the fully lowered position before accessing them.
Figure 24 Disengaging the PDU Pivot Brackets
Replacing an Interconnect Link Cable
Before replacing an interconnect link cable, verify with the system administrator that the system can be powered off.
1. Shut down all the controller nodes in the system.
2. Turn off power to the controller node PCMs.
3. Turn off power to the drive enclosure PCMs.
4. Turn off power to all PDUs in the rack.
5. Replace the damaged cable. Verify the direction of the cable connectors matches with the
ports before connecting.
6. Set the circuit breakers on the PDUs to the ON position.
7. Set the switches on the power strips to the ON position.
8. Power on the drive enclosure PCMs.
9. Power on the node enclosure PCMs.
10. Verify the status of the LEDs, see “Understanding LED Indicator Status” (page 7).
Repairing a Disk Drive
Use the following instructions for replacing failed disk drives or solid-state drives (SSD).
WARNING! If the StoreServ is enabled with the HP 3PAR Data Encryption feature, use only self-encrypting drives (SED). Using a non-self-encrypting drive may cause errors during the repair process.
CAUTION:
• If you require more than 10 minutes to replace a disk drive, install a disk drive blank cover to prevent overheating while you are working.
• To avoid damage to hardware and the loss of data, never remove a disk drive without confirming that the disk fault LED is lit.
NOTE: SSDs have a limited number of writes that can occur before reaching the SSD's write
endurance limit. This limit is generally high enough so wear out will not occur during the expected
service life of an HP 3PAR StoreServ under the great majority of configurations, IO patterns, and
workloads. HP 3PAR StoreServ tracks all writes to SSDs and can report the percent of the total
write endurance limit that has been used. This allows any SSD approaching the write endurance
limit to be proactively replaced before they are automatically spared out. An SSD has reached the
maximum usage limit once it exceeds its write endurance limit. Following the product warranty
period, SSDs that have exceeded the maximum usage limit will not be repaired or replaced under
HP support contracts.
Identifying a Disk Drive
1. Under the Systems tree in the left panel of the HP 3PAR Management Console, select the storage system to be serviced. The Summary tab should be displayed indicating the failed drive (see Figure 25 (page 26)).
Figure 25 Summary Tab
WARNING! The Physical Disks may indicate Degraded, which indicates that the disk drive
is not yet ready for replacement. It may take several hours for the data to be vacated; do not
proceed until the status is Failed. Removing the failed drive before all the data is vacated
will cause loss of data.
2. On the Summary tab, select the Failed link in the Physical Disk row next to the red X icon.
CAUTION: If more than one disk drive is failed or degraded, contact your authorized service
provider to determine if the repair can be done in a safe manner, preventing down time or
data loss.
A filtered table displays, showing only failed or degraded disk drives (see Figure 26 (page
26)).
Figure 26 Filtered Table
The Alert tab displays a filtered Alert table showing only the critical alerts associated with disk
drives, where the alert details are displayed (see Figure 27 (page 27)).
NOTE: The lower pane lists the alerts in a tabular fashion (you can see the highlighted alert
in Figure 27 (page 27)). Highlighted alerts display their details in the pane above the list.
Figure 27 Alert Details
3. Double-click the relevant alert to display the alert details.
Disk Drive (Magazine) Location
1. Execute steps 1 through 3 in the “Identifying a Disk Drive” procedure.
2. Select the Cage link for the Failed drive (see Figure 28 (page 27)).
Figure 28 Cage Link for Failed Drive
3. Select the Locate icon in the top toolbar of the Management Console.
Figure 29 Tool Bar Locate Icon
4. In the Locate Cage dialog box, enter an appropriate time to allow service personnel to view the LED status of the Drive Enclosure (Cage). See Figure 30 (page 28).
NOTE: If necessary, use the Stop Locate icon to halt LED flashing.
Figure 30 Locate Cage Dialog Box
An icon with a flashing LED will be shown next to the cage, which flashes all drives in this cage except the failed drive.
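The same locate operation can also be run from a CLI session. The commands below are standard, but the cage name and timeout value are only illustrative:

cli% showcage
cli% locatecage -t 300 cage2
(showcage lists the drive enclosures and their names; locatecage then flashes the LEDs on the chosen cage, here for 300 seconds.)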
Removing a 2.5 inch Disk
1. Pinch the handle latch to release the handle into the open position.
2. Pull the handle away from the enclosure and wait 30 seconds.
3. Slowly slide the disk drive out of the enclosure and set it aside.
4. Remove the replacement disk drive from its packaging. To install the new disk drive, see “Installing a Disk Drive” (page 29).
Figure 31 7200 and 7400 Two Node System (HP M6710 Drive Enclosure)
Removing a 3.5 inch Disk
To remove a 3.5 inch disk drive:
1. Pinch the latch in the handle towards the hinge to release the handle.
2. Gently pull the disk drive out approximately one inch and wait 30 seconds.
3. Slide the disk drive out of the enclosure and set aside.
4. To install the new disk drive, see “Installing a Disk Drive” (page 29).
Figure 32 Removing a 3.5 inch disk drive
Installing a Disk Drive
CAUTION: Blank disk drive carriers are provided and must be used if all slots in the enclosure
are not filled with disk drives.
CAUTION: To avoid potential damage to equipment and loss of data, handle disk drives carefully.
NOTE: All drives in a vertical column of an LFF drive enclosure must be the same speed and type.
Installing a 2.5 inch disk drive (SFF)
1. Press the handle latch to open the handle.
2. Insert the disk drive into the enclosure with the handle opened from the top in the vertical position.
3. Slide the disk drive into the enclosure until it engages. Push firmly until it clicks.
Figure 33 7200 and 7400 Two Node System
4. Observe the newly installed disk drive to verify that the amber LED turns off and remains off for 60 seconds.
Installing a 3.5 inch disk drive (LFF)
1. Press the handle latch to open the handle.
2. Position the disk drive so the handle opens from the left and slide it into the enclosure.
3. Push firmly until the handle fully engages and clicks.
Figure 34 Installing a 3.5 inch disk drive
Verifying Disk Drives
1. Verify the disk drive has been successfully replaced.
2. Display the physical disks to monitor. Open the system in the Systems tab and select Physical Disks.
NOTE: Users can select the column header State to re-sort the list.
NOTE: Until data has been restored, the original disk drive will display as Failed and the replacement disk drive will display as Degraded.
3. Verify that the new drive displays in the same position as the failed drive and the State is listed as Normal.
NOTE: The drive that was replaced continues to display in the table as Failed until the disk rebuild is complete, which may take several hours. When the process is complete, the failed drive is dismissed and dropped from the display.
4. Open a CLI session. Issue the checkhealth command to verify the system is working properly.
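A short verification session might look like the following; the parenthetical notes paraphrase what a healthy system reports rather than quoting verbatim output:

cli% showpd
(Confirm that the replacement drive's State column shows normal and that the failed drive has been dropped from the list.)
cli% checkhealth
(Review the output; it should report no failing components once the rebuild completes.)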
Controller Node Replacement Procedure
CAUTION: Customers are only able to replace a controller node on the StoreServ 7200 Storage.
Other internal components are only serviceable by the ASP.
CAUTION: Alloy gray-colored latches on components such as the node mean the component is
warm-swappable. HP recommends shutting down the node (with the enclosure power remaining
on) before removing this component. Contact your ASP for node diagnosis and shutdown.
CAUTION: To prevent overheating, node replacement requires a maximum service time of 30 minutes.
NOTE: Be sure to wear your electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
1. Unpack the replacement node and place it on an ESD safe mat.
2. Remove the node cover:
a. Loosen the two thumbscrews that secure the node cover to the node.
b. Lift the node cover and remove it.
3. If a PCIe adapter exists in the failed node:
a. Remove the PCIe adapter riser card from the replacement node by grasping the blue touch point on the riser card and pulling it up and away from the node.
b. Insert the existing PCIe adapter onto the riser card.
c. Install the PCIe adapter assembly by aligning the recesses on the adapter plate with the pins on the node chassis. This should align the riser card with the slot on the node. Snap the PCIe adapter assembly into the node.
4. Install the node cover:
a. While aligning the node rod with the cutout in the front and the guide pins with the cutouts in the side, lower the node cover into place.
b. Tighten the two thumbscrews to secure the node cover to the node.
5. Pull the gray node rod to the extracted position, out of the component.
Node Identification and Shutdown
Before you begin, use either the HP 3PAR Management Console or HP 3PAR CLI to identify and
halt the failed node.
NOTE: If the failed node is already halted, it is not necessary to shutdown (halt) the node because
it is not part of the cluster.
The following figure illustrates the 7200 controller node; the callouts indicate node numbers 1 and 0.
Figure 35 7200 Node Identification
The following figure illustrates the 7400 controller node; the callouts indicate node numbers 3, 2, 1, and 0.
Figure 36 7400 Node Identification
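From a CLI session, the shownode command can confirm which node has left the cluster before you proceed. The output below is a trimmed, illustrative sketch (columns abbreviated, values are examples only):

cli% shownode
Node -State- Master InCluster
   0 OK      Yes    Yes
   1 Failed  No     No
(A node that is halted or failed shows InCluster No and can be serviced.)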
Node Identification and Preparation
For the CLI procedure, see “CLI Procedures” (page 66).
NOTE: If the failed node is already halted, it is not necessary to shutdown the node because it
is not part of the cluster.
1. Under the Systems tree in the left panel, click the storage system to be serviced. In this case, there is only one controller node present, which indicates that the other node is not part of the cluster. If the node UID LED is blue, proceed to step 4 to locate the system. If the node UID LED is not blue, escalate to the next level of support.
NOTE: If the node's state is Degraded, it will need to be shut down to be serviced.
NOTE: Depending on the failed component, physical disks may be Degraded because
node paths to drive enclosures are missing.
2. The Alert panel displays a filtered Alert table showing only the critical alerts associated with the node, where the alert details are displayed. On the storage system, identify the node and verify that the status LED is lit amber.
3. Shut down the node to be replaced:
a. Log into SPOCC and access Guided Maintenance for this storage system. In the Guided Maintenance window, click Controller Node (see “Service Processor Onsite Customer Care” (page 20)). To log into SPOCC, go to https://<hostname or IP address>.
b. In the Node Rescue Task Information section, select the node to shut down from the Node ID field, then click Shutdown Node.
c. In the Node Status Information section, click Refresh to confirm the node has been shut down and the node is no longer in the cluster.
To view the Guided Maintenance pages:
i. Check the Node Status Information:
A. If the node to be serviced does not appear, close this window and proceed to step 4.
B. If the node is listed, scroll to the bottom of the page.
ii. Use the locatenode and shutdownnode commands to locate and shut down the node to be serviced (see the example CLI session after this procedure).
iii. Select the link Replacement Instructions and Video to go to the HP Services Media Library (SML).
NOTE: You may already have this window open.
iv. Navigate to your Storage System type:
• Product Type - Storage
• Product Family - 3PAR Storage Systems
• Product Series - HP 3PAR StoreServ 7000 Storage Systems
v. Launch FRU Remove/Replace and select the procedure for the controller node.
4. Execute a LOCATE against the System in the HP 3PAR Management Console:
a. Select the Locate icon in the top toolbar of the Management Console.
Figure 37 Select Locate on Management Console Toolbar
b. Enter an appropriate time to allow service personnel to view the LED status of the System.
NOTE: If necessary, use the Stop Locate icon to halt LED flashing.
Figure 38 Setting Permission for Time
This flashes the LEDs on all of the drives and all nodes in this System except the failed node, which has a solid blue LED.
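The example CLI session referenced in step 3 might look like the following; the node ID 1 is only illustrative:

cli% locatenode 1
(Flashes the UID LED on node 1 so it can be found in the rack.)
cli% shutdownnode halt 1
(Halts node 1 and removes it from the cluster; the enclosure remains powered.)
cli% shownode
(Confirm the halted node no longer shows as a cluster member before removing it.)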
Node Removal
1. Allow 2-3 minutes for the node to halt, then verify the Node Status LED is flashing green and the Node UID LED is blue, indicating the node is halted.
CAUTION: The system does not fail if the node is properly halted before removal, but data
loss may occur if the replacement procedure is not followed correctly.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
Figure 39 Verify Node Status LEDs
NOTE: Nodes 1 and 3 are rotated in relation to nodes 0 and 2. See Figure 36 (page 32).
2. Ensure that all cables on the failed node are marked to facilitate reconnecting later.
3. Remove cables from the failed node.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
6. Set the node on the ESD safe mat next to the replacement node for servicing.
7. Push in the failed node’s rod to ready it for packaging and provide differentiation from the replacement node.
Node Installation
1. Move both SFPs from the onboard FC ports on the failed node to the onboard FC ports on the replacement node:
a. Lift the retaining clip and carefully slide the SFP out of the slot.
b. Carefully slide the SFP into the FC port on the replacement node until it is fully seated; close the wire handle to secure it in place.
2. If a PCIe adapter is installed in the failed node, move the SFPs from the PCIe adapter on the failed node to the PCIe adapter on the replacement node:
a. Lift the retaining clip and carefully slide the SFP out of the slot.
b. Carefully slide the SFP into the adapter on the replacement node until it is fully seated; close the wire handle to secure it in place.
3. On the replacement node, ensure the gray node rod is in the extracted position, pulled out of the component.
4. Grasp each side of the replacement node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated 180°.
5. Keep sliding the node in until it halts against the insertion mechanism.
CAUTION: Do not proceed until the replacement node has an Ethernet cable connected to the MGMT port. Without an Ethernet cable, node rescue cannot complete and the replacement node is not able to rejoin the cluster.
6. Reconnect the cables to the node.
7. Push the extended gray node rod into the node to ensure the node is fully seated.
CAUTION: If the blue LED is flashing, which indicates that the node is not properly seated, pull out the gray node rod and push it back in to ensure that the node is fully seated.
NOTE: Once inserted, the node should power up and go through the node rescue procedure before joining the cluster. This may take up to 10 minutes.
NOTE: On a 7400 (4-node system), there may only be two customer Ethernet cables. When replacing nodes without any attached Ethernet cables, enter the shownet command to identify one of the active nodes, then remove one of the existing Ethernet cables and attach it to the node being rescued.
8. Verify the node LED is blinking green in synchronization with the other nodes, indicating that the node has joined the cluster.
9. Follow the return instructions provided with the new component.
NOTE: If a PCIe adapter is installed in the failed node, leave it installed. Do not remove and return it in the packaging for the replacement PCIe adapter.
Node Verification
For the CLI procedure, see “CLI Procedures” (page 66).
1. Verify the node is installed successfully by refreshing the Management Console.
NOTE: The Management Console refreshes periodically and may already reflect the new status.
2. The Status LED for the new node may indicate Green and take up to 3 minutes to change to Green Blinking.
3. Under the Systems tree in the left panel, click the storage system just serviced.
NOTE: The storage system status is good and the alerts associated with the failure have been auto-resolved by the system and removed.
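For a quick CLI cross-check of the repair, a session along these lines can be used; the parenthetical notes summarize the expected result rather than quoting verbatim output:

cli% shownode
(All nodes, including the replacement, should now show State OK and cluster membership.)
cli% checkhealth
(Should report no outstanding node-related issues.)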
SFP Repair
The SFP is located in the port on the controller node HBA/CNA and there are two to six SFPs per
node.
Before you begin, use either SPMAINT or the HP 3PAR Management Console to identify the failed
SFP.
SFP Identification
1. Under the Systems tree in the left panel, select the storage system to be serviced.
2. On the Summary tab, click the Port link to open the port's tab. Typically the State is listed as Loss sync, the Mode as Initiator, and the Connected Device Type as Free.
3. Verify that the SFP has been successfully replaced by refreshing the above pane. State should now be listed as Ready, the Mode as Target, and the Connected Device Type as Host.
For the CLI procedure, see “SFP Identification” (page 74).
To perform maintenance using CLI, access SPMAINT:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the following commands:
•   showport to view the port State:

cli% showport
N:S:P Mode      State     ----Node_WWN---- -Port_WWN/HW_Addr- Type  Protocol Label
0:0:1 initiator ready     50002ACFF70185A6 50002AC0010185A6   disk  SAS      -
0:0:2 initiator ready     50002ACFF70185A6 50002AC0020185A6   disk  SAS      -
0:1:1 target    ready     2FF70002AC0185A6 20110002AC0185A6   host  FC       -
0:1:2 target    ready     2FF70002AC0185A6 20120002AC0185A6   host  FC       -
0:2:1 target    loss_sync -                2C27D75301F6       iscsi iSCSI    -
0:2:2 target    loss_sync -                2C27D75301F2       iscsi iSCSI    -
0:3:1 peer      offline   -                0002AC8004DB       rcip  IP       RCIP0
1:0:1 initiator ready     50002ACFF70185A6 50002AC1010185A6   disk  SAS      -
1:0:2 initiator ready     50002ACFF70185A6 50002AC1020185A6   disk  SAS      -
1:1:1 target    ready     2FF70002AC0185A6 21110002AC0185A6   host  FC       -
1:1:2 target    loss_sync 2FF70002AC0185A6 21120002AC0185A6   free  FC       -
1:2:1 initiator loss_sync 2FF70002AC0185A6 21210002AC0185A6   free  FC       -
1:2:2 initiator loss_sync 2FF70002AC0185A6 21220002AC0185A6   free  FC       -
1:2:3 initiator loss_sync 2FF70002AC0185A6 21230002AC0185A6   free  FC       -
1:2:4 initiator loss_sync 2FF70002AC0185A6 21240002AC0185A6   free  FC       -
1:3:1 peer      offline   -                0002AC8004BD       rcip  IP       RCIP1

•   showport -sfp to verify which SFP requires replacement:

cli% showport -sfp
N:S:P -State- -Manufacturer- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0:1:1 OK      HP-F                      8.5 No        No      No     Yes
0:1:2 OK      HP-F                      8.5 No        No      No     Yes
0:2:1 OK      AVAGO                    10.3 No        No      Yes    Yes
0:2:2 OK      AVAGO                    10.3 No        No      Yes    Yes
1:1:1 OK      HP-F                      8.5 No        No      No     Yes
1:1:2 -       -                           - -         -       -      -
1:2:1 OK      HP-F                      8.5 No        No      Yes    Yes
1:2:2 OK      HP-F                      8.5 No        No      Yes    Yes
1:2:3 OK      HP-F                      8.5 No        No      Yes    Yes
1:2:4 OK      HP-F                      8.5 No        No      Yes    Yes
3. Replace the SFP. See "Replacing an SFP" (page 42).
4. Issue the following commands:
   •   showport to verify that the ports are in good condition and the State is listed as ready:

cli% showport
N:S:P Mode      State     ----Node_WWN---- -Port_WWN/HW_Addr- Type  Protocol Label
0:0:1 initiator ready     50002ACFF70185A6 50002AC0010185A6   disk  SAS      -
0:0:2 initiator ready     50002ACFF70185A6 50002AC0020185A6   disk  SAS      -
0:1:1 target    ready     2FF70002AC0185A6 20110002AC0185A6   host  FC       -
0:1:2 target    ready     2FF70002AC0185A6 20120002AC0185A6   host  FC       -
0:2:1 target    loss_sync -                2C27D75301F6       iscsi iSCSI    -
0:2:2 target    loss_sync -                2C27D75301F2       iscsi iSCSI    -
0:3:1 peer      offline   -                0002AC8004DB       rcip  IP       RCIP0
1:0:1 initiator ready     50002ACFF70185A6 50002AC1010185A6   disk  SAS      -
1:0:2 initiator ready     50002ACFF70185A6 50002AC1020185A6   disk  SAS      -
1:1:1 target    ready     2FF70002AC0185A6 21110002AC0185A6   host  FC       -
1:1:2 target    ready     2FF70002AC0185A6 21120002AC0185A6   host  FC       -
1:2:1 initiator loss_sync 2FF70002AC0185A6 21210002AC0185A6   free  FC       -
1:2:2 initiator loss_sync 2FF70002AC0185A6 21220002AC0185A6   free  FC       -
1:2:3 initiator loss_sync 2FF70002AC0185A6 21230002AC0185A6   free  FC       -
1:2:4 initiator loss_sync 2FF70002AC0185A6 21240002AC0185A6   free  FC       -
1:3:1 peer      offline   -                0002AC8004BD       rcip  IP       RCIP1
   •   showport -sfp to verify that the replaced SFP is connected and the State is listed as OK:

cli% showport -sfp
N:S:P -State- -Manufacturer- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0:1:1 OK      HP-F                      8.5 No        No      No     Yes
0:1:2 OK      HP-F                      8.5 No        No      No     Yes
0:2:1 OK      AVAGO                    10.3 No        No      Yes    Yes
0:2:2 OK      AVAGO                    10.3 No        No      Yes    Yes
1:1:1 OK      HP-F                      8.5 No        No      No     Yes
1:1:2 OK      HP-F                      8.5 No        No      No     Yes
1:2:1 OK      HP-F                      8.5 No        No      Yes    Yes
1:2:2 OK      HP-F                      8.5 No        No      Yes    Yes
1:2:3 OK      HP-F                      8.5 No        No      Yes    Yes
1:2:4 OK      HP-F                      8.5 No        No      Yes    Yes
Open the HP 3PAR Management Console:
1. Under the Systems tree in the left panel, select the storage system to be serviced.
2. On the Summary tab, click the Port link to open the port's tab.
3. Verify that State is listed as Loss Sync, the Mode is listed as Initiator, and the Connected
Device Type is listed as Free.
Figure 40 Port details
4. Replace the SFP. See "Replacing an SFP" (page 42).
5. In the HP 3PAR Management Console, verify that the SFP is successfully replaced. The replaced port State is listed as Ready, the Mode is listed as Target, and the Connected Device Type is listed as Host.
Figure 41 Port details
Replacing an SFP
1. After identifying the SFP that requires replacement, disconnect the cable and lift the retaining clip to carefully slide the SFP out of the slot.
2. Remove the replacement SFP module from its protective packaging.
3. Carefully slide the replacement SFP into the adapter until fully seated and close the retaining clip to secure it in place.
4. Place the failed SFP into the packaging for return to HP.
5. Reconnect the cable to the SFP module and verify that the link status LED is solid green.
Replacing a Drive Enclosure
CAUTION: A drive enclosure may be replaced while the StoreServ 7000 Storage is online or by scheduling an offline maintenance window. Contact HP Technical Support to schedule replacement of a drive enclosure while the storage system is online. The procedure for replacing a drive enclosure offline is described in the rest of this section.
CAUTION: Before removing a drive enclosure from the rack, remove each disk drive, label it with its slot number, and place each drive on a clean, ESD-safe surface. After completing the enclosure installation, reinstall the disk drives in their original slots.
CAUTION: Two people are required to remove the enclosure from the rack to prevent injury.
To replace an enclosure:
1. Power down the enclosure and disconnect all power cables.
2. Remove the drives from the enclosure, noting each drive's location in the enclosure.
3. Remove the bezels at the sides of the enclosure to access the screws.
4. Unscrew the M5 screws that mount the enclosure to the rack.
5. Using both hands, pull the enclosure from the rail shelves. Use the bottom lip as a guide and
the top to catch the enclosure.
6. Reinstall the enclosure. See "Installing the Enclosures" (page 91).
Replacing an I/O Module
CAUTION: To prevent overheating, the I/O module bay in the enclosure should not be left open for more than 6 minutes.
CAUTION: Storage systems operate using two I/O modules per drive enclosure and can
temporarily operate using one I/O module when removing the other I/O module for servicing.
Drive Enclosure I/O Module Numbering
Figure 42 I/O Module Numbering on HP M6710 (2U) and HP M6720 (4U)
Before you begin, verify the location of the I/O module in an enclosure:
1. Display the failed I/O Module by executing the showcage command:
cli% showcage
Id Name  LoopA Pos.A LoopB Pos.B Drives Temp  RevA RevB Model Side
 0 cage0 1:0:1     0 0:0:1     0      6 25-28 320c 320c DCN1  n/a
 1 cage1 1:0:1     2 0:0:1     2      6 25-29 320c 320c DCS1  n/a
 2 cage2 1:0:1     1 0:0:1     1      6 33-28 320c 320c DCS2  n/a
 3 cage3 1:0:1     0 -----     0      6 33-27 320c 320c DCS2  n/a

Typically, the dashes (-----) indicate that one of the interfaces has failed.
2. If required, execute the locatecage command to identify the drive enclosure (see the sketch below):
   a. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
   b. Execute the locatecage command.
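A minimal sketch, assuming the affected enclosure is the one reported as cage3 in the showcage output above (substitute the cage name from your own output); the command flashes the enclosure LEDs so the enclosure can be located in the rack:

cli% locatecage cage3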
To perform maintenance using CLI, access SPMAINT:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the following commands:
   •   showcage. A ----- indicates the location of the module in the enclosure. See the Name field in the output.
   •   locatecage cagex, where x is the number of the cage in the Name field.
cli% showcage
Id Name  LoopA Pos.A LoopB Pos.B Drives Temp  RevA RevB Model Side
 0 cage0 1:0:1     0 0:0:1     0      7 25-34 3202 3202 DCN1  n/a
 1 cage1 1:0:1     0 0:0:1     1      0 0-0   3202 3202 DCS1  n/a
 2 cage2 1:0:1     3 0:0:1     2      2 33-34 3202 3202 DCS2  n/a
 3 cage3 1:0:1     2 -----     3      2 33-33 3202 3202 DCS2  n/a
 4 cage4 1:0:1     1 0:0:1     0      2 34-34 3202 3202 DCS2  n/a
 6 cage6 1:0:2     2 0:0:2     1      6 33-35 3202 3202 DCS1  n/a
 7 cage7 1:0:2     1 0:0:2     2      6 34-34 3202 3202 DCS1  n/a
 8 cage8 1:0:2     0 0:0:2     0      6 35-36 3202 3202 DCS1  n/a
 9 cage9 1:0:2     3 0:0:2     0      8 34-48 220c 220c DCS1  n/a

3. The drive and I/O module fault LEDs flash amber with a one-second interval. Identify the enclosure location where the I/O module resides by verifying the LED number on the front of the enclosure.
4. Label and remove the SAS cables attached to the I/O module.
5. Replace the I/O module. See "Removing an I/O Module" (page 44) and "Installing an I/O Module" (page 45).
6. Reattach the SAS cables to the I/O module.
7. In the CLI, issue the showcage command to verify that the I/O module has been successfully replaced and the ----- is replaced with output:
cli% showcage
Id Name  LoopA Pos.A LoopB Pos.B Drives Temp  RevA RevB Model Side
 0 cage0 1:0:1     0 0:0:1     0      7 25-34 3202 3202 DCN1  n/a
 1 cage1 1:0:1     0 0:0:1     1      0 0-0   3202 3202 DCS1  n/a
 2 cage2 1:0:1     3 0:0:1     2      2 33-33 3202 3202 DCS2  n/a
 3 cage3 1:0:1     2 0:0:1     3      2 32-32 3202 3202 DCS2  n/a
 4 cage4 1:0:1     1 0:0:1     3      2 34-34 3202 3202 DCS2  n/a
 6 cage6 1:0:2     2 0:0:2     1      6 33-35 3202 3202 DCS1  n/a
 7 cage7 1:0:2     1 0:0:2     2      6 34-34 3202 3202 DCS1  n/a
 8 cage8 1:0:2     0 0:0:2     0      6 35-36 3202 3202 DCS1  n/a
 9 cage9 1:0:2     3 0:0:2     0      8 34-48 220c 220c DCS1  n/a
Removing an I/O Module
1. Validate the labeling and then remove the SAS cables. There can be one cable or two.
2. Grasp the module latch between thumb and forefinger and squeeze to release the latch (see Figure 43 (page 45)).
3. Pull the latch handles open, grip the handles on both sides of the module, remove it from the enclosure, and set it aside.
4. Place the removed I/O module on an ESD safe mat.
Figure 43 Removing an I/O module
Installing an I/O Module
1. Open the module latch and slide the module into the enclosure until it automatically engages (see Figure 44 (page 45)).
2. Once the module is in the enclosure, close the latch until it engages and clicks.
3. Pull back lightly on the handle to check seating.
4. Replace the SAS cables.
5. Follow the return instructions provided with the new component.
Figure 44 Installing an I/O module
6. Verify that the I/O module is successfully replaced by executing the showcage command:

cli% showcage
Id Name  LoopA Pos.A LoopB Pos.B Drives Temp  RevA RevB Model Side
 0 cage0 1:0:1     0 0:0:1     0      6 25-28 320c 320c DCN1  n/a
 1 cage1 1:0:1     2 0:0:1     2      6 25-29 320c 320c DCS1  n/a
 2 cage2 1:0:1     1 0:0:1     1      6 25-28 320c 320c DCS2  n/a
 3 cage3 1:0:1     0 0:0:1     0      6 25-27 320c 320c DCS2  n/a
Replacing a Power Cooling Module
The PCMs are located at the rear of the system on either side of an enclosure.
Figure 45 PCM Numbering for Node Enclosure (DCN1)
Figure 46 Drive Enclosure PCM Numbering
CAUTION: To prevent overheating, the Node PCM bay in the enclosure should not be left open for more than 6 minutes.
NOTE: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
1. Remove the replacement PCM from its packaging and place it on an ESD safe mat with the empty battery compartment facing up.
2. Slide the cord clamp off the cable tie by releasing the cable tie release tab and pulling the cord clamp. Place it on the ESD safe mat, ready to be placed onto the failed PCM.
Figure 47 Cord Clamp Cable Tie Release Tab
Before you begin, identify the failed PCM.
PCM Removal
CAUTION: Verify the PCM power switch is turned to the OFF position to disconnect power.
NOTE: Because they use a common power bus, some PCM LEDs may remain illuminated after
the PCM is powered off.
1. Loosen the cord clamp, release the cable tie tab, and slide the cord clamp off the cable tie.
2. Disconnect the power cable, keeping the cord clamp on the power cable.
3. Secure the power cable and cable clamp so that they will not be in the way when the PCM is removed.
4. Note the PCM orientation.
5. With thumb and forefinger, grasp and squeeze the latch to release the handle.
6. Rotate the PCM release handle and slide the PCM out of the enclosure.
7. Place the faulty PCM on the ESD safe mat next to the replacement PCM with the battery compartment facing up.
Node PCM Battery
The node PCM battery is enclosed within a node PCM. Node PCMs are located at the rear of the system, on either side of the nodes.
Node PCM Battery Removal
1. At the back of the faulty PCM, lift the battery handle to eject the battery pack.
2. Place the battery into the replacement PCM and push the handle down to install.
   NOTE: Check that the battery and handle are level with the surface of the PCM.
Node PCM Replacement
When replacing a gold- or silver-labeled PCM, ensure the new PCM color label matches the existing pair, or switch to a different pair. For 7400 4-node systems, all four PCM labels do not have to be the same color, but each pair of PCMs must have the same color label.
Before installing the PCM, verify the color label.
1. Rotate the PCM to the correct orientation.
2. Move the handle to the open position.
3. Slide the PCM into the enclosure and push until the insertion mechanism starts to engage (the
handle starts to rotate).
NOTE: Ensure that no cables get caught in the PCM insertion mechanism, especially the thin Fibre Channel cables.
4. Rotate the handle to fully seat the PCM into the enclosure; you will hear a click as the latch engages.
5. Once inserted, pull back lightly on the PCM to ensure that it is properly engaged.
6. Reconnect the power cable and slide the cable clamp onto the cable tie.
7. Tighten the cord clamp.
8. Turn the PCM on and check that the power LED is green (see Table 3 (page 9)).
9. Slide the cord clamp from the replacement PCM onto the cable tie of the failed PCM.
10. Follow the return instructions provided with the new component.
11. Verify that the PCM has been successfully replaced (see “PCM and Battery Verification” (page
73)).
NOTE: For a failed battery in a PCM, see “Replacing a Battery inside a Power Cooling Module”
(page 49).
To perform maintenance using CLI, access SPMAINT:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the shownode -ps command:

cli% shownode -ps
Node PS -Assy_Part- --Assy_Serial-- ACState DCState PSState
 0,1  0 0945768-09  PMW0945768J103N Failed  Failed  OK
 0,1  1 0945768-09  PMW0945768J102A OK      OK      OK

3. Replace the PCM. See "Removing a Power Cooling Module" (page 48) and "Installing a Power Cooling Module" (page 51).
4. In the CLI, issue the shownode -ps command to verify that the PCM has been successfully replaced:

cli% shownode -ps
Node PS -Assy_Part- --Assy_Serial-- ACState DCState PSState
 0,1  0 0945768-09  PMW0945768J102U OK      OK      OK
 0,1  1 0945768-09  PMW0945768J102A OK      OK      OK
Removing a Power Cooling Module
CAUTION: Ensure that the PCM power switch is turned to the OFF position to disconnect power.
1. Remove the power cable.
2. With thumb and forefinger, grasp and squeeze the latch to release the handle.
3. Slide the handle away from the PCM to open it.
4. Grab the handle and pull the PCM from the enclosure. Set the PCM aside.
Figure 48 Removing a PCM
Replacing a Battery inside a Power Cooling Module
The Power Cooling Module (PCM) is an integrated power supply, battery, and cooling fan. You
can replace a battery on the 764W PCM without replacing the entire PCM.
WARNING! If both batteries in the same node enclosure failed, do not attempt to replace both
at the same time.
Before you begin, verify that at least one PCM battery in each node enclosure is functional and
identify which battery needs to be replaced.
To perform maintenance using CLI, access SPMAINT:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ and issue
the following commands:
   •   showbattery to verify that the battery has failed:

cli% showbattery
Node PS Bat Serial          State  ChrgLvl(%) ExpDate Expired Testing
 0,1  0   0 BCC0974242G00C7 Failed        106 n/a     No      No
 0,1  1   0 BCC0974242G006J OK            104 n/a     No      No

   •   checkhealth -svc -detail node:

cli% checkhealth -svc -detail node
Checking node
Component ---Description---                                     Qty
Node      Power supplies with failed or degraded AC               2
Node      Power supplies with failed or degraded DC               2
Node      Power supplies with failed or degraded batteries        2
Node      Number of node environmental factors out of tolerance   8
Node      Batteries not tested within 30 days                     2

Component -Identifier- --Description--
Node      node:0       Power supply 0 AC state is Failed
Node      node:0       Power supply 0 DC state is Failed
Node      node:0       Power supply 0 battery is Failed
Node      node:1       Power supply 0 AC state is Failed
Node      node:1       Power supply 0 DC state is Failed
Node      node:1       Power supply 0 battery is Failed
Node      node:0       Environmental factor PCM is Unrecoverable
NOTE: Because each battery is a backup for both nodes, node 0 and 1 both report a
problem with a single battery. The Qty appears as 2 in output because two nodes are
reporting the problem. Battery 0 for node 0 is in the left PCM, and battery 0 for node 1
is in the right side PCM (when looking at the node enclosure from the rear).
2. Remove the PCM. See "Removing a Power Cooling Module" (page 48).
   a. At the back of the PCM, lift the battery handle to eject the battery pack.
Figure 49 Removing the PCM Battery
   b. Remove the replacement PCM battery pack from its packaging.
   c. Lift the battery pack handle into the upright position, then place the battery pack back into the PCM and push down the handle to install it.
Figure 50 Installing the PCM Battery
3. To reinstall the PCM, see "Installing a Power Cooling Module" (page 51).
4. In the CLI, issue the following commands:
   •   showbattery to confirm the battery is functional and the serial ID has changed:

cli% showbattery
Node PS Bat Assy_Serial     State ChrgLvl(%) ExpDate Expired Testing
 0,1  0   0 BCC0974242G00CH OK           104 n/a     No      No
 0,1  1   0 BCC0974242G006J OK           106 n/a     No      No

   •   checkhealth -svc -detail node to verify the State as OK.
Installing a Power Cooling Module
1. With the handle in the open position, slide the module into the enclosure.
2. Close the PCM handle. You will hear a click as the latch engages.
Figure 51 Installing a PCM
3. Reconnect the power cable.
4. Secure the cord restraints.
Controller Node Internal Component Repair
CAUTION:
•   Do not replace cold-swappable components while power is applied to the product. Power off the device and then disconnect all AC power cords.
•   Power off the equipment and disconnect power to all AC power cords before removing any access covers for cold-swappable areas.
•   When replacing hot-swappable components, allow approximately 30 seconds between removing the failed component and installing the replacement. This time is needed to ensure that configuration data about the removed component is cleared from the system registry. To prevent overheating due to an empty enclosure or bay, use a blank or leave the slightly disengaged component in the enclosure until the replacement can be made. Drives must be replaced within 10 minutes, nodes within 30 minutes, and all other parts within 6 minutes.
•   Before replacing a hot-swappable component, ensure that steps have been taken to prevent loss of data.
NOTE: After servicing the controller nodes and cages, use the upgradecage cage<n> command
to ensure all the cages, along with the associated firmware, are operating with the correct version
of the software.
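A minimal sketch of that step, assuming the enclosure reported by showcage is cage0 (substitute the cage name from your own showcage output); the RevA and RevB columns of showcage can be compared before and after to confirm the interface firmware revision:

cli% upgradecage cage0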
The following node internal component procedures are very complicated and may result in loss of
data. Before performing these procedures, remove the node cover, if appropriate.
Figure 52 Controller Node Internal Components
1. Node drive platform
2. Node drive and cable
3. PCIe riser card
4. PCIe adapter assembly
5. PCIe riser slot
6. Clock battery
7. Control Cache DIMM
8. Data Cache DIMM (DC 0:0)
9. Data Cache DIMM (DC 1:0)
NOTE: Items 1 and 2 in the list above are regarded as one component, called the Node Drive Assembly.
NOTE: Before beginning any internal node component procedure, the node must be removed
from the storage system and the node cover removed.
Node Cover Removal and Replacement
Once a controller node has been removed from the storage system, you can remove the cover and
access the internal node components.
To remove the node cover, unscrew the captive screws and lift the cover from the node. You may
need a screwdriver if the node cover screw is too tight.
To replace the node cover, align the controller node cover with the pegs in their grooves, then slide the cover until it is properly seated and tighten the captive screws on the node cover.
Controller Node (Node) Clock Battery Replacement Procedure
CAUTION: Alloy gray-colored latches on components such as the node mean the component is
warm-swappable. HP recommends shutting down the node (with the enclosure power remaining
on) before removing this component.
NOTE: The clock inside the node uses a 3-V lithium coin battery. The lithium coin battery may
explode if it is incorrectly installed in the node. Replace the clock battery only with a battery
supplied by HP. Do not use non-HP supplied batteries. Dispose of used batteries according to the
manufacturer’s instructions.
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for more than 30 minutes.
NOTE: Be sure to use an electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
Unpack the replacement clock battery and place on an ESD safe mat.
Node Identification and Shutdown
Before you begin, use HP 3PAR CLI to halt the node:
NOTE: If the failed node is already halted, it is not necessary to shut down (halt) the node because it is not part of the cluster. The failed component should be identified from the failure notification.
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the shownode command to see if the node is listed as Degraded or is missing from the output.
   NOTE: If the node's state is Degraded, it must be shut down to be serviced. If the node is missing from the output, it may already be shut down and ready to be serviced; in this case, proceed to Step 6.
In the following example of a 7200, both nodes are present:

cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
   1 1699808-1 Degraded No     Yes       Off          GreenBlnk    8192    4096          100
In the following 7200 example, node 1 is missing:

cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
NOTE: If more than one node is down at the same time, escalate to the next level of support.
3. Type exit to return to the 3PAR Service Processor Menu.
4. Select option Halt a StoreServ cluster/node, select the desired node, and confirm all prompts to halt the node.
5. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
6. If required, execute the locatesys command to identify the system:
   a. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
   b. Execute the locatesys command.
      NOTE: All nodes in this system flash, except the failed node, which displays a solid blue LED.
Node Removal
1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is blue, indicating that the node has been halted.
   CAUTION: The system will not fail if the node is properly halted before removal, but data loss may occur if the replacement procedure is not followed correctly.
   NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
Figure 53 Verifying Node LEDs Status
NOTE: Nodes 1 and 3 are rotated with reference to nodes 0 and 2.
2. Mark all cables on the failed node to facilitate reconnecting later.
3. At the rear of the rack, remove the cables from the failed node.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
6. Set the node on the ESD safe mat for servicing.
Node Clock Battery Replacement
1. Locate the Clock Battery.
2. Remove the Clock Battery by pulling aside the retainer clip and pulling the battery up from the battery holder.
   NOTE: Do not touch internal node components when removing or inserting the battery.
3. Insert the replacement 3-V lithium coin battery into the Clock Battery slot with the positive side facing the retaining clip.
4. Replace the node cover.
Node Replacement
1. Ensure that the gray node rod is in the extracted position, pulled out of the component.
2. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
   CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated by 180°.
3. Keep sliding the node in until it halts against the insertion mechanism.
4. Reconnect the cables to the node.
5. Push the extended gray node rod into the node to ensure the node is correctly installed.
   CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push it back in to ensure that the node is fully seated.
   NOTE: Once inserted, the node should power up and rejoin the cluster; this may take up to 5 minutes.
6. Verify that the node LED is blinking green in synchronization with the other nodes, indicating that the node has joined the cluster.
7. Follow the return or disposal instructions provided with the new component.
Node and Clock Battery Verification
Verify that the node has been successfully replaced and the replacement Clock Battery is working.
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the checkhealth command to verify that the state of the system is OK:
cli% checkhealth
Checking alert
Checking cabling
Checking cage
Checking dar
Checking date
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
System is healthy
3. Issue the shownode command to verify that the state of all nodes is OK.
NOTE: Depending on the serviced component, the node may go through Node Rescue,
which can take up to 10 minutes.
NOTE: The LED status for the replaced node may indicate Green and could take up to 3
minutes to change to Green Blinking.
cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
   1 1699808-1 OK       No     Yes       Off          GreenBlnk    8192    4096          100
4. Issue the showdate command to confirm that the clock setting is correct:

cli% showdate
Node Date
   0 2012-11-21 08:36:35 PDT (America/Los_Angeles)
   1 2012-11-21 08:36:35 PDT (America/Los_Angeles)
Controller Node (Node) DIMM Replacement Procedure
CAUTION: Alloy gray-colored latches on components such as the node mean the component is
warm-swappable. HP recommends shutting down the node (with the enclosure power remaining
on) before removing this component.
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for
more than 30 minutes.
NOTE: Use an electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
Unpack the replacement DIMM and place on an ESD safe mat.
Node and DIMM Identification and Node Shutdown
Before you begin, use HP 3PAR CLI to identify the failed DIMM and then halt the node.
NOTE: If the failed node is already halted, it is not necessary to shutdown (halt) the node because
it is not part of the cluster. The failed DIMM should be identified from the failure notification.
Step 1 through Step 4 assist in the identification of the part to be ordered, if this information has
not already been obtained from the notification.
NOTE: Even when a DIMM is reported as failed, it still displays configuration information.
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the shownode command to see if the node is listed as Degraded or is missing from the output.
   NOTE: If the node's state is Degraded, it must be shut down to be serviced. If the node is missing from the output, it may already be shut down and ready to be serviced; in this case, proceed to Step 6.
In the following example of a 7200, both nodes are present:

cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
   1 1699808-1 Degraded No     Yes       Off          GreenBlnk    8192    4096          100
In the following 7200 example, node 1 is missing:
cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
NOTE: If more than one node is down at the same time, escalate to the next level of support.
3. Issue the shownode -mem command to display the usage (control or data cache) and manufacturer (sometimes this cannot be displayed):

cli% shownode -mem
Node Riser Slot SlotID -Name- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)
   0 n/a      0 J0155  DIMM0  Control DDR3_SDRAM -                 B1F55894 CL5.0/10.0     8192
   0 n/a      0 J0300  DIMM0  Data    DDR2_SDRAM Micron Technology DD9CCF19 CL4.0/6.0      2048
   0 n/a      1 J0301  DIMM1  Data    DDR2_SDRAM Micron Technology DD9CCF1A CL4.0/6.0      2048
   1 n/a      0 J0155  DIMM0  Control DDR3_SDRAM -                 B1F55897 CL5.0/10.0     8192
   1 n/a      0 J0300  DIMM0  Data    DDR2_SDRAM Micron Technology DD9CCF1C CL4.0/6.0      2048
   1 n/a      1 J0301  DIMM1  Data    DDR2_SDRAM Micron Technology DD9CCF1B CL4.0/6.0      2048
4. Issue the shownode -i command to display the part number. The shownode -i command displays node inventory information; scroll down to view the physical memory information.

cli% shownode -i
------------------------Nodes------------------------.
----------------------PCI Cards----------------------.
-------------------------CPUs------------------------.
-------------------Internal Drives-------------------.
----------------------------------Physical Memory----------------------------------
Node Riser Slot SlotID Name  Type       --Manufacturer--- ----PartNumber---- -Serial- -Rev- Size(MB)
   0 n/a      0 J0155  DIMM0 DDR3_SDRAM -                 36KDYS1G72PZ-1G4M1 B1F55894 4D31      8192
   0 n/a      0 J0300  DIMM0 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF19 0100      2048
   0 n/a      1 J0301  DIMM1 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF1A 0100      2048
   1 n/a      0 J0155  DIMM0 DDR3_SDRAM -                 36KDYS1G72PZ-1G4M1 B1F55897 4D31      8192
   1 n/a      0 J0300  DIMM0 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF1C 0100      2048
   1 n/a      1 J0301  DIMM1 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF1B 0100      2048
--------------------Power Supplies--------------------
5. Type exit to return to the 3PAR Service Processor Menu.
6. Select option 4 StoreServ Product Maintenance and then select the desired system.
7. Select option Halt a StoreServ cluster/node, select the desired node, and confirm all prompts to halt the node.
8. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
9. If required, execute the locatesys command to identify the system:
   a. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
   b. Execute the locatesys command.
      NOTE: All nodes in this system flash, except the failed node, which displays a solid blue LED.
Node Removal
1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is blue, indicating that the node has been halted.
   CAUTION: The system does not fail if the node is properly halted before removal, but data loss may occur if the replacement procedure is not followed correctly.
   NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
Figure 54 Verifying Node LEDs Status
NOTE: Nodes 1 and 3 are rotated with reference to nodes 0 and 2.
2. Mark all cables on the failed node to facilitate reconnecting later.
3. At the rear of the rack, remove the cables from the failed node.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
6. Set the node on the ESD safe mat for servicing.
DIMM Replacement
1. Lift the Node Drive Assembly, move it to the side, and place it on the ESD safe mat.
2. Physically identify the failed DIMM in the node. The Control Cache (CC) and Data Cache (DC) DIMMs can be identified by locating the appropriate silk-screening on the board.
3. With your thumb or finger, press outward on the two tabs on the sides of the DIMM to remove the failed DIMM, and place it on the ESD safe mat.
4. Align the key and insert the DIMM by pushing downward on the edge of the DIMM until the tabs on both sides snap into place.
5. Replace the Node Drive Assembly.
6. Replace the node cover.
Node Replacement
1. Ensure that the gray node rod is in the extracted position, pulled out of the component.
2. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
   CAUTION: Ensure that the node is correctly oriented; alternate nodes are rotated by 180°.
3. Keep sliding the node in until it halts against the insertion mechanism.
4. Push the extended gray node rod into the node to ensure the node is correctly installed.
5. Reconnect the cables to the node.
   CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the gray node rod and push it back in to ensure that the node is fully seated.
   NOTE: Once inserted, the node should power up and rejoin the cluster, which may take up to 5 minutes.
6. Verify that the node LED is blinking green in synchronization with the other nodes, indicating that the node has joined the cluster.
7. Follow the return instructions provided with the new component.
Node and DIMM Verification
Verify that the node has been successfully replaced and the replacement DIMM is recognized:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the checkhealth command to verify that the state of the system is OK.
cli% checkhealth
Checking alert
Checking cabling
Checking cage
Checking dar
Checking date
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
System is healthy
3. Issue the shownode command to verify that the state of all nodes is OK.
NOTE: Depending on the serviced component, the node may go through Node Rescue,
which can take up to 10 minutes.
NOTE: The LED status for the replaced node may indicate green and could take up to 3
minutes to change to blinking green.
cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
   1 1699808-1 OK       No     Yes       Off          GreenBlnk    8192    4096          100
4. Issue the shownode -i command to display the memory.
   NOTE: The shownode -i command displays node inventory information; scroll down to view the physical memory information.

cli% shownode -i
------------------------Nodes------------------------.
----------------------PCI Cards----------------------.
-------------------------CPUs------------------------.
-------------------Internal Drives-------------------.
----------------------------------Physical Memory----------------------------------
Node Riser Slot SlotID Name  Type       --Manufacturer--- ----PartNumber---- -Serial- -Rev- Size(MB)
   0 n/a      0 J0155  DIMM0 DDR3_SDRAM -                 36KDYS1G72PZ-1G4M1 B1F55894 4D31      8192
   0 n/a      0 J0300  DIMM0 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF19 0100      2048
   0 n/a      1 J0301  DIMM1 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF1A 0100      2048
   1 n/a      0 J0155  DIMM0 DDR3_SDRAM -                 36KDYS1G72PZ-1G4M1 B1F55897 4D31      8192
   1 n/a      0 J0300  DIMM0 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF1C 0100      2048
   1 n/a      1 J0301  DIMM1 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF1B 0100      2048
--------------------Power Supplies--------------------
Controller Node (Node) PCIe Adapter Procedure
CAUTION: Alloy gray-colored latches on components such as the node mean the component is
warm-swappable. HP recommends shutting down the node (with the enclosure power remaining
on) before removing this component.
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for
more than 30 minutes.
NOTE: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
Unpack the replacement PCIe Adapter and place it on an ESD safe mat.
PCIe Adapter Identification and Node Shutdown
Before you begin, use the HP 3PAR CLI to identify the failed PCIe Adapter and then halt the node.
NOTE: If the failed node is already halted, it is not necessary to shut down (halt) the node because it is not part of the cluster. The failed PCIe adapter is identified by the failure notification.
Node Removal
1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is blue, indicating that the node has been halted.
   CAUTION: The system does not fail if the node is properly halted before removal, but data loss may occur if the replacement procedure is not followed correctly.
   NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
Figure 55 Verifying Node LEDs Status
NOTE: Nodes 1 and 3 are rotated with respect to nodes 0 and 2.
2. Remove the node cover.
3. Ensure that all cables on the failed node are marked to facilitate reconnecting later.
4. At the rear of the rack, remove the cables from the failed node.
5. Pull the node rod to remove the node from the enclosure.
6. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
7. Set the node on the ESD safe mat for servicing.
PCIe Adapter Installation
1. Remove the node cover.
2. Remove the PCIe Adapter assembly and riser card:
   NOTE: The CNA Adapter is half-height and will not be held in place by the blue touch point tab.
   a. Press down on the blue touch point tab to release the assembly from the node.
   b. Grasp the blue touch point on the riser card and pull the assembly up and away from the node for removal.
   c. Pull the riser card to the side to remove the riser card from the assembly.
3. Insert the replacement PCIe Adapter into the riser card.
4. To replace the Adapter, align the recesses on the Adapter plate with the pins on the node chassis. This should align the riser card with the slot on the node. Snap the PCIe Adapter assembly into the node.
5. Replace the node cover.
Node Installation
CAUTION: Alloy gray-colored latches on components such as the node mean the component is
warm-swappable. HP recommends shutting down the node (with the enclosure power remaining
on) before removing this component.
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for more than 30 minutes.
1. Ensure that the gray node rod is in the extracted position, pulled out of the component.
2. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
   CAUTION: Ensure that the node is correctly oriented; alternate nodes are rotated by 180°.
3. Keep sliding the node in until the node halts against the insertion mechanism.
4. Reconnect the cables to the node.
5. Push the extended gray node rod into the node to ensure the node is correctly installed.
   CAUTION: If the blue LED is flashing, it indicates that the node is not properly seated. Pull out the gray node rod and push it back in to ensure that the node is fully seated.
   NOTE: Once inserted, the node should power up and rejoin the cluster; this may take up to 5 minutes.
6. Verify that the node LED is blinking green in synchronization with the other nodes, indicating that the node has joined the cluster.
7. Follow the return or disposal instructions provided with the new component.
8. Verify that the node has been successfully replaced and the replacement PCIe Adapter is recognized.
Controller Node (Node) Drive Assembly Replacement Procedure
The Node Drive Assembly consists of a plastic tray, a circuit board, and a cable with a connector.
CAUTION: Alloy gray-colored latches on components such as the node mean the component is
warm-swappable. HP recommends shutting down the node (with the enclosure power remaining
on) before removing this component.
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for more than 30 minutes.
NOTE: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
Remove the replacement Node Drive Assembly from its protective packaging and place on an ESD
safe mat.
Node Identification and Shutdown
Before you begin, use HP 3PAR CLI to identify and then halt the node.
NOTE: If the failed node is already halted, it is not necessary to shut down (halt) the node because it is not part of the cluster. The failed Node Drive Assembly should be identified from the failure notification.
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the shownode command to see if the node is listed as Degraded or is missing from the output.
   NOTE: If the node's state is Degraded, it must be shut down to be serviced. If the node is missing from the output, it may already be shut down and ready to be serviced; in this case, proceed to Step 6.
   •   In this example of a 7200, both nodes are present:

cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
   1 1699808-1 Degraded No     Yes       Off          GreenBlnk    8192    4096          100

   •   In this 7200 example, node 1 is missing:

cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100

   NOTE: If more than one node is down at the same time, escalate to the next level of support.
3. Enter exit to return to the 3PAR Service Processor Menu.
4. Select option 4 StoreServ Product Maintenance and then select the desired system.
5. Select option Halt a StoreServ cluster/node, select the desired node, and confirm all prompts to halt the node.
6. If required, execute the locatenode command to identify the system:
   a. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
   b. Execute the locatenode command.
      NOTE: This flashes all nodes in this system except the failed node, which will have a solid blue LED.
Node Removal
1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is blue, indicating that the node has been halted.
   CAUTION: The system does not fail if the node is properly halted before removal, but data loss may occur if the replacement procedure is not followed correctly.
   NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
Figure 56 Verifying Node LEDs Status
NOTE: Nodes 1 and 3 are rotated with respect to nodes 0 and 2.
2. Ensure that all cables on the failed node are marked to facilitate reconnecting later.
3. At the rear of the rack, remove the cables from the failed node.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
6. Set the node on the ESD safe mat for servicing.
Node Drive Assembly Replacement
1. Remove the node cover.
2. Lift the failed Node Drive Assembly from the node and detach the Node Drive Assembly cable.
3. Place the failed Node Drive Assembly on the ESD safe mat.
4. Attach the Node Drive Assembly cable to the replacement node drive.
5. Place the Node Drive Assembly into the node.
   NOTE: There are four plastic guide pins that hold the node disk in place. To correctly seat the node disk, push the node disk down on the guide pins. Failure to locate the guide pins correctly may result in the inability to replace the node cover.
6. Replace the node cover.
Node Replacement
1. Ensure that the gray node rod is in the extracted position, pulled out of the component.
2. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
   CAUTION: Ensure that the node is correctly oriented; alternate nodes are rotated by 180°.
3. Keep sliding the node in until it halts against the insertion mechanism.
4. Reconnect the cables to the node.
   CAUTION: Do not proceed until the node being replaced has an Ethernet cable connected to the MGMT port. Without an Ethernet cable, node rescue cannot complete and the replacement node will not be able to rejoin the cluster.
5. Push the extended gray node rod into the node to ensure the node is correctly installed.
   CAUTION: If the blue LED is flashing, the node is not properly seated. Pull out the gray node rod and push it back in to ensure that the node is fully seated.
   NOTE: Once inserted, the node should power up and go through the Node Rescue procedure before joining the cluster; this may take up to 10 minutes.
6. Verify that the node LED is blinking green in synchronization with the other nodes, indicating that the node has joined the cluster.
7. Follow the return and disposal instructions provided with the new component.
Node Verification
Verify that the node is operational and the Node Drive Assembly has been successfully replaced:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the checkhealth command to verify that the state of the system is OK:
cli% checkhealth
Checking alert
Checking cabling
Checking cage
Checking date
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
System is healthy
3. Issue the shownode command to verify that the state of all nodes is OK.
NOTE: Depending on the serviced component, the node may go through Node Rescue,
which can take up to 10 minutes.
NOTE: The LED status for the replaced node may indicate green and can take up to 3 minutes
to change to green blinking.
cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
   1 1699808-1 OK       No     Yes       Off          GreenBlnk    8192    4096          100
CLI Procedures
Node Identification and Preparation
To perform maintenance using CLI, access SPMAINT:
NOTE: If the failed node is already halted, it is not necessary to shutdown the node because it
is not part of the cluster.
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the shownode command to see if the node is listed as Degraded or is missing from the output.
   NOTE: If the node's state is Degraded, it will need to be shut down to be serviced. If the node is missing from the output, it may already be shut down and ready to be serviced; in this case, proceed to Step 6.
In the following example of a 7200, both nodes are present:

cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
   1 1699808-1 Degraded No     Yes       Off          GreenBlnk    8192    4096          100
In the following 7200 example, node 1 is missing:

cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
NOTE: If more than one node is down at the same time, contact your authorized service provider.
3. Type exit to return to the 3PAR Service Processor Menu.
4. Select option 4 StoreServ Product Maintenance, then select the desired system.
5. Select option Halt a StoreServ cluster/node, then select the desired node and confirm all prompts to halt the node.
6. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
7. Execute the command locatesys -t XX, where XX is an appropriate number of seconds to allow service personnel to view the LED status of the system (see the sketch below). All drives and nodes in this system flash, except the failed node, which has a solid blue LED.
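A minimal sketch, assuming a 90-second locate window is long enough for the engineer to reach the rack (substitute any appropriate number of seconds):

cli% locatesys -t 90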
Node Verification
Verify that the node has successfully been replaced:
1. Select the button to return to the 3PAR Service Processor Menu.
2. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
3. Issue the checkhealth command to verify that the state of all nodes is OK.
NOTE: Depending on the serviced component, the node may go through Node Rescue,
which can take up to 10 minutes.
cli% checkhealth
Checking alert
Checking cabling
Checking cage
Checking dar
Checking date
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
System is healthy
4. Issue the shownode command to verify that the state of all nodes is OK.
NOTE: Depending on the serviced component, the node may go through Node Rescue,
which can take up to 10 minutes.
NOTE: The LED status for the replaced node may indicate Green and could take up to 3
minutes to change to Green Blinking.
cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
   1 1699808-1 OK       No     Yes       Off          GreenBlnk    8192    4096          100
The Startnoderescue Command
If a node has not joined the cluster within 10 minutes, the Authorized Service Provider (ASP) should execute the startnoderescue command. If a node is rescued but still fails to rejoin the cluster, the node should be replaced.
startnoderescue help text
startnoderescue - Starts a node rescue.
SYNTAX
startnoderescue -node <node>
DESCRIPTION
Initiates a node rescue, which initializes the internal node disk of the
specified node to match the contents of the other node disks. The copy is
done over the network, so the node to be rescued must have an ethernet
connection. It will automatically select a valid unused link local
address. Progress is reported as a task.
AUTHORITY
Super, Service
OPTIONS
None.
SPECIFIERS
<node>
Specifies the node to be rescued. This node must be physically present
in the system and powered on, but not part of the cluster.
NOTES
On systems other than T and F class, node rescue will automatically be
started when a blank node disk is inserted into a node. The
startnoderescue command only needs to be manually issued if the node rescue
must be redone on a disk that is not blank. For T and F class systems,
startnoderescue must always be issued to perform a node rescue.
EXAMPLES
The following example shows starting a node rescue of node 2.
cli% startnoderescue -node 2
Node rescue from node 0 to node 2 started.
cli% showtask
Id Type        Name          Status Phase Step -------StartTime------- -FinishTime- -Priority- ---User---
96 node_rescue node_2_rescue active   1/1  0/1 2012-06-15 18:19:38 PDT n/a                     sys:3parsys
Node and PCIe Adapter Identification and Preparation
To perform maintenance using CLI, access SPMAINT on SPOCC.
NOTE: If the failed node is already halted, it is not necessary to shut down the node because it is not part of the cluster.
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the shownode -pci command to display adapter information:
cli% shownode -pci
Node Slot Type -Manufacturer- -Model-- --Serial-- -Rev- Firmware
   0    0 SAS  LSI            9205-8e  Onboard    01    11.00.00.00
   0    1 FC   EMULEX         LPe12002 Onboard    03    2.01.X.14
   0    2 FC   EMULEX         LPe12004 5CF223004R 03    2.01.X.14
   0    3 Eth  Intel          e1000e   Onboard    n/a   1.3.10-k2
   1    0 SAS  LSI            9205-8e  Onboard    01    11.00.00.00
   1    1 FC   EMULEX         LPe12002 Onboard    03    2.01.X.14
   1    2 FC   EMULEX         LPe12004 5CF2230036 03    2.01.X.14
   1    3 Eth  Intel          e1000e   Onboard    n/a   1.3.10-k2

Using this output, verify that the replacement card manufacturer and model are the same as that currently installed in the slot.
3. Issue the shownode command to see if the node is listed as Degraded or is missing from the output.
   NOTE: If the node's state is Degraded, it must be shut down to be serviced. If the node is missing from the output, it may already be shut down and ready to be serviced; in that case, proceed to Step 6.
   •   In the following 7200 example, both nodes are present:

cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
   1 1699808-1 Degraded No     Yes       Off          GreenBlnk    8192    4096          100

   •   In this 7200 example, node 1 is missing:

cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
NOTE: If more than one node is down at the same time, escalate to the next level of support.
4. Type exit to return to the 3PAR Service Processor Menu.
5. Select option 4 StoreServ Product Maintenance and then select the desired system.
6. Select option Halt a StoreServ cluster/node, select the desired node, and confirm all prompts to halt the node.
7. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
8. If required, execute the locatesys command to identify the system.
   NOTE: This flashes all nodes in this system except the failed node, which has a solid blue LED.
Node and PCIe Adapter Verification
Verify that the node is operational and the PCIe Adapter has been successfully replaced:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the checkhealth command to verify that the state of the system is OK:
cli% checkhealth
Checking alert
Checking cabling
Checking cage
Checking dar
Checking date
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
System is healthy
3. Issue the shownode command to verify that the state of all nodes is OK.
NOTE: Depending on the serviced component, the node may go through Node Rescue,
which can take up to 10 minutes.
NOTE: The LED status for the replaced node may indicate green and could take up to 3
minutes to change to green blinking.
cli% shownode
                                                                Control    Data        Cache
Node --Name--- -State-- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699808-0 OK       Yes    Yes       Off          GreenBlnk    8192    4096          100
   1 1699808-1 OK       No     Yes       Off          GreenBlnk    8192    4096          100
4. Issue the shownode -pci command to verify that all PCIe Adapters are operational:

cli% shownode -pci
Node Slot Type -Manufacturer- -Model-- --Serial-- -Rev- Firmware
   0    0 SAS  LSI            9205-8e  Onboard    01    11.00.00.00
   0    1 FC   EMULEX         LPe12002 Onboard    03    2.01.X.14
   0    2 FC   EMULEX         LPe12004 5CF223004R 03    2.01.X.14
   0    3 Eth  Intel          e1000e   Onboard    n/a   1.3.10-k2
   1    0 SAS  LSI            9205-8e  Onboard    01    11.00.00.00
   1    1 FC   EMULEX         LPe12002 Onboard    03    2.01.X.14
   1    2 FC   EMULEX         LPe12004 5CF2230036 03    2.01.X.14
   1    3 Eth  Intel          e1000e   Onboard    n/a   1.3.10-k2
Controller Node (Node) PCIe Adapter Riser Card Replacement Procedure
CAUTION: Alloy gray-colored latches on components such as the node mean the component is
warm-swappable. HP recommends shutting down the node (with the enclosure power remaining
on) before removing this component.
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for more than 30 minutes.
NOTE: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
Unpack the replacement PCIe Adapter Riser Card and place it on an ESD safe mat.
PCIe Adapter Identification and Node Shutdown
Before you begin, use the HP 3PAR CLI to identify the failed PCIe Adapter and then halt the node.
NOTE: The PCIe Adapter Riser Card does not have active components, so it is not displayed in any output; its failure appears as a failed PCIe Adapter.
NOTE: If the failed node is already halted, it is not necessary to shutdown (halt) the node because
it is not part of the cluster.
Node Removal
1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green and the Node UID LED is blue, indicating that the node has been halted.
   CAUTION: The system does not fail if the node is properly halted before removal, but data loss may occur if the replacement procedure is not followed correctly.
   NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
Figure 57 Verifying Node LEDs Status
NOTE: Nodes 1 and 3 are rotated with respect to nodes 0 and 2.
2. Ensure that all cables on the failed node are marked to facilitate reconnecting later.
3. At the rear of the rack, remove the cables from the failed node.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
6. Set the node on the ESD safe mat for servicing.
PCIe Adapter Riser Card Replacement
1.
2.
Remove the node cover.
Remove the PCIe Adapter assembly and riser card:
a. Press down on the blue touch point tab to release the assembly from the node.
NOTE: The PCIe CNA Adapter is half-height; it is not secured by this tab.
b. Grasp the blue touch point on the riser card and pull the assembly up and away from the node for removal.
c. Pull the riser card to the side to remove it from the assembly.
3. Insert the PCIe Adapter into the replacement riser card.
4. To replace the Adapter, align the recesses on the Adapter plate with the pins on the Node chassis. This should align the riser card with the slot on the node. Snap the PCIe Adapter assembly into the node.
5. Replace the node cover.
Node Replacement
1. Ensure that the gray node rod is in the extracted position, pulled out of the component.
2. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned with the grooves in the slot.
CAUTION: Ensure that the node is correctly oriented; alternate nodes are rotated by 180°.
3. Keep sliding the node in until it halts against the insertion mechanism.
4. Reconnect cables to the node.
CAUTION: If the blue LED is flashing, the node is not properly seated. Pull out the gray node
rod and push back in to ensure that the node is fully seated.
5. Push the extended gray node rod into the node to ensure the node is correctly installed.
NOTE: Once inserted, the node should power up and rejoin the cluster; it may take up to 5 minutes.
6. Verify that the node LED is blinking green in synchronization with other nodes, indicating that the node has joined the cluster.
7. Follow the return or disposal instructions provided with the new component.
8. Verify that the node has been successfully replaced and the PCIe Adapter is recognized (see "Node and PCIe Adapter Verification" (page 68)).
Node PCM Identification
Identify failed power supplies by issuing the shownode -ps command:
cli% shownode -ps
Node PS -Assy_Part- --Assy_Serial-- ACState DCState PSState
 0,1  0 0945768-09  PMW0945768J103N Failed  Failed  OK
 0,1  1 0945768-09  PMW0945768J102A OK      OK      OK
One or more of the following could be in a Failed state:
• ACState
• DCState
• PSState
Identify failed batteries by issuing the showbattery command:
cli% showbattery
Node PS Bat Serial          State  ChrgLvl(%) ExpDate Expired Testing
 0,1  0   0 BCC0974242G00C7 Failed        106 n/a     No      No
 0,1  1   0 BCC0974242G006J OK            104 n/a     No      No
Drive PCM Identification
Normally, a failed PCM displays an invalid status; if that is the case, close the current window and proceed to "PCM Removal". If no invalid status is displayed, use one of the following procedures:
1. If the cage has been called out in a notification, issue the showcage -d cageX command, where cageX is the name of the cage indicated in the notification.
cli% showcage -d cage0
Id Name  LoopA Pos.A LoopB Pos.B Drives Temp  RevA RevB Model Side
 0 cage0 1:0:1     0 0:0:1     0      6 26-27 320c 320c DCN1  n/a

-----------Cage detail info for cage0 -----------
Position: ---

Interface Board Info Card0            Card1
Firmware_status      Current          Current
Product_Rev          320c             320c
State(self,partner)  OK,OK            OK,OK
VendorId,ProductId   HP,DCN1          HP,DCN1
Master_CPU           Yes              No
SAS_Addr             50050CC10230567E 50050CC10230567E
Link_Speed(DP1,DP2)  6.0Gbps,6.0Gbps  6.0Gbps,6.0Gbps

PS  PSState ACState DCState Fan State Fan0_Speed Fan1_Speed
ps0 Failed  Failed  Failed  OK        Low        Low
ps1 OK      OK      OK      OK        Low        Low

-------------Drive Info-------------- --PortA-- --PortB--
Drive DeviceName       State  Temp(C)  LoopState LoopState
  0:0 5000cca0160e859f Normal      26  OK        OK
  1:0 5000cca0160e66af Normal      26  OK        OK
  2:0 5000cca0160ef9bf Normal      27  OK        OK
  3:0 5000cca0161181f7 Normal      27  OK        OK
  4:0 5000cca0160e5ff7 Normal      27  OK        OK
  5:0 5000cca0160e78d7 Normal      26  OK        OK
One or more of ACState, DCState, and PSState could be in a Failed state.
2. If the cage is unknown, issue the showcage -d command. The output above is repeated for each cage; search for the failure.
PCM Location
If an invalid status is not displayed, you can flash LEDs in a drive enclosure using the command locatecage -t XX cageY, where XX is the number of seconds to flash LEDs and cageY is the name of the cage from the commands in "Drive PCM Identification" (page 71).
The LEDs can be stopped by issuing the locatecage -t 1 cageY command.
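For example, assuming the failed PCM is in a drive enclosure named cage2 (a hypothetical name; use the cage name identified above), the following flashes its LEDs for 300 seconds and then stops them:
cli% locatecage -t 300 cage2
cli% locatecage -t 1 cage2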
PCM and Battery Verification
1. Verify that the PCM has been successfully replaced by issuing the showcage -d cageY command:
cli% showcage -d cage0
Id Name  LoopA Pos.A LoopB Pos.B Drives Temp  RevA RevB Model Side
 0 cage0 1:0:1     0 0:0:1     0      6 26-27 320c 320c DCN1  n/a

-----------Cage detail info for cage0 -----------
Position: ---

Interface Board Info Card0            Card1
Firmware_status      Current          Current
Product_Rev          320c             320c
State(self,partner)  OK,OK            OK,OK
VendorId,ProductId   HP,DCN1          HP,DCN1
Master_CPU           Yes              No
SAS_Addr             50050CC10230567E 50050CC10230567E
Link_Speed(DP1,DP2)  6.0Gbps,6.0Gbps  6.0Gbps,6.0Gbps

PS  PSState ACState DCState Fan State Fan0_Speed Fan1_Speed
ps0 OK      OK      OK      OK        Low        Low
ps1 OK      OK      OK      OK        Low        Low

-------------Drive Info-------------- --PortA-- --PortB--
Drive DeviceName       State  Temp(C)  LoopState LoopState
  0:0 5000cca0160e859f Normal      26  OK        OK
  1:0 5000cca0160e66af Normal      26  OK        OK
  2:0 5000cca0160ef9bf Normal      27  OK        OK
  3:0 5000cca0161181f7 Normal      27  OK        OK
  4:0 5000cca0160e5ff7 Normal      27  OK        OK
  5:0 5000cca0160e78d7 Normal      26  OK        OK
ACState, DCState and PSState should all be OK.
2. Verify that the PCM Battery is still working by issuing the showbattery command:
cli% showbattery
Node PS Bat Assy_Serial     State ChrgLvl(%) ExpDate Expired Testing
 0,1  0   0 BCC0974242G00CH OK           104 n/a     No      No
 0,1  1   0 BCC0974242G006J OK           106 n/a     No      No
The State of both batteries should be OK.
3. Validate node health by executing the checkhealth -svc -detail node command:
cli% checkhealth -svc -detail node
Checking node
The following components are healthy: node
SFP Identification
1. Issue the showport command to view the port state:
cli% showport
N:S:P Mode      State     ----Node_WWN---- Port_WWN/HW_Addr Type  Protocol Label Partner FailoverState
0:0:1 initiator ready     50002ACFF70185A6 50002AC0010185A6 disk  SAS      -     -       -
0:0:2 initiator ready     50002ACFF70185A6 50002AC0020185A6 disk  SAS      -     -       -
0:1:1 target    ready     2FF70002AC0185A6 20110002AC0185A6 host  FC       -     -       -
0:1:2 target    ready     2FF70002AC0185A6 20120002AC0185A6 host  FC       -     -       -
0:2:1 target    loss_sync -                2C27D75301F6     iscsi iSCSI    -     -       -
0:2:2 target    loss_sync -                2C27D75301F2     iscsi iSCSI    -     -       -
0:3:1 peer      offline   -                0002AC8004DB     rcip  IP       RCIP0 -       -
1:0:1 initiator ready     50002ACFF70185A6 50002AC1010185A6 disk  SAS      -     -       -
1:0:2 initiator ready     50002ACFF70185A6 50002AC1020185A6 disk  SAS      -     -       -
1:1:1 target    ready     2FF70002AC0185A6 21110002AC0185A6 host  FC       -     -       -
1:1:2 initiator loss_sync 2FF70002AC0185A6 21120002AC0185A6 free  FC       -     -       -
1:2:1 initiator loss_sync 2FF70002AC0185A6 21210002AC0185A6 free  FC       -     -       -
1:2:2 initiator loss_sync 2FF70002AC0185A6 21220002AC0185A6 free  FC       -     -       -
1:2:3 initiator loss_sync 2FF70002AC0185A6 21230002AC0185A6 free  FC       -     -       -
1:2:4 initiator loss_sync 2FF70002AC0185A6 21240002AC0185A6 free  FC       -     -       -
Typically, the State is listed as loss_sync, the Mode as initiator, and the Connected Device Type as free.
2. Issue the showport -sfp command to verify which SFP requires replacement:
cli% showport -sfp
N:S:P -State- -Manufacturer- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0:1:1 OK      HP-F                      8.5 No        No      No     Yes
0:1:2 OK      HP-F                      8.5 No        No      No     Yes
0:2:1 OK      AVAGO                    10.3 No        No      No     Yes
0:2:2 OK      AVAGO                    10.3 No        No      No     Yes
1:1:1 OK      HP-F                      8.5 No        No      No     Yes
1:1:2 -       -                           - -         -       -      -
1:2:1 OK      HP-F                      8.5 No        No      No     Yes
1:2:2 OK      HP-F                      8.5 No        No      No     Yes
1:2:3 OK      HP-F                      8.5 No        No      No     Yes
1:2:4 OK      HP-F                      8.5 No        No      No     Yes
Typically, data will be missing and listed as - - -.
SFP Verification
1. Replace the SFP (see "Replacing an SFP" (page 42)).
2. Issue the showport command to verify that the ports are in good condition and the State is listed as ready:
cli% showport
N:S:P Mode      State     ----Node_WWN---- Port_WWN/HW_Addr Type  Protocol Label Partner FailoverState
0:0:1 initiator ready     50002ACFF70185A6 50002AC0010185A6 disk  SAS      -     -       -
0:0:2 initiator ready     50002ACFF70185A6 50002AC0020185A6 disk  SAS      -     -       -
0:1:1 target    ready     2FF70002AC0185A6 20110002AC0185A6 host  FC       -     -       -
0:1:2 target    ready     2FF70002AC0185A6 20120002AC0185A6 host  FC       -     -       -
0:2:1 target    loss_sync -                2C27D75301F6     iscsi iSCSI    -     -       -
0:2:2 target    loss_sync -                2C27D75301F2     iscsi iSCSI    -     -       -
0:3:1 peer      offline   -                0002AC8004DB     rcip  IP       RCIP0 -       -
1:0:1 initiator ready     50002ACFF70185A6 50002AC1010185A6 disk  SAS      -     -       -
1:0:2 initiator ready     50002ACFF70185A6 50002AC1020185A6 disk  SAS      -     -       -
1:1:1 target    ready     2FF70002AC0185A6 21110002AC0185A6 host  FC       -     -       -
1:1:2 target    ready     2FF70002AC0185A6 21120002AC0185A6 host  FC       -     -       -
1:2:1 initiator loss_sync 2FF70002AC0185A6 21210002AC0185A6 free  FC       -     -       -
1:2:2 initiator loss_sync 2FF70002AC0185A6 21220002AC0185A6 free  FC       -     -       -
1:2:3 initiator loss_sync 2FF70002AC0185A6 21230002AC0185A6 free  FC       -     -       -
1:2:4 initiator loss_sync 2FF70002AC0185A6 21240002AC0185A6 free  FC       -     -       -
The State should now be listed as ready, the Mode as target and the Connected Device
Type as host.
3. Issue the showport -sfp command to verify that the replaced SFP is connected and the State is listed as OK:
cli% showport -sfp
N:S:P -State- -Manufacturer- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0:1:1 OK      HP-F                      8.5 No        No      No     Yes
0:1:2 OK      HP-F                      8.5 No        No      No     Yes
0:2:1 OK      AVAGO                    10.3 No        No      No     Yes
0:2:2 OK      AVAGO                    10.3 No        No      No     Yes
1:1:1 OK      HP-F                      8.5 No        No      No     Yes
1:1:2 OK      HP-F                      8.5 No        No      No     Yes
1:2:1 OK      HP-F                      8.5 No        No      No     Yes
1:2:2 OK      HP-F                      8.5 No        No      No     Yes
1:2:3 OK      HP-F                      8.5 No        No      No     Yes
1:2:4 OK      HP-F                      8.5 No        No      No     Yes
Data should now be populated.
Disk Drive Identification
To identify the drive to replace and its current status, enter the servicemag status command.
NOTE: When an SSD is identified as degraded, you must manually initiate the replacement
process. Execute servicemag start -pdid pd_id to move the chunklets. When the SSD is
replaced, the system automatically initiates servicemag resume.
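For example, if the degraded SSD is reported as PD ID 7 (a hypothetical ID; use the ID reported for your system), start the replacement manually and then monitor its progress:
cli% servicemag start -pdid 7
cli% servicemag status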
There are four possible responses. Response 1 is expected when the drive is ready to be replaced:
1. servicemag has successfully completed:
cli% servicemag status
Cage 0, magazine 1:
The magazine was successfully brought offline by a servicemag start command.
The command completed Thu Oct 4 15:29:05 2012.
servicemag start -pdid 7 - Succeeded
When Succeeded displays as the last line in the output, it is safe to replace the disk.
2. servicemag has not started. Data is being reconstructed on spares; servicemag does not start until this process is complete. Retry the command at a later time.
cli% servicemag status
No servicemag operations logged.
3. servicemag has failed. Call your authorized service provider for assistance.
cli% servicemag status
Cage 0, magazine 1:
A servicemag start command failed on this magazine.
.....
4. servicemag is in progress. The output will inform the user of progress.
cli% servicemag status
Cage 0, magazine 1:
The magazine is being brought offline due to a servicemag start.
The last status update was at Thu Oct 4 15:27:54 2012.
Chunklet relocations have completed 35 in 0 seconds
servicemag start -pdid 1 -- is in Progress
NOTE: This process may take up to 10 minutes; repeat the command to refresh the status.
Disk Drive (Magazine) Location
1. Execute the showpd -failed command:
cli% showpd -failed
                           ----Size(MB)---- ----Ports----
Id CagePos Type RPM State   Total  Free A     B     Cap(GB)
 7 1:5:0   FC    10 failed 278528     0 1:0:1 0:0:1     450
2. Execute the locatecage -t XX cageY command.
Where:
• XX is the appropriate number of seconds to allow service personnel to view the LED status of the drive enclosure
• Y is the cage number shown as the first number of CagePos in the output of the showpd -failed command; in this case, 1 (1:5:0).
For example, locatecage -t 300 cage1 flashes LEDs on cage1 for 300 seconds (5 minutes).
This flashes all drives in this cage except the failed drive.
Disk Drive Verification
1. Replace the disk drive.
2. Verify that the disk drive has been successfully replaced by executing servicemag status.
There are three possible responses:
• servicemag is in progress; the output describes the current state of the procedure:
cli% servicemag status
Cage 0, magazine 1:
The magazine is being brought online due to a servicemag resume.
The last status update was at Thu Oct 4 16:26:32 2012.
Chunklets relocated: 16 in 7 minutes and 40 seconds
Chunklets remaining: 57
Chunklets marked for moving: 57
Estimated time for relocation completion based on 28 seconds per chunklet is: 26 minutes and 36 seconds
servicemag resume 0 1 -- is in Progress
NOTE: If the command is executed again, the estimated time for relocation completion
may vary.
• servicemag has completed:
cli% servicemag status
No servicemag operations logged
When No servicemag operations logged displays as the last line in the output,
the disk has successfully been replaced.
• servicemag has failed:
cli% servicemag status
Cage 0, magazine 1:
A servicemag resume command failed on this magazine.
.....
There can be several causes for this failure; contact your authorized service provider for
assistance.
3 Upgrading the Storage System
HP 3PAR StoreServ 7000 products include 3PAR licensing, which enables all functionality associated with the system. A failure to register the license key may limit access and restrict upgrading of your system. Before you proceed with upgrading, verify that all applicable licenses associated with the system are registered.
For further assistance with registering HP software licenses, visit the HP Support website: http://hp.com/support.
Use the QR484A HP 3PAR StoreServ 7400 Upgrade Node Pair kit to upgrade the system. This
kit contains the following:
• 7400 2-node array
• Four 8 Gb/s FC SFPs (two per node)
• Rack mounting hardware
• Two power cords
• Four 2M SAS cables
• Four-node link cables
NOTE: There must be additional 2U rack space in the rack immediately above an existing node
enclosure to perform an online node pair upgrade. If rack space is not available, your system must
be shut down and enclosures and components must be removed and then reinstalled to make room
for the additional enclosures for an offline upgrade. See Offline Upgrade.
The following sections describe the requirements for upgrading hardware components in an existing storage system.
Installing Rails for Component Enclosures
Before you can install the enclosure into the rack, you must mount both rail channels into the rack.
Use the rail kits shown in Table 18 (page 77), based on installation type:
Table 18 Part numbers used in rail kit installation
2U                                    4U
Rail assembly 692984-001, 692985-001  Rail assembly 692986-001, 692987-001
Screws 5697-1199                      Screws 5697-1199
                                      Cage nut 353039-002
To mount a rail shelf to the rack:
1. Align the rail shelf to the pin screws to the rear rack post and expand the shelf until it reaches
the front rack post.
2. Use the T-25 Torx toolbit to secure the shelf to the front and rear of the rack posts using the shoulder screws (PN 5697-1199). Torque to 13 in-lbs.
For a 4U rail kit install, snap in one cage nut on both sides of the rack in the position above the rail. Check all sides at the back and front of the rack and ensure that all screws are properly installed.
NOTE: The cage nut is positioned 2 holes above the top of the rail.
3. Press down hard with your hand on the top of each rail to ensure they are mounted firmly.
4. Repeat on the other side of the rack.
Figure 58 Mounting the Rail Kit
Controller Node Upgrade
Installing additional controller nodes enhances performance and increases maximum storage
capacity of a storage system.
CAUTION: When performing any upgrade while concurrently using the system, use extra care,
because an incorrect action during the upgrade process may cause the system to fail. Upgrading
nodes requires performing node rescue. See “Node Rescue” (page 145).
IMPORTANT: You cannot upgrade a 7200 storage system to a 7400. Only a two-node 7400
storage system can be upgraded to a four-node system, see “Upgrading a 7400 Storage System”
(page 79).
Information on node upgrades:
• There must be 2U of space in the rack directly above the existing controller node enclosure (nodes 0 and 1) for the expansion controller node enclosure to be installed (nodes 2 and 3). If there is no rack space available, your system must be shut down and enclosures and components must be relocated to make room for the additional enclosures for an offline upgrade.
• 7200 nodes do not work in a 7400 storage system.
• A four-node system (7400) requires interconnect cabling between the node enclosures.
• Nodes must be cabled correctly for the cluster to form; incorrect cabling displays as alerts or events in the OS.
NOTE: Incorrectly configured interconnect cables illuminate amber port LEDs.
• Only nodes configured as FRUs can be used to replace existing nodes or for upgrades in a 7400. Nodes cannot be moved from one system and installed in another.
• Nodes in a node pair must have identical PCIe adapter configurations.
Upgrading a 7400 Storage System
This section describes how to upgrade a 7400 two-node system to a four-node system.
CAUTION: All CLI commands must be performed from the SPMAINT using the spvar ID to ensure
correct permissions to execute all the necessary commands.
Before beginning a controller node upgrade:
• At the front of the storage system, before installing the enclosures, remove the filler plates that cover the empty rack space reserved for the additional enclosures.
• Verify with the system administrator whether a complete backup of all data on the storage system has been performed. Controller nodes must be installed into an active system.
• Verify Initial LED status:
   ◦ Node LEDs on nodes 0 and 1 should indicate a good status.
   ◦ Because no node interconnect cables have been installed, all port LEDs should be off.
Validate Initial System Status:
1. Issue the showsys command to verify that your system is listed as an HP_3PAR 7400 model and the number of nodes is listed as 2.
cli% showsys
                                                    ----------------(MB)----------------
   ID --Name--- ---Model---- -Serial- Nodes Master TotalCap AllocCap  FreeCap FailedCap
99806 3par_7400 HP_3PAR 7400  1699806     2      0 16103424  4178944 11924480         0
2. Issue the showhost command to verify that all hosts are attached to at least two nodes.
cli% showhost
Id Name         Persona        -WWN/iSCSI_Name- Port
 0 3PARL2ESX01  Generic-legacy 500110A00017ECC8 0:3:4
                               500110A00017ECCA 1:3:4
                               500110A00017ECC8 1:3:3
                               500110A00017ECCA 0:3:3
 1 3PARL2ESX02  Generic-legacy 500110A00017EC96 0:3:4
                               500110A00017EC96 1:3:3
                               500110A00017EC94 1:3:4
                               500110A00017EC94 0:3:3
 2 3PARL2HYPERV Generic-ALUA   5001438021E10E12 1:3:4
                               5001438021E10E10 1:3:3
                               5001438021E10E12 0:3:3
                               5001438021E10E10 0:3:4
 3 3PARL2ORA02  Generic        50060B000063A672 0:3:4
                               50060B000063A670 1:3:4
                               50060B000063A670 0:3:3
                               50060B000063A672 1:3:3
 4 3PARL2ORA01  Generic        500110A00017DF9C 1:3:3
                               500110A00017DF9C 0:3:4
IMPORTANT: Hosts should be connected to two nodes where possible.
3. Issue the checkhealth command to verify system status.
cli% checkhealth
Checking alert
Checking cabling
Checking cage
Checking date
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
System is healthy
Hardware Installation
NOTE: See Cabling Guide instructions for your particular node and drive enclosure configuration
for best practice positioning of enclosures in the rack. These best practices also facilitate cabling.
The cabling guides are located at http://h20000.www2.hp.com/bizsupport/TechSupport/
DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64179&
taskId=101&prodTypeId=12169&prodSeriesId=5335712#1.
Install rail kits for additional node and drive enclosures before loading any enclosures in the rack.
1. Install a rail kit for drive and node enclosures.
NOTE: Controller nodes should ship with PCIe Adapters already installed. If that is not the
case, remove the controller nodes, install PCIe Adapters and SFPs, and re-install the controller
nodes.
2. Install the controller node enclosure. It may ship with the nodes and PCMs already installed.
3. Install all drive enclosures following the Cabling Guide's configuration best practices where possible. Adding new drive enclosures directly above a new node enclosure may also be applicable. The cabling guides are located at http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64179&taskId=101&prodTypeId=12169&prodSeriesId=5335712#1.
4. Install disk drives in the node and drive enclosures (see the "Allocation and Loading Order" sections in the HP 3PAR StoreServ 7000 Storage Installation Guide).
NOTE: Enclosures may be delivered populated with disk drives.
5. Install the power cables to the controller node and drive enclosure PCMs.
NOTE: Do not power on at this stage.
6. After you have completed the physical installation of the drive enclosures and disk drives, cable the drive enclosures to the controller nodes and each other (see the appropriate HP 3PAR Cabling Guide). The cabling guides are located at http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64179&taskId=101&prodTypeId=12169&prodSeriesId=5335712#1.
7. Install node interconnect cables between nodes 0, 1, and 2 (see Table 19 (page 81) and Figure 59 (page 81)).
Table 19 Node Interconnect Cabling for Nodes 0, 1, and 2
A                 C
Node 0 Intr 0  >  Node 2 Intr 1
Node 1 Intr 1  >  Node 2 Intr 0
Figure 59 Node Interconnect Cabling for Nodes 0, 1, and 2
8. Connect Ethernet cables to the MGMT port for each new node.
CAUTION: Ethernet cables are required as the OS for new nodes is transferred across the
network. If additional Ethernet cables for node 2 and node 3 are unavailable, use one of the
existing cables in node 0 and node 1. Use the shownet command to locate the active node
before moving the non-active node Ethernet connection to node 2.
9. Without removing any cables, pull the gray node rod to unseat node 3 from the enclosure.
Powering On
1. Turn power switches to ON for all drive enclosure PCMs.
2. Verify that each disk drive powers up and the disk drive status LED is green.
NOTE: Rectify any disk drive problems before proceeding.
3. Turn power switches to ON for the new controller node enclosure PCMs.
4. Node rescue for node 2 auto-starts and the HP 3PAR OS is copied across the local area network (LAN). When the HP 3PAR OS is installed, node 2 should reboot and join the cluster.
Verify Node 2 Upgrade LED status
1. Wait at least three minutes before verifying the LED status of node 2. If the status is in a good state, continue on to "Monitor Node 2 Upgrade Progress".
   • All nodes should indicate a good status.
   NOTE: If the node status LED is solid green, the node has booted but is unable to join the cluster.
   • The interconnect port status LEDs on the cabled ports should be green, indicating that links have been established.
   • If any node interconnect port fault LEDs are amber or flashing amber, one or both of the following errors has occurred:
      ◦ Amber: failed to establish link connection.
      ◦ Flashing amber: interconnect cabling error.
2. If the status LED for node 2 is solid green or any of the interconnect port fault LEDs for node 2 are amber or flashing amber, execute the node interconnect fault recovery procedure.
Node Interconnect Fault Recovery Procedure
WARNING! Never remove a node interconnect cable when all port LEDs at both ends of the
cable are green.
CAUTION: Node interconnect cables are directional. Ends marked A should connect only to
node 0 or node 1. Ends marked C should connect only to node 2 or node 3 (see Figure 60 (page
82)).
NOTE: If all cables are correct, escalate the problem to the next level of HP support.
Figure 60 Directional Cable Markings
NOTE: If you are currently adding node 2, only the node 2 cables should be connected.
Install the node interconnect cables as shown in Table 20 (page 82) and Figure 61 (page 83).
Table 20 Node Interconnect Cabling for Nodes 0, 1, 2, and 3
A                 C
Node 0 Intr 0  >  Node 2 Intr 1
Node 0 Intr 1  >  Node 3 Intr 0
Node 1 Intr 0  >  Node 3 Intr 1
Node 1 Intr 1  >  Node 2 Intr 0
Figure 61 Node Interconnect Cabling for Nodes 0, 1, 2, and 3
Execute the following procedure:
1. Issue the showtask command:
cli% showtask
  Id Type        Name          Status Phase Step -------StartTime------- -FinishTime- -Priority- ---User----
1297 node_rescue node_3_rescue active   1/1  0/1 2012-10-19 13:27:29 PDT -            n/a        sys:3parsys
NOTE: This is an example of a node rescue task for node 3. If there are no active node
rescue tasks, go to Step 4 (shownode).
2. Cancel the current node rescue task:
cli% canceltask -f 1297
3. Issue the showtask command to confirm the cancellation.
4. Issue the shownode command to confirm that the node with interconnect problems did not join the cluster:
cli% shownode
                                                        Control    Data    Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699806-0 OK      Yes    Yes       Off          GreenBlnk    8192    8192          100
   1 1699806-1 OK      No     Yes       Off          GreenBlnk    8192    8192          100
   2 1699806-2 OK      No     Yes       Off          GreenBlnk    8192    8192          100
NOTE: In this example, node 3 is not part of the cluster. If the node with interconnect problems did join the cluster, issue the shutdownnode halt X command (where X is the ID of the node with interconnect problems).
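For example, if node 3 had incorrectly joined the cluster, the command would be:
cli% shutdownnode halt 3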
5. Without removing any cables, pull the gray node rod out to unseat the node from the enclosure.
6. When power is lost to the node (all LEDs are out), wait for at least 30 seconds.
7. Ensure that the gray node rod is in the extended position.
8. Push the node into the enclosure until it rests against the insertion mechanism.
9. Correct any node interconnect cabling problems:
   a. Use Table 20 (page 82) to ensure correct port orientation.
   b. Check that the direction of the cables is correct. Ends marked A should be connected only to nodes 0 or 1. Ends marked C should be connected only to nodes 2 or 3.
10. Push the gray node rod into the node to reseat and power on the node.
11. The node should join the cluster, indicate good status, and interconnect ports Intr 0 and Intr
1 should be green.
12. If the node with interconnect problems was node 2, return to “Monitor Node 2 Upgrade
Progress”.
13. If the node with interconnect problems was node 3, return to “Insert Node 3 and Monitor
Upgrade Progress”.
IMPORTANT: If any step does not have expected results, escalate to the next level of HP support.
Monitor Node 2 Upgrade Progress
1. Issue the showtask command to view active node rescue tasks:
cli% showtask
  Id Type        Name          Status Phase Step -------StartTime------- -FinishTime- -Priority- ---User----
1296 node_rescue node_2_rescue active   1/1  0/1 2012-10-19 13:27:29 PDT -            n/a        sys:3parsys
2. Issue the showtask -d <taskID> command against the active node rescue task to view detailed node rescue status. The File sync has begun step in the following procedure, where the node rescue file is being copied to the new node, takes several minutes.
cli% showtask -d 1296
  Id Type        Name          Status Phase Step -------StartTime------- -FinishTime- -Priority- ---User----
1296 node_rescue node_2_rescue active   1/1  0/1 2012-10-19 13:27:29 PDT -            n/a        sys:3parsys

Detailed status:
2012-10-19 13:27:29 PDT Created  task.
2012-10-19 13:27:29 PDT Updated  Running node rescue for node 2 as 0:15823
2012-10-19 13:27:36 PDT Updated  Using IP 169.254.190.232
2012-10-19 13:27:36 PDT Updated  Informing system manager to not autoreset node 2.
2012-10-19 13:27:36 PDT Updated  Attempting to contact node 2 via NEMOE.
2012-10-19 13:27:37 PDT Updated  Setting boot parameters.
2012-10-19 13:27:59 PDT Updated  Waiting for node 2 to boot the node rescue kernel.
2012-10-19 13:28:02 PDT Updated  Kernel on node 2 has started. Waiting for node to retrieve install details.
2012-10-19 13:28:21 PDT Updated  Node 2 has retrieved the install details. File sync has begun. Waiting for file sync to begin.
2012-10-19 13:28:54 PDT Updated  Estimated time to complete this step is 5 minutes on a lightly loaded sys.
3. Repeat the showtask -d <taskID> command against the active node rescue task to view detailed node rescue status. Node 2 has completed the node rescue task and is in the process of joining the cluster.
cli% showtask -d 1296
  Id Type        Name          Status Phase Step -------StartTime------- -FinishTime- -Priority- ---User----
1296 node_rescue node_2_rescue active   1/1  0/1 2012-10-19 13:27:29 PDT -            n/a        sys:3parsys

Detailed status:
2012-10-19 13:27:29 PDT Created  task.
2012-10-19 13:27:29 PDT Updated  Running node rescue for node 2 as 0:15823
2012-10-19 13:27:36 PDT Updated  Using IP 169.254.190.232
2012-10-19 13:27:36 PDT Updated  Informing system manager to not autoreset node 2.
2012-10-19 13:27:36 PDT Updated  Attempting to contact node 2 via NEMOE.
2012-10-19 13:27:37 PDT Updated  Setting boot parameters.
2012-10-19 13:27:59 PDT Updated  Waiting for node 2 to boot the node rescue kernel.
2012-10-19 13:28:02 PDT Updated  Kernel on node 2 has started. Waiting for node to retrieve install details.
2012-10-19 13:28:21 PDT Updated  Node 2 has retrieved the install details. File sync has begun. Waiting for file sync to begin.
2012-10-19 13:28:54 PDT Updated  Estimated time to complete this step is 5 minutes on a lightly loaded sys.
2012-10-19 13:32:34 PDT Updated  Remote node has completed file sync, and will reboot.
2012-10-19 13:32:34 PDT Updated  Waiting for node to rejoin cluster.
4. Issue the showtask command to view the node rescue tasks. When complete, the node_rescue task should have a status of done.
cli% showtask
  Id Type        Name          Status Phase Step -------StartTime------- ------FinishTime------- -Priority- ---User----
1296 node_rescue node_2_rescue done     ---  --- 2012-10-19 13:27:29 PDT 2012-10-19 13:37:44 PDT n/a        sys:3parsys
5. Issue the shownode command and verify that node 2 has joined the cluster.
NOTE: Repeat if necessary. The node may reboot and take an additional three minutes
between the node rescue task completing and the node joining the cluster.
cli% shownode
                                                        Control    Data    Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699806-0 OK      Yes    Yes       Off          GreenBlnk    8192    8192          100
   1 1699806-1 OK      No     Yes       Off          GreenBlnk    8192    8192          100
   2 1699806-2 OK      No     Yes       Off          GreenBlnk    8192    8192          100
Insert Node 3 and Monitor Upgrade Progress
1. Ensure that the gray node rod is in the extracted position.
2. Push node 3 into the enclosure until it rests against the insertion mechanism.
3. Install node interconnect cables between nodes 0, 1, and 3 (see Table 21 (page 85) and Figure 62 (page 85)).
Table 21 Node Interconnect Cabling for Nodes 0, 1, and 3
A                 C
Node 0 Intr 1  >  Node 3 Intr 0
Node 1 Intr 0  >  Node 3 Intr 1
Figure 62 Node Interconnect Cabling for Nodes 0, 1, and 3
4. Push the gray node rod in to seat node 3.
5. Wait at least 3 minutes before verifying the LED status of node 3.
   • All nodes should indicate a good status.
   NOTE: If the node status LED is solid green, the node has booted but is unable to join the cluster.
   • Intr 0 and Intr 1 interconnect port status LEDs on all four nodes should be green, indicating that links have been established.
   • If any node interconnect port fault LEDs are amber or flashing amber, one or both of the following errors has occurred:
      ◦ Amber: failed to establish link connection.
      ◦ Flashing amber: interconnect cabling error.
6. If the status LED for node 3 is solid green or any of the interconnect port fault LEDs for node 3 are amber or flashing amber, follow the node interconnect fault recovery procedure (see "Node Interconnect Fault Recovery Procedure").
7. Issue the showtask command to view active node_rescue tasks.
cli% showtask
  Id Type        Name          Status Phase Step -------StartTime------- ----FinishTime----- -Priority- ---User----
1297 node_rescue node_2_rescue done     ---  --- 2012-10-19 13:27:29 PDT 2012-10-19 13:37:44 n/a        sys:3parsys
1299 node_rescue node_3_rescue active   1/1  0/1 2012-10-19 13:39:25 PDT -                   n/a        sys:3parsys
8. Issue the showtask -d <taskID> command against the active node rescue task to view detailed node rescue status. The File sync has begun step in the following procedure, where the node rescue file is being copied to the new node, takes several minutes.
cli% showtask -d 1299
  Id Type        Name          Status Phase Step -------StartTime------- -FinishTime- -Priority- ---User----
1299 node_rescue node_3_rescue active   1/1  0/1 2012-10-19 13:39:25 PDT -            n/a        sys:3parsys

Detailed status:
2012-10-19 13:39:25 PDT Created  task.
2012-10-19 13:39:25 PDT Updated  Running node rescue for node 3 as 0:15823
2012-10-19 13:40:36 PDT Updated  Using IP 169.254.190.232
2012-10-19 13:40:36 PDT Updated  Informing system manager to not autoreset node 3.
2012-10-19 13:40:36 PDT Updated  Attempting to contact node 3 via NEMOE.
2012-10-19 13:40:37 PDT Updated  Setting boot parameters.
2012-10-19 13:40:59 PDT Updated  Waiting for node 3 to boot the node rescue kernel.
2012-10-19 13:41:02 PDT Updated  Kernel on node 3 has started. Waiting for node to retrieve install details.
2012-10-19 13:41:21 PDT Updated  Node 3 has retrieved the install details. File sync has begun. Waiting for file sync to begin.
2012-10-19 13:41:54 PDT Updated  Estimated time to complete this step is 5 minutes on a lightly loaded sys.
9. Reissue the showtask -d <taskID> command against the active node rescue task to view detailed node rescue status. Node 3 has completed the node rescue task and is in the process of joining the cluster:
cli% showtask -d 1299
  Id Type        Name          Status Phase Step -------StartTime------- -FinishTime- -Priority- ---User----
1299 node_rescue node_3_rescue active   1/1  0/1 2012-10-19 13:39:25 PDT -            n/a        sys:3parsys

Detailed status:
2012-10-19 13:39:25 PDT Created  task.
2012-10-19 13:39:25 PDT Updated  Running node rescue for node 3 as 0:15823
2012-10-19 13:40:36 PDT Updated  Using IP 169.254.190.232
2012-10-19 13:40:36 PDT Updated  Informing system manager to not autoreset node 3.
2012-10-19 13:40:36 PDT Updated  Attempting to contact node 3 via NEMOE.
2012-10-19 13:40:37 PDT Updated  Setting boot parameters.
2012-10-19 13:40:59 PDT Updated  Waiting for node 3 to boot the node rescue kernel.
2012-10-19 13:41:02 PDT Updated  Kernel on node 3 has started. Waiting for node to retrieve install details.
2012-10-19 13:41:21 PDT Updated  Node 3 has retrieved the install details. File sync has begun. Waiting for file sync to begin.
2012-10-19 13:41:54 PDT Updated  Estimated time to complete this step is 5 minutes on a lightly loaded sys.
2012-10-19 13:44:34 PDT Updated  Remote node has completed file sync, and will reboot.
2012-10-19 13:44:34 PDT Updated  Waiting for node to rejoin cluster.
10. Issue the showtask command to view the node rescue tasks.
When complete, the node_rescue task should have a status of done.
cli% showtask
  Id Type        Name          Status Phase Step -------StartTime------- ------FinishTime------- -Priority- ---User----
1299 node_rescue node_3_rescue done     ---  --- 2012-10-19 13:39:25 PDT 2012-10-19 13:47:44 PDT n/a        sys:3parsys
11. Issue the shownode command and verify that node 3 has joined the cluster.
NOTE: Repeat if necessary. The node may reboot and take an additional three minutes
between the node rescue task completing and the node joining the cluster.
cli% shownode
                                                        Control    Data    Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699806-0 OK      Yes    Yes       Off          GreenBlnk    8192    8192          100
   1 1699806-1 OK      No     Yes       Off          GreenBlnk    8192    8192          100
   2 1699806-2 OK      No     Yes       Off          GreenBlnk    8192    8192          100
   3 1699806-3 OK      No     Yes       Off          GreenBlnk    8192    8192          100
Initiate admithw
When node and drive enclosures display in CLI, they are identified as follows:
• DCN1 for a node enclosure
• DCS2 for a 2U24 (M6710) drive enclosure
• DCS1 for a 4U24 (M6720) drive enclosure
Issue the admithw command to start the process to admit new hardware.
cli% admithw
Checking nodes...
Checking volumes...
Checking system LDs...
Checking ports...
Checking state of disks...
18 new disks found
Checking cabling...
Checking cage firmware...
Checking if this is an upgrade that added new types of drives...
Checking for disks to admit...
18 disks admitted
Checking admin volume...
Admin volume exists.
Checking if logging LDs need to be created...
Creating logging LD for node 2.
Creating logging LD for node 3.
Checking if preserved data LDs need to be created...
Creating 16384 MB of preserved data storage on nodes 2 and 3.
Checking if system scheduled tasks need to be created...
Checking if the rights assigned to extended roles need to be updated...
No need to update extended roles rights.
Rebalancing and adding FC spares...
FC spare chunklets rebalanced; number of FC spare chunklets increased by 0 for a total of 816.
Rebalancing and adding NL spares...
NL spare chunklets rebalanced; number of NL spare chunklets increased by 0 for a total of 2794.
Rebalancing and adding SSD spares...
No SSD PDs present
System Reporter data volume exists.
Checking system health...
Checking alert
Checking cabling
Checking cage
Checking dar
Checking date
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
Component -Description- Qty
Alert     New alerts      1
admithw has completed
IMPORTANT: If you are prompted for permission to upgrade drive enclosure (cage) or physical disk (disk drive) firmware, always agree to the upgrade.
There may be a delay in the script while Logging LDs are created for nodes 2 and 3:
Creating logging LD for node 2.
Creating logging LD for node 3.
Initialization of upgraded storage is required for these to be created.
Verify Successful Completion
1. Issue the shownode command and verify that all installed nodes are part of the cluster:
cli% shownode
                                                        Control    Data    Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1699806-0 OK      Yes    Yes       Off          GreenBlnk    8192    8192          100
   1 1699806-1 OK      No     Yes       Off          GreenBlnk    8192    8192          100
   2 1699806-2 OK      No     Yes       Off          GreenBlnk    8192    8192          100
   3 1699806-3 OK      No     Yes       Off          GreenBlnk    8192    8192          100
2. Issue the shownode -pci command and verify that all installed PCIe Adapters are displayed.
cli% shownode -pci
Node Slot Type -Manufacturer- -Model-- ----Serial---- -Rev- Firmware
   0    0 SAS  LSI            9205-8e  Onboard        01    11.00.00.00
   0    1 FC   EMULEX         LPe12002 Onboard        03    2.01.X.14
   0    2 CNA  QLOGIC         QLE8242  PCGLT0ARC2U4FR 58    4.11.114
   0    3 Eth  Intel          e1000e   Onboard        n/a   1.3.10-k2
   1    0 SAS  LSI            9205-8e  Onboard        01    11.00.00.00
   1    1 FC   EMULEX         LPe12002 Onboard        03    2.01.X.14
   1    2 CNA  QLOGIC         QLE8242  PCGLT0ARC2U4G0 58    4.11.114
   1    3 Eth  Intel          e1000e   Onboard        n/a   1.3.10-k2
   2    0 SAS  LSI            9205-8e  Onboard        01    11.00.00.00
   2    1 FC   EMULEX         LPe12002 Onboard        03    2.01.X.14
   2    2 CNA  QLOGIC         QLE8242  PCGLT0ARC2U4FR 58    4.11.114
   2    3 Eth  Intel          e1000e   Onboard        n/a   1.3.10-k2
   3    0 SAS  LSI            9205-8e  Onboard        01    11.00.00.00
   3    1 FC   EMULEX         LPe12002 Onboard        03    2.01.X.14
   3    2 CNA  QLOGIC         QLE8242  PCGLT0ARC2U4G0 58    4.11.114
   3    3 Eth  Intel          e1000e   Onboard        n/a   1.3.10-k2
3. Issue the showcage command and verify that:
   • All drive enclosures (cages) are displayed
   • Each cage has two active paths, LoopA and LoopB
   • Cage firmware (RevA and RevB) is the same for all cages
cli% showcage
Id Name  LoopA Pos.A LoopB Pos.B Drives Temp  RevA RevB Model Side
 0 cage0 1:0:1     0 0:0:1     0      6 31-31 320b 320b DCN1  n/a
 1 cage1 1:0:1     0 0:0:1     1      6 30-32 320b 320b DCS1  n/a
 2 cage2 1:0:2     1 0:0:2     0      6 31-32 320b 320b DCS2  n/a
 3 cage3 3:0:1     0 2:0:1     0      6 29-29 320b 320b DCN1  n/a
 4 cage4 3:0:1     0 2:0:1     1      6 30-32 320b 320b DCS1  n/a
 5 cage5 3:0:2     1 2:0:2     0      6 32-33 320b 320b DCS2  n/a
4. Issue the showpd command and verify that all disk drives are displayed and both paths are active.
cli% showpd
                           -----Size(MB)----- ----Ports----
Id CagePos Type RPM State     Total     Free  A      B      Cap(GB)
 0 0:0:0   FC    10 normal   417792   308224  1:0:1* 0:0:1      450
 1 0:1:0   FC    10 normal   417792   307200  1:0:1  0:0:1*     450
 2 0:2:0   FC    10 normal   417792   308224  1:0:1* 0:0:1      450
 3 0:3:0   FC    10 normal   417792   308224  1:0:1  0:0:1*     450
 4 0:4:0   FC    10 normal   417792   308224  1:0:1* 0:0:1      450
 5 0:5:0   FC    10 normal   417792   307200  1:0:1  0:0:1*     450
 6 1:0:0   NL     7 normal  1848320  1371136  1:0:1* 0:0:1     2000
 7 1:4:0   NL     7 normal  1848320  1371136  1:0:1  0:0:1*    2000
 8 1:8:0   NL     7 normal  1848320  1371136  1:0:1* 0:0:1     2000
.
.
.
------------------------------------------------------------------
36 total                   16103424 11924480
NOTE: New disk drives must be initialized before they are ready for use. Initialization occurs
in the background and can take several hours, depending on disk drive capacities.
5. Issue the showhost command to verify that all hosts are still attached to the original two nodes.
cli% showhost
Id Name         Persona        -WWN/iSCSI_Name- Port
 0 3PARL2ESX01  Generic-legacy 500110A00017ECC8 0:3:4
                               500110A00017ECCA 1:3:4
                               500110A00017ECC8 1:3:3
                               500110A00017ECCA 0:3:3
 1 3PARL2ESX02  Generic-legacy 500110A00017EC96 0:3:4
                               500110A00017EC96 1:3:3
                               500110A00017EC94 1:3:4
                               500110A00017EC94 0:3:3
 2 3PARL2HYPERV Generic-ALUA   5001438021E10E12 1:3:4
                               5001438021E10E10 1:3:3
                               5001438021E10E12 0:3:3
                               5001438021E10E10 0:3:4
 3 3PARL2ORA02  Generic        50060B000063A672 0:3:4
                               50060B000063A670 1:3:4
                               50060B000063A670 0:3:3
                               50060B000063A672 1:3:3
 4 3PARL2ORA01  Generic        500110A00017DF9C 1:3:3
                               500110A00017DF9C 0:3:4
IMPORTANT: Hosts should be connected to two nodes where possible.
NOTE: Hosts should be connected to new nodes after the upgrade is completed.
6. Issue the checkhealth -svc cabling node cage pd command to verify status.
cli% checkhealth -svc cabling node cage pd
Checking cabling
Checking node
Checking cage
Checking pd
The following components are healthy: cabling, node, cage, pd
Upgrading a 7400 Storage System
Before beginning a controller node upgrade:
• Verify with the system administrator whether a complete backup of all data on the storage system has been performed. HP recommends that you install controller nodes into an active system.
• Before installing the enclosure: At the front of the storage system, remove the filler plates that cover the empty rack space reserved for the additional enclosure.
• Issue the following commands:
   ◦ showsys to verify that your system is listed as a 7400 model and the number of nodes is listed as 2.
   ◦ showhost to verify that all hosts are attached to both nodes.
   ◦ checkhealth -svc cabling to verify existing cabling is correct and output displays as: The following components are healthy: cabling (see the brief example after this list).
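A brief example of the cabling check, matching the healthy output shown elsewhere in this guide:
cli% checkhealth -svc cabling
Checking cabling
The following components are healthy: cabling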
NOTE: Before you begin, remove the additional enclosures from the packaging.
1. Install rail kits for the enclosures, if applicable. See "Installing Rails for Component Enclosures" (page 77).
2. Install the controller node enclosure (that was shipped with the nodes already installed). See "Installing the Enclosures" (page 91).
3. Install the 764W PCMs into the node enclosure. See "Installing a Power Cooling Module" (page 51).
4. Cable the node enclosures to each other and verify that the power switch is OFF. Do not power ON until the node rescue steps have been executed.
   a. Insert the cable connector A end into node 0, intr 0 port. Connect the C end to node 2, intr 1 port.
   b. Insert the cable connector A end into node 0, intr 1 port. Connect the C end to node 3, intr 0 port.
   c. Insert the cable connector A end into node 1, intr 1 port. Connect the C end to node 2, intr 0 port.
   d. Insert the cable connector A end into node 1, intr 0 port. Connect the C end to node 3, intr 1 port.
Figure 63 Cabling controller nodes
5. Install the additional drive enclosures and disk drives according to best practice rules, balancing the drives between the node pairs. See "Installing a Disk Drive" (page 29).
6. After you have completed the physical installation of the enclosures and disk drives, cable the drive enclosures to the new controller nodes. For more information, see "Cabling Controller Nodes" in the HP 3PAR StoreServ 7000 Storage Installation Guide.
7. Install the power cables to the PCMs and press the power switch to ON. Turn power on to the drive enclosures first, and then to the node enclosures.
8. Node rescue auto-starts and adds the nodes to the cluster by copying the OS to the new nodes.
9. Verify that the upgrade is successful (a brief verification sketch follows).
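A minimal verification sketch, using the same commands shown in "Verify Successful Completion" earlier in this chapter (confirm that the node rescue tasks show done, that all four nodes are in the cluster, and that the system reports healthy):
cli% showtask
cli% shownode
cli% checkhealth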
Installing the Enclosures
The storage system can include two types of drive and node enclosures.
NOTE: When installing a two-node 7400 enclosure, 2U of space must be reserved above the
enclosure for an upgrade to a four-node system. There are two 1U filler panels available to reserve
this space.
WARNING! The enclosure is heavy. Lifting, moving, or installing the enclosure requires two people.
To install an enclosure on the rack:
1. Determine that the enclosure is oriented correctly by looking at the rear of the enclosure. Verify the node numbering by reviewing the node label located at the edges of the node.
Figure 64 Verify the Node Numbering
2. At the front of the enclosure, remove the yellow bezels on each side of the enclosure to provide access to the mounting holes.
3. Using both hands, slide the enclosure onto the lips of the rail channels. Use the bottom lip as a guide and the top to catch the enclosure. Check all sides of the rack at the front and the back to ensure that the enclosure is fitted to the channel lips before using any screws.
4. If required, add hold-down screws at the rear of the enclosure for earthquake protection. Part number 5697-1835 is included with each enclosure: 2 x SCR, M5-0.8, 6mm H, Pan HEAD-T25/SLOT.
Figure 65 Tightening the Hold-Down Screw
5. At the front of the enclosure:
   a. Insert one M5 screw into the mounting hole on each side to secure the enclosure to the rack.
   b. Replace the yellow bezels on each side of the enclosure.
6. At the rear of the enclosure, install and secure power and data cables.
7. Install disk drives.
CAUTION: Do not power on without completing the remainder of the physical installation or upgrade.
NOTE: For proper thermal control, blank filler panels must be installed in any slots without drives.
Drive Enclosures and Disk Drives Upgrade
There are two types of drive enclosures that are used for expansion:
• The HP M6710 drive enclosure (2U24) holds up to 24 2.5-inch SFF SAS disk drives arranged vertically in a single row at the front of the enclosure. The back of the enclosure includes two 580 W PCMs and two I/O modules.
• The HP M6720 drive enclosure (4U24) holds up to 24 3.5-inch LFF SAS disk drives, arranged horizontally with four columns of six disk drives. The back of the enclosure includes two 580 W PCMs and two I/O modules.
NOTE: Before beginning this procedure, review how to load the drives based on drive type,
speed, and capacity. For more information, see the HP 3PAR StoreServ 7000 Storage Installation
Guide.
Information on drive enclosure upgrades:
• The number of drive enclosures attached to a specific node-pair should be determined by the desired RAID set size and HA Cage protection requirements; drive enclosures should be added and configured to achieve HA cage for a specific node-pair, taking into account the customer RAID set requirement.
• The distribution of drive enclosures between DP-1 and DP-2 of the node should be done to achieve maximum balance across the ports.
• When adding both 2U and 4U drive enclosures, they should be mixed on SAS chains (DP1 and DP2), added in pairs across node pairs on a four-node system, and balanced across SAS ports on each controller pair.
Drive enclosure expansion limits:
NOTE: Disk drives in the node enclosure are connected internally through DP-1.
• The 7200 node enclosure can support up to five drive enclosures, two connected through DP-1 and three connected through DP-2 on the nodes.
• The 7400 node enclosure can support up to nine drive enclosures, four connected through DP-1 and five connected through DP-2 on the nodes. A four-node 7400 configuration doubles the number of drive enclosures supported to 18.
Information on disk drive upgrades:
You can install additional disk drives to upgrade partially populated drive enclosures:
• The first expansion drive enclosure added to a system must be populated with the same number of disk drives as the node enclosure.
• Disks must be identical pairs.
• The same number of disk drives should be added to all of the drive enclosures of that type in the system.
• The minimum upgrade to a two-node system without expansion drive enclosures is two identical disk drives.
• The minimum upgrade to a four-node system without expansion drive enclosures is four identical disk drives.
Adding an Expansion Drive Enclosure
1. Install the expansion drive enclosure. See "Installing the Enclosures" (page 91).
   a. Install the disk drives. See "Installing a Disk Drive" (page 29).
   b. Cable the enclosures to each other using SAS cables. See "SAS Cabling" in the Cabling Guide. The cabling guides are located at http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64179&taskId=101&prodTypeId=12169&prodSeriesId=5335712#1.
NOTE: For the drive enclosures, verify that the activity LED is functional (all four LEDs are lit solid green) and that the LED at the front of the enclosure displays a number. This number may change later in the installation process.
2. If they have not been installed at the factory, install the 580 W PCMs into the drive enclosure (see "Installing a Power Cooling Module" (page 51)).
3. After you have completed the physical installation of the enclosures and disk drives, cable the drive enclosure to the controller nodes.
4. Connect the power cables to the PCMs and press the power switch to ON.
5. Verify that the upgrade is successful (a brief verification sketch follows).
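A minimal verification sketch for the new enclosure, using commands shown elsewhere in this guide (confirm that the new cage and its disk drives are displayed with two active paths and that cabling reports healthy; a plain checkhealth also works):
cli% showcage
cli% showpd
cli% checkhealth -svc cabling cage pd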
Upgrade Drive Enclosures
Steps for adding drive enclosures:
1. Check initial status
2. Install the drive enclosures and disk drives
3. Power Up
4. Chain Node 0 Loop DP-2
5. Chain Node 0 Loop DP-1
6. Check pathing
7. Move Node 1 DP-1 and DP-2 to farthest drive enclosures
8. Check pathing
9. Chain Node 1 Loop DP-2
10. Chain Node 1 Loop DP-1
11. Check pathing
12. Execute ADMITHW
13. Verify Pathing
14. Verify Cabling
Figure 66 (page 95) shows an initial configuration consisting of a two-node 7200 with 2 additional drive enclosures; the upgrade consists of 3 drive enclosures.
Figure 66 Initial Configuration
Check Initial Status
Execute the showpd and checkhealth commands. Resolve any outstanding problems before
starting the upgrade.
NOTE: Remember to log your session.
cli> showpd
                          ----Size(MB)---- ----Ports----
Id CagePos Type RPM State   Total    Free  A      B      Cap(GB)
 0 0:0:0   FC    10 normal  417792  313344 1:0:1* 0:0:1      450
 1 0:2:0   FC    10 normal  417792  313344 1:0:1* 0:0:1      450
 2 0:3:0   FC    10 normal  417792  313344 1:0:1  0:0:1*     450
 3 0:4:0   FC    10 normal  417792  313344 1:0:1* 0:0:1      450
 4 0:5:0   FC    10 normal  417792  313344 1:0:1  0:0:1*     450
 5 1:0:0   FC    10 normal  417792  313344 1:0:1* 0:0:1      450
 6 1:4:0   FC    10 normal  417792  313344 1:0:1  0:0:1*     450
 7 1:8:0   FC    10 normal  417792  313344 1:0:1* 0:0:1      450
 8 1:12:0  FC    10 normal  417792  313344 1:0:1  0:0:1*     450
 9 1:16:0  FC    10 normal  417792  313344 1:0:1* 0:0:1      450
10 1:20:0  FC    10 normal  417792  313344 1:0:1  0:0:1*     450
11 2:0:0   FC    10 normal  417792  313344 1:0:2* 0:0:2      450
12 2:1:0   FC    10 normal  417792  313344 1:0:2  0:0:2*     450
13 2:2:0   FC    10 normal  417792  313344 1:0:2* 0:0:2      450
14 2:3:0   FC    10 normal  417792  313344 1:0:2  0:0:2*     450
15 2:4:0   FC    10 normal  417792  313344 1:0:2* 0:0:2      450
16 2:5:0   FC    10 normal  417792  313344 1:0:2  0:0:2*     450
17 0:1:0   FC    10 normal  417792  313344 1:0:1  0:0:1*     450
-----------------------------------------------------------------
18 total                   7520256 5640192
cli> checkhealth
Checking alert
Checking cage
Checking dar
Checking date
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
System is healthy
Install Drive Enclosures and Disk Drives
Install the rail kits and drive enclosures, and then insert the disk drives.
The new drive enclosures should be added adjacent to the enclosure farthest from the controller
node when possible. In Figure 67 (page 97) the additional drive enclosures should be racked
directly below the second drive enclosure in the initial configuration.
Figure 67 Second Configuration
Power up enclosures and check status
All disk drives should indicate ready. Do not proceed until all the disk drives are ready.
Chain Node 0 Loop DP-2 (B Drive Enclosures and the solid red lines)
1. Install a cable from the first B drive enclosure I/O module 0 out port (DP-2) to the in port (DP-1) of I/O module 0 on the second B drive enclosure.
2. Install a cable from the second B drive enclosure I/O module 0 out port (DP-2) to the in port (DP-1) of I/O module 0 on the third B drive enclosure.
Figure 68 Installing Node 0 DP-2 B Drive Enclosure Cables
Chain Node 0 Loop DP-1 (A Drive Enclosures and the dashed red lines)
Install a cable from the second A drive enclosure I/O module 0 out port (DP-2) to the in port (DP-1) of I/O module 0 on the third A drive enclosure.
Figure 69 Installing Node 0 DP-1 A Drive Enclosure Cables
Check Pathing
Execute the showpd command.
• The additional three drive enclosures have been allocated cage numbers 3 through 5; for example, 3:0:0.
• LED indicators on the drive enclosure left-hand bezels should indicate 03, 04 and 05.
• 18 disk drives have been recognized and are initially connected via Port B to Node 0; for example, 0:0:2.
• The new disk drives indicate degraded because they currently only have one path.
cli> showpd
                             ----Size(MB)---- ----Ports----
 Id CagePos Type RPM State      Total    Free A      B      Cap(GB)
--- 3:0:0   FC    10 degraded  417792       0 -----  0:0:2*       0
--- 3:1:0   FC    10 degraded  417792       0 -----  0:0:2*       0
--- 3:2:0   FC    10 degraded  417792       0 -----  0:0:2*       0
--- 3:3:0   FC    10 degraded  417792       0 -----  0:0:2*       0
--- 3:4:0   FC    10 degraded  417792       0 -----  0:0:2*       0
--- 3:5:0   FC    10 degraded  417792       0 -----  0:0:2*       0
--- 4:0:0   FC    10 degraded  417792       0 -----  0:0:1*       0
--- 4:1:0   FC    10 degraded  417792       0 -----  0:0:1*       0
--- 4:2:0   FC    10 degraded  417792       0 -----  0:0:1*       0
--- 4:3:0   FC    10 degraded  417792       0 -----  0:0:1*       0
--- 4:4:0   FC    10 degraded  417792       0 -----  0:0:1*       0
--- 4:5:0   FC    10 degraded  417792       0 -----  0:0:1*       0
--- 5:0:0   FC    10 degraded  417792       0 -----  0:0:2*       0
--- 5:1:0   FC    10 degraded  417792       0 -----  0:0:2*       0
--- 5:2:0   FC    10 degraded  417792       0 -----  0:0:2*       0
--- 5:3:0   FC    10 degraded  417792       0 -----  0:0:2*       0
--- 5:4:0   FC    10 degraded  417792       0 -----  0:0:2*       0
--- 5:5:0   FC    10 degraded  417792       0 -----  0:0:2*       0
  0 0:0:0   FC    10 normal    417792  313344 1:0:1* 0:0:1      450
 17 0:1:0   FC    10 normal    417792  313344 1:0:1  0:0:1*     450
  1 0:2:0   FC    10 normal    417792  313344 1:0:1* 0:0:1      450
  2 0:3:0   FC    10 normal    417792  313344 1:0:1  0:0:1*     450
  3 0:4:0   FC    10 normal    417792  313344 1:0:1* 0:0:1      450
  4 0:5:0   FC    10 normal    417792  313344 1:0:1  0:0:1*     450
  5 1:0:0   FC    10 normal    417792  313344 1:0:1* 0:0:1      450
  6 1:1:0   FC    10 normal    417792  313344 1:0:1  0:0:1*     450
  7 1:2:0   FC    10 normal    417792  313344 1:0:1* 0:0:1      450
  8 1:3:0   FC    10 normal    417792  313344 1:0:1  0:0:1*     450
  9 1:4:0   FC    10 normal    417792  313344 1:0:1* 0:0:1      450
 10 1:5:0   FC    10 normal    417792  313344 1:0:1  0:0:1*     450
 11 2:0:0   FC    10 normal    417792  313344 1:0:2* 0:0:2      450
 12 2:1:0   FC    10 normal    417792  313344 1:0:2  0:0:2*     450
 13 2:2:0   FC    10 normal    417792  313344 1:0:2* 0:0:2      450
 14 2:3:0   FC    10 normal    417792  313344 1:0:2  0:0:2*     450
 15 2:4:0   FC    10 normal    417792  313344 1:0:2* 0:0:2      450
 16 2:5:0   FC    10 normal    417792  313344 1:0:2  0:0:2*     450
--------------------------------------------------------------------
 36 total                    15040512 5640192
Move Node 1 DP-1 and DP-2 to farthest drive enclosures
Refer to Figure 70 (page 101) during this procedure.
1. Remove the cable from the A drive enclosure farthest from the node enclosure (in this example, the third enclosure in the original configuration) I/O module 1 in port (DP-1) and install it into the in port (DP-1) of I/O module 1 of the added A drive enclosure farthest from the node enclosure (dashed green line).
2. Remove the cable from the B drive enclosure farthest from the node enclosure (in this example, the second enclosure in the original configuration) I/O module 1 in port (DP-1) and install it into the in port (DP-1) of I/O module 1 on the added B drive enclosure farthest from the node enclosure (solid green line).
Figure 70 Moving Node 1 DP-1 and DP-2
Check Pathing
Execute the showpd command.
• A path has been removed from the original drive enclosures (cages) 1 and 2, PD IDs 6 through 17. Disk drives in these cages are in a degraded state until the path is restored.
• New cages 4 and 5 now have 2 paths, but cage 3 still has only one path. The state of all installed disk drives with 2 paths is new until they are admitted into the System.
cli> showpd
                                 -----Size(MB)------ ----Ports----
 Id CagePos Type RPM State          Total      Free A      B      Cap(GB)
--- 3:0:0   FC    10 degraded      417792         0 -----  0:0:2        0
--- 3:1:0   FC    10 degraded      417792         0 -----  0:0:2*       0
--- 3:2:0   FC    10 degraded      417792         0 -----  0:0:2        0
--- 3:3:0   FC    10 degraded      417792         0 -----  0:0:2*       0
--- 3:4:0   FC    10 degraded      417792         0 -----  0:0:2        0
--- 3:5:0   FC    10 degraded      417792         0 -----  0:0:2*       0
--- 4:0:0   FC    10 new           417792         0 1:0:1* 0:0:1        0
--- 4:1:0   FC    10 new           417792         0 1:0:1  0:0:1*       0
--- 4:2:0   FC    10 new           417792         0 1:0:1* 0:0:1        0
--- 4:3:0   FC    10 new           417792         0 1:0:1  0:0:1*       0
--- 4:4:0   FC    10 new           417792         0 1:0:1* 0:0:1        0
--- 4:5:0   FC    10 new           417792         0 1:0:1  0:0:1*       0
--- 5:0:0   FC    10 new           417792         0 1:0:1* 0:0:1        0
--- 5:1:0   FC    10 new           417792         0 1:0:1  0:0:1*       0
--- 5:2:0   FC    10 new           417792         0 1:0:1* 0:0:1        0
--- 5:3:0   FC    10 new           417792         0 1:0:1  0:0:1*       0
--- 5:4:0   FC    10 new           417792         0 1:0:1* 0:0:1        0
--- 5:5:0   FC    10 new           417792         0 1:0:1  0:0:1*       0
  0 0:0:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  1 0:1:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  2 0:2:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  3 0:3:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  4 0:4:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  5 0:5:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  6 1:0:0   FC    10 degraded      417792    313344 -----  0:0:1      450
  7 1:1:0   FC    10 degraded      417792    313344 -----  0:0:1*     450
  8 1:2:0   FC    10 degraded      417792    313344 -----  0:0:1      450
  9 1:3:0   FC    10 degraded      417792    313344 -----  0:0:1*     450
 10 1:4:0   FC    10 degraded      417792    313344 -----  0:0:1      450
 11 1:5:0   FC    10 degraded      417792    313344 -----  0:0:1*     450
 12 2:0:0   FC    10 degraded      417792    313344 -----  0:0:2      450
 13 2:1:0   FC    10 degraded      417792    313344 -----  0:0:2*     450
 14 2:2:0   FC    10 degraded      417792    313344 -----  0:0:2      450
 15 2:3:0   FC    10 degraded      417792    313344 -----  0:0:2*     450
 16 2:4:0   FC    10 degraded      417792    313344 -----  0:0:2      450
 17 2:5:0   FC    10 degraded      417792    313344 -----  0:0:2*     450
---------------------------------------------------------------------
 30 total                        29421568  11990016
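At this point it can be useful to list only the drives that have lost a path. The following is a sketch only, and assumes the -failed and -degraded options of showpd are available in the installed HP 3PAR OS release:
cli> showpd -failed -degraded
Only the single-pathed PDs should be listed; they drop out of this output as soon as both paths are restored.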
Chain Node 1 Loop DP-2 (B Drive Enclosures and the solid green lines)
1. Install a cable from the last B drive enclosure I/O module 1 out port (DP-2) to the in port (DP-1) of I/O module 1 on the second from last B drive enclosure.
2. Install a cable from the second from last B drive enclosure I/O module 1 out port (DP-2) to the in port (DP-1) of I/O module 1 on the third from last B drive enclosure.
Figure 71 Installing Node 1 DP-2 B Drive Enclosure Cables
Chain Node 1 Loop DP-1 (A Drive Enclosures and the dashed green lines)
Install a cable from the last A drive enclosure I/O module 1 out port (DP-2) to the in port (DP-1) of I/O module 1 on the second from last A drive enclosure (see Figure 72 (page 104)).
Figure 72 Installing Node 1 DP-1 A Drive Enclosure Cables
Figure 73 Cabling Complete
Check Pathing
Execute the showpd command.
All drives should have two paths. All the original drives should have returned to a normal state.
New drives are now ready to be admitted into the System.
cli> showpd
                                 -----Size(MB)------ ----Ports----
 Id CagePos Type RPM State          Total      Free A      B      Cap(GB)
--- 3:0:0   FC    10 new           417792         0 1:0:2* 0:0:2        0
--- 3:1:0   FC    10 new           417792         0 1:0:2  0:0:2*       0
--- 3:2:0   FC    10 new           417792         0 1:0:2* 0:0:2        0
--- 3:3:0   FC    10 new           417792         0 1:0:2  0:0:2*       0
--- 3:4:0   FC    10 new           417792         0 1:0:2* 0:0:2        0
--- 3:5:0   FC    10 new           417792         0 1:0:2  0:0:2*       0
--- 4:0:0   FC    10 new           417792         0 1:0:1* 0:0:1        0
--- 4:1:0   FC    10 new           417792         0 1:0:1  0:0:1*       0
--- 4:2:0   FC    10 new           417792         0 1:0:1* 0:0:1        0
--- 4:3:0   FC    10 new           417792         0 1:0:1  0:0:1*       0
--- 4:4:0   FC    10 new           417792         0 1:0:1* 0:0:1        0
--- 4:5:0   FC    10 new           417792         0 1:0:1  0:0:1*       0
--- 5:0:0   FC    10 new           417792         0 1:0:2* 0:0:1        0
--- 5:1:0   FC    10 new           417792         0 1:0:2  0:0:1*       0
--- 5:2:0   FC    10 new           417792         0 1:0:2* 0:0:1        0
--- 5:3:0   FC    10 new           417792         0 1:0:2  0:0:1*       0
--- 5:4:0   FC    10 new           417792         0 1:0:2* 0:0:1        0
--- 5:5:0   FC    10 new           417792         0 1:0:2  0:0:1*       0
  0 0:0:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  1 0:1:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  2 0:2:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  3 0:3:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  4 0:4:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  5 0:5:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  6 1:0:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  7 1:1:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  8 1:2:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  9 1:3:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
 10 1:4:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
 11 1:5:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
 12 2:0:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 13 2:1:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 14 2:2:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 15 2:3:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 16 2:4:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 17 2:5:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
---------------------------------------------------------------------
 30 total                        29421568  11990016
Execute admithw
Issue the admithw command to start the process to admit new hardware.
cli> admithw
Checking nodes...
Checking volumes...
Checking system LDs...
Checking ports...
Checking state of disks...
18 new disks found
Checking cabling...
Checking cage firmware...
Checking if this is an upgrade that added new types of drives...
Checking for disks to admit...
18 disks admitted
Checking admin volume...
Admin volume exists.
Checking if logging LDs need to be created...
Checking if preserved data LDs need to be created...
Checking if system scheduled tasks need to be created...
Checking if the rights assigned to extended roles need to be updated...
No need to update extended roles rights.
Rebalancing and adding FC spares...
FC spare chunklets rebalanced; number of FC spare chunklets increased by 0 for a total of 1944.
Rebalancing and adding NL spares...
No NL PDs present
Rebalancing and adding SSD spares...
No SSD PDs present
System Reporter data volume exists.
Checking system health...
Checking alert
Checking cabling
Checking cage
Checking dar
Checking date
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
Component -Description- Qty
Alert     New alerts      1
admithw has completed
IMPORTANT: If you are prompted for permission to upgrade drive enclosure (cage) or physical disk (disk drive) firmware, always agree to the upgrade.
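If you want to confirm the drive enclosure firmware levels that admithw checked, a detailed cage listing can help. This is a sketch only; cage3 is a hypothetical name for one of the added enclosures, and the -d option of showcage is assumed to be available in this release:
cli> showcage -d cage3
The firmware revision columns in the output report the code level running on each I/O module of the enclosure.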
Verify Pathing
Execute the showpd command; all drives should have two paths and a state of normal.
cli> showpd
                                 -----Size(MB)------ ----Ports----
 Id CagePos Type RPM State          Total      Free A      B      Cap(GB)
  0 0:0:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  1 0:1:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  2 0:2:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  3 0:3:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  4 0:4:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  5 0:5:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  6 1:0:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  7 1:1:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  8 1:2:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  9 1:3:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
 10 1:4:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
 11 1:5:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
 12 2:0:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 13 2:1:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 14 2:2:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 15 2:3:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 16 2:4:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 17 2:5:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 18 3:0:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 19 3:1:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 20 3:2:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 21 3:3:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 22 3:4:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 23 3:5:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 24 4:0:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
 25 4:1:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
 26 4:2:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
 27 4:3:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
 28 4:4:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
 29 4:5:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
 30 5:0:0   FC    10 normal        417792    313344 1:0:2* 0:0:1      450
 31 5:1:0   FC    10 normal        417792    313344 1:0:2  0:0:1*     450
 32 5:2:0   FC    10 normal        417792    313344 1:0:2* 0:0:1      450
 33 5:3:0   FC    10 normal        417792    313344 1:0:2  0:0:1*     450
 34 5:4:0   FC    10 normal        417792    313344 1:0:2* 0:0:1      450
 35 5:5:0   FC    10 normal        417792    313344 1:0:2  0:0:1*     450
---------------------------------------------------------------------
 36 total                        15040512  11280384
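If any drive still reports a state other than normal, the detailed state listing usually names the reason, for example a missing port. This is a sketch only, assuming the -s option of showpd in this release:
cli> showpd -s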
Verify Cabling
Execute the checkhealth -svc cabling command to verify installed cabling.
cli% checkhealth -svc cabling
Checking cabling
The following components are healthy: cabling
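Before handing the system back, it can also be worth running the full health check rather than the cabling component alone; a minimal sketch, assuming the -detail option of checkhealth:
cli% checkhealth -detail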
Upgrade Disk Drives
You can install additional disk drives to upgrade partially populated drive enclosures:
•
The first expansion drive enclosure added to a system must be populated with the same number of disk drives as the node enclosure.
•
Disks must be identical pairs.
•
The same number of disk drives should be added to all of the drive enclosures of that type in the system.
•
The minimum upgrade to a two-node system without expansion drive enclosures is two identical disk drives.
•
The minimum upgrade to a four-node system without expansion drive enclosures is four identical disk drives.
SFF Drives
For HP M6710 Drive Enclosures, drives must be added in identical pairs, starting from slot 0 on
the left and filling to the right, leaving no empty slots between drives. The best practice for installing
or upgrading a system is to add the same number of identical drives to every drive enclosure in
the system, with a minimum of three disk drive pairs in each drive enclosure. This ensures a balanced
workload for the system.
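One way to confirm that the enclosures end up evenly populated after an upgrade is to compare the drive count per enclosure; a minimal sketch using the showcage command:
cli> showcage
The drive count reported for each cage should be the same for every drive enclosure of the same type.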
Figure 74 7200 Two Node System (HP M6710 Drive Enclosure)
LFF Drives
For HP M6720 Drive Enclosures, drives must be added in pairs of the same drive type (NL, SAS, or SSD). Start adding drives in the left column, bottom to top, then continue filling columns from left to right, beginning at the bottom of each column. The best practice when installing or upgrading a system is to add the same number of identical drives to every drive enclosure in the system, with a minimum of two drives added to each enclosure. This ensures a balanced workload for the system.
Figure 75 7400 Four Node System (HP M6720 Drive Enclosure)
When upgrading a storage system with mixed SFF and LFF enclosures, you must follow these guidelines to maintain a balanced workload:
•
Each drive enclosure must contain a minimum of three pairs of drives.
•
Upgrades can be SFF only, LFF only, or a mixture of SFF and LFF drives.
•
SFF-only upgrades must split the drives evenly across all SFF enclosures.
•
LFF-only upgrades must split the drives evenly across all LFF enclosures.
•
Mixed SFF and LFF upgrades must split the SFF drives across all SFF enclosures and the LFF drives across all LFF enclosures.
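To see how many drives of each type are already installed before planning the split, the drive listing can be filtered by device type; a sketch only, assuming the -p -devtype pattern option of showpd:
cli> showpd -p -devtype FC
cli> showpd -p -devtype NL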
Check Initial Status
Issue the showpd command (remember to log your session):
cli> showpd
                                 -----Size(MB)------ ----Ports----
 Id CagePos Type RPM State          Total      Free A      B      Cap(GB)
  0 0:0:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  1 0:1:0   FC    10 normal        417792    312320 1:0:1  0:0:1*     450
  2 0:2:0   FC    10 normal        417792    314368 1:0:1* 0:0:1      450
  3 0:3:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  4 0:4:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  5 0:5:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  6 1:0:0   NL     7 normal       1848320   1371136 1:0:1* 0:0:1     2000
  7 1:4:0   NL     7 normal       1848320   1371136 1:0:1  0:0:1*    2000
  8 1:8:0   NL     7 normal       1848320   1371136 1:0:1* 0:0:1     2000
  9 1:12:0  NL     7 normal       1848320   1371136 1:0:1  0:0:1*    2000
 10 1:16:0  NL     7 normal       1848320   1372160 1:0:1* 0:0:1     2000
 11 1:20:0  NL     7 normal       1848320   1372160 1:0:1  0:0:1*    2000
 12 2:0:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 13 2:1:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 14 2:2:0   FC    10 normal        417792    314368 1:0:2* 0:0:2      450
 15 2:3:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 16 2:4:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 17 2:5:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
---------------------------------------------------------------------
 18 total                        16103424  11990016
Inserting Disk Drives
For information about inserting disk drives, see “Installing a Disk Drive” (page 29).
Check Status
Issue the showpd command. Each of the inserted disk drives has a state of new and is ready to be admitted into the system.
cli> showpd
                                 -----Size(MB)------ ----Ports----
 Id CagePos Type RPM State          Total      Free A      B      Cap(GB)
  0 0:0:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  1 0:1:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  2 0:2:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  3 0:3:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
  4 0:4:0   FC    10 normal        417792    313344 1:0:1* 0:0:1      450
  5 0:5:0   FC    10 normal        417792    313344 1:0:1  0:0:1*     450
--- 0:6:0   FC    10 new           417792         0 1:0:1* 0:0:1        0
--- 0:7:0   FC    10 new           417792         0 1:0:1  0:0:1*       0
  6 1:0:0   NL     7 normal       1848320   1371136 1:0:1* 0:0:1     2000
  7 1:4:0   NL     7 normal       1848320   1371136 1:0:1  0:0:1*    2000
  8 1:8:0   NL     7 normal       1848320   1371136 1:0:1* 0:0:1     2000
  9 1:12:0  NL     7 normal       1848320   1371136 1:0:1  0:0:1*    2000
 10 1:16:0  NL     7 normal       1848320   1372160 1:0:1* 0:0:1     2000
 11 1:20:0  NL     7 normal       1848320   1372160 1:0:1  0:0:1*    2000
--- 1:1:0   NL    10 new          1848320         0 1:0:1* 0:0:1        0
--- 1:5:0   NL    10 new          1848320         0 1:0:1  0:0:1*       0
 12 2:0:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 13 2:1:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 14 2:2:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 15 2:3:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
 16 2:4:0   FC    10 normal        417792    313344 1:0:2* 0:0:2      450
 17 2:5:0   FC    10 normal        417792    313344 1:0:2  0:0:2*     450
--- 2:6:0   FC    10 new           417792         0 1:0:1* 0:0:1        0
--- 2:7:0   FC    10 new           417792         0 1:0:1  0:0:1*       0
---------------------------------------------------------------------
 24 total                         7520256   5640192
Check Progress
Issue the showpd -c command to check chunklet initialization status:
cli> showpd -c
                          ---------- Normal Chunklets ----------- -- Spare Chunklets --
                          - Used - --------- Unused ----------    -Used- --- Unused ---
 Id CagePos Type State Total  OK Fail Free Uninit Unavail Fail    OK Fail Free Uninit Fail
  0 0:0:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
  1 0:1:0   FC   normal  408  35    0  323      0       0    0     0    0   51      0    0
  2 0:2:0   FC   normal  408  33    0  323      0       0    0     0    0   51      0    0
  3 0:3:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
  4 0:4:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
  5 0:5:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
  6 1:0:0   NL   normal 1805   0    0 1339      0       0    0     0    0  466      0    0
  7 1:4:0   NL   normal 1805   0    0 1339      0       0    0     0    0  466      0    0
  8 1:8:0   NL   normal 1805   0    0 1339      0       0    0     0    0  466      0    0
  9 1:12:0  NL   normal 1805   0    0 1339      0       0    0     0    0  466      0    0
 10 1:16:0  NL   normal 1805   0    0 1339      0       0    0     0    0  466      0    0
 11 1:20:0  NL   normal 1805   0    0 1339      0       0    0     0    0  466      0    0
 12 2:0:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
 13 2:1:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
 14 2:2:0   FC   normal  408  33    0  323      0       0    0     0    0   51      0    0
 15 2:3:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
 16 2:4:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
 17 2:5:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
 18 0:6:0   FC   normal  408   0    0   53    304       0    0     0    0    0     51    0
 19 0:7:0   FC   normal  408   0    0   53    304       0    0     0    0    0     51    0
 20 1:1:0   NL   normal 1805   0    0  559    780       0    0     0    0    0    466    0
 21 1:5:0   NL   normal 1805   0    0  559    780       0    0     0    0    0    466    0
 22 2:6:0   FC   normal  408   0    0   53    304       0    0     0    0    0     51    0
 23 2:7:0   FC   normal  408   0    0   53    304       0    0     0    0    0     51    0
-----------------------------------------------------------------------------------------
 28 total             20968  383    0 13746  2776       0    0     0    0 3408   1136    0
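Chunklet initialization runs in the background, and the Uninit columns shrink toward zero as it proceeds. To watch only the drives in a single enclosure, the output can be restricted by cage; a sketch only, assuming the -p -cg pattern option of showpd:
cli> showpd -c -p -cg 1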
Upgrade Completion
When chunklet initialization is complete, issue the showpd -c command to display the available
capacity:
cli> showpd -c
                          ---------- Normal Chunklets ----------- -- Spare Chunklets --
                          - Used - --------- Unused ----------    -Used- --- Unused ---
 Id CagePos Type State Total  OK Fail Free Uninit Unavail Fail    OK Fail Free Uninit Fail
  0 0:0:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
  1 0:1:0   FC   normal  408  35    0  322      0       0    0     0    0   51      0    0
  2 0:2:0   FC   normal  408  33    0  324      0       0    0     0    0   51      0    0
  3 0:3:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
  4 0:4:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
  5 0:5:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
  6 1:0:0   NL   normal 1805   0    0 1455      0       0    0     0    0  350      0    0
  7 1:4:0   NL   normal 1805   0    0 1455      0       0    0     0    0  350      0    0
  8 1:8:0   NL   normal 1805   0    0 1456      0       0    0     0    0  349      0    0
  9 1:12:0  NL   normal 1805   0    0 1456      0       0    0     0    0  349      0    0
 10 1:16:0  NL   normal 1805   0    0 1456      0       0    0     0    0  349      0    0
 11 1:20:0  NL   normal 1805   0    0 1456      0       0    0     0    0  349      0    0
 12 2:0:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
 13 2:1:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
 14 2:2:0   FC   normal  408  33    0  324      0       0    0     0    0   51      0    0
 15 2:3:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
 16 2:4:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
 17 2:5:0   FC   normal  408  34    0  323      0       0    0     0    0   51      0    0
 18 0:6:0   FC   normal  408   0    0  357      0       0    0     0    0   51      0    0
 19 0:7:0   FC   normal  408   0    0  357      0       0    0     0    0   51      0    0
 20 1:1:0   NL   normal 1805   0    0 1456      0       0    0     0    0  349      0    0
 21 1:5:0   NL   normal 1805   0    0 1456      0       0    0     0    0  349      0    0
 22 2:6:0   FC   normal  408   0    0  357      0       0    0     0    0   51      0    0
 23 2:7:0   FC   normal  408   0    0  357      0       0    0     0    0   51      0    0
-----------------------------------------------------------------------------------------
 24 total             20968  407    0 16951      0      0    0     0    0 3610      0    0
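For a per-drive view of how the new capacity is allocated between volumes, spares, and free space, the space listing can be used; this is a sketch only, assuming the -space option of showpd is available in the installed release:
cli> showpd -space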
Upgrading PCIe Adapters
PCIe adapters connect the controller nodes to host computers and disk drives. Upgrading PCIe adapters involves installing additional supported types of adapters or replacing existing adapters.
WARNING! Fibre Channel HBA and iSCSI CNA upgrade on the HP 3PAR StoreServ 7400
Storage system must be done by authorized service personnel and cannot be done by a customer.
Contact your local service provider for assistance. Upgrade in HP 3PAR StoreServ 7200 Storage
systems may be performed by the customer.
CAUTION: To avoid possible data loss, only one node at a time should be removed from the
storage system. To prevent overheating, node replacement requires a maximum service time of 30
minutes.
NOTE: If two FC HBAs and two CNA HBAs are added in a system, the HBAs should be installed
in nodes 0 and 1, and the CNAs should be installed in nodes 2 and 3. The first two HBAs or
CNAs added in a system should be added to nodes 0 and 1 for the initially installed system and
for field HBA upgrades only.
1. Identify and shut down the node. For information about identifying and shutting down the node, see “Node Identification and Shutdown” (page 32).
2. Remove the node and the node cover.
3. If a PCIe Adapter Assembly is already installed:
a. Remove the PCIe Adapter Assembly and disconnect the PCIe Adapter from the riser card.
b. Install the new PCIe Adapter onto the riser card and insert the assembly into the node.
For information about installing a PCIe adapter, see “PCIe Adapter Installation”.
4. If a PCIe Adapter is not installed:
a. Remove the PCIe Adapter riser card.
b. Install the new PCIe Adapter onto the riser card and insert the assembly into the node.
For information about installing a PCIe adapter, see “PCIe Adapter Installation”.
5. Replace the node cover and the node.
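After the serviced node rejoins the cluster, one way to confirm that the new adapter is recognized is the port inventory listing; a sketch, assuming the -i option of showport:
cli> showport -i
The ports of the new adapter should appear for the node that was serviced, with their model and firmware reported.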
Upgrading the HP 3PAR OS and Service Processor
Upgrade the OS and SP using the following upgrade guides: HP 3PAR Upgrade Pre-Planning
Guide and the HP 3PAR Service Processor Software Installation Instructions.
4 Support and Other Resources
Contacting HP
For worldwide technical support information, see the HP support website:
http://www.hp.com/support
Before contacting HP, collect the following information:
•
Product model names and numbers
•
Technical support registration number or Service Agreement ID (if applicable)
•
Product serial numbers
•
Error messages
•
Operating system type and revision level
•
Detailed questions
Specify the type of support you are requesting:
HP 3PAR storage system                                   Support request
HP 3PAR StoreServ 7200, 7400, and 7450 Storage systems   StoreServ 7000 Storage
HP 3PAR StoreServ 10000 Storage systems                  3PAR or 3PAR Storage
HP 3PAR T-Class storage systems                          3PAR or 3PAR Storage
HP 3PAR F-Class storage systems                          3PAR or 3PAR Storage
HP 3PAR documentation
For information about:
See:
Supported hardware and software platforms
The Single Point of Connectivity Knowledge for HP Storage Products (SPOCK) website: SPOCK
(http://www.hp.com/storage/spock)
Locating HP 3PAR documents
The HP Storage Information Library:
Storage Information Library
(http://www.hp.com/go/storage/docs/)
By default, HP 3PAR Storage is selected under Products
and Solutions.
Repair and replace procedures (media)
The HP Services Media Library:
HP Services Media Library (http://thesml.hp.com/) for
service personnel
Partner Services Media Library
(http://h20181.www2.hp.com/plmcontent/NACSC/SML/)
for partners
HP 3PAR storage system software
Storage concepts and terminology
HP 3PAR StoreServ Storage Concepts Guide
Using the HP 3PAR Management Console (GUI) to configure and administer HP 3PAR storage systems
HP 3PAR Management Console User's Guide
Using the HP 3PAR CLI to configure and administer storage systems
HP 3PAR Command Line Interface Administrator’s Manual
CLI commands
HP 3PAR Command Line Interface Reference
Analyzing system performance
HP 3PAR System Reporter Software User's Guide
Installing and maintaining the Host Explorer agent in order
to manage host configuration and connectivity information
HP 3PAR Host Explorer User’s Guide
Creating applications compliant with the Common Information Model (CIM) to manage HP 3PAR storage systems
HP 3PAR CIM API Programming Reference
Migrating data from one HP 3PAR storage system to another
HP 3PAR-to-3PAR Storage Peer Motion Guide
Configuring the Secure Service Custodian server in order to
monitor and control HP 3PAR storage systems
HP 3PAR Secure Service Custodian Configuration Utility
Reference
Using the CLI to configure and manage HP 3PAR Remote
Copy
HP 3PAR Remote Copy Software User’s Guide
Updating HP 3PAR operating systems
HP 3PAR Upgrade Pre-Planning Guide
Identifying storage system components, troubleshooting
information, and detailed alert information
HP 3PAR F-Class, T-Class, and StoreServ 10000 Storage
Troubleshooting Guide
Installing, configuring, and maintaining the HP 3PAR Policy
Server
HP 3PAR Policy Server Installation and Setup Guide
HP 3PAR Policy Server Administration Guide
For information about:
See:
Planning for HP 3PAR storage system setup
Hardware specifications, installation considerations, power requirements, networking options, and cabling information
for HP 3PAR storage systems
HP 3PAR 7200, 7400, and 7450 storage systems
HP 3PAR StoreServ 7000 Storage Site Planning Manual
HP 3PAR StoreServ 7450 Storage Site Planning Manual
HP 3PAR 10000 storage systems
HP 3PAR StoreServ 10000 Storage Physical Planning
Manual
HP 3PAR StoreServ 10000 Storage Third-Party Rack
Physical Planning Manual
Installing and maintaining HP 3PAR 7200, 7400, and 7450 storage systems
Installing 7200, 7400, and 7450 storage systems and
initializing the Service Processor
HP 3PAR StoreServ 7000 Storage Installation Guide
HP 3PAR StoreServ 7450 Storage Installation Guide
HP 3PAR StoreServ 7000 Storage SmartStart Software
User’s Guide
Maintaining, servicing, and upgrading 7200, 7400, and 7450 storage systems
HP 3PAR StoreServ 7000 Storage Service Guide
HP 3PAR StoreServ 7450 Storage Service Guide
Troubleshooting 7200, 7400, and 7450 storage systems
HP 3PAR StoreServ 7000 Storage Troubleshooting Guide
HP 3PAR StoreServ 7450 Storage Troubleshooting Guide
Maintaining the Service Processor
HP 3PAR Service Processor Software User Guide
HP 3PAR Service Processor Onsite Customer Care
(SPOCC) User's Guide
HP 3PAR host application solutions
Backing up Oracle databases and using backups for disaster recovery
HP 3PAR Recovery Manager Software for Oracle User's Guide
Backing up Exchange databases and using backups for
disaster recovery
HP 3PAR Recovery Manager Software for Microsoft
Exchange 2007 and 2010 User's Guide
Backing up SQL databases and using backups for disaster
recovery
HP 3PAR Recovery Manager Software for Microsoft SQL
Server User’s Guide
Backing up VMware databases and using backups for
disaster recovery
HP 3PAR Management Plug-in and Recovery Manager
Software for VMware vSphere User's Guide
Installing and using the HP 3PAR VSS (Volume Shadow Copy Service) Provider software for Microsoft Windows
HP 3PAR VSS Provider Software for Microsoft Windows User's Guide
Best practices for setting up the Storage Replication Adapter for VMware vCenter
HP 3PAR Storage Replication Adapter for VMware vCenter Site Recovery Manager Implementation Guide
Troubleshooting the Storage Replication Adapter for VMware vCenter Site Recovery Manager
HP 3PAR Storage Replication Adapter for VMware vCenter Site Recovery Manager Troubleshooting Guide
Installing and using vSphere Storage APIs for Array
Integration (VAAI) plug-in software for VMware vSphere
HP 3PAR VAAI Plug-in Software for VMware vSphere
User's Guide
Servicing HP 3PAR storage systems
For information about:
See:
Maintaining the HP 3PAR Service Processor
Initializing and using the Service Processor
HP 3PAR Service Processor Software User Guide: Service
Edition
Upgrading the Service Processor
HP 3PAR Service Processor Software Upgrade
Instructions: Service Edition
Troubleshooting the Service Processor
HP 3PAR Service Processor Troubleshooting Guide:
Service Edition
Remotely servicing all storage systems
Remotely servicing HP 3PAR storage systems
HP 3PAR Secure Service Collector Remote Operations
Guide
Servicing 7200 and 7400 storage systems
Maintaining, servicing, and upgrading 7200 and 7400
storage systems
HP 3PAR StoreServ 7000 Storage Service Guide: Service
Edition
Troubleshooting 7200 and 7400 storage systems
HP 3PAR StoreServ 7000 Storage Troubleshooting
Guide: Service Edition
Servicing 10000 storage systems
Using the Installation Checklist
HP 3PAR StoreServ 10000 Storage Installation Checklist
(for HP 3PAR Cabinets): Service Edition
Upgrading 10000 storage systems
HP 3PAR StoreServ 10000 Storage Upgrade Guide:
Service Edition
Maintaining 10000 storage systems
HP 3PAR StoreServ 10000 Storage Maintenance
Manual: Service Edition
Installing and uninstalling 10000 storage systems
HP 3PAR StoreServ 10000 Storage Installation and
Deinstallation Guide: Service Edition
Servicing T-Class storage systems
Using the Installation Checklist
HP 3PAR T-Class Storage System Installation Checklist
(for HP 3PAR Cabinets): Service Edition
Upgrading T-Class storage systems
HP 3PAR T-Class Storage System Upgrade Guide:
Service Edition
Maintaining T-Class storage systems
HP 3PAR T-Class Storage System Maintenance Manual:
Service Edition
Installing and uninstalling the T-Class storage system
HP 3PAR T-Class Installation and Deinstallation Guide:
Service Edition
Servicing F-Class storage systems
Using the Installation Checklist
HP 3PAR F-Class Storage System Installation Checklist
(for HP 3PAR Cabinets): Service Edition
Upgrading F-Class storage systems
HP 3PAR F-Class Storage System Upgrades Guide:
Service Edition
Maintaining F-Class storage systems
HP 3PAR F-Class Storage System Maintenance Manual:
Service Edition
Installing and uninstalling the F-Class storage system
HP 3PAR F-Class Storage System Installation and
Deinstallation Guide: Service Edition
Typographic conventions
Table 22 Document conventions
Convention
Element
Bold text
• Keys that you press
• Text you typed into a GUI element, such as a text box
• GUI elements that you click or select, such as menu items, buttons,
and so on
Monospace text
• File and directory names
• System output
• Code
• Commands, their arguments, and argument values
<Monospace text in angle brackets>
• Code variables
• Command variables
Bold monospace text
• Commands you enter into a command line interface
• System output emphasized for scannability
WARNING! Indicates that failure to follow directions could result in bodily harm or death, or in
irreversible damage to data or to the operating system.
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
NOTE: Provides additional information.
Required
Indicates that a procedure must be followed as directed in order to achieve a functional and
supported implementation based on testing at HP.
HP 3PAR branding information
•
The server previously referred to as the "InServ" is now referred to as the "HP 3PAR StoreServ Storage system."
•
The operating system previously referred to as the "InForm OS" is now referred to as the "HP 3PAR OS."
•
The user interface previously referred to as the "InForm Management Console (IMC)" is now referred to as the "HP 3PAR Management Console."
•
All products previously referred to as “3PAR” products are now referred to as "HP 3PAR" products.
5 Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the
documentation, send any errors, suggestions, or comments to Documentation Feedback
(docsfeedback@hp.com). Include the document title and part number, version number, or the URL
when submitting your feedback.
A Installing Storage Software Manually
WARNING! Use this procedure only if access to the HP SmartStart CD or the Storage System and Service Processor Setup wizards is not available.
This appendix describes how to manually set up and configure the storage system software and
SP. You must execute these scripted procedures from a laptop after powering on the storage system.
Connecting to the Laptop
You can connect the laptop directly to a controller node or SP using the connector cables. Once
you have established a serial or Ethernet connection, you can access the CLI to perform maintenance
procedures.
Connecting the Laptop to the Controller Node
Connect an RJ45 cable from the controller node MFG port (known as the public interface) to the laptop LAN port.
For a two-node system, both nodes need to be connected to the public network. HP recommends
that each node of a four-node system have a public network connection. If only two connections
are used on a four-node system, each node pair should have a connection. Node pairs are 0–1
and 2–3.
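To check which node connections the cluster is actually using for its admin network, the network status can be displayed from the CLI; a minimal sketch using shownet:
cli> shownet
The output lists the assigned IP address, the nodes configured to use it, and which node is currently active.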
Connecting the Laptop to the HP 3PAR Service Processor
See the HP 3PAR StoreServ 7000 Storage Installation Guide.
Serial Cable Connections
The gray adapter for the laptop (PN 180-0055) is used on the serial port for connection to the controller node MFG port. On a modern laptop without a serial port, you will need a USB-to-serial adapter.
The black adapter for the SP (PN 180-0059) connects the Ethernet port on the laptop to the
maintenance port on the controller node.
Maintenance PC Connector Pin-outs
This pin-out is used at the laptop end of a standard Ethernet cable to connect to the maintenance port on the controller node, and, in conjunction with the SP adapter (PN 180-0059-01) and a standard Ethernet cable, to connect the laptop to the SP serial port.
DB9 (Female) RJ45
• Pin 2-Pin 2 (orange)
• Pin 3-Pin 3 (black)
• Pin 5-Pin 5 (green)
Figure 76 Maintenance PC Connector Pin-outs
Service Processor Connector Pin-outs
This pin-out is used at the SP end of a standard Ethernet cable, in conjunction with the laptop adapter (PN 180-0055-01), to allow a serial connection to the SP.
DB9 (Female) RJ45
• Pin 2-Pin 3 (orange)
• Pin 3-Pin 2 (black)
• Pin 5-Pin 5 (green)
Figure 77 Service Processor Connector Pin-outs
Manually Initializing the Storage System Software
Use the following procedures to manually set up the storage system and SP when access to the
wizards is not available.
NOTE: You must physically connect a laptop to the SP to complete these procedures. See the
section “Connecting a Laptop to the SP” in the HP 3PAR StoreServ 7000 Storage Installation Guide.
Manually Setting up the Storage System
The Out-of-the-Box (OOTB) script guides you through setting up and configuring the storage system
software:
1. Connect the PC to the controller node 0 through a serial cable and log in with user ID
console.
2. From the Console Menu, select option 1, Out of The Box Procedure.
3PAR Console Menu 1400293-1 3.1.2.xxx
1. Out Of The Box Procedure
2. Run ootb-stress-analyzer
3. Re-enter network configuration
4. Update the CBIOS
5. Enable or disable CLI error injections
6. Perform an IDE rescue
7. Set up the system to wipe and rerun ootb
8. Cancel a wipe
9. Perform a deinstallation
10. Update the system for recently added hardware (admithw)
11. Check system health (checkhealth)
12. Exit
> 1
WARNING! Proceeding with the system setup script causes complete and irrecoverable loss
of data. Do not perform this procedure on a system that has already undergone the system
setup. If you quit this setup script at any point, you must repeat the entire process.
If the system is ready for the system setup script, the following message appears:
It appears your Cluster is in a proper manual startup state to proceed.
Cluster has the following nodes:
Node 0:
Node 1:
...
Is this correct?
Enter < C > to continue or < Q > to quit
==> c
3. Verify the number of controller nodes in the system, then type c and press Enter. If the system is not ready for the system setup script, an error message appears. After following any instructions and correcting any problems, return to step 2 and attempt to run the setup script again.
4. Set up the time zone for the operating site:
a. Select a location from the list, type the corresponding number <N>, and press Enter.
b. Select a country, type the corresponding number <N>, and press Enter.
c. Select a time zone region, type the corresponding number <N>, and press Enter.
d. Verify the time zone settings are correct, type 1, and press Enter.
NOTE: The system automatically makes the time zone change permanent. Disregard
the instructions on the screen for appending the command to make the time zone change
permanent.
5. Press Enter to accept the default time and date, or type the date and time in the format
<MMDDhhmmYYYY>, where MM, DD, hh, mm, and YYYY are the current month, day, hour,
minute, and year, respectively, and then press Enter.
Current date according to the system: <date_and_time>
Enter dates in MMDDhhmmYYYY format. For example, 031822572008 would be March 18, 2008 10:57 PM.
Enter the correct date and time, or just press enter to accept the date shown
above.=> <enter>
(...)
Is this the desired date? (y/n) y
6. To confirm the date setting, type y and press Enter.
7. Name the storage system using up to 31 alphanumeric characters. Type yes and press Enter to confirm the name.
NOTE: The system name can include only letters, numbers and the special characters “.-_”,
(dot, hyphen, underscore) and can be no more than 31 characters long. The first character
in the sequence must be a letter or number.
Enter the InServ system name ==> <systemname>
Cluster will be initialized with the name <systemname>
IS THIS THE CORRECT NAME?
yes/change
=> yes
Cluster is being initialized with the name <systemname> ...Please Wait...
8. Verify the OS version is correct. Type c and press Enter to continue.
Patches: None
Component Name     Version
CLI system         3.1.2.xxx
CLI Client         3.1.2.xxx
System Manager     3.1.2.xxx
Kernel             3.1.2.xxx
TPD Kernel Code    3.1.2.xxx
Enter < C > to continue or < Q > to quit ==> c
9. Verify the number of drives in the storage system. Type c and press Enter to continue.
10. If there are any missing or nonstandard connections, an error message displays. Verify that
all nonstandard connections are correct or complete any missing connections, then type r
and press Enter to recheck the connections. If it is necessary to quit the setup procedure to
resolve an issue, type q and press Enter. When all connections are correct, type c and press
Enter to continue.
11. The system prompts you to begin the system stress test script. Type y and press Enter. The
system stress test continues to run in the background as you complete the system setup.
At this point, it is recommended that the OOTB stress test be started. This will
run heavy I/O on the PDs for 1 hour following 1 hour of chunklet initialization.
The results of the stress test can be checked in approximately 2 hours and 15
minutes. Chunklet initialization will continue even after the stress test
completes. Select the "Run ootb-stress-analyzer" option from the console menu
to check the results.
Do you want to start the test (y/n)? ==> y
12. When finished, type c and press Enter.
13. Create spare chunklets as directed.
CAUTION: HP recommends that at least four physical disks worth of chunklets be designated
as spares to support the servicemag command. The default sparing options create an
appropriate number of spare chunklets for the number of disks installed.
Select one of the following spare chunklet selection algorithms:
Custom allows specifying the exact number of chunklets, but is not recommended
as spares must be manually added when new disks are added.
Enter "Ma" for maximal, "D" for default, "Mi" for minimal, or "C" for custom: D
Selecting spare chunklets...
14. Verify the correct license is displayed and press Enter. If the license information is not correct,
type c and press Enter to continue with the system setup. After completing the system setup,
contact your local service provider for technical support to obtain the proper license keys.
15. Complete the network configuration:
a. When prompted, type the number of IP addresses used by the system (usually 1) and
press Enter.
b. Type the IP address and press Enter.
c. Type the netmask and press Enter. When prompted, press Enter again to accept the
previously entered netmask.
d. Type the gateway IP address and press Enter.
e. Specify the speed and duplex and press Enter.
Please specify speed (10, 100 or 1000) and duplex (half or full), or auto to use autonegotiation: auto
NOTE: If an NTP system IP address is not provided, use the SP IP address.
f. Type the NTP system IP address and press Enter.
g. If you indicated more than one IP address, the setup script prompts you to choose which nodes to use for each address. Note, <X Y Z> are nodes (for example: 2 3 for nodes 2 and 3).
Enter config for IP #0
IP Address: <IPaddress>
Netmask: <netmask>
Nodes Using IP address: <X Y Z>
h. Verify the IP address information is correct. Type y and press Enter.
16. The OOTB has completed when the following displays:
Out-Of-The-Box has completed.
Please continue with the SP moment of birth.
Exiting Out-Of-The-Box Experience...
Storage System Console – Out Of The Box
IMPORTANT: This procedure is not intended for customer use and should only be used if SmartStart
or Setup Wizards cannot be run.
1. Create a serial connection to Controller Node 0.
NOTE: Always log the session output.
2. Log on as console using the appropriate password.
The following is displayed:
3PAR Console Menu 1699808-0 3.1.2.278
1. Out Of The Box Procedure
2. Re-enter network configuration
3. Update the CBIOS
4. Enable or disable CLI error injections
5. Perform a Node-to-Node rescue
6. Set up the system to wipe and rerun ootb
7. Cancel a wipe
8. Perform a deinstallation
9. Update the system for recently added hardware (admithw)
10. Check system health (checkhealth)
11. Exit
> 1
It appears your Cluster is in a proper manual startup state to proceed.
Welcome to the Out-Of-The-Box Experience 3.1.2.278
*****************************************************************************
*****************************************************************************
*
*
*
CAUTION!! CONTINUING WILL CAUSE COMPLETE AND IRRECOVERABLE DATA LOSS
*
*
*
*****************************************************************************
*****************************************************************************
You need to have the InServ network config information available.
This can be obtained from the Systems Assurance Document.
DO YOU WISH TO CONTINUE?
yes/no
==> yes
Cluster has the following nodes:
Node 0
Node 1
Enter < C > to continue or < Q > to quit
==> c
Please identify a location so that time zone rules can be set correctly.
Please select a continent or ocean.
1) Africa
2) Americas
3) Antarctica
4) Arctic Ocean
5) Asia
6) Atlantic Ocean
7) Australia
8) Europe
9) Indian Ocean
10) Pacific Ocean
11) none - I want to specify the time zone using the Posix TZ format.
#? 2
Please select a country.
1) Anguilla
28) Haiti
2) Antigua & Barbuda
29) Honduras
3) Argentina
30) Jamaica
4) Aruba
31) Martinique
5) Bahamas
32) Mexico
6) Barbados
33) Montserrat
7) Belize
34) Nicaragua
8) Bolivia
35) Panama
9) Bonaire Sint Eustatius & Saba 36) Paraguay
10) Brazil
37) Peru
11) Canada
38) Puerto Rico
12) Cayman Islands
39) Sint Maarten
13) Chile
40) St Barthelemy
14) Colombia
41) St Kitts & Nevis
15) Costa Rica
42) St Lucia
16) Cuba
43) St Martin (French part)
17) Curacao
44) St Pierre & Miquelon
18) Dominica
45) St Vincent
19) Dominican Republic
46) Suriname
20) Ecuador
47) Trinidad & Tobago
21) El Salvador
48) Turks & Caicos Is
22) French Guiana
49) United States
23) Greenland
50) Uruguay
24) Grenada
51) Venezuela
25) Guadeloupe
52) Virgin Islands (UK)
26) Guatemala
53) Virgin Islands (US)
27) Guyana
#? 49
Please select one of the following time zone regions.
1) Eastern Time
2) Eastern Time - Michigan - most locations
3) Eastern Time - Kentucky - Louisville area
4) Eastern Time - Kentucky - Wayne County
5) Eastern Time - Indiana - most locations
6) Eastern Time - Indiana - Daviess, Dubois, Knox & Martin Counties
7) Eastern Time - Indiana - Pulaski County
8) Eastern Time - Indiana - Crawford County
9) Eastern Time - Indiana - Pike County
10) Eastern Time - Indiana - Switzerland County
11) Central Time
12) Central Time - Indiana - Perry County
13) Central Time - Indiana - Starke County
14) Central Time - Michigan - Dickinson, Gogebic, Iron & Menominee Counties
15) Central Time - North Dakota - Oliver County
16) Central Time - North Dakota - Morton County (except Mandan area)
17) Central Time - North Dakota - Mercer County
18) Mountain Time
19) Mountain Time - south Idaho & east Oregon
20) Mountain Time - Navajo
21) Mountain Standard Time - Arizona
22) Pacific Time
23) Alaska Time
24) Alaska Time - Alaska panhandle
25) Alaska Time - southeast Alaska panhandle
26) Alaska Time - Alaska panhandle neck
27) Alaska Time - west Alaska
28) Aleutian Islands
29) Metlakatla Time - Annette Island
30) Hawaii
#? 22
The following information has been given:
United States
Pacific Time
Therefore TZ='America/Los_Angeles' will be used.
Local time is now:
Wed Dec 5 11:19:23 PST 2012.
Universal Time is now: Wed Dec 5 19:19:23 UTC 2012.
Is the above information OK?
1) Yes
2) No
#? 1
You can make this change permanent for yourself by appending the line
TZ='America/Los_Angeles'; export TZ
to the file '.profile' in your home directory; then log out and log in again.
Here is that TZ value again, this time on standard output so that you can use the /usr/bin/tzselect command in
shell scripts:
Updating all nodes to use timezone America/Los_Angeles...
Timezone set successfully.
Setting TOD on all nodes.
Current date according to the system: Wed Dec  5 11:19:30 PST 2012
Enter dates in MMDDhhmmYYYY format.
For example, 031822572002 would be March 18, 2002 10:57 PM.
Enter the correct date and time, or just press enter to accept the date shown above.
Enter the InServ system name ==> 3par_7200
Cluster will be initialized with the name < 3par_7200 >
IS THIS THE CORRECT NAME?
yes/change
==> yes
Cluster is being initialized with the name < 3par_7200 > ...Please Wait...
Please verify your InForm OS versions are correct.
Release version 3.1.2.412
Patches: None
Component Name     Version
CLI Server         3.1.2.412
CLI Client         3.1.2.412
System Manager     3.1.2.412
Kernel             3.1.2.412
TPD Kernel Code    3.1.2.412
Enter < C > to continue or < Q > to quit
==> c
Examining the port states...
All ports are in acceptable states.
Examining state of new disks...
Found < 12 > HCBRE0450GBAS10K disks
Found < 6 > HMRSK2000GBAS07K disks
Cluster has < 18 > total disks in < 18 > magazines.
< 18 > are new.
Now would be the time to fix any disk problems.
Enter < C > to continue or < Q > to quit
==> c
Ensuring all ports are properly connected before continuing... Please Wait...
Cages appear to be connected correctly, continuing.
Examining drive cage firmware... Please wait a moment...
All disks have current firmware.
Issuing admitpd... Please wait a moment...
admitpd completed with the following results...
Found < 12 > HCBRE0450GBAS10K disks
Found < 6 > HMRSK2000GBAS07K disks
Cluster has < 18 > total disks in < 18 > magazines.
< 18 > are valid.
At this point, it is recommended that the OOTB stress test be started. This will run heavy I/O on the PDs for
1 hour following 1 hour of chunklet initialization. The stress test will stop in approximately 2 hours and
15 minutes. Chunklet initialization may continue even after the stress test completes. Failures will show up
as slow disk events.
Do you want to start the test (y/n)? ==> y
Starting system stress test...
Creating admin volume.
Failed -... will retry in roughly 30 seconds.
... re-issuing the request
Creating .srdata volume.
Failed -... will retry in roughly 30 seconds.
... re-issuing the request
Failed -... will retry in roughly 100 seconds.
... re-issuing the request
Failed -... will retry in roughly 37 seconds.
... re-issuing the request
Failed -1 chunklet out of 120 is not clean yet
... will retry in roughly 5 seconds
... re-issuing the request
Failed -1 chunklet out of 120 is not clean yet
... will retry in roughly 5 seconds
... re-issuing the request

InServ Network Configuration
This system has only 2 nodes, so only 1 IP address is supported.
Select IP address type you want to assign:
1: IPv4 Address
2: Both IPv4 and IPv6 Address
> 1
IPv4 Address: 192.168.56.212
Netmask [255.255.255.0]:
Please specify a gateway IP address (enter for default of 192.168.56.1,
"none" if none):
Please specify speed (10, 100 or 1000) and duplex (half or full), or auto to use autonegotiation: auto
NTP server's IP address (enter if none):
DNS server's IP address (enter if none):
Disabling non-encrypted ports will disable SP event handling,Recovery Manager for VMWare, SRA, and CLI connections
with default parameters. It should only be done if there is a strict requirement for all connections to be
encrypted.
Disable non-encrypted ports? n
Please verify the following:
IPv4 Address: 192.168.56.212
Netmask: 255.255.255.0
IPv6 Address: ::/0
Nodes: 0 1 2 3 4 5 6 7
Default route through gateway 192.168.56.1, via 192.168.56.212
Speed and duplex will be autonegotiated.
No NTP server.
No DNS server.
Non-encrypted ports are enabled.
Does this appear to be correct? [y/n] y
Updated netc configuration in the PR.
SIGHUP has been sent to the netc controller.
The network configuration should reach the new state momentarily.
Checking for active ethernet interface...
Active ethernet interface found.
Creating logging LD for node 0.
Creating logging LD for node 1.
Creating 256 MB of preserved metadata storage on nodes 0 and 1.
Creating 7936 MB of preserved data storage on nodes 0 and 1.
Failed -7 chunklets out of 24 are not clean yet
... will retry in roughly 10 seconds
... re-issuing the request
Failed -3 chunklets out of 24 are not clean yet
... will retry in roughly 10 seconds
... re-issuing the request
Failed -2 chunklets out of 24 are not clean yet
... will retry in roughly 11 seconds
... re-issuing the request
Failed -1 chunklet out of 24 is not clean yet
... will retry in roughly 5 seconds
... re-issuing the request
The logging LDs have been properly created.
Creating system tasks
Creating scheduled task check_slow_disk
Creating scheduled task remove_expired_vvs
Creating scheduled task move_back_chunklet
Creating scheduled task sample
Creating extended roles
Checking if the rights assigned to extended roles need to be updated...
create role updated
basic_edit role updated
3PAR_AO role updated
3PAR_RM role updated
Calculating space usage of sparing algorithms...
Select one of the following spare chunklet selection algorithms:
Minimal: About 11% of the system chunklets will be used.
Default: About 23% of the system chunklets will be used.
Maximal: About 17% of the system chunklets will be used.
Custom allows specifying the exact number of chunklets, but is not recommended
as spares must be manually added when new disks are added.
Enter "Ma" for maximal, "D" for default, "Mi" for minimal, or "C" for custom: d
Selecting spare chunklets...
Rebalancing and adding FC spares...
FC spare chunklets rebalanced; number of FC spare chunklets increased by 816 for a total of 816.
Rebalancing and adding NL spares...
NL spare chunklets rebalanced; number of NL spare chunklets increased by 2794 for a total of 2794.
Rebalancing and adding SSD spares...
No SSD PDs present
Please verify that the correct license features are enabled:
No license has been entered.
If the enabled features are not correct, take note of this and correct the issue after the out of the box script
finishes.
Press enter to continue.
Support for the CIM-based management API is disabled by default. It can be enabled at this point.
Does the customer want this feature to be enabled (y/n)? ==> n
Saving backup copy of eventlog as event.txt --> /common/ on node1
Determining most recent copy of /common/pr_ide/biosm*
Copying node0:/common/pr_ide/biosmsg* --> node1:/common//biosmsg*
Creating default cpgs
Creating default AO Config
Not enough CPGs to create default AO CFG.
Issues were found by checkhealth:
Component -----------Description------------ Qty
License   No license has been entered.          1
PD        PD count exceeds licensed quantity    1
These alerts may indicate issues with the system; please see the Messages and Operator's Guide for details on
the meaning of individual alerts.
Out-Of-The-Box has completed.
Please continue with the SP moment of birth.
Exiting Out-Of-The-Box Experience...
3PAR Console Menu 1699808-0 3.1.2.278
1. Out Of The Box Procedure
2. Re-enter network configuration
3. Update the CBIOS
4. Enable or disable CLI error injections
5. Perform a Node-to-Node rescue
6. Set up the system to wipe and rerun ootb
7. Cancel a wipe
8. Perform a deinstallation
9. Update the system for recently added hardware (admithw)
10. Check system health (checkhealth)
11. Exit
>
Adding a Storage System to the Service Processor
After successfully completing the Service Processor Setup Wizard, you must add the storage system
to the configuration database of the SP. Adding the storage system permits the SP to communicate,
service, and monitor the health of the system.
NOTE: Beginning with HP 3PAR SP OS 4.1.0 MU2, only the StoreServ with a serial number
associated with the SP ID can be attached to the SP. For assistance with adding the StoreServ to
SP, contact HP Support.
To add the storage system to the SP:
1. Connect the maintenance PC to the SP.
2. In the SPMAINT, type 3 and press Enter to select InServ Configuration Management.
SPXXXXX   SP Main
3PAR Service Processor Menu
Transfer media: ethernet
Transfer status: No transfer yet
Enter Control-C at any time to abort this process
1 ==> SP Control/Status
2 ==> Network Configuration
3 ==> InServ Configuration Management
4 ==> InServ Product Maintenance
5 ==> Local Notification Configuration
6 ==> Site Authentication Key Manipulation
7 ==> Interactive CLI for a StoreServ
X     Exit
3
3. Type 2 and press Enter to Add a new InServ.
SP - InServ Configuration Manipulation
Enter Control-C at any time to abort this process
1 ==> Display InServ information
2 ==> Add a new InServ
3 ==> Modify a StoreServ config parameters
4 ==> Remove a StoreServ
X     Return to the previous menu
2
4. Enter the IP address of the InServ and press Enter.
SP - InServ Configuration Manipulation
Enter Control-C at any time to abort this process
Please enter the IP address of the InServ you wish to add
-OR Enter QUIT to abort:
<static.ip.address>
16:57:36 Reply='<static.ip.address>'
Adding <static.ip.address> to firewall rules on interface eth0
5. Enter valid user credentials (CLI super-user name and password) to add the HP 3PAR InServ and press Enter.
Please enter valid Customer Credentials (CLI super-user name and password) to add
the HP 3PAR InServ.
Username:<Valid Username>
Password:<Valid Password>
NOTE: If adding a storage system fails, exit from the process and check the SP software
version for compatibility. Update the SP with the proper InForm OS version before adding
additional systems.
6. After successfully adding the system, press Enter to return to the SP menu.
...
validating communication with <static.ip.address>...
site key ok
interrogating <static.ip.address> for version number...
Version 3.1.x.GA-x reported on <static.ip.address>
retrieving system data for <static.ip.address> ...
HP 3PAR system name <InServ Name> found for <static.ip.address>
SYSID <InServ Name> found for <static.ip.address>
serial number <InServ serial #>found for <static.ip.address>
Writing configuration file for <static.ip.address>...
verifying / adding cli service ids...
Adding InServ to NTP configuration...
creating required file structures...
adding InServ to SP database...
Config complete for <static.ip.address>..
Starting 'spcollect' tasks for InServ <InServ Name>
Starting 'spevent' task for InServ <InServ Name>
InServ add complete
Press <enter/return> to continue
Exporting Test LUNs
As the final step in verifying the storage system installation, create two or three test LUNs and
confirm that the attached host or hosts can access them. After you have created the test LUNs and
verified that the host can access them, notify the system administrator that the storage system is
ready for use.
NOTE: Before you can export test LUNs, you must determine the host Fibre Channel connection
types and set the appropriate port personas for all target ports, or ports that connect to host
computers. See the HP 3PAR Implementation Guides where appropriate.
Defining Hosts
In order to define hosts and set port personas, you must access the CLI. For more information about
the commands used in this section, see the HP 3PAR OS Command Line Interface Reference.
To set the personas for ports connecting to host computers:
1. In the CLI, verify connection to a host before defining a host:
192.168.46.249 cli% showhost
2. Define a new system host as follows:
192.168.46.249 cli% createhost -persona <hostpersonaval> <hostname> <WWN>...
where <hostpersonaval> is the host persona ID number, <hostname> is the name of the
test host, and <WWN> is the WWN of an HBA in the host machine. This HBA must be physically
connected to the storage system.
3. After you have defined a system host for each physically connected WWN, verify host
configuration information for the storage system as follows:
192.168.46.249 cli% showhost
4. Use the controlport command to set each target port as follows:
192.168.46.249 cli% controlport config <connmode> [-ct loop | point]
<node:slot:port>
where <connmode> is disk, host, or rcfc. The optional -ct subcommand sets the connection type: use loop for disk; loop or point for host; and point for rcfc. The <node:slot:port> specifies the controller node, PCI slot, and PCI adapter port to be controlled.
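For example, to configure a host-connected port for fabric (point-to-point) operation, the command might look like the following; 1:1:2 is a hypothetical port position, not one taken from this installation:
192.168.46.249 cli% controlport config host -ct point 1:1:2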
5. When finished setting each connected target port, verify that all ports are set correctly.
192.168.46.249 cli% showport -par
Creating and Exporting Test Volumes
For more information about the commands used in this section, see the HP 3PAR OS Command
Line Interface Reference.
To create and export test volumes:
1. In the CLI, create a common provisioning group test to verify the system can create and export
virtual volumes.
192.168.46.249 cli% createcpg test_cpg
2. Create a virtual volume.
192.168.46.249 cli% createvv <usr_CPG> test0 256
3. Create a VLUN of the virtual volume for export to the host.
192.168.46.249 cli% createvlun test0 0 <hostname>
4. Verify that the host can access the VLUN (see the example after this procedure).
5. Repeat steps 1 through 4 for each host.
6. From the SP, type exit to stop the CLI session, then type x and press Enter to return to the SP main menu. Type x and press Enter again to exit. Type exit and press Enter to log off the SP. Disconnect the cables between the SP and the maintenance PC.
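To confirm the export from the array side before checking the host (the verification referenced in step 4), the VLUN listing can be filtered by volume; a sketch assuming the -v option of showvlun, with test0 being the volume created above:
192.168.46.249 cli% showvlun -v test0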
B Service Processor Moment Of Birth (MOB)
IMPORTANT: This procedure is not intended for customer use and should be used only if SmartStart
or Setup Wizards cannot be run.
1.
Create a serial connection to the Service Processor (SP).
NOTE:
2.
Always log the session output
Logon as root with no password.
NOTE:
This works only the first time to enable the SP to be configured.
Questions are shown with common answers provided, so you can just press Enter if the common
answer is correct.
The following output example was captured during a SP Moment of Birth.
Red Hat Enterprise Linux Server release 6.1 (Santiago)
Kernel 2.6.32-131.0.15.el6.i686 on an i686
SP00000 login: root
Welcome to the HP 3PAR Service Processor Moment of Birth
Enter Control-C at any time to abort this process
Are you ready to configure the SP at this time? (yes or no) [yes]:
yes
13:27:32 Reply='yes'
Welcome to the HP 3PAR Service Processor Moment of Birth
Site Security Level
Enter Control-C at any time to abort this process
A Secure Site is a site where the customer will NEVER allow an HP 3PAR
SP to access the public internet. Thus the SP public interface will be
used only to access and monitor the HP 3PAR InServ attached to this SP.
Is this a Secure Site? ( yes or no ) [no]
13:27:35 Reply=''
Welcome to the HP 3PAR Service Processor Moment of Birth
Type of transport control
Enter Control-C at any time to abort this process
You have two options for file transfer/remote operations:
1 ==> SP Mode where inbound/outbound access is via ssh session
and control is via the Customer Controlled Access (CCA) setting.
2 ==> Secure Network Mode where inbound/outbound access is via https
and is controlled by the HP 3PAR Secure Service Agent (SSAgent).
X     None of the above. Cancel and Exit
Please enter your selection [2]:
2
13:27:50 Reply='2'
Welcome to the HP 3PAR Service Processor Moment of Birth
Type of install
Enter Control-C at any time to abort this process
How do you want to configure this SP?
1 ==> Continue with spmob ( new site install )
2 ==> Restore from a backup file ( SP rebuild/replacement )
3 ==> Setup SP with original SP ID ( SP rebuild/replacement no backup files)
X     None of the above. Cancel and Exit
1
13:27:58 Reply='1'
Welcome to the HP 3PAR Service Processor Moment of Birth
Type of install
Enter Control-C at any time to abort this process
Please enter the Serial Number of the InServ that will be configured with this Service Processor:
-OR-
type quit to exit
1400383
12:29:03 Reply='1400383'
Welcome to the HP 3PAR Service Processor Moment of Birth
Confirmation
Enter Control-C at any time to abort this process
Please confirm that (1400383) is the Serial Number of InServ
(y or n)
y
12:29:10 Reply='y'
Welcome to the HP 3PAR Service Processor Moment of Birth
SP Network Parameters
Enter Control-C at any time to abort this process
Valid length is upto 32 characters and Valid characters are [a-z] [A-Z] [0-9] dash(-) underscore(_)
Please enter the host name or press ENTER to accept the default of [SP0001400383]:
13:33:18 Reply=''
Welcome to the HP 3PAR Service Processor Moment of Birth
SP Network Parameters
Enter Control-C at any time to abort this process
Please enter the IP address for the public network interface:
192.192.10.100
13:33:30 Reply='192.192.10.100'
Please enter the netmask for this interface: [255.255.255.0]
13:33:33 Reply=''
Please enter the IP address of a default gateway, or NONE: [192.192.10.1]
13:33:35 Reply=''
Please enter the network speed
(10HD,10FD,100HD,100FD,1000HD,1000FD,AUTO)[AUTO]
13:33:40 Reply=''
Welcome to the HP 3PAR Service Processor Moment of Birth
SP Network Parameters
Enter Control-C at any time to abort this process
Please enter the IPv4 address (or blank separated list of addresses) of the Domain Name Server(s)
or 'none' if there will not be any DNS support: [none]:
13:33:44 Reply=''
Welcome to the HP 3PAR Service Processor Moment of Birth
HP 3PAR Secure Service Policy Manager Parameters
Enter Control-C at any time to abort this process
Will a HP 3PAR Secure Service Policy Manager be used with this HP 3PAR Secure Service Collector Server?
(yes or no) [yes]:
no
13:34:11 Reply='no'
Remote access to this Service Processor would normally be controlled
by the HP 3PAR Secure Service Policy Manager. Since there will not be one, the ability to
remotely access this SP will be controlled by a configuration setting
of the local SSAgent.
Will remote access to this Service Processor be allowed (yes or no)? [yes]:
13:34:22 Reply=''
HP 3PAR Secure Service Policy Manager
- Name/address:   none
- Remote access:  Allowed
Is this data correct? (yes or no)? [yes]
13:34:29 Reply=''
Welcome to the HP 3PAR Service Processor Moment of Birth
HP 3PAR Secure Service Collector Server Parameters
Enter Control-C at any time to abort this process
To which HP 3PAR Secure Service Collector Server should this SSAgent connect?
1     ==> Production
OTHER ==> HP 3PAR Internal testing (not for customer sites!)
Please enter your selection [1]:
1
13:34:41 Reply='1'
Will a proxy server be required to connect to the HP 3PAR Secure Service Collector Server? (yes or no) [no]:
13:34:45 Reply=''
HP 3PAR Secure Service Collector Server
- Name/address:   Production
- Proxy:          none
Is this data correct? (yes or no)? [yes]
13:34:48 Reply=''
Welcome to the HP 3PAR Service Processor Moment of Birth
Network Time Server
Enter Control-C at any time to abort this process
Please enter the I/P address of an external NTP server,
or a blank delimited list if more than one is desired,
or 'none' if there will not be any time server [?]:
none
13:35:01 Reply='none'
Welcome to the HP 3PAR Service Processor Moment of Birth
SP Permissive Firewall
Enter Control-C at any time to abort this process
The SP firewall protects the SP and the customer
network from unauthorized use. It can be
configured in 'permissive' mode to allow
any AUTHENTICATED host to connect to the SP via SSH and HTTP.
Do you wish to configure the SP firewall
in 'permissive' mode? [YES/no]
YES
13:35:13 Reply='YES'
Welcome to the HP 3PAR Service Processor Moment of Birth
SP Network Parameters - Confirmation
Enter Control-C at any time to abort this process
The
- Host Name is:        SPUSE241HT90
- Public IP address:   192.192.10.100
- Netmask:             255.255.255.0
- Gateway:             192.192.10.1
- Network Speed:       AUTO
- DNS Server(s):       none
- Domain name:         none
Secure Network Mode transport control selected.
PERMISSIVE FIREWALL MODE SELECTED
NTP Server address:    none
HP 3PAR Secure Service Collector Server
- Name/address:        Production
- Proxy:               none
HP 3PAR Secure Service Policy Manager
- Name/address:        none
- Remote access:       Allowed
Is this data correct? (yes or no)? [yes]
13:35:22 Reply=''
Welcome to the HP 3PAR Service Processor Moment of Birth
Physical location
Enter Control-C at any time to abort this process
There are 229 countries in the list.
They will be presented a screen at a time (using the 'more' command), in the format
xxx) country_name
yyy) country_name
When you find the country you want, remember the number to its left (xxx or yyy).
If you have found the country you want, type 'q' to terminate the display.
Otherwise, press the SPACE bar to present the next screen.
Press ENTER when you are ready to proceed:
13:35:30 Reply=''
  1) Andorra
  2) United Arab Emirates
  3) Afghanistan
  4) Antigua and Barbuda
  5) Anguilla
  6) Albania
  7) Armenia
  8) Netherlands Antilles
  9) Angola
 10) Antarctica
 11) Argentina
 12) American Samoa
 13) Austria
 14) Australia
 15) Aruba
 16) Azerbaijan
 17) Bosnia and Herzegovina
 18) Barbados
 19) Bangladesh
 20) Belgium
 21) Burkina Faso
 22) Bulgaria
 23) Bahrain
 24) Burundi
 25) Benin
 26) Bermuda
 27) Brunei Darussalam
 28) Bolivia
 29) Brazil
 30) Bahamas
 31) Bhutan
 32) Botswana
 33) Belarus
 34) Belize
 35) Canada
 36) Cocos (Keeling) Islands
 37) Congo - The Democratic Republic of
 38) Central African Republic
 39) Congo
 40) Switzerland
 41) Cote d'Ivoire
 42) Cook Islands
 43) Chile
 44) Cameroon
 45) China
 46) Colombia
 47) Costa Rica
 48) Cuba
 49) Cape Verde
 50) Christmas Island
 51) Cyprus
 52) Czech Republic
 53) Germany
 54) Djibouti
 55) Denmark
 56) Dominica
 57) Dominican Republic
 58) Algeria
 59) Ecuador
 60) Estonia
 61) Egypt
 62) Eritrea
 63) Spain
 64) Ethiopia
 65) Finland
 66) Fiji
 67) Falkland Islands (Malvinas)
 68) Micronesia - Federated States of
 69) Faroe Islands
 70) France
 71) Gabon
 72) United Kingdom
 73) Grenada
 74) Georgia
 75) French Guiana
 76) Ghana
 77) Gibraltar
 78) Greenland
 79) Gambia
 80) Guinea
 81) Guadeloupe
 82) Equatorial Guinea
 83) Greece
 84) Guatemala
 85) Guam
 86) Guinea-Bissau
 87) Guyana
 88) Hong Kong
 89) Honduras
 90) Croatia
 91) Haiti
 92) Hungary
 93) Indonesia
 94) Ireland
 95) Israel
 96) India
 97) Iraq
 98) Iran - Islamic Republic of
 99) Iceland
100) Italy
101) Jamaica
102) Jordan
103) Japan
104) Kenya
105) Kyrgyzstan
106) Cambodia
107) Kiribati
108) Comoros
109) Saint Kitts and Nevis
110) Korea - Democratic People's Republ
111) Korea - Republic of
112) Kuwait
113) Cayman Islands
114) Kazakhstan
115) Lao People's Democratic Republic
116) Lebanon
117) Saint Lucia
118) Liechtenstein
119) Sri Lanka
120) Liberia
121) Lesotho
122) Lithuania
123) Luxembourg
124) Latvia
125) Libyan Arab Jamahiriya
126) Morocco
127) Monaco
128) Moldova - Republic of
129) Madagascar
130) Marshall Islands
131) Macedonia - The Former Yugoslav Re
132) Mali
133) Myanmar
134) Mongolia
135) Macao
136) Northern Mariana Islands
137) Martinique
138) Mauritania
139) Montserrat
140) Malta
141) Mauritius
142) Maldives
143) Malawi
144) Mexico
145) Malaysia
146) Mozambique
147) Namibia
148) New Caledonia
149) Niger
150) Norfolk Island
151) Nigeria
152) Nicaragua
153) Netherlands
154) Norway
155) Nepal
156) Nauru
157) Niue
158) New Zealand
159) Oman
160) Panama
161) Peru
162) French Polynesia
163) Papua New Guinea
164) Philippines
165) Pakistan
166) Poland
167) Saint Pierre and Miquelon
168) Puerto Rico
169) Palestinian Territory
170) Portugal
171) Palau
172) Paraguay
173) Qatar
174) Reunion
175) Romania
176) Russian Federation
177) Rwanda
178) Saudi Arabia
179) Solomon Islands
180) Seychelles
181) Sudan
182) Sweden
183) Singapore
184) Saint Helena
185) Slovenia
186) Slovakia
187) Sierra Leone
188) San Marino
189) Senegal
190) Somalia
191) Suriname
192) Sao Tome and Principe
193) El Salvador
194) Syrian Arab Republic
195) Swaziland
196) Turks and Caicos Islands
197) Chad
198) Togo
199) Thailand
200) Tajikistan
201) Tokelau
202) Turkmenistan
203) Tunisia
204) Tonga
205) Turkey
206) Trinidad and Tobago
207) Tuvalu
208) Taiwan
209) Tanzania - United Republic of
210) Ukraine
211) Uganda
212) United States
213) Uruguay
214) Uzbekistan
215) Holy See (Vatican City State)
216) Saint Vincent and the Grenadines
217) Venezuela
218) Virgin Islands - British
219) Virgin Islands - U.S.
220) Viet Nam
221) Vanuatu
222) Wallis and Futuna
223) Samoa
224) Yemen
225) Mayotte
226) Obsolete see CS territory
227) South Africa
228) Zambia
229) Zimbabwe
Enter the number of the country you wish to set (1-229),
or 'r' to redisplay the list:
212
13:35:42 Reply='212'
Country successfully set to 'United States'
Please identify a location so that time zone rules can be set correctly.
Please select a continent or ocean.
1) Africa
2) Americas
3) Antarctica
4) Arctic Ocean
5) Asia
6) Atlantic Ocean
7) Australia
8) Europe
9) Indian Ocean
10) Pacific Ocean
11) none - I want to specify the time zone using the Posix TZ format.
#? 2
Please select a country.
 1) Anguilla
 2) Antigua & Barbuda
 3) Argentina
 4) Aruba
 5) Bahamas
 6) Barbados
 7) Belize
 8) Bolivia
 9) Brazil
10) Canada
11) Cayman Islands
12) Chile
13) Colombia
14) Costa Rica
15) Cuba
16) Dominica
17) Dominican Republic
18) Ecuador
19) El Salvador
20) French Guiana
21) Greenland
22) Grenada
23) Guadeloupe
24) Guatemala
25) Guyana
26) Haiti
27) Honduras
28) Jamaica
29) Martinique
30) Mexico
31) Montserrat
32) Netherlands Antilles
33) Nicaragua
34) Panama
35) Paraguay
36) Peru
37) Puerto Rico
38) St Barthelemy
39) St Kitts & Nevis
40) St Lucia
41) St Martin (French part)
42) St Pierre & Miquelon
43) St Vincent
44) Suriname
45) Trinidad & Tobago
46) Turks & Caicos Is
47) United States
48) Uruguay
49) Venezuela
50) Virgin Islands (UK)
51) Virgin Islands (US)
#? 47
Please select one of the following time zone regions.
1) Eastern Time
2) Eastern Time - Michigan - most locations
3) Eastern Time - Kentucky - Louisville area
4) Eastern Time - Kentucky - Wayne County
5) Eastern Time - Indiana - most locations
6) Eastern Time - Indiana - Daviess, Dubois, Knox & Martin Counties
7) Eastern Time - Indiana - Pulaski County
8) Eastern Time - Indiana - Crawford County
9) Eastern Time - Indiana - Pike County
10) Eastern Time - Indiana - Switzerland County
11) Central Time
12) Central Time - Indiana - Perry County
13) Central Time - Indiana - Starke County
14) Central Time - Michigan - Dickinson, Gogebic, Iron & Menominee Counties
15) Central Time - North Dakota - Oliver County
16) Central Time - North Dakota - Morton County (except Mandan area)
17) Central Time - North Dakota - Mercer County
18) Mountain Time
19) Mountain Time - south Idaho & east Oregon
20) Mountain Time - Navajo
21) Mountain Standard Time - Arizona
22) Pacific Time
23) Alaska Time
24) Alaska Time - Alaska panhandle
25) Alaska Time - southeast Alaska panhandle
26) Alaska Time - Alaska panhandle neck
27) Alaska Time - west Alaska
28) Aleutian Islands
29) Metlakatla Time - Annette Island
30) Hawaii
#? 22
The following information has been given:
United States
Pacific Time
Therefore TZ='America/Los_Angeles' will be used.
Local time is now:      Wed Dec 5 13:35:55 PST 2012.
Universal Time is now:  Wed Dec 5 21:35:55 UTC 2012.
Is the above information OK?
1) Yes
2) No
#? 1
You can make this change permanent for yourself by appending the line
TZ='America/Los_Angeles'; export TZ
to the file '.profile' in your home directory; then log out and log in again.
Here is that TZ value again, this time on standard output so that you
can use the /usr/bin/tzselect command in shell scripts:
America/Los_Angeles
Welcome to the HP 3PAR Service Processor Moment of Birth
Basic Date and Time
Enter Control-C at any time to abort this process
Please enter the current date in MM/DD/YYYY format [12/05/2012]:
13:36:09 Reply=''
Please enter the time in HH:MM format [13:36]:
13:36:11 Reply=''
The date and time you entered is 12/05/2012 13:36
Is this Correct? (yes or no) [yes]:
13:36:14 Reply=''
Date set
Generating communication keys for connex...
Please Note: New Connection Portal (CP) keys have been
generated for SP-mode. The public key has 'not' been exchanged
with the CP. This will happen only if MOB is completed in SP mode
Generating new key for on-site communications...
Please Note: SP to InServ authentication keys just generated
may not be suitable for immediate use with any pre-existing
InServ(s). This can be rectified by using SPMAINT option 6.4.2
AFTER the moment of birth to manually invoke (or force)
a key exchange.
Contact your HP 3PAR authorized support
provider for answers to any questions
Welcome to the HP 3PAR Service Processor Moment of Birth
Confirmation
Enter Control-C at any time to abort this process
Using the DEFAULT, installed Site key files:
If this is the INITIAL INSTALLATION of this HP 3PAR SP
and InServ at this site, the DEFAULT keys should be used.
If this is a REPLACEMENT SP, or there is already a StoreServ
running at this site with which this SP must communicate,
do one of the following:
1) If you have external media containing the currently
deployed key pair (on CD or floppy), then answer NO
and provide the Keys to use.
2) If you do not have a copy of the current keys,
answer YES and force a key-exchange by MANUALLY
adding the InServ during the SP Moment of Birth,
or by using "SPMAINT" option 6.4.2 AFTER the moment
of birth to invoke (or force) a key exchange.
You may have to manually add any existing InServ
clusters in order to perform the key exchange.
Do you wish to use the DEFAULT, installed Site key files?
(y or n)
y
13:36:17 Reply='y'
Using installed keys
Welcome to the HP 3PAR Service Processor Moment of Birth
InServ Connection Parameters
Enter Control-C at any time to abort this process
Inserv configuration is no longer done during the Moment Of Birth.
Use SPMAINT to install the InForm OS software on the SP
and add the InServ configuration to the SP after a successful MOB.
Press ENTER to continue.
13:36:35 Reply=''
Welcome to the HP 3PAR Service Processor Moment of Birth
Configuring network parms for SP ...
Building NTP configuration file...
Starting eth0 ...
igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Setting eth0 interface speed ...
Testing the network ...
Ping of localhost successful.
Ping of public interface (192.192.10.100) successful.
Ping of gateway (192.192.10.1) successful.
There is no HP 3PAR Secure Service Policy Manager configured, test bypassed.
Starting agent ping test.
Connectivity test to HP 3PAR Secure Service Collector Server successful.
xgEnterpriseProxy: Message round-trip time: 0.010000 seconds.
xgEnterpriseProxy: Message round-trip time: 0.245000 seconds.
Do you want to load a Customer Documentation CD? (yes or no) [no]:
Welcome to the HP 3PAR Service Processor Moment of Birth
INFO
INFO
NOTE: Connectivity to the HP Collector Server was successful,
however connectivity test to the Global Access Servers failed.
Check firewall or proxy server setting to ensure remote network
connectivity is allowed to the HP Global Access Servers.
Global Access Servers connectivity errors can be ignored if this
SP will be configured to use an external HP 3PAR Policy Server
and remote operations will be disallowed by the configured policy.
-------------------------------------------------------------------------
WELCOME TO SYSTEM SUPPORT INFORMATION COLLECTION
-------------------------------------------------------------------------
Please enter the following system support information. This information
will be sent to HP, and will only be used to enable HP Technical Support
to contact the appropriate person if necessary to support your product.
This information can also be updated through the Service Processor Online
Customer Care (SPOCC) website.
Country Code (type '?' to see a list of valid ISO 3166-1 country codes): ?
Country Name - ISO 3166-1-alpha-2 code
AFGHANISTAN - AF
ALAND ISLANDS - AX
ALBANIA - AL
ALGERIA - DZ
AMERICAN SAMOA - AS
ANDORRA - AD
ANGOLA - AO
ANGUILLA - AI
ANTARCTICA - AQ
ANTIGUA AND BARBUDA - AG
ARGENTINA - AR
ARMENIA - AM
ARUBA - AW
AUSTRALIA - AU
AUSTRIA - AT
AZERBAIJAN - AZ
BAHAMAS - BS
BAHRAIN - BH
BANGLADESH - BD
BARBADOS - BB
BELARUS - BY
BELGIUM - BE
BELIZE - BZ
BENIN - BJ
BERMUDA - BM
BHUTAN - BT
BOLIVIA, PLURINATIONAL STATE OF - BO
BONAIRE, SINT EUSTATIUS AND SABA - BQ
BOSNIA AND HERZEGOVINA - BA
BOTSWANA - BW
BOUVET ISLAND - BV
BRAZIL - BR
BRITISH INDIAN OCEAN TERRITORY - IO
BRUNEI DARUSSALAM - BN
BULGARIA - BG
BURKINA FASO - BF
BURUNDI - BI
CAMBODIA - KH
CAMEROON - CM
CANADA - CA
CAPE VERDE - CV
CAYMAN ISLANDS - KY
CENTRAL AFRICAN REPUBLIC - CF
CHAD - TD
CHILE - CL
CHINA - CN
CHRISTMAS ISLAND - CX
COCOS (KEELING) ISLANDS - CC
COLOMBIA - CO
COMOROS - KM
CONGO - CG
CONGO, THE DEMOCRATIC REPUBLIC OF THE - CD
COOK ISLANDS - CK
COSTA RICA - CR
COTE D'IVOIRE - CI
CROATIA - HR
CUBA - CU
CURACAO - CW
CYPRUS - CY
CZECH REPUBLIC - CZ
DENMARK - DK
DJIBOUTI - DJ
DOMINICA - DM
DOMINICAN REPUBLIC - DO
ECUADOR - EC
EGYPT - EG
EL SALVADOR - SV
EQUATORIAL GUINEA - GQ
ERITREA - ER
ESTONIA - EE
ETHIOPIA - ET
FALKLAND ISLANDS (MALVINAS) - FK
FAROE ISLANDS - FO
FIJI - FJ
FINLAND - FI
FRANCE - FR
FRENCH GUIANA - GF
FRENCH POLYNESIA - PF
FRENCH SOUTHERN TERRITORIES - TF
GABON - GA
GAMBIA - GM
GEORGIA - GE
GERMANY - DE
GHANA - GH
GIBRALTAR - GI
GREECE - GR
GREENLAND - GL
GRENADA - GD
GUADELOUPE - GP
GUAM - GU
GUATEMALA - GT
GUERNSEY - GG
GUINEA - GN
GUINEA-BISSAU - GW
GUYANA - GY
HAITI - HT
HEARD ISLAND AND MCDONALD ISLANDS - HM
HOLY SEE (VATICAN CITY STATE) - VA
HONDURAS - HN
HONG KONG - HK
HUNGARY - HU
ICELAND - IS
INDIA - IN
INDONESIA - ID
IRAN, ISLAMIC REPUBLIC OF - IR
IRAQ - IQ
IRELAND - IE
ISLE OF MAN - IM
ISRAEL - IL
ITALY - IT
JAMAICA - JM
JAPAN - JP
JERSEY - JE
JORDAN - JO
KAZAKHSTAN - KZ
KENYA - KE
KIRIBATI - KI
KOREA, DEMOCRATIC PEOPLE'S REPUBLIC OF - KP
KOREA, REPUBLIC OF - KR
KUWAIT - KW
KYRGYZSTAN - KG
LAO PEOPLE'S DEMOCRATIC REPUBLIC - LA
LATVIA - LV
LEBANON - LB
LESOTHO - LS
LIBERIA - LR
LIBYA - LY
LIECHTENSTEIN - LI
LITHUANIA - LT
LUXEMBOURG - LU
MACAO - MO
MACEDONIA, THE FORMER YUGOSLAV REPUBLIC OF - MK
MADAGASCAR - MG
MALAWI - MW
MALAYSIA - MY
MALDIVES - MV
MALI - ML
MALTA - MT
MARSHALL ISLANDS - MH
MARTINIQUE - MQ
MAURITANIA - MR
MAURITIUS - MU
MAYOTTE - YT
MEXICO - MX
MICRONESIA, FEDERATED STATES OF - FM
MOLDOVA, REPUBLIC OF - MD
MONACO - MC
MONGOLIA - MN
MONTENEGRO - ME
MONTSERRAT - MS
MOROCCO - MA
MOZAMBIQUE - MZ
MYANMAR - MM
NAMIBIA - NA
NAURU - NR
NEPAL - NP
NETHERLANDS - NL
NEW CALEDONIA - NC
NEW ZEALAND - NZ
NICARAGUA - NI
NIGER - NE
NIGERIA - NG
NIUE - NU
NORFOLK ISLAND - NF
NORTHERN MARIANA ISLANDS - MP
NORWAY - NO
OMAN - OM
PAKISTAN - PK
PALAU - PW
PALESTINE, STATE OF - PS
PANAMA - PA
PAPUA NEW GUINEA - PG
PARAGUAY - PY
PERU - PE
PHILIPPINES - PH
PITCAIRN - PN
POLAND - PL
PORTUGAL - PT
PUERTO RICO - PR
QATAR - QA
REUNION - RE
ROMANIA - RO
RUSSIAN FEDERATION - RU
RWANDA - RW
SAINT BARTHELEMY - BL
SAINT HELENA, ASCENSION AND TRISTAN DA CUNHA - SH
SAINT KITTS AND NEVIS - KN
SAINT LUCIA - LC
SAINT MARTIN (FRENCH PART) - MF
SAINT PIERRE AND MIQUELON - PM
SAINT VINCENT AND THE GRENADINES - VC
SAMOA - WS
SAN MARINO - SM
SAO TOME AND PRINCIPE - ST
SAUDI ARABIA - SA
SENEGAL - SN
SERBIA - RS
SEYCHELLES - SC
SIERRA LEONE - SL
SINGAPORE - SG
SINT MAARTEN (DUTCH PART) - SX
SLOVAKIA - SK
SLOVENIA - SI
SOLOMON ISLANDS - SB
SOMALIA - SO
SOUTH AFRICA - ZA
SOUTH GEORGIA AND THE SOUTH SANDWICH ISLANDS - GS
SOUTH SUDAN - SS
SPAIN - ES
SRI LANKA - LK
SUDAN - SD
SURINAME - SR
SVALBARD AND JAN MAYEN - SJ
SWAZILAND - SZ
SWEDEN - SE
SWITZERLAND - CH
SYRIAN ARAB REPUBLIC - SY
TAIWAN, PROVINCE OF CHINA - TW
TAJIKISTAN - TJ
TANZANIA, UNITED REPUBLIC OF - TZ
THAILAND - TH
TIMOR-LESTE - TL
TOGO - TG
TOKELAU - TK
TONGA - TO
TRINIDAD AND TOBAGO - TT
TUNISIA - TN
TURKEY - TR
TURKMENISTAN - TM
TURKS AND CAICOS ISLANDS - TC
TUVALU - TV
UGANDA - UG
UKRAINE - UA
UNITED ARAB EMIRATES - AE
UNITED KINGDOM - GB
UNITED STATES - US
UNITED STATES MINOR OUTLYING ISLANDS - UM
URUGUAY - UY
UZBEKISTAN - UZ
VANUATU - VU
VENEZUELA, BOLIVARIAN REPUBLIC OF - VE
VIET NAM - VN
VIRGIN ISLANDS, BRITISH - VG
VIRGIN ISLANDS, U.S. - VI
WALLIS AND FUTUNA - WF
WESTERN SAHARA - EH
YEMEN - YE
ZAMBIA - ZM
ZIMBABWE - ZW
Country Code (type '?' to see a list of valid ISO 3166-1 country codes): US
Please enter the company name : HP
Please enter the HW Installation site mailing address
Street name and number: 4209 Technology Drive
City: Fremont
State/Province (required only for USA and Canada): CA
ZIP/Postal Code: 94538
Please enter the first name of the technical contact: Joe
Please enter the last name of the technical contact: Thornton
Please enter the phone number to reach this contact: 555-555-0055
Please enter the fax number for this contact (optional):
Please enter the email address for alert notification: joethornton19@hp.com
Is support directly from HP or a partner? (yes for HP, no for partner): yes
* Company: HP
* HW Installation Site Address
Street and number: 4209 Technology Drive
City: Fremont
State/Province: CA
ZIP/Postal Code: 94538
Country Code: US
* Technical Contact
First Name: Joe
Last Name: Thornton
Phone: 555-555-0055
E-Mail: joethornton19@hp.com
FAX:
* Direct Support from HP: Y
Is the following information correct? (yes or no) [yes]: yes
Done with System Support Contact Collection.
Do you want to load a Customer Documentation CD? (yes or no) [no]: no
Welcome to the HP 3PAR Service Processor Moment of Birth
*** starting final MOB phase
Fix passwords
Disabling key change on reboot ...
Disabling sendmail...
/sp/prod/code/csst/bin/MRfunctions: line 1250: /etc/init.d/sendmail: No such file or directory
verifying postfix status...
Setup to run all ST/SP tasks at boot time
Add SPID to ppp id
ls: cannot access /dev/modem: No such file or directory
Cleanup MOB
Updating PAM settings
Rebooting....
.
.
.
.
.
.
Red Hat Enterprise Linux Server release 6.1 (Santiago)
Kernel 2.6.32-131.0.15.el6.i686 on an i686
login:
Password:
SP0001400383
1
SP Main
HP 3PAR Service Processor Menu
>>>>>>>>   InForm OS software has not been installed!               <<<<<<<<
>>>>>>>>   The SP cannot communicate with the InServ until          <<<<<<<<
>>>>>>>>   the InForm OS software has been installed on the SP.     <<<<<<<<
Transfer media: ethernet
Transfer status: Ok
Enter Control-C at any time to abort this process
1 ==> SP Control/Status
2 ==> Network Configuration
3 ==> InServ Configuration Management
4 ==> InServ Product Maintenance
5 ==> Local Notification Configuration
6 ==> Site Authentication Key Manipulation
7 ==> Interactive CLI for a StoreServ
X     Exit
C Connecting to the Service Processor
You can connect the maintenance PC to the service processor (SP) either through a serial connection
or an Ethernet connection (LAN). When you are connected to the SP by a serial or Ethernet
connection, there are two SP user interfaces known as SPOCC and SPMAINT. Use either interface
to perform various administrative and diagnostic tasks.
NOTE: Connecting to the SP through the LAN (Ethernet) requires establishing a Secure Shell
Session (SSH). If you do not have SSH, connect to the serial port of the SP.
Using a Serial Connection
To use a serial connection:
Procedure 1
1. Locate the SP and attach the DB9 crossover serial adapter (P/N 180–0055) that is at the free end of the blue Ethernet cable to the serial port on your maintenance PC. Use a standard Category 5 Ethernet cable with the appropriate RJ-45 to DB9 adapter to connect to the DB9 serial port of the SP.
2. Insert a standard Category 5 Ethernet cable into the SP serial port with the DB9 crossover serial to RJ45 adapter (P/N 180–0055).
Figure 78 HP DL320e SP Ports
#   Ports             Description
1   Ethernet ports:
    NIC1 (left)       Use to establish an Ethernet connection to the SP. Use NIC1 for Public.
    NIC2 (right)      Use NIC2 for Private (SPOCC).
2   Serial port       Use to establish a serial connection to the SP.
3. Power on the laptop.
4. Use the following table as a guideline to adjust the serial settings of the laptop before using a terminal emulator, such as HyperTerminal, Attachmate Reflection X, SecureCRT, or TeemTalk, to communicate with the SP and perform various tasks to support the storage system.

Setting                Value
Baud Rate              57600
Parity                 None
Word Length            8
Stop Bits              1
Flow Control           Both
Transmit               Xon/Xoff
Receive                Xon/Xoff
Char transmit delay    0
Line transmit delay    0
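For example, on a maintenance PC running Linux, a roughly equivalent serial session can be opened with the screen utility. The device name /dev/ttyS0 and the exact option list are assumptions that depend on the serial adapter and platform; graphical emulators such as HyperTerminal expose the same settings in their connection dialog.
screen /dev/ttyS0 57600,cs8,ixon,ixoff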
D Node Rescue
Automatic Node-to-Node Rescue
Automatic node-to-node rescue starts when a node is removed and then replaced in a storage system and there is at least one node remaining in the cluster to perform the rescue.
Auto node rescue also requires that an Ethernet cable be connected to the node to be rescued
prior to insertion, along with the currently configured Ethernet connections on the running nodes.
NOTE:
Always perform the automatic node rescue procedures unless otherwise instructed.
NOTE: When performing automatic node-to-node rescue, there may be instances where a node
is to be rescued by another node that has been inserted but has not been detected. If this happens,
issue the CLI command startnoderescue -node <nodenum>. Before you do, you must have
the rescue IP address. This is the IP address that is allocated to the node being rescued and must
be on the same subnet as the SP.
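For example, to start a rescue of node 2 manually (the node number here is illustrative), issue the following from the CLI; the rescue IP address must already be known, as described above.
cli% startnoderescue -node 2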
Use the showtask -d command to view detailed status regarding the node rescue:
root@1400461-0461# showtask -d
Id Type        Name          Status Phase Step -------StartTime------- ------FinishTime------- Priority User
 4 node_rescue node_0_rescue done     ---  --- 2012-04-10 13:42:37 PDT 2012-04-10 13:47:22 PDT n/a      sys:3parsys

Detailed status:
2012-04-10 13:42:37 PDT Created   task.
2012-04-10 13:42:37 PDT Updated   Running node rescue for node 0 as 1:8915
2012-04-10 13:42:44 PDT Updated   Using IP 169.254.136.255
2012-04-10 13:42:44 PDT Updated   Informing system manager to not autoreset node 0.
2012-04-10 13:42:44 PDT Updated   Resetting node 0.
2012-04-10 13:42:53 PDT Updated   Attempting to contact node 0 via NEMOE.
2012-04-10 13:42:53 PDT Updated   Setting boot parameters.
2012-04-10 13:44:08 PDT Updated   Waiting for node 0 to boot the node rescue kernel.
2012-04-10 13:44:54 PDT Updated   Kernel on node 0 has started. Waiting for node to retrieve install details.
2012-04-10 13:45:14 PDT Updated   Node 32768 has retrieved the install details. Waiting for file sync to begin.
2012-04-10 13:45:36 PDT Updated   File sync has begun. Estimated time to complete this step is 5 minutes on a lightly loaded system.
2012-04-10 13:47:22 PDT Updated   Remote node has completed file sync, and will reboot.
2012-04-10 13:47:22 PDT Updated   Notified NEMOE of node 0 that node-rescue is done.
2012-04-10 13:47:22 PDT Updated   Node 0 rescue complete.
2012-04-10 13:47:22 PDT Completed scheduled task.
Service Processor-to-Node Rescue
CAUTION: Before proceeding with the controller node rescue, check with the system administrator before disconnecting any host cables or shutting down the host.
NOTE: This node rescue procedure should be used only if all nodes in the HP 3PAR system are down and need to be rebuilt from the HP 3PAR OS image on the service processor. The SP-to-node rescue procedure is supported with HP 3PAR OS version 3.1.2 or higher and HP 3PAR Service Processor 4.2 or higher.
To perform SP-to-node rescue:
1. At the rear of the storage system, uncoil the red crossover Ethernet cable connected to the SP (ETH) private network connection and connect this crossover cable to the E0 port of the node that is being rescued (shown in Figure 79).
Figure 79 DL320e ETH Port
   NOTE: Connect the crossover cable to the following ETH port of a specific SP brand:
   •   HP 3PAR Service Processor DL320e: ETH port 2
   •   Supermicro II: ETH port 1
2. Connect the maintenance PC to the SP using the serial connection and start an spmaint session.
3. Go to 3 StoreServ Configuration Management > 1 Display StoreServ information to perform the pre-rescue task of obtaining the following information:
   •   HP 3PAR OS Level on the StoreServ system
   •   StoreServ system network parameters including netmask and gateway information
   Return to the main menu.
   NOTE: Copy the network information on to a separate document for reference to complete the subsequent steps of configuring the system network.
4. In the 3PAR Service Processor Menu, complete the following:
   a. Choose 4 ==> StoreServ Product Maintenance.
   b. Choose 11 ==> Node Rescue.
   c. Enter y to confirm the action before continuing with node rescue.
   d. Choose 1 ==> Configure Node Rescue, then select the desired system.
   At this point, you will be prompted for the node rescue configuration information.
   1. Verify the current HP 3PAR OS level and enter y to use the level.
   2. Enter y to continue to set up node rescue.
      NOTE: The process may take a few minutes.
   3. Press Enter to accept the default [/dev/tpddev/vvb/0].
   4. Enter y to specify the time zone. Continue to follow the time zone setup prompts.
   5. Confirm the HP 3PAR OS level and enter y to continue.
5. Choose 2 ==> SP-to-Node Rescue.
   NOTE: The process of establishing communication between the SP and StoreServ system may take several minutes.
6. Establish a serial connection to the node being rescued. If necessary, disconnect the serial cable from the SP.
7. Connect a serial cable from the laptop to the serial port on the node being rescued (S0).
   NOTE: Connect the crossover cable to the following ETH port of a specific SP brand:
   •   HP DL320e or DL360e: ETH port 2
   NOTE: If necessary, check the baud rate settings before establishing a connection.
8. Press CTRL+W to establish a whack> prompt.
   a. Type nemoe cmd unset node_rescue_needed and press Enter. The output displays the message no output.
   b. Type boot rescue and press Enter.
   c. Monitor the console output process. The node continues to run POST, then stops and displays instructions for running node rescue (see the output on the following page). Enter y to continue.
      NOTE: If y is not entered, you will need to redo step 8.
   The system installs the OS. This process takes approximately 10 to 15 minutes (rescue and rebuild of disk = 5 minutes) + (reboot = 5-10 minutes). When complete, the node restarts and becomes part of the cluster.
This is the procedure for manually rescuing a 3PAR StoreServ node (i.e.,
rebuilding the software on the node's internal disk). The system will install
the base OS, BIOS, and InForm OS for the node before it joins the cluster.
You must first connect a Category 5 crossover Ethernet cable between the SP's
private/internal network (Eth-1) and the "E0" Ethernet port of the node to be
rescued. Note that the diagram below does not represent the physical port
numbers or configuration of all node types.
        New Node                         Service Processor
     +------------+                     +-----------------+
     |||||||      |                     |                 |
     |||||||      |                     |                 |
     ||||||| E0 C0|                     |Eth-0  Eth-1(Int)|
     +------------+                     +-----------------+
            ^   ^                                 ^
            |   |__Maintenance PC                 |
            |      (serial connection)            |
            |_____________Crossover Eth___________|
This procedure will execute the following Whack commands:
1. net addr 10.255.155.53
2. net netmask 255.255.255.248
3. net server 10.255.155.54
4. boot net install ipaddr=10.255.155.53 nm=255.255.255.248 rp=10.255.155.54::rescueide
This operation will completely erase and reinstall the node's local disk.
Are you sure? (Y/N) No
9. Verify that the node status LED is blinking green slowly and that the node provides a login prompt.
10. If applicable, remove the crossover cable from the recently rescued node and connect it to the next node.
    NOTE: Reconnect the public network (Ethernet) cable to the recently rescued node.
11. Repeat steps 7 through 10 for each node.
12. Log on to a node as a console user.
13. Choose option 2, Network Configuration to set the network configuration for the system. Follow
the prompts to complete the network configuration.
NOTE: The cluster must be active and the admin volume must be mounted before changing
the network configuration.
NOTE: Access STATs to obtain the network information or request it from the system
administrator.
14. Press Enter.
15. Before deconfiguring the node rescue, disconnect the crossover cables and reconnect the
public network cable.
16. Return to the SP Main menu and perform the following:
a. Choose 1 ==> Deconfigure Node Rescue.
b. Choose X ==> Return to previous menu to return to the main menu.
c. Choose 7 ==> Interactive CLI for a StoreServ, then select the desired system.
17. Execute the shownode command to verify that all nodes have joined the cluster.
cli% shownode
                                          Control    Data       Cache
Node --Name--- -State- Master InCluster ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1000163-0 OK      No     Yes       GreenBlnk    4096    6144          100
   1 1000163-1 OK      Yes    Yes       GreenBlnk    4096    6144          100
18. Execute the shutdownsys reboot command and enter yes to reboot the system.
When the system reboot is complete, reestablish an SPMAINT session to perform additional
CLI commands.
19. Reconnect the host and host cables if previously removed or shut down.
20. Issue the checkhealth -svc -detail command to verify the system is healthy.
21. In the SP window, issue the exit command and select X to exit from the 3PAR Service
Processor Menu and to log out of the session.
22. Disconnect the serial cable from the maintenance PC and the red crossover Ethernet cable from the node; coil the cable and stow it behind the SP. If applicable, reconnect the customer's network cable and any other cables that may have been disconnected.
23. Close and lock the rear door.
Virtual Service Processor-to-Node Rescue
NOTE: This SPMAINT node-rescue procedure should only be used if all nodes in the 3PAR system
are down and needs to be rebuilt from the HP 3PAR OS image on Service Processor. The SP-to-node
rescue procedure is supported with HP 3PAR OS version 3.1.2 or higher and HP 3PAR Service
Processor 4.2 or higher.
To perform a virtual service processor-to-node rescue:
NOTE: Verify that all the controller nodes in the system are offline. Only a single controller node can be rescued at a time.
Procedure 2
1. Establish an spmaint session.
2. Go to 3 StoreServ Configuration Management > 1 Display StoreServ information to perform the pre-rescue task of obtaining the following information:
   •   HP 3PAR OS Level on the StoreServ system
   •   StoreServ system network parameters including netmask and gateway information
   Return to the main menu.
   NOTE: Copy the network information on to a separate document for reference to complete the subsequent steps of configuring the system network.
3. In the 3PAR Service Processor Menu, complete the following:
   a. Choose 4 ==> StoreServ Product Maintenance.
   b. Choose 11 ==> Node Rescue.
   c. Enter y to confirm the action before continuing with node rescue.
   d. Choose 1 ==> Configure Node Rescue, then select the desired system.
   At this point, you will be prompted for the node rescue configuration information.
   1. Verify the current HP 3PAR OS level and enter y to use the level.
   2. Enter y to continue to set up node rescue.
      NOTE: The process may take a few minutes.
   3. Press Enter to accept the default [/dev/tpddev/vvb/0].
   4. Enter y to specify the time zone. Continue to follow the time zone setup prompts.
   5. Confirm the HP 3PAR OS level and enter y to continue.
4. Choose 2 ==> SP-to-Node Rescue. The following screen appears:
This is the procedure for manually rescuing node(s) in StoreServ s974
PLEASE NOTE THAT THIS PROCEDURE IS FOR USE WITH A VIRTUAL SERVICE
PROCESSOR (VSP) WHEN ALL NODES ARE DOWN. Verify that 10.0.121.245
(the last known IP address of the StoreServ) is not in use. All nodes
in this StoreServ must be offline and the nodes can only be rescued one
at a time.
The following network configuration assumes that the VSP and the
StoreServ are on the same subnet. If the VSP and the StoreServ are
not on the same subnet, the netmask (255.255.248.0) and the gateway
(10.0.120.1) need to be changed in the commands below to the
netmask and gateway values used by the StoreServ.
1. Connect a laptop to the serial interface on the node to be rescued
NOTE: 57600,N,8,1,XON/XOFF
2. Reset, or power cycle, the node to be rescued
3. On the serial interface press CTRL-w 5-10 seconds after the
'PCI Bus Initialization' test[2] has completed to get a Whack> prompt.
NOTE: If you interrupt the BIOS tests too early you will see the
following message: Warning: PCI scan has not completed. It is
not safe to use most Whack commands at this point. Please resume
initialization by typing "go" now.
4: Type: nemoe cmd unset node_rescue_needed <enter>
5: Type: net server 10.0.122.77 <enter>
6: Type: net netmask 255.255.248.0 <enter>
7: Type: net gateway 10.0.120.1 <enter>
8: Type: net addr 10.0.121.245 <enter>
9: Type: boot net install ipaddr=10.0.121.245 nm=255.255.248.0 gw=10.0.120.1 rp=10.0.122.77::rescueide <enter>
NOTE: Type these commands exactly!
The system will install the base OS, HP 3PAR OS, and reboot. Repeat this
procedure for all nodes and then wait for all nodes to join the cluster
before proceeding.
Press Enter to continue.
NOTE: The output is only an example and the addresses may vary depending on the network
configuration.
5. Disconnect the serial cable from the serial adapter on the SP.
6. Connect a serial cable from the laptop serial port (S0) to the console port (C0) on the node being rescued.
   NOTE: The VSP is connected to the target node being rescued via the customer network.
7. Press Ctrl+w to establish a Whack> prompt. When the prompt displays, type reset to reset the node.
   NOTE: Make sure to monitor the reset and do not complete a full reset. After 30 seconds, press Ctrl+w to interrupt the reset.
8. At the Whack> prompt, refer to the output in step 4 and copy and paste the commands for the following setting prompts:
   a. Whack> nemoe cmd unset node_rescue_needed
   b. Whack> net server <VSP IP Address>
   c. Whack> net netmask <netmask IP Address>
   d. Whack> net gateway <Gateway IP address>
   e. Whack> net addr <StoreServ IP address>
   f. Whack> boot net install ipaddr=<StoreServ IP address> nm=<netmask IP Address> gw=<Gateway IP Address> rp=<VSP IP address>::rescueide
The following table is only an example.
Whack> nemoe cmd unset node_rescue_needed
    No output
Whack> net server 10.0.122.77
    Server address 10.0.122.77
Whack> net netmask 255.255.248.0
    Network mask 255.255.248.0
Whack> net gateway 10.0.120.1
    Gateway address 10.0.120.1
Whack> net addr 10.0.121.245
    My address is 10.0.121.245
Whack> boot net install ipaddr=10.0.121.245 nm=255.255.248.0 gw=10.0.120.1 rp=10.0.122.77::rescueide
    Booting from net...
    TFTP "install" from 10.0.122.77.
    File size 6 MB: [....................] complete
    Setting FSB WDT Boot Complete State.
NOTE: If you get a message about a failing ARP response, type reset and wait about 30
seconds before pressing Ctrl+w to halt the reboot. When the whack> prompt displays, repeat
step 8.
9. Repeat steps 5 through 8 for each node being rescued.
10. Log on to a node as a console user.
11. Choose option 2, Network Configuration to set the network configuration for the system. Follow
the prompts to complete the network configuration.
NOTE: The cluster must be active and the admin volume must be mounted before changing
the network configuration.
NOTE: Access STATs to obtain the network information or request it from the system
administrator.
12. Wait for all of the nodes to join the cluster. The node status LEDs should be blinking green.
13. Establish an SPMAINT session. Use console as the login name.
14. Select option 2 Network Configuration to enter the network configuration. Return to the main
menu when complete.
NOTE: The cluster must be active and the admin volume must be mounted before changing
the network configuration.
15. Disconnect the serial cable from the node and reconnect it to the adapter on the SP. Press Enter.
16. Before deconfiguring the node rescue, disconnect the crossover cables and reconnect the
public network cable.
17. Return to the SP Main menu and choose 4 StoreServ Product Maintenance > 11 Node Rescue.
Enter y to confirm rescue is completed and press Enter to continue.
a. Choose 1 ==> Deconfigure Node Rescue to deconfigure the node rescue.
b. Choose X ==> Return to previous menu to return to the main menu.
c. Choose 7 ==> Interactive CLI for a StoreServ, then select the desired system.
18. Issue the shownode command to verify that all nodes have joined the cluster.
cli% shownode
                                          Control    Data       Cache
Node --Name--- -State- Master InCluster ---LED--- Mem(MB) Mem(MB) Available(%)
   0 1000163-0 OK      No     Yes       GreenBlnk    4096    6144          100
   1 1000163-1 OK      Yes    Yes       GreenBlnk    4096    6144          100
19. Execute the shutdownsys reboot command and enter yes to reboot the system.
When the system reboot is complete, reestablish an SPMAINT session to perform additional
CLI commands.
20. Reconnect the host and host cables if previously removed or shut down.
21. Execute the checkhealth -svc -detail command to verify the system is healthy.
22. In the SP window, issue the exit command and select X to exit from the 3PAR Service
Processor Menu and to log out of the session.
23. Disconnect the serial cable from the maintenance PC. If applicable, reconnect the customer's
network cable and any other cables that may have been disconnected.
24. Close and lock the rear door.
E Illustrated Parts Catalog
The following figures and tables show each replaceable hardware component of the storage system, including the part number, description, quantity, and CSR type.
Drive Enclosure Components
Figure 80 HP M6710 Drive Enclosure (2U24)
Figure 81 HP M6720 Drive Enclosure (4U24)
Figure 82 2.5-inch SFF disk drive
Figure 83 3.5-inch LFF disk drive
Table 23 Drive Chassis FRUs

Material Number   Description                                      Qty Per Chassis   CSR Type
683232-001        SPS-Enclosure Midplane 2U24 Assy                 1                 Not
683233-001        SPS-Enclosure Midplane 4U24 Assy                 1                 Not
683234-001        SPS-Drive Carrier SFF SSD Assy
683235-001        SPS-Drive Carrier LFF HDD Assy
683236-001        SPS-Drive Carrier LFF SSD Assy                   24–480            Mandatory

The following are CSR-A parts:

697387-001        SPS-Drive HD 300GB 6G SAS 15K M6710 2.5in HDD                      Mandatory
697388-001        SPS-Drive HD 450GB 6G SAS 10K M6710 2.5in HDD                      Mandatory
750781-001        HP M6710 450GB 6G SAS 10K 2.5in FE HDD                             Mandatory
727398-001        SPS-HDD SS7000 600GB 10K SFF 6G SAS-S 2.5in.                       Mandatory
697389-001        SPS-Drive HD 900GB 6G SAS 10K M6710 2.5in HDD                      Mandatory
750782-001        HP M6710 900GB 6G SAS 10K 2.5in FE HDD                             Mandatory
727397-001        SPS-HDD SS7000 1TB 7.2K SFF 6G SAS-S 2.5in.                        Mandatory
727391-001        SPS-HDD SS7000 1TB 7.2K SFF ENCR SAS-S 2.5in.                      Mandatory
761928-001        SPS-DRIVE SAS 1.2TB 6G 10K RPM SFF                                 Mandatory
697390-001        SPS-Drive HD 2TB 6G SAS 7.2K NL M6720 3.5in HDD                    Mandatory
746841-002        HP M6720 2TB 6G SAS 7.2K 3.5in FE HDD                              Mandatory
697391-001        SPS-Drive HD 3TB 6G SAS 7.2K NL M6720 3.5in HDD                    Mandatory
746841-004        HP M6720 4TB 6G SAS 7.2K 3.5in FE HDD                              Mandatory
697392-001        SPS-Drive 200GB 6G SAS SLC M6710 2.5in SSD                         Mandatory
703521-001        SPS-Drive HD 100GB 6G SAS 3.5in HDD                                Mandatory
703522-001        SPS-Drive 100GB 6G SAS 3.5in HDD                                   Mandatory
703523-001        SPS-Drive 200GB 6G SAS 3.5in HDD                                   Mandatory
743182-001        SPS-HDD SS7000 2TB 7.2K LFF SAS                                    Mandatory
710490-001        HP M6720 2TB 6G SAS 7.2K 3.5in NL HDD                              Mandatory
743181-001        SPS-HDD SS7000 3TB 7.2K LFF SAS                                    Mandatory
710490-002        HP M6720 3TB 6G SAS 7.2K 3.5in NL HDD                              Mandatory
743183-001        SPS-HDD SS7000 4TB 7.2K LFF SAS                                    Mandatory
725862-002        HP M6710 400GB 6G SAS 2.5in MLC SSD                                Mandatory
725862-002        HP M6720 400GB 6G SAS 3.5in MLC SSD                                Mandatory
752840-001        HP M6710 480GB 6G SAS 2.5in MLC SSD                                Mandatory
761924-001        SPS-SSD 480GB SAS 6G SFF MLC SG                                    Mandatory
752841-001        HP M6710 480GB 6G SAS 3.5in MLC SSD                                Mandatory
761925-001        SPS-SSD 480GB SAS 6G LFF MLC SG                                    Mandatory
725862-001        HP M6710 800GB 6G SAS 2.5in ME SSD                                 Mandatory
725862-001        HP M6720 800GB 6G SAS 3.5in ME SSD                                 Mandatory
783267-001        HP M6710 920GB 6G SAS 2.5in MLC FE SSD                             Mandatory
752842-001        HP M6710 920GB 6G SAS 2.5in MLC SSD                                Mandatory
761926-001        SPS-SSD 920GB SAS 6G SFF MLC SG                                    Mandatory
752843-001        HP M6720 920GB 6G SAS 3.5in MLC SSD                                Mandatory
761927-001        SPS-SSD 920GB SAS 6G LFF MLC SG                                    Mandatory
750785-001        SPS-DRV 2TB HDD 6GSAS7.2K LFF SS7000 FIPS                          Mandatory
750786-001        SPS-DRV 4TB HDD 6GSAS7.2KLFF SS7000SG FIPS                         Mandatory
Storage System Components
Figure 84 764 W Power Cooling Module without Battery
Figure 85 764 W Power Cooling Module Battery
Figure 86 580 W Power Cooling Module
Figure 87 I/O Module
Table 24 Storage System Components

Part Number   Description                 Qty.      CSR Type
683239-001    SPS-PCM 764W Assy           up to 2   Not
727386-001    SPS-PCM 764W Assy, Gold     2         Not
683240-001    SPS-Battery PCM 764W Assy   up to 2   Not
683241-001    SPS-PCM 580W Assy           up to 2   Not
683251-001    SPS-Module I/O SASquatch    up to 4   Not
Controller Node and Internal Components
Figure 88 Controller Node
Figure 89 Node Disk
Figure 90 4-port Fibre Channel Adapter
Figure 91 2-port CNA Adapter
Figure 92 FC SFP Adapter
Table 25 Controller Node and Components

Part Number   Description                        Qty.               CSR Type
683245-001    SPS-Node Module 7200 NO HBA        2                  Optional
683246-001    SPS-Node Module 7400 NO HBA        4                  Not
683248-001    SPS-Node Boot Drive (Node drive)   1 per node         Not
683259-001    SPS-Adapter FC 4port               1                  Not
683237-001    SPS-Adapter CNA 2port              1                  Not
468508-002    SPS-Module FC SFP                  Up to 4 per node   Not
Figure 93 Internal Node Components
Figure 94 Internal Node Components
Table 26 Internal Node Components

Callout   Part Number   Description                                          Qty.       CSR Type
1         N/A           Node drive location                                  1
2         683807-001    SPS-Cable Node Drive SATA                            1          Not
          683250-001    SPS-Cable Boot Drive (Node drive cable)
3         683247-001    SPS-PCIe Riser Assy                                  1          Not
4         N/A           N/A                                                  N/A        N/A
5         N/A           N/A                                                  N/A        N/A
6         683249-001    SPS-Battery Coin (TOD battery)                       1          Not
7         683806-001    SPS-Memory DIMM 8GB DDR3 Control Cache 7200, 7400    1          Not
8, 9      683803-001    SPS-Memory DIMM 2GB DDR2 7200                        2 (7200)   Not
8, 9      683804-001    SPS-Memory DIMM 4GB DDR2 7400                        2 (7400)   Not
Service Processor
Figure 95 Service Processor DL320e
Table 27 Service Processor

Part Number   Description                        Qty
725287-001    HP 3PAR Service Processor DL320e   1
Miscellaneous Cables and Parts
Table 28 Storage System Cables

Part Number   Description                                      Qty.   CSR Type
683808-001    SPS-Cable Node Link PCIe 7400                           Not
683809-001    SPS-Cable Console Node                                  Not
683810-001    SPS-Cable Console Drive Chassis                         Not
683252-001    SPS-Power Cord PCM                                      Not
656427-001    SPS-CA 1m PREMIER FLEX FC OM4                           Mandatory
656428-001    SPS-CA 2m PREMIER FLEX FC OM4                           Mandatory
656429-001    SPS-CA 5m PREMIER FLEX FC OM4                           Mandatory
656430-001    SPS-CA 15m PREMIER FLEX FC OM4                          Mandatory
656431-001    SPS-CA 30m PREMIER FLEX FC OM4                          Mandatory
656432-001    SPS-CA 50m PREMIER FLEX FC OM4                          Mandatory
649991-001    SPS-Cable FC LC-LC OM3 10 M                             Not
649992-001    SPS-Cable FC LC-LC OM3 25 M                             Not
649993-001    SPS-Cable FC LC-LC OM3 50 M                             Not
649994-001    SPS-Cable FC LC-LC OM3 100 M                            Not
659061-001    SPS-Cable FC LC-LC OM3 6 M                              Not
408765-001    SPS-CA,EXT MINI SAS, 0.5M                               Mandatory
408767-001    SPS-CA,EXT MINI SAS, 2M                                 Mandatory
408769-001    SPS-CA,EXT MINI SAS, 6M                                 Mandatory
456096-001    SPS-SFP+, 10G BLc, SR                                   Optional
Table 29 Miscellaneous Parts

Part Number   Description                          Qty.   CSR Type
683253-001    SPS-Rail Kit 2U24 Fasteners                 Optional
683254-001    SPS-Rail Kit 4U24 Fasteners                 Optional
683812-001    SPS-Panel 2U Filler                         Optional

The following are CSR-A parts:

683255-001    SPS-Bezel M6710 drive shelf, right          Mandatory
683256-001    SPS-Bezel M6720 drive shelf, left           Mandatory
683257-001    SPS-Bezel 7200, right                       Mandatory
683258-001    SPS-Bezel 7400, right                       Mandatory
690777-001    SPS-Bezel M6720 drive shelf, right          Mandatory
690778-001    SPS-Bezel M6710 drive shelf, left           Mandatory
683807-001    SPS-Drive blank SFF                         Mandatory
697273-001    SPS-Drive blank LFF                         Mandatory
Table 30 Service Processor Parts

Part Number   Description                                      Qty.   CSR Type
683811-001    SPS-Processor 1U Mounting Kit                           Not
675040-001    SPS-Service Processor 1U Mounting Kit                   Mandatory
647980-001    Service Processor Cable Adapter Set                     Not
              • 2 RJ45/DB9 adapters
              • 2 Ethernet cables
707989–001    SPS-Service Processor DL360e                            Not
5183–2687     Ethernet Cable 25 ft CAT5 M/M                           Not
5183–5691     Ethernet Cable 50 ft. CAT5 RJ45 M/M                     Not
C7542A        HP Ethernet 15.2m (50 ft) CAT5e RJ45 M/M Cable          Mandatory
F Disk Drive Numbering
Numbering Disk Drives
Figure 96 7200 and 7400 2-Node - displayed as DCN1 in software output
Figure 97 7400 4 Controller Node Displayed as DCN1 in Software Output
Figure 98 M6710 (2U24) Displayed as DCS2 in Software Output
Figure 99 M6720 (4U24) Displayed as DCS1 in Software Output
G Uninstalling the Storage System
Use these procedures when removing systems from an operating site and relocating to an alternate
site.
Before uninstalling a storage system:
•   Obtain drive enclosure shipping containers, one per enclosure.
•   Verify with a System Administrator that the system is prepared for shutdown.
•   Complete the storage system inventory after uninstalling the system.
Storage System Inventory
To complete the storage system inventory, record the following information for each system to be uninstalled:
•   Site information and system serial numbers
•   Software currently being used on the system
•   In the CLI, issue the following commands (a sample capture sequence follows this list):
    ◦   To show inventory - showinventory
    ◦   Software version - showversion -b -a
    ◦   Drive cage firmware version - showcage
    ◦   Disk drive firmware version - showpd -i
    ◦   CBIOS version - shownode -verbose
    ◦   Amount of data and control cache in the controller nodes - shownode
    ◦   Number and type of Fibre Channel adapters in each node - showport -i
    ◦   Number of drive magazines - showcage -d
    ◦   Number and sizes of disk drives - showpd
•   Storage system hardware configuration
•   Number of enclosures and nodes
•   Physical condition of system hardware and cabinet (note presence of scratches, dents, missing screws, broken bezels, damaged ports, and other visible anomalies)
•   Destination address or addresses and list of the equipment going to each address
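A sample capture sequence for the CLI portion of the inventory, run from an interactive CLI session (the prompt is illustrative and output is omitted here), is shown below; log the session in your terminal emulator so the output is preserved with the inventory record.
cli% showinventory
cli% showversion -b -a
cli% showcage
cli% showpd -i
cli% shownode -verbose
cli% shownode
cli% showport -i
cli% showcage -d
cli% showpd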
Removing Storage System Components from an Existing or Third Party
Rack
See the appropriate component removal procedures in “Servicing the Storage System” (page 20).