
HPE 3PAR StoreServ Simulator 3.2.2

Technical white paper
Contents
HPE 3PAR StoreServ Simulator
  Overview
  Supported Platform
  Supported Features
  Simulator Installation
    Minimum System Requirements
    Installation and Configuration Summary
  VMware ESXi Server Installation
    Installing the Cluster Nodes
    Installation of the Enclosure Node
    Network Configuration
  VMware Workstation
    Installation of the Cluster Nodes Overview
    Enclosure Node Installation
    Network Configuration using VMware Workstation
HPE 3PAR StoreServ Simulator
Overview
The 3PAR Simulator software provides a virtualized HPE 3PAR StoreServ Storage System without the corresponding physical storage hardware requirements. The features normally provided by the hardware components, including the ASIC, HBAs, and enclosures, are emulated in software. The 3PAR Simulator provides the same look, feel, and primary interaction capabilities as a physical storage system, including support for management, administration, and reporting through the HP 3PAR StoreServ Management Console (SSMC), remote CLI, and client software access to the CIM-API and WS-API interfaces.
Supported Platform
The 3PAR Simulator software release version 3.2.2 MU4 supports the configuration of up to 2
simultaneous instances, differentiated by name and serial number, of a 2-node HPE 3PAR
StoreServ 8200 Storage System running HP 3PAR OS version 3.2.2 MU4.
Supported Features
The following 3PAR StoreServ Storage System features are supported:
• Up to 48 HDDs, 4 cage configuration
• Cage types – DCN2, DCS7, DCS8
• Disk types – FC, SSD, NL
• 3PAR Management Console support
• CLI and Remote CLI support
• CIM-API (SMI-S) and WS-API support
• Storage Provisioning including Thin-Provisioning
• Exporting Virtual Volumes
• Adaptive Optimization (AO)
• Dynamic Optimization (DO)
• Local Replication (Snapshot)
• Remote Replication (RCIP) - requires 2 instances of the simulator and additional resources
Important to note – When using the HPE 3PAR Simulator with Remote Copy, additional resources are required for the VMs that make up each simulator instance.
NOTE: The license validity for the HPE 3PAR StoreServ Simulator has been extended until December 2024.
Simulator Installation
Installation of the HPE 3PAR Simulator requires a VMware environment. Requirements include:
• One VMware VM assigned as the ESD node
• Two VMware VMs assigned as cluster node 1 and cluster node 2
Minimum System Requirements
The 3PAR Simulator requires a minimum of VMware ESXi 5.5 or above, or VMware Workstation 11 or later. The simulator configuration requires deploying three VMs: two VMs simulating the cluster nodes, a third VM simulating the enclosure, and a private network configuration to enable communication between the three VMs. The simulator has not been tested on any other variations of virtual machines and is not supported in any configuration other than those tested. The user can assign additional resources, but this is not required unless simulating an RCIP configuration as noted below.
The minimum system resources required for each VM are:
• One virtual CPU (single core)
• Three virtual NICs
• 2GB RAM (4GB is required for use in an RCIP configuration)
• 50GB disk space (full or thin-provisioned)
Installation and Configuration Summary
Setup of the 3PAR Simulator configuration will require completion of the following steps:
1. Cluster Nodes Installation – Involves installation of two VMs (sim_node0 & sim_node1) using the cluster node OVF package.
2. Enclosure Node Installation – Involves installing one VM (sim_esd) using the enclosure node OVF package.
3. Network Configuration – Involves creating a private network for communication between the three VMs. The network configuration differs between the VMware ESXi Server and VMware Workstation setups; follow the instructions in the corresponding section of this guide.
4. Simulator configuration and bring-up – Involves configuring the cluster node VMs and the enclosure node VM and bringing them all up to form the storage system simulator.
5. Installation of the HPE 3PAR StoreServ Management Console (SSMC) 2.3 and the 3PAR CLI application v3.2.2 MU4 (optional).
VMware ESXi Server Installation
Installing the Cluster Nodes
The cluster node (clus_node) package is included in the HP-3PAR-StoreServ-Simulator-3.2.2MU4.zip file. Extract the files to a folder on your
computer. The uncompressed file size of the clus_node package is ~6.4GB.
Note
Each "Deploy with OVF template" operation in the vSphere Client can take up to 30 minutes to complete with high-speed LAN networking. It is highly recommended that the installation be performed from a high-speed network or local disk.
The cluster node installation steps are as follows:
Step 1: Unzip the cluster node OVF package
There are 3 VM files in the cluster node OVF package:
• clus_node_template_3.2.2.MU4_field_mode.ovf
• clus_node_template_3.2.2.MU4_field_mode-disk1.vmdk
• clus_node_template_3.2.2.MU4_field_mode.mf
Step 2: Create a VM (sim_node0) using the "Deploy OVF template" option in vSphere
• Log in to the VMware vSphere Client
• Click the "File" option at the top menu
• Click "Deploy OVF template"
• In the "Deploy from a file or URL" dialog, specify the full pathname of the clus_node_template_3.2.2.MU4_field_mode.ovf file
• Click "Next" - shows the OVF Template details
• Click "Next" - type in the name of the new VM
  Example: sim_node0
• Click "Next" - shows the disk format, choose thick or thin provisioning
• In the "Network Mapping" screen, select VM Network
• Click "Next" - Ready to Complete
• Click "Finish" - deploys your new VM
Step 3: Create a second VM (sim_node1) using the vCenter “clone” or “Deploy OVF template” option in vSphere
You can use the "clone" feature in vCenter to create an additional VM that is identical to the one that was created in Step 2 above, and provide a name
for the cloned VM (Example: sim_node1). If you are not using vCenter, you can create the second VM by following the steps below:
• Log in to the VMware vSphere Client
• Click the "File" option at the top menu
• Click "Deploy OVF template"
• In the "Deploy from a file or URL" dialog, specify the full pathname of the clus_node_template_3.2.2.MU4_field_mode.ovf file
• Click "Next" - shows the OVF Template details
• Click "Next" - type in the name of the new VM
  Example: sim_node1
• Click "Next" - shows the disk format, choose thick or thin provisioning
• In the "Network Mapping" screen, select VM Network
• Click "Next" - Ready to Complete
• Click "Finish" - deploys your new VM
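If you prefer a command-line deployment, VMware's ovftool can deploy the same OVF package. A minimal sketch, assuming ovftool is installed, an ESXi host reachable at 192.168.1.10 with a datastore named datastore1, and root credentials (all of these are placeholder values to adjust for your environment):
ovftool --name=sim_node0 --datastore=datastore1 --network="VM Network" clus_node_template_3.2.2.MU4_field_mode.ovf vi://root@192.168.1.10/
ovftool --name=sim_node1 --datastore=datastore1 --network="VM Network" clus_node_template_3.2.2.MU4_field_mode.ovf vi://root@192.168.1.10/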
Installation of the Enclosure Node
The enclosure node (esd_node_template_3.2.2.MU4) OVF package is included in the HP-3PAR-StoreServ-Simulator-3.2.2 MU4.zip file. Extract the
files to a folder on your computer. The uncompressed file size of the enclosure node package is ~ 460MB.
Note
The enclosure node installation steps for creating the third VM (esd_node) are similar to the cluster node installation, except that the esd_node_template_3.2.2.MU4 OVF package must be used.
Step 1: Unzip the enclosure node OVF package
There are 3 VM files in the enclosure node OVF package:
• esd_node_template_3.2.2.MU4.ovf
• esd_node_template_3.2.2.MU4-disk1.vmdk
• esd_node_template_3.2.2.MU4.mf
Step 2: Create VM (sim_esd) using the "Deploy OVF template" option
• Log in to the VMware vSphere Client
• Click the "File" option at the top menu
• Click "Deploy OVF template"
• In "Deploy from a file or URL", specify the full pathname of the esd_node_template_3.2.2.MU4.ovf file
• Click "Next" - shows the OVF Template details
• Click "Next" - type in the name of the new VM
  Example: sim_esd
• Click "Next" - shows the disk format, choose thick or thin provisioning
• In the "Network Mapping" screen, select VM Network
• Click "Next" - Ready to Complete
• Click "Finish" - deploys your new VM
Network Configuration
Once the VMs have been deployed, the network configuration must be completed to connect them to the appropriate networks. The VMs are configured with three vNICs, and the network label is set to VM Network by default for all three vNICs. The default configuration must be changed as follows:
• The first vNIC is dedicated to management tools. It must be connected to a network that allows communication with the servers running the HP 3PAR Management Console, remote CLI, or client software accessing the CIM-API (SMI-S) or WS-API. Once configured, these tools connect to the 3PAR Simulator via the cluster IP address specified during the "Out of the Box" (OOTB) procedure.
• (Optional) The second vNIC is dedicated to Remote Copy if you plan to configure two separate instances of the 3PAR Simulator to leverage this capability. If Remote Copy is configured, this interface must be connected to the Remote Copy network. This applies only to the cluster nodes (sim_node0 & sim_node1), as the enclosure node (sim_esd) is not part of the Remote Copy network. The second vNIC in the enclosure node (sim_esd) is unused, and its network label can be left at the default VM Network.
• The third vNIC must be connected to a private network connecting all three VMs (sim_node0, sim_node1 & sim_esd) through a virtual switch (vSwitch). The virtual switch should not be connected to a physical adapter, as the traffic is routed only locally (assuming the VMs are hosted on the same ESXi host) to allow communication between the three VMs in the 3PAR Simulator configuration. Follow the instructions below in Step 1: vSwitch configuration and Step 2: Connect the third vNIC in each VM to the Private Network to set up the private network.
Step 1: vSwitch configuration
To configure a vSwitch:
• Log in to the vSphere Client and select your ESXi host from the inventory panel
• Click the "Configuration" tab in the top right panel
• Click "Networking" in the Hardware frame
• Click "Add Networking" (top right corner)
• Accept the default connection type, "Virtual Machine", and click "Next"
• Accept the default "Create a vSphere standard switch"
• Deselect the network adapter(s). Example: if the adapter vmnic2 is selected, deselect it (see Figure 1 below)
Figure 1 Sample vSwitch configuration with all adapters unselected
• Click "Next"
• Accept the default Network Label and VLAN ID None (0). Make a note of the Network Label; this label must be selected in the next step to connect the vNIC to the correct network
• Click "Next" and then "Finish" to complete the vSwitch configuration (see Figure 2 below)
Figure 2 Sample vSwitch configuration
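The same vSwitch and port group can also be created from the ESXi shell with esxcli. A sketch using the names vSwitch1 and "VM Network 2" that appear in the example in Step 2 (no uplink is added, so the traffic stays local to the host):
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name="VM Network 2" --vswitch-name=vSwitch1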
Step 2: Connect the third vNIC in each VM to the Private Network
Connect the third vNIC in each VM to the private network corresponding to the vSwitch created in the previous step.
To connect the vNIC:
• Log in to the vSphere Client and select the sim_esd VM from the inventory panel
• Click the "Getting Started" tab (in the right page)
• Click "Edit virtual machine settings"
• Click the network interface "Network Adapter 3"
• In the Network label pull-down menu, select the network label created in the previous step.
  Example: vSwitch1 was created in the previous step and the network is VM Network 2
• Select "VM Network 2"
• Click "OK"
Figure 3 Sample Private Network configuration
VMware Workstation
This section describes the HPE 3PAR StoreServ simulator installation and network configuration
procedures for VMware Workstation 11 and above.
Installation of the Cluster Nodes Overview
The cluster node (clus_node) package is included in the HP-3PAR-StoreServ-Simulator-3.2.2
MU4.zip file. Extract the files to a folder on your computer.
Step 1: Unzip the clus_node OVF package
There are 3 VM files in the clus_node OVF package:
1. clus_node_template_3.2.2.MU4_field_mode.ovf
2. clus_node_template_3.2.2.MU4_field_mode-disk1.vmdk
3. clus_node_template_3.2.2.MU4_field_mode.mf
Step 2: Create a VM (sim_node0) using the "Open" option within VMware Workstation
1. Launch the VMware Workstation
2. Click the "File" option at the top menu
3. Click "Open"
4. In "Open file or URL", specify the full pathname of the clus_node_template_3.2.2.MU4_field_mode.ovf file
5. Click "Open" - shows Import Virtual Machine
6. Enter a name for the node
Example: sim_node0
You can leave the default path for the location of the VM
7. Click “Import”
Step 3: Create a second VM (sim_node1) using the "Open" option within VMware Workstation
1. Launch the VMware Workstation
2. Click the "File" option at the top menu
3. Click "Open"
4. In "Open file or URL", specify the full pathname of the clus_node_template_3.2.2.MU4_field_mode.ovf file
5. Click "Open" - shows Import Virtual Machine
6. Enter a name for the node
   Example: sim_node1
   You can leave the default path for the location of the VM
7. Click "Import"
Enclosure Node Installation
Step 1: Create a VM (sim_esd) using the "Open" option within VMware Workstation
1. Click the "File" option at the top menu
2. Click "Open"
3. In "Open file or URL", specify the full pathname of the esd_node_template_3.2.2.MU4.ovf file
4. Click "Open" - shows Import Virtual Machine
5. Enter a name for the node
   Example: esd_node
   You can leave the default path for the location of the VM
6. Click "Import"
Network Configuration using VMware Workstation
Once the VMs are deployed, the network configuration must be completed to connect them to the appropriate networks. The VMs are configured with three vNICs, and the network label is set to VM Network by default for all three vNICs. The configuration must be changed as follows:
• The first vNIC (Network Adapter in Figure 4) is dedicated to management tools. It must be connected to a network that allows communication with the servers running the HP 3PAR StoreServ Management Console, remote CLI, or client software accessing the CIM-API (SMI-S) or WS-API. Once configured, these tools connect to the 3PAR Simulator via the cluster IP address specified during the "Out of the Box" (OOTB) procedure.
• (Optional) The second vNIC (Network Adapter 2 in Figure 4) is dedicated to Remote Copy if you plan to configure two separate instances of the 3PAR Simulator to leverage this capability. If Remote Copy is configured, this interface must be connected to the Remote Copy network. This applies only to the cluster nodes (sim_node0 & sim_node1), as the enclosure node (sim_esd) is not part of the Remote Copy network. The second vNIC in the enclosure node (sim_esd) is unused, and its network label can be left at the default VM Network.
• The third vNIC (Network Adapter 3 in Figure 4) must be connected to a private network connecting all three VMs (sim_node0, sim_node1 & sim_esd) with a LAN Segment so traffic is routed locally, allowing communication between the three VMs in the 3PAR Simulator configuration. Follow the instructions below in Step 1: Configuring a LAN Segment and connecting the vNIC to the LAN Segment network to set up the private network.
Step 1: Configuring a LAN Segment
To configure a LAN Segment:
1. Launch VMware Workstation
2. Right-Click on sim_esd, select “Settings”
3. Click on "Network Adapter 3" in the Hardware frame
4. In the Network Connection frame, select LAN Segment
5. Click on the tab below that says “LAN segment”. A global LAN Segments Window will pop up
6. Click “Add” and enter a name for the LAN Segment
Example: Simulator
7. Ensure that the Network Adapter 3 is connected to the LAN Segment you just created by selecting it using the drop down menu (See Figure 4
below)
8. Click “OK”
Figure 4 Virtual Machine Settings - LAN segment selection for Network Adapter 3
Important
Repeat the above steps to connect Network Adapter 3 for both sim_node0 & sim_node1 to the LAN segment
Step 2: Configuring a management network
To use the HP 3PAR StoreServ Management Console or 3PAR remote CLI on the system that is running VMware Workstation and hosting the 3PAR Simulator instance, or on another VM, you will need to configure Network Adapter 1.
1. Right-click sim_node0 and select "Settings"
2. Click "Network Adapter" in the Hardware frame
3. In the Network connection frame, select "Host-only"
4. Click "OK"
5. Click the "Edit" option at the top menu
6. Click "Virtual Network Editor"
7. Select the host-only type VMnet
8. Note the Subnet Address and the Subnet Mask. You will need these later during the OOTB process when configuring the node IP address that communicates with the StoreServ Management Console, CLI utility, PuTTY, etc.
9. After creating the "Host Only Network" you may need to reboot your machine in order to initialize the network stack. A few users running VMware Workstation 10 have reported that this was the only way they could get the network to work.
Figure 5 Virtual Machine Settings - Host only selection for Network Adapter 1
Important
During the "Out Of The Box" (OOTB) procedure, you will need to provide a static IP and the same subnet mask to be able to communicate with the HP 3PAR Simulator storage system instance from the 3PAR StoreServ Management Console on your laptop or on a separate VM. Please refer to the SSMC documentation for its resource requirements.
Note
Note about VMware networking: When you assign a Host-only network profile to a specific VMnetx, VMware Workstation automatically creates a new Class C subnet for that VMnet (i.e., 192.168.x.x). Every installation is different, so it is important to check what subnet is actually in use on your machine using the VMware Virtual Network Editor (under Edit on the top menu of the VMware Workstation home page). This network will by default have DHCP turned on (assigning addresses from .128 through .254), as well as a default gateway of .1 on that subnet.
In most cases, the VMware Workstation installation creates a VMnet1 interface on your system. The default IP assigned to the VMnet1 interface is 192.168.80.1. THIS IS NOT AN IP ADDRESS YOU ASSIGN TO THE 3PAR SIMULATOR.
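For example, on a Windows host you can confirm the subnet in use by running ipconfig and checking the "VMware Network Adapter VMnet1" entry (a sketch with output abbreviated; the addresses shown are the defaults mentioned above and may differ on your machine):
C:\> ipconfig
Ethernet adapter VMware Network Adapter VMnet1:
   IPv4 Address. . . . . . . . . . . : 192.168.80.1
   Subnet Mask . . . . . . . . . . . : 255.255.255.0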
Example:
In this case, the IP address 192.168.80.10 and subnet mask 255.255.255.0 have been assigned to the HP 3PAR Simulator node during the OOTB procedure.
Figure 6 Host IP Address Settings - ipconfig for VMNet1
Figure 7 VMWare Virtual Network Editor - Host only IP address configuration for VMnet 1
Simulator Setup
After node installation and network configuration are complete, power up all VMs.
Step 1: Start the esd daemon
Log in to the VM deployed with the esd package as user root with password root.
Run createcages to create the desired cage configuration.
The createcages utility will display the following menu:
Choose from the menu:
1. 1 cage, 24 HDDs
     1 DCN2 cage with 24 FC drives
2. 4 cages, 96 HDDs
     1 DCN2 cage with 12 FC drives and 12 SSD drives
     2 DCS8 cages with 12 FC drives and 12 SSD drives each
     1 DCS7 cage with 12 NL drives and 12 SSD drives
3. 4 cages, 48 HDDs
     1 DCN2 cage with 12 FC drives
     3 DCS8 cages with 12 FC drives each
4. 4 cages, 96 HDDs
     1 DCN2 cage with 12 FC drives and 12 SSD drives
     3 DCS8 cages with 12 FC drives and 12 SSD drives each
5. exit (you can create cages using esd)
Select a number (1-5) to create cages and then exit
Once you have the hardware configured, start the esd daemon by running the command esd.
See the section Enclosure and Cage Configuration for more information on how to manage cages & disks using esd.
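For reference, the enclosure-side bring-up amounts to two commands run as root on the esd VM (the cage layout selected from the interactive createcages menu is up to you):
# createcages     <- select a cage layout from the menu, then exit
# esd             <- start the esd daemon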
Step 2: Simulator Node Configuration
1. Log in to the first node as user console with password crH72mkr
2. Choose Option 1 (Simulator Cluster Node Configuration)
   a) Enter 0 for the node ID
   b) Choose one of the two serial numbers to assign to the cluster node (1699678 or 1699679)
   c) Confirm
3. Log in to the second node as user console with password crH72mkr and choose Option 1 again
   a) Enter 1 for the node ID
   b) Assign the same serial number to this node as the one you assigned in step 2-b (1699678 or 1699679)
   c) Confirm 'y'
Important
• The serial number must be the same for both nodes in a simulator instance
• One of the two serial numbers must be used in order to activate the preloaded licenses. Failing to use one of these serial numbers will lead to an instance with no license applied, and you will have to reinstall.
4. Choose Option 5 in each VM to reboot
5. Once the VMs are booted, log in to the first node as user console with password crH72mkr
6. Choose Option 2 to run the Out Of The Box (OOTB) procedure
7. Enter "yes" to continue
8. Follow the prompts to finish the configuration using the information you collected in the Simulator Setup section.
When OOTB completes, the HP 3PAR Simulator storage system cluster node configuration is done.
See the section Enclosure and Cage Configuration for more information on how to manage cages & disks using esd.
Step 3: Verifying the configuration
1. Open the HP 3PAR Management Console
2. Connect to the cluster node IP address
3. Log in with user 3paradm and password PTzk5fVt
4. Check the controller nodes, cages, and disk configuration.
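If the 3PAR CLI is installed, the same checks can be made from the Remote CLI after connecting to the cluster IP address and logging in as 3paradm; a minimal sketch (output omitted):
showsys
shownode
showcage
showpd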
Simulator Best Practice
1. Unless the entire storage system (cluster nodes and esd node) is being brought down or restarted, do not terminate the esd daemon alone while the nodes are up and running. Terminating the esd daemon causes the disks to go offline, leading to TOC quorum loss, and eventually the nodes go into power-fail mode.
2. Gracefully shut down the HP 3PAR Simulator by first shutting down the cluster nodes, then stopping the esd daemon and shutting down the esd VM. Use the following steps:
   A. Log in to node 0 as user 3paradm with password PTzk5fVt
   B. Issue the command
      shutdownsys halt
   C. Once the cluster node VMs (node 0 and node 1) are powered down:
      i. Log in to the esd VM as user root with password root
      ii. Issue the command
          esd stop
   D. Once the daemon is stopped, issue the command
      shutdown -h 0
3. When powering the HP 3PAR Simulator back up:
   A. Bring up the esd VM first and start the esd daemon
   B. Then power up both cluster nodes.
Enclosure and Cage Configuration
Start the Daemon
You can start the daemon by running either of the following commands:
esd
esd start
esd data directory: /common/esd/.esddata
The esd maps each disk to a file. By default, all data files are created in a directory called .esddata under /common/esd.
You can override the default parent directory /common/esd when you start esd using the -l option:
esd start -l <new-dir>
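For example, to keep the disk data files under a different parent directory (the path /sim_data below is only an illustration):
esd start -l /sim_data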
Stop the Daemon
You can stop the daemon by running the following command:
esd stop
Note
You cannot start esd again without stopping it first, as only one instance of esd can be running at any time.
Cage WWN and Disk WWN
Some of the subcommands require cage WWN or disk WWN to perform the requested operation. The esd automatically generates cage WWN and
disk WWN when a new cage or disk is added. You can use the list command described later in the section to get the WWNs.
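For example, before adding or deleting components you can capture the WWNs with the list subcommands described below:
# esd list cage
# esd list mag -w <cage-wwn>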
Add a Cage
Synopsis:
esd add cage [mag-num=<num> | disk-num=<num>] [cage-type={DCN2|DCS7|DCS8}] [disk-type={FC|SSD|NL}] node=<node0-name> [node=<node1-name>]
mag-num     specifies the number of magazines to create, default 24 magazines
disk-num    specifies the number of disks to create, default 24 disks
cage-type   specifies the type of cage to create: DCN2, DCS7, DCS8
disk-type   specifies the type of disks to create: FC, SSD, NL
node        specifies the nodes to connect
Important
1. For the first cage configuration, the parameter cage-type is optional. By default the first cage created will be DCN2 with FC drives. A user is allowed to override the default by specifying the cage-type and/or disk-type. The parameter cage-type is mandatory for additional cages.
2. The version 1.2 release of HP 3PAR Simulator supports only a 2-node cluster, so you can configure only one DCN2 cage with the simulator. If you need to configure more than one cage, use cage types DCS7 or DCS8.
3. If the parameter disk-type is not specified, the following defaults will apply for each cage type:
   a. DCN2 and DCS8 cages: FC
   b. DCS7 cage: NL
4. If the parameter disk-num or mag-num is not specified, the default number is 24.
5. The simulator supports up to 4 cages with at most 48 disks. For example, a configuration with 2 cages and 24 disks per cage or 4 cages with 12 disks per cage has reached the limit. Additional cages/disks cannot be added until some disks are deleted.
Examples:
1. Add the first cage with the default configuration:
   # esd add cage node=vm-a node=vm-b
   Note
   The above command adds a DCN2 cage with 24 FC drives, connecting to cluster nodes vm-a and vm-b
2. Add the first cage or an additional cage by specifying cage-type:
   # esd add cage cage-type=DCS8 node=vm-a node=vm-b
   Note
   The above command adds a DCS8 cage with 24 FC drives, connecting to cluster nodes vm-a and vm-b
3. Add an additional cage by specifying cage-type, disk-type, and disk-num:
   # esd add cage cage-type=DCS8 disk-type=SSD disk-num=12 node=vm-a node=vm-b
   Note
   The above command adds a DCS8 cage with 12 SSD drives, connecting to cluster nodes vm-a and vm-b
Add Magazine(s)
Synopsis:
esd add mag -w <cage-wwn> [disk-type={FC|SSD|NL}] [mag-num=<num>]
-w <cage-wwn>  specifies the associated cage to add the magazine
mag-num        specifies the number of magazines to create, default: 1
disk-type      specifies the type of disks to create, default depends on the type of the cage
See the section Add a Cage
Examples:
1. add two magazines with default disk-type:
# esd add mag -w 50050cc2fc464dc4 mag-num=2
The above command adds two magazines to the cage with wwn 50050cc2fc464dc4. If the cage is DCN2 or DCS8, FC drives will be added. If
the cage is DCS7, NL drives will be added.
2. Add 1 magazine with SSD drive:
# esd add mag -w 50050cc2fc464dc4 disk-type=SSD
The above command adds 1 magazine with SSD drive to the cage with wwn 0x50050cc2fc464dc4. If the cage is DCS7, this operation will fail
because DCS7 doesn’t allow SSD drives.
Add Disk(s)
Synopsis:
esd add disk -w <cage-wwn> [disk-type={FC|SSD|NL}] [disk-num=<num>]
-w <cage-wwn>  specifies which cage to add the disk
disk-num       specifies the number of disks to create, default: 1
disk-type      specifies the type of disks to create, default depends on the type of the cage
See the section Add a Cage
Examples:
1. Add 8 disks with default disk-type:
   # esd add disk -w 50050cc2fc464dc4 disk-num=8
   The above command adds 8 drives to the cage with the wwn 50050cc2fc464dc4. If the cage is DCN2 or DCS8, FC drives will be added; if the cage is DCS7, NL drives will be added.
2. Add 1 NL drive:
   # esd add disk -w 50050cc2fc464dc4 disk-type=NL
   The above command adds 1 NL drive to the cage with the wwn 50050cc2fc464dc4. If the cage is DCS8, this operation will fail because DCS8 doesn't allow NL drives.
Delete a Cage
Synopsis:
esd delete cage -w <cage-wwn>
Example:
# esd delete cage -w 50050cc2fc464dc4
Note
The above command deletes the cage with wwn 50050cc2fc464dc4; the corresponding data directory for this cage will be removed as well.
Delete a Disk
Synopsis:
esd delete disk -w <disk-wwn>
Example:
# esd delete disk -w 20000002fc464dc4
Note
The above command deletes the disk with wwn 20000002fc464dc4; if the magazine becomes empty, it will be deleted as well. The corresponding data file for this disk will be removed.
List Cage(s)
Synopsis:
esd list cage [-w <cage-wwn>]
Note
If -w <cage-wwn> is specified, it lists detailed information of the specific cage; otherwise, it lists a summary of all cages.
Examples:
1) List summary of all cages, one line per cage:
# esd list cage
CageType  WWN               Drives  Vendor    ProdID           Rev
DCN2      50050cc2fc47cff4  23      PTzk5fVt  EB-2425P-E6EOS   3207
DCS7      50050cc2fc48ddc8  7       PTzk5fVt  HB-2435-E6EBD    3207
DCS8      50050cc2fc4ae718  10      PTzk5fVt  EB-2425P-E6EBD   3207
Note
The list displays only the cages configured by esd; it is not necessarily in the same order as the output of the "showcage" command on the cluster node.
2) List the detailed information of a specific cage:
# esd list cage -w 50050cc2fc47cff4
List Magazine(s)
Synopsis:
esd list mag -w <cage-wwn>
Note
This lists all magazines of a specific cage. Since there is one disk per magazine in an EOS system, this lists all disks of a cage.
Example:
List all magazines (i.e. disks) of a cage:
# esd list mag -w 50050cc2fc47cff4
===== disk =====
ID   DEVNAME           Serial#   NumOfBlks  BlkSz  Vendor   Model        Rev   Status  Temp
0:0  20000002fc47cff4  bb000000  943718400  520    SEAGATE  ST9450404SS  XRFA  ok      20
1:0  20000002fc47d0f4  bb000100  943718400  520    SEAGATE  ST9450404SS  XRFA  ok      20
2:0  20000002fc47d1f4  bb000200  943718400  520    SEAGATE  ST9450404SS  XRFA  ok      20
List a Disk
Synopsis:
esd list disk -w <disk-wwn>
Example:
List information of a specific disk:
# esd list disk -w 20000002fc47cff4
===== disk =====
ID   DEVNAME           Serial#   NumOfBlks  BlkSz  Vendor   Model        Rev   Status  Temp
0:0  20000002fc47cff4  bb000000  943718400  520    SEAGATE  ST9450404SS  XRFA  ok      20
ESD Log Files
The esd log file location is /var/log/esd. It logs the following information in the esd.log* files:
• Daemon start/stop time along with its pid
• Connections with vhba
• Errors
The esd daemon keeps the last 5 log files (esd.log.0 through esd.log.4). esd.log has the most recent information. If a configuration change fails, check esd.log for errors.
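When troubleshooting a failing configuration change, the log can be followed with standard shell tools; for example (assuming the layout described above, with esd.log inside the /var/log/esd directory):
# tail -f /var/log/esd/esd.log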
Exporting Virtual Volumes
The HP 3PAR Simulator environment does not support a host entity. To export volumes to a host, a fake host entity must be created. The simulator provides FC (Fibre Channel) ports for the fake host to connect to and simulates host connectivity only; host I/O is not supported over this connection.
Before exporting a virtual volume, you need to simulate the host connection.
1. Log in to a cluster node with user console and password crH72mkr
2. Choose option 3 (Simulator Host Connection); this creates a host WWN.
You can then use the 3PAR Management Console or Remote CLI to create host attributes and export the volume. If you use Remote CLI, run
showhost to find the host WWN. The host WWN must be used with the createhost command.
Note
1. The host connection simulation is an internal utility and may be changed or obsoleted in future releases of the HP 3PAR Simulator.
2. Once created, the simulated host connections cannot be deleted.
3. The simulator host connections are not persistent across simulator reboots; they are automatically removed when the nodes are rebooted.
Refer to the HP 3PAR CLI Administrator's Manual for the full syntax of the createhost and createvlun commands used to export volumes.
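For illustration, an export from the Remote CLI might look like the following sketch. The host name fakehost and the LUN number are placeholders, and the WWN must be the one reported by showhost for the simulated connection:
showhost
createhost fakehost <host-WWN>
createvlun vv1 0 fakehost
showvlun -host fakehost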
Remote Copy
Remote Copy allows copying virtual volumes from one storage system to another.
With the HP 3PAR Simulator:
• Only Remote Copy over IP (RCIP) in a 1:1 configuration is supported
• Any RC configuration that requires more than 2 nodes in a cluster is not supported
• Assign one of the two supported serial numbers to each instance of the simulator. The serial number assignment is part of the cluster node configuration and setup; ensure the same serial number is assigned to both cluster nodes in the same instance of the simulator.
Requirements
To run a 1:1 RCIP configuration in a server or workstation environment, your system needs to be able to support:
• A total of 6 VMs, three VMs for each simulated storage system.
• The cluster node VMs require 2 vCPUs and 4GB memory each.
• The two instances of the simulator can be run on a single system or on two different systems. The RC network connectivity between the two simulator instances must be set up via a separate vSwitch on ESXi or a LAN segment on VMware Workstation. vNIC#2 in each of the cluster nodes must be connected to that vSwitch or LAN segment.
The following requirements are general RC requirements, not specific to the simulator:
• One static cluster IP address for each cluster
• One static IP address for each RC port (vNIC#2 in each cluster node VM); the RC IP address must be in a different subnet than the primary node and cluster IP addresses.
Configuration
Follow the instructions in the Remote Copy user guide to set up the RC over IP configuration.
As an example, the section below illustrates how RCIP can be set up using the HP 3PAR CLI. Users might choose different IPs and subnet masks.
Important
In the following instructions, Cluster A refers to the 2-node cluster in Storage System 1 and Cluster B refers to the 2-node cluster in Storage
System 2
Cluster A:
First set up the netc configuration to configure the cluster IP; this is required for the RC commands to work:
netcconf -set
Follow the prompts to enter IP address, subnet mask and gateway address
To verify the configuration, use the command shownet
RC configuration:
controlport rcip addr 192.1.82.75 255.255.248.0 0:3:1
controlport rcip addr 192.1.82.76 255.255.248.0 1:3:1
controlport rcip gw 192.1.80.1 0:3:1
controlport rcip gw 192.1.80.1 1:3:1
Use showport -rcip to verify the RC port information
Cluster B:
First set up the netc configuration; this is required for the RC commands to work:
netcconf -set
Follow the prompts to enter IP address, subnet mask and gateway address
To verify the configuration, use the command shownet
RC configuration:
controlport rcip addr 192.1.82.77 255.255.248.0 0:3:1
controlport rcip addr 192.1.82.78 255.255.248.0 1:3:1
controlport rcip gw 192.1.80.1 0:3:1
controlport rcip gw 192.1.80.1 1:3:1
Use showport -rcip to verify the RC port information
Cluster A:
To verify the RC link connectivity:
controlport rcip ping 192.1.82.77 0:3:1
controlport rcip ping 192.1.82.78 0:3:1
To start remote copy and create target:
startrcopy
creatercopytarget srcp2 IP 0:3:1:192.1.82.77 1:3:1:192.1.82.78
To create a CPG and a VV:
createcpg -ha mag -sdgs 16g -sdgl 32g -sdgw 24g cpg1
createvv -snp_cpg cpg1 cpg1 vv1 10g
Cluster B:
To verify the RC link connectivity:
controlport rcip ping 192.1.82.75 0:3:1
controlport rcip ping 192.1.82.76 0:3:1
To start remote copy and create target:
startrcopy
creatercopytarget srcp1 IP 0:3:1:192.1.82.75 1:3:1:192.1.82.76
To create a CPG and a VV:
createcpg -ha mag -sdgs 16g -sdgl 32g -sdgw 24g cpg1
createvv -snp_cpg cpg1 cpg1 rvv1 10g
Cluster A:
To create an rcopy group and admit rcopy VV:
creatercopygroup rc_group1 srcp2:periodic
admitrcopyvv vv1 rc_group1 srcp2:rvv1
Start initial replication:
startrcopygroup rc_group1
Use showrcopy to display the status of the replication
To resync:
syncrcopy rc_group1
Use showrcopy to check the status of the replication
The HP 3PAR Management Console can also be used to configure RCIP. Please refer to the Remote Copy and Management Console user guides for more information.
The HP 3PAR StoreServ Simulator sharepoint
http://ent142.sharepoint.hp.com/teams/3ParSimulator/SitePages/Home.aspx