HPE 3PAR StoreServ Storage PoC
Technical white paper
Contents
Executive summary
PoC framework
Customer responsibility
Further requirements
Vendor responsibility
Decision criteria
HPE 3PAR StoreServ architecture
Step-by-step procedures
Provisioning with the GUI
VV creation
Creation of custom CPGs
Creating a CPG using the GUI (optional)
Provisioning with the CLI
Host creation with the CLI
VV creation using the CLI
Exporting the VV with the CLI
Creating a CPG using the CLI (optional)
Host configuration
Autonomic Groups
HPE 3PAR Snapshots (formerly known as Virtual Copy)
Creating snapshots in the HPE 3PAR SSMC
Creating snapshots using the CLI
Clones (formerly known as Full/Physical Copies)
Preconditioning SSD drives for performance and deduplication testing
Demonstrating capacity efficiency
Capacity utilization and efficiency
Thin Provisioning
HPE 3PAR Thin Conversion
Zero Detection
VMware thin clones
Real-time monitoring and historical reporting
Getting started with real-time monitoring and historical reporting in the GUI
Creating a real-time report
Creating a historical report
Create historical reports using the HPE 3PAR SSMC
Sample statistics for performance that can be viewed in real time from the CLI
Performance monitoring in the CLI
Historical reporting in the CLI
Generating SR reports from the command line
Performance tuning
Dynamic Optimization (volume-level/system-level tuning)
Running Dynamic Optimization in the HPE 3PAR SSMC
Tune VV Dialog
Tuning an entire array
Running Dynamic Optimization from the CLI
Adaptive Optimization (sub-volume-level tiering)
CPGs as tiers in Adaptive Optimization configuration
Demonstrating HPE 3PAR Adaptive Optimization
Creating an AO schedule
Start the load generator
Adaptive Flash Cache
Priority Optimization (quality of service)
Running Priority Optimization in the GUI
Running Priority Optimization in the CLI
HPE 3PAR File Persona (SMB and NFS)
Configuring File Persona via the HPE 3PAR CLI
Resiliency testing
Test case 1: Failure of a front-end (host) cable
Test case 2: Failure of a back-end (disk) serial-attached SCSI (SAS) cable
Test case 3: Fail a disk drive
Test case 4: Fail a disk enclosure power supply
Test case 5: Power off a drive enclosure
Test case 6: Fail an AC source (optional)
Test case 7: Fail a controller node power supply
Test case 8: Simulate a node failure
Appendix A: Preconditioning SSD drives for performance testing
Intended audience
General guidance on preconditioning
Defining the preconditioning workload
Appendix B: Performance testing using Iometer
Appendix C: Using the HPEPoCApp for performance benchmarking
HPEPoCApp
Summary steps
Fill workload
Deduplication and Compression
Detailed steps
Deduplication
Compression
Pattern
Threads
Workload definitions
Sample
RandomRead
Curve and Threads
RandomWrite
SequentialRead
SequentialWrite
RandomMix
RandomMix6535
SequentialMix
Fill
SQLServer
OracleDB
VDI
Sizer16K6040
SRData <filename>
Appendix D: Basic troubleshooting
Appendix E: Capturing performance data for offsite analysis
Appendix F: Tables
Appendix G: Figures
Executive summary
HPE 3PAR StoreServ is a six-nines,1 all-flash, enterprise-class storage array designed from the ground up with thin provisioning and wide
striping. This is in contrast to established vendors who bolt these features onto decades-old architectures. Also, unlike other architectures,
HPE 3PAR StoreServ arrays are n-way active on a per-volume basis, meaning that by default in a 2-, 4-, 6-, or even an 8-node array, all controller
nodes work to serve data for every volume. All HPE 3PAR arrays, from the 8-drive HPE 3PAR StoreServ 8200 to the 1,920-drive
HPE 3PAR StoreServ 20800, use the same mature code base and offer enterprise-level features on even the smallest array.
This proof of concept (PoC) is focused on HPE 3PAR StoreServ arrays and will highlight the performance and space efficiencies as well as
HPE 3PAR enterprise-class resiliency.
PoC framework
This PoC document is written for HPE 3PAR StoreServ arrays running at least HPE 3PAR OS version 3.2.1 MU2.
Timeline
The start of the PoC is <date>. The end of the PoC is estimated to be <date>.
Customer responsibility
For an onsite PoC, the customer will provide the resources documented in table 1.
Table 1. Customer onsite PoC requirements
Requirement | Details
Ethernet ports | <number of nodes> + 2 for the service processor (monitoring device) + any ports to be used for iSCSI, provisioned from 2 subnets if possible
IP addresses | 1 IP address for the array + 1 IP address for the service processor (SP) + the number (if any) of iSCSI connections
Power | As specified in the NinjaSTARS configuration
Rack units | As specified in the NinjaSTARS configuration
Further requirements
• Provide the success criteria (or confirm the criteria listed in “Decision criteria”)
• Deliver resources to build the test environment and perform performance testing
• Provide resources to work with HPE on array configuration
• Deliver facilities for testing, evaluation, and collaboration with the vendor
• Provide input into the PoC planning, evaluation, and documentation
Vendor responsibility
Provide test equipment along with individual expertise to help run the tests. Deliver best practices documentation regarding setup and
configuration of the hardware, OS, database (DB), and applications.
Decision criteria
Objectives that must be met in order to determine success of the PoC:
• Array performance
– Array must be able to successfully complete all the performance tests with a positive result (see the performance specifications in the
projected performance summary in appendix A)
1. Visit www8.hp.com/us/en/products/data-storage/data-storage-products.html?compURI=1649739#.VGqaRskcx8E for more information on the six-nines guarantee.
• Array efficiency
– Array must be able to use space efficiently with a combination of:
 Thin provisioning
 Data deduplication
 Thin snapshots
– Demonstrate the ability to present significantly more capacity than the physically installed capacity in the array
• Array must be able to successfully demonstrate snapshots, quality of service (QoS) capabilities, and ability to non-disruptively
change RAID types
• Array resiliency
– Array must be able to ensure data consistency at all times
– Array must be able to withstand a complete power outage while maintaining full data integrity
– Array must have 1+1 or minimum N+1 resiliency in all major hardware components
HPE 3PAR StoreServ architecture
A quick introduction to HPE 3PAR architecture
The HPE 3PAR StoreServ is, at its core, a very sophisticated volume manager; a volume manager that extends its control all the way down to the
raw physical device. There are two distinct types of virtualization: physical and logical.
Physical virtualization is the division of physical disks into physically contiguous 1 GiB allocations called chunklets. RAID is implemented by
combining chunklets into logical disks (LDs). When visualizing the StoreServ RAID implementation, imagine thousands of 1 GiB drives
that form small RAID sets called RAIDlets. A template called a Common Provisioning Group (CPG) defines all characteristics of the LDs (RAID
level, set size, step size, and so on).
Logical virtualization involves the layout of volumes across the LDs. LDs are themselves divided into 128 MiB regions of contiguous space, and
Virtual Volumes (VVs) are constructed from these regions. Although an LD can be shared by multiple VVs, a region can belong to only one VV.
When allocating storage for volumes, each node in the n-way cluster assembles LDs. The regions from the LDs are combined by the HPE 3PAR
StoreServ array into the VVs that are exported (presented) by the array. Hence, the HPE 3PAR array is n-way active on a per-volume basis: all nodes
actively serve data for every volume at all times.
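This layering can also be inspected from the CLI on a live array. The following is a minimal sketch using standard HPE 3PAR CLI show commands (output columns vary by HPE 3PAR OS version):
cli% showpd    (physical disks and their chunklet capacity)
cli% showld    (logical disks assembled from chunklets by each node)
cli% showcpg   (CPG templates and the LDs grown under them)
cli% showvv    (virtual volumes built from LD regions)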
Table 2. HPE 3PAR terminology
Term | Virtualization | Definition
Chunklet | Physical | 1 GiB of physically contiguous space on a physical disk
Page | Physical | 16 KB of 512-byte disk sectors
Logical disk | Physical | A collection of stripes (for example, 128 KB for RAID 5) across chunklets formatted into RAID sets
Virtual volume | Logical | A collection of 128 MiB regions
Virtual logical unit number (VLUN) | Logical | The logical representation of a single path from a single VV to a single port on a host; a dual-ported host bus adapter (HBA) connected to two different switches will have four possible paths between the host and the volume; hence, an export of the volume will produce four VLUNs
VVSet | Logical | A group of volumes that can be manipulated as one; these are particularly useful for exporting multiple volumes to a cluster, such as a VMware® cluster
HostSet | Logical | A group of hosts that are manipulated as one; these are useful for grouping together cluster hosts
Service processor | N/A | The SP is either a 1U physical server or a virtual host (VMware or Hyper-V) that nearly continuously monitors the array and optionally reports status, performance, hardware failures, and configuration information back to HPE
HA MAG | Logical | By default, the HPE 3PAR LD layout algorithm seeks to construct RAIDlets (RAID sets) using chunklets spread out across as many drive enclosures as possible, which protects against drive enclosure failures. HA MAG allows the construction of RAIDlets inside the same drive enclosure, allowing a larger stripe size with fewer drive enclosures
HA cage | Logical | High availability cage; ensures that no two members of the same RAIDlet reside in the same disk enclosure, so that data remains available in the event a drive enclosure is lost
HA port | Logical | High availability port; applies only to the 8000 series array. Ensures that no two members of the same RAIDlet reside on a daisy-chained disk enclosure and its upstream disk enclosure. HA port is a superset of HA cage
Hash collision | N/A | A hash collision occurs when the cyclic redundancy check (CRC) hash generated for a page matches the hash of another page; in the event this happens, both pages are stored in the dedupe data store (DDS)
Step-by-step procedures
Provisioning with the GUI
Host creation
1. Install the HPE 3PAR StoreServ Management Console (SSMC).
2. Launch the console and log in using the default array user ID and password:
User ID: 3paradm
Password: 3pardata
Figure 1. HPE 3PAR login dialog
Figure 2. StoreServ host creation
a. The easiest way to create a host is to install HPE 3PAR Host Explorer (provided at no cost). The Host Explorer agent will register the
host with the associated Worldwide Names (WWNs) and iSCSI Qualified Names (IQNs). To install Host Explorer, run setup from the
Host Explorer installation media.
b. Start the HPE 3PAR Host Explorer service using the Services MMC (location will vary depending on Windows® version).
c. After the service has started, open a command prompt, navigate to “C:\Program Files\3PAR\Host Explorer” (or a custom directory),
and execute TpdHostAgent push. This will send host information to the array using SCSI Enclosure Services (SES).
Figure 3. WWN selection
If the host is attached through iSCSI, simply click “Next,” leaving the WWN selection empty; then select the iSCSI IQNs from the iSCSI selection pane.
Figure 4. IQN selection
d. After completing the selection of either the Fibre Channel WWNs or the iSCSI IQNs, click “Next.” A summary dialog box displays the selections; click “Finish”
to complete the process.
VV creation
Figure 5. VV creation
1. After logging into the array, select “Provisioning” in the lower left corner at location a in figure 5.
2. Select “Virtual Volumes” at location b in figure 5; then the VV size at location c in figure 5.
Figure 6. VV creation dialog box
3. In the VV creation dialog box, although many options are available, the only information required is the VV name (location a in figure 6),
the VV size and the CPG to use (RAID level) (location b in figure 6), and the type of provisioning. The array defaults to
“Thinly Deduped” volumes in the management console; change this to “Thinly Provisioned” for the purpose of the PoC. By default there is a
CPG created for each of the different disk types for both r1 and r6. A CPG cannot consist of different disk classes.
4. Name the volumes 3PAR.vv at location a in figure 6. The volumes will be created and given the same name with an integer suffix.
5. At location c give the VV a size of 512 GB and select SSD_r1. This space will be used for the production data.
6. At location d in figure 6 select “Copy CPG” and set it to SSD_r5. This allows the HPE 3PAR array to use a different RAID or availability level
and even different drive types or speeds for snapshot space.
7. For the purposes of this PoC, set the number of volumes to create to “3” at location e in figure 6. Note that as multiple volumes are created at
the same time, they can be added to a VVSet and exported all at once.
8. Ensure “Export after creation” is checked, which will trigger the export dialog box. If this step is accidentally skipped, simply select the volume
set and select “Export.”
9. On the export dialog box screen, make sure to select “Virtual Volume Set” at location b in figure 7.
Figure 7. Export dialog box
Creation of custom CPGs
CPGs are templates used for the creation of LDs. CPGs do not consume space until VVs are created that utilize the CPG template. HPE 3PAR
StoreServ automatically creates CPGs for each drive type for RAID 10, RAID 50, and RAID 60. In addition to the predefined CPGs, users are free
to create CPGs using a wide array of customizable attributes.
Table 3. User-definable CPG attributes
Attribute | Options
Device type | Select from nearline (NL), solid-state drive (SSD), or Fast Class drives
Device RPM (relative performance metric) | 7.2k nearline; 10k or 15k Fast Class; 100k or 150k SSD drives. 100k is used to designate consumer multi-level cell (cMLC) drives; all other SSD drives are currently rated at 150k
Availability | HA cage, HA MAG, HA port (refer to HPE 3PAR terminology at the beginning of this document for definitions)
RAID type | RAID 0, 10, 50, and 60 2
Set size | Varies by RAID type: RAID 10 offers mirror depths of 2, 3, or 4; RAID 50 offers set sizes of 2+1, 3+1, 4+1, 5+1, 6+1, 7+1, and 8+1; RAID 60 offers set sizes of 4+2, 6+2, 8+2, 10+2, and 14+2
Growth increment | Amount of space to allocate for the growth of LDs when growth is required
Allocation warning | Generate an alert when the specified capacity is utilized by this CPG
Allocation limit | Control the total allocation capacity for volumes created with this CPG; use with extreme caution, as no further growth will be allowed for thinly provisioned volumes created from this CPG once this limit has been reached
2. RAID 0 and RAID 50 on nearline drives are both disabled by default. In order to enable those RAID levels, open a CLI session and use the commands setsys AllowR5onNLDrives
yes and setsys AllowR0 yes. Note that HPE 3PAR strongly discourages the use of RAID 50 on NL drives and RAID 0 in any situation, as both are counter to HPE 3PAR best practices.
Customers use them at their own risk.
Creating a CPG using the GUI (optional)
Figure 8. Creating a CPG with the GUI
1. From the main menu, select Common Provisioning Groups; note that in this example Advanced options has been selected.
2. When the CPG creation dialog box appears, name the CPG “provisioning.cpg” at location a in figure 8.
3. The system has been selected at location b; up to 32 systems can be managed from a single instance of the SSMC.
4. The drive type, RAID level, set size, and availability can be selected at locations c, d, e, and f, respectively (these are optional; the defaults are usually suitable).
5. The growth increments and capacity limits can be set at locations g, h, and i, respectively; again, these are optional settings.
6. Notice that as attributes are changed, the GUI updates the amount of space available.
7. Click the “Create” button.
Note
If the PoC focuses on using the CLI, remove the CPGs, VLUNs, VVs, and hosts. If the PoC focuses only on the GUI, leave the configuration intact
and skip to the “Snapshot” section.
Provisioning with the CLI
HPE 3PAR platform is CLI friendly. All functions available in the GUI are also available in the CLI. In addition, there are system variables that can
be set that ease the exporting of data to a command separated value (csv) file, which can then be used to model performance and capacity.
Refer to HPE 3PAR StoreServ CLI administrators guide for complete documentation. Any secure shell (SSH) client that supports version 2, such
as PuTTY, can also be used.
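For example, the CLI client can switch its table output to CSV form for capture into a spreadsheet or modeling tool. A brief sketch, assuming the csvtable client environment variable available in recent HPE 3PAR OS CLI versions:
cli% setclienv csvtable 1
cli% showvv
cli% setclienv csvtable 0
With csvtable set to 1, command output is printed as comma-separated values that can be redirected or pasted into a .csv file; setting it back to 0 restores the normal table format.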
Host creation with the CLI
1. First, install the CLI from the provided installation media.
2. Launch the tool by simply typing “cli” at a command prompt or start run in Windows.
3. To create a host using the CLI, the WWNs first need to be known. The easiest way to find them is to create one host at a time and use the
CLI to display unassigned WWNs:
CLI command: showhost -noname
cli% showhost -noname
Id Name Persona -WWN/iSCSI_Name- Port
5001438026EA5AD4 1:1:1
5001438026EA5AD4 1:1:2
5001438026EA5AD6 0:1:1
After the WWNs are known, use the createhost command specifying the operating system type with the “-persona” switch; for example:
CLI command: createhost -persona 15 TestHost 5001438026EA5AD4
For a complete list of available personas, use cmore help createhost.
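If several hosts will take part in the PoC, they can be grouped into a HostSet at this point so that later exports go to all of them at once. A brief sketch; the set name “PoCHosts” is illustrative:
CLI command: createhostset PoCHosts TestHost
CLI command: showhost -d TestHost
The showhost -d output confirms the persona and WWN assignments for the newly created host.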
VV creation using the CLI
1. Creating a VV can be accomplished using a single CLI command where the syntax is:
createvv [options] <usr_cpg> <vvname>[.<index>] <size>[g|G|t|T]
where <usr_cpg> is the CPG template used to create the LDs; for example:
createtpvv -cnt 4 SSD_r1 VVname 1t
will create 4 (-cnt 4) thinly provisioned volumes in the CPG SSD_r1, each 1 TB in virtual size.
Note
There is a syntax change in HPE 3PAR OS 3.2.1: createtpvv has been removed and replaced with switches to the createvv command (createvv -tpvv, and createvv -tdvv for a thinly provisioned deduplicated volume).
For the purpose of the PoC use:
CLI command: createvv -tdvv -cnt 3 -snp_cpg SSD_r5 SSD_r1 3PAR.vv 1t
Note
When using the CLI to create multiple volumes at one time, a VVSet will not automatically be created. In order to ease exporting the volumes,
create a VVSet with the following CLI command: createvvset 3PARvv.vvset 3PARvv*.
This creates a VVSet named 3PARvv.vvset containing all volumes whose names match the pattern 3PARvv*.
Exporting the VV with the CLI
2. Exporting the VV (making the VV visible to the hosts) can also be accomplished with a single command.
createvlun [options] <VV_name | VV_set> <LUN> <host_name>.
For this PoC use the command:
CLI command: createvlun set:3PARvv.vvset <SCSI target #> <hostname>
where <SCSI target #> is the LUN number to assign and <hostname> is the name of the TestHost used.
Note
There are many options to choose from to control how the VV is presented, what paths to use, which SCSI ID to use, to use VCN or not, and so
on. Also, if there is a “HostSet” that has been created, all volumes can be exported to all hosts in the HostSet at once.
Creating a CPG using the CLI (optional)
Creating a CPG using the CLI is very simple and can be completed with a single step.
1. CLI command: createcpg -p -devtype SSD <CPG name>
This will create a RAID 10 (by default) CPG using only SSD drives. As with other CLI commands, there are extensive options available to
control how the CPG is created. The “-p” switch specifies a pattern that can select by node, slot, port, drive type, drive RPM, location in the magazine, magazine
number, number of free chunklets, and so on.
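As an illustration of those options, the following hedged example creates a RAID 6 CPG on SSD drives with a 6+2 set size and cage-level availability (the CPG name is illustrative; verify the option values against the CLI reference for the installed HPE 3PAR OS version):
CLI command: createcpg -t r6 -ssz 8 -ha cage -p -devtype SSD SSD_r6_cage.cpg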
Host configuration
1. If MPIO has already been installed, skip to step 2. If not, launch “Server Manager,” then select “Features” and add MPIO. Before rebooting,
launch the MPIO control panel; on the “MPIO Devices” tab, click “Add” and input 3PARdataVV. The host will pause for a few seconds.
Reboot the server afterwards (a scripted equivalent of steps 1 and 2 is sketched after this list).
2. On the attached Windows host launch “Computer Management,” then “Disk Management,” and right click and select “Rescan Disks.”
3. For the purpose of this PoC, the volumes will remain uninitialized. If benchmarking with a file system is also desired, use the freely available
“Dummy File Creator,” available at mynikko.com/dummy/. The utility creates test files quickly.
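For reference, steps 1 and 2 can also be scripted from an elevated Windows command prompt; a minimal sketch, assuming the MPIO feature is already installed (the device string must match 3PARdataVV exactly):
mpclaim -n -i -d "3PARdataVV"     (claim 3PAR devices for MPIO; -n defers the reboot, so reboot the server afterwards)
echo rescan | diskpart            (after the reboot and export, rescan for newly presented VLUNs)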
Table 4. Provisioning evaluation
Provisioning: host and volume creation and exporting
Expected results:
1. Hosts were successfully created
2. Volumes were successfully created
3. Hosts can see volumes
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
Autonomic Groups
To support a converged infrastructure, HPE 3PAR InForm Operating System software includes an Autonomic Groups feature that enables
HPE 3PAR Storage System administrators to create host and volume groups that automate and expedite storage provisioning, monitoring, and
data protection with clustered and virtual server deployments.
For example, a 32-way VMware cluster will typically have dozens if not hundreds of volumes. Exporting 20 volumes to 32 hosts in the past
would have required 640 exports. With Autonomic Groups, all 20 volumes can be exported to all hosts in a few clicks. In addition, newly created
volumes can be exported to all 32 hosts simply by placing the volume in the previously exported VVSet.
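The same grouping is available from the CLI. A brief sketch, assuming a host set named ESXCluster and the VV set created earlier (the host names are illustrative):
CLI command: createhostset ESXCluster esxhost01 esxhost02
CLI command: createvlun set:3PARvv.vvset 10 set:ESXCluster
Exporting the VV set to the host set creates the full matrix of VLUNs in one command, and any volume later added to the VV set is automatically exported to every host in the host set.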
Figure 9. Demonstrating Autonomic Groups
To demonstrate the functionality of Autonomic Groups complete the following steps:
1. In the SSMC, select “Provisioning” at location a in figure 9
2. Launch the “Create Virtual Volume” dialog box at location b in figure 9
3. After entering the VV name and size, select “Set Name” and choose the 3PARvv.vvset previously created
4. Rescan disks in the Windows Management console and see the new volume automatically presented
Table 5. Autonomic Groups functional evaluation
Autonomic Groups: verification of Autonomic Groups functionality
Expected results:
Newly created VV is exported to the host without running the Export dialog box
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
HPE 3PAR Snapshots (formerly known as Virtual Copy)
HPE 3PAR Snapshot Software enables you to take instant point-in-time (PIT) snapshots of your data volumes with little or no impact to your
applications. Since snapshots use efficient copy-on-write technology that points to existing data rather than duplicating it, they
consume very little physical capacity. Physical capacity is consumed only when changes to a base volume require stale data to be retained in order to
maintain the integrity of the PIT copy. This virtual solution represents a significant capacity saving over the use of traditional full physical copies.
Figure 10. Creating a Snapshot
Creating snapshots in the HPE 3PAR SSMC
1. Select “Provisioning” at location a in figure 10, then select “Create Snapshot” from the drop down at location b.
2. Snapshots can be created on demand or scheduled using the snapshot dialog box.
3. Observe the small amount of space occupied by the snapshot in the GUI.
Figure 11. Creating a Snapshot (continued)
Creating snapshots using the CLI
Snapshots can be created with a single command in the CLI.
createsv [options] <SV_name> <copy_of_VV | VV_set | RC_group>
Where createsv means snap volume, SV_name is the name of the snapshot, and copy_of_VV | VV_set | RC_group is the name of the VVs or
VVSet to snap.
1. For the purposes of this PoC, execute the following command:
a. createsv 3PARvv.sv 3PARvv; the snapshot can now be exported and will appear to the host as a regular volume.
b. Use the CLI to observe the small amount of space occupied by the snapshot using the command showvv.
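Scheduled snapshots can also be driven from the CLI through the array task scheduler. A hedged sketch; the createsched command, the cron-style schedule string, and the @y@@m@@d@ name-substitution pattern shown here are assumptions to verify against the CLI reference for the installed OS version:
CLI command: createsched "createsv -ro snp.@y@@m@@d@ 3PARvv" "0 1 * * *" dailysnap
This would take a read-only snapshot of 3PARvv at 01:00 each day, embedding the date in the snapshot name.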
Table 6. Snapshot functionality verification
Snapshots: verification of snapshot functionality
Expected results:
1. Snapshots can easily be created from the GUI and from the CLI
2. Observe that snapshots are reservationless and consume a very small amount of space
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
Clones (formerly known as Full/Physical Copies)
Note
HPE 3PAR Clones provide a block-for-block copy of a VV with the option to instantly export the copy to a host. The “Instant Export” option is only
available through the command line and is not compatible with the resync option. When using the CLI, the destination volume cannot exist before the command is
executed, whereas the GUI requires the destination volume to already exist.
First, create two volumes with the base name “PCTest.”
1. Launch the VV creation dialog box (using the previously documented steps) and create two volumes named PCTest.0 and PCTest.1
2. Set the size to 64 GB using any available CPG
3. Export PCTest.0 using the documented steps
After the VVs are created and PCTest.0 is exported, continue with the clone creation below.
Figure 12. Clone creation dialog box
4. When returned to the VV pane, select “PCTest.0” at location a in figure 12
5. Then select “Create Clone” at location b in figure 12
Figure 13. Create Clone dialog
6. Select the source (PCTest.0) and destination (PCTest.1) VV names, set the task priority to “High,” and then click “OK.”
The Clone will proceed; the speed of the copy will depend on the configuration of the array and the current utilization. Progress can be
monitored in the Activity tab of the SSMC. The Clone will be exportable when the copy completes.
Clones with Instant Export (currently available in the CLI)
1. Login to the array with the CLI or SSH client.
2. Create the CPG to use for this test.
CLI command: createcpg -p -devtype SSD PCTest.cpg
3. Create the source VV.
CLI command: createvv PCTest.cpg PCTest.vv 64G
4. Export the VV to the host.
CLI command: createvlun PCTest.vv <SCSI target #> TestHost
5. Use Computer Management and scan for the new storage. Bring the disk online, then initialize and format the drive.
6. Use Dummy File Creator to create a file 32 GB in size on PCTest.vv.
7. Create the clone.
CLI command: createvvcopy -p PCTest.vv -online PCTest.cpg PCTest.vvcopy
8. Export the clone to the host and verify the data is the same on both VVs.
CLI command: createvlun PCTest.vvcopy 150 TestHost
9. Verify the data is the same by right-clicking the test file on PCTest.vv and PCTest.vvcopy, selecting properties, and comparing
the byte counts.
Table 7. Clones and Instant Export
Clones: verification of clones and Instant Export
Expected results:
1. Clones can be created quickly and easily in the SSMC
2. Using the CLI, clones can be created and instantly exported, and the copy is readable by a host
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
Preconditioning SSD drives for performance and deduplication testing
SSD drives have better write performance when new, as writes to a fresh drive always go to previously unwritten cells. After a drive has been
filled, writes that update existing cells invoke the program/erase (P/E) cycle. This involves writing the updated data to reserved space on the drive,
erasing the existing data in the target cell, and reprogramming the cell. In order to benchmark the performance abilities of SSD drives correctly,
the drives need preconditioning to reach steady state.
Note
• All drives, SSD and HDD alike, undergo a chunklet initialization process that writes a large-block sequential workload to the complete capacity of the drive. This process is then repeated during the onsite installation, so before an onsite PoC begins the full capacity of all drives has already been written twice.
• If performing a remote PoC through one of the HPE solution centers, preconditioning has already been performed on the system and does not need to be repeated.
• When deleting VVs, a chunklet re-initialization process overwrites the chunklets again. When deleting a large number of VVs, there may be a significant period during which chunklets are reinitialized.
For onsite PoCs, the two initial overwrites do not constitute full preconditioning, as they do not account for variations in block size, the impact of SSD garbage collection, or the metadata "scrambling" present in production systems. As part of the onsite PoC, a few additional steps are necessary; depending on the capacity of the array, the process can take from a few hours to a day or more.
1. Using the SSMC or CLI, export all usable capacity of the array to the host(s) as fully provisioned VVs.
2. Using Iometer as documented in appendix A, precondition the drives and return to the functionality testing. Note that it may take several
hours to a day or more to initialize the array depending on the number of drives, the type of drives, and the capacity of the drives.
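As a minimal CLI sketch of step 1 (the CPG name FC_r5, the base volume name Precond, the sizes, the LUN numbers, and the host name TestHost are illustrative assumptions; adjust the count and size to cover the usable capacity of the array under test), fully provisioned VVs can be created and exported as follows:
CLI command: createvv -cnt 8 FC_r5 Precond 2T
CLI command: createvlun Precond.0 10 TestHost
CLI command: createvlun Precond.1 11 TestHost
Repeat the createvlun command for each remaining VV (Precond.2 through Precond.7), then run the Iometer preconditioning workload from appendix A against all of the exported volumes.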
Demonstrating capacity efficiency
Capacity utilization and efficiency
HPE 3PAR has been a pioneer in thin technologies, shipping production-ready HPE 3PAR Thin Provisioning since 2002 and providing one of the most comprehensive thin technology solutions. Since then, HPE 3PAR has expanded its portfolio of capacity-efficient technologies to include ASIC-assisted data deduplication, thin persistence, thin conversion, thin reclamation, and thin clones for virtualized hosts. For a complete description of HPE 3PAR thin technologies, read the HPE 3PAR Thin Technologies white paper. Because HPE 3PAR offers a number of thin technologies, this PoC evaluates each offering independently.
Thin Provisioning
1. In the “Provisioning tab” of the SSMC, observe the capacity virtual size and the used capacity.
2. In figure 14, four volumes totaling 64 TB have been presented to the host (virtual size) while consuming 0 GB of reserved space.
Figure 14. Thin Provisioning allocations
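The same observation can be made from the CLI. The following is a brief sketch, assuming an SSD CPG named SSD_r1 (as used elsewhere in this document) and illustrative volume names and sizes:
CLI command: createvv -tpvv -cnt 4 SSD_r1 ThinTest 16T
CLI command: showvv ThinTest*
In the showvv output, compare the virtual size column (the 64 TB total the hosts would see) with the reserved user space, which should remain near zero until data is actually written.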
Table 8. Thin Provisioning verification
Test: Thin Provisioning (verification of Thin Provisioning)
Expected results:
A thinly provisioned VV shows a used capacity dramatically lower than the virtual size (the size the host believes is attached)
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
HPE 3PAR Thin Conversion
Figure 15. Start the Thin Conversion dialog box
Convert a fully provisioned volume to a thinly provisioned volume online:
1. Select Virtual Volumes from the home screen in the SSMC at location a in figure 15.
2. Select the desired fully provisioned volume, open the Action menu at location b in figure 15, and finally select "Convert" as noted at location c.
Figure 16. Thin Conversion dialog box
3. Select the desired provisioning type for the VV at location a in figure 16.
4. Add more VVs to convert, if desired, at location b.
5. Begin the VV provisioning type conversion by clicking Convert at location c.
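The same conversion can also be started from the CLI with the tunevv command. The following is a sketch only; the CPG name SSD_r1 and the volume name FPTest.vv are assumptions, and the exact conversion options may vary by HPE 3PAR OS version, so verify the syntax with help tunevv before use:
CLI command: tunevv usr_cpg SSD_r1 -tpvv FPTest.vv
As with any other tune operation, progress can be followed with showtask.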
Table 9. Thin Conversion verification
Test: Thin Conversion (verification of Thin Conversion)
Expected results:
A fully provisioned VV is converted to a thinly provisioned VV or a thinly provisioned dedupe VV. Note that thinly provisioned dedupe VVs are available only with HPE 3PAR OS version 3.2.1 MU1 and later.
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
Zero Detection
1. Create a 250 GB thinly deduplicated virtual volume (TDVV) or thinly provisioned VV (TPVV) and export it to the TestHost. Initialize and format the drive and mount it as drive letter Z:.
2. While still in the VV display page, select the drop-down next to the VV name and select Capacity.
3. Observe in the SSMC that the VV consumes little space, as shown in figure 17.
Figure 17. Initial space consumption of a TPVV
4. Download the tool “Dummy File Creator” from mynikko.com/dummy/
5. Launch Dummy File Creator and create a 100 GB file named z:\iobw.tst
Figure 18. Dummy File Creator dialog
Figure 19. Space allocation after writing zeros
Table 10. Zero Detection verification
Test: Zero Detection (verification of zero detect)
Expected results:
When a continuous stream of zeroes is written, the array does not write any data to disk. The granularity of zero detect is 16 KB.
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
Thin reclaim
1. Using the same volume as before, download a file creation tool, for example “Dummy File Creator” from mynikko.com/dummy/.
2. Configure Dummy File Creator (or the tool of your choice) to write 8 GB of random (non-compressible) data to the target volume.
3. Return to the SSMC and notice the sizable increase in reserved space used.
4. Figure 19 shows thin capacity savings.
5. Delete the 8 GB file on the VV.
6. Run sdelete to zero the freed blocks so the array can reclaim them: sdelete -c -z z:
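To confirm the reclaim from the array side, the space of the CPG backing the test volume can be compared before and after sdelete completes. This is a minimal sketch; substitute the CPG actually used for the volume:
CLI command: showcpg -s SSD_r1
The user space consumed in the CPG should decrease once the zeroed blocks have been reclaimed; the reclaim can take some time to complete.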
Table 11. Thin reclaim verification
Test: Thin reclaim (verification of thin reclaim)
Expected results:
Thin reclaim recovers space that has been deleted from the file system and returns it to the CPG
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
Data deduplication
HPE 3PAR StoreServ Storage systems employ purpose-built HPE 3PAR ASICs at the heart of each controller node that feature efficient,
silicon-based mechanisms to drive inline deduplication. This unique implementation relies on built-in hardware capability to assign a unique hash
to any incoming write request and leverages the HPE 3PAR Thin Provisioning metadata lookup table for fast hash comparisons. When new
IO requests come in, the hash of the incoming data is compared to the hashes of the data already stored in the array. When a match is found, the
software marks the data as duplicated. The software also leverages the controller node ASICs to perform a bit-to-bit comparison that reduces the
possibility of hash collision. The CPU-intensive job of calculating hashes of incoming data and read verify is offloaded to the hardware assist
engines, freeing up processor cycles to deliver advanced data services and service IO requests. This inline deduplication process carries multiple
benefits, including increasing capacity efficiency, protecting flash performance, and extending flash media lifespan. Other storage architectures
lack the processing power to simultaneously drive inline deduplication and the high-performance levels demanded by flash-based media while
also offering advanced data services (replication, quality of service [QoS], and federation).
The following should be taken into account before implementing Thin Deduplication:
• It is applicable only to virtual volumes residing solely on SSD storage. Any system with an SSD tier can take advantage of Thin Deduplication. Because a Thin Deduplication volume can reside only on SSDs, it is not compatible with the sub-LUN tiering of Adaptive Optimization (AO). If a thinly deduped volume exists within a Common Provisioning Group (CPG), then the CPG is not available for use in an AO configuration. Conversely, if a CPG is already in an AO configuration, it is not possible to create a thinly deduped volume in that CPG.
• For best performance, limit the number of dedupe CPGs to one per node.
• The granularity of deduplication is 16 KiB; therefore, efficiency is greatest when IO is aligned to this granularity. For hosts that use file systems with tunable allocation units, consider setting the allocation unit to 16 KiB or a multiple of 16 KiB. With Microsoft® Windows hosts that use NTFS, the allocation unit can be set in the format dialog box. With Linux® hosts, the Btrfs, EXT4, and XFS file systems require no tuning, but EXT3 file systems align to 4 KiB boundaries, so for maximum deduplication these should be migrated to EXT4. For applications that have tunable block sizes, consider setting the block size to 16 KiB or a multiple of 16 KiB.
• Deduplication is performed not only on the data contained within a VV but also between VVs in the same CPG. For maximum deduplication, store data with duplicate affinity on VVs within the same CPG.
• Thin Deduplication is ideal for data that has a high level of redundancy. Data with a low level of redundancy, such as databases or data that has been previously deduplicated, compressed, or encrypted, is not a good candidate for deduplication and should be stored on thinly provisioned volumes. Use the Dedupe Estimate tool to check the dedupe ratio of existing volumes before converting them to thinly deduped volumes.
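As a brief CLI sketch for setting up a deduplication test (the CPG name SSD_r1, the volume name DedupTest, the LUN number, and the host name TestHost are illustrative assumptions), a TDVV can be created and exported, and the backing CPG space checked before and after data is written:
CLI command: createvv -tdvv SSD_r1 DedupTest 256G
CLI command: createvlun DedupTest 110 TestHost
CLI command: showcpg -s SSD_r1
Rerunning showcpg -s after writing duplicate-rich data to the exported volume shows how little additional space is consumed.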
Understanding compaction and deduplication ratios
• Compaction refers to the total amount of space savings achieved by all thin technologies.
• Deduplication refers only to the amount of duplicated data that was eliminated.
Example
When a 1 TiB volume is first exported, the compaction ratio will be huge: the host sees 1 TiB (usable) while only a few MiB are allocated (used). As data is written to the storage array, the compaction ratio becomes realistic; if the host ends up writing 100 GiB of non-zeroed, unique data, the compaction ratio will be roughly 10:1.
• If the host is only sending zeroes, the compaction ratio will remain the same, as no additional space is allocated on the array.
• If the host is writing data that the array detects as duplicate, the compaction ratio will remain the same, but the dedupe ratio will increase.
• If the host is writing data that is neither zeroes nor duplicates, more space will be used and the compaction ratio will slowly decrease, as the delta between the exported virtual size and the used space shrinks.
Demonstrating data deduplication
Different types of data produce different levels of deduplication effectiveness; for instance, databases do not make particularly efficient use of TDVV volumes, whereas VDI and file server stores deduplicate very well. This section demonstrates the behavior with synthetic data:
1. Using the management console create a TDVV named FULL_RANDOM and export to a host. In this example a 256 GiB VV was created.
Assign drive letter R:
Format the new drive in Windows with a 16 KB allocation size.
Figure 20. Creating FULL RANDOM VV with 16 KB allocation
2. Create a second TDVV named REPEATING_BYTES and export to the same host. Assign drive letter S:
Format the new drive in Windows with a 16 KB allocation.
Figure 21. Format the new drive in Windows with a 16 KB allocation
3. Note in the SSMC that only 0.22 GiB of space is consumed on a 100 GiB (all zeros) file.
Figure 22. Reserved User Space
4. Using Dummy File Creator, create a file named R:\FULL_RANDOM. In this example, a 200 GB file containing random (non-compressible) data is created.
Figure 23. Dummy File Creator
5. Start Iometer and assign 1/3 of the available workers to the first VV as noted in figure 24, location a.
Figure 24. Assign workers for first disk target
6. Assign a queue depth of 32 as noted in figure 24, location b.
7. Assign a write IO pattern of “Repeating bytes” as noted in figure 24, location c.
8. Assign the next 1/3 of available workers to the next disk target as noted in figure 25, location a.
Figure 25. Assign the second 1/3 of workers to the next disk target
9. Assign the # of outstanding IOs (queue depth) to 32 as noted in figure 25, location b.
10. Assign the pseudo random write IO data pattern as noted in figure 25, location c.
11. Assign the final 1/3 of available workers to the final VV as noted in figure 26, location a.
Figure 26. Assign the last 1/3 of workers to the next disk target
12. Assign the # of outstanding IOs (queue depth) to 32 as noted in figure 26, location b.
13. Assign the Full random write IO pattern to the VV as noted in figure 26, location c.
14. Switch to the access specification tab as noted in figure 26, location d.
VMware thin clones
To show HPE 3PAR Thin Deduplication in action, a host environment that facilitates creating duplicate data is needed.
There are many ways to demonstrate data deduplication—this PoC will use VMware and VM clones.
1. Using the HPE 3PAR StoreServ Management Console, select the Virtual Volumes option from the home drop down menu.
2. Select “Virtual Volumes” then “Create” as previously documented.
3. “Thinly Deduped” should already be selected by default. Enter 100 for the size of the volume. Also, select SSD_r1 for both the “User CPG” and
“Copy CPG.”
4. Select the ESX host to export the dedupe volume.
Figure 27. Exporting a Virtual Volume
5. Click “Finish.”
6. Login to the ESX vCenter and mount the new HPE 3PAR dedupe volume. Name this new datastore: dedupe_vol1.
Figure 28. VMware LUN selection
7. Go to the CLI and run the command: showcpg -s SSD_r1. Take note of the current used space in this CPG.
Figure 29. VMware space used
8. As mentioned previously, have an existing virtual machine (VM) filled with data in your ESX environment. Using the vSphere client, go to the
original VM, right click and clone the VM onto the newly exported and mounted datastore: dedupe_vol1.
Figure 30. Clone a VM
9. You can give the new VM a name: Dedupe_VM.
Figure 31. Name the VM clone
10. Select the new dedupe volume as the target datastore.
Figure 32. Select the VMware target datastore
11. Select the default and click “Finish.”
12. When the clone completes, run the CLI command again: showcpg -s SSD_r1.
Figure 33. Display space consumed
Notice that the CPG has grown to make room for the cloned VM. However, the growth was considerably less than the actual size of the VM, because blocks that were duplicates within the VM were deduplicated by the HPE 3PAR array.
Now clone the VM once more. Using your vSphere client, go to the newly cloned "Dedupe_VM," right-click, and clone that VM.
Figure 34. Repeat the VM clone process
Name the new VM: Dedupe_VM2.
Figure 35. Name the new VM clone
13. Similar to before, select the new dedupe volume as the target datastore.
Figure 36. Select the datastore
14. Select the default and click “Finish.”
15. When the clone completes, run the CLI command again: showcpg -s SSD_r1.
Figure 37. Display space consumed
Notice that this time the space did not grow at all. Because identical blocks already existed on the array, the clone was deduplicated by the HPE 3PAR StoreServ array.
Table 12. Thin Deduplication verification
Test: Thin Deduplication (verification of Thin Deduplication)
Expected results:
When cloning VMs, data is observed to be deduplicated by the HPE 3PAR array, resulting in less physical capacity consumed
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
Real-time monitoring and historical reporting
Unlike previous versions of the HPE 3PAR management consoles, the SSMC combines both real-time and historical reporting (formerly known as System Reporter) into a single interface. Both real-time and historical reporting are also available from the CLI, and the historical information can be exported from the database, which resides on the .srdata VV on the array; refer to the HPE 3PAR System Reporter white paper for details. In addition to real-time and historical performance reporting, other characteristics such as space consumption, CPG size over time, PD performance, and CPU utilization can be reported.
Getting started with real-time monitoring and historical reporting in the GUI
1. From the home button at location a in the HPE 3PAR SSMC select Reports at location b.
Figure 38. Launching the reporting interface
There are a variety of templates for real-time monitoring and historical reporting that are already prepared in addition to custom reports.
Historical reports can be configured to run on a schedule for individual objects, groups of objects or the entire system and, if so desired, can be
emailed out upon their completion. For the purpose of this PoC a custom report will be created.
A sample of the predefined reports is shown below.
Figure X. A sample of predefined reports
Creating a real-time report
For this report, a real-time aggregate of all volumes is selected, broken out by read, write, and total for IOPS, bandwidth, and service time. Using custom reports, this could also be broken down to an individual VV or CPG, for example. In figure 39, location a, the desired report template is selected; notice that when a real-time report is chosen, the scheduling and email options are not available. At location b, a single object has been selected for monitoring, in this case a single VLUN. At location c, the desired polling interval has been set to 5 seconds.
Figure 39. Real-time performance report creation
An example of a real-time report of aggregated system performance is shown in figure 40.
Figure 40. Real-time performance monitoring in the GUI
Creating a historical report
HPE 3PAR arrays have a long history of retaining data for analysis over long periods. A database running on the system itself maintains this information. To see how long information can be kept on the array, open a CLI or SSH session to the array and type the following: showsr
In figure 41, the estimated end date is the date when the database will fill for a particular reporting type. In this example, the daily report (a 24-hour summary) will be retained for up to 156 years, 185 days.
Figure 41. Data retention for historical reporting
Create historical reports using the HPE 3PAR SSMC
1. At location a in figure 42, select either a single report or multiple reports. Multiple reports are useful in chargeback environments or shared storage environments, for example shared storage in web hosting environments.
2. At location b, select the report template to use. Note that if a real-time report template is selected, scheduling and several other options will not be available.
3. At location c, select the objects to run the report(s) on. The object selection depends on the report type selected; for example, a port performance report will only show ports in the object selection. HPE 3PAR encourages potential clients to explore the options available, as there are too many combinations to practically list in this document.
4. At location d, select the time settings. This is the range the report will cover, with a start date and time and an end date and time. The measurement interval can also be selected, for example Hi-Res (every 5 seconds), hourly, or daily.
5. At location e, there are several charting options available, including changing the x/y axis or adding information such as queue length.
6. At location f, there are scheduling options, including sampling intervals, days of the week to run, how many times a day to run, and an email address to send reports to.
Figure 42. Historical reporting in the HPE 3PAR SSMC
Sample statistics for performance that can be viewed in real time from the CLI
Table 13. CLI performance monitoring commands
statvlun: Reports the IOPS, throughput, average block size, read/write ratio (using the -rw switch), and both current and average values (average since the command was executed) for each path between the VV and the host.
statport -host: Reports all the information that statvlun reports, for all traffic on the front side of the controller nodes.
statcmp: Reports statistics covering the data cache management pages, with detailed information on how dynamic cache management is using data cache.
statport -disk: Reports information on the back end as the nodes interact with the disks. Higher service times and larger block sizes are notable here as the controller nodes work to coalesce writes to physical disks. More on this in the troubleshooting section.
statpd: Reports statistics covering physical disk performance.
Not all switches apply to all stat commands. For detailed information, use the command "help <command>".
Table 14. CLI switches for performance monitoring
-ni: Not idle; do not report information on idle devices
-rw: Read/write; report read and write statistics separately, with totals
-p -devtype: -p denotes that a pattern follows; -devtype specifies SSD, FC, or NL
-hostsum: Summarize all information grouped by host instead of by individual VLUN
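The switches above can generally be combined with the stat commands from table 13; verify specific combinations with help statvlun. A brief sketch:
CLI command: statvlun -ni -rw -hostsum
This summarizes non-idle activity per host, with separate read, write, and total rows; dropping -hostsum shows the same breakdown per individual VLUN.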
Performance monitoring in the CLI
1. Open a CLI session or simply use an SSH client such as PuTTY
2. Use a load generator to start activity on the array
3. Monitor host port activity
CLI command: statport -host
4. Monitor VLUN activity
CLI command: statvlun
5. Filter VLUN reporting by eliminating idle devices
CLI command: statvlun -ni
6. To demonstrate the data export functionality set the following variables:
CLI command: setclienv nohdtot 1
CLI command: setclienv csvtable 1
7. Now execute the command from step 4
CLI command: statvlun
8. Notice that now the data is presented in a comma-separated list
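After the export demonstration, the CLI environment variables set in step 6 can be returned to their defaults so that later commands print headers and totals again. This is a minimal sketch under the assumption that 0 restores the default behavior for both parameters:
CLI command: setclienv nohdtot 0
CLI command: setclienv csvtable 0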
Table 15. Real-time performance monitoring results
Test: Real-time performance monitoring functional verification (demonstrate monitoring the array performance in real time: host, ports, VLUNs, and PDs)
Expected results:
1. The array provides real-time performance reporting via the GUI
2. Graphs are available and informative
3. Graphs can be exported
4. The CLI is flexible and data is easily accessible
5. Data is easily exportable for manipulation in a spreadsheet
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
Historical reporting in the CLI
Most of the commands used to view historical information from the CLI have the same basic names as the real-time commands but with the prefix sr<command>; for example, statvlun becomes srstatvlun. For a complete list of all switches, refer to the HPE 3PAR System Reporter white paper.
Figure 43. Using showsr to display retention periods
Generating SR reports from the command line
As discussed earlier, the HPE 3PAR array is very CLI friendly in that raw data is quickly and easily accessible. The options for the SR CLI are extensive; table 16 lists a few commands useful for generating CSV files from the CLI. To view these options from the command line, use the command help sr.
Table 16. Generating SR reports from the command line
sraomoves: Space report for AO moves
srcpgspace: Space reports for CPGs
srldspace: Space reports for LDs
srvvspace: Space reports for VVs
srpdspace: Space reports for PDs
srhistld: Histogram performance reports for LDs
srhistpd: Histogram performance reports for PDs
srhistport: Histogram performance reports for ports
srhistvlun: Histogram performance reports for VLUNs
srstatcache: Performance reports for flash cache (not applicable to 7450 systems)
srstatcmp: Performance reports for cache memory
srstatcpu: Performance reports for CPUs
srstatld: Performance reports for LDs
srstatlink: Performance reports for links (internode, PCI, and cache)
srstatpd: Performance reports for PDs
srstatport: Performance reports for ports
srstatqos: Performance reports for QoS rules
srstatrcopy: Performance reports for HPE 3PAR Remote Copy links
srstatrcvv: Performance reports for Remote Copy volumes
srstatvlun: Performance reports for VLUNs
Table 17. Useful CLI switches for generating SR reports
-btsecs: Begin time in seconds; specifies how many seconds before the current time the report data should start
-etsecs: End time in seconds for the report data
-attime: Report performance statistics at a particular time
-hires, -hourly, -daily: Select the resolution of the report
-cpg: Select a specific CPG to report on
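Combining the commands in table 16 with the switches in table 17, a typical invocation might look like the sketch below. The time syntax should be verified with help srstatvlun; a negative -btsecs value is assumed here to mean "this many seconds before the current time," and the CPG name SSD_r1 is an assumption:
CLI command: srstatvlun -hires -btsecs -3600
CLI command: srcpgspace -daily -btsecs -604800 SSD_r1
The first command reports high-resolution VLUN performance for roughly the last hour; the second reports daily CPG space usage for the last week. With csvtable enabled (see the real-time monitoring section), the output can be captured directly into a spreadsheet.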
Performance tuning
Dynamic Optimization (Volume level/system-level tuning)
HPE 3PAR Dynamic Optimization (DO) allows a storage administrator to change the characteristics of any volume or CPG at any time, even
during production hours. It is also the engine behind “Tunesys,” a tool that rebalances arrays after hardware upgrades of drives or nodes.
Dynamic Optimization can be run on a single volume or an entire CPG. For example, a VV that was originally created as RAID 50 can be changed
to RAID 10 by selecting “Tune Volume,” and then selecting a destination CPG with the desired attributes.
Running Dynamic Optimization in the HPE 3PAR SSMC
1. Select “Virtual Volumes” from the home menu and then select a VV as demonstrated at location a in figure 44.
2. From the action drop down select “tune” as noted at location b.
Figure 44. Tuning a VV with Dynamic Optimization
Tune VV Dialog
1. Select the system at location a in figure 45.
2. At location b the tune options can be selected to tune User space or Copy (snapshot) space.
3. At location c select VV(s) to be tuned.
4. VVs can be tuned to the same CPG, which maintains the same drive types, or moved to a new CPG.
Figure 45. DO selection dialog
Tuning an entire array
1. After navigating to the home tab, select systems.
2. After selecting the desired array, navigate to the action drop down and select tune as shown at location a in figure 46.
Figure 46. Starting System Tuner, whole array rebalancing
Running Dynamic Optimization from the CLI
Using DO to modify the RAID level attribute of a VV.
1. Create a fully provisioned VV for the PoC test using the documented steps (a thinly provisioned volume does not create LDs until data is written to it).
createvv SSD_r5 DOTest.vv 256G
2. Verify the RAID level of the VV.
Figure 47. Verification of RAID level using the CLI
3. CLI command: tunevv usr_cpg SSD_r1 DOTest.vv
4. To observe the progress of the tune, use the command: showtask -d <task ID>
5. After the task has completed, run showld -d against the LDs belonging to DOTest.vv to verify the new RAID level.
Figure 48. Verifying DO task completion
Table 18. Dynamic Optimization verification
Test: Dynamic Optimization functional verification (verification of Dynamic Optimization)
Expected results:
1. DO successfully changes a volume from RAID 50 to RAID 10
2. There is no disruption in host access to data
3. Snapshots are reservationless and consume a very small amount of space
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
Adaptive Optimization (sub-volume-level tiering)
For more information on Adaptive Optimization refer to the AO white paper.
Note
Adaptive Optimization moves data between media tiers at regular intervals, not to be confused with HPE 3PAR Adaptive Flash Cache (AFC) that
uses SSD drives as another layer of cache. AFC is not applicable to all-flash arrays.
Modern storage arrays support multiple tiers of storage media with a wide range of performance, cost, and capacity characteristics—ranging
from inexpensive Serial ATA (SATA) HDDs that can sustain only about 75 IOPS to more expensive flash memory-based SSDs that can sustain
thousands of IOPS. Volume RAID and layout choices enable additional performance, cost, and capacity options. This wide range of cost, capacity,
and performance characteristics is both an opportunity and a challenge. The opportunity is that the performance and cost of the system can be
improved by correctly placing the data on different tiers; move the most active data to the fastest (and most expensive) tier and move the idle
data to the slowest (and least expensive) tier.
As explained in the architecture review of this paper, data is mapped into 128 MB regions. Access to these regions is monitored via histograms
and data stored in HPE 3PAR System Reporter (SR). Using this data, HPE 3PAR StoreServ makes decisions based on the number of times a
region is accessed in a particular window (for example, working hours) in order to move regions to an appropriate tier based on user-specified
policies. These policies include:
• Performance: The array moves data into faster drive tiers more aggressively, for example, from FC to SSD.
• Balanced: The array seeks to make the most-efficient use of higher performing tiers for the most accessed data.
• Cost: The array seeks to push data down to the most cost-efficient drive tier.
Figure 49. Autonomic tiering with HPE 3PAR StoreServ: a non-optimized approach with non-tiered volumes/LUNs (SSD only, FC only, or NL only) compared with HPE 3PAR Adaptive Optimization, an improved approach for leveraging SSDs using multi-tiered volumes/LUNs (Tier 0 SSD, Tier 1 FC, Tier 2 NL) with sub-LUN block movements between tiers based on policies
CPGs as tiers in Adaptive Optimization configuration
Before creating an Adaptive Optimization (AO) configuration, the CPGs to be used as tiers must already exist. An AO configuration can have a maximum of three tiers, and a minimum of two tiers is required to define a new AO configuration. A CPG can be part of only one AO configuration, so every AO configuration needs a different set of CPGs.
VVs that are not part of an AO configuration have all their regions mapped to LDs belonging to a single CPG. However, in the case of AO, virtual
volumes will have regions mapped to LDs from different CPGs (tiers). Data placement for a particular region is decided based on statistics
collected and analyzed by AO for each region. HPE 3PAR Adaptive Optimization leverages data collected by HPE 3PAR SR on the controller
nodes. SR periodically collects detailed performance and space data that AO uses for the following:
• Analyze the data to determine the volume regions that should be moved between tiers
• Instruct the array to move the regions from one CPG (tier) to another
• Provide the user with reports that show the impact of Adaptive Optimization
Demonstrating HPE 3PAR Adaptive Optimization
1. Launch the StoreServ Management Console and select “Adaptive Optimization” from the top menu.
2. Select the Create option from the "Actions" menu; the left pane may show a default policy. If so, remove it.
3. Name the policy at location a as seen in figure 50.
4. Select the AO mode: target either performance (at the cost of usable capacity), balanced (a blend of performance and increased usable capacity, using RAID 6 CPGs), or cost, which maximizes usable capacity by moving data to higher-capacity drives.
5. Set the desired target CPGs to use for Tiers 0, 1, and 2, typically SSD, FC, and NL.
6. Schedule the execution policy as shown in figure 51.
Figure 50. Create AO configuration
Creating an AO schedule
The purpose of creating an AO schedule is to ensure that data is moved to an appropriate storage tier based on a long-running analysis, preventing the same data from being moved back and forth repeatedly over longer periods (weeks or months). Short-term, real-time tiering is accomplished using Adaptive Flash Cache (AFC).
1. Enable the AO schedule at location a, figure 51.
2. Set "run max time" at location b. This is the maximum amount of time the array will spend moving data to the target drive tiers. For example, if AO determines that 100 TiB of data should be moved from the SSD drives to the larger but slower 7.2k NL drives and the run is set to start at midnight, it may not finish by 6 a.m. when business starts; AO will complete the moves on the next scheduled run.
3. The data analysis interval at location c is the window of time that AO reads from the .srdata database for its analysis, for example the previous 12 hours, to determine the moves to be made.
4. The optimization schedule is simply when analysis and moves should start. For the purpose of the PoC it can be set to "now"; in production it would be set to the hours when the array is least busy.
Figure 51. Create AO configuration (continued)
Creating the AO policy with the CLI
1. Log in to the array using either the HPE 3PAR CLI or SSH and execute showcpg, noting at least two CPGs to be used as tiers (tier 0 and tier 1).
Figure 52. showcpg
2. Create the AO configuration using the command createaocfg with the following options:
a. -t0cpg <name of the tier 0 CPG>
b. -t1cpg <name of the tier 1 CPG>
c. -mode Performance
d. The name of the AO configuration
3. createaocfg -t0cpg R1SSDcpg -t1cpg R5FCcpg -mode Performance PoCAO
Figure 53. Creating an AO policy with the CLI
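To confirm the configuration was created as intended before starting any workload, it can be listed from the CLI (a small sketch; showaocfg with no arguments is expected to list all AO configurations, including the PoCAO policy created above):
CLI command: showaocfg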
Start the load generator
1. Using Dummy File Creator, create a 10 GB file named o:\iobw.tst
2. Start Iometer and select drive O:; ensure all workers are assigned to drive O: at location a, as seen in figure 54
3. Highlight the computer name under “All Managers” and set the # of outstanding IOs at location b
Figure 54. Iometer configuration
4. Next, select the access specification tab at location c
Figure 55. Access specification
1. Highlight the computer name at location a, as seen on figure 55.
2. Select a workload from the Global access specification list at location b (figure 55); for the purpose of this PoC 64 KB, 50 percent read
and 0 percent random.
3. Press Add at location c.
4. Finally, press the green flag at location d, as seen in figure 55; a dialog box will pop up asking where to save the results. For the purpose of demonstrating AO there is no need to save results.
Starting the AO policy through the CLI
1. Connect to the array using either the HPE 3PAR CLI or SSH and execute the command startao with the following switches:
a. startao -btsecs 30m PoCAO (-btsecs specifies how far back in time the analysis begins, in seconds; m for minutes and h for hours also work)
2. Follow the execution of the task from the CLI using showtask. showtask will list the task ID of the region_mover task. For more detailed information on the task, use showtask -d <task ID>.
3. After all region moves are complete, generate a report in the HPE 3PAR SSMC using the previously documented report generation process. The report template to use is "Adaptive Optimization Space Moved".
Adaptive Flash Cache
Adaptive Flash Cache (AFC) is included at no charge and is supported with HPE 3PAR OS 3.2.1 and later. It allows SSD drives (a minimum of four) to be used as a second layer of caching behind DRAM. The amount of AFC that can be used varies by the number and type of drives, as shown in the table below.
Table 19. Adaptive Flash Cache size in HPE 3PAR models
Model 8200: minimum of 4 SSD drives; maximum AFC size 768 GiB
Model 8400: minimum of 4 SSD drives per node pair; maximum AFC size 1.5 TiB (768 GiB per node pair)
Model 8440: minimum of 4 SSD drives per node pair; maximum AFC size 4 TiB
Model 20800: minimum of 4 SSD drives per node pair; maximum AFC size 32 TiB (8 TiB per node pair)
Model 20840: minimum of 8 SSD drives per node pair; maximum AFC size 48 TiB (12 TiB per node pair, up to the system limit)
If no SSD drives are available, a simulator mode can estimate the performance improvement that could be expected; before enabling AFC, execute the command:
createflashcache -sim 256G
Adaptive Flash Cache leverages SSD capacity as a read cache extension for spinning media:
• Can be enabled system-wide or for individual virtual volume sets (VVSets)
• Can be used in conjunction with AO
• Lowers latency for random read-intensive IO workloads
• No dedicated SSDs required; the SSDs can be shared with an SSD tier and AO
• No license required; included in the HPE 3PAR OS software
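If the simulator is used, run a representative workload for a while and then review the flash cache statistics; the following is a sketch only, and whether simulated activity is reported by srstatcache depends on the HPE 3PAR OS version, so treat this as an assumption to verify:
CLI command: createflashcache -sim 256G
CLI command: srstatcache
Once SSD drives are available, the real flash cache can be created and enabled as shown in the steps below.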
Demonstrating AFC effectiveness
1. Start a workload on the array using Iometer with the following parameters:
a. 70 percent read and 30 percent write
b. 16 KB block size
c. 100 percent random
2. Login to the HPE 3PAR SSMC and navigate to reports
3. Select to create a new report, using the “Exported Volumes—Real-Time Performance Statistics” this will provide a real-time view of the
immediate effects of enabling AFC
4. With Iometer running and the performance chart plotting, log in to the CLI using either the HPE 3PAR CLI or SSH.
5. Execute the following command:
a. cli% createflashcache <maximum amount available>
Note
If the size is too large, a message will return with the maximum allowed size, the total amount of flash cache is dependent on the number of
nodes in the system and the array model.
6. Enable flash cache for the entire array using the command:
a. setflashcache enable sys:all
Note that flash cache can be enabled on a per VVSet basis also.
7. Return to the performance graph. The "warm up" period will take a few minutes; latency will drop and IOPS will increase as noted in figures 56 and 57. In this example some VVSets had AFC enabled and some did not, in order to demonstrate the difference.
Figure 56. AFC decrease in latency
Figure 57. AFC IOPS increase
Priority Optimization (quality of service)
HPE 3PAR Priority Optimization software is a QoS feature that allows you to enable predictable levels of service for your critical applications in a
multi-tenant environment. You can create and modify minimum performance goals and threshold limits including IOPS, bandwidth, and a latency
goal. Your mission-critical apps would be set to high priority and others set to medium or low priority. Thresholds are managed either by virtual
volume set or by virtual domain. Enforcement is real-time and measured in seconds or sub-seconds.
Running Priority Optimization in the GUI
Create a virtual volume set containing two or more volumes, call it "High," and export it to "TestHost." Any RAID 6 or RAID 1 CPG can be used for volume creation. Using the "Create Virtual Volume" dialog, name the volumes "High" at location a in figure 58, specify two or more volumes to be created at location b, and name the VVSet QoS_High at location c.
Figure 58. Creating VVs for Priority Optimization testing
Figure 59. Creating VVs for Priority Optimization testing
8. Repeat the same process, creating and exporting a second virtual volume set called “Low.”
9. Use "Dummy File Creator" to create a 64 GiB file named iobw.tst on each volume. This avoids the delay associated with having Dynamo generate the test files.
10. On your host, generate an Iometer workload across all the drives that you have just created. Make it sufficient to drive a minimum of
8,000 IOPS.
11. In the GUI, set up a performance chart to track the performance of your volumes. Create a custom chart and select VLUNs at location a in figure 60. First, select all the "High" LUNs at location b and select an aggregated plot at location c, then click "Create". Repeat the process for the "Low" LUNs. Both reports will show up in the left-hand pane; select both reports and select "Start real-time" to start the performance graphs.
Figure 60. VV selection dialog for QoS
12. For the purpose of the PoC, set up two QoS policies by selecting "Priority Optimization" from the top menu and then "Create priority optimization policy," allowing the QoS_High VVSet to have no limits while the QoS_Low volume set is limited to 3,000 IOPS.
Figure 61. Configuring QoS
13. After creating the two policies, they will show in the left-hand pane; when selected, the details of the policies are shown.
Figure 62. Configuring QoS service levels
14. Return to the performance chart and wait a few seconds for it to update. It will resemble the graph in figure 63 where the green line
represents your “Low” volumes, the red line represents your “High” volumes, and the blue line represents a possible third workload
on the array.
Figure 63. Effect of QoS policies
Running Priority Optimization in the CLI
Note
If the GUI portion of the Priority Optimization test has been completed, skip ahead to step 8, or if so desired, remove the previous configuration
and proceed with step 1.
1. Create two "High" volumes named QoS_High.0 and QoS_High.1
a. CLI command: createvv -tdvv -cnt 2 provisioning.cpg QoS_High.0 512G
2. Create a VVSet named QoS_High containing all volumes starting with "QoS_H"
a. CLI command: createvvset QoS_High QoS_H*
3. Export them to your host
a. CLI command: createvlun set:QoS_High 100 TestHost
4. Do the same for the "Low" volumes: create QoS_Low.0 and QoS_Low.1, then a VVSet named QoS_Low
a. CLI command: createvv -tdvv -cnt 2 provisioning.cpg QoS_Low.0 512G
b. CLI command: createvvset QoS_Low QoS_L*
5. Export the new volumes to the TestHost
a. CLI command: createvlun set:QoS_Low 150 TestHost
6. Display your VVSet
a. CLI command: showvvset
7. Create a workload to all your volumes on your host using Iometer; refer to appendix A for steps to create a small block workload
a. Display the performance of your volumes using the statvlun command. You may limit the display by including the -v switch, which restricts the output to the specified volumes
b. For example, if your volumes start with "QoS," you may use the command: statvlun -v QoS*
Make a note of the IOPS to your volumes
8. Create a new QoS configuration limiting your VVSet to 3,000 IOPS
a. CLI command: setqos -io 3000 vvset:QoS_Low
9. If so desired, specify low, normal, or high priority for the VVSet
a. CLI command: setqos -pri low -on -io 3000 vvset:QoS_Low
b. CLI command: setqos -pri normal -on -io 3000 vvset:QoS_Low
c. CLI command: setqos -pri high -on -io 3000 vvset:QoS_Low
10. Display the current status of QoS on your VVSet
a. CLI command: showqos vvset:QoS_Low
Figure 64. QoS policy in the CLI
1. Rerun the statvlun command and note the change in performance.
CLI command: statvlun
2. You may use the following commands to toggle QoS off and on for your VVSet:
CLI command: setqos -off vvset:QoS_Low
CLI command: setqos -on vvset:QoS_Low
3. To complete this test case, clear the QoS configuration and remove the VVSets:
CLI command: setqos -clear vvset:QoS_Low
CLI command: removevvset -f QoS_Low
CLI command: removevvset -f QoS_High
Table 20. Priority Optimization verification
Test: Priority Optimization (verification of the quality of service feature)
Expected results:
1. The IOPS consumed by a workload are throttled back when the QoS limit is set
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
HPE 3PAR File Persona (SMB and NFS)
Note
HPE 3PAR is transitioning from the existing Java-based management console (MC) to a Web-based interface branded as the HPE 3PAR StoreServ Management Console (SSMC). For a short period, certain management features will be available only in either the MC or the SSMC. File Persona is currently supported in the SSMC and the CLI.
HPE 3PAR File Persona storage concepts and terminology
Figure 65. HPE 3PAR File Persona storage concepts and terminology
File Provisioning Group
A File Provisioning Group (FPG) is an instance of the HPE-developed Adaptive File System. It controls how files are stored and retrieved. Each FPG is transparently constructed from one or multiple VVs and is the unit of replication and disaster recovery for the File Persona Software Suite. Up to 16 FPGs are supported on a node pair.
Virtual File Servers
A Virtual File Server (VFS) is conceptually like a server. As such, it presents virtual IP addresses to clients, participates in user authentication services, and can have properties for such things as user/group quota management and antivirus policies.
File Stores
File Stores are the slice of a VFS and FPG at which snapshots are taken, capacity quota management can be performed, and antivirus scan service policies are customized.
File Shares
File Shares are what provide data access to clients via SMB, NFS, and the Object Access API, subject to the share permissions applied to them.
Getting started with File Persona
Note
The File Persona section of the PoC has the following networking requirements:
1. One IP address per node used in a File Persona cluster (File Persona is configured per node pair)
a. At least one Active Directory or LDAP server (if none are available, local user accounts can be used instead)
b. If Active Directory is used, a user ID and password with only minimal rights will be required
c. LDAP will require basic information such as the search base
d. Optionally, local users and groups can be used
2. In addition, multiple authentication providers (up to three) can be used, enabling concurrent authentication for SMB, NFS, and local users
3. At least one IP address per virtual file server.
1. Download the HPE 3PAR SSMC.
2. Install the SSMC on a local server or a local computer.
3. After launching the SSMC, it prompts for creation of an administrator login; this login is not for administration of the actual array(s) but for administration of the console itself. Create any administrator login and password that conforms to the password requirements; the administrator console is used solely for adding and removing connections to HPE 3PAR arrays.
4. Close the dialog box and login with the newly created administrator login.
5. In the newly opened dialog box, select "Add" from the action menu as noted in figure 66 location a.
Figure 66. Adding a storage array to the SSMC
6. Enter the IP address or FQDN of the array as well as the user ID and password for the array; the default user ID is "3paradm" and the default password is "3pardata".
Figure 67. Add storage system dialog
7. Note that initial connections to arrays require acceptance of the encryption certificate, as noted in figure 68 location a. From the action menu, select "Accept Certificate".
Figure 68. Secure certificate acceptance
8. After accepting the certificate close the dialog box and return to the login screen. Uncheck the “Administrator Console” checkbox
if still selected.
9. Log in with the user ID and password of the HPE 3PAR array; the default user ID is 3paradm and the default password is 3pardata.
After logging in to the console, a system dashboard is presented summarizing statistics across all arrays. This section of the PoC focuses only on HPE 3PAR File Persona functionality. First, the File Persona settings need to be enabled in the SSMC.
Figure 69. SSMC dashboard
1. Select the main 3PAR StoreServ drop down and note the three options available.
2. Next, from the main 3PAR StoreServ drop-down as noted in figure 70 location a, select Settings, which is location b.
Figure 70. Alternating advanced settings
3. In the settings dialog box select “Edit Global Settings”.
4. Scroll down in the dialog box and enable “Advanced file objects” as noted in figure 71, location a; then “OK” as noted in figure 71 location b.
Figure 71. Turn on File Persona Management
5. Return to the main console by expanding the main 3PAR StoreServ menu as noted in figure 72 location a, and make note of the expanded
options available under the File Persona column as noted in figure 72 location b.
Figure 72. Beginning HPE 3PAR File Persona configuration
6. Select Persona configuration as noted in figure 72 location c.
7. Enable advanced options as noted in figure 73, location a.
8. Click the configuration gear as noted in figure 73 location b.
9. Fill in the subnet mask and gateway as noted in figure 73 location c, optionally assigning a VLAN tag, changing the MTU to 9000, and changing the NIC bond mode from 1 (Active/Standby) to 6 (Adaptive load balancing).
Note
10GbE is supported only with mode 1, 1GbE is supported with either mode.
10. Fill in at least one DNS server if so desired as noted in figure 73 location d.
Note that the configuration may take several minutes to complete as the array verifies networking information.
Figure 73. Configuring HPE 3PAR File Persona
11. Scroll down the configuration dialog and select the authentication provider order as noted in figure 73 location a.
12. Fill out the appropriate authentication provider information as noted in figure 73 location b; for the purpose of this PoC local accounts
will be used.
13. Add a user-id and password at location a figure 74.
14. Enable the user account at location b figure 74.
Figure 74. Authentication settings
15. Click configure in the lower right corner to begin the configuration process and return to the main HPE 3PAR File Persona page.
16. Select “Create virtual file server” from the general menu or from the Actions menu as noted in figure 75 locations a and b, respectively.
Figure 75. Create virtual file server
17. Name the virtual file server in the configuration dialog as noted in figure 76 location a.
18. Select a CPG as noted in figure 76 location b.
19. Set the amount of storage space to allocate at location c, figure 76.
20. Assign an IP address by clicking the configuration gear at location d, figure 76.
Figure 76. Virtual file server configuration
21. Return to the main menu by clicking on 3PAR StoreServ as noted in figure 77 location a.
Figure 77. Create File Store
22. Select “File Stores” as noted in figure 77 location b (Refer to the terminology section for a refresher on File Stores).
23. Name the File Store "VFSStore" as noted in figure 78 location a.
Figure 78. Create File Store (continued)
24. Select the virtual file server, which will host the file store as noted in figure 78 location b.
25. Open the main menu by clicking 3PAR StoreServ as noted in figure 79 location a.
26. Select “File Shares” as noted in figure 79 location b.
Figure 79. File Share creation
27. Click create file share as noted in figure 80 location a.
Figure 80. Create File Share
28. Name the File Share as noted in figure 81 location a.
Figure 81. File Share configuration
29. Select the Virtual File Server and File Store that will host the share; for the purpose of this PoC, they are the VFS and the VFSStore File Store, shown in figure 81 locations b and c, respectively.
30. Click create to finish configuration and return to the main File Share display.
Configuring File Persona via the HPE 3PAR CLI
1. Start File Persona services on the nodes and ports to be used using the command:
a. startfs 0:2:1 1:2:1 2:2:1 3:2:1 (note that this will depend on the number of nodes in the system)
2. Set the IP addresses of each of the respective nodes
Note that the last number used on the command line is the node number and substitute in IP addresses appropriate for the environment
a. setfs nodeip -ipaddress 10.39.15.20 -subnet 255.255.255.0 0
b. setfs nodeip -ipaddress 10.39.15.21 -subnet 255.255.255.0 1
c. setfs nodeip -ipaddress 10.39.15.22 -subnet 255.255.255.0 2
d. setfs nodeip -ipaddress 10.39.15.23 -subnet 255.255.255.0 3
3. Set the DNS server, if using Active Directory this is most likely an AD server
a. setfs dns 10.139.2.3
4. Set the IP gateway
a. setfs gw 10.39.15.254
5. Create the File Provisioning Groups (each built from a CPG), one for each of the nodes
a. createfpg -node 0 FC_r5 node0fs 1T
b. createfpg -node 1 FC_r5 node1fs 1T
c. createfpg -node 2 FC_r5 node2fs 1T
d. createfpg -node 3 FC_r5 node3fs 1T
6. Create the Virtual File Servers, in this example one per node on a four node system
a. createvfs -fpg node0fs 10.39.15.200 255.255.255.0 node0fs_vfs
b. createvfs -fpg node1fs 10.39.15.201 255.255.255.0 node1fs_vfs
c. createvfs -fpg node2fs 10.39.15.202 255.255.255.0 node2fs_vfs
d. createvfs -fpg node3fs 10.39.15.203 255.255.255.0 node3fs_vfs
7. Create the File Stores to be used by the Virtual File Servers (VFS)
a. createfstore node0fs_vfs node0fs_fs1
b. createfstore node1fs_vfs node1fs_fs1
c. createfstore node2fs_vfs node2fs_fs1
d. createfstore node3fs_vfs node3fs_fs1
8. Finally, create the file shares, in this case SMB
a. createfshare smb -allowperm Everyone:fullcontrol -fstore node0fs_fs1 -sharedir share0/data node0fs_vfs share0
b. createfshare smb -allowperm Everyone:fullcontrol -fstore node1fs_fs1 -sharedir share1/data node1fs_vfs share1
c. createfshare smb -allowperm Everyone:fullcontrol -fstore node2fs_fs1 -sharedir share2/data node2fs_vfs share2
d. createfshare smb -allowperm Everyone:fullcontrol -fstore node3fs_fs1 -sharedir share3/data node3fs_vfs share3
9. Drives can now be mapped to the HPE 3PAR File Persona shares
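To verify the configuration from the CLI, the File Persona objects just created can be listed. This is a sketch under the assumption that the corresponding show commands (showfpg, showvfs, and showfshare) are available in the HPE 3PAR OS version in use:
CLI command: showfpg
CLI command: showvfs
CLI command: showfshare
Each command should list the FPGs, Virtual File Servers, and file shares created in steps 5 through 8.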
Resiliency testing
HPE 3PAR StoreServ has been designed to deliver 99.999 percent or greater availability. The purpose of this test is to create or simulate failures in order to validate the redundancy and resiliency of array components such as disk drives, disk enclosures, and ports. It will also validate the mesh-active configuration of the array across all controller nodes.
Notes regarding failure testing: these failure scenarios highlight several features, including:
HPE 3PAR Persistent Cache
HPE 3PAR Persistent Cache is a resiliency feature built into the HPE 3PAR Operating System that allows graceful handling of an unplanned
controller failure or planned maintenance of a controller node. This feature eliminates the substantial performance penalties associated with
traditional modular arrays and the cache “write-through” mode they have to enter under certain conditions. HPE 3PAR StoreServ Storage can
maintain high and predictable service levels even in the event of a cache or controller node failure by avoiding cache write-through mode.
Under normal operation on an HPE 3PAR StoreServ Storage system, each controller has a partner controller in which the controller pair has
ownership of certain logical disks. As mentioned earlier, LDs are the second layer of abstraction in the system’s approach to virtualization of
physical resources and are also where the QoS parameters are implemented (drive type, RAID, HA, etc.). Ultimately, LDs from each node pair are
grouped together to form VVs. In the rare event of a controller failure or planned controller maintenance, HPE 3PAR Persistent Cache preserves
write caching by dynamically remirroring cache of the surviving partner controller node to the other controller nodes in the system.
For example, in a quad controller configuration (where Node 0 and Node 1 form a node pair and Node 2 and Node 3 form a second node pair),
each node pair might own 100 LDs with each node within the pair fulfilling the role of the primary node for 50 of those LDs. If Node 2 fails, the
system will transfer ownership of its 50 LDs to Node 3, and Node 0 and Node 1 will now be the backup (and thereby the cache mirroring
partner) for the 100 LDs that Node 3 is now responsible for. The mirroring of write data coming into Node 3 for those 100 LDs will be evenly
distributed across Node 0 and Node 1.
Port Persistence: Mission-critical tier 1 storage environments require extremely high availability (HA). Tier 1 customers running hundreds (or
thousands) of servers in an enterprise environment feel that a dependency on host multipathing failover software during firmware upgrades,
node failures, or in response to a “loss_sync” event (a physical layer problem between the storage array and the switch) introduces the risk of a
service disruption and hence should be avoided.
HPE 3PAR Persistent Ports technology allows for a non-disruptive environment (from the host multipathing point of view), where host-based multipathing software will not be involved in maintaining server connectivity to the storage during firmware upgrades. The same applies in the event of a node failure, or when an array port is taken offline administratively or loses physical connectivity to the fabric because of a hardware failure in the SAN fabric.
Persistent Ports technology does not negate the need for properly installed, configured, and maintained host multipathing software. Persistent Ports isolates a server from the need for path failover during firmware upgrades, in the event of a fabric hardware failure resulting in a "loss_sync," or in the event a node becomes unavailable due to a panic or loss of power. It will not protect against cable problems or host HBA failures that do not result in a "loss_sync" on the storage array node; only a properly configured multipathing environment provides protection from these events.
HPE 3PAR Persistent Ports functionality works for the following transport layers:
• FC
• FCoE
• iSCSI
Features and benefits
HPE 3PAR Persistent Ports functionality provides transparent and uninterrupted failover in response to the following events:
• HPE 3PAR OS firmware upgrade
• Node maintenance that requires the node to be taken offline (e.g., adding a new HBA)
• HPE 3PAR node failure
• HPE 3PAR array “loss_sync” to the FC fabric
• Array host ports being taken offline administratively
Test case 1: Failure of a front-end (Host) cable
It is important to take care when executing these tests. Damage to electrical components may result from sparking or static electricity. It is
strongly recommended to have a certified HPE 3PAR field specialist onsite and to follow proper data center protocols for working with electrical
components. Service procedures and commands are subject to change with new releases and updates; check with HPE 3PAR support or the
proper service guides for the most current information.
Before beginning the resiliency section of the PoC, start a load generator such as Iometer in order to observe the performance of the array.
1. Prior to disconnecting any cables, check that no single-pathed hosts are called out by the checkupgrade command.
CLI command: checkupgrade
2. Use the following commands to display the status of the ports:
Table 21. Port failure monitoring
• showhost: Displays information about defined hosts and host paths in the system.
• showport: Displays information about ports in the system.
• showport -par: Displays a parameter listing, such as the configured data rate of a port and the maximum data rate that the card supports, the type of attachment (direct connect or fabric attached), and whether the unique_nwwn capabilities are enabled.
• showport -c: Displays all devices connected to the port, including cages (for initiator ports), hosts (for target ports), and ports from other storage systems for Online Import or Remote Copy over Fibre Channel (RCFC) ports.
• statport -host -ni: Displays port statistics; the -host option limits output to host (target) ports (other options select disk/initiator ports, Fibre Channel Remote Copy configured ports, or Fibre Channel ports for data migration), and -ni restricts output to non-idle ports.
3. Disconnect the chosen host port and re-display the commands; for example, “Disk Port 2” on Node 0, as highlighted at location a in figure 82.
Figure 82. Locating a disk connector
CLI command: showhost
CLI command: showport
CLI command: showport -par
CLI command: showport -c
CLI command: statport -host -ni
4. View the system alerts generated by the lost connection
CLI command: showalert
5. Verify that TestHost does not have any loss of array connectivity
6. To complete this test case, reconnect the cables and verify the system is back to normal
Use the following commands to verify that the status of the ports is back to normal:
CLI command: showhost
CLI command: showport
CLI command: showport -par
CLI command: showport -c
CLI command: statport -host -ni
7. If the HPE 3PAR Persistent Ports option is implemented, the path failure will be transparent to the host; path status can be displayed as
shown in figure 83
Figure 83. Displaying failover state of back-end ports
Failover state values:
• none: failover not in operation
• failover_pending: failover to partner request has been issued but not yet completed (a transient state)
• failed_over: port has failed over to its partner port
• failback_pending: failback request has been issued but not yet completed
• active: partner port is failed over to this port
• active_down: partner port failed over to this port, but the port is down, most likely due to a missing or bad cable
• active_failed: partner port failed over to this port, but the action failed; most likely, the FC switch did not have N_Port ID Virtualization (NPIV) enabled
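As a quick way to observe these states during the test (the exact column headings are an assumption based on HPE 3PAR OS 3.1.2 and later output and may differ on your release), re-display the port listing after pulling the cable:
CLI command: showport
In the output, the Partner column identifies each port's Persistent Ports partner, and the FailoverState column holds one of the values listed above; a pulled port showing failed_over with its partner showing active indicates that Persistent Ports has taken over the path.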
8. To conclude this portion of the test, replace the cable and run checkhealth in the CLI.
Test case 2: Failure of a back-end (disk) serial-attached SCSI (SAS) cable
Note
This test is not possible if the configuration includes only controller node enclosures and no disk-only enclosures. The disks in the controller node enclosures are cabled internally and cannot be disconnected.
1. Prior to disconnecting any cables, check for two paths to each cage
CLI command: showcage
2. Disconnect a SAS back-end cable
A disconnected SAS cable from the DP2 connection of a controller node will remove the Loop A connections from that node to all attached disk enclosures; a disconnected SAS cable from the DP1 connection of the same node will remove all the Loop B connections
3. Verify if A or B loop is missing
CLI command: showcage
4. Display the status of system alerts
CLI command: showalert
5. Verify no loss of system operation
6. To complete this test case, reconnect the cable and verify that the path has been restored
CLI command: showcage
7. Verify that the alerts have been auto-cleared by the system
CLI command: showalert
Test case 3: Fail a disk drive
With HPE 3PAR arrays at HPE 3PAR OS 3.1.2 or above, the failed disk shows as degraded until the sparing process has completed, at which point it will show as failed. In addition, the hot-plug LED on the failed disk will be on to indicate that it can be replaced. When the failed disk has been replaced, servicemag resume will start automatically and the used chunklets will be returned to the new disk (under normal circumstances). To accurately determine the specific location of a drive to pull, use the locatecage command; for example, to blink the LED for drive 19 in cage 0, use the command locatecage 0 19.
In this test scenario, there are two ways of achieving the simulated failure:
1. The simplest way is to use the controlpd spindown command. However, this is not a genuine failure, and at recent code levels there is a built-in delay of 20 minutes before the drive starts to spare out.
2. To simulate a genuine failure on a spinning drive, fail six chunklets on the drive. When the sixth chunklet fails, the system will mark the disk as degraded and immediately start sparing out. On SSD drives, the failed-chunklet method will not work due to differences in the way SSDs and spinning disks function.
Begin the test: Failing a disk
1. Choose a disk with not too many used chunklets (around 30 if possible)
2. CLI command: showpd -cp
Figure 84. Displaying chunklet utilization
3. Determine the WWN of the chosen drive, in this example drive 22
CLI command: showpd -i <pdid>
Figure 85. Output of showpd -i
Record the WWN of the chosen disk:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
4. Set the SSD to a failed state
CLI command: controlpd chmederr set -2 <wwn>
This is the WWN of the entire SSD (not a chunklet) that will be failed.
Example: controlpd chmederr set -2 5000CCA0131856AB
The SSD drive will fail and show degraded as the state.
5. Monitor the progress of the rebuild process
CLI command: showpd -c
When the used spare chunklets reach zero, the sparing-out process has completed. Disk status shows degraded while sparing out and will change to failed when fully spared out and ready to be replaced.
6. Wait until the process is complete
Un-fail the disk by clearing the failed status
CLI command: controlpd clearerr <wwn>
Example: controlpd clearerr 5000CCA0131856AB
7. HPE 3PAR StoreServ keeps track of all WWNs and will not allow their re-use; to clear this condition enter these two commands:
CLI command: servicemag clearstatus <cg> <mg>
Where <cg> is the cage number and <mg> is the disk magazine number
Example: servicemag clearstatus 0 6
8. Clear the failed status of the drive
CLI command: servicemag unmark <cg> <mg>
9. Resume the servicemag; allow time for the chunklets to be relocated
CLI command: servicemag resume <cg> <mg>
10. Monitor the status of the servicemag process
CLI command: servicemag status -d
11. Monitor the progress of chunklets being moved back
CLI command: showpdch -mov
During the process, this displays the total of chunklets that still need to be relocated
After the process is complete, it displays where all the chunklets went
12. Completing this test case
When the servicemag status -d shows complete, you can display where all the chunklets moved
CLI command: showpdch <PDID>
Where PDID is the ID number of the physical disk
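For reference, the disk-fail and recovery cycle above can be summarized as the following command sequence; the PD ID (22), WWN (5000CCA0131856AB), cage (0), and magazine (6) are the example values used in this test case and must be replaced with the values from the array under test:
showpd -cp
showpd -i 22
controlpd chmederr set -2 5000CCA0131856AB
showpd -c
controlpd clearerr 5000CCA0131856AB
servicemag clearstatus 0 6
servicemag unmark 0 6
servicemag resume 0 6
servicemag status -d
showpdch -mov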
Test case 4: Fail a disk enclosure power supply
1. Turn off a disk enclosure power supply
Display the status to verify the power supply is off
CLI command: showcage -d
2. Observe amber LEDs on appropriate cage port
3. Verify no loss of system operation
4. To complete this test case, turn the power supply back on
Test case 5: Power off a drive enclosure
1. Important: Before powering off a “Drive Cage,” you need to be sure that the system is configured for cage or enclosure level availability
or better.
CLI command: showld -d
The Avail and CAvail columns must be Cage or Port for all LDs.
The Avail column shows the availability that was specified when the VV was created. The CAvail column shows the current availability.
2. If it is determined that the system is configured for Cage availability or better, proceed to power off both power supplies in the desired
Drive Cage.
Note
If the Cage is powered down for more than 5 minutes, chunklet relocation will start, and it will be necessary to relocate the chunklets back to their
original location after this test.
3. To complete this test case, restore power to the Drive Cage by turning both power supplies on.
4. To complete this test case, verify that chunklet relocation did not start.
CLI command: showpdch -mov
5. If chunklet relocation started, run moverelocpd.
CLI command: moverelocpd <fd> <td>
Where <fd> (from disk) and <td> (to disk) are the same.
Note
On the HPE 3PAR StoreServ there is a process that executes every 24 hours that will move any relocated chunklets to their original locations.
Test case 6: Fail an AC source (optional)
1. Turn off a power distribution unit (PDU). If a physical SP is in use, choose the PDU that it is not plugged into
2. Verify no loss of system operation
3. To complete this test case, turn the PDU back on
Test case 7: Fail a controller node power supply
This test case is redundant if the “Fail an AC source” test case was completed.
1. Select a node and verify that it is healthy
CLI command: shownode -d
2. Turn off one node enclosure power supply
3. Verify that the power is down from one power supply (PS)
CLI command: shownode -d
4. Observe that the “Node Status” LED is flashing amber
5. Verify no loss of system operation
6. To complete this test case, turn the supply back on and verify the node has returned to normal status
CLI command: shownode -d
Test case 8: Simulate a node failure
1. Ensure that there are no hosts with only one path
CLI command: checkupgrade
2. Observe which node has the active Ethernet interface
CLI command: shownet
3. To demonstrate the Ethernet interface failing over, move the IP to the node selected for failure testing
CLI command: setnet changenode <nodeid>
4. In addition, if so desired, the master node can be selected for failure testing
CLI command: shownode
5. Fail the selected node. Turning off the power to the node will fail it, although powering off allows cache to de-stage gracefully; a more realistic approach is putting the node into whack mode, which removes it from the cluster immediately
To put a node in whack mode do the following:
a. Use a terminal program to bring up a serial connection on the node to be halted
b. When you see the login prompt, type “Ctrl-w”
This will immediately drop the node out of the cluster and into the whack prompt
c. To prevent the node from automatically rebooting, type any command, such as “ver”
d. Verify that the node has dropped out of the cluster
CLI command: shownode
e. Observe that alerts of the missing node arrive
f. To complete this test case, bring the node back into the cluster
At the whack prompt
CLI command: reboot
g. Wait a few minutes and verify that the node has rejoined the cluster
CLI command: shownode
To complete the resiliency test as a whole, run checkhealth and turn off Maintenance Mode
CLI command: checkhealth
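For convenience, the node-failure test can also be captured as a single command sequence; the node ID (3) is purely illustrative and should be replaced with the node selected for the test:
checkupgrade
shownet
setnet changenode 3
shownode
(fail the node by powering it off, or drop it from the cluster at the whack prompt via the serial console)
reboot (at the whack prompt, to rejoin the cluster)
shownode
checkhealth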
Table 22. Resiliency verification
Resiliency testing: Verification of redundancy and failover capabilities of the array
Expected results:
1. Validated redundancy configuration of controllers and other components
2. No IO timeouts observed during controller failover and other component failures
<Customer> comments:
Result: Success / Partial success / Failed
Compare (optional): Better / Equal / Worse
Appendix A: Preconditioning SSD drives for performance testing
Intended audience
This guide’s intended audience is storage administrators who wish to observe HPE 3PAR performance using an open source, freely available, and
easy-to-use benchmarking tool on their own premises, through the customer demo portal, or during a guided proof of concept. This document is
not an exhaustive guide on how to use Iometer but rather a guide to demonstrate the performance of the HPE 3PAR StoreServ products quickly
and effectively. This guide assumes that the HPE 3PAR StoreServ has been set up according to best practices. There are several
recommendations throughout this appendix that make note of the best practices, but they do not comprise an exhaustive discussion on that
topic. For more information, download the HPE 3PAR best practices guide.
General guidance on preconditioning
When defining workloads, block size is inversely proportional to the IOPS rate: the larger the block size, the higher the service time per IO, and the fewer IOPS are needed to service the workload.
Queue depth (the number of outstanding IOs) will substantially affect the throughput of the array as well as latency.
Although the Iometer GUI must run on Windows, RHEL as well as other platforms are supported for the Dynamo managers. If using Linux
distributions, note that changes to the IO scheduler will have pronounced effects on test results.
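For example, on most Linux distributions of this era the active IO scheduler can be inspected and changed per block device through sysfs; this is a minimal sketch that assumes the test LUN appears as /dev/sdb (a hypothetical device name) and that the noop scheduler is built into the kernel:
cat /sys/block/sdb/queue/scheduler
echo noop > /sys/block/sdb/queue/scheduler
The first command shows the available schedulers with the active one in brackets; the second (run as root) switches the device to the noop scheduler. Record whichever scheduler is used so that results remain comparable across test runs.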
Ensure that the test environment meets a best practices configuration resembling figure 86.
Figure 86. Recommended architecture for Iometer benchmarking
Why Iometer? Iometer (pronounced like thermometer) was selected, as it is a freely available, open source, and easy-to-use benchmarking tool.
Originally developed by Intel®, it has been open source since 2001. HPE has neither influence nor involvement of any kind with the Iometer
development community. Links are included in this document to download the tool and its documentation from sourceforge.net, which is a site
developed, maintained, and used by the open source community.
Important notes regarding Iometer components and versions
There are two components to Iometer:
1. The Iometer GUI executable, Iometer.exe
2. Dynamo, which is the executable that generates load and is available on Linux (32- and 64-bit), and Windows (x86, x64, and ia64). Dynamo
workloads are instantiated and monitored by Iometer.
Iometer is the controlling program. Using Iometer’s graphical user interface, users configure the workload, set operating parameters, and start
and stop tests. Iometer tells Dynamo what to do, collects the resulting data, and summarizes the results in output files. Only one copy of Iometer
should be running at a time; typically, it is run on a management computer that is not running Dynamo. Each running copy of Dynamo is called a
manager; each thread within a copy of Dynamo is called a worker.
Figure 87. Determining Iometer version
Figure 88. Final Intel version of Iometer
Best practices for using Iometer to precondition
• Best practice: Make sure to download the latest version of Iometer, which provides the “Write IO Data Pattern” setting; the most pronounced effect of the full random pattern is on the compressibility of the generated data.
• Best practice: The entire capacity of the drives should be overwritten at least twice with large-block sequential IO, then 64 KB random IO, and then 4 KB random IO, as detailed in figure 90.
Getting started with Iometer
Figure 89. Iometer GUI controls
1. Download Iometer from sourceforge.net onto a Windows Server® that will act as the Iometer server
2. Unzip and launch Iometer; note that the Iometer GUI is laid out as shown in figure 89
3. Download and install Dynamo on the manager machines; Windows and Linux are both supported in version 1.1
4. Export four volumes from the HPE 3PAR array to each of the manager hosts
5. Launch the Iometer GUI on the Iometer server (if only using one host, skip step 6)
6. Execute Dynamo on each of the manager hosts using the following syntax (see the example invocations after this list):
I. dynamo -i <Iometer server> -m <local host name>
II. Where <Iometer server> is the machine that is running the Iometer GUI
III. Note that there are several more options available; these are the only options necessary
7. Highlight and expand a manager host listed in the topology pane of the Iometer GUI
a. There will be two workers (threads) for every CPU core (a result of hyper-threading)
b. The target list: Select a target for each worker; note the different icons and colors of the targets; yellow with a red line across it means the target has a file system but does not have an iobw.tst (test file); if this is the case, Iometer will create the file; note that if the volume is large it can take several hours to create, as Iometer will fill the entire volume
c. Note: Depending on the number of drives and their capacity, overwriting the drives may take hours to days
8. Assign 1/4th of the total number of workers to each of the four volumes presented from the array
9. In the Iometer pane c (as designated in figure 89), set the “# of outstanding IOs” to 32; this is the queue depth and 32 is the default for most
operating systems
10. Also in Iometer pane c set the “Write IO Data Pattern” to “Full Random”
11. Next, select the “Access Specification” tab to define the workload to test; there are several predefined workloads from which to select as noted
in figure 90 and pane d; in addition, multiple workloads can execute at once by assigning a percentage to an individual workload; all must add
up to 100 percent; for more information, review the documentation included with Iometer
12. Set “All Workers” as the default assignment for the workload as noted in figure 90 at location e
13. Assign the workload to be used by either selecting a predefined workload or defining a workload by selecting “New”
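As referenced in step 6, example Dynamo invocations for a Windows manager and a Linux manager might look like the following; the Iometer server name and manager names are hypothetical placeholders:
C:\Iometer> dynamo -i iometer-server.example.com -m winmgr1
# ./dynamo -i iometer-server.example.com -m linuxmgr1
Only the -i (Iometer server) and -m (local manager name) options are needed for this procedure.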
Defining the preconditioning workload
1. After selecting disk targets, switch tabs to “Access Specification” as shown in figure 90, location a
2. Select New at location b, figure 90
Figure 90. Access specification
First preconditioning workload 256 KB sequential write
1. Name the access specification 256 KB sequential at location a, figure 91
2. Set default worker assignment to all workers at location b, figure 91
3. Set transfer request size to 256 KB at location c, figure 91
4. Set random/sequential access to 100% sequential at location d, figure 91
5. Set percent read/write to 100% write at location e, figure 91
6. Set align IO to 16 KB (changing the default of 512 in the bytes field to zero)
7. Select OK to close the dialog box at location g, figure 91
Figure 91. First Iometer specification for preconditioning
Second preconditioning workload 64 KB random 100 percent write
1. Select New from the access specification dialog at location b, figure 90
2. Name the access specification 64 KB random at location a, figure 92
3. Set default worker assignment to all workers at location b, figure 92
4. Set transfer request size to 64 KB at location c, figure 92
5. Set random/sequential access to 100 percent random at location d, figure 92
6. Set percent read/write to 100 percent write at location e, figure 92
7. Set align IO to 16 KB (changing the default of 512 in the bytes field to zero)
8. Select OK to close the dialog box at location g, figure 92
Figure 92. 64 KB random write
Third preconditioning workload 4 KB random 100 percent write
1. Select New from the access specification dialog at location b, figure 90
2. Name the access specification 4 KB random at location a, figure 93
3. Set default worker assignment to all workers at location b, figure 93
4. Set transfer request size to 4 KB at location c, figure 93
5. Set random/sequential access to 100 percent random at location d, figure 93
6. Set percent read/write to 100 percent write at location e, figure 93
7. Set align IO to 16 KB (changing the default of 512 in the bytes field to zero)
8. Select OK to close the dialog box at location g, figure 93
Figure 93. 4 KB random
Assigning workloads to managers
1. After completing the workload definitions, highlight each of the manager names (individually) at location a in figure 94.
2. Scroll down at location b, figure 94, to see the newly created access specifications.
3. Assign the newly created access specifications 256 KB sequential, 64 KB random, and 4 KB random as noted at location c, figure 94.
Figure 94. Assigning access specifications to managers
Setting test cycle durations
1. Select test setup at location a, figure 95
2. Optionally assign a test description at location b, figure 95
3. Assign a run time of 12 hours to ensure each access specification has enough time to completely overwrite the drives at location c, figure 95
Note
Iometer will completely overwrite the drives with a file named iobw.tst. Because the drives are overwritten once at the factory and again at installation, this fill is the third sequential overwrite, and when Iometer completes its 256 KB sequential write pass it will be the fourth. The 64 KB and 4 KB random patterns then “scramble” the metadata, so the drives will have been overwritten a total of six times.
4. Press the green flag at location d, figure 95, to start the test; optionally save the results, although it’s not necessary
5. Switch to the results display tab at location e, figure 95
Note that it may take several hours to a day before Iometer completes creating the iobw.tst files, depending on the size of the array; specifically, the number of drives, the type of drives, and the total capacity.
Figure 95. Setting test durations
When the workload cycles are completed, the SSD drives will be preconditioned.
Delete all iobw.tst files and return to demonstrating capacity efficiency.
Appendix B: Performance testing using Iometer
Note
Material here is intentionally repeated for those customers who are performing a remote PoC through the HPE Customer Solution Center and
therefore did not require preconditioning of SSD drives.
Intended audience
This guide’s intended audience is storage administrators who wish to observe HPE 3PAR performance using an open source, freely available,
and easy-to-use benchmarking tool on their own premises, through the customer demo portal, or during a guided proof of concept. This
document is not an exhaustive guide on how to use Iometer but rather a guide to demonstrate the performance of the HPE 3PAR StoreServ
products quickly and effectively. This guide assumes that the HPE 3PAR StoreServ has been set up according to best practices. There are
several recommendations throughout this appendix that make note of the best practices, but they do not comprise an exhaustive discussion
on that topic.
General guidance on performance testing
When defining workloads, the block size is inversely proportional to the IOPS rate: the larger the block size, the higher the service time per IO, and the fewer IOPS are needed to service the workload.
Queue depth (the number of outstanding IOs) will substantially affect the throughput of the array as well as latency.
Although the Iometer GUI must run on Windows, RHEL as well as other platforms are supported for the Dynamo managers. If using Linux
distributions, note that changes to the IO scheduler will have pronounced effects on test results.
Ensure that the test environment meets a best practices configuration resembling figure 96.
Figure 96. Recommended architecture for Iometer benchmarking
Why Iometer? Iometer (pronounced like thermometer) was selected, as it is a freely available, open source, and easy-to-use benchmarking tool.
Originally developed by Intel, it has been open source since 2001. HPE has neither influence nor involvement of any kind with the Iometer
development community. Links are included in this document to download the tool and its documentation from sourceforge.net, which is a site
developed, maintained, and used by the open source community.
Important notes regarding Iometer components and versions
There are two components to Iometer:
1. The Iometer GUI executable, Iometer.exe.
2. Dynamo, which is the executable that generates load and is available on Linux (32- and 64-bit), and Windows (x86 32- and 64-bit
as well as ia64). Dynamo workloads are instantiated and monitored by Iometer.
Iometer is the controlling program. Using Iometer’s graphical user interface, users configure the workload, set operating parameters, and start
and stop tests. Iometer tells Dynamo what to do, collects the resulting data, and summarizes the results in output files. Only one copy of Iometer
should be running at a time; typically, it is run on a management computer. Each running copy of Dynamo is called a manager; each thread within
a copy of Dynamo is called a worker.
Figure 97. Determining Iometer version
Figure 98. Final Intel version of Iometer
Best practices for using Iometer
• Best practice: Make sure to download the latest version of Iometer, which provides the “Write IO Data Pattern” setting; the most pronounced effect of the full random pattern is on the compressibility of the generated data.
• Best practice: Let tests run at least 30 minutes before observing results. This allows cache to warm up. Heavy write workloads, for example,
will go very fast until the data cache on an array fills up; subsequent performance will drop off, as data no longer goes to DRAM but to the
back-end disk.
Getting started with Iometer
Figure 99. Iometer GUI controls
1. Download Iometer from sourceforge.net onto a Windows Server that will act as the Iometer server
2. Unzip and launch Iometer; note that the Iometer GUI is laid out as shown in figure 99
3. Download and install Dynamo on the manager machines; Windows and Linux are both supported in version 1.1
4. Export four volumes from the HPE 3PAR array to each of the manager hosts
5. Launch the Iometer GUI on the Iometer server (if only using one host, skip step 6)
6. Execute Dynamo on each of the manager hosts using the following syntax:
I. dynamo -i <Iometer server> -m <local host name>
II. Where <Iometer server> is the machine that is running the Iometer GUI
III. Note that there are several more options available; these are the only options necessary
7. Highlight and expand a manager host listed in the topology pane of the Iometer GUI
a. There will be two workers (threads) for every CPU core (a result of hyper-threading)
8. The target list: Select a target for each worker; note the different icons and colors of the targets; yellow with a red line across it means the target has a file system but does not have an iobw.tst (test file); if this is the case, Iometer will create the file. Note that if the volume is large it can take several hours to create, as Iometer will fill the entire volume. Assign 1/4th of the total number of workers to each of the four volumes presented from the array
9. In the Iometer pane c (as designated in figure 99), set the “# of outstanding IOs” to 32; this is the queue depth and 32 is the default for most
operating systems
10. Also in Iometer pane c set the “Write IO Data Pattern” to “Repeating bytes”
11. Next, select the “Access Specification” tab to define the workload to test; there are several predefined workloads from which to select as noted
in figure 100 and pane d; in addition, multiple workloads can execute at once by assigning a percentage to an individual workload; all must
add up to 100 percent. For more information, review the documentation included with Iometer
12. Set “All Workers” as the default assignment for the workload as noted in figure 100 at location e
13. Assign the workload to be used by either selecting a predefined workload or defining a workload by selecting “New”
Figure 100. Iometer Access Specifications definition
Defining the workload to be tested
In addition to the predefined workloads provided, there are workload definitions available freely on the Internet that simulate VDI, Microsoft SQL,
Oracle, as well as other applications. If there is a specific performance characteristic to be demonstrated (IOPS or throughput for example),
table 23 defines parameters to use to demonstrate different performance characteristics.
Table 23. Workload characteristics
• Transfer request size (figure 100, location f): This is block size in common parlance. Block size and IO rate are inversely proportional: larger block sizes require fewer IOPS to move data, and they increase throughput and decrease IOPS at the expense of higher latency. Databases tend to be small block (8k to 16k); streaming tends to be large block (256k), with VMware in the middle at roughly 32 KB.
• Sequential/Random access (figure 100, location g): Data locality is irrelevant on SSD drives as there is no seek time. To demonstrate the IOPS capability of SSD drives, use 4 KB 100 percent read. To demonstrate the throughput capability of the array, use 256 KB 100 percent read. In the case of SSD drives, large block sequential access will be close to FC drive performance, as the limiting factor is the transport-signaling rate of Fibre Channel 4/8 Gb or SAS 6 Gb.
• Percent of access specification (figure 100, location h): When using multiple workloads, this allows for a workload distribution mix. All workload percentages combined must equal 100.
• Percent read/write distribution (figure 100, location i): To demonstrate the maximum IOPS capability of SSD drives, set this to 100 percent read. To demonstrate the raw write performance of SSD drives without the benefit of write cache, set this number to 0 and allow a warm-up window to ensure the data cache is filled.
• Align IOs on (figure 100, location j): For HPE 3PAR environments, align IO to the array page size of 16 KB, or any multiple of 16 KB.
• Number of outstanding IOs (figure 99, location c): This variable can have a substantial impact on performance and latency. Change it from the default of 1 to 32.
Table 24. Sample workloads
Objective | No. of outstanding IOs | Transfer request size | % random | % of read
Demonstrate maximum IOPS | 32 | 4 KB | 100 | –
Demonstrate maximum throughput | 32 | 256 KB | 100 | 100
Simulate VDI | 32 | 4 KB | 80 | 20
Simulate Microsoft SQL | 32 | 8 KB | 80 | 70
Figure 101. Iometer workload setup options
1. Select the Test Setup tab at location k, in figure 101
2. Set the “Ramp up Time” to 300 seconds at location l in figure 101 (this allows the array to warm up cache)
3. Change the run time from 0 (infinity) to at least 30 minutes at location m in figure 101
4. Name the description if so desired at location n in figure 101
5. Save the testing profile for later use at location o in figure 101
6. Press the green flag icon on the menu bar to start the test
7. Iometer will prompt for the name of a result file; label results file with a meaningful name; if there is no need to save the results, click “Cancel”
and the test will begin
8. Switch to the results tab, change the refresh interval to 2 seconds and change the radio button to “Last Update”
9. Real-time performance can be observed through the “Results Display” tab and in the collected csv results file
10. When the test runs are completed, the data from the csv file can be graphed using Microsoft Excel
Appendix C: Using the HPEPoCApp for performance benchmarking
The HPEPoCApp is a wrapper around the industry-standard vdbench tool. Although vdbench is free, the licensing agreement prohibits redistribution. It can be downloaded, free of charge, from the Oracle Technology Network, which requires a free registration. The hpepocapp tool itself is available through the sales team and cannot be redistributed.
HPEPoCApp
The HPEPoCApp is a tool to help run a PoC while providing the flexibility and choice to meet the PoC’s goals. Other PoC tools lock you into a specific workload choice or require you to learn a synthetic workload tool to gain the flexibility needed to meet those goals.
The HPEPoCApp breaks the PoC into four manageable steps, with automation where it is needed to make the job easier, but not so much that it locks you into someone else’s choices.
Summary steps
Step 1: Prepare the PoC environment
Unfortunately no tool can remove the hard work of setting up the test environment, but the HPEPoCApp offers some tools and the flexibility to
reach your goals. The goal of this step is to connect one or more driver systems to a storage array to be tested and configure them to run the
test. Elements of this step include provisioning the servers, loading the OS, creating the SAN, and provisioning the storage.
Below is an example PoC test environment.
Drivers are servers that will generate the I/O workload for the array to process and then capture the results. The manager is where the user
interface to the drivers resides and where the results are consolidated. The manager can share server hardware with a driver or reside on its own
server. This version of the HPEPoCApp uses vdbench software to generate the I/O workloads, so the drivers can be running any OS supported by vdbench.
Vdbench is a disk I/O workload generator originally released by Sun Microsystems. HPEPoCApp has been tested with vdbench versions 5.04.02 (September 2014), 5.04.03 (April 2015), and 5.04.05 (April 2016), which have been tested on Windows (7, 8, NT, 2000, 2003, 2008, and 2012), HP-UX, Linux, and more.
Driver systems are responsible for generating the I/O workload presented to the storage array. One of the more common problems encountered
in PoCs is an undersized driver system. The storage array can only perform I/O’s requested by the host, so when a desired performance level is not reached, a first troubleshooting step is to ensure the driver system is requesting sufficient I/O’s from the storage array. Ensure you have
enough processing capability (CPU, memory, HBA’s, etc.) in the drivers to generate the desired I/O load.
The Manager runs the HPEPoCApp software. Install the HPEPoCApp software by extracting the zip file on the Windows server or the tar file on
the Linux server. There are many files in the directory, but only two are important for the user.
hpepocapp.exe/hpepocapp: a command-line executable that runs the HPEPoCApp. Verify the tool is ready to run by executing hpepocapp with the help option: hpepocapp -h.
HPEPoCAppWorkloads: the configuration file where you specify the test environment (drivers and LUNs) and the workload parameters. This file has example configurations that can be modified to reflect the local environment.
HPE’s recommendation is to use a workload representing the production workload during a PoC. No insight can be gained into the expected
behavior of a storage array by running 100% write workload, for example, unless the production workload is also 100% write. The HPEPoCApp
was designed to offer flexibility in workload choice.
The SAN provides the connectivity between the servers and the storage array that will be tested. It is important to configure the SAN to represent the
production SAN as closely as possible. If the SAN becomes a bottleneck, the expected level of storage array performance cannot be reached.
The most commonly overlooked attribute of the SAN in a PoC environment is lack of sufficient connections. Ensure that the SAN used for the
PoC has sufficient throughput to handle the expected workload(s).
The storage array being tested must have sufficient capability to represent the production workload. Running PoCs using unusually small
configurations (e.g., 8 drives) or “as is” equipment (e.g., broken controller) will tell you how unusually small or broken equipment will work. This is
seldom helpful in understanding the real world behavior of your production environment.
Preparing the PoC environment is often the most labor intensive step, but a very important step. When the environment is configured, specify
the servers and LUNs to use for the test in the workload file. An example of this file looks like this:
Driver=driverwin.hp.com Administrator
Vdbench=c:\perf\vdbench\
Lun=\\.\PHYSICALDRIVE1
Lun=\\.\PHYSICALDRIVE2
Lun=\\.\PHYSICALDRIVE3
Lun=\\.\PHYSICALDRIVE4
Lun=\\.\PHYSICALDRIVE5
Lun=\\.\PHYSICALDRIVE6
Lun=\\.\PHYSICALDRIVE7
Lun=\\.\PHYSICALDRIVE8
Driver=driverlinux.hp.com Administrator
Vdbench=/root/vdbench
Lun=/dev/mapper/mpathaa,openflags=directio,size=256g
Lun=/dev/mapper/mpathab,openflags=directio,size=256g
Lun=/dev/mapper/mpathac,openflags=directio,size=256g
Lun=/dev/mapper/mpathad,openflags=directio,size=256g
Lun=/dev/mapper/mpathae,openflags=directio,size=256g
Lun=/dev/mapper/mpathaf,openflags=directio,size=256g
Lun=/dev/mapper/mpathag,openflags=directio,size=256g
Lun=/dev/mapper/mpathah,openflags=directio,size=256g
Drivers can be identified by IP address or network name. You can see from the example that the format of the LUNs varies depending on the OS. The first example is from a Windows driver and the second is a Linux driver using multipathing. Notice that you can add vdbench parameters to the end of the LUN path; in this example, the Linux driver includes the openflags parameter, which is required by vdbench on Linux systems. The HPEPoCApp supports any driver OS supported by vdbench.
Note
Linux device files of the form /dev/dm-n are for internal (Linux) use only; user-friendly names such as /dev/mapper/mpathab are preferred. Use of these preferred user-friendly names, however, has been known to cause errors in vdbench and may also cause errors with the HPEPoCApp. A workaround is to add the size= parameter to the Lun= line in the HPEPoCApp configuration (e.g., Lun=/dev/mapper/mpathab,openflags=directio,size=256g).
Step 2: Define the Workload
The choice of what workload to use is the most important part of the process. If maximum effort is expended completing the other steps to perfection, but the workload choice is not reflective of the production workload, the results will have little value. Think of the process of choosing a car: if a careful analysis is conducted and a model of car is selected that fits the requirements, but during testing the car only drives backwards or parks, little can be learned about the production performance of the car. Best practices for the PoC suggest that the array be loaded up to simulate production workloads in order to derive production performance. Select an I/O workload that is most representative of how the array will be used in production.
The HPEPoCApp is designed to offer flexibility in choosing the workload that best matches a particular site’s production workload. Workloads are defined using a workload definition file. An example of the workload file looks like this:
# Multiple Workload= commands can be used together
# except Fill and Sample which must be run individually
# Each workload will be run separately in the order
# listed
#
# Workload=sample
Workload=RandomRead
# Workload=RandomWrite
# Workload=SequentialRead
# Workload=SequentialWrite
# Workload=RandomMix
# Workload=SequentialMix
# Workload=Fill
Workload=SQLServer
# Workload=OracleDB
# Workload=VDI
# Workload=Sizer16K6040
# Workload=SRData <filename>
In this example, the leading comment character (#) has been removed to select two workloads: RandomRead and SQLServer. When the test is executed, each of these workloads will be executed sequentially.
Fill workload
The Fill and Sample workloads cannot be combined with any other workload or the report will not properly represent the results. If it is necessary
to run a series of workloads without interruption, fill may be mixed with other workloads, but the run-time option -m (mixed workloads) must be
specified. When specifying the fill workload in combination with other workloads in the same test, the proper workloads will be executed against
the storage under test, but the resulting report may not accurately represent the data and in fact the report step may fail.
The reason the fill workload may not be mixed with other workloads is because of the nature of the workload. The fill workload is intended to
populate LUNs in preparation for a test workload. The fill workload is not executed with varying intensities in order to explore the workload
characteristics as is desired of other workloads. All other workloads will be executed in a way that produces a saturation curve. Mixing these
workloads in the same report is not currently supported.
Deduplication and Compression
The performance results of some tests have a dependency on the data pattern being written to the storage. Data compaction technologies such
as deduplication and compression will produce different results and different performance levels as the data pattern sent to the storage changes.
An array that supports deduplication, for example, must write every block to the storage media if every block is unique. When a unique data
pattern such as this is presented to a storage array implementing deduplication, the overhead of deduplication will degrade performance (cost of
deduplication) but since the data is all unique there is no benefit. It is important to send a data pattern that has some level of redundancy for a
valid test of the deduplication function of a storage array.
HPEPoCApp supports 2 features to define a data pattern that is sent to the storage under test. These features are deduplication and
compression.
The configuration file may include the “Deduplication=#” parameter, where the # sign is replaced by a deduplication ratio. Deduplication=2, for example, causes the data written to the storage array during the test to include duplicate data, resulting in a 2:1 deduplication ratio.
The configuration file may also include the “Compression=#” parameter where the # sign is replaced by a compression ratio. Compression=3, for
example, will cause the data written to the storage array during the test to include data that may be compressed resulting in a 3:1 compression
ratio. The maximum compression ratio that can be specified is 25:1. Take care to use a compression ratio that is representative of the expected
production data. This will provide the best PoC results.
When the deduplication= or compression= parameters are not included in the configuration file, the data pattern generated for the test will include all unique data that does not compress. Stated another way, the default is a deduplication ratio of 1:1 (no duplicate data) and a compression ratio of 1:1 (data does not compress).
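For example, a test intended to exercise both data compaction features might include the following two lines in the configuration file; the ratios shown are purely illustrative and should be replaced with values representative of the production data:
Deduplication=2
Compression=3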
Step 3: Execute the Test
Executing the test is where automation can help the most. Tests can take a long time to execute and capturing the results can be tedious
without the help of some good tools. The HPEPoCApp automates the entire process of executing the tests and capturing the results.
Once the drivers have been identified and the workloads chosen, the HPEPoCApp can be enlisted to create and execute the test. The following is an example of the PoC App in action. In this case, the dry run option was specified, which checks the environment and configures the test but does not execute it. This option is also helpful for checking the duration of the test, which you can see is estimated to be about 7 hours.
C:\perf\HPEPoCApp>HPEPoCApp -w HPEPoCAppWorkloads -d
HPEPoCApp Workload Driver - version: 3.2
Friday, June 06, 2015 08:28 AM Pacific Daylight Time
Workload filename: HPEPoCAppWorkloads
***** Dry Run Only *****
Elapsed time set to 300 seconds.
Validating Drivers
Driver: 10.10.1.1
Driver: 10.10.1.1 found.
Driver: 10.10.1.2
Driver: 10.10.1.2 found.
Validating vdbench on all Drivers
vdbench on Driver: 10.10.1.1
vdbench on Driver: 10.10.1.2
Validating luns on all Drivers
luns on Driver: 10.10.1.1
lun: \\.\PHYSICALDRIVE1
lun: \\.\PHYSICALDRIVE2
lun: \\.\PHYSICALDRIVE3
lun: \\.\PHYSICALDRIVE4
lun: \\.\PHYSICALDRIVE5
lun: \\.\PHYSICALDRIVE6
lun: \\.\PHYSICALDRIVE7
lun: \\.\PHYSICALDRIVE8
luns on Driver: 10.10.1.2
lun: \\.\PHYSICALDRIVE1
lun: \\.\PHYSICALDRIVE2
lun: \\.\PHYSICALDRIVE3
lun: \\.\PHYSICALDRIVE4
lun: \\.\PHYSICALDRIVE5
lun: \\.\PHYSICALDRIVE6
lun: \\.\PHYSICALDRIVE7
lun: \\.\PHYSICALDRIVE8
Creating vdbench input file
Processed workload randomread
Processed workload randomwrite
Created vdbench inputfile: HPEPoCAppWorkloads.vdb
Test is expected to run for approximately 420 minutes or 7 hours.
***** Dry Run Specified - not running the test *****
Removing the dryrun option (-d or --dryrun) will allow the test to execute.
Step 4: Results analysis and reporting
The HPEPoCApp consolidates the data and produces a summary report of the results upon completion of the test execution. The results are
produced in a single PDF file for ease of review and sharing. This report is automatically generated at the conclusion of the test and stored in a
file named HPEPoCApp.yyyymmdd-hhmmss.pdf.
Detailed steps
Step 1 details:
• Set up hardware, OS, and SAN
• Configure storage as you would in production following HPE 3PAR best practices
– Recommend 4 LUNs per driver system to provide adequate queuing space
– Recommend using raw devices to remove any file system variables from the test results
• Make a list of LUNs from each driver system. The LUN names will be different depending on the host OS. For example, a Linux multipath
device will take the form /dev/mapper/mpathab while Windows devices will be something like \\.\PhysicalDrive3.
The list will look something like this:
– Driver1 – 10.10.1.1 (Windows):
\\.\PhysicalDrive1
\\.\PhysicalDrive2
\\.\PhysicalDrive3
\\.\PhysicalDrive4
– Driver2 – 10.10.1.2 (Linux):
/dev/mapper/mpathaa
/dev/mapper/mpathab
/dev/mapper/mpathac
/dev/mapper/mpathad
• Download and install vdbench on each driver system.
– The vdbench license does not allow it to be redistributed. Download the software from the Sun (Oracle) website here. You will need an
Oracle Technical Network (OTN) login, which is free.
– Install vdbench on each of the driver systems. The software comes in a zip file that can be unpacked and copied to any location where you
want the software to run.
– Multihost testing.
– If testing will be conducted in a multihost environment (multiple driver systems), each host will need a way to communicate.
Communications are done using a vdbench proprietary implementation of Remote Shell (RSH) for Windows servers and Secure
Shell (SSH) for Linux servers.
– Windows: If you are using Windows drivers in a multihost environment, you must start the vdbench RSH daemon on each driver.
Open a window that will remain open for the duration of the testing and run vdbench with the rsh command line parameter:
“c:\vdbench\vdbench.bat rsh”
– Linux: If you are using Linux drivers in a multihost environment, Secure Shell (SSH) must be configured. Start on the management server where HPEPoCApp will be installed. Create a public/private key pair using the ssh-keygen command. If you will be using root as the vdbench user, log in as root and navigate to the root home directory, then cd to .ssh. Once in the .ssh directory, execute the command ssh-keygen -C "comment". This will build two files: id_rsa and id_rsa.pub.
The contents of the id_rsa.pub file must be installed on each of the driver systems. Login to each driver system as the vdbench user
(e.g., root) and navigate to the .ssh directory. Append the contents of the id_rsa.pub file created earlier to the authorized_keys file in the
.ssh directory of each driver and the management station.
Verify the SSH configuration by logging into each driver system from the management system (e.g., ssh 10.10.1.2); a consolidated command sketch of this key setup appears at the end of these Step 1 details.
Firewall note: vdbench uses TCP port 5570 to communicate when running the test. In some Linux implementations, iptables can block port 5570, which will cause the test to fail with an error indicating “No route to host”. In this case, you must configure the firewall to allow communications over port 5570 in order to execute the test (e.g., iptables -I INPUT -p tcp -m tcp --dport 5570 -j ACCEPT).
• Install the HPEPoCApp
– Choose a working directory and unpack the files
– Verify the HPEPoCApp is working by executing the command hpepocapp -h
• Congratulations! The first, and most labor-intensive, step is complete!
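As referenced in the Linux multihost notes above, the SSH key preparation can be summarized in a short command sketch; the driver address (10.10.1.2) and the use of root as the vdbench user follow the examples in this document, and the exact file locations are assumptions to adapt to the local environment:
(on the management server, as root)
cd /root/.ssh
ssh-keygen -C "hpepocapp"
cat id_rsa.pub >> authorized_keys
ssh 10.10.1.2 'cat >> /root/.ssh/authorized_keys' < id_rsa.pub
ssh 10.10.1.2
The final command verifies that key-based login to the driver succeeds without a password prompt.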
Step 2 details
Step 2 is defining the workload. The HPEPoCApp uses a single file to configure the drivers, LUNs, and workloads. A file called HPEPoCAppWorkloads is provided with the HPEPoCApp package to use as a template. Here is an example of this file; note that any line beginning with a pound sign (#) is a comment line, and blank lines are allowed.
# HPEPoCApp - 3.4 - May 2016
#
# Location of vdbench on this system
vdbench=c:\perf\tools\vdbench\vdbench.bat
#
# List driver network names and LUNs
# Driver=name username password
# where:
# name is a network name (abc.hpe.com) or an ip address (10.10.1.1)
# username is the username that will run the vdbench tool,
# usually Administrator for Windows and root for Linux
#
# Deduplication
# if you are using deduplication on the storage array
# uncomment the following line
# Deduplication=1.9
#
# Compression
# if you are using compression on the storage array
# uncomment the following line
# Compression=1.5
#
# Pattern
# By default a random pattern will be used for all writes
# If this parameter is specified, all writes will use the data
# in the pattern file. The data will be copied as often as needed
# to fill the write buffer
# Pattern=<filename>
#
# Elapsed time per iteration in seconds. Remember
# that many tests have many iterations
# Use the -d or --dryrun option to check the test run
# duration
Elapsed=900
# Workloads: specify the workload to use for the test
# by uncommenting the line with the desired workload name
# sample - a sample workload to test the PoC process
# RandomRead - 100% Random Read workload varying block sizes up to 64k
# RandomWrite - 100% Random Write workload varying block sizes up to 64k
# SequentialRead = 100% Sequential Read workload varying block sizes greater than 64k
# SequentialWrite = 100% Sequential Write workload varying block sizes greater than 64k
# RandomMix = Random workload varying the block size up to 64k and read:write mix
# SequentialMix = Sequential workload varying the block size greater than 64k and read:write mix
# Fill = Special case where all LUNs will be written 1 time. This is sometimes needed to populate the LUNs prior to running a measured test
# SQLServer = workload to simulate SQL Server database workload
# OracleDB = workload to simulate Oracle DB workload
# VDI = workload to simulate Virtual Desktop workload
#
# Multiple Workload= commands can be used together
# Each workload will be run separately in the order listed
#
# Workload=sample
Workload=RandomRead
# Workload=RandomWrite
# Workload=SequentialRead
# Workload=SequentialWrite
# Workload=RandomMix
# Workload=SequentialMix
# Workload=Fill
Workload=SQLServer
# Workload=OracleDB
# Workload=VDI
# Workload=Sizer16K6040
# Workload=SRData <filename>
Deduplication
Deduplication can be specified in the workload file by removing the comment character (#) from the Deduplication line. When deduplication is
specified, the deduplication ratio will be the number specified. For example, if Deduplication=1.9 is specified, data will be written to the array that
should result in a 1.9:1 deduplication ratio.
Each storage array supporting deduplication will deduplicate data using an architecturally defined block size such as 16k for 3PAR. This means
that to ensure a given deduplication ratio, all writes must be done in multiples of this block size. An 8k write, for example, will be handled properly
by the storage array, but the test tool does not know how this data will impact the deduplication ratio. The test tool therefore requires all I/O to
be in multiples of the dedupe block size when deduplication is specified. Technically only writes must follow this rule, but the current tool
implements this restriction for both reads and writes for consistency.
Deduplication will only work with workloads doing I/O in multiples of 16k. Deduplication only makes sense for write workloads. The write
workloads that support deduplication at this time include sample, randomwrite, sequentialwrite, randommix, sequentialmix, fill, SQLServer,
OracleDB, VDI, and SRData.
Compression
The configuration file may also include the “Compression=#” parameter where the # sign is replaced by a compression ratio. Compression=3, for
example, will cause the data written to the storage array during the test to include data that may be compressed resulting in a 3:1 compression
ratio. The maximum compression ratio that can be specified is 25:1. Take care to use a compression ratio that is representative of the expected
production data. This will provide the best PoC results.
When the deduplication= or compression= parameters are not included in the configuration file, the data pattern generated for the test will include all unique data that does not compress. Stated another way, the default is a deduplication ratio of 1:1 (no duplicate data) and a compression ratio of 1:1 (data does not compress).
Pattern
The configuration file may also include the “Pattern=<filename>” parameter. This parameter allows the tester to specify what data will be written
during all write operations. The contents of the pattern file will be copied as many times as needed to fill the write buffer.
Threads
All HPEPoCApp workloads vary the intensity during the test to provide a result that can be graphed in what is commonly called a saturation
curve. The saturation curve shows performance (IOPS or throughput) starting at a low workload intensity and increasing. As the workload
intensity approaches the capabilities of the environment, performance will stop increasing at the same rate and service times will increase
more rapidly.
The default workload intensity option is referred to as “curve” and often provides a nice smooth saturation curve. The curve option is created by
first running a maximum workload intensity test for a period of time, then starting over with a small fraction of that maximum. The maximum
workload intensity might result in performance of 500,000 IOPS, for example. Following this maximum workload intensity, a workload intensity of
10% of this maximum or 50,000 IOPS in this example will be run for a time. The 10% workload intensity will be increased a bit (e.g., 10%, 20%,
30%) each time until the maximum workload intensity is reached again. This will produce a series of data points representing performance of the
environment as the intensity increases that will be plotted in the resulting report.
Specifying the Threads= parameter is another approach to producing a saturation curve. This option will often produce a slightly higher
maximum performance level, but will also often produce a sharper curve. Consider specifying the Threads= parameter if the absolute best
performance is your goal.
The Threads= parameter requires a series of numbers as a parameter like this: (2, 4, 8, 16, 32, 64). Each number, representing the number of
threads per LUN, will be used for a separate run. The resulting report will plot a saturation curve showing performance at each listed thread level.
The number of threads values listed will impact the total run time of the test.
Each thread value can be thought of as the number of outstanding I/O’s the test tool will attempt to keep in the queue for each LUN. Choosing a
list of threads values that produces the most meaningful saturation curve will depend on the environment capabilities. Some arrays may saturate
with only 2 or 4 outstanding I/O’s, but more capable arrays can continue to produce good results with large numbers (>100) of outstanding I/O’s.
If you choose to specify a threads= parameter, some thought will need to be given to selecting a set of values that will produce the desired result. Using a threads=(2,4,6,8,10,12,14,16) parameter when the saturation point of the storage is not reached until a workload intensity level of 64 threads will not produce a meaningful saturation curve. Likewise, using a threads=(80,90,100,110,120) parameter when the saturation point of the storage is less than 32 threads will also not produce meaningful results. Keep in mind that the same threads= parameters should be used when comparing different storage options.
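For example, a workload file line exploring a wide range of queue depths could look like this; the specific values are illustrative and should bracket the expected saturation point of the array under test:
Threads=(2,4,8,16,32,64)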
Step 3 details
When the workload file parameters are complete, the test can be executed using the HPEPoCApp workload tool. The tool accepts the name of
the workload file as input and can be run using the dryrun flag. The dryrun flag (-d or --dryrun) will cause the environment to be checked and the
workload file to be processed, but the test will not be executed. The dryrun flag is also useful for learning the estimated run time before starting
the test.
C:\perf\HPEPoCApp>hpepocapp -h
HPEPoCApp Workload Driver - version: 3.4
Friday, February 19, 2016 09:02 AM Pacific Standard Time
usage: HPE PoC Workload App [-h] -w WORKLOAD [-d] [-v]
##### USAGE #####
optional arguments:
  -h, --help            show this help message and exit
  -w WORKLOAD, --workload WORKLOAD
                        Name of Workload definition file. Cannot be used with --baseline parameter.
  -d, --dryrun          Dry Run only - do not run vdbench test
  -b BASELINE, --baseline BASELINE
                        Baseline measurement data file name. If this option is specified, workload parameters are ignored.
  -t TAG, --tag TAG     Tag to be included in the report output to identify the data
  -v, --verbose         Verbose output
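For example, a dry run of a workload file (the environment is checked and the estimated run time is reported, but no test is executed) can be started as follows, using the same workload file name as in the execution example below:
C:\perf\HPEPoCApp>hpepocapp -w HPEPoCAppWorkloads -d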
Execute the test by specifying the workload file without the dryrun parameter:
C:\perf\HPEPoCApp>hpepocapp -w HPEPoCAppWorkloads
The tool will create a vdbench input file from the workload file. This file is named the same as the specified workload file name with the .vdb file
extension added to the end of the filename. This file may be used outside the HPEPoCApp environment directly with vdbench if desired.
Step 4 details
When the test execution completes, a report will automatically be generated. The report is contained in a single PDF file named
HPEPoCApp.yyyymmdd.hhmmss.pdf. The content of the report will vary depending on the workload. If the workload includes a saturation curve test, the report
will include saturation curves for each workload; if it does not, the report will include the average performance levels of each workload. In both cases a
time series of the busiest workload and a table of the details will be included.
When the test is executed, the data is consolidated into a subdirectory created during the test. This subdirectory will be named
HPEPoCApp.yyyymmdd.hhmmss. A description of the files in this subdirectory can be found in the vdbench documentation. If the
HPEPoCApp report must be rerun without rerunning the test, it can be done with a command like the following:
hpepocapp -b HPEPoCApp.yyyymmdd.hhmmss/flatfile.html
The reporting tool (hpepocapp -b) can be used with any vdbench output file (e.g., flatfile.html).
Workload definitions
Sample
The Sample workload is intended to verify that the environment is set up correctly. A short test of the environment can be run by selecting the Sample
workload with an elapsed parameter of 300 seconds. This allows verification of the configuration before a longer test is run. The Sample
workload performs 100% 16k random reads with a target of 1000 I/O's per second (IOPS).
The Sample workload will run for the number of seconds specified in the elapsed parameter.
The Sample workload must be run as the only workload selected. If Sample is selected in combination with other workloads, the workloads will be
run as intended, but the report may not represent the results properly and may cause the report task to fail.
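A minimal workload-file sketch for this verification run might look like the following; the exact parameter syntax is assumed to match the workload-file format described earlier and should be adjusted as needed:
Workload=Sample
elapsed=300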
RandomRead
The RandomRead workload is one of several standard workloads. This workload is 100% random reads using block sizes of 4k, 8k, and 16k.
RandomRead will use a curve profile or threads profile if specified.
Curve and Threads
This workload defaults to a curve workload profile. A curve profile will start with a maximum I/O run to establish the peak IOPS level for this
environment. Once the maximum is determined, a series of 13 runs will measure performance at varying percentages of the maximum starting at
10% and going to 98%. A test constructed this way enables a saturation curve to be plotted showing how service times and throughput vary as
the intensity increases.
A curve profile and the previously described threads profile are mutually exclusive. The curve profile will generate a nice smooth saturation curve,
but may not reach the absolute maximum performance. The threads profile can reach the absolute maximum performance for the environment,
but the saturation curve may not be smooth. The threads profile also requires the test engineer to determine the proper series of threads values
to test. Using threads values that are too small will not reach the saturation point of the environment, while using values that are too large will show
extreme and unrealistic service times.
The elapsed time of a workload using either a curve or threads profile is multiplied by the number of curve or threads values. The default
curve profile includes 13 values, while the threads profile, when specified, is explicitly listed in the configuration file. A RandomRead workload
uses 3 block sizes, so when tested with an elapsed time of 300 seconds (5 minutes) a single iteration of RandomRead will take 5 minutes
times 3 block sizes, or 15 minutes. When the RandomRead workload includes the default curve profile, this length is multiplied by the
13 curve values and the length of the run increases to 195 minutes, or 3 hours 15 minutes. An estimate of the run time is
provided by the HPEPoCApp when run with the dry run (-d) parameter.
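As a rough rule of thumb derived from this example (the dry run reports the authoritative estimate):
estimated run time ≈ elapsed time × number of block size/read-mix combinations × number of curve or threads values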
RandomWrite
The RandomWrite workload is one of several standard workloads. This workload is 100% random writes using block sizes of 4k, 8k, and 16k.
RandomWrite will use a curve profile or threads profile if specified.
SequentialRead
The SequentialRead workload is one of several standard workloads. This workload is 100% sequential reads using block sizes of 64k, 128k, and
256k. When reading reaches the end of a LUN or volume, it will wrap around and start again at the beginning of the volume. SequentialRead will
use a curve profile or threads profile if specified.
SequentialWrite
The SequentialWrite workload is one of several standard workloads. This workload is 100% sequential writes using block sizes of 64k, 128k, and
256k. When writing reaches the end of a LUN or volume, it will wrap around and start again at the beginning of the volume. SequentialWrite will
use a curve profile or threads profile if specified.
RandomMix
The RandomMix workload performs random I/O while varying the read percentage and block size. All I/O is random with read percentages of
35%, 50%, and 80% combined with block sizes 4k, 8k, and 16k. The RandomMix workload has 9 iterations (3 read percentages * 3 block sizes).
RandomMix will use a curve profile or threads profile if specified.
RandomMix6535
The RandomMix6535 workload is a special case of the RandomMix workload using only a single block size of 16k and a single read
percentage of 65%. This reduces the number of parameter combinations by a factor of 9 relative to RandomMix and can be used to reduce the test
time. RandomMix6535 will use a curve profile or threads profile if specified.
SequentialMix
The SequentialMix workload performs sequential I/O while varying the read percentage and block size. All I/O is sequential with read percentages
of 35%, 50%, and 80% combined with block sizes of 64k, 128k, and 256k. The SequentialMix workload has 9 iterations (3 read percentages * 3
block sizes). SequentialMix will use a curve profile or threads profile if specified.
Fill
The Fill workload is a special case intended to be used to populate an array with data before starting PoC testing. The workload will do 100%
sequential I/O with a block size of 256k until all volumes have been written once or until the elapsed parameter time is reached. If the goal is to
write 100% of all volumes, a value for the elapsed time parameter must be chosen that accounts for the write rate of the environment and the
combined space of all volumes. Choosing an elapsed time value that is too small will result in the test terminating before all volumes can be completely
written. Choosing a sufficiently large elapsed time value will allow all volumes to be written completely, and the test will terminate once the last
volume is written, before the elapsed time is reached. It is recommended to use a very large elapsed time value when using the Fill workload.
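As an illustration with purely hypothetical numbers: if the combined size of all volumes is 20 TB (about 20,000 GB) and the environment sustains roughly 2 GB/s of large-block sequential writes, a complete fill needs at least 20,000 GB / 2 GB/s = 10,000 seconds, so an elapsed value of 20,000 seconds or more would leave a comfortable margin.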
The Fill workload, like the Sample workload, must be run as the only workload selected. If Fill is selected in combination with other workloads, the
workloads will be run as intended, but the report may not represent the results properly and may cause the report task to fail.
SQLServer
The SQLServer workload was created after research into many different SQLServer workloads running in 3PAR environments. The result is an
average SQLServer I/O profile. The workload includes 69% reads and 80% of all I/O is random.
The block size distribution is different for reads and writes. Reads use the following block size distribution: 61% 16k, 3% 32k, 29% 64k, 2% 128k
and 5% 256k. Writes use the following block size distribution: 68% 16k, 3% 32k, 26% 64k, 1% 128k, and 2% 256k.
SQLServer will use a curve profile by default or threads profile if specified.
OracleDB
The OracleDB workload was created after research into many different Oracle workloads running in 3PAR environments. The result is an average
Oracle Database I/O profile. The workload includes 83% reads and 80% of all I/O is random.
The block size distribution is different for reads and writes. Reads use the following block size distribution: 76% 16k, 1% 32k, 1% 64k, 15% 128k,
and 7% 256k. Writes use the following block size distribution: 79% 16k, 5% 32k, 5% 64k, 8% 128k, and 3% 256k.
OracleDB will use a curve profile by default or threads profile if specified.
VDI
The VDI workload was created after research into many different Virtual Desktop Infrastructure workloads running in 3PAR environments. The
result is an average VDI I/O profile. The workload includes 59% reads and 80% of all I/O is random.
The block size distribution is different for reads and writes. Reads use the following block size distribution: 66% 16k, 10% 32k, 13% 64k, 5% 128k,
and 6% 256k. Writes use the following block size distribution: 83% 16k, 3% 32k, 10% 64k, 1% 128k, and 3% 256k.
VDI will use a curve profile by default or threads profile if specified.
Sizer16K6040
The Sizer16K6040 workload was created to facilitate testing against Sizer results. When sizer results for a workload with a 16k block size and
60% reads have been obtained, this workload can be used to compare a test environment with the sizer result.
Sizer16K6040 will use a curve profile by default or a threads profile if specified.
Please note there are many factors incorporated into a sizer result. The inability to reproduce a specific result in a test environment does
not necessarily imply the sizer result is inaccurate. A full discussion of the factors involved is beyond the scope of this work, but may include
server sizing, server configuration (e.g., queue depth), SAN bandwidth, number of paths, number of array host ports, number of array back-end
ports, etc.
SRData <filename>
The SRData workload allows the test designer to provide historical statistics from a running 3PAR array to define the workload. System Reporter
is a 3PAR resource that allows querying a rich set of statistics. The data set that provides the best indication of a host workload comes from the
srstatport command with the -host option, reporting only on ports connected to hosts. This data is provided to the HPEPoCApp tool, which will
analyze it and create a set of parameters representing the average workload.
There are many options to the srstatport command. Consult the HPE 3PAR CLI Administrator's Manual for details. Pay particular
attention to the time period represented in the srstatport output. The parameters -btsecs (begin time seconds) and -etsecs (end time seconds)
specify the time range returned by the command. The HPEPoCApp will analyze the data and produce workload parameters based on averages of
all data in the time period. Make sure the time period covers only the period when the desired workload is active.
Capture the srstatport output into a file and provide the filename as a parameter to the workload command in the configuration file. Here is what the
workload line in the configuration file might look like: Workload=SRData srstathostfilename.txt.
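As a sketch only (verify the exact options against the HPE 3PAR CLI Administrator's Manual), a capture of roughly the last hour of host-port statistics might be taken with a command like:
srstatport -host -btsecs -3600
The output would then be saved as srstathostfilename.txt (the hypothetical filename used above) and referenced on the Workload=SRData line. The negative -btsecs value is assumed to mean "seconds before now"; if the installed CLI version does not accept that form, specify explicit -btsecs and -etsecs times covering the desired window.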
Appendix D: Basic troubleshooting
The HPE 3PAR array uses a modular architecture that delineates the entry and exit of each layer, allowing highly instrumented
performance monitoring and making it easier to plan upgrades and diagnose performance problems.
Figure 102. Troubleshooting layer by layer
• statvlun: This shows service times to each LUN. The values should be close to the same on the host as on the array; if not, this is indicative of
problems in the SAN.
• statport -host: This shows all port bandwidth utilized on the host-facing HBAs. If the load is not evenly distributed, examine your host
connectivity to ensure hosts are widely distributed across physical HBA ports.
• statcmp: This shows cache memory page statistics by controller node or by virtual volume. Pay particular attention to the delayed
acknowledgement counter (DelAck). Note that this is a cumulative counter and is only reset when a node is rebooted. Look to see if it has
incremented during your test; if so, the only remedies are to change the VVs to a higher performing RAID level or to add more drives.
• statvv: This shows back-end activity to the virtual volumes. This command is generally used by support personnel in particular situations.
• statport –disk: This shows the load going through the back end. Look for imbalances across ports. If there are imbalances, first look for an
uneven distribution of disks across nodes; if the physical disk distribution across nodes is even, run Tunesys to redistribute data across the
back end—especially if hardware has been added recently.
• statpd: This shows the amount of data going to each physical disk. It is important to note that disks showing high service times (30 ms or
over) do not necessarily mean performance problems. The HPE 3PAR array works to coalesce writes to physical disks to improve efficiency.
However, if there are also high VLUN service times (statvlun), the lack of physical disks may be the problem.
Stat* vs. Hist* commands
• All of the previous objects have stat* commands (use “help stat” for complete list).
• Stat* commands display average values of all IOs issued between two iterations.
• Because the result is an average, a large number of IOs with a good service time might hide a single anomalously long IO.
• The hist* (short for histogram) commands can be used to display (histogram) buckets of response times and block sizes if required.
• Use “help hist” to see the list of hist* commands.
A special note regarding System Reporter and the CLI
• Customers who have licensed System Reporter can also access the System Reporter data from the command line.
• The SR commands are broken into two major sections that mirror the stat* and hist* commands as noted below:
For example, for reporting VLUN performance data:
• statvlun displays real-time performance information for VLUNs
• histvlun shows the histogram data for the VLUNs (not real-time, limited retention period)
• srstatvlun will show all the statvlun data going back as long as the retention policy allows
• srhistvlun will show VLUN histogram data going back as long as the SR database has retained the information
Common stat options
Some options are common to most stat* commands:
• -ni: display only non-idle objects
• -rw: displays read and write stats separately; output will have 3 lines per object: read (r), write (w), and total (t)
• -iter <X>: only display X iterations; default: loop continuously
• -d <X>: specifies an interval of X seconds between two iterations; default: 2 seconds
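For example, combining these options with the statvlun command described below (a representative invocation; the same options apply to other stat* commands):
statvlun -ni -rw -iter 5 -d 10
This displays only non-idle VLUNs, separates read and write statistics, and reports five iterations at 10-second intervals.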
Common performance commands and how to interpret their output
Statvlun
• Statvlun is the highest level that can be measured and the statistics reported will be the closest to what could be measured on the host
Statvlun shows:
• All host IOs, including cache hits
Statvlun does not show:
• RAID overhead
• IOs caused by internal data copy/movement, such as clones, DO/AO tasks, and more
• IOs caused by disk rebuilds
• IOs caused by VAAI copy offload (XCOPY)
Statvlun read service time:
• Excludes interrupt coalescing time
• Includes statvv read time
• Includes additional time spent dealing with the VLUN
Statvlun write service time:
• Excludes the first interrupt coalescing time
• Includes the time spent between telling the host it’s “OK” to send data and the host actually sending data; because of this, if the host/HBA/link
is busy the statvlun time will increase but the problem will be at the host/SAN level
• Includes the second interrupt coalescing time when the host sends data
• Includes the time spent writing data to cache plus mirroring
• Includes delayed ack time
Useful options:
• -vvsum: displays only 1 line per VV
• -hostsum: displays only 1 line per host
• -v <VV name>: displays only VLUN for specified VV
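For instance, a per-host summary over a few iterations (a representative invocation built from the options above):
statvlun -hostsum -iter 3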
Things to look for:
• High read/write response times
• Higher response times on some paths only
• Using -hostsum: Has the host reached its max read/write bandwidth?
• Single-threaded workloads will have a steady queue depth of 1; consider disabling interrupt coalescing
• Maximum host/HBA/VM queue length reached for a path/host
Statport
• Statport will show the aggregated stats for all devices (disks or hosts) connected on a port
• The totals reported by statport -host are the same as the totals of statvlun
• The totals reported by statport –disk are the same as the totals of statpd
Useful options:
• -host/disk/rcfc/peer: displays only host/disk/rcfc/peer ports
Things to look for:
• Host ports that have a higher response time than others for the same hosts; this might indicate a problem on the fabric
• Host ports that have reached their maximum read/write bandwidth
• Host ports that are busy in terms of bandwidth, as this can increase the response time of IOs for hosts
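For example, host-facing ports only, non-idle, for three iterations (a representative invocation):
statport -host -ni -iter 3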
statvv
• statvv stats represent the IOs done by the array to the VV. They exclude all time spent communicating with the host and all time spent at the
FC/iSCSI level.
statvv includes:
• Cache hits
• IOs caused by the pre-fetching during sequential read IOs; because of this it is possible to have more KB/s at the VV level than at the
VLUN level
• IOs caused by VAAI copy offload (XCOPY)
• IOs caused by cloning operations
• IOs caused by Remote Copy
Things to look for:
• High write response times; might indicate delayed ack
Statcmp
Useful options:
• -v: shows read/write cache hit/miss stats per VV instead of per node
Things to look for:
• Delayed ack on a device type
• High LockBlock (this applies to older versions of the HPE 3PAR operating system)
statcpu
Things to look for:
• CPUs maxed out
• Causes can include remote copy configurations that have low bandwidth (forces the array CPUs into increased wait states)
• Small block size, high IOPS workloads; to remediate add more nodes if possible
• It should be noted that SSDs are CPU intensive. SSDs can simply perform much more work than spinning (mechanical) media, so they will
exhaust CPUs much more quickly, albeit while providing much greater performance.
statpd
Shows physical disk stats:
• Current/Average/Max IOPS
• Current/Average/Max KB/s
• Current/Average service time
• Current/Average IO size
• Queue length
• Current/Average % idle
Statpd will show:
• Backend IOs caused by host IOs
• IOs caused by data movement, such as DO tunes, AO region moves
• IOs caused by clones
• IOs caused by disk rebuild
Statpd will not show:
• IOs caused by chunklet initialization. The only way to see that chunklet initialization is going on is to use “showpd -c”
Useful options:
• -devinfo: displays the type and speed of each disk
• -p –devtype FC/NL/SSD: display only FC/NL/SSD PDs
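For example, SSD physical disks only, including device information, for a single iteration (a representative invocation built from the options above):
statpd -p -devtype SSD -devinfo -iter 1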
Things to look for:
• PDs that have too many IOPS (based on recommended numbers, see “Limits”); usually these PDs will also have a % idle < 20
• PDs of a given type that have significantly more/less IOPS than other PDs of the same type; usually a sign that PDs are incorrectly balanced
• PDs with anomalous response times
Appendix E: Capturing performance data for offsite analysis
1. Connect to the Service Processor by pointing a browser to http://<IP_address_of_SP>
2. Login with the login “spvar” and password “HP3parvar” (SP 2.5.1 MU1 or later) or “3parvar” (SP 2.5.1 or earlier)
3. Select “Support” on the left, then “Performance Analyzer”
4. Click “Select all” and enter the number of iterations to capture
5. For example, to capture 1 hour of data, enter 360 iterations of 10 seconds
6. The default of 60 iterations of 10 seconds will correspond to at least 10 minutes of data
7. Click “Launch Performance Analysis tool”
Figure 103. Performance Analysis collection
Once the performance capture is over, the files will be uploaded automatically to the HPE 3PAR support center and can be downloaded from
STaTS (storefrontremote.com/app/login). If the service processor is not configured to send data automatically, the file can be found in
/files/<3PAR serial number>/perf_analysis.
Appendix F: Tables
Table 1. Customer onsite PoC requirements .................................................................................................................................................................................................................................................................5
Table 2. HPE 3PAR terminology ...............................................................................................................................................................................................................................................................................................7
Table 3. User-definable CPG attributes .......................................................................................................................................................................................................................................................................... 11
Table 4. Provisioning evaluation ............................................................................................................................................................................................................................................................................................ 14
Table 5. Autonomic Groups functional evaluation ................................................................................................................................................................................................................................................ 15
Table 6. Snapshot functionality verification ................................................................................................................................................................................................................................................................ 17
Table 7. Clones and Instant Export ..................................................................................................................................................................................................................................................................................... 19
Table 8. Thin Provisioning verification............................................................................................................................................................................................................................................................................. 20
Table 9. Thin Conversion verification................................................................................................................................................................................................................................................................................ 22
Table 10. Zero Detection verification ................................................................................................................................................................................................................................................................................ 23
Table 11. Thin reclaim verification ...................................................................................................................................................................................................................................................................................... 24
Table 12. Thin Deduplication verification ..................................................................................................................................................................................................................................................................... 31
Table 13. CLI performance monitoring commands.............................................................................................................................................................................................................................................. 35
Table 14. CLI switches for performance monitoring ........................................................................................................................................................................................................................................... 36
Table 15. Real-time performance monitoring results......................................................................................................................................................................................................................................... 36
Table 16. Generating SR reports from the command line ............................................................................................................................................................................................................................. 37
Table 17. Useful CLI switches for generating SR reports ............................................................................................................................................................................................................................... 38
Table 18. Dynamic Optimization verification ............................................................................................................................................................................................................................................................. 40
Table 19. Adaptive Flash Cache size in HPE 3PAR models......................................................................................................................................................................................................................... 45
Table 20. Priority Optimization verification ................................................................................................................................................................................................................................................................. 51
Table 21. Port failure monitoring .......................................................................................................................................................................................................................................................................................... 62
Table 22. Resiliency verification............................................................................................................................................................................................................................................................................................. 67
Table 23. Workload characteristics ..................................................................................................................................................................................................................................................................................... 78
Table 24. Sample workloads...................................................................................................................................................................................................................................................................................................... 79
Appendix G: Figures
Figure 1. HPE 3PAR login dialog ..............................................................................................................................................................................................................................................................................................7
Figure 2. StoreServ host creation ..............................................................................................................................................................................................................................................................................................8
Figure 3. WWN selection ...................................................................................................................................................................................................................................................................................................................8
Figure 4. IQN selection ........................................................................................................................................................................................................................................................................................................................9
Figure 5. VV creation ............................................................................................................................................................................................................................................................................................................................9
Figure 6. VV creation dialog box ........................................................................................................................................................................................................................................................................................... 10
Figure 7. Export dialog box ........................................................................................................................................................................................................................................................................................................ 10
Figure 8. Creating a CPG with the GUI ............................................................................................................................................................................................................................................................................. 12
Figure 9. Demonstrating Autonomic Groups ............................................................................................................................................................................................................................................................. 15
Figure 10. Creating a Snapshot .............................................................................................................................................................................................................................................................................................. 16
Figure 11. Creating a Snapshot (continued) .............................................................................................................................................................................................................................................................. 16
Figure 12. Clone creation dialog box ................................................................................................................................................................................................................................................................................. 18
Figure 13. Create Clone dialog ................................................................................................................................................................................................................................................................................................ 18
Figure 14. Thin Provisioning allocations ........................................................................................................................................................................................................................................................................ 20
Figure 15. Start the Thin Conversion dialog box.................................................................................................................................................................................................................................................... 21
Figure 16. Thin Conversion dialog box............................................................................................................................................................................................................................................................................ 21
Figure 17. Initial space consumption of a TPVV ..................................................................................................................................................................................................................................................... 22
Figure 18. Dummy File Creator dialog.............................................................................................................................................................................................................................................................................. 23
Figure 19. Space allocation after writing zeros......................................................................................................................................................................................................................................................... 23
Figure 20. Creating FULL RANDOM VV with 16 KB allocation................................................................................................................................................................................................................ 25
Figure 21. Format the new drive in Windows with a 16 KB allocation ............................................................................................................................................................................................... 25
Figure 22. Reserved User Space ............................................................................................................................................................................................................................................................................................ 25
Figure 23. Dummy File Creator ............................................................................................................................................................................................................................................................................................... 26
Figure 24. Assign workers for first disk target .......................................................................................................................................................................................................................................................... 26
Figure 25. Assign the second 1/3 of workers to the next disk target ................................................................................................................................................................................................. 27
Figure 26. Assign the last 1/3 of workers to the next disk target........................................................................................................................................................................................................... 27
Figure 27. Exporting a Virtual Volume............................................................................................................................................................................................................................................................................. 28
Figure 28. VMware LUN selection ....................................................................................................................................................................................................................................................................................... 28
Figure 29. VMware space used ............................................................................................................................................................................................................................................................................................... 29
Figure 30. Clone a VM .....................................................................................................................................................................................................................................................................................................................29
Figure 31. Name the VM clone ............................................................................................................................................................................................................................................................................................... 29
Figure 32. Select the VMware target datastore ....................................................................................................................................................................................................................................................... 29
Figure 33. Display space consumed ................................................................................................................................................................................................................................................................................... 29
Figure 34. Repeat the VM clone process ....................................................................................................................................................................................................................................................................... 30
Figure 35. Name the new VM clone ................................................................................................................................................................................................................................................................................... 30
Figure 36. Select the datastore ............................................................................................................................................................................................................................................................................................... 30
Figure 37. Display space consumed ................................................................................................................................................................................................................................................................................... 31
Figure 38. Launching the reporting interface............................................................................................................................................................................................................................................................ 31
Figure 39. Real-time performance report creation ............................................................................................................................................................................................................................................... 33
Figure 40. Real-time performance monitoring in the GUI .............................................................................................................................................................................................................................. 34
Figure 41. Data retention for historical reporting .................................................................................................................................................................................................................................................. 34
Figure 42. Historical reporting in the HPE 3PAR SSMC .................................................................................................................................................................................................................................. 35
Figure 43. Using showsr to display retention periods........................................................................................................................................................................................................................................ 37
Figure 44. Tuning a VV with Dynamic Optimization........................................................................................................................................................................................................................................... 38
Figure 45. DO selection dialog ................................................................................................................................................................................................................................................................................................ 39
Figure 46. Starting System Tuner, whole array rebalancing ........................................................................................................................................................................................................................ 39
Figure 47. Verification of RAID level using the CLI............................................................................................................................................................................................................................................... 40
Figure 48. Verifying DO task completion ....................................................................................................................................................................................................................................................................... 40
Figure 49. Autonomic tiering with HPE 3PAR StoreServ ............................................................................................................................................................................................................................... 41
Figure 50. Create AO configuration .................................................................................................................................................................................................................................................................................... 42
Figure 51. Create AO configuration (continued) .................................................................................................................................................................................................................................................... 43
Figure 52. showcpg............................................................................................................................................................................................................................................................................................................................43
Figure 53. Creating an AO policy with the CLI.......................................................................................................................................................................................................................................................... 44
Figure 54. Iometer configuration ........................................................................................................................................................................................................................................................................................... 44
Figure 55. Access specification ............................................................................................................................................................................................................................................................................................... 45
Figure 56. AFC decrease in latency..................................................................................................................................................................................................................................................................................... 46
Figure 57. AFC IOPS increase................................................................................................................................................................................................................................................................................................... 47
Figure 58. Creating VVs for Priority Optimization testing ............................................................................................................................................................................................................................. 47
Figure 59. Creating VVs for Priority Optimization testing ............................................................................................................................................................................................................................. 48
Figure 60. VV selection dialog for QoS ............................................................................................................................................................................................................................................................................ 48
Figure 61. Configuring QoS ........................................................................................................................................................................................................................................................................................................ 49
Figure 62. Configuring QoS service levels .................................................................................................................................................................................................................................................................... 49
Figure 63. Effect of QoS policies ............................................................................................................................................................................................................................................................................................ 50
Figure 64. QoS policy in the CLI ............................................................................................................................................................................................................................................................................................. 51
Figure 65. HPE 3PAR File Persona storage concepts and terminology ........................................................................................................................................................................................... 52
Figure 66. Adding a storage array to the SSMC ...................................................................................................................................................................................................................................................... 53
Figure 67. Add storage system dialog.............................................................................................................................................................................................................................................................................. 53
Figure 68. Secure certificate acceptance ....................................................................................................................................................................................................................................................................... 53
Figure 69. SSMC dashboard ...................................................................................................................................................................................................................................................................................................... 54
Figure 70. Alternating advanced settings ..................................................................................................................................................................................................................................................................... 54
Figure 71. Turn on File Persona Management ......................................................................................................................................................................................................................................................... 55
Figure 72. Beginning HPE 3PAR File Persona configuration ..................................................................................................................................................................................................................... 55
Figure 73. Configuring HPE 3PAR File Persona ..................................................................................................................................................................................................................................................... 56
Figure 74. Authentication settings ...................................................................................................................................................................................................................................................................................... 56
Figure 75. Create virtual file server ..................................................................................................................................................................................................................................................................................... 57
Figure 76. Virtual file server configuration ................................................................................................................................................................................................................................................................... 57
Figure 77. Create File Store ........................................................................................................................................................................................................................................................................................................ 58
Figure 78. Create File Store (continued) ........................................................................................................................................................................................................................................................................ 58
Figure 79. File Share creation ................................................................................................................................................................................................................................................................................................... 58
Figure 80. Create File Share ....................................................................................................................................................................................................................................................................................................... 59
Figure 81. File Share configuration ..................................................................................................................................................................................................................................................................................... 59
Figure 82. Locating a disk connector ................................................................................................................................................................................................................................................................................ 62
Figure 83. Displaying failover state of back-end ports...................................................................................................................................................................................................................................... 63
Figure 84. Displaying chunklet utilization ..................................................................................................................................................................................................................................................................... 64
Figure 85. Output of showpd –i.............................................................................................................................................................................................................................................................................................. 64
Figure 86. Recommended architecture for Iometer benchmarking ...................................................................................................................................................................................................... 68
Figure 87. Determining Iometer version......................................................................................................................................................................................................................................................................... 69
Figure 88. Final Intel version of Iometer ......................................................................................................................................................................................................................................................................... 69
Figure 89. Iometer GUI controls ............................................................................................................................................................................................................................................................................................. 70
Figure 90. Access specification ............................................................................................................................................................................................................................................................................................... 71
Figure 91. First Iometer specification for preconditioning ............................................................................................................................................................................................................................. 71
Figure 92. 64 KB random write............................................................................................................................................................................................................................................................................................... 72
Figure 93. 4 KB random................................................................................................................................................................................................ 73
Figure 94. Assigning access specifications to managers ................................................................................................................................................................................................................................ 73
Figure 95. Setting test durations ........................................................................................................................................................................................................................................................................................... 74
Figure 96. Recommended architecture for Iometer benchmarking ...................................................................................................................................................................................................... 75
Figure 97. Determining Iometer version......................................................................................................................................................................................................................................................................... 76
Figure 98. Final Intel version of Iometer ......................................................................................................................................................................................................................................................................... 76
Figure 99. Iometer GUI controls ............................................................................................................................................................................................................................................................................................. 77
Figure 100. Iometer Access Specifications definition ......................................................................................................................................................................................................................................... 78
Figure 101. Iometer workload setup options ............................................................................................................................................................................................................................................................. 79
Figure 102. Troubleshooting layer by layer ................................................................................................................................................................................................................................................................ 93
Figure 103. Performance Analysis collection ............................................................................................................................................................................................................................................................. 98
Learn more at hpe.com/storage/3par
© Copyright 2015–2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without
notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an additional warranty.
Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. Microsoft, Windows, and Windows Server are either
registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle and Java are registered
trademarks of Oracle and/or its affiliates. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other
jurisdictions. All other third-party trademark(s) is/are the property of their respective owner(s).
4AA5-6045ENW, October 2016, Rev. 4