Reporting with TPCTOOL
Learn the reporting capabilities of
TPCTOOL
Create customized reports
Evaluate report data
Massimo Mastrorilli
ibm.com/redbooks
Redpaper
International Technical Support Organization
Reporting with TPCTOOL
August 2007
REDP-4230-00
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
First Edition (August 2007)
This edition applies to Version 3 of IBM TotalStorage Productivity Center (product number 5608-VC0).
© Copyright International Business Machines Corporation 2007. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
The team that wrote this Redpaper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Chapter 1. TPCTOOL - what is it and when you should use it . . . . . . . . . . . . . . . . . . . . 1
1.1 TPCTOOL functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 What do you need to install and run TPCTOOL - part 1 of 2 . . . . . . . . . . . . . . . . . . . . . 3
1.3 What do you need to install and run TPCTOOL - part 2 of 2 . . . . . . . . . . . . . . . . . . . . . 4
1.4 Where to install TPCTOOL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 TPCCLI.CONF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 How to run a command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6.1 Single-shot mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6.2 Interactive mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6.3 Multiple / Script Command Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6.4 Output syntax to a file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.7 Basic syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.8 When to use TPCTOOL instead of GUI to get reports . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.9 Reports with CLI interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Chapter 2. TPCTOOL reports and configuration data . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1 TPC Server tasks before creating reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 GUID, WWN - how to get them . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Multiple metrics - Tabular reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4 Multiple metrics - Graphical reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5 Multiple metrics - TPCTOOL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.6 TPCTOOL - Programming technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.7 Commands to start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.8 How to create a graph from a text file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Chapter 3. Rules of thumb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1 The customer wants to know... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2 Response time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4 Performance metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.5 Look at historical data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.6 Performance metric guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.7 RAID level and RPM considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Chapter 4. Quick start for disk performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . 37
4.1 Throughput and response time metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.2 How to evaluate response time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3 Additional metrics related to throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.4 Backend and frontend metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.5 Backend response time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.6 Historical performance charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
© Copyright IBM Corp. 2007. All rights reserved.
iii
4.7 Port Data Rate and Port Response time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Chapter 5. Sample TPCTOOL reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.1 Data rate vs. response time for disk storage subsystem . . . . . . . . . . . . . . . . . . . . . . . 54
5.2 Ports report for a disk storage subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.3 SVC performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.4 Reports for a switch fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Chapter 6. Macro to create charts from TPCTOOL CLI text file . . . . . . . . . . . . . . . . . . 65
6.1 Importing and exporting data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.1.1 TimeStamp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.1.2 Create macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.1.3 Creating a template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.1.4 Creating graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Chapter 7. Metrics per subsystem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
7.1 Metrics for DS4000 storage subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
7.2 Metrics for ESS storage subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
7.3 Metrics for DS8000/DS6000 storage subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
7.4 Metrics for SVC storage subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.5 Metrics for switch fabric. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®
DS4000™
DS6000™
DS8000™
IBM®
Redbooks®
Redbooks (logo) ®
System Storage™
Tivoli®
TotalStorage®
The following terms are trademarks of other companies:
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.
Excel, Microsoft, Visual Basic, Windows, and the Windows logo are trademarks of Microsoft Corporation in
the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
Since the introduction of TotalStorage® Multiple Device Manager, and its subsequent
replacement by versions of TotalStorage Productivity Center, and ever since the stabilization
of the TotalStorage ESS Expert, customers have asked for a way to pull performance metrics
from the TotalStorage Productivity Center database in much the same way they could pull the
metrics from the ESS Expert database. There are always a few leading edge customers who
know what they want to do with performance monitoring and performance management, and
they know how they want to do it.
This IBM® Redpaper gives you an overview of the function of TPCTOOL and shows you how
to use it to generate reports based on your TotalStorage Productivity Center repository data.
The team that wrote this Redpaper
This Redpaper was produced by a specialist from Switzerland working at the International
Technical Support Organization, San Jose Center.
Massimo Mastrorilli is an Advisory IT Storage Specialist in Switzerland. He joined IBM Italy
in 1989 and seven years ago he moved to IBM Switzerland, based in Lugano. He has 16
years of experience in implementing, designing, and supporting storage solutions in S390
and Open Systems environments. His areas of expertise include IBM Tivoli® Storage Manager,
storage area networks (SAN), and storage solutions for Open Systems. He is an IBM Certified
Specialist for TSM, Storage Sales, and Open System Storage™ Solutions. He is a member of
the Tivoli Global Response Team (GRT).
Thanks to Brian Smith for allowing us to convert his TPCTOOL presentation materials into
this Redpaper, and for his continued support of this project.
Become a published author
Join us for a two- to six-week residency program! Help write an IBM Redbooks publication
dealing with specific products or solutions, while getting hands-on experience with
leading-edge technologies. You'll have the opportunity to team with IBM technical
professionals, Business Partners, and Clients.
Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you
will develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this
Redpaper or other Redbooks® in one of the following ways:
򐂰 Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
򐂰 Send your comments in an email to:
redbooks@us.ibm.com
򐂰 Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. TPCTOOL - what is it and when you should use it
This chapter provides an overview of TPCTOOL. It covers:
򐂰 The main function provided by TPCTOOL
򐂰 How to use TPCTOOL
򐂰 Where to find the syntax
򐂰 How to install TPCTOOL
We also give an understanding of why you would use TPCTOOL and when to use it instead
of the GUI.
1.1 TPCTOOL functionality
Figure 1-1 TPCTOOL Functionality
There have been many requests for the ability to produce performance reports from the TPC
database and produce multiple metric graphs as provided by other IBM Storage products.
TPCTOOL is a command line (CLI) based program which interacts with the TPC Device
server (see Figure 1-1). It allows you to create graphs and charts with multiple metrics, with
different unit types, and for multiple entities (for example, Subsystems, Volumes, Controllers,
Arrays). Commands are entered as lines of text (that is, sequences of typed characters)
and output is returned as text. The tool can be used to access Generic Commands, Device
Server, Server Administration Configuration, and Reporting. The example
in Figure 1-1 shows the lsdev command, which lists all the devices discovered by the TPC
server you are querying.
򐂰 The tool provides query, management, and reporting capabilities. You cannot initiate
discoveries, probes, or performance data collection from the tool.
򐂰 With the release of TPC V3.1, the Storage Resource Management Command Processor
(SCRMCP), perfcli (Performance Management), and AgentCLI have been integrated into
TPCTOOL.
򐂰 It is installable anywhere: the CLI code can be deployed on any computer that has access
to the network where the TPC server is located.
򐂰 It connects via TCP/HTTP/SOAP to the Web Service API.
򐂰 It is used for storage provisioning and management.
򐂰 It is a standalone Java™ client.
1.2 What do you need to install and run TPCTOOL - part 1 of 2
Figure 1-2 How to install CLI to use TPCTOOL
Before starting the installation, here are the prerequisites that you should know about or prepare:
1. You will need to have the TPC server code available to install the TPCTOOL CLI.
– Start the TotalStorage Productivity Center installer, select a Custom installation, and
select the CLI as shown in Figure 1-2.
– You need only Disk1 of the TPC code to install the CLI software.
1.3 What do you need to install and run TPCTOOL - part 2 of 2
Figure 1-3 What you need to install - part 2 of 2
1. You need an IP connection to the TPC Device server.
2. You need the IP address and the port (the default is 9550) used by the TPC Device server.
3. You need to know the Host Authentication Password. This is specified during the
installation of the Agent Manager. The default password is changeMe.
4. You may add the user ID and password of a Fabric or Disk administrator (TPC role-based
authority) to the TPCCLI.CONF file to issue these commands. It is not mandatory to create
TPCCLI.CONF; the TPCTOOL CLI works even without a TPCCLI.CONF file configured.
However, if you do not have a TPCCLI.CONF file configured, you have to specify a valid
user ID and password in all your commands.
1.4 Where to install TPCTOOL
Figure 1-4 Install TPCTOOL on your workstation
According to the prerequisites in 1.2, “What do you need to install and run TPCTOOL - part 1
of 2” on page 3, you could install the TPCTOOL CLI on your computer (mobile computer,
laptop) or on your workstation (see Figure 1-4).
Table 1-1 shows the supported platforms for the TPCTOOL CLI.
Table 1-1 Supported platforms

Operating System                     Mode
AIX® 5.3                             32-bit and 64-bit (compatibility mode)
Windows® 2000 Advanced Server        32-bit
Windows 2000 Data Center             32-bit
Windows 2003 Standard Edition        32-bit and 64-bit (compatibility mode)
Windows 2003 Enterprise Edition      32-bit and 64-bit (compatibility mode)
Windows XP                           32-bit
Red Hat Enterprise Linux® AS 3.0     32-bit IA32-xSeries
1.5 TPCCLI.CONF
TPCCLI.conf sample

acmeurl=10.8.66.77:9550
asvc1=10.8.140.146:0000020060C03052+0
asvc2=10.8.140.148:0000020060403200+0
Subsys1:=10.5.3.88:000000200656c9990+0
allsvcvolmetrics = 803,806,809,812,815,818,819,820,821,825,826,827
switchportmetrics=855,856,857,858,859,860,861,862,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883
myid=acmesuperuser
mypwd=acmesuper
asvc1vdisk=getrpt -url acmeurl -user myid -pwd mypwd -ctype 12 -columns 803,806,809,812,815,818,819,820,821,825,826,827,830,831,832,833 -level sample -subsys asvc1 -fs ;
asvc2vdisk=getrpt -url acmeurl -user myid -pwd mypwd -ctype 12 -columns 803,806,809,812,815,818,819,820,821,825,826,827,830,831,832,833 -level sample -subsys asvc2 -fs
Figure 1-5 tpccli.conf file
Configuration files, or config files, are used to configure the initial settings for a program.
TPCTOOL allows you to use a config file to store the settings needed to access your TPC
Device server and to execute commands.
򐂰 A command alias is a textual substitution of a command string with a defined alias name.
򐂰 Parameters can be provided with default values that can be substituted.
򐂰 Commands and parameters can be aliased and stored in a config file on the client
machine or server.
򐂰 The aliased commands are saved in the command configuration file.
The TPCCLI.CONF file (see Figure 1-5) provides a way to define alias commands that can be
executed either in a script or through the interactive CLI interface.
Note: The TPCCLI.CONF file is not provided by default.
The config file should be created and saved in the following directories:
򐂰 Windows:
C:\Program Files\IBM\TPC\cli\libs
򐂰 Unix or Linux:
/<usr or opt>/IBM/TPC/cli/libs
The aliased commands are written in ASCII and are line oriented. The lines are terminated by
a newline. The config file should be created by TPC administrators or SAN administrators.
This config file needs to be maintained on a regular basis, depending on changes in the
environment and the functions that are required.
1.6 How to run a command
Figure 1-6 Command mode
Since the CLI is divided into many different modes (see Figure 1-6), the commands available
to you at any given time depend on the mode you are currently in. Entering a question mark
(?) or help at the CLI prompt allows you to obtain a list of commands available for each
command mode.
1.6.1 Single-shot mode
Use the TPCTOOL CLI single-shot command mode if you want to issue a single occasional
command. You must supply the login information and issue the command that you want to
process at the same time. Perform the following steps to use the single-shot mode:
1. Start a command window (CMD).
2. You must set your path or CHDIR to <install>/cli.
3. From the <install>/cli directory, type your command at the shell prompt.
Syntax:
tpctool lsdev -user tpcadmin -pwd tpcadmin -url 9.43.85.143:9550
1.6.2 Interactive mode
Use the TPCTOOL CLI interactive command mode when you have multiple transactions to
process that cannot be incorporated into a script. The interactive command mode provides a
history function that makes repeating or checking prior command usage easy to do. You may
enter the interactive mode by issuing the TPCTOOL command with no command line options.
Perform the following steps to use the interactive mode:
1. Start a command window (CMD).
2. You must set your path or CHDIR to <install>/cli.
3. From the <install>/cli directory, type tpctool at the shell prompt; you are now within the
interactive session.
4. At the prompt you may enter any valid TPCTOOL CLI command.
Syntax:
shell> tpctool
tpctool> lsdev -user tpcadmin -pwd tpcadmin -url 9.43.85.143:9550
1.6.3 Multiple / Script Command Mode
You can create a file that contains multiple TPCTOOL CLI commands (“TPCCLI.CONF” on
page 6). Login commands can be included in the command file.
Use the TPCTOOL CLI Multiple / Script command mode if you want to issue a sequence of
CLI commands. Administrators can use this mode to create automated processes; for
example, establishing a volume performance report for SVC.
Consider the following when using the TPCTOOL CLI Multiple / Script command mode:
򐂰 The TPCTOOL CLI script can contain only TPCTOOL CLI commands. Use of shell
commands results in a process failure.
򐂰 You can add comments to the scripts. Comments must be prefixed by the number sign
(#); for example, # This script contains a list of metrics available for DS8000™ subsystem
volume performance.
Syntax:
shell> tpctool -script
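A minimal sketch of this mode follows. It assumes that the script file name is passed after the
-script option and uses a hypothetical file named myreport.script; the credentials, server
address, and GUID are the sample values used elsewhere in this paper:

# myreport.script - list devices and the metrics available for DS8000 volume performance
lsdev -user tpcadmin -pwd tpcadmin -url 9.43.85.143:9550 -perf -l
lsmetrics -user tpcadmin -pwd tpcadmin -url 9.43.85.143:9550 -ctype 12 -subsys 2107.75BALB1+0

From the <install>/cli directory, run:

shell> tpctool -script myreport.script > C:\reports\myreport.txt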
1.6.4 Output syntax to a file
For programs that display a lot of text like TPCTOOL, consider redirecting text that is usually
displayed to a file. Displaying a lot of text will slow down execution; scrolling text in a terminal
window on a workstation can cause an I/O bottleneck and use more CPU time. The
command in Example 1-1 shows how to run the tool more efficiently by redirecting output to a
file and then displaying the program output.
Example 1-1 Redirecting output to a file
tpctool lsdev -user ***** -pwd ***** -url localhost:9550 > C:\reports\Output.txt
The file could be imported into Excel® or accessed to create custom reports.
1.7 Basic syntax
Figure 1-7 Help
For a complete description of all commands to use with TPCTOOL, refer to the manual:
򐂰 IBM TotalStorage Productivity Center Command Line Interface Reference, GC32-1777
Type HELP to get the list of all available commands (see Figure 1-7).
TPCTOOL has a help function to assist you with the syntax of each specific command. Enter
-help | -h | -? after a command to get the details for that command.
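For example, to see the detailed syntax for the lsdev command from within the interactive
session (an illustrative use of the help options just described):

tpctool> lsdev -help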
In this document, we show examples for the following commands:
򐂰 lsdev
򐂰 lsmetrics
򐂰 lstype
򐂰 getrpt
1.8 When to use TPCTOOL instead of GUI to get reports
Prior to TPC V3.1.3, the GUI interface methods (Query or Batch) had a limitation in that they
restricted the user to a presentation of a single performance metric per report. With TPC V3.1.3,
you can now select multiple metrics so that you can compare metrics side by side in the same
view. Only metrics of the same unit type can be displayed in a chart.
A performance metric is represented by a single variable, such as the total Input/Output
operations performed per second as seen at the I/O group (Total IOPS). While this is good
information, it is only one value of many that must be understood when analyzing a
performance problem. Typical metrics that are reviewed are IOPS for read and write requests,
read and write data rates, and read and write response times. All of these metrics are needed
at defined data points along the entire flow of data from the client host, through the SAN and
SVC, and finally to the lower storage subsystem. This method can provide valuable data for
performance analysis, because it provides both the details and help on how to interpret the data.
Figure 1-8 shows how to select multiple metrics. If you choose metrics with different unit types,
you get an error as shown in Figure 1-9.
Figure 1-8 Selecting multiple metrics, different unit type.
Figure 1-9 Only metrics with the same unit type can be selected
With TPC V3.1.3 you can select multiple metrics with the same unit type and produce graphs,
as shown in Figure 1-10.
Figure 1-10 Selecting different metrics, but with same unit type
Then, using the GUI, you get a chart like the one in Figure 1-11.
Figure 1-11 Chart with multiple metrics from GUI, same unit type
Important: The chart above can be produced only with TPC V3.1.3 and follow-on
versions.
1.9 Reports with CLI interface
Figure 1-12 Data Rate versus Response Time
The CLI interface offers an attractive alternative. Through either the interactive or the script
interface, the CLI method can collect data from the TPC data repository by start time and
duration, and also for multiple metrics on a single report. You can do this even when selecting
metrics with different unit types.
Figure 1-12 shows two different metrics in the same chart:
򐂰 Total Data Rate
򐂰 Overall Response Time
The definition of the performance output should include best practice efforts by the client and
storage product teams to provide the most appropriate result.
Chapter 2. TPCTOOL reports and configuration data
This chapter describes the information that you can get using the TPCTOOL CLI. It contains
basic information about how to generate reports and obtain configuration data.
Performance report examples are covered in this chapter, as well as prerequisites to be
checked before using TPCTOOL.
2.1 TPC Server tasks before creating reports
(Figure 2-1 shows the TPC data collection jobs: discovery, probes, pings, scans, and
performance monitors, collecting from computers through Data agents, from SAN components
through Fabric agents and SNMP/API, and from storage and tape subsystems through CIMOMs,
all feeding the central TPC database.)
Figure 2-1 Prerequisites for performance report generation
Before creating performance reports using the TPCTOOL CLI you have to successfully
complete these steps on your TPC Server (see Figure 2-1):
򐂰 Successfully discover and configure CIMOMs for the storage subsystems on which you
want to create reports:
– Administrative services → Discovery → CIMOM
򐂰 Successfully probe these storage subsystems:
– TotalStorage Productivity Center → Monitoring → Probes
򐂰 Successfully run Performance Monitor jobs against them:
– Disk Manager → Monitoring → Subsystem Performance Monitors
Similar techniques under Fabric Manager in the TPC Navigation Tree are used to collect
performance data from switches.
2.2 GUID, WWN - how to get them
Figure 2-2 lsdev -perf -l
Most TPCTOOL commands require that you specify the globally unique identifier (GUID) for
storage subsystems or the WWN for switches and fabrics.
Use the lsdev command to get information about storage subsystems, fabrics, and switches.
This information includes the GUID, user-defined name, device type, status, and the time that
the status was updated.
You must have Disk administrator authority to use this command.
The commands in Figure 2-2 help you collect this information to be used during your
commands and scripts. These are the options used:
򐂰 -user
Specifies a valid TotalStorage Productivity Center user ID. The user variable is a valid user
ID.
򐂰 -pwd
Specifies the password for the TotalStorage Productivity Center user ID. The password
variable is the password.
򐂰 -url
Specifies the Device server. The format of the URL variable is system:port_number, where
system is either the host name or IP address, and port_number is a valid port number for
the HTTP service of the Device server. The default port is 9550.
򐂰 -perf
Specifies that the devices for which performance data is collected should be listed. You
must have the applicable authority to view the devices.
򐂰 -l
Specifies that the long version should be listed:
– GUID
– User-defined name
– Device type
– Status
– Time that the status was updated
If you omit this parameter, only the GUID is listed.
Important: Note that all GUIDs and WWNs are case sensitive!
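As an illustration of these options, the following command lists the devices for which
performance data is collected, in long format (the credentials and server address are the
sample values used elsewhere in this paper; the output layout is indicative only):

tpctool lsdev -user tpcadmin -pwd tpcadmin -url 9.43.85.142:9550 -perf -l

Each returned row contains the GUID (for example, 2107.75BALB1+0 for a DS8000), the
user-defined name, the device type, the status, and the time the status was updated. The
GUID is the value that you later pass to commands such as lsmetrics and getrpt with the
-subsys option.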
2.3 Multiple metrics - Tabular reports
Figure 2-3 Tabular reports
Using the TPC GUI, you can get Tabular reports (see Figure 2-3) with:
򐂰 Single time sample
򐂰 Multiple performance metrics
Select:
Disk Manager → Reporting → Storage Subsystem Performance → By Volume
2.4 Multiple metrics - Graphical reports
Figure 2-4 Graphical reports
Graphical reports (see Figure 2-4) give:
򐂰 Multiple Time Samples
򐂰 But only one metric
Select the following path from the TPC Navigation Tree:
Disk Manager → Reporting → Storage Subsystem Performance → By Volume
In the report pane click Generate Report and select additional volumes.
2.5 Multiple metrics - TPCTOOL
Figure 2-5 Multiple metrics - TPCTOOL
There is a requirement to get multiple metrics over multiple intervals, using different unit types,
and, in addition, to analyze the data using SAS or MS Excel.
Using the TPCTOOL CLI you can generate tabular reports with multiple metrics, multiple
samples, and different unit types, including some or all metrics (see Figure 2-5).
Important: With TPC V3.1.3, you can produce charts and graphs with multiple metrics
using the TPC GUI. However, you cannot select metrics with different unit types, as you
can do with TPCTOOL.
To create this chart, we exported the text file produced by the tpctool getrpt command to an
Excel spreadsheet and customized the chart.
Details about how to create this are covered in 2.7, “Commands to start” on page 22.
2.6 TPCTOOL - Programming technique
Figure 2-6 Programming technique
With aliasing, you define a name for the alias, followed by a value that is the name of a
command and any options associated with the command. The password is automatically
encrypted using the same encryption algorithm as the password command before being
stored in the config file. In conjunction with the interactive mode, this enables secure
password encryption (plain text passwords will not appear in a command line). Aliased
commands are saved in the command configuration file. For additional information you should
refer to the book IBM TotalStorage Productivity Center Command Line Interface Reference,
GC32-1777.
Using the output created by TPCTOOL, you can create custom reports, and personalize them
to your needs.
You can schedule TPCTOOL commands using cron jobs (UNIX® platform) or as scheduled
Windows tasks.
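As a minimal sketch of such scheduling (all paths, credentials, and report parameters here are
hypothetical and must be adapted to your installation; the date arithmetic assumes GNU date),
a UNIX crontab entry could run a wrapper script every morning at 06:00:

0 6 * * * /opt/IBM/TPC/cli/daily_tpc_report.sh

#!/bin/sh
# daily_tpc_report.sh - collect yesterday's hourly volume report and save it with a date stamp
cd /opt/IBM/TPC/cli
START=$(date -d "yesterday" +%Y.%m.%d):00:00:00
./tpctool getrpt -user tpcadmin -pwd ***** -url tpcserver:9550 \
  -ctype 12 -subsys 2107.75BALB1+0 -level hourly \
  -start $START -duration 86400 > /reports/ds8000_$(date +%Y%m%d).txt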
You cannot create a Constraint Violations report using TPCTOOL CLI. You must use TPC
GUI for this task following the Navigation Tree path:
Disk Manager → Reporting → Storage Subsystem Performance → Constraint
violations
TPCTOOL can be used for daily checks against your environment and to run your first level
analysis. As soon as you get the initial information, you can drill down using the TPC GUI.
Refer to Chapter 8 in the book IBM TotalStorage Productivity Center: The Next Generation,
SG24-7194, for more information.
2.7 Commands to start
Figure 2-7 Commands to start
Starting with the lsdev command mentioned in 2.2, “GUID, WWN - how to get them” on
page 17, you can run:
򐂰 lstype to list the components known to TPC
򐂰 lsmetrics to list the performance metrics available for a component
򐂰 lscomp to list the components for which performance metrics have actually been collected
(Performance Monitors started)
򐂰 lstime to list times for which performance metrics exist.
For all these commands (except for lstype), you have to specify:
– -user
Specifies a valid TotalStorage Productivity Center user ID. The user variable is a valid
user ID.
– -pwd
Specifies the password for the TotalStorage Productivity Center user ID. The password
variable is the password.
– -url
Specifies the Device server. This is the format of the URL variable:
system:port_number, where system is either the host name or IP address, and
port_number is a valid port number for the HTTP service of the Device server (Default
is 9550).
Following is the syntax used in the example of Figure 2-7:
򐂰 tpctool lstype
򐂰 tpctool lsmetrics -user tpcadmin -pwd tpcadmin -url 9.43.85.142:9550 -ctype 10
-subsys 2105.22513+0
Where:
– -ctype specifies that the output should include only components of the specified type.
See the lstype command for more information about the comp_type variable.
– -subsys specifies the storage subsystem. The subsystem variable is the GUID of the
storage subsystem. You can use the lsdev command to return information, including
the GUID, for all storage subsystems that are discovered by TotalStorage Productivity
Center.
Important: From the output of the lsmetrics command, you get the numeric value for each
metric. This value has to be specified when you create a report using tpctool getrpt, as
described in Chapter 5, “Sample TPCTOOL reports” on page 53.
The example in Figure 2-7 shows metrics for ESS subsystem at Array level (ctype 10):
򐂰 821 - Total Data Rate
򐂰 822 - Read Response time
򐂰 tpctool lscomp -user tpcadmin -pwd tpcadmin -url 9.43.85.142:9550 -ctype
switch_port -level sample -fabric 1000000051E34E895 -start 2006.10.10:09:00:00
-duration 86400
Where:
– -fabric GUID specifies the fabric. The GUID variable is the globally unique identifier.
– -subsys subsystem specifies the storage subsystem. The subsystem variable is the
GUID of the storage subsystem. You can use the lsdev command to return information,
including the GUID, for all storage subsystems that are discovered by TotalStorage
Productivity Center.
– -level sample | hourly | daily specifies the level for which the performance metrics
of the components should be summarized. You can specify a sample summary, an
hourly summary, or a daily summary.
– -ctype comp_type specifies that the output should include only components of the
specified type. See the lstype command for more information about the comp_type
variable.
– -start date/time specifies the date and time to start the sampling period. The date
and time are formatted as yyyy.MM.dd:HH:mm:ss. All time zones are relative to the
Device server. See the lstime command for more information.
– -duration duration_seconds specifies the duration of the sampling period, in seconds.
The duration_seconds variable is an integer.
򐂰 tpctool lstime -user tpcadmin -pwd tpcadmin -url 9.43.85.142:9550 -ctype
switch_port -level sample -fabric 1000000051E34E895
Where:
– -fabric GUID specifies the fabric. The GUID variable is the globally unique identifier.
– -level sample | hourly | daily specifies the level for which the performance metrics
of the components should be summarized. You can specify a sample summary, an
hourly summary, or a daily summary.
– -ctype comp_type specifies that the output should include only components of the
specified type. See the lstype command for more information about the comp_type
variable.
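Putting these commands together, a typical interactive sequence to go from device discovery
to a report looks like the following sketch (credentials, GUIDs, metric columns, and times are
the sample values used elsewhere in this paper; it also assumes that lstime accepts -subsys
for storage subsystems in the same way that lscomp does):

tpctool> lsdev -user tpcadmin -pwd tpcadmin -url 9.43.85.142:9550 -perf -l
tpctool> lstype
tpctool> lsmetrics -user tpcadmin -pwd tpcadmin -url 9.43.85.142:9550 -ctype 12 -subsys 2107.75BALB1+0
tpctool> lstime -user tpcadmin -pwd tpcadmin -url 9.43.85.142:9550 -ctype 12 -level sample -subsys 2107.75BALB1+0
tpctool> getrpt -user tpcadmin -pwd tpcadmin -url 9.43.85.142:9550 -ctype 12 -columns 821,824 -level sample -subsys 2107.75BALB1+0 -start 2006.10.12:03:05:35 -duration 86400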
2.8 How to create a graph from a text file
Figure 2-8 How to create a graph from a text file
You can direct the output of TPCTOOL CLI to a text file using the command:
tpctool lstype > output.txt
Then, you can import this text file into an Excel spreadsheet or similar tool. Refer to
Chapter 6, “Macro to create charts from TPCTOOL CLI text file” on page 65 for detailed
information about this step.
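For example, the following command (a sketch that reuses the sample credentials, GUID,
metric columns, and times from elsewhere in this paper, together with the -fs field-separator
option shown in the tpccli.conf sample in Figure 1-5) writes a semicolon-delimited report that
imports cleanly into a spreadsheet:

tpctool getrpt -user tpcadmin -pwd tpcadmin -url 9.43.85.142:9550 -ctype 12 -columns 803,806,809,822,823,824 -level sample -subsys 2107.75BALB1+0 -start 2006.10.12:03:05:35 -duration 172800 -fs ";" > ds8000_quickstart.txt

When importing the file, specify the semicolon as the column delimiter.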
Chapter 3. Rules of thumb
This chapter provides general rules of thumb for interpreting the performance data that you can
report on with TPCTOOL. It covers typical throughput and response time values, the
assumptions behind them, and RAID level and RPM considerations.
3.1 The customer wants to know...
Figure 3-1 Rules of thumb
Customers want to know typical values for their performance metrics – rules of thumb or
best practices. It is truly difficult to provide a simple answer for this question. The throughput
for storage volumes can range from fairly small numbers (1 to 10 I/O per second) to very
large values (more than 1000 I/O per second). This depends on the nature of the application.
Note: When the I/O rates (throughput) approach 1000 I/O per second per volume, it is
because the volume is getting good performance, usually from good cache behavior.
3.2 Response time
Figure 3-2 Response time
We often assume (and our performance models assume) that 10 milliseconds (ms) is fairly
high. But for a particular application, 10 ms may be too low or too high. Many On-Line
Transaction Processing (OLTP) environments require response times closer to 5 ms, while
batch applications with large sequential transfers may be fine with 20 ms response time. The
appropriate value may also change between shifts or on the weekend. A response time of
5 ms may be required from 8 until 5, while 50 ms is perfectly acceptable near midnight. It is all
customer and application dependent.
The value of 10 ms is somewhat arbitrary, but related to the nominal service time of current
generation disk products. In general terms, the service time of a disk is composed of a seek,
a latency, and a data transfer. Nominal seek times these days can range from 4 to 8 ms,
though in practice, many workloads do better than nominal. It is not uncommon for
applications to experience from 1/3 to 1/2 the nominal seek time. Latency is assumed to be
1/2 the rotation time for the disk, and transfer time for typical applications is less than a
millisecond.
Note: So it is not unreasonable to expect 5-7 ms service time for a simple disk access.
Under ordinary queuing assumptions, a disk operating at 50% utilization would have a wait
time roughly equal to the service time. So 10-14 ms response time for a disk is not
unusual, and represents a reasonable goal for many applications.
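To illustrate the note with one common single-server queuing approximation (a rough model,
not an exact formula for any particular subsystem): response time ≈ service time / (1 - utilization).
With a 6 ms service time at 50% utilization, that gives 6 / (1 - 0.5) = 12 ms, that is, about 6 ms
of wait plus 6 ms of service, which matches the 10-14 ms range above.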
For cached storage subsystems, we certainly expect to do as well or better than uncached
disks, though that may be harder than you think. If there are a lot of cache hits, the subsystem
response time might be well below 5 ms, but poor read hit ratios and busy disk arrays behind
the cache will drive the average response time number up. A high cache hit ratio allows us to
run the backend storage ranks at higher utilizations than we might otherwise be satisfied with.
Rather than 50% utilization of disks, we might push the disks in the ranks to 70% utilization,
which would produce high rank response times, which are averaged with the cache hits to
produce acceptable average response times. Conversely, poor cache hit ratios require pretty
good response times from the backend disk ranks in order to produce an acceptable overall
average response time.
3.3 Assumptions
Figure 3-3 Assumptions
To make a long story short, front end response times probably need to be in the 5-15 ms
range. The rank (backend) response times can usually operate in the 20-25 ms range unless
the hit ratio is really poor. Backend write response times can be even higher, generally up to
80 ms (see Figure 3-3).
There are applications (typically batch applications) for which response time is not the
appropriate performance metric. In these cases, it is often the throughput in megabytes per
second that is most important, and maximizing this metric will drive response times much
higher than 30 ms. But this is a quick start document, and will not deal extensively with such
batch workloads.
3.4 Performance metrics
Figure 3-4 Performance metrics
The most important metrics are throughput and response time metrics. These reports are
available for different storage components. Figure 3-4 shows the different component levels for
which you can produce reports.
Not all subsystems provide the same level of detail, but that will gradually change over time,
as SMI-S standards evolve. In addition to throughput graphs, you may also produce graphs
(or tabular data) for any of the metrics listed in Chapter 7, “Metrics per subsystem” on
page 85. For example, if the Write Response Time becomes high, you might want to look at
the NVS Full metric for various components, such as the Volume or Disk Array.
The Read, Write, and Overall Transfer Sizes are useful for understanding throughput and
response times.
3.5 Look at historical data
Figure 3-5 Look at historical data
For most components, whether box, cluster, array or port, there will be expected limits to
many of the performance metrics. But there are few rules of thumb, because it depends so
much on the nature of the workload. Online Transaction Processing (OLTP) is so different
from Backup (for instance, TSM Backup) that the expectations cannot be similar. OLTP is
characterized by small transfers, and consequently data rates may be lower than the
capability of the array or box hosting the data. TSM Backup uses large transfer sizes, so the
I/O rates may seem low, yet the data rates test the limits of individual arrays (RAID ranks).
Each box has different performance characteristics, from ESS F20, ESS 800 (including
Turbo), SVC, DS4000™, and DS6000™, to DS8000 models (Figure 3-5); each box will have
different expectations for each component.
at current (and historical) data for the configuration and workloads that are not getting
complaints from their users. From this performance base, you can do trending, and in the
event of performance complaints look for the changes in workload that can cause them.
3.6 Performance metric guidelines
Figure 3-6 Performance metric guidelines
Keeping in mind the considerations made in 3.5, “Look at historical data” on page 31,
here are some metrics and limits that usually make sense. At least, these will provide a
starting comparison, to see how your particular environment compares to these numbers,
and then to understand why (Figure 3-6).
򐂰 Small block reads (4-8KB/Op) should have average response times in the 2 ms to 15 ms
range. The low end of the range comes from very good Read Hit Ratio, while the high end
of the range may represent either lower hit ratio or higher I/O rates. Average response
times can also vary from time interval to time interval. It is not uncommon to see some
intervals with higher response times.
򐂰 Small block writes should have response times near 1 ms. These should all be writes to
cache and NVS and be very fast, unless the write rate exceeds the NVS and rank
capabilities. Performance metrics for these considerations will be discussed later.
򐂰 Large reads (32 KB or greater) and large writes often signify batch workloads or highly
sequential access patterns. These environments often prefer high throughput to low
response times, so there is no guideline for these I/O characteristics. Batch and overnight
workloads can tolerate very high response times without indicating problems.
򐂰 Read Hit Percentages can vary from near 0% to near 100%. Anything below 50% is
considered low, but many database applications show hit ratios below 30%. For very low
hit ratios, you need many ranks providing good backend response time. It is difficult to
predict whether more cache will improve the hit ratio for a particular application. Hit ratios
are more dependent on the application design and amount of data than on the size of
cache (especially for Open System workloads). But larger caches are always better than
smaller ones. For high hit ratios, the backend ranks can be driven a little harder, to higher
utilizations.
򐂰 For Random Read I/O, the backend rank (disk) read response times should seldom
exceed 25 ms, unless the read hit ratio is near 99%. Backend Write Response Times will
be higher because of RAID 5 (or RAID 10) algorithms, but should seldom exceed 80 ms.
There will be some time intervals when response times exceed these guidelines.
3.7 RAID level and RPM considerations
Figure 3-7 RAID and RPM
RAID ranks have I/O per second limitations that depend on the type of RAID (RAID5 versus
RAID10) and the number of disks in the rank. Because of the different RAID algorithms, it is
not easy to know how many I/Os are actually going to the backend RAID ranks.
For many RAID 5 subsystems, a worst case scenario can be approximated by using the
backend read rate plus four times the backend write rate (R + 4 * W) where R and W are the
backend read rate and backend write rate. Sequential writes can behave considerably better
than worst case.
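For example (hypothetical numbers), a rank showing a backend read rate of 300 ops/sec and a
backend write rate of 200 ops/sec would have a worst-case estimate of 300 + 4 * 200 = 1100
backend ops/sec, which by the guideline below should be considered a very busy rank.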
Use care when trying to estimate the number of backend Ops to a RAID rank. The
performance metrics seldom report this number precisely. You have to use the number of
backend read and write operations to deduce an approximate backend Ops/sec number.
The rank I/O limit depends on many factors, chief among them are the number of disks in the
rank and the speed (RPM) of the disks.
Note: When the number of I/O per second to a rank is near or above 1000, the rank should
be considered very busy!
For 15K RPM disks, the limit is a bit higher.
But these high I/O rates to the backend ranks are not consistent with good performance; they
imply the backend ranks are operating at very high utilizations, indicative of considerable
queuing delays. Good capacity planning demands a solution that reduces the load on such
busy ranks.
Let us consider the upper limit of performance for 10K and 15K RPM, enterprise class
devices. Be aware that different people have different opinions about these limits, but rest
assured that all these numbers (see Figure 3-7) represent very busy DDMs.
DDM Speed      Max Ops/sec      6+P Ops/sec      7+P Ops/sec
10K RPM        150-175          1050-1225        1200-1400
15K RPM        200-225          1400-1575        1600-1800
While disks may achieve these throughputs, they imply a lot of queuing delay and high
response times. These ranges probably represent acceptable performance only for batch
oriented applications, where throughput is the paramount performance metric.
For Online Transaction Processing (OLTP) applications, these throughputs may already have
unacceptably high response times. Because 15K RPM DDMs are most commonly used in
OLTP environments where response time is at a premium, a simple rule of thumb is this: if the
rank is doing more than 1000 Ops/sec, it is very busy, no matter what the RPM. If available, it
is the average frontend response time that really matters.
In addition to these enterprise class drives, near-line drives of high capacity and somewhat
lower performance capabilities are now becoming options in mixtures with higher performing,
enterprise class drives. These are definitely considered lower performance, capacity oriented
drives, and have their own limits (see Table 3-1).
Table 3-1 DDM speed and operations

DDM Speed      Max Ops/sec      6+P Ops/sec      7+P Ops/sec
7.2K RPM       85-110           595-770          680-880
These drive types should have limited exposure to enterprise class workloads, and the
guidelines may be subject to substantial revision based on field experience.
These rules of thumb or guidelines, in conjunction with knowledge of workload growth and
change can be used to plan for new hardware. The discipline of capacity planning goes
together with monitoring workloads and their performance.
Workload characterization is just as important as performance monitoring. It is through
knowledge of particular workloads and application requirements that you can focus these
general guidelines into customer specific configurations.
Chapter 4. Quick start for disk performance monitoring
In this chapter we look at the disk performance metrics and available reports with TPCTOOL.
4.1 Throughput and response time metrics
Figure 4-1 Throughput metrics and response time
TotalStorage Productivity Center V3 offers a large number of disk performance report options,
and each report offers useful information about the storage performance. For a quick start, let
us focus on throughput and response time.
To find the performance metrics available in the TPC V3 Disk Manager, refer to 2.7, “Commands
to start” on page 22, and use the lstype and lsmetrics commands. You can have different levels
of reports, starting from the subsystem, then controller and volumes, down to device adapter and more.
There are several read and write throughput metrics available for selection and inclusion in a
report. Chief among these are:
򐂰 Total I/O Rate (overall) – includes random and sequential, read and write
򐂰 Read I/O Rate (overall) – includes random and sequential
򐂰 Write I/O Rate (overall) – includes random and sequential
The corresponding response times are:
򐂰 Overall Response Time – average of reads and writes, including cache hits and misses
򐂰 Read Response Time – includes cache hits and misses
򐂰 Write Response Time
Tip: It pays to keep historical records (and graphs) of these values over the long term.
You can increase the retention period using the TPC GUI. Moreover, you can periodically create
text files with the TPCTOOL command line and archive them using TSM or a similar product.
4.2 How to evaluate response time
Figure 4-2 DS8000 performance - quick start
It could be useful to track any growth or change in the rates and response times. It frequently
happens that I/O rate grows over time and that response time increases as the I/O rates
increase. As I/O rates increase, and as response times increase, you can use these trends to
project when additional storage performance (as well as capacity) will be required, or
alternative application designs or data layouts.
It usually turns out that throughput and response time change drastically from hour to hour,
day to day, and week to week. This is usually a result of different workloads between first or
third shift production, or business cycles like monthend processing versus normal
production.There will be periods when the values lie outside the expected range of values and
the reasons will not be clear. Then the other performance metrics may be used to try to
understand what is happening.
The chart in Figure 4-2 is just an example to compare the response time and the throughput
for two different volumes.
In this case, you do not see many differences during this period between Response Time and
Throughput. In other cases, you may need to split this chart in multiple charts with two or
three metrics maximum. It is important to monitor the throughput and response time patterns
and investigate when the numbers deviate from expected patterns.
The command to create that chart is shown in Figure 4-3.
C:\Program Files\IBM\TPC\cli> tpctool getrpt -user tpcadmin -pwd tpcadmin -url
9.43.85.142:9550 -columns 803,806,809,822,823,824 -level sample -subsys
2107.75BALB1+0 -ctype 12 -start 2006.10.12:03:05:35 -duration 172800 >
ds8000_quickstart.txt
Figure 4-3 Command to create a chart about throughput and response time
The getrpt command lists a performance report for a specified storage subsystem. You must
have fabric operator or disk operator authority to use this command. The parameters used in
this example are:
򐂰 -columns 803,806,809,822,823,824 specifies what columns will appear in the report. The
columns are obtained from the lscounters and lsmetrics commands (see Table 4-1).
Table 4-1 TPCTOOL metrics

Metric                     Value
Total Data Rate            821
Read Response Time         822
Write Response Time        823
Overall Response Time      824
Read Transfer Size         825
򐂰 -subsys 2107.75BALB1+0 specifies the storage subsystem. The subsystem variable is the
GUID of the storage subsystem. You can use the lsdev command to return information,
including the GUID, for all storage subsystems that are discovered by TotalStorage
Productivity Center.
򐂰 -level sample | hourly | daily specifies the level for which the performance metrics of
the components should be summarized. You can specify a sample summary, an hourly
summary, or a daily summary.
򐂰 -ctype 12 (Volume or VDisk) specifies that the output should include only components of
the specified type. See the lstype command for more information about the comp_type
variable.
4.3 Additional metrics related to throughput
Figure 4-4 Additional metrics
This is a quick start document, so we cannot cover all the possible cases. But here is a short
list of additional metrics (see Figure 4-4) that can be used to make sense of throughput and
response time and proceed further:
򐂰 Total Cache Hit percentage - Percentage of cache hits for both sequential and
non-sequential read and write operations, for a particular component over a time interval.
򐂰 Read Cache Hit Percentage - Percentage of cache hits for both sequential and
non-sequential read operations, for a particular component over a time interval.
򐂰 NVS Full Percentage - Percentage of time that NVS space constraints caused I/O
operations to be delayed, for a particular component over a time interval. (The ratio of
delayed operations to total I/Os.)
򐂰 Important: With TPC V3.1.3, this metric name has changed, as has the name of the NVS
delayed I/O rate metric. See Table 4-2.
Table 4-2 NVS metrics
Old metric name (up to V3.1.2)     New metric name (V3.1.3 or later)
NVS full percentage                Write-cache Delay Percentage - 832
NVS delayed I/O rate               Write-cache Delay I/O Rate - 833
򐂰 Read Transfer Size (KB/Op) - Average number of KB per I/O for read operations, for a
particular component over a time interval.
򐂰 Write Transfer Size (KB/Op) - Average number of KB per I/O for write operations, for a
particular component over a time interval.
Total Cache Hit percentage is the percentage of reads and writes that are handled by the
cache without needing immediate access to the backend disk arrays. Read Cache Hit
percentage focuses on Reads, since Writes are almost always recorded as cache hits. NVS
refers to non-volatile storage for writes. If the NVS is full, a write may be delayed while some
changed data is destaged to the disk arrays to make room for the new write data in NVS. The
Read and Write Transfer Sizes are the average number of bytes transferred per I/O
operation. There are many more metrics available via TPC, but these are important ones for
understanding throughput and response time.
Table 4-3 shows the metrics you could use for -ctype 12 at volume level, for a DSxxxx
subsystem. For a complete list of all metrics available, per subsystem, refer to Chapter 7,
“Metrics per subsystem” on page 85.
Table 4-3 TPCTOOL metrics
Metric                                   Value
==============================================
Total Data Rate                          821
Read Response Time                       822
Write Response Time                      823
Overall Response Time                    824
Read Transfer Size                       825
Write Transfer Size                      826
Overall Transfer Size                    827
Record Mode Read I/O Rate                828
Record Mode Read Cache Hit Percentage    829
Disk to Cache Transfer Rate              830
Cache to Disk Transfer Rate              831
Write-cache Delay Percentage             832
Write-cache Delay I/O Rate               833
Read I/O Rate (overall)                  803
Write I/O Rate (normal)                  804
Write I/O Rate (sequential)              805
Write I/O Rate (overall)                 806
Total I/O Rate (normal)                  807
Total I/O Rate (sequential)              808
Total I/O Rate (overall)                 809
Read Cache Hit Percentage (normal)       810
Read Cache Hits Percentage (sequential)  811
Read Cache Hits Percentage (overall)     812
Write Cache Hits Percentage (normal)     813
Write Cache Hits Percentage (sequential) 814
Write Cache Hits Percentage (overall)    815
Total Cache Hits Percentage (normal)     816
Total Cache Hits Percentage (sequential) 817
Total Cache Hits Percentage (overall)    818
Read Data Rate                           819
Write Data Rate                          820
Read I/O Rate (normal)                   801
Read I/O Rate (sequential)               802
If you need to further investigate your disk performance, you may create additional reports
using a command as shown in Figure 4-5.
C:\Program Files\IBM\TPC\cli> tpctool getrpt -user tpcadmin -pwd tpcadmin -url
9.43.85.142:9550 -columns 818,810,832,825,826 -level sample -subsys
2107.75BALB1+0 -ctype 12 -start 2006.10.12:03:05:35 -duration 172800 >
ds8000_quickstart.txt
Figure 4-5 Commands to create report about cache and transfer size
4.4 Backend and frontend metrics
Figure 4-6 Backend and frontend
Throughput is measured and reported in several different ways.
There is throughput for your entire storage subsystem, for each cluster or controller, or for
each volume (or LUN). You can measure throughput at the Fibre Channel interfaces (port
level) or at the RAID rank level, after cache hits have been filtered out.
Frontend I/O metrics (see Figure 4-6) are an average of all traffic between the servers and
the storage box, and account for relatively fast hits in the cache as well as occasional cache
misses that go all the way to the RAID ranks on the back end.
Most storage boxes provide metrics for both kinds of I/O operations, frontend and backend.
4.5 Backend response time
Figure 4-7 DS8000 quick start backend metrics
Backend Response time is the time to do staging or destaging between cache and disk
arrays.
The chart in Figure 4-7 can be created with the command in Figure 4-8.
C:\Program Files\IBM\TPC\cli> tpctool getrpt -user tpcadmin -pwd tpcadmin -url
9.43.85.142:9550 -columns 809,837,843 -level sample -subsys 2107.75BALB1+0
-ctype 9 -start 2006.10.16:20:20:00 -duration 86400 >
ds8000_quickstart_backend.txt
Figure 4-8 Command to create backend metrics reports
Figure 4-9 shows the backend metrics available for a device adapter. You may get backend
reports at different levels.
C:\Program Files\IBM\TPC\cli> tpctool lsmetrics -ctype 10 -user tpcadmin -pwd
tpcadmin -url 9.43.85.142:9550 -subsys 2105.22513+0
Metric                                   Value
==============================================
Total Data Rate                          821
Read Response Time                       822
Write Response Time                      823
Overall Response Time                    824
Read Transfer Size                       825
Write Transfer Size                      826
Overall Transfer Size                    827
Record Mode Read I/O Rate                828
Record Mode Read Cache Hit Percentage    829
Disk to Cache Transfer Rate              830
Cache to Disk Transfer Rate              831
Write-cache Delay Percentage             832
Write-cache Delay I/O Rate               833
Backend Read I/O Rate                    835
Backend Write I/O Rate                   836
Total Backend I/O Rate                   837
Backend Read Data Rate                   838
Backend Write Data Rate                  839
Total Backend Data Rate                  840
Backend Read Response Time               841
Backend Write Response Time              842
Overall Backend Response Time            843
Backend Read Transfer Size               847
Backend Write Transfer Size              848
Overall Backend Transfer Size            849
Read I/O Rate (overall)                  803
Write I/O Rate (normal)                  804
Write I/O Rate (sequential)              805
Write I/O Rate (overall)                 806
Total I/O Rate (normal)                  807
Total I/O Rate (sequential)              808
Total I/O Rate (overall)                 809
Read Cache Hit Percentage (normal)       810
Read Cache Hits Percentage (sequential)  811
Read Cache Hits Percentage (overall)     812
Write Cache Hits Percentage (normal)     813
Write Cache Hits Percentage (sequential) 814
Write Cache Hits Percentage (overall)    815
Total Cache Hits Percentage (normal)     816
Total Cache Hits Percentage (sequential) 817
Total Cache Hits Percentage (overall)    818
Read Data Rate                           819
Write Data Rate                          820
Read I/O Rate (normal)                   801
Read I/O Rate (sequential)               802
Disk Utilization Percentage              850
Sequential I/O Percentage                851
Figure 4-9 Backend metrics
In addition, it can be useful to set some thresholds, such as:
򐂰 TOTAL I/O rate threshold - Sets threshold on the average number of I/O operations per
second for read and write operations, for the subsystem controllers (clusters) or I/O
Groups. The Total I/O Rate metric for each controller or I/O Group is checked against the
threshold boundaries for each collection interval. These thresholds are disabled by
default.
򐂰 TOTAL Backend I/O rate threshold - Sets thresholds on the average number of I/O
operations per second for MDisk read and write operations, for the MDisk Groups. Or, this
is the average number of I/O operations per second for read and write operations for
physical volumes. The Total I/O Rate metric for each MDisk Group is checked against the
threshold boundaries for each collection interval. This threshold is disabled by default.
򐂰 Overall Backend Response time threshold (this is the most important one) - Average
number of milliseconds that it took to service each I/O operation (read and write), for a
particular component over a time interval. For SVC, this is the external response time of
the MDisks.
Backend I/O rate is the rate of I/O between the cache and the disk RAID ranks in the backend
of the storage. This typically includes disk reads from the array to cache caused by a read
miss in the cache. The disk write activity from cache to the disk array is normally an
asynchronous operation that moves changed data from cache to disk, freeing up space in the NVS.
The Backend Response time gets averaged together with response time for cache hits, to
give the Overall Response time mentioned earlier.
Important: It must always be clear whether you are looking at the throughput and response
time at the frontend (very close to system-level response time as measured from a server
at operating system level) or the throughput and response time at the backend (just between
cache and disk).
4.6 Historical performance charts
Figure 4-10 DS8000 quick start - Total I/O rate
This throughput chart (see Figure 4-10) summarizes the throughput by hour for a particular
day. This data can easily be exported in various formats for further analysis or for archiving.
You can export a CSV file (or many other formats) from the GUI, or you can directly store the
text file created by the TPCTOOL CLI, using a command similar to the one in Figure 4-11.
C:\Program Files\IBM\TPC\cli> tpctool getrpt -user tpcadmin -pwd tpcadmin -url
9.43.85.142:9550 -columns 809,837,843 -level sample -subsys 2107.75BALB1+0
-ctype 10 -start 2006.10.16:20:20:00 -duration 86400 >
ds8000_quickstart_backend.txt
Figure 4-11 Command to create report at Array/DS Rank level
The recommendation is to build up a historical archive of performance data for the various
subsystems, critical volumes, disk arrays, and other storage subsystem resources.
When there are throughput or response time anomalies, we suggest that you look at
performance reports for other metrics and other resources, such as Percentage of NVS Full,
or the performance of individual RAID ranks, or particular volumes in critical applications as
shown in Figure 4-12.
Figure 4-12 DS8000 quick start Backend Response time
The Response time chart (see Figure 4-12) is still related to the same DS Rank as in the
example above (see Figure 4-10). The Backend Response time (and throughput) are
available for all models of ESS, DS6000, DS8000 and SVC. See the command used in
Figure 4-13.
The chart shows the milliseconds per operation, or response time, for backend reads. Read
misses in the cache are serviced by the backend RAID ranks and, for this rank, an average
backend read response time of 12 to 20 milliseconds can be considered perfectly normal.
Actual rules of thumb for response time depend strongly on workload, time of day, and
other factors. There is no “cookbook value” that works for every application and storage
configuration.
C:\Program Files\IBM\TPC\cli> tpctool getrpt -user tpcadmin -pwd tpcadmin -url
9.43.85.142:9550 -columns 841 -level sample -subsys 2107.75BALB1+0 -ctype 10
-start 2006.10.16:20:20:00 -duration 86400 > ds8000_quickstart_backend_read.txt
Figure 4-13 command used to create Backend Response time report
The key is to monitor normal operations, develop an understanding of expected behavior and
then track the behavior for either performance anomalies or simple growth in the workload.
This historical performance information is the best source of data for capacity planning, too.
Retention of the performance data in the TPC database is controlled by a policy setting. After
some period of time, the data is rolled up into one hour summaries. Eventually, the data is
aged out of the database. So it is important to develop a set of graphs and reports to
summarize and visualize the data, and to also keep the graphs and reports in some sort of
historical archive. You do not need to keep every day of every month, but you do need to keep
periodic snapshots of performance. In the event of performance questions, the frequency of
the data samples should be increased.
The number of days, weeks, and months that performance data is retained can be modified
by selecting Administrative Services → Configuration → Resource History Retention. In the
screen capture below, the options selected are rather large because this is a demo system.
Remember that retaining more data will increase the size of your TPC database.
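If you want to automate this kind of archiving from the same Excel workbook you use for analysis, the following macro is a minimal sketch of one way to do it. It reuses the getrpt parameters from Figure 4-11; running tpctool through the Visual Basic Shell function and the archive folder C:\perfarchive are our own assumptions, not part of TPCTOOL, and the folder must already exist.
Sub ArchiveTpctoolSample()
    ' Minimal sketch: run a getrpt extract and store it under a dated file name
    ' so that a historical archive builds up over time.
    ' The user, password, URL, GUID, and columns are the example values from
    ' Figure 4-11; the archive folder is an assumption and must already exist.
    Dim outFile As String
    Dim startArg As String
    Dim cmd As String
    outFile = "C:\perfarchive\ds8000_" & Format(Now, "yyyymmdd_hhnn") & ".txt"
    startArg = Format(Date - 1, "yyyy.mm.dd") & ":00:00:00"   ' yesterday, midnight
    cmd = "cmd /c C:\PROGRA~1\IBM\TPC\cli\tpctool getrpt" & _
          " -user tpcadmin -pwd tpcadmin -url 9.43.85.142:9550" & _
          " -columns 809,837,843 -level sample -subsys 2107.75BALB1+0" & _
          " -ctype 10 -start " & startArg & " -duration 86400 > " & outFile
    Shell cmd, vbHide   ' runs the command in a hidden window
End Sub
You could then run this macro (or an equivalent scheduled command file) once a day and import the resulting files using the technique described in Chapter 6.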
4.7 Port Data Rate and Port Response time
These are useful metrics, and you can also set thresholds against them, but they do not
usually affect the throughput and response time of disk storage, so you do not usually go
through them in a TPC performance monitoring analysis for disk.
When you create reports against the Port Data Rate and Port Response Time metrics, it is
usually to investigate a problem in the path between the servers and the storage.
Chapter 5. Sample TPCTOOL reports
In this chapter, we go through examples of using TPCTOOL and the GETRPT command.
These reports are intended to provide you with enough information so that you can start to
customize your reports according to your needs.
Chapter 7, “Metrics per subsystem” on page 85 contains the detailed list of which metrics are
available per each storage subsystem or fabric.
For additional information and suggestions about methodology and about how to proceed,
refer to Monitoring Storage Subsystems Using TotalStorage Productivity Center, SG24-7364.
5.1 Data rate vs. response time for disk storage subsystem
Figure 5-1 Data rate versus response time
In this case, we collect the data (metrics values 821, 824 - see “Commands to start” on
page 22) against a DS8000 volume. You could create the same charts for the DS family, as
well as for all the other supported storage subsystems.
Chapter 7, “Metrics per subsystem” on page 85 contains the detailed list of which metrics are
available per each storage subsystem or fabric. You can check all the available metrics for
each subsystem, using the command lsmetrics as described in “How to create a graph from
a text file” on page 24.
The chart in Figure 5-1 was created using the TPCTOOL command shown in Figure 5-2.
C:\Program Files\IBM\TPC\cli>tpctool getrpt -user tpcadmin -pwd tpcadmin -ctype
subsystem -url 9.43.85.142:9550 -subsys 2107.75BALB1+0 -level sample -columns
821,824 -duration 86400 -start 2006.10.12:03:00:00 >
ds8000_datarate_resptime.txt
Figure 5-2 Data Rate versus Response time
The getrpt command lists a performance report for a specified storage subsystem. You must
have fabric operator or disk operator authority to use this command. The output is redirected
to a text file. The parameters used in this example are:
򐂰 -columns 821,824 specifies what columns will appear in the report. The columns are
obtained from the lscounters and lsmetrics commands as shown in Table 5-1 on page 55.
Table 5-1 I/O rate and response time metrics
Metric                                   Value
Total Data Rate                          821
Overall Response Time                    824
򐂰 -subsys subsystem specifies the storage subsystem. The subsystem variable is the GUID
of the storage subsystem. You can use the lsdev command to return information, including
the GUID, for all storage subsystems that are discovered by TotalStorage Productivity
Center.
򐂰 -level sample | hourly | daily specifies the level for which the performance metrics of
the components should be summarized. You can specify a sample summary, an hourly
summary, or a daily summary.
򐂰 -ctype subsystem specifies that the output should include only components of the
specified type. See the lstype command for more information about the comp_type
variable.
򐂰 -start date/time specifies the date and time to start the sampling period. The date and
time are formatted as yyyy.MM.dd:HH:mm:ss. All time zones are relative to the Device
server. See the lstime command for more information. (A helper sketch for building this
value and the -duration value follows this list.)
򐂰 -duration 86400 specifies the duration of the sampling period, in seconds. The
duration_seconds variable is an integer. 86400 seconds means one day.
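If you build these getrpt parameters from dates held in a spreadsheet, a small Visual Basic helper such as the following sketch can format the -start value and compute the -duration value. The function names, and the idea of deriving the values from worksheet cells, are our own and are not part of TPCTOOL.
' Minimal sketch: build the -start string and the -duration seconds
' from two VBA Date values (for example, read from worksheet cells).
Function TpcStartArg(startTime As Date) As String
    TpcStartArg = Format(startTime, "yyyy.mm.dd:hh:nn:ss")
End Function

Function TpcDurationArg(startTime As Date, endTime As Date) As Long
    TpcDurationArg = DateDiff("s", startTime, endTime)   ' whole seconds
End Function
For example, a start time of 2006.10.12:03:05:35 and an end time 48 hours later produce the values -start 2006.10.12:03:05:35 -duration 172800 used in Figure 4-3.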
5.2 Ports report for a disk storage subsystem
Figure 5-3 DS8000 port report
The chart above (Figure 5-3) shows multiple I/O rate metrics for two ports of a DS8000
storage subsystem. Chapter 7, “Metrics per subsystem” on page 85 contains the detailed list
of which metrics are available per each storage subsystem or fabric.
This can help you compare traffic workload and utilization on two different ports. The
command used (shown in Figure 5-4) can be customized as needed.
C:\PROGRA~1\IBM\TPC\cli>tpctool getrpt -user tpcadmin -pwd tpcadmin -url
9.43.85.142:9550 -columns 852,853,854 -level sample -subsys 2107.75BALB1+0
-ctype 2 -start 2006.10.13:02:10:00 -duration 86400 > ports_ds8000.txt
Figure 5-4 Port report
The getrpt command lists a performance report for a specified storage subsystem. You must
have fabric operator or disk operator authority to use this command. The output is redirected
to a text file. The parameters used in this example are:
򐂰 -columns 852,853,854 specifies what columns will appear in the report. The columns are
obtained from the lscounters and lsmetrics commands.
Table 5-2 Port metrics
Metric                                   Value
Port Send I/O Rate                       852
Port Receive I/O Rate                    853
Total Port I/O Rate                      854
򐂰 -subsys subsystem specifies the storage subsystem. The subsystem variable is the GUID
of the storage subsystem. You can use the lsdev command to return information, including
the GUID, for all storage subsystems that are discovered by TotalStorage Productivity
Center.
򐂰 -level sample | hourly | daily specifies the level for which the performance metrics of
the components should be summarized. You can specify a sample summary, an hourly
summary, or a daily summary.
򐂰 -ctype 2 (Subsystem port) specifies that the output should include only components of
the specified type. See the lstype command for more information about the comp_type
variable.
򐂰 -start date/time specifies the date and time to start the sampling period. The date and
time are formatted as yyyy.MM.dd:HH:mm:ss. All time zones are relative to the Device
server. See the lstime command for more information.
򐂰 -duration duration_seconds specifies the duration of the sampling period, in seconds.
The duration_seconds variable is an integer.
5.3 SVC performance
Figure 5-5 SVC performance reports
The CLI interface provides the breadth of data needed to perform performance analysis on
storage environments using TPCTOOL. It offers both an interactive and a batch method for
creating a column-based report that can be imported into a spreadsheet for analysis.
For example, if you want to review VDisk performance data for an SVC, you can run TPCTOOL
with the VDisk alias, and the output will include the performance data for a specific time and
for specific metrics. With SVC Version 3.1, the SVC development team exposed the VDisk
response time metric. This metric, along with similar metrics for the MDisk component, allows
you to analyze the wait time experienced by a client host.
A large percentage of performance problems can be diagnosed using TPC.
Using TPC V3.1.3 and SVC 4.1 you can get additional metrics and component types, such as
SVC Node. Refer to Chapter 7, “Metrics per subsystem” on page 85 for details.
Figure 5-6 shows the command used to create the chart above, which collects multiple
performance metrics against the same VDisk. This is a good starting point for performance
analysis; you can then drill down using the GUI and the collected data.
C:\PROGRA~1\IBM\TPC\cli>tpctool getrpt -url acmeurl -user myid -pwd mypwd
-ctype 12 -columns
803,806,809,812,815,818,819,820,821,825,826,827,830,831,832,833 -level sample
-subsys asvc1 -fs
Figure 5-6 Command - SVC performance for a vDisk
The getrpt command lists a performance report for a specified storage subsystem. You must
have fabric operator or disk operator authority to use this command. The parameters used in
this example are:
򐂰 -columns 803,806,809,812,815,818,819,820,821,825,826,827,830,831,832,833
specifies what columns will appear in the report. The columns are obtained from the
lscounters and lsmetrics commands.
Table 5-3 SVC metrics and values
Metric                                   Value
Read I/O Rate (overall)                  803
Write I/O Rate (overall)                 806
Total I/O Rate (overall)                 809
Read Cache Hits Percentage (overall)     812
Write Cache Hits Percentage (overall)    815
Total Cache Hits Percentage (overall)    818
Read Data Rate                           819
Write Data Rate                          820
Total Data Rate                          821
Read Transfer Size                       825
Write Transfer Size                      826
Overall Transfer Size                    827
Disk to Cache Transfer Rate              830
Cache to Disk Transfer Rate              831
Write-cache Delay Percentage             832
Write-cache Delay I/O Rate               833
򐂰 -subsys asvc1 (this is an alias configured into TPCCLI.CONF) specifies the storage
subsystem. The subsystem variable is the GUID of the storage subsystem. You can use
the lsdev command to return information, including the GUID, for all storage subsystems
that are discovered by TotalStorage Productivity Center.
򐂰 -level sample | hourly | daily specifies the level for which the performance metrics of
the components should be summarized. You can specify a sample summary, an hourly
summary, or a daily summary.
򐂰 -ctype 12 (Volume or VDisk) specifies that the output should include only components of
the specified type. See the lstype command for more information about the comp_type
variable.
5.4 Reports for a switch fabric
Figure 5-7 Two ports and multiple metrics
The chart in Figure 5-7 was created using the command in Figure 5-8. Once you have the
output text file, you can use Excel or any other tool to customize the report as you choose.
You may compare multiple metrics for two or more ports, as shown in Figure 5-7 (with more
than two ports the chart can become confusing).
Otherwise, you may focus on a few metrics (for example, the Port Peak Data Rate metrics)
and compare them across all switch ports, as shown in Figure 5-9 for the first 8 ports.
This can be customized at any time, working with Excel or whichever tool you use to process
the text file generated by the tpctool getrpt command.
C:\PROGRA~1\IBM\TPC\cli>tpctool getrpt -user tpcadmin -pwd tpcadmin -url
9.43.85.142:9550 -columns
855,856,857,858,859,860,861,862,869,870,871,872,873,874,875,876,877,878,879,880
,881,882,883 -level sample -fabric 100000051E34E895 -ctype 14 -start
2006.10.11:01:21:00 -duration 86400 > ports_switch.txt
Figure 5-8 command to produce report against switch ports
Figure 5-9 Ports peak Data Rate
The getrpt command lists a performance report for a specified storage subsystem or fabric.
You must have fabric operator or disk operator authority to use this command. The
parameters used in this example are:
򐂰 -columns
855,856,857,858,859,860,861,862,869,870,871,872,873,874,875,876,877,878,879,880
,881,882,883 specifies what columns will appear in the report. The columns are obtained
from the lscounters and lsmetrics commands.
Table 5-4 Switch port metrics
Metric                                   Value
Port Send Data Rate                      858
Link Failure Rate                        874
CRC Error Rate                           877
Port Send Packet Rate                    855
Port Receive Packet Rate                 856
Total Port Packet Rate                   857
Port Receive Data Rate                   859
Total Port Data Rate                     860
Port Peak Send Data Rate                 861
Port Peak Receive Data Rate              862
Port Send Packet Size                    869
Port Receive Packet Size                 870
Overall Port Packet Size                 871
Error Frame Rate                         872
Dumped Frame Rate                        873
Loss of Sync Rate                        875
Loss of Signal Rate                      876
Short Frame Rate                         878
Long Frame Rate                          879
򐂰 -fabric 100000051E34E895 specifies the fabric against which the report is run, identified
by its worldwide name (WWN). The command in Figure 5-8 uses -fabric rather than
-subsys because the report is for a switch fabric rather than a storage subsystem.
򐂰 -level sample | hourly | daily specifies the level for which the performance metrics of
the components should be summarized. You can specify a sample summary, an hourly
summary, or a daily summary.
򐂰 -ctype 14 (Switch port) specifies that the output should include only components of the
specified type. See the lstype command for more information about the comp_type
variable.
򐂰 -start date/time specifies the date and time to start the sampling period. The date and
time are formatted as yyyy.MM.dd:HH:mm:ss. All time zones are relative to the Device
server. See the lstime command for more information.
򐂰 -duration duration_seconds specifies the duration of the sampling period, in seconds.
The duration_seconds variable is an integer.
Chapter 6. Macro to create charts from TPCTOOL CLI text file
In this chapter we show you how to import into Excel (or a similar spreadsheet) the output
data from the getrpt (rptfast4500) script that was executed for the Subsystem performance
report, and then how to create a template for later use. The imported data is in a more
readable format for analysis and for creating reports and graphs.
Chapter 7, “Metrics per subsystem” on page 85 contains the detailed list of which metrics are
available per each storage subsystem or fabric.
6.1 Importing and exporting data
Data may need to be exported or imported regularly. In this case, the data is exported to the
rptfast4500.out file and then read by the application (Microsoft Excel). Alternatively, you can
copy data on an ad hoc basis.
The following example was built using Microsoft® Excel 2000. The first task is to import the
data into Excel. Follow these steps.
1. Open a new Excel document (see Figure 6-1).
Figure 6-1 A new Excel document
2. From the Data menu, select Import External Data (see Figure 6-2).
Figure 6-2 Select Data → Import External Data → Import Data
3. Locate the data file in the directory where you stored the TPC CLI output file
(rptfast4500.out). Once selected, press Open and start the import process (see
Figure 6-3).
Figure 6-3 Select the data source to begin the import process
4. This will start the text import wizard; select Delimited (default) and Next (see Figure 6-4).
Figure 6-4 Select Delimited (default) and Next
5. Keep Tab (the default) selected and also specify the delimiter you used in your script. For
our example, we selected semicolon (;), as shown in Example 6-1 and Figure 6-5.
Example 6-1 The delimiter chosen in our script -fs
getrpt -user myuser -pwd mypass -url myurl -subsys fast4500 -level sample
-ctype subsystem -columns 803,806,809,812,815,818,819,820,821,825,826,827
-start 2006.09.19:16:21:00 -duration 80000 -fs ;
Figure 6-5 Select delimiter semicolon
6. Once the delimiter has been selected, press Next → Finish (see Figure 6-6).
Figure 6-6 Select Finish to complete Wizard
7. Press OK to complete the task (see Figure 6-7).
Figure 6-7 Press OK to complete the task
Figure 6-8 Excel spreadsheet with columns unformatted
The Excel document needs to be formatted so that a template can be created. First, delete
row 2 and then rename the metric columns to more understandable headings. These metric
headings can be obtained using the lsmetrics command. The resulting Excel spreadsheet is
similar to Figure 6-9.
Figure 6-9 Excel spreadsheet with columns renamed.
Copy row 1 (the header row) and paste it into a new workbook. If you instead delete the data
below row 1 and save the sheet as a template, Excel will prompt you to refresh the data from
the original source file every time you reuse the template. Therefore, copy the header row
into a new workbook. Once you have copied the header row into a new workbook, it can be
saved as a template.
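If you import these files regularly, you can also drive the import from a macro instead of the wizard. The following is a minimal sketch; the file path C:\temp\rptfast4500.out is an assumption, and the semicolon setting matches the -fs ; delimiter chosen in Example 6-1.
Sub ImportTpctoolOutput()
    ' Minimal sketch: import the delimited getrpt output file into the
    ' active sheet, starting at cell A1, without using the import wizard.
    Dim qt As QueryTable
    Set qt = ActiveSheet.QueryTables.Add( _
        Connection:="TEXT;C:\temp\rptfast4500.out", _
        Destination:=ActiveSheet.Range("A1"))
    With qt
        .TextFileParseType = xlDelimited
        .TextFileSemicolonDelimiter = True   ' the -fs ; delimiter from Example 6-1
        .TextFileStartRow = 1                ' change to 3 to skip the header lines
        .Refresh BackgroundQuery:=False
    End With
End Sub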
6.1.1 TimeStamp
A timestamp can refer to a time code or to a digitally signed timestamp. Timestamps are very
useful for logging events.
Date and time values vary more than almost any other data type; compare, for example, the
DATETIME and DATE formats shown in Example 6-2.
Example 6-2 timestamp variants
2005-05-08 10:45
Sat June 29 23:16:57 2005
2005.08.03:10:45
The international standard date notation is YYYY-MM-DD.
Where YYYY is the year, MM is the month of the year between 01 (January) and 12
(December), and DD is the day of the month between 01 and 31.
For example, the third day of August in the year 1980 is written in the standard notation
as 1980-08-03.
The international standard notation for the time of day is hh:mm:ss.
Where hh is the number of complete hours that have passed since midnight (00-24), mm
is the number of complete minutes that have passed since the start of the hour (00-59),
and ss is the number of complete seconds since the start of the minute (00-60). If the hour
value is 24, then the minute and second values must be zero.
For example, 23:59:59 represents the time one second before midnight.
All of the time comparison procedures require the time objects to be of the same type; it is an
error to use these procedures on time objects of different types. For the timestamp
measurements, we need to convert the timestamp to the international standard regardless of
the format used in your country or region. We have developed a Visual Basic® script to
collect the format used by your workstation and convert the timestamp to an international
format (see Example 6-3).
Example 6-3 Visual Basic Script to convert timestamp to international format
' Get Date and Time Separator String
DS = Application.International(xlDateSeparator)
TS = Application.International(xlTimeSeparator)
If Application.International(xl24HourClock) Then
AMPM = ""
Else
AMPM = " AM/PM"
End If
' This loop runs down the timestamp column until it reaches an empty cell
Dim TimeStamp As String
Do
If Application.International(xlDateOrder) = 0 Then
ActiveCell.NumberFormat = "mm" + DS + "dd" + DS + "yyyy hh" + TS +
"mm" + TS + "ss" + AMPM
ElseIf Application.International(xlDateOrder) = 1 Then
ActiveCell.NumberFormat = "dd" + DS + "mm" + DS + "yyyy hh" + TS +
"mm" + TS + "ss" + AMPM
ElseIf Application.International(xlDateOrder) = 2 Then
ActiveCell.NumberFormat = "yyyy" + DS + "mm" + DS + "dd hh" + TS +
"mm" + TS + "ss" + AMPM
End If
TimeStamp = Replace(ActiveCell.Value, ":", " ", 1, 1, vbTextCompare)
TimeStamp = Replace(TimeStamp, ".", DS, 1, 2, vbTextCompare)
TimeStamp = Replace(TimeStamp, ":", TS, 1, 2, vbTextCompare)
ActiveCell.Value = TimeStamp
ActiveCell.Offset(1, 0).Select
Loop Until IsEmpty(ActiveCell)
ENDE:
Application.ScreenUpdating = True
Application.EnableEvents = True
6.1.2 Create macros
You need to copy this VB code into your Excel workbook and save it as a macro. A macro
automates a complex task; Excel macros can perform complicated series of actions or simply
record commonly used commands. Using the code above, we can fully automate the
timestamp conversion. These are the steps to create a macro.
1. Open the Tools menu and select Macro, and then select Macros (see Figure 6-10).
Figure 6-10 Select Macro
2. Fill in the macro name, and press Create. The macro should be given a descriptive name;
in this case we are using Timestamp. The macro can be made available from only one
workbook or from any workbook (see Figure 6-11).
Figure 6-11 Macro Name
3. Copy the code in Example 6-3 on page 72 into the “Microsoft Visual Basic” editor, between
the “Sub” and the “End Sub” lines (see Figure 6-12 and the skeleton sketch after these steps).
Figure 6-12 Copy Macro code in VB Editor
4. Once the code has been copied into the editor, close the editor window, which brings you
back into the Excel worksheet and the macro is saved automatically.
5. Now you have successfully created a macro to convert the timestamp into an international
standard.
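For reference, the finished macro skeleton looks like the following sketch. Excel generates the Sub and End Sub lines for you when you press Create, and the code from Example 6-3 on page 72 goes between them.
Sub Timestamp()
    ' Paste the conversion code from Example 6-3 on page 72 here,
    ' between the Sub line and the End Sub line.
End Sub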
How to run the macro
Once the data has been imported into your worksheet, you can run the macro to convert the
timestamp into an international standard and use it as a true value. The macro is very simple
to run.
1. Select Tools → Macro → Macros (Alt+F8) as shown in Figure 6-13.
Figure 6-13 Browse to run the Macro
2. Select the macro you want to run. In this case, we selected “Timestamp” and then pressed
Run (see Figure 6-14).
Figure 6-14 Select the macro to run
3. This will format the timestamp column (see Figure 6-15).
Figure 6-15 Timestamp column formatted
You may save this as a complete template which includes the Performance report headings
and the Timestamp convertor.
6.1.3 Creating a template
Using the Excel spreadsheet above, you need to create a template for later use, either with or
without the macro; this depends on the type of graphs you want to create (see Figure 6-16).
Provide a descriptive name for the template.
Figure 6-16 Save the book as a template for reuse
Important: Select Template (*.xlt) from the Save as type drop-down box.
Once you have saved this template you may import performance extracted data into this
spreadsheet.
Restriction: Only the same data criteria can be imported into the same Template, for
example, DS4000 Subsystem metrics.
The ITSO has created a template into which you can import your extracted data. The
template headings may be modified to accommodate other metrics and reports; the macro
and the layout have been created for you. This template can be downloaded from the ITSO
Web site. Refer to the download materials for the Redbooks publication TotalStorage
Productivity Center Advanced Topics, SG24-7348.
Importing into a saved template
Use the same methodology described above to import the extracted performance data into
the saved template.
1. Open the saved template.
2. Select the source file.
3. Start the import wizard.
4. For Step 1 of 3 (see Figure 6-17), in the field Start import at row:, choose 3. This will
remove the header from the source file and the additional separation line.
Figure 6-17 Excel - change the start import row number
5. Choose the delimiter used when extracting the data in Step 2 of 3.
6. Select Finish. This will return an output similar to Figure 6-18.
Figure 6-18 Final output of imported data to a template
Now the data extracted from TPC is available for analysis. Furthermore, this data can be
used to create graphs.
Depending on the request and the analysis required, you may convert the timestamp into an
international standard or use it as a string.
6.1.4 Creating graphs
Using the data that has been extracted from TPC and imported into the Excel template, you
can now create readable graphs. Graphs are used to determine relationships by plotting
large numbers of data points and observing the grouping or clustering of the data points. A
template can also hold a macro that copies data from the Excel sheet to another application.
Excel macros can be recorded, so the templates for creating graphs can be reused in the
future to reproduce graphs of the same type; in our example, Subsystem reports for the
DS4500.
After creating your template and importing the data extracted from TPC via the TPCTOOL,
you need to determine what type of report you want to generate. In this example, we show
you how to create a performance report using the timestamp, Read I/O, Write I/O and Total
I/O.
Note: For this example we used the macro to convert the timestamp into an international
standard and use the timestamp as a data value and not as a string.
1. Select the columns you want to plot onto the graph, that is, timestamp, Read I/O, Write I/O,
and Total I/O (see Figure 6-19). Note that we are selecting only a few rows of data.
Figure 6-19 Select Columns to create a graph
2. You need to utilize the chart wizard to plot the values onto the graph. Select Insert →
Chart Wizard (see Figure 6-20).
Figure 6-20 Begin Chart Wizard
3. Select the chart type you want to use to create the graph.
Graphs (line graphs and scatter plots)
Line graphs provide an excellent way to map independent and dependent variables that are
both quantitative. When both variables are quantitative, the line segment that connects two
points on the graph expresses a slope, which can be interpreted visually relative to the slope
of other lines or expressed as a precise mathematical formula. Scatter plots are similar to line
graphs in that they start with mapping quantitative data points. The difference is that with a
scatter plot, the decision is made that the individual points should not be connected directly
together with a line, but instead express a trend. This trend can be seen directly through the
distribution of points or with the addition of a regression line. A statistical tool is used to
mathematically express a trend in the data.
Tip: Using the XY scatter plot graph will give you the most realistic graph, because both the
X and Y axes are plotted as true values. However, the graphs may be produced in any
format for analysis; the graph type you produce depends on the analysis you want to
achieve.
We use the XY scatter graph to analyze and produce the graph (Step 1 of 4). See
Figure 6-21.
Figure 6-21 Selecting XY Scatter Graph
4. Select Next once the graph type is chosen. The next window will confirm the data range
(Step 2 of 4). Click Next (Step 3 of 4) as shown in Figure 6-22.
Figure 6-22 Step 3 of 4- Chart Options
5. You need to enter the chart title, X value axis title and the Y value axis title. Descriptive
titles are used for each graph (see Figure 6-23).
Figure 6-23 Input chart titles
6. The next panel (Step 4 of 4) allows you to create the graph in the same sheet or in a new
sheet. This is entirely up to you. If you are producing multiple graphs from the same
template, then we recommend that you place each graph in a new sheet. In this example
we use the same sheet. Now the graph will be generated (see Figure 6-24).
Figure 6-24 Performance Graph for Subsystem
Looking at the performance graph in Figure 6-24 on page 82, we notice a consistent I/O rate
throughout the time frame. For an I/O graph like this, we look for spikes and drill deeper into
the subsystem to determine what causes these spikes.
As described above, it is easy to use Excel to produce performance graphs. You can similarly
produce performance graphs for other subsystems, switches, and the SAN Volume
Controller (SVC).
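If you prefer to reproduce this chart from a macro rather than the Chart Wizard, the following is a minimal sketch. The worksheet name and the data range are assumptions; adjust them to match the columns you selected (timestamp, Read I/O, Write I/O, and Total I/O).
Sub CreateSubsystemGraph()
    ' Minimal sketch: create an XY scatter chart on its own sheet from the
    ' imported data. Adjust the worksheet name and range to your template.
    Dim cht As Chart
    Set cht = Charts.Add
    cht.ChartType = xlXYScatterLines
    cht.SetSourceData Source:=Worksheets("Sheet1").Range("A1:D50"), _
                      PlotBy:=xlColumns
    cht.HasTitle = True
    cht.ChartTitle.Text = "Performance Graph for Subsystem"
End Sub
Recording your own macro while stepping through the Chart Wizard, and then editing the recorded code, is another easy way to arrive at a similar routine.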
For more information about producing custom performance reports, refer to the paper titled:
A Quickstart to TotalStorage Productivity Center Performance Reporting
One or more of the following URL links will show you the document:
– IBM
http://w3.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100794
– Business Partners
http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/WP100794
Link to IBM TotalStorage Productivity Center V3 - Performance Management Best Practices:
http://www-1.ibm.com/support/docview.wss?uid=ssg1S7001493&rs=1133
Chapter 7. Metrics per subsystem
In this chapter we provide the available metrics per subsystem that can be used to generate
reports using TPCTOOL.
7.1 Metrics for DS4000 storage subsystem
Table 7-1 lists the DS4000 component types available for reports. In TPCTOOL reports you
can specify either -ctype subsystem or -ctype 1 and get the same result.
Table 7-1 DS4000 storage subsystem components
Component type available
subsystem     1     Subsystem
subsys_port   2     HBA port
vol           12    Volume
Table 7-2 lists the DS4000 metrics that can be reported on by subsystem.
Table 7-2 Metrics for DS4000 ctype subsystem
DS4000 ctype: subsystem 1
(Subsystem)
Metric
Value
Write Data Rate
820
Total Data Rate
821
Read Transfer Size
825
Write Transfer Size
826
Overall Transfer Size
827
Read I/O Rate (overall)
803
Write I/O Rate (overall)
806
Total I/O Rate (overall)
809
Write Cache Hits Percentage (overall)
815
Total Cache Hits Percentage (overall)
818
Read Data Rate
819
Table 7-3 lists the DS4000 metrics that can be reported on by HBA port.
Table 7-3 Metrics for DS4000 ctype subsys_port 2
DS4000 - ctype: subsys_port 2 (HBA port)
Metric
Value
Total Port I/O Rate
854
Total Port Data Rate
860
Total Port Transfer Size
868
Table 7-4 lists the DS4000 metrics that can be reported on By Volume.
Table 7-4 Metrics for DS4000 ctype VolumeComponent
DS4000 - ctype: vol 12 (VolumeComponent)
Metric
Value
Write Data Rate
820
Total Data Rate
821
Read Transfer Size
825
Write Transfer Size
826
Overall Transfer Size
827
Read I/O Rate (overall)
803
Write I/O Rate (overall)
806
Total I/O Rate (overall)
809
Read Cache Hits Percentage (overall)
812
Write Cache Hits Percentage (overall)
815
Total Cache Hits Percentage (overall)
818
Read Data Rate
819
7.2 Metrics for ESS storage subsystem
Table 7-5 lists the ESS component types available for reports.
Table 7-5 ESS Storage Subsystems components
Component type available
subsystem     1     Subsystem
subsys_port   2     HBA port
controller    3     Controller
da            8     Device Adapter
array         10    Array
vol           12    VolumeComponent
Table 7-6 lists the ESS metrics by subsystem.
Table 7-6 Metrics for ESS by subsystem
ESS - ctype: subsystem 1 (Subsystem)
Metric
Value
Total Data Rate
821
Read Response Time
822
Write Response Time
823
Overall Response Time
824
Read Transfer Size
825
Write Transfer Size
826
Overall Transfer Size
827
Record Mode Read I/O Rate
828
Record Mode Read Cache Hit Percentage
829
Disk to Cache Transfer Rate
830
Cache to Disk Transfer Rate
831
Write-cache Delay Percentage
832
Write-cache Delay I/O Rate
833
Cache Holding Time
834
Backend Read I/O Rate
835
Backend Write I/O Rate
836
Total Backend I/O Rate
837
Backend Read Data Rate
838
Backend Write Data Rate
839
Total Backend Data Rate
840
Backend Read Response Time
841
Backend Write Response Time
842
Overall Backend Response Time
843
Port Receive Data Rate
859
Backend Read Transfer Size
847
Backend Write Transfer Size
848
Overall Backend Transfer Size
849
Read I/O Rate (overall)
803
Write I/O Rate (normal)
804
Write I/O Rate (sequential)
805
Write I/O Rate (overall)
806
Total I/O Rate (normal)
807
Total I/O Rate (sequential)
808
Total I/O Rate (overall)
809
Read Cache Hits Percentage (normal)
810
Read Cache Hits Percentage (sequential)
811
Read Cache Hits Percentage (overall)
812
Write Cache Hits Percentage (normal)
813
Write Cache Hits Percentage (sequential)
814
Write Cache Hits Percentage (overall)
815
Total Cache Hits Percentage (normal)
816
Total Cache Hits Percentage (sequential)
817
Total Cache Hits Percentage (overall)
818
Read Data Rate
819
Write Data Rate
820
Read I/O Rate (normal)
801
Read I/O Rate (sequential)
802
Port Send I/O Rate
852
Port Receive I/O Rate
853
Total Port I/O Rate
854
Port Send Data Rate
858
Total Port Data Rate
860
Port Send Response Time
863
Port Receive Response Time
864
Total Port Response Time
865
Port Send Transfer Size
866
Port Receive Transfer Size
867
Total Port Transfer Size
868
Table 7-7 lists the ESS metrics by HBA port.
Table 7-7 ESS metrics by HBA port
ESS - ctype:subsys_port 2 (HBA port)
Metric
Value
Port Receive Data Rate
859
Port Send I/O Rate
852
Port Receive I/O Rate
853
Total Port I/O Rate
854
Port Send Data Rate
858
Total Port Data Rate
860
Port Send Response Time
863
Port Receive Response Time
864
Total Port Response Time
865
Port Send Transfer Size
866
Port Receive Transfer Size
867
Total Port Transfer Size
868
Table 7-8 lists the ESS metrics by controller.
Table 7-8 ESS metrics by controller
ESS - ctype:controller 3 (Controller)
Metric
Value
Total Data Rate
821
Read Response Time
822
Write Response Time
823
Overall Response Time
824
Read Transfer Size
825
Write Transfer Size
826
Overall Transfer Size
827
Record Mode Read I/O Rate
828
Record Mode Read Cache Hit Percentage
829
Disk to Cache Transfer Rate
830
Cache to Disk Transfer Rate
831
Write-cache Delay Percentage
832
Write-cache Delay I/O Rate
833
Cache Holding Time
834
Backend Read I/O Rate
835
Backend Write I/O Rate
836
Total Backend I/O Rate
837
Backend Read Data Rate
838
Backend Write Data Rate
839
Total Backend Data Rate
840
Backend Read Response Time
841
Backend Write Response Time
842
Overall Backend Response Time
843
Backend Read Transfer Size
847
Backend Write Transfer Size
848
Overall Backend Transfer Size
849
Read I/O Rate (overall)
803
Write I/O Rate (normal)
804
Write I/O Rate (sequential)
805
Write I/O Rate (overall)
806
Total I/O Rate (normal)
807
Total I/O Rate (sequential)
808
Total I/O Rate (overall)
809
Read Cache Hit Percentage (normal)
810
Read Cache Hits Percentage (sequential)
811
Read Cache Hits Percentage (overall)
812
Write Cache Hits Percentage (normal)
813
Write Cache Hits Percentage (sequential)
814
Write Cache Hits Percentage (overall)
815
Total Cache Hits Percentage (normal)
816
Total Cache Hits Percentage (sequential)
817
Total Cache Hits Percentage (overall)
818
Read Data Rate
819
Write Data Rate
820
Read I/O Rate (normal)
801
Read I/O Rate (sequential)
802
Table 7-9 lists the ESS metrics by device adapter.
Table 7-9 ESS metrics by device adapter
ESS - ctype: da 8 (Device Adapter)
Metric
Value
Total Data Rate
821
Read Response Time
822
Write Response Time
823
Overall Response Time
824
Read Transfer Size
825
Write Transfer Size
826
Overall Transfer Size
827
Record Mode Read I/O Rate
828
Record Mode Read Cache Hit Percentage
829
Disk to Cache Transfer Rate
830
Cache to Disk Transfer Rate
831
Write-cache Delay Percentage
832
Write-cache Delay I/O Rate
833
Backend Read I/O Rate
835
Backend Write I/O Rate
836
Total Backend I/O Rate
837
Backend Read Data Rate
838
Backend Write Data Rate
839
Total Backend Data Rate
840
Backend Read Response Time
841
Backend Write Response Time
842
Overall Backend Response Time
843
Backend Read Transfer Size
847
Backend Write Transfer Size
848
Overall Backend Transfer Size
849
Read I/O Rate (overall)
803
Write I/O Rate (normal)
804
Write I/O Rate (sequential)
805
Write I/O Rate (overall)
806
Total I/O Rate (normal)
807
Total I/O Rate (sequential)
808
Total I/O Rate (overall)
809
Read Cache Hit Percentage (normal)
810
Read Cache Hits Percentage (sequential)
811
Read Cache Hits Percentage (overall)
812
Write Cache Hits Percentage (normal)
813
Write Cache Hits Percentage (sequential)
814
Write Cache Hits Percentage (overall)
815
Total Cache Hits Percentage (normal)
816
Total Cache Hits Percentage (sequential)
817
Total Cache Hits Percentage (overall)
818
Read Data Rate
819
Write Data Rate
820
Read I/O Rate (normal)
801
Read I/O Rate (sequential)
802
Table 7-10 lists the ESS metrics by array.
Table 7-10 ESS metrics by array
ESS - ctype: array 10 (Array)
Metric
Value
Total Data Rate
821
Read Response Time
822
Write Response Time
823
Overall Response Time
824
Read Transfer Size
825
Write Transfer Size
826
Overall Transfer Size
827
Record Mode Read I/O Rate
828
Record Mode Read Cache Hit Percentage
829
Disk to Cache Transfer Rate
830
Cache to Disk Transfer Rate
831
Write-cache Delay Percentage
832
Write-cache Delay I/O Rate
833
Backend Read I/O Rate
835
Backend Write I/O Rate
836
Total Backend I/O Rate
837
Backend Read Data Rate
838
Backend Write Data Rate
839
Total Backend Data Rate
840
Backend Read Response Time
841
Backend Write Response Time
842
Overall Backend Response Time
843
Backend Read Transfer Size
847
Backend Write Transfer Size
848
Overall Backend Transfer Size
849
Read I/O Rate (overall)
803
Write I/O Rate (normal)
804
Write I/O Rate (sequential)
805
Write I/O Rate (overall)
806
Total I/O Rate (normal)
807
Total I/O Rate (sequential)
808
Total I/O Rate (overall)
809
Read Cache Hit Percentage (normal)
810
Read Cache Hits Percentage (sequential)
811
Read Cache Hits Percentage (overall)
812
Write Cache Hits Percentage (normal)
813
Write Cache Hits Percentage (sequential)
814
Write Cache Hits Percentage (overall)
815
Total Cache Hits Percentage (normal)
816
Total Cache Hits Percentage (sequential)
817
Total Cache Hits Percentage (overall)
818
Read Data Rate
819
Write Data Rate
820
Read I/O Rate (normal)
801
Read I/O Rate (sequential)
802
Disk Utilization Percentage
850
Sequential I/O Percentage
851
Table 7-11 lists the ESS metrics By Volume component.
Table 7-11 ESS metrics By Volume component
ESS - ctype:vol 12 (VolumeComponent)
Metric
Value
Total Data Rate
821
Read Response Time
822
Write Response Time
823
Overall Response Time
824
Read Transfer Size
825
Write Transfer Size
826
Overall Transfer Size
827
Record Mode Read I/O Rate
828
Record Mode Read Cache Hit Percentage
829
Disk to Cache Transfer Rate
830
Cache to Disk Transfer Rate
831
Write-cache Delay Percentage
832
Write-cache Delay I/O Rate
833
Read I/O Rate (overall)
803
Write I/O Rate (normal)
804
Write I/O Rate (sequential)
805
Write I/O Rate (overall)
806
Total I/O Rate (normal)
807
Total I/O Rate (sequential)
808
Total I/O Rate (overall)
809
Read Cache Hit Percentage (normal)
810
Read Cache Hits Percentage (sequential)
811
Read Cache Hits Percentage (overall)
812
Write Cache Hits Percentage (normal)
813
Write Cache Hits Percentage (sequential)
814
Write Cache Hits Percentage (overall)
815
Total Cache Hits Percentage (normal)
816
Total Cache Hits Percentage (sequential)
817
Total Cache Hits Percentage (overall)
818
Read Data Rate
819
Write Data Rate
820
Read I/O Rate (normal)
801
Read I/O Rate (sequential)
802
7.3 Metrics for DS8000/DS6000 storage subsystems
Table 7-12 lists the DS8000/DS6000 component types available for reports.
Table 7-12 DS8000/DS6000 Storage Subsystem components
Component types available
subsystem     1     Subsystem
subsys_port   2     HBA port
controller    3     Controller
stor_pool     4     Storage Pool
ds_rio        6     RIO Loop
da            8     Device Adapter
ds_rank       9     Rank
array         10    Array
vol           12    VolumeComponent
Table 7-13 lists the DS8000/DS6000 metrics by subsystem.
Table 7-13 DS8000/DS6000 metrics by subsystem
DS8K/DS6K - ctype:subsystem 1 (Subsystem)
Metric
Value
Total Data Rate
821
Read Response Time
822
Write Response Time
823
Overall Response Time
824
Read Transfer Size
825
Write Transfer Size
826
Overall Transfer Size
827
Record Mode Read I/O Rate
828
Record Mode Read Cache Hit
Percentage
829
Disk to Cache Transfer Rate
830
Cache to Disk Transfer Rate
831
Write-cache Delay Percentage
832
Write-cache Delay I/O Rate
833
Cache Holding Time
834
Backend Read I/O Rate
835
Backend Write I/O Rate
836
Total Backend I/O Rate
837
Backend Read Data Rate
838
Backend Write Data Rate
839
Total Backend Data Rate
840
Backend Read Response Time
841
Backend Write Response Time
842
Overall Backend Response Time
843
Port Receive Data Rate
859
Backend Read Transfer Size
847
Backend Write Transfer Size
848
Overall Backend Transfer Size
849
Read I/O Rate (overall)
803
Write I/O Rate (normal)
804
Write I/O Rate (sequential)
805
Write I/O Rate (overall)
806
Total I/O Rate (normal)
807
Total I/O Rate (sequential)
808
Total I/O Rate (overall)
809
Read Cache Hit Percentage (normal)
810
Read Cache Hits Percentage
(sequential)
811
Read Cache Hits Percentage (overall)
812
Write Cache Hits Percentage (normal)
813
Write Cache Hits Percentage
(sequential)
814
Write Cache Hits Percentage (overall)
815
Total Cache Hits Percentage (normal)
816
Total Cache Hits Percentage
(sequential)
817
Total Cache Hits Percentage (overall)
818
Read Data Rate
819
Write Data Rate
820
Read I/O Rate (normal)
801
Read I/O Rate (sequential)
802
Port Send I/O Rate
852
Port Receive I/O Rate
853
Total Port I/O Rate
854
Port Send Data Rate
858
Total Port Data Rate
860
Port Send Response Time
863
Port Receive Response Time
864
Total Port Response Time
865
Port Send Transfer Size
866
Port Receive Transfer Size
867
Total Port Transfer Size
868
Table 7-14 lists the DS8000/DS6000 metrics by HBA port.
Table 7-14 DS8000/DS6000 metrics by HBA port
DS8k/DS6k - ctype:subsys_port 2 (HBA port)
Metric
Value
Port Receive Data Rate
859
Port Send I/O Rate
852
Port Receive I/O Rate
853
Total Port I/O Rate
854
Port Send Data Rate
858
Total Port Data Rate
860
Port Send Response Time
863
Port Receive Response Time
864
Total Port Response Time
865
Port Send Transfer Size
866
Port Receive Transfer Size
867
Total Port Transfer Size
868
Table 7-15 lists the DS8000/DS6000 metrics by controller.
Table 7-15 DS8000/DS6000 metrics by controller
DS8K/DS6K - ctype:controller 3 (Controller)
Metric
Value
Port Receive Data Rate
859
Port Send I/O Rate
852
Port Receive I/O Rate
853
Total Port I/O Rate
854
Port Send Data Rate
858
Total Port Data Rate
860
Port Send Response Time
863
Port Receive Response Time
864
Total Port Response Time
865
Port Send Transfer Size
866
Port Receive Transfer Size
867
Total Port Transfer Size
868
Table 7-16 lists the DS8000/DS6000 metrics by device adapter.
Table 7-16 DS8000/DS6000 metrics by device adapter
DS8K/DS6K - ctype:da 8 (Device Adapter)
Metric
Value
Total Data Rate
821
Read Response Time
822
Write Response Time
823
Overall Response Time
824
Read Transfer Size
825
Write Transfer Size
826
Overall Transfer Size
827
Record Mode Read I/O Rate
828
Record Mode Read Cache Hit Percentage
829
Disk to Cache Transfer Rate
830
Cache to Disk Transfer Rate
831
Write-cache Delay Percentage
832
Write-cache Delay I/O Rate
833
Backend Read I/O Rate
835
Backend Write I/O Rate
836
Total Backend I/O Rate
837
Backend Read Data Rate
838
Backend Write Data Rate
839
Total Backend Data Rate
840
Backend Read Response Time
841
Backend Write Response Time
842
Overall Backend Response Time
843
Backend Read Transfer Size
847
Backend Write Transfer Size
848
Overall Backend Transfer Size
849
Read I/O Rate (overall)
803
Write I/O Rate (normal)
804
Write I/O Rate (sequential)
805
Write I/O Rate (overall)
806
Total I/O Rate (normal)
807
Total I/O Rate (sequential)
808
Total I/O Rate (overall)
809
Read Cache Hit Percentage (normal)
810
Read Cache Hits Percentage (sequential)
811
Read Cache Hits Percentage (overall)
812
Write Cache Hits Percentage (normal)
813
Write Cache Hits Percentage (sequential)
814
Write Cache Hits Percentage (overall)
815
Total Cache Hits Percentage (normal)
816
Total Cache Hits Percentage (sequential)
817
Total Cache Hits Percentage (overall)
818
Read Data Rate
819
Write Data Rate
820
Read I/O Rate (normal)
801
Read I/O Rate (sequential)
802
Table 7-17 lists the DS8000/DS6000 metrics by rank.
Table 7-17 DS8000/DS6000 metrics by rank
DS8K/DS6K - ctype:ds_rank 9 (Rank)
Metric
Value
Backend Read I/O Rate
835
Backend Write I/O Rate
836
Total Backend I/O Rate
837
Backend Read Data Rate
838
Backend Write Data Rate
839
Total Backend Data Rate
840
Backend Read Response Time
841
Backend Write Response Time
842
Overall Backend Response Time
843
Backend Read Transfer Size
847
Backend Write Transfer Size
848
Overall Backend Transfer Size
849
Table 7-18 lists the DS8000/DS6000 metrics by array.
Table 7-18 DS8000/DS6000 metrics by array
DS8K/DS6K - ctype:array 10 (Array)
Metric
Value
Total Data Rate
821
Read Response Time
822
Write Response Time
823
Overall Response Time
824
Read Transfer Size
825
Write Transfer Size
826
Overall Transfer Size
827
Record Mode Read I/O Rate
828
Record Mode Read Cache Hit Percentage
829
Disk to Cache Transfer Rate
830
Cache to Disk Transfer Rate
831
Write-cache Delay Percentage
832
Write-cache Delay I/O Rate
833
Backend Read I/O Rate
835
Backend Write I/O Rate
836
Total Backend I/O Rate
837
Backend Read Data Rate
838
Backend Write Data Rate
839
Total Backend Data Rate
840
Backend Read Response Time
841
Backend Write Response Time
842
Overall Backend Response Time
843
Backend Read Transfer Size
847
Backend Write Transfer Size
848
Overall Backend Transfer Size
849
Read I/O Rate (overall)
803
Write I/O Rate (normal)
804
Chapter 7. Metrics per subsystem
101
DS8K/DS6K- ctype:array
10 (Array)
Write I/O Rate (sequential)
805
Write I/O Rate (overall)
806
Total I/O Rate (normal)
807
Total I/O Rate (sequential)
808
Total I/O Rate (overall)
809
Read Cache Hit Percentage (normal)
810
Read Cache Hits Percentage (sequential)
811
Read Cache Hits Percentage (overall)
812
Write Cache Hits Percentage (normal)
813
Write Cache Hits Percentage (sequential)
814
Write Cache Hits Percentage (overall)
815
Total Cache Hits Percentage (normal)
816
Total Cache Hits Percentage (sequential)
817
Total Cache Hits Percentage (overall)
818
Read Data Rate
819
Write Data Rate
820
Read I/O Rate (normal)
801
Read I/O Rate (sequential)
802
Disk Utilization Percentage
850
Sequential I/O Percentage
851
Table 7-19 lists the DS8000/DS6000 metrics by volume component.
Table 7-19 DS8000/DS6000 metrics by volume component
DS8K/DS6K - ctype:vol 12 (VolumeComponent)
Metric   Value
Total Data Rate   821
Read Response Time   822
Write Response Time   823
Overall Response Time   824
Read Transfer Size   825
Write Transfer Size   826
Overall Transfer Size   827
Record Mode Read I/O Rate   828
Record Mode Read Cache Hit Percentage   829
Disk to Cache Transfer Rate   830
Cache to Disk Transfer Rate   831
Write-cache Delay Percentage   832
Write-cache Delay I/O Rate   833
Read I/O Rate (overall)   803
Write I/O Rate (normal)   804
Write I/O Rate (sequential)   805
Write I/O Rate (overall)   806
Total I/O Rate (normal)   807
Total I/O Rate (sequential)   808
Total I/O Rate (overall)   809
Read Cache Hit Percentage (normal)   810
Read Cache Hits Percentage (sequential)   811
Read Cache Hits Percentage (overall)   812
Write Cache Hits Percentage (normal)   813
Write Cache Hits Percentage (sequential)   814
Write Cache Hits Percentage (overall)   815
Total Cache Hits Percentage (normal)   816
Total Cache Hits Percentage (sequential)   817
Total Cache Hits Percentage (overall)   818
Read Data Rate   819
Write Data Rate   820
Read I/O Rate (normal)   801
Read I/O Rate (sequential)   802
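The metric IDs shown in the Value column of these tables are the numbers that you pass to the -columns option of the getrpt command described earlier in this paper. The following line is a minimal sketch of a sample-level volume report for a DS8000 that returns the overall read I/O rate (803), the overall total I/O rate (809), and the overall response time (824). The credentials, URL, GUID, start time, and duration are placeholders, and you should verify the option names against the getrpt examples and the command help for your TPCTOOL level:
tpctool> getrpt -user <tpc_user> -pwd <password> -url <tpc_server>:<port> -subsys <DS8000_GUID> -ctype vol -columns 803,809,824 -start <start_timestamp> -duration <seconds> -level sample -fs ","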
7.4 Metrics for SVC storage subsystems
Table 7-20 lists the SVC component types available for reports.
Table 7-20 SVC component types available for reports
Component type available
subsystem   1   Subsystem
subsys_port   2   HBA port
svc_iogrp   5   SVC I/O Group
svc_mdgrp   7   SVC Managed Disk Group
svc_mdisk   11   SVC Managed Disk
vol   12   VolumeComponent
svc_node   15   SVC Node
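Before you build an SVC report, you can confirm which of the metrics listed in the following tables your subsystem actually collects by running the lsmetrics command described earlier in this paper against the component type that you plan to report on. This is only a sketch with placeholder credentials, URL, and GUID; how the component type is specified to lsmetrics should be checked against the lsmetrics examples earlier in this paper and the command help for your TPCTOOL level:
tpctool> lsmetrics -user <tpc_user> -pwd <password> -url <tpc_server>:<port> -subsys <SVC_GUID> -ctype svc_node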
Table 7-21 lists the SVC metrics by subsystem.
Table 7-21 SVC metrics by subsystem
SVC - ctype:subsystem 1 (Subsystem)
Metric   Value
Total Data Rate   821
Read Response Time   822
Port Receive Data Rate   859
Write Response Time   823
Overall Response Time   824
Read Transfer Size   825
Write Transfer Size   826
Overall Transfer Size   827
Disk to Cache Transfer Rate   830
Cache to Disk Transfer Rate   831
Write-cache Delay Percentage   832
Write-cache Delay I/O Rate   833
Backend Read I/O Rate   835
Backend Write I/O Rate   836
Total Backend I/O Rate   837
Backend Read Data Rate   838
Backend Write Data Rate   839
Total Backend Data Rate   840
Backend Read Response Time   841
Backend Write Response Time   842
Overall Backend Response Time   843
Read Queue Time   844
Write Queue Time   845
Overall Queue Time   846
Backend Read Transfer Size   847
Backend Write Transfer Size   848
Overall Backend Transfer Size   849
Port Send Data Rate   858
Read I/O Rate (overall)   803
Write I/O Rate (overall)   806
Total I/O Rate (overall)   809
Read Cache Hits Percentage (overall)   812
Write Cache Hits Percentage (overall)   815
Total Cache Hits Percentage (overall)   818
Read Data Rate   819
Write Data Rate   820
Port Send I/O Rate   852
Port Receive I/O Rate   853
Total Port I/O Rate   854
Total Port Data Rate   860
Readahead Percentage of Cache Hits   890
Dirty Write Percentage of Cache Hits   891
Write Cache Overflow Percentage   894
Write Cache Overflow I/O Rate   895
Write Cache Flush-through Percentage   896
Write Cache Flush-through I/O Rate   897
Write Cache Write-through Percentage   898
Write Cache Write-through I/O Rate   899
CPU Utilization Percentage   900
Port to Host Send I/O Rate   901
Port to Host Receive I/O Rate   902
Total Port to Host I/O Rate   903
Port to Disk Send I/O Rate   904
Port to Disk Receive I/O Rate   905
Total Port to Disk I/O Rate   906
Port to Local Node Send I/O Rate   907
Port to Local Node Receive I/O Rate   908
Total Port to Local Node I/O Rate   909
Port to Remote Node Send I/O Rate   910
Port to Remote Node Receive I/O Rate   911
Total Port to Remote Node I/O Rate   912
Port to Host Send Data Rate   913
Port to Host Receive Data Rate   914
Total Port to Host Data Rate   915
Port to Disk Send Data Rate   916
Port to Disk Receive Data Rate   917
Total Port to Disk Data Rate   918
Port to Local Node Send Data Rate   919
Port to Local Node Receive Data Rate   920
Total Port to Local Node Data Rate   921
Port to Remote Node Send Data Rate   922
Port to Remote Node Receive Data Rate   923
Total Port to Remote Node Data Rate   924
Port to Local Node Send Response Time   925
Port to Local Node Receive Response Time   926
Overall Port to Local Node Response Time   927
Port to Local Node Send Queue Time   928
Port to Local Node Receive Queue Time   929
Overall Port to Local Node Queue Time   930
Port to Remote Node Send Response Time   931
Port to Remote Node Receive Response Time   932
Overall Port to Remote Node Response Time   933
Port to Remote Node Send Queue Time   934
Port to Remote Node Receive Queue Time   935
Overall Port to Remote Node Queue Time   936
Global Mirror Write I/O Rate   937
Global Mirror Overlapping Write Percentage   938
Global Mirror Overlapping Write I/O Rate   939
Peak Read Response Time   940
Peak Write Response Time   941
Global Mirror Secondary Write Lag   942
Table 7-22 lists the SVC HBA port metrics available for reports.
Table 7-22 SVC HBA port metrics
SVC - ctype:subsys_port 2 (HBA port)
Metric   Value
Port Receive Data Rate   859
Port Send Data Rate   858
Port Send I/O Rate   852
Port Receive I/O Rate   853
Total Port I/O Rate   854
Total Port Data Rate   860
Port to Host Send I/O Rate   901
Port to Host Receive I/O Rate   902
Total Port to Host I/O Rate   903
Port to Disk Send I/O Rate   904
Port to Disk Receive I/O Rate   905
Total Port to Disk I/O Rate   906
Port to Local Node Send I/O Rate   907
Port to Local Node Receive I/O Rate   908
Total Port to Local Node I/O Rate   909
Port to Remote Node Send I/O Rate   910
Port to Remote Node Receive I/O Rate   911
Total Port to Remote Node I/O Rate   912
Port to Host Send Data Rate   913
Port to Host Receive Data Rate   914
Total Port to Host Data Rate   915
Port to Disk Send Data Rate   916
Port to Disk Receive Data Rate   917
Total Port to Disk Data Rate   918
Port to Local Node Send Data Rate   919
Port to Local Node Receive Data Rate   920
Total Port to Local Node Data Rate   921
Port to Remote Node Send Data Rate   922
Port to Remote Node Receive Data Rate   923
Total Port to Remote Node Data Rate   924
Table 7-23 lists SVC I/O group metrics available for reports.
Table 7-23 SVC I/O Group metrics
SVC - ctype:svc_iogrp 5 (SVC I/O Group)
Metric   Value
Total Data Rate   821
Read Response Time   822
Write Response Time   823
Port Receive Data Rate   859
Overall Response Time   824
Read Transfer Size   825
Write Transfer Size   826
Overall Transfer Size   827
Disk to Cache Transfer Rate   830
Cache to Disk Transfer Rate   831
Write-cache Delay Percentage   832
Write-cache Delay I/O Rate   833
Backend Read I/O Rate   835
Backend Write I/O Rate   836
Total Backend I/O Rate   837
Backend Read Data Rate   838
Backend Write Data Rate   839
Total Backend Data Rate   840
Backend Read Response Time   841
Backend Write Response Time   842
Overall Backend Response Time   843
Read Queue Time   844
Write Queue Time   845
Overall Queue Time   846
Backend Read Transfer Size   847
Backend Write Transfer Size   848
Overall Backend Transfer Size   849
Read I/O Rate (overall)   803
Write I/O Rate (overall)   806
Total I/O Rate (overall)   809
Read Cache Hits Percentage (overall)   812
Write Cache Hits Percentage (overall)   815
Total Cache Hits Percentage (overall)   818
Port Send Data Rate   858
Read Data Rate   819
Write Data Rate   820
Port Send I/O Rate   852
Port Receive I/O Rate   853
Total Port I/O Rate   854
Total Port Data Rate   860
Readahead Percentage of Cache Hits   890
Dirty Write Percentage of Cache Hits   891
Write Cache Overflow Percentage   894
Write Cache Overflow I/O Rate   895
Write Cache Flush-through Percentage   896
Write Cache Flush-through I/O Rate   897
Write Cache Write-through Percentage   898
Write Cache Write-through I/O Rate   899
CPU Utilization Percentage   900
Port to Host Send I/O Rate   901
Port to Host Receive I/O Rate   902
Total Port to Host I/O Rate   903
Port to Disk Send I/O Rate   904
Port to Disk Receive I/O Rate   905
Total Port to Disk I/O Rate   906
Port to Local Node Send I/O Rate   907
Port to Local Node Receive I/O Rate   908
Total Port to Local Node I/O Rate   909
Port to Remote Node Send I/O Rate   910
Port to Remote Node Receive I/O Rate   911
Total Port to Remote Node I/O Rate   912
Port to Host Send Data Rate   913
Port to Host Receive Data Rate   914
Total Port to Host Data Rate   915
Port to Disk Send Data Rate   916
Port to Disk Receive Data Rate   917
Total Port to Disk Data Rate   918
Port to Local Node Send Data Rate   919
Port to Local Node Receive Data Rate   920
Total Port to Local Node Data Rate   921
Port to Remote Node Send Data Rate   922
Port to Remote Node Receive Data Rate   923
Total Port to Remote Node Data Rate   924
Port to Local Node Send Response Time   925
Port to Local Node Receive Response Time   926
Overall Port to Local Node Response Time   927
Port to Local Node Send Queue Time   928
Port to Local Node Receive Queue Time   929
Overall Port to Local Node Queue Time   930
Port to Remote Node Send Response Time   931
Port to Remote Node Receive Response Time   932
Overall Port to Remote Node Response Time   933
Port to Remote Node Send Queue Time   934
Port to Remote Node Receive Queue Time   935
Overall Port to Remote Node Queue Time   936
Global Mirror Write I/O Rate   937
Global Mirror Overlapping Write Percentage   938
Global Mirror Overlapping Write I/O Rate   939
Peak Read Response Time   940
Peak Write Response Time   941
Global Mirror Secondary Write Lag   942
Table 7-24 lists the SVC Managed Disk Group metrics available for reports.
Table 7-24 SVC Managed Disk Group metrics
SVC - ctype:svc_mdgrp 7 (SVC Managed Disk Group)
Metric   Value
Total Data Rate   821
Read Response Time   822
Write Response Time   823
Overall Response Time   824
Read Transfer Size   825
Write Transfer Size   826
Overall Transfer Size   827
Backend Read I/O Rate   835
Backend Write I/O Rate   836
Total Backend I/O Rate   837
Backend Read Data Rate   838
Backend Write Data Rate   839
Total Backend Data Rate   840
Backend Read Response Time   841
Backend Write Response Time   842
Overall Backend Response Time   843
Read Queue Time   844
Write Queue Time   845
Overall Queue Time   846
Backend Read Transfer Size   847
Backend Write Transfer Size   848
Overall Backend Transfer Size   849
Read I/O Rate (overall)   803
Write I/O Rate (overall)   806
Total I/O Rate (overall)   809
Read Data Rate   819
Write Data Rate   820
Table 7-25 lists SVC Managed Disk metrics available for reports.
Table 7-25 SVC Managed Disk metrics
SVC - ctype:svc_mdisk 11 (SVC Managed Disk)
Metric   Value
Backend Read I/O Rate   835
Backend Write I/O Rate   836
Total Backend I/O Rate   837
Backend Read Data Rate   838
Backend Write Data Rate   839
Total Backend Data Rate   840
Backend Read Response Time   841
Backend Write Response Time   842
Overall Backend Response Time   843
Read Queue Time   844
Write Queue Time   845
Overall Queue Time   846
Backend Read Transfer Size   847
Backend Write Transfer Size   848
Overall Backend Transfer Size   849
Table 7-26 lists the SVC volume component metrics available for reports.
Table 7-26 SVC volume component metrics
SVC - ctype:vol 12 (VolumeComponent)
Metric   Value
Total Data Rate   821
Read Response Time   822
Write Response Time   823
Overall Response Time   824
Read Transfer Size   825
Write Transfer Size   826
Overall Transfer Size   827
Disk to Cache Transfer Rate   830
Cache to Disk Transfer Rate   831
Write-cache Delay Percentage   832
Write-cache Delay I/O Rate   833
Read I/O Rate (overall)   803
Write I/O Rate (overall)   806
Total I/O Rate (overall)   809
Read Cache Hits Percentage (overall)   812
Write Cache Hits Percentage (overall)   815
Total Cache Hits Percentage (overall)   818
Read Data Rate   819
Write Data Rate   820
Readahead Percentage of Cache Hits   890
Dirty Write Percentage of Cache Hits   891
Write Cache Overflow Percentage   894
Write Cache Overflow I/O Rate   895
Write Cache Flush-through Percentage   896
Write Cache Flush-through I/O Rate   897
Write Cache Write-through Percentage   898
Write Cache Write-through I/O Rate   899
Global Mirror Write I/O Rate   937
Global Mirror Overlapping Write Percentage   938
Global Mirror Overlapping Write I/O Rate   939
Peak Read Response Time   940
Peak Write Response Time   941
Global Mirror Secondary Write Lag   942
Table 7-27 lists SVC node metrics available for reports.
Table 7-27 SVC node metrics
SVC - ctype:svc_node 15 (SVC Node)
Metric   Value
Total Data Rate   821
Port Send Data Rate   858
Read Response Time   822
Write Response Time   823
Overall Response Time   824
Read Transfer Size   825
Write Transfer Size   826
Overall Transfer Size   827
Disk to Cache Transfer Rate   830
Cache to Disk Transfer Rate   831
Write-cache Delay Percentage   832
Write-cache Delay I/O Rate   833
Backend Read I/O Rate   835
Backend Write I/O Rate   836
Total Backend I/O Rate   837
Backend Read Data Rate   838
Backend Write Data Rate   839
Total Backend Data Rate   840
Backend Read Response Time   841
Backend Write Response Time   842
Overall Backend Response Time   843
Read Queue Time   844
Write Queue Time   845
Overall Queue Time   846
Backend Read Transfer Size   847
Backend Write Transfer Size   848
Overall Backend Transfer Size   849
Read I/O Rate (overall)   803
Write I/O Rate (overall)   806
Total I/O Rate (overall)   809
Read Cache Hits Percentage (overall)   812
Write Cache Hits Percentage (overall)   815
Total Cache Hits Percentage (overall)   818
Read Data Rate   819
Write Data Rate   820
Port Send I/O Rate   852
Port Receive I/O Rate   853
Total Port I/O Rate   854
Port Receive Data Rate   859
Total Port Data Rate   860
Readahead Percentage of Cache Hits   890
Dirty Write Percentage of Cache Hits   891
Write Cache Overflow Percentage   894
Write Cache Overflow I/O Rate   895
Write Cache Flush-through Percentage   896
Write Cache Flush-through I/O Rate   897
Write Cache Write-through Percentage   898
Write Cache Write-through I/O Rate   899
CPU Utilization Percentage   900
Port to Host Send I/O Rate   901
Port to Host Receive I/O Rate   902
Total Port to Host I/O Rate   903
Port to Disk Send I/O Rate   904
Port to Disk Receive I/O Rate   905
Total Port to Disk I/O Rate   906
Port to Local Node Send I/O Rate   907
Port to Local Node Receive I/O Rate   908
Total Port to Local Node I/O Rate   909
Port to Remote Node Send I/O Rate   910
Port to Remote Node Receive I/O Rate   911
Total Port to Remote Node I/O Rate   912
Port to Host Send Data Rate   913
Port to Host Receive Data Rate   914
Total Port to Host Data Rate   915
Port to Disk Send Data Rate   916
Port to Disk Receive Data Rate   917
Total Port to Disk Data Rate   918
Port to Local Node Send Data Rate   919
Port to Local Node Receive Data Rate   920
Total Port to Local Node Data Rate   921
Port to Remote Node Send Data Rate   922
Port to Remote Node Receive Data Rate   923
Total Port to Remote Node Data Rate   924
Port to Local Node Send Response Time   925
Port to Local Node Receive Response Time   926
Overall Port to Local Node Response Time   927
Port to Local Node Send Queue Time   928
Port to Local Node Receive Queue Time   929
Overall Port to Local Node Queue Time   930
Port to Remote Node Send Response Time   931
Port to Remote Node Receive Response Time   932
Overall Port to Remote Node Response Time   933
Port to Remote Node Send Queue Time   934
Port to Remote Node Receive Queue Time   935
Overall Port to Remote Node Queue Time   936
Global Mirror Write I/O Rate   937
Global Mirror Overlapping Write Percentage   938
Global Mirror Overlapping Write I/O Rate   939
Peak Read Response Time   940
Peak Write Response Time   941
Global Mirror Secondary Write Lag   942
7.5 Metrics for switch fabric
Table 7-28 lists the switch fabric component types available for reports.
Table 7-28 Switch fabric component types available for reports
Component type available
switch   13   Switch
switch_port   14   Switch Port
Table 7-29 lists the switch metrics available for reports.
Table 7-29 Switch metrics
Switch FABRIC - ctype: switch 13 (Switch)
Metric   Value
Port Send Data Rate   858
CRC Error Rate   877
Link Failure Rate   874
Loss of Sync Rate   875
Port Send Packet Rate   855
Port Receive Packet Rate   856
Total Port Packet Rate   857
Port Receive Data Rate   859
Total Port Data Rate   860
Port Peak Send Data Rate   861
Port Peak Receive Data Rate   862
Port Send Packet Size   869
Port Receive Packet Size   870
Overall Port Packet Size   871
Error Frame Rate   872
Dumped Frame Rate   873
Loss of Signal Rate   876
Short Frame Rate   878
Long Frame Rate   879
Encoding Disparity Error Rate   880
Discarded Class3 Frame Rate   881
F-BSY Frame Rate   882
F-RJT Frame Rate   883
Table 7-30 lists switch port metrics available for reports.
Table 7-30 Switch port metrics
Switch FABRIC - ctype: switch_port 14 (Switch Port)
Metric   Value
Port Send Data Rate   858
Link Failure Rate   874
CRC Error Rate   877
Port Send Packet Rate   855
Port Receive Packet Rate   856
Total Port Packet Rate   857
Port Receive Data Rate   859
Total Port Data Rate   860
Port Peak Send Data Rate   861
Port Peak Receive Data Rate   862
Port Send Packet Size   869
Port Receive Packet Size   870
Overall Port Packet Size   871
Error Frame Rate   872
Dumped Frame Rate   873
Loss of Sync Rate   875
Loss of Signal Rate   876
Short Frame Rate   878
Long Frame Rate   879
Encoding Disparity Error Rate   880
Discarded Class3 Frame Rate   881
F-BSY Frame Rate   882
F-RJT Frame Rate   883
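As a final example, the error counters in Table 7-30, such as Link Failure Rate (874), Loss of Sync Rate (875), and CRC Error Rate (877), can be combined into a single switch port report and redirected to a file for spreadsheet analysis, as described earlier in this paper. The following single-shot command is only a sketch: the credentials, URL, identifier, start time, and duration are placeholders, and the option used here to identify the fabric (-fabric with a WWN) is an assumption that you should verify against the switch port report example earlier in this paper and the command help for your TPCTOOL level.
tpctool getrpt -user <tpc_user> -pwd <password> -url <tpc_server>:<port> -fabric <fabric_WWN> -ctype switch_port -columns 874,875,877 -start <start_timestamp> -duration <seconds> -level sample -fs "," > switchport_errors.csv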
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this Redpaper.
IBM Redbooks
For information about ordering these publications, see “How to get IBM Redbooks” on
page 119. Note that some of the documents referenced here may be available in softcopy
only.
- TotalStorage Productivity Center Advanced Topics, SG24-7348
- Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364
How to get IBM Redbooks
You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft
publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at
this Web site:
ibm.com/redbooks
Help from IBM
IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.com/services
Index
Numerics
15K RPM DDMs 35
A
AgentCLI 2
B
backend IO operations 44
Backend IO rate 47
backend response time 45
C
command mode 7
config files 6
creating reports 66
CSV file export 48
D
DS array metrics 101
DS component type metrics 95
DS controller metrics 98
DS device adapter metrics 99
DS HBA port metrics 98
DS rank metrics 100
DS subsystem metrics 96
DS volume component metrics 102
DS4000 HBA port metrics 86
DS4000 report metrics 86
DS4000 subsystem metrics 86
DS4000 volume metrics 87
E
ESS array metrics 93
ESS component type metrics 87
ESS controller metrics 90
ESS device adapter metrics 91
ESS HBA port metrics 89
ESS subsystem metrics 87
ESS volume component metrics 94
exported data 66
F
frontend I/O operations 44
G
getrpt command 40, 54, 59, 65
globally-unique identifier 17
graphs 79
GUID 17
H
Host Authentication Password 4
I
import into template 77
interactive command mode 7
L
large reads 32
Line graph 80
lscomp command 22
lsdev command 2, 17, 22
lsmetrics command 22, 54
lstime command 22
lstype command 22
M
Multiple / Script command mode 8
N
NVS Full Percentage 41
O
OLTP applications 35
output to text file 24
Overall Backend Response time threshold 47
P
perfcli 2
performance graph 83
performance metric 10
performance metrics rules of thumb 26
performance reports
  prerequisite tasks 16
performance snapshots 50
policy setting 50
Port Data Rate 51
Port Response Time 51
R
RAID ranks 34
Random Read I/O 33
Read Cache Hit Percentage 41
Read Hit Percentages 32
Redbooks Web site 119
  Contact us viii
report graphs 79
report macro 73
report template 77
report timestamp considerations 72
reports
  CLI interface 14
response time chart 49
response time evaluation 39
response time factors 27
response time metrics 30
response times 29
rule of thumb
  rank 35
S
scatter plots 80
SCRMCP 2
single-shot command mode 7
small block reads 32
small block writes 32
specifying a report delimiter 68
SVC component type metrics 103
SVC HBA port metrics 106
SVC I/O group metric 107
SVC Managed Disk Group metrics 110
SVC Managed Disk metrics 111
SVC metrics 58
SVC node metrics 113
SVC subsystem metrics 104
SVC volume component metrics 112
switch fabric component metrics 116
switch metrics 116
switch port metrics 117
switch ports reports 62
syntax to a file 8
T
text file output 24
throughput measurements 44
throughput metrics 30, 38
TOTAL Backend I/O rate threshold 47
Total Cache Hit percentage 41
TOTAL I/O rate threshold 47
TPCCLI.CONF file 4, 6
TPCTOOL CLI 48
TPCTOOL overview 2
TPCTOOL report metrics 85
Back cover
Reporting with TPCTOOL
Redpaper
Learn the reporting capabilities of TPCTOOL
Create customized reports
Evaluate report data
TPCTOOL is a command line interface (CLI) based program that interacts with the TotalStorage Productivity Center Device server. It lets you create graphs and charts with multiple metrics, with different unit types, and for multiple entities (for example, subsystems, volumes, controllers, and arrays). Commands are entered as lines of text, and output can be received as text.
INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
This IBM Redpaper gives you an overview of the function of
TPCTOOL and shows you how to use it to generate reports
based on your TotalStorage Productivity Center repository
data.
BUILDING TECHNICAL
INFORMATION BASED ON
PRACTICAL EXPERIENCE
IBM Redbooks are developed
by the IBM International
Technical Support
Organization. Experts from
IBM, Customers and Partners
from around the world create
timely technical information
based on realistic scenarios.
Specific recommendations
are provided to help you
implement IT solutions more
effectively in your
environment.
For more information:
ibm.com/redbooks
REDP-4230-00