Front cover
IBM System Storage DS8000
Easy Tier Server
Unified storage caching and
tiering solution
Leverage AIX direct-attached
flash devices
Cache management and
workload monitoring
Bertrand Dufrasne
Bruno Anderson Barbosa
Peter Cronauer
Delmar Demarchi
Hans-Paul Drumm
Ronny Eliahu
Xin Liu
Michael Stenson
ibm.com/redbooks
Redpaper
International Technical Support Organization
IBM System Storage DS8000 Easy Tier Server
August 2013
REDP-5013-00
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
First Edition (August 2013)
This edition applies to the IBM System Storage DS8870 with Licensed Machine Code (LMC) 7.7.10.xx.xx
(bundle version 87.10.xxx.xx).
This document was created or updated on August 7, 2013.
© Copyright International Business Machines Corporation 2013. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Chapter 1. Easy Tier Server overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.1 Introduction to Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.1.1 General Easy Tier functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.1.2 Easy Tier evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.1.3 Easy Tier fifth generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2 Easy Tier Server overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2.1 Business motivation for Easy Tier Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2.2 Easy Tier Server for performance improvements . . . . . . . . . . . . . . . . . . . . . . . . . 18
Chapter 2. Easy Tier Server concepts and architecture . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1 Easy Tier Server Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.2 Easy Tier Server architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3 Easy Tier Server design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3.1 Easy Tier Server coherency client to coherency server communication. . . . . . . . 25
2.4 Easy Tier Server caching details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.4.1 Caching advice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5 Easy Tier Server and Easy Tier data placement integration . . . . . . . . . . . . . . . . . . . . . 29
2.6 Direct-attached storage considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Chapter 3. Planning for Easy Tier Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.1 Planning and requirements guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1.1 Easy Tier Server coherency server requirements. . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1.2 Easy Tier Server coherency client requirements . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1.3 Supported DAS enclosures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.1.4 Easy Tier Server coherency client and server connectivity. . . . . . . . . . . . . . . . . . 38
3.2 Validating Easy Tier Server requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2.1 Easy Tier Server coherency server validation. . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2.2 Easy Tier Server coherency client validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.3 Other considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.3.1 DS CLI and DS GUI support to Easy Tier Server . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.3.2 Easy Tier and Easy Tier Server integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.3.3 Easy Tier Server interaction with other DS8870 advanced features. . . . . . . . . . . 49
Chapter 4. Easy Tier Server implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.1 Implementing Easy Tier Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.1.1 Setting up DS8870 for Easy Tier Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.1.2 Setting up an AIX host for Easy Tier Server client . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2 Uninstalling Easy Tier server coherency client driver . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.3 Upgrading Easy Tier Server coherency client driver . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Chapter 5. Managing Easy Tier Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.1 Managing Easy Tier Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.1.1 Easy Tier Server coherency client management tasks . . . . . . . . . . . . . . . . . . . . . 70
5.1.2 Managing and configuring direct-attached storage . . . . . . . . . . . . . . . . . . . . . . . . 77
Chapter 6. Easy Tier Server monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.1 Monitoring Easy Tier Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.1.1 DS8870 Storage Tier Advisor Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.1.2 AIX operating system IOSTAT Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.1.3 Monitoring with etcadmin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®
DB2®
DS8000®
Easy Tier®
Enterprise Storage Server®
FlashCopy®
Global Technology Services®
HyperSwap®
IBM®
Power Systems™
POWER6®
POWER7+™
POWER7®
Redbooks®
Redpaper™
Redbooks (logo) ®
System p®
System Storage®
Tivoli®
z/OS®
The following terms are trademarks of other companies:
Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
IBM® Easy Tier® Server is one of several Easy Tier enhancements introduced with the
IBM DS8000® Licensed Machine Code 7.7.10.xx.xx. Easy Tier Server is a unified storage
caching and tiering solution across IBM AIX® servers and supported direct-attached storage
(DAS) flash drives. Easy Tier Server manages placing a copy of the “hottest” volume extents
on flash drives attached to an AIX server. Data can be read directly from flash drives local to
the application host rather than from cache or disk drives in the DS8870, while maintaining
other advanced feature functions.
This IBM Redpaper™ publication explains the Easy Tier Server concept and explores key
aspects of its architecture, design, and implementation.
From a more practical standpoint, this publication also contains numerous illustrations and
examples that help you set up, manage, and monitor Easy Tier Server.
Authors
This paper was produced by a team of specialists from around the world working at the
International Technical Support Organization (ITSO), San Jose Center.
Bertrand Dufrasne is an IBM Certified IT Specialist and Project Leader for IBM System
Storage® disk products at the ITSO, San Jose Center. He has worked at IBM in various IT
areas. He has written many IBM Redbooks® publications and has developed and taught
technical workshops. Before joining the ITSO, he worked for IBM Global Services as an
Application Architect. He holds a Master’s degree in Electrical Engineering.
Bruno Anderson Barbosa is a Senior Software Support Specialist for Global Technology
Services (GTS) in IBM Brazil. He has seven years of experience working with
IBM Power Systems™, storage area network (SAN) and IBM Storage Systems. He holds a
degree in IT Systems Analysis, a Business Administration postgraduation degree, and an
MBA diploma in Information Technology Management from Fundação Getúlio Vargas (FGV)
in Brazil. His areas of expertise include Implementation, Software Support, Problem
Determination, and Performance Analysis on IBM Storage Products for Open Systems.
Peter Cronauer is a certified SAP consultant who works for IBM in the European Storage
Competence Center (ESCC) in Germany. He joined the IBM Advanced Technical Skills (ATS)
department implementing and managing client projects for SAP solutions on IBM storage. He
managed the STG Lab Services storage for Europe and the ATS development support
department. Peter is the speaker of the Solution Advisory Board (SAB) and has led Redbooks
residencies on various storage topics. He holds a diploma in Computer Science and wrote
white papers on SAP and IBM storage solutions.
Delmar Demarchi is an IT Storage Specialist on the IBM Lab Services Brazil team, with more than
20 years of experience. He has expertise in IBM System p® and Pure Systems, UNIX and
High Availability solutions, IBM Storage Systems, and SAN products. Delmar participated in
various projects involving those technologies. Delmar holds an MBA Diploma in Business and
Information Technology from the Fundação Getulio Vargas in Brazil.
Hans-Paul Drumm is an IT Specialist at IBM Germany. He has 28 years of experience in the
IT industry. He has worked at IBM for twelve years. He holds a degree in Computer Science
from the University of Kaiserslautern. His areas of expertise include Solaris, HP-UX, Veritas
Storage Foundation, and IBM z/OS®, with a special focus on Disk Solutions Attachment.
Ronny Eliahu is an IBM Senior SAN and Storage Architect with over 10 years of experience.
Ronny has worked for several clients with large SAN deployment. He is a member of the
Storage Networking Industry Association (SNIA) and participated in a few panels for
developing standards and procedures. Ronny also participated in a project in Southern
France, involving IBM DB2®, SAP, and multi-partition database on IBM System p (Big Data).
Xin Liu is a Senior IT Specialist in IBM China. He has seven years of experience in the
technical support area for IBM system hardware and software products. He joined IBM Global
Technology Services® in 2006 as a Field Software Engineer supporting Power Systems and
Storage. In 2010, he became a member of the storage ATS team, and since then he worked
on Storage Pre-Sales Support focusing on High-End Storage Solutions. Xin holds a Master’s
degree in Electronic Engineering from Tsinghua University, China.
Michael Stenson joined IBM DS8000 Product Engineering in 2005 and is currently the team
lead in Tucson, Arizona. He has worked with all generations of the DS8000, as well as the
IBM Enterprise Storage Server® (ESS). He has over 15 years of experience of administration
and engineering in storage, server, and network environments. His current focus is process,
tools, and document development.
Special thanks to the Enterprise Disk team manager, Bernd Müller; ESCC Pre-Sales and
Service Delivery Manager, Friedrich Gerken; and the ESCC Director, Klaus-Jürgen Rünger;
for their continuous interest and support regarding the ITSO Redbooks projects.
Many thanks to the following people who helped with equipment provisioning and preparation:
Roland Beisele, Uwe Heinrich Müller, Hans-Joachim Sachs, Günter Schmitt, Mike Schneider,
Dietmar Schniering, Stephan Schorn, Uwe Schweikhard, Edwin Weinheimer, Jörg Zahn.
IBM Systems Lab Europe, Mainz, Germany
Thanks to the following people for their contributions to this project:
Dale H Anderson, Chiahong Chen, Lawrence Chiu, John Elliott, Yong YG Guo (Vincent),
Yang Liu (Loren), Michael Lopez, Thomas Luther, Stephen Manthorpe, Allen Marin, Mei Mei,
Paul Muench, Andreas Reinhardt, Brian Rinaldi, Rick Ripberger, David Sacks, Louise Schillig,
Falk Schneider, Cheng-Chung Song, Jeff Steffan, Allen Wright.
IBM
Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at:
  ibm.com/redbooks
- Send your comments in an email to:
  redbooks@us.ibm.com
- Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
- Find us on Facebook:
  http://www.facebook.com/IBMRedbooks
- Follow us on Twitter:
  http://twitter.com/ibmredbooks
- Look for us on LinkedIn:
  http://www.linkedin.com/groups?home=&gid=2130806
- Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
  weekly newsletter:
  https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
- Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Chapter 1.
Easy Tier Server overview
IBM Easy Tier Server is a unified storage caching and tiering solution across AIX servers and
supported direct-attached storage (DAS) flash drives.
Easy Tier Server is one of several Easy Tier enhancements, introduced with the DS8000
Licensed Machine Code 7.7.10.xx.xx. Easy Tier is now in its fifth generation.
This chapter starts with a summary of the Easy Tier functions in general and briefly describes
how Easy Tier has evolved over five generations. The remaining sections provide an overview
of Easy Tier Server and discuss its business case and the potential performance improvements
it can bring in a DS8870 environment.
1.1 Introduction to Easy Tier
This section reviews the various functions and evolution of Easy Tier.
1.1.1 General Easy Tier functions
At the core of its functionality, Easy Tier is an optional and no-charge feature of the
IBM DS8700, DS8800, and DS8870 Storage Systems that offers enhanced capabilities
through automated hot spot management and data relocation, auto-rebalancing, manual
volume rebalancing and volume migration, rank depopulation, merging of extent pools, and
thin provisioning support. Easy Tier determines the appropriate tier of storage based on data
access requirements and then automatically and non-disruptively moves data to the
appropriate tier on the DS8000.
The basic IBM Easy Tier features can be summarized in two operating modes:
- Easy Tier Automatic Mode:
Easy Tier Automatic Mode is designed to automatically optimize storage performance and
storage economics management across different drive tiers through data placement on a
subvolume level in multitier or hybrid extent pools. Multitier or hybrid extent pools are
storage pools that contain a mix of different disk drive technologies or storage tiers. It can
automatically and non-disruptively relocate data at the subvolume level (extent level)
across different drive tiers or even within the same drive tier according to its data
temperature (I/O activity) to optimize performance and resource utilization. This feature
significantly improves the overall storage cost performance ratio and simplifies storage
performance tuning and management.
Easy Tier Automatic Mode manages any combination of the three disk drive technology
tiers available for the DS8000 series. In the DS8870, the following three disk technologies
are supported:
– Solid-state drives (SSDs)
– Serial-attached SCSI (SAS) Enterprise disks
– SAS Nearline disks
Easy Tier allows cold demotion and warm demotion. Cold demotion aims to optimize the
placement of extents across tiers, moving inactive extents, or extents with low activity
patterns, from a high-performance tier to a lower tier. Warm demotion is designed to
prevent the activity overload of a higher performance tier by demoting extents to a
lower-cost tier.
In Automatic Mode, Easy Tier also provides an auto-rebalance capability that adjusts the
system to continuously provide excellent performance by balancing the load on the ranks
within a given tier in an extent pool.
- Easy Tier Manual Mode:
Easy Tier Manual Mode allows a set of manually initiated actions to relocate data among
the storage system resources in a dynamic fashion (without any disruption of the host
operations). The Manual Mode capabilities include dynamic volume relocation, dynamic
extent pool merge, and rank depopulation. Dynamic volume relocation allows a DS8000
volume to be migrated to the same or another extent pool. This capability also provides
the means to manually rebalance the extents of a volume across ranks when additional
capacity is added to the pool. Dynamic extent pool merge allows an extent pool to be
merged to another extent pool. Rank depopulation allows you to remove an allocated rank
from an extent pool and relocate the allocated extents to the other ranks in the pool.
Combining these different capabilities greatly improves the configuration flexibility of the
DS8000 and provides ease of use.
Even though both modes have data relocation capabilities, Manual Mode and Automatic
Mode do not have the same goals:
- Easy Tier Manual Mode enables operations such as dynamic volume relocation and
dynamic extent pool merge that simplify manual DS8000 storage management regarding
capacity and performance needs.
- Easy Tier Automatic Mode enables automated storage performance and storage
economics management through automated data placement across or even within storage
tiers on extent level. Automatic Mode provides automated tiering capabilities on
subvolume (extent) level across different physical resources with various performance and
cost characteristics.
IBM System Storage DS8000 Easy Tier, REDP-4667, covers the Easy Tier concepts and
usage in detail. For more information, refer to that Redpaper publication.
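As a hedged illustration only (not taken from this publication), the Easy Tier monitoring and automatic mode settings can be displayed and changed with the DS CLI. The storage image ID below is a placeholder, and the accepted parameter values depend on the installed code level:

   # Display the current Easy Tier settings (ETAutoMode, ETMonitor) for the storage image
   dscli> showsi IBM.2107-75XXXXX

   # Enable Easy Tier automatic management and monitoring for all volumes
   dscli> chsi -etautomode all IBM.2107-75XXXXX
   dscli> chsi -etmonitor all IBM.2107-75XXXXX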
The general functions that we just described have been improved over time and new
functions have been added. A chronological summary of those functions is presented in the
section that follows: 1.1.2, “Easy Tier evolution” on page 15.
1.1.2 Easy Tier evolution
The evolution of Easy Tier advanced functions throughout its five generations is summarized
in Figure 1-1.
Easy Tier 1 (DS8700, introduced in R5.1): two-tier support (SSD + ENT or SSD + NL).
Automatic mode: promote, demote, and swap. Manual mode: dynamic extent pool merge and
dynamic volume relocation.

Easy Tier 2 (DS8700 and DS8800, R6.1): any two tiers (SSD + ENT, SSD + NL, or ENT + NL).
Automatic mode: promote, demote, swap, and auto-rebalance (hybrid pools only). Manual
mode: rank depopulation and manual volume rebalance.

Easy Tier 3 (DS8700 and DS8800, R6.2): any three tiers (SSD + ENT + NL). Auto-rebalance
for homogeneous pools and ESE volume support.

Easy Tier 4 (DS8700 and DS8800 at R6.3, DS8870 at R7.0): full support for Full Disk
Encryption (FDE) drives, with automatic data relocation capabilities for all FDE disk
environments and support for all manual mode commands in FDE environments.

Easy Tier 5 (DS8870, R7.1): Easy Tier Application (storage administrators can control data
placement via the CLI, and a directive data placement API enables software integration),
Easy Tier Heat Map Transfer (learning data capture and apply for heat map transfer in
remote copy environments), and Easy Tier Server (unified storage caching and tiering
capability for AIX servers).

Figure 1-1 Easy Tier functions by release
The first generation of Easy Tier introduced automated storage performance management by
efficiently boosting Enterprise-class performance with SSDs and automating storage tiering
from Enterprise-class (ENT) or Nearline (NL) drives to SSDs, thus optimizing SSD
deployments with minimal costs. It also introduced dynamic volume relocation and dynamic
extent pool merge.
The second generation of Easy Tier added automated storage economics management by
combining Enterprise-class drives with Nearline drives with the objective to maintain
Enterprise-tier performance while shrinking the footprint and reducing costs with large
capacity Nearline drives. The second generation also introduced intra-tier performance
management (auto-rebalance) for hybrid pools as well as manual volume rebalance and
rank depopulation.
The third generation of Easy Tier introduced further enhancements that provide automated
storage performance and storage economics management across all three drive tiers, which
allows you to consolidate and efficiently manage more workloads on a single DS8000
system. It also introduced support for auto-rebalance in homogeneous pools and support for
thin provisioned (extent space-efficient (ESE)) volumes.
The fourth generation of Easy Tier added support for Full Disk Encryption (FDE). FDE
can protect business-sensitive data by providing disk-based hardware encryption that is
combined with sophisticated key management software (IBM Tivoli® Key Lifecycle
Manager). This level of tiering offers an advanced level of security and efficiency in data
protection. For more information about this and other Easy Tier functions, refer to IBM System
Storage DS8000 Easy Tier, REDP-4667.
The fifth generation of Easy Tier starts with DS8000 Licensed Machine Code (LMC)
7.7.10.xx.xx, on bundle version 87.10.xxx.xx. The main approach of this Easy Tier generation
is to move the hottest data closer to the host and overcome latency of the storage area
network (SAN). Also, the Easy Tier design is evolving towards enabling applications to guide
Easy Tier data placement functions in order to increase performance even more (Easy Tier
Application) or maintain performance at a secondary site (Easy Tier Heat Map Transfer).
These advanced features offer a self-tuning mechanism and reduce administrative costs.
1.1.3 Easy Tier fifth generation
One of the Easy Tier fifth generation improvements is to move the hottest data closer to the
hosts (AIX hosts only in this release). This functionality is implemented by the integrated
Easy Tier Server feature, and is the focus of this IBM Redpaper publication.
Easy Tier fifth generation also brings additional enhancements. It implements application- and user-assisted data placement and optimization through another new feature, called
Easy Tier Application. Moreover, it introduces the Easy Tier Heat Map Transfer Utility to
manage data placement optimization across Copy Services-paired DS8870 Storage
Systems.
Easy Tier Application
Easy Tier Application is an application-aware storage application programming interface (API)
to help clients to deploy storage more efficiently. It enables applications and middleware to
direct more optimal placement of data by communicating important information about current
workload activity and application performance requirements.
In this release, the new Easy Tier Application feature also enables clients to assign distinct
application volumes to a particular tier in the Easy Tier pool, excluding them from Easy Tier’s
advanced data migration function. This provides a flexible option for clients that want to
ensure that certain applications remain on a particular tier to meet performance or cost
requirements.
For more information about this feature, refer to IBM System Storage DS8000 Easy Tier
Application, REDP-5014.
Easy Tier Heat Map Transfer
Easy Tier Heat Map Transfer enables a DS8870 Easy Tier optimized data placement on the
primary site of either Metro Mirror, Global Copy, or Global Mirror to be applied on a
DS8870 Storage System at the secondary site.
With this capability, DS8000 systems can maintain application-level performance at the
secondary site following a failover from the primary to secondary site. Upon receiving a
heat map, also known as learning data, Easy Tier on the DS8000 at the secondary site
regularly follows the heat map to relocate data to the most appropriate storage tiers.
See IBM System Storage DS8000 Easy Tier Heat Map Transfer, REDP-5015, for detailed
information about this feature.
1.2 Easy Tier Server overview
Easy Tier Server allows the most frequently accessed or “hottest” data to be placed (cached)
closer to the hosts, thus overcoming the SAN latency. This feature, introduced with the
IBM DS8870 LMC 7.7.10.xx.xx, leverages the integration between AIX hosts and
DS8870 by implementing advanced caching coherency algorithms.
Essentially, Easy Tier Server copies the DS8870 “hottest” data to direct-attached storage
(DAS) solid-state drives (SSDs), also known as flash devices, on Easy Tier Server-enabled
hosts. Easy Tier Server caching coherency algorithms ensure that the most valuable data is
available on the host local flash devices and also guarantee data integrity across DS8870
internal tiers and the AIX hosts’ DAS cache.
The Easy Tier Server core relies on DS8870 cooperating with heterogeneous hosts to make a
global decision on which data to copy to the hosts’ local SSDs, for improved application
response time. Therefore, DAS SSD devices play an important role in an Easy Tier Server
implementation.
Solid-state storage direct-attached to the host means using a memory-type device for mass
storage, rather than a spinning disk. IBM is making solid-state storage affordable, with
innovative architectures, system and application integration, and management tools that
enable effective use of solid-state storage. By eliminating the rotational delay of a spinning
platter and of waiting for an arm to move to the correct position, solid-state drive technology
makes data available nearly immediately. Thus, it can result in a great performance
improvement when integrated with Easy Tier Server.
1.2.1 Business motivation for Easy Tier Server
Traditional usage of SSD technology within storage systems can certainly enhance the
system’s overall performance at many levels, but it comes at a cost. Easy Tier Server
architecture can not only break through I/O performance barriers, but also help to minimize
costs by leveraging SSD usage on the hosts, cooperatively with DS8870 Easy Tier data
placement functions.
DAS SSD cache on the hosts that are coherently integrated with DS8870 can enable
analytical applications that were not cost- or time-effective previously. Cooperative caching
can also drastically reduce I/O-bound batch windows, while improving response time and
throughput. Easy Tier Server can speed up critical response time applications to allow people
and systems to react more quickly and provide a higher service level to the business.
Specializing in high I/O performance, SSD cache has the upper hand in cost per input/output
operations per second (IOPS). To this effect, you can look at Easy Tier Server benefits from
two different, yet complementary, perspectives: performance improvements and
cost-efficiency ratio.
With regard to performance gains, Easy Tier Server allows response time improvements by
overcoming SAN latency when its algorithm fetches data from DS8870 and keeps it in the
hosts’ local SSD cache. In addition, DAS SSD cache on the hosts allows scaling up and
out while maintaining good performance levels.
On the other hand, a good cost-efficiency ratio can be achieved by dedicating SSD resources,
as cache devices, just for hosts or applications that would benefit the most from them or for
the mission critical ones. Therefore, the resources would be spent based on the importance
of the application to the business.
Because Easy Tier Server implements a read-only local DAS cache on the hosts, there are
some particular scenarios that can take the best advantage of this feature. In general terms,
read-intensive environments tend to most benefit from the Easy Tier Server cooperative
caching implementation.
The workload types listed below are just a few examples that are a good fit for Easy Tier
Server:
- Real-time analytics workload
- Large content data
- Online transaction processing (OLTP) workload
- Virtual machine (VM) consolidation
- Big Data
Whether it is a latency sensitive environment, high read/write ratio applications, or a highly
parallel processing system, there is an increasing need to process data quickly and
Easy Tier Server can be considered for these situations.
In some cases, the high volume of clients accessing a database can result in the need for a
high IOPS rate. Nevertheless, some applications simply cannot be run fast enough to satisfy
their business need. OLTP systems are the classic example of such applications. Many of
these use-cases create a need to operate at high speed.
In cases where the read performance of the storage can lead to a major bottleneck to the
environment, there is a high value in faster storage, and therefore, a good fit for the Easy Tier
Server. This feature is available at no charge with the DS8870.
1.2.2 Easy Tier Server for performance improvements
The Easy Tier Server feature has been extensively tested by IBM, and the test results have
shown that it can improve overall response time by up to five times, as explained in the demo
video Planned IBM Easy Tier integration with server-based SSDs. Although this video was
released in June 2012, a year before the official announcement of the feature by IBM, it
presents a good visual demonstration of the Easy Tier Server concept, not to mention the
real use-case performance improvement. The video is available at the following website:
http://www.youtube.com/watch?v=SLQfNoidG3I
In preparing this IBM Redpaper publication, we set up a similar environment to the one
described in the video, with Easy Tier Server enabled on a DS8870 Storage System and a
Power 770 host, attached to an EXP30 Expansion Enclosure with SSD devices.
Within this environment, we continuously generated a random OLTP workload against a set
of DS8870 logical unit numbers (LUNs) on the Power 770 host, periodically changing the I/O
pattern. The results are shown in the graph presented in Figure 1-2.
Disclaimer: Easy Tier Server workload tests conducted for this publication only
demonstrate Easy Tier Server feature behavior and concept, under a random workload.
These tests are not meant for benchmarking purposes and the results might vary
depending on the workload used and the systems in the environment.
Consult your IBM sales representative for benchmarking and performance-related
information about Easy Tier Server.
Figure 1-2 Easy Tier Server test with random OLTP workload
In this scenario, the workload was shifting and changing from time to time. The points on the
horizontal axis that are indicated by letters represent a workload variation, whether it
was in terms of read/write ratio, increased load, transfer size shifts, and so on. The overall
response time, on the vertical axis, was measured by the application that generated the
workload. The time is an average of both read and write response times.
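The response times in these charts were reported by the workload generator itself. As an illustrative sketch only, comparable read and write service times can also be observed on the AIX host with the iostat command; the hdisk names below are placeholders for the DS8870 LUNs under test:

   # Collect extended per-disk statistics (including read and write service times)
   # every 60 seconds, 10 times, for the example hdisks
   iostat -D hdisk4 hdisk5 60 10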
The test results demonstrated a significant performance improvement achieved
by the application after the Easy Tier Server implementation, with a four-fold
improvement in the overall response time to the application.
Figure 1-3 compares the highest response time that is presented in the different scenarios:
with Easy Tier Server enabled, and without Easy Tier Server.
Figure 1-3 Highest response time comparison in Easy Tier Server test with random OLTP workload
Chapter 2.
Easy Tier Server concepts and
architecture
This chapter describes the IBM Easy Tier Server concepts, architecture, and design
characteristics. It provides an insight into Easy Tier Server advanced caching functions.
2.1 Easy Tier Server Concepts
Easy Tier Server is a unified storage caching and tiering implementation across AIX hosts
and IBM System Storage DS8870. Easy Tier Server manages cache data placement across
direct-attached storage (DAS) solid-state drives (SSDs) within IBM Power Systems hosts and
DS8870 storage tiers by caching the “hottest” data on hosts’ local DAS flash disks.
Easy Tier Server copies the frequently accessed data from the DS8870 storage tier to SSD
drawers, directly attached to either an IBM POWER7® or IBM POWER7+™ supported host,
as listed in 3.1.2, “Easy Tier Server coherency client requirements” on page 35.
Thus, a host that is optimized for Easy Tier Server can read data directly from flash memory
that is locally attached to the host, rather than from disk drives or cache in the DS8870
storage system. This data-retrieval optimization results in improved performance, with I/O
requests that are generally satisfied in microseconds.
Essentially, the Easy Tier Server feature provides the following capabilities:
- Coordination of data placement across hosts’ flash cache, the DS8870 cache, and the
  DS8870 internal tiers.
- Managed consistency of the data across the set of hosts that access the data (it is
  assumed that only one host will access the data at a time).
- Caching of the “hottest” data to the direct-attached flash disks.
- Significant reduction of file-retrieval times.
Important: Easy Tier Server is an optional feature of the DS8870 and is available with the
feature code 0715 along with the corresponding Function Authorization, 7084.
No-Charge: Both Easy Tier and Easy Tier Server licenses, although required, are
available at no cost.
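As a hedged example, you can confirm from the DS CLI that both license keys are active on the storage image before proceeding; the storage image ID below is a placeholder, and the feature names shown in the output vary by code level:

   # List the activated license keys and verify that the Easy Tier and
   # Easy Tier Server features appear in the output
   dscli> lskey IBM.2107-75XXXXX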
2.2 Easy Tier Server architecture
The Easy Tier Server feature consists of two major components, as represented in Figure 2-1
on page 23:
- The Easy Tier Server coherency server, which runs on the DS8870 system.
The Easy Tier Server coherency server runs in the DS8870 and manages how data is
placed onto the internal flash caches on the attached hosts. Also, it integrates with Easy
Tier data placement functions for the best optimization on DS8870 internal tiers (SSD,
Enterprise, and Nearline). The coherency server asynchronously communicates with the
host systems (the coherency clients) and generates caching advice for each coherency
client, which is based on Easy Tier placement and statistics.
- The Easy Tier Server coherency client, which runs on the host system.
The Easy Tier Server coherency client keeps local caches on DAS solid-state drives. The
coherency client uses the Easy Tier Server protocol to establish system-aware caching
that interfaces with the coherency server. An Easy Tier Server coherency client driver
cooperates with the operating system to direct I/Os either to local DAS cache or to
DS8870, in a transparent way to the applications.
Figure 2-1 shows the Easy Tier Server architecture.
(The figure shows several IBM Power Systems hosts, each running AIX with the Easy Tier
Server coherency client and local DAS, exchanging data and statistics with the Easy Tier
Server coherency server that runs in the DS8870 firmware.)
Figure 2-1 Easy Tier Server architecture
Note: In this IBM Redpaper publication, we might refer to the Easy Tier Server coherency
server (DS8870) simply as the server, and to the Easy Tier Server coherency client (host)
as the client.
Easy Tier Server can run in existing hosts and DS8870 environments when the Easy Tier
Server software (client) is deployed on the hosts and if the direct-attached flash drawers are
installed on these hosts as well.
The Easy Tier Server coherency client software consists of an Easy Tier Server I/O driver and
a user-level daemon. On the DS8870, the Easy Tier Server coherency server is implemented
natively by the Licensed Machine Code, starting in version 7.7.10.xx.xx. It must be enabled by
particular licensing and configuration, as described in 3.1.1, “Easy Tier Server
coherency server requirements” on page 34.
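As a quick sketch, assuming the client driver fileset name that is introduced in Chapter 3 (bos.etcacheDD.rte), you can verify on the AIX host that the coherency client driver is installed and check the operating system level:

   # Verify that the Easy Tier Server coherency client driver fileset is installed
   lslpp -l bos.etcacheDD.rte

   # Display the AIX technology level to compare against the Easy Tier Server requirements
   oslevel -s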
The clients cooperate with the server to determine which data to cache in the DAS flash
drives, while the server is monitoring I/O statistics of DS8870 internal extents to determine
which ones are likely to benefit from DAS caching. When the extents are selected, the server
sends a list to each Easy Tier Server coherency client via storage area network (SAN).
Attention: DAS on the client is used only as read cache. Write I/Os from the client to
DS8870 disks (logical unit numbers (LUNs)) are directly transmitted to DS8870.
2.3 Easy Tier Server design
Easy Tier Server coherency clients are designed to route I/O read hits to the application host
DAS, while sending read misses directly to DS8870. In the same way, the write I/Os are
routed to DS8870 and cache pages related to the I/O address spaces are invalidated on the
client’s local cache to keep cache coherency and data integrity. Table 2-1 shows the expected
client behavior upon different I/O request types.
Table 2-1 Easy Tier Server coherency client behavior upon I/O Requests
Host I/O request and caching

Write
  DS8870: Normal write command processing
  Easy Tier Server coherency client:
    1. Invalidate the DAS cache contents for the addresses written
    2. Send write request to the DS8870

Read Miss
  DS8870: Normal read command processing
  Easy Tier Server coherency client:
    1. Send read request to the DS8870
    2. Store data read from DS8870 in the DAS cache

Read Hit
  DS8870: N/A
  Easy Tier Server coherency client:
    1. Return data in the DAS cache to the user request handler

Populate DAS cache based on Easy Tier Server coherency client and server heat advices
  DS8870: Normal read command processing
  Easy Tier Server coherency client:
    1. Move hot data from the DS8870 into the DAS cache
    2. Limit the effective size of the DAS cache so that DAS bandwidth and IOPS are
       not overloaded
The coherency client and coherency server share statistics to ensure that the best caching
decisions are made.
Local DAS caches on clients have the most current information about local data access
patterns. The coherency driver in the client Small Computer System Interface (SCSI) stack
determines what to cache based on access patterns, server-generated advice
(frequency-based advice), and local statistics (recency- and frequency-based statistics).
The cache algorithm makes decisions per-I/O for what to keep in cache and what to evict.
Selection algorithms achieve high efficiency by remembering the latest evicted tracks when
selecting what to promote into the cache.
The DS8870 receives, through its host adapters, incoming I/O requests sent from hosts over
the SAN. Each internal DS8870 server (Server 0 and Server 1) owns the access of half the
logical subsystems’ (LSSs) worth of volumes in the DS8870. This means that the host
adapters in the DS8870 forward requests from the SAN to the internal server that owns
access to the appropriate volume.
Easy Tier Server coherency server is designed to intercept I/Os directed to Easy Tier
Server-enabled LUNs. Its design implements mechanisms and algorithms to support the
movement of statistics data from hosts to DS8870 and advice from DS8870 to hosts.
2.3.1 Easy Tier Server coherency client to coherency server communication
The Easy Tier Server coherency clients communicate with the Easy Tier Server coherency
server using SCSI commands sent over the SAN. The SCSI commands used in this context
are IBM proprietary. Most of them transfer small amounts of data, in the kilobytes range,
which ensures a small communication overhead of the coherency protocol.
The proprietary protocol enables the server to invalidate cache on the client. This capability
means that the server can maintain global cache coherency for data integrity and robustness.
The Easy Tier Server I/O flow is depicted in Figure 2-2.
Figure 2-2 Easy Tier Server I/O flow
2.4 Easy Tier Server caching details
After receiving the list of hot extents from the server, the client then measures the heat of
subextent ranges called fragments to decide which ones to copy to local DAS cache. These
fragments are 1 MB in size.
Important: Although the DS8870 extent size is 1 GB, the Easy Tier Server coherency
client heat measurement is based on 1 MB fragments, providing fine granularity in
heat analysis and precise performance improvements by moving to DAS only the “hottest”
1 MB fragments within the 1 GB extent.
The client filters the “hottest” fragments to determine which are already cached and which
are eligible to be cached.
After a period of measuring fragment heat, the coherency client can decide to demote (evict)
fragments currently in the DAS cache and replace (populate) them with hotter fragments.
Client adaptive thresholds guarantee that only fragments that are hotter than those evicted are
populated. Moreover, the algorithm checks that the same fragment is accessed frequently
enough, which filters out any sequential data from being cached.
As already indicated, the Easy Tier Server coherency client manages 1 MB cache fragments.
Furthermore, each fragment is in a contiguous space of the DAS pool.
Although a 1 MB fragment is the unit of cache population and eviction on a client DAS pool,
which is based on the server’s advice, the invalidation is done at a sector level of 512 bytes.
Cache invalidation is a process whereby entries in a cache are deleted. A sector is
invalidated when its data is written and the cached data on the host is not coherent anymore.
Figure 2-3 represents the physical SSD 1 MB fragments and their 512-byte valid and invalid
sectors.
Figure 2-3 Easy Tier Server coherency client cache granularity
Upon population, the entire fragment is valid. Sectors that are later invalidated result in
512-byte “holes” in the 1 MB SSD cache space. The population and eviction logic eliminates
these holes.
2.4.1 Caching advice
Both the Easy Tier Server coherency client and server exchange messages to cooperatively
cache and place data. The client sends I/O statistics to the server and the server sends extent
placement advice to the client.
To optimize short-term performance with more frequent, fine-grained data movement, each
client sends I/O statistics to the server every 15 minutes. The advice and feedback from the
clients are used by the server in its advice generation process. Refer to Figure 2-4.
To respond to client workload quickly, the server generates data placement advice that is
based on one-hour average DS8870 performance statistics, always considering the latest
15-minute statistics.
Figure 2-4 Easy Tier Server coherency server and client caching advice exchange
As the coordinator among all heterogeneous clients, Easy Tier Server coherency server on
the DS8870 has a consolidated view of all performance statistic data from all Easy Tier
Server coherency clients, as depicted in Figure 2-5 on page 28.
Attention: Currently, the DS8870 supports a limited number of Easy Tier Server
coherency clients. Because of bandwidth considerations when promoting data to DAS, the
DS8870 supports a maximum of 16 coherency clients.
Figure 2-5 shows the Easy Tier Server coherency server coordinated view among clients.
Figure 2-5 Easy Tier Server coherency server coordinated view among clients
The server coordinates both the short-term performance optimization
(DAS SSD) and the longer-term performance optimization (DS8870 internal tiers).
2.5 Easy Tier Server and Easy Tier data placement integration
Although Easy Tier Server is responsible for promoting the “hottest” extents’ fragments to the
hosts’ local DAS flash cache, Easy Tier still manages the data placement optimization within
and across internal DS8870 tiers. Easy Tier operations in the DS8870 are based on the
following migration types, as represented by Figure 2-6:
- Promote and swap for moving hot data to higher performing tiers.
- Warm demote, which prevents performance overload of a tier by demoting warm extents to
  the lower tier; it is triggered when bandwidth or IOPS thresholds are exceeded.
- Cold demote on hard disk drive (HDD) tiers, in which the coldest data is identified and
  moved into the Nearline tier.
- Expanded cold demote for HDD tiers to demote some of the sequential workload to better
  use the bandwidth in the Nearline tier.
- Auto-rebalance, which redistributes the extents within a tier to balance utilization across all
  ranks of the same tier for maximum performance.
Figure 2-6 Easy Tier migration types
The Easy Tier Server coherency server integrates with Easy Tier internal data placement
functions to ensure that the home tier for given data is not adversely affected by DAS caching.
For example, DAS caching can make hot data appear cold to Easy Tier because little I/O is
reaching the DS8870.
Typically, Easy Tier demotes cold data to Nearline class storage. If hot data cached in DAS
becomes cold, an Easy Tier Server coherency client can demote that data from DAS,
invalidating the selection of Nearline class home storage. So, an Easy Tier Server coherency
server informs Easy Tier not to cold demote data that is made cold by DAS caching. The
server relies on the cached extent information from the client to determine which extents
might be hotter than the server measured.
Consequently, Easy Tier Server implies subtle changes to some of the Easy Tier migration
types, as discussed below.
Promote and swap migration type
The promote and swap migration type moves the most frequently accessed extents in the
long term to the higher performance tier in the DS8870 to improve performance. Extents
cached in DAS have no read access in the storage layer, but write access is tracked.
Promoting extents that are cached in DAS, if they are really hot, still benefits
performance. Therefore, extents that are cached in DAS are allowed to be
promoted.
Warm demote
Because warm demote aims to relieve the bandwidth overload on higher performance tiers,
and high-bandwidth extents are the target candidates, there is no implication on the extents
cached in DAS.
Cold demote and expanded cold demote
Easy Tier monitors the DS8870 statistics and selects the extents with no I/O for a long time to
cold demote them from the Enterprise to Nearline tier.
If extents have been cached in DAS for a long time, there would have been no hits for these
extents on DS8870 in this period. Thus, the I/O statistics for those extents in DS8870 will be
down to zero, but they are likely frequently accessed in DAS.
Easy Tier and Easy Tier Server algorithms will handle this situation in a way that extents on
DAS are not cold-demoted on DS8870.
Auto-rebalance
The auto-rebalance migration type moves extents to balance the IOPS among ranks on the
same tier. As with promote and swap, moving extents that are cached in DAS by
auto-rebalance still benefits performance. Therefore, extents cached in
DAS can be moved by auto-rebalance.
2.6 Direct-attached storage considerations
The Easy Tier Server coherency client requires at least one direct-attached storage
expansion drawer with SSDs to be the host local cache for Easy Tier Server implementation.
Currently, three different expansion drawer models are supported for this role: EXP30,
EXP24S, and 12X I/O Drawer PCIe (#5802), as described in Chapter 3.1.1, “Easy Tier Server
coherency server requirements” on page 34.
Tip: Although one expansion drawer is the minimum, if you know that you will need
multiple drawers, it is better to attach them from the very beginning rather than adding
drawers over time. By allocating all the drawers up front, Easy Tier avoids any rebalancing
when some cache devices are full.
When the drawer is attached to the Power Systems host, the newly added SSD devices
become available to the AIX operating system as pdisks, as explained in detail in
“Direct-attached storage expansion enclosure and SSD devices” on page 44. Then, you can
either use each one as a different cache device for Easy Tier Server or create
Redundant Array of Independent Disks (RAID) array devices for caching.
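For illustration only, after the drawer is attached you can list the SSD devices that AIX presents; the device names that appear depend on your configuration:

   # List the physical SSD devices (pdisks) contributed by the attached drawer
   lsdev -Cc pdisk

   # List the hdisks, which include any RAID arrays created from those pdisks
   lsdev -Cc hdisk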
In summary, a DAS Cache Directory is created on top of all physical cache devices defined
on the Easy Tier Server coherency client. This approach creates a layer of abstraction
between the hardware devices and the usable cache space available to the client driver,
regardless of whether it consists of single SSD devices or SSD RAID arrays, as represented
in Figure 2-7.
If the cache device is not created on top of a RAID array, failure of one of the SSDs means
that the entire cache needs to be destroyed and re-created. The SSDs may have different
sizes and different performance characteristics, but the performance characteristics are not
taken into account by the cache. The load is distributed evenly across the cache virtual
address space. If the devices have the same size, this effectively means that the load is
distributed evenly across devices, too.
Figure 2-7 Easy Tier Server coherency client DAS architecture
Easy Tier Server coherency client’s cache implementation, as well as DAS SSDs formatting,
are discussed in “Configuring direct-attached storage flash (SSD) devices” on page 61.
Chapter 3. Planning for Easy Tier Server
Before deploying IBM Easy Tier Server, you must check the minimum software and hardware
requirements for both the Easy Tier Server coherency server and the Easy Tier Server
coherency client. This chapter covers these requirements and also discusses other relevant
information to be validated when deploying Easy Tier Server.
3.1 Planning and requirements guidelines
As a baseline, the Easy Tier Server implementation requires licensing and enabling both
Easy Tier and Easy Tier Server on the IBM System Storage DS8870. For the client-side of
this solution, an AIX host must have particular direct-attached storage (DAS) expansion
drawers (with solid-state drives (SSDs)) locally attached. In addition, this host must have the
Easy Tier Server coherency client driver installed and configured.
Important: In order for a host to support DS8870 Easy Tier Server, you need to install and
configure supported flash (SSD) devices on the host system, as indicated in 3.1.2, “Easy
Tier Server coherency client requirements” on page 35.
The requirements that must be met and validated during the planning phase for Easy Tier
Server deployment are summarized into the following items:
1. Easy Tier Server coherency server: DS8870
   a. DS8870 Licensed Machine Code
   b. Easy Tier Licensing
   c. Easy Tier Server Licensing
   d. Easy Tier Monitor Setting
2. Easy Tier Server coherency clients: Power Systems
   a. AIX operating system level
   b. Supported DAS
   c. Easy Tier Server coherency client driver
   d. Configuration requirements on the host
3.1.1 Easy Tier Server coherency server requirements
Easy Tier Server is supported starting with DS8000 Licensed Machine Code 7.7.10.xx.xx. In
addition, both the Easy Tier and Easy Tier Server licenses (both available at no charge) must
be enabled on the DS8870.
All of the base requirements are listed in Table 3-1.
Table 3-1 DS8870 requirements for Easy Tier Server

  Requirement                                     Description
  Model                                           DS8870
  Licensed Machine Code (LMC)                     7.7.10.xx.xx or higher
  Hardware Configuration                          All supported
  Volume Support                                  Open System Volumes only (fixed block)
  Licenses Required                               Easy Tier (0713) and Easy Tier Server (0715),
                                                  along with the corresponding Function
                                                  Authorizations, 7083 and 7084, respectively
  Maximum number of Easy Tier Server coherency
  clients attached to the DS8870                  16 hosts
3.1.2 Easy Tier Server coherency client requirements
Easy Tier Server is supported on AIX hosts starting with operating system levels 6100-08 and
7100-02 on selected IBM Power Systems. The bos.etcacheDD.rte fileset provides the driver
that is required for the Easy Tier Server coherency client to work with the Easy Tier Server
coherency server.
Important: Currently, an IBM Power System running AIX is the only supported platform for
Easy Tier Server coherency clients.
All the base requirements for the Easy Tier Server coherency clients are listed in Table 3-2.

Table 3-2 Host requirements for Easy Tier Server

  Requirement                      Description
  Host Platform                    IBM Power Systems
  Hardware Model                   Power 720 (8202-E4B/E4C/E4D), Power 740 (8205-E6B/E6C/E6D),
                                   Power 750 (8233-E8B), Power 750 (8408-E8D),
                                   Power 760 (9109-RMD), Power 770 (9117-MMB/MMC/MMD),
                                   Power 780 (9179-MHB/MHC/MHD), Power 795 (9119-FHB)
  Direct-Attached Storage          EXP30 Ultra SSD I/O Drawer (FC-EDR1) with GX++ 2-port
  Expansion Enclosures with SSDs   PCIe2 x8 Adapters (FC-1914);
                                   EXP24S SFF Gen2-bay Drawer (FC-5887) with PCIe2 1.8 GB
                                   Cache RAID SAS Adapters Tri-port (FC-5913);
                                   FC 5802/5877 Expansion Drawer, SSD in expansion drawer
                                   via RAID SAS adapter
  Operating System                 Native AIX; no support for Virtual I/O Servers (VIOSs)
  OS Levels                        AIX 6100-08 and higher Technology Levels;
                                   AIX 7100-02 and higher Technology Levels
  Multipathing Driver              Native AIX MPIO (AIXPCM), Subsystem Device Driver (SDD),
                                   or Subsystem Device Driver Path Control Module (SDDPCM)
  Maximum number of SSD            Maximum values supported by the Power Systems host
  Cache Devices
  SAN                              All Power Systems and DS8870 supported switches and
                                   host bus adapters (HBAs)
Attention: Refer to the IBM System Storage Interoperation Center (SSIC) for the most
up-to-date list of supported host devices on the DS8870 at the following website:
http://www.ibm.com/systems/support/storage/config/ssic
Besides the hardware and software requirements for the Easy Tier Server coherency client,
there are some mandatory disk and specific host adapter configurations that must be
checked and validated during the planning phase of your Easy Tier Server implementation.
The required configuration is shown in Table 3-3.
Table 3-3 Host configuration requirements for Easy Tier Server

  Requirement                                Description
  Easy Tier Server coherency client fileset  bos.etcacheDD.rte
  Host bus adapter (HBA) settings            fc_err_recov = fast_fail
                                             dyntrk = yes
  Disk (LUN) settings                        reserve_policy = no_reserve
Therefore, all the host's HBAs used for DS8870 connectivity must have the parameters of
their respective fscsiX devices checked. Similarly, all disks (DS8870 LUNs) to be configured
for Easy Tier Server must have their reserve_policy attribute set to no_reserve rather than
single_path.
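As a quick spot check against these requirements, the relevant attributes can be queried
directly with the AIX lsattr command. This is only a sketch; the fscsi and hdisk names shown
here are placeholders for your own devices, and the queries must be repeated for every HBA
and DS8870 LUN:

# lsattr -El fscsi0 -a fc_err_recov -a dyntrk
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True
dyntrk       yes       Dynamic Tracking of FC Devices        True
# lsattr -El hdisk16 -a reserve_policy
reserve_policy no_reserve Reserve Policy True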
3.1.3 Supported DAS enclosures
Currently (with DS8000 Licensed Machine Code 7.7.10.xx.xx), only specific SSD expansion
enclosures can be connected to an AIX host in support of the Easy Tier Server functionality.
As indicated in Table 3-2 on page 35, the supported SSD enclosures are: EXP30, EXP24S,
and 12X I/O Drawer PCIe (#5802).
Note: For setting up any of these enclosures on the Power Systems hosts, refer to the IBM
Power Systems Hardware documentation available at the following website or contact your
IBM service representative:
http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/index.jsp
A brief description of the supported expansion drawer models follows.
EXP30 Ultra SSD I/O Drawer (#5888 and #EDR1)
The IBM EXP30 Ultra SSD I/O Drawer (feature code 5888 and EDR1) is a 1.8-inch solid-state
drive (SSD) optimized PCIe storage enclosure that is mounted in a 19-inch rack. It attaches to
supported POWER7 processor-based systems by using PCI Express generation 2 (PCIe2)
GX++ adapters and PCIe2 cables. Figure 3-1 shows the EXP30.
Figure 3-1 EXP30 Ultra SSD I/O Drawer
The 5888 and EDR1 PCIe storage enclosures feature two redundant enclosure RAID
modules (ERMs), which contain an SAS RAID controller, an SAS port expander, and a
replaceable fan assembly. The SAS RAID controllers are always configured in a dual
controller (dual storage I/O adapter) configuration. This configuration provides redundant
access paths to the SAS devices and mirrored copies of cache data and parity update
footprints. The dual controller configuration also allows for increased performance capability
when multiple RAID arrays are used in an Active/Active configuration mode. The SAS RAID
controllers support RAID 0, 10, 5, and 6 and the hot-spare function.
EXP24S SFF Gen2-bay Drawer (#5887)
The EXP24S SFF Gen2-bay Drawer is an expansion drawer with twenty-four 2.5-inch small
form factor (SFF) SAS bays. It supports up to 24 SAS-bay-based SSDs on IBM
POWER6® or POWER7 servers in 2U of 19-inch rack space. The EXP24S bays are
controlled by SAS adapters attached to the I/O drawer by SAS X or Y cables. Figure 3-2
shows EXP24S.
Figure 3-2 EXP24S SFF Gen2-bay Drawer
The EXP24S (#5887) has many high reliability design points. The SAS bays support hot
swap. It has redundant and hot-swap power/fan assemblies. It has dual line cords. It has
redundant and hot-swap ESMs (Electronic Service Modules). It has redundant data paths to
all drives. LED indicators on drives, bays, ESMs, and power supplies support problem
identification. Plus, through the SAS adapters/controllers, drives can be protected with RAID
and mirroring and hot spare capability.
12X I/O Drawer PCIe (#5802)
The Feature Code 5802 12X I/O Drawer PCIe is a 19-inch I/O and storage drawer. It provides
a 4U-tall drawer containing 10 PCIe-based I/O adapter slots and 18 SAS hot-swap SFF disk
bays. Figure 3-3 shows the #5802 Expansion Drawer.
Figure 3-3 12X I/O Drawer PCIe
3.1.4 Easy Tier Server coherency client and server connectivity
The same Fibre Channel connections between DS8870 and the hosts used for data access
are the ones used by the Easy Tier Server coherency server to communicate with the Easy
Tier Server coherency client.
Important: There is no need for additional connectivity between hosts and DS8870 for
Easy Tier Server implementation. Fibre Channel (FC) connectivity already used by hosts
for data access on DS8870 is all that is required.
3.2 Validating Easy Tier Server requirements
All Easy Tier Server implementation requirements for both the client and server are described
in Table 3-1 on page 34, Table 3-2 on page 35, and Table 3-3 on page 36. In this section, we
go through the validation of all those requirements.
3.2.1 Easy Tier Server coherency server validation
We divided the validation into three phases: First, validating the DS8870 Licensed Machine
Code; next, ensuring that required licenses on the DS8870 have been enabled; and finally,
validating the Easy Tier settings.
DS8870 Licensed Machine Code
Before deploying Easy Tier Server on DS8870, you must ensure that its Licensed Machine
Code (LMC) level is 7.7.10.xx.xx or higher.
You can use the data storage graphical user interface (DS GUI) to check your LMC version.
From the DS GUI main window, click Home → System Status → right-click the Storage
Image → Storage Image → Properties → Advanced tab. The resulting panel is shown in
Figure 3-4.
Figure 3-4 Checking DS8870 LMC with the DS GUI
Alternatively, you can use the data storage command-line interface (DS CLI) ver command, as
shown in Example 3-1.
Example 3-1 Checking DS8870 LMC using DS CLI
dscli> ver -lmc
Storage Image    LMC
===========================
IBM.2107-75ZA571 7.7.10.287
DS8870 required licenses
As previously mentioned, both Easy Tier and Easy Tier Server licenses are required for
Easy Tier Server to be deployed. They are feature codes 0713 (Function Authorization 7083)
and 0715 (Function Authorization 7084).
You can obtain the authorization codes from the IBM Data Storage Feature Activation (DSFA)
website at http://www.ibm.com/storage/dsfa. The codes provided are then applied using
either DS CLI commands or through the DS GUI. See Chapter 10, “IBM System Storage
DS8000 features and license keys” in IBM System Storage DS8870 Architecture and
Implementation, SG24-8085, for more assistance on entering license keys on the DS8870.
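For reference, the DS CLI applykey command can be used to enter the authorization codes;
the key value below is only a placeholder for the code obtained from the DSFA website, and
the storage image ID is the one used throughout this paper:

dscli> applykey -key 1234-5678-9ABC-DEF0-1234-5678-9ABC-DEF0 IBM.2107-75ZA571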
Attention: Easy Tier and Easy Tier Server are no charge features of the IBM System
Storage DS8870. However, as with any other DS8870 licensed function, they must first be
ordered from IBM. Consult with your IBM sales representative if these licenses are not
available for your DS8870 Storage System on the IBM DFSA website.
After enabling the licenses, check the status using DS CLI or DS GUI. Example 3-2 shows
how to check it using DS CLI.
Example 3-2 Checking DS8870 entered license keys using DS CLI
dscli> lskey -l IBM.2107-75ZA571
Activation Key               Authorization Level (TB) Scope
==========================================================================
Easy Tier Server             on                       All
IBM System Storage Easy Tier on                       All
Operating environment (OEL)  170,4                    All
For checking the DS8870’s currently entered License Keys using the DS GUI, click Home →
System Status → right-click the appropriate Storage Image → Storage Image → Add
Activation Key, as shown in Figure 3-5.
Figure 3-5 Checking DS8870 licenses via DS GUI
Easy Tier settings for Easy Tier Server
Both Easy Tier Server and the Easy Tier internal data placement function depend on DS8870
array I/O statistics, and their statistics monitoring configuration is integrated on the storage
system. For this reason, Easy Tier Server requires Easy Tier Monitor Mode (ETMonitor) to be
enabled.
Easy Tier Monitor Mode
The Easy Tier monitoring capability monitors the workload on the DS8870 back-end storage,
at the extent level. Monitoring statistics are gathered and analyzed every 24 hours. In an Easy
Tier managed extent pool, the analysis is used to create an extent relocation or migration plan
for the extents to be relocated to the most appropriate storage tier and storage resource
within the pool.
The Easy Tier Monitor control can be set to automode, all, or none, referring to the volumes to
be monitored. The default setting is automode, which means that only volumes under control
of Easy Tier Automatic Mode in managed extent pools are monitored.
The settings are described as follows:
• In automode, only extent statistics for logical volumes in managed extent pools under
  control of Easy Tier Automatic Mode are updated to reflect current workload activity.
• When set to all, extent statistics for all logical volumes in managed and non-managed
  extent pools are updated to reflect current workload activity.
• When set to none, extent statistics collection is suspended. The Easy Tier learning data in
  memory is reset and all migration plans are cleared. However, the last Easy Tier summary
  report for the Storage Tier Advisor Tool (STAT) remains available for download and is not
  automatically deleted.
Requirement: Easy Tier Server requires Easy Tier Monitor Mode to be set to either
automode or all. If the volume to be used by an Easy Tier Server-enabled host is within a
DS8870 Easy Tier managed pool, automode is enough. Otherwise, all must be selected for
the Easy Tier Monitor Mode. The lsextpool -l command shows whether an extent pool is
managed or not by Easy Tier.
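As a sketch of where to look, the following DS CLI calls can be used; the storage image ID is
the one used throughout this paper, and the pool ID is an example. The etmanaged field in the
extent pool output indicates whether the pool is under Easy Tier management:

dscli> lsextpool -dev IBM.2107-75ZA571 -l
dscli> showextpool P4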
Although Easy Tier Monitor Mode has to be enabled, Easy Tier Automatic Mode does not
have to be on (tiered or all) for Easy Tier Server to work. Likewise, Easy Tier Server volumes
do not need to reside in hybrid pools on the DS8870, provided that ETMonitor is set to all.
Refer to Chapter 2, “IBM System Storage DS8000 Easy Tier concepts, design, and
implementation” in IBM System Storage DS8000 Easy Tier, REDP-4667 for further
information about Easy Tier Monitor and operation modes.
You can use either DS CLI or DS GUI for checking or changing Easy Tier Monitor Mode. If
using the DS GUI, click Home → System Status → right-click the Storage Image →
Storage Image → Properties, as shown in Figure 3-4 on page 38.
For DS CLI, you can use the showsi command to display the current Easy Tier Monitor Mode
setting, as illustrated in Example 3-3.
Besides using DS GUI to change the Easy Tier Monitor Mode setting, the DS CLI chsi
command can be used as well, with the following syntax:
chsi -ETMonitor automode|all|none storage_image_ID
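For example, to have extent statistics collected for all volumes on the storage image used in
this chapter, a minimal sketch of the command would be:

dscli> chsi -ETMonitor all IBM.2107-75ZA571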
Example 3-3 Displaying the current Easy Tier Monitor Mode setting
dscli> showsi IBM.2107-75ZA571
Name             DS8870_ATS02
desc             Mako
ID               IBM.2107-75ZA571
Storage Unit     IBM.2107-75ZA570
Model            961
WWNN             5005076303FFD5AA
Signature        XXXX-XXXX-XXXX-XXXX
State            Online
ESSNet           Enabled
Volume Group     V0
os400Serial      5AA
NVS Memory       8.0 GB
Cache Memory     233.7 GB
Processor Memory 253.7 GB
MTS              IBM.2421-75ZA570
numegsupported   1
ETAutoMode       all
ETMonitor        all
IOPMmode         Managed
ETCCMode         Enabled
ETHMTMode        Enabled
Attention: Do not turn off Easy Tier monitoring if Easy Tier Automatic Mode (ETAutoMode)
is enabled. When Easy Tier monitoring is turned off, no new performance statistics are
collected and Easy Tier Automatic Mode cannot create migration plans. As a result, Easy
Tier Automatic Mode stops managing volumes in all managed extent pools.
Changing the Easy Tier monitoring mode affects the statistics collection and can lead to a
reset (reinitialization) of the gathered monitoring data. This situation means that it might
take up to 24 hours of collecting new performance statistics after Easy Tier monitoring has
been enabled again until new migration plans are created.
3.2.2 Easy Tier Server coherency client validation
Moving to the client side of the implementation, we now validate the host requirements.
Hardware, operating system, and software
First, the Power Systems model is queried with the prtconf command and the AIX version is
obtained with the oslevel command, as shown in Example 3-4.
Example 3-4 Client’s hardware model and operating system level validation
# prtconf | grep "System Model"
System Model: IBM,9117-MMD
# oslevel -s
7100-02-02-1316
Next, validate the multipathing software requirements by using the lslpp command, as
demonstrated in Example 3-5. In our environment, we used SDDPCM as the multipathing
software: devices.sddpcm.71.rte. Fileset devices.fcp.disk.ibm.mpio.rte is the required
host attachment for SDDPCM in a DS8000 environment. The host attachment allows AIX
Multipath Input/Output (MPIO) device driver configuration methods to properly identify and
configure the DS8870 logical unit numbers (LUNs).
Fileset devices.common.IBM.mpio.rte is natively installed along with AIX Base Operating
System. It is the default operating system Path Control Module (PCM) used by SDDPCM.
This fileset is always installed in the base operating system installation.
Example 3-5 Client’s multipathing software validation
# lslpp -l devices.common.IBM.mpio.rte
  Fileset                        Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.common.IBM.mpio.rte
                              7.1.2.15  COMMITTED  MPIO Disk Path Control Module

Path: /etc/objrepos
  devices.common.IBM.mpio.rte
                              7.1.2.15  COMMITTED  MPIO Disk Path Control Module

# lslpp -l devices.fcp.disk.ibm.mpio.rte
  Fileset                        Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.fcp.disk.ibm.mpio.rte
                              1.0.0.24  COMMITTED  IBM MPIO FCP Disk Device

# lslpp -l devices.sddpcm.71.rte
  Fileset                        Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sddpcm.71.rte        2.6.3.2  COMMITTED  IBM SDD PCM for AIX V71

Path: /etc/objrepos
  devices.sddpcm.71.rte        2.6.3.2  COMMITTED  IBM SDD PCM for AIX V71
If you do not have the multipathing software installed in the AIX host system, see “Installing
multipathing software” on page 58. You can also refer to the latest Multipath Subsystem
Device Driver User's Guide for more information about SDD and SDDPCM installation and
usage. It is available at the following website:
http://www-01.ibm.com/support/docview.wss?rs=540&context=ST52G7&q=ssg1*&uid=ssg1S7000303&loc=en_US&cs
Important: The AIX operating system is discontinuing support for SDD. Currently, SDD is
supported in AIX 6.1, but it is not supported in AIX 7.1 releases. Therefore, we strongly encourage
you to use SDDPCM in your implementation, whether it is AIX 6.1 or 7.1.
In Example 3-6, the manage_disk_drivers command is used to list the device driver used by
the operating system to manage particular storage model devices.
For DS8000 LUNs, the NO_OVERRIDE option (the default option) and the AIX_AAPCM
option are supported. By using NO_OVERRIDE, you are selecting SDDPCM to manage the
DS8000 disks. Instead, the AIX_AAPCM option makes the operating system use AIX native
MPIO. It would be selected if the IBM HyperSwap® function were used, which is not the case
with Easy Tier Server.
Example 3-6 Client’s multipathing software selection validation
# manage_disk_drivers -l
Device      Present Driver   Driver Options
2107DS8K    NO_OVERRIDE      NO_OVERRIDE,AIX_AAPCM,NO_OVERRIDE,NO_OVERRIDE
Refer to the IBM AIX 7.1 Information Center website for more information about this
command and the AIX default PCM:
http://pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp
After checking the minimum software versions and the required Power Systems hardware
model, we validate the SSD device’s availability on the AIX system.
Note: At this point, the DAS enclosure with SSD devices is supposed to be already
connected to the Power System. For any assistance in this matter, refer to IBM Power
Systems Hardware documentation at the following website:
http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/index.jsp
Or, consult with your IBM service representative.
Direct-attached storage expansion enclosure and SSD devices
After the expansion enclosure is attached and properly installed onto the Power Systems
host, each SSD device within the expansion enclosure is recognized by the operating system
as an AIX physical disk, or pdisk. Every single pdisk is formatted into a single-device RAID 0
array automatically. Therefore, for each SSD pdisk, there is a corresponding hdisk, which is
the logical representation of an SSD array.
The lsdev AIX command can be used to list both pdisk and hdisk devices. At this point, you
might notice that the number of SSD pdisks matches the number of SSD array hdisks, as
illustrated by Example 3-7.
Also, the lsdev command is used to list the expansion drawers attached to the host; in this
case, an EXP30.
Example 3-7 Client’s SSD devices listing
# lsdev -C | grep pdisk
pdisk0   Available 0M-00-00  Physical SAS Solid State Drive
pdisk1   Available 0M-00-00  Physical SAS Solid State Drive
pdisk2   Available 0M-00-00  Physical SAS Solid State Drive
pdisk3   Available 0M-00-00  Physical SAS Solid State Drive
pdisk4   Available 0M-00-00  Physical SAS Solid State Drive
pdisk5   Available 0M-00-00  Physical SAS Solid State Drive
pdisk6   Available 0M-00-00  Physical SAS Solid State Drive
pdisk7   Available 0M-00-00  Physical SAS Solid State Drive

# lsdev -Cc disk
hdisk0   Available 0H-00-00  SAS Disk Drive
hdisk1   Available 0H-00-00  SAS Disk Drive
hdisk2   Available 0H-00-00  SAS Disk Drive
hdisk3   Available 0H-00-00  SAS Disk Drive
hdisk4   Available 0N-00-00  SAS RAID 0 SSD Array
hdisk5   Available 0N-00-00  SAS RAID 0 SSD Array
hdisk6   Available 0N-00-00  SAS RAID 0 SSD Array
hdisk7   Available 0N-00-00  SAS RAID 0 SSD Array
hdisk8   Available 0N-00-00  SAS RAID 0 SSD Array
hdisk9   Available 0N-00-00  SAS RAID 0 SSD Array
hdisk10  Available 0N-00-00  SAS RAID 0 SSD Array
hdisk11  Available 0N-00-00  SAS RAID 0 SSD Array
hdisk12  Available 0V-00-00  SAS Disk Drive
hdisk13  Available 0V-00-00  SAS Disk Drive
hdisk14  Available 0V-00-00  SAS Disk Drive
hdisk15  Available 0V-00-00  SAS Disk Drive
# lsdev -Cc drawer
sasdrawer0 Available 0N-00-00 EXP30 Ultra SSD I/O Drawer
Now that the SSD devices are recognized by the operating system, you can check which SAS
(sissas) adapters connect the expansion enclosure to the Power System. The lscfg command
can be used for this cross-check by using the devices’ hardware location codes.
As shown in Example 3-8, we list SSD devices with the lscfg command to get their Hardware
Location Codes. Following that, we filter lscfg output with this code.
Example 3-8 Client’s SSD to SAS adapter matching
# lscfg -v | grep SSD
  sasdrawer0  UEDR1.001.G2BG00D                                 EXP30 Ultra SSD I/O Drawer
  hdisk4      U2C4E.001.DBJG299-P1-C3-T1-L1-T1-L204B858940-L0   SAS RAID 0 SSD Array
  hdisk5      U2C4E.001.DBJG299-P1-C3-T1-L1-T1-L404B858940-L0   SAS RAID 0 SSD Array
  hdisk6      U2C4E.001.DBJG299-P1-C3-T1-L1-T1-L604B858940-L0   SAS RAID 0 SSD Array
  hdisk7      U2C4E.001.DBJG299-P1-C3-T1-L1-T1-L804B858940-L0   SAS RAID 0 SSD Array
  hdisk8      U2C4E.001.DBJG299-P1-C3-T1-L1-T1-LA04B858940-L0   SAS RAID 0 SSD Array
  hdisk9      U2C4E.001.DBJG299-P1-C3-T1-L1-T1-LC04B858940-L0   SAS RAID 0 SSD Array
  hdisk10     U2C4E.001.DBJG299-P1-C3-T1-L1-T1-LE04B858940-L0   SAS RAID 0 SSD Array
  hdisk11     U2C4E.001.DBJG299-P1-C3-T1-L1-T1-L1004B858940-L0  SAS RAID 0 SSD Array

# lscfg -v | grep U2C4E.001.DBJG299-P1-C3 | grep sissas
  sissas7     U2C4E.001.DBJG299-P1-C3-T1-L1-T1  PCIe2 3.1GB Cache RAID SAS Enclosure
  sissas3     U2C4E.001.DBJG299-P1-C3-T2-L1-T1  PCIe2 3.1GB Cache RAID SAS Enclosure

# lsdev -Cc adapter | grep sas
sissas0  Available 0A-00  PCI Express x1 Planar 3Gb SAS Adapter
sissas1  Available 0G-00  PCI Express x8 Planar 3Gb SAS RAID Adapter
sissas2  Available 0H-00  PCI Express x8 Planar 3Gb SAS RAID Adapter
sissas3  Available 0M-00  PCIe2 3.1GB Cache RAID SAS Enclosure 6Gb x8
sissas4  Available 0O-00  PCI Express x1 Planar 3Gb SAS Adapter
sissas5  Available 0U-00  PCI Express x8 Planar 3Gb SAS RAID Adapter
sissas6  Available 0V-00  PCI Express x8 Planar 3Gb SAS RAID Adapter
sissas7  Available 0N-00  PCIe2 3.1GB Cache RAID SAS Enclosure 6Gb x8
After the adapters connecting SSD devices to the host are known, the sissasraidmgr
command can be used to show the relation between pdisks and hdisks, as demonstrated in
Example 3-9.
Example 3-9 Client’s SSD pdisks to SSD hdisks arrays relation
# sissasraidmgr -L -j1 -l sissas7
------------------------------------------------------------------------
Name      Resource  State      Description                                   Size
------------------------------------------------------------------------
sissas7   FEFFFFFF  Primary    PCIe2 3.1GB Cache RAID SAS Enclosure 6Gb x8
sissas3   FEFFFFFF  HA Linked  Remote adapter SN 002B9004

hdisk4    FC0000FF  Optimal    RAID 0 Array                                  387.9GB
  pdisk0  000000FF  Active     SSD Array Member                              387.9GB

hdisk5    FC0100FF  Optimal    RAID 0 Array                                  387.9GB
  pdisk2  000002FF  Active     SSD Array Member                              387.9GB

hdisk6    FC0200FF  Optimal    RAID 0 Array                                  387.9GB
  pdisk3  000003FF  Active     SSD Array Member                              387.9GB

hdisk7    FC0300FF  Optimal    RAID 0 Array                                  387.9GB
  pdisk4  000004FF  Active     SSD Array Member                              387.9GB

hdisk8    FC0400FF  Optimal    RAID 0 Array                                  387.9GB
  pdisk5  000005FF  Active     SSD Array Member                              387.9GB

hdisk9    FC0500FF  Optimal    RAID 0 Array                                  387.9GB
  pdisk6  000006FF  Active     SSD Array Member                              387.9GB

hdisk10   FC0600FF  Optimal    RAID 0 Array                                  387.9GB
  pdisk7  000007FF  Active     SSD Array Member                              387.9GB

hdisk11   FC0700FF  Optimal    RAID 0 Array                                  387.9GB
  pdisk1  000001FF  Active     SSD Array Member                              387.9GB
Note: Despite the initial one-to-one RAID 0 array configuration, SSD devices (pdisks) can
be formatted into hdisk arrays with different RAID levels and capacities. For example, you can
create one single hdisk array with all pdisks. Such configurations and the best practices related
to SSD cache devices for the Easy Tier Server coherency client are discussed in “Configuring
direct-attached storage flash (SSD) devices” on page 61.
Adapters and disks configuration
Fibre Channel adapter configuration requirements on AIX are validated, as shown in
Example 3-10.
The AIX lsdev command and the pcmpath SDDPCM command are used to list the host bus
adapter (HBA) fcs devices on the client. Then, the lsattr command shows the attributes of the
fscsi devices that correspond to the fcs adapters. You must check the settings on all HBAs used for
connecting the AIX host to DS8870.
Example 3-10 Client devices’ settings requirement validation
# lsdev -Cc adapter | grep fc
fcs18  Available 0E-00  8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs19  Available 0E-01  8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs20  Available 0k-00  PCIe2 4-Port 8Gb FC Adapter (df1000f114100104)
fcs21  Available 0k-01  PCIe2 4-Port 8Gb FC Adapter (df1000f114100104)

# pcmpath query adapter
Total Dual Active and Active/Asymmetric Adapters : 4

Adpt#     Name    State    Mode    Select  Errors  Paths  Active
    0  fscsi19   NORMAL  ACTIVE        18       0      8       0
    1  fscsi20   NORMAL  ACTIVE        17       0      8       0
    2  fscsi21   NORMAL  ACTIVE        17       0      8       0
    3  fscsi18   NORMAL  ACTIVE        17       0      8       0

# lsattr -El fscsi18
attach        switch     How this adapter is CONNECTED          False
dyntrk        yes        Dynamic Tracking of FC Devices         True
fc_err_recov  fast_fail  FC Fabric Event Error RECOVERY Policy  True
scsi_id       0xd1900    Adapter SCSI ID                        False
sw_fc_class   3          FC Class for Fabric                    True
If the DS8870 LUNs are already mapped and recognized by the Easy Tier Server coherency
client at this point, you must also validate that the hdisks’ reserve_policy is set to no_reserve,
as demonstrated in Example 3-11 on page 47.
Example 3-11 Client DS8870 LUN disk devices’ settings validation
# lsdev -Cc disk | grep 2107
hdisk16 Available 0k-01-02  IBM MPIO FC 2107
hdisk17 Available 0k-01-02  IBM MPIO FC 2107
hdisk18 Available 0k-01-02  IBM MPIO FC 2107
hdisk19 Available 0k-01-02  IBM MPIO FC 2107
hdisk20 Available 0k-01-02  IBM MPIO FC 2107
hdisk21 Available 0k-01-02  IBM MPIO FC 2107
hdisk22 Available 0k-01-02  IBM MPIO FC 2107
hdisk23 Available 0k-00-02  IBM MPIO FC 2107

# lsattr -El hdisk16
PCM             PCM/friend/sddpcm                 PCM
PR_key_value    none                              Reserve Key
algorithm       load_balance                      Algorithm
clr_q           no                                Device CLEARS its Queue on error
dist_err_pcnt   0                                 Distributed Error Percentage
dist_tw_width   50                                Distributed Error Sample Time
flashcpy_tgtvol no                                Flashcopy Target Lun
hcheck_interval 60                                Health Check Interval
hcheck_mode     nonactive                         Health Check Mode
location                                          Location Label
lun_id          0x404c400000000000                Logical Unit Number ID
lun_reset_spt   yes                               Support SCSI LUN reset
max_coalesce    0x100000                          Maximum COALESCE size
max_transfer    0x100000                          Maximum TRANSFER Size
node_name       0x5005076303ffd5aa                FC Node Name
pvid            00c24dbad90815280000000000000000  Physical volume identifier
q_err           yes                               Use QERR bit
q_type          simple                            Queuing TYPE
qfull_dly       2                                 delay secs for SCSI TASK SET FULL
queue_depth     20                                Queue DEPTH
recoverDEDpath  no                                Recover DED Failed Path
reserve_policy  no_reserve                        Reserve Policy
retry_timeout   120                               Retry Timeout
rw_timeout      60                                READ/WRITE time out value
scbsy_dly       20                                delay in seconds for SCSI BUSY
scsi_id         0xc2500                           SCSI ID
start_timeout   180                               START unit time out value
timeout_policy  fail_path                         Timeout Policy
unique_id       200B75ZA5714C0007210790003IBMfcp  Device Unique Identification
ww_name         0x50050763031015aa                FC World Wide Name
Although the preceding example covers just one hdisk, you must check the reserve_policy
setting for all DS8870 LUNs on the AIX host.
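To avoid querying each LUN individually, a small loop over all 2107 devices can do the check.
This is only a sketch using standard AIX commands; adjust the grep pattern to your own
configuration:

# for d in $(lsdev -Cc disk | grep 2107 | awk '{print $1}'); do
>    lsattr -El $d -a reserve_policy
> done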
Changing adapters and disks’ settings on AIX
If the adapters and disks’ settings do not match the Easy Tier Server implementation
requirements, you can use the chdev command, as shown in Example 3-12, to make them
compliant.
Attention: Changing these settings on hdisk and fscsi devices on AIX requires either a
reboot of the host or that the device be removed and “rescanned” by the operating system.
Example 3-12 Changing adapter and disk settings on AIX for Easy Tier Server compliance
# pcmpath query adapter
Total Dual Active and Active/Asymmetric Adapters : 4

Adpt#     Name    State     Mode      Select  Errors  Paths  Active
    0  fscsi19   NORMAL   ACTIVE    63261815       6      8       8
    1  fscsi20   NORMAL   ACTIVE    63078617       4      8       8
    2  fscsi21   NORMAL   ACTIVE    62873180     350      8       8
    3  fscsi18   NORMAL   ACTIVE       59490       0      8       8

# pcmpath set adapter 3 offline aa
Success: set adapter 3 to offline

Adpt#     Name    State     Mode      Select  Errors  Paths  Active
    3  fscsi18   FAILED   OFFLINE     150265       0      8       0

# chdev -l fscsi18 -a 'fc_err_recov=fast_fail dyntrk=yes' -P
fscsi18 changed
# rmdev -Rl fcs18
fcnet14 Defined
sfwcomm18 Defined
fscsi18 Defined
fcs18 Defined
# cfgmgr -l fcs18

# pcmpath query adapter
Total Dual Active and Active/Asymmetric Adapters : 4

Adpt#     Name    State     Mode      Select  Errors  Paths  Active
    0  fscsi19   NORMAL   ACTIVE    63675002       6      8       8
    1  fscsi20   NORMAL   ACTIVE    63495052       4      8       8
    2  fscsi21   NORMAL   ACTIVE    63283670     352      8       8
    3  fscsi18   NORMAL   ACTIVE       52246       0      8       8

# chdev -l hdisk16 -a reserve_policy=no_reserve -P
hdisk16 changed
After changing the reserve_policy attribute of the hdisks to no_reserve, either reboot the
system or remove (rmdev) and reconfigure (cfgmgr) the devices.

To be able to remove the devices, any structure on top of them (volume groups, file systems,
and so on) must be varied off or unmounted first, which includes stopping the applications
that run on top of that structure.
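The following is a minimal sketch of that sequence for a single disk; the file system and volume
group names are placeholders for your own environment:

# umount /appfs
# varyoffvg appvg
# chdev -l hdisk16 -a reserve_policy=no_reserve -P
# rmdev -l hdisk16
# cfgmgr

Because the chdev -P flag records the change in the ODM only, the subsequent rmdev and
cfgmgr cycle reconfigures the disk so that the new reserve_policy value takes effect without a
full system reboot.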
3.3 Other considerations
This section presents additional relevant planning information for Easy Tier Server
deployment. This information covers the integration of Easy Tier Server with other DS8870
advanced functions and some best practices that you should be aware of.
3.3.1 DS CLI and DS GUI support to Easy Tier Server
Although Easy Tier monitor and operation modes can be configured using both DS GUI and
DS CLI, Easy Tier Server configuration in its current implementation is only possible with the
DS CLI.
3.3.2 Easy Tier and Easy Tier Server integration
In order for you to take best advantage of all the benefits of Easy Tier and Easy Tier Server
integration, consider using Easy Tier Automatic Mode for extent placement optimization on
the DS8870 internal tiers.
The IBM Redpaper publication, IBM System Storage DS8000 Easy Tier, REDP-4667,
presents Easy Tier general concepts and configuration guidelines based on different
scenarios.
3.3.3 Easy Tier Server interaction with other DS8870 advanced features
In general terms, there is no limitation or impact when it comes to Easy Tier Server integration
with other DS8870 advanced features. Still, it is worthwhile to discuss how the major DS8870
features interact with the Easy Tier Server feature.
Easy Tier Server and Easy Tier Application
Easy Tier Application enables users and applications to assign distinct volumes to a particular
tier in an Easy Tier pool, independently of Easy Tier’s advanced data migration function. Easy
Tier Application pinning volumes to given DS8870 internal tiers is transparent to Easy Tier
Server.

If a particular volume manipulated by Easy Tier Application is also Easy Tier Server-enabled
and used by an Easy Tier Server coherency client, fragments of the volume’s hottest extents
are still selected for DAS cache population, without interfering with the Easy Tier Application
setting for that volume.
For more information about Easy Tier Application, see IBM System Storage DS8000 Easy
Tier Application, REDP-5014.
Easy Tier Server and Easy Tier Heat Map Transfer
Easy Tier Heat Map Transfer enables a DS8870 Easy Tier-optimized data placement on the
primary site of either Metro Mirror, Global Copy, or Global Mirror to be applied on a
System Storage DS8000 at the secondary site.
In a scenario with the Heat Map Transfer Utility (HMTU), Easy Tier Server also works
transparently. Easy Tier Heat Map Transfer only deals with data placement on the DS8870
internal tiers. Therefore, data populated to the DAS cache of existing Easy Tier Server
coherency clients on the primary site is not taken into account.
Refer to the IBM Redpaper publication, IBM System Storage DS8000 Easy Tier Heat Map
Transfer, REDP-5015, for detailed information about this feature.
Easy Tier Server and DS8870 Copy Services
IBM Copy Services are supported with the Easy Tier and Easy Tier Server functions enabled.
Metro Mirror, Global Mirror, and IBM FlashCopy® do not impact the operations of Easy Tier
(in Automatic Mode or Manual Mode).
Note: Like host operations, Copy Services are unaware of the extent-level or volume-level
optimizations being performed by Easy Tier and Easy Tier Server.
All back-end I/Os, except the extent migration encountered with Easy Tier Manual Mode
(dynamic volume relocation) and Automatic Mode (automatic data relocation), are counted in
the Easy Tier I/O statistics, including Copy Services back-end I/Os. However, most of the
Copy Services background I/O activity has sequential access patterns and does not contribute
significantly to the cross-tier heat calculation of Easy Tier, although it is taken into account for
the bandwidth and rank utilization calculations.
In a Metro Mirror environment, there is an additional time delay due to the required data
transfer of a write I/O to the secondary site. This additional latency or service time is not
included in the performance data considered by Easy Tier because this I/O activity is not an
activity that is occurring in the disk back-end on the rank level.
FlashCopy copy-on-write activity results in full track writes, which are considered large rather
than small operations. The FlashCopy target is not hot in terms of small I/Os unless it is being
written to. Remember that a FlashCopy track space-efficient repository is not considered for
extent relocation.
Easy Tier Server and DS8870 thin provisioned volumes
DS8870 offers two different types of space-efficient or thin-provisioned volumes: extent
space-efficient (ESE) and track space-efficient (TSE) volumes. In essence, TSE volumes are
used as target volumes of a FlashCopy SE operation (with the nocopy option enabled).
Instead, ESE volumes are designated for standard host access.
ESE volumes are fully supported by Easy Tier Server. On the other hand, although TSE
volumes can be in an extent pool managed by Easy Tier, they themselves are not managed
by Easy Tier. Therefore, TSE volumes are not supported by Easy Tier Server either.
Attention: Both ESE and TSE volumes can be in any extent pool managed by Easy Tier
Automatic Mode, but only ESE volumes are managed and fully supported by Easy Tier and
Easy Tier Server.
For more information about thin provisioning on DS8000, see the IBM Redpaper publication
DS8000 Thin Provisioning, REDP-4554.
Easy Tier Server and I/O Priority Manager
DS8870 I/O Priority Manager provides more effective storage consolidation and performance
management, which is combined with the ability to align quality of service (QoS) levels to
separate workloads in the system.
I/O Priority Manager prioritizes access to system resources to achieve the desired QoS,
based on defined performance goals (high, medium, or low) for either the volume or single I/O
request. The Priority Manager constantly monitors and balances system resources to help
50
IBM System Storage DS8000 Easy Tier Server
applications meet their performance targets automatically, without operator intervention, as
described in DS8000 I/O Priority Manager, REDP-4760.
If the I/O Priority Manager, Easy Tier, and Easy Tier Server features are all enabled, they
provide independent benefits. Although I/O Priority Manager attempts to ensure that the most
important I/O operations get serviced when a given rank is overloaded by delaying less
important I/Os, it does not move any extents.
Then, Easy Tier Server moves the “hottest” extents to DAS flash disks on the host for the
fastest response time. Cooperatively, Easy Tier optimizes extent placement by moving extents
to the storage tier that is most appropriate for the frequency and recency of host access. Easy
Tier also relocates extents between ranks within a storage tier in an attempt to distribute the
workload evenly across available ranks to avoid rank overloading.
Tip: Easy Tier Server, Easy Tier, and I/O Priority Manager cooperation in a single DS8870
storage system is supported. Together, these functions can help the various applications running
on DS8000 systems to meet their respective service levels in a simple and cost-effective
manner. The DS8000 can help address storage consolidation requirements, which in turn
helps to manage increasing amounts of data with less effort and lower infrastructure costs.
Chapter 4. Easy Tier Server implementation
This chapter covers the details of an Easy Tier Server deployment, starting with enabling the
feature on the DS8870 up to the attached AIX host configuration. The chapter also covers
upgrade and uninstallation procedures of the Easy Tier Server coherency client.
4.1 Implementing Easy Tier Server
We assume that all hardware and software requirements discussed in 3.1, “Planning
and requirements guidelines” on page 34 have been validated, for both the Easy Tier Server
coherency client and server. Deploying Easy Tier Server starts by enabling the feature on the
DS8870 storage system and then installing and configuring the Easy Tier Server coherency
client driver.
4.1.1 Setting up DS8870 for Easy Tier Server
The Easy Tier Server feature is activated by setting the ETCCMode parameter on the DS8870
system. Another important parameter for this implementation is ETMonitor, as stated in “Easy
Tier settings for Easy Tier Server” on page 40. Start by checking the status of both
parameters and then enabling them if required, as shown in Example 4-1.
Example 4-1 Checking and enabling the Easy Tier Server ETCCMode parameter on the DS8870 storage image
dscli> showsi IBM.2107-75ZA571
Name             DS8870_ATS02
desc             Mako
ID               IBM.2107-75ZA571
Storage Unit     IBM.2107-75ZA570
Model            961
WWNN             5005076303FFD5AA
Signature        XXXX-XXXX-XXXX-XXXX
State            Online
ESSNet           Enabled
Volume Group     V0
os400Serial      5AA
NVS Memory       8.0 GB
Cache Memory     233.7 GB
Processor Memory 253.7 GB
MTS              IBM.2421-75ZA570
numegsupported   1
ETAutoMode       all
ETMonitor        all
IOPMmode         Managed
ETCCMode         Disabled
ETHMTMode        Enabled

dscli> chsi -ETCCMode enable IBM.2107-75ZA571
CMUC00042I chsi: Storage image IBM.2107-75ZA571 successfully modified.

dscli> showsi IBM.2107-75ZA571
Name             DS8870_ATS02
desc             Mako
ID               IBM.2107-75ZA571
Storage Unit     IBM.2107-75ZA570
Model            961
WWNN             5005076303FFD5AA
Signature        XXXX-XXXX-XXXX-XXXX
State            Online
ESSNet           Enabled
Volume Group     V0
os400Serial      5AA
NVS Memory       8.0 GB
Cache Memory     233.7 GB
Processor Memory 253.7 GB
MTS              IBM.2421-75ZA570
numegsupported   1
ETAutoMode       all
ETMonitor        all
IOPMmode         Managed
ETCCMode         Enabled
ETHMTMode        Enabled
At this point, the required Easy Tier Server configuration on the DS8870 is accomplished. No
other setting needs to be changed.
DS8870 LUNs for an Easy Tier Server coherency client
No particular setting has to be defined for logical unit numbers (LUNs) on DS8870 to have
them read-cache enabled on the AIX host for Easy Tier Server. As soon as regular DS8870
LUNs are mapped to Easy Tier Server coherency clients, with active and properly configured
cache devices, the read cache function is enabled on the client for those LUNs, as discussed
later on in “Enabling the read cache function for DS8870 hdisks” on page 62.
Thus, if you already have DS8870 LUNs created and mapped to an AIX host that is going to
have Easy Tier Server coherency client driver deployed, you are ready to begin. In the client
configuration, you choose the LUNs for which you want to have the read cache function
enabled.
Important: Easy Tier Server does not require any particular setting in the LUN level on
DS8870. It uses regular LUNs. The per-LUN read-cache enabled setting on the Easy Tier
Server coherency client is what defines whether the LUN is Easy Tier Server-managed or
not.
If the AIX host is being deployed and does not have any DS8870 LUNs defined yet, proceed
as usual, using the mkfbvol command to create LUNs like we did for our experimentation
scenario, as shown in Example 4-2.
In this scenario, the extent pool that is used for the sample volumes is managed by Easy Tier
Automatic Mode, with regard to Easy Tier internal data placement optimization, and it has
three different internal tiers (solid-state drive (SSD), Enterprise, and Nearline).
Example 4-2 Creating regular volumes on DS8870 for an Easy Tier Server coherency client
dscli> showextpool P4
Name                 EasyTierPool_P4
ID                   P4
stgtype              fb
totlstor (2^30B)     18027
availstor (2^30B)    14745
resvdstor (2^30B)    0
rankgrp              0
numranks             6
numvols              129
status               below
%allocated           18
%available           81
configured           18027
allowed              18027
available            14745
allocated            3282
reserved             0
%limit               100
%threshold           15
virextstatus         below
%virallocated        7
%viravailable        92
virconfigured        8944
virallowed           8916
viravailable         8238
virallocated         678
virreserved          0
%virextlimit         -
%virextthreshold     -
encryptgrp           -
%allocated(ese)      2
%allocated(rep)      0
%allocated(std)      16
%allocated(over)     0
%virallocated(ese)   8
%virallocated(tse)   0
%virallocated(init)  0
%migrating(in)       0
%migrating(out)      0
numtiers             3
etmanaged            yes
dscli> mkfbvol -dev IBM.2107-75ZA571 -extpool P4 -cap 100 -name itso_ETS00 4C00
CMUC00025I mkfbvol: FB volume 4C00 successfully created.
dscli> mkfbvol -dev IBM.2107-75ZA571 -extpool P4 -cap 200 -name itso_ETS01 4C01
CMUC00025I mkfbvol: FB volume 4C01 successfully created.
dscli> mkfbvol -dev IBM.2107-75ZA571 -extpool P4 -cap 50 -name itso_ETS02 4C02
CMUC00025I mkfbvol: FB volume 4C02 successfully created.
dscli> mkfbvol -dev IBM.2107-75ZA571 -extpool P4 -cap 100 -name itso_ETS03 4C03
CMUC00025I mkfbvol: FB volume 4C03 successfully created.
Note: Although the DS8870 extent pool used for these examples is hybrid (mixed tiers)
and managed by Easy Tier, these are not requirements. For Easy Tier Server, it is only
mandatory that Easy Tier Monitor Mode is either set to all or, if the volumes happen to be in
pools already managed by Easy Tier, automode.
DS8870 LUN mapping for an Easy Tier Server coherency client
If your DS8870 storage system already has hostconnects created for the Easy Tier Server
coherency client’s worldwide port name (WWPN), there is no other special setting that is
required for Easy Tier Server.
Example 4-3 demonstrates the regular DS8870 LUN mapping process to a given host. A new
volume group is created to be the container and integrator of the client’s Fibre Channel (FC)
ports. The client’s FC ports are defined on DS8870 as hostconnects. The DS8870 LUNs that
this client is supposed to access are added to the newly created DS8870 volume group.
Example 4-3 Creating a volume group and hostconnects on DS8870
dscli> mkvolgrp -dev IBM.2107-75ZA571 -hosttype pSeries -volume 4C00-4C03 ITSO_p7_ETS_vg
CMUC00030I mkvolgrp: Volume group V0 successfully created.

dscli> showvolgrp v0
Name ITSO_p7_ETS_vg
ID   V0
Type SCSI Mask
Vols 4C00 4C01 4C02 4C03

dscli> mkhostconnect -dev IBM.2107-75ZA571 -wwpn 10000090fa1f7a50 -hosttype pSeries -volgrp V0 ITSO_p7_ETS_0
CMUC00012I mkhostconnect: Host connection 0044 successfully created.
dscli> mkhostconnect -dev IBM.2107-75ZA571 -wwpn 10000090fa1f7a51 -hosttype pSeries -volgrp V0 ITSO_p7_ETS_1
CMUC00012I mkhostconnect: Host connection 0045 successfully created.
dscli> mkhostconnect -dev IBM.2107-75ZA571 -wwpn 10000090fa263476 -hosttype pSeries -volgrp V0 ITSO_p7_ETS_2
CMUC00012I mkhostconnect: Host connection 004C successfully created.
dscli> mkhostconnect -dev IBM.2107-75ZA571 -wwpn 10000090fa263477 -hosttype pSeries -volgrp V0 ITSO_p7_ETS_3
CMUC00012I mkhostconnect: Host connection 004D successfully created.

dscli> lshostconnect
Name           ID   WWPN             HostType  Profile            portgrp volgrpID ESSIOport
===================================================================================================
ITSO_p7_ETS_0  0044 10000090FA1F7A50 pSeries   IBM pSeries - AIX        0 V0       all
ITSO_p7_ETS_1  0045 10000090FA1F7A51 pSeries   IBM pSeries - AIX        0 V0       all
ITSO_p7_ETS_2  004C 10000090FA263476 pSeries   IBM pSeries - AIX        0 V0       all
ITSO_p7_ETS_3  004D 10000090FA263477 pSeries   IBM pSeries - AIX        0 V0       all

dscli> lshostconnect -login
WWNN             WWPN             ESSIOport LoginType Name          ID
===========================================================================
20000120FA263476 10000090FA263476 I0000     SCSI      ITSO_p7_ETS_2 004C
20000120FA1F7A50 10000090FA1F7A50 I0130     SCSI      ITSO_p7_ETS_0 0044
20000120FA263477 10000090FA263477 I0200     SCSI      ITSO_p7_ETS_3 004D
20000120FA1F7A51 10000090FA1F7A51 I0330     SCSI      ITSO_p7_ETS_1 0045
For more information about DS8870 virtualization concepts and the LUN mapping process,
see IBM System Storage DS8870 Architecture and Implementation, SG24-8085.
Note: There is no special setting requirement for Easy Tier Server regarding storage area
network (SAN) connectivity. You just need to ensure that clients and servers are properly
zoned on the SAN.
4.1.2 Setting up an AIX host for Easy Tier Server client
After the DS8870 storage system has been configured to be an Easy Tier Server coherency
server, you need to install the appropriate drivers and configure the AIX host to be an Easy
Tier Server coherency client.
In summary, these are the steps that you need to follow for the client-side deployment:
1. Install DS8870 multipathing software and its required device drivers.
2. Install Easy Tier Server coherency client driver.
3. Start Easy Tier Server coherency client driver.
4. Format and manage the Power Systems direct-attached storage (DAS) flash devices.
5. Configure the flash (SSD) devices or arrays as Easy Tier Server cache devices.
6. Enable the read cache function on DS8870 provisioned hdisks.
The details for all these steps follow.
Installing multipathing software
As indicated in Chapter 3.1.2, “Easy Tier Server coherency client requirements” on page 35,
the host must have either AIX native Multipath Input/Output (MPIO), Subsystem Device Driver
(SDD), or Subsystem Device Driver Path Control Module (SDDPCM) to support DS8870
LUNs.
Therefore, you can have any of these supported multipathing drivers installed on AIX.
Example 4-4 shows how to prepare SDDPCM and its host attachment driver for installation.
Example 4-4 Installing SDDPCM and a host attachment for DS8000
# ls
devices.fcp.disk.ibm.mpio.rte.tar
devices.sddpcm.71.rte.tar
devices.sddpcm.71.2.6.3.2.bff.tar
# tar -xvf devices.fcp.disk.ibm.mpio.rte.tar
x
# tar -xvf devices.sddpcm.71.2.6.3.2.bff.tar
x
# tar -xvf devices.sddpcm.71.rte.tar
x
# inutoc .
For more information about SDDPCM installation files and required host attachment drivers,
see the SDDPCM user’s guide at this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000201
Figure 4-1 on page 59 illustrates the smit install_all command used for the actual
installation and the System Management Interface Tool (SMIT) settings that you can use for
the fileset installation.
Instead of using the AIX SMIT, you can also use the installp command with the following
syntax for the filesets installation:
# installp -acgX -d <directory_path> <fileset_name>
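For instance, installing the SDDPCM filesets prepared in Example 4-4 from the current
directory could look like the following sketch; adjust the directory and fileset names to your
own media:

# installp -acgX -d . devices.fcp.disk.ibm.mpio.rte devices.sddpcm.71.rte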
# smit install_all

                    Install and Update from ALL Available Software

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                             [Entry Fields]
* INPUT device / directory for software                       .
* SOFTWARE to install                                         [ALL]          +
  PREVIEW only? (install operation will NOT occur)             no            +
  COMMIT software updates?                                     yes           +
  SAVE replaced files?                                         no            +
  AUTOMATICALLY install requisite software?                    yes           +
  EXTEND file systems if space needed?                         yes           +
  OVERWRITE same or newer versions?                            no            +
  VERIFY install and check file sizes?                         no            +
  DETAILED output?                                             yes           +
  Process multiple volumes?                                    yes           +
  ACCEPT new license agreements?                               yes           +
  PREVIEW new LICENSE agreements?                              no            +

Figure 4-1 Fileset installation via AIX SMIT
To ensure that the filesets were properly installed, the lslpp command can be used, as
shown in Example 4-5.
Example 4-5 Validating SDDPCM and host attachment installation
# lslpp -l devices.sddpcm.71.rte
  Fileset                        Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sddpcm.71.rte        2.6.3.2  COMMITTED  IBM SDD PCM for AIX V71

Path: /etc/objrepos
  devices.sddpcm.71.rte        2.6.3.2  COMMITTED  IBM SDD PCM for AIX V71

# lslpp -l devices.fcp.disk.ibm.mpio.rte
  Fileset                        Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.fcp.disk.ibm.mpio.rte
                              1.0.0.24  COMMITTED  IBM MPIO FCP Disk Device
Installing the Easy Tier Server coherency client driver
Easy Tier Server coherency client driver is installed by the bos.etcacheDD.rte fileset. This
installable file is provided on optical media shipped with the IBM System Storage DS8870.
Otherwise, it can be downloaded from the IBM website:
http://www.ibm.com/support
After obtaining the media, the fileset itself can be transferred to the AIX host. Alternatively, the
ISO image file can be copied on the host and mounted by using the loopmount command, as
shown in Example 4-6.
Example 4-6 Mounting an ISO file on AIX for bos.etcacheDD.rte fileset installation
# ls -l
total 1688
-rw-r-----    1 root     system        864256 Jun  3 12:51 ETCC-7.7.10.287.iso
# mkdir /ETS
# loopmount -i /tmp/ETS/ETCC-7.7.10.287.iso -o "-V cdrfs -o ro" -m /ETS
# cd /ETS
# ls
bos.etcachedd
The fileset can be installed by using the smit install_all command with the SMIT settings
that are indicated in Figure 4-1 on page 59, or with the installp command, as sketched below.
Example 4-7 then demonstrates how to check the fileset installation on AIX.
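A minimal installp sketch, assuming the ISO is mounted at /ETS as in Example 4-6 (verify the
fileset level shipped on your media):

# installp -acgX -d /ETS bos.etcacheDD.rte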
Example 4-7 Checking Easy Tier Server coherency client driver installation
# lslpp -l bos.etcacheDD.rte
  Fileset                        Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  bos.etcacheDD.rte         7.7.10.287  COMMITTED  Easy Tier Server

Path: /etc/objrepos
  bos.etcacheDD.rte         7.7.10.287  COMMITTED  Easy Tier Server
Starting Easy Tier Server coherency client driver
The cfgmgr AIX command is used to configure and load all Easy Tier Server coherency client
kernel extensions installed by the bos.etcacheDD.rte fileset. After running the cfgmgr
command, the device etcdd is configured and the service etcdaemon is automatically started,
as shown in Example 4-8.
Example 4-8 Starting Easy Tier Server coherency client driver
# cfgmgr
# lsdev -Ct etcdd
etcdd Available Easy Tier Cache Parent
# lssrc -s etcdaemon
Subsystem         Group            PID          Status
 etcdaemon                         1378020      active
Note: The Easy Tier Server daemon is designed to be started or stopped by the Easy Tier
Server client driver configuration and deconfiguration methods. Do not issue any command
to manually start or stop the Easy Tier Server daemon. If the Easy Tier Server daemon is
not in the expected state, contact IBM support and report the problem.
Configuring direct-attached storage flash (SSD) devices
Easy Tier Server is now enabled and the next deployment step is to define the SSD cache
devices. However, before doing so, you might want to examine the current one-to-one RAID 0
array formatting (pdisk to hdisk) that is recognized by default on the AIX host.
The physical adapter on the Power System host that is connected to the DAS expansion
enclosure determines the Redundant Array of Independent Disks (RAID) level supported by
the SSD devices within the enclosure. Currently, most of the supported adapters allow RAID
0, 5, 6, and 10 for creating SSD arrays.
Best Practice: In terms of client DAS cache devices for Easy Tier Server, any RAID
configuration is supported. A general recommendation is to use RAID 5 for creating SSD
arrays to be used as cache devices.
Although RAID 0 and RAID 10 would also provide excellent performance, RAID 0 does not
tolerate any disk failure, and RAID 10 can be too costly, given that 50% of the raw capacity of
the SSDs is used purely for redundancy.
Consider a scenario with several hdisks assigned as cache devices to the Easy Tier Server
coherency client. If one drive fails within a RAID 0 array, the entire client cache directory (all
cache devices) has to be destroyed and recreated to exclude the failed RAID 0 array.
Because the client DAS cache is a repository filled only with copies of “hot” data from the
DS8870, and it only serves read operations from the application, losing the entire cache does
not impact the system other than increasing read response times while the applications can
no longer benefit from the read cache.
However, you might want to avoid leaving the Easy Tier Server caching directory vulnerable
to a single disk failure. Although application data integrity is not impacted by a full cache
directory loss, you might not be able to afford having your application run without the
performance improvements of Easy Tier Server, even for a while. Such situations can be
prevented by formatting the SSD devices into RAID 5 arrays.
Also, the fewer RAID arrays that you create, the more usable space there is. For example, if
eight 387 GB SSD devices are split into two 4-drive RAID 5 arrays, each array provides
roughly 1 TB of usable capacity, or around 2 TB in total. Instead, if all eight 387 GB SSD
devices are formatted into a single RAID 5 array, the total usable capacity of the array is
around 2.5 TB.
Alternatively, although a two RAID 5 array DAS cache directory would tolerate the failure of
two drives, one per array, a single RAID 5 array configuration tolerates just one drive failure.
Therefore, it is a trade-off that you should consider when planning your cache devices.
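As a rough check of these figures, the usable capacity of an N-drive RAID 5 array is approximately (N - 1) × drive size, before formatting overhead:
• Two 4-drive arrays: 2 × (4 - 1) × 387 GB ≈ 2322 GB, or roughly 1 TiB of usable capacity per array.
• One 8-drive array: (8 - 1) × 387 GB ≈ 2709 GB, or roughly 2.5 TiB of usable capacity.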
All the details for managing and formatting SSD devices into RAID arrays are described in 5.1.2, “Managing and configuring direct-attached storage” on page 77.
Defining cache devices
At this point, we use the etcadmin command to create cache devices based on the newly created SSD arrays. The etcadmin command, which is provided by the bos.etcacheDD.rte fileset, is the management tool for the Easy Tier Server coherency client. The command is used extensively in the next implementation steps and is also covered in detail in 5.1.1, “Easy Tier Server coherency client management tasks” on page 70.
Example 4-9 first shows a listing of the current SSD arrays’ hdisks reported by the lsdev
command. Use the etcadmin -a list command to ensure that there is no cache device
currently configured in the environment, as expected.
In our scenario, there are two RAID 5 arrays: hdisk4 and hdisk5. The first cache device is then
created by the etcadmin -a create command using /dev/hdisk4 as a parameter.
Although hdisk4 was selected for creating the first cache device, any available SSD hdisk could have been used. Additional SSD hdisks are then added to the cache directory by using the etcadmin -a add command, as we did for /dev/hdisk5.
Note: When dealing with cache devices as parameters of the etcadmin command, the
prefix /dev must always be used.
Example 4-9 Creating a client caching directory and adding cache devices
# lsdev -Cc disk | grep SSD
hdisk4 Available 0N-00-00 SAS RAID 5 SSD Array
hdisk5 Available 0N-00-00 SAS RAID 5 SSD Array
# etcadmin -a list
No cache device has been created
# etcadmin -a create -d /dev/hdisk4
Cache device has been created with Flash Cache /dev/hdisk4
# etcadmin -a add -d /dev/hdisk5
Flash Cache device /dev/hdisk5 has been added to cache device
# etcadmin -a list
-------------------------------------------------------------------------------
Device Name        | Caching Capacity (bytes)      |
-------------------------------------------------------------------------------
/dev/hdisk4        | 1163869028352                 |
/dev/hdisk5        | 1163869028352                 |
Enabling the read cache function for DS8870 hdisks
Easy Tier Server coherency client cache devices are ready, as previously reported by the
etcadmin -a list command. The next step is to assign DS8870 hdisks to be read-cache
enabled for Easy Tier Server.
Example 4-10 on page 63 shows a list of the DS8870 LUNs recognized by our particular host.
The etcadmin -a start command is used to start the read cache function on the LUNs, and the etcadmin -a query command is then used to validate that the read cache function has been enabled for the selected devices.
Example 4-10 Enabling the Easy Tier Server read cache function on DS8870 hdisks
# lsdev -Cc disk | grep 2107
hdisk16 Available 0k-01-02 IBM MPIO FC 2107
hdisk17 Available 0k-01-02 IBM MPIO FC 2107
hdisk18 Available 0k-01-02 IBM MPIO FC 2107
hdisk19 Available 0k-01-02 IBM MPIO FC 2107
hdisk20 Available 0k-01-02 IBM MPIO FC 2107
hdisk21 Available 0k-01-02 IBM MPIO FC 2107
hdisk22 Available 0k-01-02 IBM MPIO FC 2107
hdisk23 Available 0k-00-02 IBM MPIO FC 2107

# etcadmin -a start -D hdisk16 hdisk23
SAN device hdisk16 read_cache function is enabled
SAN device hdisk17 read_cache function is enabled
SAN device hdisk18 read_cache function is enabled
SAN device hdisk19 read_cache function is enabled
SAN device hdisk20 read_cache function is enabled
SAN device hdisk21 read_cache function is enabled
SAN device hdisk22 read_cache function is enabled
SAN device hdisk23 read_cache function is enabled

# etcadmin -a query
Total read cache enabled SAN devices is 8
-------------------------------------------
'*': indicates SAN device read_cache has started
' ': indicates SAN device read_cache has not started
'~': indicates failing to get SAN device read_cache status
Device Name
===========
 hdisk16
 hdisk17
 hdisk18
 hdisk19
 hdisk20
 hdisk21
 hdisk22
 hdisk23
From the moment the read cache function is started for a device, the Easy Tier Server coherency client and server initiate a learning phase of about 15 minutes to analyze the volume’s data access pattern. After that, read caching effectively starts.
Refer to “Managing and enabling the read cache function on DS8870 hdisks” on page 72, and
6.1.2, “AIX operating system IOSTAT Tool” on page 87 for more details and examples about
read cache function enablement and startup.
4.2 Uninstalling Easy Tier server coherency client driver
If for any reason you no longer want or need the coherency client driver on a particular AIX
host, you can uninstall the driver. Before uninstalling the Easy Tier Server coherency client
driver, you must stop the read cache function for all DS8870 hdisks that have it enabled. For
that purpose, use the etcadmin -a shutdown command as shown in Example 4-11.
Example 4-11 Shutting down Easy Tier Server coherency client driver
# etcadmin -a shutdown -s no
Total read cache enabled SAN devices is 8
--------------------------------------------
SAN device hdisk16 read_cache function is disabled
SAN device hdisk17 read_cache function is disabled
SAN device hdisk18 read_cache function is disabled
SAN device hdisk19 read_cache function is disabled
SAN device hdisk20 read_cache function is disabled
SAN device hdisk21 read_cache function is disabled
SAN device hdisk22 read_cache function is disabled
SAN device hdisk23 read_cache function is disabled
Removing etcdd device...
etcdd device is deleted successfully
The shutdown action of the etcadmin command stops the read cache function on all read-cache enabled DS8870 devices and removes the cache directory (and therefore its cache devices). In addition, it unconfigures and removes the etcdd device from the client. In Example 4-12, you can see that the etcdd device has been removed and the etcdaemon has been stopped. As a result, neither cache devices nor read-cache enabled devices are displayed in the etcadmin command output.
Example 4-12 Validating Easy Tier Server coherency client status after etcadmin -a shutdown
# lsdev -C | grep etc
# lssrc -s etcdaemon
Subsystem         Group            PID          Status
 etcdaemon                                      inoperative
# etcadmin -a list
Fail the call since etcdd device is not configured
# etcadmin -a query
No read_cache enabled SAN device has been found.
After the client driver is no longer operational, the smit remove AIX command can be used to remove the bos.etcacheDD.rte fileset from the operating system. Figure 4-2 on page 65 displays the AIX SMIT menu that is used to remove the fileset.
                           Remove Installed Software

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                       [Entry Fields]
* SOFTWARE name                                     [bos.etcacheDD.rte]    +
  PREVIEW only? (remove operation will NOT occur)    no                    +
  REMOVE dependent software?                         no                    +
  EXTEND file systems if space needed?               no                    +
  DETAILED output?                                   no                    +
Figure 4-2 SMIT menu for fileset removal
After entering the fileset name and pressing the Enter key, AIX removes the fileset. If the removal completes successfully, an OK status is returned. Otherwise, if the removal status displayed is Failed, either scroll through the logs or check the smit.log file to identify the root cause of the failure.
Note: Contact your AIX support representative for more assistance on handling fileset
installation or removal failures.
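If you prefer the command line over SMIT, a direct installp call achieves the same result. This is only a sketch; smit remove drives the same underlying installp removal operation:

# installp -u bos.etcacheDD.rte

The -u flag removes (uninstalls) the named fileset.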
After the fileset is removed, the lslpp command no longer finds it, and the etcadmin command is no longer recognized by the operating system, as Example 4-13 demonstrates.
Example 4-13 Validating bos.etcacheDD.rte fileset removal from the operating system
# lslpp -l bos.etcacheDD.rte
lslpp: Fileset bos.etcacheDD.rte not installed.
# etcadmin
ksh: etcadmin: not found
4.3 Upgrading Easy Tier Server coherency client driver
An upgrade of the Easy Tier Server coherency client driver consists of uninstalling the current
version and installing the new one. Therefore, before upgrading it, you must stop all
read-cache enabled DS8870 hdisks and remove the client etcdd device.
Although the current driver must be uninstalled, the Easy Tier Server configuration for cache devices and read-cache enabled DS8870 hdisks can be saved. Issue the etcadmin -a shutdown -s yes|cacdev|rdcac command to save the cache devices configuration, the read-cache enabled devices configuration, or both.
The saved client configuration is automatically restored after the new driver is installed, during the etcdd device configuration performed by cfgmgr. After the new package is installed, you can either reboot the system or issue the cfgmgr command.
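Condensed into a single command-line sequence, the upgrade looks roughly as follows. This is only a sketch: /tmp/etc_driver is a hypothetical directory that holds the new package, and smit can be used instead of installp for the removal and installation steps:

# etcadmin -a shutdown -s yes
# installp -u bos.etcacheDD.rte
# installp -acgXd /tmp/etc_driver bos.etcacheDD.rte
# cfgmgr

The final cfgmgr run reconfigures the etcdd device, restarts the etcdaemon, and restores the saved configuration, as shown later in Example 4-16.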
The next examples cover the complete upgrade procedure, starting with checking the current configuration for both cache devices and read-cache enabled DS8870 hdisks, as shown in Example 4-14.
Example 4-14 Querying the current Easy Tier Server coherency client configuration
# etcadmin -a list
-------------------------------------------------------------------------------
Device Name        | Caching Capacity (bytes)      |
-------------------------------------------------------------------------------
/dev/hdisk4        | 1163869028352                 |
/dev/hdisk5        | 1163869028352                 |

# etcadmin -a query
Total read cache enabled SAN devices is 8
-------------------------------------------
'*': indicates SAN device read_cache has started
' ': indicates SAN device read_cache has not started
'~': indicates failing to get SAN device read_cache status
Device Name
===========
 hdisk16
 hdisk17
 hdisk18
 hdisk19
 hdisk20
 hdisk21
 hdisk22
 hdisk23
The etcadmin -a shutdown command stops the read cache function on all read-cache enabled DS8870 hdisks and removes the client etcdd device. The -s yes flag was used to save the current configuration of both the cache devices and the read-cache enabled DS8870 hdisks, as shown in Example 4-15.
Example 4-15 Shutting down Easy Tier Server coherency client driver
# etcadmin -a shutdown -s yes
Total read cache enabled SAN devices is 8
--------------------------------------------
SAN device hdisk16 read_cache function is disabled
SAN device hdisk17 read_cache function is disabled
SAN device hdisk18 read_cache function is disabled
SAN device hdisk19 read_cache function is disabled
SAN device hdisk20 read_cache function is disabled
SAN device hdisk21 read_cache function is disabled
SAN device hdisk22 read_cache function is disabled
SAN device hdisk23 read_cache function is disabled
Removing etcdd device...
etcdd device is deleted successfully
Because the Easy Tier Server coherency client has been shut down while saving
configuration data, the uninstallation procedures described in 4.2, “Uninstalling Easy Tier
server coherency client driver” on page 64 can be followed to remove the current version of
the bos.etcacheDD.rte fileset from the operating system.
Next, with the new client driver available, refer to “Installing the Easy Tier Server coherency
client driver” on page 59 for procedures for the driver installation.
The cfgmgr AIX command loads the driver’s kernel extensions, configures the etcdd device,
and automatically starts etcdaemon, as demonstrated in Example 4-16. If you properly used
the etcadmin -a shutdown -s yes|cacdev|rdcac command to shut down the client driver
while saving some configuration, as we did in Example 4-15 on page 66, cfgmgr will also
restore the saved configuration.
Attention: Do not issue the etcadmin -a cfgetcdd -r command if you are migrating or upgrading the Easy Tier Server client package. In that case, it will not load the configuration that was saved when the driver was shut down; use the cfgmgr command instead.
Example 4-16 Restoring client driver configuration after an upgrade
# cfgmgr
# lsdev -C | grep etc
etcdd      Available              Easy Tier Cache Parent
# lssrc -s etcdaemon
Subsystem         Group            PID          Status
 etcdaemon                         2293776      active

# etcadmin -a list
-------------------------------------------------------------------------------
Device Name        | Caching Capacity (bytes)      |
-------------------------------------------------------------------------------
/dev/hdisk4        | 1163869028352                 |
/dev/hdisk5        | 1163869028352                 |

# etcadmin -a query
Total read cache enabled SAN devices is 19
-------------------------------------------
'*': indicates SAN device read_cache has started
' ': indicates SAN device read_cache has not started
'~': indicates failing to get SAN device read_cache status
Device Name
===========
 hdisk16
 hdisk17
 hdisk18
 hdisk19
 hdisk20
 hdisk21
 hdisk22
 hdisk23
Chapter 5. Managing Easy Tier Server
This chapter describes how to properly manage and maintain an Easy Tier Server
environment after it is operational.
5.1 Managing Easy Tier Server
Easy Tier Server coherency client functions are managed by the etcadmin command, which
is installed with the bos.etcacheDD.rte fileset. In the next sections of this chapter, we cover
several management tasks enabled by the etcadmin command.
5.1.1 Easy Tier Server coherency client management tasks
As a starting point, Figure 5-1 lists the etcadmin command usage and its actions and options.
# etcadmin
USAGE: etcadmin -a <action> [-d <devName>]
       -a create   -d </dev/devName>
       -a destroy
       -a add      -d </dev/devName>
       -a start    -d <devName>
       -a stop     -d <devName>
       -a start    -D <devName1> <devName2>
       -a stop     -D <devName1> <devName2>
       -a stopall
       -a query
       -a list
       -a iostat   -d <devName>
       -a shutdown -s <yes|cacdev|rdcac|no>
       -a cfgetcdd -r <yes|cacdev|rdcac|no>
Figure 5-1 etcadmin command usage
Starting Easy Tier Server coherency client driver
The Easy Tier Server coherency client driver can be started using three different methods:
AIX cfgmgr command, AIX restart, or etcadmin -a cfgetcdd.
In general, the cfgmgr command is used to initially start the client driver, just after
bos.etcacheDD.rte installation, for instance. See Example 5-1.
Example 5-1 AIX cfgmgr usage for starting Easy Tier Server coherency client driver
# cfgmgr
# lsdev -Ct etcdd
etcdd Available Easy Tier Cache Parent
# lssrc -s etcdaemon
Subsystem         Group            PID          Status
 etcdaemon                         1836592      active
# etcadmin -a list
No cache device has been created
# etcadmin -a query
No read_cache enabled SAN device has been found.
An operating system restart works exactly like the cfgmgr command in terms of starting the client driver, because the cfgmgr command runs automatically during AIX startup.
Creating a cache directory and adding DAS cache devices
The client cache directory with cache devices is a key component of the Easy Tier Server
feature. With the etcadmin command, you can designate which direct-attached storage (DAS)
solid-state drive (SSD) devices are used as cache devices.
First, you might want to list the available SSD devices on the client by using the lsdev command. Then, if no cache device is designated for this client yet, the etcadmin -a create command must be used to create the cache directory. Additional SSD devices are then added to the cache directory as cache devices by using the etcadmin -a add command.
Example 5-2 shows the usage of both the etcadmin -a create and etcadmin -a add
commands to initially create a caching directory and then to add other cache devices. It also
shows the etcadmin -a list command that is used to check the newly created cache devices
for this Easy Tier Server coherency client.
Note: When referring to DAS SSD cache devices on the etcadmin command, you must use
the prefix /dev/ in the hdisk name parameter. By contrast, when using the etcadmin
command to manage the DS8870 hdisks read cache function, you must use only the hdisk
name, without the /dev/ prefix.
Example 5-2 Creating or adding DAS cache devices
# lsdev -Cc disk | grep SSD
hdisk4 Available 0N-00-00 SAS RAID 5 SSD Array
hdisk5 Available 0N-00-00 SAS RAID 5 SSD Array
# etcadmin -a create -d /dev/hdisk4
Cache device has been created with Flash Cache /dev/hdisk4
# etcadmin -a list
-------------------------------------------------------------------------------
Device Name        | Caching Capacity (bytes)      |
-------------------------------------------------------------------------------
/dev/hdisk4        | 1163869028352                 |
# etcadmin -a add -d /dev/hdisk5
Flash Cache device /dev/hdisk5 has been added to cache device
# etcadmin -a list
-------------------------------------------------------------------------------
Device Name        | Caching Capacity (bytes)      |
-------------------------------------------------------------------------------
/dev/hdisk4        | 1163869028352                 |
/dev/hdisk5        | 1163869028352                 |
Tip: The etcadmin command will not allow you to designate a non-SSD device as a cache
device. If you use a DS8870 hdisk in the etcadmin -a add or etcadmin -a create
commands, the following error is displayed: hdiskX is not a valid Flash Cache device.
Managing and enabling the read cache function on DS8870 hdisks
DS8870 hdisk read caching does not start automatically after the cache devices are installed. You still need to define which DS8870 hdisks are to benefit from the Easy Tier Server feature.
Across a given set of DS8870 hdisks mapped to this host, you might want to select only a limited group of logical unit numbers (LUNs) to be read-cache enabled, or you can enable them all.
In environments where a single host uses LUNs from different storage systems, ensure that
you select only for read cache LUNs from a DS8870 with the Easy Tier Server feature
enabled.
An Easy Tier Server coherency client can be served by as many Easy Tier Server coherency
servers as you would like. It is just a matter of enabling the Easy Tier Server feature on the
DS8870 systems that map LUNs to the client.
The opposite is partially true: a DS8870 can act as an Easy Tier Server coherency server for
at most 16 clients.
In Example 5-3, we list the DS8870 LUNs recognized by the host. Referring to this list, the
etcadmin -a start command is used to start the read cache function for those hdisks.
Note: An AIX host with the Easy Tier Server coherency client driver enabled is capable of
using LUNs from different DS8870 storage systems, whether they have the Easy Tier
Server feature enabled or not. If all DS8870 storage systems that map LUNs to this AIX
host are Easy Tier Server-enabled, the DAS cache in the client is shared by all of the Easy
Tier Server coherency servers whose LUNs have been read-cache enabled on the client.
Example 5-3 Enabling the read cache function on DS8870 hdisks
# lsdev -Cc disk | grep 2107
hdisk16 Available 0k-01-02 IBM MPIO FC 2107
hdisk17 Available 0k-01-02 IBM MPIO FC 2107
hdisk18 Available 0k-01-02 IBM MPIO FC 2107
hdisk19 Available 0k-01-02 IBM MPIO FC 2107
hdisk20 Available 0k-01-02 IBM MPIO FC 2107
hdisk21 Available 0k-01-02 IBM MPIO FC 2107
hdisk22 Available 0k-01-02 IBM MPIO FC 2107
hdisk23 Available 0k-00-02 IBM MPIO FC 2107

# etcadmin -a start -D hdisk16 hdisk21
SAN device hdisk16 read_cache function is enabled
SAN device hdisk17 read_cache function is enabled
SAN device hdisk18 read_cache function is enabled
SAN device hdisk19 read_cache function is enabled
SAN device hdisk20 read_cache function is enabled
SAN device hdisk21 read_cache function is enabled
# etcadmin -a start -d hdisk22
SAN device hdisk22 read_cache function is enabled
# etcadmin -a start -d hdisk23
SAN device hdisk23 read_cache function is enabled
# etcadmin -a query
Total read cache enabled SAN devices is 8
-------------------------------------------
'*': indicates SAN device read_cache has started
' ': indicates SAN device read_cache has not started
'~': indicates failing to get SAN device read_cache status
Device Name
===========
hdisk16
hdisk17
hdisk18
hdisk19
hdisk20
hdisk21
hdisk22
hdisk23
With the etcadmin -a start command, you can start the read cache function either on a contiguous range of hdisks, by using the -D flag, or on individual hdisks, by using the -d flag.
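If the hdisks that you want to enable are not contiguous, a simple shell loop over the -d form produces the same result. This is only a sketch; the hdisk names are taken from Example 5-3 and the selection of devices is up to you:

# for DISK in hdisk16 hdisk18 hdisk20 hdisk22
> do
>   etcadmin -a start -d $DISK
> done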
Note: It can take up to 15 minutes for the read cache function to start on a just-enabled DS8870 hdisk. During these first minutes, the Easy Tier Server coherency client and server exchange advice and learn about the data access pattern on the LUNs.
After the devices are read-cache enabled by the etcadmin -a start command, no user intervention is required during this first learning cycle. As soon as the read caching function effectively starts, an asterisk is displayed beside each hdisk in the output of the etcadmin -a query command.
Disabling the read cache function on DS8870 hdisks
At any time, the Easy Tier Server coherency client driver allows you to stop the read cache
function for either a single DS8870 hdisk, a contiguous set of these hdisks, or for all of the
currently read-cache enabled hdisks. Refer to Example 5-4 for illustration.
Example 5-4 Disabling the read cache function on DS8870 hdisks
# etcadmin -a stop -d hdisk16
SAN device hdisk16 read_cache function is disabled
# etcadmin -a stop -D hdisk17 hdisk20
SAN device hdisk17 read_cache function is disabled
SAN device hdisk18 read_cache function is disabled
SAN device hdisk19 read_cache function is disabled
SAN device hdisk20 read_cache function is disabled
# etcadmin -a stopall
Total read cache enabled SAN devices is 3
--------------------------------------------
SAN device hdisk21 read_cache function is disabled
SAN device hdisk22 read_cache function is disabled
SAN device hdisk23 read_cache function is disabled
# etcadmin -a query
No read_cache enabled SAN device has been found.
# etcadmin -a list
-------------------------------------------------------------------------------
Device Name        | Caching Capacity (bytes)      |
-------------------------------------------------------------------------------
/dev/hdisk4        | 1163869028352                 |
/dev/hdisk5        | 1163869028352                 |
Even if the read cache function is stopped for all read-cache enabled hdisks, the cache
devices are expected to remain active.
Shutting down the Easy Tier Server coherency client driver
The etcadmin -a shutdown command allows you to stop both the read cache function for
enabled DS8870 hdisks and the DAS cache devices. Using the -s flag and depending on the
parameter specified allows you to either save the cache devices configuration, read-cache
enabled devices configuration, both configurations, or none before a shutdown. The
respective parameters with the -s flag are as follows:
• yes: Saves both the client cache devices and read-cache enabled DS8870 hdisk configurations before shutdown.
• cacdev: Saves only the client cache devices configuration before shutdown.
• rdcac: Saves only the client read-cache enabled DS8870 hdisks configuration before shutdown.
• no: Does not save any configuration before shutdown.
Example 5-5 demonstrates the etcadmin -a shutdown command with the -s yes flag to save all Easy Tier Server coherency client configuration before the driver is shut down. In this example, following the etcadmin -a shutdown command, the etcdd device is unconfigured from AIX and the etcdaemon is stopped.
Example 5-5 Shutting down the Easy Tier Server coherency client driver while saving all configuration
# etcadmin -a shutdown -s yes
No rd_cache enabled SAN device found.
Removing etcdd device...
etcdd device is deleted successfully
# lsdev -Ct etcdd
#
# lssrc -s etcdaemon
Subsystem         Group            PID          Status
 etcdaemon                                      inoperative
# etcadmin -a list
Fail the call since etcdd device is not configured
# etcadmin -a query
No read_cache enabled SAN device has been found.
Important: When an AIX operating system that acts as an Easy Tier Server coherency client is restarted, there is no need to perform any additional operations on the client driver. The driver automatically stops the read cache function for enabled devices and deactivates the cache devices before AIX shuts down, and upon AIX boot, the cfgmgr command automatically recovers the configuration.
Restoring a saved configuration and restarting the client driver
If the client driver was shut down by etcadmin -a shutdown -s no, having no configuration
saved, a cfgmgr command is the best way to restart the driver. Instead, if any configuration
(yes, cacdev, or rdcac) was saved, the etcadmin -a cfgetcdd command must be used with
the appropriate -r flag parameter.
The -r flag parameter of the etcadmin -a cfgetcdd command matches the etcadmin -a
shutdown -s ones: yes, cacdev, rdcac, or no. In Example 5-6, we restart the client driver
restoring all the saved configuration.
Example 5-6 Restoring Easy Tier Server coherency client saved configuration and restarting the driver
# etcadmin -a cfgetcdd -r yes
etcdd
Configuring etcdd device...
etcdd device is configured to Available state
# etcadmin -a list
-------------------------------------------------------------------------------
Device Name        | Caching Capacity (bytes)      |
-------------------------------------------------------------------------------
/dev/hdisk4        | 1163869028352                 |
/dev/hdisk5        | 1163869028352                 |
Tip: If you saved all configuration by using the etcadmin -a shutdown command with -s yes, you can also choose between recovering all configuration (yes), only the cache devices configuration (cacdev), only the read-cache enabled devices configuration (rdcac), or no configuration (no) on the etcadmin -a cfgetcdd command.
In Example 5-5 on page 74, when the etcadmin -a shutdown command was run, there were
no read-cache enabled devices as the output of the command displayed the informational
message: No rd_cache enabled SAN device found. Therefore, although we use etcadmin -a
cfgetcdd -r yes to restore all saved configuration, no read-cache enabled device is restored,
as expected.
Attention: The AIX cfgmgr command will not restore the saved configuration after the
client driver is shut down by etcadmin -a shutdown -s yes|cacdev|rdcac. If you want the
configuration to be restored, the etcadmin -a cfgetcdd command must be used.
If cfgmgr is run after the etcadmin -a shutdown -s yes|cacdev|rdcac execution, it starts the
client driver properly, but it deletes the saved data, as shown in Example 5-7.
Example 5-7 Saving configuration data before driver shutdown and using cfgmgr inappropriately
# etcadmin -a shutdown -s yes
No rd_cache enabled SAN device found.
Removing etcdd device...
etcdd device is deleted successfully
# cfgmgr
# lsdev -Ct etcdd
etcdd Available Easy Tier Cache Parent
# etcadmin -a list
No cache device has been created
There is an exception to this rule, in which cfgmgr will load the saved configuration. This exception applies to Easy Tier Server coherency client driver upgrades and migrations. Refer to 4.3, “Upgrading Easy Tier Server coherency client driver” on page 65 for more information.
Destroying DAS cache devices
If the goal is to reformat the existing cache devices, you can use the etcadmin -a destroy
command to remove all cache devices at one time, as shown in Example 5-8.
Example 5-8 Destroying DAS cache devices and querying etcadmin outputs
# etcadmin -a list
-------------------------------------------------------------------------------
Device Name        | Caching Capacity (bytes)      |
-------------------------------------------------------------------------------
/dev/hdisk4        | 1163869028352                 |
/dev/hdisk5        | 1163869028352                 |
# etcadmin -a destroy
Cache device has been destroyed
DAS device hdisk4 cache device is set to non-Active
DAS device hdisk5 cache device is set to non-Active
# etcadmin -a list
No cache device has been created
# etcadmin -a query
Total read cache enabled SAN devices is 8
-------------------------------------------
'*': indicates SAN device read_cache has started
' ': indicates SAN device read_cache has not started
'~': indicates failing to get SAN device read_cache status
Device Name
===========
hdisk16
hdisk17
hdisk18
hdisk19
hdisk20
hdisk21
hdisk22
hdisk23
If read-cache enabled DS8870 hdisks are defined, they remain defined, waiting for new cache
devices to be created.
5.1.2 Managing and configuring direct-attached storage
As stated in “Configuring direct-attached storage flash (SSD) devices” on page 61,
Redundant Array of Independent Disks 5 (RAID 5) is the recommended configuration for
Easy Tier Server coherency client cache devices.
When a supported expansion enclosure is attached to the host, each SSD disk is initially formatted as a single-disk RAID 0 array hdisk. As a best practice, you might want to destroy the current RAID 0 SSD array hdisks and create new RAID 5 arrays.
Refer to Example 5-9. First, we list the current SSD hdisks (arrays) available on AIX, using
the lsdev command. Next, the sissasraidmgr AIX command is used for listing the current
array formatting and destroying these arrays.
Attention: Before reformatting SSD arrays that are already defined and currently in use as
cache devices, shut down the client driver or destroy the cache directory using the
etcadmin command.
Example 5-9 Using the sissasraidmgr AIX command to list and delete RAID arrays configuration
# lsdev -Cc disk | grep SSD
hdisk4  Available 0M-00-00 SAS RAID 0 SSD Array
hdisk5  Available 0M-00-00 SAS RAID 0 SSD Array
hdisk6  Available 0M-00-00 SAS RAID 0 SSD Array
hdisk7  Available 0M-00-00 SAS RAID 0 SSD Array
hdisk8  Available 0M-00-00 SAS RAID 0 SSD Array
hdisk9  Available 0M-00-00 SAS RAID 0 SSD Array
hdisk10 Available 0M-00-00 SAS RAID 0 SSD Array
hdisk11 Available 0M-00-00 SAS RAID 0 SSD Array
# sissasraidmgr -L -j1 -l sissas7
------------------------------------------------------------------------
Name      Resource  State      Description                          Size
------------------------------------------------------------------------
sissas7   FEFFFFFF  Primary    PCIe2 3.1GB Cache RAID SAS Enclosure 6Gb x8
sissas3   FEFFFFFF  HA Linked  Remote adapter SN 002B9004

hdisk4    FC0000FF  Optimal    RAID 0 Array                         387.9GB
 pdisk0   000000FF  Active     SSD Array Member                     387.9GB
hdisk5    FC0100FF  Optimal    RAID 0 Array                         387.9GB
 pdisk2   000002FF  Active     SSD Array Member                     387.9GB
hdisk6    FC0200FF  Optimal    RAID 0 Array                         387.9GB
 pdisk3   000003FF  Active     SSD Array Member                     387.9GB
hdisk7    FC0300FF  Optimal    RAID 0 Array                         387.9GB
 pdisk4   000004FF  Active     SSD Array Member                     387.9GB
hdisk8    FC0400FF  Optimal    RAID 0 Array                         387.9GB
 pdisk5   000005FF  Active     SSD Array Member                     387.9GB
hdisk9    FC0500FF  Optimal    RAID 0 Array                         387.9GB
 pdisk6   000006FF  Active     SSD Array Member                     387.9GB
hdisk10   FC0600FF  Optimal    RAID 0 Array                         387.9GB
 pdisk7   000007FF  Active     SSD Array Member                     387.9GB
hdisk11   FC0700FF  Optimal    RAID 0 Array                         387.9GB
 pdisk1   000001FF  Active     SSD Array Member                     387.9GB
# sissasraidmgr -D -l sissas7 -d hdisk4
hdisk4 deleted
pdisk0 Defined
# sissasraidmgr -D -l sissas7 -d hdisk5
hdisk5 deleted
pdisk2 Defined
# sissasraidmgr -D -l sissas7 -d hdisk6
hdisk6 deleted
pdisk3 Defined
# sissasraidmgr -D -l sissas7 -d hdisk7
hdisk7 deleted
pdisk4 Defined
# sissasraidmgr -D -l sissas7 -d hdisk8
hdisk8 deleted
pdisk5 Defined
# sissasraidmgr -D -l sissas7 -d hdisk9
hdisk9 deleted
pdisk6 Defined
# sissasraidmgr -D -l sissas7 -d hdisk10
hdisk10 deleted
pdisk7 Defined
# sissasraidmgr -D -l sissas7 -d hdisk11
hdisk11 deleted
pdisk1 Defined
# sissasraidmgr -L -j1 -l sissas7
------------------------------------------------------------------------
Name      Resource  State      Description                          Size
------------------------------------------------------------------------
sissas7   FEFFFFFF  Primary    PCIe2 3.1GB Cache RAID SAS Enclosure 6Gb x8
sissas3   FEFFFFFF  HA Linked  Remote adapter SN 002B9004

pdisk0    000000FF  Active     SSD Array Candidate                  387.9GB
pdisk4    000004FF  Active     SSD Array Candidate                  387.9GB
pdisk7    000007FF  Active     SSD Array Candidate                  387.9GB
pdisk2    000002FF  Active     SSD Array Candidate                  387.9GB
pdisk3    000003FF  Active     SSD Array Candidate                  387.9GB
pdisk6    000006FF  Active     SSD Array Candidate                  387.9GB
pdisk1    000001FF  Active     SSD Array Candidate                  387.9GB
pdisk5    000005FF  Active     SSD Array Candidate                  387.9GB
After the sissasraidmgr command has destroyed the previously defined RAID arrays, each
SSD device is represented only as a pdisk, with no associated hdisk. The pdisks are then
SSD Array Candidates and they are ready to be assigned to new RAID arrays.
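If all RAID 0 SSD arrays must be removed, the repetitive sissasraidmgr -D calls shown in Example 5-9 can also be driven by a short loop. This is only a sketch, which assumes the adapter name sissas7 from the example and that every RAID 0 SSD array hdisk reported by lsdev is to be deleted:

# lsdev -Cc disk | grep "RAID 0 SSD Array" | awk '{print $1}' |
> while read DISK
> do
>   sissasraidmgr -D -l sissas7 -d $DISK
> done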
The sissasraidmgr command is used with the appropriate flags to create new arrays. In
Example 5-10, we create two RAID 5 arrays, each with four SSD devices.
Example 5-10 Using the sissasraidmgr AIX command to create new SSD arrays
# sissasraidmgr -C -r 5 -s 256 -z 'pdisk0 pdisk1 pdisk2 pdisk3'
.
# sissasraidmgr -C -r 5 -s 256 -z 'pdisk4 pdisk5 pdisk6 pdisk7'
.
# sissasraidmgr -L -j1 -l sissas7
------------------------------------------------------------------------
Name      Resource  State       Description                 Size
------------------------------------------------------------------------
sissas7   FEFFFFFF  Primary     PCIe2 3.1GB Cache RAID SAS Enclosure 6Gb x8
sissas3   FEFFFFFF  HA Linked   Remote adapter SN 002B9004

hdisk4    FC0000FF  Rebuilding  RAID 5 Array                1163GB Create 25%
 pdisk0   000000FF  Active      SSD Array Member            387.9GB
 pdisk2   000002FF  Active      SSD Array Member            387.9GB
 pdisk3   000003FF  Active      SSD Array Member            387.9GB
 pdisk1   000001FF  Active      SSD Array Member            387.9GB

hdisk5    FC0100FF  Rebuilding  RAID 5 Array                1163GB Create 25%
 pdisk4   000004FF  Active      SSD Array Member            387.9GB
 pdisk7   000007FF  Active      SSD Array Member            387.9GB
 pdisk6   000006FF  Active      SSD Array Member            387.9GB
 pdisk5   000005FF  Active      SSD Array Member            387.9GB
# lsdev -Cc disk | grep SSD
hdisk4 Available 0N-00-00 SAS RAID 5 SSD Array
hdisk5 Available 0N-00-00 SAS RAID 5 SSD Array
Just after launching the command, the RAID array status is Rebuilding. It takes a few minutes
for the array creation to complete, depending on the number of drives contained by the arrays
and the drive sizes. When the rebuild operation completes, the array status changes to
Optimal, as shown in Example 5-11.
Example 5-11 Using the sissasraidmgr AIX command to list RAID arrays configuration and their status
# sissasraidmgr -L -j1 -l sissas7
------------------------------------------------------------------------
Name      Resource  State      Description                  Size
------------------------------------------------------------------------
sissas7   FEFFFFFF  Primary    PCIe2 3.1GB Cache RAID SAS Enclosure 6Gb x8
sissas3   FEFFFFFF  HA Linked  Remote adapter SN 002B9004

hdisk4    FC0000FF  Optimal    RAID 5 Array                 1163GB
 pdisk0   000000FF  Active     SSD Array Member             387.9GB
 pdisk2   000002FF  Active     SSD Array Member             387.9GB
 pdisk3   000003FF  Active     SSD Array Member             387.9GB
 pdisk1   000001FF  Active     SSD Array Member             387.9GB

hdisk5    FC0100FF  Optimal    RAID 5 Array                 1163GB
 pdisk4   000004FF  Active     SSD Array Member             387.9GB
 pdisk7   000007FF  Active     SSD Array Member             387.9GB
 pdisk6   000006FF  Active     SSD Array Member             387.9GB
 pdisk5   000005FF  Active     SSD Array Member             387.9GB
At this point, the SSD RAID arrays, represented by hdisks, are available for use as
Easy Tier Server coherency client cache devices. In Example 5-12, the newly created arrays
are displayed by the lsdev command and then, the etcadmin command is used to create the
cache devices.
Example 5-12 Creating SSD DAS cache directory and enabling read cache function for DS8870 hdisks
# lsdev -Cc disk | grep SSD
hdisk4 Available 0N-00-00 SAS RAID 5 SSD Array
hdisk5 Available 0N-00-00 SAS RAID 5 SSD Array
# etcadmin -a create -d /dev/hdisk4
Cache device has been created with Flash Cache /dev/hdisk4
# etcadmin -a add -d /dev/hdisk5
Flash Cache device /dev/hdisk5 has been added to cache device
# etcadmin -a list
-------------------------------------------------------------------------------
Device Name        | Caching Capacity (bytes)      |
-------------------------------------------------------------------------------
/dev/hdisk4        | 1163869028352                 |
/dev/hdisk5        | 1163869028352                 |
# etcadmin -a query
No read_cache enabled SAN device has been found.
# etcadmin -a start -D hdisk16 hdisk23
SAN device hdisk16 read_cache function is enabled
SAN device hdisk17 read_cache function is enabled
SAN device hdisk18 read_cache function is enabled
SAN device hdisk19 read_cache function is enabled
SAN device hdisk20 read_cache function is enabled
SAN device hdisk21 read_cache function is enabled
SAN device hdisk22 read_cache function is enabled
SAN device hdisk23 read_cache function is enabled
# etcadmin -a query
Total read cache enabled SAN devices is 8
-------------------------------------------
'*': indicates SAN device read_cache has started
' ': indicates SAN device read_cache has not started
'~': indicates failing to get SAN device read_cache status
Device Name
===========
*hdisk16
*hdisk17
*hdisk18
*hdisk19
*hdisk20
*hdisk21
*hdisk22
*hdisk23
For more information about the sissasraidmgr command, see the IBM AIX 7.1 Information
Center:
http://pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp
Chapter 6. Easy Tier Server monitoring
This chapter covers Easy Tier Server monitoring in terms of the overall workload and the distribution of read I/Os, which are served either by the direct-attached storage (DAS) solid-state drive (SSD) cache devices or by the DS8870.
We discuss the following tools that can be used for monitoring Easy Tier Server operations:
• Storage Tier Advisor Tool (STAT)
• AIX iostat command
• Easy Tier Server coherency client etcadmin command
6.1 Monitoring Easy Tier Server
In general, the performance improvements generated by Easy Tier Server operation are
mostly observed at the application layer, while monitoring the response time from the
application perspective. In addition, the DS8870 Storage System, AIX operating system, and
Easy Tier Server etcadmin command offer specific tools for monitoring DAS cache hits,
overall input/output operations per second (IOPS) performance, and general workload
characteristics.
The results shown by these monitoring tools, along with the application’s performance metrics, give a comprehensive understanding of Easy Tier Server operation and the performance improvements that this feature can deliver.
6.1.1 DS8870 Storage Tier Advisor Tool
The Storage Tier Advisor Tool (STAT) is a Windows application that can be used to analyze
the characteristics of the workload running on DS8870 storage systems. It provides capacity
planning information associated with the current or future use of the Easy Tier.
The STAT processes data that is collected by the Easy Tier and Easy Tier Server monitors. Monitoring statistics are gathered and analyzed at least every 24 hours. The results are integrated into summary report data that can be downloaded from the DS8870 storage system and processed by the STAT for reporting purposes.
Note: STAT version 7.7.10.xxx enhancements enable users to monitor Easy Tier Server
coherency clients’ IOPS statistics, regarding overall read IOPS and DAS cache hit ratio.
STAT can be downloaded from the following File Transfer Protocol (FTP) server address:
ftp://ftp.software.ibm.com/storage/ds8000/updates/DS8K_Customer_Download_Files/Storage_Tier_Advisor_Tool/
Refer to IBM System Storage DS8000 Easy Tier, REDP-4667 for more information about the
Storage Tier Advisor Tool concepts and usage.
DS8870 Easy Tier summary report
The collected Easy Tier data can be downloaded at any time from the DS8870, by using the
offloadfile command. The offloadfile command creates two output files, one file for each
DS8870 internal server. These output files consist of two compressed files, which are known
as heat data files, and are processed by the STAT tools to generate a graphical report that
can be viewed in a web browser.
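As a minimal sketch of the download step (assuming the DS CLI is used and C:\temp is a hypothetical target directory on the workstation), the heat data files can be offloaded as follows; one file is written for each DS8870 internal server:

dscli> offloadfile -etdata C:\temp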
You can also use the data storage graphical user interface (DS GUI) to export the DS8870 Easy Tier Summary Report by clicking Home → System Status, right-clicking the storage image, and then clicking Storage Image → Export Easy Tier Summary Report, as illustrated in Figure 6-1 on page 85.
Decompress the heat data files generated by DS8870 and use them as input files for
STAT.exe, using the following syntax:
>STAT.exe inputfile1 inputfile2
The index.html file in the STAT installation directory is updated with new heat data.
Figure 6-1 Exporting Easy Tier Summary Report via DS GUI
Easy Tier Server overview in STAT
The Storage Tier Advisor Tool (STAT) System Summary Report provides an overview of
Easy Tier monitored DS8870 extent pools. It also shows all Easy Tier Server coherency
clients managed by this server, as highlighted in Figure 6-2.
In this view, STAT displays the average read IOPS and DAS Cache Hit Ratio for a given client over the last 24-hour data collection time frame. It also shows the Configured DAS Cache Size for each client.
Important: The output of the STAT is based on data collected by the Easy Tier monitoring function, and a migration plan is only created after a data collection period of at least 24 hours. Easy Tier Server information is therefore not immediately reflected in the STAT data collection. It becomes available in the next report cycle after Easy Tier Server is deployed on the Easy Tier Server coherency clients.
Figure 6-2 STAT System Summary view
Detailed information for each Easy Tier Server coherency client is displayed by clicking the client’s corresponding worldwide port name (WWPN) in the HOST ID column. The view represented by Figure 6-3 shows a comprehensive graph containing the overall read IOPS for one client over time.
Note: The IOPS count reported by STAT for Easy Tier Server coherency clients is the average of overall read IOPS for all DS8870 LUNs in a given client, considering both DAS read IOPS and DS8870 read IOPS. Cache Hit Ratio is the percentage of the read IOPS served by the DAS cache. Write I/Os are not taken into account in these summaries for Easy Tier Server.
Figure 6-3 STAT detailed view for a given Easy Tier Server coherency client
STAT also breaks down the IOPS and Cache Hit Ratio statistics at the LUN level. Hence, it allows you to see the individual results for all DS8870 LUNs mapped to this particular client, as shown in Figure 6-4. To get the detailed per-LUN breakdown, expand the Existing Attached Volume Status option.
Figure 6-4 STAT breakdown per-LUN statistics for a given Easy Tier Server coherency client
Storage Tier Advisor Tool provides several other monitoring views for Easy Tier internal data
placement functions. IBM System Storage DS8000 Easy Tier, REDP-4667 can be consulted
for more information.
6.1.2 AIX operating system IOSTAT Tool
The iostat AIX command is used to monitor the system I/O devices’ statistics from the
operating system perspective. Besides monitoring the DS8870 hdisks, iostat can also
provide I/O statistics for the SSD arrays’ hdisks that are designated as cache devices.
By monitoring iostat output, we can see a per-hdisk breakdown (whether the hdisk is a DS8870 LUN or an SSD array) of read and write IOPS. In a measurement interval, iostat displays hdisk utilization, number of IOPS, throughput, response time, and other relevant information.
Using the iostat flags -D and -l shows an extended utilization report in long-listing format, continuously updating the performance information at one-second intervals. Use the command as follows:
# iostat -Dl 1
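To narrow the output to the disks of interest, iostat also accepts a list of disk names followed by an interval and a count. The following sketch (the hdisk names and the 60-second interval are only examples) samples the cache device and two DS8870 LUNs five times:

# iostat -Dl hdisk4 hdisk16 hdisk17 60 5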
Example 6-1 illustrates the monitoring scenario with iostat. In this scenario, there are eight DS8870 LUNs, represented by hdisk16 to hdisk23. All of them are read-cache enabled and served by the Easy Tier Server coherency client cache device, which is a single 8-SSD RAID 5 array, represented by hdisk4.
Example 6-1 Listing the DAS cache devices and read cache enabled DS8870 hdisks
# lsdev -Cc disk | grep 2107
hdisk16 Available 0E-00-02 IBM MPIO FC 2107
hdisk17 Available 0E-00-02 IBM MPIO FC 2107
hdisk18 Available 0E-00-02 IBM MPIO FC 2107
hdisk19 Available 0E-00-02 IBM MPIO FC 2107
hdisk20 Available 0E-00-02 IBM MPIO FC 2107
hdisk21 Available 0E-00-02 IBM MPIO FC 2107
hdisk22 Available 0E-00-02 IBM MPIO FC 2107
hdisk23 Available 0E-00-02 IBM MPIO FC 2107
# lsdev -Cc disk | grep SSD
hdisk4 Available 0N-00-00 SAS RAID 0 SSD Array
# etcadmin -a list
-------------------------------------------------------------------------------
Device Name        | Caching Capacity (bytes)      |
-------------------------------------------------------------------------------
/dev/hdisk4        | 2715694399488                 |
# etcadmin -a query
Total read cache enabled SAN devices is 8
-------------------------------------------
'*': indicates SAN device read_cache has started
' ': indicates SAN device read_cache has not started
'~': indicates failing to get SAN device read_cache status
Device Name
===========
hdisk16
hdisk17
hdisk18
hdisk19
hdisk20
hdisk21
hdisk22
hdisk23
After starting a random online transaction processing (OLTP)-like workload, we collected
initial iostat output, as shown in Figure 6-5.
This first iostat output was captured during the initial 15-minute interval, immediately after
enabling the read cache function on the DS8870 hdisks. At this time, the Easy Tier Server
coherency client and server are learning about the data patterns and exchanging advices to
effectively start the caching mechanisms.
Therefore, we can see that all read I/Os were still being handled by the DS8870 hdisks, while
there was no I/O at all to hdisk4, which is the cache device for this client.
Disks:                        xfers                                   read                                  write
--------------- ---------------------------------- -------------------------------------- --------------------------------------
           %tm     bps     tps   bread   bwrtn      rps    avg    min    max  time fail     wps    avg    min    max  time fail
           act                                             serv   serv   serv  outs               serv   serv   serv  outs
hdisk18   96.0   24.8M   898.6   20.5M    4.3M    722.1    1.3    0.1  518.9     0    0   176.5    0.3    0.2    0.5     0    0
hdisk16   93.4   21.7M   923.6   17.7M    3.9M    723.4    1.4    0.1  515.7     0    0   200.2    0.3    0.2    0.5     0    0
hdisk21   96.5   28.8M  1425.3   23.5M    5.3M   1104.2    1.3    0.1  486.3     0    0   321.1    0.3    0.2    5.2     0    0
hdisk17   96.2   21.8M  1203.7   17.5M    4.3M    909.3    1.3    0.1  507.4     0    0   294.4    0.2    0.2    8.3     0    0
hdisk22   93.0   30.7M  1069.1   25.8M    5.0M    879.9    1.2    0.1  507.5     0    0   189.2    0.3    0.2    0.5     0    0
hdisk23  100.0   28.3M  1136.2   23.4M    5.0M    906.6    1.2    0.1  504.5     0    0   229.6    0.3    0.2    0.5     0    0
hdisk19  100.0   20.9M   909.0   17.0M    3.9M    706.2    1.5    0.1  517.5     0    0   202.8    0.3    0.2   33.5     0    0
hdisk20   97.8   26.2M  1073.1   21.4M    4.8M    842.1    1.4    0.1  149.5     0    0   231.0    0.3    0.2    0.5     0    0
hdisk4     0.0    0.0     0.0     0.0     0.0      0.0     0.0    0.0    0.0     0    0     0.0    0.0    0.0    0.0     0    0
Figure 6-5 Iostat output collected when the read cache function on DS8870 hdisks had not effectively started yet
After the initial 15-minute learning time, the etcadmin -a query command output shows that the read cache function has been effectively started on the read-cache enabled devices. These devices now have asterisks next to their names, as displayed in Example 6-2.
Example 6-2 Listing read-cache enabled DS8870 hdisks when read_cache is started
# etcadmin -a query
Total read cache enabled SAN devices is 8
-------------------------------------------
'*': indicates SAN device read_cache has started
' ': indicates SAN device read_cache has not started
'~': indicates failing to get SAN device read_cache status
Device Name
===========
*hdisk16
*hdisk17
*hdisk18
*hdisk19
*hdisk20
*hdisk21
*hdisk22
*hdisk23
Another iostat output was collected, and you can see in Figure 6-6 the amount of read IOPS being served to the application by the cache device.
Because most of the read I/Os requested by the application were hitting the DAS cache, the cache device read throughput and IOPS contrast greatly with the same statistics for the read-cache enabled DS8870 hdisks.
For instance, while the read IOPS on the DS8870 hdisks were in the 200 range, the read IOPS for the cache device were above 5000. As expected, the same behavior applies to the overall throughput: the average DS8870 hdisk throughput was around 60 MBps, while the cache device throughput was higher than 500 MBps.
Disks:                        xfers                                   read                                  write
--------------- ---------------------------------- -------------------------------------- --------------------------------------
           %tm     bps     tps   bread   bwrtn      rps    avg    min    max  time fail     wps    avg    min    max  time fail
           act                                             serv   serv   serv  outs               serv   serv   serv  outs
hdisk18   74.3   39.6M   292.3   34.6M    5.1M    137.0    8.6    0.1  320.1     0    0   155.3    0.3    0.2    0.6     0    0
hdisk16   94.7   57.1M   295.6   53.9M    3.3M    196.4    9.9    0.1  237.7     0    0    99.2    0.3    0.2    0.6     0    0
hdisk21   83.9   87.2M   486.6   82.5M    4.6M    345.2    3.8    0.1  101.6     0    0   141.4    0.3    0.2    4.8     0    0
hdisk17   97.5   60.8M   312.9   58.8M    2.1M    249.3    9.2    0.1  570.1     0    0    63.6    0.3    0.2    0.6     0    0
hdisk22   46.7   53.1M   385.9   45.1M    8.0M    142.1    3.6    0.1   52.8     0    0   243.8    0.3    0.2    0.6     0    0
hdisk23  100.0   52.0M   365.8   45.1M    6.9M    156.1    4.2    0.1   72.7     0    0   209.7    0.3    0.2    0.6     0    0
hdisk19  100.0   55.4M   297.3   52.1M    3.3M    197.0    9.4    0.1  369.7     0    0   100.3    0.3    0.2    0.6     0    0
hdisk20   60.2   54.4M   380.0   48.0M    6.4M    184.3    4.0    0.1  100.7     0    0   195.7    0.3    0.2    0.6     0    0
hdisk4    98.5  547.4M  6735.4  168.6M  378.8M   5290.5    0.6    0.0   42.8     0    0  1444.9    1.9    1.3    8.3     0    0
Figure 6-6 Iostat output displaying the I/O behavior in a Easy Tier Server coherency client
These results demonstrate that the Easy Tier Server cache devices are effectively serving most of the application’s read requests while, at the same time, fetching data from the DS8870 to populate the cache.
Although the Easy Tier Server coherency client cache device is a read-only cache for the
application, you might notice some amount of write IOPS. These write IOPS reflect the
caching population logic of Easy Tier Server, in which data is read from DS8870 and written
to the cache devices.
Attention: With Easy Tier Server enabled, the read service times of the DS8870 hdisks do not represent the real response time observed at the application layer. This is because most of the read requests made to read-cache enabled DS8870 hdisks are handled directly by the cache devices, whose average service time is in the microseconds range. The high number of read transactions on the read-cache enabled DS8870 hdisks is for populating the DAS cache: during cache population, there are read I/Os to the DS8870 LUNs and write I/Os to the cache device.
For more information about each iostat statistic, consult IBM AIX 7.1 Information Center at the
following site:
http://pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp
6.1.3 Monitoring with etcadmin
The Easy Tier Server coherency client management tool etcadmin can be used to monitor
read-cache enabled DS8870 hdisks, using the etcadmin -a iostat command. It displays I/O
statistics, classifying read and write I/Os to DS8870, and read I/Os to the DAS cache device.
The output shown in Example 6-3 was collected during the Easy Tier Server initial 15-minute
learning time. Thus, although the device is read-cache enabled, the read cache function has
not started yet.
Example 6-3 Etcadmin iostat output collected before the read cache function started
# etcadmin -a iostat -d hdisk16
ETS Device I/O Statistics -- hdisk16
---------------------------------------------------
Read Count:                      449020
Write Count:                     85921
Read Hit Count:                  0
Partial Read Hit Count:          0
Read Bytes Xfer:                 14713487360
Write Bytes Xfer:                2815459328
Read Hit Bytes Xfer:             0
Partial Read Hit Bytes Xfer:     0
Promote Read Count:              0
Promote Read Bytes Xfer:         0
Example 6-4 shows the etcadmin -a iostat command for DS8870 hdisk16 when the read cache function has already been started.
You can use the numbers in this output to derive individualized per-LUN statistics, such as the read/write ratio or the DAS cache read hit percentage, which is (Read Hit Count + Partial Read Hit Count) divided by Read Count.
Example 6-4 Etcadmin iostat output from a given DS8870 hdisk with the read cache function started
# etcadmin -a iostat -d hdisk16
ETS Device I/O Statistics -- hdisk16
---------------------------------------------------
Read Count:                      3385448
Write Count:                     914249
Read Hit Count:                  2186632
Partial Read Hit Count:          5934
Read Bytes Xfer:                 83262103552
Write Bytes Xfer:                18077851648
Read Hit Bytes Xfer:             47124586496
Partial Read Hit Bytes Xfer:     194445312
Promote Read Count:              110750
Promote Read Bytes Xfer:         116129792000
In this output, the Read Hit Count is the total number of read operations issued to the driver that were full DAS cache read hits. The Partial Read Hit Count is the total number of read operations issued to the driver that were partial DAS cache read hits. A partial read hit occurs when a read request finds part, but not all, of the requested data in the cache; the remainder of the data that is not available in the cache must be acquired from the DS8870.
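As a rough illustration of the calculation, the hit percentage can be derived directly from the command output. This is only a sketch, which assumes the label-and-value layout shown in Example 6-4:

# etcadmin -a iostat -d hdisk16 | awk -F: '
>   /^Read Count/             { reads = $2 }
>   /^Read Hit Count/         { hits  = $2 }
>   /^Partial Read Hit Count/ { part  = $2 }
>   END { printf "DAS cache read hit: %.1f%%\n", 100 * (hits + part) / reads }'

With the values from Example 6-4, this yields a DAS cache read hit percentage of approximately 65%.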
For more information about etcadmin -a iostat output, consult the Easy Tier Server User’s
Guide, available in IBM System Storage DS8000 Information Center, at this site:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Note that some publications referenced in this list might be available in softcopy
only:
• IBM System Storage DS8870: Architecture and Implementation, SG24-8085
• IBM System Storage DS8000 Host Attachment and Interoperability, SG24-8887
• IBM System Storage DS8870 Product Guide
• IBM System Storage DS8000 Easy Tier, REDP-4667
• IBM System Storage DS8000 Easy Tier Heat Map Transfer, REDP-5015
• IBM System Storage DS8000: Easy Tier Application, REDP-5014
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, drafts, and additional materials, at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources:
• IBM System Storage DS8700 and DS8800 Introduction and Planning Guide, GC27-2297-07
• IBM System Storage DS8700 Performance with Easy Tier, WP101675
• IBM System Storage DS8700 and DS8800 Performance with Easy Tier 2nd Generation, WP101961
• IBM System Storage DS8800 and DS8700 Performance with Easy Tier 3rd Generation, WP102024
Online resources
These websites and URLs are also relevant as further information sources:
• IBM data storage feature activation (DSFA):
  http://www.ibm.com/storage/dsfa
• Documentation for the DS8000 system:
  http://www.ibm.com/systems/storage/disk/ds8000/index.html
• IBM System Storage Interoperation Center (SSIC):
  http://www.ibm.com/systems/support/storage/config/ssic/index.jsp
• IBM announcement letters (search for R6.1):
  http://www.ibm.com/common/ssi/index.wss
• IBM Techdocs Library - The IBM Technical Sales Library:
  http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/Techdocs
• IBM System Storage DS8000 Information Center:
  http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic
Help from IBM
IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.com/services