Front cover
IBM Tivoli Storage Productivity Center in Virtual Infrastructure Environments
Learn how to implement Tivoli Storage Productivity Center in VMware environments
Review the fundamentals of Tivoli Storage Productivity Center
Understand the architecture and features of VMware vSphere 4
Mary Lovelace
Harsha Gunatilaka
Hector Hugo Ibarra
ibm.com/redbooks
Redpaper
International Technical Support Organization
IBM Tivoli Storage Productivity Center in
Virtual Infrastructure Environments
September 2011
REDP-4471-01
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
Second Edition (September 2011)
This edition applies to Version 4.2 of IBM Tivoli Storage Productivity Center (product numbers 5608-WB1,
5608-WB2, 5608-WB3, 5608-WC3, 5608-WC4, and 5608-E14).
© Copyright International Business Machines Corporation 2009, 2011. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
The team who wrote this paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Chapter 1. Tivoli Storage Productivity Center on virtual infrastructures . . . . . . . . . . . 1
1.1 Why Tivoli Storage Productivity Center on virtual infrastructures . . . . . . . . . . . . . . . . . . 2
1.1.1 Overview of Tivoli Storage Productivity Center. . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Structure of Tivoli Storage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.3 Overview of Hypervisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.4 Overview of VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.5 Architecture of VMware Infrastructure 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.1.6 Components of VMware Infrastructure 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.1.7 Components of VMware vSphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.8 Physical topology of the VMware Infrastructure Data Center . . . . . . . . . . . . . . . . 11
1.1.9 VMware Pegasus CIMOM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2 Tivoli Storage Productivity Center V4.2 support for VMware . . . . . . . . . . . . . . . . . . . . 16
1.2.1 Overview of VMware support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2.2 Support for VMware Infrastructure 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2.3 Supported virtual infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2.4 Supported guest operating systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2.5 Supported storage subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2.6 Monitoring in a VMware Infrastructure 3 environment . . . . . . . . . . . . . . . . . . . . . 18
1.3 Lab environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.3.1 Stand-alone environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.3.2 Enterprise environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Chapter 2. The importance of storage in virtual infrastructures . . . . . . . . . . . . . . . . 23
2.1 Direction of the industry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 Hypervisor data storage capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.1 VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.2 Citrix XenServer 5.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.3 Microsoft Hyper-V R2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3 VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.1 What is new in VMware vSphere vStorage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.2 vSphere storage architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4 SAN planning and preferred practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.4.1 Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.4.2 Preferred practices from VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Chapter 3. Planning and configuring the Tivoli Storage Productivity Center and VMware
environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.1 VMware levels and supported environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.1.1 Supported SMI-S agents on VMware virtual machines. . . . . . . . . . . . . . . . . . . . . 38
3.1.2 Agent Manager on VMware Virtual Machine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.1.3 Tivoli Storage Productivity Center server on VMware virtual machine . . . . . . . . . 39
3.1.4 Tivoli Storage Productivity Center licensing with VMware . . . . . . . . . . . . . . . . . . 39
3.1.5 Tivoli Storage Productivity Center Data agents on VMware . . . . . . . . . . . . . . . . . 40
3.1.6 Tivoli Storage Productivity Center Fabric agents on VMware . . . . . . . . . . . . . . . . 40
3.1.7 Storage Resource agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.1.8 Tivoli Storage Productivity Center VMware LUN Correlation support . . . . . . . . . . 41
3.1.9 Tivoli Storage Productivity Center limitations with Hypervisors . . . . . . . . . . . . . . 41
3.2 Configuring Tivoli Storage Productivity Center communication with VMware . . . . . . . 42
Chapter 4. Monitoring a VMware environment with Tivoli Storage
Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.1 VMware ESX Server reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.1.1 VMware ESX Server asset reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2 Disk Manager reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.3 VMware server alerting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.4 VMware virtual machine reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.4.1 Disk reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.4.2 Mapping to Hypervisor Storage report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4.3 Virtual Machine File Systems report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.4.4 Disk Capacity By Computer report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.4.5 VMware virtual machines without an agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.4.6 Virtual machines with agents, but without an ESX data source . . . . . . . . . . . . . . 73
4.4.7 Unused Virtual Disk Files report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.4.8 IBM TotalStorage Productivity Center reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.5 Removed Resource Retention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®
DB2®
DS4000®
DS6000™
DS8000®
Enterprise Storage Server®
eServer™
FlashCopy®
IBM®
pSeries®
Redbooks®
Redpaper™
Redbooks (logo)®
System Storage®
System x®
System z®
Tivoli®
TotalStorage®
XIV®
xSeries®
z/OS®
zSeries®
The following terms are trademarks of other companies:
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
NetApp, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other
countries.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
NetApp, and the Network Appliance logo are trademarks or registered trademarks of Network Appliance, Inc.
in the U.S. and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
Many customers have adopted VMware ESX as their server consolidation and virtualization
solution. This IBM® Redpaper™ publication explains how to plan for IBM Tivoli® Storage
Productivity Center monitoring of your VMware ESX environment. This paper is intended for
storage administrators who will plan for and configure Tivoli Storage Productivity Center to
monitor VMware servers and then produce reports. This paper guides you through the
required steps to successfully complete these tasks. In addition, this paper includes several
scenarios that show you how to use Tivoli Storage Productivity Center to monitor your
VMware environment.
The team who wrote this paper
This paper was produced by a team of specialists from around the world working at the
International Technical Support Organization (ITSO) in San Jose, CA.
Mary Lovelace is a Consulting IT Specialist at the ITSO. She has experience with IBM in
large systems, storage and Storage Networking product education, system engineering and
consultancy, and system support. She has written IBM Redbooks® publications about Tivoli
Storage Productivity Center, IBM Tivoli Storage Manager, Scale Out Network Attached
Storage, and IBM z/OS® storage products.
Harsha Gunatilaka is a Software Engineer for Tivoli Storage Software in Tucson, AZ. He is
currently part of the Tivoli Storage Productivity Center development and test team. He is an
IBM Certified Deployment Professional on Tivoli Storage Productivity Center and has
experience with a wide array of IBM storage products and software. He holds a degree in
Management Information Systems from the University of Arizona.
Hector Hugo Ibarra is an Infrastructure IT Architect, specializing in cloud computing and
storage solutions. He is based in Argentina and is currently working at the IBM Argentina
Delivery Center. In 2006, Hector was designated as the ITA Leader for the VMware Center of
Competence. He specializes in virtualization technologies and has assisted several IBM
clients in deploying virtualized infrastructures around the world.
Thanks to the following people for their contributions to this project:
David Bennin
Richard Conway
ITSO, Poughkeepsie Center
Ajay Lunawat
IBM Tivoli, San Jose
Now you can become a published author, too!
Here's an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
򐂰 Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
򐂰 Send your comments in an email to:
redbooks@us.ibm.com
򐂰 Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
򐂰 Find us on Facebook:
http://www.facebook.com/IBMRedbooks
򐂰 Follow us on Twitter:
http://twitter.com/ibmredbooks
򐂰 Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
򐂰 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
򐂰 Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Chapter 1. Tivoli Storage Productivity Center on virtual infrastructures
This chapter outlines the advantages of implementing IBM Tivoli Storage Productivity Center
in virtual infrastructure environments. It provides an overview of IBM Tivoli Storage
Productivity Center and VMware. In addition, it includes a high-level technical introduction to
both products, their architectures, and their basic concepts.
This chapter includes the following sections:
򐂰 Why Tivoli Storage Productivity Center on virtual infrastructures
򐂰 Tivoli Storage Productivity Center V4.2 support for VMware
򐂰 Lab environment
1.1 Why Tivoli Storage Productivity Center on virtual infrastructures
Today, companies all over the world are using virtualization technologies to virtualize their
business servers, desktops, or applications. There are many good reasons to go in that
direction. For example, virtual infrastructures increase reliability by providing high
availability (HA) solutions for critical business applications, and maintenance costs are
significantly lower than in physical environments.
Companies are focusing on virtualizing their existing physical servers and planning for
virtual desktop provisioning. Application virtualization and new projects that will be
deployed in a virtual world add further reasons to think about how to improve the utilization
of virtual infrastructure resources.
IBM Tivoli Storage Productivity Center V4.2 is an integrated storage infrastructure
management solution that simplifies, automates, and optimizes the management of storage
devices, storage networks, and capacity utilization of file systems. It provides disk and tape
library configuration and management, performance management, storage area network
(SAN) fabric management and configuration, and host-centered usage reporting and
monitoring.
The integration between Tivoli Storage Productivity Center and Hypervisors improves
environment reliability by providing performance analysis of storage resources. It also helps
to reduce space utilization, decreasing the cost of allocated storage.
This section provides an overview of Tivoli Storage Productivity Center, Hypervisor, and
VMware. It includes information about the architecture of Tivoli Storage Productivity Center
and VMware Infrastructure 3. It highlights the components of VMware Infrastructure 3 and
vSphere and provides information about the topology of the VMware Infrastructure Data
Center. This section helps you to understand how the products work and what they can do
together.
1.1.1 Overview of Tivoli Storage Productivity Center
Tivoli Storage Productivity Center is an integrated set of software components. It provides
end-to-end storage management from the host and application to the target storage device in
a heterogeneous platform environment.
Tivoli Storage Productivity Center provides the following functions:
򐂰 Simplifies the management of storage infrastructures
򐂰 Manages, configures, and provisions SAN-attached storage
򐂰 Monitors and tracks the performance of SAN-attached devices
򐂰 Monitors, manages, and controls (through zones) SAN fabric components
򐂰 Manages the capacity utilization and availability of file systems and databases
With Tivoli Storage Productivity Center V4.2, you can perform the following tasks:
򐂰 Manage the capacity utilization of file systems and databases.
򐂰 Automate file system capacity provisioning.
򐂰 Configure devices and manage multiple devices from a single user interface.
򐂰 Tune and proactively manage the performance of storage devices on the SAN.
򐂰 Manage, monitor, and control your SAN fabric.
Tivoli Storage Productivity Center V4.2 provides a single management platform that you can
use to centralize the management of your storage infrastructure. By providing an integrated
suite with management modules focused on various aspects of the storage infrastructure,
Tivoli Storage Productivity Center delivers role-based administration, single sign-on (SSO),
and a single management server and repository. The central console provides a centralized
place to monitor, plan, configure, report, and perform problem determination on the SAN
fabric, storage arrays, storage capacity, and virtual infrastructures.
1.1.2 Structure of Tivoli Storage Productivity Center
This section provides information about the structure of the Tivoli Storage Productivity Center
from the logical and physical views.
Logical structure
The logical structure of Tivoli Storage Productivity Center V4.2 is composed of the
infrastructure layer, application layer, and interface layer, as illustrated in Figure 1-1.
[Figure 1-1 is a block diagram: an interface layer (integrated user interface, CLI, WSDL, automated best practices, and provisioning/workflow) sits on an application layer of management applications (Fabric, Disk, Replication, Performance, Data, and others), which in turn sits on an infrastructure layer (device discovery and control, job engine, copy services, CIM/SNMP/ESSNI client libraries, a consolidated database, and scheduling, messaging, and logging services).]
Figure 1-1 Logical structure of Tivoli Storage Productivity Center
Infrastructure layer
The infrastructure layer consists of basic functions, such as messaging, scheduling, logging,
and device discovery. It also includes a consolidated database that is shared by all
components of Tivoli Storage Productivity Center to ensure consistent operation and performance.
Application layer
The application layer consists of core management functions of Tivoli Storage Productivity
Center. These functions are based on the infrastructure implementation and provide various
disciplines of storage or data management. These application components are most often
associated with the product components that make up the product suite, such as fabric
management, disk management, replication management, and data management.
Interface layer
The interface layer presents integration points for the products that make up the suite. The
integrated GUI unites product and component functions in a single representation that
seamlessly interacts with the components. This layer centralizes the tasks for planning,
monitoring, configuring, reporting, viewing topology, and problem determination.
Physical structure
Figure 1-2 shows the physical structure of Tivoli Storage Productivity Center, which consists
of three components:
򐂰 A data component, which is Tivoli Storage Productivity Center for Data
򐂰 A disk component, which is Tivoli Storage Productivity Center for Disk
򐂰 A replication component, which is Tivoli Storage Productivity Center for Replication
Figure 1-2 Structure of Tivoli Storage Productivity Center
The Data server is the control point for product scheduling functions, configuration, event
information, reporting, and GUI support. It coordinates communication with agents and data
collection from agents that scan file systems and databases to gather storage demographics
and populate the database with results. Automated actions can be defined to perform file
system extension, data deletion, and Tivoli Storage Manager backup, archiving, or event
reporting when defined thresholds are encountered. The Data server is the primary contact
point for GUI functions. It includes functions that schedule data collection and discovery for
the Device server.
The Device server component discovers, gathers information from, analyzes the performance
of, and controls storage subsystems and SAN fabrics. It coordinates communication with, and
data collection from, agents that scan SAN fabrics. The Device server also collects
information about Hypervisors.
The single database instance serves as the repository for all Tivoli Storage Productivity
Center components.
Outside of the server, several interfaces are used to gather information about the
environment. The most important sources of information are the Tivoli Storage Productivity
Center agents (Storage Resource agent, Data agent, and Fabric agent) and SMI-S-enabled
storage devices that use a Common Information Model object manager (CIMOM) agent
(embedded or as a proxy agent). Storage Resource agents, Common Information Model
(CIM) agents, and Out-of-Band fabric agents gather host, application, storage system, and
SAN fabric information and send that information to the Data server or Device server.
Data agents and Fabric agents are supported in Tivoli Storage Productivity Center V4.2.
However, no new functions were added to those agents for this release. For optimal results
when using Tivoli Storage Productivity Center, migrate the Data agents and Fabric agents to
Storage Resource agents.
Native storage system interfaces are provided in Tivoli Storage Productivity Center V4.2 for
IBM System Storage® DS8000®, IBM System Storage SAN Volume Controller, and IBM
XIV® Storage System to improve the management capabilities and performance of data
collection. The native interfaces (also referred to as native application programming
interfaces (NAPIs)) replace the CIM agent (SMI-S agent) implementation for these storage systems.
If you are upgrading Tivoli Storage Productivity Center, a storage subsystem credential
migration tool is provided to help you migrate the existing storage system credentials for the
native interfaces. The native interfaces are supported for the following release levels:
򐂰 DS8000 Release 2.4.2 or later
򐂰 SAN Volume Controller Version 4.2 or later
򐂰 XIV Storage System Version 10.1 or later
With the GUI, you can enter information or receive information for all Tivoli Storage
Productivity Center components. With the command-line interface (CLI), you can issue
commands for major Tivoli Storage Productivity Center functions.
IBM Tivoli Storage Productivity Center for Data
Tivoli Storage Productivity Center for Data can provide over 300 enterprise-wide reports,
monitoring and alerts, policy-based action, and file system capacity automation in a
heterogeneous environment.
IBM Tivoli Storage Productivity Center for Disk
Tivoli Storage Productivity Center for Disk offers the Performance Manager feature. It also
can enable device configuration and management of supported SAN-attached devices from a
single console.
IBM Tivoli Storage Productivity Center for Replication
The basic functions of Tivoli Storage Productivity Center for Replication provide management
of IBM FlashCopy®, Metro Mirror, and Global Mirror capabilities for the IBM Enterprise
Storage Server® (ESS) Model 800, IBM System Storage DS6000™, IBM System Storage
DS8000, IBM Storwize V7000, and IBM System Storage SAN Volume Controller. Tivoli
Storage Productivity Center for Replication is available for both IBM System z® and open
systems platforms.
1.1.3 Overview of Hypervisor
The Hypervisor or virtual machine monitor (VMM) is a software layer that supports the
utilization of multiple operating systems or virtual machines in one single physical server. It is
the core of a virtual machine that performs physical resource management. The Hypervisor
operates, manages, and arbitrates the four main resources of a computer (processor,
memory, network, and storage). You can dynamically allocate those resources among all
virtual machines that are defined by the central computer.
Currently, hardware and Hypervisor manufacturers are working to improve and help the
Hypervisor to reach a full, reliable, and robust virtualization. The following types of Hypervisor
are available:
򐂰 Hosted (hardware, operating system, Hypervisor, virtual machine)
The Hypervisor requires an operating system running to be started.
򐂰 Non-Hosted or binary translation (hardware, Hypervisor, virtual machine)
The Hypervisor operates as a layer between the hardware and the virtual machines. All
binary translations for processor, memory, network, and storage are managed by the
VMM.
򐂰 Full virtualization, full hardware assist (hardware, Hypervisor, virtual machine)
The Hypervisor operates as a layer between the hardware and the virtual machines. All
binary translations are managed by the VMM.
򐂰 Paravirtualization, non-hosted, hardware assist (hardware, Hypervisor, virtual machine)
The Hypervisor operates as a layer between the hardware and the virtual machines. All
binary translations for network and storage are managed by the VMM. The binary
translations for the processor and memory are done directly by the hardware.
At the time this document was written, the following Hypervisor products were considered
most important:
򐂰 Citrix XenServer 5.5
򐂰 Microsoft Hyper-V R2
򐂰 VMware vSphere and VMware ESX
VMware is the only Hypervisor supported by Tivoli Storage Productivity Center V4.2 and,
therefore, is the focus of this paper.
1.1.4 Overview of VMware
VMware products enable virtualization of hardware for x86 technology-based computer
hardware. The VMware workstation, VMware server, VMware ESX Server, and VMware
vSphere provide the facility to create, configure, and run virtual machines. The VMware
workstation and VMware server install and run inside an operating system that is installed on
a physical machine because they are a hosted Hypervisor type.
VMware Hypervisor (VMware ESX Server 3 or vSphere)
The VMware Hypervisor can host multiple virtual machines that run independently of each
other while sharing hardware resources. With VMware, a single physical computer system
can be divided into logical virtual machines running various operating systems. To the
applications running inside the virtual machine, it is a computer system with a unique IP
address and access to storage that is virtualized by the hosting system (Hypervisor).
VMware VirtualCenter
The VMware VirtualCenter is the management application that is the central entry point for
the management and monitoring of multiple instances of ESX Server in a data center. To use
the improved VMware support, two data sources are required. A VMware ESX Server or a
VMware Virtual Infrastructure data source is needed. Also, a Tivoli Storage Productivity
Center Data agent or Storage Resource agent is required on each virtual machine that you
will monitor.
For more information about the VMware ESX Server or VMware VirtualCenter, go to the
VMware site at the following address:
http://www.vmware.com
For a list of supported VMware products and guest operating systems, consult the IBM Tivoli
Support Site at:
http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/
Tivoli_Storage_Productivity_Center_Standard_Edition
Tivoli Storage Productivity Center and VMware data flow
As shown in Figure 1-3 on page 8, the data between the VMware environment and the Tivoli
Storage Productivity Center server flows in two different connections:
򐂰 The connection of the Tivoli Storage Productivity Center server to the VMware Host Agent
of the VMware ESX Server through the VMware Virtual Infrastructure Data Source
򐂰 The connection of the Tivoli Storage Productivity Center Data agents or Storage Resource
agents residing on the VMware virtual machines inside the VMware ESX Server
You do not need to install an agent on the VMware ESX Server itself. Installing an agent on
the VMware ESX Server is not supported.
[Figure 1-3 is a diagram of the data flow: the Tivoli Storage Productivity Center Device server connects through SOAP to the Host Agent on each ESX Server and to Virtual Center, while the Data agents on virtual machines 1 through n report to the Data server; the ESX Servers reach the virtualized storage through their HBAs.]
Figure 1-3 VMware and Tivoli Storage Productivity Center environment flow
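The VI API connection shown in Figure 1-3 is an ordinary SOAP web service, so any VI API client can issue the same kind of inventory query that the Device server performs against the Host Agent or VirtualCenter. Example 1-1 is a minimal, illustrative sketch, not a component of Tivoli Storage Productivity Center, that uses the open source pyVmomi Python client; the host name and credentials are placeholders.

Example 1-1   Listing virtual machines and their datastores through the VI API (illustrative)

# Illustrative sketch: query VirtualCenter (or an ESX host) over the SOAP-based
# VI API and list each virtual machine with the datastores that back it.
# The host name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only: skips certificate checks
si = SmartConnect(host="virtualcenter.example.com",
                  user="administrator", pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    # A container view walks the whole inventory tree for one object type.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        datastores = [ds.name for ds in vm.datastore]
        print("%s: %s" % (vm.name, ", ".join(datastores)))
finally:
    Disconnect(si)

Tivoli Storage Productivity Center issues equivalent queries internally when you define a VMware ESX Server or Virtual Infrastructure data source, which is why no agent needs to be installed on the ESX Server itself.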
At the time this paper was written, no host bus adapter (HBA) virtualization was available for
the VMware virtual machines. Therefore, if you install a Fabric agent on a VMware virtual
machine, the Fabric agent will not be useful.
1.1.5 Architecture of VMware Infrastructure 3
VMware ESX Server 3.5 and ESXi are the core of the VMware Infrastructure 3 product suite.
They function as the Hypervisor, or virtualization layer, that serves as the foundation for the
entire VMware Infrastructure 3 package. ESX Server is a bare metal installation, which
means no host operating system, for example, Microsoft Windows or Linux, is required. ESX
Server is a leaner installation than products that require a host operating system. The ESX
Server allows more of its hardware resources to be used by virtual machines rather than by
processes that are required to run the host. The installation process for ESX Server installs
two components, the Service Console and the VMkernel, that interact with each other to
provide a dynamic and robust virtualization environment.
Service Console
The Service Console is the operating system that is used to manage the ESX Server and the
virtual machines that run on the server. The console includes services that are found in other
operating systems, such as a firewall, Simple Network Management Protocol (SNMP) agents,
and a web server. The Service Console does not have many of the features and benefits
other operating systems offer, making it a lean virtualization machine.
VMkernel
The other installed component is the VMkernel. Although the Service Console provides
access to the VMkernel, the VMkernel is the real foundation of the virtualization process. The
VMkernel manages the access of the virtual machines to the underlying physical hardware by
providing processor scheduling, memory management, and virtual switch data processing.
Core architecture
Based on the design of the core architecture of the VMware ESX Server 3 (Figure 1-4), ESX
Server 3 implements the abstractions that allow hardware resources to be allocated to
multiple workloads in fully isolated environments.
The design of the ESX Server has the following key elements:
򐂰 The VMware virtualization layer, which provides the idealized hardware environment and
virtualization of underlying physical resources
򐂰 The resource manager, which enables the partitioning and guaranteed delivery of
processor, memory, network bandwidth, and disk bandwidth to each virtual machine
򐂰 The hardware interface components, including device drivers, which enable
hardware-specific service delivery while hiding hardware differences from other parts of
the system
[Figure 1-4 is a diagram: virtual machines running applications on Windows 2000, Windows NT, and Linux, plus the Console OS, sit on the VMware virtualization layer, which runs on Intel architecture hardware (processor, memory, disk, and NIC).]
Figure 1-4 Architecture of the VMware ESX Server
1.1.6 Components of VMware Infrastructure 3
The VMware Infrastructure 3 product suite includes the following products, among others, that
make up the full feature set of enterprise virtualization:
Virtual Infrastructure Client
An interface for administrators and users to connect remotely to the
VirtualCenter Management Server or individual VMware ESX Server
installations from any Windows technology-based computer.
VMware Consolidated Backup (VCB)
Provides an easy-to-use, centralized facility for agent-free backup of
virtual machines. It simplifies backup administration and reduces the
load on VMware ESX Server installations.
VMware Distributed Resource Scheduler (DRS)
Intelligently allocates and balances computing capacity dynamically
across collections of hardware resources for virtual machines.
VMware ESX Server A production-proven virtualization layer that runs on physical servers
and abstracts processor, memory, storage, and networking resources so
that they can be provisioned to multiple virtual machines.
VMware High Availability
Provides easy-to-use, cost-effective high availability for applications
running on virtual machines. If server failure occurs, affected virtual
machines are automatically restarted on other production servers that
have spare capacity.
VMware Infrastructure SDK
Provides a standard interface for VMware and third-party solutions to
access VMware Infrastructure.
Virtual Infrastructure Web Access
A web interface for virtual machine management and remote console
access.
VMware Storage VMotion
Performs live storage migrations. Moves virtual machines from one
data store to another, allowing storage changes or refreshes without
affecting the virtual images.
VMware Virtual Machine File System (VMFS)
A high-performance cluster file system for virtual machines.
VMware Virtual SMP Enables a single virtual machine to use multiple physical processors
simultaneously.
VMware VirtualCenter
The central point for configuring, provisioning, and managing a
virtualized IT infrastructure.
VMware VMotion
Enables the live migration of running virtual machines from one
physical server to another with zero downtime, continuous service
availability, and complete transaction integrity.
Important: The focus of this paper is on the VMware ESX Server, VirtualCenter, VMotion,
Storage VMotion, DRS, and HA.
1.1.7 Components of VMware vSphere
VMware vSphere shares the following components with ESX Server 3:
򐂰 VMware Consolidated Backup
򐂰 VMware Distributed Resource Scheduler
򐂰 VMware ESX and ESXi
򐂰 VMware High Availability
򐂰 VMware vCenter Server
򐂰 VMware Virtual Machine File System
򐂰 VMware Virtual SMP
򐂰 VMware VMotion and Storage VMotion
򐂰 VMware vSphere Client
򐂰 VMware vSphere SDK
򐂰 VMware vSphere Web Access
VMware vSphere incorporates the following components:
򐂰 Host profiles
The host profiles feature simplifies host configuration management through user-defined
configuration policies. The host profile policies capture the blueprint of a known, validated
host configuration and use this information to configure networking, storage, security, and
other settings across multiple hosts.
򐂰 Pluggable Storage Array (PSA)
The PSA is a storage partner plug-in framework that enables greater array certification
flexibility and improved array-optimized performance. PSA is a multipath I/O framework so
that storage partners can enable their array asynchronously to ESX release schedules.
򐂰 VMware Fault Tolerance
When Fault Tolerance is enabled for a virtual machine, a secondary copy of the original is
created. All actions completed on the primary virtual machine are also applied to the
secondary. If the primary virtual machine becomes unavailable, the secondary copy
becomes active, providing continual availability.
򐂰 VMware vNetwork Distributed Switch
The VMware vNetwork Distributed Switch feature includes a distributed virtual switch
(DVS) that spans many ESX or ESXi hosts. It enables a significant reduction of ongoing
network maintenance activities and increases network capacity. With this feature, virtual
machines can maintain a consistent network configuration as they migrate across multiple
hosts.
1.1.8 Physical topology of the VMware Infrastructure Data Center
The Virtual Infrastructure topology can be as simple as having a unique physical VMware
ESX Server with local storage and virtual machines running on it. The environment can grow
within a heterogeneous strategy, through the addition of more instances of VMware ESX
Server from different hardware manufacturers or within the same manufacturer but that use
different product families (see Figure 1-5).
[Figure 1-5 is a diagram of three VMware ESX Servers, each with local storage, from hardware vendors A, B, and C.]
Figure 1-5 Simple heterogeneous Virtual Infrastructure environment
Heterogeneous environment
A heterogeneous environment is a limitation, because most of the virtual infrastructure
features are not available across different types of hardware, which can affect
availability.
When demand begins to grow, the environment needs to improve resource utilization or
technology. The first bottleneck is in storage. As the number of virtual images grows, they
demand more I/O to the local storage. You might ask these questions: Is there sufficient local
storage, and is it robust enough to support the current demand? Should we add new
technology, change the storage configuration, or increase the number of VMware ESX
Servers?
You must consider how the answers to these questions affect the solution cost. You can use
Tivoli Storage Productivity Center to provide the performance reports that can help determine
the best solution.
You can increase the complexity of the Virtual Infrastructure solution by adding different types
of storage devices. VMware supports SAN, iSCSI, and network-attached storage (NAS). The
recommendations for making the right storage selection depend on the performance, cost,
purpose, and reliability that the environment needs.
Because Tivoli Storage Productivity Center supports only NAS and SAN in the VMware
Infrastructure by design, you only need to add SAN to the original Virtual Infrastructure, as
shown in Figure 1-6. You can add as many storage devices as you need.
[Figure 1-6 extends the diagram in Figure 1-5: the three VMware ESX Servers with local storage now also connect through fabric switches to a shared storage device.]
Figure 1-6 Heterogeneous Virtual Infrastructure environment with SAN enterprise storage devices
If you are not familiar with SAN, read the IBM Redbooks publication Introduction to Storage
Area Networks, SG24-5470.
With the addition of storage, the environment is more reliable and faster, but other questions
remain: What is going to happen if the business requires more virtual images running in the
virtual infrastructure? What if those new virtual images are critical for the business? Now it is
time to replace the heterogeneous VMware ESX hosts with a homogeneous hardware
infrastructure where compatibility with VMware Infrastructure features plays a key role, as
shown in Figure 1-7.
[Figure 1-7 is a diagram of virtual guests running on IBM System x VMware ESX Servers, connected through fabric switches to a storage device.]
Figure 1-7 Homogeneous Virtual Infrastructure environment with SAN storage
Homogeneous server hardware environment
You can keep the same storage solution. However, now that hardware compatibility is not an
issue, you can use VMware Clusters with DRS, HA, VMotion, and Storage VMotion.
Now you might feel that you need nothing more. You have invested in hardware and software,
and you can move virtual machines from one host to the other with no outages. In addition,
you can support a host that is down, and you can run maintenance without affecting the
business servers.
Soon you will introduce disk-intensive applications and databases in your virtual
infrastructure. You will virtualize the previous hardware, applications owners will start
demanding more virtual images for new projects, and the isolated test and development will
go to your virtual infrastructure. Desktop virtualization and provisioning will be implemented
for local and remote users. Application virtualization will also be implemented by increasing
the number of servers, appliance gateways, databases, profiles, and security management
servers all running in your virtual infrastructure.
Figure 1-8 illustrates how the virtual infrastructure environment now looks.
[Figure 1-8 is a diagram of a VMI data center: a Virtual Center management console, VMI clusters of virtual guests on IBM System x VMware ESX Servers, fabric switches, and storage devices.]
Figure 1-8 Typical scenario of Virtual Machine Interface (VMI) for enterprise data centers
Because of the constant change in environments, you must consider the implementation and
maintenance of the virtual environment.
1.1.9 VMware Pegasus CIMOM
Pegasus CIMOM provides the enterprise storage management industry with a CIM-compliant
object model for virtual machines and their related storage devices. The Pegasus CIMOM is
installed with the VMware ESX Server so that virtual machine resources can be explored, as
illustrated in Figure 1-9 on page 15.
Tivoli Storage Productivity Center: Tivoli Storage Productivity Center does not use the
Pegasus CIMOM or communicate with it. The information regarding the Pegasus CIMOM
is included here for completeness.
[Figure 1-9 is a diagram: the Tivoli Storage Productivity Center Device server communicates over HTTP/HTTPS with the VI API of VMware Virtual Center and of each VMware ESX Server in a VMware cluster, which hosts virtual machines 1 through 3.]
Figure 1-9 VMware environment and Tivoli Storage Productivity Center communication flow
With the Pegasus CIMOM, independent software vendors can perform the following tasks:
򐂰 Explore the virtual machines on the ESX Server machine and view their storage resources
using any CIM client (see the sketch after this list).
򐂰 Examine virtual machine storage allocation to determine if availability and utilization
policies are being satisfied.
򐂰 Examine the physical storage allocated to a virtual machine.
򐂰 Verify the operational status of virtual machine storage, including all storage devices and
paths involved in supplying storage to virtual machines.
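For example, the first of these tasks takes only a few lines with a generic CIM client. Example 1-2 is an illustrative sketch that uses the open source pywbem Python library; the host name, credentials, port, and namespace are assumptions, and the VMware providers may expose their classes in a VMware-specific namespace rather than root/cimv2.

Example 1-2   Enumerating computer systems from the Pegasus CIMOM with pywbem (illustrative)

# Illustrative sketch: enumerate computer systems (the ESX host and, through
# the VMware providers, its virtual machines) from the Pegasus CIMOM.
# The host, credentials, port, and namespace are placeholder assumptions.
import pywbem

conn = pywbem.WBEMConnection(
    "https://esx1.example.com:5989",   # 5989 is the usual CIM-XML HTTPS port
    ("root", "password"),
    default_namespace="root/cimv2",
    no_verification=True)              # lab use only: skips certificate checks

for system in conn.EnumerateInstances("CIM_ComputerSystem"):
    print(system["Name"])

Remember that Tivoli Storage Productivity Center itself does not communicate with the Pegasus CIMOM; a sketch like this one is relevant only to vendors building their own CIM clients.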
VMware includes a Pegasus CIMOM with ESX Server for the following reasons:
򐂰 Accessibility
With a CIMOM and VMware-specific providers running on the ESX Server machine,
vendors can quickly build custom agents and clients to incorporate VMware servers,
virtual machines, and their available resources into enterprise management applications.
򐂰 Industry support
CIM and SMI-S are independent of any particular programming language and other
implementation-specific semantics. Clients or applications that query the Pegasus CIMOM
can be implemented on any platform and in any programming language with an
implementation of the Distributed Management Task Force (DMTF) CIM standard.
򐂰 Remote operations
With appropriate firewall rules, it is possible to explore VMware environments remotely by
querying the Pegasus CIMOM.
򐂰 Low barrier to entry
The concepts behind Pegasus CIMOM are easy to understand, and developers can
quickly create and deploy them by using many open source toolkits that are available on
the web.
1.2 Tivoli Storage Productivity Center V4.2 support for VMware
Tivoli Storage Productivity Center V3.3.0 introduced support for VMware ESX Server 3.0.2
and later. This section describes the VMware support that is provided in Tivoli Storage
Productivity Center V4.2.
1.2.1 Overview of VMware support
Tivoli Storage Productivity Center V4.2 provides the following support for VMware:
򐂰 VMware vSphere ESX and ESXi
򐂰 VMware ESX Server 3.5 3i
򐂰 VMware VirtualCenter V2.5 and V4
򐂰 Data agent equivalent support for ESX Server
򐂰 Reporting on the ESX Server virtualization infrastructure
– Includes ESX Server storage and its usage
– Reports that now show the logical unit number (LUN) correlation
򐂰 Mapping of virtual machines to the hosting ESX Server
򐂰 Mapping of Data agent reports to virtual machines on multiple instances of ESX Server
򐂰 Reporting on storage uses by virtual machines and multiple instances of ESX Server
򐂰 Mapping of storage between ESX Server and virtual machines
򐂰 Data agent support in virtual guests on ESX Server 3.0.x, 3.5.x, or 4
򐂰 LUN mapping between the VMware ESX Server and back-end storage
Important: Tivoli Storage Productivity Center also supports VMware ESX Server 3.0 and
VirtualCenter V2.0, but does not support the LUN correlation for these releases.
1.2.2 Support for VMware Infrastructure 3
Tivoli Storage Productivity Center supports the following VMware Infrastructure 3 features:
򐂰 ESX Server 3.0.x (LUN correlation is not supported.)
򐂰 ESX Server 3.5 or later (LUN correlation is supported.)
򐂰 ESX Server 3.5 3i or later (LUN correlation is supported.)
򐂰 VMware VirtualCenter V2.0.x (LUN correlation is not supported.)
򐂰 VMware VirtualCenter V2.5 or later (LUN correlation is supported.)
The hierarchical mapping of storage allocated to the virtual machine is available for the virtual
machines on the VMware ESX Server.
Important: Tivoli Storage Productivity Center V4.2 supports the mapping of storage from
the VMware ESX Server to the back-end storage subsystems with VMware ESX Server
3.5 or 4.
1.2.3 Supported virtual infrastructure
The Tivoli Storage Productivity Center V4.2 can monitor the following virtual infrastructure
components:
򐂰 VMware ESX Server 3.0.x, 3.5, and 4.0
򐂰 VMware ESXi Server 3.5 and 4.0
򐂰 VMware VirtualCenter V4.0
򐂰 VMware VirtualCenter V2.5 and V2.0.x
Important: IBM Tivoli Storage Productivity Center does not support Citrix XenServer or
Microsoft Hyper-V.
1.2.4 Supported guest operating systems
The Tivoli Storage Productivity Center server can be installed on a virtual machine on
VMware ESX Server 3.0.x, 3.5.x, or 4.0.x. The hardware and operating system requirements
are the same requirements as for a physical machine.
Tivoli Storage Productivity Center V4.2 supports the following guest operating systems:
򐂰 Red Hat Enterprise Linux AS 4
򐂰 Red Hat Enterprise Linux 5
򐂰 SUSE Linux Enterprise 9
򐂰 SUSE Linux Enterprise 10
򐂰 SUSE Linux Enterprise 11
򐂰 Windows Server 2003
򐂰 Windows Server 2003 R2
򐂰 Windows Server 2008
򐂰 Windows Server 2008 R2
Important: No Tivoli Storage Productivity Center component can be installed directly on
the ESX Server (Service Console).
1.2.5 Supported storage subsystems
The following storage subsystems are supported through VMware Infrastructure 3 and Tivoli
Storage Productivity Center V4.2:
򐂰 3PAR
򐂰 EMC CLARiiON
򐂰 EMC Symmetrix
򐂰 Hewlett-Packard Enterprise Virtual Arrays (EVA)
򐂰 Hitachi Data Systems 9xxxx
򐂰 IBM Enterprise Storage Server
򐂰 IBM System Storage DS4000® storage server
򐂰 IBM System Storage DS6000 storage server
򐂰 IBM System Storage DS8000 storage server
򐂰 IBM System Storage SAN Volume Controller
򐂰 IBM XIV Storage System
1.2.6 Monitoring in a VMware Infrastructure 3 environment
Tivoli Storage Productivity Center can provide the following information for a VMware
Infrastructure 3 environment:
򐂰 Monitoring virtual images
– Comprehensive discovery of the following items:
• Databases
• File systems
• Files
• Servers
• Storage
– Enterprise-wide reporting
– Threshold monitoring
– Alerts
– Automated scripting facility
– Automatic policy-based provisioning
– Chargeback capability
– Reports on all storage, including IBM, EMC, Hitachi Data Systems, Hewlett-Packard,
and NetApp
򐂰 Monitoring multiple instances of VMware ESX Server and storage devices
– Centralized point of control for disk configuration
– Device grouping services
– Logging
– Automated management and provisioning
– Capacity monitoring and reporting
– Scheduled actions
– Created and assigned LUNs
– Integrated with Fabric management
– Performance trending
– Performance thresholds and notification
– DS8000 Element Manager enhanced GUI
– Automated status and problem alerts
– Integration with third-party system management through SNMP
򐂰 Monitoring fabric
– Centralized point of control for SAN configuration
– Automated management
– Multivendor-switch zone provisioning
– Brocade, Cisco, and McDATA
– Multivendor HBA support
– Visualization of the topology
– Real-time monitoring
– Automated status and problem alerts
– Direct integration with Tivoli system management
– Integration with third-party system management through SNMP
򐂰 Reporting on the virtualization infrastructure of VMware ESX Server
Tivoli Storage Productivity Center Topology Viewer provides the following information:
– Level 0 computers: VMware ESX Server, Data agents, Storage Resource Agents
(SRAs), and unknown systems
– Hypervisor alerts: discovered, missing, new, and removed virtual machines
– Level 2: Data store, local and SAN, and how they are connected
– Virtual machines: List of virtual machines and disk information
– Connectivity: HBA and Fabric
– Performance in the fabric and storage ports
1.3 Lab environment
To help you gain a better understanding of this topic, we created two environments: a
stand-alone environment and an enterprise environment. Based on these environments, we
discovered a set of preferred practices as provided in this paper.
1.3.1 Stand-alone environment
The stand-alone environment includes small and medium virtual infrastructure scenarios. It
consists of one physical server with local and SAN storage where a VMware ESX Server is
installed. Several virtual machines run on it. The number of physical servers that you can add
in parallel is unlimited. The hardware configurations, test scenarios, and the monitoring that
we performed with Tivoli Storage Productivity Center work for all of the servers. In addition, on
a different physical server, we installed VMware Virtual Center V4 with Microsoft SQL Server
2008 as the database engine.
Figure 1-10 shows the complete stand-alone environment.
Figure 1-10 Stand-alone environment created in the lab
Configuration of the VMware host hardware
The VMware server hardware in the lab was configured as follows:
򐂰 VMware ESX host
– IBM System x® 3650
– Two 300 GB 15K RPM SAS disks (local storage)
– One dual-port QLogic PCI-X 4 Gb FC HBA
– Two LUNs, 400 GB, with four paths to SAN Volume Controller
򐂰 VMware Virtual Center host
– IBM System x 5260, 2 CPU, 16 GB RAM
򐂰 Fabric switches
– Brocade IBM B32
򐂰 Storage devices
– SVC
– IBM System Storage DS4700 storage server
Scenario summary
Based on the test scenarios run in the stand-alone environment, we made the following
observations:
򐂰 Working with templates
Do not install a Tivoli Storage Productivity Center Data agent in templates. Even with
SYSPREP configured in VirtualCenter, you must remove the agent and install it again.
After a new virtual machine is created from a template, the Data agent can be installed. It
starts sending data to the Tivoli Storage Productivity Center server immediately.
򐂰 Working with snapshots
If a job is scheduled on a Tivoli Storage Productivity Center server that runs in a snapshot,
and at some point you revert to the original state, the job log file will be missing. In this
situation, an alert is displayed in the Tivoli Storage Productivity Center server.
Important: Rerun all scheduled jobs if a virtual machine returns to its original status
from a snapshot.
򐂰 Changing IP addresses in the virtual machines
We did not detect any issues while changing the IP address of a virtual machine. Follow
these steps if an IP address change is required:
a. Update the Domain Name System (DNS) record in your DNS server.
b. Change the IP address of the virtual machine.
c. Flush the DNS cache in the virtual machine and in the Tivoli Storage Productivity
Center server.
d. Restart the IBM Tivoli Common Agent in the virtual machine.
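For step c on a Windows virtual machine, the DNS cache can be flushed with the
following command (the Linux equivalent depends on the caching daemon in use):
ipconfig /flushdns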
Important: If the Tivoli Storage Productivity Center Server V4.2 is running in a virtual
machine, use a static IP address. If a static IP address is not assigned, Tivoli Storage
Productivity Center Server V4.2 can stop working. In this case, you must reinstall it.
򐂰 Adding and removing virtual hardware
Adding and removing virtual hardware is supported by Tivoli Storage Productivity Center
V4.2. Disks that were added to virtual machines while they were powered on were
detected by Tivoli Storage Productivity Center V4.2 after running a probe.
򐂰 Ports used for virtual machines and multiple instances of VMware ESX Server
For VMware ESX Server, we used port 443. For VMware Virtual Machines, we used ports
9510–9515.
More information: For more information about the ports that are used, see
Deployment Guide Series: IBM TotalStorage Productivity Center for Data, SG24-7140.
򐂰 LUN allocations and deallocations on the VMware ESX Server
Tivoli Storage Productivity Center V4.2 detected LUN allocation and deallocation to the
VMware ESX Server. Changes were reflected in the Topology Viewer after running a
probe.
1.3.2 Enterprise environment
The enterprise environment includes enterprise virtual infrastructure scenarios. The
environment consists of two physical servers with local and SAN storage where the VMware
ESX Server is installed. Several virtual machines run on the servers. Both servers are part of
a single VMware cluster where DRS, HA, and VMotion are enabled. The number of VMware
clusters that you can add in parallel is unlimited. The hardware configurations, test scenarios,
and the monitoring that we performed with Tivoli Storage Productivity Center work for all of
the servers.
The VirtualCenter Server is the same one that was used in the stand-alone environment.
Figure 1-11 shows the complete enterprise environment.
Figure 1-11 Enterprise environment created in the lab
Hardware description
We added a second VMware ESX host to the existing stand-alone environment, and then we
built a VMware cluster.
Scenario summary
The following tests were performed in the lab and documented in this paper:
򐂰 Enabling VMotion
򐂰 Enabling DRS with different options
򐂰 Enabling HA
Chapter 2. The importance of storage in virtual infrastructures
This chapter explains the importance of having a robust and reliable storage design to
support the demands of current virtual infrastructures. It explains the new storage-related
technologies and features released in VMware vSphere 4. It also explains how IBM Tivoli
Storage Productivity Center can help in new storage area network (SAN) design planning.
This chapter includes the following sections:
򐂰 Direction of the industry
򐂰 Hypervisor data storage capabilities
򐂰 VMware
򐂰 SAN planning and preferred practices
2.1 Direction of the industry
The industry is going in one direction, toward the cloud operating system. The cloud operating
system uses the power of virtualization to transform data centers into dramatically simplified
cloud-computing infrastructures. IT organizations can use internal and external resources to
deliver the next generation of flexible and reliable IT, securely and with low risk.
In the cloud, servers, applications, and desktops are converging, demanding processor
power, reliability, business continuity, and cost savings. As time passes, key software
vendors are increasingly willing to officially support their products when running on virtual
infrastructures.
With the correct infrastructure design, a cloud environment can deliver the right combination
of resources, performance, reliability, and cost. The storage portion of the design plays a key
role because it must be able to handle thousands of I/O operations per second, with no impact in case of
hardware failures. The storage portion of the design must be able to support continuous
technology and competitive improvements, including cost savings.
2.2 Hypervisor data storage capabilities
This section explores the following key Hypervisor products:
򐂰 VMware
򐂰 Citrix XenServer 5.5
򐂰 Microsoft Hyper-V R2
2.2.1 VMware
The virtual storage manager in VMware resides in the vCenter management console. The
vStorage Virtual Machine File System (VMFS) is a cluster file system that provides optimized
virtual machine storage virtualization. vStorage stores the virtual machine state in a central
location. Then, storage administrators can run multiple, concurrent instances of the VMware
ESX Server and access the same virtual machine storage.
VMware data storage management also includes advanced features such as thin provisioning.
Another advanced feature is hot extension for virtual logical unit numbers (LUNs) so that
storage managers can expand LUNs at the VMFS level without taking down the LUN.
In addition, VMware Data Recovery takes virtual machine snapshots and does file-level
recovery from virtual machine disk (VMDK) backups.
2.2.2 Citrix XenServer 5.5
Citrix XenServer 5.5 has limited data storage management capabilities unless you purchase
the Citrix Essentials add-on package, which includes the Citrix StorageLink technology. With
StorageLink, storage arrays behave as native XenServer storage. This way, administrators
can provision virtual machine storage directly from the XenServer management console. In
this approach, they gain access to features such as thin provisioning, data deduplication,
and performance optimization that are included with the arrays.
2.2.3 Microsoft Hyper-V R2
Microsoft Hyper-V R2 included improvements in the Hypervisor storage capabilities, which
are mostly features that VMware customers are already accustomed to. Data storage
managers who use Hyper-V can now dynamically add and remove disks from guest operating
systems and expand virtual hard disks. Hyper-V R2 also introduced Cluster Shared Volumes
so that multiple virtual machines can share LUNs.
Hyper-V R2 with System Center Virtual Machine Manager 2008 R2 also supports Live Storage
Migration. Live Storage Migration allows for LUN-to-LUN migration of storage from one platform
to another, which requires some downtime. VMware Storage VMotion supports similar
migration without downtime. On vSphere 4.1, migration across different storage vendors is also
supported. Hyper-V also uses the Microsoft System Center Data Protection Manager and
Microsoft Volume Shadow Copy Service technology for virtual machine snapshots.
2.3 VMware
This section provides information about the new features, functions, and architecture of
VMware.
2.3.1 What is new in VMware vSphere vStorage
VMware vSphere vStorage has several new features.
Virtual disk thin provisioning
With VMware thin provisioning, virtual machines can use storage space on an as-needed
basis, further increasing utilization of storage for virtual environments. vCenter Server 4.0
enables alerts and provides alarms and reports that specifically track allocation and current
usage of storage capacity. This way administrators can optimize the allocation of storage for
virtual environments. With thin provisioning, users can safely optimize available storage
space by using over-allocation, and reduce storage costs for virtual environments.
VMware Paravirtualized SCSI
VMware Paravirtualized SCSI (PVSCSI) adapters are high-performance storage adapters
that offer greater throughput and lower processor utilization for virtual machines. These
adapters are best suited for environments in which guest applications are I/O intensive.
VMware recommends that you create a primary adapter for use with a disk that hosts the
system software (boot disk) and a separate PVSCSI adapter for the disk that stores user
data, such as a database. The primary adapter is the default for the guest operating system
on the virtual machine. For example, for virtual machines with Microsoft Windows 2008 guest
operating systems, LSI Logic is the default primary adapter.
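As an illustration only, a secondary PVSCSI adapter with a dedicated data disk can be
declared in the virtual machine configuration (.vmx) file along the following lines. The
adapter number and disk file name in this sketch are hypothetical:
scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "database_data.vmdk"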
VMFS Volume Grow
vCenter Server 4.0 allows dynamic expansion of a Virtual Machine File System (VMFS)
volume extent to add capacity to an existing data store. VMFS Volume Grow is a new method
for expanding a data store without disrupting currently running virtual machines. After a LUN
that backs that data store is expanded through an array management utility, the administrator
can use VMFS Volume Grow to expand the VMFS extent on the expanded LUN. The newly
available space appears as a larger VMFS volume (data store) along with an associated
growth event in vCenter Server systems.
Pluggable Storage Architecture
The Pluggable Storage Architecture (PSA) is an open modular framework that enables
third-party storage multipathing solutions for workload balancing and high availability. You can
use the vSphere command-line interface (CLI) or vCenter Server to manage paths controlled
by the default native multipathing. If array-specific functionality is required, a third-party
plug-in using the vStorage API for Multipathing can be configured by using the vSphere CLI.
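For example, assuming the vSphere 4.x CLI, the installed plug-ins can be listed and the
path selection policy of a device changed along the following lines (the device identifier is
a placeholder):
esxcli nmp satp list
esxcli nmp psp list
esxcli nmp device setpolicy --device <device_id> --psp VMW_PSP_FIXED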
Hot extend for virtual disks
Hot extend is supported for VMFS flat virtual disks in persistent mode and without any VMFS
snapshots. When used with the new VMFS Volume Grow capability, the user has maximum
flexibility in managing growing capacity in vSphere 4.0.
Storage stack performance and scalability
The combination of the new in-guest virtualization-optimized SCSI driver and additional ESX
kernel-level storage stack optimizations dramatically improves storage I/O performance. It
makes even the most I/O-intensive applications, such as databases and messaging
applications, prime candidates for virtualization.
Software iSCSI and NFS support with jumbo frames
vSphere 4.0 adds support for jumbo frames with both Network File System (NFS) and iSCSI
on 1 Gb and 10 Gb network interface cards (NICs).
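For example, on an ESX 4.0 host, jumbo frames can be enabled on a virtual switch and on
a VMkernel interface from the service console. The switch name, port group name, and IP
address in this sketch are hypothetical:
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vmknic -a -i 10.1.1.10 -n 255.255.255.0 -m 9000 "iSCSI VMkernel"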
Fibre Channel over Ethernet
vSphere 4.0 extends the number of I/O consolidation options available to VMware customers
by adding Fibre Channel over Ethernet support on Converged Network Adapters (CNAs). For
a list of supported Fibre Channel over Ethernet CNAs with vSphere 4.0, see the VMware
website at:
http://www.vmware.com
Managing VMFS volumes with array-based LUN snapshots
The mounting of array-based LUN snapshots (and array-based LUN clones) now occurs
easily and in a well-managed way in vSphere 4.0. Such LUNs are now automatically
discovered after a storage rescan. Single snapshots (or single clones) can be selected for
mounting and use by the ESX host. However, to mount a snapshot (clone), the snapshot must
be writable. VMFS must write a new unique identifier, or a new VMFS volume signature, to
the snapshot or clone to safely mount it in the same farm as the original LUN. For disaster
recovery scenarios, in which the replicated volume is not in the same farm, LUNs can be
mounted without writing a new signature.
iSCSI support improvements
Updates to the iSCSI stack offer improvements to both software iSCSI and hardware iSCSI.
For software iSCSI, the initiator runs at the ESX layer; for hardware iSCSI, ESX uses an
optimized iSCSI host bus adapter (HBA). The result is a dramatic improvement of the performance and functionality of
the software and hardware iSCSI and a significant reduction of processor overhead for
software iSCSI.
Increased NFS datastore support
ESX now supports up to 64 NFS shares as data stores in a cluster.
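For illustration, an NFS export can be mounted as a data store from the service console,
and the default limit on the number of NFS volumes can be raised through an advanced
setting. The server name, export path, and data store name in this sketch are hypothetical:
esxcfg-nas -a -o nfs01.example.com -s /export/vms nfs_datastore1
esxcfg-advcfg -s 64 /NFS/MaxVolumes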
2.3.2 vSphere storage architecture
The VMware vSphere storage architecture consists of layers of abstraction that hide and
manage the complexity and differences among physical storage subsystems. Figure 2-1
illustrates this storage architecture.
Figure 2-1 vSphere storage architecture
To the applications and guest operating systems inside each virtual machine, the storage
subsystem appears as a virtual SCSI controller connected to one or more virtual SCSI disks.
These controllers are the only types of SCSI controllers that a virtual machine can see and
access. They include BusLogic Parallel, LSI Logic Parallel, LSI Logic SAS, and VMware
Paravirtual.
The virtual SCSI disks are provisioned from datastore elements in the data center. A data
store is similar to a storage appliance that delivers storage space for virtual machines across
multiple physical hosts.
The datastore abstraction is a model that assigns storage space to virtual machines while
insulating the guest from the complexity of the underlying physical storage technology,
whether it is a Fibre Channel storage area network (SAN), iSCSI SAN, direct-attached
storage, or network-attached storage (NAS).
Each virtual machine is stored as a set of files in a directory in the data store. The disk
storage associated with each virtual guest is a set of files within the directory of the guest.
You can operate on the guest disk storage as an ordinary file. You can copy, move, or back up
the disk. New virtual disks can be added to a virtual machine without powering it down. In that
case, a virtual disk file (a .vmdk file) is created in VMFS to provide new storage for the added
virtual disk. Alternatively, an existing virtual disk file is associated with a virtual machine.
Each data store is a physical VMFS volume on a storage device. NAS data stores are an NFS
volume with VMFS characteristics. Data stores can span multiple physical storage
subsystems. A single VMFS volume can contain one or more LUNs from a local SCSI disk
array on a physical host, a Fibre Channel SAN disk farm, or iSCSI SAN disk farm. New LUNs
added to any of the physical storage subsystems are detected and made available to all
existing or new data stores. Storage capacity on a previously created data store can be
extended without powering down physical hosts or storage subsystems. If any of the LUNs
within a VMFS volume fails or becomes unavailable, only virtual machines that touch that
LUN are affected.
An exception is the LUN that has the first extent of the spanned volume. All other virtual
machines with virtual disks that reside in other LUNs continue to function as normal.
VMFS is a clustered file system that uses shared storage so that multiple physical hosts can
read and write to the same storage simultaneously. VMFS provides on-disk locking to ensure
that the same virtual machine is not powered on by multiple servers at the same time. If a
physical host fails, the on-disk lock for each virtual machine is released so that virtual
machines can be restarted on other physical hosts.
VMFS also features failure consistency and recovery mechanisms, such as distributed
journaling, a failure consistent virtual machine I/O path, and machine state snapshots. These
mechanisms can aid quick identification of the cause and recovery from virtual machine,
physical host, and storage subsystem failures.
VMFS also supports raw device mapping (RDM), which is illustrated in Figure 2-2.
Figure 2-2 Raw device mapping
RDM provides a mechanism for a virtual machine to have direct access to a LUN on the
physical storage subsystem (Fibre Channel or iSCSI only). RDM is useful for supporting two
typical types of applications:
򐂰 SAN snapshot or other layered applications that run in virtual machines
RDM enables scalable backup offloading by using features inherent to the
SAN.
򐂰 Microsoft Clustering Services (MSCS) spanning physical hosts and using virtual-to-virtual
and physical-to-virtual clusters
Cluster data and quorum disks must be configured as RDMs rather than files on a shared
VMFS.
An RDM is a symbolic link from a VMFS volume to a raw LUN. The mapping makes LUNs
appear as files in a VMFS volume. The mapping file, not the raw LUN, is referenced in the
virtual machine configuration.
When a LUN is opened for access, the mapping file is read to obtain the reference to the raw
LUN. Thereafter, reads and writes go directly to the raw LUN rather than going through the
mapping file.
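As a sketch, an RDM mapping file, such as the myRawDisk.vmdk example used later in this
chapter, can be created from the service console with vmkfstools. The device identifier is a
placeholder; using -z instead of -r creates the mapping in physical compatibility mode:
vmkfstools -r /vmfs/devices/disks/<device_id> /vmfs/volumes/myVolume/myVMDirectory/myRawDisk.vmdk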
The person with design authority must also make the right decision about the type of storage
device and how the storage devices will be selected, configured, and implemented. A wrong
decision can damage a virtual infrastructure project. For example, not having the proper
knowledge and understanding of the importance of storage can result in performance or
availability issues and financial loss.
2.4 SAN planning and preferred practices
This section provides information about SAN planning and preferred practices.
2.4.1 Planning
When a SAN array is configured for use by a virtual infrastructure, some aspects of
configuration and setup are different than with other types of storage. Planning requires the
following key considerations:
򐂰 Determine the SAN hardware and technology to select.
򐂰 Determine how to provision the LUNs.
򐂰 Determine the storage multipathing and failover options to use.
򐂰 Determine whether VMware ESX boots from SAN.
򐂰 Determine the number of I/O operations expected on each virtual machine.
򐂰 Consider the vStorage utilization improvements.
The following SAN (FC) configurations and topologies are possible:
򐂰 Storage options:
– Single fabric
– Dual fabric
򐂰 Number of paths to each volume:
– One
– Two
– Four
򐂰 Bandwidth
– 2 GFC
– 4 GFC
– Fibre Channel over Ethernet
򐂰 Array types:
– Active/passive
– Active/active
– FC-AL
– Direct-connect storage arrays
򐂰 Number of virtual machines per ESX host
This number determines the type of physical server that is required.
򐂰 How large are the operating system and data disks of each virtual machine?
This size determines the storage capacity. For each virtual machine, you can roughly
estimate storage requirements by using the following calculation (a worked example
follows this list):
(size of virtual machine) + (size of suspend/resume space for virtual machine)
+ (size of RAM for virtual machine) + (100 MB for log files per virtual
machine) = the minimum space needed for each virtual machine
Size of suspend/resume snapshots: The size of suspend/resume snapshots of
running virtual machines is equal to the size of the virtual machine.
򐂰 What sorts of applications are planned for the virtual machines?
Having this information helps determine the Fibre Channel bandwidth and I/O
requirements.
򐂰 Virtual Disk Type:
– Thin
– Fixed
򐂰 What is the expected growth rate (business, data, and bandwidth)?
This rate determines how to build the virtual infrastructure to allow room for growth while
keeping disruption to a minimum.
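For example, for a hypothetical virtual machine with a 20 GB virtual disk and 4 GB of RAM,
the calculation in the previous list gives 20 GB + 20 GB (suspend/resume space) + 4 GB
(RAM) + 100 MB (log files) = 44.1 GB as the minimum space needed.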
2.4.2 Preferred practices from VMware
This section outlines the preferred practices that are provided by VMware.
Creating VMFS volumes
When you set up VMFS-based data stores, select a larger volume (2 TB maximum) if you
plan to create multiple virtual machines on it. You can then add virtual machines dynamically
without having to request additional disk space. However, if more space is needed, you can
increase the VMFS datastore size by adding extents at any time, up to 64 TB. Each VMFS
extent has a maximum size of 2 TB.
You must plan how to set up storage for your ESX host systems before you format storage
devices with VMFS. Have one VMFS partition per data store in most configurations. However,
you can decide to use one large VMFS data store or one that expands across multiple LUN
extents. With the VMware ESX Server, you can have up to 256 LUNs per system, with the
minimum volume size of 1.2 GB.
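For illustration, a VMFS-3 data store can be created on a prepared partition from the service
console. The volume label and device identifier in this sketch are hypothetical:
vmkfstools -C vmfs3 -b 1m -S ProdDatastore1 /vmfs/devices/disks/<device_id>:1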
You might want fewer, larger VMFS volumes for the following reasons:
򐂰 You have more flexibility to create virtual machines without going back to the storage
administrator for more space.
򐂰 It is simpler to resize virtual disks, create storage array snapshots, and so on.
򐂰 You have fewer VMFS-based data stores to manage.
You might want more, smaller storage volumes, each with a separate VMFS data store, for
the following reasons:
򐂰 There is less contention on each VMFS because of locking and SCSI reservation issues.
򐂰 Less storage space is wasted.
򐂰 Applications might require different RAID characteristics.
򐂰 You have more flexibility, because the multipathing policy and disk shares are set per volume.
򐂰 The use of Microsoft Cluster Service requires that each cluster disk resource be in its own
LUN. (The RDM type is required for MSCS in the VMware ESX environment.)
For VMware Infrastructure 3, you can have 16 VMFS extents at most per volume. However,
you can decide to use one large volume or multiple small volumes depending on I/O
characteristics and your requirements.
Making volume decisions
When the storage characterization for a virtual machine is not available, often no simple
answer is available when you need to decide on the volume size and number of LUNs to use.
You can use a predictive or an adaptive approach to help you decide.
Predictive scheme
The predictive scheme entails the following tasks:
򐂰 Create several volumes with different storage characteristics.
򐂰 Build a VMFS data store in each volume. (Label each data store according to its
characteristics.)
򐂰 Locate each application in the appropriate RAID for its requirements.
򐂰 Use disk shares to distinguish high-priority from low-priority virtual machines. Disk shares
are relevant only within a given ESX host. The shares assigned to virtual machines on one
ESX host have no effect on virtual machines on other ESX hosts.
Adaptive scheme
The adaptive scheme entails the following tasks:
򐂰 Create a large volume (RAID 1+0 or RAID 5), with write caching enabled.
򐂰 Build a VMFS data store on that LUN.
򐂰 Place four or five virtual disks on the VMFS data store.
򐂰 Run the applications and see whether disk performance is acceptable.
If performance is acceptable, you can place additional virtual disks on the VMFS. If it is not
acceptable, you create a new, larger volume, possibly with a different RAID level, and
repeat the process. You can use cold migration so that you do not lose virtual machines
when recreating the volume.
RAID level: Each volume must have the correct RAID level and storage characteristics
for the applications in virtual machines that use the volume. If multiple virtual machines
access the same data store, use disk shares to prioritize the virtual machines.
VMFS or RDM
By default, a virtual disk is created in a VMFS volume during virtual machine creation. When
guest operating systems issue SCSI commands to their virtual disks, the virtualization layer
translates these commands to VMFS file operations. An alternative to VMFS is to use RDMs.
As described earlier, RDMs are implemented by using special files stored in a VMFS volume.
These files act as a proxy for a raw device. Using an RDM retains many of the advantages
of creating a virtual disk in the VMFS while adding benefits similar to those of direct access
to a physical device.
Advantages of using RDM in VMware ESX
RDM offers the following advantages:
򐂰 User-friendly persistent names
RDM provides a user-friendly name for a mapped device. When you use a mapping, you
do not need to refer to the device by its device name. Instead, you refer to it by the name of
the mapping file, for example, /vmfs/volumes/myVolume/myVMDirectory/myRawDisk.vmdk.
򐂰 Dynamic name resolution
RDM stores unique identification information for each mapped device. The VMFS file
system resolves each mapping to its current SCSI device, regardless of changes in the
physical configuration of the server due to adapter hardware changes, path changes,
device relocation, and so on.
򐂰 Distributed file locking
RDM makes it possible to use VMFS distributed locking for raw SCSI devices. Distributed
locking on a raw device mapping makes it safe to use a shared raw volume without losing
data when two virtual machines on different servers try to access the same LUN.
򐂰 File permissions
RDM makes it possible to set up file permissions. The permissions of the mapping file are
enforced at file-open time to protect the mapped volume.
򐂰 File system operations
RDM makes it possible to use file system utilities to work with a mapped volume, using the
mapping file as a proxy. Most operations that are valid for an ordinary file can be applied to
the mapping file and are redirected to operate on the mapped device.
򐂰 Snapshots
RDM makes it possible to use virtual machine storage array snapshots on a mapped volume.
Availability with RDM: Snapshots are not available when raw device mapping is used
in physical compatibility mode.
򐂰 VMotion
With RDM, you can migrate a virtual machine by using VMotion. When you use RDM, the
mapping file acts as a proxy so that VirtualCenter can migrate the virtual machine by using
the same mechanism that exists for migrating virtual disk files.
򐂰 SAN management agents
RDM makes it possible to run SAN management agents inside a virtual machine. Similarly,
any software that needs to access a device using hardware-specific SCSI commands can
be run inside a virtual machine. This software is called SCSI target-based software.
Physical compatibility mode: When you use SAN management agents, you must
select physical compatibility mode for the mapping file.
Limitations of using RDM in VMware ESX
When planning to use RDM, consider the following limitations:
򐂰 RDM is not available for devices that do not support the export of serial numbers.
RDM (in the current implementation) uses a SCSI serial number to identify the mapped
device. Therefore, devices that do not export serial numbers, such as block devices that
connect directly to the cciss device driver or tape devices, cannot be used in RDMs.
򐂰 RDM is available for VMFS-3 volumes only.
RDM requires the VMFS-3 format.
򐂰 RDM does not allow use of VMware snapshots in physical compatibility mode.
The term snapshots applies to the ESX host feature and not to the snapshot feature in
storage array data replication technologies. If you are using RDM in physical compatibility
mode, you cannot use a snapshot with the disk. In physical compatibility mode, the virtual
machine can manage its own snapshot or mirroring operations.
򐂰 Mapping to a partition is not supported.
RDM requires the mapped device to be a whole volume presented from a storage array.
򐂰 Using RDM to deploy LUNs can require many more LUNs than are used in the typical
shared VMFS configuration.
The maximum number of LUNs supported by VMware ESX Server 3.x is 256.
Path management
VMware ESX supports multipathing to maintain a constant connection between the server
machine and the storage device in case of the failure of an HBA, switch, storage processor, or
FC cable. Multipathing support does not require specific failover drivers or software. To
support path switching, the server typically has two or more HBAs available from which the
storage array can be reached by using one or more switches. Alternatively, the setup can
include one HBA and two storage processors so that the HBA can use a different path to
reach the disk array.
VMware ESX supports both HBA and storage processor failover with its multipathing
capability. You can choose a multipathing policy for your system, either Fixed or Most
Recently Used. If the policy is Fixed, you can specify a preferred path. Each volume that is
visible to the ESX host can have its own path policy.
I/O delay: Virtual machine I/O might be delayed for 60 seconds at most while failover takes
place, particularly on an active/passive array. This delay is necessary to allow the SAN
fabric to stabilize its configuration after topology changes or other fabric events.
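The paths that are visible to the host, and the policy in effect for each device, can be listed
from the service console; assuming the vSphere 4.x CLI, a preferred path can also be set for
a device that uses the Fixed policy. The identifiers in this sketch are placeholders:
esxcfg-mpath --list
esxcli nmp fixed setpreferred --device <device_id> --path <path>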
Pluggable Storage Architecture
To manage storage multipathing, vSphere uses a special VMkernel layer, the Pluggable
Storage Architecture. The PSA is an open, modular framework that coordinates the
simultaneous operation of multiple multipathing plug-ins (MPPs).
The VMkernel multipathing plug-in that vSphere provides by default is the VMware Native
Multipathing Plug-in (NMP). The NMP is an extensible module that manages subplug-ins.
Two types of NMP subplug-ins are available: Storage Array Type Plug-Ins (SATPs) and Path
Selection Plug-ins (PSPs). SATPs and PSPs can be built in and provided by VMware, or they
can be provided by a third party.
VMware ESX booting from SAN
You might want to use boot from SAN in the following situations:
򐂰 When you do not want to handle maintenance of local storage
򐂰 When you need easy cloning of service consoles
򐂰 In diskless hardware configurations, such as on some blade systems
Do not use boot from SAN in the following situations:
򐂰 When you are using Microsoft Cluster Service with VMware ESX Server 3.5 or earlier
releases (VMware Infrastructure 3.5 Update 1 lifted this restriction)
򐂰 When there is a risk of I/O contention between the service console and VMkernel
򐂰 When the SAN vendor does not support boot from SAN
Boot from SAN: With VMware ESX Server 2.5, you could not use boot from SAN
together with RDM. With VMware ESX Server 3, this restriction has been removed.
SAN design
When designing a SAN for multiple applications and servers, you must balance the
performance, reliability, capacity, and cost attributes of the SAN. Each application demands
resources and access to storage provided by the SAN. The SAN switches and storage arrays
must provide timely and reliable access for all competing applications.
Determining application needs
The SAN must support fast response times consistently for each application even though the
requirements made by applications vary over peak periods for both I/O per second and
bandwidth (in megabytes per second, or MBps). A properly designed SAN must provide sufficient
resources to process all I/O requests from all applications. Designing an optimal SAN
environment is, therefore, neither simple nor quick.
The first step in designing an optimal SAN is to define the storage requirements for each
application in terms of the following characteristics:
򐂰 I/O performance (I/O per second)
򐂰 Bandwidth (megabytes per second)
򐂰 Capacity (number of volumes and capacity of each volume)
򐂰 Redundancy level (RAID level)
򐂰 Response times (average time per I/O)
򐂰 Overall processing priority
Identifying peak period activity
Base the SAN design on peak-period activity, and consider the nature of the I/O within each
peak period. You might find that additional storage array resource capacity is required to
accommodate instantaneous peaks. For example, a peak period might occur at noon,
characterized by several peaking I/O sessions that require two or even four times the
average for the entire peak period. Without additional resources, I/O demands
that exceed the capacity of a storage array result in delayed response times.
Configuring the storage array
Storage array design involves mapping the defined storage requirements to the resources of
the storage array by using these guidelines:
򐂰 Each RAID group provides a specific level of I/O performance, capacity, and redundancy.
Volumes are assigned to RAID groups based on these requirements.
򐂰 If a particular RAID group cannot provide the required I/O performance, capacity, and
response times, you must define an additional RAID group for the next set of volumes. You
must provide sufficient RAID-group resources for each set of volumes.
򐂰 The storage arrays must distribute the RAID groups across all internal channels and
access paths. This distribution results in load balancing of all I/O requests to meet
performance requirements of I/O operations per second and response time.
Caching
Although ESX systems benefit from write cache, the cache can be saturated with sufficiently
intense I/O. Saturation reduces the effectiveness of the cache. Because the cache is often
allocated from a global pool, allocate it only if it will be effective:
򐂰 A read-ahead cache might be effective for sequential I/O, such as during certain types of
backup activities, and for template repositories.
򐂰 A read cache is often ineffective when applied to a VMFS-based volume because multiple
virtual machines are accessed concurrently. Because data access is random, the read
cache hit rate is often too low to justify allocating a read cache.
򐂰 A read cache is often unnecessary when the application and the operating system cache
data within the memory of the virtual machine. In this case, the read cache caches data
objects that the application or operating system already cached.
Considering high availability
Production systems must not have a single point of failure. Make sure that redundancy is built
into the design at all levels. Include additional switches, HBAs, and storage processors,
creating, in effect, a redundant access path:
򐂰 Redundant SAN components
Redundant SAN hardware components, including HBAs, SAN switches, and storage array
access ports, are required. In some cases, multiple storage arrays are part of a
fault-tolerant SAN design.
򐂰 Redundant I/O paths
I/O paths from the server to the storage array must be redundant and dynamically
switchable if a port, device, cable, or path failure occurs.
򐂰 I/O configuration
The key to providing fault tolerance is within the configuration of the I/O system of each
server. With multiple HBAs, the I/O system can issue I/O across all of the HBAs to the
assigned volumes.
Failures can have the following results:
– If an HBA, cable, or SAN switch port fails, the path is no longer available, and an
alternate path is required.
– If a failure occurs in the primary path between the SAN switch and the storage array, an
alternate path at that level is required.
– If a SAN switch fails, the entire path from server to storage array is disabled. Therefore,
a second fabric with a complete alternate path is required.
򐂰 Mirroring
With protection against volume failure, applications can survive storage access faults.
Mirroring can accomplish that protection. Mirroring designates a second non-addressable
volume that captures all write operations to the primary volume. Mirroring provides fault
tolerance at the volume level. Volume mirroring can be implemented at the server, SAN
switch, or storage-array level.
򐂰 Duplication of SAN environment
For extremely high availability requirements, SAN environments can be duplicated to
provide disaster recovery on a per-site basis. The SAN environment must be duplicated at
different physical locations. The two resultant SAN environments can share operational
workloads, or the second SAN environment can be a failover-only site.
򐂰 Planning for disaster recovery
If a site fails for any reason, you might need to immediately recover the failed applications
and data from a remote site. The SAN must provide access to the data from an alternate
server to start the data recovery process. The SAN can handle synchronization of the site
data.
Site Recovery Manager makes disaster recovery easier because you do not have to
recreate all the virtual machines on the remote site when a failure occurs. Disk-based
replication is integrated with Site Recovery Manager to provide a seamless failover from a
replicated VMware Infrastructure environment.
Chapter 3. Planning and configuring the Tivoli Storage Productivity Center and VMware environment
This chapter provides guidelines for you to plan for and configure your Tivoli Storage
Productivity Center and VMware environment. It provides information about the supported
platforms and configurations for Tivoli Storage Productivity Center in a VMware environment.
This chapter includes the following sections:
򐂰 VMware levels and supported environments
򐂰 Configuring Tivoli Storage Productivity Center communication with VMware
3.1 VMware levels and supported environments
Tivoli Storage Productivity Center V4.2 can monitor the following levels of VMware:
򐂰 VMware ESX Server 3.0.x, 3.5, and 4.0
򐂰 VMware ESXi Server 3.5 and 4.0
򐂰 VMware VirtualCenter V2.0.x, V2.5, and V4.0
3.1.1 Supported SMI-S agents on VMware virtual machines
The following Storage Management Initiative Specification (SMI-S) agents are supported on a
VMware virtual machine. These agents can be installed on a VMware virtual machine and
can provide relevant data to the Tivoli Storage Productivity Center server.
Processor and RAM allocations: Before deploying the following SMI-S agents, verify that
you have adequate processor and RAM allocations for your virtual machines, as
recommended by the individual vendors.
򐂰 Brocade Fusion SMIS Agent for EOS (older McDATA devices)
򐂰 Brocade SMI Agent
򐂰 Hitachi HiCommand Device Manager
򐂰 IBM CIM Agent for DS Open API
򐂰 IBM SMI-S Agent for Tape
򐂰 LSI (Engenio) SMI Provider
The EMC SMI-S Provider is not supported on VMware because of its in-band connectivity
requirements.
For IBM System Storage SAN Volume Controller, use the SMI-S agent and GUI that run on
the IBM System Storage Productivity Center (SSPC).
Supported SMI-S agent levels: For supported SMI-S agent levels, see the Tivoli Storage
Productivity Center Product Support List on the IBM Documentation page at:
http://www.ibm.com/systems/support/supportsite.wss/selectproduct?brandind=5000033&familyind=5329731&oldbrand=5000033&oldfamily=5329737&oldtype=0&taskind=7&psid=sr
3.1.2 Agent Manager on VMware Virtual Machine
Agent Manager is supported on a virtual machine on a VMware ESX Server. The Agent
Manager can be installed on the same machine as the Tivoli Storage Productivity Center
server or on a separate machine. Agent Manager supports the following platforms:
򐂰 Red Hat Enterprise Linux AS 4
򐂰 Red Hat Enterprise Linux 5
򐂰 SUSE Linux Enterprise 9
򐂰 SUSE Linux Enterprise 10
򐂰 SUSE Linux Enterprise 11
򐂰 Windows Server 2003
򐂰 Windows Server 2003 R2
򐂰 Windows Server 2008
򐂰 Windows Server 2008 R2
Additional platforms: Agent Manager supports additional platforms that are not
supported by the Tivoli Storage Productivity Center server.
3.1.3 Tivoli Storage Productivity Center server on VMware virtual machine
Installing the Tivoli Storage Productivity Center server on a virtual machine on VMware ESX
Servers is supported. The Tivoli Storage Productivity Center server can be installed on either
a physical server or on a VMware virtual machine. Before deploying Tivoli Storage
Productivity Center in the environment, ensure that you have the required resources for the
environment that you will monitor.
3.1.4 Tivoli Storage Productivity Center licensing with VMware
To monitor a VMware environment with Tivoli Storage Productivity Center, you must have at
least a Tivoli Storage Productivity Center for Data license. To gather all the asset information
and performance data, obtain a Tivoli Storage Productivity Center Standard Edition license.
Table 3-1 shows the functions based on the license that is installed.
License used for the examples in this paper: A Tivoli Storage Productivity Center
Standard Edition license was used for all the examples and application windows shown in
this paper.
Table 3-1 Tivoli Storage Productivity Center licensing with VMware

License                                               Data agent   VMware VI data source   LUN Correlation information
Tivoli Storage Productivity Center Basic Edition      No           No                      No
Tivoli Storage Productivity Center for Data           Yes          Yes                     Yes
Tivoli Storage Productivity Center Standard Edition   Yes          Yes                     Yes
Tivoli Storage Productivity Center server sizings
The requirements for the Tivoli Storage Productivity Center server depend on the
environment that you will monitor. Table 3-2 shows sizing suggestions. Design your
environment within these limits on any one Tivoli Storage Productivity Center server.
Table 3-2 Tivoli Storage Productivity Center server sizing suggestions

Configuration                                   Maximum volumes   Maximum subsystems   Maximum switches   Maximum switch ports   Maximum Data agents
Intel Single Quad Core Processor, 4 GB of RAM   7500              50                   50                 500                    750
Intel Dual Quad Core Processor, 8 GB of RAM     15000             100                  100                1000                   1500
Requirements
Consider the following requirements:
򐂰 On the VMware ESX Server, do not have more virtual processors than physical cores in
the system.
򐂰 Plan your virtual machine so that no processor scheduling is required by the VM kernel.
򐂰 Ensure that you have enough RAM in the VMware ESX Server to service all the virtual
machines with a maximum RAM usage.
򐂰 Plan the environment so that the VMware ESX Server does not need to swap RAM for the
virtual machines.
򐂰 Use storage area network (SAN)-attached RDM with SCSI pass-through for the IBM
DB2® table space and log storage.
Tivoli Storage Productivity Center server requirements
Consider the following requirements for the Tivoli Storage Productivity Center server:
򐂰 Dual Intel Class 3.2 GHz processors or their equivalent
򐂰 A minimum of 4 GB of RAM for the server memory
򐂰 80 GB of free disk space for the database
3.1.5 Tivoli Storage Productivity Center Data agents on VMware
Tivoli Storage Productivity Center Data agents are required to gather virtual machine disk and
file system information. To generate complete capacity reports, a Data agent is required on
each virtual machine. These agents are supported on the following platforms:
򐂰 Red Hat Enterprise Linux AS 4.0
򐂰 Red Hat Enterprise Linux AS 5.0
򐂰 SUSE Linux Enterprise 9, 10, and 11
򐂰 Windows 2003 and Windows 2008
3.1.6 Tivoli Storage Productivity Center Fabric agents on VMware
Tivoli Storage Productivity Center Fabric agents are not supported on VMware virtual
machines. Use the switch SMI-S agents to gather data on the fabric.
3.1.7 Storage Resource agents
Tivoli Storage Productivity Center supports the VMware Virtual Infrastructure, which consists
of the VMware ESX Server and VMware VirtualCenter. The VMware ESX Server is a true
Hypervisor product, which can host multiple virtual machines that run independently of each
other while sharing hardware resources. The VirtualCenter is the management application
that is the central entry point for the management and monitoring of multiple instances of the
VMware ESX Server for a data center.
Install a Storage Resource agent on each virtual machine that you want to monitor. The
Storage Resource agent supports the VMware ESX environment in tolerance mode only. No
fabric functions are available with the Storage Resource agent for any guest operating
systems.
The Storage Resource agents now perform the functions of the Data agents and Fabric
agents. (Out-of-band Fabric agents are still supported, and their function has not changed.)
Before migrating an existing Data agent or Fabric agent to a Storage Resource agent or
deploying a new Storage Resource agent, verify that the product functions you want to use on
the monitored devices are available for those agents.
The following functions are not available for storage devices that are monitored by Storage
Resource agents:
򐂰 Changing zone configuration and reporting of host bus adapter (HBA), fabric topology, or
zoning information for fabrics connected to hosts running Linux on pSeries® or zSeries®
hardware
These conditions also apply to Storage Resource agents on all guest operating systems
for VMware configurations.
򐂰 IBM AIX® Virtual I/O Server monitoring
You must use Data agents and Fabric agents to monitor Virtual I/O Servers.
򐂰 Novell NetWare monitoring
You must use Data agents to monitor NetWare Servers.
3.1.8 Tivoli Storage Productivity Center VMware LUN Correlation support
VMware LUN Correlation is supported on the following subsystems. To generate reports and
views with LUN Correlation, you must have both the VMware server and the back-end storage
subsystem monitored by Tivoli Storage Productivity Center.
򐂰 3PAR
򐂰 EMC CLARiiON
򐂰 EMC Symmetrix
򐂰 Hitachi Data Systems 9xxxx
򐂰 Hewlett-Packard EVA
򐂰 IBM Enterprise Storage Server
򐂰 IBM System Storage DS4000 storage server
򐂰 IBM System Storage DS6000 storage server
򐂰 IBM System Storage DS8000 storage server
򐂰 IBM System Storage SAN Volume Controller
򐂰 IBM XIV Storage System
3.1.9 Tivoli Storage Productivity Center limitations with Hypervisors
When using Tivoli Storage Productivity Center and VMware with Hypervisors, keep in mind
the following considerations:
򐂰 The following products are not supported as virtual infrastructure data sources:
– VMware Server
– VMware GSX
– VMware Workstation
– VMware Player
– Citrix Xen Server
– Microsoft Hyper-V
򐂰 Tivoli Storage Productivity Center Data agents can be installed only on virtual machines.
򐂰 VMware ESX Server clustering (High Availability and Distributed Resource Scheduler)
and VMotion are not supported.
򐂰 A probe of the Hypervisor and Data agent is required for complete reporting.
򐂰 Alerts require a probe. (No events or traps are sent from the virtual infrastructure.)
򐂰 Tivoli Storage Productivity Center provides reporting only of the VMware environment. No
active management of VMware from Tivoli Storage Productivity Center is available.
򐂰 No scripts or file system extensions are available on the VMware ESX Server.
򐂰 Fabric agents do not provide any information about virtual machines.
򐂰 Data Path Explorer is not supported for multiple instances of the VMware ESX Server.
3.2 Configuring Tivoli Storage Productivity Center
communication with VMware
To configure the Tivoli Storage Productivity Center instance to communicate with a VMware
ESX Server, follow these steps:
1. Download the VMware Secure Sockets Layer (SSL) certificate. For communication, the
VMware ESX Server and the VMware VirtualCenter Server use self-generated SSL
certificates, in rui.crt files, which are located in the following directories:
– For the VMware ESX Server:
i. Open a browser and point to the following address:
https://ESX_server_IP_address/host
ii. In the list of configuration files, right-click ssl_cert and save the file with the name
<ESX_machine_name>.crt.
Saving the file: Depending on your browser, save the file by selecting Save Target
As (for Internet Explorer) or Save Link As (for Firefox). Use this file to import the
certificate.
– For the VMware VirtualCenter Server, the certificate is in C:\Documents and
Settings\All Users\Application Data\VMware\VMware VirtualCenter\SSL.
2. Copy the certificate files from the VMware components to a directory on your local client
machine.
3. Install these VMware certificates in a certificate store, which you can do on your local
workstation. Afterward, copy the truststore to your Tivoli Storage Productivity Center
server. Use keytool on your local workstation to generate a certificate store or truststore.
The keytool command is part of the Java Runtime Environment (JRE). If you work on
Windows, locate keytool by running a search. (Select Start → Search → All files and
folders, and search for keytool.*.)
Figure 3-1 shows an example of the search results. Use a current version of keytool, such
as the keytool.exe file that is highlighted in Figure 3-1.
Figure 3-1 Keytool search results
4. Create a certificate store or truststore. Use the keytool command to create the truststore.
Figure 3-2 shows the command syntax of the keytool command.
Figure 3-2 The keytool command syntax
a. Use the following syntax to create the truststore for the Tivoli Storage Productivity
Center server:
keytool -import -file certificate-filename -alias server-name -keystore
vmware.jks
In the environment for this paper, the following command is used:
keytool -import -file rui.crt -alias pegasus -keystore vmware.jks
In this example, the file from the VMware ESX Server is named rui.crt. The VMware
ESX Server in our environment is named PEGASUS. The truststore is called vmware.jks.
Figure 3-3 shows the results of running this command.
b. Enter a password for the keystore.
c. When you see the question “Trust this certificate?,” enter y for yes.
Figure 3-3 Keytool command results
5. If you have multiple instances of VMware ESX Server or Virtual Centers, add all of the
certificate files (rui.crt) to a single keystore file. Copy the new rui.crt file, and then run
the same keytool command with a different alias. This action appends the second
certificate to the existing vmware.jks file.
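For example, assuming a second VMware ESX Server whose certificate was saved as
rui_esx2.crt (the file name and alias here are hypothetical):
keytool -import -file rui_esx2.crt -alias esx2 -keystore vmware.jks
You can then verify the contents of the truststore:
keytool -list -keystore vmware.jks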
6. Copy the newly created certificate store or truststore to the
<TPC_install_directory>/device/conf Device server configuration directory of your
Tivoli Storage Productivity Center server.
The truststore is automatically defined at service startup time as the
javax.net.ssl.trustStore System property in the Device server Java virtual machine (JVM).
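In other words, after the truststore is copied, the Device server JVM effectively starts with a
system property of the following form (shown with the installation directory placeholder from
the previous step):
-Djavax.net.ssl.trustStore=<TPC_install_directory>/device/conf/vmware.jks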
7. Add the VMware Virtual Infrastructure data source.
The data source can be a Hypervisor (VMware ESX Server or VirtualCenter). This step is
the first step in getting information from VMware Virtual Infrastructure. Adding a VMware
data source is similar to adding a Common Information Model (CIM) agent or Data agent.
a. In the IBM Tivoli Storage Productivity Center Main Navigation Tree, select
Administrative Services → Data Sources → VMware VI Data Source, and then
click Add VMware VI Data Source.
b. In the Add VMware VI Data Source window (Figure 3-4), complete these steps:
i. Enter the required information.
ii. Select the Test VMware VI Data Source connectivity before adding check box.
iii. Click Save to add the VI Data Source to Tivoli Storage Productivity Center.
Figure 3-4 Add VMware VI Data Source window
Figure 3-5 shows the result of adding the PEGASUS server as a VMware VI Data Source.
You can add additional VI Data Sources, test the connection to a VMware VI Data Source,
or remove a VI Data Source using this window.
Figure 3-5 VMware connection status
8. After successfully connecting to the data source, run a Discovery job for the VMware
environment.
The discovery is required to retrieve every VMware ESX Server instance that is part of the
virtual infrastructure that has been added. The discovery mechanism is similar to a
discovery for storage subsystems. Discovery jobs can be scheduled and are performed on
the complete list of known VMware data sources.
a. Select Administrative Services → Discovery → VMware VI Data Source
(Figure 3-6).
b. Configure your VMware VI Data Source discovery. You can run the Discovery job
immediately or schedule it to run once or on a regular basis. Click the Run Now
button.
c. Select File → Save to save the job definition and execute the discovery.
Figure 3-6 VMware: Data Source discovery
9. Run a probe job for the VMware ESX Server, Hypervisor, and virtual machines.
In this step, you obtain detailed information from the Hypervisors and virtual machines for
Tivoli Storage Productivity Center.
a. From the main Navigation Tree, select IBM Tivoli Storage Productivity Center →
Monitoring → Probes.
b. Create a probe for your VMware Hypervisor and Computers.
For a total view of your VMware VI environment, you need the VMware VI Data Source
and the Data agents running on the virtual machines.
10. Configure alerts for VMware. You can create alerts for the following alert conditions:
– Hypervisor discovered
– Hypervisor missing
– Virtual Machine added
– Virtual Machine deleted
a. From the main Navigation Tree (Figure 3-7), select Data Manager → Alerting,
right-click Hypervisor Alerts, and select Create Alert.
b. Specify the alert details and click Save to save your alert.
Figure 3-7 Creating a Hypervisor alert
11. Install the Data agent on each of the virtual machines that you want to monitor. For full
functionality, you need two data sources. The installation of a Data agent inside a VMware
virtual machine is performed in the same manner as the installation of a Data agent on a
physical server. Make sure that you have a platform that is supported by VMware and
Tivoli Storage Productivity Center.
Chapter 4. Monitoring a VMware environment with Tivoli Storage Productivity Center
After you collect data, you can create reports about your environment by using IBM Tivoli
Storage Productivity Center. This chapter explains the most useful reports that you can create
to help you monitor your VMware environment.
This chapter includes the following sections:
• VMware ESX Server reporting
• Disk Manager reporting
• VMware server alerting
• VMware virtual machine reporting
• Removed Resource Retention
4.1 VMware ESX Server reporting
You can use Tivoli Storage Productivity Center for Data reports to help manage the VMware ESX environment. This section highlights the key reports.
4.1.1 VMware ESX Server asset reports
With the Tivoli Storage Productivity Center for Data Manager asset reports, you can view
information about the VMware servers and agents within your environment. These asset
reports provide a hierarchical view of your environment so that you can drill down and view
your assets in greater detail.
For asset information about VMware ESX Server, you can view the reports by expanding Data Manager → Reporting → Asset → By Hypervisor.
As shown in Figure 4-1, you can view asset information about the physical VMware server.
This information includes data about the physical machine resources.
Figure 4-1 Hypervisor Asset report
By navigating through the asset reports, you can see detailed information about the virtual machines, controllers, disks, and file systems of the Hypervisor. To access this information, expand By Hypervisor → Hypervisor name and select the type of information to view, as shown in Figure 4-2:
• Virtual Machines
• Controllers
• Disks
• File Systems or Logical Volumes
Figure 4-2 Hypervisor Asset report options
To view information about the virtual machines that are configured on the Hypervisor, expand
By Hypervisor → Hypervisor name → Virtual Machines. Then you see a list of all the
virtual machines that are configured on the Hypervisor, as shown in Figure 4-3. To view more
detailed information about these machines, you can either click the virtual machine name or
expand the name for more options.
Figure 4-3 Virtual machine information for Hypervisor
Expand By Hypervisor → Hypervisor name → Controllers to obtain more information
about the SCSI and Fibre Channel controllers for the Hypervisor. In the results, you see a list
of all the controllers for the Hypervisor, as shown in Figure 4-4.
Figure 4-4 Hypervisor controller list
By expanding a particular controller, you can see the properties of that controller and the
disks (Controllers → Disks) that are assigned to the controller as shown in Figure 4-5.
Figure 4-5 Hypervisor controller information
To view detailed information about each disk assigned to the Hypervisor, expand By Hypervisor → Hypervisor name → Disks and select a disk as shown in Figure 4-6.
Figure 4-6 Hypervisor disk details
Within the Disks report, you can view information about the disk paths, probe information, and the logical unit number (LUN) definition. To view which subsystem LUN a particular Fibre Channel-attached disk correlates to, select the LUN Definition tab in the right pane (Figure 4-7).
Tip: To see the LUN Definition tab, the back-end storage device must be discovered and
probed by Tivoli Storage Productivity Center.
Figure 4-7 LUN Definition information for a Hypervisor disk
You can also view data on all of the virtual machines that are configured on a specific VMware
ESX Server. If an agent is not deployed on the virtual machines, you see high-level
information about the image as shown in Figure 4-8. Such information includes the operating
system, Hypervisor, VM configuration file, processor count, and the RAM allocated to the
virtual machine.
Figure 4-8 Hypervisor Virtual Machines report
For detailed information about the virtual machine resources, you must deploy a Data agent
or Storage Resource Agent (SRA) on the virtual machine. After you install an agent and it
collects data, you can view detailed information about the virtual machine disks, file systems,
and controllers.
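The high-level attributes shown in Figure 4-8 come from the Hypervisor, not from an in-guest agent. The following sketch illustrates how the same attributes are exposed by the vSphere API, using the community pyVmomi library (not part of Tivoli Storage Productivity Center) with placeholder connection details.

# Illustrative only: the per-VM attributes that are available without an
# in-guest agent (compare with the Hypervisor Virtual Machines report).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="tpcadmin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    cfg = vm.summary.config
    print("%s: os=%s cpus=%d ram=%dMB config=%s" %
          (vm.name, cfg.guestFullName, cfg.numCpu, cfg.memorySizeMB,
           cfg.vmPathName))
view.Destroy()
Disconnect(si)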
4.2 Disk Manager reporting
To see information about which storage subsystems have LUNs allocated to VMware ESX
Server, you can use the Tivoli Storage Productivity Center Disk Manager reports.
From the Navigation Tree in the left pane, you expand Disk Manager → Reporting → Storage Subsystems → Computer Views → By Computer (Figure 4-9) to see the By Computer report. This report includes the LUNs that are assigned to a VMware ESX Server from any supported subsystem that is monitored by Tivoli Storage Productivity Center.
Figure 4-9 Disk Manager By Computer report for VMware ESX Server
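For comparison, the following sketch lists the SCSI LUNs as the ESX hosts themselves see them, using the community pyVmomi library with placeholder connection details. Correlating these device names back to specific subsystem volumes is the part that Tivoli Storage Productivity Center adds on top.

# Illustrative only: list the SCSI LUNs that each ESX host can see.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="tpcadmin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    storage = host.configManager.storageSystem.storageDeviceInfo
    for lun in storage.scsiLun:
        size = ""
        if isinstance(lun, vim.host.ScsiDisk):
            gib = lun.capacity.block * lun.capacity.blockSize / (1024.0 ** 3)
            size = " %.1f GiB" % gib
        print("%s: %s (%s)%s" % (host.name, lun.canonicalName,
                                 lun.lunType, size))
view.Destroy()
Disconnect(si)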
You expand Disk Manager → Reporting → Storage Subsystems → Computer Views → By Filesystem/Logical Volume to see the By Filesystem/Logical Volume report. This report includes any Virtual Machine File System (VMFS) volumes that are assigned to the VMware ESX Server from supported subsystems, as shown in Figure 4-10. The TOTAL row in this report also counts, in its capacity total, file systems on raw assigned volumes from virtual machines.
Figure 4-10 Disk Manager By Filesystem report
As shown in Figure 4-11, you expand Disk Manager → Reporting → Storage Subsystems → Storage Subsystem Views → By Volume to access the By Volume report. This report includes LUNs that are assigned to multiple instances of VMware ESX Server from supported subsystems. The total unallocated volume space does not include the capacity of LUNs from VMware ESX Server if they are assigned by raw device mapping (RDM) to a virtual machine.
Figure 4-11 Disk Manager By Volume report
You expand Disk Manager → Reporting → Storage Subsystems → Storage Subsystem Views → Disks to see the Disks report (Figure 4-12). This report now includes LUNs that are assigned to multiple instances of VMware ESX Server. The total physical allocation excludes VM-related physical allocation.
Figure 4-12 Disk Manager By Disk report
4.3 VMware server alerting
To set up Tivoli Storage Productivity Center alerts on the VMware Hypervisors, as shown in Figure 4-13, expand Data Manager → Alerting, right-click Hypervisor Alerts, and select Create Alert.
Figure 4-13 Creating a Hypervisor alert
In the right pane, under Triggering Condition, select the condition for triggering an alert and
sending a notification. You can select the following conditions, as shown in Figure 4-14:
• Hypervisor Discovered: Triggered when a new Hypervisor is detected during the VMware discovery job.
• Hypervisor Missing: Triggered when a previously discovered Hypervisor is no longer accessible by Tivoli Storage Productivity Center.
• Virtual Machine Added: Triggered when a new virtual machine is created on a Hypervisor monitored by Tivoli Storage Productivity Center.
• Virtual Machine Removed: Triggered when a virtual machine on a Hypervisor managed by Tivoli Storage Productivity Center is removed.
Figure 4-14 Hypervisor Alert definition
4.4 VMware virtual machine reporting
Prerequisite: To see detailed information about the virtual machines, you must have an
agent deployed on each virtual machine.
To view detailed information about a particular virtual machine, expand Data Manager → Reporting → Asset → By Computer and select the virtual machine.
As shown in Figure 4-15, this report shows information about the assets of the machine, including the following details:
• Machine host name
• Host ID, which is the unique machine identifier generated by the Tivoli GUID
• Group and domain information
• Network address and IP address
• Machine time zone
• Manufacturer, model, and serial number
• Processor type, speed, and count
• RAM information
• Operating system type and version
• CPU architecture and swap space
• Disk capacity and unallocated disk space
• File system free space
• Last boot time and discovered time
• Last probe time and status
• For VMware virtual machines, information about the Hypervisor and VM configuration file
Figure 4-15 Virtual machine asset report
From this view, you can expand a particular virtual machine further to view its controllers,
disks, file systems, exports/shares, and monitored directories.
4.4.1 Disk reports
To view details about disks that are assigned to a virtual machine, expand Data Manager → Reporting → Asset → By Computer → computer name → Disks → Disk #. The disk
detail window has six tabs. The focus in this section is on the following tabs:
• General
• Paths
• Latest Probe
• Probe History
• Mapping To Hypervisor
General report
The General page (Figure 4-16) includes the computer name, path name, SCSI target ID, logical
unit number, and the number of access paths. This page also includes disk information such as
the manufacturer, model number, firmware, serial number, and manufacture date of the disk.
Figure 4-16 Virtual machine disk information
Paths report
The Paths page shows information regarding the host, operating system type, path, controller,
instance, bus number, SCSI target ID, and logical unit number.
Latest Probe report
The Latest Probe page shows information gathered by Tivoli Storage Productivity Center
during the most recent probe of the disk. This page includes information about the sectors, number of heads, number of cylinders, logical block size, disk capacity, revolutions per minute, power-on time, failure prediction, disk defect information, and time of the last probe.
Probe History report
The Probe History page shows the history of probes that have been run on this disk for
tracking purposes.
Mapping to Hypervisor report
For a given virtual machine disk, you can also view how it is mapped from the Hypervisor by
selecting the Mapping To Hypervisor tab on the disk information report (Figure 4-17).
Figure 4-17 Mapping to Hypervisor report for a virtual machine disk
4.4.2 Mapping to Hypervisor Storage report
To access the Mapping to Hypervisor Storage report, expand Data Manager → Reporting → Asset → System-wide → File Systems or Logical Volumes → Mapping to Hypervisor Storage. This report traces the path from a file system on a virtual machine to the back-end storage volumes that are local to, or mapped to, the Hypervisor. A probe job must have been run for the VMware ESX Server and for the agent on the virtual machine.
Tip: You can use this report to display information similar to what Data Path Explorer shows for physical hosts.
As shown in Figure 4-18, this report shows the following information:
• Computer name
• Mount point
• Disk on VM (Each file system can have more than one disk, resulting in one row per disk.)
• Hypervisor name
• VM name
• VM disk file (on Hypervisor)
• VMFS name (on Hypervisor)
• VMFS mount point (on Hypervisor)
• VMFS disks (on Hypervisor, shown as a comma-delimited list)
• Storage volumes (on Hypervisor)
Figure 4-18 Mapping to Hypervisor Storage report
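Part of this chain can be illustrated directly against the vSphere API: each virtual disk's backing file name carries a "[datastore]" prefix that identifies the VMFS data store on the Hypervisor. The following pyVmomi sketch, with placeholder connection details, prints that middle link of the mapping; the full correlation down to subsystem volumes is what this report provides.

# Illustrative only: map each virtual machine disk to its backing file,
# whose "[datastore]" prefix names the VMFS data store on the Hypervisor.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="tpcadmin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config is None:       # skip inaccessible VMs
        continue
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            # backing.fileName looks like "[datastore1] myvm/myvm.vmdk"
            print("%s: %s -> %s" %
                  (vm.name, dev.deviceInfo.label, dev.backing.fileName))
view.Destroy()
Disconnect(si)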
4.4.3 Virtual Machine File Systems report
To view information about the VMFS, select Data Manager → Reporting → Asset → By Computer → virtual machine → File Systems or Logical Volumes → file system. The
File System page (Figure 4-19) shows a pie chart that illustrates the used space and free
space on the file system. It includes information about the file system type, use count, mount
point, physical size, file system capacity, date and time of last probe and scan, and when the
file system was discovered or removed.
Figure 4-19 VMFS report
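The used and free space that this report charts can also be read from the data store summary on the Hypervisor. The following minimal pyVmomi sketch, with placeholder connection details, prints the same figures.

# Illustrative only: capacity and free space per data store, the same
# figures that the VMFS report presents as a pie chart.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="tpcadmin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    used = s.capacity - s.freeSpace
    print("%s (%s): %.1f GiB used / %.1f GiB total" %
          (s.name, s.type, used / (1024.0 ** 3), s.capacity / (1024.0 ** 3)))
view.Destroy()
Disconnect(si)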
4.4.4 Disk Capacity By Computer report
You can see a complete list of all the computers with agents in your environment (both virtual and physical) that report to a specific Tivoli Storage Productivity Center instance, together with their total disk capacity information. To find this list, expand Data Manager → Reporting → Capacity → Disk Capacity → By Computer as shown in Figure 4-20.
You can use this report to view and chart disk capacity. The report includes one row for each computer, which shows the total storage capacity on that computer and the
associated computer and disk storage information. For more details about a specific
computer, you can click the magnifying glass to the left of the Computer column. You can also
generate charts for one or multiple computers.
Tip: For accurate and consistent data, run regularly scheduled probes against both the
host machines and the storage devices managed by Tivoli Storage Productivity Center.
Figure 4-20 Disk Capacity By Computer report
4.4.5 VMware virtual machines without an agent
You can view a list of VMware virtual machines that Tivoli Storage Productivity Center has
discovered through the VMware ESX Server, but that do not have an agent installed. To
access this report, expand Data Manager → Reporting → Asset → System-wide → Unmanaged Virtual Machines (Figure 4-21).
Figure 4-21 Unmanaged Virtual Machines report
4.4.6 Virtual machines with agents, but without an ESX data source
You can view a list of virtual machines that have an agent installed, but where no VMware
ESX Server is discovered by Tivoli Storage Productivity Center. To view this list, expand Data Manager → Reporting → Asset → System-wide → Virtual Machines with No VMware Agent (Figure 4-22).
Figure 4-22 Virtual Machines with No VMware Agent report
4.4.7 Unused Virtual Disk Files report
With Tivoli Storage Productivity Center, you can identify VMware virtual disk files that were created on multiple instances of VMware ESX Server, but that are currently not allocated to or used by any virtual machine. This report can help you identify files that can be deleted and storage that can be reclaimed. It includes information about the Hypervisor, file path, disk, and file size.
To access the Unused Virtual Disk Files report, expand Data Manager → Reporting → Asset → System-wide → File Systems or Logical Volumes → Unused Virtual Disk Files (Figure 4-23).
Figure 4-23 Unused Virtual Disk Files report
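A rough equivalent of this correlation can be sketched against the vSphere API alone: collect the disk files that registered virtual machines reference, then scan each data store for *.vmdk files outside that set. The following pyVmomi sketch, with placeholder connection details, shows the idea; note that a real implementation must also account for -flat and snapshot delta extents, which Tivoli Storage Productivity Center handles for you.

# Illustrative only: find *.vmdk files on the data stores that no
# registered virtual machine references (compare with the Unused
# Virtual Disk Files report).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="tpcadmin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def objects(vimtype):
    """Return all inventory objects of one managed object type."""
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    result = list(view.view)
    view.Destroy()
    return result

# 1. Every disk file that a registered VM references.
referenced = set()
for vm in objects(vim.VirtualMachine):
    if vm.config is None:
        continue
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            referenced.add(dev.backing.fileName)

# 2. Every *.vmdk file that actually exists on the data stores.
spec = vim.host.DatastoreBrowser.SearchSpec(matchPattern=["*.vmdk"])
for ds in objects(vim.Datastore):
    task = ds.browser.SearchDatastoreSubFolders_Task(
        datastorePath="[%s]" % ds.name, searchSpec=spec)
    WaitForTask(task)
    for folder in task.info.result:
        prefix = folder.folderPath
        if not prefix.endswith(("/", " ")):
            prefix += "/"
        for f in folder.file:
            path = prefix + f.path
            if path not in referenced:
                print("Unused virtual disk file:", path)
Disconnect(si)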
4.4.8 IBM TotalStorage Productivity Center reports
Use of IBM TotalStorage® versus Tivoli Storage name change: At the time this paper
was written, the report options reflected the name IBM TotalStorage Productivity Center
as shown in the windows in this section. The name in the application has since been
updated to reflect IBM Tivoli Storage Productivity Center.
When you expand IBM TotalStorage Productivity Center, you can drill down through
VMware Topology Viewer reports and Rollup Reports, as explained in the following sections,
for more details.
VMware Topology Viewer reports
For a graphical view of your VMware ESX environment, use the Topology Viewer. To view a summary of the entire storage environment, expand IBM Tivoli Storage Productivity Center → Topology. The topology overview is displayed in the right pane as shown in Figure 4-24.
The Topology Viewer does not differentiate between types of computers. Therefore, you do
not see any special indicators for Hypervisors or virtual machines.
Figure 4-24 Topology Viewer overview
To view the Hypervisors and virtual machines, double-click the Computers box, which opens the L0:Computers view. The L0:Computers view (Figure 4-25) shows the Hypervisors and virtual machines, and includes the By Hypervisor grouping.
Virtual machines with no agent deployed: Any virtual machines where an agent is not
deployed are displayed with a status of Unknown in this view.
Figure 4-25 Topology Viewer L0 Computers view
To view information about a particular group of computers, you can double-click the entire
group, which opens the L1:Computers view (Figure 4-26).
Figure 4-26 L1:Computers view
Configuration tip: The default topology grouping is by health status (normal, warning,
critical, unavailable, or unknown). To configure your own groups, right-click an object,
select Launch Detail Panel, and set a user-defined property.
In the L1:Computers view, you can double-click a particular computer to see detailed information about that computer, which is shown in the L2:Computer view (Figure 4-27). The L2:Computer view for the Hypervisor shows information about the Hypervisor and the virtual machines that are configured on it.
Figure 4-27 L2:Computer view for VMware server
Within the L2:Computer view for the Hypervisor, you can also view connectivity information
for the server to see how the Hypervisor is connected to the Fibre Channel network.
The L2:Computer view for a virtual machine is similar to the L2:Computer view for a regular machine and includes a connectivity box. The connectivity box is a subview within the L2:Computer view that shows the connectivity of the current computer (such as the switches that are connected). This view (Figure 4-28) shows the relationship between the LUNs and disks of the virtual machine and the virtual machine disks within the Hypervisor.
The L2:Computer view for a Hypervisor includes a mapping of its virtual machines. It shows
the relationship of the virtual machine disk to the storage subsystem volume.
Figure 4-28 Topology HBA information for VMware server
Within the connectivity view, you can see which Fibre Channel switches the server is
connected to, as shown in Figure 4-29. You can also examine a particular switch and view
which switch ports are used and connected to the Hypervisor.
Figure 4-29 Switch connectivity information for VMware server
Within the L2:Computer topology view, you can also view disk information and LUN
Correlation information.
LUN Correlation information: To view the LUN Correlation information, you must have
the back-end storage device discovered and probed.
Rollup Reports
With Tivoli Storage Productivity Center, you can consolidate asset and health information that is collected from multiple Tivoli Storage Productivity Center instances. This consolidation enables scalable, enterprise-wide management of the storage environment.
To implement Rollup Reports, a Tivoli Storage Productivity Center instance is configured as a
master. One or more master Tivoli Storage Productivity Center servers are configured to
collect rollup information from the subordinates. The master server communicates with the
subordinate servers by using the Device Server API.
A subordinate Tivoli Storage Productivity Center server can have one or more master servers
that are configured to collect rollup information. The master server must be running IBM Tivoli
Storage Productivity Center Standard Edition. The health of the subordinate servers is based
on the result of the check operation and the status from the probe actions. All Tivoli Storage
Productivity Center instances can be configured for discovery and probing for enterprise
reports.
To see a complete list of multiple instances of VMware ESX Server, expand IBM Tivoli Storage Productivity Center → Rollup Reports → Asset → Hypervisors → By Hypervisor (Figure 4-30). The By Hypervisor Rollup report provides a single report that contains the asset information of all the Hypervisors in the environment.
Multiple Tivoli Storage Productivity Center servers: If you have multiple Tivoli Storage
Productivity Center servers in the environment and have the server rollup function enabled,
you can use this report to see all the Hypervisors in your environment.
Figure 4-30 By Hypervisor Rollup Report
4.5 Removed Resource Retention
Information about resources that have been removed from the system or can no longer be
found is retained in the Tivoli Storage Productivity Center database repository. This
information includes details about Hypervisors and virtual machines.
The Removed Resource Retention option deletes removed resources from the database after a retention period. You run Removed Resource Retention after the Hypervisor probes. To configure the retention period, expand Administrative Services → Configuration → Removed Resource Retention, as shown in Figure 4-31.
To clear the history record for a resource and activate a new period for resource retention,
perform a discovery or probe job on the resource after updating the retention period.
Removed Resource Retention period: The default Removed Resource Retention period
is 14 days. You can change the duration depending on how often your environment
changes.
Figure 4-31 Removed Resource Retention configuration window
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in this document. Note that some publications referenced in this list might be available in softcopy only.
• Deployment Guide Series: IBM TotalStorage Productivity Center for Data, SG24-7140
• Introduction to Storage Area Networks, SG24-5470
• Tivoli Storage Productivity Center V4.2 Release Guide, SG24-7894
You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources:
• IBM Tivoli Storage Productivity Center: Installation and Configuration Guide, SC27-2337
• IBM Tivoli Storage Productivity Center User's Guide Version 4.2, SC27-2338
• IBM Tivoli Common Reporting User's Guide, SC23-8737
Online resources
These websites are also relevant as further information sources:
• Tivoli Storage Productivity Center support site:
https://www.ibm.com/software/sysmgmt/products/support/IBMTotalStorageProductivityCenterStandardEdition.html
• VMware website:
http://www.vmware.com
Help from IBM
IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.com/services