
Hitachi Universal Storage Platform™
Family Best Practices with Hyper-V
Best Practices Guide
By Rick Andersen and Lisa Pampuch
April 2009
Summary
Increasingly, businesses are turning to virtualization to achieve several important objectives, including increasing
return on investment, decreasing total cost of operation, improving operational efficiencies, improving
responsiveness and becoming more environmentally friendly.
While virtualization offers many benefits, it also brings risks that must be mitigated. The move to virtualization
requires that IT administrators adopt a new way of thinking about storage infrastructure and application
deployment. Improper deployment of storage and applications can have catastrophic consequences due to the
highly consolidated nature of virtualized environments. The Hitachi Universal Storage Platform™ family brings
enterprise-class availability, performance and ease of management to organizations of all sizes that are
dealing with an increasing number of virtualized business-critical applications.
This paper is intended for use by IT administrators who are planning storage for a Hyper-V deployment. It
provides guidance on how to configure both the Hyper-V environment and a Hitachi Universal Storage
Platform V or Hitachi Universal Storage Platform VM family storage system to achieve the best performance,
scalability and availability.
Contributors
The information included in this document represents the expertise, feedback and suggestions of a number of
skilled practitioners. The authors recognize and sincerely thank the following contributors and reviewers of this
document:
Table of Contents
Hitachi Product Family......................................................................................................................................................... 2
Hitachi Universal Storage Platform Features ........................................................................................................... 2
Hitachi Storage Navigator Software ......................................................................................................................... 2
Hitachi Performance Monitor Software..................................................................................................................... 3
Hitachi Virtual Partition Manager Software............................................................................................................... 3
Hitachi Universal Volume Manager Software ........................................................................................................... 3
Hitachi Dynamic Provisioning Software.................................................................................................................... 3
Hyper-V Architecture............................................................................................................................................................ 4
Windows Hypervisor ................................................................................................................................................ 4
Parent and Child Partitions....................................................................................................................................... 5
Integration Services ................................................................................................................................................. 5
Emulated and Synthetic Devices.............................................................................................................................. 6
Hyper-V Storage Options..................................................................................................................................................... 6
Disk Type ................................................................................................................................................................. 6
Disk Interface ........................................................................................................................................................... 7
I/O Paths .................................................................................................................................................................. 7
Basic Hyper-V Host Setup ................................................................................................................................................... 8
Basic Storage System Setup ............................................................................................................................................... 9
Fibre Channel Storage Deployment ......................................................................................................................... 9
Storage Provisioning .............................................................................................................................................. 15
Storage Virtualization and Hyper-V................................................................................................................................... 16
Hitachi Dynamic Provisioning................................................................................................................................. 17
Storage Partitioning................................................................................................................................................ 21
Hyper-V Protection Strategies........................................................................................................................................... 23
Backups ................................................................................................................................................................. 23
Storage Replication................................................................................................................................................ 23
Hyper-V Quick Migration ........................................................................................................................................ 23
Hitachi Storage Cluster Solution ............................................................................................................................ 24
Hyper-V Performance Monitoring ..................................................................................................................................... 25
Windows Performance Monitor .............................................................................................................................. 25
Hitachi Performance Monitor Feature .................................................................................................................... 25
Hitachi Tuning Manager Software .......................................................................................................................... 26
Hitachi Universal Storage Platform™
Family Best Practices with Hyper-V
Best Practices Guide
By Rick Andersen and Lisa Pampuch
Increasingly, businesses are turning to virtualization to achieve several important objectives:
• Increase return on investment by eliminating underutilization of hardware and reducing administration
overhead
• Decrease total cost of operation by reducing data center space and energy usage
• Improve operational efficiencies by increasing availability and performance of critical applications and
simplifying deployment and migration of those applications
In addition, virtualization is a key tool companies use to improve responsiveness to the constantly changing
business climate and to become more environmentally friendly.
While virtualization offers many benefits, it also brings risks that must be mitigated. The move to virtualization
requires that IT administrators adopt a new way of thinking about storage infrastructure and application
deployment. Improper deployment of storage and applications can have catastrophic consequences due to the
highly consolidated nature of virtualized environments.
The Hitachi Universal Storage Platform™ family brings enterprise-class availability, performance and ease of
management to organizations of all sizes that are dealing with an increasing number of virtualized business-critical applications. The Hitachi Universal Storage Platform with Hitachi Dynamic Provisioning software
supports both internal and external virtualized storage, simplifies storage administration and improves
performance to help reduce overall power and cooling costs.
The storage virtualization technology offered by the Universal Storage Platform readily complements the power
and streamlined operations of Hyper-V environments for rapid deployment of virtual machines. With the
Universal Storage Platform, the Hyper-V infrastructure can be tied to a virtualized pool of storage. This
functionality allows virtual machines under Hyper-V to be configured with a virtual amount of storage, leading to
more efficient utilization of storage resources and reduced storage costs.
The Universal Storage Platform virtualization architecture offers significant storage consolidation benefits that
complement the server consolidation benefits provided by the Hyper-V environment. The Universal Storage
Platform is able to present the storage resources of both Hitachi storage and many heterogeneous third-party
storage systems all as one unified storage pool. This allows storage administrators to allocate storage
resources into multiple storage pools for the needs of each virtual machine under Hyper-V.
This paper is intended for use by IT administrators who are planning storage for a Hyper-V deployment. It
provides guidance on how to configure both the Hyper-V environment and a Hitachi Universal Storage Platform
V or Hitachi Universal Storage Platform VM storage system to achieve the best performance, scalability and
availability.
Hitachi Product Family
Hitachi Data Systems is the most trusted vendor in delivering complete storage solutions that provide dynamic
tiered storage, common management, data protection and archiving, enabling organizations to align their
storage infrastructures with their unique business requirements.
Hitachi Universal Storage Platform Features
The Hitachi Universal Storage Platform V is the most powerful and intelligent enterprise storage system in the
industry. The Universal Storage Platform V and the smaller footprint Universal Storage Platform VM are based
on the Universal Star Network™ architecture. These storage systems deliver proven and innovative controller-based virtualization, logical partitioning and universal replication for open systems and mainframe
environments.
With this architecture as its engine, the Hitachi Universal Storage Platform V redefined the storage industry. It
represents the world's first implementation of a large scale, enterprise-class virtualization layer combined with
thin provisioning software. It delivers unprecedented performance, supporting over 4.0 million I/O per second
(IOPS), up to 247PB of internal and external virtualized storage capacity and 512GB of directly addressable
cache.
The Hitachi Universal Storage Platform VM blends enterprise-class functionality with a smaller footprint to meet
the business needs of entry level enterprises and fast growing mid-sized organizations, while supporting
distributed or departmental applications in large enterprises. With the Hitachi Universal Storage Platform VM,
smaller organizations can enjoy the same benefits as large enterprises in deploying and managing their
storage infrastructure in a way never possible before. It supports over 1.2 million I/O per second (IOPS), up to
96PB of internal and external virtualized storage capacity and 128GB of directly addressable cache.
An integral component of the Hitachi Services Oriented Storage Solutions architecture, the Hitachi Universal
Storage Platform V and Universal Storage Platform VM provide the foundation for matching application
requirements to different classes of storage. These storage systems deliver critical services such as these:
• Virtualization of storage from Hitachi and other vendors into one pool
• Thin provisioning through Hitachi Dynamic Provisioning for nondisruptive volume expansion
• Security services, business continuity services and content management services
• Load balancing to improve application performance
• Nondisruptive dynamic data migration from Hitachi and other storage systems
• Control unit virtualization to support massive consolidation of storage services on a single platform
Hitachi Storage Navigator Software
Hitachi Storage Navigator software is the integrated interface for the Universal Storage Platform family
firmware and software features. Use it to take advantage of all of the Universal Storage Platform’s features.
Storage Navigator software provides both a Web-accessible graphical management interface and a command-line interface to allow ease of storage management.
Storage Navigator software is used to map security levels for SAN ports and virtual ports and for inter-system
path mapping. It is used for RAID-level configurations, for LU creation and expansion, and for online Volume
Migrations. It also configures and manages Hitachi replication products. It enables online microcode updates
and other system maintenance functions and contains tools for SNMP integration with enterprise management
systems.
Hitachi Performance Monitor Software
Hitachi Performance Monitor software provides detailed, in-depth storage performance monitoring and
reporting of Hitachi storage systems including drives, logical volumes, processors, cache, ports and other
resources. It helps organizations ensure that they achieve and maintain their service level objectives for
performance and availability, while maximizing the utilization of their storage assets. Performance Monitor
software’s in-depth troubleshooting and analysis reduce the time required to resolve storage performance
problems. It is an essential tool for planning and analysis of storage resource requirements.
Hitachi Virtual Partition Manager Software
Hitachi Virtual Partition Manager software logically partitions Universal Storage Platform V and Universal
Storage Platform VM cache, ports and disk capacity, including capacity on externally attached storage
systems. The software enables administrators to create Hitachi Virtual Storage Machines. Each machine is an isolated group of storage resources, with its own storage partition administrator. Logical partitioning guarantees data
privacy and quality of service (QoS) for host virtualized and non-virtualized environments sharing the same
storage platform.
Hitachi Universal Volume Manager Software
Hitachi Universal Volume Manager software provides for the virtualization of a multi-tiered storage area
network comprised of heterogeneous storage systems. It enables the operation of multiple storage systems
connected to a Hitachi Universal Storage Platform system as if they are all in one storage system and provides
common management tools and software. The shared storage pool comprised of external storage volumes can
be used with storage system-based software for data migration and replication, as well as any host-based
application. Combined with Hitachi Volume Migration software, Universal Volume Manager provides an
automated data lifecycle management solution across multiple tiers of storage.
Hitachi Dynamic Provisioning Software
Hitachi Dynamic Provisioning software provides the Universal Storage Platform V and Universal Storage
Platform VM with thin provisioning services. Thin provisioning gives applications access to virtual storage
capacity. Applications accessing virtual, thin provisioned volumes are automatically allocated physical disk
space, by the storage system, as they write data. This means volumes use enough physical capacity to hold
application data, and no more. All thin provisioned volumes share a common pool of physical disk capacity.
Unused capacity in the pool is available to any application using thin provisioned volumes. This eliminates the
waste of overallocated and underutilized storage.
Hitachi Dynamic Provisioning software also simplifies storage provisioning and automates data placement on
disk for optimal performance. Administrators do not need to micro-manage application storage allocations or
perform complex, manual performance tuning. In addition, physical storage resources can be added to the thin
provisioning pool at any time, without application downtime. In Hyper-V environments, Hitachi Dynamic
Provisioning software provides another benefit: wide striping, which greatly improves performance and eliminates
the need for administrators to tune virtual machine volume placement across spindles.
Hyper-V Architecture
Microsoft® Hyper-V is a hypervisor-based virtualization technology from Microsoft that is integrated into
Windows Server 2008 x64 editions of the operating system. Hyper-V allows a user to run multiple operating
systems on a single physical server. To use Hyper-V in Windows Server 2008, enable the Hyper-V role on the
Microsoft Windows Server 2008 server.
Figure 1 illustrates Hyper-V architecture.
Figure 1. Hyper-V Architecture
The Hyper-V role provides the following functions:
• Hypervisor
• Parent and child partitions
• Integration services
• Emulated and synthetic devices
Windows Hypervisor
The Windows Hypervisor, a thin layer of software that allows multiple operating systems to run simultaneously on a single physical server, is the core component of Hyper-V. The Windows Hypervisor is responsible for the
creation and management of partitions that allow for isolated execution environments. As shown in Figure 1,
the Windows Hypervisor runs directly on top of the hardware platform, with the operating systems running on
top.
Parent and Child Partitions
To run multiple guest virtual machines with isolated execution environments on a physical server, Hyper-V
technology uses a logical entity called a partition. These partitions are where an operating system and its applications execute. Hyper-V defines two kinds of partitions: parent and child.
Parent Partition
Each Hyper-V installation consists of one parent partition, which is a virtual machine that has special or
privileged access. Some documentation might also refer to parent partitions as host partitions. This document
uses the term parent partition.
The parent partition is the only virtual machine with direct access to hardware resources. All of the other virtual
machines, which are known as child partitions, go through the parent partition for device access.
To create the parent partition, enable the Hyper-V role in Server Manager and restart the server. After the
system restarts, the Windows Hypervisor is loaded first, and then the rest of the stack is converted to become
the parent partition. The virtualization stack runs in the parent partition and has direct access to the hardware
devices. The parent partition then creates the child partitions that house the guest operating systems.
Child Partition
Hyper-V executes a guest operating system and its associated applications in a virtual machine, or child
partition. Some documentation might also refer to child partitions as guest partitions. This document uses the
term child partition.
Child partitions do not have direct access to hardware resources, but instead have a virtual view of the
resources, which are referred to as virtual devices. Any request to the virtual devices is redirected via the
VMBus to the devices in the parent partition. The VMBus is a logical channel that enables inter-partition
communication.
The parent partition runs Virtualization Service Providers (VSPs), which connect to the VMBus and handle
device access requests from child partitions. Child partition virtual devices internally run a Virtualization Service
Client (VSC), which redirects the request to VSPs in the parent partition via the VMBus. This entire process is
transparent to the guest OS.
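As an illustration of this flow, the following minimal sketch models a VSC forwarding a device request over the VMBus to a VSP in the parent partition. The class and method names are illustrative only; they are not part of any Hyper-V API.

# Conceptual sketch of the VSC -> VMBus -> VSP request flow described above.
# Names are illustrative; this is not a Hyper-V API.
from collections import deque


class VMBus:
    """Logical channel carrying requests from child partitions to the parent."""

    def __init__(self):
        self._channel = deque()

    def send(self, request):
        self._channel.append(request)

    def receive(self):
        return self._channel.popleft()


class VirtualizationServiceProvider:
    """Runs in the parent partition and performs the real device access."""

    def handle(self, request):
        return f"parent partition completed {request}"


class VirtualizationServiceClient:
    """Runs in the child partition; redirects device requests over the VMBus."""

    def __init__(self, vmbus, vsp):
        self._vmbus, self._vsp = vmbus, vsp

    def read_block(self, lun, block):
        self._vmbus.send(f"read LUN {lun}, block {block}")
        return self._vsp.handle(self._vmbus.receive())


vmbus = VMBus()
vsc = VirtualizationServiceClient(vmbus, VirtualizationServiceProvider())
print(vsc.read_block(lun=0, block=2048))  # transparent to the guest application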
Integration Services
Integration services consist of two components that are installed on the guest OS to improve performance while running under Hyper-V: enlightened I/O and integration components. The version of the guest OS deployed determines which of these two services can be installed on the guest OS.
Enlightened I/O
Enlightened I/O is a Hyper-V feature that allows virtual devices in a child partition to make better use of host resources because VSC drivers in these partitions communicate with VSPs directly over the VMBus for storage, networking and graphics subsystem access. Enlightened I/O is a specialized, virtualization-aware implementation of high-level communication protocols like SCSI that takes advantage of the VMBus directly,
bypassing any device emulation layer. This makes the communication more efficient, but requires the guest OS
to support Enlightened I/O. At the time of this writing, Windows 2008, Windows Vista and SUSE Linux are the
only operating systems that support Enlightened I/O, allowing them to run faster as guest operating systems
under Hyper-V than other operating systems that need to use slower emulated hardware.
Integration Components
Integration components (ICs) are sets of drivers and services that enable guest operating systems to use
synthetic devices, thus creating more consistent child partition performance. By default, guest operating
systems only support emulated devices. Emulated devices normally require more overhead in the hypervisor to
perform the emulation and do not utilize the high-speed VMBus architecture. By installing integration
components on the supported guest OS, you can enable the guest to utilize the high-speed VMBus and utilize
synthetic SCSI devices.
Emulated and Synthetic Devices
Hardware devices that are presented inside of a child partition are called emulated devices. The emulation of
this hardware is handled by the parent partition. The advantage of emulated devices is that most operating
systems have built-in device drivers for them. The disadvantage is that emulated devices are not designed for
virtualization and thus have lower performance than synthetic devices.
Synthetic devices are optimized for performance in a Hyper-V environment. Hyper-V presents synthetic
devices to the child partition. Synthetic devices are high performance because they do not emulate hardware
devices. For example, with storage, the SCSI controller only exists as a synthetic device. For a list of guest
operating systems that support synthetic SCSI devices, see the Hyper-V Planning and Deployment Guide.
Hyper-V Storage Options
Hyper-V deployment planning requires consideration of three key factors: the type of disk to deploy and
present to child partitions, the disk interface and the I/O path.
Disk Type
The Hyper-V parent partition can present two disk types to guest operating systems: virtual hard disks (VHD)
and pass-through disks.
Virtual Hard Disks
Virtual hard disks (VHDs) are files that are stored on the parent hard disks. These disks can either be SAN
attached or local to the Hyper-V server. The child partition sees these files as its own hard disk and uses the
VHD files to perform storage functions.
Three types of VHD disks are available for presentation to the host:
• Fixed VHD — The size of the VHD is fixed and the VHD file is fully allocated on the LU at the time the VHD is defined. Normally this allows for better performance than dynamic or differencing VHDs because the pre-allocated VHD is less fragmented and the parent partition file system does not incur the overhead of extending the VHD file. A fixed VHD has the potential for wasted or unused disk space. Consider also that after the VHD is full, any further write operations fail even though additional free storage might exist on the storage system.
• Dynamic VHD — The VHD is expanded by Hyper-V as needed. Dynamic VHDs occupy less storage as compared to fixed VHDs, but at the cost of slower throughput. The maximum size to which the disk can expand is set at creation time, and writes fail when the VHD is fully expanded. Note that this dynamic feature only applies to expanding the VHD. In other words, the VHD does not automatically decrease in size when data is removed. However, dynamic VHDs can be compacted using the Hyper-V virtual hard disk manager to free any unused space.
• Differencing VHD — VHD that involves both a parent and child disk. The parent VHD disk contains the
baseline disk image with the guest operating systems and most likely an application and data associated
with that application. After the VHD parent disk is configured for the guest, a differencing disk is assigned as
a child to that partition. As the guest OS executes, any changes made to the parent baseline VHD are stored
on the child differencing disk. Differencing VHDs are good for test environments but performance can
degrade because all I/O must access the parent VHD disk as well as the differencing disk. This causes
increased CPU and disk I/O utilization.
Because dynamic VHDs have more overhead, best practice is to use fixed VHDs in most circumstances. For
heavy application workloads such as Exchange or SQL, create multiple fixed VHDs and isolate applications
files such as database and logs on their own VHDs.
Pass-through Disks
A Hyper-V pass-through disk is a physical disk or LU that is mapped or presented directly to the guest OS.
Hyper-V pass-through disks normally provide better performance than VHD disks.
After the pass-through disk is visible to and offline within the parent partition, it can be made available to the
child partition using the Hyper-V Manager. Pass-through disks have the following characteristics:
• Must be in the offline state from the Hyper-V parent perspective, except in the case of clustered or highly
available virtual machines.
• Presented as raw disk to the parent partition
• Cannot be dynamically expanded
• Do not allow the capability to take snapshots or utilize differencing disks
Disk Interface
Hyper-V supports both IDE and SCSI controllers for both VHD and pass-through disks. The type of controller
you select is the disk interface that the guest operating system sees. The disk interface is completely
independent of the physical storage system.
Table 1 shows a summary of disk interface capabilities and restrictions.
Table 1. Disk Interface Considerations

Disk Interface | Consideration | Restriction
IDE | All child partitions must boot from an IDE device. | None.
IDE | A maximum of four IDE devices are available for each child partition. | A maximum of two devices per IDE controller for a maximum of four devices per child partition.
IDE | Virtual DVD drives can only be created as an IDE device. | None.
SCSI | Best choice for all volumes based on I/O performance. | None.
SCSI | Requires that Integration Services be installed on the child partition. | Guest OS specific.
SCSI | Can define a maximum of four SCSI controllers per child partition. | A maximum of 64 devices per SCSI controller for a maximum of 256 devices per child partition.
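As a minimal sketch of how these limits constrain a child partition design, the following Python snippet checks a proposed disk layout against the values in Table 1. The function and its arguments are illustrative, not part of any Hyper-V tooling.

# Minimal sketch: validate a planned child partition disk layout against the
# Hyper-V interface limits summarized in Table 1 (illustrative only).

IDE_DEVICE_LIMIT = 4            # maximum IDE devices per child partition
SCSI_CONTROLLER_LIMIT = 4       # maximum SCSI controllers per child partition
SCSI_DEVICES_PER_CONTROLLER = 64


def validate_disk_layout(ide_devices, scsi_controllers, scsi_devices):
    """Return a list of problems with a proposed child partition disk layout."""
    problems = []
    if ide_devices < 1:
        problems.append("Child partitions must boot from an IDE device.")
    if ide_devices > IDE_DEVICE_LIMIT:
        problems.append(f"Too many IDE devices: {ide_devices} > {IDE_DEVICE_LIMIT}.")
    if scsi_controllers > SCSI_CONTROLLER_LIMIT:
        problems.append(f"Too many SCSI controllers: {scsi_controllers} > {SCSI_CONTROLLER_LIMIT}.")
    if scsi_devices > scsi_controllers * SCSI_DEVICES_PER_CONTROLLER:
        problems.append("Too many SCSI devices for the number of SCSI controllers.")
    return problems


# Example: one IDE boot disk, two SCSI controllers, five data disks -> no problems.
print(validate_disk_layout(ide_devices=1, scsi_controllers=2, scsi_devices=5))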
I/O Paths
The storage I/O path is the path that a disk I/O request generated by an application within a child partition must
take to a disk on the storage system. Two storage configurations are available, based on the type of disk
selected for deployment.
VHD Disk Storage Path
With VHD disks, all I/O goes through two complete storage stacks, once in the child partition and once in the
parent partition. The guest application disk I/O request goes through the storage stack within the guest OS and
onto the parent partition file system.
Pass-through Disk Storage Path
When using the pass-through disk feature, the NTFS file system on the parent partition can be bypassed during disk operations, minimizing CPU overhead and maximizing I/O performance. With pass-through disks,
the I/O traverses only one file system, the one in the child partition. Pass-through disks offer higher throughput
because only one file system is traversed, thus requiring less code execution.
When hosting applications with high storage performance requirements, deploy pass-through disks.
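The following short sketch summarizes, as data, the layers an I/O traverses for each disk type as described above; it is conceptual only.

# Conceptual sketch: the storage layers a guest I/O traverses for each disk
# type, as described in this section (illustrative, not exhaustive).
IO_PATH = {
    "vhd": ["guest file system", "guest storage stack", "VSC -> VMBus -> VSP",
            "parent NTFS file system", "parent storage stack", "HBA", "storage system"],
    "pass-through": ["guest file system", "guest storage stack", "VSC -> VMBus -> VSP",
                     "parent storage stack (raw disk, NTFS bypassed)", "HBA", "storage system"],
}

for disk_type, layers in IO_PATH.items():
    print(disk_type, "->", " / ".join(layers))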
Basic Hyper-V Host Setup
Servers utilized in a Hyper-V environment must meet certain hardware requirements. For more information,
see the Hyper-V Planning and Deployment Guide.
Note: Best practice is to install the Integration components on any child partition to be hosted under Hyper-V.
The integration components install enlightened drivers to optimize the overall performance of a child partition.
Enlightened drivers provide support for the synthetic I/O devices, which significantly reduces CPU overhead for
I/O when compared to using emulated I/O devices. In addition, it allows the synthetic I/O device to take
advantage of the unique Hyper-V architecture not available to emulated I/O devices, further improving the
performance characteristics of synthetic I/O devices. For more information, see the Hyper-V Planning and
Deployment Guide.
Multipathing
Hitachi recommends the use of dual SAN fabrics, multiple HBAs and host-based multipathing software when
deploying business-critical Hyper-V Server applications. Two or more paths from the Hyper-V Server connecting to two independent SAN fabrics are essential for ensuring the redundancy required for critical
applications. The Universal Storage Platform V supports up to 224 Fibre Channel ports and the Universal Storage Platform VM supports up to 48 Fibre Channel ports; both support direct connections as well as multiple paths through a Fibre Channel switch. Unique port virtualization technology dramatically expands
connectivity from Windows Server to the Universal Storage Platform. Each physical Fibre Channel port
supports 1024 virtual ports.
Multipathing software such as Hitachi Dynamic Link Manager and Microsoft Windows Server 2008 native MPIO
are critical components of a highly available system. Multipathing software allows the Windows operating
system to see and access multiple paths to the same LU, enabling data to travel down any available path for
increased performance or continued access to data in the case of a failed path. Hitachi Dynamic Link Manager
includes the following load balancing algorithms that are especially suited for Hitachi storage systems:
• Round robin
• Extended round robin
• Least I/Os
• Extended least I/Os
• Least blocks
• Extended least blocks
The introduction of additional load balancing algorithms increases the choices that are available for improving the performance of your Hyper-V environment. Conduct testing to establish which of these algorithms is best suited for your Hyper-V environment.
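As a conceptual illustration of the difference between two of these policies, the following sketch shows how a round robin selector and a least-I/Os selector would each choose the next path. This is illustrative only; Hitachi Dynamic Link Manager and Windows MPIO implement these algorithms inside the I/O driver stack.

# Minimal sketch of how two common multipathing load-balancing policies choose
# a path for the next I/O (conceptual only).
from itertools import cycle


class RoundRobinSelector:
    """Rotate through the available paths regardless of their current load."""

    def __init__(self, paths):
        self._paths = cycle(paths)

    def next_path(self, outstanding_ios):
        return next(self._paths)


class LeastIOsSelector:
    """Send the next I/O down the path with the fewest outstanding I/Os."""

    def next_path(self, outstanding_ios):
        return min(outstanding_ios, key=outstanding_ios.get)


# Example: path B is busier, so the least-I/Os policy avoids it.
load = {"path_A": 2, "path_B": 9, "path_C": 3}
print(RoundRobinSelector(load.keys()).next_path(load))  # path_A on the first call
print(LeastIOsSelector().next_path(load))               # path_A (fewest outstanding I/Os)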
Hitachi Global Link Manager software consolidates, simplifies and enhances the management, configuration
and reporting of multipath connections between servers and storage systems. Hitachi Global Link Manager
software manages all of the Hitachi Dynamic Link Manager installations in the environment. Use it to configure
multipathing on the Hyper-V hosts, monitor all the connections to the Universal Storage Platform V or Universal
Storage Platform VM storage system, and to report on those connections. Global Link Manager also enables
administrators to configure load-balancing on a per-LU level. Hitachi Global Link Manager software also
integrates with the Hitachi Storage Command Suite of products and is usually installed on the same server as
Hitachi Device Manager.
Key Considerations:
• Hitachi Dynamic Link Manager software can only be used on the Hyper-V parent partition.
• Use the most current Hitachi supported HBA drivers.
• Select the proper HBA queue depth using the formula described below.
• Use at least two HBAs and place them on different buses within the server to distribute the workload over the server’s PCI bus architecture.
• Use at least two Fibre Channel switch fabrics to provide multiple independent paths to the Universal Storage
Platform VM to prevent configuration errors from bringing down the entire SAN infrastructure.
Queue Depth Settings on the Hyper-V Host
Queue depth settings determine how many command data blocks can be sent to a port at one time. Setting
queue depth too low can artificially restrict an application’s performance, while setting it too high might cause a
slight reduction in I/O. Setting queue depth correctly allows the controllers on the Hitachi storage system to
optimize multiple I/Os to the physical disk. This can provide significant I/O improvement and reduce response
time.
Applications that are I/O intensive can have many concurrent, outstanding I/O requests. For that reason, better
performance is generally achieved with higher queue depth settings. However, this must be balanced with the
available command data blocks on each front-end port of the storage system.
The Universal Storage Platform V and Universal Storage Platform VM have a maximum of 2048 command
data blocks available on each front-end port. This means that at any one time up to 2048 active host channel
I/O commands can be queued for service on a front-end port. The 2048 command data blocks on each
front-end port are used by all LUs presented on the port, regardless of the connecting server. When calculating
queue depth settings for Hyper-V Server HBAs, you must also consider queue depth requirements for other
LUs presented on the same front-end ports to all other servers. Hitachi recommends setting HBA queue depth
on a per-target basis rather than per-port basis.
To calculate queue depth, use the following formula:
2048 ÷ total number of LUs presented through the front-end port = HBA queue depth per host
For example, suppose that four servers share a front-end port on the storage system, and between the four
servers, 16 LUs are assigned through the shared front-end port and all LUs are constantly active. The
maximum dynamic queue depth per HBA port is 128, that is:
2048 command data blocks ÷ 16 LUs presented through the front-end port = 128 HBA
queue depth setting
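The same calculation is easy to script when many ports and hosts are involved. The following minimal sketch applies the formula above; the function name is illustrative.

# Minimal sketch: per-host HBA queue depth from the formula in this section.
# 2048 is the number of command data blocks per Universal Storage Platform
# front-end port; the LU count must include every LU presented on that port,
# across all servers sharing it.

COMMAND_DATA_BLOCKS_PER_PORT = 2048


def hba_queue_depth(total_lus_on_port: int) -> int:
    """Return the HBA queue depth per host for a shared front-end port."""
    if total_lus_on_port <= 0:
        raise ValueError("At least one LU must be presented on the port.")
    return COMMAND_DATA_BLOCKS_PER_PORT // total_lus_on_port


# Example from the text: 4 servers share a port and present 16 LUs in total.
print(hba_queue_depth(16))  # 128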
Basic Storage System Setup
The Universal Storage Platform has no system parameters that need to be set specifically for a Hyper-V
environment. The Universal Storage Platform V supports up to 224 Fibre Channel ports and the Universal
Storage Platform VM supports up to 48 Fibre Channel ports.
Fibre Channel Storage Deployment
When deploying Fibre Channel storage on a Universal Storage Platform V system in a Hyper-V environment, it
is important to properly configure the Fibre Channel ports and to select the proper type of storage for the child
partitions that are to be hosted under Hyper-V.
Fibre Channel Front-end Ports
Provisioning storage on two Fibre Channel front-end ports is sufficient for redundancy on the Universal Storage
Platform. This results in two paths to each LU from the Hyper-V host's point of view. For higher availability,
ensure that the target ports are configured to two separate fabrics to make sure multiple paths are always
available to the Hyper-V server.
Hyper-V servers that access LUs on Universal Storage Platform storage systems must be properly zoned so
that the appropriate Hyper-V parent and child partitions can access the storage. With the Universal Storage
Platform, zoning is accomplished at the storage level by using host storage domains (HSDs). Zoning defines
which LUs a particular Hyper-V server can access. Hitachi Data Systems recommends creating an HSD group
for each Hyper-V server and using the name of the Hyper-V server in the HSD for documentation purposes.
Figure 2 illustrates using host storage domains for zoning of the Hyper-V servers and assignment of the LUs.
Figure 2. Hitachi Storage Navigator LU Path and Security Settings
Host Modes
To create host groups for Hyper-V parent partitions, choose 0C[Windows] or 2C[Windows extension] from
the Host Mode drop-down menu. Host Mode 2C[Windows extension] allows the storage administrator to
expand a LU using Logical Unit Size Expansion (LUSE) while the LU is mapped to the host.
Figure 3. Host Mode
Selecting Child Partition Storage
It is important to correctly select the type of storage deployed for the guest OS that is to be virtualized under
Hyper-V. Consider also whether VHD or pass-through disks are appropriate. The following questions can help
you make this determination:
• Is the child partition’s I/O workload heavy, medium or light?
If the child partition has a light workload, you might be able to place all the storage requirements on one VHD
LU. If the child partition is hosting an application such as SQL or Exchange, allocate files that are accessed
heavily, such as log and database files, to individual VHD LUs. Attach each individual LU to its own synthetic
controller.
• What is the maximum size LU required to support the child partition?
If the maximum LU is greater than 2040GB, you must either split the data or utilize pass-through disks. This is due to the 2040GB size limitation for a VHD LU; the sketch following this list illustrates the decision.
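The following minimal sketch combines the two questions above into a simple decision helper. The 2040GB limit and the workload guidance come from this section; the function and its categories are illustrative assumptions.

# Minimal sketch: choose a child partition disk strategy from the two questions
# above (workload weight and maximum LU size). Illustrative only.

VHD_MAX_SIZE_GB = 2040  # size limit for a single VHD LU


def choose_child_partition_storage(workload: str, max_lu_size_gb: int) -> str:
    """Suggest VHD or pass-through storage for a child partition."""
    if max_lu_size_gb > VHD_MAX_SIZE_GB:
        return "Use pass-through disks (or split the data across multiple VHD LUs)."
    if workload == "heavy":
        return ("Use multiple dedicated fixed VHDs; isolate database and log files "
                "on their own VHD LUs, each on its own synthetic SCSI controller.")
    return "A single VHD LU may be sufficient for a light workload."


print(choose_child_partition_storage("heavy", max_lu_size_gb=500))
print(choose_child_partition_storage("light", max_lu_size_gb=3000))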
Dedicated VHD Deployment
Figure 4 shows dedicated VHDs for the application files and the mapping from the Universal Storage Platform V or Universal Storage Platform VM storage system to the Hyper-V parent partition and the child partition. Note that this scenario uses the synthetic SCSI controller interface for the application LUs.
Figure 4. Dedicated VHD Connection
Key Considerations:
• For better performance and easier management, assign each child partition its own set of LUs.
• To enable the use of Hyper-V quick migration of a single child partition, deploy dedicated VHDs.
• To enable multiple child partitions to be moved together using quick migration, deploy shared VHDs.
• To achieve good performance for heavy I/O applications, deploy dedicated VHDs.
Shared VHD Deployment
This scenario utilizes a shared VHD disk, with that single VHD disk hosting multiple child partitions. Figure 5
shows a scenario where Exchange and SQL child partitions share a VHD disk on a Universal Storage Platform
and SharePoint and BizTalk child partitions also share a VHD disk on the Universal Storage Platform.
Figure 5. Shared VHD Connection
Key Considerations:
• It is important to understand the workloads of individual child partitions when hosting them on a single shared
VHD. It is critical to ensure that the RAID groups on the Universal Storage Platform system that are used to
host the shared VHD LUs can support the aggregate workload of the child partitions.
For more information, see the “Number of Child Partitions per VHD, per RAID Group” section of this paper.
• If using quick migration to move a child partition, understand that all child partitions hosted within a shared
VHD move together. Whether the outage is due to automated recovery from a problem with the child partition
or because of a planned outage, all the child partitions in the group are moved.
Pass-through Deployment
This scenario uses pass-through disks instead of VHD disks. A dedicated VHD LU is still required to host
virtual machine configuration files. Do not share this VHD LU with other child partitions on the Hyper-V host.
Figure 6 shows a scenario in which virtual machine configuration files, guest OS binaries, the page file and
SQL Server application libraries are placed on the VHD LU, and the application files are deployed as pass-through disks.
Figure 6. Pass-through Connection
Key Considerations:
• For higher throughput, deploy pass-through disks. Pass-through disks normally provide higher throughput
because only the guest partition file system is involved.
• To achieve an easier migration path, deploy pass-through disks. Pass-through disks can provide an easier
migration path because the LUs used by a physical machine on a SAN can be moved easily to a Hyper-V
environment, and allow a new child partition access to the disk. This scenario is especially appropriate for
partially virtualized environments.
• To support multi-terabyte LUs, deploy pass-through disks. Pass-through disks are not limited in size, so a
multi-terabyte LU is supported.
• Pass-through disks appear as raw disks and are offline to the parent partition.
• If snapshots are required, remember that pass-through disks do not support Hyper-V snapshot copies.
Storage Provisioning
Capacity and performance cannot be considered independently. Performance always depends on and affects
capacity and vice versa. That’s why it’s very difficult or impossible in real-life scenarios to provide best practices for the best LU size, the number of child partitions that can run on a single VHD and so on without
knowing capacity and performance requirements. However, several factors must be considered when planning
storage provisioning for a Hyper-V environment.
Size of LU
When determining the right LU size, consider the factors listed in Table 2. These factors are especially
important from a storage system perspective. In addition, the individual child partition’s capacity and
performance requirements (basic virtual disk requirements, virtual machine page space, spare capacity for
virtual machine snapshots, and so on) must also be considered.
Table 2. LU Sizing Factors

Factor | Comment
Guest base OS size | The guest OS resides on the boot device of the child partition.
Guest page file size | Recommended size is 1.5 times the amount of RAM allocated to the child partition.
Virtual machine files | Define the size the same as the size of the child partition memory plus 200MB.
Application data required by the guest machine | Storage required by the application files such as database and logs.
Data replication | Using more but smaller LUs offers better flexibility and granularity when using replication within a storage system (Hitachi ShadowImage® Replication software, Hitachi Copy-on-Write Snapshot software) or across storage systems (Hitachi Universal Replicator, TrueCopy® Synchronous or Extended Distance software).
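As a minimal sketch, the following snippet adds up the sizing factors from Table 2 for a single child partition. The 1.5 times RAM page file rule and the memory-plus-200MB virtual machine file rule come from the table; the 20 percent headroom factor is an assumption to adjust for your environment.

# Minimal sketch: estimate the LU capacity needed for one child partition using
# the factors in Table 2. The 20 percent headroom is an assumption, not a rule
# from this guide.

def estimate_lu_size_gb(guest_os_gb, guest_ram_gb, app_data_gb, headroom=0.20):
    """Rough LU size estimate for a child partition, in GB."""
    page_file_gb = 1.5 * guest_ram_gb            # Table 2: 1.5 x allocated RAM
    vm_files_gb = guest_ram_gb + 0.2             # Table 2: memory size plus 200MB
    subtotal = guest_os_gb + page_file_gb + vm_files_gb + app_data_gb
    return subtotal * (1 + headroom)


# Example: 20GB guest OS, 8GB RAM, 150GB of application data.
print(round(estimate_lu_size_gb(guest_os_gb=20, guest_ram_gb=8, app_data_gb=150), 1))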
Number of Child Partitions per VHD LU, per RAID Group
If you decide to run multiple child partitions on a single VHD LU, understand that the number of child partitions
that can run simultaneously on a VHD LU depends on the aggregated capacity and performance requirements
of the child partitions. Because all LUs on a particular RAID group share the performance and capacity offered
by the RAID group, Hitachi Data Systems recommends dedicating RAID groups to a Hyper-V host or a group
of Hyper-V hosts (for example, a Hyper-V failover cluster) and not assigning LUs from the same RAID group to
other non-Hyper-V hosts. This prevents the Hyper-V I/O from affecting or being affected by other applications
and LUs on the same RAID group and makes management simpler.
Follow these best practices:
• Create and dedicate RAID groups to your Hyper-V hosts.
• Always present LUs with the same H-LUN if they are shared with multiple hosts.
• Create VHDs on the LUs as needed.
• Monitor and measure the capacity and performance usage of the RAID group with Hitachi Tuning Manager
software and Hitachi Performance Monitor software.
Monitoring and measuring the capacity and performance usage of the RAID group results in one of the following cases (a simple decision sketch follows this list):
• If all of the capacity offered by the RAID group is used but performance of the RAID group is still good, add
RAID groups and therefore more capacity. In this case, consider migrating the LUs to a different RAID group
with less performance using Hitachi Volume Migration or Hitachi Tiered Storage Manager.
• If all of the performance offered by the RAID group is used but capacity is still available, do not use the
remaining capacity by creating more LUs because this leads to even more competition on the RAID group
and overall performance for the child partitions residing on this RAID group is affected. In this case, leave the
capacity unused and add more RAID groups and therefore more performance resources. Also consider
migrating the LUs to a different RAID group with better performance.
• Consider using Hitachi Dynamic Provisioning to dynamically add RAID groups to the storage pool that the
Hyper-V LUs reside in. This can add additional performance and capacity dynamically to the Hyper-V
environment. For further information about Hitachi Dynamic Provisioning, see the “Hitachi Dynamic Provisioning” section in this document.
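The following hedged sketch reduces these monitoring outcomes to a simple decision based on capacity and performance utilization fractions. The 0.8 threshold is an illustrative assumption; derive real thresholds from Hitachi Tuning Manager and Hitachi Performance Monitor data.

# Minimal sketch of the RAID group decision described above. The 0.8 threshold
# for "used up" is an illustrative assumption.

def raid_group_action(capacity_used: float, performance_used: float,
                      threshold: float = 0.8) -> str:
    """Suggest an action from capacity and performance utilization fractions."""
    capacity_full = capacity_used >= threshold
    performance_full = performance_used >= threshold
    if capacity_full and not performance_full:
        return ("Add RAID groups for capacity; consider migrating LUs to a "
                "lower-performance tier.")
    if performance_full and not capacity_full:
        return ("Leave the remaining capacity unused and add RAID groups for "
                "performance; consider migrating LUs to a faster tier.")
    if capacity_full and performance_full:
        return "Add RAID groups (or add them to the HDP pool) for both capacity and performance."
    return "No action needed; keep monitoring."


print(raid_group_action(capacity_used=0.9, performance_used=0.4))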
In a real environment, it is not possible to use 100 percent of both capacity and performance of a RAID group,
but the usage ratio can be optimized by actively monitoring the systems and moving data to the appropriate
storage tier if needed using Hitachi Volume Migration or Hitachi Tiered Storage Manager. An
automated solution using these applications from the Hitachi Storage Command Suite helps to reduce the
administrative overhead and optimize storage utilization.
Storage Virtualization and Hyper-V
As organizations implement server virtualization with Hyper-V, the need for storage virtualization becomes
more evident. The Hitachi Universal Storage Platform offers built-in storage virtualization that allows other
storage systems (from Hitachi and from third parties) to be attached or virtualized behind the Hitachi Universal
Storage Platform. From a Hyper-V parent point of view, virtualized storage is accessed through the Hitachi
Universal Storage Platform and appears like internal, native storage capacity. The virtualized storage systems
immediately inherit every feature available on the Hitachi Universal Storage Platform (data replication, Hitachi Dynamic Provisioning, and so on) and can be managed and replicated using Hitachi software.
The virtualized storage that is attached behind the Hitachi Universal Storage Platform allows for the implementation of a tiered storage configuration in the Hyper-V environment. This gives Hyper-V parent and
child partitions access to a wide range of storage with different price, performance and functionality profiles.
Storage allocated to each guest machine under Hyper-V can be migrated between different tiers of storage
according to the needs of the application. Data can be replicated locally, or remotely, to accommodate
business continuity needs. For example, utilizing Hitachi Tiered Storage Manager, virtual machines under
Hyper-V can be moved or replicated between different tiers of storage with no disruption to the applications
running in the Hyper-V child partitions.
To virtualize storage systems behind a Hitachi Universal Storage Platform for a Hyper-V infrastructure
environment, use the following high-level checklist. Although the process itself is conceptually simple and
usually only requires logical reconfiguration tasks, always check and plan this process with your Hitachi Data
Systems representative.
1. Quiesce I/O to and unmap the LUs from the Hyper-V hosts on the storage system to be virtualized.
2. Reconfigure the SAN zoning as needed.
3. Map the LUs to the Hitachi Universal Storage Platform using the management tools available on the
storage system to be virtualized.
4. Map the (virtualized) LUs to the Hyper-V parent partition.
Hitachi Dynamic Provisioning
Storage can be provisioned to a Hyper-V infrastructure environment using Hitachi Dynamic Provisioning.
Virtual DP volumes have a defined size and are viewed by the Hyper-V hosts like any other volume, but initially they do not allocate any physical storage capacity from the HDP pool volumes. Data is written and striped across
the HDP pool volumes in a fixed size that is optimized to achieve both performance and storage area savings.
Hitachi Dynamic Provisioning provides support for both thin provisioning and wide striping in a Hyper-V
environment.
Figure 7 provides an overview of Hitachi Dynamic Provisioning on the Universal Storage Platform V and
Universal Storage Platform VM.
Figure 7. Hitachi Dynamic Provisioning Concept Overview
Thin Provisioning
With the use of thin provisioning capabilities provided within the Universal Storage Platform V or Universal
Storage Platform VM, you can provision virtual capacity to a virtual machine application only once and then
purchase physical capacity only as virtual machine applications truly require it for written data. Capacity for all
virtual applications is drawn automatically and as needed from the Universal Storage Platform’s shared storage
pool to eliminate allocated but unused capacity and simplify storage administration.
Table 3. Hitachi Dynamic Provisioning Capacity Allocation

Configuration Step | Effect on Capacity Allocation on the HDP Pool Volumes
Map an HDP volume to the Hyper-V parent partition. | This process does not allocate any physical capacity on the HDP pool volumes.
Create a VHD file on the HDP volume. | This process does allocate some physical capacity on the HDP pool volumes to write VHD metadata.
Install an operating system in the virtual machine on the VHD volume. | This process does allocate capacity on the HDP pool volumes depending on the file system being used in the virtual machine and the amount of data written to the virtual machine's VHD file.
Deploy a virtual machine from a template. | This process does allocate the whole capacity of the virtual machine's disk file on the HDP pool volumes.
Delete data within the virtual machine. | The capacity remains allocated on the HDP pool volumes but might be reused by the virtual machine.
Delete the virtual machine and its VHD file. | The capacity remains allocated on the HDP pool volumes.
The use of thin provisioning within a Hyper-V environment can yield greater utilization of storage assets, and
also simplify storage administration tasks when allocating and managing storage for Hyper-V guest machines.
Figure 8 illustrates the significant savings that Hitachi thin provisioning can yield in a Hyper-V configuration
versus the traditional model of storage provisioning in a shared storage environment.
Figure 8. Thin Provisioning Savings in a Hyper-V Environment
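The savings shown in Figure 8 come from allocating physical capacity only for data that is actually written. The following minimal sketch illustrates that arithmetic with hypothetical virtual machine sizes; the numbers are examples only.

# Minimal sketch: compare traditional (fully allocated) provisioning with
# thin provisioning for a set of hypothetical Hyper-V virtual machines.
# Provisioned and written capacities are illustrative numbers only.

virtual_machines = [
    # (name, provisioned GB, actually written GB)
    ("exchange", 500, 180),
    ("sql", 400, 120),
    ("sharepoint", 300, 60),
    ("biztalk", 200, 40),
]

provisioned = sum(size for _, size, _ in virtual_machines)
written = sum(used for _, _, used in virtual_machines)

print(f"Traditional allocation: {provisioned} GB")
print(f"Thin (HDP) allocation:  {written} GB")
print(f"Capacity saved:         {provisioned - written} GB "
      f"({100 * (provisioned - written) / provisioned:.0f}%)")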
VHD Types and Thin Provisioning
Fixed VHDs normally provide better performance than expanding or dynamic VHDs. With thin
provisioning on the Universal Storage Platform, deploying fixed VHDs for Hyper-V guest machines provides the
performance benefits of fixed VHDs with the added storage savings of dynamic VHDs. This is because space
is only allocated to the guest machine VHDs as required.
For more information about VHD types and their attributes, see the Microsoft TechNet article Frequently Asked
Questions: Virtual Hard Disks in Windows 7.
Performance Factors with Hitachi Dynamic Provisioning
Another benefit in a Hyper-V environment is the use of Hitachi Dynamic Provisioning to utilize wide striping.
Wide striping comes from the allocation of chunks across all the drives in a storage pool, known as an HDP pool, which might contain hundreds of drives or more. Spreading an I/O across that many more physical drives greatly
magnifies performance by parallelizing the I/O across all the spindles in the pool, and can also eliminate the
administrative requirement to tune the placement of virtual machine volumes across spindles.
Performance design and consequent disk design recommendations for Hitachi Dynamic Provisioning are
similar to static provisioning. But for Hitachi Dynamic Provisioning, the requirement is on the HDP pool, rather
than the array group. In addition, the pool performance requirement (number of IOPS) is the aggregate of all
applications using the same HDP pool.
Pool design and use depend on the performance requirements for the applications running on the virtual
machines under Hyper-V. The volume performance feature is an automatic result from the manner in which the
individual HDP pools are created. A pool is created using up to 1024 LDEVs (pool volumes) that provide the
physical space, and the pool’s 42MB allocation pages are assigned on demand to any of the Hitachi Dynamic
Provisioning volumes (DP-VOLS) connected to that pool. Each individual 42MB pool page is consecutively laid
down on a whole number of RAID stripes from one pool volume. Other pages assigned over time to that DP-VOL randomly originate from the next free page from other pool volumes in that pool.
As an example, assume that an HDP pool is assigned 24 LDEVs from 12 RAID-1+0 (2D+2D) array groups. All 48 disks contribute their IOPS and throughput power to all of the DP-VOLs assigned to that pool. If more random read IOPS horsepower is desired for that pool, it can be created with 64 LDEVs from 32 RAID-5 (3D+1P) array groups, thus providing 128 disks of IOPS power to that pool. You can also increase the capacity of an HDP pool by adding array groups to it, thus re-leveling the wide striping across the pool and contributing the new groups’ IOPS and throughput power to all of the DP-VOLs assigned to that HDP pool. This is a powerful feature that
can be used in combination with Hyper-V for rapidly deploying virtual machines and their associated storage
capacity and performance requirements.
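The spindle arithmetic in the example above is easy to reproduce. The following minimal sketch computes pool spindle counts and a rough aggregate IOPS figure; the per-disk IOPS value is a placeholder assumption, not a Hitachi specification.

# Minimal sketch: spindle count and rough random-IOPS capability of an HDP pool
# built from identical array groups. The per-disk IOPS value is a placeholder;
# use measured values for real sizing.

DISKS_PER_GROUP = {"RAID-1+0 (2D+2D)": 4, "RAID-5 (3D+1P)": 4}


def pool_spindles(raid_layout: str, array_groups: int) -> int:
    return DISKS_PER_GROUP[raid_layout] * array_groups


def pool_iops(raid_layout: str, array_groups: int, iops_per_disk: int = 150) -> int:
    return pool_spindles(raid_layout, array_groups) * iops_per_disk


# Examples from the text: 12 RAID-1+0 groups = 48 disks; 32 RAID-5 groups = 128 disks.
print(pool_spindles("RAID-1+0 (2D+2D)", 12), pool_iops("RAID-1+0 (2D+2D)", 12))
print(pool_spindles("RAID-5 (3D+1P)", 32), pool_iops("RAID-5 (3D+1P)", 32))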
Up to 1024 such LDEVs can be assigned to a single pool. This can represent a considerable amount of I/O
power under (possibly) just a few DP-VOLs. This type of aggregation of disks was only possible previously by
the use of somewhat complex host-based volume managers (such as Veritas VxVM) on the servers. One
alternative available on both the Universal Storage Platform V and Universal Storage Platform VM is to use the
LUSE feature, which provides a simple concatenation of LDEVs. Unlike Hitachi Dynamic Provisioning,
however, the LUSE feature is mostly geared towards solving capacity problems only rather than both capacity
and performance capability problems.
Key Considerations:
• Hitachi Dynamic Provisioning volumes (DP-VOLS) are assigned to Hyper-V servers using the same method as for static provisioning.
• Always use the quick format option when formatting Hitachi Dynamic Provisioning volumes, because it is a thin-friendly operation. On Windows Server 2003 a slow (full) format is equally efficient, but tests on Windows Server 2008 show that a slow format writes more data, consuming pool capacity unnecessarily. A slow format also offers no benefit with RAID systems because all devices are preformatted.
• Do not defragment file systems on Hitachi Dynamic Provisioning volumes, including those containing
database or transaction log files. In a Hitachi Dynamic Provisioning environment, defragmentation of NTFS is
rarely space efficient with any data.
• Consider using separate HDP pools for virtual machines that contain databases and logs that operate at high
transaction levels at the same time. This still provides capacity savings while ensuring the highest level of
performance required by your Hyper-V environment.
Storage Partitioning
Ensure application quality of service by partitioning storage resources with Hitachi Virtual Storage Machine™
technology. Hitachi Virtual Partition Manager software enables the logical partitioning of ports, cache and disk
(parity groups) into Virtual Storage Machines on the Hitachi Universal Storage Platform. Partitions allocate
separate, dedicated, secure storage resources for specific users (departments, servers, applications and so
on). Administrators can control resources and execute business continuity software within their assigned
partitions, secured from affecting any other partitions. Partitions can be dynamically modified to meet quality of
service requirements. Overall system priorities, disk space and tiers of storage can be optimized for application
QoS based on changing business priorities.
For example, with Virtual Storage Machines, you can align the storage configuration with the Hyper-V server
infrastructure for test and production deployment lifecycles so that production workloads are not affected by
other non-production array-based activity.
Figure 9 demonstrates the unique connectivity, partitioning and security features available with Hitachi Virtual
Storage Machines.
Figure 9. Hitachi Virtual Storage Machine Connectivity and Partitioning Flexibility
Hitachi Virtual Storage Machines embody many of the same high-level aspects of a Hyper-V virtual
environment, including these:
• Physical resources can be dedicated or shared.
• For those resources that are shared, priorities can be set within and among the Virtual Storage Machines.
• Mobility of virtual machines under Hyper-V is maximized.
• Production and development workloads can be partitioned to meet service level objectives.
• Hyper-V workloads can be isolated to their own partitions allowing for the protection of production workloads.
Hyper-V Protection Strategies
A successful Hyper-V deployment requires careful consideration of protection strategies for backups, disaster
recovery and quick migration.
Backups
Regularly scheduled backups of the Hyper-V servers and the data that resides on the child partitions under
Hyper-V are an important part of any Hyper-V protection plan. With Hyper-V, the backup and protection
process involves both the Hyper-V parent partition and the child partitions that execute under Hyper-V, along
with the applications that reside within the child partition.
When protecting child partitions, two protection strategies are available. You can create application-aware
backups of each child partition as if they are hosted on individual physical servers, or you can back up the
parent partition at a point in time, which then creates a backup of the child partitions that were executing on the
parent partition.
When backing up the parent partition, it’s important to keep the state of the physical server in mind. For
example, if a backup of the parent partition is created while two child partitions are executing applications, the
backup is a point-in-time copy of the parent and the child partitions. Any applications that are executing in the
child partitions are unaware that a backup occurred. This means that applications such as Exchange or SQL Server
cannot freeze writes to the databases, set the appropriate application checkpoints, or flush the transaction logs.
Best practice is to perform application-aware backups in the child partitions.
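As a hedged illustration of that best practice, the sketch below drives an application-aware (VSS full) backup from inside a child partition by calling the Windows Server Backup command-line tool, wbadmin. The backup target, included volume and the assumption that the Windows Server Backup feature is installed are illustrative choices, not requirements from this paper.

# Minimal sketch: run an application-aware (VSS full) backup inside a child
# partition using wbadmin (Windows Server Backup). Assumes the Windows Server
# Backup feature is installed; drive letters are hypothetical.
import subprocess

BACKUP_TARGET = "E:"    # hypothetical dedicated backup volume in the child partition
INCLUDE_VOLUMES = "C:"  # hypothetical volume holding application data

command = [
    "wbadmin", "start", "backup",
    f"-backupTarget:{BACKUP_TARGET}",
    f"-include:{INCLUDE_VOLUMES}",
    "-vssFull",   # request a VSS full backup so application writers participate
    "-quiet",     # do not prompt for confirmation
]

result = subprocess.run(command, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    raise RuntimeError(f"wbadmin backup failed:\n{result.stderr}")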
Storage Replication
Another important part of protection strategy is storage replication. The Universal Storage Platform family has
built-in storage replication features such as ShadowImage Replication software and Copy-on-Write software
that can provide rapid recovery and backup in a Hyper-V environment. As more and more child partitions are
placed on a physical Hyper-V server, the resources within the Hyper-V server might become constrained, thus
affecting the backup window. By using solutions such as ShadowImage Replication software on the Universal
Storage Platform, backups can be created with little effect on the Hyper-V host. These ShadowImage Replication
software copies can also be backed up to tape or disk. This means that child partitions hosted by Hyper-V can
be recovered very quickly.
Hyper-V Quick Migration
Hyper-V quick migration provides a solution for both planned and unplanned downtime. Planned downtime
allows for the quick movement of virtualized workloads to service the underlying physical hardware. This is the
most common scenario when considering the use of quick migration.
Quick migration requires the use of failover clustering because the storage must be shared between the
physical Hyper-V nodes. For a planned migration, quick migration saves the state of a running child partition
(copying its memory from the original server to the shared storage), moves the storage connectivity from one
physical server to another, and then restores the partition on the second server (reading the saved state from
the shared storage back into memory on the new server).
Consider the following when configuring disks for quick migration:
• Best practice is to leverage MPIO and Hitachi Dynamic Link Manager software for path availability and
improved I/O throughput within the Hyper-V cluster.
• Pass-through disks:
– Require that the virtual machine configuration file be stored on a separate LU from the LUs that host the
data files. Normally this is a VHD LU that is presented to the Hyper-V parent partition.
– Do not allow any other child partitions to share the virtual machine configuration file or VHD LU. Sharing
the virtual machine configuration file or VHD LU among child partitions can lead to corruption of data.
– Alleviate problems for child partitions that have a large number of LUs, which would otherwise be
constrained by the 26-drive-letter limit. Pass-through disks do not require a drive letter because they are offline to the parent.
• VHD disks:
– Best practice is to use one child partition per LU.
– More than one child partition can be provisioned per LU, but remember that all child partitions on
the VHD LU fail over as a unit (see the layout-check sketch after this list).
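To make these rules concrete, here is a small, purely illustrative sketch that checks a hypothetical inventory of child partitions against them: the configuration file must live on a different LU than the data disks, and any LU hosting more than one child partition is flagged because those partitions fail over as a unit. The inventory structure and LU names are assumptions, not output from any Hitachi or Microsoft tool.

# Minimal sketch: validate a hypothetical child-partition-to-LU layout against
# the quick migration guidelines above. LU names and the inventory format are
# illustrative only.
partitions = {
    "sql-child-01": {"config_lu": "LU10", "data_lus": ["LU11", "LU12"]},
    "web-child-01": {"config_lu": "LU20", "data_lus": ["LU21"]},
    "web-child-02": {"config_lu": "LU20", "data_lus": ["LU21"]},  # shares LU21
}

# Rule 1: the configuration file LU must not also host that partition's data.
for name, layout in partitions.items():
    if layout["config_lu"] in layout["data_lus"]:
        print(f"{name}: configuration file shares an LU with its data disks")

# Rule 2: child partitions sharing a VHD LU fail over as a single unit.
lu_to_partitions = {}
for name, layout in partitions.items():
    for lu in layout["data_lus"]:
        lu_to_partitions.setdefault(lu, []).append(name)

for lu, names in lu_to_partitions.items():
    if len(names) > 1:
        print(f"{lu} hosts {', '.join(names)}: these partitions fail over together")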
Hitachi Storage Cluster Solution
Integrating Hyper-V with the Universal Storage Platform family replication solutions provides high availability for
disaster recovery scenarios. This solution leverages the Quick Migration feature of Hyper-V to allow for the
planned and unplanned recovery of child partitions under Hyper-V.
Disaster recovery solutions consist of remote LU replication between two sites, with automated failover of child
partition resources to the secondary site in the event that the main site goes down or is otherwise unavailable.
Data replication and control are handled by the Hitachi Storage Cluster (HSC) software and the storage system
controllers. This has little effect on the applications running in the child partition and is fully automated.
Consistency groups and time-stamped writes ensure database integrity.
Child partitions run as cluster resources within the Hyper-V cluster. If a node within the cluster that is hosting
the child partition fails, the child partition automatically fails over to an available node. The child partitions can
be quickly moved between cluster nodes to allow for planned and unplanned outages. With HSC, the replicated
LUs and the child partition are automatically brought online.
Figure 10 illustrates how multiple child partitions and their associated applications can be made highly available
using HSC.
Figure 10. Hitachi Storage Cluster for Hyper-V Solution
Hyper-V Performance Monitoring
A complete, end-to-end picture of your Hyper-V Server environment and continual monitoring of capacity and
performance are key components of a sound Hyper-V management strategy. The principles of analyzing the
performance of a guest partition installed under Hyper-V are the same as analyzing the performance of an
operating system installed on a physical machine. Monitor servers, operating systems, child partition
application instances, databases, database applications, storage and IP networks and the Universal Storage Platform family storage
system using tools such as Windows Performance Monitor (PerfMon) and Hitachi Performance Monitor feature.
Note that while PerfMon provides good overall I/O information about the Hyper-V parent and the guests under
the Hyper-V parent, it cannot identify all possible bottlenecks in an environment. For a good overall
understanding of the I/O profile of a Hyper-V parent and its guest partitions, monitor the storage system’s
performance with Hitachi Performance Monitor feature. Combining data from at least two performance-monitoring tools provides a more complete picture of the Hyper-V environment. Remember that PerfMon is a
per-server monitoring tool and cannot provide a holistic view of the storage system. For a complete view, use
PerfMon to monitor all servers that are sharing a RAID group.
Windows Performance Monitor
PerfMon is a Windows-based application that allows administrators to monitor the performance of a system
using counters or graphs, in logs or as alerts on the local or remote host. The best indicator of disk
performance on a Hyper-V parent operating system is obtained by using the \LogicalDisk(*)\Avg. Disk
sec/Read and \LogicalDisk(*)\Avg. Disk sec/Write performance monitor counters. These counters
measure the time that read and write operations take to complete, as seen by the operating
system. In general, average disk latency greater than 20ms on a disk is cause for concern.
For more information about monitoring Hyper-V related counters, see Microsoft® TechNet’s Measuring
Performance on Hyper-V article.
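A minimal sketch of one way to capture these counters non-interactively follows; it calls the Windows typeperf utility from Python and flags samples above the 20ms guideline. The sample count, interval and simple CSV handling are assumptions to adjust for your environment.

# Minimal sketch: sample LogicalDisk latency counters with typeperf and flag
# values above the 20ms guideline. Sample count and interval are arbitrary choices.
import csv
import subprocess

COUNTERS = [
    r"\LogicalDisk(*)\Avg. Disk sec/Read",
    r"\LogicalDisk(*)\Avg. Disk sec/Write",
]
THRESHOLD_SECONDS = 0.020  # 20ms guideline discussed above

# -sc 12: take 12 samples; -si 5: sample every 5 seconds; output is CSV text.
output = subprocess.run(
    ["typeperf"] + COUNTERS + ["-sc", "12", "-si", "5"],
    capture_output=True, text=True, check=True,
).stdout

# Keep only the quoted CSV lines; typeperf also prints status messages.
rows = list(csv.reader(line for line in output.splitlines() if line.startswith('"')))
header, samples = rows[0], rows[1:]

for row in samples:
    timestamp, values = row[0], row[1:]
    for counter, value in zip(header[1:], values):
        try:
            latency = float(value)
        except ValueError:
            continue  # typeperf leaves blanks for instances with no data
        if latency > THRESHOLD_SECONDS:
            print(f"{timestamp}  {counter}: {latency * 1000:.1f} ms exceeds 20 ms")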
Hitachi Performance Monitor Feature
Hitachi Performance Monitor feature is a controller-based software application, enabled through Hitachi
Storage Navigator software, that monitors the performance of RAID groups, logical units and other
elements of the disk subsystem while tracking utilization rates of resources such as hard disk drives and
processors. Information is displayed using line graphs in the Performance Monitor windows and can be saved
in comma-separated value (.csv) files.
When the disk subsystem is monitored using Hitachi Performance Monitor feature, utilization rates of
resources in the disk subsystem (such as load on the disks and ports) can be measured. When a problem such
as slow response occurs in a host, an administrator can use Hitachi Performance Monitor feature to quickly
determine if the disk subsystem is the source of the problem. Figure 11 shows the Hitachi Performance
Monitor feature interface.
Figure 11. Hitachi Performance Monitor Feature Interface
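Beyond viewing these metrics in the interface, one way to put the .csv exports to use is sketched below. The file name, column names and 60 percent threshold are illustrative assumptions, not part of the product's documented export format; adjust them to match the columns in your actual export.

# Minimal sketch: summarize a hypothetical Hitachi Performance Monitor .csv export
# of parity-group utilization and highlight busy groups. File name and column
# names ("Parity Group", "Utilization (%)") are assumed for illustration.
import csv
from collections import defaultdict

EXPORT_FILE = "parity_group_utilization.csv"  # hypothetical export file
BUSY_THRESHOLD = 60.0                         # flag groups averaging above 60%

totals = defaultdict(list)
with open(EXPORT_FILE, newline="") as f:
    for row in csv.DictReader(f):
        totals[row["Parity Group"]].append(float(row["Utilization (%)"]))

for group, samples in sorted(totals.items()):
    average = sum(samples) / len(samples)
    flag = "  <-- investigate" if average > BUSY_THRESHOLD else ""
    print(f"{group}: average {average:.1f}% over {len(samples)} samples{flag}")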
Hitachi Tuning Manager Software
Hitachi Tuning Manager software enables you to proactively monitor, manage and plan the performance and
capacity for the Hitachi storage that is attached to your Hyper-V servers. Hitachi Tuning Manager software
consolidates statistical performance data from the entire storage path. It collects performance and capacity
data from the operating system, switch ports, storage ports on the storage system, RAID groups and LUs,
giving the administrator a complete performance picture. It provides historical, current and forecast views of
these metrics. For more information about Hitachi Tuning Manager software, see the Hitachi Data Systems
support portal.
Corporate Headquarters 750 Central Expressway, Santa Clara, California 95050-2627 USA
Contact Information: + 1 408 970 1000 www.hds.com / info@hds.com
Asia Pacific and Americas 750 Central Expressway, Santa Clara, California 95050-2627 USA
Contact Information: + 1 408 970 1000 www.hds.com / info@hds.com
Europe Headquarters Sefton Park, Stoke Poges, Buckinghamshire SL2 4HD United Kingdom
Contact Information: + 44 (0) 1753 618000 www.hds.com / info.uk@hds.com
Hitachi is a registered trademark of Hitachi, Ltd. in the United States and other countries. Hitachi Data Systems is a
registered trademark and service mark of Hitachi, Ltd. in the United States and other countries. ShadowImage and
TrueCopy are registered trademarks of Hitachi Data Systems. Universal Storage Platform, Universal Star Network
and Virtual Storage Machine are trademarks of Hitachi Data Systems.
All other trademarks, service marks and company names mentioned in this document or website are properties of
their respective owners.
Notice: This document is for information purposes only, and does not set forth any warranty, expressed or implied,
concerning any equipment or service offered or to be offered by Hitachi Data Systems. This document describes
some capabilities that are conditioned on a maintenance contract with Hitachi Data Systems being in effect, and that
may be configuration dependent, and features that may not be currently available. Contact your local Hitachi Data
Systems sales office for information on feature and product availability.
© Hitachi Data Systems Corporation 2009. All Rights Reserved.
AS-008-00 April 2009