Best practices for deploying an HP EVA array with Microsoft Hyper-V R2
Table of contents
Executive summary............................................................................................................................... 3
Environment planning ........................................................................................................................... 3
Compatibility matrix ......................................................................................................................... 3
Sizing for Hyper-V............................................................................................................................ 4
Memory on a Hyper-V host ............................................................................................................ 4
Configuration details ............................................................................................................................ 4
Blade enclosure ............................................................................................................................... 5
Clustered Hyper-V servers .............................................................................................................. 6
Management server ...................................................................................................................... 7
Command View EVA server ........................................................................................................... 8
Storage........................................................................................................................................... 8
I/O test configuration ....................................................................................................................... 9
Storage options ................................................................................................................................... 9
Multipath in Windows Server 2008 R2 .............................................................................................. 9
EVA Vraid levels ............................................................................................................................ 10
Cluster-shared versus VM-specific volumes ......................................................................................... 11
EVA configuration considerations ..................................................................................................... 13
Disk types ......................................................................................................................................... 14
Virtual hard disks ........................................................................................................................... 15
Fixed VHD ................................................................................................................................. 15
Dynamically expanding VHD ....................................................................................................... 15
Fixed versus dynamically expanding VHD performance .................................................................. 15
Differencing disks ....................................................................................................................... 15
Snapshots .................................................................................................................................. 17
VHD and volume sizing ............................................................................................................... 19
Pass-through disks .......................................................................................................................... 19
Disk expansion .............................................................................................................................. 20
Disk options summary ..................................................................................................................... 21
Virtual disk controllers ........................................................................................................................ 22
Disk controller performance ............................................................................................................. 22
Disks per controller type .................................................................................................................. 22
Controller usage recommendation .................................................................................................... 22
VM and storage management ............................................................................................................. 23
Server Manager ............................................................................................................................. 23
Remote Management .................................................................................................................. 23
Hyper-V Manager....................................................................................................................... 25
Failover Cluster Manager ............................................................................................................ 26
Performance Monitor .................................................................................................................. 27
System Center Virtual Machine Manager .......................................................................................... 29
Quick Storage Migration ............................................................................................................. 30
HP Command View EVA software .................................................................................................... 31
HP Systems Insight Manager ........................................................................................................... 31
System Center Operations Manager (SCOM) .................................................................................... 32
VM deployment options ...................................................................................................................... 34
Windows Deployment Services ........................................................................................................ 35
EVA Business Copy snapclones........................................................................................................ 35
Deployment with SCVMM ............................................................................................................... 35
VM cloning with SCVMM ............................................................................................................ 35
VM template creation and deployment with SCVMM ...................................................................... 35
Physical-to-virtual (P2V) deployment through SCVMM ..................................................................... 36
Summary .......................................................................................................................................... 39
Appendix A—Disk expansion ............................................................................................................. 40
VHD expansion.............................................................................................................................. 40
For more information .......................................................................................................................... 46
HP links ..................................................................................................................................... 46
Microsoft links ............................................................................................................................ 46
Feedback .................................................................................................................................. 46
Executive summary
Server virtualization has been widely adopted in production data centers and continues to gain
momentum due to benefits in power consumption, reduced data center footprint, IT flexibility,
consolidation, availability, and lower total cost of ownership (TCO). With HP server, storage,
infrastructure, and software products, businesses can take advantage of a converged infrastructure.
Doing so reduces segmentation of IT departments and enables better resource use by using pools that
are based on virtualized assets that adapt to ever-changing business requirements.
As server and storage consolidation increases, there is an increased risk of production outages.
Consolidation also raises the workload on servers and storage to new levels. Highly available and
high-performance storage and server solutions are an integral part of creating an efficient,
dependable environment. With the HP StorageWorks Enterprise Virtual Array (EVA) family, HP
BladeSystem components, and Microsoft® Windows® Server 2008 R2 Hyper-V (Hyper-V), businesses
can create a highly available and high-performance solution.
This white paper outlines the virtualized infrastructure and offers best practices for planning and
deploying the EVA with Hyper-V on HP ProLiant BladeSystem servers. New Hyper-V R2 features, such
as Live Migration and Cluster Shared Volumes, help resolve consolidation challenges. This white
paper serves as a resource aid for IT professionals who are responsible for implementing a Hyper-V
environment and covers many server, storage, and virtualization concepts. HP strongly recommends
thoroughly reviewing the documentation supplied with individual solution components to gain the in-depth knowledge necessary for a comprehensive and reliable solution.
Target audience: This white paper is intended for solutions architects, engineers, and project
managers involved with the deployment of HP StorageWorks arrays with virtualization solutions.
Recommendations are offered in this white paper, but it should not be regarded as a standalone
reference.
Familiarize yourself with virtualized infrastructures and with networking in a heterogeneous
environment. A basic knowledge of HP ProLiant servers, HP StorageWorks EVA products, and
management software, such as HP StorageWorks Command View EVA, is required. Links to
information on these topics are available in the For more information section.
In addition, it is important to understand the basic concepts of the Microsoft Hyper-V architecture and
how this product virtualizes hardware resources. For more information, see Hyper-V Architecture.
This white paper describes testing performed in November 2009.
Environment planning
Compatibility matrix
HP SAN compatibility is a key verification in designing a hardware and software solution. The
compatibility tool is available from the HP StorageWorks Single Point of Connectivity Knowledge
(SPOCK) website at http://h20272.www2.hp.com.
Note
An HP Passport account is required to access SPOCK.
In addition to hardware interoperability requirements, SPOCK provides detailed version and
configuration constraints through the Solution Software link. After logging in to SPOCK, select View
by Array under the SAN Compatibility section on the left side of the screen. When prompted to
navigate to a specific storage array, select the Refine link. After choosing an array, select an
operating system and view the complete configuration details.
Sizing for Hyper-V
While this white paper suggests many best practices for a Hyper-V environment, it is not intended to
be a sizing guide, nor does it suggest the maximum capabilities of the equipment used in testing. A
critical piece of planning a virtualized environment is correctly sizing the equipment. For help sizing
your environment, see HP Sizer for Microsoft Hyper-V 2008 R2 and the documents listed in For more
information.
Converting a physical server to a virtual machine (VM) is a convenient method of replacing aging
hardware. Also, converting multiple servers to VMs and consolidating them on fewer physical hosts
creates significant savings in equipment, power, cooling, and real estate. Doing so, however,
requires careful preparation to maintain performance and availability.
Because server consolidation is a common purpose for implementing a virtualized solution, Hyper-V
host servers should have at least as many resources as the sum of used resources on the physical
servers being converted to VMs plus overhead for the host operating system.
Memory on a Hyper-V host
Hyper-V does not allow memory overcommit (assigning more memory to the host and VMs on that
host than is physically available), nor can VMs exist in memory that is paged to the disk. To be
certain the Hyper-V server has sufficient resources, provide physical memory equal to the sum of all
memory allocated to local VMs plus the following:
 At least 512 MB for the host operating system
 300 MB for the Hypervisor
 32 MB for the first GB of RAM allocated to each virtual machine
 8 MB for every additional GB of RAM allocated to each virtual machine
For more information, see Checklist: Optimizing Performance on Hyper-V.
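To illustrate the arithmetic, the following PowerShell sketch estimates the physical memory a host needs for a given set of VM memory allocations. The overhead figures come from the list above; the function name and the example VM sizes are illustrative only.

    # Estimate the host memory required for a set of VM memory allocations (in GB).
    # Overhead values follow the checklist above; treat the result as a minimum.
    function Get-RequiredHostMemoryGB {
        param([double[]]$VmMemoryGB)
        $hostOsGB     = 0.5            # at least 512 MB for the host operating system
        $hypervisorGB = 300 / 1024     # roughly 300 MB for the hypervisor
        $vmOverheadGB = 0
        foreach ($gb in $VmMemoryGB) {
            # 32 MB for the first GB plus 8 MB for each additional GB, per VM
            $vmOverheadGB += (32 + 8 * ($gb - 1)) / 1024
        }
        ($VmMemoryGB | Measure-Object -Sum).Sum + $hostOsGB + $hypervisorGB + $vmOverheadGB
    }

    # Example: ten VMs with 2 GB each and two VMs with 4 GB each
    Get-RequiredHostMemoryGB -VmMemoryGB (@(2) * 10 + @(4, 4))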
Configuration details
This project consists of an HP BladeSystem c3000 enclosure with five HP ProLiant BL460c blade
servers and an EVA4400 storage array. These components are connected in two fabrics, each with a
Brocade 4Gb SAN Switch for HP c-Class BladeSystem as shown in Figure 1.
Figure 1. Configuration overview
Blade enclosure
The HP ProLiant BladeSystem provides many convenient features through the Onboard Administrator
(OA), including environment status information, remote management, and remote connectivity to
servers and switches, making it a great tool for administrators. This powerful interface even includes
clickable images of the enclosure and its components for easy navigation and control as shown in
Figure 2. HP Integrated Lights-Out (iLO) is a management tool that provides power management,
virtual media control, remote console access, and many other administrative benefits. iLO is also
tightly integrated with OA. These components make the blade enclosure easy to remotely manage
and control.
Five BL460c blade servers are used in this environment. These servers each have Intel® Xeon®
processors, 16 GB RAM (except for the storage management server, which has only 8 GB of RAM),
and a dual port Emulex LPe1105-HP 4Gb FC HBA for HP c-Class BladeSystem for connecting to the
SAN.
Figure 2. HP BladeSystem Onboard Administrator
Clustered Hyper-V servers
Three servers (named HyperV1, HyperV2, and HyperV3) are placed in a Microsoft Windows Failover
Cluster for application and VM high availability. These servers have Windows Server 2008 R2
Datacenter, which allows more than four VMs per Hyper-V host, and the Hyper-V role installed on
them. They are used for consolidating many physical servers because each host houses several VMs,
each of which can represent a physical server being consolidated. One of the three host servers,
HyperV3, has a slightly different processor. This is used to verify the new functionality of Hyper-V R2
to live migrate VMs to servers with slightly different processors.
Note
Hyper-V Live Migration can only move VMs between servers whose
processors come from the same vendor's processor family. Hyper-V cannot
live migrate VMs between AMD™-based and Intel-based servers.
Note
VM (guest) operating systems cannot be clustered in this configuration. To
include the guest OS in a cluster, iSCSI storage must be used.
For testing purposes in this environment, each VM runs one of three guest operating systems:
Windows Server 2008, Windows Server 2008 R2, or Red Hat Enterprise Linux 5.3. While the OS of
each physical (host) server resides on its local hard drive, each VM (guest OS) and all data volumes
that those VMs use are located on EVA Vdisks (LUNs) and presented to all three Hyper-V servers. This
provides high availability of those volumes.
Table 1 and Table 2 list the Hyper-V host server specifications in this environment.
Table 1. HyperV1 and HyperV2 host server specifications
Purpose
Hyper-V host (clustered nodes 1 and 2)
Operating system
Windows Server 2008 R2 Datacenter Hyper-V
Processors
Two Intel Xeon (Dual Core) 5160 @ 3.00 GHz (4 cores total)
Memory
16 GB
Table 2. HyperV3 host server specifications
Purpose
Hyper-V host (clustered node 3)
Operating system
Windows Server 2008 R2 Datacenter Hyper-V
Processors
One Intel Xeon (Quad Core) 5365 @ 3.00 GHz (4 cores total)
Memory
16 GB
Management server
A non-clustered blade server is used for server, VM, and cluster management. The management
software can be installed either directly on the host OS or on individual VMs created on the
management server.
Table 3. Management host server specifications
Purpose
Server, VM, and cluster management
Operating system
Windows Server 2008 R2 Datacenter Hyper-V
Processors
Two Intel Xeon (Dual Core) 5160 @ 3.00 GHz (4 cores total)
Memory
16 GB
Microsoft System Center Virtual Machine Manager 2008 R2 (SCVMM)
Software
Microsoft System Center Operations Manager 2007 R2 (SCOM)
HP Systems Insight Manager 5.3 SP1
Command View EVA server
A non-clustered blade server running Windows Server 2008 is used for storage management with HP
StorageWorks Command View EVA 9.1 (CV-EVA). Windows Server 2008 R2 is not used on this host
because (at the time of completion of this white paper) CV-EVA is not supported on Windows Server
2008 R2 or on a Hyper-V VM.
Table 4. CV-EVA host server specifications
Purpose
Storage management
Operating system
Windows Server 2008 Enterprise
Processors
Two Intel Xeon (Dual Core) 5160 @ 3.00 GHz (4 cores total)
Memory
8 GB
Software
HP StorageWorks Command View EVA 9.1
Storage
The EVA4400 used in this environment is running firmware XCS v09522000. It has four disk shelves
holding 48 Fibre Channel hard disk drives (300 GB, 15K rpm). However, only 16 of those drives are used to
hold VM OS disks and other local data or applications. The remaining disks are used for other
applications that are not relevant to this project. This configuration follows existing EVA best practices
that suggest having a multiple of eight drives in each disk group.
Note
EVA performance best practices suggest using as few disk groups as
possible. When considering using multiple EVA disk groups, carefully
evaluate each workload and decide whether environments with similar
workloads can share disk groups for improved performance. For more
information, see the HP StorageWorks 4400/6400/8400 Enterprise
Virtual Array configuration - Best practices white paper.
The logical configuration used in this project is shown in Figure 3.
Figure 3. Logical configuration
I/O test configuration
Although this project is not meant to benchmark performance of the environment, testing a workload
is useful to determine best practices. In this environment, an I/O-intensive workload is generated that
is 60% random and 40% sequential, with a 60/40 read/write ratio. Block sizes range uniformly
between 8 KB and 64 KB, in 8 KB increments (that is, 8 KB, 16 KB, 24 KB, … 64 KB).
Storage options
Multipath in Windows Server 2008 R2
When setting up the storage environment, be sure to obtain the latest Multipath I/O (MPIO) drivers
and management software and install them on each server that accesses the EVA. At the release of
this white paper, the current version of the HP MPIO Full Featured DSM for EVA4x00/6x00/8x00
families of Disk Arrays (EVA MPIO DSM) is 4.00.00, which does not yet support Cluster Shared
Volumes. If Cluster Shared Volumes are used, the built-in Microsoft Windows MPIO drivers must be
used. To use the Microsoft MPIO drivers and tool, enable the Multipath I/O feature as explained in
Installing and Configuring MPIO.
If, however, HP StorageWorks Business Copy EVA Software or HP StorageWorks Continuous Access
EVA Software is used, or if more manual control over the MPIO settings is desired, use the HP MPIO
device-specific module (DSM) software. The Windows MPIO DSM for EVA software is available from
Download drivers and software. Also, follow the existing EVA best practices in the HP StorageWorks
4400/6400/8400 Enterprise Virtual Array configuration - Best practices white paper.
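The Multipath I/O feature can also be enabled from the command line. The following sketch assumes an elevated prompt on Windows Server 2008 R2; check the Microsoft MPIO documentation for the device-claiming options appropriate to your array before running it.

    # Enable the built-in Windows MPIO feature (a reboot may be required)
    dism /online /enable-feature /featurename:MultipathIo

    # Claim all MPIO-capable devices with the Microsoft DSM
    # (-r allows a reboot, -i installs support, -a claims all eligible devices)
    mpclaim -r -i -a ""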
EVA Vraid levels
Because each Vdisk on the EVA is striped across all of the disks in a disk group, one of the largest
factors in array performance is the number of disks in the disk group. However, the Virtual RAID
(Vraid) level can also have a significant impact on performance. With the EVA4400, three Vraid
levels are of interest: Vraid1, Vraid5, and Vraid6.
To test the performance of these Vraid levels, the previously specified workload is applied to fixed
virtual hard disks (VHDs) attached to several VMs. The average I/O operations per second (IOPS)
and response times (ms) for each Vraid level are shown in Figure 4.
Figure 4. IOPS and response times (ms) by Vraid level
As shown in the IOPS by Vraid Level chart, Vraid1 outperforms Vraid5 by 16%, while Vraid5
outperforms Vraid6 by 37% for this workload (results will differ for other workloads). The drawback is
that because Vraid1 mirrors data, it requires significantly more disk capacity than Vraid5 or Vraid6.
While these results are not meant to benchmark the EVA, they demonstrate the point that the Vraid
level must be carefully considered. For performance-critical environments, Vraid1 is the clear choice.
However, with sufficient disks, and for lower requirements on performance or availability, consider
Vraid5 or Vraid6. Vraid6 is similar in structure to Vraid5 except that it uses two parity disks instead of
just one, creating extra I/O traffic, but also greater availability.
Best Practice
For the highest performance and availability, use Vraid1 for EVA Vdisks.
For lower performance or availability requirements, consider using Vraid5
or Vraid6.
Cluster-shared versus VM-specific volumes
Prior to Hyper-V R2, to allow a VM in a cluster to fail over or migrate without impacting
other VMs, each VM had to be on its own LUN presented by external storage. Each LUN also had to
be carefully sized to provide adequate capacity for the VHD, configuration files, snapshots, and other
potential data, all while wasting as little extra drive capacity as possible. This method of storage
management easily leads to either wasted storage or the need to frequently grow LUNs and volumes.
Also, under this model, environments with many VMs need numerous LUNs presented to the Hyper-V
hosts, thus complicating storage management. Furthermore, during failover or migration, ownership of
these LUNs must be transferred between hosts because only one host in the cluster can write to these
volumes at a time.
The Cluster Shared Volume (CSV) feature helps to address these issues. For more information, see
Using Cluster Shared Volumes in a Failover Cluster in Windows Server 2008 R2. With a CSV,
multiple Hyper-V hosts can read/write to the same volume (same LUN) at the same time, allowing
multiple VMs from different hosts to be placed on the same shared volume. Additionally, ownership of
this volume/LUN does not have to be transferred between hosts when a failover or migration occurs,
which allows for faster failover/migration times. In fact, in this environment, start to finish times for live
migrating VMs using CSVs are up to 33% faster than when using individual (non-CSV) LUNs for each
VM. Also, the visible pause in the guest OS when using individual LUNs is roughly 6 seconds,
compared to only a 2 second pause when using CSVs (when no workload is applied).
Note
To perform live migrations, Hyper-V hosts must be in a cluster. However,
CSVs are not required for live migration. Live migration performance
depends heavily upon the current workload applied to the VM, the amount
of RAM the VM owns, and the bandwidth on the virtual network used for
live migration. Live migration takes a different amount of time in each
environment.
With this flexibility, a few large LUNs (each with a CSV on it) can be created for many (or all) VMs
and their associated configuration files and snapshots. This method of storage allocation reduces
wasted disk space as well because the VMs share a larger capacity pool.
To test I/O performance of CSVs versus non-CSV volumes on individual LUNs, I/O workloads are run
on multiple VMs (ranging from 1 to 10 VMs at a time). In one series of tests ("X LUNs"), several EVA
Vdisks are presented (as LUNs) to the host and formatted, each holding a VHD for a different VM.
This is the method required under the original release of Hyper-V. In the second series of tests ("2
CSVs"), two large EVA Vdisks are presented to the host, formatted, and added to the cluster as CSVs.
In this case, all of the VM VHDs reside on these two CSVs. The workload previously specified is
applied and the IOPS achieved is shown in Figure 5. Impressively, the CSVs perform comparably to
using individual non-CSV LUNs for each VHD. Also, the CSVs have response times that are almost
identical to the individual volumes when testing five or fewer VMs.
Figure 5. IOPS for VMs on separate volumes versus two CSV
Note
This is not meant to be a sizing guide, nor do these test results benchmark
the EVA. The workload applied is a very heavy load for only a 16-drive
disk group. Monitor the disk latency in your environment, and, if necessary,
add disk drives to the disk group to improve performance.
Because the EVA spreads the Vdisks across the entire disk group, EVA performance is not significantly
impacted by having more or fewer Vdisks, as long as there are at least two Vdisks to balance across
the two EVA controllers. Notice, however, that the final column of data points (with 10 VMs tested)
shows a performance improvement when using separate volumes for each VM. This is because the
HBA on each host is set by default to have LUN-based queues, meaning there is a queue for each
LUN (EVA Vdisk) presented to the host. Therefore, with 10 LUNs, there are more queues on the host
sending I/O requests, allowing fewer I/O conflicts and keeping the EVA busier than with only two
LUNs. If using a small number of Vdisks on the EVA, consider increasing the HBA queue depths as
recommended in the "LUN count influences performance" section of the HP StorageWorks
4400/6400/8400 Enterprise Virtual Array configuration - Best practices white paper.
Best Practice
For ease of management, faster migrations, and excellent performance, use
Cluster Shared Volumes in your clustered Hyper-V environment. If very few
Vdisks (LUNs) are used on the EVA, consider increasing HBA queue depths
for possible performance improvements.
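If scripting is preferred over the Failover Cluster Manager GUI, the Windows Server 2008 R2 failover clustering PowerShell module can enable CSVs and convert an available cluster disk into one. The sketch below is illustrative; the cluster disk name is an example, and the property and cmdlet syntax should be verified against the module on your hosts.

    Import-Module FailoverClusters

    # Enable the Cluster Shared Volumes feature on the cluster (one-time action)
    (Get-Cluster).EnableSharedVolumes = "Enabled"

    # Convert an available cluster disk (already added as cluster storage) into a CSV
    Add-ClusterSharedVolume -Name "Cluster Disk 2"

    # List the CSVs and their state
    Get-ClusterSharedVolume | Format-Table Name, State -AutoSize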
EVA configuration considerations
The CSV testing previously mentioned uses two LUNs on the EVA. Some testing is also done with only
one CSV on one LUN, and all VM data residing on that LUN. Performance results when using two
LUNs (one CSV on each LUN) is 2% to 8% better, depending on the number of VMs, than when using
only one CSV and LUN. This is because with two LUNs, each is managed by one of the EVA
controllers. If only one LUN exists, the controller managing that LUN must service all requests,
eliminating the performance benefits of the second controller. Requests sent to the secondary
controller are proxied to the managing controller to be serviced, increasing service times.
Simply having an even number of LUNs, however, does not guarantee optimally balanced
performance. By default, when creating a LUN in CV-EVA, the preferred path option is No
preference. With this setting, by mere chance, the LUNs with the heaviest workloads might be
managed by one controller, leaving the other nearly idle. It is therefore beneficial to specify which
LUNs should be managed by which controller to balance the workload across both controllers.
Best Practice
For optimal performance, balance usage of the EVA controllers by
specifically placing each LUN on a desired controller based on the LUN’s
expected workload.
If an event causes a controller to go offline (even briefly), all LUNs that the controller manages are
moved to the other controller to maintain availability, and they do not immediately fail back when the
first controller is again available. Therefore, a small outage on controller A might leave controller B
managing all LUNs and limiting performance potential even if both controllers are currently available.
To avoid this scenario, check the managing controller of the LUNs on the EVA periodically and after
significant events in the environment occur. This can be done using CV-EVA or HP StorageWorks EVA
Performance Monitor (EVAPerf). In CV-EVA, view Vdisk Properties, and select the Presentation tab to
view or change the managing controller of the desired LUN as shown in Figure 6. When changing
the managing controller, be sure to click Save changes at the top of the page or those changes will
not be applied. The owning controller for all LUNs can be quickly viewed using EVAPerf by issuing
the EVAPerf command with the vd (virtual disk) parameter as shown in Figure 7. HP StorageWorks
Storage System Scripting Utility (SSSU) for CV-EVA can also be used to change ownership of the
LUNs and proves to be a powerful scripting tool for changing many configurations of the EVA. For
more information about CV-EVA and SSSU for CV-EVA, see Manuals - HP StorageWorks Command
View EVA Software.
Best Practice
Use CV-EVA and EVAPerf to monitor the managing controller for each LUN
and rebalance the LUNs across the controllers with CV-EVA or SSSU, if
necessary.
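The sketch below shows how SSSU might be scripted from the CV-EVA server to rebalance Vdisk ownership. The manager name, credentials, installation path, Vdisk names, and PREFERRED_PATH keywords are assumptions; verify the exact syntax in the SSSU reference guide for your Command View EVA version before use.

    # Build a small SSSU script that prefers CSV1 to controller A and CSV2 to controller B,
    # then run it with sssu.exe (all names, paths, and keyword values below are examples)
    $sssuScript = @(
        'SELECT MANAGER cveva-server USERNAME=admin PASSWORD=password',
        'SELECT SYSTEM EVA4400',
        'SET VDISK "\Virtual Disks\CSV1" PREFERRED_PATH=PATH_A_BOTH',
        'SET VDISK "\Virtual Disks\CSV2" PREFERRED_PATH=PATH_B_BOTH'
    )
    $sssuScript | Set-Content -Encoding ASCII C:\Scripts\rebalance.txt

    & "C:\Program Files\Hewlett-Packard\Sanworks\Element Manager for StorageWorks HSV\sssu.exe" "FILE C:\Scripts\rebalance.txt"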
Figure 6. Viewing and setting the managing controller for a Vdisk in CV-EVA
Figure 7. Viewing virtual disk properties in EVAPerf
Disk types
Two disk configurations can be used for a VM to access storage: virtual hard disks and pass-through
disks. Each of these disk types has its own purpose and special characteristics as explained in the
following sections.
Virtual hard disks
A virtual hard disk (VHD) is a file with a .vhd extension that exists on a formatted disk on a Hyper-V
host. This disk can be local or external, such as on a SAN, and each VHD has a maximum capacity
of 2,040 GB. To use the VHD, a VM is assigned ownership of the VHD to place its operating system
or other data on. This can be done either during or after VM creation, whereas pass-through disks are
attached to an existing VM. Three VHD types—fixed, dynamically expanding, and differencing disk—
provide the option to focus on either performance or capacity management.
Fixed VHD
With a fixed VHD, the VHD file consumes the specified capacity at creation time, thereby allocating
the initially requested drive space all at once and limiting fragmentation of the VHD file on disk. From
the VM’s perspective, this VHD type then behaves much like any disk presented to an OS: the VHD
file represents a hard drive, and just as data written to a physical drive fills that drive, the VM writes
data within the existing VHD file without expanding it.
Dynamically expanding VHD
With a dynamically expanding VHD, a maximum capacity is specified. However, upon creation, the
VHD file only grows to consume as much capacity on the volume as is currently required. As the VM
writes more data to the VHD, the file dynamically grows until it reaches the maximum capacity
specified at creation time. Because dynamically expanding VHDs only consume the capacity they
currently need, they are very efficient for disk capacity savings. However, whenever the VHD file
needs to grow, I/O requests might be delayed because it takes time to expand that file. Additionally,
increased VHD file fragmentation might occur as it is spread across the disk and mixed with other I/O
traffic on the disk.
Fixed versus dynamically expanding VHD performance
To directly compare the performance of fixed and dynamically expanding VHDs, VMs with each VHD
type have a 50 GB file placed on the VHD. This large file forces the dynamically expanding VHD files
to grow, roughly matching the size of the fixed VHD and eliminating the need for expansion during
the I/O test. The workload specified in I/O test configuration is then applied to the VMs.
With the fixed and dynamically expanding VHD files nearly identical in size and the same workload
applied to both, the dynamically expanding VHDs are expected to perform on par with the fixed VHDs.
The results, however, reveal that the fixed VHDs achieve up to 7% more IOPS at a 7% lower
latency. It is also important to recognize that dynamically growing a VHD, as would occur in a real
environment, would further slow its performance and likely cause increased fragmentation. For this
reason, when using VHDs for VM storage, place performance-sensitive applications and VMs on fixed
VHDs. Dynamically expanding VHDs, on the other hand, offer significant disk capacity savings and
should be used when performance is not critical.
Best Practice
Use fixed VHDs rather than dynamically expanding VHDs for production
environments where performance is critical. Where capacity is more of a
concern, or for general use, use dynamically expanding disks.
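Windows Server 2008 R2 does not ship Hyper-V PowerShell cmdlets, but fixed and dynamically expanding VHDs can also be created outside of Hyper-V Manager with a diskpart script, as in this sketch (paths and sizes are examples; run from an elevated prompt on the host).

    # Create a 50 GB fixed VHD and a 50 GB dynamically expanding VHD with diskpart.
    # The fixed VHD takes longer to create because its full capacity is allocated up front.
    $script = @(
        'create vdisk file="C:\ClusterStorage\Volume1\FixVM.vhd" maximum=51200 type=fixed',
        'create vdisk file="C:\ClusterStorage\Volume1\DynVM.vhd" maximum=51200 type=expandable'
    )
    $script | Set-Content -Encoding ASCII "$env:TEMP\make-vhds.txt"
    diskpart /s "$env:TEMP\make-vhds.txt"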
Differencing disks
A differencing disk allows the creation of new (child) VMs from previously existing (parent) VMs. To
create a differencing disk, the parent VM is shut down and put into read-only mode to protect the
parent VHD. Any changes to the parent’s VHD after the creation of the differencing disk ruin the data
integrity of that differencing disk. A new VHD is then created by specifying the differencing disk type
and making a reference to the parent VHD. The differencing disk that is created contains changes that
would otherwise be made to the parent VM’s VHD. The differencing disk is then assigned to a new
(child) VM that, when started, is identical to the read-only parent VM. At this point, the child VM can
be changed and used like any other VM. Because the differencing disk records only the changes to
the parent VM, it initially uses much less disk capacity compared to the parent VHD. Figure 8 shows
the VHD for the parent VM (FixVM.vhd) and the emphasized differencing disk (CSV7_Diff.vhd) for the
child VM.
Note
Differencing disks use the dynamically expanding VHD format, allowing
them to grow as needed. Therefore, differencing disks experience the same
performance issues as dynamically expanding VHDs.
Figure 8. Differencing disk file
This technique allows rapid provisioning of test VMs because multiple differencing disks and the
associated child VMs can be created referencing the same parent VM to create a tree-structure of
parent-child relationships. A child VM can also serve as a parent for another differencing disk, thus
creating a chain of child VMs.
Note
Because multiple differencing disks can depend on the same VHD, VMs
that use differencing disks cannot be moved with live or quick migration. If
the child VM is in the same cluster resource group as the parent, moving
the parent VM also moves the child, but this is not a live migration.
With differencing disks, many VMs can be created quickly with the same installation and
applications, and then each can be changed as needed. This is ideal for testing different stages of an
upgrade or compatibility between applications or updates. A child VM that uses a differencing disk
can also be merged with the parent VM (assuming that the child VM is the only child of the parent
VM) or the differencing disk can be converted to a fixed or dynamically expanding VHD of its own,
making the child VM independent of the parent.
For example, if a VM has Windows Server 2008 installed, a differencing disk can be created from it
and applied to a child VM. The child can then have a service pack or update installed to verify
functionality with existing software. Next, that child can become the parent for another differencing
disk and VM, which can have the next service pack or update installed. In this manner, multiple
upgrade paths can be tested and thoroughly verified. Then, either of the child VMs can be merged
with the parent VM to apply those changes to the parent, or a child VM’s VHD can be converted to a
fixed or dynamically expanding VHD to allow it to run independent of the parent VM.
Recognize that the more differencing disks there are on a volume, the more I/O is created, thus
degrading performance. The features and functionality of differencing disks are invaluable for test
and development environments, but are likely not suitable for a production environment because of
that performance impact. Also, because LUNs on the EVA are striped across the entire disk group, do
not use differencing disks on LUNs that share a production system disk group.
Best Practice
Use differencing disks for test and development environments to rapidly
deploy similar VMs and to test compatibility. To avoid performance
degradation, do not use differencing disks in a production environment or
in environments that share a disk group with a production system.
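A differencing disk can likewise be created with diskpart by pointing a new VHD at its parent, as in the sketch below. It assumes the parent VHD (FixVM.vhd here) already exists and will no longer be written to; the file names follow the example in Figure 8, and the paths are examples.

    # Create a differencing disk (child) that references an existing parent VHD
    $script = @(
        'create vdisk file="C:\ClusterStorage\Volume1\CSV7_Diff.vhd" parent="C:\ClusterStorage\Volume1\FixVM.vhd"'
    )
    $script | Set-Content -Encoding ASCII "$env:TEMP\make-diff.txt"
    diskpart /s "$env:TEMP\make-diff.txt"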
Snapshots
With Hyper-V, a point-in-time snapshot can be taken of a VM to save the current state, whether the
VM is turned off or still running. This saved state includes not just the content on the VHD, but also the
state of the memory and configuration settings, which allows a VM to be rolled back to a previous
state of the machine. Be aware that rolling back some applications, such as a database, might cause
synchronization issues or data inconsistency. Rolling back VMs with such applications must be done
carefully and might also require some restoration techniques to return those applications to a
functional state.
Snapshots are not an alternative for backups. This point-in-time copy can be taken while an
application is changing data on the disk and application data might not be in a consistent state. Also,
if the base VHD is lost or corrupted, the snapshot is no longer usable, so it is very important to have
an alternative backup strategy.
Note
Snapshots are not a backup solution. Use Microsoft System Center Data
Protection Manager (SCDPM), HP Data Protector, or another backup utility
to prevent data loss in case of a hardware failure or data corruption.
Multiple snapshots of a VM can be taken, creating a chain of possible restore points as shown in
Figure 9.
Figure 9. Snapshot chain
Snapshots record changes to the VM by creating a differencing disk with an .avhd extension as well
as a duplicate XML file with VM configuration data, a .bin file with VM memory contents, and a .vsv
file with processor and other VM state data. If the VM is off when the snapshot is created, no .bin or
.vsv files are created because there is no memory or state to record.
Because snapshots contain changes to the disk, memory, and configuration, they start small, but can
grow rapidly, consuming otherwise available disk capacity and creating additional I/O traffic. This
can hinder the performance of VMs and other applications on the Hyper-V host.
Be aware that taking a snapshot of a VM creates a differencing disk for every VHD that the VM owns.
Therefore, the more VHDs the VM owns, the longer the snapshot takes, the more disk capacity it consumes, and
the greater the potential for performance degradation. Also, while there is a setting in Server
Manager called Snapshot File Location, this setting only changes the storage location of the VM
memory, state, and configuration components of the snapshot. The differencing disk file is still stored
on the volume that owns the original VHD.
Note
Snapshots cannot be taken of VMs that have any pass-through disks
attached.
The ability to roll a VM back to a previous point in time is very useful when a VM experiences
undesirable results while installing patches, updates, or new software, or performing other
configuration changes. The performance impact of having existing snapshots, however, is not ideal
for most production environments.
Because snapshots cause extra I/O, if a snapshot is no longer needed, it is beneficial to remove the
snapshots. Deleting one or more snapshots causes a merging/deleting process to remove the
requested snapshots. If, however, the VM is running when a snapshot is deleted, the system does not
remove the differencing disk immediately. The differencing disk is actually still used until the VM is
shut down, at which time the differencing disk is merged with any necessary VHDs and the disk space
it used is released.
Best Practice
Use Hyper-V snapshots for test and development environments before
installing new patches, updates, or software. To avoid performance
degradation, and because VMs must be shut down to properly remove
differencing disks (.avhd files), do not use snapshots in production
environments or on a disk group shared with a production system.
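Because snapshot differencing files (.avhd) grow silently on the same volume as the parent VHD, it is worth checking their size periodically. A minimal PowerShell sketch, assuming VM storage resides under C:\ClusterStorage:

    # List snapshot differencing files and their current size, largest first
    Get-ChildItem -Path C:\ClusterStorage -Recurse -Filter *.avhd |
        Sort-Object Length -Descending |
        Select-Object FullName, @{Name='SizeGB'; Expression={[math]::Round($_.Length / 1GB, 2)}}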
VHD and volume sizing
One benefit of virtualization is potential storage consolidation. Because of a common best practice of
using separate disks for OS and application data, many physical servers might have large amounts of
unused space on their OS (root) drives. Collectively, those physical servers might have terabytes of
unused disk capacity. With Hyper-V VMs, however, OS VHDs can be sized more appropriately to
save disk capacity.
Properly sizing VHDs at creation time is important because undersizing a VHD can cause problems
for the owning VM (just as a physical server that has a full root disk) and oversizing a VHD wastes
disk capacity. Although it might be tempting to size a VHD just slightly more than necessary for the
OS and applications, remember that unless manually changed, the swap (paging) file resides on the
root VHD of the VM. Also be sure to plan for patches, application installations, and other events that
might increase the capacity that the root VHD needs.
Sizing the volume that the VHD resides on is also very important. For ease of management, it is
common to place the configuration files on the same volume as the root VHD. Remember, however,
that the configuration file with a .bin extension consumes as much disk capacity as there is memory
given to the VM. Also, remember that snapshots are stored on the same volume that holds the VHD.
Hyper-V also periodically uses a small amount of capacity (generally less than 30 MB) on the VHD’s
root volume. Without sufficient free capacity on that volume, a VM might transition to a Paused-Critical state where all VM operations are halted until the issue is resolved and the VM is manually
returned to a Running state. To avoid VM complications, reserve at least 512 MB on any volumes that
a VM owns. If snapshots are used, reserve more capacity, depending on the workload. Also carefully
monitor free space on those volumes. While this might be easy with volumes that hold fixed VHDs,
dynamically expanding VHDs and differencing disks might increase suddenly, causing failures due to
lack of disk capacity.
Best Practice
Keep at least 512 MB free on volumes holding virtual machine VHDs or
configuration data. If dynamically expanding VHDs, differencing disks, or
snapshots are used, keep sufficient disk space free for unexpected sudden
increases in VHD or AVHD size. The necessary capacity depends on the
workloads applied.
Best Practice
Use HP Storage Essentials or Microsoft System Center Operations Manager
(SCOM) to monitor free disk capacity on volumes that hold VM VHDs or
configuration files. Alert the storage administrator if free disk capacity
drops below the desired threshold.
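In addition to SCOM or HP Storage Essentials, a simple script can flag host volumes that are running low on free space. The following sketch uses the 512 MB reserve suggested above as its threshold; adjust the threshold to suit your workloads.

    # Report local volumes with less than 512 MB of free space
    $thresholdBytes = 512MB
    Get-WmiObject Win32_Volume -Filter "DriveType=3" |
        Where-Object { $_.FreeSpace -lt $thresholdBytes } |
        Select-Object Name, Label, @{Name='FreeMB'; Expression={[math]::Round($_.FreeSpace / 1MB)}}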
Pass-through disks
Pass-through disks are different from VHDs in that they are not online to the Hyper-V host, nor do they
use a VHD file. Instead they are passed directly to the VM, allowing less processing overhead and
slightly better performance for the disk. In this environment, pass-through disks show only up to a 5%
performance improvement over VHDs. However, pass-through disks have another significant benefit:
no capacity limit. VHDs have a limit of 2,040 GB. Pass-through disks do, however, lack many of the
features available with VHDs, such as the use of snapshots, differencing disks, CSVs, and overall
portability. With a pass-through disk, the disk is presented to the Hyper-V host, but left offline. It is
later brought online from within the VM.
Note
When using pass-through disks, make sure that the disk is offline to the host
OS. If a host and guest OS each attempt to access a pass-through disk at
the same time, the disk can become corrupted.
When using the Create New Virtual Machine wizard, a pass-through disk cannot be attached to a
VM at the time of the VM’s creation. Instead, when placing the OS on a pass-through disk, create the
VM, and when choosing a disk, select the Attach a virtual hard disk later option. After the VM is
created, instead of selecting a VHD, in VM settings, choose the physical disk option and select a disk
from the list. Be aware that because the VM consumes the entire pass-through disk, the configuration
files must be placed on a different volume, whereas they can reside on the same volume as a VHD.
Best Practice
Because VHDs offer functionality above what is available with pass-through
disks, do not use pass-through disks for OS boot disks. Use pass-through
disks only when application performance is of extreme importance, when
the application vendor recommends allowing raw access to the disk, or
when a single disk needs a capacity greater than 2 TB.
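As noted above, the disk must be offline to the host before it is attached to a VM as a pass-through disk. The diskpart sketch below shows the idea; the disk number is an example and must be confirmed against the list disk output first.

    # Take disk 3 offline on the Hyper-V host so it can be passed through to a VM
    $script = @(
        'select disk 3',
        'offline disk',
        'attributes disk clear readonly'
    )
    $script | Set-Content -Encoding ASCII "$env:TEMP\offline-disk.txt"
    diskpart /s "$env:TEMP\offline-disk.txt"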
Disk expansion
If a fixed VHD’s capacity is nearly consumed or a dynamically expanding VHD reaches its maximum
limit and more capacity is desired, it is possible to expand the VHD. Pass-through disks can also be
expanded in certain circumstances. Note, however, that disk expansion (VHD or pass-through) is not
the same as having a dynamically expanding VHD.
Warning
Attempts to expand pass-through disks that act as the OS boot disk or pass-through disks that use an IDE controller frequently result in data corruption.
Do not attempt to expand bootable pass-through disks or pass-through disks
that use IDE controllers without first testing the expansion thoroughly and
having a current backup of the disk.
Expanding VHDs and pass-through disks might first require increasing the EVA Vdisk capacity. While
an EVA Vdisk’s capacity can be increased with concurrent I/O activity, expanding a VHD requires
that the VHD be disconnected briefly from the VM or, alternatively, that the VM be shut down.
Modifying the structure of any disk includes inherent risk. Before expanding, compressing, or
otherwise changing any disk, make a backup and stop I/O to that disk to prevent data corruption.
For more information about how to expand a VHD or pass-through disk, see Appendix A—Disk
expansion.
Best Practice
If necessary, VHDs and some pass-through disks can be expanded. Before
expanding a disk, be sure to have a current backup of the disk and pause
the I/O traffic to that disk to avoid data corruption.
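For VHDs, the expansion itself can be scripted with diskpart while the VHD is detached from the VM, as in this sketch (the path and the new maximum size in MB are examples). After expanding the VHD, the volume inside the guest must still be extended; see Appendix A for the full procedure.

    # Expand a detached VHD to 100 GB (102,400 MB); the VHD must not be in use
    $script = @(
        'select vdisk file="C:\ClusterStorage\Volume1\FixVM.vhd"',
        'expand vdisk maximum=102400'
    )
    $script | Set-Content -Encoding ASCII "$env:TEMP\expand-vhd.txt"
    diskpart /s "$env:TEMP\expand-vhd.txt"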
Disk options summary
Figure 10 shows a logical comparison of the different disk types. For the best performance, use pass-through disks, but remember that they lack many of the features that are available with VHDs and
perform only slightly better than fixed VHDs. Therefore, using fixed VHDs might be the better option,
while dynamically expanding VHDs can provide significant savings in disk capacity.
Figure 10. Logical disk types
Virtual disk controllers
To present a VHD or pass-through disk to a VM, an IDE or SCSI virtual disk controller must be used.
Controller performance has improved over older virtualization technologies and there are several
subtle differences to be aware of.
Disk controller performance
In some previous virtualization tools from Microsoft, virtual SCSI controllers performed better than
virtual IDE controllers. This is because the SCSI controllers are synthetic devices designed for minimal
overhead (they do not simulate a real or physical device) and I/O requests are quickly sent from the
guest across the Virtual Machine Bus (VMBus) to the host I/O stack.
IDE controllers, however, emulate a real device, which previously required extra processing before
I/O requests were sent to the host I/O stack. With Hyper-V, however, a filter driver bypasses much of
the extra processing and improves performance to equal that of the SCSI controller. Testing in this
project confirms this with drives on SCSI controllers performing less than 3% better than those on IDE
controllers.
Disks per controller type
Hyper-V allows two IDE controllers and four SCSI controllers per VM. Each IDE controller can have
two devices attached, and one of these four total IDE devices must be the VM’s boot disk. If a DVD
drive is desired, it must also reside on an IDE controller. This leaves only two device slots (three if no
DVD drive is used) available for other IDE drives to be attached, after which SCSI controllers must be
used to allow more devices.
Each of the four SCSI controllers has 64 device slots, allowing 256 total SCSI drives. Although DVD
drives cannot reside on a SCSI controller and a VM cannot boot from a SCSI controller, with the
release of R2, the SCSI controller provides the benefit of hot-adding and hot-removing disks on a running VM. To allow this
capability, be sure to add the desired SCSI controllers to the VM before powering on the VM because
controllers cannot be added or removed when a VM is running. Also, Microsoft recommends
spreading disks across separate SCSI controllers for optimal performance. For more information, see
Performance Tuning Guidelines for Windows Server 2008 R2.
Best Practice
To allow VMs to hot-add or hot-remove disks without requiring VM
shutdown and for optimal performance, add all four virtual SCSI disk
controllers to each VM at setup time and balance presented storage evenly
across the controllers.
Controller usage recommendation
A common best practice recommends placing the OS and application data on separate disks. This
can increase availability because it prevents application workloads from impacting an operating
system. For consistency and ease of management, place application and data disks on SCSI
controllers.
Best Practice
The boot disk for a Hyper-V VM must use a virtual IDE controller. For
ease of management and to use the hot-add or hot-remove disk feature
of Hyper-V R2, place application and data disks on virtual SCSI disk
controllers.
If necessary, the controller type that a disk is attached to can be changed. However, because IDE
controllers cannot hot-add or hot-remove disks, the VM must be shut down to do so. After the VM
is shut down, change the controller by simply removing the disk from one controller and adding it
to another.
VM and storage management
While virtualization has many benefits, managing a virtualized environment can still be difficult
because there are many new concepts and challenges. Several tools for managing storage and a
Hyper-V environment are discussed in the following sections.
Server Manager
Server Manager, which includes a compilation of modules in the Microsoft Management Console
(MMC), has interfaces for managing failover clusters, local and external storage, performance, and
many more components of the server. It is also likely the most accurate source of information on VMs
in a Hyper-V environment because it resides on each full Windows Server installation and remains
very current. Many management applications are installed on management servers and must request
information from the necessary host. Some collect VM status information only periodically and do not
present current data all of the time.
Remote Management
With Windows Server 2008 R2, Server Manager can manage remote hosts as well. To do this, first
enable remote management from the target server’s Server Manager by clicking the Configure Server
Manager Remote Management link, and selecting the box in the pop-up window as shown in Figure
11. Then, in Server Manager on another Windows Server 2008 R2 host, right-click the server name,
select Connect to Another Computer, and enter the server name. This can also be done from the
Action menu.
Figure 11. Configuring Server Manager Remote Management
Server core installations of Windows Server 2008 R2 can also be managed from a remote Server
Manager. To enable remote management, on the server core machine, run SCONFIG, and select the
4) Configure Remote Management option. After configuring the necessary firewall ports, select
the Connect to Another Computer option from a host with a full installation of Windows Server
2008 R2 (as explained previously). With this functionality, storage of a server core machine can be
managed from the Server Manager GUI.
For more information about remote management, see Remote Management with Server Manager and
Windows Server 2008 R2 Core: Introducing SCONFIG.
Note
You can also manage the storage of Windows Server 2008 R2 machines
(core or full edition) from a workstation running Windows 7 or Windows
Vista by installing the Remote Server Administration Tools for Windows 7
or the Remote Server Administration Tools for Windows Vista, respectively.
Hyper-V Manager
After the Hyper-V server role is installed, the Hyper-V Manager component becomes available; you
can use it to view the VMs on a local or remote host and perform various tasks. From the Hyper-V
Manager, you can edit VM settings, take snapshots, connect to the VM, and more (see Figure 12).
This is a critical tool in managing Hyper-V VMs and is included in Windows Server 2008 R2.
Figure 12. Hyper-V Manager within Server Manager
Failover Cluster Manager
Another valuable feature in Server Manager is the Failover Cluster Manager. While server
administrators who have set up a cluster in previous versions of Windows might be familiar with this
component, there are two new pieces in the Failover Cluster Manager of interest when working with
VMs.
First, right-clicking a cluster reveals a new option to enable CSVs. After CSVs have been enabled,
a new folder for CSVs appears under the cluster as shown in Figure 13. CSV storage details are
visible in this folder, including the capacity and path to each CSV, none of which use drive letters.
CSVs can now be created from unused cluster storage.
Note
To add a CSV, the desired volume must already be a resource in the cluster
(visible in the Storage folder under the cluster name) and available. If any
VM or other resource is using the volume, it cannot be turned into a CSV.
Figure 13. CSVs from within Failover Cluster Manager
Second, if the cluster is properly configured, live migration becomes an option. To start a live migration, right-click
the VM, highlight Live migrate virtual machine to another node, and select a target as shown in Figure
14. This option is also available through the Actions window.
Figure 14. Live migration from within Failover Cluster Manager
VMs can be started, stopped, and in general managed from within Failover Cluster Manager, making
it a valuable tool for managing a highly available virtualized environment. However, some features,
such as creating snapshots and importing or exporting VMs, are still only available in the Hyper-V
Manager.
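Live migration can also be started from the failover clustering PowerShell module rather than the GUI. A brief sketch follows; the VM group name is an example, and HyperV2 is one of the cluster nodes in this environment.

    Import-Module FailoverClusters

    # Live migrate the clustered VM group "TestVM" to the node HyperV2
    Move-ClusterVirtualMachineRole -Name "TestVM" -Node HyperV2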
Performance Monitor
The Windows Performance Monitor (Perfmon) is a very useful tool for determining what might be the
performance bottleneck in a solution. Perfmon has numerous counters, with categories that include
server hardware, networking, database, cluster, and Hyper-V. While it is possible to collect Perfmon
counters from each VM that runs Windows, doing so can be very tedious. Instead, on the host, collect
Hyper-V counters, such as those in the Hyper-V Virtual Storage Device category, to avoid having to
compile results from multiple VMs. Also, monitor the Cluster Shared Volumes category to determine
whether or not that CSV is being properly used. Counters from the Hyper-V Virtual Storage Device
and Cluster Shared Volumes categories can also be compared with the host’s LogicalDisk category to
see the difference between I/O requests from the VMs and what the host actually sends to disk.
Because the storage devices can also be on a SAN, these counters can be compared to EVAPerf
statistics, such as the virtual disk counters seen by running evaperf with the vd option.
Best Practice
Instead of collecting numerous performance counters from each VM, use
the Windows Performance Monitor on the host to collect Hyper-V counters.
Compare counter sets such as Cluster Shared Volumes and Hyper-V Virtual
Storage Device to those in the LogicalDisk set to understand disk
performance from the VM’s perspective versus the host’s perspective.
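Counters can be collected interactively in Perfmon or scripted. The Get-Counter sketch below samples host-level Hyper-V and LogicalDisk counters; counter and instance names vary by system, so list the available sets with Get-Counter -ListSet if a path is not found.

    # Sample Hyper-V and host disk counters from the parent partition:
    # 12 samples at 5-second intervals
    Get-Counter -Counter @(
        '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time',
        '\Hyper-V Virtual Storage Device(*)\Read Bytes/sec',
        '\Hyper-V Virtual Storage Device(*)\Write Bytes/sec',
        '\LogicalDisk(*)\Avg. Disk sec/Transfer'
    ) -SampleInterval 5 -MaxSamples 12

    # Inspect the counters available in the Cluster Shared Volumes set
    (Get-Counter -ListSet 'Cluster Shared Volumes').Counter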
In addition to monitoring disk performance, it is important to watch the performance of the processors
because it can be challenging to properly balance the processor usage in a virtualized environment.
On non-virtualized servers, it might be sufficient to monitor the \Processor(*)\% Processor Time counter
to determine whether or not a server’s processors are being properly used. However, with Hyper-V
enabled, monitoring this counter either on the host or on the VMs can be very misleading. This is
because for performance monitoring, the host OS is considered another guest OS or VM, and that
counter reports only the processor usage for that particular VM or host OS. Therefore, viewing the
\Processor(*)\% Processor Time counter only shows usage as seen by the host, not the total processor
usage of the physical server.
On VMs, this value can be further skewed if more virtual processors are allocated to the resident VMs
than there are logical processors available. For example, if a server has two dual-core CPUs, this is a
total of four logical processors. However, if the server also has four VMs, each with two virtual
processors, then eight total virtual processors compete for CPU time (not including the virtual
processors for the host OS). If each of these VMs has a CPU-intensive workload, they might saturate
the available logical processors and cause excessive context switching, further decreasing processor
performance.
In this scenario, suppose that all four logical processors are busy 100% of the time running VM workloads. Because one logical processor is roughly equivalent to one virtual processor and the VMs have eight virtual processors between them, only about half of the requested virtual processor capacity can actually be serviced. Therefore, if the \Processor(*)\% Processor Time counter is viewed on any or all of the VMs, it shows (on average) 50% usage, misleadingly suggesting that the server is capable of a heavier workload.
Consider, on the other hand, a scenario with only two VMs, each with a single virtual processor. With each VM running a heavy workload, the \Processor(*)\% Processor Time counter inside the VMs shows high usage, suggesting that the VMs, and perhaps even the physical server, cannot handle the load. The same counter from the host's perspective, however, might show very low usage because the host OS itself is relatively idle. This might cause one to assume that the server can handle the workload and that all is well. Allocating more virtual processors to each VM, however, can still saturate the server's logical processors.
The values in this example are overly simplified because workloads are rarely so evenly distributed
and processor usage is rarely so clearly balanced. However, the example does reveal the importance
of understanding the counters found in Perfmon.
To properly monitor processor usage in a Hyper-V environment, pay attention to two counter sets:
Hyper-V Hypervisor Logical Processor and Hyper-V Hypervisor Virtual Processor. To view the physical
processor usage of the entire server (host OS and guest VMs), view the \Hyper-V Hypervisor Logical
Processor(_Total)\% Total Run Time counter.
Viewing only the \Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time counter, however,
is not sufficient to understand the processor usage on the VMs. If the virtual processor usage is high
and the logical processor usage is low, add more VMs or virtual processors to existing VMs. If the
logical processor usage is high and the virtual processor usage is low, then there are likely more
virtual processors than there are logical processors, causing unnecessary context switching. In this
case, reduce the number of virtual processors and attempt to reach a 1:1 ratio of virtual to logical
processors by either moving VMs to another server or reducing the virtual processors allocated to
each VM. If both logical and virtual processors show high usage, move the VM to a different server or add processor resources, if possible.
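As a quick sketch of how both views can be sampled together from the host, the two counter sets can be collected side by side with Get-Counter; the interval and sample count are illustrative only.

# Compare total physical CPU usage with per-VM virtual processor usage
$cpuCounters = "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time",
               "\Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time"
Get-Counter -Counter $cpuCounters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }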
Best Practice
Do not use the \Processor(*)\% Processor Time counter on the host or VMs
to monitor the Hyper-V server’s total processor usage. Instead, use the
\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time counter to
view processor usage. This includes usage from the host and guest
operating systems.
Best Practice
Monitor VM processor usage with the \Hyper-V Hypervisor Logical
Processor(_Total)\% Total Run Time and \Hyper-V Hypervisor Virtual
Processor(_Total)\% Total Run Time counters. If the virtual processor usage
is high and the logical processor usage is low, add VMs or virtual
processors to existing VMs. If the logical processor usage is high and the
virtual processor usage is low, move VMs to a different server, reduce
virtual processors on local VMs, or add physical processors to the server.
For more information about Hyper-V performance monitoring, including a flowchart for processor
usage analysis, see Measuring Performance on Hyper-V. For information about calculating expected
processor requirements and adding, removing, or otherwise configuring virtual processors on a VM,
see A quick sizing guide for Microsoft Hyper-V R2 running on HP ProLiant servers.
System Center Virtual Machine Manager
The Microsoft System Center suite is a family of products that prove to be very helpful in managing
any Windows environment. The System Center Virtual Machine Manager 2008 R2 (SCVMM) is a
powerful application that makes monitoring, managing, and deploying VMs easier and faster. Much
of the virtual machine information available in Server Manager is also available in SCVMM in a more
centralized interface. In addition to VM summary information, such as processor, memory, and other
details, SCVMM shows networking and storage information. For many actions, such as migrating between hosts or taking snapshots, the job detail tab is a helpful feature because it shows each step of the job. Figure 15 shows a VM storage migration.
Note
If hosts are added to a cluster after they are imported into SCVMM, the
cluster nodes must be removed from the SCVMM interface and re-added for
SCVMM to recognize the cluster and update the Failover Cluster Manager
when performing operations such as quick storage migration.
Figure 15. SCVMM—VM job progress
From within SCVMM, VM migrations can be performed and settings can be changed. VMs can be
deployed, snapshots can be created and managed (in SCVMM snapshots are called checkpoints),
and physical hosts can be converted to VMs.
For administrators who are also working with VMware, SCVMM can manage VMware ESX servers
through VMware VirtualCenter, allowing a single management pane. SCVMM also has a feature for
converting VMs from VMware to Hyper-V VMs as shown in Figure 15.
SCVMM has a library for easy VM deployment (see Deployment with SCVMM) and it integrates with
other System Center applications. SCVMM also shows the underlying PowerShell scripts used to
perform operations, making it easy for administrators who are unfamiliar with PowerShell to learn the
commands and structure necessary to write their own scripts to automate the deployment of VMs.
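As an example of what those underlying commands look like, the VMM snap-in can be loaded and queried directly from a PowerShell prompt on the SCVMM server; the server name is a placeholder, and the property names shown are typical of SCVMM 2008 R2 but should be verified against the installed version.

# Load the VMM cmdlets and connect to the SCVMM server ("scvmm01" is a placeholder)
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "scvmm01"

# List managed VMs with their host and status
Get-VM | Select-Object Name, HostName, Status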
Quick Storage Migration
Another feature that is available with Hyper-V R2 and SCVMM 2008 R2 is Quick Storage Migration
(QSM), which allows the storage location for a VM’s VHDs to be moved to a different volume with
limited downtime of the VM. Because these volumes can reside in completely separate locations,
QSM can migrate multiple VHDs for a VM from local to shared storage, from a traditional volume to
a CSV, or from one SAN to an entirely different SAN, if desired. Quick Storage Migration is not
limited by storage protocol, meaning that it can move a VHD between Fibre Channel and iSCSI
SANs. This makes it easy to migrate existing VMs from an older storage array to the new EVA.
Note
Quick Storage Migration cannot be performed if the VM has a pass-through disk attached. Disconnect any pass-through disks before performing the migration. If necessary, copy the data on the pass-through disk to a VHD to include that data in the migration.
HP Command View EVA software
Managing an EVA is very simple with CV-EVA because it provides a convenient interface to access
the EVA’s many powerful features. While in the past, CV-EVA was always a server-based
management (SBM) tool, recent releases of the EVA allow array-based management (ABM), which
uses the EVA management module and requires no external server for CV-EVA. However, the two
previously mentioned utilities, EVAPerf and SSSU, are not available on the management module.
Using EVAPerf along with Windows Perfmon to monitor EVA performance and SSSU to script
configuration changes on the EVA can greatly enhance management of a Hyper-V environment.
Best Practice
To understand disk performance of the entire solution, run EVAPerf with the
vd (Virtual Disk) option and compare it to Perfmon counter sets such as
LogicalDisk, HP DSM High Performance Provider, Hyper-V Virtual Storage
Device, and Cluster Shared Volumes.
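One way to line these up, as a sketch, is to capture the host counters with typeperf while EVAPerf samples the array over the same window; the EVAPerf sampling flags shown are assumptions and should be checked against the installed EVAPerf documentation.

# Host-side counters to CSV for later correlation (typeperf ships with Windows)
typeperf "\LogicalDisk(*)\Avg. Disk sec/Read" "\Hyper-V Virtual Storage Device(*)\Read Bytes/sec" `
    -si 5 -sc 60 -f CSV -o C:\PerfLogs\host-disk.csv

# Array-side virtual disk statistics from EVAPerf (run where EVAPerf is installed).
# The -cont, -dur, and -csv flags are assumptions -- verify them against the EVAPerf documentation.
evaperf vd -cont 5 -dur 300 -csv > C:\PerfLogs\eva-vd.csv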
HP Systems Insight Manager
Many administrators use HP Systems Insight Manager (SIM) to manage their physical servers. This
utility has a Virtualization Manager Module that allows SIM to recognize VMs and even perform basic functions such as starting, stopping, deploying, and copying them, as shown in Figure 16.
Figure 16. HP SIM VM recognition
System Center Operations Manager (SCOM)
Another component of the Microsoft System Center suite is System Center Operations Manager
(SCOM). This tool is useful for monitoring the physical state of the Hyper-V host servers and VMs,
providing useful server information and configurable warnings and alerts when thresholds are
reached, such as when processor or memory usage is above a certain point. SCOM also has many
management packs that enable rich application monitoring. When integrated properly with SCVMM,
SCOM can also monitor the performance of the VMs, generating reports that can be viewed from
either SCOM or SCVMM (see Figure 17).
Note
If using SCOM 2007, the HP StorageWorks management pack can be
integrated for extra storage management functionality within SCOM. This
management pack is not yet available for SCOM 2007 R2.
Figure 17. Virtual Machine Utilization report from SCOM/SCVMM
When properly integrated, SCOM can pass Performance and Resource Optimization (PRO) tips to
SCVMM. These tips are based on configurable settings and can be automatically implemented to
increase the availability or performance of a Hyper-V environment. For example, a PRO tip might
warn when a clustered Hyper-V host server reaches 90% CPU usage and, if configured to act
automatically, SCVMM will move a VM from that host to another Hyper-V host in the cluster. Properly
configuring PRO tips can prevent degraded performance and even prevent server failures. PRO tips
can be enabled by viewing the properties for a cluster or host group as shown in Figure 18. Each VM
can be set individually to inherit the parent group’s PRO tip settings. For more information about PRO
tips, see TechNet Webcast: PRO Tips in System Center Virtual Machine Manager 2008 R2 (Level
300).
Best Practice
Integrate System Center Operations Manager with System Center Virtual
Machine Manager to generate Performance and Resource Optimization
(PRO) tips to increase availability and performance. If desired, allow
System Center applications to automatically perform the recommended PRO
tips to avoid system failures in extreme circumstances.
Warning
If not properly configured and tested, automatic operations performed without administrator supervision can cause degraded performance or other unintended results.
Thoroughly test all PRO tips before allowing them to be automatically
activated.
Figure 18. Cluster group PRO tip settings
The System Center suite can also be integrated with the HP Insight Control 6.0 suite for increased
functionality.
VM deployment options
One of the greatest benefits of using Hyper-V is the ability to deploy VMs easily, whether that means creating brand-new VMs, converting physical servers to virtual machines, or converting VMware VMs to Hyper-V VMs. Several deployment tools and methods are available.
In addition to deploying new VMs with a fresh OS image, several of the methods described in the
following sections can also create some form of clone of the server. When creating VM clones for
deployment, it is important to remove server names and other server-specific information before
creating the clone to avoid communication conflicts. This can be done by running the Sysprep
command with the /generalize and /shutdown options.
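As an example, a typical invocation from inside the source VM looks like the following; the /oobe switch is not mentioned above but is commonly added so that the deployed clone boots into Windows Welcome.

# Generalize the guest OS and shut it down so the VHD can be captured as an image or clone
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown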
Best Practice
To avoid server name conflicts when deploying virtual machines based on
existing operating system images or clones, run the Sysprep command
with the /generalize and /shutdown options prior to creating the
image or clone.
Note
When using SCVMM to create a virtual machine template, it is not
necessary to run Sysprep first because SCVMM runs Sysprep and
removes server names and other conflicting data as part of the template
creation process.
Windows Deployment Services
Many large environments already make use of the Windows Deployment Services (WDS) tool. This
tool allows deployment of new or captured server images to both physical servers and VMs.
Therefore, a server can be set up with all service packs, patches, and applications. Then the image
can be captured and redeployed to servers and VMs much more quickly than installing a new OS
and applying the desired changes manually. WDS can also deploy VHD files to physical servers,
allowing them to boot from a VHD. To use WDS to deploy VMs, a new VM with a blank disk must be
set up and then booted from the network using a legacy NIC, which is less efficient for the VM and
host than using a VM’s native NIC. If WDS is already set up in the environment, this method can
improve VM deployment.
EVA Business Copy snapclones
If a Business Copy license for CV-EVA is installed, it is possible to use snapclones to create what can
be considered a VM clone. Creating an EVA snapclone of a Vdisk duplicates the OS and all data on that Vdisk, and the VHD on the clone can then be attached to a VM. Note, however, that
even if the snapclone name is changed in CV-EVA, the duplicate LUN appears identical to the host,
including LUN capacity and formatting. If many duplicate LUNs are presented to the same host,
determining which LUN is which might be difficult. For this reason, using snapclones is not the
recommended method for creating a duplicate VM.
Deployment with SCVMM
Although some management components of SCVMM have already been discussed, SCVMM
deployment tools are of special interest. SCVMM allows OS images, hardware profiles, and software
profiles to be stored in its library for easy deployment. These profiles and images make VM cloning
and template deployment very efficient.
VM cloning with SCVMM
Creating a VM clone through the SCVMM wizard is efficient and easy. If moving or duplicating a VM
to an entirely different environment, creating a clone first ensures that if the export fails due to network
or other issues, the original is still available and unchanged. Also, VMs or clones can be moved to the
SCVMM library for later deployment. This is an ideal method for duplicating existing VMs.
VM template creation and deployment with SCVMM
With System Center Virtual Machine Manager, a deployment template can be made from a VM and
stored in the SCVMM library for repeated use (see Figure 19). After the template is created, the
hardware and OS profiles previously mentioned make deploying new VMs very easy. Additionally,
XML unattend files can be used to further automate configuration. Using SCVMM templates to deploy
new VMs is perhaps the most effective method of deployment because with profiles, unattend files,
and existing images, new VMs can be named, automatically added to a domain, and have
applications and updates preinstalled. Many other settings can also be configured, reducing an
administrator’s post-installation task list.
Figure 19. SCVMM Library with templates, profiles, and images for deployment
Creating a VM template in SCVMM does not require running Sysprep first as with other deployment
tools because Sysprep is part of the template creation process. Be aware, however, that creating a
template consumes the source VM, so it might be beneficial to first create a clone of the source VM,
and then use the clone to create a new template.
Best Practice
Use XML unattend files, hardware and operating system profiles, and
SCVMM templates to easily deploy highly configurable VMs. Because
creating a VM template consumes the source VM, use SCVMM to create a
VM clone of the desired VM, and then create the template from that VM
clone.
Physical-to-virtual (P2V) deployment through SCVMM
Server consolidation is one of the greatest reasons for implementing a virtualized environment.
However, creating new VMs and changing them to function as the physical servers did can be a slow
process. To simplify the process, SCVMM has a feature to convert a physical server to a VM. The
Convert Physical Server (P2V) Wizard scans the specified server and presents valuable information
about the server as shown in Figure 20.
Figure 20. SCVMM convert physical to virtual wizard—system information scan
When performing a P2V conversion, it is possible to select some or all (default) of the desired disks
(both local and external) to convert to VHDs for the new VM. However, all selected disks are
converted to VHDs on one (specified) volume. If any of the new VM's disks are intended to be pass-through disks, or if they should reside on separate volumes or disks, those disks must be configured manually after the conversion.
To avoid consuming excessive network bandwidth and time, do not convert every physical disk to a VHD. Where possible (for example, when the storage is already on a SAN), unpresent the disks from the physical host and re-present them to the new VM after it has been created. The root partition cannot be handled this way and must be converted by the P2V wizard, but handling the remaining disks this way can yield significant time and bandwidth savings.
Best Practice
To save time and consume less bandwidth when performing a physical-to-virtual conversion, convert only the root partition and disks that cannot be
moved to connect to the Hyper-V host. For all other (nonroot) disks, simply
unpresent them from the physical server, re-present them to the Hyper-V
host, and attach them to the new VM.
Although the default VHD type is dynamic, it is possible to choose a fixed VHD for a P2V conversion. A previously discussed performance best practice is to use fixed rather than dynamic VHDs. Converting to a fixed VHD, however, copies the full specified size of the source disk, including unused capacity, as shown in Figure 21. In environments with large disk drives or slow networks, this conversion might take a significant amount of time and bandwidth. Converting to a dynamically expanding VHD copies only the data actually in use, resulting in faster conversions, and is therefore recommended when source disks are large or network bandwidth is limited. If fixed VHDs are desired and performance is a concern, first convert large disks to dynamic VHDs, then convert the dynamic VHDs to fixed within Hyper-V and expand the VHDs if necessary.
Best Practice
To avoid copying large quantities of unused disk capacity across the
network, for all required P2V disk conversions, convert to dynamically
expanding VHDs instead of fixed VHDs. If necessary, after the conversion,
convert the dynamically expanding VHDs to fixed VHDs and expand them
for improved performance.
Figure 21. SCVMM convert physical to virtual wizard—volume configuration
Summary
This white paper details best practices and storage considerations for a Hyper-V environment with the
EVA4400. The primary take-away from this paper is that understanding Hyper-V and the underlying
storage can greatly improve performance, manageability, and overall satisfaction with the virtualized
environment.
Appendix A—Disk expansion
Any time configuration changes are made on a file system with critical application or operating
system data, it is best to make a backup of those volumes before performing the changes.
Warning
Attempts to expand pass-through disks with the OS root partition or on an
IDE controller frequently result in data corruption. Do not attempt to expand
bootable pass-through disks or pass-through disks that use IDE controllers
without first testing the expansion thoroughly and having a current backup
of the disk.
VHD expansion
Note
To expand a pass-through disk, perform only steps 1 and 8.
1. Expand the Vdisk (LUN) on the EVA with CV-EVA by changing the Requested value for a Vdisk,
and clicking Save changes as shown in Figure 22. Wait for the Allocated capacity to match the
new Requested capacity.
Figure 22. Expanding an EVA Vdisk
Note
EVA LUNs can also be expanded from within Windows by installing the HP
StorageWorks VDS & VSS Hardware Providers, turning on the Virtual Disk
service, and enabling the Storage Manager for SANs feature in Windows.
For more information, see Storage Manager for SANs or download the
Storage Manager for SANs Step-by-Step Guide.
2. Using the Disk Management tool on the Hyper-V host, rescan the disks as shown in Figure 23 to
reveal the extra capacity on the Windows volume, and extend the volume to the desired size as
shown in Figure 24.
Figure 23. Rescanning the disks on the Hyper-V host
Figure 24. Extending the Windows volume
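If a GUI session is not convenient, the same rescan and extension can be scripted with DISKPART from the Hyper-V host; the volume number and script path below are placeholders for the volume that holds the VHDs.

# Write the DISKPART commands to a script file and run it on the Hyper-V host.
# Volume 3 is a placeholder; run 'list volume' interactively first to identify
# the volume that holds the VHDs.
$diskpartScript = @"
rescan
select volume 3
extend
"@
Set-Content -Path C:\Temp\extend-volume.txt -Value $diskpartScript
diskpart /s C:\Temp\extend-volume.txt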
3. Shut down the VM that owns the VHD file on the newly expanded volume to allow configuration
changes to the VHD file.
Note
If the VHD to be expanded is not the root (OS) VHD, the VHD can simply
be removed from the VM instead of shutting down the entire VM. After the
VHD is successfully expanded, it can be reattached to the VM. If the VM is
left running during this VHD expansion, be sure to stop I/O traffic to that
VHD to prevent application errors when the disk is removed.
4. With the VM shut down (or the VHD disconnected from the VM), open the VM settings dialog box
as shown in Figure 25.
Figure 25. Editing a VHD
5. In the wizard, locate the desired VHD, and select the option to Expand the VHD.
6. Set the new size as shown in Figure 26. Notice that the new VHD size chosen leaves several GB
still available on the expanded volume as previously suggested.
Figure 26. Setting the new VHD capacity
7. After the VHD expansion is complete, turn on the VM (or reattach the VHD to the running VM).
8. Rescan and expand the volume with the guest OS (VM) disk management tool as was done at the
host OS level, as shown in Figure 27.
Figure 27. Extending the volume capacity on the VM
For more information
HP links
EVA Arrays, http://www.hp.com/go/eva
HP ActiveAnswers Sizers,
http://h71019.www7.hp.com/activeanswers/Secure/71110-0-0-0-121.html
HP BladeSystem, http://www.hp.com/go/bladesystem
HP ProLiant Servers, http://www.hp.com/go/proliant
HP Sizer for Microsoft Hyper-V 2008 R2,
http://h71019.www7.hp.com/ActiveAnswers/us/en/sizers/microsoft-hyper-v2008.html
HP StorageWorks 4400/6400/8400 Enterprise Virtual Array configuration – Best practices white
paper,
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA2-0914ENW.pdf
HP StorageWorks Command View EVA Software – Overview & Features,
http://h18006.www1.hp.com/products/storage/software/cmdvieweva/
HP ActiveAnswers for Microsoft Hyper-V Server Virtualization,
http://h71019.www7.hp.com/ActiveAnswers/cache/604726-0-0-225-121.html
Microsoft links
Hyper-V, http://technet.microsoft.com/en-us/library/cc753637%28WS.10%29.aspx
Hyper-V Architecture, http://msdn.microsoft.com/en-us/library/cc768520%28BTS.10%29.aspx
Hyper-V Technical Information and Resources, http://technet.microsoft.com/en-us/dd565807.aspx
Performance Tuning Guidelines for Windows Server 2008 R2,
http://www.microsoft.com/whdc/system/sysperf/Perf_tun_srv-R2.mspx
Windows Server 2008 and Windows Server 2008 R2,
http://technet.microsoft.com/en-us/library/dd349801(WS.10).aspx
Windows Server 2008 R2 & Microsoft Hyper-V Server 2008 R2 – Hyper-V Live Migration Overview
& Architecture,
http://www.microsoft.com/downloads/details.aspx?FamilyID=FDD083C6-3FC7-470B-8569-7E6A19FB0FDF&displaylang=en
Feedback
To help us improve our documents, please provide feedback at
http://h20219.www2.hp.com/ActiveAnswers/us/en/solutions/technical_tools_feedback.html.
Technology for better business outcomes
© Copyright 2010 Hewlett-Packard Development Company, L.P. The information
contained herein is subject to change without notice. The only warranties for HP
products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial
errors or omissions contained herein.
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.
AMD is a trademark of Advanced Micro Devices, Inc. Intel and Xeon are
trademarks of Intel Corporation in the U.S. and other countries.
4AA0-1907ENW, February 2010