BizTalk Server 2009 Hyper-V Guide

Contents
BizTalk Server 2009 Hyper-V Guide ............................................................................................... 7
Introduction ................................................................................................................................... 7
Who Should Read This? ........................................................................................................... 8
Goals of this Guide ................................................................................................................... 9
What’s in this Guide? ................................................................................................................ 9
Acknowledgements ................................................................................................................. 10
Deploying BizTalk Server on Hyper-V ........................................................................................... 11
In This Section............................................................................................................................ 11
Potential Benefits of Deploying a BizTalk Server Solution to a Hyper-V Virtualized Environment 11
Benefits of Running a BizTalk Server Solution on a Hyper-V Virtualized Environment ............. 11
Installing and Configuring a Hyper-V Virtual Machine for use with BizTalk Server ....................... 12
Installing and Configuring Hyper-V ............................................................................................ 12
Hyper-V Platform Prerequisites .............................................................................................. 12
Determining Hardware Requirements ................................................................................. 12
Storage Configuration Options ......................................................................................... 13
Networking ....................................................................................................................... 16
CPU .................................................................................................................................. 17
Memory ............................................................................................................................ 17
Choosing Root Operating System Version ......................................................................... 18
Creating Your Virtual Machines ........................................................................................... 18
Installing the Base Operating System .............................................................................. 19
Installing and Configuring BizTalk Server .................................................................................. 19
Checklist: Best Practices for Installing and Configuring BizTalk Server on Hyper-V .................... 20
Before Installing Hyper-V… ........................................................................................................ 20
When Creating Hyper-V Virtual Machines… .............................................................................. 21
When Installing and Configuring BizTalk Server… .................................................................... 24
Evaluating BizTalk Server Performance on Hyper-V .................................................................... 26
In This Section............................................................................................................................ 26
Checklist: Measuring Performance on Hyper-V ............................................................................ 26
Measuring Disk I/O Performance ............................................................................................... 26
Measuring Memory Performance ............................................................................................... 28
Measuring Network Performance............................................................................................... 29
Measuring Processor Performance............................................................................................ 32
Checklist: Optimizing Performance on Hyper-V ............................................................................ 37
Allocate 110%–125% of CPU and Disk Resources to the Hyper-V Virtual Machines ............... 37
Optimize Hyper-V Performance ................................................................................................. 38
Optimize Performance of Disk, Memory, Network, and Processor in a Hyper-V Environment . 39
Optimize Processor Performance ........................................................................................... 39
Optimize Disk Performance .................................................................................................... 41
Optimize Memory Performance .............................................................................................. 43
Optimize Network Performance .............................................................................................. 45
Optimize SQL Server Performance............................................................................................ 47
Optimize BizTalk Server Solution ............................................................................................... 48
System Resource Costs on Hyper-V ............................................................................................. 48
System Resource Costs Associated with Running a Guest Operating System on Hyper-V ..... 48
CPU Overhead........................................................................................................................ 48
Memory Overhead .................................................................................................................. 48
Network Overhead .................................................................................................................. 48
Disk Overhead ........................................................................................................................ 49
Disk Performance When Running a BizTalk Server Solution on Hyper-V .......................... 49
Measuring PassThrough Disk Performance ....................................................................... 49
Configuration Used for Testing ............................................................................................ 50
IOMeter Configuration ......................................................................................................... 51
Test Description................................................................................................................... 52
Results .................................................................................................................................... 52
Testing BizTalk Server Virtualization Performance ....................................................................... 54
In This Section............................................................................................................................ 54
Test Scenario Overview ................................................................................................................ 55
Test Application .......................................................................................................................... 55
Testing Methodology .................................................................................................................. 60
Key Performance Indicators Measured During Testing ............................................................. 64
Physical Infrastructure Specifics ................................................................................................ 64
Virtualization Specifics ............................................................................................................... 66
See Also ..................................................................................................................................... 67
Test Scenario Server Architecture................................................................................................. 67
Overview of Message Flow During Load Testing ...................................................................... 67
Baseline Server Architecture ...................................................................................................... 69
Virtual BizTalk Server / Physical SQL Server ............................................................................ 70
Virtual BizTalk Server / Virtual SQL Server ............................................................................... 72
Consolidated Environment ......................................................................................................... 73
See Also ..................................................................................................................................... 74
Test Results: BizTalk Server Key Performance Indicators ........................................................... 74
Summary of BizTalk Server Key Performance Indicators .......................................................... 74
Performance Comparison Results Summary............................................................................. 76
Throughput Comparison Sample Results ............................................................................... 76
Latency Comparison Sample Results..................................................................................... 76
Test Results: SQL Server Key Performance Indicators ................................................................ 76
Summary of SQL Server Key Performance Indicators .............................................................. 76
Test Results: Networking Key Performance Indicators ................................................................. 79
Summary of Network Key Performance Indicators .................................................................... 79
Test Results: Memory Key Performance Indicators ...................................................................... 80
Summary of Memory Key Performance Indicators .................................................................... 80
Summary of Test Results .............................................................................................................. 82
Summary of Test Results ........................................................................................................... 82
Throughput Comparison Sample Results ............................................................................... 82
Latency Comparison Sample Results..................................................................................... 82
SQL Server Processor Utilization and Batch Requests per Second Sample Results ............ 83
BizTalk Server and SQL Server Network Throughput Sample Results ................................. 83
BizTalk Server and SQL Server Available Memory Sample Results ..................................... 83
Appendices .................................................................................................................................... 83
In This Section............................................................................................................................ 83
Appendix A: Optimizations Applied to Computers in Test Environment ....................................... 84
In This Section............................................................................................................................ 84
Operating System Optimizations ................................................................................................... 84
General guidelines for improving operating system performance ............................................. 84
Install the latest BIOS, storage area network (SAN) drivers, network adapter firmware and network adapter drivers .......................................................................................... 84
Assign the MSDTC log file directory to a separate dedicated drive ....................... 85
Configure antivirus software to avoid real-time scanning of BizTalk Server executables and file drops ........................................................................................................................ 85
Disable intrusion detection network scanning between computers in the BizTalk Server environment ................................................................................................................... 85
Defragment all disks in the BizTalk Server environment on a regular basis .......... 85
If antivirus software is installed on the SQL Server computer(s), disable real-time scanning of data and transaction files ............................................................................................ 86
Configure MSDTC for BizTalk Server ..................................................................................... 86
Configure firewall(s) for BizTalk Server .................................................................................. 86
Use the NTFS file system on all volumes ............................................................................... 86
Do not use NTFS file compression ......................................................................................... 87
Review disk controller stripe size and volume allocation units ............................................... 87
Monitor drive space utilization ................................................................................................ 88
Implement a strategy to avoid disk fragmentation .................................................................. 88
Optimize Windows Server performance for background services .......................................... 88
Manually load Microsoft Certificate Revocation lists .............................................................. 89
Synchronize time on all servers .............................................................................................. 89
Configure the Windows PAGEFILE for optimal performance ................................................. 89
Network Optimizations ................................................................................................................... 91
Improving Network Performance of BizTalk Server on Hyper-V ................................................ 91
Configure Hyper-V Virtual Machines that are Running on the same Hyper-V host computer to use a Private Virtual Network ....................................................................................... 91
Disable TCP Offloading for the Virtual Machine Network Cards ............................................ 93
General guidelines for improving network performance ............................................................ 93
Add additional network cards to computers in the BizTalk Server environment .................... 94
Implement network segmentation ........................................................................................... 94
Where possible, replace hubs with switches .......................................................................... 94
Remove unnecessary network protocols ................................................................................ 94
Network adapter drivers on all computers in the BizTalk Server environment should be tuned for performance .......................................................................................................... 95
SQL Server Optimizations ............................................................................................................. 97
In This Section............................................................................................................................ 97
Pre-Configuration Database Optimizations ................................................................................... 97
Set NTFS File Allocation Unit ..................................................................................................... 97
Database planning considerations ............................................................................................. 98
Install the latest service pack and cumulative updates for SQL Server ..................................... 98
Install SQL Service Packs on both BizTalk Server and SQL Server ......................................... 98
Consider implementing the SQL Server 2008 Data Collector and Management Data Warehouse ....................................................................................................................... 98
Grant the account which is used for SQL Server the Windows Lock Pages In Memory privilege ........................................................................................................................... 98
Grant the SE_MANAGE_VOLUME_NAME right to the SQL Server Service account .............. 99
Set Min and Max Server Memory............................................................................... 99
Split the tempdb database into multiple data files of equal size on each SQL Server instance used by BizTalk Server ................................................................................................ 100
Enable Trace Flag T1118 as a startup parameter for all instances of SQL Server ................. 100
Do not change default SQL Server settings for max degree of parallelism, SQL Server statistics, or database index rebuilds and defragmentation ......................................................... 100
Post-Configuration Database Optimizations ............................................................................... 100
Pre-allocate space for BizTalk Server databases and define auto-growth settings for BizTalk Server databases to a fixed value instead of a percentage value ................................ 101
Move the Backup BizTalk Server output directory to a dedicated LUN ................................... 101
Verify the BizTalk Server SQL Agent Jobs are running ........................................................... 101
Configure Purging and Archiving of Tracking Data .................................................................. 102
Monitor and reduce DTC log file disk I/O contention ............................................................... 103
Separate the MessageBox and Tracking Databases ............................................................... 103
Optimize filegroups for the BizTalk Server databases ............................................................. 104
Optimizing Filegroups for the Databases .................................................................................... 104
Overview .................................................................................................................................. 104
Databases created with a default BizTalk Server configuration .............................................. 106
Separation of data files and log files ........................................................................................ 107
The 80/20 rule of distributing BizTalk Server databases ......................................................... 108
Manually adding files to the MessageBox database, step-by-step .......................................... 108
Manually adding files to the MessageBox database on SQL Server 2005 or SQL Server 2008 ......................................................................................................................................... 108
Sample SQL script for adding filegroups and files to the BizTalk MessageBox database ...... 113
BizTalk Server Optimizations ...................................................................................................... 115
In This Section.......................................................................................................................... 115
General BizTalk Server Optimizations ........................................................................................ 115
Create multiple BizTalk Server hosts and separate host instances by functionality ................ 115
Configure a dedicated tracking host......................................................................................... 116
Manage ASP.NET thread usage or concurrently executing requests for Web applications that host orchestrations published as a Web or WCF Service ............................................ 117
Manage ASP.NET thread usage for Web applications that host orchestrations on IIS 6.0 and on IIS 7.0 running in Classic mode............................................................................... 118
Manage the number of concurrently executing requests for Web applications that host orchestrations on IIS 7.0 running in Integrated mode ................................................... 118
Define CLR hosting thread values for BizTalk host instances ................................. 120
Disable tracking for orchestrations, send ports, receive ports, and pipelines when tracking is not required ......................................................................................................................... 122
Decrease the purging period for the DTA Purge and Archive job from 7 days to 2 days in high throughput scenarios .................................................................................................... 122
Install the latest service packs ................................................................................................. 122
Do not cluster BizTalk hosts unless absolutely necessary ...................................................... 123
Performance optimizations in the BizTalk Server documentation ........................................... 123
Low-Latency Scenario Optimizations .......................................................................................... 123
Increase the BizTalk Server host internal message queue size .............................................. 123
Reduce the MaxReceiveInterval value in the adm_ServiceClass table of the BizTalk Server management database ................................................................................................. 124
Appendix B: Hyper-V Architecture and Feature Overview .......................................................... 125
Hyper-V Architecture ................................................................................................................ 125
Advantages of Hyper-V ............................................................................................................ 128
Disadvantages of Hyper-V ....................................................................................................... 129
Differences between Hyper-V and Virtual Server 2005 ........................................................... 130
Appendix C: BizTalk Server and SQL Server Hyper-V Supportability ........................................ 131
Appendix D: Tools for Measuring Performance .......................................................................... 131
Performance Analysis of Logs (PAL) tool ................................................................................ 131
Performance Monitor ................................................................................................................ 131
Log Parser ................................................................................................................................ 132
Relog ........................................................................................................................................ 132
LoadGen ................................................................................................................................... 132
Visual Studio Team System 2008 Load Testing ...................................................................... 132
BizUnit ...................................................................................................................................... 132
IOMeter .................................................................................................................................... 132
BizTalk Server Orchestration Profiler ....................................................................................... 133
Pathping ................................................................................................................................... 133
SQL Server Tools for Performance Monitoring and Tuning ..................................................... 133
SQL Profiler .......................................................................................................................... 133
SQL Trace............................................................................................................................. 134
SQL Activity Monitor ............................................................................................................. 134
SQL Server 2008 Data Collection ......................................................................................... 134
SQL Server 2005 Performance Dashboard Reports ............................................................ 135
SQLIO ................................................................................................................................... 135
Glossary....................................................................................................................................... 135
Glossary ................................................................................................................................... 135
BizTalk Server 2009 Hyper-V Guide
The purpose of this guide is to provide practical guidance for using Microsoft BizTalk Server 2009
with Microsoft Windows Server 2008 Hyper-V. The emphasis is on BizTalk Server, but the
performance evaluation methods and performance testing scenarios are useful for analyzing the
performance of virtualized server applications in general. This guidance will be of interest to both
the IT Pro and Developer communities.
To download a copy of this guide, go to http://go.microsoft.com/fwlink/?LinkId=149267.
Introduction
Server virtualization offers companies the opportunity to run multiple operating systems on a
single physical machine. This enables the consolidation of underutilized servers onto a smaller
number of fully utilized machines. By implementing virtualization, companies can minimize
operational and capital expenditure costs associated with deploying and operating the servers
required for enterprise applications.
The potential cost savings have prompted IT departments to evaluate new and existing
applications to identify candidates suitable for server virtualization. Most such evaluations seek to
discover the total cost of virtualization. The total cost of virtualization is the sum of monetary costs
for hardware and IT operations, and the performance cost of virtualization as compared to the
performance attainable in a physical environment. This guide focuses exclusively on the
performance aspect of virtualization.
Windows Server 2008 Hyper-V creates new opportunities for server virtualization. Compared to
its worthy predecessor, Microsoft Virtual Server 2005 R2, Hyper-V demonstrates improved virtual
machine performance and tight integration with the host operating system. Hyper-V makes more
efficient use of physical system hardware and host operating system resources, and
consequently reduces the overhead associated with virtualization. In other words, Hyper-V
imposes a significantly lower performance cost of virtualization than Virtual Server. Lower
performance cost of virtualization allows us to consider server virtualization of applications whose
performance requirements are not easily (if at all) obtainable using virtual machines running on
Virtual Server.
A BizTalk Server deployment typically depends on a number of additional components, including
SQL Server, Windows Server, and Internet Information Services (IIS). Server virtualization enables
BizTalk customers to minimize the hardware footprint of a BizTalk deployment by consolidating
underutilized resources in a secure manner. To verify the potential of running BizTalk Server on
Hyper-V, we compared the performance of BizTalk Server in a number of scenarios with different
components virtualized. The results suggest that BizTalk Server is a strong candidate for
virtualization – our analysis indicates there is approximately 5% to 13% overhead for virtualizing
the BizTalk Server tier. Because the BizTalk Server tier is stateless, additional processing power
can be obtained by adding BizTalk servers. Hyper-V provides support for dynamic provisioning
through System Center Virtual Machine Manager (VMM), which makes provisioning on demand a
realistic scenario.
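The overhead figure above is derived by comparing throughput in the physical baseline environment against throughput in the virtualized environment. A minimal sketch of that calculation follows; the throughput numbers are hypothetical and for illustration only, not taken from the guide's test results.

```python
def virtualization_overhead(physical_throughput: float, virtual_throughput: float) -> float:
    """Performance cost of virtualization, expressed as a percentage
    of the physical baseline throughput (e.g., messages/sec)."""
    return (physical_throughput - virtual_throughput) / physical_throughput * 100.0

# Hypothetical figures: 100 msgs/sec on physical hardware, 90 msgs/sec virtualized.
physical = 100.0
virtual = 90.0
print(f"Overhead: {virtualization_overhead(physical, virtual):.1f}%")
```

The same ratio can be applied to any key performance indicator (latency, batch requests per second, and so on) to express the cost of virtualization for that metric.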
Windows Server 2008 provides the Hyper-V technology to accommodate server consolidation
through virtualization of multiple operating system instances onto a single physical server.
Hyper-V is provided as a core part of Windows Server 2008 or as a stand-alone product to make it
as easy as possible for customers to embrace virtualization in their organization. There are several
key scenarios for implementing Hyper-V:
• Server Consolidation – Minimize the server footprint and the operational and capital
expenditure (TCO) associated with running applications by consolidating multiple physical
servers onto one box.
• Testing and Development – Using virtual machines, developers and architects can quickly
provision new machines to try out new technology and scenarios in a safe environment that
accurately reflects the characteristics of a physical environment. Virtualization enables new
machines to be provisioned on a wide range of operating systems without requiring new
hardware. This makes it a great platform for testing and development environments.
• Business Continuity and Disaster Recovery – Hyper-V includes powerful business
continuity and disaster recovery features, such as live backup and quick migration, which
enable businesses to meet their service level agreements.
Note
• For information about how to back up Hyper-V virtual machines using Windows Server
Backup, see Microsoft Knowledge Base article 958662, “How to back up Hyper-V virtual
machines from the parent partition on a Windows Server 2008-based computer by using
Windows Server Backup” at http://go.microsoft.com/fwlink/?LinkId=131207.
• For information about how to use the Hyper-V Live Migration feature available in
Windows Server 2008 R2, see “Hyper-V: Step-by-Step Guide to Using Live Migration in
Windows Server 2008 R2” at http://go.microsoft.com/fwlink/?LinkID=139667. Note that as
of the publication of this guide, Windows Server 2008 R2 is not yet a released product
but is expected to be available in 2009.
• Dynamic Data Center – By combining Hyper-V with the Microsoft System Center suite of
tools, organizations can automate virtual machine configuration and monitoring. For more
information, see “System Center Virtual Machine Manager” at
http://go.microsoft.com/fwlink/?LinkID=111303.
The information in this guide directly relates to the Server Consolidation and Testing and
Development scenarios for Hyper-V. The other two scenarios were out of scope for this guide.
For more information about Hyper-V, see the topic “Virtualization and Consolidation with Hyper-V”
at http://go.microsoft.com/fwlink/?LinkID=121187 and the topics in the Appendices section of this
guide.
Who Should Read This?
All IT Professionals who work with BizTalk Server
IT Professionals who deploy, optimize and maintain an application environment
IT Professionals who work with development teams to evaluate and optimize system
architectures
Developers who create and maintain BizTalk Server applications
Developers interested in performance optimization and identifying performance bottlenecks
Goals of this Guide
The primary goal of this guide is to provide guidance about how to determine if BizTalk Server
2009 running on Hyper-V is likely to meet performance expectations. This guidance will also be of
value as an aid to optimization of a deployed BizTalk Server application.
This project was conducted with the following goals:

Provide specific guidance for anyone who is evaluating, designing, or implementing a
virtualized BizTalk Server environment.

Provide an introduction to the performance monitor counters and tools used to measure the
performance capabilities of a virtualized server platform.

Provide guidelines for determining the cost of virtualization as a function of the performance
difference between physical and virtualized server environments.

Develop best practices for use when planning or optimizing a virtualized BizTalk Server
environment.

Provide architectural guidance to help you determine how to deploy BizTalk Server in a
virtualized environment.

Identify and document performance bottlenecks in a virtualized environment.
What’s in this Guide?
Guidance for implementing a BizTalk Server solution on a Hyper-V virtualized environment. This
guide includes:

Deploying BizTalk Server on Hyper-V: Deploying BizTalk Server on Hyper-V describes the
steps that were followed to set up the lab environment used to compare the performance of a
BizTalk Server solution running on a Hyper-V virtual machine to the same BizTalk Server
solution running on physical hardware.

Evaluating BizTalk Server Performance on Hyper-V: Evaluating BizTalk Server
Performance on Hyper-V details important considerations when measuring performance of a
BizTalk Server solution running on a Hyper-V virtualized environment.

Testing BizTalk Server Performance on Hyper-V: Testing BizTalk Server Virtualization
Performance provides detailed results of four distinct testing scenarios that compare the
performance of a BizTalk Server solution running on a Hyper-V virtual machine to the same
BizTalk Server solution running on physical hardware.

Appendices: The topics in Appendices provide important reference material for this guide
including:


Appendix A: Optimizations Applied to Computers in Test Environment – Provides detailed
information about the performance optimizations that were applied to the computers in
the test environment.

Appendix B: Hyper-V Architecture and Feature Overview - Provides an overview of
Hyper-V architecture, describes advantages and disadvantages of Hyper-V, and
describes differences between Hyper-V and Virtual Server 2005.

Appendix C: BizTalk Server and SQL Server Hyper-V Supportability – Describes support
policies for running BizTalk Server and SQL Server on a Hyper-V virtual machine.

Appendix D: Tools for Measuring Performance - Describes several tools that can be used
to monitor and evaluate the performance of a BizTalk Server environment.
Glossary: The Glossary defines key terms used throughout this guide.
Acknowledgements
The BizTalk Server User Education team gratefully acknowledges the outstanding contributions of
the following individuals for providing both technical feedback and content for this guide:
Authors

Ewan Fairweather (Microsoft)

Paolo Salvatori (Microsoft)
Contributors

Ben Cooper (Microsoft)

Valery Mizonov (Microsoft)

Tim Wieman (Microsoft)
Reviewers

Petr Kratochvil (Microsoft)

Lindsey Allen (Microsoft)

Tony Voellm (Microsoft)

Todd Uhl (Microsoft)

Guy Lau (Microsoft)

Quoc Bui (Microsoft)

Saravana Kumar

Richard Seroter (Amgen)

Jim Allen (Research Machines)

Robert Hogg (Black Marble)
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
Deploying BizTalk Server on Hyper-V
This section provides recommendations and best practices for installing, configuring, and
deploying a BizTalk Server 2009 solution in a Hyper-V virtual environment. This section describes
advantages of deploying your BizTalk Server 2009 solution to a Hyper-V virtualized environment,
recommendations for setting up the Hyper-V virtual machines and recommendations for installing
and configuring BizTalk Server 2009 on the Hyper-V virtual machines.
In This Section

Potential Benefits of Deploying a BizTalk Server Solution to a Hyper-V Virtualized
Environment

Installing and Configuring a Hyper-V Virtual Machine for use with BizTalk Server

Checklist: Best Practices for Installing and Configuring BizTalk Server on Hyper-V
Potential Benefits of Deploying a BizTalk
Server Solution to a Hyper-V Virtualized
Environment
This topic describes some of the benefits of deploying your BizTalk Server solution to a Hyper-V
virtualized environment.
Benefits of Running a BizTalk Server Solution on
a Hyper-V Virtualized Environment
Deploying your BizTalk Server solution to run on a Hyper-V virtualized environment provides the
following flexibility and functionality:

Ability to define multiple distinct logical security boundaries on a single physical
computer - Hyper-V accommodates the creation of distinct logical security boundaries or
partitions within a single physical hardware resource. A partition is a single logical unit of
isolation, supported by the hypervisor, in which operating systems execute. For example, you
could create multiple BizTalk Server groups to run on a single Hyper-V host computer
whereas you would not be able to do this when installing BizTalk Server on the host
operating system of a single host computer.

Ease of deployment and management – Consolidation of BizTalk Server computers into
fewer physical servers simplifies deployment. Furthermore, a comprehensive Hyper-V
management solution is available with System Center Virtual Machine Manager. For more
information about System Center Virtual Machine Manager, see
http://go.microsoft.com/fwlink/?LinkID=111303.

Fault tolerance support through Hyper-V clustering - Because Hyper-V is a cluster-aware
application, Windows Server 2008 provides native host clustering support for virtual
machines created in a Hyper-V virtualized environment.

Ease of scale-out - Additional processing power, network bandwidth, and storage capacity
can be accommodated for your BizTalk Server solution quickly and easily by apportioning
additional available resources from the host computer to the guest virtual machine(s). This
may require that the host computer is upgraded or that the guest virtual machines are moved
to a more capable host computer.

Consolidation of hardware resources - Multiple physical servers can be easily
consolidated into comparatively fewer servers by implementing virtualization with Hyper-V.
Consolidation accommodates full use of deployed hardware resources.
Installing and Configuring a Hyper-V Virtual
Machine for use with BizTalk Server
This topic provides recommendations for installing and configuring BizTalk Server in a Hyper-V
environment, including recommendations for installation and configuration of the Hyper-V virtual
machine and recommendations for installing BizTalk Server on a Hyper-V virtual machine.
Installing and Configuring Hyper-V
Before installing Hyper-V, follow the instructions available on the “How to Install Windows
Server 2008 Hyper-V” page at http://go.microsoft.com/fwlink/?LinkId=119204.
“The Performance Tuning Guidelines for Windows Server 2008” document provides details on
tuning Windows Server 2008 and includes a section specifically focused on Hyper-V. The
document is available at http://go.microsoft.com/fwlink/?LinkId=121171.
Hyper-V Platform Prerequisites
Hyper-V is a server role available for 64-bit, x64-based editions of Windows Server 2008.
Additionally, the physical hardware must support hardware-assisted virtualization. This means the
processor must be compatible with Intel VT or AMD Virtualization (AMD-V) technology, the
system BIOS must support Data Execution Prevention (DEP), and DEP must be enabled.
Note
After enabling these options in the system BIOS, turn off the computer completely and
then restart the computer to ensure that these settings are applied.
Determining Hardware Requirements
Due to the demands of server consolidation, Hyper-V servers tend to consume more CPU and
memory, and require greater disk I/O bandwidth than physical servers with comparable
computing loads. In order to deploy an environment that will meet expectations, consider the
factors below to determine the exact hardware requirements of your server.
Storage Configuration Options
The storage hardware should provide sufficient I/O bandwidth and storage capacity to meet the
current and future needs of the virtual machines that you plan to host. When choosing a storage
configuration for Hyper-V, there is a trade-off between capacity usage and performance.
When planning storage configuration, consider the requirements of the environment you are
provisioning. The requirements for production, pre-production, and development environments
may differ considerably.
If you are deploying a production BizTalk Server 2009 environment on Hyper-V, performance will
be a key requirement. To avoid disk I/O contention on busy production systems, install integration
services on both the host and guest operating system and configure disks for data volumes with
the synthetic SCSI controller. For highly intensive storage I/O workloads that span multiple data
drives, each VHD should be attached to a separate synthetic SCSI controller for better overall
performance. In addition, each VHD should be stored on separate physical disks. For more
information about configuring disks for data volumes with the synthetic SCSI controller see the
“Optimize Disk Performance” section of the topic Checklist: Optimizing Performance on Hyper-V.
Typically, development environments do not have stringent performance requirements since
maximizing resource utilization tends to be the main priority. For development environments the
performance provided when hosting multiple VHD files on a single physical drive is generally
acceptable.
Hyper-V supports several different types of storage disk options. Each storage option can be
attached to the virtual machine via an IDE or SCSI controller. A potential benefit of using the
SCSI controller rather than the IDE controller is that the SCSI controller works only when the
correct versions of the operating system integration components are installed on the guest virtual
machine, which makes it a straightforward way to verify that those components are installed on
the guest operating system.
Note
Unlike previous versions of Microsoft virtualization technology, there is no performance
difference between using a virtual IDE controller or a virtual SCSI controller when
accessing virtual hard disks.
For intensive read-write activities, such as hosting SQL Server databases, the passthrough disk
option provides incremental performance advantages over fixed virtual hard drive (VHD) disks.
The passthrough option permits the virtual machine to have direct access to the physical disk and
bypasses the NTFS file system in the root partition but does not support certain functionality of
virtual disks, such as virtual machine snapshots and clustering support. Therefore, use of the
passthrough disk feature is not recommended in a BizTalk or SQL Server environment because
the marginal performance benefits are more than offset by the missing functionality.
The following table summarizes the advantages and disadvantages of the available Hyper-V
storage options:

Fixed size disks
Pros: Performs better than a dynamic VHD because the VHD file is initialized at its maximum
possible size when it is created on the physical hard drive. This makes fragmentation less likely
and, therefore, mitigates scenarios where a single I/O is split into multiple I/Os. This type has the
lowest CPU overhead of the VHD types because reads and writes do not need to look up the
mapping of the block.
Cons: Requires allocation of the full amount of disk space up-front.
Considerations for BizTalk Server: Use for operating system volumes on BizTalk Server and SQL
Server.
Important
The startup disk of a Hyper-V guest partition must be attached to an IDE controller.

Dynamically expanding disks
Pros: The size of the VHD file increases, up to the size specified when creating the disk, as more
data is stored on the virtual machine itself. This accommodates the most efficient use of available
storage.
Cons: Does not perform as well as a fixed size VHD. This is because the blocks in the disk start
as zeroed blocks that are not backed by any actual space in the VHD file. Reads from such
blocks return a block of zeros. When a block is first written to, the virtualization stack must
allocate space within the VHD file for the block and then update the corresponding metadata. In
addition, every time an existing block is referenced, the block mapping must be looked up in the
metadata. This increases the number of read and write activities, which in turn causes increased
CPU utilization. The dynamic growth also requires that the server administrator monitor disk
capacity to ensure that there is sufficient disk storage as the storage requirements increase.
Considerations for BizTalk Server: If performance is not a concern, for instance in a development
environment, this may be a suitable option for the operating system hard drives.

Differencing disks
Pros: This is a parent-child configuration where the differencing disk stores all changes relative to
a base VHD and the base VHD remains static. Therefore only the blocks which differ from the
parent need to be stored in the child differencing VHD.
Cons: Performance can degrade because reads and writes need to access the fixed or dynamic
parent VHD as well as the differencing disk. This increases CPU utilization and disk I/O
overhead.
Considerations for BizTalk Server: A large amount of machine-specific configuration is required
for BizTalk Server 2009 installations, and child VHD files may grow substantially, which would
minimize the benefits of using this disk configuration. Reading from multiple VHDs in this
scenario incurs additional CPU and disk I/O overhead.

Passthrough disks
Pros: These are physical disks which are set to offline in the root partition, enabling Hyper-V to
have exclusive read-write access to the physical disk.
Cons: Requires a fully dedicated disk or LUN allocated to a virtual machine. A physical disk is
more difficult to move between machines than VHD files.
Considerations for BizTalk Server: If your SQL Server instance is running on a Hyper-V virtual
machine, you may obtain incremental performance improvements by using passthrough disks
rather than fixed virtual hard disks (VHD) for the data volumes. If you are hosting local file receive
locations on BizTalk Server 2009 or streaming large messages to disk during processing, you
may also obtain incremental performance improvements by using passthrough disks rather than
fixed VHDs.
For more information about implementing disks and storage with Hyper-V see Implementing
Disks and Storage (http://go.microsoft.com/fwlink/?LinkID=142362).
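The storage guidance above can be condensed into a simple decision helper. The following is an illustrative sketch only; the environment and workload labels are ours, not part of this guide:

```python
def recommend_storage(environment: str, workload: str) -> str:
    """Suggest a Hyper-V disk type per the guidance above.

    environment: "production" or "development"
    workload: "os" or "data" (hypothetical labels)
    """
    if environment == "development":
        # Maximizing resource utilization is the priority; dynamic VHDs,
        # possibly several per physical drive, are generally acceptable.
        return "dynamically expanding VHD"
    if workload == "os":
        # The startup disk must be attached to the IDE controller; a fixed
        # VHD is recommended for operating system volumes.
        return "fixed VHD on IDE controller"
    # Production data volumes: fixed VHD on a synthetic SCSI controller,
    # with a separate controller and physical disk per VHD for intensive I/O.
    return "fixed VHD on synthetic SCSI controller"

print(recommend_storage("production", "data"))
```

Passthrough disks are deliberately never returned here, because the guide recommends against them for BizTalk and SQL Server environments despite their marginal performance benefits.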
Networking
BizTalk Server 2009 tends to exhibit high network utilization. Therefore, when network
performance is an issue, consider allocating a separate physical network card for each virtual
machine.
When configuring a virtual machine, ensure that you use the Network Adapter instead of the
Legacy Network Adapter. The legacy network adapter is intended for operating systems that do
not support integration components.
To measure network performance, use the “\Network Interface(*)\Bytes Total/sec” and the
“\Network Interface(*)\Output Queue Length” performance monitor counters on the host
operating system to measure overall performance of the network card. If a physical network
adapter has been identified as being busy, use the “\Hyper-V Virtual Network Adapter(*)\Bytes/sec”
counter on the host operating system to identify which virtual machine network adapter(s) is/are
generating high load.
For more information about evaluating network performance on a Hyper-V environment see the
Measuring Network Performance section of Checklist: Measuring Performance on Hyper-V.
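The second step of this triage, finding the virtual machine adapters behind a busy physical network card, can be sketched as follows. The counter samples and adapter instance names below are invented for illustration; in practice they would come from the “\Hyper-V Virtual Network Adapter(*)\Bytes/sec” counter:

```python
# Hypothetical samples of "\Hyper-V Virtual Network Adapter(*)\Bytes/sec",
# keyed by virtual network adapter instance name (names are made up).
samples = {
    "BizTalkVM1_NIC": 118_000_000,
    "BizTalkVM2_NIC": 4_500_000,
    "SQLVM1_NIC": 36_000_000,
}

def busiest_adapters(samples, threshold_bytes_per_sec):
    """Return adapters whose observed throughput exceeds the threshold,
    sorted busiest first."""
    hot = [(name, rate) for name, rate in samples.items()
           if rate > threshold_bytes_per_sec]
    return sorted(hot, key=lambda pair: pair[1], reverse=True)

# Flag adapters pushing more than ~100 MB/s (a gigabit NIC tops out
# near 125 MB/s).
for name, rate in busiest_adapters(samples, 100_000_000):
    print(f"{name}: {rate / 1_000_000:.0f} MB/s")
```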
CPU
Hyper-V supports different numbers of virtual processors for different guest operating systems, as
summarized in the table below. To allocate the maximum CPU resources for BizTalk Server
2009, install it on a Windows Server 2008 64-bit or 32-bit edition guest operating system, both of
which support four virtual processors per virtual machine.
Configure a 1-1 allocation of virtual processors in the guest operating system(s) to logical
processors available to the host operating system to prevent excessive context switching.
Excessive context switching between processors will result in performance degradation. For more
information about allocating virtual processors to logical processors, see the “Optimize Processor
Performance” section of the topic Checklist: Optimizing Performance on Hyper-V.
The “\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time” Performance Monitor
counter measures the overall resource utilization of all guest machines and the hypervisor on the
Hyper-V host. If this value is above 90%, the server is running at maximum capacity; allocating
additional virtual processors to virtual machines in this scenario can degrade overall system
performance and should be avoided. For further details on using the Hyper-V performance
counters, see the Evaluating BizTalk Server Performance on Hyper-V section of this guide.
Guest Operating System: Virtual Processor Limit
Windows Server 2008, all editions: 4
Windows Server 2003, all editions: 2
Windows Vista with Service Pack 1 (SP1): 2
Windows XP with Service Pack 3 (SP3): 2
SUSE/Red Hat Linux: 1
Note
For more information about the guest operating systems that are supported on Hyper-V,
see http://go.microsoft.com/fwlink/?LinkID=118347.
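The two CPU checks described above can be automated roughly as follows. The 90% threshold and the 1-1 allocation rule come from the text; the function names and return strings are ours:

```python
def host_cpu_status(total_run_time_percent: float) -> str:
    """Interpret the "\\Hyper-V Hypervisor Logical Processor(_Total)\\
    % Total Run Time" counter: above 90% the host is at maximum capacity
    and adding virtual processors should be avoided."""
    if total_run_time_percent > 90:
        return "at capacity: do not allocate additional virtual processors"
    return "headroom available"

def allocation_is_one_to_one(virtual_cpus: int, logical_cpus: int) -> bool:
    """True when the guest virtual processors do not exceed the host's
    logical processors, the 1-1 allocation recommended to prevent
    excessive context switching."""
    return virtual_cpus <= logical_cpus

print(host_cpu_status(94.2))
print(allocation_is_one_to_one(4, 8))
```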
Memory
The physical server requires enough memory for the root partition and any virtual machines
running on the server. During testing, a minimum of 2 GB of memory was allocated to the root
partition, and the Memory\Available MBytes performance monitor counter was monitored to
ensure that no memory pressure was experienced.
The amount of memory that should be allocated to each virtual machine in a BizTalk Server 2009
environment depends on the workload and type of processing that will be performed. There are
many factors that affect memory requirements of BizTalk Server 2009 including:

Size of messages processed

Throughput of messages

Orchestration design

Pipeline processing

Number of BizTalk hosts that you plan to run within the virtual machine
For a comprehensive list of factors that affect memory, see “The Performance Factors” section of
the BizTalk Server Performance Optimizations Guide at
http://go.microsoft.com/fwlink/?LinkId=122587.
Proactively monitor the Memory\Available MBytes counter from within each virtual machine and
the root partition itself. The following guidelines from Checklist: Measuring Performance on
Hyper-V should be used to determine whether there is enough available physical memory for the
virtual machine and for the root partition:

50% of free memory available or more = Healthy

25% of free memory available = Monitor

10% of free memory available = Warning

Less than 5% of free memory available = Critical, performance will be adversely affected
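These thresholds can be expressed as a small classification helper. This is a sketch of ours, assuming total memory is known so the counter value can be converted to a percentage; the guide leaves the 5-10% band unspecified, so anything below 10% is treated as Critical here:

```python
def memory_health(available_mbytes: float, total_mbytes: float) -> str:
    """Classify free memory per the guide's thresholds, using the value of
    the Memory\\Available MBytes counter taken inside a virtual machine or
    the root partition."""
    free_percent = available_mbytes / total_mbytes * 100
    if free_percent >= 50:
        return "Healthy"
    if free_percent >= 25:
        return "Monitor"
    if free_percent >= 10:
        return "Warning"
    # The guide marks "less than 5%" as Critical; the 5-10% band is
    # unspecified, so we conservatively treat all of it as Critical.
    return "Critical"

print(memory_health(1024, 4096))  # 25% free, prints "Monitor"
```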
Choosing Root Operating System Version
Hyper-V is supported on a Server Core installation as well as a full installation of 64-bit Windows
Server 2008. To minimize the overhead of the root partition, install Hyper-V on a Server Core
installation of Windows Server 2008. The Hyper-V role can be managed remotely with Hyper-V
Manager on a different system. Server Core provides a smaller disk and memory profile, thereby
leaving more resources available for virtual machines. For more information about the
Server Core installation option available for Windows Server 2008, see
http://go.microsoft.com/fwlink/?LinkId=146344.
If you choose to use a full installation of Windows Server 2008, ensure that the root partition is
dedicated only to the Hyper-V server role. Running additional server roles will consume memory,
disk, processor, and network resources and will degrade performance.
Creating Your Virtual Machines
After you have installed and configured the Hyper-V server role, you need to create the virtual
machines. Before doing this, it is useful to answer the following questions:

What edition of Windows Server 2008 will I use to run Hyper-V?

What storage configuration will I use?

How many virtual processors does the guest operating system support?

How much memory will be allocated to the virtual machine?

How many virtual machines can I run on my Hyper-V Server?

How will I install the operating system onto the machine?
Steps 2-4 in the “Step-by-Step Guide to Getting Started with Hyper-V” guide provide a full walkthrough of how to create and configure virtual machines in Hyper-V. This guide is available at
http://go.microsoft.com/fwlink/?LinkId=122588.
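One way to keep track of the answers to the planning questions above is a small record type. This sketch is ours, not part of the guide, and validates only one answer automatically: the virtual processor count against the per-guest-OS limits listed in the CPU section:

```python
from dataclasses import dataclass

# Virtual processor limits per guest OS, from the CPU section of this topic.
VCPU_LIMITS = {
    "Windows Server 2008": 4,
    "Windows Server 2003": 2,
    "Windows Vista SP1": 2,
    "Windows XP SP3": 2,
    "SUSE/Red Hat Linux": 1,
}

@dataclass
class VmPlan:
    guest_os: str
    virtual_cpus: int
    memory_mb: int
    storage: str          # e.g. "fixed VHD", "dynamically expanding VHD"
    install_method: str   # e.g. "ISO image", "network installation"

    def problems(self) -> list:
        """Return a list of planning issues; empty if the plan looks sane."""
        issues = []
        limit = VCPU_LIMITS.get(self.guest_os)
        if limit is not None and self.virtual_cpus > limit:
            issues.append(
                f"{self.guest_os} supports at most {limit} virtual processors")
        return issues

plan = VmPlan("Windows Server 2003", 4, 4096, "fixed VHD", "ISO image")
print(plan.problems())
```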
Installing the Base Operating System
All the options available for a physical server installation are available in Hyper-V. A bootable
CD/DVD-ROM media or an ISO image can be used to perform a manual installation. A network
installation can be performed if the virtual machine has been configured with a network adapter
connected to the same network as a server that hosts the ISO images.
Important
Whichever installation method is chosen, for performance reasons it is critical that the
operating system integration components be installed for each virtual machine running
under Hyper-V. The integration components provide a set of drivers and services that
enable the guest machine to perform by using synthetic devices. Synthetic devices avoid
the need for emulated devices, which are used on operating systems that do not support
integration components. Emulated devices incur greater system overhead compared to
synthetic devices.
To install and configure the machines used in this lab, an initial base image was created on a
fixed size VHD. This involved a manual installation of Windows Server 2008 64-bit Enterprise
Edition. Once all appropriate updates had been installed, the base virtual machine was imaged
using the Sysprep utility that is installed with Windows Server 2008 in the
%WINDIR%\system32\sysprep directory.
Note
Running Sysprep after BizTalk Server 2009 has been installed and configured on the
server can be accomplished through the use of a Sysprep answer file and scripts
provided with BizTalk Server 2009. These sample scripts are designed for use with
BizTalk Server 2009 installed on 32-bit and 64-bit versions of Windows Server 2008 only.
For more information see the BizTalk Server 2009 online documentation.
Installing and Configuring BizTalk Server

To minimize the time required to install virtual machines, create a base image consisting only
of the guest operating system and software prerequisites. Use SysPrep to prepare the VHD
image for reuse, and then base all your virtual machines (VMs) on this VHD.
Note

With BizTalk Server 2009, it is possible to run Sysprep against a base image after BizTalk
Server 2009 has been installed and configured on the server. This can be accomplished
through the use of a Sysprep answer file and scripts provided with BizTalk Server 2009.
These sample scripts are designed for use with BizTalk Server 2009 installed on 32-bit
and 64-bit versions of Windows Server 2008 only. For more information see the BizTalk
Server 2009 online documentation.

The Unattended Windows Setup Reference is available at
http://go.microsoft.com/fwlink/?LinkId=142364.

Follow the recommendations in the “When Installing and Configuring BizTalk Server…”
section of the topic Checklist: Best Practices for Installing and Configuring BizTalk Server on
Hyper-V.

For information on the supportability of BizTalk Server and SQL Server in a Hyper-V
environment, see Appendix C: BizTalk Server and SQL Server Hyper-V Supportability.
Checklist: Best Practices for Installing and
Configuring BizTalk Server on Hyper-V
The sections below are a summary of the installation and configuration requirements described in
the Deploying BizTalk Server on Hyper-V section of this guide. These should be used as a quick
reference when installing, configuring and deploying BizTalk Server in a Hyper-V environment.
Links to the relevant sections are provided for further information.
Before Installing Hyper-V…
Step
Reference
Hyper-V is available only for 64-bit, x64
editions of Windows Server 2008. Ensure you
are using an x64-based version of Windows
Server 2008.
See the topic Hyper-V Installation
Prerequisites available at
http://go.microsoft.com/fwlink/?LinkId=142350.
Ensure that your processor supports hardware-assisted virtualization and Data Execution
Prevention (DEP) and that these features are
enabled. This requires a processor that is
compatible with Intel VT or AMD Virtualization
(AMD-V).
See the topic Hyper-V Installation
Prerequisites available at
http://go.microsoft.com/fwlink/?LinkId=142350.
Use a Server Core installation of Windows
Server 2008 for the root partition. This will
minimize server overhead and improve Hyper-V
performance.
See the topic Install the Hyper-V Role on a
Server Core Installation of Windows Server
2008 available at
http://go.microsoft.com/fwlink/?LinkId=142370.
Run only the Hyper-V server role on the root
partition.
From the Performance Tuning for
Virtualization Servers section of the
Performance Tuning Guidelines for
Windows Server 2008 whitepaper available for
download at
http://go.microsoft.com/fwlink/?LinkID=135682:
Dedicated Server Role
The root partition should be dedicated to the
virtualization server role. Additional server roles
can adversely affect the performance of the
virtualization server, especially if they consume
significant CPU, memory, or I/O bandwidth.
Minimizing the server roles in the root partition
has additional benefits such as reducing the
attack surface and the frequency of updates.
System administrators should consider carefully
what software is installed in the root partition
because some software can adversely affect
the overall performance of the virtualization
server.
When Creating Hyper-V Virtual Machines…
Step
Reference
Using a fixed size virtual hard disk (VHD)
provides improved performance compared to a
dynamically expanding VHD for operating
system drives.
From the Performance Tuning for
Virtualization Servers section of the
Performance Tuning Guidelines for
Windows Server 2008 whitepaper available for
download at
http://go.microsoft.com/fwlink/?LinkID=135682:
Fixed-size VHD
Space for the VHD is first allocated when the
VHD file is created. This type of VHD is less apt
to fragment, which reduces the I/O throughput
when a single I/O is split into multiple I/Os. It
has the lowest CPU overhead of the three VHD
types because reads and writes do not need to
look up the mapping of the block.
Use fixed-size virtual hard drive (VHD) disks for
high disk I/O activities and configure disks for
data volumes using the SCSI controller. For
highly intensive storage I/O workloads that
span multiple data drives, each VHD should be
attached to a separate synthetic SCSI
controller for better overall performance. In
addition, each VHD should be stored on
separate physical disks.
From the Performance Tuning for
Virtualization Servers section of the
Performance Tuning Guidelines for
Windows Server 2008 whitepaper available for
download at
http://go.microsoft.com/fwlink/?LinkID=135682:
Synthetic SCSI Controller
The synthetic storage controller provides
significantly better performance on storage I/Os
with reduced CPU overhead than the emulated
IDE device. The VM integration services
include the enlightened driver for this storage
device and are required for the guest operating
system to detect it. The operating system disk
must be mounted on the IDE device for the
operating system to boot correctly, but the VM
integration services load a filter driver that
reroutes IDE device I/Os to the synthetic
storage device.
We strongly recommend that you mount the
data drives directly to the synthetic SCSI
controller because that configuration has
reduced CPU overhead. You should also mount
log files and the operating system paging file
directly to the synthetic SCSI controller if their
expected I/O rate is high.
For highly intensive storage I/O workloads that
span multiple data drives, each VHD should be
attached to a separate synthetic SCSI controller
for better overall performance. In addition, each
VHD should be stored on separate physical
disks.
Use the SCSI controller to attach VHD disks for
high I/O activities, such as for SQL Server data
and log files.
Even though the Hyper-V IDE controller and
SCSI controller offer comparable performance,
the SCSI controller can be used only if Hyper-V
integration services are installed. Therefore,
using the SCSI controller to attach
passthrough disks ensures that Hyper-V
integration services are installed, which in turn
ensures optimal disk I/O performance.
Note
Do not attach a system disk to a SCSI
controller. A virtual hard disk that
contains an operating system must be
attached to an IDE controller.
Use the Network Adapter instead of the Legacy
Network Adapter when configuring networking
for a virtual machine. The legacy network
adapter is designed for operating systems that
do not support integration components.
From the Performance Tuning for
Virtualization Servers section of the
Performance Tuning Guidelines for
Windows Server 2008 whitepaper available for
download at
http://go.microsoft.com/fwlink/?LinkID=135682:
Synthetic Network Adapter
Hyper-V features a synthetic network adapter
that is designed specifically for VMs to achieve
significantly reduced CPU overhead on network
I/O when it is compared to the emulated
network adapter that mimics existing hardware.
The synthetic network adapter communicates
between the child and root partitions over
VMBus by using shared memory for more
efficient data transfer. The emulated network
adapter should be removed through the VM
settings dialog box and replaced with a
synthetic network adapter. The guest requires
that the VM integration services be installed.
Ensure that integration services are installed on
any enlightened guest operating systems and
verify that the most current version of
integration services is installed. To check for
the most current version of integration services,
connect to
http://go.microsoft.com/fwlink/?LinkID=120732
or run Windows Update from the Start menu.
From the Performance Tuning for
Virtualization Servers section of the
Performance Tuning Guidelines for
Windows Server 2008 whitepaper available for
download at
http://go.microsoft.com/fwlink/?LinkID=135682:
Enlightened Guests
The operating system kernel in Windows Vista
SP1, Windows Server 2008, and later releases
features enlightenments that optimize its
operation for VMs. For best performance, we
recommend that you use Windows Server 2008
as a guest operating system. The
enlightenments decrease the CPU overhead of
Windows that runs in a VM. The integration
services provide additional enlightenments for
I/O. Depending on the server load, it can be
appropriate to host a server application in a
Windows Server 2008 guest for better
performance.
Whenever possible, configure a 1-to-1 allocation of virtual processors to available logical processors.
For more information about configuring a 1-to-1
allocation of virtual processors to available
logical processors see the “Optimize Processor
Performance” section in Checklist: Optimizing
Performance on Hyper-V.
Convert or migrate virtual machines running on
Microsoft Virtual PC, Microsoft Virtual Server,
or VMware ESX Server to run on Hyper-V.

Use System Center Virtual Machine Manager 2008 to convert or migrate virtual machines to run on Hyper-V. For more information, see the topic “V2V: Converting a Virtual Machine to a VMM Virtual Machine” at http://go.microsoft.com/fwlink/?LinkId=146342.

If required, the process of converting virtual machines running on Microsoft Virtual PC or Microsoft Virtual Server can be performed manually. For more information, see the topic “Virtual Machine Migration Guide: How To Migrate from Virtual Server to Hyper-V” at http://go.microsoft.com/fwlink/?LinkID=137258.

The sample tool VMC2Hyper-V can also be used to migrate virtual machines running on Microsoft Virtual PC or Microsoft Virtual Server to Hyper-V. For more information about the VMC2Hyper-V sample tool, see http://go.microsoft.com/fwlink/?LinkID=135683.
Note
Use of this tool is not supported by
Microsoft, and Microsoft makes no
guarantees about the suitability of
this program. Use of this program
is entirely at your own risk.
When Installing and Configuring BizTalk Server…
When installing BizTalk Server 2009 in a virtual environment, follow the same practices as in a physical environment. Use the following resources when installing and configuring BizTalk Server 2009:
Step
Reference
For instructions on how to install BizTalk Server 2009 on the guest operating system, see the BizTalk Server 2009 Installation Guides.
The BizTalk Server installation guides are available at http://go.microsoft.com/fwlink/?LinkId=81041.
Run the BizTalk Server Best Practices
Analyzer (BPA) tool on the completed BizTalk
Server 2009 installation.
The BPA tool is available at
http://go.microsoft.com/fwlink/?LinkId=122578.
If the BizTalk Server databases are going to be
housed on SQL Server 2005, run the SQL
Server 2005 Best Practices Analyzer (BPA)
tool on the SQL Server 2005 instance before
configuring the BizTalk Server databases.
The SQL Server 2005 Best Practices Analyzer
is available at
http://go.microsoft.com/fwlink/?LinkID=132957.
The Microsoft BizTalk Server Operations Guide provides Operational Readiness Checklists that can be used to ensure that all necessary prerequisite software has been installed. Checklists that provide BizTalk Server specific configuration information are provided for all the components required as part of a BizTalk Server stack including the operating system, IIS, and SQL Server. In addition, guidance is provided about how to configure BizTalk Server for high availability.
The BizTalk Server Operations Guide is available at http://go.microsoft.com/fwlink/?LinkId=110533.
Follow guidance in the “Optimizing
Performance” section of the BizTalk Server
Performance Optimizations Guide to tune
performance of your BizTalk Server 2009
installation.
The BizTalk Server Performance Optimizations
Guide is available at
http://go.microsoft.com/fwlink/?LinkId=122579.
Consider installing and running the BizTalk
MsgBoxViewer utility to analyze and validate
the configuration of the BizTalk Server
MessageBox database.
The BizTalk MsgBoxViewer utility is available at
http://go.microsoft.com/fwlink/?LinkID=117289.
Note
Use of this tool is not supported by Microsoft, and Microsoft makes no guarantees about the suitability of this program. Use of this program is entirely at your own risk.
Verify that CPU is being properly allocated to
guest operating systems running in Hyper-V.
“Measuring Processor Performance” section in
Checklist: Measuring Performance on Hyper-V.
Evaluating BizTalk Server Performance on
Hyper-V
This section provides checklists for evaluating and optimizing performance of a BizTalk Server
2009 application running on a guest operating system installed on a Hyper-V virtual machine and
a summary of the system resource costs associated with running Hyper-V.
In This Section
Checklist: Measuring Performance on Hyper-V
Checklist: Optimizing Performance on Hyper-V
System Resource Costs on Hyper-V
Checklist: Measuring Performance on Hyper-V
While most of the principles of analyzing the performance of a guest operating system installed on a Hyper-V virtual machine are the same as analyzing the performance of an operating system installed on a physical machine, many of the collection methods are different. Use the sections below as a quick reference when evaluating the performance of your BizTalk Server 2009 solution running on a guest operating system installed on a Hyper-V virtual machine.
Measuring Disk I/O Performance
Use the following performance monitor counters to measure disk I/O performance on a guest
operating system installed on a Hyper-V virtual machine:
Step
Reference
Measure disk latency on a Hyper-V host operating system – The best initial indicator of disk performance on a Hyper-V host operating system is obtained by using the “\Logical Disk(*)\Avg. Disk sec/Read” and “\Logical Disk(*)\Avg. Disk sec/Write” performance monitor counters. These performance monitor counters measure the amount of time that read and write operations take to respond to the operating system. As a general rule of thumb, average response times greater than 15ms are considered sub-optimal. This is based on the typical seek time of a single 7200 RPM disk drive without cache. The use of logical disk versus physical disk performance monitor counters is recommended because Windows applications and services utilize logical drives represented as drive letters, wherein the physical disk (LUN) presented to the operating system can be comprised of multiple physical disk drives in a disk array. Use the following rule of thumb when measuring disk latency on the Hyper-V host operating system using the \Logical Disk(*)\Avg. Disk sec/Read or \Logical Disk(*)\Avg. Disk sec/Write performance monitor counters:
 1ms to 15ms = Healthy
 15ms to 25ms = Warning or Monitor
 26ms or greater = Critical, performance will be adversely affected
For more information about disk performance analysis, see the following resources:
 “Performance Overhead of Running SQL Server in Hyper-V” section of the “Running SQL Server 2008 in a Hyper-V Environment – Best Practices and Performance Considerations” whitepaper at http://go.microsoft.com/fwlink/?LinkId=144622.
 Ruling Out Disk-Bound Problems at http://go.microsoft.com/fwlink/?LinkId=120947.
 SQL Server Predeployment I/O Best Practices at http://go.microsoft.com/fwlink/?LinkId=120948.
 “I/O Bottlenecks” section of the “Troubleshooting Performance Problems in SQL Server 2005” whitepaper available at http://go.microsoft.com/fwlink/?LinkId=146345.
 How to Identify a Disk Performance Bottleneck Using the Microsoft Server Performance Advisor (SPA) Tool at http://go.microsoft.com/fwlink/?LinkID=98096.
Note
Physical disks installed in a non-virtualized environment offer better performance than disks accessed through a Hyper-V host operating system. If disk performance is absolutely critical to the overall performance of your application, consider hosting disks on physical hardware only.
Note
When evaluating disk I/O performance, ensure that you configure antivirus software to exclude scanning of any disk partitions that are being evaluated. Antivirus scanning introduces overhead that can negatively impact performance and skew test results.
Measure disk latency on guest operating systems – Response times of the disks used by the guest operating systems can be measured using the same performance monitor counters used to measure response times of the disks used by the Hyper-V host operating system.
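The latency rule of thumb above is easy to apply when post-processing performance monitor logs. The following sketch uses the thresholds from this section; the helper function itself is illustrative, not part of any Microsoft tooling:

```python
def classify_disk_latency(avg_disk_sec: float) -> str:
    """Classify a "\\Logical Disk(*)\\Avg. Disk sec/Read" or ".../Write"
    sample (reported in seconds) against the rule of thumb above."""
    latency_ms = avg_disk_sec * 1000.0  # the counter reports seconds
    if latency_ms < 15.0:
        return "Healthy"
    if latency_ms <= 25.0:
        return "Warning/Monitor"
    return "Critical"

# classify_disk_latency(0.005) -> "Healthy" (a 5 ms average response time)
```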
Measuring Memory Performance
Use the following performance monitor counters to measure the impact of available memory on
the performance of a guest operating system installed on a Hyper-V virtual machine:
Step
Reference
Measure available memory on the Hyper-V
host operating system – The amount of
physical memory available to the Hyper-V host
operating system can be determined by
monitoring the “\Memory\Available MBytes”
performance monitor counter on the physical
computer. This counter reports the amount of
free physical memory available to the host
operating system. Use the following rules of thumb when evaluating the physical memory available to the host operating system:
For more information about the impact of
available physical memory on application server
performance, see the Exchange Server 2003
Help topic “Ruling Out Memory-Bound
Problems” at
http://go.microsoft.com/fwlink/?LinkId=121056.

\Memory\Available MBytes – Available
MBytes measures the amount of physical
memory available to processes running on
the computer, as a percentage of physical
memory installed on the computer. The
following guidelines apply when measuring
the value of this performance monitor
counter:

50% of free memory available or more
= Healthy

25% of free memory available =
Monitor

10% of free memory available =
Warning

Less than 5% of free memory available
= Critical, performance will be
adversely affected

\Memory\Pages/sec – This performance
monitor counter measures the rate at which
pages are read from or written to disk to
resolve hard page faults. To resolve hard
page faults, the operating system must
swap the contents of memory to disk,
which negatively impacts performance. A
high number of pages per second in
correlation with low available physical
memory may indicate a lack of physical
memory. The following guidelines apply
when measuring the value of this
performance monitor counter:

Less than 500 = Healthy

500 - 1000 = Monitor or Caution

Greater than 1000 = Critical,
performance will be adversely affected
Measure available memory on the guest
operating system – Memory that is available
to the guest operating systems can be
measured with the same performance monitor
counters used to measure memory available to
the Hyper-V host operating system.
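As noted above, a high paging rate is most significant when it correlates with low available memory. The combination logic below is an illustrative reading of this section, reusing its thresholds; the function name is ours:

```python
def memory_pressure(available_pct: float, pages_per_sec: float) -> str:
    """Joint reading of "\\Memory\\Available MBytes" (as a percentage of
    installed RAM, per the guidelines above) and "\\Memory\\Pages/sec":
    heavy paging matters most when available memory is also low."""
    low_memory = available_pct < 10.0       # Warning threshold from above
    heavy_paging = pages_per_sec > 1000.0   # Critical threshold from above
    if low_memory and heavy_paging:
        return "Likely physical memory shortage"
    if low_memory or heavy_paging:
        return "Monitor"
    return "Healthy"
```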
Measuring Network Performance
Hyper-V allows guest computers to share the same physical network adapter. While this helps to
consolidate hardware, take care not to saturate the physical adapter. Use the following methods
to ensure the health of the network used by the Hyper-V virtual machines:
Step
Reference
Test network latency
Ping each virtual machine to ensure adequate
network latency. On local area networks, expect
to receive less than 1ms response times.
Test for packet loss
Use the pathping.exe utility to test packet loss between virtual machines. Pathping.exe measures packet loss on the network and is available with all versions of Windows since Windows 2000 Server. Pathping.exe sends out a burst of 100 ping requests to each network node and calculates how many pings are returned. On local area networks there should be no loss of ping requests from the pathping.exe utility.
Test network file transfers
Copy a 100MB file between virtual machines
and measure the length of time required to
complete the copy. On a healthy 100Mbit
(megabit) network, a 100MB (megabyte) file
should copy in 10 to 20 seconds. On a healthy
1Gbit network, a 100MB file should copy in
about 3 to 5 seconds. Copy times outside of these parameters are indicative of a network problem. One common cause of poor network transfers occurs when the network adapter has “auto detected” a 10Mbit half-duplex connection, which prevents the network adapter from taking full advantage of available bandwidth.
Measure network utilization on the Hyper-V
host operating system
Use the following performance monitor counters
to measure network utilization on the Hyper-V
host operating system:
\Network Interface(*)\Bytes Total/sec – The percentage of network utilization is calculated by multiplying Bytes Total/sec by 8 to convert it to bits, multiplying the result by 100, and then dividing by the network adapter’s current bandwidth. Use the following thresholds to evaluate network bandwidth utilization:

Less than 40% of the interface consumed =
Healthy

41%-64% of the interface consumed =
Monitor or Caution

65-100% of the interface consumed =
Critical, performance will be adversely
affected
\Network Interface(*)\Output Queue Length – The output queue length measures the number of threads waiting on the network adapter. If there are more than 2 threads waiting on the network adapter, then the network may be a bottleneck. Common causes of this are poor network latency and/or high collision rates on the network. Use the following thresholds to evaluate output queue length:
 0 = Healthy
 1-2 = Monitor or Caution
 Greater than 2 = Critical, performance will be adversely affected.
Ensure that the network adapters for all
computers (physical and virtual) in the solution
are configured to use the same value for
maximum transmission unit (MTU). For more
information about configuring the MTU value,
see “Appendix A: TCP/IP Configuration
Parameters” at
http://go.microsoft.com/fwlink/?LinkId=113716.
If an output queue length of 2 or more is
measured, consider adding one or more
physical network adapters to the physical
computer that hosts the virtual machines and
bind the network adapters used by the guest
operating systems to these physical network
adapters.
Measure network utilization on the guest
operating systems
If a network adapter on the Hyper-V root
partition is busy as indicated by the
performance monitor counters mentioned
above, then consider using the "\Hyper-V
Virtual Network Adapter(*)\Bytes/sec"
performance monitor counter to identify which virtual network adapters are generating the most network traffic.
For more information about network performance analysis, see “Chapter 15 - Measuring .NET
Application Performance” at http://go.microsoft.com/fwlink/?LinkId=121073.
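The utilization calculation described in the table above reduces to a one-line formula. A sketch, with an illustrative function name, assuming the adapter bandwidth is given in bits per second:

```python
def network_utilization_pct(bytes_total_per_sec: float,
                            current_bandwidth_bps: float) -> float:
    """Percentage of network utilization, per the calculation above:
    multiply "\\Network Interface(*)\\Bytes Total/sec" by 8 to convert
    to bits, multiply by 100, then divide by the adapter's current
    bandwidth in bits per second."""
    return bytes_total_per_sec * 8.0 * 100.0 / current_bandwidth_bps

# 6,250,000 bytes/sec on a 1 Gbit/sec adapter -> 5.0 (% utilized, Healthy)
```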
Measuring Processor Performance
Use the following methods to evaluate processor performance on a guest operating system
installed on a Hyper-V virtual machine:

Measure guest operating system processor utilization - Traditionally, processor
performance can be measured using the “\Processor(*)\% Processor Time” performance
monitor counter. This is not an accurate counter for evaluating processor utilization of a guest
operating system though because Hyper-V measures and reports this value relative to the
number of processors allocated to the virtual machine. If more processors are allocated to
running virtual machines than are actually present on the physical computer, the value
returned by each guest operating system for the “\Processor(*)\% Processor Time”
performance monitor counter will be low, even if in fact processor utilization is a bottleneck.
This occurs because the virtual processors utilize the physical processors in a round-robin
fashion. Each virtual processor will try to allocate itself a share of overall system resources, so in a 4 physical processor system, each virtual processor will by default try to utilize 25% of
the system resources. If 8 virtual processors are created this means that collectively the
virtual processors will attempt to utilize 200% of the server CPU capacity. In this case, each
virtual processor will report a low utilization as measured by the “\Processor(*)\% Processor
Time” performance monitor counter (relative to the level it expects) and the excessive context
switching between the virtual processors will result in poor performance for each virtual
machine. In this scenario, consider reducing the number of virtual processors allocated to
Hyper-V virtual machines on the host operating system.
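The over-allocation arithmetic in the paragraph above can be made explicit. This helper is illustrative only:

```python
def total_cpu_demand_pct(virtual_processors: int,
                         logical_processors: int) -> float:
    """Collective share of host CPU that the virtual processors will try
    to use, following the round-robin description above: each virtual
    processor targets 100 / logical_processors percent of the host."""
    return virtual_processors * 100.0 / logical_processors

# The example from the text: 8 virtual processors on a 4-processor host
# collectively attempt 200% of server CPU capacity.
```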
Hyper-V provides hypervisor performance objects to monitor the performance of both logical
and virtual processors. A logical processor correlates directly to the number of processors or
cores that are installed on the physical computer. For example, 2 quad core processors
installed on the physical computer would correlate to 8 logical processors. Virtual processors
are what the virtual machines actually use, and all execution in the root and child partitions
occurs in virtual processors.
To accurately measure the processor utilization of a guest operating system, use the “\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time” performance monitor counter on
the Hyper-V host operating system. Use the following thresholds to evaluate guest operating
system processor utilization using the “\Hyper-V Hypervisor Logical Processor(_Total)\%
Total Run Time” performance monitor counter:

Less than 60% consumed = Healthy

60% - 89% consumed = Monitor or Caution

90% - 100% consumed = Critical, performance will be adversely affected
To troubleshoot processor performance of guest operating systems on a Hyper-V
environment, it is best to strive for a balance between the values reported by the host
operating system for “\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time”
(LPTR) and “\Hyper-V Hypervisor Virtual Processor(_Total)\% Total Run Time” (VPTR). If
LPTR is high and VPTR is low then verify that there are not more processors allocated to
virtual machines than are physically available on the physical computer. Use the “\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time” counters to determine which virtual processors are consuming CPU and de-allocate virtual processors from virtual machines as appropriate to configure a one to one mapping of virtual processors to logical processors. For
more information about configuring a one to one mapping of virtual processors to logical
processors, see the “Optimizing Processor Performance” section in Checklist: Optimizing
Performance on Hyper-V.
If VPTR is high and LPTR is low, then consider allocating additional processors to virtual
machines if there are available logical processors and if additional processors are supported
by the guest operating system. In the case where VPTR is high, LPTR is low, there are
available logical processors to allocate, but additional processors are not supported by the
guest operating system, consider scaling out by adding additional virtual machines to the
physical computer and allocating available processors to these virtual machines. In the case
where both VPTR and LPTR are high, the configuration is pushing the limits of the physical computer, and you should consider scaling out by adding another physical computer and additional Hyper-V virtual machines to the environment. The flowchart below describes the
process that should be used when troubleshooting processor performance in a Hyper-V
environment.
Troubleshooting CPU performance in a Hyper-V Environment
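The LPTR/VPTR troubleshooting flow above can be sketched as a small decision helper. The 60% boundary reuses the Monitor threshold from this section; the function name and boolean inputs are illustrative:

```python
def cpu_tuning_advice(lptr: float, vptr: float,
                      spare_logical_processors: bool,
                      guest_supports_more_vcpus: bool) -> str:
    """Sketch of the LPTR/VPTR troubleshooting flow described above.

    lptr: "\\Hyper-V Hypervisor Logical Processor(_Total)\\% Total Run Time"
    vptr: "\\Hyper-V Hypervisor Virtual Processor(_Total)\\% Total Run Time"
    "High" is taken here as >= 60%, the Monitor threshold from this section.
    """
    HIGH = 60.0
    if lptr >= HIGH and vptr < HIGH:
        # Likely more virtual processors allocated than physically exist.
        return "Reduce virtual processors toward a 1-to-1 mapping"
    if vptr >= HIGH and lptr < HIGH:
        if spare_logical_processors and guest_supports_more_vcpus:
            return "Allocate additional virtual processors"
        if spare_logical_processors:
            # Guest OS at its virtual processor limit: scale out instead.
            return "Scale out with additional virtual machines on this host"
        return "No spare logical processors; consider another host"
    if lptr >= HIGH and vptr >= HIGH:
        # The physical computer itself is the limit.
        return "Scale out to another physical computer"
    return "CPU is not the bottleneck"
```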
Note
Guest operating system processors do not have a set affinity to physical
processors/cores – The hypervisor determines how physical resources are used. In
the case of processor utilization, the hypervisor schedules the guest processor time
to physical processor in the form of threads. This means the processor load of virtual
machines will be spread across the processors of the physical computer.
Furthermore, virtual machines cannot exceed the processor utilization of the configured number of virtual processors. For example, if a single virtual machine is configured to run with 2 virtual processors on a physical computer with 8 processors/cores, then the virtual machine cannot exceed the processor capacity of the configured number of virtual processors (in this case, 2 processors).

Measure overall processor utilization of the Hyper-V environment using Hyper-V
performance monitor counters - For purposes of measuring processor utilization, the host
operating system is logically viewed as just another guest operating system. Therefore, the
“\Processor(*)\% Processor Time” monitor counter measures the processor utilization of the
host operating system only. To measure total physical processor utilization of the host
operating system and all guest operating systems, use the “\Hyper-V Hypervisor Logical
Processor(_Total)\% Total Run Time” performance monitor counter. This counter measures the total percentage of time spent by the processor running both the host operating system and all guest operating systems.
Use the following thresholds to evaluate overall processor utilization of the Hyper-V
environment using the “\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time”
performance monitor counter:

Less than 60% consumed = Healthy

60% - 89% consumed = Monitor or Caution

90% - 100% consumed = Critical, performance will be adversely affected
For more information about processor utilization, review the following resources:

“How To: Identify Functions causing a High User-mode CPU Bottleneck for Server
Applications in a Production Environment” at
http://go.microsoft.com/fwlink/?LinkID=107047.

“Chapter 15 - Measuring .NET Application Performance” at
http://go.microsoft.com/fwlink/?LinkId=121073.
Checklist: Optimizing Performance on Hyper-V
The following considerations apply when running BizTalk Server 2009 and/or the SQL Server instance(s) that host the BizTalk Server 2009 databases on Hyper-V virtual machines.
Allocate 110%–125% of CPU and Disk Resources
to the Hyper-V Virtual Machines
Plan to allocate 110% to 125% of the CPU resources and 105% - 110% of the disk resources
required by a physical hardware solution to the Hyper-V virtual machines used for the solution. By
configuring the Hyper-V virtual machine with additional resources, you will ensure that it can
provide performance on par with physical hardware while accommodating any overhead required
by Hyper-V virtualization technology.
Step
Reference
Scope the hardware requirements for the
BizTalk Server solution.

Follow the guidance in the “Planning the Environment for BizTalk Server” section of the BizTalk Server Operations Guide at http://go.microsoft.com/fwlink/?LinkId=122399 to scope the hardware requirements for the solution.

To scope the edition and number of BizTalk Servers that will be required by the solution, review BizTalk Server planning considerations documented in “Planning the BizTalk Server Tier” at http://go.microsoft.com/fwlink/?LinkId=122401.

To scope the version and number of SQL Server computers that will be required by the solution, review database planning considerations documented in “Planning the Database Tier” at http://go.microsoft.com/fwlink/?LinkId=122402 and the “Performance Overhead of Running SQL Server in Hyper-V” section of the “Running SQL Server 2008 in a Hyper-V Environment – Best Practices and Performance Considerations” whitepaper available at http://go.microsoft.com/fwlink/?LinkId=144622.

After scoping the hardware requirements of your BizTalk Server 2009 solution, plan to configure the Hyper-V machines with 110% – 125% of the CPU and disk resources if possible.
To complete planning for development, testing, staging, and production environments, review “Planning the Development, Testing, Staging, and Production Environments” at http://go.microsoft.com/fwlink/?LinkId=122403.
For example, if the hardware requirements for a physical BizTalk Server 2009 computer used by the solution are determined to be 2GB RAM, a dual core CPU running at 2GHz, and 2x 500 GB physical disks, then ideally, the Hyper-V virtual machine used by the solution would be configured with 2 or more virtual processors running >= 2.2 GHz and faster physical disks (typically by adding spindles or using faster disks).
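The planning margins above translate into a simple calculation. The function name and the units (aggregate GHz, IOPS) are illustrative, not prescribed by this guide:

```python
def hyperv_resource_plan(physical_cpu_ghz: float, physical_disk_iops: float):
    """Apply the planning margins above: 110%-125% of the CPU resources
    and 105%-110% of the disk resources required by the physical
    solution. Returns ((cpu_min, cpu_max), (disk_min, disk_max))."""
    cpu_range = (physical_cpu_ghz * 1.10, physical_cpu_ghz * 1.25)
    disk_range = (physical_disk_iops * 1.05, physical_disk_iops * 1.10)
    return cpu_range, disk_range

# A dual-core 2 GHz requirement (4 GHz aggregate) suggests planning for
# roughly 4.4 - 5.0 GHz of virtual CPU capacity.
```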
Optimize Hyper-V Performance
Use the following general guidelines to configure Hyper-V for optimal performance.
Step
Reference
Apply recommended guidance for performance
tuning virtualization servers.
“Performance Tuning for Virtualization Servers”
section of the “Performance Tuning Guidelines
for Windows Server 2008” document available
at
http://go.microsoft.com/fwlink/?LinkId=121171.
Note
For the test scenarios described in
Testing BizTalk Server Virtualization
Performance, the configuration options
that were applied are described in the
“Physical Infrastructure Specifics” and
the “Virtualization Specifics” sections of
the Test Scenario Overview topic.
Close any Virtual Machine Connection windows that aren’t being used.
The Virtual Machine Connection window(s) displayed when double-clicking a virtual machine name in the Hyper-V manager consume resources that could be otherwise utilized.
Close or minimize the Hyper-V manager.
The Hyper-V manager consumes resources by
continually polling each running virtual machine
for CPU utilization and uptime. Closing or
minimizing the Hyper-V manager will free up
these resources.
Optimize Performance of Disk, Memory, Network,
and Processor in a Hyper-V Environment
Use the following guidelines to optimize performance of disk, memory, network, and processor in
a Hyper-V virtual environment.
Optimize Processor Performance
Follow these guidelines to optimize processor performance of guest operating systems running in
a Hyper-V virtual environment:

Configure a 1-to-1 allocation of virtual processors to available logical processors for best performance - When running a CPU intensive application, the best configuration is a 1-to-1 ratio of virtual processors in the guest operating system(s) to the logical processors available to the host operating system. Any other configuration such as 2:1 or 1:2 is less efficient. The following graphic illustrates a 1-to-1 allocation of virtual processor cores in the guest operating system(s) to logical processors available to the host operating system:
Virtual to logical processor ratio

Be aware of the virtual processor limit for different guest operating systems and plan
accordingly - The number of processor cores that are available to the guest operating
system running in a Hyper-V virtual machine can impact the overall performance of the
hosted application. Therefore, consideration should be made as to which guest operating
system will be installed on the Hyper-V virtual machine to host the BizTalk Server 2009
and/or SQL Server instance(s) that host the BizTalk Server 2009 databases. Hyper-V
accommodates the following number of virtual processors for the specified guest operating
system:
Operating system               Virtual processor limit
Windows Server 2008 64-bit     4
Windows Server 2008 32-bit     4
Windows Server 2003 64-bit     2
Windows Server 2003 32-bit     2
Windows Vista SP1 32-bit       2
Windows XP SP3 32-bit          2
Note
For more information about the guest operating systems that are supported on Hyper-V,
see http://go.microsoft.com/fwlink/?LinkID=118347.
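The limits in the table above can be captured in a small planning check. The dictionary mirrors the table; the helper function is illustrative:

```python
# Virtual processor limits per guest operating system, from the table above.
VCPU_LIMITS = {
    "Windows Server 2008 64-bit": 4,
    "Windows Server 2008 32-bit": 4,
    "Windows Server 2003 64-bit": 2,
    "Windows Server 2003 32-bit": 2,
    "Windows Vista SP1 32-bit": 2,
    "Windows XP SP3 32-bit": 2,
}

def validate_vcpu_plan(guest_os: str, requested_vcpus: int) -> bool:
    """Return True if the planned virtual processor count is within the
    Hyper-V limit for the chosen guest operating system."""
    return requested_vcpus <= VCPU_LIMITS.get(guest_os, 0)
```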
Optimize Disk Performance
Follow these guidelines to optimize disk performance of guest operating systems running in a
Hyper-V virtual environment:
Step
Reference
Configure virtual disks for use with the Hyper-V
virtual machines using the fixed-size virtual
hard disk (VHD) option. Fixed-size VHD offers
performance approaching that of physical disks
together with the flexibility of features such as
clustering support and snapshot disk support.
Disk storage in a Hyper-V environment is
accessible through a virtual IDE controller or a
virtual SCSI controller. Unlike previous versions
of Microsoft virtualization technology, there is
no performance difference between using a
virtual IDE controller or a virtual SCSI controller
when accessing virtual hard disks. The
following disk storage options are available for
use in a Hyper-V environment:

Fixed size disks - A fixed-size virtual hard
disk (VHD) is one for which data blocks are
pre-allocated on a physical disk based on
the maximum disk size defined at the time
of creation. For example, if you create a
100 GB fixed-size VHD, Hyper-V will
allocate all 100 GB of data block storage in
addition to the overhead required for the
VHD headers and footers when it creates
the new VHD.

Dynamically expanding disks - A
dynamically expanding VHD is one for
which the initial virtual hard disk contains no
data blocks. Instead space is dynamically
allocated as data is written to the VHD, up
to the maximum size specified when the
VHD was created. For example, a 100-GB
dynamically expanding disk initially contains
only VHD headers and requires less than 2
MB of physical storage space. As new data
is written by the virtual machine to the
dynamically expanding VHD, additional
physical data blocks are allocated in 2-MB
increments to the VHD file, up to a
maximum of 100 GB.

Differencing Disks - A differencing disk is
a special type of dynamically expanding
VHD file that is associated with a “parent”
VHD. In this parent/child storage topology,
the parent disk remains unchanged and any write operations are made to the “child” differencing disk only. Any read operations
are first checked against the differencing
disk to see whether updated content was
written to the differencing disk; if the
content isn’t in the differencing disk, then
the content is read from the parent VHD.
Differencing disks are useful for scenarios
where you need to maintain a particular
baseline configuration and would like to
easily test and then rollback changes to the
baseline. While the flexibility of the
parent/child storage topology provided
through differencing disks is useful for
testing, this is not the optimal configuration
for performance because there is overhead
associated with maintaining the parent/child
topology required when using differencing
disks.

Passthrough disks – The passthrough
disk feature allows the guest operating
system to bypass the Hyper-V Host file
system and access the disk directly. Disks
that are made available to guest operating
systems via passthrough must be set to
“offline” in the Hyper-V host to ensure that
both the host and guest operating system
do not attempt to access the disk
simultaneously. The passthrough disk does
offer a marginal performance advantage
over other disk storage options but does not
support certain functionality of virtual disks, such as virtual machine snapshots and clustering support. Therefore, use of the passthrough disk feature is not recommended in a BizTalk or SQL Server environment because the marginal performance benefits are more than offset by the missing functionality.
For more information about the relative
performance of disk storage choices provided
with Hyper-V, see the blog entry “Hyper-V
Storage Analysis”, at
http://go.microsoft.com/fwlink/?LinkID=132848.
Configure disks for data volumes using the
SCSI controller
This is recommended because the SCSI
controller can only be installed if Hyper-V
integration services are installed whereas the
emulated IDE controller is available without
installing Hyper-V integration services. Disk I/O
performed using the IDE filter driver provided
with integration services is significantly better
than disk I/O performance provided with the
emulated IDE controller. Therefore, to ensure
optimal disk I/O performance for the data files in
a Hyper-V virtualized environment, install
integration services on both the host and guest
operating system and configure disks for data
volumes with the synthetic SCSI controller. For
highly intensive storage I/O workloads that span
multiple data drives, each VHD should be
attached to a separate synthetic SCSI controller
for better overall performance. In addition, each
VHD should be stored on separate physical
disks.
Important
Do not attach a system disk to a SCSI
controller. A virtual hard disk that
contains an operating system must be
attached to an IDE controller.
Optimize Memory Performance
Follow these guidelines to optimize memory performance of guest operating systems running in a
Hyper-V virtual environment:
Step
Reference
Ensure there is sufficient memory installed on
the physical computer that hosts the Hyper-V
virtual machines
• Available physical memory is often the
most significant performance factor for
BizTalk Server 2009 running on a Hyper-V
virtual machine. This is because each
virtual machine must reside in non-paged-pool memory, that is, memory that cannot be
paged to the disk. Because non-paged-pool
memory cannot be paged to disk, the
physical computer that hosts the virtual
machines should have available physical
memory equal to the sum of the memory
allocated for each virtual machine plus the
following:
• 300 MB for the hypervisor
• plus 32 MB for the first GB of RAM
allocated to each virtual machine
• plus another 8 MB for every
additional GB of RAM allocated to
each virtual machine
• plus 512 MB for the host operating
system running on the root
partition
For example, if a Hyper-V virtual machine is
allocated 2 GB of memory in Hyper-V
Manager, the actual physical memory used
when running that virtual machine
would be approximately 2388 MB (300 MB
for the hypervisor + 2 GB allocated for the
virtual machine + 32 MB + 8 MB = 2388 MB).
Because the hypervisor only needs to be
loaded once, initialization of subsequent
virtual machines does not incur the 300 MB
overhead associated with loading the
hypervisor. Therefore, if two Hyper-V virtual
machines are each allocated 2 GB of
memory in Hyper-V Manager, the actual
physical memory used when running these
virtual machines would be
approximately 4476 MB (300 MB for the
hypervisor + 4 GB allocated for the virtual
machines + 64 MB + 16 MB = 4476 MB).
(These totals exclude the 512 MB reserved
for the host operating system in the root
partition.)
Note
As a general rule of thumb, plan to
allocate at least 512 MB memory
for the root partition to provide
services such as I/O virtualization,
snapshot files support, and child
partition management.
• Use a 64-bit guest operating system
when possible – Consider using a 64-bit
operating system for each guest
because, by default, 32-bit Windows operating systems
can only address up to 2 GB of virtual
address space per process. Installing a
64-bit operating system allows applications
to take full advantage of the memory
installed on the physical computer that
hosts the Hyper-V virtual machines.
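The memory sizing rule above can be checked with a short calculation. The helper below is an illustrative sketch (the function name and parameters are mine, not from the guide):

```python
def host_memory_mb(vm_memory_gb, host_os_mb=512, hypervisor_mb=300):
    """Estimate physical memory (MB) needed on a Hyper-V host.

    Rule of thumb from this guide: 300 MB for the hypervisor (paid once),
    32 MB for the first GB of RAM of each VM, 8 MB for each additional GB,
    plus 512 MB for the host operating system in the root partition.
    """
    total = hypervisor_mb + host_os_mb
    for gb in vm_memory_gb:
        total += gb * 1024          # memory allocated to the VM itself
        total += 32 + 8 * (gb - 1)  # per-VM virtualization overhead
    return total

# Two 2-GB VMs: 300 + 512 + 2 * (2048 + 32 + 8) = 4988 MB
print(host_memory_mb([2, 2]))  # -> 4988
```

Note that this includes the 512 MB for the root partition, which the worked examples above omit; pass `host_os_mb=0` to reproduce their figures.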
Optimize Network Performance
Hyper-V supports synthetic and emulated network adapters in virtual machines, but the synthetic
devices offer significantly better performance and reduced CPU overhead. Each of these
adapters is connected to a virtual network switch, which can be connected to a physical network
adapter if external network connectivity is needed. Follow the recommendations in this section to
optimize network performance of guest operating systems running in a Hyper-V virtual
environment.
Note
These recommendations are from the “Performance Tuning for Virtualization Servers”
section of the “Performance Tuning Guidelines for Windows Server 2008” whitepaper
available for download at http://go.microsoft.com/fwlink/?LinkID=135682. For how to tune
the network adapter in the root partition, including interrupt moderation, refer to the
“Performance Tuning for Networking Subsystem” section of this guide. The TCP tunings
in that section should be applied, if required, to the child partitions.
Step
Reference
Configure Hyper-V virtual
machines that are running on the
same Hyper-V host computer to
use a private virtual network.
Follow the recommendations in the "Configure Hyper-V
Virtual Machines that are Running on the same Hyper-V host
computer to use a Private Virtual Network" section of
Network Optimizations.
Disable TCP offloading for the
virtual machine network cards.
Follow the recommendations in the "Disable TCP Offloading for
the Virtual Machine Network Cards" section of Network
Optimizations.
Configure guest operating
systems to use the Hyper-V
synthetic network adapter.
Hyper-V features a synthetic network adapter that is
designed specifically for VMs and achieves significantly reduced
CPU overhead on network I/O compared to the
emulated network adapter, which mimics existing hardware.
The synthetic network adapter communicates between the
child and root partitions over the VMBus by using shared
memory for more efficient data transfer.
The emulated network adapter should be removed through
the VM settings dialog box and replaced with a synthetic
network adapter. The guest requires that the VM integration
services be installed.
If available, enable offload
capabilities for the physical
network adapter driver in the root
partition.
As with the native scenario, offload capabilities in the
physical network adapter reduce the CPU usage of network
I/Os in VM scenarios. Hyper-V currently uses LSOv1 and
TCPv4 checksum offload. The offload capabilities must be
enabled in the driver for the physical network adapter in the
root partition. For details on offload capabilities in network
adapters, refer to the “Choosing a Network Adapter” section
of the “Performance Tuning for Virtualization Servers” section
of the “Performance Tuning Guidelines for Windows Server
2008” whitepaper available for download at
http://go.microsoft.com/fwlink/?LinkID=135682.
Drivers for certain network adapters disable LSOv1 but
enable LSOv2 by default. System administrators must
explicitly enable LSOv1 by using the driver Properties dialog
box in Device Manager.
Configure network switch
topology to make use of multiple
network adapters.
Hyper-V supports creating multiple virtual network switches,
each of which can be attached to a physical network adapter
if needed. Each network adapter in a VM can be connected
to a virtual network switch. If the physical server has multiple
network adapters, VMs with network-intensive loads can
benefit from being connected to different virtual switches to
better use the physical network adapters.
If multiple physical network cards
are installed on the Hyper-V host
computer, bind device interrupts
for each network card to a single
logical processor.
Under certain workloads, binding the device interrupts for a
single network adapter to a single logical processor can
improve performance for Hyper-V. We recommend this
advanced tuning only to address specific problems in fully
using network bandwidth. System administrators can use the
IntPolicy tool to bind device interrupts to specific processors.
For more information about the IntPolicy tool, see
http://go.microsoft.com/fwlink/?LinkID=129773.
If possible, enable VLAN tagging
for the Hyper-V synthetic network
adapter.
The Hyper-V synthetic network adapter supports VLAN
tagging. VLAN tagging provides significantly better network performance
if the physical network adapter supports
NDIS_ENCAPSULATION_IEEE_802_3_P_AND_Q_IN_OOB
encapsulation for both large send and checksum offload.
Without this support, Hyper-V cannot use hardware offload
for packets that require VLAN tagging and network
performance can be decreased.
Install high-speed network
adapters on the Hyper-V host
computer and configure them for
maximum performance.
Consider installing gigabit (or faster) network adapters on the
Hyper-V host computer and configuring them with a
fixed speed rather than "auto negotiate". It is very
important that the network speed, duplex, and flow control
parameters match the settings on the
switch ports to which the adapters are connected.
Follow best practices for
optimizing network performance.
The topic Network Optimizations offers general guidance for
optimizing network performance. While this topic does not
offer specific recommendations for optimizing performance of
BizTalk Server in a Hyper-V virtualized environment, the
techniques are applicable to any BizTalk Server solution,
whether running on physical hardware or on a Hyper-V
virtualized environment.
Optimize SQL Server Performance
Follow the recommendations in the topic SQL Server Optimizations to optimize SQL Server
performance for the BizTalk Server solution. While this topic does not offer specific
recommendations for optimizing performance of BizTalk Server in a Hyper-V virtualized
environment, the techniques are applicable to any BizTalk Server solution, whether running on
physical hardware or on a Hyper-V virtualized environment.
Optimize BizTalk Server Solution
Follow the recommendations in the topic BizTalk Server Optimizations to optimize performance of
the BizTalk Server solution. While this topic does not offer specific recommendations for
optimizing performance of BizTalk Server in a Hyper-V virtualized environment, the techniques
are applicable to any BizTalk Server solution, whether running on physical hardware or on a
Hyper-V virtualized environment.
System Resource Costs on Hyper-V
System Resource Costs Associated with Running
a Guest Operating System on Hyper-V
As with any server virtualization software, there is a certain amount of overhead associated with
running the virtualization code required to support guest operating systems running on Hyper-V.
The following list summarizes the overhead associated with specific resources when running
guest operating systems on Hyper-V virtual machines:
CPU Overhead
The CPU overhead associated with running a guest operating system in a Hyper-V virtual
machine was found to range between 9 and 12%. That is, a guest operating system
running on a Hyper-V virtual machine typically had 88-91% of the CPU resources
available to an equivalent operating system running on physical hardware.
Memory Overhead
For the Hyper-V host computer, the memory cost associated with running a guest operating
system on a Hyper-V virtual machine was observed to be approximately 300 MB for the
hypervisor, plus 32 MB for the first GB of RAM allocated to each virtual machine, plus another 8
MB for every additional GB of RAM allocated to each virtual machine. For more information about
allocating memory to guest operating systems running on a Hyper-V virtual machine, see the
“Optimizing Memory Performance” section in Checklist: Optimizing Performance on Hyper-V.
Network Overhead
Network latency directly attributable to running a guest operating system in a Hyper-V virtual
machine was observed to be less than 1 ms and the guest operating system typically maintained
a network output queue length of less than one. For more information about measuring the
network output queue length, see the “Measuring Network Performance” section in Checklist:
Measuring Performance on Hyper-V.
Disk Overhead
When using the passthrough disk feature in Hyper-V, disk I/O overhead associated with running a
guest operating system in a Hyper-V virtual machine was found to range between 6 and 8%. For
example, a guest operating system running on Hyper-V typically had 92-94% of the disk
I/O throughput available to an equivalent operating system running on physical hardware, as measured by the
open source disk performance benchmarking tool IOMeter.
For information about measuring disk latency on a Hyper-V host or guest operating system using
Performance Monitor, see the “Measuring Disk I/O Performance” section in Checklist: Measuring
Performance on Hyper-V.
The remainder of this section provides background information on BizTalk Server disk
performance, describes the test configuration parameters used, and provides a summary of test
results obtained.
Disk Performance When Running a BizTalk Server Solution on Hyper-V
BizTalk Server 2009 is an extremely database-intensive application that may require the creation
of up to 13 databases in SQL Server. BizTalk Server 2009 persists data to disk with great
frequency and, furthermore, does so within the context of an MSDTC transaction. Therefore,
database performance is paramount to the overall performance of any BizTalk Server solution.
Hyper-V provides a synthetic SCSI controller and an IDE filter driver which both provide
significant performance benefits over using an emulated IDE device such as is provided with
Virtual Server 2005.
Configure disks for data volumes using the SCSI controller. Doing so guarantees that the
integration services are installed, because the SCSI controller can be installed only when Hyper-V
integration services are present, whereas the emulated IDE controller is available without
them. Disk I/O performed using either the SCSI controller or the
IDE filter driver provided with integration services is significantly better than disk I/O performance
provided with the emulated IDE controller. Therefore, to ensure optimal disk I/O performance for
the data files in a Hyper-V virtualized environment, install integration services on both the host
and guest operating system and configure disks for data volumes with the synthetic SCSI
controller. For highly intensive storage I/O workloads that span multiple data drives, each VHD
should be attached to a separate synthetic SCSI controller for better overall performance. In
addition, each VHD should be stored on separate physical disks or LUNs.
Measuring PassThrough Disk Performance
During any consolidation exercise it is important to make maximum use of available resources.
As discussed previously, storage I/O on SQL data volumes plays a significant part in the overall
performance of a BizTalk Server 2009 solution. Therefore, as part of this guidance, the relative
performance of a physical disk to the performance of a passthrough disk in Hyper-V was tested.
The relative performance of the MessageBox data drive in Physical_SQL01 and Virtual_SQL01
was measured using the IOMeter open source tool originally developed by Intel Corporation and
now maintained by the Open Source Development Lab (OSDL). For more information about
IOMeter, see http://go.microsoft.com/fwlink/?LinkId=122412.
The following tables describe the physical and virtual hardware configuration used in the test
environment, the IOMeter configuration options that were used, a description of the test that was
run, and a summary of results.
Configuration Used for Testing
Physical_SQL01
Model: HP DL580
Processor: Quad-processor, quad-core Intel Xeon 2.4 GHz
Memory: 8 GB
Networking: HP NC373i Multifunction Gigabit Server adapter
SAN configuration: Direct-attached SAN storage (see table below)
Physical_SQL01 – SAN Configuration
Drive letter | Description | LUN size | RAID configuration
G: | Data_Sys | 10 | RAID 0 + 1
H: | Logs_Sys | 10 | RAID 0 + 1
I: | Data_TempDb | 50 | RAID 0 + 1
J: | Logs_TempDb | 50 | RAID 0 + 1
K: | Data_BtsMsgBox | 300 | RAID 0 + 1
L: | Logs_BtsMsgBox | 100 | RAID 0 + 1
M: | MSDTC | 5 | RAID 0 + 1
Hyper-V_Host_SQL01
Model: HP DL580
Processor: Quad-processor, quad-core Intel Xeon 2.4 GHz
Memory: 32 GB
Networking: Broadcom BCM5708C NetXtreme II GigE
Virtual_SQL01 – Virtual Machine Configuration
Virtual processors: 4 allocated
Memory: 8 GB
Networking: Virtual Machine Networking connected to Broadcom BCM5708C NetXtreme II GigE
Hard disk configuration: IDE controller – 30 GB fixed VHD for the operating system; SCSI controller – 7 directly attached passthrough SAN LUNs (see table below)
Virtual_SQL01 – SAN Configuration
Drive letter | Description | LUN size | RAID configuration
G: | Data_Sys | 10 | RAID 0 + 1
H: | Logs_Sys | 10 | RAID 0 + 1
I: | Data_TempDb | 50 | RAID 0 + 1
J: | Logs_TempDb | 50 | RAID 0 + 1
K: | Data_BtsMsgBox | 300 | RAID 0 + 1
L: | Logs_BtsMsgBox | 100 | RAID 0 + 1
M: | MSDTC | 5 | RAID 0 + 1
IOMeter Configuration
The IOMeter tool can be used for benchmarking and troubleshooting by replicating the
read/write behavior of applications. IOMeter is a configurable tool that can be used to
simulate many different types of I/O workloads. For purposes of this test scenario, IOMeter
configuration parameters were set as described in the table below on both the physical SQL
Server computer that was tested and on the guest operating system that was running SQL Server
in a Hyper-V virtual machine:
IOMeter – Passthrough Disk Comparison Test Configuration
Test length | 10 minutes
Ramp-up time | 30 seconds
Number of workers | 4
Transfer request size | 2 KB
Read/write distribution | 66% read, 33% write
Burst length | 1 I/O
Target drive | K:\
Test Description
The SQL Server service was stopped on both servers to ensure that IOMeter was the only
process performing I/O against the disk. The LUNs used in this test were both located on the
same SAN, which was dedicated to this lab environment. No other I/O activity was performed
against the SAN during the test, to ensure that the results were not skewed. The test was then run
by executing the IOMeter tool locally on each SQL Server computer, and the following performance
monitor counters were collected:
Collected from both Virtual_SQL01 and Physical_SQL01:
• \LogicalDisk(*)\*
• \PhysicalDisk(*)\*
Collected from virtual machine Hyper-V_02:
• \Hyper-V Virtual Storage Device\*
Results
The passthrough disk was able to attain over 90% of the throughput of the SAN LUN connected
directly to Physical_SQL01. Total, read, and write I/Os per second were all within 10%, as was the
total MB transferred per second. Response times for healthy disks should be between 1-15 ms
for reads and writes. Average I/O response times were less than 4 ms on both disks. Random read
response time was 5.4 ms on the physical disk and 5.7 ms on the passthrough disk. Write response
time was less than 0.5 ms in both the physical and virtual environments.
The results indicate that a passthrough disk using the enlightened SCSI controller can provide
over 90% of the performance of a directly connected physical disk. I/O subsystem performance is
critical for efficient BizTalk Server 2009 operation; by providing excellent throughput and
response times, Hyper-V is an excellent candidate for consolidating a BizTalk Server 2009
environment. The table below summarizes the disk test results observed when
comparing performance of a passthrough disk to a physical disk:
Measurement | Physical_SQL01 (physical disk) | Virtual_SQL01 (passthrough) | Relative performance of passthrough disks to physical disks
Total I/Os per second | 269.73 | 250.47 | 92.86%
Read I/Os per second | 180.73 | 167.60 | 92.74%
Write I/Os per second | 89.00 | 82.87 | 93.11%
Total MBs per second | 0.53 | 0.49 | 92.45%
Average read response time (ms) | 5.4066 | 5.7797 | 93.54%
Average write response time (ms) | 0.2544 | 0.3716 | 68.42%
Average I/O response time (ms) | 3.7066 | 3.9904 | 92.89%
Note
Although the relative performance of the passthrough disks for Average write response
time was 68.42% of the performance of the physical disks, the Average write response
time of the passthrough disks was still well within the established acceptable limit
of 10 ms.
Note
• The percentage values for Total I/Os per second, Read I/Os per second, Write I/Os per
second, and Total MBs per second were calculated by dividing the passthrough disk values
by the corresponding physical disk values.
• The percentage values for Average read response time (ms), Average write response
time (ms), and Average I/O response time (ms) were calculated by dividing the physical disk
values by the corresponding passthrough disk values.
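The calculation rules in this note can be expressed as a small helper (an illustrative sketch using values from the results table; the function name and parameters are mine):

```python
def relative_perf(physical, passthrough, higher_is_better=True):
    """Relative performance of a passthrough disk vs. a physical disk, in percent.

    Per the note above: throughput counters divide passthrough by physical;
    response-time counters (where lower is better) divide physical by passthrough.
    """
    ratio = passthrough / physical if higher_is_better else physical / passthrough
    return round(100 * ratio, 2)

print(relative_perf(269.73, 250.47))                          # Total I/Os per second -> 92.86
print(relative_perf(3.7066, 3.9904, higher_is_better=False))  # Average I/O response time
```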
Testing BizTalk Server Virtualization
Performance
Each of the performance test scenarios described in this guide was deployed on physical
computers in a Microsoft test lab, and then the same load test was performed on each distinct
system architecture. The host operating system on each physical computer was a full installation
of Windows Server 2008 Enterprise, 64-Bit Edition, with the Hyper-V server role installed. The
virtual machines used for testing BizTalk Server 2009 were set up with Windows Server 2008
Enterprise, 64-Bit Edition as the guest operating system. The virtual machine used for testing
SQL Server 2008 was set up with Windows Server 2008 Enterprise, 64-Bit Edition as the guest
operating system. The test scenarios, test methods, performance test results, and subsequent
analysis were used to formulate a series of best practices and guidance for designing,
implementing, and optimizing virtualized BizTalk Server.
• Test Scenario 1: Baseline – The first scenario was designed to establish baseline
performance of a BizTalk Server environment running on physical hardware only. For this
scenario, both BizTalk Server and SQL Server were installed and run on physical hardware
only.
• Test Scenario 2: Virtual BizTalk Server/Physical SQL Server – The second scenario was
designed to determine the performance impact of hosting BizTalk Server on multiple guest
virtual machines on the same physical server. Test results taken from multiple virtual
machine configurations were then compared to a physical machine with the same
number of logical processors as the total number dispersed across all virtual machines.
• Test Scenario 3: Virtual BizTalk Server/Virtual SQL Server on separate physical
Hyper-V hosts – The third scenario was conducted to determine the performance impact of running
both BizTalk Server and SQL Server in a virtualized environment. Tests were performed
using BizTalk Server running on Hyper-V virtual machines with the BizTalk databases hosted
on a SQL Server 2008 instance running on a Hyper-V virtual machine. For this scenario, the
BizTalk Server virtual machines and the SQL Server virtual machines were hosted on
separate physical Hyper-V hosts.
• Test Scenario 4: Server consolidation – Consolidating a full BizTalk group, including
SQL Server, onto one physical server on Hyper-V – In this scenario, all virtual machines (VMs)
needed to run the test application are hosted on one physical server. The purpose of this
scenario is to determine the performance costs of hosting SQL Server 2008 and BizTalk
Server 2009 virtual machines in a consolidated environment.
This section provides an overview of the test application and the server architecture used for
each scenario and also presents key performance indicators (KPIs) observed during testing.
In This Section
Test Scenario Overview
Test Scenario Server Architecture
Test Results: BizTalk Server Key Performance Indicators
Test Results: SQL Server Key Performance Indicators
Test Results: Networking Key Performance Indicators
Test Results: Memory Key Performance Indicators
Summary of Test Results
Test Scenario Overview
This topic provides an overview of the test application, describes the testing methodology
used, and lists the key performance indicators (KPIs) captured during load testing.
Test Application
A synchronous request-response application was used to compare performance of BizTalk
Server 2009 running on Hyper-V to BizTalk Server 2009 running on physical hardware. This
application was used to illustrate performance of a BizTalk Server solution that has been tuned
for low latency. Low latency messaging is critical for certain scenarios such as online banking
where a client sends a request and expects a response message within a very short interval (for
example < 3 seconds).
The figure below illustrates the high-level architecture used. Visual Studio Team System (VSTS)
2008 Test Load Agent invoked a custom test class, which used the WCF transport to generate
load against BizTalk Server. The BizTalk Server application in this scenario was exposed via a
WCF-BasicHttp request-response receive location. VSTS 2008 Test Load Agent was used as the
test client because of the great flexibility that it provides, including the capability to configure the
number of messages sent in total, number of simultaneous threads, and the sleep interval
between requests sent.
Several VSTS 2008 Test Load Agent computers can be run in tandem to simulate real world load
patterns. For these tests, the VSTS 2008 Test Load Agent computers were driven by a single
VSTS 2008 Test Load Agent Controller computer that was also running BizUnit 3.0. As a result, a
consistent load was sent to both the physical and virtual BizTalk Server computers. For more
information about using VSTS 2008 Test Edition to generate simulated load for testing, see
http://go.microsoft.com/fwlink/?LinkID=132311.
Test Application Architecture
1. A WCF-BasicHttp or WCF-Custom Request-Response Receive Location receives a new
CalculatorRequest from a Test Load Agent computer.
2. The XML disassembler component promotes the Method element inside the
CalculatorRequest xml document. The Message Agent submits the incoming message to the
MessageBox database (BizTalkMsgBoxDb).
3. The inbound request starts a new instance of the LogicalPortsOrchestration. This
orchestration uses a direct bound port to receive the CalculatorRequest messages with the
Method promoted property = “LogicalPortsOrchestration”.
4. The LogicalPortsOrchestration uses a loop to retrieve operations and for each item it invokes
the downstream Calculator WCF web service using a Logical Solicit-Response Port. The
request message for the Calculator WCF web service is created using a helper component
and published to the MessageBox.
5. The request message is consumed by a WCF-BasicHttp Send Port.
6. The WCF-BasicHttp Send Port invokes one of the methods (Add, Subtract, Multiply, Divide)
exposed by the Calculator WCF web service.
7. The Calculator WCF web service returns a response message.
8. The response message is published to the MessageBox.
9. The response message is returned to the caller LogicalPortsOrchestration. The orchestration
repeats this pattern for each operation within the inbound CalculatorRequest xml document.
10. The LogicalPortsOrchestration publishes the CalculatorResponse message to the
MessageBox.
11. The response message is retrieved by the Request-Response WCF-BasicHttp Receive
Location.
12. The response message is returned to the Load Test Agent computer.
A screenshot of the orchestration used during the load test is shown below:
Note
For purposes of illustration, the orchestration depicted below is a simplified version of the
orchestration that was actually used during load testing. The orchestration used during
load testing included multiple scopes, error handling logic, and additional port types.
Test Application Orchestration
Testing Methodology
Performance testing involves many tasks which, if performed manually, are repetitive,
monotonous, and error-prone. In order to improve test efficiency and provide consistency
between test runs, Visual Studio 2008 Team System (VSTS) Test Edition with BizUnit 3.0 was
used to automate the tasks required during the testing process. VSTS 2008 Test Load Agent
computers were used as the test client to generate the message load against the system and the
same message types were used on each test run to improve consistency. Following this process
provides a consistent set of data for every test run. For more information about BizUnit 3.0, see
http://go.microsoft.com/fwlink/?LinkID=85168. For more information about Visual Studio 2008
Team System Test Edition, see http://go.microsoft.com/fwlink/?LinkID=141387.
The following steps were automated:
• Stop BizTalk hosts.
• Clean up test directories.
• Restart IIS.
• Clean up the BizTalk Server MessageBox database.
• Restart SQL Server.
• Clear event logs.
• Create a test results folder for each run to store associated performance metrics and log files.
• Start BizTalk hosts.
• Load Performance Monitor counters.
• Warm up the BizTalk environment with a small load.
• Send through a representative run.
• Write performance logs to the results folder.
• Collect application logs and write them to a .csv file in the results folder.
• Run the Performance Analysis of Logs (PAL), Relog, and Log Parser tools against the
collected performance logs to produce statistics, charts, and reports. For more information
about PAL, Relog, and Log Parser, see Appendix D: Tools for Measuring Performance.
Note
All tracking was disabled and the BizTalk Server SQL Server Agent job was disabled
during testing.
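The automated run-preparation steps above can be outlined as a small driver script. This is an illustrative sketch only: the service names, commands, and folder layout are hypothetical placeholders, not the actual harness used in the lab (which was driven by VSTS 2008 Test Edition and BizUnit 3.0).

```python
import subprocess
from datetime import datetime
from pathlib import Path

def run(cmd):
    """Run one shell command, failing loudly so a broken setup step stops the run."""
    subprocess.run(cmd, shell=True, check=True)

def prepare_test_run(results_root="results", runner=run):
    """Reset the environment and create a uniquely named results folder for one run."""
    runner("net stop BTSSvc$BizTalkServerApplication")       # stop BizTalk host (hypothetical name)
    runner("iisreset")                                       # restart IIS
    runner("net stop MSSQLSERVER && net start MSSQLSERVER")  # restart SQL Server
    # One folder per run keeps performance logs and event logs together
    # for the retrospective analysis described in this section.
    folder = Path(results_root) / datetime.now().strftime("run_%Y%m%d_%H%M%S")
    folder.mkdir(parents=True)
    runner("net start BTSSvc$BizTalkServerApplication")      # restart BizTalk host
    return folder
```

The `runner` parameter exists so the sequence can be exercised without touching real services: pass a stub that records commands instead of executing them.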
To ensure that the results of this lab were able to provide a comparison of the performance of
BizTalk Server in a physical and Hyper-V environment, performance metrics and logs were
collected in a centralized location for each test run.
The test client was used to create a unique results directory for each test run. This directory
contained all the performance logs, event logs and associated data required for the test. This
approach provided information needed when retrospective analysis of prior test runs was
required. At the end of each test, the raw data was compiled into a set of consistent results and
key performance indicators (KPIs). Collecting a consistent results set for physical and virtualized
machines provided the points of comparison needed between the different test runs and different
environments. The data collected included:
• Environment – To record which environment the test was being run against: either BizTalk
Server on physical hardware or BizTalk Server on Hyper-V.
• Test Run Number – To uniquely identify each test run.
• Test Case – To record the architecture of the BizTalk Server solution used during testing
(for example, orchestration with logical ports versus orchestration using inline sends).
• Date – To record the date and time the test was run.
• Time Started – As reported by the first VSTS load test agent initiated.
• Time Stopped – As reported by the last VSTS load test agent to complete.
• Test Duration in Minutes – To record the duration of the test.
• Messages Sent in Total – To record the total number of messages sent from the Load
Agent computers to the BizTalk Server computers during the test.
• Messages Sent per Second – To record the messages sent per second from the Load
Agent computers to the BizTalk Server computers during the test.
• Average Client Latency – To record the average amount of time between when Test Load
Agent clients initiated a request to, and received a response from, the BizTalk Server
computers during the load test.
• Request-Response Duration Average (ms) – As reported by the BizTalk:Messaging
Latency\Request-Response Latency (sec) Performance Monitor counter for the
BizTalkServerIsolatedHost.
Note
Where multiple virtualized BizTalk hosts were running, an average of these counters
as calculated from the logs was used.
• Orchestrations Completed per Second – As reported by the XLANG/s
Orchestrations(BizTalkServerApplication)\Orchestrations completed/sec Performance
Monitor counter. This counter provides a good measure of the throughput of the BizTalk
Server solution.
• % of Messages Processed < 3 Seconds – To record the percentage of messages
processed within 3 seconds during the test.
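The collected fields above map naturally onto a small record type, with the derived KPIs computed from the raw counts. A sketch (field names are paraphrased from the list, not the actual harness schema):

```python
from dataclasses import dataclass

@dataclass
class TestRunResult:
    environment: str          # "physical" or "hyper-v"
    run_number: int
    duration_minutes: float
    messages_sent_total: int
    latencies_seconds: list   # per-message client latency samples

    @property
    def messages_per_second(self):
        return self.messages_sent_total / (self.duration_minutes * 60)

    @property
    def pct_under_3_seconds(self):
        """Percentage of messages processed in under 3 seconds."""
        under = sum(1 for s in self.latencies_seconds if s < 3.0)
        return 100.0 * under / len(self.latencies_seconds)

r = TestRunResult("hyper-v", 1, 10, 12000, [0.8, 1.2, 2.9, 3.5])
print(r.messages_per_second)    # 12000 msgs / 600 s -> 20.0
print(r.pct_under_3_seconds)    # 3 of 4 samples under 3 s -> 75.0
```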
VSTS 2008 Load Test was used to generate a consistent load throughout all the tests. The
following test run settings and load pattern were modified during testing to adjust the load profile
of each test:
• Test Run Settings
The following test run setting was modified depending on the test being performed:
  • Run Duration – Specifies how long the test is run.
Test Run Settings
• Test Pattern Settings
The following test pattern settings were modified depending on the test being performed:
a. Pattern – Specifies how the simulated user load is adjusted during a load test. Load
patterns are either Constant, Step, or Goal based. All load testing performed was either
Constant or Step.
Note

All testing performed for purposes of this guide used either a Constant load pattern or a
Step load pattern. Constant load patterns and Step load patterns provide the following
functionality:
62
b. Constant User Count (Constant Load Pattern) – Number of virtual users that are
generating load against the endpoint address specified in the app.config file of the Visual
Studio Load Test project. This value is specified in the Load Pattern settings used for the
load test.
c.
Initial User Count (Step Load Pattern) – Number of virtual users that are generating
load against the specified endpoint address at the beginning of a Step Load Pattern test.
This value is specified in the Load Pattern settings used for the load test.
d. Maximum User Count (Step Load Pattern) – Number of virtual users that are
generating load against the specified endpoint address at the end of a Step Load Pattern
test. This value is specified in the Load Pattern settings used for the load test.
e. Step Duration (Step Load Pattern) – Number of seconds that virtual users are
generating load against the specified endpoint address for a load test step.
f. Step User Count (Step Load Pattern) – Number of virtual users to increase at each
step when using a Step Load Pattern.
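The effect of the Step load pattern settings above can be modeled with a few lines of code. This is an illustrative sketch of how the virtual user count grows over a test run; the function name and parameters are ours, not part of the VSTS 2008 Load Test API.

```python
def step_load_user_count(elapsed_sec, initial_users, max_users,
                         step_duration_sec, step_user_count):
    """Virtual users active at a given time under a Step load pattern:
    start at initial_users, add step_user_count every step_duration_sec,
    and never exceed max_users (illustrative model, not VSTS code)."""
    steps_completed = elapsed_sec // step_duration_sec
    users = initial_users + steps_completed * step_user_count
    return min(users, max_users)

# Start at 25 users, add 25 every 60 seconds, cap at 100:
print(step_load_user_count(0, 25, 100, 60, 25))    # 25
print(step_load_user_count(125, 25, 100, 60, 25))  # 75 (two steps completed)
print(step_load_user_count(600, 25, 100, 60, 25))  # 100 (capped)
```

A Constant load pattern is the degenerate case in which the user count never changes during the run.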
Test Pattern Settings
For more information about working with load tests in Visual Studio 2008, see the topic Working
with Load Tests in the Visual Studio 2008 Team System documentation at
http://go.microsoft.com/fwlink/?LinkId=141486.
Key Performance Indicators Measured During
Testing
The following Performance Monitor counters were captured as key performance indicators (KPI)
for all test runs:
Note
For more information about evaluating performance with Performance Monitor counters,
see Checklist: Measuring Performance on Hyper-V.
BizTalk Server KPI

Documents processed per second – As measured by the BizTalk:Messaging/Documents
processed/Sec counter.

Latency – As measured by the response times returned by the VSTS 2008 Load Test Controller.
SQL Server KPI

SQL Server processor utilization – As measured by the
\SQL\Processor(_Total)\% Processor Time counter. This counter measures CPU utilization of
SQL Server processing on the SQL Server computer.

Transact SQL command processing performance – As measured by the \SQL
Server:SQL Statistics\Batch Requests/sec counter. This counter measures the number of
Transact-SQL command batches received per second. This counter is used to measure
throughput on the SQL Server computer.
Networking KPI

BizTalk Server network throughput – As measured by the \Network Interface(*)\Bytes
Total/sec performance monitor counter on the BizTalk Server computers.

SQL Server network throughput – As measured by the SQL Network Interface\Bytes
Total/sec (Avg) returned by the VSTS 2008 Load Test Controller.
Memory KPI

Available memory – As measured by the \Memory\Available Mbytes counter for the
various scenarios.
Physical Infrastructure Specifics
The following settings were adjusted on each of the servers that were installed.
For all servers:

The paging file was set to 1.5 times the amount of physical memory allocated. The paging file
was set to a fixed size by setting the initial size and maximum size to the same value
in MB.

The “Adjust for best performance” performance option was selected from the advanced
System Properties screen.

It was verified that the system had been adjusted for best performance of Background
services in the Performance Options section of System Properties.

Windows Server 2008 was installed as the guest operating system on each of the virtual
machines.

Windows Update was successfully run on all servers to install the latest security updates.
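The paging file rule listed above (1.5 times physical memory, with identical initial and maximum sizes) is simple arithmetic; the following sketch shows the values used, with an illustrative function name of our own.

```python
def fixed_paging_file_mb(physical_memory_mb):
    """Paging file sizing used in the lab: 1.5x physical memory,
    with initial and maximum sizes set equal to fix the file size."""
    size_mb = int(physical_memory_mb * 1.5)
    return {"initial_mb": size_mb, "maximum_mb": size_mb}

# A guest allocated 3 GB (3072 MB) of RAM gets a 4608 MB fixed paging file:
print(fixed_paging_file_mb(3072))  # {'initial_mb': 4608, 'maximum_mb': 4608}
```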
For SQL Server:

SQL Server 2008 was installed as per the installation guide available at
http://go.microsoft.com/fwlink/?LinkId=141021.

The SQL Server computer had its SAN LUNs configured as per the table below. The database and
log files were separated across the LUNs as follows to reduce possible disk I/O contention:


The Data_Sys volume was used to store all database files (including system and BizTalk
databases) except the MessageBox and TempDb databases.

The Log_Sys volume was used to store all log files (including system and BizTalk Server
databases) except the MessageBox and TempDb databases.

The Data_TempDb volume was used to store the TempDb database file.

The Logs_TempDb volume was used to store the TempDb log file.

The MessageBox database file was stored on the Data_BtsMsgBox volume and the log
file was stored on the Log_BtsMsgBox volume.
In addition to this, a separate LUN was provided for the MSDTC log file. On high throughput
BizTalk systems, the MSDTC log file activity has been shown to cause an I/O bottleneck if it
is left on the same physical drive as the operating system.
Volume Name            | Files                             | LUN Size (GB) | Host Partition Size (GB) | Cluster Size
Data_Sys               | MASTER and MSDB data files        | 10            | 10                       | 64 KB
Logs_Sys               | MASTER and MSDB log files         | 10            | 10                       | 64 KB
Data_TempDb            | TempDB data file                  | 50            | 50                       | 64 KB
Logs_TempDb            | TempDB log file                   | 50            | 50                       | 64 KB
Data_BtsMsgBox         | BizTalkMsgBoxDb data file         | 300           | 100                      | 64 KB
Logs_BtsMsgBox         | BizTalkMsgBoxDb log file          | 100           | 100                      | 64 KB
Data_BAMPrimaryImport  | BAMPrimaryImport data file        | 10            | 10                       | 64 KB
Logs_BAMPrimaryImport  | BAMPrimaryImport log file         | 10            | 10                       | 64 KB
Data_BizTalkDatabases  | Other BizTalk database data files | 20            | 20                       | 64 KB
Logs_BizTalkDatabases  | Other BizTalk database log files  | 20            | 20                       | 64 KB
N/A                    | MSDTC log file                    | 5             | 5                        | N/A
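The data/log separation rule embodied in the volume layout above can be expressed as a simple placement map and checked programmatically. The mapping below is an illustrative subset of the lab layout, and the function is ours, not a BizTalk Server or SQL Server tool.

```python
# Illustrative subset of the file-to-volume placement described above.
file_placement = {
    "BizTalkMsgBoxDb_data": "Data_BtsMsgBox",
    "BizTalkMsgBoxDb_log": "Logs_BtsMsgBox",
    "TempDb_data": "Data_TempDb",
    "TempDb_log": "Logs_TempDb",
    "BAMPrimaryImport_data": "Data_BAMPrimaryImport",
    "BAMPrimaryImport_log": "Logs_BAMPrimaryImport",
}

def data_and_log_separated(placement):
    """Return True if no database keeps its data file and its log file
    on the same volume (the disk I/O contention rule described above)."""
    for name, volume in placement.items():
        if name.endswith("_data"):
            if placement.get(name.replace("_data", "_log")) == volume:
                return False
    return True

print(data_and_log_separated(file_placement))  # True
```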

BizTalk Server 2009 was installed as per the installation guides available at
http://go.microsoft.com/fwlink/?LinkId=128383.

The BizTalk Server Best Practices Analyzer (BPA) tool was used to perform platform
validation once the system had been configured. The BizTalk Server BPA is available at
http://go.microsoft.com/fwlink/?LinkId=67150.
Virtualization Specifics
A single 50 GB fixed VHD was used to host the operating system for each Hyper-V virtual
machine.
Fixed VHDs were used instead of dynamically sized VHDs because they immediately allocate the
maximum storage for the VHD to the file on the drive where it is hosted. This reduces
fragmentation of the VHD file occurring on the physical drive where it is hosted, which improves
disk I/O performance.
To set up the virtual machines, an installation of Windows Server 2008 64-bit edition was
performed on a single VHD. Once all appropriate updates had been installed, the base virtual
machine was imaged using the sysprep utility that is installed with Windows Server 2008, in the
%WINDIR%\system32\sysprep directory.
This base VHD was then copied and used as the basis for all Hyper-V virtual machines that were
deployed across the environment. Sysprep was run on the base VHD image to reset system
security identifiers before any SQL Server or BizTalk Server binaries were deployed to the
system.
Note
Running Sysprep after BizTalk Server 2009 has been installed and configured on the
server can be accomplished through the use of a Sysprep answer file and scripts
provided with BizTalk Server 2009. These sample scripts are designed for use with
BizTalk Server 2009 installed on 32-bit and 64-bit versions of Windows Server 2008 only.
For more information see the BizTalk Server 2009 online documentation.
The Unattended Windows Setup Reference is available at
http://go.microsoft.com/fwlink/?LinkId=142364.
See Also
Appendix C: BizTalk Server and SQL Server Hyper-V Supportability
Test Scenario Server Architecture
This topic provides an overview of the flow of messages between servers during load testing and
the distinct server architectures against which the load test was performed.
Overview of Message Flow During Load Testing
The following diagram provides a generic overview of the server architecture used for all test
scenarios and the flow of messages between servers during a load test.
Note
Each distinct server architecture that was tested is described in the section Baseline
Server Architecture.
The following figure provides an overview of the message flow. The numbers in the figure
correspond to the steps listed below the figure.
Message Flow Overview
1. Load testing is initiated by the Load Agent Controller computer VSTS_TestController:

A Visual Studio 2008 project on VSTS_TestController is executed. The project loads an
instance of the BizUnit class, loads the specified BizUnit XML configuration file, and
begins executing the steps defined in the BizUnit configuration file.
Note
For more information about the XML configuration file used by BizUnit, see the
topic “Defining Tests Using an XML Configuration File” at
http://go.microsoft.com/fwlink/?LinkId=143432.

After completing the Test Setup steps, one of the steps in the BizUnit project executes a
command that displays a dialog box which prompts you to start a “priming” test run to
submit priming messages to the BizTalk Server environment.

Priming messages are submitted from a separate Visual Studio 2008 Test project on
VSTS_TestController. Priming messages are sent to “warm up” the test environment by
initializing system caches.

After all priming messages have been processed, the BizUnit instance loads
Performance Monitor counters for all computers being tested in the main test run and
executes a command to display a dialog box which prompts you to submit messages for
the main test run.

The Visual Studio 2008 Load Test project on VSTS_TestController directs the Load
Test Agent computers to submit messages for the main test run.
2. The Load Test Agent computers submit test messages to the BizTalk Server 2009 computers
specified in the app.config file of the Visual Studio 2008 Load Test project on the Load Test
Controller computer (VSTS_TestController).
3. The BizTalk Server computers receive the messages submitted by the Load Test Agent
computers. For this load test, the messages were received by a two-way request-response
receive location.

BizTalk Server publishes the message to the MessageBox database.

The messages are consumed by an orchestration.

The orchestration is bound to a two-way solicit-response send port which invokes the
downstream calculator service.
Note
The downstream calculator service is based upon Windows Communication
Foundation samples described at http://go.microsoft.com/fwlink/?LinkId=141762. The
Windows Communication Foundation samples are available for download at
http://go.microsoft.com/fwlink/?LinkId=87352.
4. The calculator service consumes the request from BizTalk Server 2009 and returns a
response to the BizTalk Server 2009 solicit-response send port.
5. BizTalk Server processes the response and persists the response message to the
MessageBox database. Then the response message from the Calculator web service is
retrieved from the MessageBox database by the BizTalk request-response port, and a
response message is delivered back to the Load Test Agent computers.
Baseline Server Architecture
For the Baseline Server architecture, the Hyper-V role was not installed, and both BizTalk Server
2009 and SQL Server 2008 were installed onto the host operating system. This was done to
establish “baseline” performance metrics for the BizTalk Server 2009 solution on a physical
hardware environment.
The following figure depicts the physical BizTalk Server and SQL Server 2008 tiers for the
Baseline Server Architecture.
Physical BizTalk Server / Physical SQL Server (Baseline)


BizTalk Server - 2 BizTalk Server computers configured as follows:

One BizTalk Server 2009 computer with 6 GB RAM and 8 processor cores available.

One BizTalk Server 2009 computer with 3 GB RAM and 4 processor cores available.

Total of 6 + 3 = 9 GB RAM available and 8 + 4 = 12 processor cores available for BizTalk
Server.
SQL Server - 1 SQL Server 2008 computer configured as follows:

8 GB RAM available.

4 processor cores available.
Virtual BizTalk Server / Physical SQL Server
The following figure depicts the virtual BizTalk Server and physical SQL Server tiers.
Virtual BizTalk Server / Physical SQL Server
For this scenario, the load test was performed against BizTalk Server 2009 running on Hyper-V
virtual machines and SQL Server 2008 running on physical hardware.
Note
The allocation of RAM and processor cores described below was identical for each
non-baseline scenario; the only difference was whether certain computers were running on a
Hyper-V virtual machine or on physical hardware.


BizTalk Server - 3 BizTalk Server 2009 computers configured as follows:

3 GB RAM allocated to each BizTalk Server computer with a total of 3 x 3 = 9 GB RAM
available for BizTalk Server.

4 processor cores allocated to each BizTalk Server computer with a total of 3 x 4 = 12
processor cores available for BizTalk Server.
SQL Server - 1 SQL Server 2008 computer configured as follows:

8 GB RAM available.

4 processor cores available.
Virtual BizTalk Server / Virtual SQL Server
The following figure depicts a virtual BizTalk Server 2009 computer and a virtual SQL
Server 2008 computer on separate Hyper-V host computers.
Virtual BizTalk Server / Virtual SQL Server
For this scenario, the load test was performed against BizTalk Server running on Hyper-V virtual
machines and SQL Server running on a Hyper-V virtual machine. The BizTalk Server Hyper-V
virtual machines and the SQL Server Hyper-V virtual machine were run on separate Hyper-V host
computers.
Note
The allocation of RAM and processor cores for this scenario is identical to the allocation
of RAM and processor cores for the Virtual BizTalk Server / Physical SQL Server
scenario, the only difference being that SQL Server was configured to run on a Hyper-V
virtual machine rather than physical hardware.
Consolidated Environment
The following figure depicts virtual BizTalk Server 2009 computers and a virtual SQL Server 2008
computer consolidated on one Hyper-V host computer.
Consolidated Environment
For this scenario, the load test was performed against BizTalk Server running on Hyper-V virtual
machines and SQL Server running on a Hyper-V virtual machine. The BizTalk Server Hyper-V
virtual machines and the SQL Server Hyper-V virtual machine were all run on the same Hyper-V
host computer.
Note
The allocation of RAM and processor cores for this scenario is identical to the allocation
of RAM and processor cores for the Virtual BizTalk Server / Virtual SQL Server
scenario, the only difference being that both the BizTalk Server Hyper-V virtual machines
and SQL Server Hyper-V virtual machines were configured to run on the same Hyper-V
host computer.
See Also
Test Scenario Overview
Test Results: BizTalk Server Key
Performance Indicators
This topic summarizes BizTalk Server Key Performance Indicators (KPI) observed during the test
scenarios. Specifically these tests evaluated throughput as measured by the
"BizTalk:Messaging/Documents processed/Sec" performance monitor counter, and latency, as
measured by the Visual Studio client response time.
Summary of BizTalk Server Key Performance
Indicators
For each scenario the physical machines were restricted so that the number of logical processors
and virtual processors was equivalent. This was done using the /maxmem and /numproc boot.ini
switches. For more information about using these switches, see “Boot INI Options Reference” at
http://go.microsoft.com/fwlink/?LinkId=122139.
Comparison of BizTalk Server Key Performance Indicators – Running BizTalk Server 2009
on a Hyper-V virtual machine provided approximately 95% of the throughput and latency
performance of BizTalk Server 2009 on physical hardware for this test scenario. Because of the
stateless nature of BizTalk Server, additional BizTalk Server 2009 virtual machines can be easily
added to the environment as required to provide scale out and increase the overall performance
of the system. Creating and adding additional BizTalk Server 2009 virtual machines to the
environment can be accomplished by using the sysprep utility to generate new images from a
base image.
Note
A sysprep answer file and scripts are provided with BizTalk Server 2009 to accommodate
using sysprep to create additional images from an existing image of a computer that has
BizTalk Server 2009 installed. These sample scripts are designed for use with BizTalk
Server 2009 installed on 32-bit and 64-bit versions of Windows Server 2008 only. For
more information see the BizTalk Server 2009 online documentation.
Provisioning, consolidation, and management of virtual machines can be significantly expedited
through the use of System Center Virtual Machine Manager (VMM). For more information about
System Center Virtual Machine Manager, see http://go.microsoft.com/fwlink/?LinkID=111303.
The results obtained in this performance lab show a marked improvement from the performance
achieved when running BizTalk Server 2006 R2 on Windows Server 2003 in a Hyper-V virtual
machine. Running BizTalk Server 2006 R2 on a Hyper-V virtual machine provided approximately
75% of the throughput and latency performance of BizTalk Server 2006 R2 on physical hardware
versus the approximately 95% performance observed when running BizTalk Server 2009 and
Windows Server 2008 on Hyper-V virtual machines. This improved performance is largely
attributable to the improved performance of Windows Server 2008 when running as a guest
operating system on Hyper-V. The related performance comparison from the BizTalk Server 2006
R2 Hyper-V guide is available at http://go.microsoft.com/fwlink/?LinkId=147144.
The graphic below illustrates the performance of BizTalk Server 2009 on the various test
platforms:
The table below illustrates the relative performance of the collected KPIs for each configuration.
Each result set is calculated as a percentage of the Baseline configuration KPI.

KPI                                             | Virtual BizTalk / Physical SQL | Virtual BizTalk / Virtual SQL on separate Hosts | Virtual BizTalk / Virtual SQL on Consolidated environment
\BizTalk:Messaging\Documents processed/Sec      | 94.3%                          | 79.8%                                           | 67%
Latency as measured by the Visual Studio client | 94.3%                          | 79.7%                                           | 66.9%
For more information about how to optimize the performance of a BizTalk Server solution, see the
BizTalk Server Performance Optimizations Guide available at
http://go.microsoft.com/fwlink/?LinkId=122477.
Performance Comparison Results Summary
The 94.3% throughput and 94.3% latency results achieved when running only BizTalk Server on
Hyper-V suggest that virtualizing this tier of your solution using Hyper-V provides excellent
performance together with the provisioning, consolidation, flexibility, and ease of management
that are possible when deploying solutions to a Hyper-V environment.
Throughput Comparison Sample Results
When the BizTalk Server computers used in the BizTalk Server environment were run on Hyper-V virtual machines, throughput of the BizTalk Server solution as measured by the
"BizTalk:Messaging/Documents processed/Sec" performance monitor counter ranged from
67% to 94.3% of the throughput attainable when all of the computers used in the BizTalk Server
environment were installed on physical hardware.
Latency Comparison Sample Results
When the BizTalk Server computers used in the BizTalk Server environment were run on Hyper-V virtual machines, latency of the BizTalk Server solution as measured by the Visual Studio client
response time ranged from 66.9% to 94.3% of the latency attainable when all of the computers
used in the BizTalk Server environment were installed on physical hardware.
Test Results: SQL Server Key Performance
Indicators
This topic summarizes SQL Server Key Performance Indicators (KPI) observed during the test
scenarios. These tests evaluated the following SQL Server KPI:

SQL Processor Utilization as measured by the \SQL\Processor(_Total)\% Processor Time
performance monitor counter.

The number of Transact-SQL command batches received per second as measured by the
\SQL Server:SQL Statistics\Batch Requests/sec performance monitor counter.
Summary of SQL Server Key Performance
Indicators
For each scenario the physical machines were restricted so that the number of logical processors
and virtual processors was equivalent. This was done using the /maxmem and /numproc boot.ini
switches. For more information about using these switches, see “Boot INI Options Reference” at
http://go.microsoft.com/fwlink/?LinkId=122139.
Comparison of SQL Server Key Performance Indicators – SQL Server processor utilization as
measured by \SQL\Processor(_Total)\% Processor Time counter was approximately the same
on all test environments, ranging from a low of 88% to a high of 90.1%.
There is, however, a significant difference between the \SQL Server:SQL Statistics\Batch
Requests/sec measured on the consolidated environment (4520) and the \SQL Server:SQL
Statistics\Batch Requests/sec measured on the physical environment (6350). The \SQL
Server:SQL Statistics\Batch Requests/sec performance monitor counter provides a good
indicator of how much work is being performed by SQL Server. The reduction in Batch
Requests/sec when SQL Server is running in a Hyper-V environment can be attributed to the
CPU overhead required by Hyper-V.
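The relative figures reported throughout this guide are simple ratios against the baseline measurement. For example, the consolidated environment's 4520 Batch Requests/sec against the physical environment's 6350 works out as follows (the helper function name is illustrative, not part of any tool used in the lab):

```python
def relative_to_baseline(measured, baseline):
    """Express a measured KPI as a percentage of the baseline KPI,
    rounded to one decimal place as in the tables in this guide."""
    return round(100.0 * measured / baseline, 1)

# Batch Requests/sec: consolidated environment vs. physical baseline.
print(relative_to_baseline(4520, 6350))  # 71.2
```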
Follow these steps to increase performance of SQL Server running on a Hyper-V virtual machine
as measured by the \SQL Server:SQL Statistics\Batch Requests/sec performance monitor
counter:
1. Allocate additional fixed VHD disks with dedicated virtual controllers and channels –
Allocation of additional fixed VHD disks using dedicated virtual controllers and channels will
increase disk throughput versus using a single VHD disk.
2. Optimize Network Performance – Follow steps outlined in the “Optimize Network
Performance” section of Checklist: Optimizing Performance on Hyper-V. When running
multiple Hyper-V virtual machines on the same Hyper-V host it is of particular importance to
follow recommendations in the “Configure Hyper-V Virtual Machines that are Running on the
same Hyper-V host computer to use a Private Virtual Network” section of Network
Optimizations.
Because of the stateless nature of BizTalk Server, additional BizTalk Server virtual machines can
be easily added to the environment as required to provide scale out and increase the overall
performance of the system.
The graphic below illustrates the performance of SQL Server on the various test platforms:
SQL Key Performance Indicators
The table below illustrates the relative performance of the collected KPIs for each configuration.
Each result set is calculated as a percentage of the Baseline configuration KPI.

KPI                                           | Virtual BizTalk / Physical SQL | Virtual BizTalk / Virtual SQL on separate Hosts | Virtual BizTalk / Virtual SQL on Consolidated environment
\SQL\Processor(_Total)\% Processor Time       | 97.7%                          | 98.4%                                           | 99.9%
\SQL Server:SQL Statistics\Batch Requests/sec | 97.1%                          | 83.3%                                           | 71.2%
For more information about how to evaluate Disk I/O performance, see the Measuring Disk I/O
Performance section of the topic Checklist: Measuring Performance on Hyper-V.
For more information about Best Practices when running SQL Server 2008 in a Hyper-V
environment, see the whitepaper “Running SQL Server 2008 in a Hyper-V Environment – Best
Practices and Performance Recommendations” available for download at
http://go.microsoft.com/fwlink/?LinkId=144622.
Test Results: Networking Key Performance
Indicators
This topic summarizes Network Key Performance Indicators (KPI) observed during the test
scenarios. These tests evaluated Network Performance as measured by the \Network
Interface(*)\Bytes Total/sec performance monitor counter and by measuring the SQL Network
Interface\Bytes Total/sec (Avg) returned by the VSTS 2008 Load Test Controller.
Summary of Network Key Performance Indicators
Comparison of Networking Key Performance Indicators – Network throughput for BizTalk
Server running on Hyper-V virtual machines was observed to range from approximately 70% to
96% of the network throughput achieved on the physical BizTalk Servers, depending on the
particular test environment. Network throughput for SQL Server running on a Hyper-V virtual
machine was observed to range from approximately 68% to 81% of the network throughput
achieved on the physical SQL Server, again depending on the particular test environment. The
delta in the observed network throughput can be attributed to the resource requirements of the
Hyper-V Hypervisor.
Follow the steps in Network Optimizations to maximize network throughput on Hyper-V virtual
machines as measured by the \Network Interface(*)\Bytes Total/sec performance monitor counter.
The graphic below illustrates the network performance on the various test platforms:
The table below illustrates the relative performance of the collected KPIs for each configuration.
Each result set is calculated as a percentage of the Baseline configuration KPI.

KPI                                                                         | Virtual BizTalk / Physical SQL | Virtual BizTalk / Virtual SQL on separate Hosts | Virtual BizTalk / Virtual SQL on Consolidated environment
\Network Interface(*)\Bytes Total/sec (Total Avg across all BizTalk Servers) | 96%                            | 82.1%                                           | 70.2%
SQL Network Interface\Bytes Total/sec (Avg)                                  | 95.5%                          | 81.2%                                           | 68.4%
For more information about how to evaluate Network performance, see the Measuring Network
Performance section of the topic Checklist: Measuring Performance on Hyper-V.
Test Results: Memory Key Performance
Indicators
This topic summarizes Memory Key Performance Indicators (KPI) observed during the test
scenarios. These tests evaluated available memory as measured by the \Memory\Available
Mbytes performance monitor counter.
Summary of Memory Key Performance Indicators
Comparison of Memory Key Performance Indicators – Total memory available to SQL Server
and BizTalk Server as measured by the \Memory\Available Mbytes performance monitor
counter was fairly consistent across all test scenarios. The difference in the average memory
available to the physical BizTalk Server computers and the average memory available to the
BizTalk Server computers running on virtual machines is due to the fact that two physical BizTalk
Server computers were used for testing while three BizTalk Server computers running on virtual
machines were used for testing.
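The per-server and total memory figures can be reconciled by adjusting for the different server counts: three virtual BizTalk Server computers share roughly the memory that two physical computers had, so the per-server ratio is the total ratio scaled by 2/3. The sketch below checks this arithmetic against the Virtual BizTalk / Physical SQL figures; the function name is ours, not from any tool used in the lab.

```python
def per_server_relative_memory(total_relative_pct,
                               baseline_servers, virtual_servers):
    """Convert a total available-memory ratio (virtual vs. physical
    baseline) into a per-server ratio given differing server counts."""
    return round(total_relative_pct * baseline_servers / virtual_servers, 1)

# Total BizTalk available memory at 88.3% of baseline, spread across
# 3 virtual servers instead of 2 physical servers:
print(per_server_relative_memory(88.3, 2, 3))  # 58.9 (percent per server)
```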
The graphic below illustrates Memory performance on the various test platforms:
The table below illustrates the relative performance of the collected KPIs for each configuration.
Each result set is calculated as a percentage of the Baseline configuration KPI.

KPI                                                  | Virtual BizTalk / Physical SQL | Virtual BizTalk / Virtual SQL on separate Hosts | Virtual BizTalk / Virtual SQL on Consolidated environment
SQL Server Available Memory (Mbytes) Per Server      | 100.1%                         | 97.1%                                           | 103.2%
Total BizTalk Available Memory (Mbytes)              | 88.3%                          | 95.9%                                           | 96%
Average Per Server BizTalk Available Memory (Mbytes) | 58.9%                          | 63.9%                                           | 64%
For more information about how to evaluate Memory performance, see the Measuring Memory
Performance section of the topic Checklist: Measuring Performance on Hyper-V.
Summary of Test Results
This topic summarizes the results from the test scenarios.
Summary of Test Results
The Testing BizTalk Server Virtualization Performance section of this guide describes the test
application used and the configuration of the various BizTalk Server environments against which
the test application was run. The testing was performed to compare the performance of a BizTalk
Server / SQL Server environment running on physical hardware to the performance of the
environment running on Hyper-V virtual machines. Key Performance Indicators (KPIs) measured
during testing included the following:
1. Message throughput measured on the BizTalk Server computers.
2. Request-response latency measured on the Visual Studio Test client which submitted
synchronous requests to BizTalk Server.
3. Processor utilization and Batch requests per second observed on SQL Server.
4. Network throughput observed on the BizTalk Server and SQL Server computers.
5. Available memory for the BizTalk Server and SQL Server computers.
Throughput Comparison Sample Results
With all other factors being equal, throughput of the BizTalk Server solution as measured by the
"BizTalk:Messaging/Documents processed/Sec" performance monitor counter ranged from 67%
to 94.3% of the throughput attainable when both the BizTalk Server computers and the SQL
Server computers in the environment were installed on physical hardware.
When the SQL Server computers in the environment were installed on Hyper-V virtual machines,
throughput of the solution was observed to decline significantly. This reduction in throughput can
be attributed to the CPU overhead required by Hyper-V.
Latency Comparison Sample Results
With all other factors being equal, when the BizTalk Server computers used in the BizTalk Server
environment were run on Hyper-V virtual machines, latency of the BizTalk Server solution as
measured by the "BizTalk:Messaging Latency/Request-Response Latency (sec)" performance
monitor counter ranged from 66.9% to 94.3% of the latency attainable when the BizTalk Server
computers used in the BizTalk Server environment were installed on physical hardware.
When the SQL Server computers in the environment were installed on Hyper-V virtual machines,
throughput of the solution was observed to decline significantly. This reduction in throughput can
be attributed to the CPU overhead required by Hyper-V on the SQL Server virtual machines.
SQL Server Processor Utilization and Batch Requests per
Second Sample Results
SQL Server processor utilization as measured by \SQL\Processor(_Total)\% Processor Time
counter was approximately the same on all test environments, ranging from a low of 88% to a
high of 90.1%. There is, however, a significant difference between the \SQL Server:SQL
Statistics\Batch Requests/sec measured on the consolidated environment (4520) and the \SQL
Server:SQL Statistics\Batch Requests/sec measured on the physical environment (6350). The
\SQL Server:SQL Statistics\Batch Requests/sec performance monitor counter provides a good
indicator of how much work is being performed by SQL Server. The reduction in Batch
Requests/sec when SQL Server is running in a Hyper-V environment can be attributed to the
CPU overhead required by Hyper-V.
BizTalk Server and SQL Server Network Throughput Sample
Results
Network throughput for BizTalk Server running on Hyper-V virtual machines was observed to
range from approximately 70% to 96% of the network throughput achieved on the physical
BizTalk Servers, depending on the particular test environment. Network throughput for SQL
Server running on a Hyper-V virtual machine was observed to range from approximately 68% to
81% of the network throughput achieved on the physical SQL Server, again depending on the
particular test environment. The delta in the observed network throughput can be attributed to the
resource requirements of the Hyper-V Hypervisor.
BizTalk Server and SQL Server Available Memory Sample
Results
Total memory available to SQL Server and BizTalk Server as measured by the
\Memory\Available Mbytes performance monitor counter was fairly consistent across all test
scenarios.
Appendices
The Appendices contain important information that is referenced by other sections of this guide.
In This Section

Appendix A: Optimizations Applied to Computers in Test Environment

Appendix B: Hyper-V Architecture and Feature Overview

Appendix C: BizTalk Server and SQL Server Hyper-V Supportability

Appendix D: Tools for Measuring Performance
Appendix A: Optimizations Applied to
Computers in Test Environment
This section describes the performance optimizations that were applied to the BizTalk Server
environment before testing.
In This Section

Operating System Optimizations

Network Optimizations

SQL Server Optimizations

BizTalk Server Optimizations
Operating System Optimizations
This topic provides recommendations for optimizing performance of the BizTalk Server computers
used in a production BizTalk Server environment. These optimizations are applied after BizTalk
Server has been installed and configured.
General guidelines for improving operating
system performance
The following recommendations can be used to increase operating system performance:
Install the latest BIOS, storage area network (SAN) drivers,
network adapter firmware and network adapter drivers
Hardware manufacturers regularly release BIOS, firmware, and driver updates that can improve
performance and availability for the associated hardware. Visit the hardware manufacturer’s Web
site to download and apply updates for the following hardware components on each computer in
the BizTalk Server environment:
1. BIOS updates
2. SAN drivers (if using a SAN)
3. NIC firmware
4. NIC drivers
84
Assign the MSDTC log file directory to a separate dedicated
drive
In a BizTalk Server environment with multiple MessageBox databases on separate SQL Server
computers, additional overhead associated with Microsoft Distributed Transaction Coordinator
(MSDTC) is incurred. By default, the MSDTC log files are located in the
%systemdrive%\windows\system32\msdtc directory of the computers running the DTC service.
To mitigate the possibility that DTC logging could become a performance bottleneck, consider
moving the MSDTC log file directory to a fast disk drive. To change the MSDTC log file directory
follow these steps:
1. Click Start, click Run, and type dcomcnfg to launch the Component Services Management
console.
2. Expand Component Services, expand Computers, right-click My Computer, and then click
Properties.
3. In the My Computer Properties dialog box, click the MSDTC tab.
4. In the Location edit box under Log Information, type the path where you want the new log
to be created (for example, G:\Logs\DTCLog).
5. Click Reset log, and you will be prompted for service restart. Click OK to restart the DTC
service, and then click OK to confirm the MSDTC service has been restarted.
Configure antivirus software to avoid real-time scanning of
BizTalk Server executables and file drops
Antivirus software real-time scanning of BizTalk Server executable files and any folders or file
shares monitored by BizTalk Server receive locations can negatively impact BizTalk Server
performance. If antivirus software is installed on the BizTalk Server computer(s), disable real-time
scanning of non-executable file types referenced by any BizTalk Server receive locations (usually
.xml, but can also be .csv, .txt, etc.) and configure the antivirus software to exclude scanning of
BizTalk Server executable files.
Disable intrusion detection network scanning between
computers in the BizTalk Server environment
Intrusion detection software can slow down or even prevent valid communications over the
network. If intrusion detection software is installed, disable network scanning between BizTalk
Server computers and external data repositories (SQL Server) computers or messaging services
(Message Queuing, WebSphere MQSeries, etc.) computers.
Defragment all disks in the BizTalk Server environment on a
regular basis
Excessive disk fragmentation in the BizTalk Server environment will negatively impact
performance. Follow these steps to defragment disks in the BizTalk Server environment:
1. Defragment all disks (local and SAN/NAS) on a regular basis by scheduling off-hours disk
defragmentation.
2. Defragment the Windows PageFile and pre-allocate the Master File Tables of each disk in
the BizTalk Server environment to boost overall system performance.
Note
Use the PageDefrag utility available at http://go.microsoft.com/fwlink/?LinkId=108976
to defragment the Windows PageFile and pre-allocate the Master File Tables.
If antivirus software is installed on the SQL Server computer(s),
disable real-time scanning of data and transaction files
Real-time scanning of the SQL Server data and transaction files (.mdf, .ndf, .ldf) can
increase disk I/O contention and reduce SQL Server performance. Note that the names of the
SQL Server data and transaction files may vary between BizTalk Server environments. For more
information about the data and transaction files created with a default BizTalk Server
configuration, see Optimizing Filegroups for the BizTalk Server Databases.
Configure MSDTC for BizTalk Server
Review the following information to configure MSDTC for BizTalk Server:

"How to Enable MSDTC on the BizTalk Server" at
http://go.microsoft.com/fwlink/?LinkId=108445.

"Troubleshooting Problems with MSDTC" at http://go.microsoft.com/fwlink/?LinkId=101609.
Configure firewall(s) for BizTalk Server
Note
This step is only required if one or more firewalls are in place in your BizTalk Server
environment.
Review the following information to configure firewall(s) for BizTalk Server:

"Required Ports for BizTalk Server" at http://go.microsoft.com/fwlink/?LinkId=101607.

”How to configure RPC dynamic port allocation to work with firewalls” at
http://go.microsoft.com/fwlink/?LinkID=76145.
Use the NTFS file system on all volumes
Windows Server offers multiple file system types for formatting drives, including NTFS, FAT, and
FAT32. NTFS should always be the file system of choice for servers.
NTFS offers considerable performance benefits over the FAT and FAT32 file systems and should
be used exclusively on Windows servers. In addition, NTFS offers many security, scalability,
stability, and recoverability benefits over FAT and FAT32.
Under previous versions of Windows, FAT and FAT32 were often implemented for smaller
volumes (say <500 MB) because they were often faster in such situations. With disk storage
relatively inexpensive today and operating systems and applications pushing drive capacity to a
maximum, it is unlikely that such small volumes will be in use. FAT32 scales better than FAT on
larger volumes but is still not an appropriate file system for Windows servers.
FAT and FAT32 have often been implemented in the past because they were seen as more easily
recoverable and manageable with native DOS tools in the event of a problem with a volume.
Today, with NTFS recoverability tools built natively into the operating system and also available
as third-party utilities, there is no longer a valid argument against using NTFS for file systems.
Do not use NTFS file compression
Though using NTFS file system compression is an easy way to reduce space on volumes, it is
not appropriate for enterprise file servers. Implementing compression places an unnecessary
overhead on the CPU for all disk operations and is best avoided. Think about options for adding
additional disks, near-line storage or consider archiving data before seriously considering file
system compression.
Review disk controller stripe size and volume allocation units
When configuring drive arrays and logical drives within your hardware drive controller, ensure you
match the controller stripe size with the allocation unit size that the volumes will be formatted
with. This will ensure disk read and write performance is optimal and offer better overall server
performance.
Configuring larger allocation unit (or cluster or block) sizes will cause disk space to be used less
efficiently, but will also provide higher disk I/O performance as the disk head can read in more
data during each read activity.
To determine the optimal setting to configure the controller and format the disks with, you should
determine the average disk transfer size on the disk subsystem of a server with similar file system
characteristics. Use the Windows Server Performance Monitor tool to monitor the Logical Disk
object counters of Avg. Disk Bytes/Read and Avg. Disk Bytes/Write over a period of normal
activity to help determine the best value to use.
Although smaller allocation unit sizes may be warranted if the system will be accessing many
small files or records, an allocation unit size of 64 KB delivers sound performance and I/O
throughput under most circumstances. Improvements in performance with tuned allocation unit
sizes can be particularly noted when disk load increases.
Note
Either the FORMAT command line tool or the Disk Management tool is required to
specify an allocation unit size larger than 4096 bytes (4 KB) when formatting volumes.
Windows Explorer will only format up to this threshold. The CHKDSK command can be
used to confirm the current allocation unit size of a volume; however, it must scan the
entire volume before the desired information is displayed (shown as Bytes in each
allocation unit).
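The sizing exercise above can be sketched as a small helper. The sample values and candidate cluster sizes below are illustrative, not measured perfmon data; the 64 KB fallback comes from the guide's own recommendation.

```python
# Sketch: pick an NTFS allocation unit size from sampled
# "Avg. Disk Bytes/Read" / "Avg. Disk Bytes/Write" values (in bytes).
# Candidate sizes and samples are illustrative assumptions.
CANDIDATE_SIZES_KB = [4, 8, 16, 32, 64]

def suggest_allocation_unit_kb(avg_transfer_bytes: float) -> int:
    """Smallest candidate cluster size (KB) that covers the average
    transfer, falling back to the guide's 64 KB recommendation."""
    for size_kb in CANDIDATE_SIZES_KB:
        if avg_transfer_bytes <= size_kb * 1024:
            return size_kb
    return 64  # large transfers: 64 KB delivers sound throughput per the guide

samples = [12_288, 20_480, 65_536]  # hypothetical perfmon samples, in bytes
avg = sum(samples) / len(samples)   # 32768.0
print(suggest_allocation_unit_kb(avg))  # -> 32
```

Feed the function the long-run average of the Logical Disk counters rather than a handful of samples when tuning a real volume.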
Monitor drive space utilization
The less data on a disk, the faster it will operate. This is because on a well-defragmented drive,
data is written as close to the outer edge of the disk as possible because this is where the disk
spins the fastest and yields the best performance.
Disk seek time is normally considerably longer than read or write activities. As noted above, data
is initially written to the outside edge of a disk. As demand for disk storage increases and free
space reduces, data is written closer to the center of the disk. Disk seek time is increased in
locating the data as the head moves away from the edge, and when found, it takes longer to
read, hindering disk I/O performance.
This means that monitoring disk space utilization is important not just for capacity reasons but for
performance also.
As a rule of thumb, work toward a goal of keeping disk free space between 20% and 25% of total
disk space. If free disk space drops below this threshold, then disk I/O performance will be
negatively impacted.
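The rule of thumb above can be expressed as a simple check. The 20% threshold is the guide's figure; the example numbers and the path used with `shutil.disk_usage` are illustrative.

```python
import shutil

# Sketch: flag volumes whose free space falls below the guide's
# 20% rule of thumb.
FREE_SPACE_TARGET = 0.20  # keep 20-25% of the disk free

def needs_attention(total_bytes: int, free_bytes: int) -> bool:
    """True when free space has dropped below the 20% threshold."""
    return free_bytes / total_bytes < FREE_SPACE_TARGET

def check_volume(path: str) -> bool:
    """Check a live volume, e.g. check_volume('C:\\\\') on Windows."""
    usage = shutil.disk_usage(path)  # named tuple: (total, used, free)
    return needs_attention(usage.total, usage.free)

print(needs_attention(100 * 2**30, 15 * 2**30))  # 15% free -> True
```

A scheduled task running such a check alongside the defragmentation job covers both the capacity and the performance concerns noted above.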
Implement a strategy to avoid disk fragmentation
Run a defragmenter utility regularly on your disks, including the root drive, to prevent
performance degradation. Do this weekly on busy disks. A disk defragmenter is installed with
Windows Server and can be run from a Scheduled Task at specified intervals.
Optimize Windows Server performance for background services
The BizTalk Server process (BTSNTSVC.exe) runs as a background service. By default,
Windows Server is configured to adjust for best performance of application programs and not for
background services.
Windows Server uses preemptive multi-tasking to prioritize process threads that will be attended
to by the CPU. Preemptive multi-tasking is a methodology whereby the execution of a process is
halted and another process is started, at the discretion of the operating system. This scheme
prevents a single thread from dominating the CPU.
Switching the CPU from executing one process to the next is known as context-switching. The
Windows operating system includes a setting that determines how long individual threads are
allowed to run on the CPU before a context-switch occurs and the next thread is serviced. This
amount of time is referred to as a quantum. This setting lets you choose how processor quanta
are shared between foreground programs and background services. Typically for a server it is not
desirable to allow a foreground program to have more CPU time allocated to it than background
services. That is, all applications and their processes running on the server should be given equal
consideration for CPU time.
To increase performance for background services like BizTalk host instances, follow these steps:
1. Click Start, click Control Panel, and then click System.
2. Click the Advanced tab, and then click Settings under Performance.
3. Click the Advanced tab, click Background services, and then click OK twice.
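If you prefer to script this setting, the Background services option corresponds to the Win32PrioritySeparation registry value. The fragment below is a sketch; 0x18 is the commonly documented background-services value, but verify it against your Windows version before applying it.

```reg
Windows Registry Editor Version 5.00

; Processor scheduling optimized for background services
; (what the "Background services" option sets).
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\PriorityControl]
"Win32PrioritySeparation"=dword:00000018
```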
Manually load Microsoft Certificate Revocation lists
When starting a .NET application, the .NET Framework will attempt to download the Certificate
Revocation list (CRL) for any signed assembly. If your system does not have direct access to the
Internet, or is restricted from accessing the Microsoft.com domain, this may delay startup of
BizTalk Server. To avoid this delay at application startup, you can use the following steps to
manually download and install the code signing Certificate Revocation Lists on your system.
1. Download the latest CRL updates from
http://crl.microsoft.com/pki/crl/products/CodeSignPCA.crl and
http://crl.microsoft.com/pki/crl/products/CodeSignPCA2.crl.
2. Move the CodeSignPCA.crl and CodeSignPCA2.crl files to the isolated system.
3. From a command prompt, enter the following command to use the certutil utility to update the
local certificate store with the CRL downloaded in step 1:
certutil -addstore CA c:\CodeSignPCA.crl
The CRL files are updated regularly, so you should consider setting up a recurring task to
download and install the CRL updates. To view the next update time, double-click the .crl
file and view the value of the Next Update field.
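The download-and-install steps above can be sketched as a small script that assembles the certutil invocations. The CRL URLs and certutil syntax come from the guide; the target directory is an illustrative assumption.

```python
# Sketch: assemble the commands for installing the Microsoft
# code-signing CRLs described above. Paths are illustrative.
CRL_URLS = [
    "http://crl.microsoft.com/pki/crl/products/CodeSignPCA.crl",
    "http://crl.microsoft.com/pki/crl/products/CodeSignPCA2.crl",
]

def certutil_command(crl_path: str) -> str:
    """The certutil invocation that adds a CRL to the local CA store."""
    return f'certutil -addstore CA "{crl_path}"'

def install_plan(target_dir: str = "C:\\crl") -> list[str]:
    """List the commands an operator (or a scheduled task) would run
    after copying the downloaded .crl files to target_dir."""
    commands = []
    for url in CRL_URLS:
        filename = url.rsplit("/", 1)[-1]  # e.g. CodeSignPCA.crl
        commands.append(certutil_command(target_dir.rstrip("\\") + "\\" + filename))
    return commands

for cmd in install_plan():
    print(cmd)
```

On the isolated system itself, only the certutil step runs; the downloads happen on an Internet-connected machine, as described in step 2 above.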
Synchronize time on all servers
Many operations involving tickets, receipts and logging rely on the local system clock being
accurate. This is especially true in a distributed environment, where time discrepancies between
systems may cause logs to be out of sync or tickets issued by one system to be rejected by
another as expired or not yet valid.
For more information on configuring a server to automatically synchronize time, see
http://go.microsoft.com/fwlink/?LinkId=99420.
Configure the Windows PAGEFILE for optimal performance
Follow these guidelines to configure the Windows PAGEFILE (paging file) for optimal
performance:
1. Move the paging file to a physical volume separate from the physical drive that the
operating system is installed on to reduce disk contention and increase disk
performance - On BizTalk Server computers, the performance gain associated with moving
the paging file will vary depending on the document processing load. On SQL Server
computers, moving the paging file to a separate volume is considered a best practice in all
scenarios due to the disk intensive nature of SQL Server.
2. Isolate the paging file onto one or more dedicated physical drives that are configured
as either RAID-0 (striping) or RAID-1 (mirroring) arrays, or on single disks without
RAID - By using a dedicated disk or drive array where PAGEFILE.SYS is the only file on the
entire volume, the paging file will not become fragmented, which will also improve
performance. As with most disk-arrays, performance of the array is improved as the number
of physical disks in the array is increased. If the paging file is distributed between multiple
volumes on multiple physical drives in a disk array, the paging file size should be the same
size on each drive in the array. When configuring a disk array, it is also recommended to use
physical drives that have the same capacity and speed. Note that redundancy is not normally
required for the paging file.
3. Do not configure the paging file on a RAID 5 array - Configuration of the paging file on a
RAID 5 array is not recommended because paging file activity is write intensive and RAID 5
arrays are better suited for read performance than write performance.
4. If you do not have resources to move the paging file to a physical volume other than
the one the operating system is installed on, configure the paging file to reside on the
same logical volume as the operating system - Configuring the paging file to reside on
another logical volume that is on the same physical disk as the operating system will
increase disk seek time and reduce system performance, as the disk drive platter heads
will be continually moving between the volumes, alternately accessing the page file,
operating system files, application files, and data files. Also, the operating system is
typically installed on the first partition of a physical disk, which is usually the closest to the
outside edge of the physical disk, where disk speed and associated performance are
optimal for the disk.
Important
If you do remove the paging file from the boot partition, Windows cannot create a
crash dump file (MEMORY.DMP) in which to write debugging information in the event
that a kernel mode STOP error occurs. If you do require a crash dump file, then you
will have no option but to leave a paging file of at least the size of physical memory +
1 MB on the boot partition.
5. Manually set the size of the paging file – Manually setting the size of the paging file
typically provides better performance than allowing the server to size it automatically or
having no paging file at all. Best-practice tuning is to set the initial (minimum) and maximum
size settings for the paging file to the same value. This ensures that no processing resources
are lost to the dynamic resizing of the paging file, which can be intensive. This is especially
true given that this resizing activity typically occurs when the memory resources on the
system are already becoming constrained. Setting the same minimum and maximum page
file size value also ensures the paging area on a disk is one single, contiguous area,
improving disk seek time. To determine the appropriate page file size for 64-bit versions of
Windows Server, follow the recommendations in Microsoft Knowledge Base article How to
determine the appropriate page file size for 64-bit versions of Windows Server 2003 or
Windows XP (http://go.microsoft.com/fwlink/?LinkId=148945).
Note
The steps in this article apply to 64-bit versions of Windows Server 2008 and
Windows Vista as well as 64-bit versions of Windows Server 2003 and Windows XP.
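As an illustration of the fixed-size guidance above, the following sketch computes an initial/maximum value equal to physical memory plus 1 MB, the minimum that the crash-dump note gives for the boot partition. Treat it as a floor, not a sizing policy; actual sizing should follow the linked Knowledge Base article.

```python
# Sketch: a fixed page-file size (initial == maximum, per the guidance
# above) that still allows a full crash dump: physical RAM + 1 MB.
MB = 1024 * 1024

def fixed_pagefile_mb(physical_ram_bytes: int) -> int:
    """Initial and maximum page-file size in MB: RAM + 1 MB, the
    minimum the guide gives for keeping MEMORY.DMP support."""
    return physical_ram_bytes // MB + 1

ram = 8 * 1024 * MB  # an 8 GB server, for illustration
print(f"initial = maximum = {fixed_pagefile_mb(ram)} MB")  # 8193 MB
```

Setting both values identically, as the function's single return value implies, avoids the dynamic resizing and fragmentation costs described in step 5.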
Network Optimizations
In a BizTalk Server environment where the BizTalk Server computer(s) are separate from the
SQL Server computer(s), each and every message processed by BizTalk Server requires
communication over the network. This communication includes considerable traffic between the
BizTalk Server computers and the BizTalk Message Box database(s), the BizTalk Management
database(s), the BAM databases, and other databases. In high-load scenarios, this
communication can result in considerable network traffic and can become a bottleneck, especially
when network settings have not been optimized, not enough network interface cards are installed,
or insufficient network bandwidth is available.
This topic provides steps for improving networking performance between Hyper-V virtual
machines running on the same Hyper-V host computer and provides some general
recommendations for improving network performance.
Note
The most common indicator that Network IO is a bottleneck is the counter “SQL
Server:Wait Statistics\Network IO waits.” When the value for Avg Wait Time in this
counter is greater than zero on one or more of your SQL Server computers, then Network
IO is a bottleneck.
Improving Network Performance of BizTalk Server
on Hyper-V
Configure Hyper-V Virtual Machines that are Running on the
same Hyper-V host computer to use a Private Virtual Network
To improve networking performance between Hyper-V virtual machines that are running on the
same Hyper-V host computer, create a private virtual network and route network traffic between
virtual machines through the private virtual network.
Create a Private Virtual Network
1. Click Start, click All Programs, click Administrative Tools, and then click Hyper-V
Manager.
2. In the left-hand pane of the Hyper-V Manager, right-click Hyper-V Manager, and then
click Connect to Server.
3. In the Select Computer dialog box, enter the name of the Hyper-V host computer, and
then click OK.
4. In the left-hand pane of the Hyper-V Manager, right-click the Hyper-V host, and then click
Virtual Network Manager.
5. In the Virtual Network Manager, under What type of virtual network do you want to
create?, click Private, and then click Add.
6. Enter a name for the new virtual network, and then click OK. The virtual network is now
available to each Hyper-V virtual machine that is run on this Hyper-V host.
Add the Private Virtual Network to Hyper-V Virtual Machines running on the Hyper-V
Host
1. Click Start, click All Programs, click Administrative Tools, and then click Hyper-V
Manager.
2. In the left-hand pane of the Hyper-V Manager, right-click Hyper-V Manager, and then
click Connect to Server.
3. In the Select Computer dialog box, enter the name of the Hyper-V host computer, and
then click OK.
4. Shut down any running virtual machines for which you would like to add the private virtual
network by right-clicking on the virtual machine, and then clicking Shut down.
5. After shutting down the virtual machines, right-click a virtual machine, and then click
Settings to change the settings for a virtual machine.
6. In the Settings for <machine_name> dialog box, under Add Hardware, click to select
Network Adapter, and then click Add.
7. On the Network Adapter configuration page, under Network:, select the private virtual
network that you created earlier, and then click OK. You have now made the private
virtual network available to the Hyper-V virtual machine which will be accessible the next
time that the virtual machine is started.
8. Repeat the steps above for each virtual machine for which you want to route network
traffic through the private virtual network.
9. Start the virtual machines to which you have added the private virtual network: right-click
each virtual machine, and then click Start.
Configure each Virtual Machine to use the Private Virtual Network
1. Once each virtual machine has been started, the private virtual network is accessible to
the virtual machine as a network connection. Configure the network connection on each
virtual machine to use TCP/IPv4, and specify settings for the TCP/IPv4 protocol.
a. Access the network connection properties page, select Internet Protocol Version
4 (TCP/IPv4), and then click Properties.
b. Click the radio button next to Use the following IP address.
2. Enter a value for the IP address field from the range of private IP addresses identified in
“RFC 1918, Address Allocation for Private IP Addresses” at
http://go.microsoft.com/fwlink/?LinkID=31904.
3. Make a note of the IP address that you specified; you will need to associate this value
with the NetBIOS name of this computer in a HOSTS file entry later.
4. Enter an appropriate value for the Subnet mask field.
Note
Windows should populate the Subnet mask field with an appropriate value
based upon the value that you entered into the IP address field.
5. Leave the Default gateway field blank, click OK, and then click Close.
6. After configuring each virtual machine with a unique private IP address, update the
HOSTS file on each virtual machine with the IP address and NetBIOS name of the other
virtual machines running on the Hyper-V host computer. The updated HOSTS file should
be saved to the %systemroot%\drivers\etc\ folder on each virtual machine.
Note
Because Windows checks the local HOSTS file first when resolving NetBIOS
names, updating the HOSTS file on each virtual machine with the unique private
IP addresses of the other virtual machines causes network traffic between these
machines to be routed over the private virtual network.
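The HOSTS update in step 6 can be sketched as follows. The machine names and RFC 1918 addresses are illustrative; substitute the NetBIOS names and private IPs you assigned above.

```python
# Sketch: build the HOSTS entries that map each virtual machine's
# NetBIOS name to its private-network IP address.
def hosts_entries(machines: dict[str, str]) -> str:
    """Render 'ip<TAB>name' lines for %systemroot%\\drivers\\etc\\hosts."""
    return "\n".join(f"{ip}\t{name}" for name, ip in machines.items())

vms = {
    "BIZTALK01": "192.168.10.11",  # illustrative names and addresses
    "SQL01": "192.168.10.12",
}
print(hosts_entries(vms))
```

Append the rendered lines to the HOSTS file on each virtual machine, omitting the machine's own entry if you prefer, so that cross-machine traffic resolves to the private network.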
Disable TCP Offloading for the Virtual Machine Network Cards
Edit the registry as described in the MSDN topic “Using Registry Values to Enable and Disable
Task Offloading (NDIS 5.1)” at http://go.microsoft.com/fwlink/?LinkId=147619 to disable TCP
offloading for the network cards on each virtual machine.
Important
Incorrect use of Registry Editor may cause problems requiring you to reinstall your
operating system. Use Registry Editor at your own risk. For more information about how
to back up, restore, and modify the registry, see the Microsoft Knowledge Base article
"Description of the Microsoft Windows registry" at
http://go.microsoft.com/fwlink/?LinkId=62729.
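One commonly documented way to apply this change globally is the DisableTaskOffload value under the TCP/IP parameters key. The fragment below is a sketch; confirm it against the linked MSDN topic, and heed the registry backup guidance above, before editing.

```reg
Windows Registry Editor Version 5.00

; Globally disable TCP task offloading for this machine
; (set to 0, or delete the value, to re-enable).
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"DisableTaskOffload"=dword:00000001
```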
General guidelines for improving network
performance
The following recommendations can be used to increase network performance:
Add additional network cards to computers in the BizTalk Server
environment
Just as adding additional hard drives can improve disk performance, adding additional network
cards can improve network performance. If the network cards on the computers in your BizTalk
Server environment are saturated and the card is a bottleneck, consider adding one or more
additional network cards to improve performance.
Implement network segmentation
Follow the recommendations in the Subnets section of the "BizTalk Server Database
Optimization" whitepaper at http://go.microsoft.com/fwlink/?LinkID=101578.
Where possible, replace hubs with switches
Switches contain logic to directly route traffic between the source and destination whereas hubs
use a broadcast model to route traffic. Therefore switches are more efficient and offer improved
performance.
Remove unnecessary network protocols
Windows Server computers sometimes have more network services and protocols installed than
are actually required. Each additional network client, service or protocol places additional
overhead on system resources.
In addition, each installed protocol generates network traffic. By removing unnecessary network
clients, services and protocols, system resources are made available for other processes, excess
network traffic is avoided and the number of network bindings that must be negotiated is reduced
to a minimum.
To see the currently installed network clients, protocols and services, follow these steps:
1. Click Start, point to Settings, and then click Control Panel.
2. Double-click Network Connections to display the network connections on the computer.
3. Right-click Local Area Connection (or the entry for your network connection), and then click
Properties to display the properties dialog box for the network connection.
4. To remove an unnecessary item, select it and click Uninstall. To disable an item, simply
clear the checkbox associated with the item.
If you are unsure about the effects of uninstalling an item for the connection, then disable the item
rather than uninstalling it. Disabling items allows you to determine which services, protocols and
clients are actually required on a system. When it has been determined that disabling an item has
no adverse effect on the server, the item can then be uninstalled.
In many cases, only the following three components are required for operation on a standard
TCP/IP based network:

Client for Microsoft Networks

File and Printer Sharing for Microsoft Networks

Internet Protocol (TCP/IP)
Network adapter drivers on all computers in the BizTalk Server
environment should be tuned for performance
Important
Before applying tuning to network adapter drivers, always install the latest network
adapter device drivers for the network cards in the environment.
Adjust the network adapter device drivers to maximize the amount of memory available for packet
buffering, both incoming and outgoing. Also maximize buffer counts, especially transmit buffers
and coalesce buffers. The default values for these parameters, and whether they are even
provided, vary between manufacturers and driver versions. The goal is to maximize the work
done by the network adapter hardware, and to allow the greatest possible buffer space for
network operations to mitigate network traffic bursts and associated congestion.
Note
Steps to tune network adapter drivers vary by manufacturer.
Follow these steps to access settings for network adapters in Windows Server 2003:
1. Click Start, point to Settings, click Control Panel, and then double-click Network
Connections.
2. Right-click Local Area Connection (or the name of your network connection), and then click
Properties.
3. On the General tab, click Configure.
4. Click the Advanced tab to access properties that can be configured for the network adapter.
The following properties should be configured for each network adapter in the BizTalk Server
environment:
Note
You apply these settings for each physical network adapter, including the individual
network adapters within a teamed set of network adapters that are configured for
aggregation, load balancing, or fault tolerance. With some teaming software, you might
need to apply these settings to the team as well. Note that some network adapters are
self-tuning and may not offer the option to configure parameters manually.

Power Option – Configure the network adapter driver to prevent power management
functionality from turning off the network adapter to save power. This functionality may be
useful for client computers but should seldom, if ever, be used on a BizTalk Server or SQL
Server computer.

Fixed Speed/Duplex (do not use AUTO) - It is very important that the network speed,
duplex, and flow control parameters are set to correspond to the settings on the switch to
which they are connected. This will mitigate the occurrence of periodic “auto-synchronization”
which may temporarily take connections off-line.

Max Coalesce Buffers - Map registers are system resources used to convert physical
addresses to virtual addresses for network adapters that support bus mastering. Coalesce
buffers are available to the network driver if the driver runs out of map registers. Set this
value as high as possible for maximum performance. On servers with limited physical
memory, this may have a negative impact as coalesce buffers consume system memory. On
most systems however, the maximum setting can be applied without significantly reducing
available memory.

Max Transmit/Send Descriptors and Send Buffers - This setting specifies how many
transmit control buffers the driver allocates for use by the network interface. This directly
reflects the number of outstanding packets the driver can have in its “send” queue. Set this
value as high as possible for maximum performance. On servers with limited physical
memory, this may have a negative impact as send buffers consume system memory. On
most systems however, the maximum setting can be applied without significantly reducing
available memory.

Max Receive Buffers - This setting specifies the amount of memory buffer used by the
network interface driver when copying data to the protocol memory. It is normally set by
default to a relatively low value. Set this value as high as possible for maximum performance.
On servers with limited physical memory, this may have a negative impact as receive buffers
consume system memory. On most systems however, the maximum setting can be applied
without significantly reducing available memory.

All offload options ON - In almost all cases performance is improved when enabling
network interface offload features. Some network adapters provide separate parameters to
enable or disable offloading for send and receive traffic. Offloading tasks from the CPU to the
network adapter can help lower CPU usage on the server which will improve overall system
performance. The Microsoft TCP/IP transport can offload one or more of the following tasks
to a network adapter that has the appropriate capabilities:

Checksum tasks - The TCP/IP transport can offload the calculation and validation of IP
and TCP checksums for sends and receives to the network adapter. Enable this option if
the network adapter driver provides this capability.

IP security tasks - The TCP/IP transport can offload the calculation and validation of
encrypted checksums for authentication headers (AH) and encapsulating security
payloads (ESP) to the network adapter. The TCP/IP transport can also offload the
encryption and decryption of ESP payloads to the network adapter. Enable these options
if the network adapter driver provides this capability.

Segmentation of large TCP packets - The TCP/IP transport supports large send offload
(LSO). With LSO, the TCP/IP transport can offload the segmentation of large TCP
packets.

Stack Offload – The entire network stack can be offloaded to a network adapter that has
the appropriate capabilities. Enable this option if the network adapter driver provides this
capability.

Wake On LAN disabled (unless being used) – Configure the network adapter driver to
disable Wake-on-LAN functionality. This functionality may be useful for client computers but
should seldom, if ever, be used on a BizTalk Server or SQL Server computer.
For more information about tuning network adapters for performance, see the Network Device
Settings section of the "BizTalk Server Database Optimization" whitepaper at
http://go.microsoft.com/fwlink/?LinkID=101578.
SQL Server Optimizations
BizTalk Server is an extremely database intensive application that may require the creation of up
to 13 databases in SQL Server. Because one of the primary design goals of BizTalk Server is to
ensure that no messages are lost, BizTalk Server persists data to disk with great frequency and
furthermore, does so within the context of an MSDTC transaction. Therefore, database
performance is paramount to the overall performance of any BizTalk Server solution.
This section describes general methods for maximizing SQL Server performance as well as
methods for maximizing database performance that are specific to a BizTalk Server environment.
For additional information on optimizing BizTalk database performance, see the BizTalk
Database Optimization TechNet article at http://go.microsoft.com/fwlink/?LinkId=118001.
In This Section
- Pre-Configuration Database Optimizations
- Post-Configuration Database Optimizations
- Optimizing Filegroups for the Databases
Pre-Configuration Database Optimizations
BizTalk Server is an extremely database-intensive application that may require the creation of up
to 13 separate databases in Microsoft SQL Server. Because of the critical role that SQL Server
plays in any BizTalk Server environment, it is of paramount importance that SQL Server is
configured/tuned for optimal performance. If SQL Server is not tuned to perform well, then the
databases used by BizTalk Server will become a bottleneck and the overall performance of the
BizTalk Server environment will suffer. This topic describes several SQL Server performance
optimizations that should be followed before installing BizTalk Server and configuring the BizTalk
Server databases.
Set NTFS File Allocation Unit
SQL Server stores its data in Extents, which are groups of eight 8K pages. Therefore, to optimize
disk performance, set the NTFS Allocation Unit size to 64KB as described in the “Disk
Configuration Best Practices” section of the SQL Server best practices article “Predeployment I/O
Best Practices” available at http://go.microsoft.com/fwlink/?LinkId=140818. For more information
about SQL Server pages and extents see the SQL Server 2008 Books Online topic
Understanding Pages and Extents (http://go.microsoft.com/fwlink/?LinkId=148939).
Database planning considerations
We recommend that you host your SQL Server databases on fast storage (for example, fast SAN
disks or fast SCSI disks). We recommend RAID 10 (1+0) instead of RAID 5 because RAID 5 is
slower at writing. Newer SAN disks have very large memory caches, so in these cases the RAID
selection is not as important. To increase performance, databases and their log files can reside
on different physical disks.
Install the latest service pack and cumulative
updates for SQL Server
Install the latest service packs and the latest cumulative updates for SQL Server 2005 and SQL
Server 2008 as well as the latest .NET Framework service packs.
Install SQL Service Packs on both BizTalk Server
and SQL Server
When installing service packs for SQL Server, also install the service pack on the BizTalk Server
computer. BizTalk Server uses SQL Client components that are updated by the SQL Server
service packs.
Consider implementing the SQL Server 2008 Data
Collector and Management Data Warehouse
SQL Server 2008 accommodates the use of the new Data Collector and Management Data
Warehouse to collect environment/database performance related data for test and trend analysis.
The Data Collector persists all collected data to the specified Management Data Warehouse.
While this is not a performance optimization, this will be useful for analysis of any performance
issues.
Grant the account which is used for SQL Server
the Windows Lock Pages In Memory privilege
Grant the Windows “Lock Pages in Memory” privilege to the SQL Server service account. This
should be done to prevent the Windows operating system from paging out the buffer pool
memory of the SQL Server process by locking memory that is allocated for the buffer pool in
physical memory. For more information, see Microsoft Knowledge Base article 914483 “How to
reduce paging of buffer pool memory in the 64-bit version of SQL Server 2005” at
http://go.microsoft.com/fwlink/?LinkId=148948.
Grant the SE_MANAGE_VOLUME_NAME right to
the SQL Server Service account
Ensure the account running the SQL Server service has the “Perform Volume Maintenance
Tasks” Windows privilege or ensure it belongs to a security group which does. This will allow
instant file initialization ensuring optimum performance if a database has to auto-grow.
Set Min and Max Server Memory
The computers running SQL Server that host the BizTalk Server databases should be dedicated
to running SQL Server. We recommend that the “min server memory” and “max server memory”
options on each SQL Server instance are set to specify the fixed amount of memory to allocate to
SQL Server. In this case, you should set the “min server memory” and “max server memory” to
the same value (equal to the maximum amount of physical memory that SQL Server will use).
This will reduce overhead that would otherwise be used by SQL Server dynamically managing
these values. Run the following T-SQL commands on each SQL Server computer to specify the
fixed amount of memory to allocate to SQL Server. Note that "min server memory" and "max
server memory" are advanced options, so "show advanced options" must be enabled first, and
RECONFIGURE applies the changes:
sp_configure 'show advanced options', 1
RECONFIGURE
GO
sp_configure 'max server memory (MB)', (max size in MB)
sp_configure 'min server memory (MB)', (min size in MB)
RECONFIGURE
GO
Before you set the amount of memory for SQL Server, determine the appropriate memory setting
by subtracting the memory required for Windows Server from the total physical memory. This is
the maximum amount of memory you can assign to SQL Server.
Note
If the computer(s) running SQL Server that host the BizTalk Server databases also host
the Enterprise Single Sign-On Master Secret Server, then you may need to adjust this
value to ensure that there is sufficient memory available to run the Enterprise Single
Sign-On Service. It is not an uncommon practice to run a clustered instance of the
Enterprise Single Sign-On service on a SQL Server cluster to provide high availability for
the Master Secret Server. For more information about clustering the Enterprise Single
Sign-On Master Secret Server, see the topic “How to Cluster the Master Secret Server” in
the BizTalk Server 2009 documentation at
http://go.microsoft.com/fwlink/?LinkID=106874.
Split the tempdb database into multiple data files
of equal size on each SQL Server instance used
by BizTalk Server
Ensuring that the data files used for the tempdb are of equal size is critical because the
proportional fill algorithm used by SQL Server is based on the size of the data files. This algorithm
attempts to ensure that SQL Server fills each file in proportion to the free space left in that file so
that they reach their maximum capacity at about the same time. If data files are created with
unequal sizes, the proportional fill algorithm will use the largest file more for GAM allocations
rather than spreading the allocations between all the files, thereby defeating the purpose of
creating multiple data files. The number of data files for the tempdb database should be
configured to be at least equal to the number of processors assigned for SQL Server.
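The guidance above can be sketched in T-SQL. In this sketch, the `T:\TempDB` path, the file sizes, and the file count (four files, matching an assumed four processors assigned to SQL Server) are all assumptions that must be adjusted for your environment; `tempdev` is the default logical name of the primary tempdb data file.

```sql
-- Sketch only: paths, sizes, and file count are assumptions.
-- All files are the same size so the proportional fill algorithm
-- spreads allocations evenly across them.
USE [master];
GO
-- Resize the existing primary tempdb data file (default name: tempdev)
ALTER DATABASE [tempdb] MODIFY FILE
    (NAME = N'tempdev', SIZE = 512MB, FILEGROWTH = 100MB);
-- Add three more data files of identical size
ALTER DATABASE [tempdb] ADD FILE
    (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 512MB, FILEGROWTH = 100MB);
ALTER DATABASE [tempdb] ADD FILE
    (NAME = N'tempdev3', FILENAME = N'T:\TempDB\tempdev3.ndf', SIZE = 512MB, FILEGROWTH = 100MB);
ALTER DATABASE [tempdb] ADD FILE
    (NAME = N'tempdev4', FILENAME = N'T:\TempDB\tempdev4.ndf', SIZE = 512MB, FILEGROWTH = 100MB);
GO
```

The changes to tempdb file layout take full effect after the SQL Server service is restarted.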
Enable Trace Flag T1118 as a startup parameter
for all instances of SQL Server
Implementing trace flag -T1118 helps reduce contention across the SQL Server instances by
removing almost all single-page allocations. For more information, see Microsoft Knowledge Base
article 328551 "PRB: Concurrency enhancements for the tempdb database" at
http://go.microsoft.com/fwlink/?LinkID=103713.
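The flag is normally made permanent by adding "-T1118" to the startup parameters in SQL Server Configuration Manager. The following standard DBCC commands can be used to verify the flag, or to enable it globally at run time until the next restart:

```sql
-- Check whether trace flag 1118 is currently active globally
DBCC TRACESTATUS (1118, -1);
-- Enable it globally for the running instance (lost on restart;
-- use the -T1118 startup parameter to make it permanent)
DBCC TRACEON (1118, -1);
```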
Do not change default SQL Server settings for
max degree of parallelism, SQL Server statistics,
or database index rebuilds and defragmentation
If a SQL Server instance will house BizTalk Server databases, then certain SQL Server settings
should not be changed. Specifically, the SQL Server max degree of parallelism, the SQL Server
statistics on the MessageBox database, and the settings for the database index rebuilds and
defragmentation should not be modified. For more information, see the topic “SQL Server
Settings That Should Not Be Changed” in the BizTalk Server Operations Guide at
http://go.microsoft.com/fwlink/?LinkId=114358.
Post-Configuration Database Optimizations
In addition to following the recommendations in Post-Configuration Database Optimizations,
several steps should be followed to optimize BizTalk Server database performance on SQL
Server after BizTalk Server has been installed and the BizTalk Server databases have been
configured. This topic provides a list of these optimizations.
Pre-allocate space for BizTalk Server databases
and define auto-growth settings for BizTalk Server
databases to a fixed value instead of a percentage
value
- SQL Server database auto-growth is a blocking operation that hinders BizTalk Server
database performance. Therefore it is important to allocate sufficient space for the BizTalk
Server databases in advance to minimize the occurrence of database auto-growth.
- Database auto-growth should be set to a fixed number of megabytes instead of to a
percentage (specify file growth in megabytes). This ensures that if auto-growth occurs,
it does so in a measured fashion, which reduces the likelihood of excessive database growth.
The growth increment should generally be no larger than 100 MB (for large files), 10 MB (for
medium-sized files), or 1 MB (for small files).
- When SQL Server increases the size of a file, the new space must first be initialized before it
can be used. This is a blocking operation that involves filling the new space with empty
pages. SQL Server 2005 running on Windows Server 2003 or later supports "instant file
initialization," which can greatly reduce the performance impact of a file growth operation. For
more information, see "Database File Initialization" in the SQL Server 2008 documentation at
http://go.microsoft.com/fwlink/?LinkId=132063. That topic provides steps for enabling instant
file initialization.
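A hedged T-SQL sketch of these settings for the MessageBox database follows. The logical file names assume a default BizTalk configuration, and the SIZE values are placeholders to be replaced with values sized for your own workload; note that MODIFY FILE can only increase SIZE beyond the current file size.

```sql
-- Sketch only: file names assume a default configuration and the
-- sizes are placeholders. Pre-allocates space and sets auto-growth
-- to a fixed number of megabytes instead of a percentage.
ALTER DATABASE [BizTalkMsgBoxDb] MODIFY FILE
    (NAME = N'BizTalkMsgBoxDb', SIZE = 2048MB, FILEGROWTH = 100MB);
ALTER DATABASE [BizTalkMsgBoxDb] MODIFY FILE
    (NAME = N'BizTalkMsgBoxDb_log', SIZE = 1024MB, FILEGROWTH = 100MB);
```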
Move the Backup BizTalk Server output directory
to a dedicated LUN
Move the Backup BizTalk Server (full and log backup) output directory to a dedicated LUN by
editing steps 1 and 2 (inserting the new output path) of the Backup BizTalk Server
[BizTalkMgmtDb] job. Moving the Backup BizTalk Server output directory to a dedicated LUN
reduces disk I/O contention while the job is running, because the job writes to a different disk
than the one it reads from.
Verify the BizTalk Server SQL Agent Jobs are
running
BizTalk Server includes several SQL Server Agent jobs that perform important functions to keep
your servers operational and healthy. You should monitor the health of these jobs and ensure
they are running without errors.
One of the most common causes of performance problems in BizTalk Server is the BizTalk
Server SQL Agent Jobs are not running, which in turn can cause the MessageBox and Tracking
databases to grow unchecked. Follow these steps to ensure the BizTalk Server SQL Agent Jobs
are running without problems:
- Verify the SQL Server Agent service is running.
- Verify the SQL Server Agent jobs installed by BizTalk Server are enabled and running
successfully. The BizTalk Server SQL Server Agent jobs are crucial; if they are not running,
system performance will degrade over time.
- Verify the BizTalk Server SQL Server Agent jobs are completing in a timely manner. Set up
Microsoft Operations Manager (MOM) 2005 or Microsoft System Center Operations
Manager 2007 to monitor the jobs. You should be aware of schedules that are particular to
certain jobs:
  - The MessageBox_Message_ManageRefCountLog_BizTalkMsgBoxDb job runs
  continuously by default. Monitoring software should take this schedule into account and
  not produce warnings.
  - The MessageBox_Message_Cleanup_BizTalkMsgBoxDb job is not enabled or
  scheduled, but it is started by the
  MessageBox_Message_ManageRefCountLog_BizTalkMsgBoxDb job every 10 seconds.
  Therefore, this job should not be enabled, scheduled, or manually started.
- Verify the Startup type of the SQL Server Agent service is configured correctly. Configure the
SQL Server Agent service with a Startup type of Automatic unless the service is configured
as a cluster resource on a Windows Server cluster. If the SQL Server Agent service is
configured as a cluster resource, configure the Startup type as Manual because the service
will be managed by the Cluster service.
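As a quick health check, the enabled state of the BizTalk-related Agent jobs can be queried from msdb. This is a sketch: the LIKE patterns assume the default BizTalk database and job names and may need adjusting.

```sql
-- Sketch only: name patterns assume default BizTalk database names.
-- Lists each BizTalk-related SQL Server Agent job and whether it is enabled.
USE [msdb];
GO
SELECT name,
       enabled      -- 1 = enabled, 0 = disabled
FROM dbo.sysjobs
WHERE name LIKE N'%BizTalk%'
   OR name LIKE N'%DTA Purge and Archive%'
ORDER BY name;
```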
Configure Purging and Archiving of Tracking Data
Follow these steps to ensure that purging and archiving of tracking data is configured correctly:
- Ensure the SQL Agent job "DTA Purge and Archive" is properly configured, enabled, and
successfully completing. For more information, see "How to Configure the DTA Purge and
Archive Job" in the BizTalk Server documentation at
http://go.microsoft.com/fwlink/?LinkId=104908.
- Ensure the job is able to purge the tracking data as fast as the incoming tracking data is
generated. For more information, see "Measuring Maximum Sustainable Tracking
Throughput" in the BizTalk Server 2006 R2 documentation at
http://go.microsoft.com/fwlink/?LinkId=104909.
- Review the soft purge and hard purge parameters to ensure you are keeping data for the
optimal length of time. For more information, see "Archiving and Purging the BizTalk Tracking
Database" in the BizTalk Server documentation at
http://go.microsoft.com/fwlink/?LinkId=101585.
- If you only need to purge the old data and do not need to archive it first, change the SQL
Agent job to call the stored procedure "dtasp_PurgeTrackingDatabase." For more
information, see "How to Purge Data from the BizTalk Tracking Database" in the BizTalk
Server documentation at http://go.microsoft.com/fwlink/?LinkId=101584.
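For the purge-only case, the job step would call the purge procedure directly. The sketch below assumes the default Tracking database name and the documented parameter set of dtasp_PurgeTrackingDatabase; the retention values shown are illustrative only and must be chosen for your solution.

```sql
-- Sketch only: retention values are examples. Because no archive is
-- taken first, @dtLastBackup is set to the current UTC time so the
-- procedure treats all completed data as eligible for purging.
DECLARE @dtLastBackup datetime;
SET @dtLastBackup = GETUTCDATE();
EXEC [BizTalkDTADb].[dbo].[dtasp_PurgeTrackingDatabase]
     @nLiveHours      = 0,   -- live-data window: hours
     @nLiveDays       = 1,   -- live-data window: days
     @nHardDeleteDays = 30,  -- hard-delete anything older than this
     @dtLastBackup    = @dtLastBackup;
```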
Monitor and reduce DTC log file disk I/O
contention
The Microsoft Distributed Transaction Coordinator (MS DTC) log file can become a disk I/O
bottleneck in transaction-intensive environments. This is especially true when using adapters that
support transactions, such as SQL Server, MSMQ, or MQSeries, or in a multi-MessageBox
environment. Transactional adapters use DTC transactions, and multi-MessageBox environments
make extensive use of DTC transactions.
To ensure the DTC log file does not become a disk I/O bottleneck, you should monitor the disk
I/O usage for the disk where the DTC log file resides on the SQL Server database server(s). If
disk I/O usage for the disk where the DTC log file resides becomes excessive, then consider
moving the DTC log file to a faster disk.
In an environment where SQL Server is clustered, this is not as much of a concern because the
log file will already be on a shared drive, which will likely be a fast SAN drive with multiple
spindles. You should nevertheless still monitor the disk I/O usage because it can become a
bottleneck in non-clustered environments or when the DTC log file is on a shared disk with other
disk-intensive files.
Separate the MessageBox and Tracking
Databases
Because the BizTalk MessageBox and BizTalk Tracking databases are the most active, we
recommend you place the data files and transaction log files for each of these on dedicated
drives to reduce the likelihood of problems with disk I/O contention. For example, you would need
four drives for the MessageBox and BizTalk Tracking database files, one drive for each of the
following:
- MessageBox data file(s)
- MessageBox transaction log file(s)
- BizTalk Tracking (DTA) data file(s)
- BizTalk Tracking (DTA) transaction log file(s)
Separating the BizTalk MessageBox and BizTalk Tracking databases, and separating the
database files and transaction log files onto different physical disks, are considered best
practices for reducing disk I/O contention. Try to spread the disk I/O across as many physical
spindles as possible. You can also reduce disk I/O contention by placing the BizTalk Tracking
database on a dedicated SQL Server; however, you should still follow the practices above with
regard to separating data files and transaction log files.
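To verify the resulting file placement, a query such as the following lists the physical location of each data and log file. This is a sketch assuming the default MessageBox and Tracking database names; sys.master_files is available in SQL Server 2005 and later.

```sql
-- Sketch only: database names assume a default BizTalk configuration.
-- Shows which physical drive each data (ROWS) and LOG file resides on,
-- so disk I/O separation can be confirmed.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,            -- ROWS or LOG
       physical_name
FROM sys.master_files
WHERE DB_NAME(database_id) IN (N'BizTalkMsgBoxDb', N'BizTalkDTADb')
ORDER BY database_name, type_desc;
```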
Optimize filegroups for the BizTalk Server
databases
Follow the steps in Optimizing Filegroups for the Databases and the "BizTalk Server Database
Optimization" whitepaper at http://go.microsoft.com/fwlink/?LinkId=101578 to create additional
filegroups and files for the BizTalk Server databases. This will greatly improve the performance
of the BizTalk Server databases compared to a single-disk configuration.
Optimizing Filegroups for the Databases
File input/output (I/O) contention is frequently a limiting factor, or bottleneck, in a production
BizTalk Server environment. BizTalk Server is a very database-intensive application and, in turn,
the SQL Server databases used by BizTalk Server are very file I/O intensive.
This topic describes how to make optimal use of the files and filegroups feature of SQL Server to
minimize the occurrence of file I/O contention and improve the overall performance of a BizTalk
Server solution.
Overview
Every BizTalk Server solution will eventually encounter file I/O contention as throughput is
increased. The I/O subsystem, or storage engine, is a key component of any relational database.
A successful database implementation typically requires careful planning at the early stages of a
project. This planning should include consideration of the following issues:
- What type of disk hardware to use, such as RAID (redundant array of independent disks)
devices. For more information about using a RAID hardware solution, see "About
Hardware-based solutions" in the SQL Server Books Online at
http://go.microsoft.com/fwlink/?LinkID=113944.
- How to apportion data on the disks using files and filegroups. For more information about
using files and filegroups in SQL Server 2008, see "Using Files and Filegroups" in the SQL
Server Books Online at http://go.microsoft.com/fwlink/?LinkID=69369 and "Understanding
Files and Filegroups" in the SQL Server Books Online at
http://go.microsoft.com/fwlink/?LinkID=96447.
- How to implement the optimal index design for improving performance when accessing data.
For more information about designing indexes, see "Designing Indexes" in the SQL Server
Books Online at http://go.microsoft.com/fwlink/?LinkID=96457.
- How to set SQL Server configuration parameters for optimal performance. For more
information about setting optimal configuration parameters for SQL Server, see "Optimizing
Server Performance" in the SQL Server Books Online at
http://go.microsoft.com/fwlink/?LinkID=71418.
One of the primary design goals of BizTalk Server is to ensure that a message is never lost. In
order to mitigate the possibility of message loss, messages are frequently written to the
MessageBox database as the message is processed. When messages are processed by an
Orchestration, the message is written to the MessageBox database at every persistence point in
the orchestration. These persistence points cause the MessageBox to write the message and
related state to physical disk. At higher throughputs, this persistence can result in considerable
disk contention and can potentially become a bottleneck.
Making optimal use of the files and filegroups feature in SQL Server has been shown to
effectively address File IO bottlenecks and improve overall performance in BizTalk Server
solutions. This optimization should only be done by an experienced SQL Server database
administrator and only after all BizTalk Server databases have been properly backed up. This
optimization should be performed on all SQL Server computers in the BizTalk Server
environment.
SQL Server files and filegroups can be utilized to improve database performance because this
functionality allows a database to be created across multiple disks, multiple disk controllers, or
RAID (redundant array of independent disks) systems. For example, if your computer has four disks,
you can create a database that is made up of three data files and one log file, with one file on
each disk. As data is accessed, four read/write heads can concurrently access the data in
parallel. This speeds up database operations significantly. For more information about
implementing hardware solutions for SQL Server disks, see “Database Performance” in the SQL
Server Books online at http://go.microsoft.com/fwlink/?LinkID=71419.
Additionally, files and filegroups enable data placement, because tables can be created in
specific filegroups. This improves performance, because all file I/O for a given table can be
directed at a specific disk. For example, a heavily used table can be placed on a file in a
filegroup, located on one disk, and the other less heavily accessed tables in the database can be
located on different files in another filegroup, located on a second disk.
File IO bottlenecks are discussed in considerable detail in the topic “Identifying Bottlenecks in the
Database Tier” in the BizTalk Server 2009 documentation at
http://go.microsoft.com/fwlink/?LinkId=147626. The most common indicator that File I/O (Disk I/O)
is a bottleneck is the value of the “Physical Disk:Average Disk Queue Length” counter. When the
value of the “Physical Disk:Average Disk Queue Length” counter is greater than about 3 for any
given disk on any of the SQL Servers, then file I/O is likely a bottleneck.
If applying file or filegroup optimization doesn't resolve a file I/O bottleneck problem, then it may
be necessary to increase the throughput of the disk subsystem by adding additional physical or
SAN drives.
This topic describes how to manually apply file and filegroup optimizations but these
optimizations can also be scripted. A sample SQL script is provided at the end of this topic. It is
important to note that this script would need to be modified to accommodate the file, filegroup,
and disk configuration used by the SQL Server database(s) for any given BizTalk Server solution.
Note
This topic describes how to create multiple files and filegroups for the BizTalk
MessageBox database. For an exhaustive list of recommended files and filegroups for all
of the BizTalk Server databases, see Appendix B of the excellent "BizTalk Server
Database Optimization" whitepaper available at
http://go.microsoft.com/fwlink/?LinkID=101578.
Databases created with a default BizTalk Server
configuration
Depending on which features are enabled when configuring BizTalk Server, up to 13 different
databases may be created in SQL Server, and all of these databases are created in the default
filegroup. The default filegroup for SQL Server is the PRIMARY filegroup unless the default filegroup
is changed by using the ALTER DATABASE command. The table below lists the databases that
are created in SQL Server if all features are enabled when configuring BizTalk Server.
BizTalk Server Databases
- Configuration database (BizTalkMgmtDb): The central meta-information store for all instances
of BizTalk Server in the BizTalk Server group.
- BizTalk MessageBox database (BizTalkMsgBoxDb): Stores subscription predicates. It is a host
platform, and keeps queues and state tables for each BizTalk Server host. The MessageBox
database also stores the messages and message properties.
- BizTalk Tracking database (BizTalkDTADb): Stores business and health monitoring data
tracked by the BizTalk Server tracking engine.
- BAM Analysis database (BAMAnalysis): SQL Server Analysis Services database that keeps
the aggregated historical data for Business Activities.
- BAM Star Schema database (BAMStarSchema): Transforms the data collected from Business
Activity Monitoring for OLAP processing. This database is required when using the BAM
Analysis database.
- BAM Primary Import database (BAMPrimaryImport): Stores the events from Business
Activities, which are then queried for the progress and data of activity instances. This
database also performs real-time aggregations.
- BAM Archive database (BAMArchive): Archives older Business Activity data. The BAM
Archive database minimizes the accumulation of Business Activity data in the BAM Primary
Import database.
- SSO database (SSODB): Securely stores the configuration information for receive locations.
Stores information for SSO affiliate applications, as well as the encrypted user credentials to
all the affiliate applications.
- Rule Engine database (BizTalkRuleEngineDb): Repository for policies, which are sets of
related rules, and vocabularies, which are collections of user-friendly, domain-specific names
for data references in rules.
- Tracking Analysis Server database (BizTalkAnalysisDb): Stores both business and health
monitoring OLAP cubes.
Separation of data files and log files
As noted above, a default BizTalk Server configuration places the MessageBox Database into a
single file in the default filegroup. By default, the data and transaction logs for the MessageBox
database are placed on the same drive and path. This is done to accommodate systems with a
single disk. A single file/filegroup/disk configuration is not optimal in a production environment.
For optimal performance, the data files and log files should be placed on separate disks.
Note
Log files are never part of a filegroup. Log space is managed separately from data space.
The 80/20 rule of distributing BizTalk Server
databases
The main source of contention in most BizTalk Server solutions, either because of disk I/O
contention or database contention, is the BizTalk Server MessageBox database. This is true in
both single and multi-MessageBox scenarios. It is reasonable to assume that as much as 80% of
the value of distributing BizTalk databases will be derived from optimizing the MessageBox data
files and log file. The sample scenario detailed below is focused on optimizing the data files for a
MessageBox database. These steps can then be followed for other databases as needed, for
example, if the solution requires extensive tracking, then the Tracking database can also be
optimized.
Manually adding files to the MessageBox
database, step-by-step
This section describes the steps that can be followed to manually add files to the MessageBox
database. In this example three filegroups are added and then a file is added to each filegroup to
distribute the files for the MessageBox across multiple disks. In this example, the steps are
performed on both SQL Server 2005 and SQL Server 2008.
Note
For purposes of the performance testing done for this guide, filegroups were optimized
through the use of a script which will be published as part of the BizTalk Server 2009
Performance Optimizations Guide. The steps below are provided for reference purposes
only.
Manually adding files to the MessageBox database on SQL
Server 2005 or SQL Server 2008
Follow these steps to manually add files to the MessageBox database on SQL Server 2005
or SQL Server 2008:
Note
While there are subtle differences in the user interface between SQL Server 2005 and
SQL Server 2008, the steps listed below apply to both versions of SQL Server.
1. Click Start, point to All Programs, point to Microsoft SQL Server 2005 or Microsoft SQL
Server 2008, and then click SQL Server Management Studio to display the Connect to
Server dialog box.
2. In the Server name field of the Connect to Server dialog box, enter the name of the SQL
Server instance that houses the BizTalk Server MessageBox databases, and then click the
Connect button to display the Microsoft SQL Server Management Studio dialog box.
In the Object Explorer pane of SQL Server Management Studio, expand Databases to view
the databases for this instance of SQL Server.
3. Right-click the database for which to add the files, and then click Properties to display the
Database Properties dialog box for the database.
4. In the Database Properties dialog box, select the Filegroups page. Click the Add button to
create additional filegroups for the BizTalkMsgBoxDb databases. In the example below, three
additional filegroups are added.
5. In the Database Properties dialog box, select the Files page.
Click the Add button to create additional files to add to the filegroups, and then click OK. The
MessageBox database is now distributed across multiple disks, which will provide a
significant performance advantage over a single disk configuration.
In the example below, a file is created for each of the filegroups that were created earlier and
each file is placed on a separate disk.
Sample SQL script for adding filegroups and files
to the BizTalk MessageBox database
The sample SQL script below performs the same tasks that were completed manually in the
previous section. This sample script assumes the existence of distinct logical drives G through J.
The script creates filegroups and files for each filegroup and places the log files on the J drive.
Note
Because SQL Server writes to its log files sequentially, there is no performance
advantage realized by creating multiple log files for a SQL Server database.
-- Filegroup changes are made using the master database
USE [master]
GO

-- Script-wide declarations
DECLARE @CommandBuffer nvarchar(2048)
DECLARE @FG1_Path nvarchar(1024)
DECLARE @FG2_Path nvarchar(1024)
DECLARE @FG3_Path nvarchar(1024)
DECLARE @Log_Path nvarchar(1024)

-- Set the path for each filegroup and for the log
SET @FG1_Path = N'G:\BizTalkMsgBoxDATA\'
SET @FG2_Path = N'H:\BizTalkMsgBoxDATA\'
SET @FG3_Path = N'I:\BizTalkMsgBoxDATA\'
SET @Log_Path = N'J:\BizTalkMsgBoxLog\'

-- Each ADD FILE statement uses a unique logical file name
-- (logical names must be unique within a database)
ALTER DATABASE [BizTalkMsgBoxDb] ADD FILEGROUP [BTS_MsgBox_FG1]
SET @CommandBuffer = N'ALTER DATABASE [BizTalkMsgBoxDb] ADD FILE ( NAME =
N''BizTalkMsgBoxDb_FG1'', FILENAME = N''' + @FG1_Path +
N'BizTalkMsgBoxDb_FG1.ndf'' , SIZE = 102400KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10240KB
) TO FILEGROUP [BTS_MsgBox_FG1]'
EXECUTE (@CommandBuffer)

ALTER DATABASE [BizTalkMsgBoxDb] ADD FILEGROUP [BTS_MsgBox_FG2]
SET @CommandBuffer = N'ALTER DATABASE [BizTalkMsgBoxDb] ADD FILE ( NAME =
N''BizTalkMsgBoxDb_FG2'', FILENAME = N''' + @FG2_Path +
N'BizTalkMsgBoxDb_FG2.ndf'' , SIZE = 102400KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10240KB
) TO FILEGROUP [BTS_MsgBox_FG2]'
EXECUTE (@CommandBuffer)

ALTER DATABASE [BizTalkMsgBoxDb] ADD FILEGROUP [BTS_MsgBox_FG3]
SET @CommandBuffer = N'ALTER DATABASE [BizTalkMsgBoxDb] ADD FILE ( NAME =
N''BizTalkMsgBoxDb_FG3'', FILENAME = N''' + @FG3_Path +
N'BizTalkMsgBoxDb_FG3.ndf'' , SIZE = 102400KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10240KB
) TO FILEGROUP [BTS_MsgBox_FG3]'
EXECUTE (@CommandBuffer)

ALTER DATABASE [BizTalkMsgBoxDb] MODIFY FILE ( NAME = N'BizTalkMsgBoxDb_log', SIZE =
10240KB , MAXSIZE = UNLIMITED, FILEGROWTH = 10240KB )
GO -- Completes the previous batch, as necessary
The sample SQL script below could be used to set a particular filegroup as the default filegroup:
USE [BizTalkMsgBoxDb]
GO
declare @isdefault bit
SELECT @isdefault=convert(bit, (status & 0x10)) FROM sysfilegroups WHERE
groupname=N'BTS_MsgBox_FG1'
if(@isdefault=0)
ALTER DATABASE [BizTalkMsgBoxDb] MODIFY FILEGROUP [BTS_MsgBox_FG1] DEFAULT
GO
The advantage to scripting is that scripts can perform multiple tasks quickly, can be reproduced
precisely, and reduce the possibility of human error. The disadvantage of scripting is that the
execution of an incorrectly written script can potentially cause serious problems that could require
the BizTalk Server databases to be re-configured from scratch. Therefore, it is of utmost
importance that SQL scripts such as the sample script listed in this topic are thoroughly tested
before being executed in a production environment.
BizTalk Server Optimizations
This section provides guidelines for improving BizTalk Server performance. The optimizations in
this section are applied after BizTalk Server has been installed and configured.
In This Section
- General BizTalk Server Optimizations
- Low-Latency Scenario Optimizations
General BizTalk Server Optimizations
The following recommendations can be used to increase BizTalk Server performance. The
optimizations listed in this topic are applied after BizTalk Server has been installed and
configured.
Create multiple BizTalk Server hosts and separate
host instances by functionality
Separate hosts should be created for sending, receiving, processing, and tracking functionality.
Creating multiple BizTalk hosts provides flexibility when configuring the workload in your BizTalk
group and is the primary means of distributing processing across the BizTalk Servers in a BizTalk
group. Multiple hosts also allow you to stop one host without affecting other hosts. For example,
you may want to stop sending messages to let them queue up in the MessageBox database,
while still allowing the inbound receiving of messages to occur. Separating host instances by
functionality also provides the following benefits:
• Each host instance has its own set of resources, such as memory, handles, and threads in the .NET thread pool.
• Multiple BizTalk hosts also reduce contention on the MessageBox database host queue tables, because each host is assigned its own work queue tables in the MessageBox database.
• Throttling is implemented in BizTalk Server at the host level, which allows you to set different throttling characteristics for each host.
• Security is implemented at the host level; each host runs under a discrete Windows identity. This allows you, for example, to give Host_A access to FileShare_B while not allowing any of the other hosts to access the file share.
Note
While there are benefits to creating additional host instances, there are also potential
drawbacks if too many host instances are created. Each host instance is a Windows
service (BTSNTSvc.exe), which generates additional load against the MessageBox
database and consumes computer resources (such as CPU, memory, threads).
For more information about modifying BizTalk Server Host properties, see "How to Modify Host
Properties" in the BizTalk Server 2009 help at http://go.microsoft.com/fwlink/?LinkId=101588.
Configure a dedicated tracking host
BizTalk Server is optimized for throughput, so the main orchestration and messaging engines do
not actually move messages directly to the BizTalk Tracking or BAM databases, as this would
divert these engines from their primary job of executing business processes. Instead, BizTalk
Server leaves the messages in the MessageBox database and marks them as requiring a move
to the BizTalk Tracking database. A background process (the tracking host) then moves the
messages to the BizTalk Tracking and BAM databases. Because tracking is a resource intensive
operation, a separate host should be created that is dedicated to tracking, thereby minimizing the
impact that tracking has on hosts dedicated to message processing.
Using a dedicated tracking host also allows you to stop other BizTalk hosts without interfering
with BizTalk Server tracking. The movement of tracking data out of the Messagebox database is
critical for a healthy BizTalk Server system. If the BizTalk Host responsible for moving tracking
data in the BizTalk group is stopped, the Tracking Data Decode service will not run. The impact of
this is as follows:
• HAT tracking data will not be moved from the MessageBox database to the BizTalk Tracking database.
• BAM tracking data will not be moved from the MessageBox database to the BAM Primary Import database.
• Because data is not moved, it cannot be deleted from the MessageBox database.
• When the Tracking Data Decode service is stopped, tracking interceptors will still fire and write tracking data to the MessageBox database. If the data is not moved, the MessageBox database becomes bloated, which will degrade performance over time. Even if custom properties are not tracked and no BAM profiles are set up, some data is tracked by default (such as pipeline receive/send events and orchestration events). If you do not want to run the Tracking Data Decode service, turn off all tracking so that no interceptors save data to the database. To disable global tracking, see "How to Turn Off Global Tracking" in the BizTalk Server 2009 help at http://go.microsoft.com/fwlink/?LinkId=101589. Use the BizTalk Server Administration console to selectively disable tracking events.
The tracking host should be run on at least two computers running BizTalk Server (for
redundancy in case one fails). For optimal performance, you should have at least one tracking
host instance per Messagebox database. The actual number of tracking host instances should be
(N + 1), where N = the number of Messagebox databases. The "+ 1" is for redundancy, in case
one of the computers hosting tracking fails.
A tracking host instance moves tracking data for specific Messagebox databases, but there will
never be more than one tracking host instance moving data for a specific Messagebox database.
For example, if you have three Messagebox databases, and only two tracking host instances,
then one of the host instances needs to move data for two of the Messagebox databases. Adding
a third tracking host instance distributes the tracking host work to another computer running
BizTalk Server. In this scenario, adding a fourth tracking host instance would not distribute any
more tracking host work, but would provide an extra tracking host instance for fault tolerance.
For more information about the BAM Event Bus service, see the following topics in the BizTalk
Server 2009 help:
• "Managing the BAM Event Bus Service" at http://go.microsoft.com/fwlink/?LinkId=101590.
• "Creating Instances of the BAM Event Bus Service" at http://go.microsoft.com/fwlink/?LinkId=101591.
Manage ASP.NET thread usage or concurrently
executing requests for Web applications that host
orchestrations published as a Web or WCF
Service
The number of worker and I/O threads (IIS 6.0 and IIS 7.0 in classic mode) or the number of
concurrently executing requests (IIS 7.0 integrated mode) for an ASP.NET Web application that
hosts an orchestration published as a Web service should be modified under the following
conditions:
• CPU utilization is not a bottleneck on the hosting Web server.
• The value of the ASP.NET Apps v2.0.50727\Request Wait Time or ASP.NET Apps v2.0.50727\Request Execution Time performance counters is unusually high.
• An error similar to the following is generated in the Application log of the computer that hosts the Web application:
  Event Type: Warning
  Event Source: W3SVC
  Event Category: None
  Event ID: 1013
  Date: 6/4/2009
  Time: 1:03:47 PM
  User: N/A
  Computer: <ComputerName>
  Description: A process serving application pool 'DefaultAppPool' exceeded time limits during shut down. The process id was '<xxxx>'.
Manage ASP.NET thread usage for Web applications that host
orchestrations on IIS 6.0 and on IIS 7.0 running in Classic mode
When the autoConfig value in the machine.config file of an IIS 6.0 server or IIS 7.0 server
running in Classic mode is set to true, ASP.NET 2.0 manages the number of worker threads and
I/O threads that are allocated to any associated IIS worker processes:
<processModel autoConfig="true" />
To manually modify the number of worker and I/O threads for an ASP.NET 2.0 Web application,
open the associated machine.config file, and then enter new values for the maxWorkerThreads
and maxIoThreads parameters:
<!-- <processModel autoConfig="true" /> -->
<processModel maxWorkerThreads="200" maxIoThreads="200" />
Note
These values are for guidance only; ensure you test changes to these parameters.
For more information about tuning parameters in the machine.config file for an ASP.NET 2.0 Web
application, see Microsoft Knowledge Base article 821268 “Contention, poor performance, and
deadlocks when you make Web service requests from ASP.NET applications”
(http://go.microsoft.com/fwlink/?LinkID=66483).
Manage the number of concurrently executing requests for Web
applications that host orchestrations on IIS 7.0 running in
Integrated mode
When ASP.NET 2.0 is hosted on IIS 7.0 in integrated mode, the use of threads is handled differently than on IIS 6.0 or on IIS 7.0 in classic mode: ASP.NET 2.0 restricts the number of concurrently executing requests rather than the number of threads concurrently executing requests. For synchronous scenarios this indirectly limits the number of threads, but for asynchronous scenarios the number of requests and threads will likely be very different. When running ASP.NET 2.0 on IIS 7.0 in integrated mode, the
maxWorkerThreads and maxIoThreads parameters in the machine.config file are not used to
govern the number of running threads. Instead, the number of concurrently executing requests
can be changed from the default value of 12 per CPU by modifying the value specified for
maxConcurrentThreadsPerCPU. The maxConcurrentThreadsPerCPU value can be specified
either in the registry or in the config section of an aspnet.config file. Follow these steps to change
the default value for maxConcurrentThreadsPerCPU to govern the number of concurrently
executing requests:
To set the maxConcurrentThreadsPerCPU value in the registry
Warning
Incorrect use of Registry Editor may cause problems requiring you to reinstall your
operating system. Use Registry Editor at your own risk. For more information about how
to back up, restore, and modify the registry, see the Microsoft Knowledge Base article
"Description of the Microsoft Windows registry" at
http://go.microsoft.com/fwlink/?LinkId=62729.
Note
This setting is global and cannot be changed for individual application pools or
applications.
1. Click Start, click Run, type regedit.exe, and then click OK to start Registry Editor.
2. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0.
3. Create the key by following these steps:
   a. On the Edit menu, click New, and then click Key.
   b. Type maxConcurrentThreadsPerCPU, and then press ENTER.
   c. Under the maxConcurrentThreadsPerCPU key, create a DWORD entry with the new value for maxConcurrentThreadsPerCPU.
   d. Close Registry Editor.
To set the maxConcurrentThreadsPerCPU value for an application pool in the config
section of an aspnet.config file
Note
Microsoft .NET Framework 3.5 Service Pack 1 must be installed to accommodate setting
the values below via configuration file. You can download Microsoft .NET Framework 3.5
Service Pack 1 from http://go.microsoft.com/fwlink/?LinkID=136345.
• Open the aspnet.config file for the application pool, and then enter new values for the maxConcurrentRequestsPerCPU and requestQueueLimit parameters:

<system.web>
  <applicationPool maxConcurrentRequestsPerCPU="12" requestQueueLimit="5000"/>
</system.web>
Note
This value overrides the value specified for maxConcurrentThreadsPerCPU in the
registry. The requestQueueLimit setting is the same as
processModel/requestQueueLimit, except that the setting in the aspnet.config file will
override the setting in the machine.config file.
Define CLR hosting thread values for BizTalk host
instances
Because a Windows thread is the most basic executable unit available to a Windows process, it
is important to allocate enough threads to the .NET thread pool associated with an instance of a
BizTalk host to prevent thread starvation. When thread starvation occurs, there are not enough
threads available to perform the requested work, which negatively impacts performance. At the same time, care should be taken to prevent allocating more threads to the .NET thread pool associated with a host than are necessary. The allocation of too many threads to the .NET thread
pool associated with a host may increase context switching. Context switching occurs when the
Windows kernel switches from running one thread to a different thread, which incurs a
performance cost. Excessive thread allocation can cause excessive context switching, which will
negatively impact overall performance.
Modify the number of Windows threads available in the .NET thread pool associated with an
instance of a BizTalk host by creating the appropriate CLR Hosting values in the registry of the
BizTalk Server.
Warning
Incorrect use of Registry Editor may cause problems requiring you to reinstall your
operating system. Use Registry Editor at your own risk. For more information about how
to back up, restore, and modify the registry, see the Microsoft Knowledge Base article
"Description of the Microsoft Windows registry" at
http://go.microsoft.com/fwlink/?LinkId=62729.
Note
Worker threads are used to handle queued work items and I/O threads are dedicated
callback threads associated with an I/O completion port to handle a completed
asynchronous I/O request.
To modify the number of threads available in the .NET thread pool associated with each
instance of a BizTalk host, follow these steps:
1. Stop the BizTalk host instance.
2. Click Start, click Run, type regedit.exe, and then click OK to start Registry Editor.
Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc$hostname, where hostname is the name of the host associated with the host instance.
Note
If you have upgraded your BizTalk Server 2006 installation from BizTalk Server 2004, this registry key may be represented as HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc{guid}, where guid is a GUID unique to each instance of a BizTalk Server host.
3. Locate the CLR Hosting key. If this key does not exist, then create the key by following these
steps:
a. On the Edit menu, click New, and then click Key.
b. Type CLR Hosting, and then press ENTER.
4. Under the CLR Hosting key, create the following DWORD entries with the indicated values.

   DWORD entry       Default value   Recommended value
   MaxIOThreads      20              100
   MaxWorkerThreads  25              100
   MinIOThreads      1               25
   MinWorkerThreads  1               25

   Important
   Increasing the MaxWorkerThreads value beyond 100 can have an adverse effect on the performance of the SQL Server computer hosting the BizTalk Server MessageBox database. When this problem occurs, SQL Server may encounter a deadlock condition. It is recommended that this parameter not be increased beyond a value of 100.

   Note
   These recommended values will be sufficient for most scenarios but may need to be increased depending on how many adapter handlers or orchestrations are running in each host instance.

   Note
   These values are implicitly multiplied by the number of processors on the server. For example, setting the MaxWorkerThreads entry to a value of 100 would effectively set a value of 400 on a 4-CPU server.
5. Close Registry Editor.
6. Restart the BizTalk host instance.
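For convenience, the four registry values above can be captured in a registry script. The sketch below assumes a host named BizTalkServerApplication (replace this placeholder with your own host name); the hexadecimal DWORD values correspond to the recommended decimal values of 100 (0x64) and 25 (0x19):

```reg
Windows Registry Editor Version 5.00

; Sketch: recommended CLR Hosting thread pool values for a BizTalk host
; instance. "BizTalkServerApplication" is a placeholder host name.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc$BizTalkServerApplication\CLR Hosting]
"MaxIOThreads"=dword:00000064
"MaxWorkerThreads"=dword:00000064
"MinIOThreads"=dword:00000019
"MinWorkerThreads"=dword:00000019
```

As with the steps above, stop the host instance before importing the file and restart it afterward, and test any such script before using it in production.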
Disable tracking for orchestrations, send ports,
receive ports, and pipelines when tracking is not
required
Tracking incurs performance overhead within BizTalk Server as data has to be written to the
MessageBox database and then asynchronously moved to the BizTalk Tracking database. If
tracking is not a business requirement, then disable tracking to reduce overhead and increase
performance. For more information about configuring tracking, see “Configuring Tracking Using
the BizTalk Server Administration Console” in the BizTalk Server 2009 help at
http://go.microsoft.com/fwlink/?LinkID=106742.
Decrease the purging period for the DTA Purge
and Archive job from 7 days to 2 days in high
throughput scenarios
By default, the purging interval for tracking data in BizTalk Server is set to 7 days. In a high
throughput scenario, this can result in an excessive build up of data in the Tracking database,
which will eventually impact the performance of the MessageBox and in turn negatively impact
message processing throughput.
In high throughput scenarios, reduce the hard and soft purging interval from the default of 7 days
to 2 days. For more information about configuring the purging interval, see “How to Configure the
DTA Purge and Archive Job” in the BizTalk Server 2009 help at
http://go.microsoft.com/fwlink/?LinkID=104908.
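The DTA Purge and Archive job runs a stored procedure in the BizTalk Tracking (BizTalkDTADb) database. The following is a sketch of an adjusted job step; the parameter order (live hours, live days, hard-delete days, last backup time) is an assumption to verify against your actual job definition before making any change:

```sql
-- Sketch: DTA Purge and Archive job step with the soft and hard purge
-- windows reduced from the 7-day default to 2 days.
-- Assumed parameters: @nLiveHours, @nLiveDays, @nHardDeleteDays, @dtLastBackup.
declare @dtLastBackup datetime
set @dtLastBackup = GetUTCDate()
exec dtasp_PurgeTrackingDatabase 0, 2, 2, @dtLastBackup
```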
Install the latest service packs
The latest service packs for both BizTalk Server and the .NET Framework should be installed, as
these contain fixes that can correct performance issues you may encounter.
Do not cluster BizTalk hosts unless absolutely
necessary
While BizTalk Server 2006 and subsequent versions of BizTalk Server allow you to configure a
BizTalk host as a cluster resource, you should only consider doing this if you need to provide high
availability to a resource that cannot be hosted across multiple BizTalk computers. For example, ports using the FTP adapter should reside on only one host instance because the FTP protocol does not provide file locking; however, this introduces a single point of failure that would benefit from clustering. Hosts that contain adapters such as File, SQL, or HTTP, as well as processing-only hosts, can be load balanced across machines and do not benefit from clustering.
Performance optimizations in the BizTalk Server
documentation
Apply the following recommendations from the BizTalk Server documentation as appropriate:
• "Troubleshooting MessageBox Latency Issues" at http://go.microsoft.com/fwlink/?LinkId=114747
• "Identifying Performance Bottlenecks" at http://go.microsoft.com/fwlink/?LinkID=104418
• "Avoiding DBNETLIB Exceptions" at http://go.microsoft.com/fwlink/?LinkID=108787
• "Avoiding TCP/IP Port Exhaustion" at http://go.microsoft.com/fwlink/?LinkID=101610
• "Setting the EPM Threadpool Size" at http://go.microsoft.com/fwlink/?LinkId=114748
Low-Latency Scenario Optimizations
By default, BizTalk Server is optimized for throughput rather than low-latency. The following
optimizations were applied to BizTalk Server for the test scenario used for this guide.
Note
These optimizations will improve latency but may do so at some cost to overall
throughput.
Increase the BizTalk Server host internal message
queue size
Each BizTalk host has its own internal in-memory queue. Increase the size of this queue from the
default value of 100 to 1000 to improve performance for a low-latency scenario. For more
information about modifying the value of the internal message queue size, see “How to Modify the
Default Host Throttling Settings” in the BizTalk Server 2009 help at
http://go.microsoft.com/fwlink/?LinkID=120225.
Reduce the MaxReceiveInterval value in the
adm_ServiceClass table of the BizTalk Server
management database
BizTalk Server uses a polling mechanism to receive messages from its host queues in the
Messagebox. The MaxReceiveInterval value in the adm_ServiceClass table of the BizTalk
Management (BizTalkMgmtDb) database is the maximum value in milliseconds that each BizTalk
host instance will wait until it polls the MessageBox. The adm_ServiceClass table contains a
record for the following service types:
• XLANG/S – for BizTalk orchestration host instances
• Messaging InProcess – for in-process host instances
• MSMQT – for MSMQT adapter host instances
• Messaging Isolated – for out-of-process host instances, used by the HTTP, SOAP, and certain WCF receive adapter handlers
By default, this value is set to 500 milliseconds, which is optimized for throughput rather than low latency. In certain scenarios, latency can be improved by reducing this value.
Note
Changes to this value impact all instances of the associated service type, therefore, take
care to evaluate the impact on all host instances before changing this value.
Note
This value is only used if the Messagebox has no remaining unprocessed messages. If
there is a constant backlog of unprocessed messages in the Messagebox, BizTalk Server
will attempt to process the messages without waiting on the polling delay. After all
messages are processed, BizTalk Server will begin polling using the value specified for
MaxReceiveInterval.
Note
In a BizTalk Server environment with a high ratio of host instances to Messagebox
database instances, decreasing the value for MaxReceiveInterval may cause excessive
CPU utilization on the SQL Server computer that houses the Messagebox database
instance. For example, if the MaxReceiveInterval is decreased to a low value (< 100) in a
BizTalk Server environment with a single Messagebox and > 50 host instances, CPU
utilization on the SQL Server may climb above 50%. This phenomenon can occur
because the overhead associated with continually polling host queues is significant. If
you reduce MaxReceiveInterval to a value less than 100, you should also evaluate the
impact that this has on your SQL Server computer’s CPU utilization.
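As an illustration, the polling interval can be inspected and adjusted with T-SQL similar to the following. This is a sketch: the Name column and the service-class name strings are assumptions, so verify them against your adm_ServiceClass schema, and note that host instances must be restarted before a change takes effect:

```sql
USE BizTalkMgmtDb
GO
-- Inspect the current polling interval for each service class.
SELECT Name, MaxReceiveInterval FROM adm_ServiceClass
GO
-- Example only: reduce the polling interval for orchestration host
-- instances (XLANG/S) to 100 ms. Test thoroughly before applying
-- this in a production environment.
UPDATE adm_ServiceClass SET MaxReceiveInterval = 100 WHERE Name = 'XLANG/S'
GO
```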
Appendix B: Hyper-V Architecture and
Feature Overview
This topic provides an overview of Hyper-V architecture, describes advantages and
disadvantages of Hyper-V, and describes differences between Hyper-V and Virtual Server 2005.
Hyper-V Architecture
Hyper-V is a hypervisor-based virtualization technology for x64 versions of Windows Server 2008.
The hypervisor is the processor-specific virtualization platform that allows multiple isolated
operating systems to share a single hardware platform.
Guest operating systems running in a Hyper-V virtual machine provide performance approaching
the performance of an operating system running on physical hardware if the necessary virtual
server client (VSC) drivers and services are installed on the guest operating system. Hyper-V
virtual server client (VSC) code, also known as Hyper-V enlightened I/O, enables direct access to
the Hyper-V “Virtual Machine Bus” and is available with the installation of Hyper-V integration
services. Presently, both Windows Server 2008 and Windows Vista support Hyper-V enlightened
I/O with Hyper-V integration services. Hyper-V Integration services that provide VSC drivers are
also available for other client operating systems, including Windows Server 2003.
Hyper-V supports isolation in terms of a partition. A partition is a logical unit of isolation,
supported by the hypervisor, in which operating systems execute. The Microsoft hypervisor must
have at least one parent, or root, partition, running Windows Server 2008 64-bit Edition. The
virtualization stack runs in the parent partition and has direct access to the hardware devices. The
root partition then creates the child partitions which host the guest operating systems. A root
partition creates child partitions using the hypercall application programming interface (API).
Partitions do not have access to the physical processor, nor do they handle the processor
interrupts. Instead, they have a virtual view of the processor and run in a virtual memory address
region that is private to each guest partition. The hypervisor handles the interrupts to the
processor, and redirects them to the respective partition. Hyper-V can also hardware-accelerate the address translation between various guest virtual address spaces by using an Input/Output Memory Management Unit (IOMMU), which operates independently of the memory management hardware used by the CPU. An IOMMU is used to remap physical memory addresses to the addresses that are used by the child partitions.
Child partitions also do not have direct access to other hardware resources and are presented a
virtual view of the resources, as virtual devices (VDevs). Requests to the virtual devices are
redirected either via the VMBus or the hypervisor to the devices in the parent partition, which
handles the requests. The VMBus is a logical inter-partition communication channel. The parent
partition hosts Virtualization Service Providers (VSPs) which communicate over the VMBus to
handle device access requests from child partitions. Child partitions host Virtualization Service
Consumers (VSCs) which redirect device requests to VSPs in the parent partition via the VMBus.
This entire process is transparent to the guest operating system.
Virtual Devices can also take advantage of a Windows Server Virtualization feature, named
Enlightened I/O, for storage, networking, graphics, and input subsystems. Enlightened I/O is a
specialized virtualization-aware implementation of high level communication protocols (such as
SCSI) that utilize the VMBus directly, bypassing any device emulation layer. This makes the
communication more efficient but requires an enlightened guest that is hypervisor and VMBus
aware. Hyper-V enlightened I/O and a hypervisor aware kernel is provided via installation of
Hyper-V integration services. Integration components, which include virtual server client (VSC)
drivers, are also available for other client operating systems. Hyper-V requires a processor that
includes hardware assisted virtualization, such as is provided with Intel VT or AMD Virtualization
(AMD-V) technology.
The following diagram provides a high-level overview of the architecture of a Hyper-V
environment running on Windows Server 2008.
Overview of Hyper-V architecture
Acronyms and terms used in the diagram above are described below:
• APIC – Advanced Programmable Interrupt Controller. A device which allows priority levels to be assigned to its interrupt outputs.
• Child Partition – Partition that hosts a guest operating system. All access to physical memory and devices by a child partition is provided via the Virtual Machine Bus (VMBus) or the hypervisor.
• Hypercall – Interface for communication with the hypervisor. The hypercall interface accommodates access to the optimizations provided by the hypervisor.
• Hypervisor – A layer of software that sits between the hardware and one or more operating systems. Its primary job is to provide isolated execution environments called partitions. The hypervisor controls and arbitrates access to the underlying hardware.
• IC – Integration component. Component that allows child partitions to communicate with other partitions and the hypervisor.
• I/O stack – Input/output stack.
• MSR – Memory Service Routine.
• Root Partition – Manages machine-level functions such as device drivers, power management, and device hot addition/removal. The root (or parent) partition is the only partition that has direct access to physical memory and devices.
• VID – Virtualization Infrastructure Driver. Provides partition management services, virtual processor management services, and memory management services for partitions.
• VMBus – Channel-based communication mechanism used for inter-partition communication and device enumeration on systems with multiple active virtualized partitions. The VMBus is installed with Hyper-V Integration Services.
• VMMS – Virtual Machine Management Service. Responsible for managing the state of all virtual machines in child partitions.
• VMWP – Virtual Machine Worker Process. A user-mode component of the virtualization stack. The worker process provides virtual machine management services from the Windows Server 2008 instance in the parent partition to the guest operating systems in the child partitions. The Virtual Machine Management Service spawns a separate worker process for each running virtual machine.
• VSC – Virtualization Service Client. A synthetic device instance that resides in a child partition. VSCs utilize hardware resources that are provided by Virtualization Service Providers (VSPs) in the parent partition. They communicate with the corresponding VSPs in the parent partition over the VMBus to satisfy a child partition's device I/O requests.
• VSP – Virtualization Service Provider. Resides in the root partition and provides synthetic device support to child partitions over the Virtual Machine Bus (VMBus).
• WinHv – Windows Hypervisor Interface Library. WinHv is essentially a bridge between a partitioned operating system's drivers and the hypervisor, which allows drivers to call the hypervisor using standard Windows calling conventions.
• WMI – The Virtual Machine Management Service exposes a set of Windows Management Instrumentation (WMI)-based APIs for managing and controlling virtual machines.
Most of these terms are defined in the Glossary.
Note
For more information about Windows Server 2008 Hyper-V, see
http://go.microsoft.com/fwlink/?LinkID=121187.
Advantages of Hyper-V
The advantages of running enterprise-level solutions in a Hyper-V virtualized environment include
the following:
1. Consolidation of hardware resources - Multiple physical servers can be easily
consolidated into comparatively fewer servers by implementing virtualization with Hyper-V.
Consolidation accommodates full use of deployed hardware resources.
2. Ease of administration:
   • Consolidation and centralization of resources simplifies administration.
   • Scale-up and scale-out are accommodated with much greater ease.
3. Significant cost savings:
   • Hardware costs are significantly reduced because multiple virtual machines can run on a single physical machine; a separate physical machine is not required for every computer.
   • Hyper-V licensing costs are included with the license cost of Windows Server 2008. Hyper-V can also be purchased as a stand-alone product that can be installed on Windows Server 2008 Server Core.
   • Power requirements may be significantly reduced by consolidating existing applications onto a virtualized Hyper-V environment due to the reduced physical hardware "footprint" that is required.
4. Fault tolerance support through Hyper-V clustering – Because Hyper-V is a cluster-aware application, Windows Server 2008 provides native host clustering support for virtual machines created in a Hyper-V virtualized environment.
5. Ease of deployment and management:
   • Consolidation of existing servers into fewer physical servers simplifies deployment.
   • A comprehensive Hyper-V management solution is available with System Center Virtual Machine Manager. For more information about System Center Virtual Machine Manager, see http://go.microsoft.com/fwlink/?LinkID=111303.
6. Key Hyper-V performance characteristics:
   • Improved hardware sharing architecture – Hyper-V provides improved access to and utilization of core resources, such as disk, networking, and video, when running guest operating systems that have a hypervisor-aware kernel and are equipped with the requisite virtual server client (VSC) code (known as Hyper-V enlightened I/O). Enlightenments are enhancements made to the operating system to help reduce the cost of certain operating system functions, such as memory management. Presently, both Windows Server 2008 and Windows Vista support Hyper-V enlightened I/O and a hypervisor-aware kernel via installation of Hyper-V integration services. Integration components, which include VSC drivers, are also available for other client operating systems.
     Disk performance is critical for disk I/O-intensive enterprise applications such as Microsoft BizTalk Server. In addition to Hyper-V enlightened I/O, Hyper-V provides "passthrough" disk support, which delivers disk performance on par with physical disk performance. Note that passthrough disk support provides improved performance at a small cost to convenience: passthrough disks are essentially physical disks/LUNs that are attached to a virtual machine and do not support some of the functionality of virtual disks, such as virtual machine snapshots.
   • Processor hardware-assisted virtualization support – Hyper-V takes full advantage of the processor hardware-assisted virtualization support that is available with recent processor technology.
   • Multi-core (SMP) guest operating system support – Hyper-V provides the ability to support up to four processors in a virtual machine environment, which allows applications to take full advantage of multi-threading functionality in a virtual machine.
   • Both 32-bit and 64-bit guest operating system support – Hyper-V provides broad support for simultaneously running different types of operating systems, including 32-bit and 64-bit systems across different server platforms, such as Windows, Linux®, and others.
7. Proven track record - Key Microsoft Web sites MSDN (http://msdn.microsoft.com) and
TechNet (http://technet.microsoft.com) are hosted in Hyper-V environments.
8. Comprehensive product support – Because Microsoft enterprise applications (such as
Exchange Server and SQL Server) are fully tested running in Hyper-V, Microsoft provides
code fix support for these applications when deployed and run in a Hyper-V environment.
9. Scalability – Additional processing power, network bandwidth, and storage capacity can be provided quickly and easily by apportioning additional available resources from the host computer to the guest virtual machine(s). This may require upgrading the host computer or moving the guest virtual machines to a more capable host computer.
For more in depth information about the benefits of leveraging virtualization technology provided
with Hyper-V, see the whitepaper "Advanced Virtualization Benefits of Windows Server 2008
Editions for the Enterprise" available for download at
http://go.microsoft.com/fwlink/?LinkId=123530.
Disadvantages of Hyper-V
Some disadvantages of running enterprise-level solutions in a Hyper-V virtualized environment
may include:

Hardware requirements – Due to the demands of server consolidation, Hyper-V virtual
machines tend to consume more CPU and memory, and require greater disk I/O bandwidth
than physical servers with comparable computing loads. Because the Hyper-V server role is
only available for 64-bit, x64-based editions of Windows Server 2008, the physical hardware
must support hardware assisted virtualization. This means the processor must be compatible
with Intel VT or AMD Virtualization (AMD-V) technology, the system BIOS must support Data
Execution Prevention (DEP), and DEP must be enabled.

Software requirements – While most Microsoft software is supported running on Hyper-V
virtual machines, some Microsoft software is still in the process of being tested to ensure
compatibility with a Hyper-V virtualized environment. For example, most Microsoft
enterprise-level applications either support running on Hyper-V or are in the process of being
tested for support on Hyper-V. All versions of BizTalk Server since BizTalk Server 2004 are
supported running on Hyper-V, but SQL Server 2005 and SQL Server 2008 are still being tested
and should be fully supported on Hyper-V in the near future. For more information on the
supportability of BizTalk Server and SQL Server on Hyper-V, see Appendix C: BizTalk Server
and SQL Server Hyper-V Supportability.
Differences between Hyper-V and Virtual Server 2005
When deciding which server virtualization technology to adopt, it may be of value to compare
Microsoft’s latest virtualization technology, Hyper-V, to the previous version of Microsoft server
virtualization technology, Virtual Server 2005. The following table describes key differences
between Hyper-V and Virtual Server 2005.
Virtualization Feature                              Virtual Server 2005 R2    Hyper-V
32-bit virtual machines                             Yes                       Yes
64-bit virtual machines                             No                        Yes
Multi-processor virtual machines                    No                        Yes, 4-core VMs
Virtual machine memory support                      3.6 GB per VM             64 GB per VM
Managed by System Center Virtual Machine Manager    Yes                       Yes
Support for Windows Clustering Services             Yes                       Yes
Host-side backup support (VSS)                      Yes                       Yes
Scriptable / extensible                             Yes, COM                  Yes, WMI
User interface                                      Web interface             MMC 3.0 interface
Appendix C: BizTalk Server and SQL Server Hyper-V Supportability
The test scenarios described in Testing BizTalk Server Virtualization Performance were
performed with BizTalk Server 2009 and SQL Server 2008. BizTalk Server 2009 is supported
when installed on a supported operating system that is running on Microsoft Virtual Server 2005
or on Windows Server 2008 Hyper-V. For more information, see the Microsoft Knowledge Base
article “Microsoft BizTalk Server supportability on a virtual machine” available at
http://go.microsoft.com/fwlink/?LinkId=148941.
Support policy for SQL Server 2008 when installed on a supported operating system that is
running on Windows Server 2008 Hyper-V is documented in the Microsoft Knowledge Base
article “Support policy for Microsoft SQL Server products that are running in a hardware
virtualization environment” available at http://go.microsoft.com/fwlink/?LinkId=148942.
Important
As of the writing of this guide, clustering of a SQL Server instance in a Hyper-V
environment is not a supported scenario.
Appendix D: Tools for Measuring Performance
This topic describes several tools that can be used to monitor and evaluate the performance of a
BizTalk Server environment.
Performance Analysis of Logs (PAL) tool
The PAL tool is used to generate an HTML-based report that graphically charts important
performance monitor counters and generates alerts when thresholds for these counters are
exceeded. PAL is an excellent tool for identifying bottlenecks in a BizTalk Server 2009 solution to
facilitate the appropriate allocation of resources when optimizing the performance of the solution.
For more information about the Performance Analysis of Logs (PAL) tool, see
http://go.microsoft.com/fwlink/?LinkID=98098.
Note
Use of this tool is not supported by Microsoft, and Microsoft makes no guarantees about
the suitability of this program. Use of this program is entirely at your own risk.
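To illustrate the kind of analysis PAL performs, the following Python sketch flags performance counter samples that exceed a threshold. This is a simplified illustration of the concept only; the counter values and the 80% threshold below are hypothetical and do not reflect PAL's actual thresholds or report logic.

```python
# Minimal sketch of threshold-based counter analysis, the core idea
# behind PAL-style alerting. The counter samples and the threshold
# are illustrative only, not PAL's actual defaults.

def find_alerts(samples, threshold):
    """Return (index, value) pairs for samples that exceed the threshold."""
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

# Hypothetical "% Processor Time" samples collected at a fixed
# interval by Performance Monitor.
cpu_samples = [35.0, 42.5, 91.0, 88.2, 40.1]
alerts = find_alerts(cpu_samples, threshold=80.0)

for index, value in alerts:
    print("Sample %d: %.1f%% exceeds the 80%% threshold" % (index, value))
```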
Performance Monitor
Performance Monitor provides a visual display of built-in Windows performance counters, either
in real time or as a way to review historical data.
Log Parser
Log Parser is a powerful, versatile tool that provides universal query access to text-based data
such as log files, XML files and CSV files, as well as key data sources on the Windows®
operating system such as the Event Log, the Registry, the file system, and Active Directory®. Log
Parser is available for download at http://go.microsoft.com/fwlink/?LinkID=100882.
Relog
The Relog utility is used to extract performance counters from logs created by Performance
Monitor and convert the data into other formats, such as tab-delimited text files (text-TSV),
comma-delimited text files (text-CSV), binary files, and SQL databases. This data can then be
analyzed and queried using other tools, such as Log Parser, to generate statistics for key
performance indicators (KPIs). The Relog utility is provided with Windows Server 2003 and
subsequent versions.
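For example, a binary Performance Monitor log can be converted to text-CSV with a command such as `relog perf.blg -f CSV -o perf.csv`, and the exported values can then be summarized with a few lines of Python. The counter path, server name, and sample values in this sketch are hypothetical; relog's CSV output places the sample timestamp in the first column and one counter per remaining column.

```python
import csv
import io

# Hypothetical extract of a relog text-CSV export: the first column is
# the sample timestamp, and each remaining column is one counter path.
sample_csv = (
    '"(PDH-CSV 4.0)","\\\\SERVER01\\Processor(_Total)\\% Processor Time"\n'
    '"04/01/2009 10:00:00","35.2"\n'
    '"04/01/2009 10:00:15","91.7"\n'
    '"04/01/2009 10:00:30","64.1"\n'
)

reader = csv.reader(io.StringIO(sample_csv))
header = next(reader)                        # skip the counter-path header row
values = [float(row[1]) for row in reader]   # second column holds the counter

print("samples:", len(values))
print("average: %.1f" % (sum(values) / len(values)))
print("maximum: %.1f" % max(values))
```

In practice the `io.StringIO` wrapper would be replaced with `open("perf.csv")` against a real relog export.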
LoadGen
BizTalk LoadGen 2007 is a load generation tool used to run performance and stress tests against
BizTalk Server 2009. The Microsoft BizTalk LoadGen 2007 tool is available for download at
http://go.microsoft.com/fwlink/?LinkId=59841.
Visual Studio Team System 2008 Load Testing
Visual Studio Team System (VSTS) 2008 provides a tool for creating and running load tests.
For more information about working with load tests, see
http://go.microsoft.com/fwlink/?LinkId=141486.
BizUnit
BizUnit is a framework designed for automated testing of BizTalk Server solutions. BizUnit is an
excellent tool for testing end-to-end BizTalk Server 2009 scenarios. For more information about
BizUnit 3.0, see http://go.microsoft.com/fwlink/?LinkID=85168.
Note
Use of this tool is not supported by Microsoft, and Microsoft makes no guarantees about
the suitability of this program. Use of this program is entirely at your own risk.
IOMeter
IOMeter is an open source tool used for measuring disk I/O performance. For more information
about IOMeter, see http://go.microsoft.com/fwlink/?LinkId=122412.
Note
Use of this tool is not supported by Microsoft, and Microsoft makes no guarantees about
the suitability of this program. Use of this program is entirely at your own risk.
BizTalk Server Orchestration Profiler
The BizTalk Server Orchestration Profiler is used to obtain a consolidated view of orchestration
tracking data for a specified period of time. This provides developers with insight into how
orchestrations are being processed and the level of test coverage that is being applied. The
BizTalk Server Orchestration Profiler helps to identify potential problems with latency and code
path exceptions by highlighting long-running and error-prone orchestration shapes, which are
critical for effective performance testing. The BizTalk Server Orchestration Profiler is available for
download at http://go.microsoft.com/fwlink/?LinkID=102209.
Note
Use of this tool is not supported by Microsoft, and Microsoft makes no guarantees about
the suitability of this program. Use of this program is entirely at your own risk.
Pathping
Pathping provides information about possible data loss at one or more router hops on the way to
a target host. To do so, pathping sends Internet Control Message Protocol (ICMP) packets to
each router in the path. Pathping.exe is available with all versions of Windows since
Windows 2000 Server.
SQL Server Tools for Performance Monitoring and Tuning
SQL Server provides several tools for monitoring events in SQL Server and for tuning the
physical database design. These tools are described in the topic “Tools for Performance
Monitoring and Tuning” in the SQL Server Books Online at
http://go.microsoft.com/fwlink/?LinkId=146357. Information about specific tools used for SQL
Server performance monitoring and tuning is provided below:
SQL Profiler
Microsoft SQL Server Profiler can be used to capture Transact-SQL statements that are sent to
SQL Server and the SQL Server result sets from these statements. Because BizTalk Server is
tightly integrated with SQL Server, the analysis of a SQL Server Profiler trace can be a useful tool for
analyzing problems that may occur in BizTalk Server when reading from and writing to SQL
Server databases. For information about how to use SQL Server Profiler, see "Using SQL Server
Profiler" in the SQL Server 2008 Books Online at http://go.microsoft.com/fwlink/?linkid=104423.
Important
There is considerable overhead associated with running SQL Profiler. Therefore, SQL
Profiler is best suited for use in test or development environments. If using SQL Profiler
to troubleshoot a production environment, be aware of the associated overhead costs
and limit the use of SQL Profiler accordingly.
Note
When using SQL Profiler to capture Transact-SQL statements, configure SQL Profiler to
generate output to a local drive rather than a drive located on a remote network share or
other slow device, such as a USB memory stick.
SQL Trace
SQL Server provides Transact-SQL system stored procedures to create traces on an instance of
the SQL Server Database Engine. These system stored procedures can be used from within your
own applications to create traces manually, instead of using SQL Server Profiler. This allows you
to write custom applications specific to the needs of your enterprise. For more information about
using SQL Trace, see “Introducing SQL Trace” in the SQL Server 2008 Books Online at
http://go.microsoft.com/fwlink/?LinkId=146354.
Note
When using SQL Trace to capture Transact-SQL statements, configure SQL Trace to
generate output to a local drive rather than a drive located on a remote network share or
other slow device, such as a USB flash drive.
SQL Activity Monitor
SQL Server 2008 Activity Monitor provides information about SQL Server processes and how
these processes affect the current instance of SQL Server. For more information about SQL
Server 2008 Activity Monitor, see “Activity Monitor” in the SQL Server 2008 Books Online at
http://go.microsoft.com/fwlink/?LinkId=146355. For information about how to open Activity Monitor
from SQL Server Management Studio, see “How to: Open Activity Monitor (SQL Server
Management Studio)” in the SQL Server 2008 Books Online at
http://go.microsoft.com/fwlink/?LinkId=135094.
SQL Server 2008 Data Collection
SQL Server 2008 provides a data collector that you can use to obtain and save data that is
gathered from several sources. The data collector enables you to use data collection containers
that determine the scope and frequency of data collection on a computer that is
running SQL Server 2008. For more information about implementing SQL Server 2008 data
collection, see “Data Collection” in the SQL Server 2008 Books Online at
http://go.microsoft.com/fwlink/?LinkId=146356.
SQL Server 2005 Performance Dashboard Reports
SQL Server 2005 Performance Dashboard Reports are used to monitor and resolve performance
problems on your SQL Server 2005 database server. For more information about SQL
Server 2005 Performance Dashboard Reports, see
http://go.microsoft.com/fwlink/?LinkID=118673.
SQLIO
The SQLIO tool was developed by Microsoft to evaluate the I/O capacity of a given configuration.
As its name implies, SQLIO is valuable for measuring the impact of file system I/O on
SQL Server performance. SQLIO can be downloaded from
http://go.microsoft.com/fwlink/?LinkId=115176.
Glossary
This topic defines key terms used throughout this guide.
Glossary
Term
Definition
advanced programmable interrupt
controller (APIC)
A controller that receives interrupts from various
sources and sends them to a processor core for
handling. In a multiprocessor system, which can
be either a VM or a physical computer, the
APIC sends and receives interprocessor
interrupt messages to and from other logical
processors on the system bus. For more
information about the advanced programmable
interrupt controller, see chapter 8 of the Intel®
64 and IA-32 Architectures Software
Developer’s Manual Volume 3A: System
Programming Guide, Part 1
(http://go.microsoft.com/fwlink/?LinkId=148923).
child partition
Any partition that is created by the parent (or
root) partition.
core
See logical processor.
Note
In this guide, core is sometimes used interchangeably with virtual processor, especially in graphics. This usage will be corrected in a future edition of this guide.
device virtualization
A software technology that lets a hardware
resource be abstracted and shared among
multiple consumers.
emulated device
A virtualized device that mimics an actual
physical hardware device so that guests can
use the typical drivers for that hardware device.
Emulated devices are less efficient than
synthetic devices, but emulated devices provide
support for “unenlightened” operating systems
that do not have integration components
installed.
enlightenment
An optimization to a guest operating system to
make it aware of VM environments and tune its
behavior for VMs. Enlightenments help to
reduce the cost of certain operating system
functions such as memory management.
Enlightenments are accessed through the
hypercall interface. Enlightened I/O can utilize
the VMBus directly, bypassing any device
emulation layer. An operating system that takes
advantage of all possible enlightenments is said
to be “fully enlightened.”
guest operating system
The operating system (OS) software running in
a child partition. Guests can be a full-featured
operating system or a small, special-purpose
kernel.
hypercall interface
An application programming interface (API) that
partitions use to access the hypervisor.
Hyper-V
Hypervisor-based virtualization technology for
x64 versions of Windows Server 2008. The
Hyper-V virtualization platform allows multiple
isolated operating systems to share a single
hardware platform.
hypervisor
A layer of software that sits just above the
hardware and below one or more operating
systems. Its primary job is to provide isolated
execution environments called partitions. The
hypervisor controls and arbitrates access to the
underlying hardware.
interrupt
An asynchronous signal from hardware
indicating the need for attention or a
synchronous event in software indicating the
need for a change in execution.
Input Output Memory Management Unit
(IOMMU)
Remaps physical memory addresses to the
addresses that are used by the child partitions.
integration components (IC)
A set of services and drivers that improve
performance and integration between the
physical and virtual machines. Integration
components enable guest operating systems to
use synthetic devices, significantly reducing the
overhead needed to access devices. See also
enlightenment.
integration services
See integration components.
logical processor
A CPU that handles one thread of execution
(instruction stream). A logical processor can be
a core or a hyper-thread. There can be one or
more logical processors per core (more than
one if hyper-threading is enabled) and one or
more cores per processor socket.
logical unit number (LUN)
A number used to identify a disk on a given disk
controller or within a SAN.
parent partition
See root partition.
partition
A virtual machine (VM) created by the
hypervisor software. Each partition has its own
set of hardware resources (CPU, memory, and
devices). Partitions can own or share hardware
resources.
passthrough disk access
A representation of an entire physical disk as a
virtual disk within the guest. The data and
commands are “passed through” to the physical
disk (through the root partition’s native storage
stack) with no intervening processing by the
virtual stack.
root partition
A partition that is created first and owns all the
resources that the hypervisor does not own
including most devices and system memory. It
hosts the virtualization stack and creates and
manages the child partitions. The root partition
is also known as the parent partition.
storage area network (SAN)
SANs are networks of storage devices. A SAN
connects (typically) multiple servers and
storage devices on a single high-speed fiber
optic network.
synthetic device
A virtualized device with no physical hardware
analog so that guests do not need a driver
(virtualization service client) to that synthetic
device. Drivers for synthetic devices are
included with the integration components
(enlightenments) for the guest operating
system. The synthetic device drivers use the
VMBus to communicate with the virtualized
device software in the root partition.
virtual hard disk (VHD)
A virtual hard disk is a file stored on the
physical computer’s native disk system. From
within a virtual machine, the VHD appears as
though it were a physical hard disk. VHDs are
based on the Virtual Hard Disk Image Format
Specification. For more information about the
Virtual Hard Disk Format Specification, see
http://go.microsoft.com/fwlink/?LinkId=122975.
virtual machine (VM)
A virtual computer that was created by software
emulation and has the same characteristics as
a real computer.
virtual machine management service
(VMMS)
The VMMS is part of the Virtualization Windows
Management Instrumentation (WMI) Provider
interface. Management tools connect to this
service during runtime to gather data about
active partitions.
virtual machine worker process (VMWP)
Each VM has a worker process that runs in the
parent partition. VMWPs run code for saving
state, accessing emulated devices, and
controlling the VM.
virtual processor
A virtual abstraction of a processor that is
scheduled to run on a logical processor. A VM
can have one or more virtual processors.
virtualization service client (VSC)
A software module that a guest loads to
consume a resource or service. For I/O
devices, the virtualization service client can be
a device driver that the operating system kernel
loads.
virtualization service provider (VSP)
A provider, exposed by the virtualization stack,
that provides resources or services such as I/O
to a child partition.
virtualization stack
A collection of software components in the root
partition that work together to support VMs. The
virtualization stack works with and sits above
the hypervisor. It also provides management
capabilities.
VMBus
A channel-based communication mechanism
used for inter-partition communication and
device enumeration on systems with multiple
active virtualized partitions.