Virtualizing Microsoft Applications on VMware Virtual SAN
REFERENCE ARCHITECTURE

Table of Contents

Executive Summary
    Key Findings
    Scope
    Prerequisites and Assumptions
    Terminology
Consolidation Architecture Overview
    VMware Virtual SAN Physical Server Configuration
    Virtual SAN Disk Group Configuration
Exchange 2013 Architecture
SQL Server 2014 Architecture
Virtual Networking Configuration
    Exchange Virtual Network Configuration
    SQL Server Virtual Network Configuration
Exchange DAG Configuration on VMware Virtual SAN
SQL Server AAG Configuration on VMware Virtual SAN
Exchange Performance on Virtual SAN
    Key Performance Considerations
    Performance Testing
        Jetstress 2013 Metrics – ExchangeMB1
        Jetstress 2013 Metrics – ExchangeMB2
        Load Generator 2013 Results
SQL Performance on Virtual SAN
    Key Performance Considerations
    Performance Testing
        DVD Store Results
        DVD Store Test ds2sqlserver session 1
        DVD Store Test ds2sqlserver session 2
        DVD Store Test Parameters
Application Availability
    vSphere HA and Virtual SAN
    HA Configuration for Exchange and SQL Server on Virtual SAN
    SPBM and Data Protection
        Objects and Components
        Number of Failures to Tolerate
Conclusion

Executive Summary

As enterprises consolidate more business-critical workloads onto a virtualized platform, it is important that they design and build a shared storage infrastructure that satisfies both operational and performance requirements. In recent years, Microsoft has also redesigned applications such as Exchange to encourage deployment on locally attached storage for better economics. VMware Virtual SAN™ pools local flash-based devices to form a performance-accelerating tier and magnetic drives to provide shared storage to virtual machines in the VMware vSphere® cluster. This architecture brings excellent performance to applications at an affordable cost. Furthermore, because Virtual SAN is an integral part of the vSphere platform, it enables unparalleled operational simplicity and ease of use in a vSphere-powered IT environment.

VMware Virtual SAN 6.0 provides customers and partners with a proven solution capable of hosting virtualized Microsoft applications at a consistent performance level and with constant availability. This solution has been designed, configured, and tested as a proof of concept of the capabilities and supported features of VMware Virtual SAN with business-critical applications.

This technical white paper describes how concurrent Microsoft Exchange and Microsoft SQL Server workloads deliver Tier 1 performance on VMware Virtual SAN. We also cover how to configure Exchange 2013 with Database Availability Groups (DAG), SQL Server 2014 with AlwaysOn Availability Groups (AAG), and Microsoft SharePoint 2013 on an eight-node VMware Virtual SAN cluster.

Key Findings

The test configuration has all three applications deployed in the same Virtual SAN cluster, along with other management components such as Active Directory and DNS servers. We used industry-standard test tools for both Exchange and SQL Server to generate simultaneous application load on the storage subsystem. During our testing and analysis, we found that the performance of Exchange and SQL Server on Virtual SAN was comparable to that of enterprise-class network-attached storage (NAS) and storage-area network (SAN) storage.
The deployment and configuration of the environment, however, was considerably easier because no additional host utilities were needed for the storage.

Figure 1 highlights the key performance results:

• Jetstress: 5000 mailboxes, 1194 transactional IOPS, I/O database average latency < 10ms, I/O log average latency < 20ms
• Load Generator: 10008 users, 3000 distribution lists, 15000 contacts, 71447 tasks per day in stress mode
• DVD Store: 1 database, 2 DVD Store instances, 38260 average orders per minute

Figure 1. Key test results

Scope

This technical white paper helps customers and partners deploy a consolidated, high-performance, and highly available Exchange 2013, SQL Server 2014, and SharePoint 2013 solution on VMware Virtual SAN. The paper describes how we simultaneously:

• Configure Exchange 2013 DAG on VMware Virtual SAN
• Configure SQL Server 2014 AAG on VMware Virtual SAN
• Configure a SharePoint 2013 server farm on VMware Virtual SAN

The white paper provides the solution architecture, configuration details, networking best practices, and performance data points to guide production design, sizing, and implementation.

Prerequisites and Assumptions

The prerequisites and assumptions for the solution are as follows:

• VMware Distributed Switch is configured
• VMware Virtual SAN is installed and configured
• Microsoft Exchange, SQL Server, and SharePoint are installed on virtual machines

Terminology

Table 1 lists the terms used in this paper.

Table 1. Terminology

TERM – DEFINITION
AAG – SQL AlwaysOn Availability Group
CAS – Exchange Client Access Server
DAG – Exchange Database Availability Group
disk group – A container for magnetic disks and an SSD that acts as a read cache and write buffer
dvSwitch – VMware Distributed Virtual Network Switch
dvportgroup – Distributed virtual network port group
ECP – Exchange Control Panel
MAPI – Messaging Application Program Interface
SMB – Server Message Block
Virtual SAN datastore – A single shared datastore presented to all hosts that are part of the Virtual SAN enabled cluster; a Virtual SAN datastore consists of multiple disk groups
VMDK – Virtual machine disk
Witness Share – A Windows file share used for the tie-breaker process between two or more clustered nodes

Consolidation Architecture Overview

Figure 2 shows the VMware Virtual SAN logical configuration.

Figure 2. VMware Virtual SAN Logical Configuration

All Exchange 2013 DAG servers, SQL Server 2014 AAG servers, and SharePoint 2013 servers are virtualized and consolidated on the eight-node Virtual SAN cluster, along with the necessary infrastructure components, including VMware vCenter Server, a Windows domain controller, and a DNS server.

Note: Our Microsoft cluster architecture and testing focused on the Node and File Share Majority configuration. Using a file share for the quorum is a supported and common Microsoft deployment. The file share is simply an SMB share on a separate virtual machine that acts as the quorum witness for the cluster; using an SMB share keeps complexity and cost to a minimum. Other modes of Microsoft clustering, particularly those requiring a Cluster Shared Volume, are not supported by Virtual SAN.

VMware Virtual SAN Physical Server Configuration

Before selecting and configuring the servers and components, we confirmed that they are listed in the VMware Virtual SAN Compatibility Guide.
Note: We also used two different types of solid state drives (SSDs), which is similar to what happens in a real customer environment: adding different yet supported components after the fact.

Table 2 lists the configuration of each Virtual SAN ESXi server in the Virtual SAN cluster.

Table 2. Server Configuration

COMPONENT – SPECIFICATIONS
ESXi host CPU – 2x Intel Xeon CPU E5-2690 v2 @ 3.00GHz, 10 cores each
ESXi host RAM – 256GB
ESXi version – ESXi 6.0
Network adapter – 2x Intel Ethernet 82599EB 10-Gigabit SFI/SFP+
Storage adapter – 2x LSI Logic / Symbios Logic LSI-9300-8i, firmware version 6.00.00.00-IT, driver MPT3SAS
Power management – Balanced (set in BIOS)
Disks – SSD: 2x Intel 400GB (SSDSC2BA400G3); SSD: 2x Intel 200GB (Intel PCIe 910); HDD: 12x Seagate 900GB

Virtual SAN Disk Group Configuration

Table 3 lists the configuration of each Virtual SAN disk group.

Table 3. Virtual SAN Disk Group Configuration

DISK GROUP – SSD – HDD
Disk group 1 – 1x Intel 400GB – 3x Seagate 900GB
Disk group 2 – 1x Intel 400GB – 3x Seagate 900GB
Disk group 3 – 1x Intel 200GB – 3x Seagate 900GB
Disk group 4 – 1x Intel 200GB – 3x Seagate 900GB

Note: The mixed configuration of flash devices is meant to simulate a real-world customer environment.

Exchange 2013 Architecture

Figure 3 shows the Exchange 2013 DAG VMware Virtual SAN logical configuration.

Figure 3. Exchange 2013 DAG VMware Virtual SAN Logical Configuration

Four Client Access Servers (one virtual machine per host) using a single namespace.

Four Mailbox Servers (one virtual machine per host):

• Three active mailbox DB copies, three passive mailbox DB copies, and six total DBs per Mailbox Server; 1,250 users with a 2GB mailbox on each Mailbox Server
• Twelve active mailbox DB copies, 12 passive mailbox DB copies, and 24 total DBs per DAG; 5,000 total users

A Windows file share is created on a separate Windows 2012 virtual machine and is used as the file share witness for the DAG cluster.

As a software-defined storage solution, Virtual SAN transforms storage management from traditional LUN-based provisioning to radically simple, automated, VM-centric provisioning using vSphere Storage Policy Based Management (SPBM). Customers can now focus on their applications by simply defining a storage policy that satisfies their service-level requirements and applying that policy at virtual machine creation; resources and data services are then provisioned and maintained automatically. For the purposes of this paper, we select the default Number of Failures to Tolerate (FTT) policy setting of 1.

Figure 4 shows Virtual SAN Storage Policy Based Management.

Figure 4. Virtual SAN Storage Policy Based Management

With SPBM, customers can simply focus on designing the application. Table 4 lists the storage layout for each Exchange Server virtual machine; first, a quick capacity sanity check for this layout is sketched below.
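To put the default FTT=1 policy in context for this layout, the short Python sketch below adds up the per-VM VMDK sizes from Table 4 and shows the approximate raw Virtual SAN footprint when every object is mirrored once. It is only an illustrative back-of-the-envelope check using the figures published in this paper; thin provisioning and Virtual SAN metadata overheads are ignored.

```python
# Back-of-the-envelope check of the Exchange Mailbox server storage layout in
# Table 4 under the default Virtual SAN policy NumberOfFailuresToTolerate = 1.
# Figures come from this paper; the script is an illustrative sketch only.

MAILBOXES_PER_SERVER = 1250   # users hosted per Mailbox server
MAILBOX_QUOTA_GB = 2          # mailbox size used in the tests
DB_VMDKS, DB_VMDK_GB = 6, 1020    # DB1-DB6
LOG_VMDKS, LOG_VMDK_GB = 6, 60    # LOG1-LOG6
OS_VMDK_GB = 80
FTT = 1                       # NumberOfFailuresToTolerate

provisioned_gb = OS_VMDK_GB + DB_VMDKS * DB_VMDK_GB + LOG_VMDKS * LOG_VMDK_GB
mailbox_data_gb = MAILBOXES_PER_SERVER * MAILBOX_QUOTA_GB

# With FTT=n, Virtual SAN keeps n+1 replicas of each object, so the raw
# datastore capacity consumed is roughly (n+1) times the provisioned size.
raw_consumed_gb = provisioned_gb * (FTT + 1)

print(f"Provisioned per Mailbox VM : {provisioned_gb} GB")
print(f"Mailbox data per VM        : {mailbox_data_gb} GB")
print(f"Raw Virtual SAN footprint  : {raw_consumed_gb} GB (FTT={FTT})")
```

Running the sketch shows roughly 6.5TB provisioned per Mailbox virtual machine and about twice that consumed on the Virtual SAN datastore at FTT=1, which is the trade-off the storage policy makes for availability.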
Table 4. Exchange Server Virtual Machine

Exchange role: Mailbox Server – 4 servers, normal run time – 1,250 mailboxes each
Virtual hardware per virtual machine:
• CPU – 8 cores
• Memory – 96GB
• Storage – SCSI Controller 0: C:\ – VMDK 1 – 80GB (OS and application files)
• Storage – SCSI Controller 1: E:\ – VMDK 2 – 1020GB (DB1); F:\ – VMDK 3 – 60GB (LOG1); G:\ – VMDK 4 – 1020GB (DB2); H:\ – VMDK 5 – 60GB (LOG2)
• Storage – SCSI Controller 2: I:\ – VMDK 6 – 1020GB (DB3); J:\ – VMDK 7 – 60GB (LOG3); K:\ – VMDK 8 – 1020GB (DB4); L:\ – VMDK 9 – 60GB (LOG4)
• Storage – SCSI Controller 3: M:\ – VMDK 10 – 1020GB (DB5); N:\ – VMDK 11 – 60GB (LOG5); O:\ – VMDK 12 – 1020GB (DB6); P:\ – VMDK 13 – 60GB (LOG6)
• Network – vNIC 1 – LAN/client connectivity
• Network – vNIC 2 – DAG replication

Exchange role: Client Access Server – 4 servers
Virtual hardware per virtual machine:
• CPU – 2 cores
• Memory – 8GB
• Storage – 100GB (OS/application) on VMware Virtual SAN

Figure 5 shows the detailed configuration of one of the Exchange Mailbox virtual machines.

Figure 5. Exchange Mailbox Virtual Machine Details

Note: We created this number of VMDKs to accommodate the Exchange Sizing Calculator recommendations. Refer to Exchange 2013 Server Role Requirements Calculator v6.6 for more information.

SQL Server 2014 Architecture

Figure 6. SQL 2014 AAG VMware Virtual SAN Logical Configuration

Two SQL Server 2014 servers (one virtual machine per host). This SQL Server installation is a standalone install, not a SQL Server failover cluster installation.

AlwaysOn configuration – four availability groups:

• AAGSPContent: SharePoint content databases – primary and secondary replicas – unique AAG listener
• AAGSPCore: SharePoint Config and Central Admin databases – primary and secondary replicas – unique AAG listener
• AAGSPServices: SharePoint User and App Service databases – primary and secondary replicas – unique AAG listener
• AAGSPSearch: SharePoint Search databases – primary and secondary replicas – unique AAG listener

A Windows file share is created on a separate Windows 2012 virtual machine and is used as the file share witness for the AAG cluster.

Table 5 shows the storage layout for each SQL Server virtual machine. As with Exchange, this is application-centric provisioning through vSphere SPBM.

Table 5. SQL Server Virtual Machine

SQL Server role: SQL Server 2014 database server
Virtual hardware per virtual machine:
• CPU – 8 cores
• Memory – 32GB
• Storage – SCSI Controller 0: C:\ – VMDK 1 – 80GB (OS); E:\ – VMDK 2 – 20GB (SQL binaries); R:\ – VMDK 3 – 15GB (Backup)
• Storage – SCSI Controller 1: F:\ – VMDK 4 – 10GB (Data Files1); G:\ – VMDK 5 – 10GB (Data Files2)
• Storage – SCSI Controller 2: L:\ – VMDK 6 – 10GB (Log Files); I:\ – VMDK 7 – 10GB (TempDB)
• Storage – SCSI Controller 3: J:\ – VMDK 8 – 10GB (TempLog Files)
• Network – vNIC 1 – LAN/client connectivity
• Network – vNIC 2 – AAG replication

Figure 7. SQL Server Virtual Machine Details

The SQL Server virtual machines are configured to accommodate SharePoint 2013.
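Once the four availability groups described above have been created (the configuration steps appear later in this paper), their presence and replica membership can be confirmed with a quick query against the AlwaysOn catalog views. The sketch below uses Python with pyodbc; the server name and ODBC driver version are placeholders for this environment, not values taken from the paper.

```python
# Minimal check that the SharePoint availability groups and their replicas
# exist, using the SQL Server AlwaysOn catalog views.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql01.contoso.local;"      # hypothetical SQL Server / listener name
    "Trusted_Connection=yes;"
)

query = """
SELECT ag.name AS availability_group,
       ar.replica_server_name
FROM sys.availability_groups AS ag
JOIN sys.availability_replicas AS ar
     ON ag.group_id = ar.group_id
ORDER BY ag.name, ar.replica_server_name;
"""

# Expect the four groups (AAGSPContent, AAGSPCore, AAGSPServices, AAGSPSearch),
# each with a primary and a secondary replica on the two SQL Server VMs.
for group, replica in conn.cursor().execute(query):
    print(f"{group}: {replica}")
```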
Virtual Networking Configuration

Follow these guidelines for the virtual networking configuration:

• The Hardware Networking Considerations and Guest Operating Systems sections of Performance Best Practices for VMware vSphere 5.0
• VMware Virtual SAN Network Design Guide

Exchange Virtual Network Configuration

Because of the flexibility of virtual networking, the topology can take many different forms; there is no single recommended practice because each approach provides its own set of benefits. Figure 8 shows our configuration for this solution.

Although Exchange 2013 DAG members can be deployed with a single network interface for both MAPI and replication traffic, Microsoft recommends using separate network interfaces for each traffic type. The use of at least two network interfaces allows DAG members to distinguish between system and network failures. Per Microsoft Exchange best practice, we separated the MAPI and DAG traffic.

Note: We also segment the SQL Server and Exchange replication traffic. Additionally, we take advantage of dvPortGroups and create port groups for both MAPI and public virtual machine traffic using VLAN 101.

Figure 8. Virtual Network Configuration – Exchange DAG

Figure 9. Server Manager Summary

Figure 10. Detail of the Exchange DAG Replication Network

The Exchange DAG is isolated on its own VLAN within the dvSwitch. We also configured each Exchange Mailbox Server VM DAG NIC accordingly.

SQL Server Virtual Network Configuration

Figure 8 also shows our configuration for SQL Server 2014. Per Microsoft SQL Server best practice, we separated the user/web and AAG replication traffic.

Note: We also segment the SQL Server and Exchange replication traffic.

Exchange DAG Configuration on VMware Virtual SAN

In this environment, the quorum configuration is a Node and File Share Majority using a file share witness to act as a tiebreaker. We created the file share on a separate Windows 2012 Server, as shown in Figure 11. The file share can also be created externally, such as on a physical or virtual Windows server outside of the Virtual SAN cluster, for additional protection. Because the file share only serves as a tiebreaker witness, its temporary unavailability does not affect DAG cluster functioning.

Figure 11. Windows File Share on Separate Windows 2012 VM for Exchange DAG

The virtual machine in Figure 11 hosts both the Exchange DAG share and the SQL AAG share. Refer to the Create a database availability group topic for details on the share permission requirements.

Figure 12. Exchange DAG Overview

Enter the created Windows share in the Exchange Control Panel (ECP) as shown in Figure 13.

Figure 13. Exchange DAG Witness Server and Share Information

The mailbox database configuration summary can be viewed in the ECP, and the DAG cluster configuration summary can be viewed in Failover Cluster Manager, as shown in Figure 14 and Figure 15 respectively.

Figure 14. Mailbox Database Configuration Summary

Figure 15. DAG Configuration Summary from Windows Failover Cluster Manager
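Because both the DAG quorum and, later, the AAG quorum depend on this file share witness, it can be worth confirming that every cluster node can reach the witness host over SMB (TCP port 445) before or after creating the DAG. The snippet below is a minimal reachability sketch; the witness host name is a placeholder for the separate Windows 2012 VM used in this environment.

```python
# Quick reachability test of the file share witness host over SMB (TCP 445),
# run from a DAG or AAG cluster member.
import socket

WITNESS_HOST = "fsw01.contoso.local"   # hypothetical witness server name
SMB_PORT = 445

try:
    # Open and immediately close a TCP connection to the SMB port.
    with socket.create_connection((WITNESS_HOST, SMB_PORT), timeout=5):
        print(f"SMB port {SMB_PORT} on {WITNESS_HOST} is reachable")
except OSError as err:
    print(f"Cannot reach {WITNESS_HOST}:{SMB_PORT} - {err}")
```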
The DAG creates the necessary Windows Failover Cluster resources.

Figure 16. The File Share Witness Summary from Windows Failover Cluster Manager

Figure 17. The Cluster Network Summary from Windows Failover Cluster Manager

After all configurations are completed, Failover Cluster Manager contains details about the four server nodes, their respective NICs, and the Windows file share, as shown in Figure 18.

Figure 18. DAG Failover Cluster

SQL Server AAG Configuration on VMware Virtual SAN

In this environment, the quorum configuration is also a Node and File Share Majority using a file share witness to act as a tiebreaker. The file share can be created externally, such as on a physical or virtual Windows server outside of the Virtual SAN cluster, for additional protection. Because the file share only serves as a tiebreaker witness, its temporary unavailability does not impact AAG cluster functioning. We created the file share on the same Windows 2012 Server VM that was used for the Exchange DAG.

Figure 19. Windows File Share on Separate Windows 2012 VM for SQL Server AAG

The virtual machine in Figure 19 hosts both the Exchange DAG share and the SQL AAG share. Refer to the Configure SQL Server 2012 AlwaysOn Availability Groups for SharePoint 2013 topic for details on the share permission requirements.

From within Windows Failover Cluster Manager, we confirm that the AAG is configured correctly and is running. The SQL Server AAG NICs are also listed.

Figure 20. AAG Configuration Summary from Failover Cluster Manager

Figure 21. AAG Network Configuration from Windows Failover Cluster Manager

Figure 22. File Share Witness Summary from Failover Cluster Manager

The AlwaysOn availability groups are visible within SQL Server Management Studio, as shown in Figure 23.

Figure 23. SQL AAG Summary from SQL Server Management Studio

Expanding the AAG folder displays more details and options, as shown in Figure 24.

Figure 24. SQL AAG Detail from SQL Server Management Studio

Back in Windows Failover Cluster Manager, we reconcile the listed cluster roles with the AAGs, as shown in Figure 25.

Figure 25. AlwaysOn Role Configuration Summary

In SQL PowerShell, we reconcile the IPs of the AAGs with the assigned IPs listed in the cluster, as shown in Figure 26.

Figure 26. AlwaysOn Ownership Status

Back in SQL Server Management Studio, we can select an AAG and verify that the Windows share is noted, as shown in Figure 27.

Figure 27. AlwaysOn Cluster Configuration

Within SQL Server Management Studio, we review the details of the WSS_Content_app AAG; this is the SharePoint content database residing in an AAG, as shown in Figure 28.

Figure 28. SQL AlwaysOn Group Properties
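The listener IP reconciliation shown in Figure 26 can also be approximated with a simple script: resolving each availability group listener name in DNS and comparing the result with the IP address resources shown in Failover Cluster Manager gives a quick consistency check. The listener names below are hypothetical placeholders for the four listeners used in this environment.

```python
# Resolve each AAG listener name so the addresses handed out by DNS can be
# reconciled with the cluster IP resources shown in Failover Cluster Manager.
import socket

LISTENERS = [
    "aag-spcontent.contoso.local",   # hypothetical listener for AAGSPContent
    "aag-spcore.contoso.local",      # hypothetical listener for AAGSPCore
    "aag-spservices.contoso.local",  # hypothetical listener for AAGSPServices
    "aag-spsearch.contoso.local",    # hypothetical listener for AAGSPSearch
]

for name in LISTENERS:
    try:
        print(f"{name} -> {socket.gethostbyname(name)}")
    except socket.gaierror as err:
        print(f"{name} -> resolution failed ({err})")
```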
Exchange Performance on Virtual SAN

Since 2006, VMware and its partners have used testing to successfully demonstrate the viability of running Exchange on the VMware infrastructure platform. This testing has been confirmed by organizations that have deployed Exchange 2003, 2007, and 2010 in virtualized production environments and benefited from the considerable operational advantages and cost savings. Many customers have virtualized their entire Exchange 2010 environment and have carefully designed their vSphere infrastructure to accommodate application performance, scalability, and availability requirements.

Exchange Server 2013 is an even stronger candidate for virtualization than its predecessors. Architectural changes and improvements to the core of Exchange Server, along with advancements in server hardware, make vSphere the default choice for Exchange 2013. The shift towards running Exchange virtualized as the default design choice is a result of advancements in three key areas:

• The Exchange information store (the Managed Store) has been rewritten to further optimize resource consumption. This update to the Managed Store has also led to a further reduction in storage I/O requirements.
• Advances in server hardware, such as multicore processors, higher memory density, and advances in storage technology, are far outpacing the performance requirements of applications, including Exchange 2013. Virtualization becomes an effective way to leverage the full power of these systems.
• The advances in Exchange Server 2013 and server hardware technology have coincided with advances in vSphere. Virtual machines support up to 1TB of RAM and 64 vCPUs and are capable of running even the largest Exchange Mailbox servers.

Key Performance Considerations

A variety of factors can affect Exchange Server 2013 performance on vSphere, including processor and memory allocation to the guest virtual machine, storage layout and design, virtual machine placement, and high availability methods. The following are tips for achieving the best possible performance:

• Fully understand your organization's business and technical requirements for implementing Exchange.
• Fully understand the Exchange workload requirements. Current workloads can be measured using the Microsoft Exchange Server Profile Analyzer for environments running Exchange 2003 and 2007. For environments running Exchange 2010, use current Mailbox server utilization as a baseline.
• Use Microsoft sizing and configuration guidelines for the Exchange virtual machines.
• Follow the best practices for virtualizing Microsoft Exchange on VMware vSphere; refer to the Microsoft Exchange 2013 on VMware Best Practices Guide.

Performance Testing

Every Exchange environment is different, with varying business and technical requirements, many server and storage options, and requirements for integrating with third-party software solutions such as antivirus, anti-spam, and smartphones. It is strongly recommended that each organization test performance on its particular mix of server, storage, and software to determine the best design for its Exchange environment. In addition, several VMware server and storage partners have performed testing to validate Exchange performance on vSphere.

Microsoft provides tools to measure the performance of Microsoft Exchange Server architectures.
Use LoadGen to measure the performance of the entire Exchange Server environment, and use Jetstress for storage qualification. Both tools have been written specifically for Exchange 2013.

It is important to address a concern with the collection of performance metrics from within virtual machines. Early in the virtualization of high-performance applications, the validity of in-guest performance metrics came into question because of the time skew that is possible in highly overcommitted environments. With advancements in hypervisor technology and server hardware, this issue has largely been addressed, especially when testing is performed on under-committed hardware. This is validated by Microsoft support for running Jetstress within virtual machines. For more information about virtual machine support for Jetstress, refer to the Microsoft TechNet article Microsoft Exchange Server Jetstress 2013 Field Guide. Also refer to Microsoft Exchange Load Generator 2013 and the Microsoft Exchange Server Jetstress 2013 Tool.

Jetstress 2013 Metrics – ExchangeMB1

Table 6. ExchangeMB1 Jetstress data

• Achieved Exchange transactional IOPS (I/O database reads/sec + I/O database writes/sec): 1194
• I/O database reads/sec and I/O database writes/sec: 66 and 132
• Total IOPS (I/O database reads/sec + I/O database writes/sec + BDM reads/sec + I/O log replication reads/sec + I/O log writes/sec): 325
• I/O database reads average latency (ms): target less than 20 ms
• I/O log reads average latency (ms): target less than 10 ms

In this test scenario, we configured the first Exchange Mailbox virtual machine to run 2,500 users with a 2GB mailbox database over two hours. The latency is well below the threshold. This test was run concurrently with DVD Store.

Jetstress 2013 Metrics – ExchangeMB2

Table 7. ExchangeMB2 Jetstress data

• Achieved Exchange transactional IOPS (I/O database reads/sec + I/O database writes/sec): 1265
• I/O database reads/sec and I/O database writes/sec: 70 and 140
• Total IOPS (I/O database reads/sec + I/O database writes/sec + BDM reads/sec + I/O log replication reads/sec + I/O log writes/sec): 325
• I/O database reads average latency (ms): target less than 20 ms
• I/O log reads average latency (ms): target less than 10 ms

In this test scenario, we configured the second Exchange Mailbox virtual machine to run 2,500 users with a 2GB mailbox database over two hours. The latency is well below the threshold. This test was run concurrently with DVD Store.

Load Generator 2013 Results

Figure 29. Load Generator Test Results Summary

SQL Performance on Virtual SAN

SQL Server databases in an organization have unique deployment needs because of the variety of applications they serve. These applications can require 32-bit or 64-bit versions of the operating system and SQL Server, particular service packs or hotfixes, specific security and other access-control settings, and support for specific legacy application components. When SQL Server databases are consolidated in physical environments, these specific requirements often force the administrator to take a least-common-denominator approach to configuration, compromising optimal performance.
Because a VMware vSphere environment can run 32-bit and 64-bit virtual machines side by side while keeping them isolated from each other, virtualized consolidation is much more flexible and much less constrained. However, understanding these specific deployment requirements can help the administrator refine the virtualization approaches that work best, and can help decide whether to adopt a scale-up approach with multiple databases in a single large virtual machine or a scale-out approach with one or only a few databases per virtual machine.

Key Performance Considerations

A variety of factors can affect SQL Server performance on vSphere, including processor and memory allocation to the guest virtual machine, storage layout and design, virtual machine placement, and high availability methods. The following tips help achieve the best possible performance:

• Fully understand your organization's business and technical requirements for consolidating and virtualizing SQL Server.
• Fully understand your existing SQL Server workload requirements. Use Microsoft monitoring and tuning tools to capture the SQL Server workload.
• Use SQL Server best practices for the SQL Server virtual machines.
• Follow the best practices for virtualizing Microsoft SQL Server on VMware vSphere; refer to the SQL Server on VMware Best Practices Guide.

Performance Testing

There are many free tools available on the Internet that can be used for SQL Server testing. Some are server based and some are storage based, such as SQLIO and DVD Store. For this testing, we decided to use DVD Store because the I/O it generates simulates a SQL Server database workload. The test results are based on our environment and configuration; your results may vary. As with any test tool, it is always recommended that testing be conducted in a test environment without impacting users or production systems.

Note: We ran both Exchange 2013 Jetstress and DVD Store simultaneously.

DVD Store Results

The DVD Store benchmark was run for two hours for the following test case, using two ds2sqlserver sessions running simultaneously. For the most accurate test results, we rebooted the ESXi hosts after each test run to clear the Virtual SAN SSD cache. We also took two safeguards to keep the runs comparable: at the start of each test, an initial warmup phase staged the data into cache and allowed performance to stabilize over time, and to avoid the impact of database caches, the virtual machine was power cycled between runs.
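The orders-per-minute figure reported by the DVD Store driver in the session outputs below can be cross-checked against the raw counters. Assuming opm is simply the cumulative order count divided by the elapsed time in minutes, the session 1 numbers reported below are internally consistent; a minimal sketch:

```python
# Cross-check of the DVD Store orders-per-minute figure for session 1 below,
# assuming opm = orders completed / elapsed minutes.
n_overall = 2296019       # orders counted by the driver at the end of the run
elapsed_seconds = 7201.1  # et reported by the driver

opm = n_overall / (elapsed_seconds / 60)
print(f"Computed OPM: {opm:.1f}")  # ~19130.6, matching the reported opm=19130
```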
DVD Store Test ds2sqlserver session 1

Combined totals for both test ds2sqlserver sessions:

• Total purchases during 2 hours: 4969793
• Average orders per minute: 38260

Summarized results:

• Total purchases during 2 hours: 2484763
• Average orders per minute: 19130

Detailed results:

Final (4/2/2015 2:30:21 AM): et=7201.1 n_overall=2296019 opm=19130 rt_tot_lastn_max=12 rt_tot_avg=14 n_login_overall=2296019 n_newcust_overall=0 n_browse_overall=6886415 n_purchase_overall=2296019 rt_login_avg_msec=3 rt_newcust_avg_msec=-2147483648 rt_browse_avg_msec=1 rt_purchase_avg_msec=6 rt_tot_sampled=6 n_rollbacks_overall=44948 rollback_rate=2.0%
Controller (4/2/2015 2:30:21 AM): all threads stopped, exiting n_purchase_from_start=2484763 n_rollbacks_from_start=48566

DVD Store Test ds2sqlserver session 2

Summarized results:

• Total purchases during 2 hours: 2485030
• Average orders per minute: 19130

Detailed results:

Final (4/2/2015 2:30:22 AM): et=7201.0 n_overall=2295996 opm=19130 rt_tot_lastn_max=12 rt_tot_avg=14 n_login_overall=2295996 n_newcust_overall=0 n_browse_overall=6888407 n_purchase_overall=2295996 rt_login_avg_msec=3 rt_newcust_avg_msec=-2147483648 rt_browse_avg_msec=1 rt_purchase_avg_msec=6 rt_tot_sampled=6 n_rollbacks_overall=45143 rollback_rate=2.0%
Controller (4/2/2015 2:30:23 AM): all threads stopped, exiting n_purchase_from_start=2485030 n_rollbacks_from_start=48867

DVD Store Test Parameters

Test ds2sqlserver 1: target=localhost, n_threads=32, ramp_rate=10, run_time=120, db_size=100GB, warmup_time=10, think_time=0.085, pct_newcustomers=0, n_searches=3, search_batch_size=5, n_line_items=5, virt_dir=ds2, page_type=php, windows_perf_host=, detailed_view=N, linux_perf_host=

Test ds2sqlserver 2: target=localhost, n_threads=32, ramp_rate=10, run_time=120, db_size=100GB, warmup_time=10, think_time=0.085, pct_newcustomers=0, n_searches=3, search_batch_size=5, n_line_items=5, virt_dir=ds2, page_type=php, windows_perf_host=, detailed_view=N, linux_perf_host=

Application Availability

vSphere HA and Virtual SAN

By providing a higher level of availability than is possible out of the box for most applications, vSphere HA has become the default HA solution for vSphere virtual machines. Regardless of operating system or application, vSphere HA can provide protection from ESXi host failures, guest operating system failures, and, with the help of third-party add-ons, application failures.

Virtual SAN is tightly integrated with vSphere HA. In conjunction with vSphere HA, Virtual SAN provides a highly available solution for virtual machine workloads. If a host fails, vSphere HA restarts the virtual machines that were running on that host on the remaining hosts in the cluster.

HA Configuration for Exchange and SQL Server on Virtual SAN

Exchange 2013 environments are built for high availability. Client Access servers can be load balanced, and Mailbox servers are deployed in DAGs for mailbox database high availability. Although this provides the availability needed from the application perspective, in the case of a hardware failure, utilization of the remaining Client Access servers rises as new connections are established, and DAG protection is reduced as passive databases are activated.
In a physical deployment, an administrator needs to address the problem quickly to restore availability levels and mitigate any further outages. On a Virtual SAN infrastructure, a hardware failure results in the affected virtual machines being powered back on by vSphere HA, restoring availability levels quickly and keeping utilization balanced.

Similarly, running Microsoft SQL Server on VMware Virtual SAN offers many options for database availability and disaster recovery, utilizing the best features from both VMware and Microsoft. For example, vSphere vMotion and vSphere DRS can help reduce planned downtime and balance workloads dynamically, and vSphere HA can help recover SQL Server virtual machines in the case of a host failure.

SPBM and Data Protection

It is important to understand the virtual machine SPBM mechanism for data protection on Virtual SAN. A virtual machine storage policy can define the requirements of the application running in the virtual machine in terms of availability, sizing, and performance.

Objects and Components

A virtual machine deployed on a Virtual SAN datastore is comprised of a set of objects: the VM Home Namespace, the VMDKs, and the VM swap object (when the virtual machine is powered on). Each of these objects is comprised of a set of components, determined by the capabilities placed in the virtual machine storage policy. For example, if NumberOfFailuresToTolerate=1 is set in the policy, the VMDK object is replicated across servers, with each replica comprised of at least one component. If the value of NumberOfDiskStripesPerObject is greater than one in the policy, the object is striped across multiple disks, and each stripe is a component of the object.

Number of Failures to Tolerate

The NumberOfFailuresToTolerate policy setting is an availability capability that can be applied to all virtual machines or to individual VMDKs. This policy plays an important role when planning and sizing storage capacity for Virtual SAN. Based on the availability requirements of a virtual machine, the setting defined in a virtual machine storage policy can lead to the consumption of as much as four times the capacity of the virtual machine. For "n" failures tolerated, "n+1" copies of the object are created, and "2n+1" hosts contributing storage are required. The default value for NumberOfFailuresToTolerate is 1. This means that even if a policy is not chosen when deploying a virtual machine, there is still one replica copy of the virtual machine's data. The maximum value for NumberOfFailuresToTolerate is 3.

In our solution, we enabled DRS and set anti-affinity rules to separate the Exchange CAS virtual machines; the Exchange Mailbox virtual machines and the SQL Server virtual machines were given similar rules. We left the FTT policy at the default of 1 during testing. Although the Exchange DAG and SQL Server AAG clusters create redundant database copies at the application level, Virtual SAN, like any other traditional storage system, is not aware of these application-level data copies and does not guarantee that they are not placed on the same host or fault domain. Therefore, we recommend setting the FTT policy to 1 to ensure data availability in case of a failure.

We conducted several failure scenarios with regard to availability, including failing disks, powering off hosts, and pulling network cables. Each of these successful tests demonstrated the resilient capabilities of Virtual SAN, and the HA design ensured no data unavailability.
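As a quick reference for the NumberOfFailuresToTolerate sizing rules described above (n+1 copies of each object and 2n+1 hosts contributing storage for n failures tolerated), the short sketch below works through the arithmetic for the possible FTT values. The 100GB logical size is only an illustrative input.

```python
# Sizing arithmetic for the NumberOfFailuresToTolerate (FTT) policy:
# tolerating n failures requires n+1 copies of each object and 2n+1 hosts
# contributing storage, so raw capacity scales with the number of copies.
def ftt_sizing(n_failures, logical_gb):
    copies = n_failures + 1
    min_hosts = 2 * n_failures + 1
    raw_gb = logical_gb * copies
    return copies, min_hosts, raw_gb

for n in (1, 2, 3):  # FTT=1 is the default; 3 is the maximum
    copies, hosts, raw = ftt_sizing(n, logical_gb=100)
    print(f"FTT={n}: {copies} copies, at least {hosts} hosts, "
          f"{raw:.0f} GB raw per 100 GB of logical capacity")
```

At FTT=3 the output shows four copies, which is the "as much as four times the capacity" case noted above; at the FTT=1 default used in this solution, each object consumes roughly twice its logical size on the Virtual SAN datastore.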
Conclusion

VMware Virtual SAN is a low-cost, high-performance storage solution that is rapidly deployed, easy to manage, and fully integrated into the industry-leading VMware Software-Defined Data Center stack. Virtual SAN pools local flash-based devices to form a cache-accelerating tier and magnetic drives to provide shared storage to virtual machines in the VMware vSphere cluster. It is an ideal platform for consolidating Microsoft applications built on a shared-nothing architecture and design, such as Exchange DAG and SQL Server AAG configurations.

VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com

Copyright © 2015 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.