
Deploy Microsoft® SharePoint® Server 2013 on Hitachi
Virtual Storage Platform G1000
Reference Architecture Guide
By Jonathan Parnell
April 21, 2014
Feedback
Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to SolutionLab@hds.com. To assist in routing this message, include the paper number in the subject line and the title of this white paper in the text.
Table of Contents

Solution Overview
Key Solution Components
    Hitachi Virtual Storage Platform G1000
    Hitachi Compute Blade 500
    Hitachi Command Suite
    Hitachi Compute Systems Manager
Solution Design
    SharePoint 2013 Server Architecture
Conclusion
This reference architecture focuses on planning, designing, sizing, and deploying Microsoft SharePoint 2013 with VMware ESXi using Hitachi Compute Blade 500, Hitachi Virtual Storage Platform G1000, and Brocade networking. The environment supports a 200,000 user Microsoft SharePoint 2013 server farm with 20 site collections using twenty 200 GB content databases at a 1% concurrency rate.
This guide is intended for you if you are a SharePoint administrator looking to deploy Microsoft SharePoint 2013 in your environment. You need some familiarity with the following to benefit from this document:

• Hitachi Virtual Storage Platform G1000
• Hitachi Command Suite version 7 or later
• Brocade networking
• Microsoft Windows Server® 2012 R2
• VMware ESXi
• Microsoft SharePoint Server 2013
This solution uses Microsoft Windows Server 2012 R2 Datacenter virtual machines running on VMware ESXi 5 to host these applications. The benefits of this Hitachi Virtual Storage Platform G1000 for VMware vSphere solution are the following:

• Faster deployment
• Increased scalability
• Increased reliability
• Reduced risk
• Lower cost of ownership
The key components of this solution are the following:

• Storage — Hitachi Virtual Storage Platform G1000
• Compute — Hitachi Compute Blade 500
• Virtualization — VMware ESXi 5

This solution integrates Hitachi storage and servers with VMware ESXi to support Microsoft SharePoint Server 2013. SharePoint Server 2013 runs on virtual machines in an environment that shares compute and storage resources. Tools from Microsoft, Hitachi Data Systems, and VMware manage that environment.
Note — Testing of this configuration was in a lab environment. Many things affect production environments beyond prediction or duplication in a lab environment. Follow the recommended practice of conducting proof-of-concept testing for acceptable results in a non-production, isolated test environment that otherwise matches your production environment before your production implementation of this solution.
Solution Overview
This reference architecture uses Microsoft SharePoint Server 2013 for
200,000 users. It uses Hitachi compute and storage, and Brocade
networking.
Figure 1 shows a high-level design of this reference architecture.
Figure 1
Key Solution Components
These are the key components required to deploy this solution.
Hitachi Virtual Storage Platform G1000
Hitachi Virtual Storage Platform G1000 provides an always-available,
agile, and automated foundation that you need for a continuous
infrastructure cloud. This delivers enterprise-ready software-defined
storage, advanced global storage virtualization, and powerful storage.
Supporting always-on operations, Virtual Storage Platform G1000
includes self-service, non-disruptive migration and active-active storage
clustering for zero recovery time objectives. Automate your operations
with self-optimizing, policy-driven management.
Hitachi Virtual Storage Platform G1000 Architecture
Hitachi Virtual Storage Platform G1000 is a high-performance and large-capacity storage system. It has an improved Hi-Star Net Architecture and an 8-core microprocessor. The storage consists of the following:

• Controller Chassis
  • Channel Adapter — Controls data transfer between the host and cache memory.
  • Disk Adapter — Controls data transfer between the drives and cache memory.
  • Cache Path Control Adapter — Using a PCI-Express path, this connects the processor blades, channel adapters, disk adapters, and the cache backup module kit. It distributes data and sends hot-line signals to the processor blades.
  • Cache Flash Memory — Memory used to back up cache memory data when a power failure occurs.
  • Cache Backup Module Kit — A kit used to back up cache memory data when a power failure occurs.
  • Processor Blades — Each consists of DIMMs and a processor with its chip set. It controls the following using Ethernet:
    • Channel adapter
    • Disk adapter
    • PCI-Express interface
    • Local memory
    • Communication between the service processors
  • Service Processor — Sets and modifies the storage system configuration, acquires device availability and statistical information, and performs maintenance.
• Drive Chassis — An installable drive unit that connects to the controller chassis.
Virtual Storage Platform G1000 offers these features:

• Scalability
  • Number of controller chassis — 1 to 2
  • Number of racks — 1 to 6
  • Number of installed channel options — 1 to 12 sets
  • Capacity of cache memory — 32 GB to 2,048 GB
  • Number of drives — up to the following:
    • 2.5-inch HDD — 2,304
    • 3.5-inch HDD — 1,152
    • 2.5-inch SSD (flash drives) — 384
    • FMD (flash module drive) — 192
• High performance
  • Supports three kinds of high-speed disk drives at the following speeds: 15k RPM, 10k RPM, and 7.2k RPM
  • Supports flash drives and flash module drives with ultra-high-speed response
  • Transfers high-speed data between the disk adapter and drives at a rate of 6 Gb/sec over the SAS interface
  • Uses the 8-core processor on the processor blade board, doubling the processing ability
• Large capacity
  • Supports hard disk drives with the following capacities: 300 GB, 600 GB, 900 GB, 1.2 TB, 3 TB, and 4 TB
  • Supports flash drives with the following capacities: 400 GB and 800 GB
  • Supports flash module drives with the following capacities: 1.6 TB and 3.2 TB
  • Controls up to 65,280 logical volumes and up to 2,304 disk drives, providing a physical disk capacity of approximately 4,511 TB per storage system
• Flash module drive
  • Has a 6 Gb/sec SAS interface, the same as the hard disk drives and solid-state drives
  • Uses MLC-NAND flash memory, featuring high performance, long service life, and cost performance
• Connectivity — Supports the following RAID configurations:
  • RAID-6 (6D+2P)
  • RAID-6 (14D+2P)
  • RAID-5 (3D+1P)
  • RAID-5 (7D+1P)
  • RAID-10 (2D+2D)
  • RAID-10 (4D+4D)
• Non-disruptive service and upgrade
  • Add, remove, and replace main components without shutting down a device while the storage system is in operation
  • Monitor the running condition of the storage system with a service processor mounted on the drive chassis
  • Enable remote maintenance by connecting the service processor with a service center
  • Upgrade the microcode without shutting down the storage system
Figure 2 shows the controller chassis, the drive chassis, and their subcomponents for Hitachi Virtual Storage Platform G1000.

Figure 2
Hitachi Dynamic Provisioning
On Hitachi storage systems, Hitachi Dynamic Provisioning provides wide
striping and thin provisioning functionalities.
Using Dynamic Provisioning is like using a host-based logical volume manager (LVM), but without incurring host processing overhead. It provides one or more wide-striping pools across many RAID groups. Each pool has one or more dynamic provisioning virtual volumes (DP-VOLs) of a logical size you specify, up to 60 TB, created against it without initially allocating any physical space.
Deploying Dynamic Provisioning avoids the routine issue of hot spots that
occur on logical devices (LDEVs). These occur within individual RAID
groups when the host workload exceeds the IOPS or throughput capacity
of that RAID group. Dynamic Provisioning distributes the host workload
across many RAID groups, which provides a smoothing effect that
dramatically reduces hot spots.
When used with Hitachi Virtual Storage Platform G1000, Hitachi Dynamic
Provisioning has the benefit of thin provisioning. There can be a dynamic
expansion or reduction of pool capacity without disruption or downtime.
You can rebalance an expanded pool across the current and newly added
RAID groups for an even striping of the data and the workload.
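The wide-striping behavior described above can be sketched as a toy model: Hitachi Dynamic Provisioning allocates pool capacity in 42 MB pages spread across all RAID groups in the pool, so a busy DP-VOL distributes its I/O over every group instead of concentrating on one. The round-robin placement below is an illustrative simplification, not the controller's actual allocation algorithm.

```python
# Toy sketch of wide striping: pool pages are spread across all RAID
# groups in the pool, smoothing the workload and avoiding hot spots.
PAGE_MB = 42  # HDP allocates pool capacity in 42 MB pages

def place_pages(num_pages: int, raid_groups: int) -> dict[int, int]:
    """Count pages landing on each RAID group under round-robin placement."""
    counts = {g: 0 for g in range(raid_groups)}
    for page in range(num_pages):
        counts[page % raid_groups] += 1
    return counts

# 1200 pages (~49 GB of allocated pool space) over 6 RAID groups:
print(place_pages(1200, 6))                    # each group gets 200 pages
print(f"{1200 * PAGE_MB // 1024} GB spread evenly")
```

A host workload that would saturate one RAID group's IOPS capacity is instead divided evenly, which is the smoothing effect described above.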
Hitachi Compute Blade 500
Hitachi Compute Blade 500 combines high-end features with the high compute density and adaptable architecture you need to lower costs and protect your investment. Safely mix a wide variety of application workloads on
a highly reliable, scalable, and flexible platform. Add server management
and system monitoring at no cost with Hitachi Compute Systems
Manager, which can seamlessly integrate with Hitachi Command Suite in
IT environments using Hitachi storage.
Hitachi Command Suite
Hitachi Command Suite manages virtualized storage and server
infrastructures. With usability, workflow, performance, scalability, and
private cloud enablement, Hitachi Command Suite lets you build
sustainable infrastructures with leading storage technologies. It helps you
flexibly align with changing business requirements and maximize return
on IT investments.
Hitachi Compute Systems Manager
Hitachi Compute Systems Manager is the management software for
Hitachi servers. Compute Systems Manager can be purchased with an
optional Server Management Module, Network Management Module, or
Server Deployment Module. Use Compute Systems Manager to introduce new servers into your data center environment.
Solution Design

This section provides detailed information on designing and sizing a Microsoft SharePoint 2013 architecture to initially deploy and support a 200,000 user Microsoft SharePoint 2013 server farm with 20 site collections using twenty 200 GB content databases. A single web application hosts the site collections. This solution maintains the high availability and performance levels required for a SharePoint environment.
Table 1 lists the hardware components for Microsoft SharePoint 2013.
Table 1. Hardware Components

| Hardware | Description | Version | Quantity |
|---|---|---|---|
| Hitachi Compute Blade 500 chassis | 8 × server blades; 2 × Brocade 5460 Fibre Channel switch modules; 2 × Hitachi 1/10 GbE switch modules; 2 × management modules; 6 × cooling fan modules; 4 × power supply modules | A0135-D-6829 | 1 |
| 520H B1 server blade | Half-size blade; 2 × 8-core Intel Xeon E5-2680 @ 2.70 GHz; 160 GB memory (10 × 16 GB DIMM) | 01-59 | 4 |
| Hitachi Virtual Storage Platform G1000 | Dual controller; 32 × 8 Gb/sec Fibre Channel ports; 982 GB cache memory | | 1 |
| DBX disk box | 36 × 600 GB 10k RPM SAS drives | n/a | 2 |
| Brocade 6720 Ethernet switch | 24-port, 10 GbE | 2.0.1b | 2 |
| Brocade 6510 Fibre Channel switch | 24-port, 8-16 Gb/sec | 7.0.1.a | 2 |
Table 2 lists the software components used for Microsoft SharePoint
2013.
Table 2. Software Components

| Software | Version |
|---|---|
| Hitachi Command Suite | 8.0 |
| Hitachi Compute Systems Manager | 8.0 |
| Hitachi Dynamic Provisioning | Microcode dependent |
| VMware ESXi | 5.1.0 |
| VMware vCenter | 5.1.0 |
| VMware Virtual Infrastructure Client | 5.1.0 |
| Microsoft Windows Server | 2012 R2 Datacenter |
| Microsoft SharePoint | 2013 |
Storage Architecture
Table 3 shows the storage port configuration for multipath I/O redundancy
and performance.
Table 3. Storage Port Configuration

| vSphere Host | vSphere Port Name | Storage Port | Storage Host Group |
|---|---|---|---|
| ESX0 | ESX0_HBA1_1 | 1A, 2A | ESX0_1A_2A |
| ESX0 | ESX0_HBA1_2 | 1B, 2B | ESX0_1B_2B |
| ESX1 | ESX1_HBA1_1 | 1A, 2A | ESX1_1A_2A |
| ESX1 | ESX1_HBA1_2 | 1B, 2B | ESX1_1B_2B |
| ESX2 | ESX2_HBA1_1 | 1A, 2A | ESX2_1A_2A |
| ESX2 | ESX2_HBA1_2 | 1B, 2B | ESX2_1B_2B |
| ESX3 | ESX3_HBA1_1 | 1A, 2A | ESX3_1A_2A |
| ESX3 | ESX3_HBA1_2 | 1B, 2B | ESX3_1B_2B |
Table 4 shows the detailed volume configuration used in this solution.
Table 4. Volume Configuration

| Pool Number | Pool Size | LDEV | VM Name | VMDK Size | Purpose |
|---|---|---|---|---|---|
| 0 | 1 TB | 0:00 | SP-DB-SPSQL | 100 GB | Windows OS |
| 0 | 1 TB | 0:00 | SP-DB-SPSC | 100 GB | Windows OS |
| 0 | 1 TB | 0:00 | SP-WS-01 through SP-WS-08 | 40 GB each | Windows OS |
| 0 | 1 TB | 0:00 | SP-WS-01 through SP-WS-08 | 80 GB each | SharePoint index volume |
Table 4. Volume Configuration (Continued)

| Pool Number | Pool Size | LDEV | VM Name | VMDK Size | Purpose |
|---|---|---|---|---|---|
| 1 | 7 TB | 1:00 | SP-DB-SPSQL | 200 GB | SQL & SP databases |
| 1 | 7 TB | 1:01 | SP-DB-SPSQL | 20 × 210 GB | SPContentDB1 through SPContentDB20 |
| 1 | 7 TB | 1:01 | SP-DB-SPSQL | 16 × 110 GB | TempDB1 through TempDB16 |
| 1 | 7 TB | 1:01 | SP-DB-SPSQL | 110 GB | TempDBLog |
| 1 | 7 TB | 1:02 | SP-DB-SPSC | 110 GB | TempDB1 |
Table 4. Volume Configuration (Continued)

| Pool Number | Pool Size | LDEV | VM Name | VMDK Size | Purpose |
|---|---|---|---|---|---|
| 2 | 1 TB | 2:00 | SP-DB-SPSQL | 110 GB | SQL & SP logs |
| 2 | 1 TB | 2:00 | SP-DB-SPSQL | 20 × 50 GB | SPContentLog1 through SPContentLog20 |
| 2 | 1 TB | 2:00 | SP-DB-SPSC | 110 GB | SP & Search logs |
| 3 | 1 TB | 3:00 | SP-DB-SPSC | 400 GB | SP & Search DBs |
SAN Switch Module Configuration
The 520H B1 server blade comes with two Brocade 5460 8 Gb/sec Fibre
Channel switch modules installed into the chassis at slot 2/3. The
Brocade 5460 switch has 22 ports with 6 external and 16 internal ports.
Note — To enable all 22 ports, the Ports on Demand feature must be purchased.
SAN Architecture

When designing your SAN architecture, follow these recommended practices to ensure a secure, high-performance, and scalable Microsoft SharePoint deployment:

• Use dual SAN fabrics, multiple HBA ports, and host-based multipathing software when using Microsoft SQL Server® in a business-critical deployment. You must have two or more paths from the SQL and application servers connecting to two independent SAN fabrics to have the redundancy required for critical applications.
• Zone your fabric for multiple, unique paths from HBAs to storage ports. Use single-initiator zoning. Use at least two Fibre Channel switch fabrics to provide multiple independent paths to Hitachi Virtual Storage Platform G1000 and to prevent configuration errors from disrupting the entire SAN infrastructure.
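The single-initiator zoning pattern above can be sketched programmatically. Following the naming in Table 3, each HBA port gets its own zone containing only that initiator and its pair of storage ports; the helper below is an illustrative sketch of that layout, not output from any Brocade tool.

```python
# Sketch of single-initiator zoning following this guide's host-group naming:
# one zone per HBA port, each holding just that initiator and two storage ports.
def build_zones(hosts: list[str]) -> dict[str, list[str]]:
    """Map each zone name to its members (one initiator, two targets)."""
    zones = {}
    for host in hosts:
        zones[f"{host}_1A_2A"] = [f"{host}_HBA1_1", "1A", "2A"]
        zones[f"{host}_1B_2B"] = [f"{host}_HBA1_2", "1B", "2B"]
    return zones

zones = build_zones(["ESX0", "ESX1", "ESX2", "ESX3"])
print(len(zones))  # 8 zones: two independent paths per host
```

Because each zone holds exactly one initiator, a misconfigured HBA cannot disturb traffic in any other zone, which is the fault-isolation property the practice above is after.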
Dynamic Provisioning Pool Configuration

All the dynamic provisioning pools and volumes for this environment are built using the following guidelines:

• 600 GB 10k RPM SAS drives
• RAID-10 (2D+2D) for the following:
  • Microsoft Windows Server operating system
  • Microsoft SQL Server database and log files
  • SharePoint index volumes
• For best performance:
  • Place the Windows Server operating system, SharePoint index volumes, databases, and log files in separate dynamic provisioning pools
  • Use RAID-10 for the best performance and reliability
  • Reserve an additional four drives for spares
Table 5 shows the number of RAID groups and drives needed to create
the dynamic provisioning pools.
Table 5. Dynamic Provisioning Pool Configuration

| HDP Pool | HDP RAID Configuration | # of Drives | Drive Capacity | HDP Capacity | # LU | Purpose |
|---|---|---|---|---|---|---|
| 0 | RAID-10 (2D+2D) | 4 | 600 GB SAS 10k RPM | 1 TB | 3 | OS, SP VMs & IX |
| 1 | RAID-10 (2D+2D) | 24 | 600 GB SAS 10k RPM | 7 TB | 2 | SQL DBs & Temp DBs |
| 2 | RAID-10 (2D+2D) | 4 | 600 GB SAS 10k RPM | 1 TB | 1 | DB Logs |
| 3 | RAID-10 (2D+2D) | 4 | 600 GB SAS 10k RPM | 1 TB | 1 | Search DBs |
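The pool sizes in Table 5 follow directly from RAID-10 arithmetic: mirroring leaves half the spindles for data. The sketch below checks that each advertised pool capacity fits inside the raw RAID-10 capacity of its drive count; the 1 TB ≈ 1000 GB rounding is an illustrative simplification.

```python
# Rough usable-capacity check for the RAID-10 (2D+2D) pools in Table 5.
# RAID-10 mirrors every drive, so only half the spindles hold unique data.
DRIVE_GB = 600  # 600 GB SAS drives used throughout this design

def raid10_usable_gb(num_drives: int, drive_gb: int = DRIVE_GB) -> int:
    """Usable capacity of a RAID-10 group: half the drives store data."""
    assert num_drives % 2 == 0, "RAID-10 needs an even drive count"
    return (num_drives // 2) * drive_gb

# Pool number -> (drive count, advertised HDP pool capacity in GB, approx.)
pools = {0: (4, 1000), 1: (24, 7000), 2: (4, 1000), 3: (4, 1000)}

for pool, (drives, pool_gb) in pools.items():
    usable = raid10_usable_gb(drives)
    # Each advertised pool size must fit within the raw RAID-10 capacity.
    print(f"Pool {pool}: {usable} GB usable, {pool_gb} GB provisioned")
```

For example, pool 1's 24 drives yield 12 × 600 GB = 7200 GB of mirrored capacity, comfortably holding the 7 TB pool.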
Server Architecture

The 520H B1 server blade delivers performance, scalability, and configuration flexibility in this hardware configuration. The server blade hosts the VMware vSphere hypervisor for the guest virtual machines.

• Dual-socket Intel Xeon E5-2680 processors, with 8 cores per socket
• 10 × 16 GB DIMMs for 160 GB of RAM
Host Considerations

• The queue depth for all HBAs on each ESXi host was changed to 64, per best practice for ESXi hosts running SQL Server.
• For VMware, when the queue depth of an HBA is changed, the Disk.SchedNumReqOutstanding value for the ESXi host must be updated as well. Refer to the Setting the Maximum Outstanding Disk Requests for virtual machines knowledge base article from VMware for instructions on how to update this on your ESXi hosts.
Host Network Configuration
The 520H B1 server blade comes with a single onboard two-channel 10
GbE Converged Network Adapter (CNA) card for network traffic. The
CNA card is configured into four logical NICs per channel and eight NICs
per server blade for performance and redundancy.
The following vNics are the logical NICs presented to the hosts per channel. For performance enhancement and security, isolate the networks using different VLANs, as follows:

• vNic 0/1 for Management Network — chassis management connections and primary management of the VMware vSphere hypervisors
• vNic 2/3 for vMotion Network — migration of a virtual machine from one host to another
• vNic 4/5 for SQL Data Network — communication between SQL Server, the SharePoint application server, and the SharePoint web servers for data
• vNic 6/7 for WFE NLB & Client Network — communication for the NLB and clients to the WFE servers
Virtual Machines Configuration
Hosts ESXi0, ESXi1, ESXi2 and ESXi3 are configured to run the
Microsoft SharePoint virtual machines.
Table 6 shows the virtual machine configuration with vCPU, vRAM, and
vNIC allocation.
Table 6. Virtual Machine Configuration

| Host | VM Name | vCPU | vRAM | vNIC |
|---|---|---|---|---|
| ESXi0 | SP-DB-SPSQL | 16 | 160 GB | 7 |
| ESXi1 | SP-DB-SPSC | 16 | 160 GB | 7 |
| ESXi2 | SP-WS-01 | 4 | 40 GB | 7 |
| ESXi2 | SP-WS-02 | 4 | 40 GB | 7 |
| ESXi2 | SP-WS-03 | 4 | 40 GB | 7 |
| ESXi2 | SP-WS-04 | 4 | 40 GB | 7 |
| ESXi3 | SP-WS-05 | 4 | 40 GB | 7 |
| ESXi3 | SP-WS-06 | 4 | 40 GB | 7 |
| ESXi3 | SP-WS-07 | 4 | 40 GB | 7 |
| ESXi3 | SP-WS-08 | 4 | 40 GB | 7 |
SharePoint 2013 Server Architecture

• Blade 0 and Blade 1, running ESXi 5.1, have Microsoft Windows 2012 R2 Datacenter virtual machines hosting the following applications:
  • Blade 0 runs Microsoft SQL Server 2012 Enterprise. It is the main SQL Server for the SharePoint farm.
  • Blade 1 runs Microsoft SharePoint 2013 and Microsoft SQL Server 2012. It serves as the application and search crawl server, with local (in SQL) search tables, and hosts the central administration SharePoint site.
• Blade 2 and Blade 3, running ESXi 5.1, have Microsoft Windows 2012 R2 Datacenter virtual machines hosting the following applications:
  • Blade 2 hosts four virtual machines, each running Microsoft Windows 2012 R2 Datacenter as the installed operating system. The four virtual machines run the Microsoft SharePoint 2013 web front end and search index.
  • Blade 3 hosts four virtual machines, each running Microsoft Windows 2012 R2 Datacenter as the installed operating system. The four virtual machines run the Microsoft SharePoint 2013 web front end and search index.
Figure 3 shows the infrastructure and SharePoint components hosting the
Microsoft SharePoint environment.
Figure 3
Determining I/O and Capacity Requirements
The Capacity Planning for SharePoint Server 2013 and Capacity
management and sizing overview for SharePoint Server 2013 from
Microsoft were used to determine the storage I/O and capacity
requirements to support 200,000 users interacting with 20 site collections
and twenty 200 GB content databases.
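The headline numbers behind this sizing exercise reduce to simple arithmetic, sketched below. The figures come from this guide; the script only restates them, it is not the Microsoft capacity-planning methodology itself.

```python
# Back-of-the-envelope sizing for this reference architecture,
# using the numbers stated in this guide.
total_users = 200_000
concurrency = 0.01        # 1% concurrency rate
site_collections = 20     # one 200 GB content database per site collection
content_db_gb = 200

concurrent_users = int(total_users * concurrency)
content_capacity_gb = site_collections * content_db_gb

print(f"{concurrent_users} concurrent users")            # 2000
print(f"{content_capacity_gb} GB of content databases")  # 4000 GB
```

So the farm must sustain roughly 2,000 simultaneously active users against about 4 TB of content, which drives the I/O and capacity figures used in the rest of this section.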
Virtual Machine Processor Configuration
The Capacity Planning for SharePoint Server 2013 and Capacity
management and sizing overview for SharePoint Server 2013 from
Microsoft were used to determine the computing requirements to support
a 200,000 user Microsoft SharePoint 2013 server farm with 20 site
collections using twenty 200 GB content databases.
Virtual Machine Memory Configuration
The Capacity Planning for SharePoint Server 2013 and Capacity
management and sizing overview for SharePoint Server 2013 from
Microsoft were used to determine the computing requirements to support
a 200,000 user Microsoft SharePoint 2013 server farm with 20 site
collections using twenty 200 GB content databases.
Considerations for virtual memory configuration for SharePoint Server 2013:

• WFEs having 40 GB of memory allows 10 GB for the object cache, 20 GB for SharePoint WFE services, and another 10 GB to extend search components or any of the other three caches configurable for WFEs in SharePoint Server 2013, depending on the data, sites, and user security profiles in your environment.
• The object cache is configured to 10 GB on the application server and all WFE servers.
• If the search topology needs to be extended, each search component will consume memory.
  • Having a search index on each WFE decreases search response time but increases processor utilization. Extend search indexes to WFEs, as in this reference architecture, only if your index is above 10 million items. Up to 10 million items, one partition suffices; between 10 million and 40 million items, up to four partitions can be used.
  • Refer to the Enterprise search architectures for SharePoint Server 2013 documentation from Microsoft for a further detailed explanation, as well as additional planning beyond the scope of this document.
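The partition guidance above amounts to roughly one index partition per 10 million items, within the 40-million-item range this guide discusses. The helper below is our illustrative restatement of that rule of thumb, not a Microsoft-published formula.

```python
import math

# Sketch of the search-index partition rule of thumb discussed above:
# roughly one partition per 10 million items, within the 40M-item range.
ITEMS_PER_PARTITION = 10_000_000

def index_partitions(item_count: int) -> int:
    """Suggested number of search index partitions for up to 40M items."""
    assert 0 < item_count <= 40_000_000, "guidance covers up to 40M items"
    return max(1, math.ceil(item_count / ITEMS_PER_PARTITION))

print(index_partitions(8_000_000))   # 1 partition
print(index_partitions(25_000_000))  # 3 partitions
```

Below the 10-million-item threshold a single partition suffices, which is why this architecture only extends the index to the WFEs at larger item counts.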


• Distributed Cache is reserved only to the application server in this configuration. Distributed Cache can be extended to any server in the environment, depending on the number of user profiles and the way social media is used and designed within your environment. Refer to the Manage the Distributed Cache service in SharePoint Server 2013 documentation for further explanation and considerations on configuring and extending this functionality of SharePoint within your environment.
  • Distributed Cache should be set to 10% of the memory of the server.
  • If you plan to have a dedicated Distributed Cache server for your environment, follow this best practice:
    • Determine the total physical memory on the server. For this example, use 16 GB as the total physical memory available on the server.
    • Reserve 2 GB of memory for other processes and services that are running on the cache host. For example, 16 GB - 2 GB = 14 GB. This remaining memory is allocated to the Distributed Cache service.
    • Take half of the remaining memory and convert it to MB. For example, 14 GB / 2 = 7 GB, or 7000 MB. This is the cache size of the Distributed Cache service.
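The dedicated-server sizing steps above can be sketched as a one-line rule. The function below follows the guide's round numbers (1 GB treated as 1000 MB, as in the worked example); the function name is ours, not a SharePoint cmdlet.

```python
# Sketch of the dedicated Distributed Cache sizing rule described above:
# cache size = half of (total memory - 2 GB reservation), expressed in MB.
def distributed_cache_size_mb(total_memory_gb: int, reserved_gb: int = 2) -> int:
    """Return the Distributed Cache size in MB for a dedicated cache host."""
    remaining_gb = total_memory_gb - reserved_gb
    return (remaining_gb * 1000) // 2

print(distributed_cache_size_mb(16))  # 7000 MB, matching the worked example
```

A 32 GB host would get (32 - 2) / 2 = 15 GB, or 15000 MB, by the same rule.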
• Blob Cache and Page Output Cache are both outside the scope of this reference architecture due to the variations that are directly influenced by your data, sites, and security profiles. Both can improve response times depending on the data and sites, as well as the configuration of your environment. Memory is allocated and required for both. Refer to the Cache settings operations in SharePoint Server 2013 documentation to determine the correct cache configuration.
Virtual Machine Hard Disk Configuration

Each LDEV from the HDP pool is presented as a datastore to all VMware vSphere hosts for failover and redundancy. Refer to Table 5 for the HDP pool, datastore, and VMDK configuration.

• The disks are configured as thick provisioned, eager zeroed, and formatted as NTFS for better performance when originally created within vSphere.
• Disks are assigned to different SCSI controllers (four SCSI controllers, with a maximum of 15 disks per controller) to separate traffic, per VMware best practice, as follows:
  • HDP Pool 0 VMDKs — SCSI controllers 0 and 1
  • HDP Pool 1 VMDKs — SCSI controllers 1, 2, and 3
  • HDP Pool 2 VMDKs — SCSI controllers 3 and 4
  • HDP Pool 3 VMDKs — SCSI controller 4
• All VMware storage path policies for each ESXi host must be set to Round Robin.
Virtual Machine Network Configuration

All four ESXi hosts share the same network configuration. This allows for failover of any virtual machine in the environment to any of the ESXi hosts. The following vNics are the logical NICs presented to the hosts per channel and available to all virtual machines:

• vNic 0/1 for Management Network — chassis management connections and primary management of the VMware vSphere hypervisors
• vNic 2/3 for vMotion Network — migration of a virtual machine from one host to another
• vNic 4/5 for SQL Data Network — communication between SQL Server, the SharePoint application server, and the SharePoint web servers for data
• vNic 6/7 for WFE NLB & Client Network — communication for the NLB and clients to the WFE servers:
  • For NLB on Windows 2012 running SharePoint Server 2013, there are certain considerations. There are two options for running your NLB with VMware ESXi: unicast or multicast. Unicast provides better performance, but it requires additional configuration, requires all WFEs to run on the same host, and does not allow vMotion. Multicast allows load balancing across multiple hosts, but requires updates to the ARP tables on the physical network switches that your ESXi hosts are connected to. Refer to the Microsoft Network Load Balancing Multicast and Unicast operation modes documentation from VMware on how to properly configure both. In this reference architecture, unicast was used for optimal performance.

  • Best practice for SharePoint Server 2013 network configuration is to enable jumbo packets within the physical network, and all network adapters within the VM guest operating systems must have jumbo packets enabled and the MTU set to 9000.
SQL Server Architecture

Below are best practice recommendations from Microsoft for storage and SQL Server capacity planning and configuration for SharePoint Server 2013. For a further detailed explanation of the following requirements, refer to the Storage and SQL Server capacity planning and configuration (SharePoint Server 2013) document provided by Microsoft.

• Operating System Disk Management — All disks created within the operating system must be created with an allocation unit size of 64 KB.

• SQL Server Instance & Database Settings
  • Collation must be selected during SQL Server installation; it cannot be changed without a complete rebuild of the master database. During installation, select the collation Latin1_General_CI_AS_KS_WS.
  • For the SQL Server running SQL databases and content databases, memory should be set to 90% of the total memory of the server.
  • For the SQL Server running SharePoint search databases, memory should be set to no more than 40% of the total memory of the server.
  • Under Database Settings, MAXDOP must be set to 1 for both SQL Servers.
  • Auto create statistics and auto update statistics must be disabled for both SQL Servers, as SharePoint handles this functionality itself.
  • The startup parameter -T1118 should be enabled. It instructs SQL Server to use a round-robin tempDB allocation strategy and keeps all tempDB files the same size, which reduces resource allocation contention in the tempDB database and improves performance on complex queries.


• SQL Server Database Files
  • All content databases should be created with the initial size already set to 200 GB and auto growth set to 1000 MB.
  • Set MAXSIZE for each database file to a value that matches the capacity of the volume.
• SQL Server Transaction Log Files
  • All content database logs should be created at 20% of the size of the content database, with the initial size set to 40 GB and auto growth set to 200 MB.
• SQL Server tempDB Files
  • By default, tempDB supports only a single data file group and a single log file group, with the default number of files set to 1. Microsoft recommends creating at least as many equally sized data files as you have CPU cores when the number of concurrent threads is less than or equal to the number of CPU cores.
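The file-sizing rules above reduce to simple arithmetic, sketched below with this guide's numbers. The helper names are ours, added only to make the rules checkable; they are not SQL Server settings.

```python
# Sketch of the SQL Server file-sizing rules above, using this guide's numbers.
CONTENT_DB_GB = 200  # each content database is pre-sized to 200 GB

def log_initial_size_gb(content_db_gb: int) -> int:
    """Content database transaction log: 20% of the content database size."""
    return content_db_gb * 20 // 100

def tempdb_data_files(cpu_cores: int) -> int:
    """At least one equally sized tempDB data file per CPU core."""
    return cpu_cores

print(log_initial_size_gb(CONTENT_DB_GB))  # 40, matching the 40 GB initial size
print(tempdb_data_files(16))               # 16 files for the 16-vCPU SQL VM
```

Note how 20% of a 200 GB content database yields exactly the 40 GB initial log size recommended above, and the 16-vCPU SP-DB-SPSQL virtual machine maps to 16 tempDB data files, consistent with the sixteen TempDB VMDKs in Table 4.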

• General Database Maintenance — To keep your databases properly maintained, follow these recommended practices:
  • Monitor the database server to make sure that it responds appropriately and is not overloaded. Key performance counters to monitor include the following:
    • Network wait queue — 0 or 1
    • Average disk queue length (latency) — less than 20 msec
    • Memory used — less than 70%
    • Free disk space — more than 25% for content growth
  • Do not auto-shrink databases or set up any maintenance plans that programmatically shrink your databases.
  • Shrink a database only when 50% or more of the content in it has been removed by user or administrator deletions. Shrinking databases is very resource intensive, so it requires careful scheduling.
  • Only shrink content databases. The configuration, central administration, and search databases do not usually experience enough deletions to contain sufficient free space.
  • Avoid needing to shrink databases by including growth allocations in your capacity planning, including an overhead allocation of 10% to 20%.

For more information, see the Microsoft TechNet article Monitoring and maintaining SharePoint Server 2013.
SharePoint Server 2013 Operating System Additional Configurations

• To activate the ability to open document repositories in Windows Explorer in Windows Server 2012, the Desktop Experience Windows feature must be enabled in the server role.
• To allow downloading and uploading of any document over 50 MB in Windows Server 2012, refer to the KB article from Microsoft for instructions.
• The URL for the SharePoint application must be added to Trusted Sites in Internet Explorer on all WFE servers.
• When running NLB for your SharePoint farm, the loopback check must be disabled to avoid a 401.1 error for clients. The following must be run in PowerShell on all WFE servers in the topology:

  New-ItemProperty HKLM:\System\CurrentControlSet\Control\Lsa -Name "DisableLoopbackCheck" -Value "1" -PropertyType dword
Conclusion
This reference architecture guide describes how to deploy a 200,000 user
Microsoft SharePoint 2013 server farm with VMware ESXi using the
configuration in this paper. The solution provides high availability and
flexible scalability.
For More Information
Hitachi Data Systems Global Services offers experienced storage consultants,
proven methodologies and a comprehensive services portfolio to assist you in
implementing Hitachi products and solutions in your environment. For more
information, see the Hitachi Data Systems Global Services website.
Live and recorded product demonstrations are available for many Hitachi
products. To schedule a live demonstration, contact a sales representative. To
view a recorded demonstration, see the Hitachi Data Systems Corporate
Resources website. Click the Product Demos tab for a list of available recorded
demonstrations.
Hitachi Data Systems Academy provides best-in-class training on Hitachi
products, technology, solutions and certifications. Hitachi Data Systems Academy
delivers on-demand web-based training (WBT), classroom-based instructor-led
training (ILT) and virtual instructor-led training (vILT) courses. For more
information, see the Hitachi Data Systems Services Education website.
For more information about Hitachi products and services, contact your sales
representative or channel partner or visit the Hitachi Data Systems website.
Corporate Headquarters
2845 Lafayette Street, Santa Clara, California 95050-2627 USA
www.HDS.com
Regional Contact Information
Americas: +1 408 970 1000 or info@HDS.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@HDS.com
Asia-Pacific: +852 3189 7900 or hds.marketing.apac@HDS.com
© Hitachi Data Systems Corporation 2014. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Innovate With Information is a trademark or registered
trademark of Hitachi Data Systems Corporation. Microsoft, SharePoint, SQL Server, and Windows Server are trademarks or registered trademarks of Microsoft Corporation. All other
trademarks, service marks, and company names are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi
Data Systems Corporation.
AS-297-00, April 2014