Technical white paper
Best practices for HP 3PAR StoreServ Storage with the HP Matrix Operating Environment and SAN configurations
Table of contents
Executive summary
Basic SAN configuration
    High-level Matrix configuration
    HP 3PAR StoreServ Storage
    HP 3PAR StoreServ and SAN connections
    Matrix Operating Environment storage considerations
    Storage presentation and zoning in a Matrix OE environment
Maximum initiator per port count
Virtual Connect multi-initiator NPIV implementation
Matrix storage pool entries
    Private storage pool entries
    Shared storage pool entries
Creating and using Matrix storage pool entries
    Manual creation of storage pool entries
    Automated creation of storage pool entries
Load balancing storage access over array ports
    Segregation via multiple fabrics
    Segregation via manual fabric zoning
Summary of best practices
    HP 3PAR StoreServ
    Matrix OE configuration
    SAN fabric configuration
For more information
Executive summary
HP CloudSystem Matrix is the entry-level solution of the CloudSystem portfolio and is ideal for Infrastructure as a Service
(IaaS) in private and hybrid cloud environments. The solution includes a self-service portal for quick auto-provisioning,
along with built-in lifecycle management to optimize infrastructure, manage the resource pools, and help ensure uptime.
HP CloudSystem Matrix provides cloud-bursting capabilities to a variety of public cloud providers including HP Cloud
Services. The Matrix Operating Environment (Matrix OE) software allows the architect to define service templates which
specify the appropriate compute, storage, and network resources to support a given workload. When an end user requires a
service, Matrix will automate the provisioning of infrastructure resources to meet the service needs, which can include
configuration or creation of storage volumes (through the HP Storage Provisioning Manager). This allows resources to be
used as needed (since a service may be needed for days, weeks, or months) and freed for use by other services when no
longer required, thus optimizing the utilization of all resources. In addition to automated provisioning, Matrix also supports
manual provisioning of resources.
Based on the variety of services designed by the architect, the I/O loads on the storage resources (created manually or
dynamically) can be unpredictable. One user can create a Database service with several storage LUNs and start an OLTP
workload on the database, while another user can create a service requiring a large data store for streaming media or file
archiving or a more compute intensive workload with little I/O demand. In order to handle this variety of workload and I/O
demand, you need a storage system that is designed to operate in an unpredictable workload environment and that allows
for easy online scaling to accommodate future demands as well as efficient utilization of all storage resources available.
HP 3PAR StoreServ Storage combines best-in-class open technologies with extensive innovations in hardware and
software design. All HP 3PAR StoreServ Storage systems feature a high-speed, full-mesh backplane that joins multiple controller
nodes (the high-performance data movement engines of the HP 3PAR StoreServ architecture) to form a cache-coherent,
mesh-active cluster. This low-latency interconnect allows for tight coordination among the mesh-active controller nodes
and a simplified software model. In addition, each controller node may have one or more paths to hosts. The clustering of
storage controller nodes enables HP 3PAR StoreServ Storage to present to hosts a single-instance, highly available,
self-optimizing, and high-performing storage system. The HP 3PAR StoreServ Thin Built-in ASIC features a uniquely
efficient, silicon-based zero-detection mechanism that gives HP 3PAR StoreServ systems the power to remove allocated
but unused space without impacting performance. 3PAR StoreServ ASICs also deliver mixed-workload support to alleviate
performance concerns and cut traditional array costs. Transaction- and throughput-intensive workloads run on the same
storage resources without contention. The HP 3PAR StoreServ architecture is modular and can be scaled up to 2.2 PB,
making the system deployable as a small, midrange, or very large centralized system. Historically, enterprise customers
were often required to purchase and manage at least two distinct architectures to span their range of cost and scalability
requirements. The high performance and scalability of the HP 3PAR StoreServ architecture are well suited to large or
high-growth projects, consolidation of mission-critical information, demanding performance-based applications, and data
lifecycle management, and they make it an ideal platform for virtualization and cloud computing environments.
In order to get the maximum performance and processing from HP 3PAR StoreServ storage, it is necessary to keep a few
design rules in mind when configuring the infrastructure and when assigning resources. This paper will cover some of these
rules, as well as explain how to load balance the I/O from the servers/blades connected to the HP 3PAR
StoreServ. The paper will also examine the effect of N_Port ID Virtualization (NPIV) used with Virtual Connect and the
Matrix OE. This document will also provide some best practices to keep in mind when designing these configurations.
Basic SAN configuration
High-level Matrix configuration
Figure 1 shows a high-level view of the Matrix OE environment consisting of a number of HP BladeSystem c7000 Enclosures
each with up to 16 HP BladeSystem server blades. Each c7000 enclosure has a pair of HP Virtual Connect FlexFabric
interconnection modules. The uplinks of those modules connect to SAN fabrics, and Virtual Connect ensures that blades can
access those fabrics. Each SAN fabric consists of one or more Fibre Channel SAN switches forming a fabric network. The SAN
fabric provides the interconnection between the HP 3PAR StoreServ and the HP c7000 enclosures and their blade servers.
Storage presentation and fabric zoning enable selective access to specific storage devices from specific blade servers.
Figure 1. High-level Matrix OE configuration with Virtual Connect and HP 3PAR StoreServ
HP 3PAR StoreServ Storage
An important element of the HP 3PAR StoreServ architecture is the controller node, a proprietary and powerful data
movement engine designed for mixed workloads. Controller nodes deliver performance and connectivity within the
HP 3PAR StoreServ. A single system can be modularly configured as a cluster of two to eight of these nodes. Customers
can start with two controller nodes in a small, “modular array” configuration and grow incrementally to eight nodes in a
non-disruptive manner, providing powerful flexibility and performance. Nodes are numbered 0 through 7, with nodes 0 and
1 forming the first node pair, nodes 2 and 3 the second pair, nodes 4 and 5 the third pair, and nodes 6 and 7 the fourth pair.
This modular approach provides flexibility, a cost-effective entry footprint, and affordable upgrade paths for increasing
performance, capacity, connectivity, and availability as needs change. The system can withstand an entire controller node
failure without data availability being impacted, and each node is completely hot-pluggable to enable online serviceability.
For host and back-end storage connectivity, each controller node, depending on the model, is equipped with up to nine
high-speed I/O slots (up to 72 slots system-wide on a fully configured system). This design provides powerful flexibility to
natively and simultaneously support adapters of multiple communication protocols. In addition, embedded Gigabit Ethernet
ports can be configured for remote mirroring over IP, eliminating the incremental cost of purchasing Fibre Channel-to-IP
converters. All back-end storage connections use Fibre Channel. Using quad-ported Fibre Channel adapters, each controller
node can deliver up to 36 ports, for a total of up to 288 ports system-wide for both host and storage connectivity,
subject to the system’s configuration.
Figure 2. HP 3PAR StoreServ with two node pairs (four nodes)
(Diagram: a server/blade with HBA 1 and HBA 2 carrying HBA/CNA WWNs H1-01 and H1-02 plus NPIV WWNs S1-001 and S1-002, connected through Fabric 1 and Fabric 2 SAN fabric switches with zoning to 3PAR StoreServ host-facing ports 0:1:0 through 3:1:1, port WWNs A1-010 through A1-311, on nodes 0 through 3; nodes 0/1 and 2/3 form node pairs, and each port shows the host or NPIV WWNs logged into it as initiators.)
Figure 2 shows an example configuration of an HP 3PAR StoreServ with two node pairs (four nodes); for simplicity,
each node shows two host-facing Fibre Channel ports from a single adapter, instead of the normal four or eight ports per
adapter. Ports on a 3PAR StoreServ are uniquely identified by the node number, the slot number the adapter is installed in, and
the port number on the specific adapter. Thus 0:1:0 is the host-facing port located in node 0, slot number 1, and port
number 0 on that adapter. Each host-facing port will have a unique World Wide Name to allow connectivity into the fabric
and this WWN can be used in zone configurations on the fabric to enable access from a server/blade to the storage.
Note:
The WWN is a 64-bit, 16-hex-digit number, but for simplicity this document uses a naming convention of storage server-port
number to identify the WWN for a specific port (e.g., A1-010 is the WWN for storage server A1, port 0:1:0).
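To make the port-identifier and WWN shorthand used throughout this paper concrete, the short sketch below builds the Node:Slot:Port string for a host-facing port and the array-port alias style used in the figures. The function names and aliases are illustrative only; they are not part of any HP tool or API.

```python
# Illustrative sketch of the naming shorthand used in this paper (not an HP tool).

def port_id(node: int, slot: int, port: int) -> str:
    """Return the 3PAR host-facing port identifier in Node:Slot:Port form."""
    return f"{node}:{slot}:{port}"

def port_alias(array: str, node: int, slot: int, port: int) -> str:
    """Return the shorthand alias used in the figures, e.g., A1-010 for array A1 port 0:1:0."""
    return f"{array}-{node}{slot}{port}"

if __name__ == "__main__":
    # Port located in node 0, slot 1, port 0 of array A1.
    print(port_id(0, 1, 0))           # -> 0:1:0
    print(port_alias("A1", 0, 1, 0))  # -> A1-010
```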
Best practice:
It is always recommended to install the same type of adapter with the same number of ports in the exact same slot on both
nodes in a node pair.
The diagram also shows a single physical server with two Host Bus Adapter (HBA) ports connected to two separate fabrics.
These ports could be two ports from two physical separate HBA adapters as displayed in the diagram, or could also be two
ports from a single multi-port HBA adapter. Blades in the Matrix OE environment can use the onboard Converged Network
Adapter (CNA) to create two virtual HBAs. Each server/blade HBA (physical or virtual) will have one or more unique World
Wide Name(s) assigned to it (the first WWN on the port directly, and subsequent WWNs added to the port via NPIV). In a
Matrix environment, rather than a single server, there would be one or more HP BladeSystem c7000 Enclosures with
HP BladeSystem blades and HP Virtual Connect FlexFabric modules (as shown later in figure 4).
Note:
For simplicity this document uses a naming convention of “server/blade number–index” to identify the WWN for a specific
port (e.g., H1-01 is the WWN for server/blade number 1 port number 1). In the same way this document uses a naming
convention of “storage resource–index number” to identify the NPIV WWN used for a specific storage resource access
applied to a specific HBA.
The diagram shows two separate fabrics: Fabric 1 and Fabric 2. Each fabric can consist of one or more SAN switches
interconnected to create a physical path between server/blades and storage. Although the diagram indicates physical
connections from each fabric to four individual ports on the storage array, device accessibility is determined by SAN Zone
configurations defined in each fabric, using server and storage WWNs (H1 and S1 on the server side, and A1 on the
storage side).
HP 3PAR StoreServ and SAN connections
When designing connectivity between servers/blades and a HP 3PAR StoreServ using a SAN fabric, some redundancy rules
have to be followed. Connectivity consists of both physical cable connections (all lines in the diagram, with grey lines
indicating optional cable connections or cable connections not in use) as well as zone configurations in the fabric. The
diagrams in figure 3 show three different zoning configurations that can be used to enable connectivity between each
server/blade and the HP 3PAR StoreServ.
Note:
Figure 3 describes different zoning methods that can be used for general LUN access from the server/host, but does not
handle any specifics for boot paths used to perform a boot from SAN operation (that is outside the scope of this document).
Figure 3. HP 3PAR StoreServ and SAN configurations.
(Diagram: three panels, A, B, and C, each showing Server 1 with HBA 1 and HBA 2, WWNs H1-01 and H1-02, connected through Fabric 1 and Fabric 2 to a four-node 3PAR StoreServ with host-facing ports 0:1:0 through 3:1:1. Panel A: single HBA to storage port zoning. Panel B: single HBA to both storage nodes zoning. Panel C: single HBA to multiple storage nodes zoning.)
Figure 3(A) shows the minimum configuration where a server/blade is zoned to at least two ports on the storage server
(zoning must be to both nodes of the same node pair). Node 0 of the pair is zoned to Fabric 1 and Node 1 is zoned to
Fabric 2. There is no specific need for a server/blade to be zoned to all nodes, regardless of the way the volumes are spread
over all the nodes. Access to a single node pair is sufficient for minimum configuration.
Note:
The active server/blade WWN(s) will only log in as initiators on the storage ports that were used in the zoning configuration.
Note:
It is not recommended to have a single server zoned to two nodes in an HP 3PAR StoreServ that are not part of a node pair.
For instance, in figure 3(A), if Fabric 2 was zoned to port 2:1:1 on Node 2 and Fabric 1 was zoned to Node 0, device
accessibility would be over non-paired nodes. Although this will work, some availability aspects of the system will be
defeated.
Figure 3(B) shows a configuration where each HP 3PAR StoreServ controller node is zoned to both fabrics. Ports of the same
pair of nodes with the same ID should be zoned to the same fabric. For example, ports 0:1:0 and 1:1:0 on Fabric 1, and ports 0:1:1 and
1:1:1 on Fabric 2; in other words, even ports should connect to Fabric 1 and odd ports to Fabric 2.
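A minimal sketch of the configuration B pairing rule follows, assuming the WWN shorthand from figure 2 and the even/odd port-to-fabric convention described above; the data structures and function names are illustrative and do not represent an SPM or switch interface.

```python
# Illustrative sketch (not an SPM or switch API): build configuration B zones,
# where the same port index on both nodes of a node pair lands in the same fabric
# (even port numbers on Fabric 1, odd port numbers on Fabric 2).

NODE_PAIR = (0, 1)   # nodes 0 and 1 form the first node pair
SLOT = 1             # host-facing adapter slot used in the examples
PORTS = (0, 1)       # two ports per adapter, as in figure 3(B)

def fabric_for_port(port):
    return "Fabric 1" if port % 2 == 0 else "Fabric 2"

def config_b_zones(hba_wwns):
    """hba_wwns maps fabric name -> server HBA WWN on that fabric."""
    zones = {}
    for port in PORTS:
        fabric = fabric_for_port(port)
        array_ports = [f"A1-{node}{SLOT}{port}" for node in NODE_PAIR]
        zones[f"{fabric} zone"] = [hba_wwns[fabric]] + array_ports
    return zones

if __name__ == "__main__":
    # Server 1: H1-01 connects to Fabric 1, H1-02 to Fabric 2.
    for name, members in config_b_zones({"Fabric 1": "H1-01", "Fabric 2": "H1-02"}).items():
        print(name, members)
    # Fabric 1 zone ['H1-01', 'A1-010', 'A1-110']
    # Fabric 2 zone ['H1-02', 'A1-011', 'A1-111']
```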
Note:
When using HP Storage Provisioning Manager (SPM) to automate the storage configuration, SPM will always zone in all the
available storage ports in the fabric for the selected HP 3PAR StoreServ. See Storage presentation and zoning in a Matrix OE
environment section later in this document.
Figure 3(C) shows a larger configuration with fabric zoning to all four nodes in the storage server using two ports per node.
This sort of configuration allows for much more flexibility in the fabric zoning (especially when using manual zoning) and is
often how the Matrix Operating Environment is configured to operate by default. This configuration, however, can also
dramatically increase the initiator count per port on all connected ports. The following sections will discuss the impact of
such a configuration as well as offer methods to modify this configuration to more appropriately spread the workload
across multiple fabrics.
Using multiple ports and node pairs from the HP 3PAR StoreServ in the zoning configuration allows the multi-path driver in
the operating system on the server to balance the I/O load to the storage, as well as protect against component failure in
the infrastructure. However, this technique also increases the number of initiators on all the zoned storage ports, since
every active WWN zoned to a storage server port registers as an initiator on that port. This can lead to a situation where
an Over Subscribed System Alert (OSSA) warning is generated by HP's Global Services and Support organization, and an
e-mail is sent to the customer indicating that the maximum Initiator per port limit was exceeded. See Maximum initiator per
port count section later in this document for more information.
Best practice:
Always connect the same ports on a node pair to the same fabric, e.g., ports 0:1:0 and 1:1:0 to Fabric 1 and ports 0:1:1 and 1:1:1 to
Fabric 2. Ensure that device availability is configured to have at least two paths, with one to each node in a node pair.
Matrix Operating Environment storage considerations
The Matrix Operating Environment supports logical servers. Logical servers are a class of abstracted servers that allow
administrators to manage physical and virtual machines using the same management construct. HP Matrix Operating
Environment is the infrastructure management at the core of the HP CloudSystem Matrix, a converged infrastructure
solution spanning servers, storage and network resources that is an ideal platform for delivering shared services. HP logical
server technology further enables system administrators to build a converged infrastructure, delivering a new level of
automation, simplicity, interoperability, and integration to meet business demands. Logical servers enable a common
approach to planning, deployment, adjustment, and management, whether the server OS and workloads are hosted directly
on a physical server, or on a hosted virtual machine.
A logical server is defined by a server profile that is easily created and flexibly moved across physical and virtual machines.
A logical server profile describes the system resources needed for a given operating system (OS), application, and workload
to operate (e.g., configuration requirements such as processors and memory, and unique identifiers such as MAC Addresses
and server World-Wide Names [WWNs]). The profile is managed in software and can be applied to the creation of a virtual
machine using hypervisor-based software or to a bare-metal server blade using HP Virtual Connect technology. Logical
servers can be deactivated or moved as appropriate to support the IT infrastructure adapting to changing needs. Key to this
flexible movement is the use of shared storage for the boot and data volumes.
The definition of a logical server includes one or more storage pool entries which define the types of storage resources
required (e.g., one 10 GB boot volume, one 100 GB shared data volume, one 10 GB private data volume). Storage definitions
include capacity, RAID level, optional text tags, required fabric, and other relevant information. When a logical server is
activated on a blade, a set of Virtual Connect initiator WWNs is applied to the blade’s HBAs or CNAs. The storage is
exported/presented and zoned to those WWNs, thus providing access to suitable boot and data volumes.
Figure 4. Matrix Operating Environment logical servers using blades and Virtual Connect
(Diagram: an HP BladeSystem c7000 Enclosure with Blade 1 through Blade 16, each with a CNA carrying WWNs H1-01/H1-02 through H16-01/H16-02; logical servers 1, 2, and 3 run on blades 1 through 3. The CNAs connect to VC SAN 1 and VC SAN 2 on a pair of VC FlexFabric modules in a VC Domain, whose uplinks connect to Fabric 1 and Fabric 2.)
Figure 4 shows an HP BladeSystem c7000 Enclosure with up to 16 blades. The enclosure can have a pair of Virtual Connect
Fibre Channel modules (for blade connectivity to the fabrics via traditional HBAs), or a pair of Virtual Connect FlexFabric
modules (as shown in the diagram) that provide blade connectivity to the fabrics via Converged Network Adapters (CNAs).
The administrator can configure SAN fabrics using the Virtual Connect Manager, with connections to the external fabric via
the Virtual Connect uplink ports. Virtual Connect Enterprise Manager and the Matrix Operating Environment are aware of
these fabric definitions and use them to ensure that appropriate fabrics are selected for logical server storage.
Matrix can allocate Virtual Connect initiator WWNs to be used for storage presentation and zoning (which can be automated
with the HP Storage Provisioning Manager solution bundled with Matrix OE). Matrix OE ensures the Virtual Connect profile
contains suitable information regarding initiator WWNs and storage controller WWNs, and that profile is applied to a physical
blade, ensuring its HBA or CNA is appropriately configured for storage access (i.e., has appropriate initiator WWNs and
storage WWNs for the boot paths). In this manner, storage access is preserved even when a logical server is migrated from
one blade to another. The Virtual Connect profile containing the WWNs migrates and the new blade is appropriately
configured with initiator WWNs and boot target information. This migration might be to a blade in a different c7000
enclosure (but part of the same Virtual Connect Domain Group).
N_Port ID Virtualization (NPIV) is a Fibre Channel facility allowing multiple N_Port IDs to share a single physical N_Port.
This allows multiple Fibre Channel initiators to occupy a single physical port. This mechanism is used by Virtual Connect
to allow all of the blades in the enclosure to connect to the SAN fabric via the VC uplink ports. This mechanism can also be
used to allow multiple initiator WWNs to be applied to a single physical port of the CNA or HBA for a blade (often called
multi-initiator NPIV). Matrix OE uses this capability to allow flexible use of storage volumes across logical servers. A volume
intended to be a private boot volume might be presented to one initiator WWN, a shared data volume might be presented to
a set of initiator WWNs, and a logical server might be using both volumes (represented in Matrix as storage pool entries).
Matrix OE would have Virtual Connect apply all appropriate initiator WWNs to the HBA/CNA ports of the chosen physical
blade based on the storage needed by the logical server. If the number of initiators exceeds the number of physical ports,
multi-initiator NPIV can be used to apply them to the physical port (perhaps with one port having three initiator WWNs, one
physical and two applied through NPIV).
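The sketch below models how a set of initiator WWNs could be laid out over a blade's physical ports, with the first WWN per port applied directly and the rest added via multi-initiator NPIV. This is a conceptual illustration of the assignment logic only, not Virtual Connect code; the function and WWN names are hypothetical.

```python
# Conceptual sketch (not Virtual Connect code): distribute the initiator WWNs
# required by a logical server's storage pool entries over the blade's physical
# HBA/CNA ports. The first WWN on each port is applied directly; any additional
# WWNs for that fabric are applied to the same physical port via multi-initiator NPIV.

from collections import defaultdict

def assign_initiators(required, physical_ports):
    """required: list of (fabric, wwn); physical_ports: dict of port -> fabric."""
    by_fabric = defaultdict(list)
    for fabric, wwn in required:
        by_fabric[fabric].append(wwn)

    layout = {}
    for port, fabric in physical_ports.items():
        wwns = by_fabric.get(fabric, [])
        layout[port] = {
            "physical": wwns[0] if wwns else None,  # applied directly to the port
            "npiv": wwns[1:],                       # added via multi-initiator NPIV
        }
    return layout

if __name__ == "__main__":
    # Logical server using a private boot SPE and a shared data SPE on two fabrics.
    required = [("Fabric 1", "S1-001"), ("Fabric 2", "S1-002"),
                ("Fabric 1", "S4-001"), ("Fabric 2", "S4-002")]
    ports = {"HBA 1": "Fabric 1", "HBA 2": "Fabric 2"}
    print(assign_initiators(required, ports))
    # {'HBA 1': {'physical': 'S1-001', 'npiv': ['S4-001']},
    #  'HBA 2': {'physical': 'S1-002', 'npiv': ['S4-002']}}
```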
Note:
Matrix environments use Virtual Connect for physical connectivity to the SAN and the HP 3PAR StoreServ Storage solutions.
The VC uplinks should be connected to both nodes in a node pair or to separate fabrics that in turn are connected to both
nodes in a node pair. Matrix storage pool entry configuration includes specification of ports and fabrics, and any single
physical port will only have initiators for a given fabric (thus initiators for separate fabrics will be on separate physical ports).
If more than two fabrics are required, additional mezzanine cards can be added to the blade for additional physical ports.
Storage presentation and zoning in a Matrix OE environment
Matrix OE provides information regarding storage needs (the desired fabric, volume size, RAID level, and other relevant
information). The virtual volume may already exist on the HP 3PAR StoreServ (pre-provisioned storage) or SPM can create
the volume as it is required (on-demand provisioned storage). A Virtual volume (Vvol) on the HP 3PAR StoreServ Storage
system can be made visible or accessible to one or more logical servers by exporting the virtual volume to the appropriate
initiator WWNs used by storage pool entries associated with the logical server. This presentation can actually be done
before the storage pool entry is associated with a logical server, and is typically done before the logical server is activated
onto a physical blade. The intent is to have all the storage ready for when the logical server is powered up and gets access
to the SAN through the Virtual Connect module using the appropriate initiator WWNs. This presentation process can either
be done manually by the storage administrator in advance or can be automated by using the HP Storage Provisioning
Manager (SPM). When SPM automates the creation of the host definitions on the HP 3PAR StoreServ, a prefix is used to
allow the storage administrator to differentiate these host definitions from those manually created. SPM will present the
volume to the appropriate initiators over all controller ports on the appropriate fabric. The controller port choices are
returned to Matrix OE and an appropriate selection can be made automatically or manually by the administrator (perhaps
balancing controller port choices across storage pool entries).
In addition to automation of storage presentation, SPM can also automate the zoning in Brocade SAN environments.
SPM will create a zone set containing the initiator WWN (applied to the blade HBA or CNA by Matrix and Virtual Connect) and
all the storage controller port WWNs on the appropriate fabric (just as the presentation was over all controller ports on the
given fabric). For example, given the Fabric 1 and Fabric 2 in figure 3, SPM would present and zone to one controller port
for configuration A, two ports for configuration B, and four ports for configuration C. Administrators can choose to zone
manually, and may zone to a subset of controller ports, but need to be very careful to ensure the controller port used in the
Matrix storage pool entry configuration is appropriately zoned. That is, if the storage administrator chose port A but the
zoning was manually done for port B, there would be storage access issues. This is avoided in automated zoning
environments by ensuring zoning is to all controller ports on the fabric.
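As a rough illustration of the automated behavior described above, the sketch below builds a zone from one initiator WWN plus every controller port WWN on the given fabric, which is how a mismatch with a manually chosen controller port is avoided. It is not SPM code, and the controller port WWN lists are hypothetical, following the figure 3(C) layout.

```python
# Rough illustration (not SPM code): automated zoning puts the initiator WWN
# together with ALL storage controller port WWNs on the appropriate fabric,
# so whichever controller port is later selected for the storage pool entry
# is guaranteed to be zoned.

# Hypothetical controller port WWNs per fabric, following figure 3(C).
CONTROLLER_PORTS = {
    "Fabric 1": ["A1-010", "A1-110", "A1-210", "A1-310"],
    "Fabric 2": ["A1-011", "A1-111", "A1-211", "A1-311"],
}

def automated_zone(initiator_wwn, fabric):
    """Return zone members: the initiator plus every controller port on the fabric."""
    return [initiator_wwn] + CONTROLLER_PORTS[fabric]

if __name__ == "__main__":
    print(automated_zone("H1-01", "Fabric 1"))
    # ['H1-01', 'A1-010', 'A1-110', 'A1-210', 'A1-310']
```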
Detailed information regarding Matrix, SPM, storage automation, and the prefixes used in naming are available in the
Faster storage provisioning in the HP Matrix Operating Environment: Use of the HP Storage Provisioning Manager storage
catalog with controlled storage operations white paper located at hp.com/go/matrixoe/docs.
Four and eight port zoning configurations
The number of HP 3PAR StoreServ host-facing ports used in a zone configuration for a specific server can dramatically
affect the initiator count per port on the storage system. Figure 5 shows the difference between using four host-facing
ports versus using eight.
Figure 5. Four versus eight port configurations
(Diagram, left: Blade 1 through Blade 3, each with a CNA carrying WWNs Hx-01 and Hx-02, connected via VC SAN 1 and VC SAN 2 to Fabric 1-A and Fabric 2-A and zoned to one port per node on a two-node 3PAR StoreServ (nodes 0 and 1, ports 0:1:0 through 1:1:3 shown); each logical server Vvol (LS1 through LS3) is exported over the zoned ports, MPIO combines the multiple paths into one, and the initiator count per port is 3. Right: the same three blades with each fabric zoned to four host-facing ports on a four-node 3PAR StoreServ (ports 0:1:0 through 3:1:1, eight in total), again with an initiator count per port of 3.)
The left side of figure 5 shows a two-node HP 3PAR StoreServ with one port per node connected to each fabric (even ports
to Fabric 1-A and odd ports to Fabric 2-A).
Each of the three blades in the configuration has a Converged Network Adapter (CNA) with two virtual HBAs that have WWNs
assigned for storage access (Hx-01 and Hx-02). One of the virtual HBAs from each CNA is connected over the BladeSystem
backplane to the VC SAN 1 in the Virtual Connect FlexFabric module 1, whose uplink ports connect to Fabric 1-A. The other
virtual HBA from each CNA is configured to connect to VC SAN 2 on VC module 2 with its uplink connected to Fabric 2-A.
The actual access to specific ports and paths is accomplished by using WWN zoning on the switches in the fabric. Zone
configurations have to be created in each fabric that include the CNA virtual HBA WWN as well as the HP 3PAR StoreServ
host-facing port WWNs. (See figure 6 for detail on the zone configuration.)
In this configuration a virtual volume (Vvol) from the HP 3PAR StoreServ can be made accessible to blade 1 by exporting the
Vvol to the blade using the virtual WWNs (H1-01 and H1-02). The storage administrator (or HP Storage Provisioning
Manager [SPM]) will create the appropriate host definition(s) on the storage server that contains the WWNs of the specific
server/blade requiring access to the storage volumes (Vvols), in this case H1-01 and H1-02. Since H1-01 is zoned and
logged into port 0:1:0 and 1:1:0 as an initiator and H1-02 is zoned and logged into port 0:1:1 and 1:1:1 as another initiator,
the Vvol will be exported via all four host-facing ports.
Blade 1 will now have access to the Vvol by means of four separate paths on four ports of the storage server. The multipath
driver (MPIO) on the host will combine the four paths into one virtual device on the host. In this configuration, if a Virtual
Connect module/uplink, switch/fabric, or node on the storage server fails for any reason, the host will lose two of the possible
paths to the storage server; the MPIO driver will switch all I/Os to the remaining two paths and I/O will continue.
Note:
A path failure has to be detected by the MPIO driver, and the driver has to react to this failure by switching I/O to the remaining
paths. This detection of failure and switching of paths can sometimes take a few seconds and, depending on the application,
might cause failures or slow response times for a short time.
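The following sketch enumerates the paths blade 1 would see in the left-hand configuration of figure 5 and shows how a component failure removes the two paths behind it; it is a simplified model of the path arithmetic, not an MPIO driver implementation.

```python
# Simplified model (not an MPIO driver): each path is an (initiator WWN, array port)
# pair created by the zoning; MPIO presents the surviving paths as one device.

# Blade 1 in the left side of figure 5: H1-01 is zoned to the even ports on
# Fabric 1-A and H1-02 to the odd ports on Fabric 2-A.
ZONED = {
    "H1-01": ["0:1:0", "1:1:0"],   # Fabric 1-A
    "H1-02": ["0:1:1", "1:1:1"],   # Fabric 2-A
}

def paths(zoned, failed_node=None):
    """Return the usable (initiator, port) paths, dropping ports on a failed node."""
    result = []
    for initiator, ports in zoned.items():
        for port in ports:
            node = int(port.split(":")[0])
            if node != failed_node:
                result.append((initiator, port))
    return result

if __name__ == "__main__":
    print(len(paths(ZONED)))             # 4 paths when everything is healthy
    print(paths(ZONED, failed_node=0))   # 2 remaining paths if node 0 fails
```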
The right side of figure 5 shows the same configuration, but with a four-node HP 3PAR StoreServ system, where
each fabric is connected and zoned to four host-facing ports on the storage server (eight in total). Note that in this
configuration each server/blade WWN is logged into all four host-facing ports for each fabric, adding to the initiator
count on all four ports.
Maximum initiator per port count
In order to prevent oversubscription to any single array port, the HP Global Services and Support organization has set some
limits on the number of servers or HBAs (initiators) that should be connected to a single port. The limits are expressed as
the maximum number of initiators per port, where an initiator is a unique WWN that has logged into the port. Although initiators
can be related to server/blade CNA/HBA WWNs and therefore be considered to be a restriction on the number of servers per
port, the restriction is more targeted to minimize the total I/O performed to a specific port. Each port has a fixed amount of
buffers to handle I/O queue-depth and the more initiators connected to a specific port the more the risk of depleting the
buffers and therefore the queue-depth of the port. This can result in longer I/O service time and even rejection of I/Os by the
port. The 3PAR StoreServ storage system can easily report the number of unique WWNs per port. If the number of
initiators is larger than the recommended limit, an Over Subscribed System Alert (OSSA) e-mail message will be generated
by HP's Global Services and Support organization and sent to the customer, making them aware of the potential
oversubscription per port and recommending that host connectivity be balanced over more physical ports.
The current limit defined for an HP 3PAR StoreServ host-facing port is a maximum of 64 initiators per port (1 initiator =
1 WWN). In a physical world this can be translated to 64 servers (or rather unique WWNs on a CNA/HBA port) per storage
host-facing port. Using “initiators per port” is not a completely foolproof method to limit oversubscription to a port. The
administrator has to take into consideration what sort of application and load the physical servers connected to a port will
produce and then adjust this number accordingly. It is possible that a single server running a very heavy I/O load can by itself
over utilize a single port. The HP Global Services and Support team has found that, in general, a server running virtualization
software such as VMware ESX/ESXi or Microsoft® Windows® Hyper-V hypervisors has the potential to produce more I/O than
a server running a dedicated operating system. It is therefore also recommended that when using hypervisors the
maximum initiators per port should not exceed 16 for 4 Gb/s and 32 for 8 Gb/s Fibre Channel connections.
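The limits above can be turned into a simple check, as sketched below; the limit values come from this section, while the function names and inputs are illustrative.

```python
# Simple check of the per-port initiator guidance from this section
# (64 initiators per port in general; 16 for 4 Gb/s and 32 for 8 Gb/s
# Fibre Channel connections when hypervisors are in use).

def initiator_limit(link_gbps=8, hypervisor=False):
    if hypervisor:
        return 16 if link_gbps == 4 else 32
    return 64

def check_port(port, active_initiators, link_gbps=8, hypervisor=False):
    limit = initiator_limit(link_gbps, hypervisor)
    status = "OK" if active_initiators <= limit else "over recommended limit (expect OSSA)"
    return f"port {port}: {active_initiators}/{limit} initiators -> {status}"

if __name__ == "__main__":
    print(check_port("0:1:0", 48))                   # within the general 64 limit
    print(check_port("0:1:0", 48, hypervisor=True))  # exceeds the 32 hypervisor limit
```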
Note:
HP 3PAR StoreServ also has a recommended total maximum WWNs per array. This number depends on the model and
configuration of the array and is defined in the specification for each array. See the “HP 3PAR StoreServ Storage best
practices guide” for the latest recommendation for specific models.
Other than being able to report the number of initiators (unique WWNs) per port, the HP 3PAR StoreServ has no method to
determine if the WWNs are from a hypervisor system or a standalone server/blade, nor can the array determine if the WWN
is from a dedicated physical CNA/HBA port or from a Virtual Connect multi-initiator NPIV WWN added to a physical CNA/HBA
port. This can lead to a false alarm being raised when using a solution such as Matrix Operating Environment that relies on
using multi-initiator NPIV WWNs rather than physical server WWNs. In these types of environments it is difficult (from the
HP 3PAR StoreServ perspective) to determine how many physical servers/blades (CNAs/HBAs) are actually in use, or to
determine what sort of load profile the active resources will have on a single port.
Virtual Connect multi-initiator NPIV implementation
Multi-initiator NPIV allows HP Virtual Connect to assign multiple WWNs to a CNA or HBA port already containing a WWN.
The new WWNs are added to the existing CNA/HBA WWN and the CNA/HBA will be able to perform I/O using the new WWN(s)
as if they were the actual physical CNA/HBA WWN. A physical/virtual HBA port can have multiple WWNs added through
multi-initiator NPIV. What makes NPIV so ideal for a Matrix environment is that it allows flexible use of storage volumes.
Each storage volume (Vvol) can be exported to distinct initiator WWNs and then combined and recombined as necessary to
provide storage for logical servers. A given volume might be a boot volume one week, a shared database volume the next
week, and one of several volumes the subsequent week. Storage Vvols can be exported and zoned to the Virtual Connect
WWNs, and then associated with logical servers as required. When a logical server is activated on a physical blade, Matrix
will have Virtual Connect apply the appropriate initiator WWNs to the CNA/HBA ports (using multi-initiator NPIV features if
the number exceeds the number of physical ports). This allows the logical server to access the storage resources without
having to modify SAN zoning or array configurations. A logical server can be relocated from one blade to another by
deactivating the Virtual Connect profile (with associated WWNs) on the original blade and then activating the Virtual Connect
profile (with associated WWNs) on a new blade.
Storage resources can also be made available to multiple servers/blades at the same time by exporting and zoning for a
set of WWNs and activating Virtual Connect profiles with those WWNs on a specific set of server/blades at a time. Matrix OE
requires distinct initiator WWNs for volumes which are to be shared among logical servers, versus private to a specific
logical server.
In some configurations, a single pair of WWNs can be used to access a single storage device (VLUN) on the array, or in other
cases a single pair of WWNs can be used to access a group of storage devices (VLUNs) on the array. Whether a one-to-one
relationship or a many-to-one relationship is used depends on the way the user requests the storage resources and the
criteria used for the selection. More details are in the Creating and using Matrix storage pool entries section of this document.
Although using multi-initiator NPIV WWNs provides great benefit and flexibility to the Matrix environment, each WWN used
in the configuration (applied to a physical port or via multi-initiator NPIV) adds to the initiator count on the array, perhaps
resulting in some oversubscription alarms on the array. Since the array has no way to distinguish between WWNs applied
directly to a physical port or via multi-initiator NPIV, each WWN that logs in on the array is counted as a unique initiator.
Matrix storage pool entries
The Matrix Operating Environment uses storage pool entries to define specific storage volumes/LUNs required for a logical
server. A storage pool entry will typically consist of a definition of the type of storage required (size, RAID level, tags) and
whether the storage should be accessible by multiple servers (a sharer count can be specified). The storage pool entry will also
include ports and fabric information, and a WWN for each port. These WWNs will ultimately be used to access the specific
storage from a specific blade. A logical server may have more than one storage pool entry (e.g. one for a private boot
volume, and another for a shared database volume). Each of those storage pool entries would have fabric specifications for
the volumes and the initiator WWNs for the appropriate ports would be applied to the physical blade via the Virtual Connect
profile (using multi-initiator NPIV if necessary).
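A minimal data-structure sketch of the storage pool entry concept described above follows; the field names are illustrative only and do not reflect the actual Matrix OE schema.

```python
# Minimal sketch of the storage pool entry concept (illustrative field names only,
# not the actual Matrix OE schema).

from dataclasses import dataclass, field

@dataclass
class Volume:
    size_gb: int
    raid_level: str
    tags: list = field(default_factory=list)

@dataclass
class StoragePoolEntry:
    name: str
    volumes: list                              # one or more volumes/LUNs
    sharer_count: int = 1                      # 1 = private, >1 = shared
    fabrics: tuple = ("Fabric 1", "Fabric 2")
    wwns: dict = field(default_factory=dict)   # fabric -> list of initiator WWNs

# Example: a private SPE with a boot volume, and a shared SPE with three sharers.
boot_spe = StoragePoolEntry("SPE1", [Volume(10, "RAID 1", ["boot"])],
                            wwns={"Fabric 1": ["S1-001"], "Fabric 2": ["S1-002"]})
shared_spe = StoragePoolEntry("SPE4", [Volume(100, "RAID 5", ["data"])], sharer_count=3,
                              wwns={"Fabric 1": ["S4-001", "S4-003", "S4-005"],
                                    "Fabric 2": ["S4-002", "S4-004", "S4-006"]})
print(boot_spe.name, shared_spe.sharer_count)
```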
Note:
The WWNs in a storage pool entry may end up being physical (non-NPIV, applied directly to the physical port) or virtual
(multiple applied to the same physical port via multi-initiator NPIV). This will be determined based on how the storage pool
entry is associated to a logical server. If a logical server is only using a single storage pool entry with redundant paths to the
storage volumes (two fabrics/ports), these will be physical WWNs of the blade HBA ports.
Private storage pool entries
Storage pool entries containing volumes to be used by a single logical server can be considered private storage pool
entries. Private storage pool entries are typically used for boot devices but can also be application storage devices/LUNs
if the application or part of the application is not to be shared between logical servers. For example, a logical server may
require a private boot volume, a shared database volume, and a private transaction log volume. The private volumes
can be in separate storage pool entries, or combined into one. Each storage pool entry will have Virtual Connect WWNs
assigned to ports for the appropriate fabrics in the storage pool entry. When a logical server is activated, Matrix will use
Virtual Connect to ensure the initiator WWNs for the associated storage pool entries are applied to the blade's CNA/HBA
ports. These WWNs will log into the SAN fabric, be visible to the array ports, and be registered by the HP 3PAR StoreServ
as additional initiators.
Note:
HP Storage Provisioning Manager (SPM) or the storage administrator will add the host definitions to the array even before
the appropriate WWNs are activated. The administrator can manually enter the WWNs required for the logical server
(available in the storage pool entry definitions), or SPM will programmatically add them to the host definitions on the
storage array. Storage devices can then be exported to the host definitions and as soon as the WWNs log into the SAN the
storage device will be visible and available for use by the server that has been activated.
Shared storage pool entries
Storage pool entries allow the specification of a sharer count. The count is one for a storage pool entry which is not intended
to be shared (a private storage pool entry). Shared storage pool entries contain storage resources that need to be available
and accessible to multiple servers/blades at a time. This is typical for clustered or hypervisor environments or where specific
software applications require access to the same devices/LUN(s) from multiple servers. Since the storage pool entry has to
be available to multiple servers at the same time, the storage pool entry will have multiple WWNs assigned to it (to enable
the storage to be presented to all of the necessary servers). The number of WWNs is determined by the sharer count and
number of ports (e.g., one port if no redundancy, two ports for redundancy, and perhaps more ports if the blades have
additional HBA Mezzanine cards).
All volumes in the shared storage pool entry will be shared. Volumes intended to be private to a given logical server must
be in a separate storage pool entry.
Best practice:
Avoid creating shared storage pool entries unless necessary. Use judgment when requesting shared storage: do you really
need all 64 servers/blades to have access to the storage, or might eight, four, or even just two servers be
sufficient? This is not always possible or practical, but wherever possible attempt to reduce the sharer count in a shared
storage pool entry.
Figure 6. Private and shared storage pool entries and logical servers on blades
(Diagram: Host 1, Host 2, and Host 3, each with HBA 1 and HBA 2. HBA 1 of each host carries its physical WWN (Hx-01), its private SPE WWN (S1-001, S2-001, or S3-001), and a WWN from the shared SPE4 (S4-001, S4-003, or S4-005), and connects through Switch 1 to one port on each node of the node pair; HBA 2 carries the corresponding Hx-02, Sx-002, and S4-002/S4-004/S4-006 WWNs and connects through Switch 2 to different ports on the same node pair. A Switch 1 zone configuration inset shows four zones: Zone 1 = S1-001 plus the zoned array port WWNs, Zone 2 = S2-001 plus the array ports, Zone 3 = S3-001 plus the array ports, and Zone 4 = S4-001, S4-003, S4-005 plus the array ports. SPE1 and SPE2 each contain one Vvol, SPE3 and SPE4 contain multiple Vvols, and the resulting initiator count per port is 6.)
Figure 6 shows a configuration with three logical servers activated on three blades and four storage pool entries. Three
of the storage pool entries are private and the fourth (SPE4) is a shared storage pool entry shared among all three
logical servers.
Note:
In figure 6, SPE1 and SPE2 are depicted with a single storage device/LUN, and SPE3 and SPE4 are depicted with multiple
storage devices/LUNs. This is merely to demonstrate that a storage pool entry can have a single or multiple storage volumes.
A storage pool entry might contain a single storage volume used as a boot device, or might contain the boot volume and
other private volumes. Storage pool entries used for shared application data might have multiple storage volumes for all of
the sharers to access.
The fabric on switch 1 connects one of the CNA/HBA ports in each blade to a controller port on both nodes of a node pair on
the HP 3PAR StoreServ. Similarly, the fabric on switch 2 connects the other CNA/HBA port of the server blades to the same
node pair, on different ports. The inset diagram shows the details of the zoning configuration on switch 1. Assuming the storage
pool entries were assigned to logical servers and the logical servers were activated on the physical blades, the storage
pool entry WWNs (S1-001, S4-001) as well as the physical server WWN (H1-01) will log into the switch port. WWN zoning is
configured to zone each of the initiator WWNs to the storage server port WWNs on the appropriate fabric. Four separate
zone sets would be created on the switch/fabric in this instance. Zone sets are created either automatically by SPM or
manually by the storage administrator.
Note:
The zone sets in the figure were simplified to demonstrate private vs. shared zone configurations. SPM might create
multiple zone sets for shared storage pool entries.
Note:
SPM will always zone all storage server ports on the appropriate fabric to the HBA/CNA WWN(s). For figure 6, that would be
port 0:1:0 and 1:1:0 for the switch 1 fabric and port 0:1:1 and 1:1:1 for the switch 2 fabric.
Note:
In figure 6, the physical server/blade HBA WWNs (H1-01 or H1-02) were not used in any zone set definition in this example,
and therefore would not be visible on the storage server host-facing ports, and would not be counted as additional initiators
on the storage server.
Once the zone set configuration is in place, and the logical servers are activated on physical blades in the Virtual Connect
environment, the HBA/CNA WWNs will log into the SAN and be visible to each of the zoned storage server host-facing ports
and will add to the initiator count for each port.
As we can see in figure 6, having four storage pool entries, one being shared among all three servers/blades, can result in an
initiator count of six per port on the HP 3PAR StoreServ storage server. The shared storage pool entry adds three initiators
to each array port as it is shared by three servers, and all three of those servers are activated. If the storage pool entry was
to be shared in a larger configuration, then more initiators would be involved (e.g., an eight-node cluster with a shared
storage pool entry would add an additional eight initiators to the per port initiator count on the storage server port(s) when
those eight servers have been activated). If some of the sharing servers have not yet been activated, they do not add to the
count of visible initiators.
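Reproducing the figure 6 arithmetic, the sketch below counts active initiators per array port as the sum, over activated storage pool entries, of the number of activated sharers. It is a purely illustrative calculation, not a 3PAR reporting tool.

```python
# Illustrative count of active initiators per array port, reproducing the
# figure 6 example: each activated sharer of an SPE contributes one WWN
# (one initiator) per fabric, and every port zoned on that fabric sees it.

# SPE -> number of sharers currently activated on blades.
activated_sharers = {"SPE1": 1, "SPE2": 1, "SPE3": 1, "SPE4": 3}

initiators_per_port = sum(activated_sharers.values())
print(initiators_per_port)   # 6, matching the figure 6 initiator count per port

# An eight-node cluster sharing a single additional SPE would add eight more
# initiators per zoned port once all eight logical servers are activated.
activated_sharers["SPE5"] = 8
print(sum(activated_sharers.values()))   # 14
```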
Note:
The HP 3PAR StoreServ only counts active WWN/initiators per port. Any host definitions on the storage array, or configured
storage pool entries in Matrix with WWNs that are associated with a logical server not currently activated on a blade will not
be counted as initiators per port. The count is incremented when the logical server with which the storage pool entry is
associated is activated on a physical blade.
Note:
SPE3 and SPE4 in figure 6 have multiple VLUNs exported to the WWNs in the storage pool entry. It is important to consider
the effect of having multiple storage pool entries each with only one volume/device/VLUN as opposed to having multiple
devices/VLUNs in a single storage pool entry. If, for instance, SPE4 had only one device associated with the storage pool entry
and another storage pool entry was created for the second device, an additional three initiators per port would have been
added to the configuration. (In the example of an eight-node cluster, it would have used 16 initiators to access the two
devices on eight logical servers.)
Creating and using Matrix storage pool entries
Manual creation of storage pool entries
Prior to the 7.0 release of the Matrix Operating Environment, storage pool entries could only be created manually by the
administrator. The storage pool entry would contain information about the storage requested (size, RAID, redundancy, tags)
and also Virtual Connect WWNs. The storage administrator would manually create, export, and zone the storage (including
host definitions on the array). Information regarding that storage (controller port WWN, LUN) could be manually added to
the storage pool entry or automatically populated by the HP Storage Provisioning Manager (after the storage administrator
imported the volumes into the storage catalog). The storage pool entries are available for provisioning when users request
storage resources and can be added to a logical server configuration.
Manual creation of storage pool entries limits the flexibility of storage provisioning. A pre-defined storage size is used and
this results in a best fit for the user request. There might be a one-to-one relationship between the volume (VLUN) and
storage pool entry with each storage pool entry having only one storage volume, or the storage administrator may know in
advance that the end user might request multiple volumes and specifically create such a storage pool entry (e.g., containing
a 10 GB boot volume and a 50 GB data volume).
To achieve the greatest flexibility, administrators may create separate storage pool entries for each volume (allowing a
logical server to use volumes 1, 2, and 3 one week, and then when that logical server is no longer needed, volume 1 might
be used by database logical server, while volume 2 is used by a web service logical server and volume 3 remains unused).
This separation provides flexible reuse, but comes at the cost of additional WWNs. Each storage pool entry has separate
WWNs, applied to the servers as appropriate. If all three volumes were in one storage pool entry, fewer WWNs would be required,
but the volumes would be allocated as a group (so all would go to the database logical server).
For instance, an end user may want to create an SQL database and request four separate storage devices/LUNs for the
database in order to separate data from log and redo or temp space. If those are to be shared volumes, they must be in a
separate storage pool entry than the private boot volume. With four individual storage pool entries for the database, as well
as the additional fifth storage pool entry for the private boot volume (and perhaps a private transaction log), this can result
in an additional five initiators for the count per storage server port. If the four database devices/LUNs were configured into a
single storage pool entry, the request would have only added one additional initiator count per port. Now consider what
would happen if the original request was to have the SQL database shared on an eight-node cluster and the four requested
database devices/LUNs were assigned to four storage pool entries, then for only the four database devices on the eight
servers/blades an additional 32 initiators would have been added to each HP 3PAR StoreServ controller port. (Again assuming all
logical servers that share the storage pool entry are active at the same time.)
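The arithmetic of this example can be summarized as below; the counts follow directly from the text (initiators added per port = number of storage pool entries multiplied by the number of activated sharers), and the function name is illustrative.

```python
# Worked arithmetic for the SQL database example: each storage pool entry
# contributes one initiator WWN per activated sharer on every zoned port.

def added_initiators_per_port(num_spes, sharers=1):
    return num_spes * sharers

# Four data SPEs plus one private boot SPE, single server: five initiators per port.
print(added_initiators_per_port(5))              # 5
# Grouping the four database volumes into one SPE: one initiator per port.
print(added_initiators_per_port(1))              # 1
# Four data SPEs shared across an eight-node cluster: 32 initiators per port.
print(added_initiators_per_port(4, sharers=8))   # 32
```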
Automated creation of storage pool entries
Since the 7.0 release of the Matrix Operating Environment, the creation of storage pool entries can be automated through
use of the HP Storage Provisioning Manager (SPM). Matrix infrastructure orchestration can auto-generate a storage pool
entry based on storage information in a service template definition. SPM can fulfill a storage request (manually created or
auto-generated storage pool entry) with pre-provisioned volumes or on-demand provisioned volumes. SPM can create the
volume on the array, create a host definition if necessary, do presentation/exports, and adjust zoning (in Brocade SAN
environments). This means that the storage pool entry definition as well as the specific VLUNs required for the user request
can be created just after the user requests the service. If the user has requested multiple private storage devices/LUNs from
the system then Matrix infrastructure orchestration is able to create this request as a single storage pool entry and
therefore reduce the number of WWNs used for the request. Volumes to be shared need to be in a separate storage pool
entry from the private volumes.
Note:
Matrix will always create the boot disk and all private data disks in the same storage pool entry when auto-generating
storage pool entries. Any requested shared disks will be combined into a second storage pool entry.
Figure 7. Minimizing the number of storage pool entries used
(Diagram: a service request for an SQL database with a boot disk C:, a temp disk D:, a data disk E:, a logs disk F:, and an archive disk G:, all RAID 1 Fibre Channel volumes of 20 GB to 100 GB. Left panel, "Service request completed with 5 pre-provisioned SPEs": each volume sits in its own storage pool entry (SPE20 through SPE24), each with its own pair of WWNs (WWN201/WWN202 through WWN241/WWN242). Right panel, "Service request completed with SPM creating 2 SPEs": the same volumes are grouped into two auto-generated storage pool entries, consuming far fewer WWNs.)
Figure 7 shows the difference between using five storage pool entries, each with a single volume/LUN, and grouping the
volumes/LUNs into a minimal number of storage pool entries when servicing a user request. The diagram assumes private
storage pool entries with just one server accessing each storage pool entry.
Best practice:
It is important to attempt to group all private storage volumes into a single storage pool entry, and all shared storage
volumes into a separate storage pool entry. This can reduce the number of WWNs used in the configuration. If there are
concerns that future logical servers may not need that same combination of volumes, then Matrix infrastructure
orchestration can auto-generate the storage pool entries based on the specific user request (rather than manually
pre-defining the storage pool entries).
Load balancing storage access over array ports
An earlier section of this document noted that the HP 3PAR StoreServ requirement of a maximum of 64 initiators per port (32 when using hypervisors with an 8 Gb SAN) was established with the intention of restricting the configuration to a maximum of 64 physical servers per array port and thereby preventing I/O oversubscription per port. It did not take into consideration the use of multi-initiator NPIV, which adds a large number of additional initiator WWNs on the same physical port. In Matrix environments (and in most newer hypervisor configurations using NPIV), the initiator count per port can easily exceed the recommended maximum of 64 with only a few physical servers/blades in use. However, keep in mind that the recommendation is intended to prevent oversubscription of I/O to a single array port, not specifically to restrict the number of physical servers.
A high (or even low) initiator count per port does not directly correspond to over- or under-subscription of I/O. For instance, a shared resource can dramatically increase the initiator count per port even though only one server/blade is performing I/O at any time (assuming a clustered environment), while a single private resource using only one WWN can drive an extremely high I/O rate and oversubscribe the port. This paper has described several options to reduce the number of initiators on each of the array ports: the user can reduce the number of servers sharing a resource and/or ensure that storage volumes/LUNs are grouped into the minimal number of storage pool entries. These techniques save only a small number of initiators per port and in many cases are considered too restrictive in servicing the end-user request.
In addition to techniques that reduce the initiator WWN count, it is perhaps more important to distribute the I/O workload appropriately over selected storage server ports. If a storage host-facing port is overloaded by the combined load from multiple servers, then the solution outlined in this section can help spread the load over multiple storage server ports. If a single server is driving more I/O from a single initiator than the array port can handle, then application redesign may be required.
For the purposes of design and planning, assume that each server/blade in a Matrix configuration will have an average of three storage pool entries, and thus three WWNs assigned to each CNA/HBA port. This allows one storage pool entry for private volumes, one for shared volumes, and a third if a different storage tag is needed for some volumes. The result is a total of six WWNs per server/blade, assuming redundant SAN fabrics and two CNA/HBA ports per blade; but because each HBA port connects to a separate fabric, and from there to individual port(s) of the array, a count of three per blade is used when calculating against the 3PAR StoreServ limits. Given a c7000 enclosure with 16 blades, there would be 48 WWNs counted against the array port initiator limit (16 blades * 3 WWNs = 48 initiators). With two c7000 enclosures and 32 blades, this grows to 32 blades * 3 WWNs = 96 initiators, as shown in figure 8.
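The planning arithmetic above, together with the 64-initiator-per-port guideline discussed earlier, can be captured in a short sketch (assuming the three-WWN-per-CNA-port planning average; actual counts depend on the storage pool entries that are defined):

# Sketch: planning estimate of initiators per HP 3PAR StoreServ host-facing port.
# Assumptions: an average of three SPE WWNs per CNA/HBA port per blade, each
# fabric sees one CNA port per blade, and every active WWN logs in to each
# array port zoned to that fabric.

WWNS_PER_CNA_PORT = 3          # planning average used in this paper
RECOMMENDED_MAX_PER_PORT = 64  # guideline (32 for hypervisors on an 8 Gb SAN)

def initiators_per_port(active_blades: int) -> int:
    return active_blades * WWNS_PER_CNA_PORT

for blades in (16, 21, 22, 32):
    count = initiators_per_port(blades)
    status = "within guideline" if count <= RECOMMENDED_MAX_PER_PORT else "exceeds guideline"
    print(f"{blades} blades -> {count} initiators per port ({status})")
# 16 -> 48, 21 -> 63, 22 -> 66 (exceeds), 32 -> 96 (exceeds)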
Figure 8. 32 servers/blades with a two-fabric configuration
Depending on the I/O workload of the logical servers, this may or may not overload the array port, but it will raise an oversubscription event and generate an e-mail message. In this two-fabric configuration, if more than 21 logical servers, each with three WWNs, are defined and running, the initiator count per port will exceed the recommended maximum. In a dynamic Matrix environment it is very difficult to predict the load or utilization per server/blade. Taking into account that most Matrix configurations consist of multiple c7000 enclosures (each with 16 blades), it is important to plan on distributing the load over multiple HP 3PAR StoreServ controller ports for all the servers/blades in the Matrix environment.
Segregation via multiple fabrics
In figure 8 we had only two fabrics (Fabric 1-A and Fabric 2-A) with each fabric connected and zoned to four storage
host-facing ports. In figure 9 we have a configuration with four fabrics (Fabric 1-A, Fabric 2-A, Fabric 1-B, and Fabric 2-B)
each connected and zoned to two storage host-facing ports.
Best practice:
By defining multiple fabrics in the environment, the initiator WWNs used by storage pool entries and logical servers can be spread across multiple sub-groups of host-facing ports, instead of across a single large set of host-facing ports on the HP 3PAR StoreServ, thereby ensuring that not all WWNs (initiators) are visible on all the host-facing ports used in the Matrix configuration. This reduces the initiator-per-port count for a single port and, most importantly, reduces the possibility of oversubscription of a single port.
Figure 9. 32 servers/blades with a four-fabric configuration
The SAN fabric has to be reconfigured to allow for four individual fabrics. This can be accomplished either by using Virtual Switch functionality, where supported by the switch vendor, to divide each switch into two virtual switches, each capable of forming its own fabric, or by adding new SAN switches to the environment and creating new fabric configurations.
In the configuration in figure 9, Virtual Connect was used to define the additional logical SAN fabrics in the Virtual Connect Domain. Note that the diagram displays two c7000 enclosures and two sets of Virtual Connect FlexFabric modules, but the Virtual Connect domain spans both enclosures. Each logical SAN configuration must have a dedicated uplink to the associated fabric. This configuration allows Virtual Connect to connect any of the blades to any fabric when required.
Note:
Any blade in any enclosure in the Virtual Connect Domain Group can be logically connected to any of the four fabrics
at any point in time. Figure 9 demonstrates how at a given point in time a blade will only be configured to use two of
the four fabrics.
Note:
Multiple fabrics also bring an increase in management cost for the fabric environment and possibly additional hardware cost.
With the configuration in figure 8, the two fabrics (Fabric 1-A and Fabric 2-A) were each connected to four storage host-facing ports, using a total of eight ports on the storage server. SPM would zone any storage to all the host-facing ports in a fabric, so a WWN would log in to all four ports used by each of the fabrics (eight in total) and contribute to the initiator count on all eight ports. In the configuration in figure 9, the fabrics are connected to only two host-facing ports per fabric (four in total), and any logical servers zoned on those fabrics will only add initiators to the two host-facing ports each of the fabrics is using (four in total).
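A brief sketch of this fan-out effect, using the assumed values from figures 8 and 9 (32 active blades, three WWNs per CNA port, and SPM zoning each WWN to every host-facing port in its fabric):

# Sketch: effect of fabric count on the initiator count per host-facing port.
# Assumptions: 32 active blades split evenly across the fabric pairs, three
# WWNs per CNA port, and SPM zoning each WWN to every host-facing port in
# its fabric (four ports per fabric in figure 8, two per fabric in figure 9).

TOTAL_BLADES = 32
WWNS_PER_CNA_PORT = 3

def per_port_initiators(num_fabric_pairs: int) -> int:
    # Each blade is connected to one fabric pair; every port in that pair
    # sees all of the pair's blades, each contributing three WWNs.
    blades_per_pair = TOTAL_BLADES // num_fabric_pairs
    return blades_per_pair * WWNS_PER_CNA_PORT

print(per_port_initiators(num_fabric_pairs=1))  # 96 -> figure 8 (Fabric 1-A/2-A only)
print(per_port_initiators(num_fabric_pairs=2))  # 48 -> figure 9 (A and B fabric pairs)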
The way storage pool entries, and therefore logical servers, are assigned to specific fabrics now becomes an integral part of balancing the resources on the storage host-facing ports. Virtual Connect performs the logical connection between any physical blade in the enclosure and the required SAN fabric. Any physical blade can be logically connected to any fabric, but the connection will be made based on the assigned logical server's fabric requirements. Typically a specific blade will only be connected to two fabrics, as each CNA has only two virtual HBAs configured, but with additional hardware it is possible to connect a blade to all four fabrics. Connecting a logical server or blade to all four fabrics would, however, defeat the purpose of separating the fabrics.
Note:
When adding new host-facing ports from the HP 3PAR StoreServ to an HP Matrix configuration, some steps must be taken to ensure that Matrix and SPM handle the change gracefully.
For SPM version 1.3 (with Matrix Operating Environment 6.3), this change cannot be made while servers that access SPM-managed storage are online. The steps to perform are as follows:
1. Cleanly shut down all logical servers using storage volumes from the HP 3PAR StoreServ.
2. Connect the additional HP 3PAR StoreServ host-facing ports to the SAN.
3. Resynchronize the array in SPM.
4. Reactivate the SPM services.
5. Restart the logical servers that were shut down.
Following this sequence allows SPM to rebuild the presentations and SAN zoning to the logical servers.
For SPM releases 2.0 and later (bundled with Matrix Operating Environment 7.0 and later), the additions can be made with the logical servers remaining online:
1. Connect the additional HP 3PAR StoreServ host-facing ports to the SAN.
2. Resynchronize the storage array in SPM.
In addition, for both SPM 3.1 and SPM 2.0 and later, each service in SPM that is non-conformant after the array has been resynchronized should be reactivated by performing the following steps:
1. Right-click the row containing the service in the services view.
2. Choose “Configure Requirements” from the contextual menu that appears.
3. Accept all values in the wizard by selecting “Next” in every case.
4. Click “Finish” at the end of the wizard.
Figure 9 displays a configuration where the logical servers on blades 1 to 8 in the first enclosure and blades 17 to 24 in the second enclosure are associated with the Fabric 1-A and Fabric 2-A fabrics. These 16 logical servers and the storage pool entries associated with them will then be zoned to the four storage host-facing ports used by the two fabrics. Assuming each logical server has an average of three WWNs associated with each CNA HBA, the storage server host-facing ports will have a total of 48 initiators each (assuming all 16 logical servers are active, as only active WWNs count as initiators on the storage port).
In the same way, the logical servers associated with blades 9 to 16 and 25 to 32 have storage pool entries with requirements for the Fabric 1-B and Fabric 2-B fabrics. These logical servers will be zoned to the four storage host-facing ports connected to those fabrics, and all active WWNs will log in to those ports. Again assuming an average of three WWNs per logical server CNA HBA, there will be 48 initiators per storage host-facing port.
Using this method to create four separate fabrics allows the user to reduce the number of potential initiators per port on the storage array, and at the same time allows more logical servers and storage pools to be created in the Matrix Operating Environment.
Note:
This type of multiple-fabric configuration is best implemented when manually creating storage pool entries and requires the creation of multiple storage pool entries, each with specific fabric requirements. Great care should be taken to ensure the correct combination of fabrics is used with each storage pool entry definition.
Best practice:
The HP 3PAR StoreServ is designed to scale the total number of host-facing storage ports per node in order to balance host access over multiple resources. To limit the number of initiators per port, it is recommended to dedicate a specific set of host-facing ports to the Matrix Operating Environment and, where possible, not to share those ports with any other servers or applications.
Note:
When creating shared storage pool entries all the logical servers that access the shared storage resource have to be on the
same set of fabrics. Carefully plan the layout of logical servers and the use of shared volumes on these servers to minimize
the number of WWNs required per fabric.
Segregation via manual fabric zoning
An alternative to the multiple-fabric configuration method is to selectively create fabric zones in the default two-fabric configuration, as displayed in figure 10. The same results as in figure 9 can be achieved using selective manual zoning, but this method requires active administrator involvement in manually creating or modifying the storage zones on the fabric to include only a specific set of storage host-facing ports in each zone set (whereas SPM automated zoning will zone to all controller ports on the fabric). This method requires less physical equipment but might be more complicated to support over the long term and introduces the delay of manual human intervention.
Best practice:
Using manual zone set configurations to include only a subset of the available storage host-facing ports can limit the
initiator count to only a specific set of ports for each storage pool entry. Balancing storage pool entries over multiple sets of
storage host-facing ports can spread the initiator count per port on the array.
Figure 10. 32 servers/blades with a two-fabric configuration and manual selective zoning
SPM has the ability to perform automated zoning, but as noted earlier it will zone to all ports on the fabric (four ports per fabric in the figure 8 configuration). The storage administrator can use manual zoning rather than allowing SPM to automate zoning. HP Storage Provisioning Manager has the concept of Unmanaged SANs (those for which it does not do zoning), and the storage administrator can indicate that a given fabric either has open zoning (no zoning required) or requires manual zoning (the storage administrator will zone and update XML files to reflect that work).
Assuming manual zoning, the Matrix infrastructure orchestration workflow requiring storage will pause, and an e-mail will be sent providing information on what needs to be zoned (which initiator and storage controller WWNs). While the request will be to zone to all ports on the fabric, the administrator can examine the Matrix storage pool entry, determine which specific controller port was selected, and use only that port for zoning. It is critical to ensure the correct port is zoned or the server may not be able to access its storage (e.g., a boot device with a specific controller port WWN configured into the HBA). Because SPM requires zoning to all ports, it will be necessary to have the SPM XML file reflect zoning to all ports on the fabric, even if only one was actually zoned.
In figure 10, the logical servers and storage pool entries on blades 1 to 8 and 17 to 24 were manually zoned to use only ports 0:1:0, 1:1:0, 0:1:1, and 1:1:1 on the storage array. This ensures that the WWNs used by these logical servers will only be counted on those four ports. In the same way, the logical servers and storage pool entries assigned to blades 9 to 16 and 25 to 32 were zoned to use only ports 2:1:0, 3:1:0, 2:1:1, and 3:1:1 on the storage array, and the WWNs from these logical servers only count towards the initiator count on those ports.
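A simple planning sketch of this manual zoning layout is shown below; the blade-to-port mapping mirrors the figure 10 example, while the helper function and data structure are purely illustrative (actual zone creation would be performed with the fabric vendor's own tools):

# Sketch: manual zoning plan mapping blade groups to specific 3PAR host-facing
# ports, mirroring the figure 10 example. Blade ranges and port names follow
# the text; the structure itself is purely illustrative.

ZONING_PLAN = {
    "blades 1-8, 17-24":  ["0:1:0", "1:1:0", "0:1:1", "1:1:1"],
    "blades 9-16, 25-32": ["2:1:0", "3:1:0", "2:1:1", "3:1:1"],
}

def ports_for_blade(blade: int) -> list:
    """Return the array ports a blade's initiator WWNs should be zoned to."""
    in_first_group = blade <= 8 or 17 <= blade <= 24
    return ZONING_PLAN["blades 1-8, 17-24" if in_first_group else "blades 9-16, 25-32"]

print(ports_for_blade(5))   # ['0:1:0', '1:1:0', '0:1:1', '1:1:1']
print(ports_for_blade(30))  # ['2:1:0', '3:1:0', '2:1:1', '3:1:1']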
Note:
In figure 10 we used the example of two ports per fabric for the storage pool entries (four in total), but the administrator could also decide to divide further and use one port per fabric (two ports in total) for each storage pool entry. Balancing the load over only two sets of four ports is easier than managing four sets of two ports.
Note:
There are only two fabrics defined in Virtual Connect (Fabric 1-A and Fabric 2-A), and all blades are connected to these two fabrics; access to specific host-facing ports is determined by the zone configurations in the fabric.
Note:
It is still the administrator’s responsibility to ensure multi-path access to the storage device is zoned over at least two nodes, using nodes that are in a node-pair relationship.
Summary of best practices
HP 3PAR StoreServ
• Increase 3PAR StoreServ node count as needed to accommodate server I/O load. One pair may be sufficient for
redundant access by several servers with moderate loads. Additional node pairs may be needed as server count and
I/O load increase
• Where possible, dedicate sets of ports to the Matrix OE and avoid sharing those ports with other servers and solutions
• Connect the same ports on a node pair to the same fabric (e.g., ports 0:1:1 and 1:1:1 to Fabric A and 0:1:2 and 1:1:2 to
Fabric B)
• Ensure that device availability is configured to have at least two paths to at least two nodes on the HP 3PAR StoreServ
and that the nodes are part of a node pair.
Matrix OE configuration
• Use the minimal number of Matrix storage pool entries required to minimize initiator WWNs.
• It is important to attempt to group all private storage volumes into a single storage pool entry, and all shared storage
volumes into a separate storage pool entry. This can reduce the number of WWNs used in the configuration. If there
are concerns that future logical servers may not need that same combination of volumes, then Matrix infrastructure
orchestration can auto-generate the storage pool entries based on the specific user request (rather than manually
pre-defining the storage pool entries). When Matrix infrastructure orchestration auto-generates storage pool entries
it will automatically attempt to minimize the number of storage pool entries used (one for private volumes, and one for
shared volumes).
• Use judgment when requesting shared storage. Is it mandatory that all servers/blades have access to the storage?
Could a subset of servers be sufficient? This is not always possible or practical (e.g., an eight-node cluster where all
nodes require access to a common set of volumes), but wherever possible attempt to reduce the share count in a shared
storage pool entry.
• Attempt to always share storage pool entries over the same number of servers. Create a storage pool entry for sharing
over four nodes and another for sharing over eight nodes, but don’t create storage pool entries for sharing over three or
seven nodes. The fewer storage pool entries used, the fewer WWNs and initiators will be consumed.
SAN fabric configuration
• Consider implementing multiple fabrics to limit logical servers and storage pool entries to a sub-group of storage server
host-facing ports. This option does require additional hardware for separate fabrics and can increase fabric management
effort, but it will reduce and spread the WWNs and initiators across multiple sub-groups of storage host-facing ports,
resulting in better load balancing of resources.
• Alternatively, consider using manual zoning (forgoing the automated SPM zoning to all controller ports on the fabric) to
limit logical servers and storage pool entries to a subset of storage server host-facing ports. This solution requires less
hardware and fabric administration, but requires active administrator input in managing zoning configurations.
For more information
• HP 3PAR StoreServ Solutions
hp.com/go/storage
hp.com/go/3parstoreserv
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA4-4524ENW.pdf
• Enabling Storage Automation in HP CloudSystem Matrix
hp.com/go/matrix
hp.com/go/matrixondemandstorage
hp.com/go/matrixoe/docs
hp.com/go/virtualconnect
• Faster storage provisioning in the HP Matrix Operating Environment: Use of the HP Storage Provisioning Manager
storage catalog with controlled storage operations
hp.com/go/matrix
hp.com/go/virtualconnect
• HP BladeSystem c7000 Enclosure
hp.com/go/blades
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.
4AA4-6422ENW, May 2013