Technical white paper
HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array
Reference architecture
Table of contents
Executive summary
Introduction
HP ProLiant DL980 G7 NUMA server
3PAR StoreServ Storage cluster technology
HP 3PAR StoreServ 7450 Storage array
HP supported options – sample scenarios
Solution components
Architectural diagram
Capacity and sizing
DL980 server configurations
HP 3PAR StoreServ SSD IOPS
Workload description
I/O characterization workload
Oracle database workload tool
Workload tuning considerations
Workload data/configuration results
Oracle OLTP peak transactions and IOPS
Thin Provisioning to Full Provisioning comparison results
Large block throughput for BI workloads
Best practices
Analysis and recommendations
Server configuration best practices
Storage configuration best practices
Database configuration best practices
Bill of materials
Reference architecture diagram
Reference architecture BOM
Summary
Implementing a proof-of-concept
Appendix
Appendix A – Red Hat 6.4 kernel tunables /etc/sysctl.conf
Appendix B – Grub configuration for disabling C-states
Appendix C – IRQ affinity script for /etc/rc.local
Appendix D – HBA NUMA mapping and IRQ map
Appendix E – UDEV configurations
Appendix F – Storage information
Appendix G – Check or set operating system tracing parameter
Appendix H – Oracle parameters
Appendix I – HP ProLiant DL980 PCIe card loading order
For more information
Executive summary
The HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array is the latest and most robust reference architecture developed under the HP UDB portfolio of mission-critical database solutions.
This latest HP Universal Database solution provides extreme OLTP database performance with exceptional management capabilities, adding the HP 3PAR StoreServ 7450 All-flash array and its rich feature set. The HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array:
• Delivers more than 1M IOPS at less than 1 millisecond response time, with throughput of 10 GB/second.
• Supports more processors, memory, and I/O than previous systems.
• Reduces overhead with a minimal number of LUN paths and less inter-node communication.
• Supports a host of features such as High Availability (HA), Thin Provisioning, disaster recovery, and much more.
The heart of this solution is mission-critical high performance and flexibility. UDB processing is powered by the robust and flexible HP ProLiant DL980 G7 server, the industry's most popular, highly reliable, and comprehensively scalable eight-socket x86 server. The HP ProLiant DL980 G7 Non-Uniform Memory Access (NUMA) server leverages HP's history of innovation designing mission-critical servers in RISC, EPIC, and UNIX® environments. This design capitalizes on over a hundred availability features to deliver a portfolio of resilient and highly reliable scale-up x86 servers.
Performance demands for database implementations continue to escalate. Requirements for higher transaction speed and
greater capacity also continue to increase. As servers such as the HP ProLiant DL980 G7 deliver more and more
performance, the storage subsystem must evolve to support this growth.
The HP 3PAR StoreServ 7450 All-flash storage system meets these performance demands head-on; its all-flash storage
array provides extreme storage IOPS and throughput. With this single rack solution, over one million IOPS have been
validated at a throughput capability of 10.5 GB/sec. The solution supports growth using additional flash drives or complete
storage arrays, adding capacity as needed. As database implementations have grown to require extreme IOPS performance
to meet today’s demanding business environments, the UDB solution meets these needs by leveraging the industry leading
HP 3PAR StoreServ technology.
The Oracle and HP 3PAR StoreServ 7450 All-flash array solution allows for flexible storage HA configurations using RAID 1, RAID 5, or RAID 6. Flexibility in SSD choice, between 100GB SLC, 200GB SLC, and 400GB MLC, means that customers can tailor different configuration options. The HP UDB reference architecture is performance tested with 16, 32, and 48 SSD combinations for each array.
The resulting configuration is combined with a database, which can be a single instance, multiple single instances, a high-availability clustered solution, or a disaster recovery solution using Oracle versions 11gR2 or 12c. This paper is written specifically for the UDB implementation with an Oracle database.
Customers today require high-performing, highly available database solutions without the high cost and inflexibility of "all-Oracle-stack" solutions. The open-architecture HP solution provides these benefits:
• Allows the customer to use a 100% open system HP hardware solution that has been tested and has solid support.
• The database choice can be an Open solution database or an Oracle database, enabling easy update, expansion, and
integration, as the need arises.
• Allows the choice of standard HP support options, with the flexibility to tier mission-critical requirements as needed.
• DL980 G7 NUMA architecture allows for massive flexibility to scale up with a single large database or multiple databases.
• The HP 3PAR StoreServ 7450 offers, within the array itself, extensive features that surpass offerings of most flash-based database solutions, including Thin Provisioning, scalability, volume snapshot capability, cloning, online drive and RAID migration, and much more.
Customer performance workload characteristics and requirements vary. HP has solutions tailored to provide maximum
performance for various workloads without compromising on required availability commitments to the business.
Target audience: This HP white paper was designed for IT professionals who use, program, manage, or administer large
databases that require high availability and high performance. Specifically, this information is intended for those who
design, evaluate, or recommend new IT high performance architectures, and includes details on the following topics:
• HP Universal Database Solution for extreme performance and capacity
• HP DL980 G7 and HP 3PAR StoreServ 7450 All-flash array, the newest addition of the UDB solution offerings
This reference architecture focuses primarily on the design, configuration, and best practices for deploying a highly available
extreme-performance Oracle database solution. The Oracle and Red Hat® installations are standard configurations except
where explicitly stated in the reference architecture.
This white paper describes testing performed in July and August 2013.
Introduction
DL980 Universal DB Solution
IT departments are under continuous pressure to add value to the business, improve existing infrastructures, enable growth
opportunities, and reduce overhead. At the same time, exploding transactional data growth is driving database performance
and availability requirements to entirely new levels. The demand for high speed and low latency, along with staggering
volumes of transactional data, is prompting the adoption of new storage technologies that range from traditional disk to
solid state.
Driven by the creation of new, high-value applications, customers are discovering that the Oracle Exadata “one-size-fits-all”
approach – one operating system, one database, one vendor – doesn’t do the job. Rather, Exadata requires extensive
tuning, leads to high cost, and results in vendor lock-in.
In response, IT departments are looking for an "appliance-like" solution that provides a common foundation yet offers solid-state storage flexibility with choice of OS and database. Better performance and lower costs are just the beginning of the value that the HP ProLiant DL980 Universal DB Solution – optimized for the HP ProLiant DL980 G7 server – delivers.
Common foundation
HP ProLiant DL980 G7 – an HP scale-up, resilient, x86 server based on the PREMA Architecture – is designed to take full
advantage of the latest 10-core Intel® Xeon® processor E7-4800/2800 product families with Intel QuickPath Interconnect
(QPI) technology. Working in concert, they form the foundation for unparalleled transactional performance, scalability, and
energy efficiency, plus significantly lower TCO. With all major Linux operating systems and Microsoft® Windows® supported,
the platform collaborates with the OS and software stack to gain the full benefits of the Reliability, Availability and
Serviceability (RAS) feature set included in the Intel Xeon processor E7-4800/2800 product families.
HP ProLiant DL980 G7 NUMA server
The HP ProLiant DL980 G7 server, using the PREMA architecture, is a stellar choice for scale-up, mission-critical solutions such as the HP Universal Database. This 8-socket NUMA server consolidates massive processing into a single server with multiple NUMA nodes. The DL980 uses Smart CPU caching; with the 10-core processors of the Intel Xeon E7 4800/2800 families installed, it can process with 80 CPU cores, or 160 logical cores where Hyper-Threading is enabled.
Figure 1. HP ProLiant DL980 G7 Server
HP ProLiant DL980 G7 NUMA technology
The DL980 G7 is ideal for scale-up database implementations. Its modular design provides the flexibility to readily adapt the
configuration to meet the demands of your dynamic environment. The architecture supports an appropriately balanced
system with more processors, more memory, and more I/O than previous generation x86 systems have provided. However,
simply adding processors, memory, and I/O slots is not sufficient to achieve the needed scalability and resiliency.
When a database system scales to a larger number of interconnected processors, communication and coordination
between processors grows at an exponential rate, creating a system bottleneck. To solve this issue in our 8-socket x86
server, HP looked to the design of our higher-end, mission-critical servers. At the core of the HP PREMA Architecture is a
node controller ASIC, derived from technology powering the HP Integrity Superdome 2. The node controller enables two key
functionalities: Smart CPU caching and the redundant system fabric. These reduce communication and coordination
overhead and enhance system resiliency.
Key processes in your system and databases can be given individual CPU affinity for the most efficient overall processing. Applications that are NUMA-aware can potentially optimize system performance through NUMA control. For applications that are not NUMA-aware, the affinity can be set manually.
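Both approaches use standard Linux tools on Red Hat Enterprise Linux. The sketch below shows how affinity might be set by hand; the application path and PID are hypothetical.

```bash
# Show the NUMA topology the OS sees on the DL980 G7
numactl --hardware

# Start a non-NUMA-aware application bound to NUMA node 2, so its
# CPU scheduling and memory allocations stay local to that node
# (application path is hypothetical):
numactl --cpunodebind=2 --membind=2 /opt/app/bin/server

# Or pin an already-running process (PID 12345, hypothetical) to the
# cores local to node 2 (cores 20-29 in the tested layout):
taskset -pc 20-29 12345
```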
Figure 2 shows an architectural view of the DL980 G7. For the DL980 G7 used in the Oracle and HP 3PAR StoreServ 7450
reference architecture testing, each physical CPU has ten cores.
The HP PREMA Architecture groups the processor sockets into multiple “QPI islands” of two directly connected sockets. This
direct connection provides the lowest latencies. Each QPI island connects to two node controllers (labeled “XNC” in the
diagram). The system contains a total of four node controllers. HP Smart CPU Caching is the key to communication between
the NUMA nodes.
Figure 2. Architecture view of the HP ProLiant DL980 G7
An HP ProLiant DL980 G7 scale-up implementation, using the Intel Xeon processor with an embedded memory controller,
implies a cache-coherent, Non-Uniform Memory Access (ccNUMA) system. In a ccNUMA system, the hardware ensures cache
coherency by tracking where the most up-to-date data is for every cache line held in a processor cache. Latencies between
processor and memory in a ccNUMA system vary depending on the location of these two components in relation to each
other. HP’s goal in designing the PREMA Architecture was to reduce average memory latency and minimize bandwidth
consumption resulting from coherency snoops. The result is less latency for database server processes and I/O processes.
The HP node controller (XNC) works with the processor’s coherency algorithms to provide system-wide cache coherency. At
the same time, it minimizes processor latency to local memory and maximizes usable link bandwidth for all links in the
system.
The architectural diagram in figure 2 shows the 2-socket QPI islands. A pair of XNC node controllers supports two islands in
a 4-socket quad. These quads are then connected to create an 8-socket system. Within a 2-socket-source-snoopy island,
all snoops have at most one QPI link hop between the requesting core, the paired socket cache, the smart CPU cache in the
node controller, and the memory controller. By tagging remote ownership of memory lines, the node controller targets any
remote access to the specific location of the requested memory line.
With the HP PREMA Architecture smart CPU caching technology, the HP system effectively provides more links connecting
processor sockets – the equivalent of six QPI links connecting the two quads. A glueless 8-socket system has just four QPI
links. In addition, Smart CPU caching uses the links more efficiently because it reduces the overhead of cache coherency
snoops. Because of the reduction in local memory latency compared to glueless 8-processor systems, virtual environments
can have higher performance on the ProLiant DL980 G7. With NUMA-aware OS support, system performance will scale
nearly linearly.
DL980 G7 resiliency and redundancy
The DL980 G7 was designed with the resiliency to meet the high availability demands of mission critical enterprise
environments. A redundant fabric achieves continual uptime.
Six redundant data paths, 50% more than most industry-standard products, provide a high level of protection from failures.
Multiple areas of redundancy such as power supplies, fans, and clocks provide additional data protection.
Read more about the DL980 at the HP product website, hp.com/servers/dl980.
In summary, the benefits of using the DL980 for large, scale-up database implementations include:
• Flexibility
• Modular design
• High availability
• Superior performance
3PAR StoreServ Storage cluster technology
The HP 3PAR StoreServ storage systems use a hardware cluster technology to physically store your data, offering ease of
management, highly available data volumes, high performance, and rich features required to manage your data efficiently.
The HP 3PAR StoreServ systems use pairs of processing nodes in front of many combinations of data drive types, sizes, and RAID configurations. Physical drives are divided into 1GB chunklets that are mapped into logical disks, and virtual volumes are created from those logical disks. The architecture is designed to widely stripe the volume data across pools of storage called common provisioning groups (CPGs). The virtual volumes are then exported to host systems by creating LUN paths to the volumes, called vLUNs. Figure 3 shows how the data changes from a physical mapping to a logical mapping.
Figure 3. HP 3PAR StoreServ Cluster Technology
Key terms for 3PAR architecture
The following key terms relate to the 3PAR architecture:
• Physical disks – On the HP 3PAR StoreServ 7450 All-flash array, physical disks refers to the flash-based SSDs used.
These include the 100 GB SLC, 200 GB SLC, and 400GB MLC. On the HP 3PAR StoreServ 7400 and 10x00 arrays, physical
disks can also refer to standard SAS hard disks and SATA hard disks. Only flash SSDs are supported for the HP 3PAR
StoreServ 7450 All-flash array.
• Chunklets – A chunklet is a chunk of contiguous space on a physical disk. For the HP 3PAR StoreServ 7450 All-flash array, this is 1 GB of space from an SSD. The HP 3PAR StoreServ 7450 nodes and operating system manage the chunklets and assign each chunklet to only one logical disk: a logical disk can contain many chunklets, but each chunklet belongs to exactly one logical disk.
• Logical disks – A logical disk is a collection of physical-disk chunklets, organized in rows of RAID sets. Logical disks are pooled together in common provisioning groups (CPGs). The RAID types supported are RAID 0, RAID 1, RAID 5, and RAID 6.
Note
RAID 1 and RAID 5 were used in the testing.
• Common Provisioning Groups (CPG) – A pool of logical disks from which virtual volumes are allocated on demand. In this reference architecture, sixteen virtual volumes were created, eight from each HP 3PAR StoreServ 7450 All-flash array.
• Virtual Volumes – Virtual volumes are volumes explicitly provisioned by the user; their space is drawn from a CPG. Virtual volumes are exported to hosts by associating LUN paths to them (vLUNs). A virtual volume can be fully provisioned or thin provisioned. A CLI provisioning sketch follows this list.
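As a rough illustration of how these objects relate, the sketch below creates a CPG, carves a thin-provisioned virtual volume out of it, and exports the volume to a host as a vLUN using HP 3PAR OS CLI commands. The object names, size, and host name are hypothetical; check the options against your HP 3PAR OS release.

```bash
# Create a RAID 5 (set size 3+1) CPG on SSDs (name is illustrative)
createcpg -t r5 -ssz 4 -p -devtype SSD CPG_SSD_R5

# Create a 512 GiB thin-provisioned virtual volume from that CPG
createvv -tpvv CPG_SSD_R5 oradata01 512g

# Export the virtual volume to the host as LUN 1 (this creates the vLUN)
createvlun oradata01 1 dl980host
```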
The processing nodes in front of the physical storage are organized in node pairs, interconnected in a mesh using custom ASICs. The HP 3PAR StoreServ 7450 All-flash arrays used in this reference architecture are four-node units: node pair 0, 1 and node pair 2, 3. Each node has two 8Gb FC port connections plus an expansion card with four more 8Gb FC port connections. Four connections are used on each node: two from the internal ports and two from the expansion ports.
HP 3PAR StoreServ 7450 Storage array
The newest addition to the HP 3PAR family is the HP 3PAR StoreServ 7450 All-flash array, shown in figure 4.
Figure 4. Front view of the HP 3PAR StoreServ 7450
High Availability
The HP 3PAR StoreServ 7450 storage array is a highly available redundant solution for enterprise environments. The array
offers high availability and redundancy at all levels. All of the nodes are clustered together through custom ASICs for
maximum availability. Data paths from the nodes to the disks are all redundant as well as the front end host connections.
Solid State Drives
The HP 3PAR StoreServ 7450 All-flash array offers three types of SSDs in either SFF or LFF profile. The reference
architecture uses the SFF enclosures and drives.
• HP M6710 100GB 6G SAS SFF (2.5-inch) SLC Solid State Drive
• HP M6710 200GB 6G SAS SFF (2.5-inch) SLC Solid State Drive
• HP 3PAR StoreServ M6710 400GB 6Gb SAS SFF(2.5-inch) MLC Solid State Drive
For this reference architecture, any of the three drive types can be chosen. It is recommended that they be added in groups of 4 drives per enclosure, which means a minimum of 16 drives per drive type.
HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array rich product feature set
HP Thin Suite (Thin Provisioning, Thin Persistence and Thin Conversion)
The HP suite of technologies includes:
• Thin Provisioning – Thin Provisioning allows users to allocate virtual volumes to servers while provisioning only a fraction of the physical storage in the volume. This maximizes capacity utilization and avoids stranding investment in storage that is provisioned but never used.
• Thin Conversion – This feature allows users to convert a fully-provisioned set of volumes to thinly-provisioned volumes.
For instance, if a volume was created with the intent of using most of the space, but circumstances resulted in most of
the space not being used, the volume can be converted to a thin-provisioned volume. This results in tangible space and
cost savings.
• Thin Persistence – Thin Persistence is a technology within the HP 3PAR StoreServ arrays that detects zero-valued data during data transfers. When unused data in a volume is identified, its space can be returned to free status; if data is removed from an application volume and those addresses are set to zero, Thin Persistence can free them. Oracle developed the ASM Storage Reclamation Utility (ASRU) for zeroing out unused space in an Oracle ASM disk group; after the tool runs, Thin Persistence detects the zeros and frees the space (see the sketch after this list). For more information about HP 3PAR Thin Provisioning for Oracle and the ASRU utility, see Best Practices for Oracle and HP 3PAR StoreServ Storage.
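The reclamation flow might look like the sketch below; the exact ASRU invocation, privileges, and follow-up steps are covered in the best-practices paper referenced above, so the user name and disk group name here are assumptions.

```bash
# As the ASM owner (assumed user "grid"), zero unused space in the
# DATA disk group with Oracle's ASM Storage Reclamation Utility:
su - grid -c "ASRU DATA"

# Thin Persistence then detects the zeroed blocks; observe the
# reclaimed space on the array with the space usage report:
showvv -s
```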
HP 3PAR Remote Copy
HP 3PAR Remote Copy software brings a rich set of features and benefits that can be used to design disaster tolerant
solutions that cost-effectively address availability challenges of enterprise environments. HP 3PAR Remote Copy is a
uniquely easy, efficient, and flexible replication technology that allows you to protect and share data from any application.
Implemented over native IP (through GbE) or Fibre Channel, users may choose either the asynchronous periodic or
synchronous mode of operation to design a solution that meets their requirements for recovery point objective (RPO) and
recovery time objective (RTO). With these modes, 3PAR Remote Copy allows you to mirror data between any two HP 3PAR
StoreServ Storage systems, eliminating the incompatibilities and complexities associated with trying to mirror between the
midrange and enterprise array technologies from traditional vendors. Source and target volumes may also be flexibly and
uniquely configured to meet your needs, using, for example, different RAID levels, thick or thin volumes or drive types.
For more information, refer to Replication Solutions for demanding disaster tolerant environments.
HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array special features and functions
Table 1 shows specific data points of interest about the HP Universal Database Solution: Oracle and HP 3PAR StoreServ
7450 All-flash array solution.
Table 1. Features specific to the HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array

Attribute | Result
IOPS | 1M – 8k reads; 1M – 4k reads
Usable storage capacity (data and redo) | 26.8 TB RAID 5 – 400GB MLC drives; two HP 3PAR StoreServ 7450 with 48 SSDs each (96 drives total)
Storage HA | Yes – redundant storage nodes and RAID protection
Server HA without performance impact | Yes – redundancy at server
Server protection | HP Serviceguard
Data loss on single failure | No
Oracle Real Application Clusters required | No
Duplicate copy of database | Yes (HP 3PAR Remote Copy or Oracle Data Guard)
Disaster recovery | Yes
Query standby database | Yes (Remote Copy)
Data device retention | Yes
Database storage | All flash
Storage Thin Provisioning | Yes
Storage Thin Persistence | Yes
Thin Conversion | Yes
Volume snapshot | Yes
Integrated Oracle backup solution | Yes
Dynamic Optimization | Yes
Ease of management | Open IT tools
Operating system choice | RHEL, SUSE Linux, Oracle Linux, Windows
Database choice | Flexible for other databases; tested with Oracle 11gR2 using ASM on Grid Infrastructure
HP supported options – sample scenarios
High availability clustering with HP Serviceguard for Linux or Oracle Real Application Clusters
Two options for clustering the UDB Oracle database solution with the HP 3PAR StoreServ 7450 are: 1) the HP Serviceguard for Linux cluster solution, or 2) Oracle Real Application Clusters for Linux. Because the UDB solution implements the HP ProLiant DL980 G7 server, HP Serviceguard for Linux is a complete HP-supported high availability solution which employs an active-standby cluster and provides great flexibility. Serviceguard can be configured to run multiple databases and has many features that integrate not only with the database but with other components of the environment, such as applications and web servers.
HP Serviceguard
HP Serviceguard for Linux, the high availability clustering software used in this solution, is designed to protect applications
and services from planned and unplanned downtime. The HP Serviceguard Solutions for Linux portfolio also includes
numerous implementation toolkits that allow you to easily integrate various databases and open source applications into a
Serviceguard cluster with three distinct disaster recovery options. For additional information, see the HP Serviceguard for
Linux website.
Key features of HP Serviceguard for Linux include:
• Robust monitoring protects against system, software, network, and storage faults
• Advanced cluster arbitration and fencing mechanisms prevent data corruption or loss
• GUI and CLI management interfaces
• Quick and accurate cluster package creation
Refer to the white paper HP ProLiant DL980 Universal Database Solution: HP Serviceguard for Linux and 3PAR StoreServ for
Oracle Enterprise Database.
Figure 5 is an example of how Oracle and HP 3PAR StoreServ 7450 could be implemented in an HP Serviceguard for Linux
Cluster. This example is a two-node active-standby setup in which both servers can be used concurrently by multiple
database instances, and also be configured to fail-over critical databases in case of failures. Much more information is
available from the HP Serviceguard for Linux website.
Figure 5. Sample scenario diagram of Oracle and HP 3PAR StoreServ 7450 integration with Serviceguard for Linux
Oracle Real Application Clusters
Another supported clustering option is Oracle's Real Application Clusters (RAC) with Oracle Enterprise Database and Grid Infrastructure. Oracle RAC is a scale-out active-active cluster in which multiple nodes each run their own instance of the same database, allowing multiple servers to process against the same database. Scaling out with Oracle RAC is both a high availability and a performance option.
Disaster Recovery with HP 3PAR Remote Copy or Oracle Data Guard
HP 3PAR Remote Copy
The HP 3PAR Remote Copy software product provides an array-based data replication solution between HP 3PAR StoreServ systems.
Both synchronous and asynchronous replication options are supported. Figure 6 shows an example scenario for disaster
recovery replication between two Oracle and HP 3PAR StoreServ 7450 environments. Replication to a remote site can be
used for more than disaster recovery. The secondary site can be used for remote database reporting or database
development. Use with HP 3PAR Snapshot technology allows for making database copies or even volume copies for remote
backup. HP 3PAR StoreServ All-flash array has the unique ability to provide flash level performance and many of the
desirable 3PAR management features.
Figure 6. Sample scenario configuration of an HP 3PAR Remote Copy environment for Oracle and HP 3PAR StoreServ 7450
Oracle Data Guard
Oracle Data Guard is an Oracle product that provides data protection and disaster recovery for enterprise environments.
Data Guard synchronizes a remote standby database, keeping the data consistent on the standby database. If the
production database fails or needs to be taken down for service, Data Guard can switch the standby database to the
production role. Data Guard can also be used for database backup and recovery.
Solution components
Architectural diagram
HP ProLiant DL980 G7 server
Figure 7 shows an architectural diagram of the tested UDB solution using the DL980 G7 and two HP 3PAR StoreServ 7450
All-flash arrays. The configuration is a good example setup for most scale-up database customers. This configuration is the
basis for several other variant configurations which provide the flexibility to meet the need at hand.
Our testing used a single HP ProLiant DL980 G7 with 8 physical 2.40GHz Intel Xeon E7 processors. Each of these processors has 10 cores, for a total of 80 cores in the server. Turning on Hyper-Threading in the DL980 G7 BIOS enables two threads per core, for 160 logical cores; for this testing, Hyper-Threading was not enabled. The system was equipped with 2TB of quad-rank memory, of which 70% was allocated to operating system shared memory.
Also installed in the DL980 G7 were 8 dual-port QLogic 8Gb Fibre Channel cards. The cards are placed in different NUMA nodes for best performance and scalability.
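As a sketch of that shared memory sizing, the following settings would reserve roughly 70% of 2 TB for shared memory; the values are illustrative only, and Appendix A lists the kernel tunables actually used in the tested configuration.

```bash
# Illustrative shared memory sizing for ~70% of 2 TB RAM.
# Largest single shared memory segment, in bytes (~1400 GiB):
echo "kernel.shmmax = 1503238553600" >> /etc/sysctl.conf
# Total shared memory, in 4 KiB pages (1503238553600 / 4096):
echo "kernel.shmall = 367001600" >> /etc/sysctl.conf
# Apply without a reboot:
sysctl -p
```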
The environment is fairly simple from the standpoint of number of servers and storage units. The entire solution delivers 1M
IOPS and fits into a single rack with room for storage growth. The optional DL380 Gen8 server for 3PAR StoreServ
management is not included. See the Bill of Materials section for a rack view and details.
The 10GbE network switches are HP 5920 series switches.
Figure 7. HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array architectural diagram
Two HP 8/24 Fibre Channel SAN switches
The storage connection to the server is accomplished using two HP 8/24 Fibre Channel switches in a completely redundant
setup. For each host bus adapter (HBA) card, a SAN connection goes from the first port to the first switch and a redundant
connection goes from the second HBA port to the second switch. The switches were tightly zoned using single initiator to
single target WWN zoning. Each HBA port is connected to a single port on a single HP 3PAR StoreServ 7450 storage node.
This was done while also considering the NUMA location of the HBA card. The goal is to create multiple paths for HA while minimizing cross communication between the NUMA nodes, the storage nodes, and the volumes themselves. Too many paths can create unwanted latencies in the I/O subsystem of the operating system. Tight volume allocation and zoning to nodes improved I/O performance by 20%.
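On the HP 8/24 switches (Brocade Fabric OS based), a single-initiator/single-target zone of the kind described here would be defined roughly as follows; the zone and configuration names and the WWNs are placeholders, and Appendix F contains the zoning example from the tested setup.

```bash
# One zone containing exactly one HBA port (initiator) and one 7450
# node port (target); WWNs shown are placeholders.
zonecreate "dl980_hba0p0__7450a_n0", "10:00:00:00:c9:00:00:01;20:01:00:02:ac:00:00:01"

# Add the zone to a configuration and activate it on the fabric.
cfgcreate "UDB_FABRIC_A", "dl980_hba0p0__7450a_n0"
cfgenable "UDB_FABRIC_A"
```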
HP 3PAR StoreServ 7450 – 4 node storage arrays
The HP 3PAR StoreServ 7450 units used for this testing were 4-node units. Each array has two additional disk enclosures beyond the two node-pair enclosures. The SSDs were installed evenly across the node-pair and expansion enclosures: 12 100GB SLC drives in each enclosure, totaling 48 SSDs per 3PAR StoreServ 7450 array. With two arrays, the maximum number of SSDs tested on a single database was 96 drives across two HP 3PAR StoreServ 7450 All-flash arrays.
Server connection layout
The DL980 server has both I/O expansion modules installed to accommodate the FC HBA cards needed. For maximum
performance, the dual port 8Gb FC HBA cards are spread across three separate NUMA nodes (0, 2, 4). The cards are
connected only to x8 slots in the HP ProLiant DL980 G7. This provides I/O throughput bandwidth for the tested solution as well as headroom for future storage array expansion. Table 2 shows the HBA card NUMA node assignments and the local
CPUs belonging to the NUMA node. This was the tested card placement. To see the DL980 G7 card loading, refer to
Appendix I.
Table 2. HBA card placement in the DL980

NUMA Node | Card Slots | Local CPU list
0 | 9, 11 | 0-9
2 | 2, 3, 5 | 20-29
4 | 12, 13, 15 | 40-49
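The placement in table 2 can be verified from the running operating system; the sysfs attributes below are standard on Red Hat Enterprise Linux 6.x.

```bash
# Print each FC HBA's PCI address, owning NUMA node, and the CPUs
# local to that node (QLogic FC HBAs show as "Fibre Channel" in lspci).
for pci in $(lspci -D | awk '/Fibre Channel/ {print $1}'); do
    node=$(cat /sys/bus/pci/devices/"$pci"/numa_node)
    cpus=$(cat /sys/bus/pci/devices/"$pci"/local_cpulist)
    echo "$pci  numa_node=$node  local_cpus=$cpus"
done
```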
Figure 8 shows the connection locations and FC connection mapping to the HP 3PAR StoreServ All-flash array. All port 0 HBA
connections go to switch A, and all port 1 HBA connections go to switch B. To achieve the maximum IOPS during I/O
characterization testing, the connections to the virtual volumes needed to be isolated to specific NUMA nodes to minimize
latencies in the operating system. Each connection has a specific single initiator to single target zone defined. See Appendix
F for a zoning example.
The integrated 10Gb Ethernet is used for connections back to the switches for client access. The user can choose whatever
10GbE infrastructure connection is required for their environment. The iLO connection is available on the HP ProLiant DL980
for remote management as needed.
Figure 8. HP ProLiant DL980 G7 rear view connection diagram
[Diagram: rear view of the DL980 G7 showing the eight dual-port FC1243 HBAs in their PCIe slots, the bonded 10GbE connections to the network switches, and the iLO port. Each HBA port 0 connects through FC switch A, and each HBA port 1 through FC switch B, to a specific HP 3PAR StoreServ 7450 node port identified in node:slot:port (x:x:x) notation.]
HP 3PAR StoreServ 7450 – two node pairs
Figure 9 shows the rear view of two HP 3PAR StoreServ 7450 Node pairs. A 4 node array has two node pairs. This tested
solution used two 4 node arrays with each array having two additional disk shelves. The SSDs in the array are evenly
distributed across the disk shelves. So for 48 drives in the array, each enclosure holds 12 drives. With two arrays, this would
be a total of 96 drives and 12 drives per enclosure. This leaves 12 more slots per enclosure open for capacity expansion.
The HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array uses the 4 port 8Gb/s FC option for
additional FC ports to achieve the 1M IOPS.
Figure 9. Two node pairs for the HP 3PAR StoreServ 7450 All-flash array
Capacity and sizing
DL980 server configurations
Depending on your application performance requirements, you have several options of processor, CPU and I/O card
configurations.
Recommended hardware settings
• Put FC HBAs in x8 slots for best performance.
• Distribute FC HBA cards evenly across the available I/O bays and I/O hubs.
• Does not use slot 1 for any FC HBA card. It is a PCI-x Gen1 slot.
• For the 8-socket configuration install memory DIMMs across all memory sockets of the eight CPUs for optimum NUMA
performance.
The tables below list the supported Intel Xeon processors, memory DIMMs, and PCI expansion slots for the ProLiant DL980
G7 server.
For the best extreme performance, use the E7-4870 performance processors.
Table 3. Supported E7 Family Processors

Processor Type (Intel Xeon) | Cores per Processor | Max Cores in an 8-Processor DL980 G7
E7-4870 (30MB cache, 2.4GHz, 130W, 6.4 GT/s QPI) | 10 | 80 (recommended)
E7-2860 (24MB cache, 2.26GHz, 130W, 6.4 GT/s QPI) | 10 | 80
E7-2850 (24MB cache, 2.00GHz, 130W, 6.4 GT/s QPI) | 10 | 80
E7-2830 (24MB cache, 2.13GHz, 105W, 6.4 GT/s QPI) | 8 | 64
E7-4807 (18MB cache, 1.86GHz, 95W, 4.8 GT/s QPI) | 6 | 48
Note
The Intel Xeon processor E7 series supports Hyper-Threading (HT). HT is not recommended and was disabled in our configurations; however, it is good practice to test HT with your particular application.
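With HT disabled, the operating system should report one thread per core; a quick check from the OS:

```bash
# With Hyper-Threading off, expect "Thread(s) per core: 1" and
# 80 logical CPUs on an 8-socket, 10-core DL980 G7.
lscpu | egrep '^CPU\(s\)|Thread|Core|Socket'
```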
The DL980 G7 server comes with the Standard Main I/O board with PCIe slots 7-11. Slots 9 and 11 are x8 Gen2 PCIe slots.
The PCIe expander option provides additional I/O slots 1-6. Slots 2, 3, 5 and 6 are x8 Gen2 PCIe slots. The low profile
expansion option provides additional I/O slots 12-16. Slots 12, 13, 15, and 16 are x8 Gen2 PCIe slots.
Table 4. HP ProLiant DL980 G7 server with HP FC HBA PCIe slot configurations

Configuration | Number of HP 3PAR 7450 arrays | DL980 PCIe x8 slots needed | Recommended PCIe x8 I/O slot numbers | Slot Type
1 | 1 | 4 | 2, 5, 9, 11 | x8 Gen2 PCIe
2 | 2+ | 8 | 2, 3, 5, 9, 11, 12, 13, 15 | x8 Gen2 PCIe
Configuration 2 with more than two arrays was not tested; it is recommended to run a proof of concept (POC) to evaluate your performance workload requirements.
If an add-on SAS controller is installed in the DL980, it is possible that the SAS controller could interfere with the performance of FC HBA cards installed in PCIe x8 slots 9 and 11 on the Standard Main I/O board. Moving FC cards to different NUMA nodes was outside the scope of the tested configuration.
Note
It is not recommended to use slot 1 for any HP FC HBA cards due to low I/O performance (PCIe x4 Gen1).
Table 5 shows the memory module kits available for the DL980 G7. The more ranks per DIMM the higher the performance,
so quad rank DIMMs perform better than dual rank DIMMs. Performance is best when the installed DIMMs are all of equal
size.
Table 5. Supported Memory DIMMs

Memory Kit | Rank
HP 4GB 1Rx4 PC3-10600R-9 (DDR3-1333) | Single
HP 8GB 2Rx4 PC3-10600R-9 (DDR3-1333) | Dual
HP 16GB 2Rx4 PC3L-10600R-9 (DDR3-1333) | Dual (recommended)
HP 32GB 4Rx4 PC3L-8500R-7 (DDR3-1333) | Quad (recommended)
PC3L = low voltage memory
Table 5 lists the 4, 8, 16, and 32 GB memory kits available for the DL980 G7 servers, spanning the minimum through maximum memory combinations. For best performance, use dual- or quad-rank memory DIMMs.
HP 3PAR StoreServ SSD IOPS
There is flexibility in the size of the SSDs used in the HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array solution; the performance difference between the drive types is minimal. Maximum throughput and IOPS depend more on the number of HP 3PAR StoreServ 7450 arrays used. Two arrays are recommended, and are required to reach 1M IOPS and 10.5 GB/sec throughput; using one array cuts the maximum IOPS and throughput in half. This solution was tested with two arrays, but additional arrays are supported.
Table 6 shows reasonable maximum IOPS using two HP 3PAR StoreServ 7450 arrays, with half of the drives in each array. For instance, using 16 SSDs in each array (32 total), the infrastructure supports a maximum of 850K IOPS regardless of drive type (100GB SLC, 200GB SLC, or 400GB MLC). The only exception is the minimum configuration of 16 drives per array, where the maximum IOPS difference between SLC and MLC may vary within 5%. The maximum of 1M IOPS is reached at 96 drives split between the two arrays. Drives should be installed in increments of 16 drives (4 per enclosure) per array. When considering capacity, cost, and performance, the choice of RAID 1 versus RAID 5 depends largely on the percentage of writes in the database workload.
Table 6. IOPS for two arrays (8k mixed = 67% read, 33% write)

Number of drives on two arrays | 8k read IOPS | 8k write IOPS | 8k mixed IOPS | RAID
32 | 850K | 240K | 440K | RAID 1
64 | 980K | 420K | 740K | RAID 1
96 | 1M | 420K | 740K | RAID 1
32 | 820K | 160K | 340K | RAID 5
64 | 930K | 180K | 380K | RAID 5
96 | 1M | 180K | 380K | RAID 5
Table 7 shows the same type of maximum IOPS list for use with only one array. Maximum IOPS is about 500K and maximum throughput is about 5.2 GB/sec.
Table 7. IOPS for one array (8k mixed = 67% read, 33% write)

Number of drives on one array | 8k read IOPS | 8k write IOPS | 8k mixed IOPS | RAID
16 | 425K | 120K | 220K | RAID 1
32 | 500K | 210K | 370K | RAID 1
48 | 500K | 210K | 370K | RAID 1
16 | 410K | 80K | 170K | RAID 5
32 | 500K | 90K | 190K | RAID 5
48 | 500K | 90K | 190K | RAID 5
Note
All IOPS results documented in this paper were achieved using the server operating system (Red Hat Enterprise Linux release 6 update 4) and the NUMA and storage tuning mentioned in the recommendations and best practices.
Workload description
I/O characterization workload
All I/O characterization testing was performed with I/O generator tools capable of producing standard asynchronous I/Os using the Linux libaio libraries, which are also used by Oracle and other database solutions. The tools can generate many workload variations, allowing flexibility in random and sequential access, block sizes, queue depths, and thread counts. The I/O requests were read, write, or mixed; all mixed tests used a ratio of 67% read and 33% write.
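The paper does not name the generator tools used. As one illustration, fio driven through its libaio engine produces the same class of asynchronous I/O; the parameters below mirror the 67/33 mixed 8k random profile, and the device name is a placeholder.

```bash
# 8k random mixed workload, 67% read / 33% write, asynchronous I/O
# via libaio against one multipath device (placeholder name).
fio --name=mixed8k --ioengine=libaio --direct=1 \
    --rw=randrw --rwmixread=67 --bs=8k \
    --iodepth=32 --numjobs=16 \
    --runtime=300 --time_based --group_reporting \
    --filename=/dev/mapper/oradata01
```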
Characterization workloads were run on combinations of SSD sets. On each array, 16, 32 and 48 drive combinations were
tested with RAID 1 and RAID 5 protection. These values are valid in the context of this UDB testing on the DL980 G7 server.
The I/O characterization tests were run repeatedly and the storage system, fabric zoning and DL980 G7 server were tuned
for the purpose of determining maximum I/O performance. Specific information in this paper reflects the best practices for
the tested configuration.
Oracle database workload tool
The Oracle database workload tool consists of an OLTP workload with a table schema similar but not equal to TPC-C. Due to
restrictions from Oracle, HP is not permitted to publish transactional information. The transaction results have been
normalized and will be used to compare UDB test configurations. Other metrics measured during the workload come from
the operating system or standard Oracle Automatic Workload Repository (AWR) stats reports.
Tests, performed on a 3TB database (RAID 1) and a 1.5 TB database (RAID 5), included an I/O-intensive OLTP test and a CPU-intensive database test. The database parameters were adjusted based on the results of the I/O-intensive test, and the environment was tuned for maximum user transactions and maximum database efficiency. After the database was tuned, the storage IOPS were recorded at different user-count levels. Because workloads vary widely in their characteristics, the measurement was made at maximum transactions, but the transaction counts themselves are not reported because of legal restrictions imposed by Oracle.
Oracle Enterprise Database version 11.2.0.3 was used in this testing, but other databases can be implemented on the HP
Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array solution.
Storage configuration for testing
The storage configuration involves the use of 16 virtual volumes. Each of the two 3PAR StoreServ 7450 arrays had 8 virtual
volumes. Each virtual volume had 2 vLUNs (device paths). The device mapper on the server saw two paths for each virtual
volume exported to the host. Figure 10 shows how the virtual volumes are mapped to the host for best performance.
Extensive tests were run to achieve 1M IOPS, which required the storage mapping and zoning shown in figure 10.
The virtual volumes have two vLUN paths per volume. Each virtual volume has both of its paths coming from HBAs belonging to the same NUMA node; a virtual volume never has paths to two different NUMA nodes, only to different HBA cards within the same node.
Each port 0 on the HBA goes to switch 1 and port 1 goes to switch 2. The zoning is tightly configured to single initiator to
single target for maximum performance.
Each array is identically configured and connected to the server; any additional arrays would be configured and connected the same way.
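The two-path layout can be confirmed with device-mapper multipath; the volume alias below is a placeholder.

```bash
# Each exported virtual volume should list exactly two paths, both
# from HBAs on the same NUMA node (alias is a placeholder):
multipath -ll oradata01

# Quick count of active paths across all mapped devices:
multipath -ll | grep -c 'active ready'
```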
For the Oracle configuration, fourteen of the volumes were used in an ASM DATA group and two were used in an ASM LOG group.
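A minimal sketch of that two-group ASM layout, run as the Grid Infrastructure owner, follows; the user, volume aliases, and redundancy setting are assumptions, with external redundancy shown because the array provides the RAID protection.

```bash
# Create the DATA and LOG disk groups from the multipath aliases
# (only two of the fourteen DATA volumes are shown).
su - grid -c "sqlplus / as sysasm" <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/mapper/oradata01', '/dev/mapper/oradata02';
CREATE DISKGROUP LOG EXTERNAL REDUNDANCY
  DISK '/dev/mapper/oralog01', '/dev/mapper/oralog02';
EOF
```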
Figure 10. Server to storage volume mapping to NUMA nodes
Workload tuning considerations
The server, storage, SAN, and operating system parameters were adjusted over several I/O characterization test iterations to deliver the best I/O and processing performance. The I/O characterization workloads were used to validate the configuration that delivers the best I/O performance, thus confirming the capabilities of the infrastructure. The storage capabilities are validated by HP's Storage division and were also validated for this specific configuration in these general areas (a representative sketch follows the list):
• Server NUMA affinity – minimize communication between NUMA nodes
• BIOS – DL980 has BIOS optimizations for best performance
• Kernel and Operating System – sysfs, sysctl kernel parameters
• Debug/tools – disable processes or tools that can cause latencies
• I/O tuning – provisioning, zoning, multipathing, special array settings, etc.
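The following sketch illustrates the kinds of runtime settings adjusted in those areas; the values shown are illustrative only, and Appendices A through E document the tested configuration.

```bash
# Use a simple elevator on an SSD-backed path (repeat per device):
echo noop > /sys/block/sdc/queue/scheduler
# Allow more outstanding requests in the device queue:
echo 1024 > /sys/block/sdc/queue/nr_requests
# Disable one kernel tracing facility that can add latency
# (Appendix G covers the tracing parameter checked during testing):
sysctl -w kernel.ftrace_enabled=0
```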
Workload data/configuration results
I/O characterization results – OLTP random workloads – testing RAID 5/RAID 1 using 16, 32, 48 drives per array
The results of the characterization tests involving random small-block workloads revealed capabilities of more than 1 million IOPS for small-block reads using 48 drives per array, and as high as 980,000 IOPS using only 32 drives per array. Tests with pure 8k writes reached 438,400 IOPS. A mixed 8k workload of 67% reads and 33% writes reached 743,700 IOPS using 48 drives per array; the 32-drive RAID 1 configuration was nearly as good at 733,600 IOPS, performing almost as well for mixed 8k workloads as the 48-drive configuration.
In testing the configurations of 16 drives, 32 drives and 48 drives per array (using two arrays), a disk performance
bottleneck is not realized with the 32 and 48 drives. The maximum throughput capability of the nodes is reached first. With
the 16 drive tests, the maximum throughput of the drives begins to be evident.
RAID 5 characterization comparisons for 8k random reads, writes and mixed workloads
Figure 11 below compares 8k IOPS for a RAID 5 configuration using 16, 32, or 48 drives per array. The test used two HP 3PAR StoreServ 7450 arrays (32, 64, and 96 drives total). RAID 5 performs well with a read-weighted workload; as would be expected for RAID 5, write performance is not optimal. RAID 5 uses 25% of the usable capacity for data protection in a 3+1 configuration and 12.5% in a 7+1 configuration, compared to RAID 1, which uses 50% of the usable capacity for data protection. The RAID 5 tests were performed using a 3+1 configuration.
Figure 11. RAID 5 8k small-block random results with two arrays, comparing 16, 32, and 48 drives per array

Total SSDs (both arrays) | Reads | Writes | Mixed (67/33)
32 | 816,719 | 146,295 | 320,377
64 | 936,781 | 180,074 | 386,216
96 | 1,020,408 | 187,276 | 397,802
RAID 1 characterization comparisons for 8k random reads, writes and mixed workloads
Figure 12 shows the distribution of IOPS for a RAID 1 configuration. Reads in a RAID 1 setup are very similar to the RAID 5 results, while write performance for RAID 1 is significantly better than RAID 5 in figure 11. Looking at the distribution of performance between 16, 32, and 48 SSDs per array, performance is very similar between 32 drives and 48 drives. RAID 1 uses 50% of the storage capacity for RAID protection.
Figure 12. RAID 1 8k small-block random results with two arrays, comparing 16, 32, and 48 drives per array

Total SSDs (both arrays) | Reads | Writes | Mixed (67/33)
32 | 856,250 | 233,996 | 434,733
64 | 980,207 | 402,138 | 733,616
96 | 1,007,223 | 438,451 | 743,721
Comparison of RAID 5 to RAID 1 relative to 8k reads
Comparing 8k read IOPS on RAID 5 versus RAID 1 in figure 13, the result is very similar, within 1.2%. In a 100% read case a
RAID 5 configuration would be an optimum choice because of the additional user data RAID 5 makes available compared to
RAID 1. In practice, it is fairly rare to have a 100% read workload but more common to have a mostly read workload that is
very light on writes.
Figure 13. RAID 5 and RAID 1 read comparison (IOPS for 100% 8k reads, 48 drives per array)

RAID 5 | 1,020,408
RAID 1 | 1,007,223
Comparison of RAID 5 to RAID 1 relative to 8k writes
Reviewing the maximum writes in figure 14 shows a significant difference in write performance between RAID 5 and RAID 1: RAID 1 performs better than RAID 5 by a factor of about 2.3 in the configuration using 48 drives per array. This is largely due to RAID 5's parity-calculation overhead and partial-write penalty.
Figure 14. RAID 5 and RAID 1 write comparison (IOPS for 100% 8k writes, 48 drives per array)

RAID 5 | 187,276
RAID 1 | 438,451
Comparison of RAID 5 to RAID 1 relative to 8k mixed 67% read and 33% write
As figure 15 shows, in a mixed workload of 67% read and 33% write, RAID 1 improves on RAID 5 by a factor of 1.86. This improvement, like any RAID 1/RAID 5 comparison, comes at the cost of user space if RAID 1 is used.
A possible consideration might be to use RAID 5 if the mixed workload is very heavy on reads and light on writes. For example, if the workload is 90% reads and 10% writes, there may be a greater performance-versus-capacity benefit in using RAID 5.
Figure 15. RAID 5 and RAID 1 mixed-workload comparison (IOPS for 67% read / 33% write 8k mix, 48 drives per array)

RAID 5 | 397,802
RAID 1 | 743,721
Oracle OLTP peak transactions and IOPS
The Oracle test consisted of the creation of a RAID 1 OLTP database 3TB in size, and a RAID 5 database 1.5TB in size. The
workload was an I/O intensive OLTP benchmark tool that could stress the server as well as the I/O subsystem. As the series
of tests was run, the Oracle database init file was adjusted for maximum transactions and minimum physical I/Os. All of the
specific Oracle tuning values are documented in Appendix H and best practices are under the Best practices section of this
paper.
The database uses two ASM groups created with default extent values: a DATA group of 14 volumes and a LOG group of two volumes. The LOG volumes were provisioned to HBAs on NUMA node 4 so that the log writer process could be pinned to specific CPU cores on that node (a sketch follows). Test results showed that pinning the log writer process did not improve performance with this workload; this should be tested on individual implementations.
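Pinning of the kind tested might be done as below; the ORACLE_SID is hypothetical, and cores 40-49 are the NUMA node 4 cores from table 2.

```bash
# Pin the Oracle log writer (SID assumed to be "udb") to the cores
# local to NUMA node 4:
taskset -pc 40-49 "$(pgrep -f ora_lgwr_udb)"
```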
The OLTP stress workload is I/O intensive, and it was ramped from 50 users to 400 users. In real database applications the DL980 handles tens of thousands of users, but with the stress benchmark each user performs thousands of transactions per second with no latencies or think times; this is why the user count was not tested beyond 400. If the benchmark were a connection stress test, the user count would be in the tens of thousands. The benchmark workload generally started ramping at 150-200 users and peaked at 250-300 users.
Because HP is not legally allowed to publish Oracle benchmark results, all of the transactional numbers have been normalized and only the trend of the transactions is shown. The benchmark used was not a standard TPC-C type of workload.
RAID 1 OLTP
The RAID 1 Oracle workload shows transactions peaking at around 250 benchmark users. On the DL980 server the
operating system usage was 49% user, 33% system and 11% I/O wait. Figure 16 shows the IOPS for reads and writes taken
from the Oracle AWR reports, under the metrics of physical reads per second and physical writes per second; a hedged
sketch for extracting these metrics follows the figure. At this level of stress on 80 CPU cores, with the database buffer
cache tuned for maximum logical I/Os, the IOPS run into the hundreds of thousands but remain well within the storage
infrastructure limits.
Figure 16. RAID 1 Oracle OLTP workload physical IOPS (X-axis: number of users)

Users   Total Physical Writes   Total Physical Reads   Normalized Total Transactions
50      5,840.55                115,284.70             1
100     11,608.51               208,877.37             2.43
150     12,937.34               273,774.15             3.9
200     15,401.84               303,448.59             4.96
250     16,169.14               317,606.46             5.47
300     13,399.19               318,316.73             5.51
350     13,132.48               307,814.80             5.61
400     11,871.49               313,318.00             5.55
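The read and write rates in figures 16 and 17 come from AWR. As a hedged sketch, a query along these lines against
dba_hist_sysmetric_summary returns the same per-second metrics for each snapshot interval (AWR queries require the
Oracle Diagnostics Pack license; snapshot-range filtering is left out for brevity):

#!/bin/bash
# Pull physical read/write rates per AWR snapshot via sqlplus.
sqlplus -s "/ as sysdba" <<'EOF'
SET PAGESIZE 100 LINESIZE 120
SELECT snap_id, metric_name, ROUND(average, 2) AS avg_per_sec
FROM   dba_hist_sysmetric_summary
WHERE  metric_name IN ('Physical Reads Per Sec',
                       'Physical Writes Per Sec')
ORDER  BY snap_id, metric_name;
EOF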
RAID 5 OLTP
The RAID 5 Oracle workload shows the transactions peaking at around 300 benchmark users. On the DL980 server the
operating system utilization was 44% user, 37% system and 10% I/O wait. Figure 17 shows the IOPS for reads and writes
taken from the Oracle AWR reports, under the metrics of physical reads per second and physical writes per second.
Figure 17. RAID 5 Oracle OLTP workload physical I/Os per second (X-axis: number of users)

Users   Total Physical Writes   Total Physical Reads   Normalized Total Transactions
50      3,642.00                95,956.20              1
100     6,246.00                171,484.00             2.08
150     9,877.54                237,265.97             3.81
200     13,103.44               233,271.17             9.02
250     16,184.02               207,426.96             12.51
300     15,093.38               208,469.03             12.89
350     14,198.09               208,651.69             12.71
400     13,969.51               207,572.09             12.61
Thin Provisioning to Full Provisioning comparison results
The HP 3PAR StoreServ 7450 Thin Provisioning feature allows storage to be used much more efficiently. When a volume is
provisioned without Thin Provisioning, all of the needed space is allocated and dedicated to the volume at provisioning
time. When a volume is created with Thin Provisioning, the entire volume space is presented to the host but is not
dedicated to the volume until it is needed, leaving the unused space available to other volumes. Storage administrators
can therefore provision for the full amount of data, including data that is not immediately needed, while capacity planning
proactively monitors growth trends and adds SSDs as required. A hedged CLI sketch of creating and exporting a
thin-provisioned volume follows.
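As a minimal sketch at the HP 3PAR CLI, assuming illustrative CPG, volume and host names and an illustrative size:

# Create a thin-provisioned virtual volume in an SSD CPG, export it,
# and check reserved versus virtual size. Names and size are examples.
createvv -tpvv SSD_r5 oradata_tpvv.0 512g
createvlun oradata_tpvv.0 0 NUMA0_pair1
showvv -s oradata_tpvv.0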
Figures 18 and 19 show the results obtained with an Oracle OLTP workload on 16 fully provisioned volumes, with a
database totaling 1.5 TB. The test was run with an OLTP workload and a tuned database, ramping the workload up to
maximum transactions.
The entire set of ASM database disk groups was then converted from fully provisioned volumes to thin-provisioned
volumes, and the same series of tests was run again. Figure 18 shows the resulting differences in physical reads between
fully provisioned and thin-provisioned volumes. The worst-case difference was 3.9%. This difference is very minor
considering the potentially significant cost savings of Thin Provisioning; given the cost of a single SSD versus a traditional
HDD, the savings are especially significant.
Figure 18. RAID 5 Oracle OLTP – Physical reads, thin provisioned versus full provisioned (X-axis: number of users)

Users   R5 Physical reads/sec – Fully Provisioned   R5 Physical reads/sec – Thin Provisioned
250     207,426.96                                  207,793.61
300     208,469.03                                  205,321.06
350     208,651.69                                  208,168.44
400     207,572.09                                  200,199.96
Figure 19 shows the corresponding difference in physical writes per second, which was also 3.9%.
Figure 19. RAID 5 Oracle OLTP – Physical writes, thin provisioned versus full provisioned (X-axis: number of users)

Users   R5 Physical writes/sec – Fully Provisioned   R5 Physical writes/sec – Thin Provisioned
250     16,184.02                                    15,370.98
300     15,093.38                                    14,495.58
350     14,198.09                                    13,519.84
400     13,969.51                                    13,576.69
Large block throughput for BI workloads
Decision Support System (DSS) testing was not in the scope of this paper, but I/O throughput tests were run to measure
the large-block sequential capabilities of the HP 3PAR StoreServ 7450 storage array. Figure 20 shows the throughput
results for sequential reads and writes with a 1M block size. These results are useful when considering a DSS
implementation of the Universal Database solution on the HP 3PAR StoreServ 7450. The DL980 is a proven platform for BI
workloads, and the high throughput of the HP 3PAR StoreServ 7450 makes it a very good match for large-block queries
with the Oracle database, and other databases as well. A hedged sketch of this kind of test follows figure 20.
Figure 20. Sequential read and write access results for 64 SSDs using a 1M block size (large block sequential, RAID 5): sequential reads = 10,677 MB/second; sequential writes = 5,734 MB/second
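Large-block sequential behavior like this can be approximated with fio, if it is available on the host. This is a hedged
sketch, not the tool used for the published results; the device path, runtime and queue depth are illustrative:

#!/bin/bash
# 1M-block sequential read test against a multipath device.
# Change --rw=read to --rw=write only against scratch volumes.
fio --name=seqread --filename=/dev/mapper/mpathch --rw=read \
    --bs=1M --direct=1 --ioengine=libaio --iodepth=32 \
    --numjobs=4 --runtime=120 --time_based --group_reporting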
Best practices
Analysis and recommendations
For best I/O performance on the DL980, HP recommends using multiple paths to maintain high availability while also
maximizing performance and minimizing latencies. In extreme-performance environments, further gains come from
minimizing inter-node NUMA communication through tightly zoned hardware configurations and operating-system-to-
hardware mappings, such as setting CPU affinity to minimize latencies across the NUMA nodes.
The approach taken in this effort to achieve maximum I/Os and throughput was to connect and zone the DL980 to storage
so that cross-node activity is minimized on both the server and the storage. By dedicating virtual volumes to specific HBAs
and NUMA nodes, all of the I/O for a specific volume stays local to specific storage nodes and server nodes. Applications
with good NUMA awareness can see extremely good performance from this layout; applications that are less NUMA-aware
may require more manual tuning, but the flexibility to tune the environment exists. A hedged topology-discovery sketch
follows.
• SAN recommendations – Each dedicated port has its own zone, and no more than two ports are connected to any single
virtual volume; zoning too many paths to a single volume can create latencies across the NUMA nodes. Improvements
observed were as high as 20%. At the very least, all of the paths for a single volume should come from HBAs within a
single NUMA node on the DL980 G7 server.
Server configuration best practices
DL980 server BIOS
• Virtualization technology – disabled
• Hyper-Threading – disabled
• Intel Turbo Boost – enabled
• HP Power Profile – Max Performance
• Minimum processor idle power states – no C-states in the BIOS
Operating system
• NUMA configuration – Map each HBA to its owning NUMA node. Map out the interrupt numbers in the server's Linux
/proc/irq directory, then assign each interrupt's affinity to a core owned by that NUMA node. See Appendices C and D
for details.
• OS and kernel configuration
– Disable C-states at kernel boot; see details in Appendix B
– Set sysctl.conf values as stated in Appendix A
– Ensure debug is not enabled in sysfs, and remove any packages in the OS that may be enabling tracing. To verify that
tracing is disabled, see Appendix G.
Storage configuration best practices
• UDEV settings for performance – set udev parameters per the values in Appendix E
• Set the sysfs "rotational" value for disks to 0
• Set the sysfs "rq_affinity" value to 2 for each device. Request completions were all occurring on core 0, causing a
bottleneck; setting rq_affinity to 2 resolved this problem (a runtime sketch follows this list).
• Set the I/O scheduler to NOOP (no operation)
• Set permissions and ownership for Oracle volumes.
• SSD loading – load SSDs in groups of at least 4 per enclosure
• Volume size – virtual volumes should all be the same size and SSD type within each Oracle ASM group.
• vLUNs – HP recommends that all paths for a volume originate from the same NUMA node on the DL980, and it is best to
keep the number of vLUNs per volume down to two. Refer to figure 10.
• Use Thin Provisioning for the database and log storage. If the logs are not going to grow, use full provisioning for
the logs.
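The udev rules in Appendix E make these settings persistent; the loop below is a hedged sketch for applying the same
values at runtime to every 3PARdata multipath device, mirroring the Appendix E selection logic:

#!/bin/bash
# Apply recommended queue settings to all 3PARdata dm devices.
for dm in /sys/block/dm-*; do
    if grep -q 3PARdata "$dm"/slaves/*/device/vendor 2>/dev/null; then
        echo 0    > "$dm/queue/rotational"    # mark as non-rotational (SSD)
        echo 2    > "$dm/queue/rq_affinity"   # complete I/O near submitting core
        echo noop > "$dm/queue/scheduler"     # minimal scheduling for flash
    fi
done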
Database configuration best practices
• ASM – Use separate DATA and LOG ASM groups
• Logs – Assign at least two volumes to the LOG group and pick volumes from the same NUMA node
• Oracle parameters – see Appendix H
– Set huge pages only (a sizing sketch follows this list)
– Disable automatic memory management if applicable
– Size the buffer cache large enough for your implementation to avoid physical reads
– Enable NUMA support
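Sizing vm.nr_hugepages from the SGA is straightforward. A minimal sketch, which with the ~781 GB SGA setting of
Appendix H lands near, though slightly below, the vm.nr_hugepages figure in Appendix A (the published value includes
headroom):

#!/bin/bash
# Compute the number of huge pages needed to back the SGA.
SGA_BYTES=781147176960                                             # from Appendix H
HUGEPAGE_KB=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')  # typically 2048

PAGES=$(( SGA_BYTES / 1024 / HUGEPAGE_KB + 1 ))
echo "vm.nr_hugepages = ${PAGES}"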
Bill of materials
Below is the bill of materials (BOM) for the tested configuration. Variations of the configuration based on customer needs
are possible but would require a separate BOM. Talk to your HP sales representative for detailed quotes. See figure 21 for
the reference architecture diagram of the tested environment.
Reference architecture diagram
Figure 21. Reference architecture diagram.
The diagram shows two 10GbE switches and two 8Gb FC switches connecting an optional DL380 Gen8 management
server, the database server and the storage.
Universal Database Storage – two HP 7450 4-node flash storage arrays with two additional disk shelves in each array:
• Each node has the 4-port FC expansion card, totaling 24 usable ports per array.
• Each array was tested with 16, 32 and 48 100GB SLC SSDs.
• Other drive choices are 200GB SLC and 400GB MLC.
Universal Database Server:
• HP ProLiant DL980 G7 server
– 2TB quad-rank memory
– 8 x 10-core Xeon E7 processors
– 8 x AJ764A dual-port FC cards
– 4 integrated 10GbE ports
• Operating system – RHEL 6.4
Reference architecture BOM
Note
Part numbers are at time of publication and subject to change. The bill of materials does not include complete support
options or other rack and power requirements. If you have questions regarding ordering, please consult with your HP
Reseller or HP Sales Representative for more details. hp.com/large/contact/enterprise/index.html
Quantity   Product number   Description

Rack
1          BW930A           HP Air Flow Optimization Kit
1          BW906A           HP 42U 1075mm Side Panel Kit
2          AF511A           HP Mod PDU Core 48A/3Phs NA Kit
2          AF500A           HP 2, 7X C-13 Stk Intl Modular PDU
1          BW904A           HP 642 1075mm Shock Intelligent Rack (B01 – include with complete system)

Network
2          C8R07A           HP StoreFabric 8/24 Bundled FC Switch
48         QK735A           HP Premier Flex LC/LC OM4 2f 15m Cbl
48         AJ716B           HP 8Gb Short Wave B-Series SFP+ 1 Pack
2          JG296A           HP 5920 Network Switch

Management Server (optional)
1          653200-B21       HP ProLiant DL380p Gen8 8 SFF CTO
1          715219-L21       HP DL380p Gen8 Intel® Xeon® E5-2640v2 (2.0GHz/8-core/20MB/95W) FIO Processor Kit
2          713985-B21       HP 16GB (1x16GB) Dual Rank x4 PC3L-12800R (DDR3-1600) Registered CAS-11 Low Voltage Memory Kit
1          684210-B21       HP Ethernet 10Gb 2-port 530FLR-SFP+ FIO Adapter
2          656363-B21       HP 750W Common Slot Platinum Plus Hot Plug Power Supply Kit

DB Server DL980 G7
1          AM451A           HP ProLiant DL980 G7 CTO system-E7 proc
1          650770-L21       HP DL980 G7 E7-4870 FIO 4-processor Kit
1          650770-B21       HP DL980 G7 E7-4870 4-processor Kit
1          AM450A           HP DL980 CPU Installation Assembly for E7
8          A0R60A           HP DL980 G7 (E7) Memory Cartridge
Quantity   Product number   Description

DB Server DL980 G7 (continued)
128        A0R55A           HP DL980 16GB 4Rx4 PC3-8500R-7 Kit
2          627117-B21       HP 300GB 6G SAS 15K 2.5in DP ENT HDD
1          481043-B21       HP Slim 12.7mm SATA DVDRW Optical Kit
1          588137-B21       HP DL580G7 PCI Express Kit
1          AM434A           HP DL980 LP PCIe I/O Expansion Module
4          593722-B21       HP NC365T 4-port Ethernet Server Adapter
8          AJ764A           HP 82Q 8Gb Dual Port PCI-e FC HBA
4          AM470A           HP DL980 1200W CS Plat Ht Plg Pwr Supply
1          339778-B21       HP Raid 1 Drive 1 FIO Setting
1          A0R66A           HP ProLiant DL980 NC375i SPI Board 4 port

Storage
2          C8R37A           3PAR StoreServ 7450 4Node
8          QR486A           HP 3PAR 7000 4-pt 8Gb/s FC Adapter
96         QR502A           HP M6710 100GB 6G SAS 2.5in SLC SSD
0          QR503A           HP M6710 200GB 6G SAS 2.5in SLC SSD
0          QR504A           HP M6710 400GB 6G SAS 2.5in MLC SSD
2          BC914A           HP 3PAR 7450 Reporting Suite Media LTU
1          BC890A           HP 3PAR 7450 OS Suite Base Media LTU
96         BC891A           HP 3PAR 7450 OS Suite Drive LTU
4          QR490A           HP M6710 2.5in 2U SAS Drive Enclosure
0          QR516B           Physical service processor
8          QK734A           HP Premier Flex LC/LC OM4 2f 5m Cbl
Notes
• Refer to HP 5920 Network switch QuickSpecs to determine proper transceivers and accessories for your specific network
environment
• Refer to HP 3PAR StoreServ 7450 QuickSpecs for the service processor (SP) and HP 3PAR StoreServ 7450 OS Suite
options.
• Refer to HP ProLiant DL380p Gen8 QuickSpecs to determine the desired options for your environment.
• Refer to HP 3PAR Software Products QuickSpecs for details on HP 3PAR software options.
Summary
The HP Universal Database Solution: Oracle and HP 3PAR StoreServ 7450 All-flash array is a significant new part of the
overall HP performance reference architecture portfolio. It was developed to provide high-performance I/O throughput for
transactional databases in a package that delivers business continuity, extreme IOPS, faster user response times and
increased throughput versus comparable traditional server/storage configurations. The solution integrates with high
availability and disaster recovery options such as HP 3PAR Remote Copy and Serviceguard for Linux.
Key success factors in our extensive testing include:
• Configured an Oracle database environment using the HP ProLiant DL980 G7 server and two HP 3PAR StoreServ 7450
flash arrays capable of delivering 1M IOPS.
• Demonstrated a stable, I/O-stressed OLTP workload and compared the same workload on fully provisioned and thin
provisioned volumes.
• Addressed the customer challenge of finding extremely high-performance flash storage with rich, easy-to-use
management features, integrating it with the world-class HP DL980 G7 to produce a high-performance server and I/O
combination able to compete with extreme-performance database appliances.
• Provided a flexible, mission-critical, extreme-performance database solution with more options to meet the
customer's needs.
Implementing a proof-of-concept
As a matter of best practice for all deployments, HP recommends implementing a proof-of-concept, using a test
environment that closely matches the planned production environment, to obtain appropriate performance and scalability
characterizations. For help with a proof-of-concept, contact an HP Services representative
(hp.com/large/contact/enterprise/index.html) or your HP partner.
Appendix
Appendix A – Red Hat 6.4 kernel tunables /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
kernel.shmmax = 1517919148032
kernel.shmall = 529408185
kernel.shmmni = 4096
kernel.sem = 500 64000 200 256
fs.file-max = 8388608
fs.aio-max-nr = 4194304
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
net.ipv4.tcp_rmem = 1048576 1048576 4194304
net.ipv4.tcp_wmem = 1048576 1048576 1048576
net.ipv4.ip_local_port_range = 9000 65500
vm.swappiness=0
vm.dirty_background_ratio=3
vm.dirty_ratio=15
vm.dirty_expire_centisecs=500
vm.dirty_writeback_centisecs=100
vm.hugetlb_shm_group = 1000
vm.nr_hugepages = 375557
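After editing /etc/sysctl.conf, the settings can be applied without a reboot; a quick sketch:

# Load the tunables and spot-check the huge pages value.
sysctl -p /etc/sysctl.conf
sysctl vm.nr_hugepages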
Appendix B – Grub configuration for disabling cstates
module /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/vg_aps85180-lv_root intel_iommu=on
rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=vg_aps85180/lv_swap rd_LVM_LV=vg_aps85180/lv_root
rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM
rhgb quiet elevator=noop nosoftlockup intel_idle.max_cstate=0 mce=ignore_ce
Appendix C – IRQ affinity script for /etc/rc.local
Note
HBA card interrupt numbers must be verified for each specific implementation. See the file /proc/interrupts on the Linux
operating system.
/etc/rc.local
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
echo "0" > /proc/irq/106/smp_affinity_list
echo "1" > /proc/irq/107/smp_affinity_list
echo "2" > /proc/irq/108/smp_affinity_list
echo "3" > /proc/irq/109/smp_affinity_list
echo "4" > /proc/irq/110/smp_affinity_list
echo "5" > /proc/irq/111/smp_affinity_list
echo "6" > /proc/irq/112/smp_affinity_list
echo "7" > /proc/irq/113/smp_affinity_list
echo "20" > /proc/irq/114/smp_affinity_list
echo "21" > /proc/irq/115/smp_affinity_list
echo "22" > /proc/irq/116/smp_affinity_list
echo "23" > /proc/irq/117/smp_affinity_list
echo "24" > /proc/irq/118/smp_affinity_list
echo "25" > /proc/irq/119/smp_affinity_list
echo "26" > /proc/irq/120/smp_affinity_list
echo "27" > /proc/irq/121/smp_affinity_list
echo "28" > /proc/irq/122/smp_affinity_list
echo "29" > /proc/irq/123/smp_affinity_list
echo "30" > /proc/irq/124/smp_affinity_list
echo "31" > /proc/irq/125/smp_affinity_list
echo "40" > /proc/irq/126/smp_affinity_list
echo "41" > /proc/irq/127/smp_affinity_list
echo "42" > /proc/irq/128/smp_affinity_list
echo "43" > /proc/irq/129/smp_affinity_list
echo "44" > /proc/irq/130/smp_affinity_list
echo "45" > /proc/irq/131/smp_affinity_list
echo "46" > /proc/irq/132/smp_affinity_list
echo "47" > /proc/irq/133/smp_affinity_list
echo "48" > /proc/irq/134/smp_affinity_list
echo "49" > /proc/irq/135/smp_affinity_list
echo "50" > /proc/irq/136/smp_affinity_list
echo "51" > /proc/irq/137/smp_affinity_list
Appendix D – HBA NUMA mapping and IRQ map
Note
Host values and WWN values are specific to each implementation and must be obtained for each implementation.
Bus Address   Slot   Local CPU List   NUMA Node   Host     Port WWN
=====================================================================
0b:00.0       9      0-9              0           host5    0x50014380186b6e5c
0b:00.1       9      0-9              0           host6    0x50014380186b6e5e
11:00.0       11     0-9              0           host3    0x50014380186b6e34
11:00.1       11     0-9              0           host4    0x50014380186b6e36
54:00.0       2      20-29            2           host11   0x500143802422c9d0
54:00.1       2      20-29            2           host12   0x500143802422c9d2
57:00.0       3      20-29            2           host9    0x500143802422b214
57:00.1       3      20-29            2           host10   0x500143802422b216
5d:00.0       5      20-29            2           host7    0x500143802422ca84
5d:00.1       5      20-29            2           host8    0x500143802422ca86
a1:00.0       12     40-49            4           host17   0x500143802422cf00
a1:00.1       12     40-49            4           host18   0x500143802422cf02
a4:00.0       13     40-49            4           host15   0x50014380186b8878
a4:00.1       13     40-49            4           host16   0x50014380186b887a
aa:00.0       15     40-49            4           host13   0x50014380186b6e14
aa:00.1       15     40-49            4           host14   0x50014380186b6e16
QLogic Interrupt Affinity Finder
Interrupt Number   NUMA Node   Affinity
=====================================================================
106                0           00000000,00000000,00000000,00000000,00000001 (default)
107                0           00000000,00000000,00000000,00000000,00000002 (rsp_q)
108                0           00000000,00000000,00000000,00000000,00000004 (default)
109                0           00000000,00000000,00000000,00000000,00000008 (rsp_q)
110                0           00000000,00000000,00000000,00000000,00000010 (default)
111                0           00000000,00000000,00000000,00000000,00000020 (rsp_q)
112                0           00000000,00000000,00000000,00000000,00000040 (default)
113                0           00000000,00000000,00000000,00000000,00000080 (rsp_q)
114                2           00000000,00000000,00000000,00000000,00100000 (default)
115                2           00000000,00000000,00000000,00000000,00200000 (rsp_q)
116                2           00000000,00000000,00000000,00000000,00400000 (default)
117                2           00000000,00000000,00000000,00000000,00800000 (rsp_q)
118                2           00000000,00000000,00000000,00000000,01000000 (default)
119                2           00000000,00000000,00000000,00000000,02000000 (rsp_q)
120                2           00000000,00000000,00000000,00000000,04000000 (default)
121                2           00000000,00000000,00000000,00000000,08000000 (rsp_q)
122                2           00000000,00000000,00000000,00000000,10000000 (default)
123                2           00000000,00000000,00000000,00000000,20000000 (rsp_q)
124                2           00000000,00000000,00000000,00000000,40000000 (default)
125                2           00000000,00000000,00000000,00000000,80000000 (rsp_q)
126                4           00000000,00000000,00000000,00000100,00000000 (default)
127                4           00000000,00000000,00000000,00000200,00000000 (rsp_q)
128                4           00000000,00000000,00000000,00000400,00000000 (default)
129                4           00000000,00000000,00000000,00000800,00000000 (rsp_q)
130                4           00000000,00000000,00000000,00001000,00000000 (default)
131                4           00000000,00000000,00000000,00002000,00000000 (rsp_q)
132                4           00000000,00000000,00000000,00004000,00000000 (default)
133                4           00000000,00000000,00000000,00008000,00000000 (rsp_q)
134                4           00000000,00000000,00000000,00010000,00000000 (default)
135                4           00000000,00000000,00000000,00020000,00000000 (rsp_q)
136                4           00000000,00000000,00000000,00040000,00000000 (default)
137                4           00000000,00000000,00000000,00080000,00000000 (rsp_q)
Appendix E – UDEV configurations
/etc/udev/rules.d/10-3par.rules
ACTION=="add|change", KERNEL=="dm-*", PROGRAM="/bin/bash -c 'cat /sys/block/$name/slaves/*/device/vendor | grep 3PARdata'", ATTR{queue/rotational}="0", ATTR{queue/scheduler}="noop", ATTR{queue/rq_affinity}="2", ATTR{queue/nomerges}="1"
/etc/udev/rules.d/12-dm-permissions.rules
ENV{DM_NAME}=="mpathch", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcg", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcf", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathce", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathag", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcd", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathaf", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcc", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathae", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcb", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathad", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathac", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcl", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathck", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathcj", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="mpathci", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
Appendix F – Storage information
Zoning example – WWNs specific to implementation. Example only.
Switch Top
Effective configuration:
cfg: CFG_BOTH
zone: Z1
50:01:43:80:18:6b:6e:5c
zone: Z10
50:01:43:80:18:6b:6e:34
20:11:00:02:ac:00:5f:9a
21:11:00:02:ac:00:5f:98
zone: Z11
50:01:43:80:24:22:c9:d0
zone: Z12
50:01:43:80:24:22:b2:14
20:21:00:02:ac:00:5f:98
21:21:00:02:ac:00:5f:98
zone: Z13
50:01:43:80:24:22:ca:84
zone: Z14
50:01:43:80:24:22:cf:00
22:21:00:02:ac:00:5f:98
22:11:00:02:ac:00:5f:98
zone: Z15
50:01:43:80:18:6b:88:78
23:11:00:02:ac:00:5f:98
zone: Z16
50:01:43:80:18:6b:6e:14
23:21:00:02:ac:00:5f:98
zone: Z2
50:01:43:80:18:6b:6e:34
21:11:00:02:ac:00:5f:9a
zone: Z3
50:01:43:80:24:22:c9:d0
20:21:00:02:ac:00:5f:9a
zone: Z4
50:01:43:80:24:22:b2:14
21:21:00:02:ac:00:5f:9a
zone: Z5
50:01:43:80:24:22:ca:84
22:11:00:02:ac:00:5f:9a
zone: Z6
50:01:43:80:24:22:cf:00
22:11:00:02:ac:00:5f:9a
zone: Z7
50:01:43:80:18:6b:88:78
zone: Z8
50:01:43:80:18:6b:6e:14
23:11:00:02:ac:00:5f:9a
23:21:00:02:ac:00:5f:9a
zone: Z9
50:01:43:80:18:6b:6e:5c
20:11:00:02:ac:00:5f:98
Switch Bottom
Effective configuration:
cfg: CFG_BOTH
zone: Z1
50:01:43:80:18:6b:6e:5e
22:12:00:02:ac:00:5f:9a
zone: Z10
50:01:43:80:18:6b:6e:36
23:12:00:02:ac:00:5f:98
zone: Z11
50:01:43:80:24:22:c9:d2
22:22:00:02:ac:00:5f:98
zone: Z12
50:01:43:80:24:22:b2:16
23:22:00:02:ac:00:5f:98
zone: Z13
50:01:43:80:24:22:ca:86
20:22:00:02:ac:00:5f:98
zone: Z14
50:01:43:80:24:22:cf:02
20:12:00:02:ac:00:5f:98
zone: Z15
50:01:43:80:18:6b:88:7a
21:12:00:02:ac:00:5f:98
zone: Z16
50:01:43:80:18:6b:6e:16
zone: Z2
50:01:43:80:18:6b:6e:36
21:22:00:02:ac:00:5f:98
23:12:00:02:ac:00:5f:9a
zone: Z3
50:01:43:80:24:22:c9:d2
zone: Z4
50:01:43:80:24:22:b2:16
22:22:00:02:ac:00:5f:9a
23:22:00:02:ac:00:5f:9a
zone: Z5
50:01:43:80:24:22:ca:86
zone: Z6
50:01:43:80:24:22:cf:02
20:22:00:02:ac:00:5f:9a
20:12:00:02:ac:00:5f:9a
zone: Z7
50:01:43:80:18:6b:88:7a
zone: Z8
50:01:43:80:18:6b:6e:16
21:12:00:02:ac:00:5f:9a
21:22:00:02:ac:00:5f:9a
zone: Z9
50:01:43:80:18:6b:6e:5e
22:12:00:02:ac:00:5f:98
HP 3PAR StoreServ 7450 CLI examples
SHOWVLUN – 48 drives RAID 1
prometheus cli% showvlun
Active VLUNs
Lun VVName     HostName     -Host_WWN/iSCSI_Name- Port  Type Status ID
0   APS84_11.0 NUMA0_pair1  50014380186B6E5C      0:1:1 host active 0
0   APS84_11.0 NUMA0_pair1  50014380186B6E34      1:1:1 host active 0
0   APS84_11.1 NUMA0_pair2  50014380186B6E5E      2:1:2 host active 0
0   APS84_11.1 NUMA0_pair2  50014380186B6E36      3:1:2 host active 0
0   APS84_11.2 NUMA2_pair1  500143802422C9D0      0:2:1 host active 0
0   APS84_11.2 NUMA2_pair1  500143802422B214      1:2:1 host active 0
0   APS84_11.3 NUMA2_pair2  500143802422CA86      0:2:2 host active 0
0   APS84_11.3 NUMA2_pair2  500143802422C9D2      2:2:2 host active 0
0   APS84_11.4 NUMA2_pair3  500143802422CA84      2:2:1 host active 0
0   APS84_11.4 NUMA2_pair3  500143802422B216      3:2:2 host active 0
0   APS84_11.5 NUMA4_pair1  500143802422CF00      2:1:1 host active 0
0   APS84_11.5 NUMA4_pair1  50014380186B8878      3:1:1 host active 0
0   APS84_11.6 NUMA4_pair2  50014380186B887A      1:1:2 host active 0
0   APS84_11.6 NUMA4_pair2  50014380186B6E16      1:2:2 host active 0
0   APS84_11.7 NUMA4_pair3  500143802422CF02      0:1:2 host active 0
0   APS84_11.7 NUMA4_pair3  50014380186B6E14      3:2:1 host active 0
--------------------------------------------------------------------
16 total

VLUN Templates
Lun VVName     HostName     -Host_WWN/iSCSI_Name- Port Type
0   APS84_11.0 NUMA0_pair1  ----------------      ---  host
0   APS84_11.1 NUMA0_pair2  ----------------      ---  host
0   APS84_11.2 NUMA2_pair1  ----------------      ---  host
0   APS84_11.3 NUMA2_pair2  ----------------      ---  host
0   APS84_11.4 NUMA2_pair3  ----------------      ---  host
0   APS84_11.5 NUMA4_pair1  ----------------      ---  host
0   APS84_11.6 NUMA4_pair2  ----------------      ---  host
0   APS84_11.7 NUMA4_pair3  ----------------      ---  host
---------------------------------------------------------
8 total
SHOWPORT
N:S:P  Connmode  ConnType  CfgRate  MaxRate  Class2    UniqNodeWwn  VCN       IntCoal
0:0:1  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
0:0:2  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
0:1:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
0:1:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
0:2:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
0:2:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
0:2:3  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
0:2:4  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
1:0:1  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
1:0:2  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
1:1:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
1:1:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
1:2:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
1:2:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
1:2:3  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
1:2:4  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
2:0:1  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
2:0:2  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
2:1:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
2:1:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
2:2:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
2:2:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
2:2:3  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
2:2:4  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
3:0:1  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
3:0:2  disk      point     6Gbps    6Gbps    n/a       n/a          n/a       enabled
3:1:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
3:1:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
3:2:1  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
3:2:2  host      point     auto     8Gbps    disabled  disabled     disabled  disabled
3:2:3  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
3:2:4  disk      loop      auto     8Gbps    disabled  disabled     disabled  enabled
SHOWCPG
prometheus cli% showcpg
                                 -Volumes-   -Usage-  ----------------------(MB)----------------------
                                                      ----- Usr -----     -- Snp --     -- Adm --
Id  Name             Warn%  VVs  TPVVs  Usr  Snp        Total     Used    Total  Used   Total  Used
0   SSD_r1           -      8    0      8    0        1843200  1843200        0     0       0     0
1   SSD_r5           -      0    0      0    0              0        0        0     0       0     0
2   SSD_r6           -      0    0      0    0              0        0        0     0       0     0
3   SSD_R1_16Drives  -      0    0      0    0              0        0        0     0       0     0
4   SSD_R5_16Drives  -      0    0      0    0              0        0        0     0       0     0
5   SSD_R1_32Drives  -      0    0      0    0              0        0        0     0       0     0
6   SSD_R5_32Drives  -      0    0      0    0              0        0        0     0       0     0
---------------------------------------------------------------------------------------------------
    7 total                 8    0                    1843200  1843200        0     0       0     0
Appendix G – Check or set operating system tracing parameter
If tracing is enabled on the operating system, event latencies can be introduced into the kernel, causing delays in I/O
operations. During I/O characterization testing, as much as 10% I/O performance degradation was observed. Ensure any
tools that enable tracing have been disabled or removed unless they are needed for specific support purposes.
To check the state of tracing on the system, run the following commands:
cat /sys/kernel/debug/tracing/tracing_enabled
cat /sys/kernel/debug/tracing/tracing_on
The result of both commands should be 0. To disable tracing temporarily, run the following commands:
echo "0" > /sys/kernel/debug/tracing/tracing_enabled
echo "0" > /sys/kernel/debug/tracing/tracing_on
To permanently disable tracing, remove the application on the system that is enabling debug, or add the above commands
to the /etc/rc.local file.
A debug tool called flightrecorder can cause debug to be enabled. To determine whether flightrecorder is installed, check
for it on your Linux server using this command:
rpm -qa | grep flightrecorder
If the package exists, delete it using rpm -e or run the following command:
service trace-cmd stop
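The checks and the disable step can be combined into one hedged snippet suitable for /etc/rc.local:

#!/bin/bash
# Turn off both tracing switches if they exist and are enabled.
for f in /sys/kernel/debug/tracing/tracing_enabled \
         /sys/kernel/debug/tracing/tracing_on; do
    if [ -f "$f" ] && [ "$(cat "$f")" != "0" ]; then
        echo 0 > "$f"
        echo "disabled tracing via $f"
    fi
done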
Appendix H – Oracle parameters
DB1.__db_cache_size=746787438592
DB1.__java_pool_size=536870912
DB1.__large_pool_size=536870912
DB1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
DB1.__pga_aggregate_target=260382392320
DB1.__sga_target=781147176960
DB1.__shared_io_pool_size=0
DB1.__shared_pool_size=28991029248
DB1.__streams_pool_size=536870912
*._db_block_numa=1
*._enable_automatic_maintenance=0
*._enable_NUMA_support=TRUE
*._shared_io_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/DB1/adump'
*.audit_trail='db'
*.compatible='11.2.0.3.0'
*.control_files='+DATA/db1/controlfile/current.256.824195537'
*.db_block_checking='TRUE'
*.db_block_checksum='TRUE'
*.db_block_size=8192
*.db_cache_size=746787438592
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=128
*.db_files=1050
*.db_name='DB1'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DB1XDB)'
*.filesystemio_options='setall'
*.java_pool_size=536870912
*.large_pool_size=536870912
*.open_cursors=3000
*.parallel_degree_policy='MANUAL'
*.parallel_max_servers=0
*.parallel_min_servers=800
*.pga_aggregate_target=260214620160
*.processes=12000
*.recovery_parallelism=240
*.remote_login_passwordfile='exclusive'
*.sessions=1000
*.sga_target=0
*.shared_pool_size=28991029248
*.statistics_level='TYPICAL'
*.streams_pool_size=536870912
*.timed_statistics=TRUE
*.trace_enabled=TRUE
*.undo_tablespace='UNDOTBS1'
*.use_large_pages='ONLY'
Appendix I – HP ProLiant DL980 PCIe card loading order
Figure 22. DL980 G7 I/O Expansion Slot Options & PCIe Loading
For more information
Universal Database Solution for Mission-Critical x86,
http://www8.hp.com/us/en/products/servers/proliant-servers.html?compURI=1452898
HP ProLiant DL980 G7 server, hp.com/servers/dl980
HP 3PAR StoreServ 7450 storage, hp.com/go/storeserv7450
HP Serviceguard Solutions for Linux, hp.com/go/sglx
HP Networking, hp.com/go/networking
HP 3PAR Remote Copy Software,
http://www8.hp.com/us/en/products/storage-software/product-detail.html?oid=5044771
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Oracle and Java are registered trademarks of Oracle and/or its affiliates.
UNIX is a registered trademark of The Open Group. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Red Hat is a registered
trademark of Red Hat, Inc. in the United States and other countries.
4AA4-8714ENW, October 2013, Rev. 1