Cisco Unified Computing System and Oracle RAC 11gR2 with Hitachi Virtual Storage Platform G1000
Cisco Data Center Solution for Oracle RAC 11gR2 (11.2.0.4) on Oracle Linux 6.4 using Cisco Unified Computing System and Hitachi Virtual Storage Platform G1000
Last Updated: October 14, 2014
Building Architectures to Solve Business Problems
Cisco Validated Design
About the Authors
Niranjan Mohapatra, Technical Marketing Engineer, CSPG, Cisco Systems
Niranjan Mohapatra is a Technical Marketing Engineer in the Cisco Systems CSPG UCS Product Management and Data Center Solutions Engineering Group and a specialist in Oracle RAC RDBMS. He has over 15 years of extensive experience with Oracle RAC databases and associated tools. Niranjan has worked as a TME and as a DBA handling production systems in various organizations. He holds a Master of Science (MSc) degree in Computer Science and is an Oracle Certified Professional (OCP-DBA). Niranjan also has a strong background in Cisco UCS, Hitachi storage, and virtualization.
Kishore Daggubati, Senior Oracle Solution Architect, Hitachi Data Systems
Kishore Daggubati's main focus is developing Unified Compute Platform based solutions for Oracle applications. In addition, Kishore defines Oracle reference architectures and implementation guides and supports Oracle technical presales. Kishore has more than 16 years of experience with Oracle and storage technologies.
Acknowledgments
The following people were part of the team that made this Cisco Validated Design solution possible:
• Tushar Patel—Cisco Systems, Inc.
• YC Chu—Hitachi Data Systems
About Cisco Validated Design (CVD) Program
The CVD program consists of systems and solutions designed, tested, and documented to facilitate
faster, more reliable, and more predictable customer deployments. For more information visit
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING
FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS
SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES,
INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF
THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED
OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR
THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR
OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT
THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY
DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco
WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We
Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS,
Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the
Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital,
the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone,
iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace
Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels,
ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to
Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of
Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners.
The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2014 Cisco Systems, Inc. All rights reserved
Cisco Unified Computing System and Oracle RAC 11gR2 with Hitachi Virtual Storage Platform G1000
Executive Summary
This Cisco Validated Design describes how the Cisco Unified Computing System™ can be used in
conjunction with Hitachi Virtual Storage Platform (VSP) G1000 storage systems to implement an Oracle
Real Application Clusters (RAC) Database. The Cisco Unified Computing System provides the
compute, network, and storage access components of the cluster, deployed as a single cohesive system.
The result is an implementation that addresses many of the challenges that database administrators and
their IT departments face today, including needs for a simplified deployment and operation model, high
performance for Oracle 11gR2 RAC software, and lower total cost of ownership (TCO). This document
introduces the Cisco Unified Computing System and provides instructions for implementing it.
Historically, enterprise database management systems have run on costly symmetric multiprocessing
servers that use a vertical scaling (or scale-up) model. However, as the cost of one-to-four-socket
x86-architecture servers continues to drop while their processing power increases, a new model has
emerged. Oracle RAC uses a horizontal scaling, or scale-out, model, in which the active-active cluster
uses multiple servers, each contributing its processing power to the cluster, increasing performance,
scalability, and availability. The cluster balances the workload across the servers in the cluster, and the
cluster can provide continuous availability in the event of a failure.
One approach that database, system, and storage administrators use to meet the I/O performance needs of applications is to deploy faster CPUs and high-performance drives. This may be a solution in environments with smaller database sizes and minimal movement in hot datasets. However, as databases grow and frequently accessed data sets change constantly, more computing power is required, and it becomes more difficult to identify data based on access frequency and redistribute it to the correct storage media.
Hitachi Virtual Storage Platform G1000 addresses these challenges with Hitachi Dynamic Provisioning
software and Hitachi Dynamic Tiering software. Hitachi Dynamic Provisioning software provides
efficient and cost effective mechanisms to address capacity planning and database utilization
management challenges. Hitachi Dynamic Tiering software extends the mechanisms to maximize the
utilization of high-cost, high-performance storage media and supports automatic migration of frequently
accessed data to address the data life-cycle management challenge.
Cisco is the undisputed leader in providing network connectivity in enterprise data centers. With the
introduction of the Cisco Unified Computing System, Cisco is now equipped to provide the entire
clustered infrastructure for Oracle RAC deployments. The Cisco Unified Computing System provides
compute, network, virtualization, and storage access resources that are centrally controlled and managed
as a single cohesive system. With the capability to centrally manage both blade and rack-mount servers,
the Cisco Unified Computing System provides an ideal foundation for Oracle RAC deployments.
One key benefit of the Cisco Unified Computing System with Hitachi Virtual Storage Platform G1000
is the ability to customize the environment to suit a customer's requirements. This is why the reference
architecture detailed in this document highlights the resiliency, cost benefit, and ease of deployment of
an FC-based storage solution. A storage system capable of dynamic tiering and dynamic provisioning gives customers both performance and investment protection.
Target Audience
This document is intended to assist solution architects, project managers, infrastructure managers, sales
engineers, field engineers, and consultants in planning, designing, and deploying Oracle Database 11g
R2 RAC hosted on Cisco Unified Computing System and Hitachi Virtual Storage Platform G1000. This
document assumes that the reader has an architectural understanding of the Cisco Unified Computing
System, Oracle Database 11gR2 GRID Infrastructure, Oracle Real Application Clusters, Hitachi storage
systems, and related software.
Purpose of this Guide
This Cisco Validated Design demonstrates how enterprises can apply best practices to deploy Oracle
Database 11g R2 RAC using Oracle Linux, Cisco Unified Computing System, Cisco Nexus family
switches, and Hitachi storage. This design solution shows the deployment and scaling of a four-node Oracle Database 11g R2 RAC database in a bare-metal environment using typical OLTP and DSS workloads to ensure the stability, performance, and resiliency demanded by mission-critical data center deployments.
Benefits of the Configuration
Cisco and Oracle are working together to promote interoperability of Oracle's next-generation database
and application solutions with the Cisco Unified Computing System, helping make the Cisco Unified
Computing System a simple and reliable platform on which to run Oracle software.
Database administrators no longer need to painstakingly configure each element in the hardware stack
independently as the entire cluster runs on a single cohesive system. Cisco UCS Manager dynamically provisions network, compute, and storage access resources statelessly. This role-based and policy-based
embedded management system handles every aspect of system configuration, from a server's firmware
and identity settings to the network connections that connect storage traffic to the destination storage
system. This capability dramatically simplifies the process of scaling an Oracle RAC configuration or
rehosting an existing node on an upgraded server. Cisco UCS Manager uses the concept of service profiles
and service profile templates to consistently and accurately configure resources. The system
automatically configures and deploys servers in minutes, rather than the hours or days required by
traditional systems composed of discrete, separately managed components. Indeed, Cisco UCS Manager
can simplify server deployment to the point where it can automatically discover, provision, and deploy
a new blade server when it is inserted into a chassis.
The system is based on a 10-Gbps unified network fabric that radically simplifies cabling at the rack
level by consolidating both IP and Fiber Channel traffic onto the same rack-level 10-Gbps converged
network. This "wire-once" model allows in-rack network cabling to be configured once, with network
features and configurations all implemented by changes in software rather than by error-prone changes
in physical cabling. This Cisco Validated Configuration not only supports physically separate public and
private networks; it provides redundancy with automatic failover.
The Cisco UCS B-Series Blade Servers used in this configuration feature Intel Xeon E5-2697 v2 series
processors that deliver intelligent performance, automated energy efficiency, and flexible virtualization.
Intel Turbo Boost Technology automatically boosts processing power through increased frequency and use of Hyper-Threading to deliver high performance when workloads demand and thermal conditions permit.
The Cisco Unified Computing System's 10-Gbps unified fabric delivers standards-based Ethernet and
Fiber Channel over Ethernet (FCoE) capabilities that simplify and secure rack-level cabling while
speeding network traffic compared to traditional Gigabit Ethernet networks. The balanced resources of
the Cisco Unified Computing System allow the system to easily process an intensive online transaction
processing (OLTP) and decision-support system (DSS) workload with no resource saturation.
Technology Overview
Cisco Unified Computing System
The Cisco Unified Computing System is a third-generation data center platform that unites computing,
networking, storage access, and virtualization resources into a cohesive system. When used as the
foundation for Oracle RAC database and software the system brings lower total cost of ownership
(TCO), greater performance, improved scalability, increased business agility, and Cisco's hallmark
investment protection. The system integrates a low-latency, lossless 10 Gigabit Ethernet (10GbE)
unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated,
scalable, multi-chassis platform in which all resources participate in a unified management domain that
is controlled and managed centrally.
The system represents a major evolutionary step away from the current traditional platforms in which
individual components must be configured, provisioned, and assembled to form a solution. Instead, the
system is designed to be stateless. It is installed and wired once, with its entire configuration—from
RAID controller settings and firmware revisions to network configurations—determined in software
using integrated, embedded management.
Cisco Unified Computing System is designed to be form-factor neutral. The core of the system is a pair
of Fabric Interconnects that links all the computing resources together and integrates all system
components into a single point of management. Today, blade server chassis are integrated into the system
through Fabric Extenders that bring the system's 10-Gbps unified fabric to each chassis.
The Fibre Channel over Ethernet (FCoE) protocol collapses Ethernet-based networks and storage
networks into a single common network infrastructure, thus reducing CapEx by eliminating redundant
switches, cables, networking cards, and adapters, and reducing OpEx by simplifying administration of
these networks. Other benefits include:
• I/O and server virtualization
• Transparent scaling of all types of content, either block or file based
• Simpler and more homogeneous infrastructure to manage, enabling data center consolidation
Figure 1
Cisco UCS Components
The main components of the Cisco Unified Computing System are:
• Compute—The system is based on an entirely new class of computing system that incorporates blade servers based on Intel Xeon® E5-2600 Series Processors. Cisco UCS B-Series Blade Servers work with virtualized and non-virtualized applications to increase performance, energy efficiency, flexibility, and productivity.
• Network—The system is integrated onto a low-latency, lossless, 80-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.
• Storage access—The system provides consolidated access to both storage area network (SAN) and network-attached storage (NAS) over the unified fabric. By unifying storage access, the Cisco Unified Computing System can access storage over Ethernet, Fiber Channel, Fiber Channel over Ethernet (FCoE), and iSCSI. This provides customers with options for storage access and investment protection. Additionally, server administrators can reassign storage-access policies for system connectivity to storage resources, thereby simplifying storage connectivity and management for increased productivity.
• Management—The system uniquely integrates all system components, enabling the entire solution to be managed as a single entity by Cisco UCS Manager. Cisco UCS Manager has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a robust application programming interface (API) to manage all system configuration and operations.
Cisco UCS Blade Chassis
The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified
Computing System, delivering a scalable and flexible blade server chassis.
The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an
industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series
Blade Servers and can accommodate both half-width and full-width blade form factors.
Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid-redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS 2208XP Fabric Extenders.
A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot and up to 80 Gbps of I/O bandwidth for two slots. The chassis is capable of supporting future 40 Gigabit Ethernet standards.
Figure 2
Cisco Blade Server Chassis (Front, Rear, and Populated with Blades View)
Cisco UCS B200 M3 Blade Server
The Cisco UCS B200 M3 Blade Server is a half-width, two-socket blade server. The system uses two Intel Xeon® E5-2600 Series Processors, up to 768 GB of DDR3 memory, two optional hot-swappable small form factor (SFF) serial attached SCSI (SAS) disk drives, and two VIC adapters that provide up to 80 Gbps of I/O throughput. The server balances simplicity, performance, and density for production-level virtualization and other mainstream data center workloads.
Figure 3
Cisco UCS B200 M3 Blade Server
Cisco UCS Virtual Interface Card 1240
A Cisco innovation, the Cisco UCS VIC 1240 is a four-port 10 Gigabit Ethernet, FCoE-capable modular LAN on motherboard (mLOM) adapter designed exclusively for the M3 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the Cisco UCS VIC 1240 capabilities can be expanded to eight ports of 10 Gigabit Ethernet.
Figure 4
Cisco Virtual Interface Card 1240
Cisco UCS 6296UP Fabric Interconnect
The Fabric interconnects provide a single point for connectivity and management for the entire system.
Typically deployed as an active-active pair, the system's fabric interconnects integrate all components
into a single, highly-available management domain controlled by Cisco UCS Manager. The fabric
interconnects manage all I/O efficiently and securely at a single point, resulting in deterministic I/O
latency regardless of a server or virtual machine's topological location in the system.
Cisco UCS 6200 Series Fabric Interconnects support the system's 80-Gbps unified fabric with
low-latency, lossless, cut-through switching that supports IP, storage, and management traffic using a
single set of cables. The fabric interconnects feature virtual interfaces that terminate both physical and
virtual connections equivalently, establishing a virtualization-aware environment in which blade, rack
servers, and virtual machines are interconnected using the same mechanisms. The Cisco UCS 6296UP is a 2-RU fabric interconnect that features up to 96 universal ports that can support 10 Gigabit Ethernet, Fiber Channel over Ethernet, or native Fiber Channel connectivity.
Figure 5
Cisco UCS 6296UP 96-Port Fabric Interconnect
Cisco UCS Manager
Cisco UCS Manager is an embedded, unified manager that provides a single point of management for
Cisco UCS. Cisco UCS Manager can be accessed through an intuitive GUI, a command-line interface
(CLI), or the comprehensive open XML API. It manages the physical assets of the server and storage
and LAN connectivity, and it is designed to simplify the management of virtual network connections
through integration with several major hypervisor vendors. It provides IT departments with the
flexibility to allow people to manage the system as a whole, or to assign specific management functions
to individuals based on their roles as managers of server, storage, or network hardware assets. It
simplifies operations by automatically discovering all the components available on the system and
enabling a stateless model for resource use.
Some of the key elements managed by Cisco UCS Manager include:
• Cisco UCS Integrated Management Controller (IMC) firmware
• RAID controller firmware and settings
• BIOS firmware and settings, including server universal user ID (UUID) and boot order
• Converged network adapter (CNA) firmware and settings, including MAC addresses, worldwide names (WWNs), and SAN boot settings
• Virtual port groups used by virtual machines, using Cisco Data Center VM-FEX technology
• Interconnect configuration, including uplink and downlink definitions, MAC address and WWN pinning, VLANs, VSANs, quality of service (QoS), bandwidth allocations, Cisco Data Center VM-FEX settings, and EtherChannels to upstream LAN switches
Cisco Unified Computing System is designed from the start to be programmable and self-integrating. A
server's entire hardware stack, ranging from server firmware and settings to network profiles, is
configured through model-based management. With Cisco virtual interface cards (VICs), even the
number and type of I/O interfaces is programmed dynamically, making every server ready to power any
workload at any time.
With model-based management, administrators manipulate a desired system configuration and associate
a model's policy driven service profiles with hardware resources, and the system configures itself to
match requirements. This automation accelerates provisioning and workload migration with accurate
and rapid scalability. The result is increased IT staff productivity, improved compliance, and reduced
risk of failures due to inconsistent configurations. This approach represents a radical simplification
compared to traditional systems, reducing capital expenditures (CAPEX) and operating expenses
(OPEX) while increasing business agility, simplifying and accelerating deployment, and improving
performance.
Cisco UCS Service Profiles
Figure 6
Traditional Provisioning Approach
A server's identity is made up of many properties, such as UUID, boot order, IPMI settings, BIOS firmware, BIOS settings, RAID settings, disk scrub settings, number of NICs, NIC speed, NIC firmware, MAC and IP addresses, number of HBAs, HBA WWNs, HBA firmware, FC fabric assignments, QoS settings, VLAN assignments, remote keyboard/video/monitor settings, and so on. This is a long list of configuration points that must be set to give a server its identity and make it unique from every other server in the data center. Some of these parameters are kept in the hardware of the server itself (such as BIOS firmware version, BIOS settings, boot order, and FC boot settings), while others are kept on the network and storage switches (such as VLAN assignments, FC fabric assignments, QoS settings, and ACLs). This results in the following server deployment challenges:
Lengthy Deployment Cycles
• Every deployment requires coordination among server, storage, and network teams
• Need to ensure correct firmware and settings for hardware components
• Need appropriate LAN and SAN connectivity
Response Time to Business Needs
• Tedious deployment process
• Manual, error-prone processes that are difficult to automate
• High OPEX costs; outages caused by human errors
Limited OS and Application Mobility
• Storage and network settings tied to physical ports and adapter identities
• Static infrastructure leads to over-provisioning, higher OPEX costs
Cisco Unified Computing System has uniquely addressed these challenges with the introduction of service profiles, which enable integrated, policy-based infrastructure management. Cisco UCS Service Profiles hold the DNA for nearly all configurable parameters required to set up a physical server. A set of user-defined policies (rules) allows quick, consistent, repeatable, and secure deployments of Cisco UCS servers.
Cisco UCS Service Profiles contain values for a server's property settings, including virtual network
interface cards (vNICs), MAC addresses, boot policies, firmware policies, fabric connectivity, external
management, and high availability information. By abstracting these settings from the physical server
into a Cisco Service Profile, the Service Profile can then be deployed to any physical compute hardware
within the Cisco UCS domain. Furthermore, Service Profiles can, at any time, be migrated from one
physical server to another. This logical abstraction of the server personality removes the dependency on the hardware type or model and is a result of Cisco's unified fabric model (rather than overlaying software tools on top).
This innovation is still unique in the industry despite competitors claiming to offer similar functionality.
In most cases, these vendors must rely on several different methods and interfaces to configure these
server settings. Furthermore, Cisco is the only hardware provider to offer a truly unified management
platform, with Cisco UCS Service Profiles and hardware abstraction capabilities extending to both blade
and rack servers.
Some of the key features and benefits of Cisco UCS service profiles are discussed below.
Service Profiles and Templates
In summary, service profiles represent all the attributes of a logical server in the Cisco UCS data model.
These attributes have been abstracted from the underlying attributes of the physical hardware and
physical connectivity. Using logical servers that are disassociated from the physical hardware removes
many limiting constraints around how servers are provisioned. Using logical servers also makes it easy
to repurpose physical servers for different applications and services.
Figure 7 represents how Server, Network, and Storage Policies are encapsulated in a service profile.
Figure 7
Service Profile Inclusions
The Cisco UCS Manager provisions servers utilizing service profiles. The Cisco UCS Manager
implements a role-based and policy-based management focused on service profiles and templates. A
service profile can be applied to any blade server to provision it with the characteristics required to
support a specific software stack. A service profile allows server and network definitions to move within
the management domain, enabling flexibility in the use of system resources.
Service profile templates are stored in the Cisco UCS 6200 Series Fabric Interconnects for reuse by
server, network, and storage administrators. Service profile templates consist of server requirements and
the associated LAN and SAN connectivity. Service profile templates allow different classes of resources
to be defined and applied to a number of resources, each with its own unique identities assigned from
predetermined pools.
The Cisco UCS Manager can deploy the service profile on any physical server at any time. When a
service profile is deployed to a server, the Cisco UCS Manager automatically configures the server,
adapters, Fabric Extenders, and Fabric Interconnects to match the configuration specified in the service
profile. A service profile template parameterizes the UIDs that differentiate between server instances.
This automation of device configuration reduces the number of manual steps required to configure
servers, Network Interface Cards (NICs), Host Bus Adapters (HBAs), and LAN and SAN switches.
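The way a template parameterizes identities can be pictured with a short, purely illustrative Python sketch (no Cisco API involved): shared policies live in the template, while each derived profile draws unique identifiers from pools, just as Cisco UCS Manager does with its UUID, MAC, and WWPN pools. The names and MAC ranges below are invented for the example.

from dataclasses import dataclass

def mac_pool(prefix):
    """Yield sequential MAC addresses, the way a UCS MAC pool hands out identities."""
    n = 0
    while True:
        yield f"{prefix}{n:02X}"
        n += 1

@dataclass
class ServiceProfile:
    name: str
    mac_fabric_a: str
    mac_fabric_b: str
    boot_policy: str  # shared policy inherited unchanged from the template

def instantiate(template, boot_policy, count, pool_a, pool_b):
    """Derive `count` profiles from one template; only pool-drawn identities differ."""
    return [ServiceProfile(f"{template}-{i + 1}", next(pool_a), next(pool_b), boot_policy)
            for i in range(count)]

for p in instantiate("ORARAC", "SAN-BOOT-A", 4,
                     mac_pool("00:25:B5:0A:00:"), mac_pool("00:25:B5:0B:00:")):
    print(p)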
Programmatically Deploying Server Resources
Cisco UCS Manager provides centralized management capabilities, creates a unified management
domain, and serves as the central nervous system of the Cisco Unified Computing System. Cisco UCS
Manager is embedded device management software that manages the system from end-to-end as a single
logical entity through an intuitive GUI, CLI, or XML API. Cisco UCS Manager implements role- and
policy-based management using service profiles and templates. This construct improves IT productivity
and business agility. Now infrastructure can be provisioned in minutes instead of days, shifting IT's focus from maintenance to strategic initiatives.
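As a concrete illustration of this API, the sketch below uses the open-source ucsmsdk Python library to log in to Cisco UCS Manager and list the discovered blades. The address and credentials are placeholders; later sketches in this document assume the same logged-in handle.

from ucsmsdk.ucshandle import UcsHandle

# Placeholder UCS Manager cluster address and credentials
handle = UcsHandle("10.10.10.2", "admin", "password")
handle.login()

# Every managed object is reachable by class ID over the XML API;
# "ComputeBlade" returns one object per discovered blade server
for blade in handle.query_classid("ComputeBlade"):
    print(blade.dn, blade.model, blade.serial, blade.oper_state)

# Call handle.logout() when finished; later sketches assume the handle stays logged in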
Dynamic Provisioning
Cisco UCS resources are abstract in the sense that their identity, I/O configuration, MAC addresses and
WWNs, firmware versions, BIOS boot order, and network attributes (including QoS settings, ACLs, pin
groups, and threshold policies) all are programmable using a just-in-time deployment model. A service
profile can be applied to any blade server to provision it with the characteristics required to support a
specific software stack. A service profile allows server and network definitions to move within the
management domain, enabling flexibility in the use of system resources. Service profile templates allow
different classes of resources to be defined and applied to a number of resources, each with its own
unique identities assigned from predetermined pools.
Cisco Nexus 5548UP Switch
The Cisco Nexus 5548UP is a 1RU 1 Gigabit and 10 Gigabit Ethernet switch offering up to 960 gigabits per second of throughput and scaling up to 48 ports. It offers 32 fixed 1/10 Gigabit Ethernet enhanced Small Form-Factor Pluggable (SFP+) Ethernet/FCoE or 1/2/4/8-Gbps native FC unified ports and one expansion slot. The slot can hold a combination of Ethernet/FCoE and native FC ports.
Figure 8
Cisco Nexus 5548UP Switch
The Cisco Nexus 5548UP Switch delivers innovative architectural flexibility, infrastructure simplicity,
and business agility, with support for networking standards. For traditional, virtualized, unified, and
high-performance computing (HPC) environments, it offers a long list of IT and business advantages,
including:
Architectural Flexibility
• Unified ports that support traditional Ethernet, Fiber Channel (FC), and Fiber Channel over Ethernet (FCoE)
• Synchronizes system clocks with accuracy of less than one microsecond, based on IEEE 1588
• Supports secure encryption and authentication between two network devices, based on Cisco TrustSec IEEE 802.1AE
• Offers converged fabric extensibility, based on emerging standard IEEE 802.1BR, with Fabric Extender (FEX) Technology portfolio, including:
– Cisco Nexus 2000 FEX
– Adapter FEX
– VM-FEX
Infrastructure Simplicity
• Common high-density, high-performance, data-center-class, fixed-form-factor platform
• Consolidates LAN and storage
• Supports any transport over an Ethernet-based fabric, including Layer 2 and Layer 3 traffic
• Supports storage traffic, including iSCSI, NAS, FC, RoE, and IBoE
• Reduces management points with FEX Technology
• Meets diverse data center deployments on one platform
• Provides rapid migration and transition for traditional and evolving technologies
• Offers performance and scalability to meet growing business needs
Business Agility
Specifications At-a-Glance
• A 1-rack-unit, 1/10 Gigabit Ethernet switch
• 32 fixed Unified Ports on base chassis and one expansion slot, totaling 48 ports
• The slot can support any of the three modules: Unified Ports, 1/2/4/8 native Fiber Channel, and Ethernet or FCoE
• Throughput of up to 960 Gbps
Hitachi Virtual Storage Platform G1000 Technologies and Benefits
Hitachi Virtual Storage Platform G1000 (VSP G1000) provides the always-available, agile, and
automated foundation you need for a continuous cloud infrastructure. This platform delivers
enterprise-ready software-defined storage, advanced global storage virtualization, and high performance
storage.
Supporting always-on operations, VSP G1000 includes self-service, non-disruptive migration and
active-active storage clustering for zero recovery time objectives. Automate your operations with
self-optimizing, policy-driven management.
Figure 9
Hitachi Virtual Storage Platform G1000
A VSP G1000 is configured as a collection of these major elements:
• 1 or 2 controller chassis containing controller boards, power supplies, and fans. Each controller may be configured with a mixture of the following controller boards: processors (Virtual Storage Directors), cache switches (Cache Path Control Adapters), front-end directors (FED), and back-end directors (BED)
• Up to 12 drive chassis (DC) supporting up to 2,304 drives
• Up to 2,048 GB of cache
• 1 to 6 19-inch racks
With certain configurations, a VSP G1000 can deliver extremely high single system performance:
• Up to 4 million IOPS, 8KB block, 100% random read cache miss
• Up to 50 GB/sec sustained 100% sequential read (256 KB blocks)
Hitachi Storage Virtualization Operating System
Hitachi Storage Virtualization Operating System spans and integrates multiple platforms. It integrates storage system software to provide system element management and advanced storage system functions. Used across multiple platforms, Storage Virtualization Operating System includes storage virtualization, thin provisioning, storage service level controls, dynamic provisioning, and performance instrumentation.
Storage Virtualization Operating System includes standards-based management software on a Hitachi Command Suite base. This provides storage configuration and control capabilities for your environment.
This solution uses Hitachi Dynamic Tiering, a part of the Storage Virtualization Operating System. Separately licensed, Dynamic Tiering virtualizes and automates mobility between tiers for maximum performance and efficiency.
Hitachi Dynamic Tiering
Hitachi Dynamic Tiering (HDT) simplifies storage administration by automatically optimizing data
placement in 1, 2 or 3 tiers of storage that can be defined and used within a single virtual volume. Tiers
of storage can be made up of internal or external (virtualized) storage, and the use of HDT can lower
capital costs. The intuitive unified management of HDT allows for lower operational costs and reduces
the challenges of ensuring applications are placed on the appropriate classes of storage.
Oracle Database 11g R2 RAC
Oracle Database 11g Release 2 provides the foundation for IT to successfully deliver more information
with higher quality of service, reduce the risk of change within IT, and make more efficient use of IT
budgets.
Oracle Database 11g R2 Enterprise Edition provides industry-leading performance, scalability, security, and reliability on a choice of clustered or single servers with a wide range of options to meet user needs.
Grid computing relieves users from concerns about where data resides and which computer processes
their requests. Users request information or computation and have it delivered - as much as they want,
whenever they want. For a DBA, the grid is about resource allocation, information sharing, and high
availability. Oracle Database with Real Application Clusters provides the infrastructure for your database
grid. Automatic Storage Management provides the infrastructure for a storage grid. Oracle Enterprise
Manager Grid Control provides you with holistic management of your grid.
Oracle Database 11g Release 2 Enterprise Edition comes with a wide range of options to extend the world's #1 database to help grow your business and meet your users' performance, security, and availability service-level expectations.
Key Features
• Protects from server failure, site failure, and human error, and reduces planned downtime
• Secures data and enables compliance with unique low-level security, fine-grained auditing, transparent data encryption, and total recall of data
• High-performance data warehousing, online analytic processing, and data mining
• Easily manages the entire lifecycle of information for the largest of databases
Design Topology
This section presents the physical and logical high-level design considerations for Cisco UCS networking, computing, and Hitachi Virtual Storage Platform G1000 for Oracle Database 11g R2 RAC deployments.
Hardware and Software Used for this Solution
Table 1    Hardware and Software used for Oracle Database 11g R2 GRID Infrastructure with RAC Option Deployment

Vendor  | Name                             | Version/Model                                                    | Description
Cisco   | Cisco 6296UP                     | UCSM 2.2(2c)                                                     | Cisco UCS 6200 UP Series Fabric Interconnects
Cisco   | Cisco UCS Chassis                | 5108                                                             | Chassis
Cisco   | Cisco UCS IOM                    | 2204XP                                                           | IO Module
Cisco   | Nexus 5548UP                     | NX-OS                                                            | Nexus 5500 series Unified Port switch
Cisco   | UCS Blade Server                 | B200 M3 / Intel Xeon E5-2697 v2 / 16 x 16GB DDR3 1866 MHz memory | Half-width blade server (database server)
Cisco   | Cisco UCS VIC                    | 1240                                                             | mLOM Virtual Interface Card adapter
Oracle  | Oracle Linux 6.4                 | 6.4 64-bit UEK                                                   | Operating system
Oracle  | Oracle 11g R2 GRID               | 11.2.0.4                                                         | GRID Infrastructure software
Oracle  | Oracle 11g R2 Database           | 11.2.0.4                                                         | Database software
Oracle  | Oracle SwingBench                | 2.4.0.845                                                        | Oracle benchmark kit
Hitachi | Hitachi Virtual Storage Platform | VSP G1000                                                        | Hitachi Virtual Storage Platform
Hitachi | Hitachi Device Manager           | D/N:Isv-47.49                                                    | Hitachi Device Manager to manage the Hitachi storage
Hitachi | Hitachi Firmware                 | 80-01-24-00/00                                                   | Hitachi firmware version
Hitachi | Hitachi Disk Drives              | 1600 GB FMD                                                      | Flash Module Drives
Hitachi | Hitachi Disk Drives              | 800 GB 10k RPM SAS                                               | SAS drives
Hitachi | Hitachi Disk Drives              | 4TB 7.2k RPM NL-SAS                                              | NL-SAS drives
Cisco UCS Networking and Hitachi Storage Connectivity Topology
This section explains Cisco UCS networking and computing design considerations when deploying Oracle Database 11g R2 RAC with Hitachi VSP G1000. In this design, the FC traffic is isolated from the regular management and application data network on the same Cisco UCS infrastructure by defining logical VLANs and VSANs to provide better data security. Table 2 shows the hardware details used for this solution.
Table 2    Details about Cisco Unified Computing System and Hitachi Storage

Physical Cisco Unified Computing System Configuration
Description                                                                                   | Quantity
Cisco UCS 5108 Blade Server Chassis, with 4 power supply units, 8 fans and 2 fabric extenders | 2
Cisco UCS B200-M3 half-width blades                                                           | 4
Two-socket, twelve-core Intel Xeon E5-2697 v2 series 2.70 GHz processors                      | 96 cores
16 GB DDR3 DIMM, 1866 MHz (16 per server, totaling 256 GB per blade server)                   | 64
Cisco UCS VIC 1240 Virtual Interface Card, 256 PCI devices, Dual 4 x 10G (1 per server)       | 4
Cisco UCS 6296UP 96-port Fabric Interconnect                                                  | 2
16-port 8 Gbps Fibre Channel expansion module                                                 | 2
Cisco Nexus 5548UP Switch                                                                     | 2

Physical Hitachi Virtual Storage Platform G1000 Configuration
Description                                                                                   | Quantity
Hitachi Accelerated Flash chassis                                                             | 1
1.6 TB Flash Module Drives (FMD)                                                              | 20
800 GB SAS 10k RPM drives                                                                     | 80
4 TB NL-SAS 7.2k RPM drives                                                                   | 24
Front-end connectivity modules (4 x 8 Gb/sec Fiber Channel ports each, 16 ports total)        | 4
Back-end connectivity modules (8 x 6 Gb/sec SAS links each, 32 links total)                   | 4
Storage cache (GB)                                                                            | 466
Figure 10 presents a detailed view of the physical topology, and some of the main components of Cisco
Unified Computing System.
Figure 10    Cisco UCS Networking and Hitachi Virtual Storage Platform G1000 Architecture

Table 3    vPC Details

Network | vPC | VLAN ID
Public  | 33  | 10, 191
Private | 34  | 10, 191
As shown in Figure 10, a pair of Cisco UCS 6296UP fabric interconnects carries both storage and network traffic from the blades with the help of the Cisco Nexus 5548UP switches. The 10-Gbps FCoE traffic leaves the UCS fabrics through the Nexus 5548 switches to the Hitachi Virtual Storage Platform G1000. To effectively handle the higher I/O requirements, FC boot is the better solution.
Both the fabric interconnects and the Cisco Nexus 5548UP switches are clustered, with a peer link between them, to provide high availability. Two virtual PortChannels (vPCs) are configured to provide public network and private network paths from the blades to the northbound switches. Each vPC has VLANs created for application network data and management data paths. For more information about vPC
configuration on the Cisco Nexus 5548UP Switch, refer to: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/configuration_guide_c07-543563.html.
As illustrated in Figure 10, eight links (four per chassis) go to Fabric Interconnect A (ports 1 through 8). Similarly, eight links go to Fabric Interconnect B. Fabric Interconnect A links are used for the Oracle public network and FC storage access, and Fabric Interconnect B links are used for Oracle private interconnect traffic and FC storage access.
Note
For Oracle RAC configurations on Cisco Unified Computing System, we recommend keeping all private interconnects local on a single fabric interconnect. In that case, the private traffic stays local to that fabric interconnect and is not routed via the northbound network switch. In other words, all inter-blade (RAC node private) communication is resolved locally at the fabric interconnect, which significantly reduces latency for Oracle Cache Fusion traffic.
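The northbound Cisco Nexus 5548UP configuration implied by Table 3 can also be scripted. Below is a minimal, untested sketch using the netmiko library against one switch of the pair; the management address and credentials are placeholders, and the vPC domain and peer link are assumed to be configured already.

from netmiko import ConnectHandler

# Placeholder management address and credentials for one Nexus 5548UP of the pair
switch = ConnectHandler(device_type="cisco_nxos", host="10.10.10.5",
                        username="admin", password="password")

# VLANs 10 (public) and 191 (private), plus the two vPC port channels from Table 3
config = [
    "vlan 10", "name Oracle-Public",
    "vlan 191", "name Oracle-Private",
    "interface port-channel 33",
    "switchport mode trunk",
    "switchport trunk allowed vlan 10,191",
    "vpc 33",
    "interface port-channel 34",
    "switchport mode trunk",
    "switchport trunk allowed vlan 10,191",
    "vpc 34",
]
print(switch.send_config_set(config))
switch.disconnect()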
Hitachi Virtual Storage Platform G1000 Storage Layout
This section describes the storage architecture of Hitachi Virtual Storage Platform G1000 environment.
The architecture takes into consideration Hitachi Data Systems and Oracle recommended practices for
the deployment of database storage design.
Figure 11 illustrates the storage provisioning for this solution.
Figure 11
Hitachi Virtual Storage Platform G1000 Storage Layout
Cisco UCS Manager Configuration Overview
Detailed information about configuring the Cisco Unified Computing System is available at http://www.cisco.com/en/US/products/ps10281/products_installation_and_configuration_guides_list.html.
Note
It is beyond the scope of this document to cover every configuration detail; however, an attempt is made to include as many of the relevant steps as possible.
High-Level Steps to Configure Cisco Unified Computing System
The following are the high-level steps involved for a Cisco UCS configuration:
1. Configure Fabric Interconnects for Chassis and Blade Discovery
a. Configure Global Policies
b. Configure Server Ports
2. Configure LAN and SAN on Cisco UCS Manager
a. Configure and Enable Ethernet LAN uplink Ports
b. Configure and Enable FC SAN uplink Ports
c. Configure VLAN
d. Configure VSAN
3. Configure UUID, MAC, WWNN, and WWPN Pools
a. UUID Pool Creation
b. IP Pool and MAC Pool Creation
c. WWNN Pool and WWPN Pool Creation
4. Configure vNIC and vHBA Template
a. Create vNIC templates
b. Create Public vNIC template
c. Create Private vNIC template
d. Create Storage vNIC template
e. Create HBA templates
5. Configure Ethernet Uplink Port-Channels
6. Create Server Boot Policy for SAN Boot
Details for each step are discussed in subsequent sections below.
Configuring Fabric Interconnects for Blade Discovery
Cisco UCS 6296UP Fabric Interconnects are deployed in pairs for redundancy, providing resiliency in case of failures. The first step is to establish connectivity between the blades and the fabric interconnects.
Configure Global Policies
In the Cisco UCS Manager GUI, navigate to Equipment > Policies (right pane) > Global Policies. As shown in Figure 12, select the Chassis/FEX discovery policy as "4-link" from the drop-down list.
Figure 12
Configure Global Policy
Configuring Server Ports
Click Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module > Ethernet Ports and select the desired number of ports. Right-click and select "Configure as Server Port" as shown in Figure 13.
Figure 13
Configuring Server Ports
Note
We selected ports 9 through 16 to configure as server ports. After configuring the server ports, you will see their details as shown in Figure 14.
Figure 14
All Configured Server Ports
Configuring LAN and SAN on Cisco UCS Manager
Perform LAN and SAN configuration steps in the Cisco UCS Manager as shown in Figure 15.
Configure and Enable Ethernet LAN Uplink Ports
From Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module > Ethernet Ports menu,
select the desired number of ports and right-click to "Configure as Uplink Port" as shown in Figure 15.
Figure 15
Configure Ethernet LAN Uplink Ports
As shown in Figure 15, we have selected ports 1 and 2 on Fabric Interconnect A and configured them as Ethernet uplink ports. Repeat the same step on Fabric Interconnect B to configure ports 1 and 2 as Ethernet uplink ports.
These ports will be used to create Virtual Port-channels in later sections.
Configure and Enable FC Ports
From the Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module > FC Ports menu, select the desired ports and enable them. Figure 16 shows the configuration of FC ports.
Figure 16
Configure FC ports
Configure VLAN
In Cisco UCS Manager, click LAN > LAN Cloud > VLANs and right-click to Create VLANs. In this solution, you need to create two VLANs: one for the private network (VLAN 191) and one for the public network (VLAN 10). These two VLANs will be used in the vNIC templates that are discussed later.
Figure 17
Create VLAN for Public Network
In the screenshot above, we have highlighted VLAN 10 creation for the public network. It is also very important that you create both VLANs as global across both fabric interconnects. This way, VLAN identity is maintained across the fabric interconnects in case of NIC failover.
Create VLANs for the public and private networks. If you are using the Oracle HAIP feature, you may have to configure additional VLANs associated with additional vNICs as well.
The following is the summary of VLANs once you complete VLAN creation:
• VLAN ID 10 for public interfaces
• VLAN ID 191 for Oracle RAC private interconnect interfaces
Note
Although the private VLAN traffic stays local within the Cisco UCS domain during normal operating conditions, it is necessary to configure entries for these private VLANs in the northbound network switch. This allows the switch to route interconnect traffic appropriately in case of partial link failures. These scenarios and traffic routing are discussed in detail in later sections.
Figure 18 summarizes all the VLANs for Public and Private network.
Figure 18
VLAN Summary
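The same step can be scripted through the XML API. A minimal ucsmsdk sketch, assuming the logged-in handle from the earlier sketch; the VLAN names are placeholders:

from ucsmsdk.mometa.fabric.FabricVlan import FabricVlan

# "fabric/lan" is the global LAN cloud, so these VLANs span both fabrics
for name, vlan_id in (("Oracle-Public", "10"), ("Oracle-Private", "191")):
    handle.add_mo(FabricVlan(parent_mo_or_dn="fabric/lan", name=name, id=vlan_id))
handle.commit()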
Configure VSAN
In Cisco UCS Manager, click SAN > SAN Cloud > VSANs and right-click to Create VSAN. In this study, we created VSANs 101 and 102 for SAN boot and storage access.
Figure 19
Configuring VSAN in Cisco UCS Manager
Figure 20
Creating VSAN for Fabric A
Note
We created a VSAN on each fabric. It is also very important that you create both VSANs as global across both fabric interconnects. This way, VSAN identity is maintained across the fabric interconnects in case of HBA failover. The VSAN ID on Fabric A is 101 and, similarly, the VSAN ID on Fabric B is 102.
Figure 21 shows the created VSANs in Cisco UCS Manager.
Figure 21
VSAN Summary
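A corresponding ucsmsdk sketch for the two VSANs, again assuming the logged-in handle; mapping each FCoE VLAN to the same ID as its VSAN is an assumption made for the example:

from ucsmsdk.mometa.fabric.FabricVsan import FabricVsan

# Fabric-specific VSANs: 101 under SAN cloud A, 102 under SAN cloud B
handle.add_mo(FabricVsan(parent_mo_or_dn="fabric/san/A", name="VSAN101",
                         id="101", fcoe_vlan="101"))
handle.add_mo(FabricVsan(parent_mo_or_dn="fabric/san/B", name="VSAN102",
                         id="102", fcoe_vlan="102"))
handle.commit()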
Configure Pools
When the VLANs and VSANs are created, configure the pools for UUID, MAC addresses, management IP, and WWN.
UUID Pool Creation
In Cisco UCS Manager, click Servers > Pools > UUID Suffix Pools and right-click "Create UUID Suffix Pool" to create a new pool.
Figure 22
Create UUID Pools
Figure 23 shows the "Oracle-HDS-UUID" Pool.
Figure 23
UUID Pool Summary
IP Pool and MAC Pool Creation
In Cisco UCS Manager, click LAN > Pools > IP Pools and right-click "Create IP Pool Ext-mgmt".
Figure 24
Create IP Pool
Next, click MAC Pools to "Create MAC Pools". We created Oracle-HDS-MAC-A and Oracle-HDS-MAC-B for all the vNIC MAC addresses.
Figure 25
Create MAC Pool
The IP pool will be used for console management, while the MAC pools supply MAC addresses for the vNICs carved out later.
WWNN Pool and WWPN Pool Creation
In Cisco UCS Manager, click SAN > Pools > WWNN Pools and right-click to "Create WWNN Pools". Next, click WWPN Pools to "Create WWPN Pools". These WWNN and WWPN entries will be used for the boot-from-SAN configuration. We created the Oracle-HDS-WWNN pool for worldwide node names, and the Oracle-HDS-WWPN-A and Oracle-HDS-WWPN-B pools for worldwide port names, as shown below.
Figure 26
Create WWNN and WWPN Pool
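Pool creation can likewise be scripted. A ucsmsdk sketch for one MAC pool, assuming the logged-in handle; the address block is a placeholder drawn from the 00:25:B5 range Cisco UCS uses by default:

from ucsmsdk.mometa.macpool.MacpoolPool import MacpoolPool
from ucsmsdk.mometa.macpool.MacpoolBlock import MacpoolBlock

pool = MacpoolPool(parent_mo_or_dn="org-root", name="Oracle-HDS-MAC-A")
handle.add_mo(pool)
# "from" is a Python reserved word, so the SDK exposes the XML attribute as r_from
handle.add_mo(MacpoolBlock(parent_mo_or_dn=pool,
                           r_from="00:25:B5:0A:00:00", to="00:25:B5:0A:00:3F"))
handle.commit()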
At this point pool creation is complete for this setup. Next, create vNIC and vHBA templates.
Set Jumbo Frames in Both Cisco UCS Fabrics
To configure jumbo frames and enable quality of service in the Cisco UCS Fabric, follow these steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Choose LAN > LAN Cloud > QoS System Class.
3. In the right pane, click the General tab.
4. On the Best Effort row, enter 9216 in the box under the MTU column.
5. Click Save Changes.
6. Click OK.
Figure 27
Setting up Jumbo Frame on Fabric Interconnect
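Once the QoS class is saved, end-to-end jumbo frame support on the private interconnect can be spot-checked from any RAC node. A small sketch (the target interconnect address is hypothetical): an ICMP payload of 8972 bytes plus 28 bytes of headers exercises a full 9000-byte MTU with fragmentation disallowed.

import subprocess

# -M do sets the don't-fragment bit (Linux ping); the ping fails
# if any hop along the path has an MTU below 9000 bytes
subprocess.run(["ping", "-M", "do", "-s", "8972", "-c", "3", "192.168.191.102"],
               check=True)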
Configure vNIC and vHBA Template
Create vNIC Templates
In Cisco UCS Manager, click LAN > Policies > vNIC Templates and right-click to "Create vNIC Template".
Figure 28
Create vNIC Template
Two vNIC templates have been created for this Oracle RAC on Cisco Unified Computing System with
the Hitachi Storage configuration; one for Fabric A and another for Fabric B.
Figure 29
vNIC Template for Fabric A
Cisco Unified Computing System and Oracle RAC 11gR2 with Hitachi Virtual Storage Platform G1000
35
Design Topology
Figure 30
vNIC Template for Fabric B
Figure 31 shows the created vNIC templates on Fabric A and Fabric B.
Figure 31
vNIC Template Summary
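These templates can also be created programmatically. The untested sketch below, assuming the logged-in handle, mirrors the Fabric A template; the class and attribute names follow the underlying vnicLanConnTempl and vnicEtherIf XML objects and should be verified against your ucsmsdk version:

from ucsmsdk.mometa.vnic.VnicLanConnTempl import VnicLanConnTempl
from ucsmsdk.mometa.vnic.VnicEtherIf import VnicEtherIf

# Updating template on Fabric A, drawing MACs from the pool created earlier
templ = VnicLanConnTempl(parent_mo_or_dn="org-root", name="Oracle-HDS-vNIC-A",
                         switch_id="A", templ_type="updating-template",
                         ident_pool_name="Oracle-HDS-MAC-A", mtu="9000")
handle.add_mo(templ)
# Attach the public VLAN and mark it native for this template
handle.add_mo(VnicEtherIf(parent_mo_or_dn=templ, name="Oracle-Public",
                          default_net="yes"))
handle.commit()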
Create HBA Templates
In Cisco UCS Manager, click SAN > Policies > vHBA Templates and right-click to "Create vHBA Template".
Figure 32
Create vHBA templates
Figure 33
vHBA Template for Fabric A
Figure 34
vHBA Template for Fabric B
Two vHBA templates have been created, Oracle-HDS-HBA-A and Oracle-HDS-HBA-B, as shown.
Next, configure the Ethernet uplink port-channels.
Configure Ethernet Uplink Port-Channels
To configure port channels, click LAN > LAN Cloud > Fabric A > Port Channels and right-click to "Create Port-Channel". Select the desired Ethernet uplink ports configured earlier. Repeat the same steps to create the port channel on Fabric B. In the current setup, we used ports 1 and 2 on Fabric A, configured as port channel 1. Similarly, ports 1 and 2 on Fabric B are configured as port channel 2.
Figure 35
Configuring Port Channels
Figure 36
Fabric A Ethernet Port-Channel Details
Figure 37 shows the configured port channels on Fabric A and Fabric B.
Figure 37
Port-Channels on Fabric A and Fabric B
When the above preparation steps are completed, create a service profile template from which service profiles can easily be derived.
Create Local Disk Configuration Policy (Optional)
A local disk configuration for the Cisco UCS environment is necessary if the servers in the environment
do not have a local disk.
Note
This policy should not be used on servers that contain local disks.
In Cisco UCS Manager, click the Servers tab in the navigation pane.
1. Choose Policies > root.
2. Right-click Local Disk Config Policies.
3. Choose Create Local Disk Configuration Policy.
4. Enter SAN-Boot as the local disk configuration policy name.
5. Change the mode to No Local Storage.
6. Click OK to create the local disk configuration policy.
Figure 38
Creating Local Disk Configuration Policy
7. Click OK.
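The equivalent ucsmsdk call, assuming the logged-in handle (the class mirrors the storageLocalDiskConfigPolicy XML object):

from ucsmsdk.mometa.storage.StorageLocalDiskConfigPolicy import StorageLocalDiskConfigPolicy

# mode "no-local-storage" matches the SAN-Boot policy created in the GUI
handle.add_mo(StorageLocalDiskConfigPolicy(parent_mo_or_dn="org-root",
                                           name="SAN-Boot",
                                           mode="no-local-storage"))
handle.commit()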
Create SAN Boot Policies
This procedure applies to a Cisco UCS environment in which the storage SAN ports are configured in
the following ways:
• The SAN ports 1A, 3A, 5A and 7A of Hitachi storage cluster-1 are connected to the Cisco Nexus 5548 switch A.
• The SAN ports 1C, 3C, 5C and 7C of Hitachi storage cluster-1 are connected to the Cisco Nexus 5548 switch B.
• The SAN ports 2A, 4A, 6A and 8A of Hitachi storage cluster-2 are connected to the Cisco Nexus 5548 switch A.
• The SAN ports 2C, 4C, 6C and 8C of Hitachi storage cluster-2 are connected to the Cisco Nexus 5548 switch B.
Two SAN boot policies are configured in this procedure, one named SAN-BOOT-A and the other named SAN-BOOT-B.
The SAN-BOOT-A policy configures the SAN primary's primary-target to be FC port 1A on storage cluster 1 and the SAN primary's secondary-target to be FC port 2A on storage cluster 2. Similarly, the SAN secondary's primary-target is FC port 3C on storage cluster 1 and the SAN secondary's secondary-target is FC port 4C on storage cluster 2.
The SAN-BOOT-B policy configures the SAN primary's primary-target to be FC port 7C on storage cluster 1 and the SAN primary's secondary-target to be FC port 8C on storage cluster 2. Similarly, the SAN secondary's primary-target is FC port 5A on storage cluster 1 and the SAN secondary's secondary-target is FC port 6A on storage cluster 2.
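Because these eight boot-path entries are easy to transpose, it can help to capture the target matrix in a small, purely illustrative script and sanity-check it before entering the WWPNs in the GUI; the port names below come from the cabling description above.

# SAN boot policy -> (vHBA, role) -> (storage cluster, FC port)
BOOT_PATHS = {
    "SAN-BOOT-A": {("hba0", "primary"): (1, "1A"), ("hba0", "secondary"): (2, "2A"),
                   ("hba1", "primary"): (1, "3C"), ("hba1", "secondary"): (2, "4C")},
    "SAN-BOOT-B": {("hba1", "primary"): (1, "7C"), ("hba1", "secondary"): (2, "8C"),
                   ("hba0", "primary"): (1, "5A"), ("hba0", "secondary"): (2, "6A")},
}

for policy, paths in BOOT_PATHS.items():
    # Each policy must reach both storage clusters for redundancy
    assert {cluster for cluster, _ in paths.values()} == {1, 2}, policy
    print(policy, "->", sorted(paths.items()))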
To create boot policies for the Cisco UCS environment, follow these steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Policies > root.
3. Right-click Boot Policies.
4. Choose Create Boot Policy.
5. Enter SAN-BOOT-A as the name of the boot policy.
6. (Optional) Enter a description for the boot policy.
7. Keep the Reboot on Boot Order Change check box unchecked.
8. Expand the Local Devices drop-down menu and choose Add CD-ROM.
9. Expand the vHBAs drop-down menu and choose Add SAN Boot.
10. In the Add SAN Boot dialog box, enter "hba0" in the vHBA field.
11. Make sure that the Primary radio button is selected as the SAN boot type.
12. Click OK to add the SAN boot initiator.
Figure 39
Adding SAN Boot Initiator for Fabric A
13. From the vHBA drop-down menu, choose Add SAN Boot Target.
14. Keep 0 as the value for Boot Target LUN.
15. Enter the WWPN for FC port 1A on storage cluster 1.
Note
To obtain this information, log in to storage cluster 1 and get the WWPN of port 1A. Make sure you enter the port name and not the node name.
16. Keep the Primary radio button selected as the SAN boot target type.
17. Click OK to add the SAN boot target.
Figure 40
Adding SAN Boot Target for Fabric A
18. From the vHBA drop-down menu, choose Add SAN Boot Target.
19. Keep 0 as the value for Boot Target LUN.
20. Enter the WWPN for FC port 2A on storage cluster 2.
Note
To obtain this information, log in to the storage controller and get the WWPN for port 2A. Make sure you enter the port name and not the node name.
21. Click OK to add the SAN boot target.
Figure 41
Adding Secondary SAN Boot Target for Fabric A
22. From the vHBA drop-down menu, choose Add SAN Boot.
23. In the Add SAN Boot dialog box, enter "hba1" in the vHBA box.
24. The SAN boot type should automatically be set to Secondary, and the Type option should be
unavailable.
25. Click OK to add the SAN boot initiator.
Figure 42
Adding SAN Boot Initiator for Fabric B
26. From the vHBA drop-down menu, choose Add SAN Boot Target.
27. Keep 0 as the value for Boot Target LUN.
28. Enter the WWPN for FC port 3C on storage cluster 1.
Note
To obtain this information, log in to the storage controller and get the WWPN for port 3C. Make sure you enter the port name and not the node name.
29. Keep Primary as the SAN boot target type.
30. Click OK to add the SAN boot target.
Figure 43
Adding Primary SAN Boot Target for Fabric B
31. From the vHBA drop-down menu, choose Add SAN Boot Target.
32. Keep 0 as the value for Boot Target LUN.
33. Enter the WWPN for FC port 4C on storage cluster 2.
Note
To obtain this information, log in to the storage controller and get the WWPN for port 4C. Make sure you enter the port name and not the node name.
34. Click OK to add the SAN boot target.
Figure 44
Adding Secondary SAN Boot Target
35. Click OK, and then OK again to create the boot policy.
36. Right-click Boot Policies again.
37. Choose Create Boot Policy.
38. Enter SAN-BOOT-B as the name of the boot policy.
39. (Optional) Enter a description of the boot policy.
40. Keep the Reboot on Boot Order Change check box unchecked.
41. From the Local Devices drop-down menu choose Add CD-ROM.
42. From the vHBA drop-down menu choose Add SAN Boot.
43. In the Add SAN Boot dialog box, enter "hba1" in the vHBA box.
44. Make sure that the Primary radio button is selected as the SAN boot type.
45. Click OK to add the SAN boot initiator.
Figure 45
Adding SAN Boot Initiator for Fabric B
46. From the vHBA drop-down menu, choose Add SAN Boot Target.
47. Keep 0 as the value for Boot Target LUN.
48. Enter the WWPN for FC port 7C on storage cluster 1.
Note
To obtain this information, log in to the storage controller and get the WWPN for port 7C. Make sure you enter the port name and not the node name.
49. Keep Primary as the SAN boot target type.
50. Click OK to add the SAN boot target.
Figure 46
Adding Primary SAN Boot Target for Fabric B
51. From the vHBA drop-down menu, choose Add SAN Boot Target.
52. Keep 0 as the value for Boot Target LUN.
53. Enter the WWPN for FC port 8C on storage cluster 2.
Note
To obtain this information, log in to the storage controller and get the WWPN for port 8C. Make sure you enter the port name and not the node name.
54. Click OK to add the SAN boot target.
Figure 47
Adding Secondary SAN Boot Target for Fabric B
55. From the vHBA menu, choose Add SAN Boot.
56. In the Add SAN Boot dialog box, enter "hba0" in the vHBA box.
57. The SAN boot type should automatically be set to Secondary, and the Type option should be
unavailable.
58. Click OK to add the SAN boot initiator.
Figure 48
Adding SAN Boot for Fabric A
59. From the vHBA menu, choose Add SAN Boot Target.
60. Keep 0 as the value for Boot Target LUN.
61. Enter the WWPN for FC port 5A on storage cluster 1.
Note
To obtain this information, log in to the storage controller and get the WWPN for port 5A. Make sure you enter the port name and not the node name.
62. Keep Primary as the SAN boot target type.
63. Click OK to add the SAN boot target.
Figure 49
Adding Primary SAN Boot Target for Fabric A
64. From the vHBA drop-down menu, choose Add SAN Boot Target.
65. Keep 0 as the value for Boot Target LUN.
66. Enter the WWPN for FC port 6A on storage cluster 2.
Note
To obtain this information, log in to the storage controller and get the WWPN for port 6A. Make sure you enter the port name and not the node name.
67. Click OK to add the SAN boot target.
Figure 50
Adding Secondary SAN Boot Target for Fabric A
68. Click OK, and then click OK again to create the boot policy.
After creating the FC boot policies for Fabric A and Fabric B, you can view the boot order in the Cisco UCS Manager GUI. To view the boot order, navigate to Servers > Policies > Boot Policies. Click boot policy SAN-BOOT-A to view the boot order for Fabric A in the right pane of UCS Manager. Similarly, click boot policy SAN-BOOT-B to view the boot order for Fabric B.
Figure 51
SAN Boot Details for Fabric A
Figure 52
SAN Boot Details for Fabric B
Service Profile Creation and Association to Cisco UCS Blade Servers
Service profile templates enable policy-based server management and help ensure consistent server resource provisioning that meets predefined workload needs.
Create Service Profile Template
To create a service profile template, complete the following steps:
1. In Cisco UCS Manager, click Servers > Service Profile Templates > root, then right-click root and choose "Create Service Profile Template."
Figure 53
Create Service Profile Template
2. Enter a template name, select the UUID pool that was created earlier, and click Next.
Figure 54
Creating Service Profile Template - Identify
3. In the Networking window, select the Dynamic vNIC that was created earlier.
Figure 55
Creating Service Profile Template - Networking
4. In the Networking page, create vNICs, one on each fabric, and associate them with the VLAN policies created earlier. Select Expert mode and click Add to add one or more vNICs that the server should use to connect to the LAN.
5. In the Create vNIC page, select "Use vNIC Template," choose Oracle-HDS-vNICA as the adapter policy, and enter "eth0" as the vNIC name.
Figure 56
Creating Service Profile Template - Create vNIC
6. Create the vNIC "eth1" with the appropriate vNIC template mapping.
7. When the vNICs are created, create the vHBAs. In the Storage page, select Expert mode, choose the WWNN pool created earlier, and click Add to create the vHBAs.
Figure 57
Creating Service Profile Template - Storage
The following four vHBAs have been created:
• hba0 using template Oracle-HDS-HBA-A.
• hba1 using template Oracle-HDS-HBA-B.
• hba2 using template Oracle-HDS-HBA-A.
• hba3 using template Oracle-HDS-HBA-B.
Figure 58
Creating Service Profile Template - Create vHBA
For this Oracle RAC configuration, the Cisco Nexus 5548UP is used for zoning, so skip the zoning
section and use the default vNIC/vHBA placement. Also skip the vMedia Policy.
Server Boot Policy
To create the server boot policy, complete the following steps:
1. In the Server Boot Order page, choose the boot policy we created for SAN boot and click Next.
Figure 59
Configure Server Boot Policy during Service Profile template Creation
The remaining maintenance and assignment policies were left at their defaults in this configuration. However, they may vary from site to site depending on workloads, best practices, and policies.
2. Create one more service profile template, "Oracle-HDS-Fabric-B," using boot policy "SAN-BOOT-B". Two service profile templates are now created: one using boot policy "SAN-BOOT-A" and the other using boot policy "SAN-BOOT-B".
Figure 60
Service Profile template Creation Details
Create Service Profiles from Service Profile Templates
To create service profiles from a template, complete the following steps:
1. In Cisco UCS Manager, click Servers > Service Profile Templates and click "Create Service Profiles from Template."
Figure 61
Create Service profile from Service Profile Template
Figure 62
Create Service Profile from Service Profile Template "Oracle-HDS-Fabric-A"
Figure 63
Create Service Profile from Service Profile Template "Oracle-HDS-Fabric-B"
Four service profiles have been created, as listed below: two using the template "Oracle-HDS-Fabric-A" and two using the template "Oracle-HDS-Fabric-B".
• Oracle-HDS-SP-A1
• Oracle-HDS-SP-A2
• Oracle-HDS-SP-B1
• Oracle-HDS-SP-B2
Associating Service Profile to the Servers
To associate service profiles to the servers, complete the following steps:
1. Under the Servers tab, select the desired service profile, and select Change Service Profile Association.
Figure 64
Associating Service Profile to Cisco UCS Blade Servers
2. Right-click the name of the service profile (for example, Oracle-HDS-SP-A1) that you want to associate with the server and select "Change Service Profile Association."
3. In the Change Service Profile Association page, from the Server Assignment drop-down list, select the existing server that you would like to assign, and click OK.
Figure 65
Changing Service Profile Association
4. Repeat the same steps to associate the remaining three service profiles with the blade servers.
5. Make sure all the service profiles are associated, as shown below.
Figure 66
Associated Service profiles Summary
Configuring Hitachi Virtual Storage Platform (G1000)
The following procedures for configuring the storage in this solution assume that you have installed all the appropriate licenses on your storage system.
Configure Fibre Channel Port Settings
To configure your storage Fibre Channel ports using Hitachi Device Manager software, do the following:
1. Log on to Hitachi Device Manager.
Note
You must have modify privileges when using Hitachi Device Manager software to complete this process.
2. Click the Array Name link to open the Oracle database server environment storage system.
3. Expand the Settings heading and click the Ports/HostGroups link.
4. Click the Ports tab.
5. Click Edit Ports.
6. Check the ports that are zoned to connect to the Oracle database server on the SAN.
7. Click Enable from the Port Security list.
8. Click Auto from the Port Speed list.
9. Click ON from the Fabric list.
10. Click P-to-P from the Connection Type list and then click OK.
A message is displayed saying that the change will interrupt I/O on any host currently connected to the port. Figure 67 shows the Edit Ports changes.
Figure 67
Hitachi Virtual Storage Platform G1000 Edit Ports
11. Click Confirm and wait a few seconds for the change to take place.
After establishing the connection between the storage system and the host, the Ports window shows all ports in an ON status, as shown in Figure 68 below.
Figure 68
Hitachi Virtual Storage Platform G1000 FC Port details
Create Parity Groups
This solution uses twenty-eight parity groups created on the Hitachi Virtual Storage Platform G1000.
Table 4
Details of the Parity Groups

Parity Group            Purpose                                RAID Level        Drive Type                        No. of Drives   Capacity (GB)
1-1                     Operating system for Oracle RAC        RAID-10 (2D+2D)   800 GB SAS 10K RPM drives         4               1,610
                        Database server
1-2 – 1-14, 2-1 – 2-7   Oracle RAC Database                    RAID-10 (2D+2D)   800 GB SAS 10K RPM drives         4               1,610
9-1 – 9-3, 10-1 – 10-2  Oracle RAC Database                    RAID-10 (2D+2D)   1.6 TB Flash Module Drives (FMD)  4               3,276
5-1 – 5-3               Oracle RAC Database                    RAID-6 (6D+2P)    4 TB NL-SAS 7.2K RPM drives       8               21,883
To create a RAID group using Hitachi Device Manager software, do the following:
1. Log on to Hitachi Device Manager.
You must have modify privileges when using Hitachi Device Manager software to complete this process.
2. Click the Array Name link to open the storage system.
3. Expand the Groups heading in the storage system pane and then click the Volumes link. The right pane displays three tabs: Volumes, Parity Groups, and DP Pools.
4. Click the Parity Groups tab and then click Create RG. The Create Raid Group window opens.
5. Use Table 4 to configure the RAID Level and Combination for each RAID group in the Create Raid Group window. The Number of Parity Groups changes based on your RAID level and combination choices.
6. Click the Automatic Selection option. If you have different types of drives installed in the storage system (either type or capacity), click the Drive Type value and Drive Capacity value from each list. Using automatic selection is the recommended practice from Hitachi Data Systems. Hitachi Device Manager uses the next available drives of the selected type and capacity.
7. Click OK. A message indicates the successful creation of the RAID group.
8. Click Close. The formatting process to create the RAID group starts immediately in the background.
Figure 69 shows the parity groups in the Hitachi Virtual Storage Platform G1000 used in this solution.
Figure 69
Hitachi Virtual Storage Platform G1000 Parity Group
Create Hitachi Logical Devices (LDEVs)
This procedure creates the following:
• 28 logical devices used for the Oracle RAC database
• 4 logical devices used for the operating system of the Oracle RAC database servers
Table 5 lists the details of the logical devices created in Hitachi Virtual Storage Platform G1000.
Table 5
Details of Logical Devices (LDEVs)

RAID Group    LDEVs                 LDEV Size (GB)   Purpose
1-1           00:00:00              200              O/S boot for the first node in a four-node Oracle RAC database server
              00:00:01              200              O/S boot for the second node in a four-node Oracle RAC database server
              00:00:02              200              O/S boot for the third node in a four-node Oracle RAC database server
              00:00:03              200              O/S boot for the fourth node in a four-node Oracle RAC database server
9-1 – 9-3     00:00:08 – 00:00:0A   3,276            Oracle RAC Database
10-1 – 10-2   00:00:0B – 00:00:0C   3,276            Oracle RAC Database
1-2 – 1-14    00:00:0D – 00:00:19   1,610            Oracle RAC Database
2-1 – 2-7     00:00:1A – 00:00:20   1,610            Oracle RAC Database
5-1 – 5-3     00:00:28 – 00:00:2A   3,072            Oracle RAC Database
To configure the LDEVs using Hitachi Device Manager, do the following:
1. Log on to Hitachi Device Manager.
2. Click the Array Name link to open the storage system.
3. Expand the Logical Devices heading in the storage system pane and then click Create LDEVs.
4. For the Provisioning Type list, select Basic.
5. For System Type, choose OPEN.
6. Select OPEN-V for Emulation Type.
7. Choose Any for Drive Type/RPM.
8. From the RAID Level list, select 2D+2D.
Figure 70
Hitachi Virtual Storage Platform G1000 Create LDEVs
9. Type the LDEV Capacity and choose GB.
10. Type 1 in the Number of LDEVs per Free Space.
11. Enter the name of the LDEV in the LDEV Name field.
12. Choose Normal Format from the Format Type list.
13. Click Add.
14. Click Finish.
15. The Create LDEV pane refreshes, populated with the new LDEV information. Click Finish.
16. The Confirm window opens. Click Apply.
Figure 71
Hitachi Virtual Storage Platform G1000 LDEVs
Create Hitachi Dynamic Tiering Pools
This solution uses one Hitachi Dynamic Tiering pool. Table 6 lists the details.
Table 6
Details of a Hitachi Dynamic Tiering Pool

Pool Name       Number of Pool VOLs   Number of V-VOLs   RAID Level   Capacity in GB   Pool Type
Ora_hdt_pool_   28                    113                Mixed        56781.53         DT
To create the Hitachi Dynamic Tiering pool, first complete the following:
1. Create the parity groups.
2. Create the LDEVs.
To create a dynamic tiering pool using Hitachi Device Manager software, do the following:
1. Log on to Hitachi Device Manager.
2. Click the Array Name link to open the storage system.
3. Click Pools in the left pane of the storage system.
4. Click Create Pool. The Create Pools window opens.
5. Choose Dynamic Provisioning from the Pool Type list.
6. Select Open from the System Type radio button.
7. Select Enable from the Multi-Tier Pool radio button.
8. Choose Manual from the Pool Volume Selection.
9. Click Select Pool VOLs. The Select Pool Vols window appears.
10. In the Available Pool Volumes table, select the pool-VOL rows to be associated with the pool, then click Add. The selected pool-VOLs are registered in the Selected Pool Volumes table.
11. In the Pool Name text box, type the prefix and initial number of the pool.
12. Click Add.
13. Click Finish. The Confirm window appears.
14. In the Confirm window, click Apply to register the setting in the task.
Figure 72
Hitachi Virtual Storage Platform G1000 Pools
Figure 73
Hitachi Virtual Storage Platform G1000 Entire Pool in Tier Properties
Figure 74
Hitachi Virtual Storage Platform G1000 Tiering Policy in Tier Properties
Figure 75
Hitachi Virtual Storage Platform G1000 View Pool Management Status
Create Virtual Volumes
This procedure creates 113 storage virtual volumes used for the Oracle RAC Database. All the storage
virtual volumes are mapped to the storage ports 1A, 1C, 2A, 2C, 3A, 3C, 4A, 4C, 5A, 5C, 6A, 6C, 7A,
7C, 8A and 8C. Table 7 lists the details of the virtual volumes.
Table 7
VVOLs for Oracle RAC Database

Pool Name       LDEV Name                             LDEV Size (GB)   Purpose               Storage Port
ora_hdt_pool_   ora_vvol_hdt_001 – ora_vvol_hdt_113   500              Oracle RAC Database   1A, 1C, 2A, 2C, 3A, 3C, 4A, 4C, 5A, 5C, 6A, 6C, 7A, 7C, 8A, 8C
To create volumes using Hitachi Device Manager, follow these steps:
1. Log on to Hitachi Device Manager.
2. Click the Array Name link to open the storage system.
3. Click Pools in the left pane of the storage system.
4. Click the Virtual Volumes tab, which appears when a pool is selected in Pools.
5. Click Create LDEVs. The Create LDEVs window appears.
6. For the Provisioning Type list, select Dynamic Provisioning.
7. For System Type, choose OPEN.
8. Select OPEN-V for Emulation Type.
9. Choose Mixed for Drive Type/RPM.
10. From the RAID Level list, select Mixed.
11. Click Select Pool and choose the pool from the Available Pools table. Click OK.
12. Type the LDEV Capacity and choose GB.
13. Type 113 in the Number of LDEVs text box.
14. Type the name of the LDEV in the LDEV Name field. In the Initial LDEV ID field, type the initial number, which can be up to 9 digits.
15. Click Add.
16. Click Finish.
17. The Confirm window appears. Click Finish.
18. Click Apply. If the Go to tasks window for status check box is selected, the Tasks window appears.
Figure 76
Hitachi Virtual Storage Platform G1000 Virtual Volumes
Create Host Groups
Before creating host groups, create and configure the Fibre Channel zoning. To create the host groups, complete the following steps:
1. Display the Create Host Groups window by performing one of the following:
2. In Device Manager - Storage Navigator, select Create Host Groups from the General Tasks menu and display the Create Host Groups window.
3. From the Actions menu, choose Ports/Host Groups, and then Create Host Groups.
4. From the Storage Systems tree, click Ports/Host Groups. In the Host Groups page of the displayed window, click Create Host Groups.
5. From the Storage Systems tree, expand the Ports/Host Groups node, and then click the relevant port. In the Host Groups page of the displayed window, click Create Host Groups.
6. Enter the host group name in the Host Group Name box.
Figure 77
Hitachi Virtual Storage Platform G1000 Create Host Groups
7. Select a host mode from the Host Mode list.
8. Select hosts to be registered in a host group.
9. If the desired host has ever been connected with a cable to another port in the storage system, select the desired host bus adapter from the Available Hosts list.
10. If there is no host to be registered, skip this step and move to the next step; otherwise, a host group with no host would be created.
11. If the desired host has never been connected via a cable to any port in the storage system, perform the following steps:
12. Click Add New Host under the Available Hosts list.
13. The Add New Host dialog box opens.
14. Enter the desired WWN in the HBA WWN box.
15. If necessary, enter a nickname for the host bus adapter in the Host Name box.
16. Click OK to close the Add New Host dialog box.
17. Select the desired host bus adapter from the Available Hosts list.
18. Select the port to which you want to add the host group.
19. Click Add to add the host group.
20. By repeating steps 2 through 7, you can create multiple host groups.
21. Click Finish to display the Confirm window.
22. Click Apply in the Confirm window.
If the Go to tasks window for status check box is selected, the Tasks window appears.
Figure 78
Hitachi Virtual Storage Platform G1000 Host Groups
Add LUN Paths
To add LUN paths, complete the following steps:
1. From the Storage Systems tree, click Ports/Host Groups. From the Actions menu, select Logical Device, and then Add LUN Paths.
2. Select the desired LDEVs from the Available LDEVs table, and then click Add.
3. The selected LDEVs are listed in the Selected LDEVs table.
4. Click Next.
5. Select the desired host groups from the Available Host Groups table, and then click Add.
6. The selected host groups are listed in the Selected Host Groups table.
7. Click Next.
8. Confirm the defined LU paths.
9. To change the LU path settings, click Change LUN IDs and type the LUN ID that you want to change.
10. To change the LDEV name, click Change LDEV Settings. In the Change LDEV Settings window, change the LDEV name.
11. Click Finish.
12. In the Confirm window, confirm the settings, in Task Name type a unique name for this task or accept the default, and then click Apply.
If Go to tasks window for status is checked, the Tasks window opens.
Figure 79
Hitachi Virtual Storage Platform G1000 Add LUN Paths
For the operating system, this solution makes use of two paths; however, you could use more paths to meet your requirements. Table 8 lists the Hitachi logical devices configured for the operating system and mapped to the Oracle RAC database servers on the Cisco Unified Computing System.
Table 8
Operating System LDEVs

Server Name on Cisco UCS   LDEV Name on VSP G1000   LDEV Size (GB)   Host Groups on VSP G1000
oracle-hds-srv1            Oracle_Srv_OS1           200              1A-server1, 2A-server1
oracle-hds-srv2            Oracle_Srv_OS2           200              3C-server2, 4C-server2
oracle-hds-srv3            Oracle_Srv_OS3           200              5A-server3, 6A-server3
oracle-hds-srv4            Oracle_Srv_OS4           200              7C-server4, 8C-server4
Port Connectivity of Hitachi VSP G1000 and Cisco Nexus 5548 UP Switch
Sixteen ports from the Hitachi Virtual Storage Platform G1000 are used in this solution to connect to the two Cisco Nexus 5548 switches. The storage ports are equally distributed between the two storage clusters. Table 9 lists the port connectivity between the Hitachi Virtual Storage Platform G1000 and the Cisco Nexus 5548 switches.
Table 9
Port Connectivity between Hitachi Virtual Storage Platform G1000 and Cisco Nexus 5548

Hitachi Virtual Storage Platform G1000                        Cisco Nexus 5548
Cluster     Port    WWPN                       Switch               Port
Cluster 1   CL1-A   50:06:0E:80:07:C3:DA:00    Cisco Nexus 5548 A   fc2/13
            CL1-C   50:06:0E:80:07:C3:DA:02    Cisco Nexus 5548 B   fc2/13
            CL3-A   50:06:0E:80:07:C3:DA:20    Cisco Nexus 5548 A   fc2/15
            CL3-C   50:06:0E:80:07:C3:DA:22    Cisco Nexus 5548 B   fc2/15
            CL5-A   50:06:0E:80:07:C3:DA:40    Cisco Nexus 5548 A   fc2/17
            CL5-C   50:06:0E:80:07:C3:DA:42    Cisco Nexus 5548 B   fc2/17
            CL7-A   50:06:0E:80:07:C3:DA:60    Cisco Nexus 5548 A   fc2/19
            CL7-C   50:06:0E:80:07:C3:DA:62    Cisco Nexus 5548 B   fc2/19
Cluster 2   CL2-A   50:06:0E:80:07:C3:DA:10    Cisco Nexus 5548 A   fc2/14
            CL2-C   50:06:0E:80:07:C3:DA:12    Cisco Nexus 5548 B   fc2/14
            CL4-A   50:06:0E:80:07:C3:DA:30    Cisco Nexus 5548 A   fc2/16
            CL4-C   50:06:0E:80:07:C3:DA:32    Cisco Nexus 5548 B   fc2/16
            CL6-A   50:06:0E:80:07:C3:DA:50    Cisco Nexus 5548 A   fc2/18
            CL6-C   50:06:0E:80:07:C3:DA:52    Cisco Nexus 5548 B   fc2/18
            CL8-A   50:06:0E:80:07:C3:DA:70    Cisco Nexus 5548 A   fc2/20
            CL8-C   50:06:0E:80:07:C3:DA:72    Cisco Nexus 5548 B   fc2/20
Configuring Cisco Nexus 5548 UP
Enable Licenses
Cisco Nexus A
To license the Cisco Nexus A switch on <<var_nexus_A_hostname>>, follow these steps:
1. Log in as admin.
2. Run the following commands:
config t
feature fcoe
feature npiv
feature lacp
feature vpc
Cisco Nexus B
To license the Cisco Nexus B switch on <<var_nexus_B_hostname>>, follow these steps:
1. Log in as admin.
2. Run the following commands:
config t
feature fcoe
feature npiv
feature lacp
feature vpc
Set Global Configurations
Cisco Nexus 5548 A and Cisco Nexus 5548 B
To set global configurations and jumbo frames in QoS, follow these steps on both switches:
1. Log in as admin.
2. From the global configuration mode, run the following commands:
conf t
spanning-tree port type network default
spanning-tree port type edge bpduguard default
port-channel load-balance ethernet source-dest-port
policy-map type network-qos jumbo
class type network-qos class-default
mtu 9216
exit
class type network-qos class-fcoe
pause no-drop
mtu 2158
exit
exit
system qos
service-policy type network-qos jumbo
exit
copy run start
Create VLANs
Cisco Nexus 5548 A and Cisco Nexus 5548 B
To create the necessary virtual local area networks (VLANs), follow these steps on both switches:
1. Log in as admin.
2. From the global configuration mode, run the following commands:
conf t
vlan 10
name Public-VLAN
exit
vlan 191
name Private-VLAN
exit
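To confirm that the VLANs were created on each switch, a quick check (output not shown here):
show vlan brief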
Add Individual Port Descriptions for Troubleshooting
Cisco Nexus 5548 A
To add individual port descriptions for troubleshooting activity and verification for switch A, follow
these steps:
1. Log in as admin.
2. From the global configuration mode, run the following commands:
conf t
interface Eth1/1
description Nexus5k-B-Cluster-Interconnect
exit
interface Eth1/2
description Nexus5k-B-Cluster-Interconnect
exit
interface Eth1/3
description Fabric_Interconnect_A:1/1
exit
interface Eth1/4
description Fabric_Interconnect_B:1/1
exit
Cisco Nexus 5548 B
To add individual port descriptions for troubleshooting activity and verification for switch B, follow
these steps:
1. Log in as admin.
2. From the global configuration mode, run the following commands:
conf t
interface Eth1/1
description Nexus5k-A-Cluster-Interconnect
exit
interface Eth1/2
description Nexus5k-A-Cluster-Interconnect
exit
interface Eth1/3
description Fabric_Interconnect_A:1/2
exit
interface Eth1/4
description Fabric_Interconnect_B:1/2
exit
Create Port Channels
Cisco Nexus 5548 A and Cisco Nexus 5548 B
To create the necessary port channels between devices, follow these steps on both switches:
1. Log in as admin.
2. From the global configuration mode, run the following commands:
conf t
interface Po1
description vPC peer-link
exit
interface Eth1/1-2
channel-group 1 mode active
no shutdown
exit
interface Po3
description Fabric_Interconnect_A
exit
interface Eth1/3
channel-group 3 mode active
no shutdown
exit
interface Po4
description Fabric_Interconnect_B
exit
interface Eth1/4
channel-group 4 mode active
no shutdown
exit
copy run start
Configure Port Channels
Cisco Nexus 5548 A and Cisco Nexus 5548 B
To configure the port channels, follow these steps on both switches:
1.
From the global configuration mode, run the following commands:
2.
Login as admin user.
3.
Run the following commands:
conf t
interface Po1
switchport mode trunk
switchport trunk native vlan 1
switchport trunk allowed vlan 1,10,191
spanning-tree port type network
no shutdown
exit
interface Po3
switchport mode trunk
switchport trunk native vlan 1
switchport trunk allowed vlan 10,191
spanning-tree port type edge trunk
no shutdown
exit
interface Po4
switchport mode trunk
switchport trunk native vlan 1
switchport trunk allowed vlan 10,191
spanning-tree port type edge trunk
no shutdown
exit
copy run start
Configure Virtual Port Channels
Cisco Nexus 5548 A
To configure virtual port channels (vPCs) for switch A, follow these steps:
1. Log in as admin.
2. From the global configuration mode, run the following commands:
conf t
vpc domain 1
role priority 10
peer-keepalive destination <<var_nexus_B_mgmt0_ip>> source
<<var_nexus_A_mgmt0_ip>>
auto-recovery
exit
interface Po1
vpc peer-link
exit
interface Po3
vpc 3
exit
interface Po4
vpc 4
exit
copy run start
Cisco Nexus 5548 B
To configure vPCs for switch B, follow these steps:
1. Log in as admin.
2. From the global configuration mode, run the following commands:
conf t
vpc domain 1
role priority 20
peer-keepalive destination <<var_nexus_A_mgmt0_ip>> source
<<var_nexus_B_mgmt0_ip>>
auto-recovery
exit
interface Po1
vpc peer-link
exit
interface Po3
vpc 3
exit
interface Po4
vpc 4
exit
copy run start
Create and Configure Fibre Channel Zoning
This procedure sets up Fibre Channel connections between the Cisco Nexus 5548 switches, the Cisco
UCS Fabric Interconnects, and the Hitachi storage systems.
Before going into the zoning details, decide how many paths are needed for each LUN and extract the WWPNs for each of the HBAs from each server. We used four vHBAs for each server: two (hba0 and hba2) are connected to the Nexus A switch, and the other two (hba1 and hba3) are connected to the Nexus B switch.
1. Log in to Cisco UCS Manager and choose Equipment > Chassis > Servers and the desired server. In the right pane, click the Inventory tab and the HBAs subtab to get the WWPNs of the HBAs.
Figure 80
WWPN of Servers
2. Connect to the Hitachi storage system using Hitachi Device Manager and extract the WWPNs of the FC ports connected to the Cisco Nexus switches. We have connected 16 FC ports from the Hitachi storage system: FC ports 1A, 2A, 3A, 4A, 5A, 6A, 7A and 8A are connected to the Nexus A switch, and FC ports 1C, 2C, 3C, 4C, 5C, 6C, 7C and 8C are connected to the Nexus B switch.
Figure 81
WWPN of Hitachi Virtual Storage Platform G1000
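As a quick cross-check of the collected WWPNs, each Cisco Nexus 5548UP can list every initiator and target that has logged in to the fabric. A verification sketch (not part of the original procedure; run on each switch):
show flogi database
The WWPNs in the output should match those gathered from Cisco UCS Manager and Hitachi Device Manager.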
Create Device Aliases for FC Zoning
Cisco Nexus 5548 A
To configure device aliases and zones for the primary boot paths of switch A on
<<var_nexus_A_hostname>>, follow these steps:
1. Log in as admin.
2. From the global configuration mode, run the following commands:
conf t
device-alias database
device-alias name Storage1-1A pwwn 50:06:0e:80:07:c3:da:00
device-alias name Storage1-3A pwwn 50:06:0e:80:07:c3:da:20
device-alias name Storage1-5A pwwn 50:06:0e:80:07:c3:da:40
device-alias name Storage1-7A pwwn 50:06:0e:80:07:c3:da:60
device-alias name Storage2-2A pwwn 50:06:0e:80:07:c3:da:10
device-alias name Storage2-4A pwwn 50:06:0e:80:07:c3:da:30
device-alias name Storage2-6A pwwn 50:06:0e:80:07:c3:da:50
device-alias name Storage2-8A pwwn 50:06:0e:80:07:c3:da:70
device-alias name Oracle-Srv1-hba0 pwwn 20:00:00:25:b5:10:a0:0c
device-alias name Oracle-Srv1-hba2 pwwn 20:00:00:25:b5:10:a0:0d
device-alias name Oracle-Srv2-hba0 pwwn 20:00:00:25:b5:10:a0:0a
device-alias name Oracle-Srv2-hba2 pwwn 20:00:00:25:b5:10:a0:0b
device-alias name Oracle-Srv3-hba0 pwwn 20:00:00:25:b5:10:a0:06
device-alias name Oracle-Srv3-hba2 pwwn 20:00:00:25:b5:10:a0:07
device-alias name Oracle-Srv4-hba0 pwwn 20:00:00:25:b5:10:a0:14
device-alias name Oracle-Srv4-hba2 pwwn 20:00:00:25:b5:10:a0:05
exit
device-alias commit
Cisco Nexus 5548 B
To configure device aliases and zones for the boot paths of switch B on <<var_nexus_B_hostname>>,
follow these steps:
1. Log in as admin.
2. From the global configuration mode, run the following commands:
conf t
device-alias database
device-alias name Storage1-1C pwwn 50:06:0e:80:07:c3:da:02
device-alias name Storage1-3C pwwn 50:06:0e:80:07:c3:da:22
device-alias name Storage1-5C pwwn 50:06:0e:80:07:c3:da:42
device-alias name Storage1-7C pwwn 50:06:0e:80:07:c3:da:62
device-alias name Storage2-2C pwwn 50:06:0e:80:07:c3:da:12
device-alias name Storage2-4C pwwn 50:06:0e:80:07:c3:da:32
device-alias name Storage2-6C pwwn 50:06:0e:80:07:c3:da:52
device-alias name Storage2-8C pwwn 50:06:0e:80:07:c3:da:72
device-alias name Oracle-Srv1-hba1 pwwn 20:00:00:25:b5:20:b0:0e
device-alias name Oracle-Srv1-hba3 pwwn 20:00:00:25:b5:20:b0:0f
device-alias name Oracle-Srv2-hba1 pwwn 20:00:00:25:b5:20:b0:0c
device-alias name Oracle-Srv2-hba3 pwwn 20:00:00:25:b5:20:b0:0d
device-alias name Oracle-Srv3-hba1 pwwn 20:00:00:25:b5:20:b0:0a
device-alias name Oracle-Srv3-hba3 pwwn 20:00:00:25:b5:20:b0:0b
device-alias name Oracle-Srv4-hba1 pwwn 20:00:00:25:b5:20:b0:08
device-alias name Oracle-Srv4-hba3 pwwn 20:00:00:25:b5:20:b0:09
exit
device-alias commit
Create Zones
Cisco Nexus 5548 A
To create zones for the service profiles on switch A, follow these steps:
1. Log in as admin.
2. Create a zone for each service profile by running the following commands:
conf t
zone name oracle-hds-srv1-hba0 vsan 101
member device-alias Oracle-Srv1-hba0
member device-alias Storage1-1A
exit
zone name oracle-hds-srv1-hba2 vsan 101
member device-alias Oracle-Srv1-hba2
member device-alias Storage2-2A
exit
zone name oracle-hds-srv2-hba0 vsan 101
member device-alias Oracle-Srv2-hba0
member device-alias Storage1-3A
exit
zone name oracle-hds-srv2-hba2 vsan 101
member device-alias Oracle-Srv2-hba2
member device-alias Storage2-4A
exit
zone name oracle-hds-srv3-hba0 vsan 101
member device-alias Oracle-Srv3-hba0
member device-alias Storage1-5A
exit
zone name oracle-hds-srv3-hba2 vsan 101
member device-alias Oracle-Srv3-hba2
member device-alias Storage2-6A
exit
zone name oracle-hds-srv4-hba0 vsan 101
member device-alias Oracle-Srv4-hba0
member device-alias Storage1-7A
exit
zone name oracle-hds-srv4-hba2 vsan 101
member device-alias Oracle-Srv4-hba2
member device-alias Storage2-8A
exit
zone name server1-boot-hba0 vsan 101
member device-alias Storage1-1A
member device-alias Storage2-2A
member device-alias Oracle-Srv1-hba0
exit
zone name server2-boot-hba0 vsan 101
member device-alias Oracle-Srv2-hba0
member device-alias Storage1-1A
member device-alias Storage2-2A
exit
zone name server3-boot-hba0 vsan 101
member device-alias Storage1-5A
member device-alias Storage2-6A
member device-alias Oracle-Srv3-hba0
exit
zone name server4-boot-hba0 vsan 101
member device-alias Oracle-Srv4-hba0
member device-alias Storage1-5A
member device-alias Storage2-6A
exit
4. After the zones for the Cisco UCS service profiles have been created, create the zone set and add the necessary members.
zoneset name Oracle-HDS-A vsan 101
member oracle-hds-srv1-hba0
member oracle-hds-srv1-hba2
member oracle-hds-srv2-hba0
member oracle-hds-srv2-hba2
member oracle-hds-srv3-hba0
member oracle-hds-srv3-hba2
member oracle-hds-srv4-hba0
member oracle-hds-srv4-hba2
member server1-boot-hba0
member server2-boot-hba0
member server3-boot-hba0
member server4-boot-hba0
exit
5. Activate the zone set.
zoneset activate name Oracle-HDS-A vsan 101
exit
copy run start
Cisco Nexus 5548 B
To create zones for the service profiles on switch B, follow these steps:
1. Log in as admin.
2. Create a zone for each service profile by running the following commands:
conf t
zone name oracle-hds-srv1-hba1 vsan 102
member device-alias Oracle-Srv1-hba1
member device-alias Storage1-1C
exit
zone name oracle-hds-srv1-hba3 vsan 102
member device-alias Oracle-Srv1-hba3
member device-alias Storage2-2C
exit
zone name oracle-hds-srv2-hba1 vsan 102
member device-alias Oracle-Srv2-hba1
member device-alias Storage1-3C
exit
zone name oracle-hds-srv2-hba3 vsan 102
member device-alias Oracle-Srv2-hba3
member device-alias Storage2-4C
exit
zone name oracle-hds-srv3-hba1 vsan 102
member device-alias Oracle-Srv3-hba1
member device-alias Storage1-5C
exit
zone name oracle-hds-srv3-hba3 vsan 102
member device-alias Oracle-Srv3-hba3
member device-alias Storage2-6C
exit
zone name oracle-hds-srv4-hba1 vsan 102
member device-alias Oracle-Srv4-hba1
member device-alias Storage1-7C
exit
zone name oracle-hds-srv4-hba3 vsan 102
member device-alias Oracle-Srv4-hba3
member device-alias Storage2-8C
exit
zone name server1-boot-hba1 vsan 102
member device-alias Storage1-3C
member device-alias Storage2-4C
member device-alias Oracle-Srv1-hba1
exit
zone name server2-boot-hba1 vsan 102
member device-alias Oracle-Srv2-hba1
member device-alias Storage1-3C
member device-alias Storage2-4C
exit
zone name server3-boot-hba1 vsan 102
member device-alias Oracle-Srv3-hba1
member device-alias Storage1-7C
member device-alias Storage2-8C
exit
zone name server4-boot-hba1 vsan 102
member device-alias Oracle-Srv4-hba1
member device-alias Storage1-7C
member device-alias Storage2-8C
exit
4. After the zones for the Cisco UCS service profiles have been created, create the zone set and add the necessary members.
zoneset name Oracle-HDS-B vsan 102
member oracle-hds-srv1-hba1
member oracle-hds-srv1-hba3
member oracle-hds-srv2-hba1
member oracle-hds-srv2-hba3
member oracle-hds-srv3-hba1
member oracle-hds-srv3-hba3
member oracle-hds-srv4-hba1
member oracle-hds-srv4-hba3
member server1-boot-hba1
member server2-boot-hba1
member server3-boot-hba1
member server4-boot-hba1
exit
5. Activate the zone set.
zoneset activate name Oracle-HDS-B vsan 102
exit
copy run start
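To verify that the zone sets are active on each fabric, the standard NX-OS show commands can be used; a verification sketch with the VSAN numbers configured above:
show zoneset active vsan 101    (on Cisco Nexus 5548 A)
show zoneset active vsan 102    (on Cisco Nexus 5548 B)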
When configuring the Cisco Nexus 5548UP with vPCs, be sure that the status for all vPCs is "Up" for connected Ethernet ports by running the commands shown in Figure 82 from the CLI on the Cisco Nexus 5548UP switch.
Figure 82
Port-Channel Status on Cisco Nexus 5548UP
The show vpc command should show the following for a successful configuration.
Figure 83
Virtual PortChannel Status on Cisco Nexus 5548UP Fabric A Switch
Verify the virtual port channel status on the Nexus B switch by running the same commands as on the Cisco Nexus A switch.
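Because Figure 82 and Figure 83 are screenshots, the underlying checks are listed here for reference (a sketch; run on each Cisco Nexus 5548UP switch):
show port-channel summary
show vpc brief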
Connectivity of Cisco Unified Computing System and Hitachi VSP G1000
This section describes the connectivity layout of the solution components on completion of SAN zoning.
Table 10 lists the connectivity of the solution components.
Table 10
Fibre Channel Connectivity of the Solution Components

Cisco Unified Computing System       Cisco Nexus 5548                  Hitachi Virtual Storage Platform G1000
Chassis   Server            HBA      Switch Name          Switch Port  Port    Registered Host Name   Host Group Name
1         oracle-hds-srv1   hba0     Cisco Nexus 5548 A   fc2/13       CL1-A   server1-hba0           1A-server1
                            hba1     Cisco Nexus 5548 B   fc2/13       CL1-C   server1-hba1           1C-server1
                            hba2     Cisco Nexus 5548 A   fc2/14       CL2-A   server1-hba2           2A-server1
                            hba3     Cisco Nexus 5548 B   fc2/14       CL2-C   server1-hba3           2C-server1
          oracle-hds-srv2   hba0     Cisco Nexus 5548 A   fc2/15       CL3-A   server2-hba0           3A-server2
                            hba1     Cisco Nexus 5548 B   fc2/15       CL3-C   server2-hba1           3C-server2
                            hba2     Cisco Nexus 5548 A   fc2/16       CL4-A   server2-hba2           4A-server2
                            hba3     Cisco Nexus 5548 B   fc2/16       CL4-C   server2-hba3           4C-server2
2         oracle-hds-srv3   hba0     Cisco Nexus 5548 A   fc2/17       CL5-A   server3-hba0           5A-server3
                            hba1     Cisco Nexus 5548 B   fc2/17       CL5-C   server3-hba1           5C-server3
                            hba2     Cisco Nexus 5548 A   fc2/18       CL6-A   server3-hba2           6A-server3
                            hba3     Cisco Nexus 5548 B   fc2/18       CL6-C   server3-hba3           6C-server3
          oracle-hds-srv4   hba0     Cisco Nexus 5548 A   fc2/19       CL7-A   server4-hba0           7A-server4
                            hba1     Cisco Nexus 5548 B   fc2/19       CL7-C   server4-hba1           7C-server4
                            hba2     Cisco Nexus 5548 A   fc2/20       CL8-A   server4-hba2           8A-server4
                            hba3     Cisco Nexus 5548 B   fc2/20       CL8-C   server4-hba3           8C-server4
Map all the logical devices and virtual volumes created during the storage configuration for the Oracle databases to the host groups. We created four logical devices (200 GB each) using a parity group outside the dynamic tiering pool; these four logical devices are used as OS LUNs to install the operating system.
Install Oracle Linux 6.4 from Image
To install Oracle Linux 6.4 from an image, follow these steps:
1. Download the Oracle Linux 6.4 images from https://edelivery.oracle.com/linux or as appropriate to a staging area. Launch the KVM console for the desired server > click Virtual Media > Activate Virtual Devices > accept the session > click Virtual Media > click Map CD/DVD > add the downloaded image > reset the server.
Figure 84 and Figure 85 show the KVM console of the server and the mapping to virtual media.
Figure 84
Launching KVM Console
Figure 85
Mapping Virtual Media
2. When the server comes up, it launches the Oracle Linux installer. Select the appropriate LUN on which to install the Oracle Linux operating system.
3. At the time of Oracle Linux package selection, select "Customize now" to add additional packages to the install, as shown in Figure 86.
Figure 86
Customize Oracle Linux Package
4. In the Servers menu of the customize package selection, select "System administration" and the Oracle ASM support tools, as shown in Figure 87.
Figure 87
Select Package System Administration
When the installation completes, reboot the server, accept the license information, register the system as needed, and synchronize the time with NTP. If NTP is not configured, the Oracle cluster time synchronization service daemon (octssd) kicks in on the Oracle RAC nodes to synchronize time between the cluster nodes and maintain the mean cluster time. NTP and active-mode octssd are mutually exclusive.
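If you keep NTP, Oracle recommends running ntpd with the slewing option (-x) so that time is never stepped backward. A minimal sketch on Oracle Linux 6 (the stock OPTIONS line plus the -x flag):
# /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"

service ntpd restart
chkconfig ntpd on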
This completes the OS Install.
Miscellaneous Post-install Steps
Note
Not all of the following settings may need to be changed on your setup. Validate and change as needed. The following changes were made on the test bed where the Oracle RAC install was done.
Disable SELinux
It is recommended to disable SELinux.
Edit /etc/selinux/config and change to
SELINUX=disabled
#SELINUXTYPE=targeted
Disable Firewalls
service iptables stop
service ip6tables stop
chkconfig iptables off
chkconfig ip6tables off
Make sure /etc/sysconfig/network has an entry for the hostname. Preferably also add NETWORKING_IPV6=no.
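For example (the hostname below is illustrative):
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=oracle-hds-srv1.mydomain.com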
Set up the yum repository
cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-ol6.repo
Edit the downloaded file public-yum-ol6.repo and set enabled=1 for the desired repository, then run yum update. You may have to set the http_proxy environment variable if the server accesses the internet through a proxy.
Make sure that the following RPM packages are available after the yum update; alternatively, install them with yum install:
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.5-1.el6.x86_64
The exact versions of the packages may differ depending on the UEK kernel being used.
Install Linux Driver for Cisco 10G FCoE HBA
Go to http://software.cisco.com/download/navigator.html.
In the download page, select Servers - Unified Computing. In the right menu, select your class of servers, for example Cisco UCS B-Series Blade Server Software, and then select Unified Computing System (UCS) Drivers in the following page.
Select your firmware version under All Releases, for example 2.2, and download the ISO image of Cisco UCS-related drivers for your matching firmware, for example ucs-bxxx-drivers.2.2.2.iso.
Extract the fnic rpm from the ISO. Alternatively, you can mount the ISO file, or use the KVM console and map the ISO.
After mapping the virtual media, log in to the host to copy the rpm:
[root@oracle-hds-srv1 ~]# mount -o loop,ro /download/ucs-bxxx-drivers.2.2.2.iso /mnt
[root@oracle-hds-srv1 ~]# cd /mnt/Linux/Storage/Cisco/MLOM/Oracle/OL6.4
[root@oracle-hds-srv1 ~]# ls
kmod-fnic-1.6.0.10-3.8.13.13.el6uek.x86_64.rpm
README-Oracle Linux Driver for Cisco 10G FCoE HBA.docx
Follow the instructions in README-Oracle Linux Driver for Cisco 10G FCoE HBA. If you are running on the Oracle Linux Red Hat compatible kernel, the appropriate driver for your Linux version should be installed. The following steps were followed for the UEK2 kernel.
[root@oracle-hds-srv1 ~]# rpm -ivh kmod-fnic-1.6.0.10-3.8.13.13.el6uek.x86_64.rpm
Preparing...        ########################################### [100%]
   1:kmod-fnic      ########################################### [100%]
[root@oracle-hds-srv1 ~]# modinfo fnic
filename:       /lib/modules/2.6.39-400.17.1.el6uek.x86_64/weak-updates/fnic/fnic.ko
version:        1.6.0.10
license:        GPL v2
author:         Abhijeet Joglekar <abjoglek@cisco.com>, Joseph R. Eykholt <jeykholt@cisco.com>
description:    Cisco FCoE HBA Driver
srcversion:     BE0100FCB58E1FF9AC935C4
alias:          pci:v00001137d00000045sv*sd*bc*sc*i*
depends:        libfcoe,libfc,scsi_transport_fc
vermagic:       2.6.39-400.209.1.el6uek.x86_64 SMP mod_unload modversions
parm:           fnic_log_level:bit mask of fnic logging levels (int)
parm:           fnic_trace_max_pages:Total allocated memory pages for fnic trace buffer (uint)
parm:           fnic_fc_trace_max_pages:Total allocated memory pages for fc trace buffer (uint)
parm:           fnic_max_qdepth:Queue depth to report for each LUN (uint)
For more details on the install, follow the README document found in the iso above.
It is good practice to install the latest drivers. In case you are planning to run RHEL compatible kernel,
you may have to check for any additional drivers in enic/fnic category to be installed.
Reboot the host after making the changes and verify that the fnic driver is updated. Create appropriate operating system users to own the Oracle Clusterware binaries and the Oracle Database binaries. We used "grid" as the operating system user to own the Clusterware binaries and "oracle" as the operating system user to own the Oracle Database binaries.
Configure Multipath
Use the Oracle Linux multipath software to configure multiple paths to the LUNs presented from Hitachi storage. Modify the /etc/multipath.conf file to assign an alias name to each LUN ID presented from Hitachi storage, as shown below. Run the "multipath -ll" command to view all the LUN IDs.
[root@oracle-hds-srv1 etc]# cat /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        polling_interval        5
        path_grouping_policy    multibus
        failback                immediate
        user_friendly_names     yes
        no_path_retry           6
        max_fds                 8192
}
devices {
        device {
                vendor                  "HITACHI"
                product                 "OPEN-V"
                path_grouping_policy    multibus
                getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector           "round-robin 0"
                path_checker            tur
                features                "0"
                hardware_handler        "0"
                prio                    const
                rr_weight               uniform
                no_path_retry           6
                rr_min_io_rq            8
        }
}
blacklist_exceptions {
        wwid "360060e8007c3da000030c3da00000000"
}
multipaths {
        multipath {
                wwid    360060e8007c3da000030c3da0000004b
                alias   DATADISK1
        }
        multipath {
                wwid    360060e8007c3da000030c3da0000004a
                alias   DATADISK2
        }
        multipath {
                wwid    360060e8007c3da000030c3da0000004d
                alias   DATADISK3
        }
}
Add all the LUNs presented from Hitachi storage to the /etc/multipath.conf file, then reload the multipath daemon.
Configure Oracle ASM
Oracle ASMLib is installed as part of the OEL 6 install and needs to be configured. Create and verify the Oracle users and groups on each cluster node.
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 2000 -g oinstall -G dba grid
passwd grid
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle
Configure the ASM library as the "root" user and give the ownership to the "grid" user as follows.
[root@oracle-hds-srv1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [oracle]: grid
Default group to own the driver interface [oinstall]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]
This should create the mount point /dev/oracleasm/disks.
Configure OS LUNs and Create ASM Disks
Create LUN Partitions
Partition LUNs
Partition the LUNs with an offset of 1 MB. While it is necessary to create partitions on disks for Oracle ASM (just to prevent any accidental overwrite), it is equally important to create an aligned partition. Setting this offset aligns host I/O operations with the back-end storage I/O operations.
Use a host utility like "fdisk" to create a partition on the disk. Create an input file, "fdisk.input", as shown below.
d
n
p
1
<- leave two blank lines here (accepts the default first and last cylinder)
x
b
1
2048
p
w
Execute it as "fdisk /dev/mapper/asmdisk1 < fdisk.input". This creates the partition at an offset of 2048 sectors (1 MB). This can be scripted for all the LUNs, as shown in the sketch below.
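A minimal scripting sketch, assuming the multipath aliases follow the asmdiskN naming used above (adjust the glob to your alias scheme):
#!/bin/bash
# Apply the same 1 MB-aligned partition layout to every asmdisk multipath device.
for disk in /dev/mapper/asmdisk*; do
    case "$disk" in
        *p[0-9]) continue ;;   # skip partition nodes such as asmdisk1p1
    esac
    echo "Partitioning $disk"
    fdisk "$disk" < fdisk.input
done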
Now all the pseudo partitions should be available in /dev/mapper as asmdisk1p1, asmdisk2p1, asmdisk3p1, and so on.
Reload the multipath daemon to rescan all the multipath devices.
Create ASM Disks
When the partitions are created, create ASM disks with the oracleasm utility.
oracleasm createdisk -v asm_1 /dev/mapper/asmdisk1p1
This creates a disk label asm_1 on the partition. The label can be queried with the Oracle-supplied kfed/kfod utilities as well. Repeat the process for all the /dev/mapper partitions to create ASM disks for all your database and RAC files; a scripted sketch follows.
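A sketch of labeling every aligned partition in one pass (the asm_N names are illustrative; run this on one node only and use scandisks on the others):
#!/bin/bash
# Label each aligned partition as an ASM disk: asm_1, asm_2, ...
i=1
for part in /dev/mapper/asmdisk*p1; do
    oracleasm createdisk -v "asm_${i}" "$part"
    i=$((i + 1))
done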
Scan the ASM disks with oracleasm on all the Oracle nodes; the disks should be visible under the /dev/oracleasm/disks mount point created by oracleasm earlier, as shown below.
[root@oracle-hds-srv1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Now the system is ready for the Oracle install.
Oracle Database 11g R2 GRID Infrastructure with RAC Option Deployment
This section describes the high-level steps for the Oracle Database 11g R2 RAC install. Prior to the GRID and database install, verify that all the prerequisites are completed. As an alternative, you can install the Oracle Validated RPM, which ensures that all prerequisites are met before the Oracle Grid install. We will not cover the step-by-step install for Oracle GRID in this document but will provide a partial summary of details that might be relevant.
Use the following Oracle document for pre-installation tasks, such as setting up the kernel parameters, RPM packages, user creation, and so on:
http://download.oracle.com/docs/cd/E11882_01/install.112/e10812/prelinux.htm#BABHJHCJ
1.
Create and verify required oracle users and groups in each Oracle RAC nodes.
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 2000 -g oinstall -G dba grid
passwd grid
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle
2.
Create the following local directory structure and ownerships on each RAC nodes.
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle
chown -R oracle:oinstall
chmod -R 775
/u01/app/oracle
chown -R grid:oinstall
chmod -R 775
Note
/u01/app
/u01/app
3. Configure the private and public NICs with the appropriate IP addresses across all the nodes as part of the Oracle Clusterware installation.
4. Identify the virtual IP addresses and SCAN IPs and have them set up in DNS per Oracle's recommendation. Alternatively, you can update the /etc/hosts file with all the details (private, public, SCAN, and virtual IPs) if you do not have DNS services available.
5. Configure passwordless ssh for the oracle and grid users. For more information about ssh configuration, refer to the Oracle installation documentation. Note that the Oracle Universal Installer also offers automatic SSH connectivity configuration and testing.
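A minimal sketch of the manual setup for the grid user (repeat for the oracle user and for every node pair; the host names follow this solution's naming):
ssh-keygen -t rsa                    # accept the defaults, empty passphrase
ssh-copy-id grid@oracle-hds-srv2     # repeat for each of the four cluster nodes
ssh grid@oracle-hds-srv2 hostname    # verify that no password prompt appears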
6. Configure/verify "/etc/sysctl.conf" and update the shared memory and semaphore parameters required for the Oracle GRID installation. Also configure the "/etc/security/limits.conf" file by adding user limits for the oracle and grid users.
Note: You do not have to perform these steps if the Oracle Validated RPM is installed.
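For reference, typical "/etc/sysctl.conf" entries look like the sketch below. These are the documented 11gR2 minimums, not our tuned values; the Oracle Validated RPM sets equivalents automatically:
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576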
7. Configure hugepages.
HugePages is a Linux mechanism that provides a larger memory page size, which is useful when working with very large amounts of memory. For Oracle Databases, using HugePages reduces the operating system's maintenance of page states and increases the Translation Lookaside Buffer (TLB) hit ratio.
Advantages of HugePages
• HugePages are not swappable, so there is no page-in/page-out overhead.
• HugePages need fewer pages to cover the physical address space, so the "bookkeeping" (the virtual-to-physical address mapping) shrinks; fewer TLB entries are required and the TLB hit ratio improves.
• HugePages reduce page table overhead.
• Eliminated page table lookup overhead: since the pages are not subject to replacement, page table lookups are not required.
• Faster overall memory performance: on virtual memory systems, each memory operation is actually two abstract memory operations. Since there are fewer pages to work on, the possible bottleneck on page table access is avoided.
For our configuration, we used hugepages for both the OLTP and DSS workloads. Refer to Oracle Metalink document 361323.1 for hugepages configuration details. Once hugepages are configured, you are ready to install Oracle Database 11g R2 GRID Infrastructure with the RAC option and the database.
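As an illustrative sketch only (size the values with the method in Metalink note 361323.1): a 96 GB SGA with 2 MB hugepages needs roughly 96 GB / 2 MB = 49152 pages, and the memlock limit (in KB) must cover at least the SGA size:
# /etc/sysctl.conf -- number of 2 MB hugepages, sized for a 96 GB SGA plus headroom
vm.nr_hugepages = 49200
# /etc/security/limits.conf -- memlock in KB, at least the SGA size
oracle soft memlock 100663296
oracle hard memlock 100663296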
Installing Oracle Database 11g R2 RAC
It is not within the scope of this document to include the specifics of an Oracle RAC installation; you should refer to the Oracle installation documentation for installation instructions specific to your environment.
To install Oracle, follow these steps:
1. Download Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.4.0) and Oracle Database 11g Release 2 (11.2.0.4.0) for Linux x86-64, and extract the zip files for both the Grid Infrastructure and the database software.
2. For this configuration, we used Oracle ASM for the OCR and voting disks. For more details, see the Grid Infrastructure Installation Guide for Linux (http://www.oracle.com/pls/db112/to_toc?pathname=install.112/e10812/toc.htm).
3. Launch the installer as the "grid" user from the staging area where the Oracle 11g R2 Grid Infrastructure software is stored.
4. Click Next.
5. Select "Install and Configure Oracle Grid Infrastructure for a Cluster" and click Next to continue the installation.
6. Select "Advanced Installation" and click Next to continue with the installation.
7. Provide the cluster name, SCAN name, and SCAN port. Click Next.
8. Add all the node names (public host name and virtual host name, as provided in your "/etc/hosts" file). Click Next.
9. Select the appropriate network interfaces for public and private interconnect use. Click Next.
10. Select "Oracle Automatic Storage Management (Oracle ASM)" and click Next.
11. Provide the ASM disk group name and check all the ASM disks created in the previous step to store the OCR file and voting disks. Click Next.
12. Provide the Oracle Base path and the software installation location. Click Next.
13. Run both executables as the root user on each node, starting from node 1. After the executables have run on all the nodes that are part of the cluster installation, click OK to move to the next step.
When the configuration completes successfully, click Next to complete the installation.
Installing Oracle 11g R2 Database Binary
When the Oracle Grid installation is complete, install the Oracle Database 11g Release 2 database "Software Only" as the oracle user. Do not create the database during the database binary installation. See the Real Application Clusters Installation Guide for Linux and UNIX for detailed installation instructions (http://www.oracle.com/pls/db112/to_toc?pathname=install.112/e10813/toc.htm).
1. Launch the installer as the "oracle" user from the staging area where the Oracle 11g R2 database binary software is stored.
2. Click Next.
3. Select the option "Install database software only" and click Next.
4. Select the option "Oracle Real Application Clusters database installation" and select all the nodes on which to install the database binary. Click Next.
5. Run the "root.sh" file as the root user on all the nodes, starting from node 1. Once "root.sh" has run successfully on all the nodes, click OK to complete the installation.
Create ASM DiskGroups and Create Databases
Run the command "asmca" as the "grid" operating system user to create the ASM diskgroups to store
databases. We have created 3 differnet disk groups to store OLTP database, DSS database and the redo
log files of both the database as shown in Table 11.
Table 11    ASM Diskgroups

Disk group name   Number of ASM disks   Size of Disk Group   Purpose
OLTPDG            24                    12 TB                For OLTP database
DSSDG             24                    12 TB                For DSS database
REDODG            10                    5 TB                 For redo log files of OLTP and DSS databases
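For reference, the same disk groups can also be created from SQL*Plus on the ASM instance instead of asmca; a minimal sketch, with illustrative ASMLib disk labels:
SQL> CREATE DISKGROUP OLTPDG EXTERNAL REDUNDANCY
     DISK 'ORCL:ASM_1', 'ORCL:ASM_2';   -- continue the list through all 24 OLTP disks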
Run the "dbca" tool as "oracle" user to create OLTP and DSS databases. Make sure to place the datafiles,
redo logs and control files in proper disk group as created in above steps. We will discuss additional
details about OLTP and DSS schema creation in workload section.
Workloads and Database Configuration
We used Swingbench for workload testing. Swingbench is a simple-to-use, free, Java-based tool for generating database workloads and performing stress testing using different benchmarks in Oracle database environments. Swingbench provides four separate benchmarks: Order Entry, Sales History, Calling Circle, and Stress Test. For the tests described in this paper, the Swingbench Order Entry benchmark
was used for OLTP workload testing and the Sales History benchmark was used for DSS workload testing. The Order Entry benchmark is based on the SOE schema and is TPC-C-like in its transaction mix. The workload uses a fairly balanced read/write ratio of around 60/40 and can be run continuously to test the performance of a typical Order Entry workload against a small set of tables, producing contention for database resources. The Sales History benchmark is based on the SH schema and is TPC-H-like. That workload is query (read) centric and is designed to test the performance of queries against large tables.
As discussed in the previous section, two independent databases were created for the Oracle Swingbench OLTP and DSS workloads. The next step is to pre-create the Order Entry and Sales History schemas for the OLTP and DSS workloads. The Swingbench Order Entry (OLTP) workload uses the SOE tablespace and the Sales History workload uses the SH tablespace. We pre-created the SOE schema on the OLTP database and the SH schema on the DSS database.
For our setup, we created the SOE tablespace "soetbs" with 276 datafiles of 30 GB each on the OLTP database, and the SH tablespace "shtbs" with 342 datafiles of 30 GB each on the DSS database. Assign "soetbs" as the default tablespace for the SOE schema on the OLTP database and "shtbs" as the default tablespace for the SH schema on the DSS database.
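A minimal sketch of the corresponding tablespace setup on the OLTP database (the repeated ADD DATAFILE step was scripted; names and sizes follow the description above):
SQL> CREATE TABLESPACE soetbs DATAFILE '+OLTPDG' SIZE 30G;
SQL> ALTER TABLESPACE soetbs ADD DATAFILE '+OLTPDG' SIZE 30G;  -- repeat until 276 datafiles exist
SQL> ALTER USER soe DEFAULT TABLESPACE soetbs;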
Once the schemas for the workloads were created, we populated both databases with the Swingbench data generator, as shown below.
OLTP Database
The OLTP database was populated with the following data:
[oracle@oracle-hds-srv1 ~]$ sqlplus soe/soe
SQL*Plus: Release 11.2.0.4.0 Production on Tue Sep 2 14:29:41 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage
Management, OLAP,
Data Mining and Real Application Testing options
SQL> select table_name, num_rows from user_tables;
TABLE_NAME                       NUM_ROWS
------------------------------ ----------
CUSTOMERS                      9999999984
ORDER_ITEMS                    3.4589E+10
ORDERS                         1.1250E+10
LOGON                          2499999984
ORDERENTRY_METADATA                     4
PRODUCT_DESCRIPTIONS                 1000
PRODUCT_INFORMATION                  1000
INVENTORIES                        898372
WAREHOUSES                           1000
DSS (Sales History) Database
The DSS database was populated with the following data:
[oracle@oracle-hds-srv1 ~]$ sqlplus sh/sh
SQL*Plus: Release 11.2.0.4.0 Production on Tue Sep 2 14:37:37 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage
Management, OLAP,
Data Mining and Real Application Testing options
SQL> select table_name, num_rows from user_tables;
TABLE_NAME
-----------------------------CHANNELS
COUNTRIES
CUSTOMERS
PROMOTIONS
PRODUCTS
SUPPLEMENTARY_DEMOGRAPHICS
TIMES
SALES
NUM_ROWS
---------5
23
1.1500E+10
503
72
1.1500E+10
6209
5.7500E+10
As is typical of real-world deployments, we tested scalability and stress scenarios on the current 4-node Oracle RAC cluster configuration:
• OLTP user scalability and OLTP cluster scalability, representing small and random transactions
• DSS workload, representing larger transactions
• Mixed workload featuring OLTP and DSS workloads running simultaneously for 24 hours
Performance Data from the Tests
Once the databases were created, we began by calibrating the OLTP database for the number of users and the database configuration. For the Order Entry workload, we used a 96 GB SGA and ensured that hugepages were in use. Each OLTP scalability test ran for at least 12 hours, and we verified that results were consistent for the duration of a full run.
OLTP Workload
For OLTP workloads, the common measurement metrics are transactions per minute (TPM), user scalability with IOPS, and CPU utilization. The following charts show the scalability of the Order Entry workload.
We captured the data after running the tests with different numbers of users (100, 150, 200, and 250) across the 4-node cluster. During the tests, we validated that the Oracle SCAN listener fairly and evenly load balanced users across all 4 nodes of the cluster. We also observed appropriate scalability in transactions per minute as the number of users across the cluster increased. The next graph shows increased IO and scalability as the number of users across all cluster nodes increased.
As indicated in the graph, we observed about 85,951 IO/sec across the 4-node cluster. The Oracle AWR report below also summarizes physical reads/sec and physical writes/sec per instance. During the OLTP tests, we observed some resource utilization variations due to the random nature of the workload, as depicted by the 250-user IOPS. We ran each test multiple times to ensure that the numbers presented in this solution are consistent.
The following graph shows the CPU utilization on each node as the OLTP user count scaled from 100 to 250. We captured the host CPU utilization after running the tests with different numbers of users (100, 150, 200, and 250) across the 4-node cluster. During the tests, we validated that the Oracle SCAN listener fairly and evenly load balanced users across all 4 nodes of the cluster, and we observed that CPU was utilized equally across all the nodes as the number of users increased.
The table below shows the interconnect traffic for the 4-node Oracle RAC cluster during the 400-user run. The average interconnect traffic was 215 MB/sec for the duration of the run.
Interconnect Traffic   Sent (MB/s)   Received (MB/s)
Instance 1             112.1         110.3
Instance 2             124.0         119.2
Instance 3             100.8         105.7
Instance 4             112.7         113.5
Total                  449.6         448.7
We also tested different OLTP user counts while adding Oracle RAC nodes one after another. The following graph shows node scalability as well as user scalability.
DSS Workload
DSS workloads are generally sequential in nature, read intensive, and exercise large IO sizes. DSS workloads run a small number of users that typically exercise extremely complex queries that run for hours. For our tests, we ran the Swingbench Sales History workload with 8, 16, 24, and 32 users on one, two, three, and four nodes respectively, and captured the throughput from each test run. The charts below show the DSS workload results.
During the DSS test runs using the Swingbench SH schema, we confirmed that throughput scaled as nodes were added and the number of DSS users increased. We measured 6.12 GB/sec of throughput with the four-node database cluster and 32 DSS users.
Mixed Workload
We ran both OLTP and DSS workloads simultaneously. This test ensures that the configuration can sustain the small, random queries presented by OLTP alongside the large, sequential transactions submitted by the DSS workload. We ran the tests with different combinations of OLTP and DSS user loads while adding Oracle database cluster nodes one after another.
Destructive and Hardware Failover Tests
The goal of these tests is to ensure that the reference architecture withstands commonly occurring failures, whether due to unexpected crashes, hardware failures, or human errors. We conducted many hardware, software (process kill), and OS-specific failure tests that simulate real-world scenarios under stress conditions. The destructive testing also demonstrates the unique failover capabilities of the Cisco VIC 1240 adapter. Some of those test cases are highlighted below.
Scenario: Test 1 – Chassis 1 IOM2 Link Failure test
Test: Run the system on full mixed workload. Disconnect the IOM2 by pulling it out from the first chassis and reconnect the IOM2 after 5 minutes.
Status: Network traffic from IOM2 will fail over to IOM1 without any disruption.

Scenario: Test 2 – Chassis 2 IOM2 Link Failure test
Test: Run the system on full mixed workload. Disconnect the IOM2 by pulling it out from the second chassis and reconnect the IOM2 after 5 minutes.
Status: Network traffic from IOM2 will fail over to IOM1 without any disruption.

Scenario: Test 3 – Chassis 1 & 2 IOM2 Link Failure test
Test: Run the system on full mixed workload. Disconnect the IOM2 by pulling it out from both chassis and reconnect the IOM2 after 5 minutes.
Status: Network traffic from IOM2 will fail over to IOM1 without any disruption.

Scenario: Test 4 – UCS 6248 Fabric-B Failure test
Test: Run the system on full load as above. Reboot Fabric B and let it join the cluster back.
Status: The fabric failover did not cause any disruption to private/public network and storage traffic.

Scenario: Test 5 – UCS 6248 Fabric-A Failure test
Test: Run the system on full load as above. Reboot Fabric A and let it join the cluster back.
Status: The fabric failover did not cause any disruption to private/public network and storage traffic.

Scenario: Test 6 – Nexus 5548 Fabric-A Failure test
Test: Run the system on full mixed workload. Reboot the Nexus 5548 Fabric-A switch, wait for 5 minutes, and connect it back.
Status: No disruption to the public/private network and storage traffic.

Scenario: Test 7 – Nexus 5548 Fabric-B Failure test
Test: Run the system on full mixed workload. Reboot the Nexus 5548 Fabric-B switch, wait for 5 minutes, and connect it back.
Status: No disruption to the public/private network and storage traffic.
Conclusion
Cisco Unified Computing System is built on leading computing, networking, and infrastructure software components. With the Hitachi Virtual Storage Platform G1000, customers can leverage a secure, integrated, and optimized stack that includes compute, network, and storage resources that are sized, configured, and deployed as a fully tested unit running industry-standard applications such as Oracle Database 11g R2 RAC.
The following points describe what makes the combination of Cisco Unified Computing System and the Hitachi Virtual Storage Platform G1000 so powerful for Oracle environments:
• The stateless computing architecture provided by the Service Profile capability of Cisco Unified Computing System allows fast, non-disruptive workload changes to be executed simply and seamlessly across the integrated Cisco UCS infrastructure and Cisco x86 servers.
• With the introduction of Hitachi Dynamic Tiering, the complexities and overhead of implementing data lifecycle management and optimizing the use of tiered storage are solved. Dynamic Tiering software simplifies storage administration by eliminating the need for time-consuming manual data classification and movement of data to optimize the usage of tiered storage.
As a result, customers can achieve dramatic cost savings when leveraging Fibre Channel based products and deploy any application on a scalable, shared IT infrastructure built on Cisco and Hitachi technologies. This solution, jointly developed by Cisco and Hitachi, is a flexible infrastructure platform composed of pre-sized storage, networking, and server components. It is designed to ease your IT transformation and operational challenges with maximum efficiency and minimal risk.
Cisco Unified Computing System and Hitachi Storage differ from other solutions by providing:
• Simplified storage administration: Hitachi Dynamic Tiering software automatically optimizes data placement.
• The highest efficiency and throughput, through granular page-based data movement.
• Simplified management of up to 3 storage tiers as a single volume, automatically moving the most active data to the highest-performing tier.
• Integrated, validated technologies from industry leaders and top-tier software partners.
• A platform, built from unified compute, fabric, and storage technologies, that lets you scale to large-scale data centers without architectural changes.
• Centralized, simplified management of infrastructure resources, including end-to-end automation.
• A flexible, cooperative support model that resolves issues rapidly and spans new and legacy products.
Appendix
Appendix A: Cisco Nexus 5548 UP Configuration
The following example shows the Cisco Nexus 5548 fabric zoning configuration for all the Oracle RAC servers. Log in to the Cisco Nexus 5548 through SSH and issue the following:
Cisco Nexus 5548 Fabric A Configuration
!Command: show running-config
!Time: Fri Nov 20 20:54:05 2009
version 7.0(2)N1(1)
feature fcoe
hostname Oracle-HDS-N5K-A
feature npiv
feature telnet
cfs eth distribute
feature interface-vlan
feature hsrp
feature lacp
feature vpc
feature lldp
username admin password 5 $1$jwhzf7l2$2wgzBYzVsJnjrVoQI5TL01 role network-admin
ip domain-lookup
policy-map type network-qos jumbo
class type network-qos class-fcoe
pause no-drop
mtu 2158
class type network-qos class-default
mtu 9216
multicast-optimize
system qos
service-policy type queuing input fcoe-default-in-policy
service-policy type queuing output fcoe-default-out-policy
service-policy type qos input fcoe-default-in-policy
service-policy type network-qos jumbo
slot 2
port 1-16 type fc
snmp-server user admin network-admin auth md5
0xf23753e0e7c2ec2d83868f5a09b4767f priv 0xf23753e0e7c2ec2d83868f5a09b4767f
localizedkey
rmon event 1 log trap public description FATAL(1) owner PMON@FATAL
rmon event 2 log trap public description CRITICAL(2) owner PMON@CRITICAL
rmon event 3 log trap public description ERROR(3) owner PMON@ERROR
rmon event 4 log trap public description WARNING(4) owner PMON@WARNING
rmon event 5 log trap public description INFORMATION(5) owner PMON@INFO
vlan 1
vlan 10
name Public_Network
vlan 191
name Private_Network
spanning-tree port type edge bpduguard default
spanning-tree port type network default
vrf context management
ip route 0.0.0.0/0 173.36.215.1
port-channel load-balance ethernet source-dest-port
vpc domain 1
peer-keepalive destination 173.36.215.62
delay restore 150
auto-recovery
vsan database
vsan 101 name "Fabric_A"
vsan 102 name "Fabric_B"
device-alias database
device-alias name Oracle-Srv4-hba2 pwwn 20:00:00:25:b5:10:a0:05
device-alias commit
fcdomain fcid database
vsan 101 wwn 20:21:00:2a:6a:61:5f:00 fcid 0x340000 dynamic
vsan 101 wwn 20:22:00:2a:6a:61:5f:00 fcid 0x340020 dynamic
vsan 101 wwn 20:23:00:2a:6a:61:5f:00 fcid 0x340040 dynamic
vsan 101 wwn 20:24:00:2a:6a:61:5f:00 fcid 0x340060 dynamic
vsan 101 wwn 50:06:0e:80:07:27:9a:00 fcid 0x340080 dynamic
vsan 101 wwn 50:06:0e:80:07:27:9a:02 fcid 0x3400a0 dynamic
vsan 101 wwn 50:06:0e:80:07:27:9a:10 fcid 0x3400c0 dynamic
vsan 101 wwn 50:06:0e:80:07:27:9a:12 fcid 0x340100 dynamic
vsan 101 wwn 50:06:0e:80:07:c3:da:02 fcid 0x3400a1 dynamic
vsan 101 wwn 50:06:0e:80:07:c3:da:12 fcid 0x340101 dynamic
vsan 101 wwn 50:06:0e:80:07:c3:da:10 fcid 0x3400c1 dynamic
!
[Storage2-2A]
vsan 101 wwn 50:06:0e:80:07:c3:da:00 fcid 0x340081 dynamic
!
[Storage1-1A]
vsan 101 wwn 20:00:00:25:b5:10:a0:0c fcid 0x340021 dynamic
!
[Oracle-Srv1-hba0]
vsan 101 wwn 20:00:00:25:b5:10:a0:06 fcid 0x340001 dynamic
!
[Oracle-Srv3-hba0]
vsan 101 wwn 20:00:00:25:b5:10:a0:0a fcid 0x340061 dynamic
!
[Oracle-Srv2-hba0]
vsan 101 wwn 20:00:00:25:b5:10:a0:14 fcid 0x340041 dynamic
!
[Oracle-Srv4-hba0]
vsan 101 wwn 20:00:00:25:b5:10:a0:0d fcid 0x340022 dynamic
!
[Oracle-Srv1-hba2]
vsan 101 wwn 20:00:00:25:b5:10:a0:0b fcid 0x340062 dynamic
!
[Oracle-Srv2-hba2]
vsan 101 wwn 20:00:00:25:b5:10:a0:07 fcid 0x340002 dynamic
!
[Oracle-Srv3-hba2]
vsan 101 wwn 20:00:00:25:b5:10:a0:05 fcid 0x340042 dynamic
!
[Oracle-Srv4-hba2]
vsan 101 wwn 50:06:0e:80:07:c3:da:40 fcid 0x340140 dynamic
!
[Storage1-5A]
vsan 101 wwn 50:06:0e:80:07:c3:da:42 fcid 0x340160 dynamic
vsan 101 wwn 50:06:0e:80:07:c3:da:50 fcid 0x340180 dynamic
!
[Storage2-6A]
vsan 101 wwn 50:06:0e:80:07:c3:da:52 fcid 0x3401a0 dynamic
vsan 101 wwn 50:06:0e:80:07:c3:da:20 fcid 0x3401c0 dynamic
!
[Storage1-3A]
vsan 101 wwn 50:06:0e:80:07:c3:da:60 fcid 0x340181 dynamic
!
[Storage1-7A]
vsan 101 wwn 50:06:0e:80:07:c3:da:30 fcid 0x340200 dynamic
!
[Storage2-4A]
vsan 101 wwn 50:06:0e:80:07:c3:da:70 fcid 0x340220 dynamic
!
[Storage2-8A]
interface Vlan1
no shutdown
interface Vlan10
no shutdown
ip address 10.36.215.2/24
hsrp version 2
hsrp 10
preempt
priority 110
ip 10.36.215.1
interface Vlan191
no shutdown
ip address 191.168.1.2/24
hsrp version 2
hsrp 191
preempt
priority 110
ip 191.168.1.1
interface port-channel1
description vPC peer-link
switchport mode trunk
switchport trunk allowed vlan 1,10,191
spanning-tree port type network
vpc peer-link
interface port-channel3
description Fabric_Interconnect_A
switchport mode trunk
switchport trunk allowed vlan 1,10,191
spanning-tree port type edge trunk
vpc 3
interface port-channel4
description Fabric_Interconnect_B
switchport mode trunk
switchport trunk allowed vlan 1,10,191
spanning-tree port type edge trunk
vpc 4
vsan database
vsan 101 interface fc2/1
vsan 101 interface fc2/2
vsan 101 interface fc2/3
vsan 101 interface fc2/4
vsan 101 interface fc2/5
vsan 101 interface fc2/6
vsan 101 interface fc2/7
vsan 101 interface fc2/8
vsan 101 interface fc2/9
vsan 101 interface fc2/10
vsan 101 interface fc2/11
vsan 101 interface fc2/12
vsan 101 interface fc2/13
vsan 101 interface fc2/14
vsan 101 interface fc2/15
vsan 101 interface fc2/16
interface fc2/1
no shutdown
interface fc2/2
no shutdown
interface fc2/3
no shutdown
interface fc2/4
no shutdown
interface fc2/5
no shutdown
interface fc2/6
no shutdown
interface fc2/7
no shutdown
interface fc2/8
no shutdown
interface fc2/9
no shutdown
interface fc2/10
no shutdown
interface fc2/11
no shutdown
interface fc2/12
no shutdown
interface fc2/13
no shutdown
interface fc2/14
no shutdown
interface fc2/15
no shutdown
interface fc2/16
no shutdown
interface Ethernet1/1
description Nexus5k-A-Cluster-Interconnect
switchport mode trunk
switchport trunk allowed vlan 1,10,191
channel-group 1 mode active
interface Ethernet1/2
description Nexus5k-A-Cluster-Interconnect
switchport mode trunk
switchport trunk allowed vlan 1,10,191
channel-group 1 mode active
interface Ethernet1/3
description Fabric_Interconnect_A:1/1
switchport mode trunk
switchport trunk allowed vlan 1,10,191
channel-group 3 mode active
interface Ethernet1/4
description Fabric_Interconnect_B:1/1
switchport mode trunk
switchport trunk allowed vlan 1,10,191
channel-group 4 mode active
interface Ethernet1/5
interface Ethernet1/6
interface Ethernet1/7
interface Ethernet1/8
interface Ethernet1/9
interface Ethernet1/10
interface Ethernet1/11
interface Ethernet1/12
switchport access vlan 10
speed 1000
interface Ethernet1/13
interface Ethernet1/14
interface Ethernet1/15
interface Ethernet1/16
interface Ethernet1/17
description FCoE_FI_A_17
interface Ethernet1/18
description FCoE_FI_B_17
interface Ethernet1/19
interface Ethernet1/20
interface Ethernet1/21
interface Ethernet1/22
interface Ethernet1/23
interface Ethernet1/24
interface Ethernet1/25
interface Ethernet1/26
interface Ethernet1/27
interface Ethernet1/28
interface Ethernet1/29
interface Ethernet1/30
interface Ethernet1/31
shutdown
speed 1000
interface Ethernet1/32
switchport access vlan 10
speed 1000
interface mgmt0
vrf member management
ip address 173.36.215.61/24
line console
line vty
boot kickstart bootflash:/n5000-uk9-kickstart.7.0.2.N1.1.bin
boot system bootflash:/n5000-uk9.7.0.2.N1.1.bin
interface fc2/1
interface fc2/2
interface fc2/3
interface fc2/4
interface fc2/5
interface fc2/6
interface fc2/7
interface fc2/8
interface fc2/9
interface fc2/10
interface fc2/11
interface fc2/12
interface fc2/13
interface fc2/14
interface fc2/15
interface fc2/16
!Full Zone Database Section for vsan 101
zone name Oracle-HDS-1A vsan 101
zone name oracle-hds-srv1-hba0 vsan 101
member pwwn 20:00:00:25:b5:10:a0:0c
!
[Oracle-Srv1-hba0]
member pwwn 50:06:0e:80:07:c3:da:00
!
[Storage1-1A]
zone name oracle-hds-srv1-hba2 vsan 101
member pwwn 20:00:00:25:b5:10:a0:0d
!
[Oracle-Srv1-hba2]
member pwwn 50:06:0e:80:07:c3:da:10
!
[Storage2-2A]
zone name oracle-hds-srv2-hba0 vsan 101
member pwwn 20:00:00:25:b5:10:a0:0a
!
[Oracle-Srv2-hba0]
member pwwn 50:06:0e:80:07:c3:da:20
!
[Storage1-3A]
zone name oracle-hds-srv2-hba2 vsan 101
member pwwn 20:00:00:25:b5:10:a0:0b
!
[Oracle-Srv2-hba2]
member pwwn 50:06:0e:80:07:c3:da:30
!
[Storage2-4A]
zone name Oracle-HDS-SRV1 vsan 101
zone name oracle-hds-srv3-hba0 vsan 101
member pwwn 20:00:00:25:b5:10:a0:06
!
[Oracle-Srv3-hba0]
member pwwn 50:06:0e:80:07:c3:da:40
!
[Storage1-5A]
zone name oracle-hds-srv3-hba2 vsan 101
member pwwn 20:00:00:25:b5:10:a0:07
!
[Oracle-Srv3-hba2]
member pwwn 50:06:0e:80:07:c3:da:50
!
[Storage2-6A]
zone name oracle-hds-srv4-hba0 vsan 101
member pwwn 20:00:00:25:b5:10:a0:14
!
[Oracle-Srv4-hba0]
member pwwn 50:06:0e:80:07:c3:da:60
!
[Storage1-7A]
zone name oracle-hds-srv4-hba2 vsan 101
member pwwn 20:00:00:25:b5:10:a0:05
!
[Oracle-Srv4-hba2]
member pwwn 50:06:0e:80:07:c3:da:70
!
[Storage2-8A]
zone name server1-boot-hba0 vsan 101
member pwwn 50:06:0e:80:07:c3:da:00
!
[Storage1-1A]
member pwwn 50:06:0e:80:07:c3:da:10
!
[Storage2-2A]
member pwwn 20:00:00:25:b5:10:a0:0c
!
[Oracle-Srv1-hba0]
zone name server2-boot-hba0 vsan 101
member pwwn 20:00:00:25:b5:10:a0:0a
!
[Oracle-Srv2-hba0]
member pwwn 50:06:0e:80:07:c3:da:00
!
[Storage1-1A]
member pwwn 50:06:0e:80:07:c3:da:10
!
[Storage2-2A]
zone name server3-boot-hba0 vsan 101
member pwwn 50:06:0e:80:07:c3:da:40
!
[Storage1-5A]
member pwwn 50:06:0e:80:07:c3:da:50
!
[Storage2-6A]
member pwwn 20:00:00:25:b5:10:a0:06
!
[Oracle-Srv3-hba0]
zone name server4-boot-hba0 vsan 101
member pwwn 20:00:00:25:b5:10:a0:14
!
[Oracle-Srv4-hba0]
member pwwn 50:06:0e:80:07:c3:da:40
!
[Storage1-5A]
member pwwn 50:06:0e:80:07:c3:da:50
!
[Storage2-6A]
zoneset name Oracle-HDS-A vsan 101
member oracle-hds-srv1-hba0
member oracle-hds-srv1-hba2
member oracle-hds-srv2-hba0
member oracle-hds-srv2-hba2
member oracle-hds-srv3-hba0
member oracle-hds-srv3-hba2
member oracle-hds-srv4-hba0
member oracle-hds-srv4-hba2
member server1-boot-hba0
member server2-boot-hba0
member server3-boot-hba0
member server4-boot-hba0
zoneset activate name Oracle-HDS-A vsan 101
Cisco Nexus 5548 Fabric B Configuration
!Command: show running-config
!Time: Thu Nov 19 21:04:08 2009
version 7.0(2)N1(1)
feature fcoe
hostname Oracle-HDS-N5K-B
feature npiv
feature telnet
cfs eth distribute
feature interface-vlan
feature hsrp
feature lacp
feature vpc
feature lldp
username admin password 5 $1$W3eLJoVN$YSq0BYeEM42vWyMwdjguY. role network-admin
ip domain-lookup
policy-map type network-qos jumbo
class type network-qos class-fcoe
pause no-drop
mtu 2158
class type network-qos class-default
mtu 9216
multicast-optimize
system qos
service-policy type queuing input fcoe-default-in-policy
service-policy type queuing output fcoe-default-out-policy
service-policy type qos input fcoe-default-in-policy
service-policy type network-qos jumbo
slot 2
port 1-16 type fc
snmp-server user admin network-admin auth md5
0x15de5eb8b495705ef3fea8c58fdbdbee priv 0x15de5eb8b495705ef3fea8c58fdbdbee
localizedkey
rmon event 1 log trap public description FATAL(1) owner PMON@FATAL
rmon event 2 log trap public description CRITICAL(2) owner PMON@CRITICAL
rmon event 3 log trap public description ERROR(3) owner PMON@ERROR
rmon event 4 log trap public description WARNING(4) owner PMON@WARNING
rmon event 5 log trap public description INFORMATION(5) owner PMON@INFO
vlan 1
vlan 10
name Public_Network
vlan 191
name Private_Network
spanning-tree port type edge bpduguard default
spanning-tree port type network default
vrf context management
ip route 0.0.0.0/0 173.36.215.1
vpc domain 1
role priority 20
peer-keepalive destination 173.36.215.61 source 173.36.215.62
delay restore 150
auto-recovery
vsan database
vsan 101 name "Fabric_A"
vsan 102 name "Fabric_B"
device-alias database
device-alias name Oracle-Srv4-hba3 pwwn 20:00:00:25:b5:20:b0:09
device-alias commit
fcdomain fcid database
vsan 102 wwn 20:22:00:2a:6a:6c:2c:80 fcid 0x210000 dynamic
vsan 102 wwn 20:23:00:2a:6a:6c:2c:80 fcid 0x210020 dynamic
vsan 102 wwn 20:24:00:2a:6a:6c:2c:80 fcid 0x210040 dynamic
vsan 102 wwn 20:21:00:2a:6a:6c:2c:80 fcid 0x210060 dynamic
vsan 102 wwn 50:06:0e:80:07:27:9a:20 fcid 0x210080 dynamic
vsan 102 wwn 50:06:0e:80:07:27:9a:22 fcid 0x2100a0 dynamic
vsan 102 wwn 50:06:0e:80:07:27:9a:30 fcid 0x2100c0 dynamic
vsan 102 wwn 50:06:0e:80:07:27:9a:32 fcid 0x210100 dynamic
vsan 102 wwn 50:06:0e:80:07:c3:da:30 fcid 0x2100c1 dynamic
vsan 102 wwn 50:06:0e:80:07:c3:da:32 fcid 0x210101 dynamic
!
[Storage2-4C]
vsan 102 wwn 50:06:0e:80:07:c3:da:20 fcid 0x210081 dynamic
vsan 102 wwn 50:06:0e:80:07:c3:da:22 fcid 0x2100a1 dynamic
!
[Storage1-3C]
vsan 102 wwn 20:00:00:25:b5:20:b0:0e fcid 0x210001 dynamic
!
[Oracle-Srv1-hba1]
vsan 102 wwn 20:00:00:25:b5:20:b0:0a fcid 0x210061 dynamic
!
[Oracle-Srv3-hba1]
vsan 102 wwn 20:00:00:25:b5:20:b0:0c fcid 0x210041 dynamic
!
[Oracle-Srv2-hba1]
vsan 102 wwn 20:00:00:25:b5:20:b0:08 fcid 0x210021 dynamic
!
[Oracle-Srv4-hba1]
vsan 102 wwn 20:00:00:25:b5:20:b0:0f fcid 0x210002 dynamic
!
[Oracle-Srv1-hba3]
vsan 102 wwn 20:00:00:25:b5:20:b0:0d fcid 0x210042 dynamic
!
[Oracle-Srv2-hba3]
vsan 102 wwn 20:00:00:25:b5:20:b0:0b fcid 0x210062 dynamic
!
[Oracle-Srv3-hba3]
vsan 102 wwn 20:00:00:25:b5:20:b0:09 fcid 0x210022 dynamic
!
[Oracle-Srv4-hba3]
vsan 102 wwn 50:06:0e:80:07:c3:da:60 fcid 0x210140 dynamic
vsan 102 wwn 50:06:0e:80:07:c3:da:62 fcid 0x210160 dynamic
!
[Storage1-7C]
vsan 102 wwn 50:06:0e:80:07:c3:da:70 fcid 0x210180 dynamic
vsan 102 wwn 50:06:0e:80:07:c3:da:72 fcid 0x2101a0 dynamic
!
[Storage2-8C]
vsan 102 wwn 50:06:0e:80:07:c3:da:02 fcid 0x2100a2 dynamic
!
[Storage1-1C]
vsan 102 wwn 50:06:0e:80:07:c3:da:42 fcid 0x210141 dynamic
!
[Storage1-5C]
vsan 102 wwn 50:06:0e:80:07:c3:da:12 fcid 0x210102 dynamic
!
[Storage2-2C]
vsan 102 wwn 50:06:0e:80:07:c3:da:52 fcid 0x2101c0 dynamic
!
[Storage2-6C]
interface Vlan1
no shutdown
interface Vlan10
no shutdown
ip address 10.36.215.3/24
hsrp version 2
hsrp 10
preempt
priority 110
ip 10.36.215.1
interface Vlan191
no shutdown
ip address 191.168.1.3/24
hsrp version 2
hsrp 191
preempt
priority 110
ip 191.168.1.1
interface port-channel1
description vPC peer-link
switchport mode trunk
switchport trunk allowed vlan 1,10,191
spanning-tree port type network
vpc peer-link
interface port-channel3
description Fabric_Interconnect_A
switchport mode trunk
switchport trunk allowed vlan 1,10,191
spanning-tree port type edge trunk
vpc 3
interface port-channel4
description Fabric_Interconnect_B
switchport mode trunk
switchport trunk allowed vlan 1,10,191
spanning-tree port type edge trunk
vpc 4
vsan database
vsan 102 interface fc2/1
vsan 102 interface fc2/2
vsan 102 interface fc2/3
vsan 102 interface fc2/4
vsan 102 interface fc2/5
vsan 102 interface fc2/6
vsan 102 interface fc2/7
vsan 102 interface fc2/8
vsan 102 interface fc2/9
vsan 102 interface fc2/10
vsan 102 interface fc2/11
vsan 102 interface fc2/12
vsan 102 interface fc2/13
vsan 102 interface fc2/14
vsan 102 interface fc2/15
vsan 102 interface fc2/16
interface fc2/1
no shutdown
interface fc2/2
no shutdown
interface fc2/3
no shutdown
interface fc2/4
no shutdown
interface fc2/5
no shutdown
interface fc2/6
no shutdown
interface fc2/7
no shutdown
interface fc2/8
no shutdown
interface fc2/9
no shutdown
interface fc2/10
no shutdown
interface fc2/11
no shutdown
interface fc2/12
no shutdown
interface fc2/13
no shutdown
interface fc2/14
no shutdown
interface fc2/15
no shutdown
interface fc2/16
no shutdown
interface Ethernet1/1
description Nexus5k-B-Cluster-Interconnect
switchport mode trunk
switchport trunk allowed vlan 1,10,191
channel-group 1 mode active
interface Ethernet1/2
description Nexus5k-B-Cluster-Interconnect
switchport mode trunk
switchport trunk allowed vlan 1,10,191
channel-group 1 mode active
interface Ethernet1/3
description Fabric_Interconnect_A:1/2
switchport mode trunk
switchport trunk allowed vlan 1,10,191
channel-group 3 mode active
interface Ethernet1/4
description Fabric_Interconnect_B:1/2
switchport mode trunk
switchport trunk allowed vlan 1,10,191
channel-group 4 mode active
interface Ethernet1/5
interface Ethernet1/6
interface Ethernet1/7
interface Ethernet1/8
interface Ethernet1/9
interface Ethernet1/10
interface Ethernet1/11
interface Ethernet1/12
interface Ethernet1/13
interface Ethernet1/14
interface Ethernet1/15
interface Ethernet1/16
interface Ethernet1/17
description FCoE_FI_A_18
interface Ethernet1/18
description FCoE_FI_B_18
interface Ethernet1/19
interface Ethernet1/20
interface Ethernet1/21
interface Ethernet1/22
interface Ethernet1/23
interface Ethernet1/24
interface Ethernet1/25
interface Ethernet1/26
interface Ethernet1/27
interface Ethernet1/28
interface Ethernet1/29
interface Ethernet1/30
interface Ethernet1/31
shutdown
switchport mode trunk
speed 1000
interface Ethernet1/32
shutdown
switchport mode trunk
speed 1000
interface mgmt0
vrf member management
ip address 173.36.215.62/24
line console
line vty
boot kickstart bootflash:/n5000-uk9-kickstart.7.0.2.N1.1.bin
boot system bootflash:/n5000-uk9.7.0.2.N1.1.bin
interface fc2/1
interface fc2/2
interface fc2/3
interface fc2/4
interface fc2/5
interface fc2/6
interface fc2/7
interface fc2/8
interface fc2/9
interface fc2/10
interface fc2/11
interface fc2/12
interface fc2/13
interface fc2/14
interface fc2/15
interface fc2/16
!Full Zone Database Section for vsan 102
zone name oracle-hds-srv1-hba1 vsan 102
member pwwn 20:00:00:25:b5:20:b0:0e
!
[Oracle-Srv1-hba1]
member pwwn 50:06:0e:80:07:c3:da:02
!
[Storage1-1C]
zone name oracle-hds-srv1-hba3 vsan 102
member pwwn 20:00:00:25:b5:20:b0:0f
!
[Oracle-Srv1-hba3]
member pwwn 50:06:0e:80:07:c3:da:12
!
[Storage2-2C]
zone name oracle-hds-srv2-hba1 vsan 102
member pwwn 20:00:00:25:b5:20:b0:0c
!
[Oracle-Srv2-hba1]
member pwwn 50:06:0e:80:07:c3:da:22
!
[Storage1-3C]
zone name oracle-hds-srv2-hba3 vsan 102
member pwwn 20:00:00:25:b5:20:b0:0d
!
[Oracle-Srv2-hba3]
member pwwn 50:06:0e:80:07:c3:da:32
!
[Storage2-4C]
zone name oracle-hds-srv3-hba1 vsan 102
member pwwn 20:00:00:25:b5:20:b0:0a
!
[Oracle-Srv3-hba1]
member pwwn 50:06:0e:80:07:c3:da:42
!
[Storage1-5C]
zone name oracle-hds-srv3-hba3 vsan 102
member pwwn 20:00:00:25:b5:20:b0:0b
!
[Oracle-Srv3-hba3]
member pwwn 50:06:0e:80:07:c3:da:52
!
[Storage2-6C]
zone name oracle-hds-srv4-hba1 vsan 102
member pwwn 20:00:00:25:b5:20:b0:08
!
[Oracle-Srv4-hba1]
member pwwn 50:06:0e:80:07:c3:da:62
!
[Storage1-7C]
zone name oracle-hds-srv4-hba3 vsan 102
member pwwn 20:00:00:25:b5:20:b0:09
!
[Oracle-Srv4-hba3]
member pwwn 50:06:0e:80:07:c3:da:72
!
[Storage2-8C]
zone name server1-boot-hba1 vsan 102
member pwwn 50:06:0e:80:07:c3:da:22
!
[Storage1-3C]
member pwwn 50:06:0e:80:07:c3:da:32
!
[Storage2-4C]
member pwwn 20:00:00:25:b5:20:b0:0e
!
[Oracle-Srv1-hba1]
zone name server2-boot-hba1 vsan 102
member pwwn 20:00:00:25:b5:20:b0:0c
!
[Oracle-Srv2-hba1]
member pwwn 50:06:0e:80:07:c3:da:22
!
[Storage1-3C]
member pwwn 50:06:0e:80:07:c3:da:32
!
[Storage2-4C]
zone name server3-boot-hba1 vsan 102
member pwwn 20:00:00:25:b5:20:b0:0a
!
[Oracle-Srv3-hba1]
member pwwn 50:06:0e:80:07:c3:da:62
!
[Storage1-7C]
member pwwn 50:06:0e:80:07:c3:da:72
!
[Storage2-8C]
zone name server4-boot-hba1 vsan 102
member pwwn 20:00:00:25:b5:20:b0:08
!
[Oracle-Srv4-hba1]
member pwwn 50:06:0e:80:07:c3:da:62
!
[Storage1-7C]
member pwwn 50:06:0e:80:07:c3:da:72
!
[Storage2-8C]
member pwwn 20:00:00:25:b5:20:b0:09
!
[Oracle-Srv4-hba3]
zoneset name Oracle-HDS-B vsan 102
member oracle-hds-srv1-hba1
member oracle-hds-srv1-hba3
member oracle-hds-srv2-hba1
member oracle-hds-srv2-hba3
member oracle-hds-srv3-hba1
member oracle-hds-srv3-hba3
member oracle-hds-srv4-hba1
member oracle-hds-srv4-hba3
member server1-boot-hba1
member server2-boot-hba1
member server3-boot-hba1
member server4-boot-hba1
zoneset activate name Oracle-HDS-B vsan 102
Appendix B: Cisco UCS Service Profiles
Oracle-HDS-FI-A# show fabric-interconnect
Fabric Interconnect:
ID   OOB IP Addr     OOB Gateway     OOB Netmask     OOB IPv6 Address OOB IPv6 Gateway Prefix Operability
---- --------------- --------------- --------------- ---------------- ---------------- ------ -----------
A    173.36.215.58   173.36.215.1    255.255.255.0   ::               ::               64     Operable
B    173.36.215.59   173.36.215.1    255.255.255.0   ::               ::               64     Operable
Oracle-HDS-FI-A# show fabric version
Fabric Interconnect A:
Running-Kern-Vers: 5.2(3)N2(2.22c)
Running-Sys-Vers: 5.2(3)N2(2.22c)
Package-Vers: 2.2(2c)A
Startup-Kern-Vers: 5.2(3)N2(2.22c)
Startup-Sys-Vers: 5.2(3)N2(2.22c)
Act-Kern-Status: Ready
Act-Sys-Status: Ready
Bootloader-Vers:
v3.6.0(05/09/2012)
Fabric Interconnect B:
Running-Kern-Vers: 5.2(3)N2(2.22c)
Running-Sys-Vers: 5.2(3)N2(2.22c)
Package-Vers: 2.2(2c)A
Startup-Kern-Vers: 5.2(3)N2(2.22c)
Startup-Sys-Vers: 5.2(3)N2(2.22c)
Act-Kern-Status: Ready
Act-Sys-Status: Ready
Bootloader-Vers:
v3.6.0(05/09/2012)
Oracle-HDS-FI-A# show server inventory
Server  Equipped PID  Equipped VID  Equipped Serial (SN)  Slot Status  Ackd Memory (MB)  Ackd Cores
------- ------------- ------------- --------------------- ------------ ----------------- ----------
1/1     UCSB-B200-M3  V01           FCH16507NMC           Equipped     262144            24
1/2     UCSB-B200-M3  V01           FCH16507P21           Equipped     262144            24
1/3     UCSB-B200-M3  V01           FCH16217L51           Equipped     131072            24
1/4     UCSB-B200-M3  V01           FCH16287KJ0           Equipped     131072            24
1/5                                                       Empty
1/6                                                       Empty
1/7                                                       Empty
1/8                                                       Empty
2/1     UCSB-B200-M3  V01           FCH1734J49Y           Equipped     262144            24
2/2     UCSB-B200-M3  V01           FCH1734J46P           Equipped     262144            24
2/3     UCSB-B200-M3  V01           FCH16207L12           Equipped     131072            24
2/4                                                       Empty
2/5                                                       Empty
2/6                                                       Empty
2/7                                                       Empty
2/8                                                       Empty
Oracle-HDS-FI-A# show service-profile inventory
Service Profile Name  Type               Server  Assignment  Association
--------------------- ------------------ ------- ----------- -------------
Oracle-HDS-Fabric-A   Updating Template          Unassigned  Unassociated
Oracle-HDS-Fabric-B   Updating Template          Unassigned  Unassociated
Oracle-HDS-SP-A1      Instance           1/1     Assigned    Associated
Oracle-HDS-SP-A2      Instance           2/1     Assigned    Associated
Oracle-HDS-SP-B1      Instance           1/2     Assigned    Associated
Oracle-HDS-SP-B2      Instance           2/2     Assigned    Associated
Oracle_Client1        Instance           1/3     Assigned    Associated
Oracle_Client2        Instance           2/3     Assigned    Associated
Oracle-HDS-FI-A# show service-profile inventory expand
Service Profile Name: Oracle-HDS-Fabric-A
Type: Updating Template
Server:
Description:
Assignment: Unassigned
Association: Unassociated
Service Profile Name: Oracle-HDS-Fabric-B
Type: Updating Template
Server:
Description:
Assignment: Unassigned
Association: Unassociated
Service Profile Name: Oracle-HDS-SP-A1
Type: Instance
Server: 1/1
Description:
Assignment: Assigned
Association: Associated
Server 1/1:
Name:
Acknowledged Serial (SN): FCH16507NMC
Acknowledged Product Name: Cisco UCS B200 M3
Acknowledged PID: UCSB-B200-M3
Acknowledged VID: V03
Acknowledged Memory (MB): 262144
Acknowledged Effective Memory (MB): 262144
Acknowledged Cores: 24
Acknowledged Adapters: 1
Bios:
Model: UCSB-B200-M3
Revision: 0
Serial:
Vendor: Cisco Systems, Inc.
Motherboard:
Product Name: Cisco UCS B200 M3
PID: UCSB-B200-M3
VID: V01
Vendor: Cisco Systems Inc
Serial (SN): FCH16507NMC
HW Revision: 0
Array 1:
DIMM Location  Presence  Overall Status  Type    Capacity (MB)  Clock
---- --------- --------- --------------- ------- -------------- -------
1    A0        Equipped  Operable        DDR3    16384          1866
2    A1        Equipped  Operable        DDR3    16384          1866
3    A2        Missing   Removed         Undisc  Unknown        Unknown
4    B0        Equipped  Operable        DDR3    16384          1866
5    B1        Equipped  Operable        DDR3    16384          1866
6    B2        Missing   Removed         Undisc  Unknown        Unknown
7    C0        Equipped  Operable        DDR3    16384          1866
8    C1        Equipped  Operable        DDR3    16384          1866
9    C2        Missing   Removed         Undisc  Unknown        Unknown
10   D0        Equipped  Operable        DDR3    16384          1866
11   D1        Equipped  Operable        DDR3    16384          1866
12   D2        Missing   Removed         Undisc  Unknown        Unknown
13   E0        Equipped  Operable        DDR3    16384          1866
14   E1        Equipped  Operable        DDR3    16384          1866
15   E2        Missing   Removed         Undisc  Unknown        Unknown
16   F0        Equipped  Operable        DDR3    16384          1866
17   F1        Equipped  Operable        DDR3    16384          1866
18   F2        Missing   Removed         Undisc  Unknown        Unknown
19   G0        Equipped  Operable        DDR3    16384          1866
20   G1        Equipped  Operable        DDR3    16384          1866
21   G2        Missing   Removed         Undisc  Unknown        Unknown
22   H0        Equipped  Operable        DDR3    16384          1866
23   H1        Equipped  Operable        DDR3    16384          1866
24   H2        Missing   Removed         Undisc  Unknown        Unknown
CPUs:
ID: 1
Presence: Equipped
Architecture: Xeon
Socket: CPU1
Cores: 12
Speed (GHz): 2.700000
Stepping: 4
Product Name: Intel(R) Xeon(R) E5-2697 v2
PID: UCS-CPU-E52697B
VID: V01
Vendor: Intel(R) Corporation
HW Revision: 0
ID: 2
Presence: Equipped
Architecture: Xeon
Socket: CPU2
Cores: 12
Speed (GHz): 2.700000
Stepping: 4
Product Name: Intel(R) Xeon(R) E5-2697 v2
PID: UCS-CPU-E52697B
VID: V01
Vendor: Intel(R) Corporation
HW Revision: 0
RAID Controller 1:
Type: SAS
Vendor: LSI Logic / Symbios Logic
Model: LSI MegaRAID SAS 2004 ROMB
Serial: LSIROMB-0
HW Revision: B2
PCI Addr: 01:00.0
Raid Support: RAID0, RAID1
OOB Interface Supported: Yes
Rebuild Rate: 30
Controller Status: Optimal
Local Disk 1:
Product Name:
PID:
VID:
Vendor:
Model:
Vendor Description:
Serial:
HW Rev: 0
Block Size: Unknown
Blocks: Unknown
Operability: N/A
Oper Qualifier Reason: N/A
Presence: Missing
Size (MB): Unknown
Drive State: Unknown
Power State: Unknown
Link Speed: Unknown
Device Type: Unspecified
Local Disk 2:
Product Name:
PID:
VID:
Vendor:
Model:
Vendor Description:
Serial:
HW Rev: 0
Block Size: Unknown
Blocks: Unknown
Operability: N/A
Oper Qualifier Reason: N/A
Presence: Missing
Size (MB): Unknown
Drive State: Unknown
Power State: Unknown
Link Speed: Unknown
Device Type: Unspecified
Local Disk Config Definition:
Mode: Any Configuration
Description:
Protect Configuration: Yes
Adapter:
Adapter  PID               Vendor             Serial       Overall Status
-------- ----------------- ------------------ ------------ ---------------
1        UCSB-MLOM-40G-01  Cisco Systems Inc  FCH16507D06  Operable
Appendix C: Verify Oracle RAC Cluster Status Command Output
[root@oracle-hds-srv1 ~]# /u01/app/11.2.4/grid/bin/crsctl check cluster -all
**************************************************************
oracle-hds-srv1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
oracle-hds-srv2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
oracle-hds-srv3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
oracle-hds-srv4:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@oracle-hds-srv1 ~]# /u01/app/11.2.4/grid/bin/crs_stat
NAME=ora.BACKUPDG.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.DATADG.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=OFFLINE
NAME=ora.DSSDG.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2
NAME=ora.LISTENER_SCAN2.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4
NAME=ora.LISTENER_SCAN3.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3
NAME=ora.OCRVOTE.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.OLTPDG.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.REDODG.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.REDOGROUP.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3
NAME=ora.dss16db.db
TYPE=ora.database.type
TARGET=ONLINE
STATE=OFFLINE
NAME=ora.dssdb.db
TYPE=ora.database.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.gsd
TYPE=ora.gsd.type
TARGET=OFFLINE
STATE=OFFLINE
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3
NAME=ora.oltpdb.db
TYPE=ora.database.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.oracle-hds-srv1.ASM1.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.oracle-hds-srv1.LISTENER_ORACLE-HDS-SRV1.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.oracle-hds-srv1.gsd
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE
NAME=ora.oracle-hds-srv1.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.oracle-hds-srv1.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.oracle-hds-srv2.ASM2.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2
NAME=ora.oracle-hds-srv2.LISTENER_ORACLE-HDS-SRV2.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2
NAME=ora.oracle-hds-srv2.gsd
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE
NAME=ora.oracle-hds-srv2.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2
NAME=ora.oracle-hds-srv2.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2
NAME=ora.oracle-hds-srv3.ASM3.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3
NAME=ora.oracle-hds-srv3.LISTENER_ORACLE-HDS-SRV3.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3
NAME=ora.oracle-hds-srv3.gsd
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE
NAME=ora.oracle-hds-srv3.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3
NAME=ora.oracle-hds-srv3.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3
NAME=ora.oracle-hds-srv4.ASM4.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4
NAME=ora.oracle-hds-srv4.LISTENER_ORACLE-HDS-SRV4.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4
NAME=ora.oracle-hds-srv4.gsd
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE
NAME=ora.oracle-hds-srv4.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4
NAME=ora.oracle-hds-srv4.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4
NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv1
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv2
NAME=ora.scan2.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv4
NAME=ora.scan3.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle-hds-srv3
[root@oracle-hds-srv1 ~]# /u01/app/11.2.4/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3580
         Available space (kbytes) :     258540
         ID                       :  290914339
         Device/File Name         :   +OCRVOTE
                                    Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
[root@oracle-hds-srv1 ~]# /u01/app/11.2.4/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                 File Name                        Disk group
--  -------  --------------------------------  -------------------------------  ----------
 1. ONLINE   b00a292a5f614f76bfb48ebfb99e5e37  (/dev/oracleasm/disks/ASMDISK1)  [OCRVOTE]
 2. ONLINE   01c6249813e34fe0bfe2e1695b67f99d  (/dev/oracleasm/disks/ASMDISK2)  [OCRVOTE]
 3. ONLINE   3788279ac1204f06bf09f3e98c5feba0  (/dev/oracleasm/disks/ASMDISK3)  [OCRVOTE]
Located 3 voting disk(s).
[grid@oracle-hds-srv1 ~]$ cluvfy comp sys -n
oracle-hds-srv1,oracle-hds-srv2,oracle-hds-srv3,oracle-hds-srv4 -p crs
-verbose
Verifying system requirement
Check: Total memory
Node Name        Available                    Required             Status
---------------  ---------------------------  -------------------  ------
oracle-hds-srv2  252.4143GB (2.64675576E8KB)  1.5GB (1572864.0KB)  passed
oracle-hds-srv1  252.4143GB (2.64675576E8KB)  1.5GB (1572864.0KB)  passed
oracle-hds-srv4  252.4143GB (2.64675576E8KB)  1.5GB (1572864.0KB)  passed
oracle-hds-srv3  252.4143GB (2.64675576E8KB)  1.5GB (1572864.0KB)  passed
Result: Total memory check passed

Check: Available memory
Node Name        Available                    Required          Status
---------------  ---------------------------  ----------------  ------
oracle-hds-srv2  105.803GB (1.10942452E8KB)   50MB (51200.0KB)  passed
oracle-hds-srv1  105.2837GB (1.10397972E8KB)  50MB (51200.0KB)  passed
oracle-hds-srv4  106.1908GB (1.11349132E8KB)  50MB (51200.0KB)  passed
oracle-hds-srv3  105.8552GB (1.10997228E8KB)  50MB (51200.0KB)  passed
Result: Available memory check passed

Check: Swap space
Node Name        Available                  Required              Status
---------------  -------------------------  --------------------  ------
oracle-hds-srv2  79.1016GB (8.2943996E7KB)  16GB (1.6777216E7KB)  passed
oracle-hds-srv1  79.1016GB (8.2943996E7KB)  16GB (1.6777216E7KB)  passed
oracle-hds-srv4  79.1016GB (8.2943996E7KB)  16GB (1.6777216E7KB)  passed
oracle-hds-srv3  79.1016GB (8.2943996E7KB)  16GB (1.6777216E7KB)  passed
Result: Swap space check passed
Check: Free disk space for "oracle-hds-srv2:/u01/app/11.2.4/grid"
Path
Node Name
Mount point
Available
Required
Status
---------------- ------------ ------------ ------------ ----------------------/u01/app/11.2.4/grid oracle-hds-srv2 /
97.2861GB
5.5GB
passed
Result: Free disk space check passed for
"oracle-hds-srv2:/u01/app/11.2.4/grid"
Check: Free disk space for "oracle-hds-srv1:/u01/app/11.2.4/grid"
Path
Node Name
Mount point
Available
Required
Status
---------------- ------------ ------------ ------------ ----------------------/u01/app/11.2.4/grid oracle-hds-srv1 /
35.3184GB
5.5GB
passed
Result: Free disk space check passed for
"oracle-hds-srv1:/u01/app/11.2.4/grid"
Check: Free disk space for "oracle-hds-srv4:/u01/app/11.2.4/grid"
Path
Node Name
Mount point
Available
Required
Status
---------------- ------------ ------------ ------------ ----------------------/u01/app/11.2.4/grid oracle-hds-srv4 /
97.9219GB
5.5GB
passed
Result: Free disk space check passed for
"oracle-hds-srv4:/u01/app/11.2.4/grid"
Check: Free disk space for "oracle-hds-srv3:/u01/app/11.2.4/grid"
Path
Node Name
Mount point
Available
Required
Status
---------------- ------------ ------------ ------------ ----------------------/u01/app/11.2.4/grid oracle-hds-srv3 /
97.3613GB
5.5GB
passed
Result: Free disk space check passed for
"oracle-hds-srv3:/u01/app/11.2.4/grid"
Check: Free disk space for "oracle-hds-srv2:/tmp"
  Path  Node Name        Mount point  Available  Required  Status
  ----  ---------------  -----------  ---------  --------  ------
  /tmp  oracle-hds-srv2  /tmp         97.2861GB  1GB       passed
Result: Free disk space check passed for "oracle-hds-srv2:/tmp"
Check: Free disk space for "oracle-hds-srv1:/tmp"
  Path  Node Name        Mount point  Available  Required  Status
  ----  ---------------  -----------  ---------  --------  ------
  /tmp  oracle-hds-srv1  /tmp         35.3184GB  1GB       passed
Result: Free disk space check passed for "oracle-hds-srv1:/tmp"
Check: Free disk space for "oracle-hds-srv4:/tmp"
  Path  Node Name        Mount point  Available  Required  Status
  ----  ---------------  -----------  ---------  --------  ------
  /tmp  oracle-hds-srv4  /tmp         97.9219GB  1GB       passed
Result: Free disk space check passed for "oracle-hds-srv4:/tmp"
Check: Free disk space for "oracle-hds-srv3:/tmp"
  Path  Node Name        Mount point  Available  Required  Status
  ----  ---------------  -----------  ---------  --------  ------
  /tmp  oracle-hds-srv3  /tmp         97.3613GB  1GB       passed
Result: Free disk space check passed for "oracle-hds-srv3:/tmp"
Check: User existence for "grid"
  Node Name        Status  Comment
  ---------------  ------  ------------
  oracle-hds-srv2  passed  exists(5002)
  oracle-hds-srv1  passed  exists(5002)
  oracle-hds-srv4  passed  exists(5002)
  oracle-hds-srv3  passed  exists(5002)
Checking for multiple users with UID value 5002
Result: Check for multiple users with UID value 5002 passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
  Node Name        Status  Comment
  ---------------  ------  -------
  oracle-hds-srv2  passed  exists
  oracle-hds-srv1  passed  exists
  oracle-hds-srv4  passed  exists
  oracle-hds-srv3  passed  exists
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba"
  Node Name        Status  Comment
  ---------------  ------  -------
  oracle-hds-srv2  passed  exists
  oracle-hds-srv1  passed  exists
  oracle-hds-srv4  passed  exists
  oracle-hds-srv3  passed  exists
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
  Node Name        User Exists  Group Exists  User in Group  Primary  Status
  ---------------  -----------  ------------  -------------  -------  ------
  oracle-hds-srv2  yes          yes           yes            yes      passed
  oracle-hds-srv1  yes          yes           yes            yes      passed
  oracle-hds-srv4  yes          yes           yes            yes      passed
  oracle-hds-srv3  yes          yes           yes            yes      passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba"
  Node Name        User Exists  Group Exists  User in Group  Status
  ---------------  -----------  ------------  -------------  ------
  oracle-hds-srv2  yes          yes           yes            passed
  oracle-hds-srv1  yes          yes           yes            passed
  oracle-hds-srv4  yes          yes           yes            passed
  oracle-hds-srv3  yes          yes           yes            passed
Result: Membership check for user "grid" in group "dba" passed
Check: Run level
  Node Name        run level  Required  Status
  ---------------  ---------  --------  ------
  oracle-hds-srv2  5          3,5       passed
  oracle-hds-srv1  5          3,5       passed
  oracle-hds-srv4  5          3,5       passed
  oracle-hds-srv3  5          3,5       passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
  Node Name        Type  Available  Required  Status
  ---------------  ----  ---------  --------  ------
  oracle-hds-srv2  hard  262144     65536     passed
  oracle-hds-srv1  hard  262144     65536     passed
  oracle-hds-srv4  hard  262144     65536     passed
  oracle-hds-srv3  hard  262144     65536     passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
  Node Name        Type  Available  Required  Status
  ---------------  ----  ---------  --------  ------
  oracle-hds-srv2  soft  262144     1024      passed
  oracle-hds-srv1  soft  262144     1024      passed
  oracle-hds-srv4  soft  262144     1024      passed
  oracle-hds-srv3  soft  262144     1024      passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
  Node Name        Type  Available  Required  Status
  ---------------  ----  ---------  --------  ------
  oracle-hds-srv2  hard  65535      16384     passed
  oracle-hds-srv1  hard  65535      16384     passed
  oracle-hds-srv4  hard  65535      16384     passed
  oracle-hds-srv3  hard  65535      16384     passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
  Node Name        Type  Available  Required  Status
  ---------------  ----  ---------  --------  ------
  oracle-hds-srv2  soft  65535      2047      passed
  oracle-hds-srv1  soft  65535      2047      passed
  oracle-hds-srv4  soft  65535      2047      passed
  oracle-hds-srv3  soft  65535      2047      passed
Result: Soft limits check passed for "maximum user processes"
Check: System architecture
  Node Name        Available  Required  Status
  ---------------  ---------  --------  ------
  oracle-hds-srv2  x86_64     x86_64    passed
  oracle-hds-srv1  x86_64     x86_64    passed
  oracle-hds-srv4  x86_64     x86_64    passed
  oracle-hds-srv3  x86_64     x86_64    passed
Result: System architecture check passed
Check: Kernel version
  Node Name        Available                      Required  Status
  ---------------  -----------------------------  --------  ------
  oracle-hds-srv2  2.6.39-400.17.1.el6uek.x86_64  2.6.32    passed
  oracle-hds-srv1  2.6.39-400.17.1.el6uek.x86_64  2.6.32    passed
  oracle-hds-srv4  2.6.39-400.17.1.el6uek.x86_64  2.6.32    passed
  oracle-hds-srv3  2.6.39-400.17.1.el6uek.x86_64  2.6.32    passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  8192     8192        250       passed
  oracle-hds-srv1  8192     8192        250       passed
  oracle-hds-srv4  8192     8192        250       passed
  oracle-hds-srv3  8192     8192        250       passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  48000    48000       32000     passed
  oracle-hds-srv1  48000    48000       32000     passed
  oracle-hds-srv4  48000    48000       32000     passed
  oracle-hds-srv3  48000    48000       32000     passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  8192     8192        100       passed
  oracle-hds-srv1  8192     8192        100       passed
  oracle-hds-srv4  8192     8192        100       passed
  oracle-hds-srv3  8192     8192        100       passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  8192     8192        128       passed
  oracle-hds-srv1  8192     8192        128       passed
  oracle-hds-srv4  8192     8192        128       passed
  oracle-hds-srv3  8192     8192        128       passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
  Node Name        Current        Configured     Required    Status  Comment
  ---------------  -------------  -------------  ----------  ------  -------
  oracle-hds-srv2  4398046511104  4398046511104  4294967295  passed
  oracle-hds-srv1  4398046511104  4398046511104  4294967295  passed
  oracle-hds-srv4  4398046511104  4398046511104  4294967295  passed
  oracle-hds-srv3  4398046511104  4398046511104  4294967295  passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  4096     4096        4096      passed
  oracle-hds-srv1  4096     4096        4096      passed
  oracle-hds-srv4  4096     4096        4096      passed
  oracle-hds-srv3  4096     4096        4096      passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
  Node Name        Current     Configured  Required  Status  Comment
  ---------------  ----------  ----------  --------  ------  -------
  oracle-hds-srv2  1073741824  1073741824  2097152   passed
  oracle-hds-srv1  1073741824  1073741824  2097152   passed
  oracle-hds-srv4  1073741824  1073741824  2097152   passed
  oracle-hds-srv3  1073741824  1073741824  2097152   passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  6815744  6815744     6815744   passed
  oracle-hds-srv1  6815744  6815744     6815744   passed
  oracle-hds-srv4  6815744  6815744     6815744   passed
  oracle-hds-srv3  6815744  6815744     6815744   passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
  Node Name        Current                   Configured                Required                  Status  Comment
  ---------------  ------------------------  ------------------------  ------------------------  ------  -------
  oracle-hds-srv2  between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
  oracle-hds-srv1  between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
  oracle-hds-srv4  between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
  oracle-hds-srv3  between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  4194304  4194304     262144    passed
  oracle-hds-srv1  4194304  4194304     262144    passed
  oracle-hds-srv4  4194304  4194304     262144    passed
  oracle-hds-srv3  4194304  4194304     262144    passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
  Node Name        Current   Configured  Required  Status  Comment
  ---------------  --------  ----------  --------  ------  -------
  oracle-hds-srv2  16777216  16777216    4194304   passed
  oracle-hds-srv1  16777216  16777216    4194304   passed
  oracle-hds-srv4  16777216  16777216    4194304   passed
  oracle-hds-srv3  16777216  16777216    4194304   passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  4194304  4194304     262144    passed
  oracle-hds-srv1  4194304  4194304     262144    passed
  oracle-hds-srv4  4194304  4194304     262144    passed
  oracle-hds-srv3  4194304  4194304     262144    passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
  Node Name        Current   Configured  Required  Status  Comment
  ---------------  --------  ----------  --------  ------  -------
  oracle-hds-srv2  16777216  16777216    1048576   passed
  oracle-hds-srv1  16777216  16777216    1048576   passed
  oracle-hds-srv4  16777216  16777216    1048576   passed
  oracle-hds-srv3  16777216  16777216    1048576   passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
  Node Name        Current  Configured  Required  Status  Comment
  ---------------  -------  ----------  --------  ------  -------
  oracle-hds-srv2  2097152  2097152     1048576   passed
  oracle-hds-srv1  2097152  2097152     1048576   passed
  oracle-hds-srv4  2097152  2097152     1048576   passed
  oracle-hds-srv3  2097152  2097152     1048576   passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "binutils"
  Node Name        Available                      Required              Status
  ---------------  -----------------------------  --------------------  ------
  oracle-hds-srv2  binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2  passed
  oracle-hds-srv1  binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2  passed
  oracle-hds-srv4  binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2  passed
  oracle-hds-srv3  binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2  passed
Result: Package existence check passed for "binutils"
Check: Package existence for "compat-libcap1"
  Node Name        Available              Required             Status
  ---------------  ---------------------  -------------------  ------
  oracle-hds-srv2  compat-libcap1-1.10-1  compat-libcap1-1.10  passed
  oracle-hds-srv1  compat-libcap1-1.10-1  compat-libcap1-1.10  passed
  oracle-hds-srv4  compat-libcap1-1.10-1  compat-libcap1-1.10  passed
  oracle-hds-srv3  compat-libcap1-1.10-1  compat-libcap1-1.10  passed
Result: Package existence check passed for "compat-libcap1"
Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name        Available                                 Required                           Status
  ---------------  ----------------------------------------  ---------------------------------  ------
  oracle-hds-srv2  compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
  oracle-hds-srv1  compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
  oracle-hds-srv4  compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
  oracle-hds-srv3  compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "libgcc(x86_64)"
  Node Name        Available                   Required              Status
  ---------------  --------------------------  --------------------  ------
  oracle-hds-srv2  libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4  passed
  oracle-hds-srv1  libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4  passed
  oracle-hds-srv4  libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4  passed
  oracle-hds-srv3  libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4  passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
  Node Name        Available                      Required                 Status
  ---------------  -----------------------------  -----------------------  ------
  oracle-hds-srv2  libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4  passed
  oracle-hds-srv1  libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4  passed
  oracle-hds-srv4  libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4  passed
  oracle-hds-srv3  libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4  passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name        Available                            Required                       Status
  ---------------  -----------------------------------  -----------------------------  ------
  oracle-hds-srv2  libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed
  oracle-hds-srv1  libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed
  oracle-hds-srv4  libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed
  oracle-hds-srv3  libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
  Node Name        Available             Required       Status
  ---------------  --------------------  -------------  ------
  oracle-hds-srv2  sysstat-9.0.4-20.el6  sysstat-9.0.4  passed
  oracle-hds-srv1  sysstat-9.0.4-20.el6  sysstat-9.0.4  passed
  oracle-hds-srv4  sysstat-9.0.4-20.el6  sysstat-9.0.4  passed
  oracle-hds-srv3  sysstat-9.0.4-20.el6  sysstat-9.0.4  passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "gcc"
  Node Name        Available        Required   Status
  ---------------  ---------------  ---------  ------
  oracle-hds-srv2  gcc-4.4.7-3.el6  gcc-4.4.4  passed
  oracle-hds-srv1  gcc-4.4.7-3.el6  gcc-4.4.4  passed
  oracle-hds-srv4  gcc-4.4.7-3.el6  gcc-4.4.4  passed
  oracle-hds-srv3  gcc-4.4.7-3.el6  gcc-4.4.4  passed
Result: Package existence check passed for "gcc"
Check: Package existence for "gcc-c++"
  Node Name        Available            Required       Status
  ---------------  -------------------  -------------  ------
  oracle-hds-srv2  gcc-c++-4.4.7-3.el6  gcc-c++-4.4.4  passed
  oracle-hds-srv1  gcc-c++-4.4.7-3.el6  gcc-c++-4.4.4  passed
  oracle-hds-srv4  gcc-c++-4.4.7-3.el6  gcc-c++-4.4.4  passed
  oracle-hds-srv3  gcc-c++-4.4.7-3.el6  gcc-c++-4.4.4  passed
Result: Package existence check passed for "gcc-c++"
Check: Package existence for "ksh"
  Node Name        Available            Required      Status
  ---------------  -------------------  ------------  ------
  oracle-hds-srv2  ksh-20100621-19.el6  ksh-20100621  passed
  oracle-hds-srv1  ksh-20100621-19.el6  ksh-20100621  passed
  oracle-hds-srv4  ksh-20100621-19.el6  ksh-20100621  passed
  oracle-hds-srv3  ksh-20100621-19.el6  ksh-20100621  passed
Result: Package existence check passed for "ksh"
Check: Package existence for "make"
  Node Name        Available         Required   Status
  ---------------  ----------------  ---------  ------
  oracle-hds-srv2  make-3.81-20.el6  make-3.81  passed
  oracle-hds-srv1  make-3.81-20.el6  make-3.81  passed
  oracle-hds-srv4  make-3.81-20.el6  make-3.81  passed
  oracle-hds-srv3  make-3.81-20.el6  make-3.81  passed
Result: Package existence check passed for "make"
Check: Package existence for "glibc(x86_64)"
  Node Name        Available                     Required            Status
  ---------------  ----------------------------  ------------------  ------
  oracle-hds-srv2  glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12  passed
  oracle-hds-srv1  glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12  passed
  oracle-hds-srv4  glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12  passed
  oracle-hds-srv3  glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12  passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "glibc-devel(x86_64)"
  Node Name        Available                           Required                  Status
  ---------------  ----------------------------------  ------------------------  ------
  oracle-hds-srv2  glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed
  oracle-hds-srv1  glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed
  oracle-hds-srv4  glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed
  oracle-hds-srv3  glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "libaio(x86_64)"
  Node Name        Available                      Required                Status
  ---------------  -----------------------------  ----------------------  ------
  oracle-hds-srv2  libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107  passed
  oracle-hds-srv1  libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107  passed
  oracle-hds-srv4  libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107  passed
  oracle-hds-srv3  libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107  passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
  Node Name        Available                            Required                      Status
  ---------------  -----------------------------------  ----------------------------  ------
  oracle-hds-srv2  libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
  oracle-hds-srv1  libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
  oracle-hds-srv4  libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
  oracle-hds-srv3  libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Starting check for consistency of primary group of root user
  Node Name        Status
  ---------------  ------
  oracle-hds-srv2  passed
  oracle-hds-srv1  passed
  oracle-hds-srv4  passed
  oracle-hds-srv3  passed
Check for consistency of root user's primary group passed
Check: Time zone consistency
Result: Time zone consistency check passed
Verification of system requirement was successful.
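The component check above verifies system requirements only. For completeness, the corresponding pre-installation stage check (a standard cluvfy invocation, shown here for reference rather than as captured output) bundles the system, storage, and network verifications into one run:

  [grid@oracle-hds-srv1 ~]$ cluvfy stage -pre crsinst -n oracle-hds-srv1,oracle-hds-srv2,oracle-hds-srv3,oracle-hds-srv4 -verbose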
Appendix D: Key Linux Parameters
sysctl.conf
kernel.sem = 8192 48000 8192 8192
net.core.rmem_default = 4194304
net.core.rmem_max = 16777216
net.core.wmem_default = 4194304
net.core.wmem_max = 16777216
vm.nr_hugepages = 72100
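The four kernel.sem fields map, in order, to the semmsl, semmns, semopm, and semmni values validated by cluvfy in Appendix C, and vm.nr_hugepages = 72100 reserves roughly 140 GB for the SGA, assuming the default 2 MB huge page size on x86_64. A minimal way to apply and spot-check these settings after editing /etc/sysctl.conf (standard commands, shown for reference):

  sysctl -p                                             # reload /etc/sysctl.conf
  sysctl kernel.sem net.core.rmem_max vm.nr_hugepages   # confirm the values took effect
  grep HugePages /proc/meminfo                          # verify the huge page pool was allocated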
limits.conf
oracle soft nofile 4096
oracle hard nofile 65536
oracle soft nproc 32767
oracle hard nproc 32767
oracle soft stack 10240
oracle hard stack 32768
grid soft nofile 4096
grid hard nofile 65536
grid soft nproc 32767
grid hard nproc 32767
grid soft stack 10240
grid hard stack 32768
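To confirm these limits are active for a new session (pam_limits is enabled by default on Oracle Linux 6), log in as the grid or oracle user and query the shell limits directly:

  ulimit -Sn; ulimit -Hn   # soft/hard maximum open file descriptors
  ulimit -Su; ulimit -Hu   # soft/hard maximum user processes
  ulimit -Ss; ulimit -Hs   # soft/hard stack size (in KB)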
References
Cisco Unified Computing System:
http://www.cisco.com/en/US/netsol/ns944/index.html
Hitachi Virtual Storage Platform G1000:
http://www.hds.com/products/storage-systems/hitachi-virtual-storage-platform-g1000.html?WT.ac=us_mg_pro_hvspg1000
Cisco Nexus:
http://www.cisco.com/en/US/products/ps9441/Products_Sub_Category_Home.html
Cisco Nexus 5000 Series NX-OS Software Configuration Guide:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfigurationGuide.html