Technical white paper
Implementing HP Helion
OpenStack® on HP BladeSystem
A solution example for HP Helion OpenStack® private clouds
Table of contents
Executive summary ...................................................................................................................................................................... 2
Introduction to HP Helion OpenStack ....................................................................................................................................... 3
Core HP Helion OpenStack services ...................................................................................................................................... 3
HP Helion OpenStack additional services ............................................................................................................................ 5
HP Helion OpenStack deployment architecture ................................................................................................................. 5
HP Helion OpenStack networking ......................................................................................................................................... 8
HP Helion OpenStack configurations ...................................................................................................................................... 10
HP Helion OpenStack version 1.0.1 using HP BladeSystem .............................................................................................. 10
Network subnets and addresses ......................................................................................................................................... 13
Cabling ....................................................................................................................................................................................... 14
Initial 3PAR configuration ...................................................................................................................................................... 16
Initial SAN switch configuration ........................................................................................................................................... 18
HP OneView setup .................................................................................................................................................................. 18
Installing HP Helion OpenStack............................................................................................................................................ 26
Summary ....................................................................................................................................................................................... 35
Appendix A – Sample HP Helion OpenStack JSON configuration file ............................................................................... 35
Appendix B – Sample baremetal PowerShell script ............................................................................................................ 36
Appendix C – Sample baremetal.csv file ................................................................................................................................ 36
Appendix D – Sample JSON configuration file with HP 3PAR integration ....................................................................... 37
For more information ................................................................................................................................................................. 39
Executive summary
HP Helion OpenStack is an open and extensible scale-out cloud platform for building your own on-premise private clouds
with the option of participating in a hybrid cloud when business needs demand it. HP Helion OpenStack is a commercial-grade
product designed to deliver flexible open source cloud computing technology in a resilient, maintainable, and easy-to-install solution.
The product places special importance on enabling:
Deployment of a secure, resilient and manageable cloud
• Highly Available infrastructure services with active failover for important cloud controller services.
• HP’s Debian-based host Linux® running the OpenStack control plane services, reducing security risks by removing
unneeded modules.
• Build and manage your cloud using simplified guided installation and deployment through TripleO technology.
• Stay up-to-date with automated, live distribution of regularly tested updates where you still maintain full control over
your deployment.
• Inventory management of cloud infrastructure allowing visibility into what resources are free or in use as you deploy
secure services.
Flexibility to scale
• Ability to scale up and down as workload demands change.
• Openness enables you to move, deliver and integrate cloud services across public, private and managed/hosted
environments.
• Optimized for production workload support running on KVM (Kernel-based Virtual Machine) or VMware® vSphere
virtualization.
Global support for the enterprise cloud
• Foundation Care support is included, providing a choice of support levels including same day and 24x7 coverage.
• HP Support provides access to experts in HP’s Global Cloud Center of Excellence as a single source of support and
accountability. This support from HP also qualifies you for HP’s OpenStack Technology Indemnification Program.
• Access to local experts with significant expertise in OpenStack technology and HP Helion OpenStack to accelerate your
implementation.
Combining the secure, manageable and scalable characteristics of HP Helion OpenStack software with HP server, storage
and networking technologies further enhances the cloud solution. HP offers a range of server technologies on which HP
Helion OpenStack can be based allowing for the selection of the best server type and form factor for the planned cloud
workload. Customers can choose block storage from the HP 3PAR StoreServ storage array family for cloud applications that
require high-end storage characteristics or alternatively select the HP Helion supplied HP StoreVirtual VSA Software – a
virtual storage appliance solution running on HP servers.
This paper discusses a sample deployment of the HP Helion OpenStack v1.0.1 software and how this software architecture
can be realized using HP server, storage and networking technologies. Each private cloud solution using HP Helion
OpenStack needs to address specific business needs and the goal of this paper is to offer a detailed starting configuration
suggestion that can be evolved to meet those needs.
The configuration in this paper is designed for use with the fully supported HP Helion OpenStack edition targeted for
production cloud environments in an enterprise setting. HP also offers the HP Helion OpenStack Community edition which is
a free-to-license distribution often useful for proof of concept and testing scenarios. The example configuration in this
paper is not designed for use with HP Helion OpenStack Community edition.
Target audience: This paper is targeted at IT architects who are designing private cloud solutions. A working knowledge of
OpenStack based cloud software and HP server, networking and storage products is helpful.
DISCLAIMER OF WARRANTY
This document may contain the following HP or other software: XML, CLI statements, scripts, parameter files. These are
provided as a courtesy, free of charge, “AS-IS” by Hewlett-Packard Company (“HP”). HP shall have no obligation to maintain
or support this software. HP MAKES NO EXPRESS OR IMPLIED WARRANTY OF ANY KIND REGARDING THIS SOFTWARE
INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR NON-INFRINGEMENT.
HP SHALL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES, WHETHER
BASED ON CONTRACT, TORT OR ANY OTHER LEGAL THEORY, IN CONNECTION WITH OR ARISING OUT OF THE FURNISHING,
PERFORMANCE OR USE OF THIS SOFTWARE.
Introduction to HP Helion OpenStack
HP Helion OpenStack is OpenStack technology coupled with a version of Debian-based host Linux provided by HP. It is
designed to offer a number of value-added services that complement and enhance OpenStack technologies and can be
used to set up private cloud environments that scale from a few to 100 compute nodes.
An HP Helion OpenStack cloud consists of a number of co-operating services and those familiar with the OpenStack
architecture will immediately recognize many of these in the HP Helion OpenStack product. End user cloud needs are
satisfied by submitting requests to the HP Helion OpenStack cloud using a choice of web portal, command line utilities or
through a well-defined set of APIs. The cloud request is orchestrated by these services, each performing a specific type
of action; collectively these actions fulfill the request.
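For example, the same request can be made through the command-line clients shipped with this OpenStack release. The sketch below assumes illustrative endpoint, credential, image and network values; none of them belong to the configuration described in this paper:

export OS_AUTH_URL=https://helion-cloud.example.com:5000/v2.0
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=changeme
# Ask Nova to boot an instance; Glance supplies the image and Neutron the network
nova boot --flavor m1.small --image ubuntu-14.04-server --nic net-id=<tenant-network-uuid> my-first-instance
nova list    # confirm the request was fulfilled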
Core HP Helion OpenStack services
The following table briefly describes the core HP Helion OpenStack services and provides an overview of each role. These
services will be immediately recognizable and familiar to those who have already explored or implemented an OpenStack
environment.
Table 1. OpenStack services
Service
Description
Identity Operations
(Keystone)
Based on OpenStack Keystone, the HP Helion OpenStack Identity service provides one-stop
authentication for the HP Helion OpenStack private cloud.
The Identity service enables you to create and configure users, specify user roles and credentials, and
issue security tokens for users. The Identity service then uses this information to validate that incoming
requests are being made by the user who claims to be making the call.
Compute Operations
(Nova)
HP Compute Operation services, based on OpenStack Nova, provides a way to instantiate virtual servers
on assigned virtual machine compute hosts. Some of the tasks you can perform as a user are creating
and working with virtual machines, attaching storage volumes, working with network security groups and
key pairs, and associating floating IP addresses.
As an administrator, you can also configure server flavors, modify quotas, enable and disable services,
and work with deployed virtual machines.
Network Operations
(Neutron)
HP Network Operation services, based on OpenStack Neutron, provides network connectivity and IP
addressing for compute instances using a software defined networking paradigm.
Some of the tasks you can perform as a user are configuring networks and routers, adding and removing
subnets, creating a router, associating floating IP addresses, configuring network security groups, and
working with load balancers and firewalls.
As an administrator, you can also create an external network, and work with DHCP agents and Level-3
networking agents.
Image Operations
(Glance)
HP Image Operations services, based on OpenStack Glance, helps manage virtual machine software
images. Glance allows for the querying and updating of metadata associated with those images in
addition to the retrieval of the actual image data for use on compute hosts should new instances that are
being instantiated require it.
As a user, you can create, modify and delete your own private images. As an administrator, you can also
create, modify and delete public images that are made available to all tenants in addition to their private
set of images.
Volume operations
(Cinder)
HP Volume Operations services (or Block Storage), based on OpenStack Cinder, helps you perform
various tasks with block storage volumes. Cinder storage volume operations include creating a volume,
creating volume snapshots, configuring a volume and attaching/detaching volumes from instances.
As an administrator, you can also modify project quotas, enable services, create volume types and
associate quality of service metrics with each of the volume types.
Object Operations
(Swift)
HP Object Storage service, based on OpenStack Swift, provides you with a way to store and retrieve
object data in your HP Helion OpenStack private cloud. You can configure storage containers, upload and
download objects stored in those containers, and delete objects when they are no longer needed.
Orchestration (Heat)
HP Orchestration service, based on OpenStack Heat, enables you to design and coordinate multiple
composite cloud applications using templates. The definition of a composite application is encompassed
as a stack which includes resource definitions for instances, networks and storage in addition to providing
information on the required software configuration actions to perform against the deployed instances.
As a user, you can create stacks, suspend and resume stacks, view information on stacks, view event
information from stack actions, and work with stack templates and infrastructure resources (such as
servers, floating IPs, volumes and security groups).
Ironic
HP Helion OpenStack software includes the capability to deploy physical “baremetal” servers in addition
to its ability to create new instances within a virtualized server environment. Ironic is the OpenStack
component that enables physical server deployment and it allows for physical servers with no operating
software installed to be bootstrapped and provisioned with software images obtained from Glance.
Ironic features are used during the HP Helion OpenStack installation process to deploy the cloud software
on to servers. Use of Ironic outside of the core cloud installation process is currently not supported.
TripleO
TripleO provides cloud bootstrap and installation services for deploying HP Helion OpenStack on to target
hardware configurations. TripleO leverages Heat for defining the deployment layout and customization
requirements for the target Cloud and uses Ironic services for deploying cloud control software to
physical servers using HP supplied software images.
Figure 1. OpenStack Services and their interactions (Sourced from http://docs.openstack.org/training-guides/content/associate-getting-started.html)
HP Helion OpenStack additional services
HP has augmented the core set of OpenStack services in Helion with additional services and functionality that
enhance the manageability and resiliency of the cloud. These additional services are briefly described in the table below.
Table 2. HP Helion OpenStack additional services
Service
Description
Sherpa
Sherpa is the HP Helion OpenStack content distribution catalog service that provides a mechanism to
download and install additional product content and updates for a deployed HP Helion OpenStack
configuration.
EON
The HP Helion EON service interacts with VMware vCenter to collect information about the available set of
vSphere datacenters and clusters. This information is then used to configure VMware clusters as compute
targets for HP Helion OpenStack.
Sirius
HP Helion OpenStack Sirius service assists the cloud administrator in the configuration of storage services
such as Cinder and Swift. It offers a dashboard graphical user interface and a REST based web service for
storage device management.
Centralized Logging and ElasticSearch
HP Helion OpenStack includes a centralized logging facility enabling an administrator to review logs in a
single place rather than needing to connect to each cloud infrastructure server in turn to examine local log
files. Tools are provided that simplify the analysis of large amounts of log file data making it easier for the
administrator to pinpoint issues more quickly.
Monitoring with Icinga
Monitoring of the HP Helion OpenStack cloud is important for maintaining availability and robustness of
services. Two types of monitoring are available:
• Watching for problems: ensures that all services are up and running. Knowing quickly when a service fails
is important so that those failures can be addressed leading to improved cloud availability.
• Watching usage trends: involves monitoring resource usage over time in order to make informed
decisions about potential bottlenecks and when upgrades are needed to improve cloud performance and
capacity.
HP Helion OpenStack includes support for both the monitoring of problems and the tracking of usage
information through Icinga.
HP Helion OpenStack deployment architecture
To simplify the deployment process, HP Helion OpenStack ships a number of pre-integrated software images that are
installed onto servers assigned for cloud infrastructure. These images are automatically deployed as part of the HP Helion
OpenStack initial cloud installation process that is carried out using TripleO. Additional services can be subsequently added
to the cloud over time using TripleO allowing for increased cloud scale and functionality as needs arise.
TripleO uses the concept of deploying a “starter” OpenStack instance that is then used to install and configure the end-user
accessible HP Helion OpenStack cloud infrastructure. This starter OpenStack instance is termed the undercloud and its role
is to provision the production cloud which is termed the overcloud. The undercloud is only used to administer the overcloud
and production end-user workloads are only run on the overcloud and never the undercloud.
To initiate the HP Helion OpenStack installation process, a “Seed” virtual machine is provided that is deployed on a KVM
capable management host. Currently Ubuntu 13.10 and 14.04LTS are the supported operating system versions that are
certified for use as the KVM virtualization host for Seed VM.
The Seed VM is installed and booted and is then used to deploy the undercloud controller instance. A description of the
available hardware for the cloud and the desired target cloud configuration is provided to the undercloud which then
deploys the overcloud controllers and appropriate set of overcloud services (such as Swift, VSA Cinder nodes or KVM
Compute hosts). This process is illustrated in Figure 2.
Figure 2. HP Helion OpenStack deployment model
High Availability is a key design point for HP Helion OpenStack and the product specifically includes replicated copies of
important services and data that together enhance overall cloud control plane resiliency. Three separate overcloud
controllers are deployed with each installation and these controllers are automatically configured to enable the replication
of services and service data essential for supporting resilient cloud operations.
Further details for the overcloud, undercloud and the Seed VM are discussed in the following sections.
The overcloud
The overcloud is the “production” cloud that end users interact with to obtain cloud services. During the installation phase,
the overcloud is implemented on a number of pre-assigned servers that at a minimum will be composed of:
• Three overcloud controllers (one of which is assigned a special role as the Management Controller).
• A starter Swift cluster implemented on two servers for availability. This Swift cluster is primarily used by Glance for
storing images and instance snapshots although other Swift object storage uses are possible.
• One or more KVM compute servers or a set of pre-existing VMware vSphere clusters used as the compute host targets
for instances.
Based on the customer’s specific needs, the overcloud may also include:
• An optional Swift Scale-Out cluster of between two and twelve servers that is used for large-scale production cloud
Object storage use (Scale-Out Swift extends the Starter Swift Cluster enabling greater capacity while maintaining any
initial data present in Starter Swift).
• An optional VSA based Cinder block storage capability. One or more VSA clusters can be implemented with each cluster
having a recommended number of servers of between one (no High Availability) and three (High Availability is enabled).
Individual VSA clusters with more than three constituent servers (and up to a maximum of fifteen) are possible but
require careful design to ensure appropriate performance.
• An optional HP 3PAR storage array that can be used to provide high performance Cinder block storage.
Figure 3. Illustration of the overcloud configuration
The overcloud controllers run the core components of the OpenStack cloud including Nova, Keystone, Glance, Cinder, Heat,
Neutron and Horizon.
To enable High Availability, three instances of the overcloud controller are run on three separate physical servers. Software
clustering and replication technologies are used with the database, message queuing and web proxy to ensure that, should
one overcloud controller fail, another active overcloud controller can take over its workload. This Active-Active cluster
design allows the cloud to remain running and for cloud users to continue to have access to cloud control functionality even
in the face of an overcloud server failure.
A similar approach is used with the Starter Swift servers where a minimum of two servers is required to ensure High
Availability. The Swift software makes sure that data is replicated appropriately with redundant copies of the Swift object
data spread over both servers.
The Starter Swift servers are deployed within the overcloud and provide the backing storage for Glance images and instance
snapshots as well as being a target for a limited set of Cinder volume backups and a repository for cloud software updates.
The Starter Swift cluster is mandatory because Glance is a required component for any HP Helion OpenStack cloud to
operate. All of these overcloud components are automatically installed as part of the TripleO deployment process.
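To illustrate how Glance and the Starter Swift cluster work together once the overcloud is running, an administrator typically loads operating system images with the Glance client; the image name and file shown here are examples only, not part of this configuration:

glance image-create --name "ubuntu-14.04-server" --disk-format qcow2 --container-format bare --is-public True --file ubuntu-14.04-server-cloudimg-amd64-disk1.img
glance image-list    # the uploaded image is stored in the Starter Swift cluster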
The remaining required component for the overcloud when using KVM virtualization is the compute server environment. HP
Helion OpenStack includes support for deploying cloud end user instances to either one or more KVM based virtualization
hosts that run on HP’s host Linux or VMware vSphere clusters. For KVM compute nodes, the TripleO based installation
process will deploy the appropriate software to the target compute servers and configure them for use within the cloud. For
VMware vSphere compute environments, a vSphere cluster must already have been provisioned outside of the TripleO
installation process and preconfigured to meet the pre-requisites for operation with HP Helion OpenStack.
A separate set of Swift Proxy and Swift Object servers can be installed for those deployments that have a need for a more
comprehensive object storage capability than that provided by the Starter Swift servers. These additional Swift servers are
not set up as part of the initial installation process but can be configured through TripleO after the core overcloud has been
set up.
The final component of the overcloud is the optional VSA block storage server that offers Cinder support to the cloud. HP
Helion OpenStack supports a number of Cinder block storage server types that include StoreVirtual VSA and 3PAR storage
arrays. For cloud environments that require high-end storage capabilities, the 3PAR storage array can be considered as an
alternative to the StoreVirtual VSA solution. If VSA is chosen as a Cinder provider, then a group of servers each with their
own local disks can be pooled together using the VSA software to offer protected storage to end user instances.
The undercloud
The undercloud is implemented on a physical server and is responsible for the initial deployment and subsequent
configuration and updating of the overcloud. The undercloud itself uses OpenStack technologies for the deployment of the
overcloud but it is not designed for access or use by the general cloud end user population. Undercloud access is restricted
to cloud administrators.
The undercloud runs on a single server and does not implement High Availability clustering as is the case for the overcloud
Controller nodes. Once the cloud has been created, the undercloud system is then used for a number of purposes including
providing DHCP and network booting services for the overcloud servers and running the centralized logging and monitoring
software for the cloud.
The Seed KVM host
The Seed KVM host runs the HP Helion OpenStack supplied Seed software that is used to bootstrap the HP Helion
OpenStack cloud environment. The Seed software is loaded into the KVM capable Seed virtualization host and a Seed VM is
created by running scripts that are supplied with the Helion software.
The Seed KVM host is used during the initial bootstrapping process of the HP Helion OpenStack cloud. Once the cloud has
been created, the Seed VM is then used for providing DHCP and network booting services for the undercloud and also
includes scripts that enable the backup and restore of the HP Helion OpenStack control plane servers. Because the server is
used for storing backup images, the minimum storage requirement is 1TB.
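Before running the Helion seed scripts, it is worth confirming that the Ubuntu host actually provides KVM acceleration, libvirt and sufficient disk space. A quick sketch of that check (package names are those used on Ubuntu 14.04; the Helion installation documentation remains the authoritative list of prerequisites):

sudo apt-get install -y qemu-kvm libvirt-bin cpu-checker
sudo kvm-ok             # should report "KVM acceleration can be used"
virsh list --all        # the Seed VM will appear here once the seed scripts have created it
df -h /                 # confirm at least 1TB is available for backup images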
HP Helion OpenStack networking
HP Helion OpenStack provides instances with networking support that allows for both public communications through an
external network as well as more restrictive networking through the definition of tenant networks. Cloud users with
appropriate authority and available quota can create, modify and delete networks and routers on the fly through the use of
HP Helion OpenStack software defined networking capabilities.
In addition to the networking used for instance communication, HP Helion OpenStack also requires networks that connect
together the infrastructure components used for the cloud. A common Management network is defined for communications
between the cloud’s Seed VM, overcloud and undercloud controller nodes, VSA and 3PAR Cinder block storage providers,
compute nodes and Swift object storage servers.
Common networks used within an HP Helion OpenStack KVM Cloud are shown in Figure 4 and a brief explanation of the role
of each of these key networks is provided below.
Table 3. HP Helion OpenStack networking
Network
Description
External
The External network is used to connect cloud instances to an external public network such as a company’s
intranet or the public Internet in the case of a public cloud provider. The external network has a predefined range
of Floating IPs which are assigned to individual instances to enable communications between the instance and the
assigned corporate intranet/Internet.
Management
The management network is the backbone used for the majority of HP Helion OpenStack management
communications. Control messages are exchanged between the overcloud, undercloud, Seed VM, compute hosts,
Swift and Cinder backends through this network. In addition to the control flows, the management network is also
used to transport Swift and iSCSI based Cinder block storage traffic between servers. Also implemented on this
network are VxLAN tunnels that are used for enabling tenant networking for the instances.
The HP Helion OpenStack installation processes use Ironic to provision baremetal servers. Ironic uses a network
boot strategy with the PXE protocol to initiate the deployment process for new physical servers. The PXE boot
and subsequent TFTP traffic is carried over the management network.
The management network is a key network in the HP Helion OpenStack configuration and should use at least a
10Gb network interface card for physical connectivity. Each server targeted to the undercloud, overcloud, VSA,
Swift and KVM compute roles should have PXE enabled on this interface so they can be deployed via Ironic and
TripleO.
IPMI
The IPMI network is used to connect the IPMI interfaces on the servers that are assigned for use with
implementing the cloud. IPMI is a protocol that enables the control of servers over the network performing such
activities as powering on and powering off servers. For HP ProLiant servers, the IPMI network connects to the HP
iLO management device port for the server. This network is used by Ironic to control the state of the servers
during baremetal deployments.
Note: The IPMI network is designed to be a separate network from the Management network, accessible from
cloud infrastructure servers via an IP layer network router (see Figure 4). This approach allows for
access to the HP Helion OpenStack main Management network to be restricted from the IPMI network especially if
filtering rules are available on the IP router being used.
Service
The service network is used with the HP Helion Development Platform, enabling communication between the HP
Helion Development Platform components and the HP Helion OpenStack services. This communication is
restricted to the HP Helion Development Platform and access to the network is protected via Keystone
credentials. This network is optional and is not required if the cloud deployment is not using the HP Helion
Development Platform.
Fibre Channel
The fibre channel network is used for communications between the servers that make up the HP Helion
OpenStack cloud and the 3PAR storage array(s) that participate in the cloud. This network is a Storage Area
Network (SAN) and is dedicated for performing storage input/output to and from 3PAR storage arrays.
The SAN is used for Cinder block storage operations when the 3PAR Cinder plugin is selected and the fibre
channel communications option is enabled (the alternative transport option being iSCSI). HP Helion OpenStack
also supports boot from SAN for the cloud infrastructure and if this configuration is used then this SAN is also
used for that purpose.
SAN switches are required when using HP Helion OpenStack in a SAN environment. “Flat SAN” configurations,
where BladeSystem Virtual Connect modules are directly connected to 3PAR storage arrays without the
intermediary SAN switches, are not supported.
HP Helion OpenStack 1.0.1 requires that a single path is presented to the server for each LUN in the 3PAR storage
array. This requires that appropriate zoning and VLUN presentation is configured in the SAN switches and 3PAR
arrays.
Figure 4. HP Helion OpenStack Networking for KVM
Although Figure 4 illustrates several logical networks being connected to each of the cloud components, the actual physical
implementation uses a single networking port with a number of the HP Helion OpenStack logical networks being defined as
VLANs. The common approach when using the single NIC port configuration is for the management network to be defined
as the untagged network and for the external and service networks to be VLANs that flow over the same physical port.
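To illustrate the tagging model, the commands below show how a single port carrying the untagged management network plus tagged external and service VLANs might be addressed on a Linux host. This is only a sketch: the interface name and host addresses are illustrative, the VLAN IDs and subnets are the example values used later in this paper (Table 7), and the Helion installer performs the equivalent configuration automatically on the deployed nodes.

ip addr add 172.1.1.50/19 dev eth0                     # management network, untagged
ip link add link eth0 name eth0.536 type vlan id 536   # external network, VLAN 536
ip link add link eth0 name eth0.736 type vlan id 736   # service network, VLAN 736
ip addr add 10.136.96.50/19 dev eth0.536
ip addr add 172.2.1.50/19 dev eth0.736
ip link set eth0.536 up
ip link set eth0.736 up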
HP Helion OpenStack configurations
HP Helion OpenStack software can be used to implement a variety of different cloud configurations so that a customer’s
individual needs for their cloud can be met. The requirement for different compute, networking, and storage scale can be
addressed by combining the appropriate HP Helion OpenStack software configuration with the corresponding hardware that
supports that target configuration.
HP offers a wide range of servers that can be used with HP Helion OpenStack software and so for any given software
configuration there may be several potential combinations of server hardware, networking and storage that can be used to
support that solution. Which type of server and storage infrastructure is used for an HP Helion OpenStack solution is likely
influenced by factors such as a customer's preference for a particular server family (DL rack servers, BL blade servers, SL
space-optimized or others) or whether the customer has standardized on 3PAR arrays or StoreVirtual VSA solutions for storage.
The following section provides configuration details for a cloud solution implemented using HP Helion OpenStack software
on HP BladeSystem. It is intended to be used as a complete, repeatable example implementation for an HP Helion
OpenStack solution but the expectation is that it may need to be adjusted to ensure it meets the specific needs of the target
cloud being built.
In the example configuration that follows, the Top of Rack networking infrastructure is not specifically called out as part of
the configuration although that is clearly an important component for any cloud solution. In many cases, datacenters have
already standardized on a specific networking solution and have that networking infrastructure already in place. If that is the
case, then integrate these configurations into that pre-existing network environment.
If networking infrastructure is not available, consider using HP Networking’s 5900AF family of switches for use as Top of
Rack devices. Design the infrastructure with two or more of these switches and include IRF so that networking availability is
enhanced.
HP Helion OpenStack version 1.0.1 using HP BladeSystem
This implementation uses the HP BladeSystem line of servers to implement a general purpose starter HP Helion OpenStack
cloud configuration. The ProLiant BladeSystem line of servers allows for increased levels of server density which is often an
important factor when building large scale cloud implementations consisting of very large numbers of servers. Use of HP
OneView management software significantly simplifies the management of the BladeSystem server environment.
Storage is provided to the configuration using an HP 3PAR StoreServ 7400 storage array which is connected to the
BladeSystem enclosure using a SAN switch and the Virtual Connect modules within the c7000 enclosure. The SAN switch is
required and “flat SAN” architectures are not supported. HP 3PAR storage arrays can be configured using disks with different
performance profiles (for example, SSD, FC or Nearline drives) so that volumes with different quality of service can be
delivered to cloud end users and infrastructure.
The BL460c Gen8 server is used for the compute nodes and the cloud control infrastructure. Sixteen BL460c Gen8 half-height blades are housed in a single c7000 chassis, meaning that a complete HP Helion OpenStack cloud with nine compute
blades can be delivered all within a single enclosure. This configuration also uses HP FlexFabric adapters in the blades along
with FlexFabric Virtual Connect modules in the c7000 enclosure. No local drives are configured in the BL460c Gen8 blades
as the systems will boot from SAN volumes provided by the 3PAR.
A “Starter Swift” option with two BL460c Gen8 servers is included in the configuration. This Swift capacity is primarily used
for Glance image storage, instance snapshot data and potentially as the target for limited sized Cinder backups. Access to
the Swift store can also be enabled for cloud users and applications so long as this group of users’ storage needs do not
exceed the available Starter Swift capacity. Storage for the Swift cluster is provided by the 3PAR and benefits from the extra
resiliency that the 3PAR array provides.
No StoreVirtual VSA cluster is included in this configuration as the 3PAR storage array is used as the exclusive target for
Cinder block storage.
Although BL460c Gen8 servers have been shown as the option for the additional compute nodes, these can be replaced by
alternate server types that are supported by HP Helion OpenStack if required. The currently supported compute nodes can
be found at http://docs.hpcloud.com/helion/openstack/support-matrix/.
The following table lists the components used to develop and test this configuration.
Table 4. Components used for the HP Helion OpenStack on a BladeSystem
Server Role              Quantity   Server Model   Processor                        Memory   Storage                          Network
Seed KVM Host            1          BL460c Gen8    2 x 6-core 2.6GHz Intel® Xeon®   32GB     Boot from SAN: 1TB LUN in 3PAR   10Gb 554FLB FlexFabric LOM
Undercloud Controller    1          BL460c Gen8    2 x 8-core 2.6GHz Intel Xeon     64GB     Boot from SAN: 2TB LUN in 3PAR   10Gb 554FLB FlexFabric LOM
Overcloud Controller     3          BL460c Gen8    2 x 8-core 2.6GHz Intel Xeon     64GB     Boot from SAN: 2TB LUN in 3PAR   10Gb 554FLB FlexFabric LOM
Starter Swift            2          BL460c Gen8    2 x 8-core 2.6GHz Intel Xeon     64GB     Boot from SAN: 2TB LUN in 3PAR   10Gb 554FLB FlexFabric LOM
Initial KVM Compute      1          BL460c Gen8    2 x 8-core 2.6GHz Intel Xeon     64GB     Boot from SAN: 2TB LUN in 3PAR   10Gb 554FLB FlexFabric LOM
Additional KVM Compute   8          BL460c Gen8    2 x 8-core 2.6GHz Intel Xeon     128GB    Boot from SAN: 2TB LUN in 3PAR   10Gb 554FLB FlexFabric LOM
In addition to the BL460c Gen8 Servers listed above, the following enclosure, SAN switches and 3PAR storage array are also
used.
Table 5. Enclosure, SAN and 3PAR Storage for HP Helion OpenStack on a BladeSystem
Role
Configuration
BladeSystem Enclosure
1 HP c7000 Platinum Enclosure. Up to 6 more fully loaded enclosures could be added to the
environment, depending on KVM compute blade count requirements.
Each enclosure includes the following:
• Two Onboard Administrator modules (dual OA for availability)
• Two Virtual Connect FlexFabric 10Gb/24-port modules
(If the target compute nodes are expected to generate combined network and SAN traffic
exceeding the capacity of the two 10Gb ports on each server then consider using the Virtual
Connect FlexFabric-20/40 F8 module and associated higher speed FlexFabric LOMs instead.)
• Appropriate enclosure power and fans for the target datacenter with a design to enable
increased availability through power and fan redundancy
SAN Switches
One HP Brocade 8/24 8Gb 24 port AM868B SAN Switch
Storage Array
HP 3PAR StoreServ 7400 storage array with two controller nodes with each controller populated
with fibre channel adapters for connectivity to the SAN switches. The implemented disk
configuration had:
• 96 x 300GB 15K RPM FC drives
• Four M6710 24 drive enclosures
A mix of FC, SSD, and/or NL drives can be used to enable multiple levels of Cinder block storage
quality of service. Adjust the drive counts and types as needed.
The first 8 blades are reserved for the Helion control plane while the rest are available for compute. Except for the Seed
KVM Host, the control plane nodes are assigned automatically by the Helion installation script. A diagram of this hardware is
shown below in Figure 5. The servers are shown in the positions used for this example but, as mentioned, the server roles
may be installed on any of the blades.
Figure 5. Example of a general purpose starter cloud using BL460c Gen8 blades
The capacity for this higher-density HP Helion OpenStack configuration example is characterized in the table below.
Table 6. Capacity for the higher density HP Helion OpenStack on a BladeSystem
Component
Capacity
Compute Servers
Initial: Compute server has 16 cores, 64GB of memory and 2TB of RAID protected storage delivered by the 3PAR
storage array. One 10Gb FlexFabric converged network adapter is used for networking and the other for SAN
storage traffic.
Additional: Each additional compute server has 16 cores, 128GB of memory and 2TB of RAID protected storage
delivered by the 3PAR storage array. One 10Gb FlexFabric converged network adapter is used for networking and
the other for SAN storage traffic.
Starter Swift Cluster
Total of 1.04TB of Swift Object storage available across the cluster:
• Each server with 1.95TB of RAID protected data storage after accounting for operating system overhead
• Two servers supply 3.9TB of protected RAID data storage
• Swift by default maintains three copies of all stored objects for availability; using a maximum of 80% of total
data storage for objects gives a Swift object storage capacity of 1.04TB (3.9TB ÷ 3 x 0.8)
The starter Swift cluster is primarily used to store Glance images, instance snapshots and a limited number of
Cinder volume backups, so the combined intended size of these must not exceed 1.04TB. Access to Starter Swift is
also possible for cloud end users and applications but the total Swift capacity available must be taken into account
if this is enabled.
If a larger Swift capacity is required than Starter Swift can provide then consider deploying a Scale-Out Swift
configuration.
3PAR Cinder Storage
Cinder block storage is made available directly from the 3PAR storage array. The amount of storage available will
depend on the number of Compute servers in the configuration (since each compute server is allocated 2TB of
boot storage from the 3PAR) and the settings for the RAID levels within the 3PAR. We are using a 3+1 RAID-5
configuration. The LUNs are thin-provisioned.
In the configuration shown, the seven control plane servers will require 13TB of thin-provisioned protected FC
based storage for their boot drives and the nine compute servers will consume 18TB for a total of 31TB. The
3PAR has a useable capacity of 26.1TB after formatting and RAID overhead, so the 3PAR is approximately 20%
oversubscribed. This did not affect the implementation or performance of the environment, although a
production system should have more total disk space. This could be accomplished by using more and/or larger
drives. A mix of FC and large Nearline disks should be considered for larger volume requirements.
Network subnets and addresses
Network subnets
As discussed above, HP Helion OpenStack uses four IP networks – External, Management, IPMI and Service. In this reference
implementation, the following subnets and VLANs were used.
Table 7. Subnets used for HP Helion OpenStack
Name            Subnet          Subnet Mask     Gateway Address   VLAN ID
External        10.136.96.0     255.255.224.0   10.136.96.1       536
IPMI            192.168.128.0   255.255.224.0   192.168.146.12    436
Management      172.1.1.0       255.255.224.0   172.1.1.12        636
Service         172.2.1.0       255.255.224.0   172.2.1.12        736
Tenant subnet   192.1.0.0       255.255.224.0   192.1.0.1         N/A (tenant networks dynamically assigned through VxLAN)
These are only meant to be used as example address ranges. The External network will almost certainly need to be changed
while the others can be used as-is or modified to meet your needs and standards.
IP addresses
The following table lists the IP addresses given to the various components of the environment. They
match the expectations of the JSON configuration file tripleo/configs/kvm-custom-ips.json shown in Appendix A. This JSON file
is edited according to the IP addresses of the network environment. The IP addresses will need to be adjusted based on any
changes that are made to the subnets, as well as to conform to your specific environment.
Table 8. IP addresses used for HP Helion OpenStack
Component                                 Network      IP Address            Comment
Enclosure On-Board Administrator 1        IPMI         192.168.146.229
Enclosure On-Board Administrator 2        IPMI         192.168.146.230
BL460c iLO – Bay 1-16                     IPMI         192.168.146.231-246   16 Blades
Virtual Connect FlexFabric module Bay 1   IPMI         192.168.146.247       Ethernet Network VC module
Virtual Connect FlexFabric module Bay 2   IPMI         192.168.146.248       SAN Network VC module
Router - External                         External     10.136.96.1           Interface on the external net
Router - IPMI                             IPMI         192.168.146.12        Interface on the IPMI net
Router - Management                       Management   172.1.1.12            Interface on the Management net
3PAR Node                                 Management   172.1.1.228           3PAR Node IP
SAN Switch                                Management   172.1.1.226
Seed KVM Host                             Management   172.1.1.21            Ubuntu 14.04 KVM host
Seed VM                                   Management   172.1.1.22            HP Helion Seed VM running on the Seed KVM Host
Seed VM Range                             Management   172.1.1.23-40         Various seed services use addresses from this range
Undercloud Range                          Management   172.1.1.64-224        Undercloud servers and services are automatically assigned IPs from this range
Floating IP Range                         External     10.136.107.172-191    Range of IPs available for tenant VMs to access the external network
Cabling
The cabling of the environment is shown in Figure 6, below. The Virtual Connect FlexFabric 10Gb/24-port module in
interconnect bay 1 is used for Ethernet network connectivity, while the Virtual Connect FlexFabric 10Gb/24-port module in
interconnect bay 2 is used for SAN connectivity. A pair of 10Gb connections were made from interconnect bay 1 ports X5
and X6 to the HP 5920 Top of Rack switch. Similarly, a pair of 8Gb connections were made from interconnect bay 2 ports X1
and X2 to the HP Brocade 8/24 SAN switch. While it is supported to use a single Virtual Connect FlexFabric module and to
connect both the Ethernet and SAN networks to it, splitting them across a pair of Virtual Connect FlexFabric modules allows
a full 10Gb of Ethernet bandwidth and a full 8Gb of Fibre Channel bandwidth for the HP Helion cloud. If both Ethernet and
SAN networks are combined on a single Virtual Connect module, the Fibre Channel bandwidth must be reduced to 4Gb and
the Ethernet to 6Gb to stay within the 10Gb maximum throughput supported by these modules. If the higher speed Virtual
Connect FlexFabric-20/40 F8 modules are used, then a total of 20Gb is available to the combined networks.
From the HP Brocade 8/24 SAN switch, four 8Gb connections were made, two to each 3PAR controller node. Be sure to use
a pair of controller node partner port pairs for the connection. In this configuration partner pairs 0:1:2/1:1:2 and 0:2:2/1:2:2
were connected. Using partner port pairs ensures that the connection can properly fail over between nodes if one of the
3PAR controllers fails.
Figure 6. Cabling of the environment
Initial 3PAR configuration
Starting the Web Service API Server
The physical installation and on-site initialization of the 3PAR array are beyond the scope of this paper; it is assumed that the
3PAR is initialized with all drives installed and properly cabled, that the array is connected to the management network, and
that the user has super-user credentials. The default “3paradm” user will be used in this paper although another user can be
configured if desired.
While most of the 3PAR configuration can be done via the HP 3PAR Management Console Java GUI, there is a step that can
only be done via the Command Line Interface – enabling the Web Services API server. The easiest way to access the CLI is by
simply using SSH to connect to the 3PAR IP address or hostname. Any of the many SSH clients will work – PuTTY, OpenSSH,
or any Linux variant with SSH installed will suffice.
SSH into the array, logging in with a user that has super-user authority. Check the current status of the Web Services API
server by executing:
showwsapi
-Service- -State-- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version-
Disabled  Inactive Disabled     8008      Enabled       8080       1.3.1
In the example output above, the Service State is Inactive, HTTP is Disabled, HTTPS is Enabled, and the HTTPS port is 8080.
This is the default configuration for the Web Service API Server. Using HTTPS is highly recommended, although it can be
disabled and HTTP enabled with the “setwsapi“ command, if desired. Start the service with:
startwsapi
The Web Services API Server will start shortly.
After 20-30 seconds the server should be started.
showwsapi
-Service- -State-- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version-
Enabled   Active   Disabled     8008      Enabled       8080       1.3.1
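If HTTP rather than HTTPS transport is needed, the protocol states shown above can be changed with setwsapi and then re-checked with showwsapi. This is only a sketch: the option names are taken from the HP 3PAR CLI reference and should be confirmed against the CLI help on your array; if the array reports that the service must be stopped first, stop it with stopwsapi and restart it afterwards with startwsapi.

setwsapi -http enable
showwsapi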
Configuring the 3PAR host ports
After starting the Web Service API Server, open up the HP 3PAR Management Console and, using the 3PAR hostname or IP
address and the super-user credentials, login to the array. Click on the Systems tab on the bottom left panel, and then
expand the array and ports in the top left panel.
Click on the “Free” branch to list the ports that either aren’t configured or are configured but don’t have any active
connections. For any ports that were used to connect to the SAN switch, right-click the port, select “Configure…” and verify
the Connection Mode is set to Host and the Connection Type is Point. If needed, update the settings as shown in Figure 7.
3PAR Port Settings. Changing the Connection Mode requires the system to set the port offline. Click OK to save the new
settings.
Figure 7. 3PAR Port Settings
Creating a 3PAR Common Provisioning Group
After configuring the host ports, you’ll need to create at least one initial 3PAR Common Provisioning Group (CPG). A CPG
defines, among many other things, the RAID level, and disk type of the LUNs created in it. You’ll want to create at least one
CPG for each type of disk (FC, Nearline, SSD) in your array. After HP Helion OpenStack has been configured, you can map
each of these CPGs to a Cinder volume type allowing the cloud user to select from a variety of CPGs representing different
quality of service for their Cinder block storage.
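For reference, once HP Helion OpenStack is installed and the 3PAR Cinder driver has been configured (see Appendix D), that mapping is typically expressed with the standard Cinder client. The sketch below uses an illustrative volume type name and a backend name that would have to match the backend section defined in cinder.conf; neither value is part of this configuration:

cinder type-create 3par-fc-raid5
cinder type-key 3par-fc-raid5 set volume_backend_name=3par_fc
cinder create --volume-type 3par-fc-raid5 --display-name test-volume 10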
To create a CPG for FC disks with RAID5 3+1 (3 data disks + 1 parity disk):
In the toolbar at the top of the page, select Actions → Provisioning → CPG → Create CPG. Click Next on the Welcome page.
Leaving the System defaulting to the array, and Domain defaulting to <none>, enter a name for the CPG. For example
“FC_RAID5_31” for a RAID 5 3+1 CPG. In the Allocation Settings verify the Device Type is FC, and change the RAID Type to
“RAID 5”. The Device RPM value of <Default> is fine, and the Set Size should have changed to “3 data, 1 parity” when you
selected “RAID 5”. Click Finish to create the CPG.
If you have Nearline disks the recommendation is to initially create a “NL_RAID6_62” CPG with a Set Size of “6 data, 2
parity”; for SSD an “SSD_RAID5_31” CPG is recommended. Additional CPGs can be created as needed.
Once the host ports are configured and at least one CPG is created, you’re ready to move on to the SAN switch configuration.
Figure 8 shows the CPG settings as configured on the 3PAR Management Console.
Figure 8. Common Provisioning Group settings
Initial SAN switch configuration
This section assumes that you don’t already have a configuration defined in the switch. If you do, executing these
commands will break all current FC connectivity and be a BAD THING. If your switch already has a configuration please just
move on to the HP OneView configuration section.
In order for HP OneView to manage the SAN switch, it needs to have a configuration defined in the switch. Before you can
create a configuration in the switch, however, it needs to have a zone defined. Since we want HP OneView to create and
manage all the zones in the switch, we’re going to create a dummy zone, use it to create the switch configuration and then,
after OneView has created zones in the switch, delete the dummy one.
To create the dummy zone and the switch configuration, SSH into the switch and execute the following commands:
zonecreate dummy, 10:20:30:40:50:60:70:80
cfgcreate Helion, dummy
cfgenable Helion
When asked, confirm you want to enable the “Helion” configuration.
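Once HP OneView has later created its own zones in the switch, the temporary dummy zone can be removed. A sketch of that cleanup using standard Brocade Fabric OS commands, following the same unquoted style as above (run it only after OneView-created zones exist, otherwise the configuration would again contain no zones):

cfgremove Helion, dummy
zonedelete dummy
cfgenable Helion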
HP OneView setup
HP OneView 1.20 was used to manage the HP BladeSystem, SAN switch and 3PAR. A full description of installing and setting
up HP OneView is beyond the scope of this paper, which assumes that HP OneView is already installed and can be accessed
through the HP Helion IPMI network. It also assumes that the Brocade Network Advisor has been installed and configured on
a VM/Host, has been integrated with HP OneView, and that the 3PAR has also been integrated with HP OneView. Please see
the HP OneView documentation at hp.com/go/oneview/docs for further information on these steps.
Creating HP OneView Networks
The first step in preparing HP OneView for the HP Helion install was to define the networks. The table below shows the
parameters used for this implementation. The names and VLAN IDs specified can be adjusted to meet the requirements of
your environment. Note that except for the SAN network, the bandwidth values aren’t important to the configuration as long
as you don’t exceed the maximum bandwidth; the Network Set, created later, will determine the Ethernet bandwidth
attribute.
Table 9. HP OneView Networks
Network Name   Type            VLAN ID   Preferred Bandwidth   Maximum Bandwidth
External       Ethernet        536       2Gb/s                 10Gb/s
IPMI           Ethernet        436       1Gb/s                 10Gb/s
Management     Ethernet        636       5Gb/s                 10Gb/s
Service        Ethernet        736       1Gb/s                 10Gb/s
SAN-C          Fibre Channel   N/A       8Gb/s                 8Gb/s
Log in to the HP OneView appliance via the web interface and create the networks. Figure 9 below is a screenshot of the HP
OneView network list after all the networks have been created, as well as the overview of the External network.
Figure 9. HP OneView Networks
Creating the HP OneView Network Set
After creating the networks, create a Network Set. The Network Set will contain all of the Ethernet networks created above.
Set the Preferred and Maximum Bandwidth values to 10Gb/s. Also, be sure to click the “Untagged” checkbox next to the
Management network – this tells HP OneView that any network traffic that does not have a VLAN ID in it gets directed to the
Management network. The name of the Network Set isn’t important.
Figure 10. HP Network set creation
The HP OneView Logical Interconnect Group
A new Helion Logical Interconnect Group needs to be created for the specific requirements of HP Helion. The two Virtual
Connect modules are added to the LIG, and then two Uplink Sets are created. The first uplink set, called SUS_External, should be
type “Ethernet” and have all four Ethernet networks (External, Management, IPMI, Service) added to it. Its uplink ports will be
interconnect 1 port X5 and interconnect 1 port X6. The defaults for Connection Mode (Automatic) and LACP timer (Short) are
retained.
The second uplink set, SAN_Uplink, should be type “Fibre Channel”, have network SAN-C added, and use interconnect 2
ports X1 and X2. The port speed is fixed to 8Gb/s. Figure 11 below shows the resulting Uplink Set configurations.
Figure 11. Logical Interconnect Group Uplink Sets
HP OneView Enclosure Import
After the Logical Interconnect Group is created, import the c7000 enclosure used for the HP Helion OpenStack solution into
HP OneView. Several things happen when an enclosure is imported. First, any configuration the enclosure currently has is
erased. This includes all network definitions, server profiles, etc. After that, the selected Logical Interconnect Group (LIG)
configuration is applied to the enclosure. All of the networks, network sets, uplink sets, etc. that are defined in the LIG are
automatically created in the enclosure; subsequent enclosures can be imported into the enclosure group and will be
configured exactly like the previous ones. Adding extra enclosures to the configuration allows for additional compute or
Swift capacity to be added to the HP Helion OpenStack cloud. Lastly, the Onboard Administrator and Virtual Connect
modules are updated to the firmware contained in the selected firmware bundle. Using the latest available HP OneView
Service Pack for ProLiant (SPP) is highly recommended and can be found at hp.com/go/spp.
HP OneView Storage System Port Groups
When a SAN volume is added to an HP OneView server profile, HP OneView automatically zones it to a Storage System Port
group. By default, all of the ports connected to the HP 3PAR array are in a single port group, thus all the hosts have paths to
all the ports in the 3PAR. HP Helion OpenStack requires that the storage system port groups be configured so that
only a single path is available between the array and each host. This is done by putting each 3PAR port into its own unique
HP OneView Storage System port group.
Once logged in to HP OneView and on the Storage Systems page, select the appropriate array and then click on “Edit” from
the Actions menu dropdown. In the resulting dialog box give each port a unique Port Group name by simply typing it into the
Port Group box. This implementation used a naming convention of “SANC_x???”, where the “???” was replaced by the Node,
Slot and Port values of the 3PAR port. For example, port 0:1:1 became Port Group SANC_x011. In the screen shot below,
please note that SAN-D is a defined, configured, but unused SAN. Only SAN-C is used or required by this implementation.
Figure 12. HP OneView Storage System Ports
Importing an HP 3PAR CPG into HP OneView
The CPG(s) created in the Creating a 3PAR Common Provisioning Group section should be imported into OneView to make
them available for Virtual Volume creation. Do this by going to Storage Pools in the HP OneView console. In HP OneView
terminology, an HP 3PAR Common Provisioning Group is a Storage Pool. Click “Add storage pool” to bring up the screen
shown in Figure 13. Clicking the dropdown arrows will allow the selection of the storage system and available CPGs on that
array. Click Add to import the CPG into HP OneView.
Figure 13. Add Storage Pool
HP 3PAR Virtual Volume Creation
The next step is to create two HP 3PAR Virtual Volumes via the HP OneView web GUI as shown in Figure 14. The first volume
is for the Seed KVM Host, and the second is for the initial Helion server profile. The Seed KVM Host only requires 1TB, while
the Helion server LUN needs 2TB. They should both be marked as “Private” in the volume creation dialog, as shown in the
figure.
Only two server profiles need to be defined in HP OneView – the Seed KVM Host profile, and an initial Helion server profile.
The initial Helion server profile can be copied once for each server blade in the environment. HP OneView duplicates all the
settings in the source profile, including creating new versions of the 3PAR LUN(s) and attaching it to the new profile. This
means creating a new server, including storage, network and SAN connections, and BIOS settings only takes a few mouse
clicks, significantly reducing deployment times. If different profiles were required, for example to accommodate different
hardware or volume requirements, more template profiles could be created and then copied as needed.
Figure 14. Seed KVM Host Virtual Volume Creation
Setting up the Seed KVM Host Blade Profile
While all the blades eventually boot from SAN, only the Seed KVM Host needs to be configured to do so. All the other servers
will PXE boot off the Seed VM, and the boot loader the Seed VM provides them redirects the boot process to the SAN drive.
For the Seed KVM Host, create a new profile in HP OneView by going to Server Profiles and selecting “Create” from the
Actions dropdown. Assign the server in bay 1 of the enclosure to the profile and make a connection to the Ethernet “Helion”
Network Set created above. Set this network connection to have a requested bandwidth of 10Gb/s, Port “Auto”, and Boot to
“Not bootable” – all of which were the defaults.
Add a second connection to the profile, this time a Fibre Channel connection to SAN-C. The defaults of 8Gb/s, Auto Port and
“Not bootable” should also be retained. Although this profile needs to boot from SAN, that configuration can’t happen until
after the profile has been initially created. In order to make the LUN a valid boot device, HP OneView needs to know what
port it is on and what LUN ID it has. The port and LUN ID are determined when the profile is created so those values are not
entered yet.
After adding the two connections to the profile click the “SAN Storage” checkbox in the “Create server profile” dialog. This
enables the options to select the host OS type and to add a volume to the profile. The Seed KVM Host will be Ubuntu but HP
OneView doesn’t have an exact match for Ubuntu. Selecting “RHE Linux (5.x,6.x)” will provide the correct 3PAR settings.
Clicking “Add Volume” will bring up another dialog and the previously created Seed KVM Host LUN can be selected. Note that
HP OneView will warn about attaching a volume with only one storage path; click Add again to force the addition of the
volume.
Figure 15. Adding a volume to the Seed KVM Host
If this is the first time the blade has been used after importing the enclosure, be sure to select a Firmware baseline from the
dropdown menu. Once the firmware has been installed, future reconfigurations of the profile will be significantly faster if
“managed manually” is used instead since the blade will not need to boot HPSUM to check the firmware levels. After the
firmware bundle is selected, the initial profile configuration is completed by pressing Create. This takes between 10 and 30
minutes, depending on the blade and how many, if any, firmware updates are required. Creating the other profiles can be
started while waiting for the Seed KVM Host profile creation to finish executing. See Figure 16.
Figure 16. Seed KVM Host initial HP OneView Profile
After the initial profile creation is completed on the Seed KVM Host, it needs to be updated to be SAN bootable. Before this
can be done, find the 3PAR port WWPN by selecting the profile and then “SAN Storage” in the profile menu as shown in
Figure 17. The resulting display shows both the 3PAR Storage Targets (which is the WWPN for the 3PAR port), and the LUN
ID assigned to the volume.
Figure 17. SAN Boot Configuration
Making the SAN volume bootable is done by editing the SAN-C connection, changing it from “Not bootable” to “Primary”, and
putting the 3PAR port WWPN and LUN ID in the fields. This is shown in Figure 18. Updating the profile will force an additional
reboot the next time the blade is powered on.
Figure 18. Setting SAN Boot Parameters
HP Helion OpenStack Server Profiles
The profile for the blade in bay 2 is created next. It can be created exactly like the Seed KVM Host, with 2 exceptions. First,
the 2TB SAN volume is selected this time. Secondly, when the Ethernet network connection is created to the Helion Network
Set, it needs to be marked as the Primary boot connection. This forces the blade to PXE boot off the Seed VM.
The profiles for the other blades can be created by simply selecting the profile for bay 2 and then selecting “Copy” under the
Actions menu. When copying a profile with a SAN volume attached, HP OneView automatically creates a new LUN with the
same volume settings as the original. Enter a unique profile name, assign the profile to the next blade and click on Create.
Installing HP Helion OpenStack
At this point the environment is ready for a standard HP Helion OpenStack installation. The documentation below details
how the reference implementation was installed, but in no way supersedes the official HP Helion OpenStack installation
instructions. When in doubt, or if there are any conflicts between them, please assume the official instructions are correct.
Creating the baremetal.csv file
The baremetal.csv file tells the HP Helion OpenStack installer where to find the servers, how to connect to the management
processor (via the IPMI protocol to the iLO), how much memory and CPU they have, and what the MAC address is for the
primary Ethernet network interface. It contains 7 comma-separated fields consisting of:
• MAC
• iLO Username
• iLO Password
• iLO IP
• Total Core Count
• Memory (in MB)
• Storage (in GiB)
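For example, a single row built from the sample values in Appendix C looks like this:
FE:CC:EE:80:00:27,<iLO User>,<iLO Password>,192.168.146.232,16,65536,2007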
All of this information is easily available via HP OneView and you can use the sample Windows PowerShell script provided in
Appendix B to collect the information. Note that the correct iLO username and password will need to be set in the resulting
output.
The same information is also available from the HP OneView console. The server hardware view for each blade provides the core count (note that you need to multiply the number of processors by the cores per processor), the memory (displayed in GB; multiply by 1024 to get MB), and the iLO IP address, as seen in Figure 19. The server profile view shows the Ethernet NIC MAC address, as seen in Figure 20. Note that leading or trailing blank lines in the baremetal.csv file will cause the installer to fail, and no comments are allowed. Appendix C has a sample baremetal.csv file.
If the file is created in Microsoft® Windows®, be sure to convert the line endings to Linux and make sure the file is saved as
ASCII, not Unicode.
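If the file was edited on Windows, one simple way to normalize it on the Seed KVM Host is the sketch below, which uses standard sed commands to strip carriage returns and blank lines:
sed -i 's/\r$//' baremetal.csv             # convert CRLF (Windows) line endings to LF
sed -i '/^[[:space:]]*$/d' baremetal.csv   # drop any leading or trailing blank lines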
Figure 19. Blade hardware overview
Figure 20. Profile MAC address
Creating the Seed KVM Host
This implementation used Ubuntu Server 14.04.1 LTS as the OS on the Seed KVM Host. The ISO can be downloaded directly
from ubuntu.com/download/server and installed on the host by mounting the ISO via the iLO console’s remote DVD support
and booting from it. All the default Ubuntu installation settings can be used and it should be installed on the SAN boot LUN.
The installer will request a non-root username and “helion” was used in this installation. When the installation is done the
blade will automatically reboot into Ubuntu. Log in with the non-root user created during the installation and become root
with:
sudo -i
After Ubuntu is installed the network must be configured to allow access to the Internet. This is required to install updates
and the required HP Helion OpenStack prerequisite packages.
Ensure there are no other DHCP servers on the Management subnet. The Seed KVM Host should not get an IP address via
DHCP on boot. HP Helion OpenStack expects the DHCP server to be on the Management subnet and having other DHCP
servers respond to the blades on that subnet will make a successful installation impossible. If the Seed KVM Host gets a
DHCP IP address on boot, find the problem and make sure the host does not get an IP address via DHCP.
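One quick way to check for stray DHCP activity on a default Ubuntu 14.04 install is shown below; the interface name and log location are the Ubuntu defaults and may differ in other environments:
ip addr show em1                       # should show no IP address before the static configuration is applied
grep -i dhcp /var/log/syslog | tail    # look for unexpected DHCPOFFER/DHCPACK entries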
The network is configured by editing /etc/network/interfaces so it looks like this:
auto em1
iface em1 inet static
address 172.1.1.21
netmask 255.255.224.0
gateway 172.1.1.12
dns-nameservers 172.1.1.6
Set the values to match your required configuration. Note that Ubuntu “randomly” assigns names to the NICs based on card
type and order found; use whatever NIC name Ubuntu put in the default interfaces file. Restart the network by executing
“service networking restart” and it should be possible to ping the gateway and to successfully nslookup
external addresses like www.ubuntu.com.
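For example, with the addresses used in this implementation:
service networking restart
ping -c 3 172.1.1.12          # the Management network gateway
nslookup www.ubuntu.com       # resolved via the 172.1.1.6 DNS server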
Next update the OS and install the prerequisite packages.
apt-get update
apt-get upgrade
apt-get install -y openssh-server ntp libvirt-bin openvswitch-switch python-libvirt qemu-system-x86 qemu-kvm nfs-common
Configure NTP by editing /etc/ntp.conf and adding the local NTP servers to the top of the servers list. The NTP daemon can
then be stopped, a clock update forced, and the NTP service restarted.
service ntp stop
ntpdate <your NTP server IP>
service ntp start
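If desired, synchronization can be verified with the standard ntpq utility; after a few minutes one of the configured servers should be marked with an asterisk:
ntpq -p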
Generate a SSH key pair for root. Just use the defaults for the file names, and don’t enter a passphrase.
ssh-keygen -t rsa
Now that the Ubuntu software has been installed and OpenvSwitch is available, we can reconfigure the network to automatically create an OpenvSwitch bridge on boot and assign the static Management network IP address to it. This is done by editing /etc/network/interfaces as shown below. If the DNS server is not available, then the dns-nameservers entry can be deleted. The MAC address in the ovs_extra line (a2:f0:e6:80:00:4b in this example) is the MAC address of the current network interface. The order of the stanzas is important.
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto brbm
allow-ovs brbm
iface brbm inet static
address 172.1.1.21
netmask 255.255.224.0
gateway 172.1.1.12
ovs_type OVSBridge
ovs_extra set bridge brbm other-config:hwaddr=a2:f0:e6:80:00:4b
dns-nameservers 172.1.1.6
auto eth0
allow-brbm eth0
iface eth0 inet manual
ovs_bridge brbm
ovs_type OVSPort
The HP Helion OpenStack installer expects the physical NIC to be named “eth0”, not the Ubuntu default of “em1”. While it is possible to use the default “em1” name, it’s easier if the NIC is simply renamed to eth0. This can be done by creating a /etc/udev/rules.d/20-networking.rules file with a single line for each “em” NIC as shown below. The MAC address for each NIC can be found via ifconfig. The NIC with the name “eth0” should have the MAC address of the currently active and working NIC. If your Seed KVM Host has more NICs besides em1, such as em2 or em3, then all of them should be added to the file and renamed ethX.
SUBSYSTEM=="net",ACTION=="add",ATTR{address}=="a2:f0:e6:80:00:4b",NAME="eth0"
SUBSYSTEM=="net",ACTION=="add",ATTR{address}=="a2:f0:e6:80:00:4d",NAME="eth1"
Reboot the system and the network should work and be able to ping the gateway IP. Sometimes, however, the brbm bridge
will be created, NIC em1 renamed to eth0, and eth0 set as the physical port for the bridge but the network still isn’t
operational. This can be solved by executing the following commands:
/etc/init.d/openvswitch-switch restart
ifdown brbm
ifdown eth0
ifup brbm
ifup eth0
The network should now be working and the host accessible via SSH.
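Two quick checks, using standard Open vSwitch and iproute2 commands, confirm the bridge is configured as expected:
ovs-vsctl show       # brbm should be listed with eth0 as one of its ports
ip addr show brbm    # the static Management IP should now be assigned to the bridge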
Unpacking and configuring the HP Helion OpenStack Installer
With the Seed KVM Host configured, mount the HP Helion OpenStack install ISO on it and untar the installer file into /root.
This will create a /root/tripleo directory with all the installation and sample configuration files in it. The tripleo/configs/kvm-custom-ips.json file now needs to be edited to reflect the various network subnets, IP addresses, VLANs, etc. that will make
up the HP Helion Cloud configuration. A copy of the one that was used in our implementation is given in Appendix A and the
various networks and IP addresses are discussed in the networking section.
Executing the HP Helion OpenStack Installer
The installation occurs in two phases. In the first phase, the Seed VM is created on the Seed KVM Host. The second phase,
executed from the Seed VM, installs and configures the undercloud and then the overcloud.
To set up the configuration environment for both installation phases, the kvm-custom-ips.json file is sourced using the
following command before running the appropriate HP Helion OpenStack installation script:
source /root/tripleo/tripleo-incubator/scripts/hp_ced_load_config.sh /root/tripleo/configs/kvm-custom-ips.json
This sets shell environment variables that the installer uses to determine the configuration. You can validate the shell
variables set using the shell’s env command.
During phase one, the Seed VM is installed with:
bash -x ~/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --create-seed 2>&1 | tee seed`date +%Y%m%d%H%M`.log
The Seed VM is created, and a log file kept of the process in seed<timestamp>.log. Examine the logfile for any errors that
may have occurred.
During the creation process, the Seed VM is automatically loaded with the SSH key for the Seed KVM Host root user, so the
Seed VM can be accessed by SSH/SCP directly from the Seed KVM Host root user without the need for a password.
The edited kvm-custom-ips.json and baremetal.csv files need to be copied to the Seed VM as they will be used during the
second phase of the install to communicate the HP Helion Cloud configuration and details for the available baremetal
servers:
scp /root/tripleo/configs/kvm-custom-ips.json root@172.1.1.22:/root
scp /root/baremetal.csv root@172.1.1.22:/root
Once the configuration files are copied over, complete the installation by logging into the Seed VM using the root user,
sourcing the kvm-custom-ips.json file, and running the HP Helion OpenStack installer.
ssh root@172.1.1.22
source /root/tripleo/tripleo-incubator/scripts/hp_ced_load_config.sh /root/kvm-custom-ips.json
bash -x /root/tripleo/tripleo-incubator/scripts/hp_ced_installer.sh 2>&1 | tee cloud`date +%Y%m%d%H%M`.log
The entire installation will take about an hour for the configuration outlined in this paper. Upon successful completion of the
install a fully functioning HP Helion Cloud will be available for use for cloud workloads.
If an error occurs during the install process, it is recommended to start the process over from creating the Seed VM. The
installer, when run on the Seed KVM Host with the --create-seed parameter, will automatically delete and recreate an
existing Seed VM.
Logging in to HP Helion OpenStack
Before logging into either the HP Helion OpenStack undercloud or the HP Helion OpenStack overcloud, the IP address and
administrator password are needed. There are several ways to obtain this information; the following commands illustrate one way to do so for the undercloud while logged on to the Seed VM as root.
This sequence of commands will set up the shell environment on the Seed VM so that OpenStack command line utilities can
be run.
root@hLinux:~# export TE_DATAFILE=~/tripleo/ce_env.json
root@hLinux:~# . ~/tripleo/tripleo-incubator/undercloudrc
root@hLinux:~# env | grep OS
OS_PASSWORD=bd88a6de5acb7869ac7e3fc56ecf0b111d229625
OS_AUTH_URL=http://172.1.1.23:5000/v2.0
OS_USERNAME=admin
OS_TENANT_NAME=admin
OS_CACERT=/usr/local/share/ca-certificates/ephemeralca-cacert.pem
OS_NO_CACHE=True
OS_CLOUDNAME=undercloud
Point a browser to the undercloud IP address (172.1.1.23 in this case) and enter the username (admin) and password
(bd88a6de5acb7869ac7e3fc56ecf0b111d229625 in this case) into the authentication boxes.
The overcloud shell environment can also be set using the same method but sourcing “~/tripleo/tripleo-incubator/overcloudrc” instead of the undercloudrc file.
At this point the HP Helion OpenStack installation is complete. Test the environment by creating a test tenant VM and
enabling it to get to/from the Internet by assigning floating IPs to it.
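As a rough command-line sketch of such a test, assuming the overcloud credentials have been sourced as described above and that an image, flavor, and tenant network have already been created (the names test-image, m1.small, and test-vm and the placeholders in angle brackets are illustrative only; exact syntax depends on the client versions shipped with HP Helion OpenStack 1.0.1):
nova boot --image test-image --flavor m1.small --nic net-id=<tenant network UUID> test-vm
nova floating-ip-create
nova floating-ip-associate test-vm <allocated floating IP>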
Integrating the HP 3PAR as Cinder Storage
Once the core HP Helion Cloud is installed and functional, the HP 3PAR storage can be integrated into it as a Cinder block
storage provider. This allows the HP Helion administrator to create, manage, and delete volumes and snapshots on the
3PAR storage array using Cinder on the HP Helion Cloud.
The first step in integrating the HP 3PAR array with HP Helion is to log in to the Horizon interface on the undercloud as described above. Under Resources → Storage → StoreServ, click the “Register StoreServ” button. Fill in the requested data: an array name, the IP address of the 3PAR, valid 3PAR user credentials, and the port the wsapi service is running on. The completed form is shown in Figure 21. Note that the SAN IP, SAN Username, and SAN Password fields take the same values as the IP Address, Username, and Password fields. Port 8080 is used since the 3PAR Web API Service was previously configured to use HTTPS on that port.
Figure 21. Register StoreServ
After the 3PAR is registered in the undercloud, the 3PAR CPGs need to be registered. The “Register CPG” option under the
“More” menu displays a list of CPGs that have been automatically discovered on the array. Add the desired CPG(s) to the
“Selected CPG(s)” list and register them by clicking the Register button. This is shown in Figure 22, below.
Figure 22. Registering CPGs
The next step is to propagate the HP 3PAR configuration to the overcloud for use by Cinder. While remaining in the undercloud web interface, select the “Add Backend” button on the Overcloud → Configure page in the “StoreServ Backends” tab. Enter a Volume Backend Name of HP3PAR_RAID5_31 to identify the array and CPG used, and then move the CPG to the “Selected StoreServ CPG Choices” panel. Click “Add” to create the new backend mapping and then click the “Generate Config” button to generate a JSON configuration snippet. Download the snippet as this will be used to update the HP Helion Cloud’s configuration.
This JSON snippet describes the Cinder backends and their connectivity, and this information needs to be synchronized to
the HP Helion OpenStack overcloud controller hosts as an update to the configuration. This is achieved by adding the
generated JSON snippet to the /root/tripleo/configs/kvm-custom-ips.json file on the Seed VM. Appendix D shows the
complete JSON configuration file after it was updated with the additional 3PAR content. The updated JSON file was then
sourced into the environment and HP Helion OpenStack updated with the configuration change.
source ~/tripleo/tripleo-incubator/scripts/hp_ced_load_config.sh ~/tripleo/configs/kvm-custom-ips.json
cd ~
~/tripleo/tripleo-incubator/scripts/hp_ced_installer.sh --update-overcloud
The update took approximately 45 minutes to complete.
The last step in integrating the HP 3PAR with Cinder is to create a volume type in the overcloud Cinder service. When
creating a new volume, the volume type is specified so Cinder can schedule the volume creation on the appropriate backend
3PAR storage array and CPG. Using this approach enables multiple Cinder volume types to be created that can map to multiple 3PARs and multiple CPGs on those 3PARs, allowing cloud users to specify the operational characteristics of their block storage devices.
Creating the volume type is done in the overcloud web interface under Admin → System → Volumes → Create Volume Type. Give the volume type a name and create it. Click “View Extra Specs” and then “Create” to bring up a screen where a key/value pair can be entered. A Key of “volume_backend_name” is required. The Value entry is the name of the Volume Backend; this same key/value pair can be seen in the JSON snippet used to update the kvm-custom-ips.json file. In the
sample file in Appendix D, the Volume Backend name is HP3PAR_RAID5_31. Figure 23 shows the filled out Extra Spec
creation form.
It is now possible to create new volumes in the HP 3PAR by going to Project → Compute → Volumes → Create Volume,
entering a name, selecting the Volume Type from the Type dropdown, giving the volume a size and clicking Create. Once
created, volumes can be attached to and accessed by a Cloud instance.
Figure 23. Volume Type Extra Spec
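The same volume type, extra spec, and a test volume can also be created with the Cinder command line client after sourcing the overcloud credentials; the type name HP3PAR_FC and volume name test-volume below are placeholders, while the backend name matches the sample file in Appendix D:
cinder type-create HP3PAR_FC
cinder type-key HP3PAR_FC set volume_backend_name=HP3PAR_RAID5_31
cinder create --volume-type HP3PAR_FC --display-name test-volume 10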
Adding additional Compute Nodes
Adding or removing compute hosts from the HP Helion OpenStack environment is well documented on the HP Helion
OpenStack website at https://docs.hpcloud.com/helion/openstack/install/add/nodes/. The process shown here was under
the heading “Enroll a new baremetal node and then configure compute nodes.”
Using the process described above, source the undercloudrc file.
Listing the current nodes is done with the Ironic “node-list” command, and details about a specific node can be found by
using the “node-show” command and passing the node UUID.
root@undercloud-undercloud-r35oi5m6rxrb:~# ironic node-list
+--------------------------------------+--------------------------------------+-------------+-----------------+-------------+
| uuid                                 | instance_uuid                        | power_state | provision_state | maintenance |
+--------------------------------------+--------------------------------------+-------------+-----------------+-------------+
| fc25cc2a-cae4-4660-ac0a-c21f1b71b82f | ef3eae77-6f84-49ee-b285-27693d8d2a02 | power on    | active          | False       |
| 150ba9b2-9e5e-4300-b5f1-2db369e6b411 | 9dd211cb-dedb-4638-8980-25389b29f5e3 | power on    | active          | False       |
| c79afed3-8fb6-42cd-a6a4-21750535718b | 8bd46a4b-7b60-43a1-93a9-7a69b391a474 | power on    | active          | False       |
| b8ac2ba1-d997-4470-9517-8468bca4f828 | dfb30776-56cb-407c-9155-653a6b3e4949 | power on    | active          | False       |
| 95edfc4b-cb96-4067-b07c-5566f6b6c4f1 | 7984b9b4-b834-4acd-8f83-304e4a585fb7 | power on    | active          | False       |
| abc3f018-d270-4756-a00b-993db2a79b0d | 0161b5ca-925b-4aab-9281-69de7ed38ad8 | power on    | active          | False       |
| 7836ebb9-9bb5-4f0c-9b82-ff390fc291de | 64532b29-0c6b-4b43-874b-e09c6d8a7086 | power on    | active          | False       |
+--------------------------------------+--------------------------------------+-------------+-----------------+-------------+
root@undercloud-undercloud-r35oi5m6rxrb:~# ironic node-show fc25cc2a-cae4-4660-ac0a-c21f1b71b82f
+--------------------+------------------------------------------------------------------------+
| Property           | Value                                                                  |
+--------------------+------------------------------------------------------------------------+
| instance_uuid      | ef3eae77-6f84-49ee-b285-27693d8d2a02                                   |
| target_power_state | None                                                                   |
| properties         | {u'memory_mb': u'65536', u'cpu_arch': u'amd64', u'local_gb': u'2007',  |
|                    |  u'cpus': u'16'}                                                       |
| maintenance        | False                                                                  |
| driver_info        | {u'pxe_deploy_ramdisk': u'e112dee1-72b9-43ef-9d86-80e34af947e5',       |
|                    |  u'pxe_deploy_kernel': u'25ccea27-4705-4557-ae18-32e13930fe07',        |
|                    |  u'ipmi_address': u'192.168.146.233'…                                  |
There are two steps to adding a new baremetal node to Ironic. The first creates the node, and the second creates a NIC port
and associates it with the node. The command to create the node is:
ironic node-create -d pxe_ipmitool -p cpus=<value> -p memory_mb=<value> \
  -p local_gb=<value> -p cpu_arch=<value> -i ipmi_address=<IP Address> \
  -i ipmi_username=<username> -i ipmi_password=<password>
where “cpus” is the total number of cores in the system, “memory_mb” is the memory, “local_gb” is the disk space,
“cpu_arch” is “amd64” for the blades, and “ipmi_address/username/password” are the iLO IP address and credentials.
For example:
ironic node-create -d pxe_ipmitool -p cpus=16 -p memory_mb=131072 \
  -p local_gb=2007 -p cpu_arch=amd64 -i ipmi_address=192.168.146.240 \
  -i ipmi_username=Administrator -i ipmi_password=Password
The second step is to create the network port that is associated with the primary NIC on the server. That command is:
ironic port-create --address <MAC_Address> --node_uuid <uuid>
where “MAC_Address” is the MAC address of the NIC assigned to the profile by HP OneView, and “uuid” is the Ironic UUID
of the node. The UUID is assigned to the node when it is created, and is displayed as part of the output of the “ironic
node-create” command.
For example:
ironic port-create --address A2:F0:E6:80:00:69 --node_uuid b5794774-b1ed-4d52-8309-3a4d04652e0a
This pair of commands should be run for each additional compute node that is to be added to the HP Helion Cloud
environment. The data used in the node-create and port-create commands will be needed to update the baremetal.csv file.
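Because the same data already exists in baremetal.csv, the enrollment can be scripted. The sketch below loops over a file containing only the new rows (new-nodes.csv is a hypothetical file in the seven-field format described earlier) and extracts the UUID from the node-create table output; the awk expression may need adjusting for your client version:
while IFS=, read -r mac user pass ip cpus mem disk; do
  uuid=$(ironic node-create -d pxe_ipmitool -p cpus="$cpus" -p memory_mb="$mem" \
           -p local_gb="$disk" -p cpu_arch=amd64 -i ipmi_address="$ip" \
           -i ipmi_username="$user" -i ipmi_password="$pass" | awk '/ uuid /{print $4}')
  ironic port-create --address "$mac" --node_uuid "$uuid"
done < new-nodes.csv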
The HP Helion Cloud now needs to be updated so that the new compute hosts are added to the configuration. The update process not only updates the HP Helion OpenStack definitions on the overcloud controllers so that the new nodes are available to schedule cloud instances on, but also automatically deploys the HP Helion OpenStack software to each of the new compute nodes. No
Like adding an HP 3PAR, this requires running the HP Helion OpenStack update process. Start by updating the baremetal.csv
file to include the new nodes – these need to be appended to the existing node definitions already in the file. Save the file
and then edit /root/tripleo/configs/kvm-custom-ips.json. Change the “compute_scale” line to indicate the total number of
compute nodes that are required in the Cloud environment. If there was 1 compute node already defined and you are adding
2 more, then “compute_scale” should be 3.
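For example, scaling from one compute node to three changes only this line in the file:
"compute_scale": 3,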
Source the updated kvm-custom-ips.json file and execute the hp_ced_installer.sh update procedure. It will take
approximately 30-45 minutes to run for the configuration shown here.
source ~/tripleo/tripleo-incubator/scripts/hp_ced_load_config.sh ~/tripleo/configs/kvm-custom-ips.json
/root/tripleo/tripleo-incubator/scripts/hp_ced_installer.sh --update-overcloud
Once the update is complete the new hypervisors will be seen in the overcloud web interface under Admin → System → Hypervisors.
Summary
This document showed how to build your own on-site private cloud with HP Helion OpenStack, an open and extensible scale-out cloud platform. To explore further, please see the references shown in the For more information section.
Appendix A – Sample HP Helion OpenStack JSON configuration file
This is a sample of the original /root/tripleo/configs/kvm-custom-ips.json file used during the HP Helion OpenStack
installation.
{
  "cloud_type": "KVM",
  "vsa_scale": 0,
  "vsa_ao_scale": 0,
  "so_swift_storage_scale": 0,
  "so_swift_proxy_scale": 0,
  "compute_scale": 1,
  "bridge_interface": "eth0",
  "virtual_interface": "eth0",
  "fixed_range_cidr": "192.1.0.0/19",
  "control_virtual_router_id": "202",
  "baremetal": {
    "network_seed_ip": "172.1.1.22",
    "network_cidr": "172.1.1.0/19",
    "network_gateway": "172.1.1.12",
    "network_seed_range_start": "172.1.1.23",
    "network_seed_range_end": "172.1.1.40",
    "network_undercloud_range_start": "172.1.1.64",
    "network_undercloud_range_end": "172.1.1.254"
  },
  "neutron": {
    "public_interface_raw_device": "eth0",
    "overcloud_public_interface": "vlan536",
    "undercloud_public_interface": "eth0"
  },
  "ntp": {
    "overcloud_server": "172.1.1.21",
    "undercloud_server": "172.1.1.21"
  },
  "floating_ip": {
    "start": "10.136.107.172",
    "end": "10.136.107.191",
    "cidr": "10.136.96.0/19"
  },
  "svc": {
    "interface": "vlan736",
    "interface_default_route": "172.2.2.12",
    "allocate_start": "172.2.1.2",
    "allocate_end": "172.2.1.250",
    "allocate_cidr": "172.2.1.0/19",
    "overcloud_bridge_mappings": "svcnet1:br-svc",
    "overcloud_flat_networks": "svcnet1",
    "customer_router_ip": "10.136.96.1"
  },
  "codn": {
    "undercloud_http_proxy": "",
    "undercloud_https_proxy": "",
    "overcloud_http_proxy": "",
    "overcloud_https_proxy": ""
  }
}
Appendix B – Sample baremetal PowerShell script
This PowerShell script will print out the information that’s needed to create a baremetal.csv file for the HP Helion OpenStack
installer. It also verifies that all the blades have the same number of cores and the same amount of memory. It assumes
that the HP OneView PowerShell library for HP OneView 1.20 is installed, and that the first enclosure is being used for HP
Helion OpenStack. Update the IP address or hostname for the HP OneView appliance, as well as the login credentials. There
are placeholders for the iLO credentials too. They can be updated here or in the baremetal.csv file after it has been created.
import-module HPOneView.120
connect-HPOVMgmt -Appliance <OV IP> -user Administrator -password <OV Password>
$cores = 0
$mem = 0
(Get-HPOVEnclosure)[0].deviceBays |
    where {$_.bayNumber -gt 1 -and $_.bayNumber -le 8} |
    foreach {send-hpovrequest $_.deviceUri} |
    foreach {
        if ($mem -eq 0) { $mem = $_.memoryMb }
        elseif ($mem -ne $_.memoryMb) { throw "Memory mismatch" }
        if ($cores -eq 0) { $cores = $_.processorCount * $_.processorCoreCount }
        elseif ($cores -ne ($_.processorCount * $_.processorCoreCount)) { throw "Core Count mismatch" }
        $ilo = $_.mpIPAddress
        $s = send-hpovrequest $_.serverProfileUri
        $mac = $s.connections[0].mac
        $vol = [int]((send-hpovrequest $s.SANstorage.volumeAttachments[0].volumeUri).provisionedCapacity / (1024*1024*1024) * 0.98)
        "$mac,<iLO User>,<iLO Password>,$ilo,$cores,$mem,$vol"
    }
Disconnect-HPOVMgmt
Appendix C – Sample baremetal.csv file
The following baremetal.csv file defines a series of blades that have 16 cores, 64GB of memory and 2007GiB of disk space.
Replace the iLO User and iLO password with the values that are appropriate for your environment.
FE:CC:EE:80:00:27,<iLO User>,<iLO Password>,192.168.146.232,16,65536,2007
FE:CC:EE:80:00:29,<iLO User>,<iLO Password>,192.168.146.233,16,65536,2007
FE:CC:EE:80:00:2B,<iLO User>,<iLO Password>,192.168.146.234,16,65536,2007
FE:CC:EE:80:00:2D,<iLO User>,<iLO Password>,192.168.146.235,16,65536,2007
FE:CC:EE:80:00:2F,<iLO User>,<iLO Password>,192.168.146.236,16,65536,2007
FE:CC:EE:80:00:31,<iLO User>,<iLO Password>,192.168.146.237,16,65536,2007
FE:CC:EE:80:00:33,<iLO User>,<iLO Password>,192.168.146.238,16,65536,2007
Appendix D – Sample JSON configuration file with HP 3PAR integration
This is a sample of a complete /root/tripleo/configs/kvm-custom-ips.json file after the HP 3PAR backend configuration JSON has been added. The "3par" object at the end of the file is the JSON snippet produced by the "Generate Config" button; small manual edits, such as the comma after the "codn" block, were needed to merge the snippet into the file. The rest of the file is the original text used during the HP Helion OpenStack installation.
{
  "cloud_type": "KVM",
  "vsa_scale": 0,
  "vsa_ao_scale": 0,
  "so_swift_storage_scale": 0,
  "so_swift_proxy_scale": 0,
  "compute_scale": 1,
  "bridge_interface": "eth0",
  "virtual_interface": "eth0",
  "fixed_range_cidr": "192.1.0.0/19",
  "control_virtual_router_id": "202",
  "baremetal": {
    "network_seed_ip": "172.1.1.22",
    "network_cidr": "172.1.1.0/19",
    "network_gateway": "172.1.1.12",
    "network_seed_range_start": "172.1.1.23",
    "network_seed_range_end": "172.1.1.40",
    "network_undercloud_range_start": "172.1.1.64",
    "network_undercloud_range_end": "172.1.1.254"
  },
  "neutron": {
    "public_interface_raw_device": "eth0",
    "overcloud_public_interface": "vlan536",
    "undercloud_public_interface": "eth0"
  },
  "ntp": {
    "overcloud_server": "172.1.1.21",
    "undercloud_server": "172.1.1.21"
  },
  "floating_ip": {
    "start": "10.136.107.172",
    "end": "10.136.107.191",
    "cidr": "10.136.96.0/19"
  },
  "svc": {
    "interface": "vlan736",
    "interface_default_route": "172.2.2.12",
    "allocate_start": "172.2.1.2",
    "allocate_end": "172.2.1.250",
    "allocate_cidr": "172.2.1.0/19",
    "overcloud_bridge_mappings": "svcnet1:br-svc",
    "overcloud_flat_networks": "svcnet1",
    "customer_router_ip": "10.136.96.1"
  },
  "codn": {
    "undercloud_http_proxy": "",
    "undercloud_https_proxy": "",
    "overcloud_http_proxy": "",
    "overcloud_https_proxy": ""
  },
  "3par": {
    "DEFAULT": {
      "enabled_backends": [
        "CPG_2d0bf597-c14e-4c83-aa1c-86da91f317bb"
      ]
    },
    "CPG_2d0bf597-c14e-4c83-aa1c-86da91f317bb": {
      "san_password": "3pardata",
      "hp3par_username": "3paradm",
      "volume_backend_name": "HP3PAR_RAID5_31",
      "san_login": "3paradm",
      "hp3par_api_url": "https://172.1.1.228:8080/api/v1",
      "volume_driver": "cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver",
      "hp3par_password": "3pardata",
      "hp3par_cpg": "FC_RAID5_31",
      "san_ip": "172.1.1.228"
    }
  }
}
For more information
The following links provide more information on HP Helion OpenStack and HP Infrastructure Products:
HP Helion OpenStack Overview
hp.com/go/helion
HP Helion OpenStack Learning Center
docs.hpcloud.com/helion/openstack
HP Helion OpenStack Community Virtual
Installation and Configuration
https://docs.hpcloud.com/helion/community/install-virtual/
HP Helion OpenStack Community Baremetal
Installation and Configuration
https://docs.hpcloud.com/helion/community/install/
HP Helion Development Platform
http://www8.hp.com/us/en/cloud/helion-devplatform-overview.html
HP Networking 5900 Switch Series
hp.com/go/networking
HP BladeSystem Servers
hp.com/go/bladesystem
HP OneView
hp.com/go/oneview
HP 3PAR StoreServ Storage
hp.com/go/3par
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates
hp.com/go/getupdated
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft and Windows are trademarks of the Microsoft group of companies. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other
countries. Java is a registered trademark of Oracle and/or its affiliate. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.
VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
The OpenStack Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in
the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed, or sponsored by the
OpenStack Foundation, or the OpenStack community.
4AA5-7091ENW, February 2015