Implementing HP Helion
OpenStack® on HP BladeSystem
A solution example for HP Helion OpenStack® private clouds
Table of contents
Executive summary ...................................................................................................................................................................... 2
Introduction to HP Helion OpenStack ....................................................................................................................................... 3
Core HP Helion OpenStack services ...................................................................................................................................... 3
HP Helion OpenStack additional services ............................................................................................................................ 5
HP Helion OpenStack deployment architecture ................................................................................................................. 5
HP Helion OpenStack networking ......................................................................................................................................... 8
HP Helion OpenStack configurations ...................................................................................................................................... 10
HP Helion OpenStack version 1.0.1 using HP BladeSystem .............................................................................................. 10
Network subnets and addresses ......................................................................................................................................... 13
Cabling ....................................................................................................................................................................................... 14
Initial 3PAR configuration ...................................................................................................................................................... 16
Initial SAN switch configuration ........................................................................................................................................... 18
HP OneView setup .................................................................................................................................................................. 18
Installing HP Helion OpenStack............................................................................................................................................ 26
Summary ....................................................................................................................................................................................... 35
Appendix A – Sample HP Helion OpenStack JSON configuration file ............................................................................... 35
Appendix B – Sample baremetal PowerShell script ............................................................................................................ 36
Appendix C – Sample baremetal.csv file ................................................................................................................................ 36
Appendix D – Sample JSON configuration file with HP 3PAR integration ....................................................................... 37
For more information ................................................................................................................................................................. 39
Executive summary
HP Helion OpenStack is an open, extensible, scale-out cloud platform for building your own on-premises private clouds,
with the option of participating in a hybrid cloud when business needs demand it. HP Helion OpenStack is a commercial-
grade product designed to deliver flexible open source cloud computing technology in a resilient, maintainable, and
easy-to-install solution.
The product places special importance on enabling:
Deployment of a secure, resilient and manageable cloud
• Highly available infrastructure services with active failover for important cloud controller services.
• HP’s Debian-based host Linux® running the OpenStack control plane services, reducing security risks by removing
unneeded modules.
• Simplified, guided installation and deployment through TripleO technology for building and managing your cloud.
• Automated, live distribution of regularly tested updates, while you maintain full control over your deployment.
• Inventory management of cloud infrastructure, providing visibility into which resources are free or in use as you deploy
secure services.
Flexibility to scale
• Ability to scale up and down as workload demands change.
• Openness enables you to move, deliver and integrate cloud services across public, private and managed/hosted
environments.
• Optimized for production workload support running on KVM (Kernel-based Virtual Machine) or VMware® vSphere
virtualization.
Global support for the enterprise cloud
• Foundation Care support is included, providing a choice of support levels including same day and 24x7 coverage.
• HP Support provides access to experts in HP’s Global Cloud Center of Excellence as a single source of support and
accountability. This support from HP also qualifies you for HP’s OpenStack Technology Indemnification Program.
• Access to local experts with significant expertise in OpenStack technology and HP Helion OpenStack to accelerate your
implementation.
Combining the secure, manageable and scalable characteristics of HP Helion OpenStack software with HP server, storage
and networking technologies further enhances the cloud solution. HP offers a range of server technologies on which HP
Helion OpenStack can be based, allowing selection of the best server type and form factor for the planned cloud
workload. Customers can choose block storage from the HP 3PAR StoreServ storage array family for cloud applications that
require high-end storage characteristics, or alternatively select the HP Helion supplied HP StoreVirtual VSA Software, a
virtual storage appliance solution running on HP servers.
This paper discusses a sample deployment of the HP Helion OpenStack v1.0.1 software and how this software architecture
can be realized using HP server, storage and networking technologies. Each private cloud solution using HP Helion
OpenStack needs to address specific business needs, and the goal of this paper is to offer a detailed suggested starting
configuration that can be evolved to meet those needs.
The configuration in this paper is designed for use with the fully supported HP Helion OpenStack edition targeted for
production cloud environments in an enterprise setting. HP also offers the HP Helion OpenStack Community edition which is
a free-to-license distribution often useful for proof of concept and testing scenarios. The example configuration in this
paper is not designed for use with HP Helion OpenStack Community edition.
Target audience: This paper is targeted at IT architects who are designing private cloud solutions. A working knowledge of
OpenStack-based cloud software and HP server, networking and storage products is helpful.
DISCLAIMER OF WARRANTY
This document may contain the following HP or other software: XML, CLI statements, scripts, parameter files. These are
provided as a courtesy, free of charge, “AS-IS” by Hewlett-Packard Company (“HP”). HP shall have no obligation to maintain
or support this software. HP MAKES NO EXPRESS OR IMPLIED WARRANTY OF ANY KIND REGARDING THIS SOFTWARE
INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR NON-INFRINGEMENT.
HP SHALL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES, WHETHER
BASED ON CONTRACT, TORT OR ANY OTHER LEGAL THEORY, IN CONNECTION WITH OR ARISING OUT OF THE FURNISHING,
PERFORMANCE OR USE OF THIS SOFTWARE.
Technical white paper | Implementing HP Helion OpenStack on HP BladeSystem
Identity Operations (Keystone)
Based on OpenStack Keystone, the HP Helion OpenStack Identity service provides one-stop authentication for the HP Helion OpenStack private cloud.
The Identity service enables you to create and configure users, specify user roles and credentials, and issue security tokens for users. The Identity service then uses this information to validate that incoming requests are being made by the user who claims to be making the call.

Compute Operations (Nova)
HP Compute Operations services, based on OpenStack Nova, provide a way to instantiate virtual servers on assigned virtual machine compute hosts. Some of the tasks you can perform as a user are creating and working with virtual machines, attaching storage volumes, working with network security groups and key pairs, and associating floating IP addresses.
As an administrator, you can also configure server flavors, modify quotas, enable and disable services, and work with deployed virtual machines.

Network Operations (Neutron)
HP Network Operations services, based on OpenStack Neutron, provide network connectivity and IP addressing for compute instances using a software-defined networking paradigm.
Some of the tasks you can perform as a user are configuring networks and routers, adding and removing subnets, creating a router, associating floating IP addresses, configuring network security groups, and working with load balancers and firewalls.
As an administrator, you can also create an external network and work with DHCP agents and Layer-3 networking agents.

Image Operations (Glance)
HP Image Operations services, based on OpenStack Glance, help manage virtual machine software images. Glance allows for the querying and updating of metadata associated with those images, in addition to the retrieval of the actual image data for use on compute hosts should newly instantiated instances require it.
As a user, you can create, modify and delete your own private images. As an administrator, you can also create, modify and delete public images that are made available to all tenants in addition to their private set of images.

Volume Operations (Cinder)
HP Volume Operations services (or Block Storage), based on OpenStack Cinder, help you perform various tasks with block storage volumes. Cinder storage volume operations include creating a volume, creating volume snapshots, configuring a volume, and attaching/detaching volumes from instances.
As an administrator, you can also modify project quotas, enable services, create volume types and associate quality-of-service metrics with each of the volume types.

Object Operations (Swift)
HP Object Storage service, based on OpenStack Swift, provides you with a way to store and retrieve object data in your HP Helion OpenStack private cloud. You can configure storage containers, upload and download objects stored in those containers, and delete objects when they are no longer needed.
Orchestration (Heat)
HP Orchestration service, based on OpenStack Heat, enables you to design and coordinate multiple composite cloud applications using templates. The definition of a composite application is encompassed in a stack, which includes resource definitions for instances, networks and storage, in addition to providing information on the required software configuration actions to perform against the deployed instances.
As a user, you can create stacks, suspend and resume stacks, view information on stacks, view event information from stack actions, and work with stack templates and infrastructure resources (such as servers, floating IPs, volumes and security groups).

Ironic
HP Helion OpenStack software includes the capability to deploy physical “baremetal” servers in addition to its ability to create new instances within a virtualized server environment. Ironic is the OpenStack component that enables physical server deployment; it allows physical servers with no operating software installed to be bootstrapped and provisioned with software images obtained from Glance.
Ironic features are used during the HP Helion OpenStack installation process to deploy the cloud software onto servers. Use of Ironic outside of the core cloud installation process is currently not supported.

TripleO
TripleO provides cloud bootstrap and installation services for deploying HP Helion OpenStack onto target hardware configurations. TripleO leverages Heat for defining the deployment layout and customization requirements for the target cloud, and uses Ironic services for deploying cloud control software to physical servers using HP supplied software images.
Sherpa
Sherpa is the HP Helion OpenStack content distribution catalog service that provides a mechanism to download and install additional product content and updates for a deployed HP Helion OpenStack configuration.

EON
The HP Helion EON service interacts with VMware vCenter to collect information about the available set of vSphere datacenters and clusters. This information is then used to configure VMware clusters as compute targets for HP Helion OpenStack.

Sirius
The HP Helion OpenStack Sirius service assists the cloud administrator in the configuration of storage services such as Cinder and Swift. It offers a dashboard graphical user interface and a REST-based web service for storage device management.

Centralized Logging and ElasticSearch
HP Helion OpenStack includes a centralized logging facility enabling an administrator to review logs in a single place rather than needing to connect to each cloud infrastructure server in turn to examine local log files. Tools are provided that simplify the analysis of large amounts of log file data, making it easier for the administrator to pinpoint issues more quickly.

Monitoring with Icinga
Monitoring of the HP Helion OpenStack cloud is important for maintaining availability and robustness of services. Two types of monitoring are available:
• Watching for problems: ensures that all services are up and running. Knowing quickly when a service fails is important so that those failures can be addressed, leading to improved cloud availability.
• Watching usage trends: involves monitoring resource usage over time in order to make informed decisions about potential bottlenecks and when upgrades are needed to improve cloud performance and capacity.
HP Helion OpenStack includes support for both the monitoring of problems and the tracking of usage information through Icinga.
High Availability is a key design point for HP Helion OpenStack and the product specifically includes replicated copies of
important services and data that together enhance overall cloud control plane resiliency. Three separate overcloud
controllers are deployed with each installation and these controllers are automatically configured to enable the replication
of services and service data essential for supporting resilient cloud operations.
Further details for the overcloud, undercloud and the Seed VM are discussed in the following sections.
The overcloud
The overcloud is the “production” cloud that end users interact with to obtain cloud services. During the installation phase,
the overcloud is implemented on a number of pre-assigned servers that at a minimum will be composed of:
• Three overcloud controllers (one of which is assigned a special role as the Management Controller).
• A starter Swift cluster implemented on two servers for availability. This Swift cluster is primarily used by Glance for
storing images and instance snapshots although other Swift object storage uses are possible.
• One or more KVM compute servers or a set of pre-existing VMware vSphere clusters used as the compute host targets
for instances.
Based on the customer’s specific needs, the overcloud may also include:
• An optional Swift Scale-Out cluster of between two and twelve servers that is used for large-scale production cloud
Object storage use (Scale-Out Swift extends the Starter Swift Cluster enabling greater capacity while maintaining any
initial data present in Starter Swift).
• An optional VSA based Cinder block storage capability. One or more VSA clusters can be implemented with each cluster
having a recommended number of servers of between one (no High Availability) and three (High Availability is enabled).
Individual VSA clusters with more than three constituent servers (and up to a maximum of fifteen) are possible but
require careful design to ensure appropriate performance.
• An optional HP 3PAR storage array that can be used to provide high performance Cinder block storage.
The overcloud controllers run the core components of the OpenStack cloud including Nova, Keystone, Glance, Cinder, Heat,
Neutron and Horizon.
To enable High Availability, three instances of the overcloud controller are run on three separate physical servers. Software
clustering and replication technologies are used with the database, message queuing and web proxy to ensure that, should
one overcloud controller fail, another active overcloud controller can take over its workload. This Active-Active cluster
design allows the cloud to remain running and cloud users to continue to have access to cloud control functionality even
in the face of an overcloud server failure.
A similar approach is used with the Starter Swift servers where a minimum of two servers is required to ensure High
Availability. The Swift software makes sure that data is replicated appropriately with redundant copies of the Swift object
data spread over both servers.
The Starter Swift servers are deployed within the overcloud and provide the backing storage for Glance images and instance
snapshots as well as being a target for a limited set of Cinder volume backups and a repository for cloud software updates.
The Starter Swift cluster is mandatory because Glance is a required component for any HP Helion OpenStack cloud to
operate. All of these overcloud components are automatically installed as part of the TripleO deployment process.
The remaining required component for the overcloud when using KVM virtualization is the compute server environment. HP
Helion OpenStack supports deploying cloud end user instances either to one or more KVM-based virtualization
hosts running on HP’s host Linux, or to VMware vSphere clusters. For KVM compute nodes, the TripleO-based installation
process will deploy the appropriate software to the target compute servers and configure them for use within the cloud. For
VMware vSphere compute environments, a vSphere cluster must already have been provisioned outside of the TripleO
installation process and preconfigured to meet the prerequisites for operation with HP Helion OpenStack.
A separate set of Swift Proxy and Swift Object servers can be installed for those deployments that have a need for a more
comprehensive object storage capability than that provided by the Starter Swift servers. These additional Swift servers are
not set up as part of the initial installation process but can be configured through TripleO after the core overcloud has been
set up.
The final component of the overcloud is the optional VSA block storage server that offers Cinder support to the cloud. HP
Helion OpenStack supports a number of Cinder block storage server types that include StoreVirtual VSA and 3PAR storage
arrays. For cloud environments that require high-end storage capabilities, the 3PAR storage array can be considered as an
alternative to the StoreVirtual VSA solution. If VSA is chosen as a Cinder provider, then a group of servers each with their
own local disks can be pooled together using the VSA software to offer protected storage to end user instances.
The undercloud
The undercloud is implemented on a physical server and is responsible for the initial deployment and subsequent
configuration and updating of the overcloud. The undercloud itself uses OpenStack technologies for the deployment of the
overcloud but it is not designed for access or use by the general cloud end user population. Undercloud access is restricted
to cloud administrators.
The undercloud runs on a single server and, unlike the overcloud controller nodes, does not implement High Availability
clustering. Once the cloud has been created, the undercloud system is used for a number of purposes, including
providing DHCP and network booting services for the overcloud servers and running the centralized logging and monitoring
software for the cloud.
External
The External network is used to connect cloud instances to an external public network such as a company’s intranet or, in the case of a public cloud provider, the public Internet. The external network has a predefined range of floating IPs which are assigned to individual instances to enable communications between the instance and the assigned corporate intranet/Internet.

Management
The management network is the backbone used for the majority of HP Helion OpenStack management communications. Control messages are exchanged between the overcloud, undercloud, Seed VM, compute hosts, and Swift and Cinder backends through this network. In addition to the control flows, the management network is also used to transport Swift and iSCSI-based Cinder block storage traffic between servers. Also implemented on this network are the VxLAN tunnels that enable tenant networking for the instances.
The HP Helion OpenStack installation processes use Ironic to provision baremetal servers. Ironic uses a network boot strategy with the PXE protocol to initiate the deployment process for new physical servers. The PXE boot and subsequent TFTP traffic is carried over the management network.
The management network is a key network in the HP Helion OpenStack configuration and should use at least a 10Gb network interface card for physical connectivity. Each server targeted for the undercloud, overcloud, VSA, Swift and KVM compute roles should have PXE enabled on this interface so it can be deployed via Ironic and TripleO.
IPMI
The IPMI network is used to connect the IPMI interfaces on the servers that are assigned for use in implementing the cloud. IPMI is a protocol that enables control of servers over the network, performing such activities as powering servers on and off. For HP ProLiant servers, the IPMI network connects to the HP iLO management device port of the server. This network is used by Ironic to control the state of the servers during baremetal deployments.
Note: The IPMI network is designed to be a separate network from the Management network, reachable from the cloud infrastructure servers via an IP-layer network router (see Figure 4). This approach allows access to the main HP Helion OpenStack Management network from the IPMI network to be restricted, especially if filtering rules are available on the IP router being used.

Service
The service network is used with the HP Helion Development Platform, enabling communication between the HP Helion Development Platform components and the HP Helion OpenStack services. This communication is restricted to the HP Helion Development Platform, and access to the network is protected via Keystone credentials. This network is optional and is not required if the cloud deployment is not using the HP Helion Development Platform.

Fibre channel
The fibre channel network is used for communications between the servers that make up the HP Helion OpenStack cloud and the 3PAR storage array(s) that participate in the cloud. This network is a Storage Area Network (SAN) and is dedicated to performing storage input/output to and from 3PAR storage arrays.
The SAN is used for Cinder block storage operations when the 3PAR Cinder plugin is selected and the fibre channel communications option is enabled (the alternative transport option being iSCSI). HP Helion OpenStack also supports boot from SAN for the cloud infrastructure; if that configuration is used, this SAN is also used for that purpose.
SAN switches are required when using HP Helion OpenStack in a SAN environment. “Flat SAN” configurations, where BladeSystem Virtual Connect modules are directly connected to 3PAR storage arrays without intermediary SAN switches, are not supported.
HP Helion OpenStack 1.0.1 requires that a single path is presented to the server for each LUN in the 3PAR storage array. This requires that appropriate zoning and VLUN presentation are configured in the SAN switches and 3PAR arrays.
Although Figure 4 illustrates several logical networks being connected to each of the cloud components, the actual physical
implementation uses a single networking port with a number of the HP Helion OpenStack logical networks being defined as
VLANs. The common approach when using the single NIC port configuration is for the management network to be defined
as the untagged network and for the external and service networks to be VLANs that flow over the same physical port.
• Seed KVM Host: 1 x BL460c Gen8, 2 x 6-core 2.6GHz Intel® Xeon®, 32GB memory, boot from SAN (1TB LUN in 3PAR), 10Gb 554FLB FlexFabric LOM
• Starter Swift: 2 x BL460c Gen8, 2 x 8-core 2.6GHz Intel Xeon, 64GB memory, boot from SAN (2TB LUN in 3PAR), 10Gb 554FLB FlexFabric LOM
• Initial KVM Compute: 1 x BL460c Gen8, 2 x 8-core 2.6GHz Intel Xeon, 64GB memory, boot from SAN (2TB LUN in 3PAR), 10Gb 554FLB FlexFabric LOM
• Additional KVM Compute: 8 x BL460c Gen8, 2 x 8-core 2.6GHz Intel Xeon, 128GB memory, boot from SAN (2TB LUN in 3PAR), 10Gb 554FLB FlexFabric LOM
In addition to the BL460c Gen8 Servers listed above, the following enclosure, SAN switches and 3PAR storage array are also
used.
Table 5. Enclosure, SAN and 3PAR Storage for HP Helion OpenStack on a BladeSystem
Role Configuration
BladeSystem Enclosure 1 HP c7000 Platinum Enclosure. Up to 6 more fully loaded enclosures could be added to the
environment, depending on KVM compute blade count requirements.
Each enclosure includes the following:
• Two Onboard Administrator modules (dual OA for availability)
• Two Virtual Connect FlexFabric 10Gb/24-port modules
(If the target compute nodes are expected to generate combined network and SAN traffic
exceeding the capacity of the two 10Gb ports on each server then consider using the Virtual
Connect FlexFabric-20/40 F8 module and associated higher speed FlexFabric LOMs instead.)
• Appropriate enclosure power and fans for the target datacenter with a design to enable
increased availability through power and fan redundancy
SAN Switches One HP Brocade 8/24 8Gb 24 port AM868B SAN Switch
Storage Array HP 3PAR StoreServ 7400 storage array with two controller nodes with each controller populated
with fibre channel adapters for connectivity to the SAN switches. The implemented disk
configuration had:
• 96 x 300GB 15K RPM FC drives
• Four M6710 24-drive enclosures
A mix of FC, SSD, and/or NL drives can be used to enable multiple levels of Cinder block storage
quality of service. Adjust the drive counts and types as needed.
The first 8 blades are reserved for the Helion control plane, while the rest are available for compute. Except for the Seed
KVM Host, the control plane nodes are assigned automatically by the Helion installation script. A diagram of this hardware is
shown below in Figure 5. The servers are shown in the positions used for this example but, as mentioned, the
additional server roles may be installed on any of the servers.
Figure 5. Example of a general purpose starter cloud using BL460c Gen8 blades
The capacity of this higher-density HP Helion OpenStack configuration example is characterized in the table below.
Compute Servers
Initial: the compute server has 16 cores, 64GB of memory and 2TB of RAID-protected storage delivered by the 3PAR storage array. One 10Gb FlexFabric converged network adapter is used for networking and the other for SAN storage traffic.
Additional: each additional compute server has 16 cores, 128GB of memory and 2TB of RAID-protected storage delivered by the 3PAR storage array. One 10Gb FlexFabric converged network adapter is used for networking and the other for SAN storage traffic.

Starter Swift Cluster
Total of 1.04TB of Swift object storage available across the cluster:
• Each server has 1.95TB of RAID-protected data storage after accounting for operating system overhead
• Two servers supply 3.9TB of RAID-protected data storage
• Swift by default maintains three copies of all stored objects for availability; using a maximum of 80% of total data storage for objects provides a Swift object storage capacity of 1.04TB
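The 1.04TB figure follows directly from the bullets above: raw protected capacity across the cluster, divided by Swift's default three object replicas, capped at the 80% fill guideline. A quick sketch of that arithmetic:

```python
# Starter Swift usable-capacity estimate (values from the table above).
SERVERS = 2
DATA_PER_SERVER_TB = 1.95   # RAID-protected storage per server after OS overhead
REPLICAS = 3                # Swift's default object replica count
FILL_FACTOR = 0.80          # keep object usage at or below 80% of raw capacity

raw_tb = SERVERS * DATA_PER_SERVER_TB         # 3.9 TB across the cluster
usable_tb = raw_tb / REPLICAS * FILL_FACTOR   # divide by replicas, apply fill cap

print(f"Raw: {raw_tb:.1f} TB, usable object capacity: {usable_tb:.2f} TB")
```

The same formula can be reused when sizing a Scale-Out Swift cluster, substituting the server count and per-server capacity.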
The starter Swift cluster is primarily used to store Glance images, instance snapshots and a limited number of
Cinder volume backups so the combination of the intended size for each of these may not exceed 1.04TB. Access
to Starter Swift is also possible for cloud end users and applications but the total Swift capacity available must be
taken into account if this is enabled.
If larger-scale Swift capacity is required than Starter Swift can provide, consider deploying a Scale-Out Swift
configuration.
3PAR Cinder Storage
Cinder block storage is made available directly from the 3PAR storage array. The amount of storage available will depend on the number of compute servers in the configuration (since each compute server is allocated 2TB of boot storage from the 3PAR) and the settings for the RAID levels within the 3PAR. We are using a 3+1 RAID-5 configuration. The LUNs are thin-provisioned.
In the configuration shown, the seven control plane servers will require 13TB of thin-provisioned protected FC-based storage for their boot drives and the nine compute servers will consume 18TB, for a total of 31TB. The 3PAR has a usable capacity of 26.1TB after formatting and RAID overhead, so the 3PAR is approximately 20% oversubscribed. This did not affect the implementation or performance of the environment, although a production system should have more total disk space. This could be accomplished by using more and/or larger drives. A mix of FC and large Nearline disks should be considered for larger volume requirements.
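The oversubscription figure can be reproduced from the server counts and LUN sizes given above (the Seed KVM Host boots from a 1TB LUN, all other servers from 2TB LUNs); by this arithmetic the ratio comes out at about 19%, in line with the roughly 20% cited:

```python
# Thin-provisioned boot-storage demand vs. 3PAR usable capacity
# (counts and LUN sizes from the configuration described above).
seed_tb = 1 * 1.0            # Seed KVM Host: 1 TB boot LUN
other_control_tb = 6 * 2.0   # remaining six control plane servers: 2 TB each
compute_tb = 9 * 2.0         # nine KVM compute servers: 2 TB each

provisioned_tb = seed_tb + other_control_tb + compute_tb   # 31 TB total
usable_tb = 26.1             # 3PAR capacity after formatting and RAID-5 overhead

oversub_pct = (provisioned_tb / usable_tb - 1) * 100
print(f"Provisioned: {provisioned_tb:.0f} TB, oversubscribed by {oversub_pct:.0f}%")
```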
Tenant subnet: 192.1.0.0, netmask 255.255.224.0, gateway 192.1.0.1, N/A (tenant networks dynamically assigned through VxLAN)
These are only meant to be used as example address ranges. The External network will almost certainly need to be changed,
while the others can be used as-is or modified to meet your needs and standards.
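When adapting these ranges, subnet values can be sanity-checked with Python's standard ipaddress module before they go into the configuration; a minimal sketch using the tenant subnet values from the table above:

```python
import ipaddress

# Tenant subnet from the table above: 192.1.0.0 with mask 255.255.224.0 (a /19)
tenant = ipaddress.ip_network("192.1.0.0/255.255.224.0")
gateway = ipaddress.ip_address("192.1.0.1")

print(tenant)                # 192.1.0.0/19
print(tenant.num_addresses)  # 8192 addresses in a /19
print(gateway in tenant)     # True: the gateway falls inside the subnet
```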
IP addresses
The following table reflects the IP addresses given to the various components of the environment. They
match the expectations of the JSON configuration file tripleo/configs/kvm-custom-ips.json shown in Appendix A. This JSON file
is edited according to the IP addresses of the network environment. The IP addresses will need to be adjusted to reflect any
changes made to the subnets, as well as to conform to your specific environment.
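Such adjustments can be scripted rather than hand-edited. Below is a hedged sketch using Python's json module; note that the key names shown are hypothetical placeholders, not the actual Helion schema, so substitute the real keys from the Appendix A sample for your release:

```python
import json

# A fragment standing in for the installer JSON configuration.
# NOTE: these key names are illustrative placeholders, not the real schema;
# use the actual keys from the Appendix A sample file.
config_text = """
{
  "floating-ip-start": "192.0.2.10",
  "floating-ip-end": "192.0.2.20"
}
"""

config = json.loads(config_text)

# Point the floating IP range at the External network values from the table.
config["floating-ip-start"] = "10.136.107.172"
config["floating-ip-end"] = "10.136.107.191"

print(json.dumps(config, indent=2))
```

The same load-modify-dump pattern applies when editing the real tripleo/configs/kvm-custom-ips.json on disk.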
Seed VM: Management, 172.1.1.22. HP Helion Seed VM running on the Seed KVM Host.
Seed VM Range: Management, 172.1.1.23-40. Various seed services use addresses from this range.
Undercloud Range: Management, 172.1.1.64-224. The various undercloud servers and services are automatically assigned IPs from this range.
Floating IP Range: External, 10.136.107.172-191. Range of IPs available for tenant VMs to use to access the external network.
Cabling
The cabling of the environment is shown in Figure 6, below. The Virtual Connect FlexFabric 10Gb/24-port module in
interconnect bay 1 is used for Ethernet network connectivity, while the Virtual Connect FlexFabric 10Gb/24-port module in
interconnect bay 2 is used for SAN connectivity. A pair of 10Gb connections were made from interconnect bay 1 ports X5
and X6 to the HP 5920 Top of Rack switch. Similarly, a pair of 8Gb connections were made from interconnect bay 2 ports X1
and X2 to the HP Brocade 8/24 SAN switch. While it is supported to use a single Virtual Connect FlexFabric module and to
connect both the Ethernet and SAN networks to it, splitting them across a pair of Virtual Connect FlexFabric modules allows
a full 10Gb of Ethernet bandwidth and a full 8Gb of Fibre Channel bandwidth for the HP Helion cloud. If both Ethernet and
SAN networks are combined on a single Virtual Connect module, the Fibre Channel bandwidth must be reduced to 4Gb and
the Ethernet to 6Gb to stay within the 10Gb maximum throughput supported by these modules. If the higher speed Virtual
Connect FlexFabric-20/40 F8 modules are used, then a total of 20Gb is available to the combined networks.
From the HP Brocade 8/24 SAN switch, four 8Gb connections were made, two to each 3PAR controller node. Be sure to use
a pair of controller node partner port pairs for the connection. In this configuration partner pairs 0:1:2/1:1:2 and 0:2:2/1:2:2
were connected. Using partner port pairs ensures that the connection can properly fail over between nodes if one of the
3PAR controllers fails.
In the example output above, the Service State is Inactive, HTTP is Disabled, HTTPS is Enabled, and the HTTPS port is 8080.
This is the default configuration for the Web Services API server. Using HTTPS is highly recommended, although it can be disabled and HTTP enabled with the “setwsapi” command, if desired. Start the service with:
startwsapi
The Web Services API Server will start shortly.
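Once started, the endpoint can be probed from any host that can reach the array on the management network. The helper below is a minimal sketch, not a 3PAR CLI command; it assumes the default HTTPS port of 8080 and uses the array management IP that appears later in this configuration.

```shell
# check_wsapi: hypothetical helper that reports the HTTP status code
# returned by the 3PAR Web Services API endpoint. -k is used because
# the array presents a self-signed certificate by default.
check_wsapi() {
    local array_ip="$1" port="${2:-8080}"
    curl -sk --connect-timeout 5 -o /dev/null -w '%{http_code}\n' \
        "https://${array_ip}:${port}/api/v1"
}

# Example: check_wsapi 172.1.1.228
```

A 2xx or 4xx status means the listener is up; a 000 result means the service or network path is not yet available.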
HP OneView setup
HP OneView 1.20 was used to manage the HP BladeSystem, SAN switch, and 3PAR. A full description of installing and setting up HP OneView is beyond the scope of this paper; it assumes that HP OneView is already installed and can be accessed through the HP Helion IPMI network, that Brocade Network Advisor has been installed and configured on a VM or host and integrated with HP OneView, and that the 3PAR has also been integrated with HP OneView. Please see the HP OneView documentation at hp.com/go/oneview/docs for further information on these steps.
Log in to the HP OneView appliance via the web interface and create the networks. Figure 9 below is a screenshot of the HP
OneView network list after all the networks have been created, as well as the overview of the External network.
The second uplink set, SAN_Uplink, should be type “Fibre Channel”, have network SAN-C added, and use interconnect 2
ports X1 and X2. The port speed is fixed to 8Gb/s. Figure 11 below shows the resulting Uplink Set configurations.
Swift capacity to be added to the HP Helion OpenStack cloud. Lastly, the Onboard Administrator and Virtual Connect
modules are updated to the firmware contained in the selected firmware bundle. Using the latest available HP OneView
Service Pack for ProLiant (SPP) is highly recommended and can be found at hp.com/go/spp.
If this is the first time the blade has been used after importing the enclosure, be sure to select a Firmware baseline from the
dropdown menu. Once the firmware has been installed, future reconfigurations of the profile will be significantly faster if
“managed manually” is used instead since the blade will not need to boot HPSUM to check the firmware levels. After the
firmware bundle is selected, the initial profile configuration is completed by pressing Create. This takes between 10 and 30
minutes, depending on the blade and how many, if any, firmware updates are required. Creating the other profiles can be
started while waiting for the Seed KVM Host profile creation to finish executing. See Figure 16.
After the initial profile creation is completed on the Seed KVM Host, it needs to be updated to be SAN bootable. Before this
can be done, find the 3PAR port WWPN by selecting the profile and then “SAN Storage” in the profile menu as shown in
Figure 17. The resulting display shows both the 3PAR Storage Targets (which is the WWPN for the 3PAR port), and the LUN
ID assigned to the volume.
Making the SAN volume bootable is done by editing the SAN-C connection, changing it from “Not bootable” to “Primary”, and
putting the 3PAR port WWPN and LUN ID in the fields. This is shown in Figure 18. Updating the profile will force an additional
reboot the next time the blade is powered on.
All of this information is easily available via HP OneView and you can use the sample Windows PowerShell script provided in
Appendix B to collect the information. Note that the correct iLO username and password will need to be set in the resulting
output.
The same information is also available from the HP OneView console. The server hardware view for each blade provides the core count (multiply the number of processors by the cores per processor) and the memory (displayed in GB; multiply by 1024 to get MB), as well as the iLO IP address, as seen in Figure 19 (Blade hardware overview). The server profile view shows the Ethernet NIC MAC address (Figure 20). Note that leading or trailing blank lines in the baremetal.csv file will cause the installer to fail, and no comments are allowed. Appendix C has a sample baremetal.csv file.
If the file is created in Microsoft® Windows®, be sure to convert the line endings to Linux and make sure the file is saved as
ASCII, not Unicode.
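The line-ending and blank-line pitfalls above can be caught before running the installer. The helper below is a hypothetical sketch, not part of the HP Helion tooling.

```shell
# clean_baremetal_csv: hypothetical helper that normalizes a baremetal.csv
# written on Windows (CRLF -> LF) and fails if the file has the leading or
# trailing blank lines that break the installer.
clean_baremetal_csv() {
    local f="$1"
    tr -d '\r' < "$f" > "${f}.unix" && mv "${f}.unix" "$f"   # strip CR characters
    if [ -z "$(head -n 1 "$f")" ] || [ -z "$(tail -n 1 "$f")" ]; then
        echo "ERROR: blank leading or trailing line in $f" >&2
        return 1
    fi
    echo "$f: line endings normalized, no blank edge lines"
}
```

Run it against the file immediately before copying it to the Seed VM; it is idempotent, so running it twice is harmless.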
Having other DHCP servers respond to the blades on that subnet will make a successful installation impossible. If the Seed KVM Host gets a DHCP IP address on boot, find the problem and make sure the host does not get an IP address via DHCP.
The network is configured by editing /etc/network/interfaces so it looks like this:
auto em1
iface em1 inet static
address 172.1.1.21
netmask 255.255.224.0
gateway 172.1.1.12
dns-nameservers 172.1.1.6
Set the values to match your required configuration. Note that Ubuntu “randomly” assigns names to the NICs based on card
type and order found; use whatever NIC name Ubuntu put in the default interfaces file. Restart the network by executing
“service networking restart” and it should be possible to ping the gateway and to successfully nslookup
external addresses like www.ubuntu.com.
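Before restarting networking, the stanza can be sanity-checked for the keys this configuration depends on. The helper below is a minimal sketch; `check_static_stanza` is a hypothetical name, not a standard tool.

```shell
# check_static_stanza: hypothetical helper that confirms the static
# stanza in an interfaces file carries all the required keys.
check_static_stanza() {
    local f="$1" key
    for key in address netmask gateway dns-nameservers; do
        grep -Eq "^[[:space:]]*${key}[[:space:]]" "$f" || {
            echo "missing: $key" >&2
            return 1
        }
    done
    echo "static network stanza looks complete"
}

# Example: check_static_stanza /etc/network/interfaces
```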
Next update the OS and install the prerequisite packages.
apt-get update
apt-get upgrade
apt-get install -y openssh-server ntp libvirt-bin openvswitch-switch python-libvirt qemu-system-x86 qemu-kvm nfs-common
Configure NTP by editing /etc/ntp.conf and adding the local NTP servers to the top of the servers list. The NTP daemon can
then be stopped, a clock update forced, and the NTP service restarted.
service ntp stop
ntpdate <your NTP server IP>
service ntp start
Generate an SSH key pair for root. Use the defaults for the file names, and don't enter a passphrase.
ssh-keygen -t rsa
Now that the Ubuntu software has been installed and OpenvSwitch is available, we can reconfigure the network to
automatically create an OpenvSwitch on boot, and assign the static Management network IP address to it. This is done by
editing /etc/network/interfaces as shown below. If the DNS server is not available, then the DNS entry can be deleted. The MAC address on the hwaddress line is the MAC address of the current network interface. The order of the stanzas is important.
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
auto eth0
allow-brbm eth0
iface eth0 inet manual
ovs_bridge brbm
ovs_type OVSPort

auto brbm
allow-ovs brbm
iface brbm inet static
address 172.1.1.21
netmask 255.255.224.0
gateway 172.1.1.12
dns-nameservers 172.1.1.6
ovs_type OVSBridge
ovs_ports eth0
hwaddress ether a2:f0:e6:80:00:4b
The HP Helion OpenStack installer wants the physical NIC to be named “eth0”, not the Ubuntu default of “em1”. While it is possible to use the default “em1” name, it's easier if the NIC is simply renamed to eth0. This can be done by creating a /etc/udev/rules.d/20-networking.rules file with a single line for each “em” NIC, as shown below. The MAC address for each NIC can be found via ifconfig. The NIC with the name “eth0” should have the MAC address of the currently active and working
NIC. If your Seed KVM Host has more NICs besides em1, such as em2 and em3, they should all be added to the file and renamed eth1, eth2, and so on.
SUBSYSTEM=="net",ACTION=="add",ATTR{address}=="a2:f0:e6:80:00:4b",NAME="eth0"
SUBSYSTEM=="net",ACTION=="add",ATTR{address}=="a2:f0:e6:80:00:4d",NAME="eth1"
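Rather than copying MAC addresses by hand, the rules file can be generated. The function below is a sketch that maps em1 to eth0, em2 to eth1, and so on; the sysfs directory is a parameter so the function can be exercised against a fake tree, and on the real host it defaults to /sys/class/net.

```shell
# gen_nic_rename_rules: sketch that emits one udev rename rule per "em"
# NIC found in the given sysfs directory, in sorted order.
gen_nic_rename_rules() {
    local sysdir="${1:-/sys/class/net}" i=0 nic mac
    for nic in $(ls "$sysdir" | grep '^em' | sort); do
        mac=$(cat "$sysdir/$nic/address")
        printf 'SUBSYSTEM=="net",ACTION=="add",ATTR{address}=="%s",NAME="eth%d"\n' "$mac" "$i"
        i=$((i + 1))
    done
}

# On the Seed KVM Host:
# gen_nic_rename_rules > /etc/udev/rules.d/20-networking.rules
```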
Reboot the system; the network should come up and it should be possible to ping the gateway IP. Sometimes, however, the brbm bridge is created, NIC em1 is renamed to eth0, and eth0 is set as the physical port for the bridge, but the network still isn't operational. This can be solved by executing the following commands:
/etc/init.d/openvswitch-switch restart
ifdown brbm
ifdown eth0
ifup brbm
ifup eth0
The network should now be working and the host accessible via SSH.
This sets shell environment variables that the installer uses to determine the configuration. You can validate the shell
variables set using the shell’s env command.
During phase one, the Seed VM is installed with:
bash -x ~/tripleo/tripleo-incubator/scripts/hp_ced_host_manager.sh --create-seed 2>&1 | tee seed`date +%Y%m%d%H%M`.log
The Seed VM is created, and a log file kept of the process in seed<timestamp>.log. Examine the logfile for any errors that
may have occurred.
During the creation process, the Seed VM is automatically loaded with the SSH key for the Seed KVM Host root user, so the
Seed VM can be accessed by SSH/SCP directly from the Seed KVM Host root user without the need for a password.
The edited kvm-custom-ips.json and baremetal.csv files need to be copied to the Seed VM as they will be used during the
second phase of the install to communicate the HP Helion Cloud configuration and details for the available baremetal
servers:
scp /root/tripleo/configs/kvm-custom-ips.json root@172.1.1.22:/root
scp /root/baremetal.csv root@172.1.1.22:/root
Once the configuration files are copied over, complete the installation by logging into the Seed VM using the root user,
sourcing the kvm-custom-ips.json file, and running the HP Helion OpenStack installer.
ssh root@172.1.1.22
source /root/tripleo/tripleo-incubator/scripts/hp_ced_load_config.sh /root/kvm-custom-ips.json
bash -x /root/tripleo/tripleo-incubator/scripts/hp_ced_installer.sh 2>&1 | tee cloud`date +%Y%m%d%H%M`.log
The entire installation takes about an hour for the configuration outlined in this paper. Upon successful completion of the install, a fully functioning HP Helion Cloud will be available for cloud workloads.
If an error occurs during the install process, it is recommended to start the process over from creating the Seed VM. The
installer, when run on the Seed KVM Host with the --create-seed parameter, will automatically delete and recreate an
existing Seed VM.
Point a browser to the undercloud IP address (172.1.1.23 in this case) and enter the username (admin) and password
(bd88a6de5acb7869ac7e3fc56ecf0b111d229625 in this case) into the authentication boxes.
The overcloud shell environment can also be set using the same method but sourcing “~/tripleo/tripleo-incubator/overcloudrc” instead of the undercloudrc file.
At this point the HP Helion OpenStack installation is complete. Test the environment by creating a test tenant VM and
enabling it to get to/from the Internet by assigning floating IPs to it.
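That smoke test can be sketched with the OpenStack command-line clients of this release. The image, flavor, and network identifiers below are hypothetical placeholders; substitute the ones defined in your cloud.

```shell
# smoke_test_vm: hedged sketch of the verification step above using the
# nova and neutron CLIs. Arguments: image name, tenant network UUID, and
# external network name (all placeholders, not values from this paper).
smoke_test_vm() {
    local image="$1" tenant_net_id="$2" ext_net="$3"
    # boot a test instance on the tenant network
    nova boot --image "$image" --flavor m1.small \
        --nic net-id="$tenant_net_id" helion-smoke-test
    # allocate a floating IP on the external network, then associate it:
    neutron floatingip-create "$ext_net"
    # nova floating-ip-associate helion-smoke-test <allocated-floating-ip>
}
```

If the instance boots and the associated floating IP answers ping from outside the cloud, the tenant and external networks are wired correctly.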
After the 3PAR is registered in the undercloud, the 3PAR CPGs need to be registered. The “Register CPG” option under the
“More” menu displays a list of CPGs that have been automatically discovered on the array. Add the desired CPG(s) to the
“Selected CPG(s)” list and register them by clicking the Register button. This is shown in Figure 22, below.
The next step is to propagate the HP 3PAR configuration to the overcloud for use by Cinder. While remaining in the
undercloud web interface, select the “Add Backend” button on the Overcloud Configure page in the “StoreServ Backends”
tab. Enter a Volume Backend Name of HP3PAR_FC_RAID5_31 to specify the array and CPG used, and then move the CPG to
the “Selected StoreServ CPG Choices” panel. Click “Add” to create the new backend mapping and then click the “Generate
Config” button to generate a JSON configuration snippet. Download the snippet as this will be used to update the HP Helion
Cloud’s configuration.
This JSON snippet describes the Cinder backends and their connectivity, and this information needs to be synchronized to
the HP Helion OpenStack overcloud controller hosts as an update to the configuration. This is achieved by adding the
generated JSON snippet to the /root/tripleo/configs/kvm-custom-ips.json file on the Seed VM. Appendix D shows the
complete JSON configuration file after it was updated with the additional 3PAR content. The updated JSON file was then
sourced into the environment and HP Helion OpenStack updated with the configuration change.
source ~/tripleo/tripleo-incubator/scripts/hp_ced_load_config.sh ~/tripleo/configs/kvm-custom-ips.json
cd ~
~/tripleo/tripleo-incubator/scripts/hp_ced_installer.sh --update-overcloud
In the sample file in Appendix D, the Volume Backend name is HP3PAR_FC_RAID5_31. Figure 23 shows the filled-out Extra Spec creation form.
It is now possible to create new volumes on the HP 3PAR by going to Project > Compute > Volumes > Create Volume, entering a name, selecting the Volume Type from the Type dropdown, giving the volume a size, and clicking Create. Once created, volumes can be attached to and accessed by a Cloud instance.
There are two steps to adding a new baremetal node to Ironic. The first creates the node, and the second creates a NIC port
and associates it with the node. The command to create the node is:
ironic node-create -d pxe_ipmitool -p cpus=<value> -p memory_mb=<value> \
  -p local_gb=<value> -p cpu_arch=<value> -i ipmi_address=<IP address> \
  -i ipmi_username=<username> -i ipmi_password=<password>
where “cpus” is the total number of cores in the system, “memory_mb” is the memory, “local_gb” is the disk space,
“cpu_arch” is “amd64” for the blades, and “ipmi_address/username/password” are the iLO IP address and credentials.
For example:
ironic node-create -d pxe_ipmitool -p cpus=16 -p memory_mb=131072 \
  -p local_gb=2007 -p cpu_arch=amd64 -i ipmi_address=192.168.146.240 \
  -i ipmi_username=Administrator -i ipmi_password=Password
The second step is to create the network port that is associated with the primary NIC on the server. That command is:
ironic port-create --address <MAC_Address> --node_uuid <uuid>
where “MAC_Address” is the MAC address of the NIC assigned to the profile by HP OneView, and “uuid” is the Ironic UUID
of the node. The UUID is assigned to the node when it is created, and is displayed as part of the output of the “ironic
node-create” command.
For example:
ironic port-create --address A2:F0:E6:80:00:69 --node_uuid b5794774-b1ed-4d52-8309-3a4d04652e0a
This pair of commands should be run for each additional compute node that is to be added to the HP Helion Cloud
environment. The data used in the node-create and port-create commands will be needed to update the baremetal.csv file.
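When several nodes are being added at once, the node-create/port-create pair can be driven from a small CSV. This is a sketch; the column order of the input file here is hypothetical, so match it to the data you used for baremetal.csv.

```shell
# enroll_nodes: sketch that enrolls each node listed in a CSV of
# "mac,ilo_ip,ilo_user,ilo_password,cpus,memory_mb,local_gb" lines
# (hypothetical field order), capturing the UUID that node-create
# prints and feeding it to port-create.
enroll_nodes() {
    local csv="$1" mac ip user pass cpus mem disk uuid
    while IFS=, read -r mac ip user pass cpus mem disk; do
        uuid=$(ironic node-create -d pxe_ipmitool \
            -p cpus="$cpus" -p memory_mb="$mem" -p local_gb="$disk" \
            -p cpu_arch=amd64 \
            -i ipmi_address="$ip" -i ipmi_username="$user" \
            -i ipmi_password="$pass" | awk '/ uuid /{print $4}')
        ironic port-create --address "$mac" --node_uuid "$uuid"
    done < "$csv"
}

# Example: enroll_nodes new_nodes.csv
```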
The HP Helion Cloud must now be updated so that the new compute hosts are added to the configuration. The update process not only updates the Helion OpenStack definitions in the overcloud controllers, so that the new nodes are available to schedule cloud instances on, but also automatically deploys the HP Helion OpenStack software to each of the new compute nodes. No manual installation of software on the compute nodes outside of the HP Helion Cloud update process is required.
Like adding an HP 3PAR, this requires running the HP Helion OpenStack update process. Start by updating the baremetal.csv
file to include the new nodes – these need to be appended to the existing node definitions already in the file. Save the file
and then edit /root/tripleo/configs/kvm-custom-ips.json. Change the “compute_scale” line to indicate the total number of
compute nodes that are required in the Cloud environment. If there was 1 compute node already defined and you are adding
2 more, then “compute_scale” should be 3.
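That edit can be scripted. The sketch below assumes the "compute_scale" value appears exactly once in the JSON file and that GNU sed is available, as it is on the Ubuntu Seed VM.

```shell
# set_compute_scale: sketch that rewrites the "compute_scale" value in a
# kvm-custom-ips.json style file in place.
set_compute_scale() {
    local file="$1" scale="$2"
    sed -i "s/\"compute_scale\":[[:space:]]*[0-9][0-9]*/\"compute_scale\": ${scale}/" "$file"
}

# Example: set_compute_scale /root/tripleo/configs/kvm-custom-ips.json 3
```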
Source the updated kvm-custom-ips.json file and execute the hp_ced_installer.sh update procedure. It will take
approximately 30-45 minutes to run for the configuration shown here.
source ~/tripleo/tripleo-incubator/scripts/hp_ced_load_config.sh ~/tripleo/configs/kvm-custom-ips.json
/root/tripleo/tripleo-incubator/scripts/hp_ced_installer.sh --update-overcloud
Once the update is complete, the new hypervisors will appear in the overcloud web interface under Admin > System > Hypervisors.
Summary
This document showed how to build your own on-site private cloud with HP Helion OpenStack, an open and extensible scale-out cloud platform. To explore further, please see the references shown in the For more information section.
"codn": {
"undercloud_http_proxy": "",
"undercloud_https_proxy": "",
"overcloud_http_proxy": "",
"overcloud_https_proxy": ""
}
}
"san_login": "3paradm",
"hp3par_api_url": "https://172.1.1.228:8080/api/v1",
"volume_driver":
"cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver",
"hp3par_password": "3pardata",
"hp3par_cpg": "FC_RAID5_31",
"san_ip": "172.1.1.228"
}
}
}
HP OneView hp.com/go/oneview
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft and Windows are trademarks of the Microsoft group of companies. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other
countries. Java is a registered trademark of Oracle and/or its affiliate. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.
VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
The OpenStack Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in
the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed, or sponsored by the
OpenStack Foundation, or the OpenStack community.