
Reference Architecture: Red Hat Enterprise Linux OpenStack Platform

Last update: 27 August 2015
Version 1.0

- Provides both economic and high performance options for cloud workloads
- Describes Lenovo E5-2600 v3 rack servers, networking, and systems management software
- Describes architecture for high availability and distributed functionality
- Includes validated and tested deployment and sizing guide for quick start

Kai Huang
Yixuan Huang
Srihari Angaluri
Krzysztof Janiszewski
Mike Perks

Table of Contents

1 Introduction ........................................................... 1
2 Business problem and business value .................................... 3
  2.1 Business problem ................................................... 3
  2.2 Business value ..................................................... 3
3 Requirements ........................................................... 5
  3.1 Functional requirements ............................................ 5
  3.2 Non-functional requirements ........................................ 5
4 Architectural overview ................................................. 8
5 Component model ........................................................ 9
  5.1 Core OpenStack components .......................................... 9
  5.2 Other components .................................................. 11
  5.3 Red Hat Enterprise Linux OpenStack Platform 6 specific enhancements 11
6 Operational model ..................................................... 13
  6.1 Hardware .......................................................... 13
    6.1.1 Rack servers .................................................. 13
    6.1.2 Network switches .............................................. 15
  6.2 Implementing an OpenStack cluster ................................. 18
    6.2.1 Compute pool .................................................. 19
    6.2.2 Storage pool .................................................. 19
    6.2.3 Controller cluster ............................................ 20
    6.2.4 Deployment server ............................................. 21
    6.2.5 Support services .............................................. 22
  6.3 Systems management ................................................ 23
  6.4 Deployment model .................................................. 24
    6.4.1 Deployment example 1: full rack system ........................ 26
    6.4.2 Deployment example 2 with ThinkServer: healthcare company example 27
7 Deployment considerations ............................................. 28
  7.1 Server configurations ............................................. 28
  7.2 Performance and sizing considerations ............................. 29
  7.3 Networking ........................................................ 31
  7.4 Expansion and scaling ............................................. 34
    7.4.1 Multiple rack expansion ....................................... 34
    7.4.2 Storage scale out ............................................. 34
  7.5 Best practices .................................................... 35
Appendix: Lenovo bill of materials ...................................... 36
Resources ............................................................... 39

1 Introduction
OpenStack continues to gain significant traction in the industry because of the growing adoption of cloud usage
and the flexibility OpenStack offers as an open source development environment. This document describes the
reference architecture (RA) for the Red Hat Enterprise Linux OpenStack Platform with Lenovo hardware.
Red Hat Enterprise Linux OpenStack Platform is an innovative, cost-effective cloud management solution that
also includes automation, metering, and security for the cloud environment. Red Hat Enterprise Linux
OpenStack Platform provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud
on top of Red Hat Enterprise Linux and offers a scalable, highly available platform for the management of
cloud-enabled workloads. Red Hat Enterprise Linux OpenStack Platform version 6.1 is based on the
OpenStack Juno release and includes other features, such as the Red Hat Enterprise Linux OpenStack
Platform installer and support for high availability (HA). It also improves the overall quality of the open source
version of OpenStack.
The Lenovo hardware platform provides an ideal infrastructure solution for cloud deployments. The Lenovo x86
server portfolio spans a broad spectrum of Intel based systems, combining the ThinkServer, System x, Flex
System, NeXtScale, and BladeCenter product lines. These systems provide the full range of features and
functions that are necessary to meet the needs of small businesses all the way up to large enterprises. Lenovo
also uses industry standards in systems management on all these platforms, which enables seamless
integration into cloud management tools, such as OpenStack. Lenovo also provides data center network
switches that are designed specifically for robust, scale-out servers and converged storage interconnect fabrics.
This document describes the Lenovo RA for Red Hat Enterprise Linux OpenStack Platform. The target
environments include Managed Service Providers (MSPs), Cloud Service Providers (CSPs), and enterprise
private clouds that require a complete solution for cloud IaaS deployment and management by using
OpenStack.
This document also features planning, design considerations, and reference configurations for implementing
Red Hat Enterprise Linux OpenStack Platform on the Lenovo hardware platform. The RA enables organizations
to easily adopt OpenStack to simplify the system architecture, reduce deployment and maintenance costs, and
address the lack of convergence in infrastructure management. The reference architecture focuses on
achieving optimal performance and workload density by using Lenovo hardware while minimizing procurement
costs and operational overhead.
OpenStack and other Red Hat branded software are deployed on the Lenovo hardware platform to instantiate
compute, network, storage, and management services. The Lenovo XClarity Administrator software is used
to consolidate systems management across multiple Lenovo servers that span the data center. XClarity
increases IT productivity and efficiency by visualizing, automating, and streamlining complex, repetitive
infrastructure management tasks via out-of-band, scalable management of hundreds of servers. XClarity
enables automation of firmware updates on servers via compliance policies, patterns for system configuration
settings, hardware inventory, baremetal OS and hypervisor provisioning, and continuous hardware monitoring.
XClarity can be easily extended via the published REST API to upwardly integrate into other management tools,
such as OpenStack.
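
As an illustration, the following Python sketch retrieves the managed-server inventory from an XClarity Administrator appliance over REST; the appliance address, credentials, and response fields shown are assumptions to verify against the published XClarity API documentation:

```python
# Illustrative only: pull the managed-server inventory from an XClarity
# Administrator appliance over its REST API. The address, credentials, and
# response fields below are assumptions; check the published API reference.
import requests

XCLARITY = "https://xclarity.example.com"   # assumed appliance address
auth = ("admin", "password")                # assumed credentials

# Assumed inventory endpoint; the appliance returns managed endpoints as JSON.
resp = requests.get(f"{XCLARITY}/nodes", auth=auth, verify=False, timeout=30)
resp.raise_for_status()

for node in resp.json().get("nodeList", []):
    print(node.get("name"), node.get("machineType"), node.get("serialNumber"))
```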
The intended audience for this document is IT professionals, technical architects, sales engineers, and
consultants. Readers are expected to have a basic knowledge of Red Hat Enterprise Linux and OpenStack.


2 Business problem and business value


The following sections outline the value proposition of the Red Hat Enterprise Linux OpenStack Platform with
Lenovo hardware.

2.1 Business problem


Virtualization and cloud achieved tremendous growth on x86 server hardware in recent years. Initially,
virtualization provided immediate relief to server sprawl by enabling consolidation of multiple workloads onto a
single server. As the hypervisor space evolved and cloud proved to be a more cost-effective means for
deploying IT, independent software vendors (ISVs) moved up the software stack as a means of differentiating
themselves while driving customer lock-in and profit.
MSPs and CSPs face intense challenges that drive them to look for economic but scalable cloud solutions that
enable them to offer cost-effective IT services while easily expanding capacity to meet user demand.
OpenStack provides an open source alternative for cloud management that is increasingly used to build and
manage cloud infrastructures to support enterprise operations. The OpenStack free license also significantly
reduces the initial acquisition and expansion costs. However, IT Administrators require deep skills to build a
robust cloud service that is based on OpenStack because it is flexible and offers various deployment and
configuration options. Moreover, scale out of such a cloud cluster introduces risk of an unbalanced
infrastructure, which might lead to performance issues, insufficient network bandwidth, or increased exposure to
security vulnerabilities.

2.2 Business value


The Red Hat Enterprise Linux OpenStack Platform reference architecture solves the problems described in
section 2.1 by providing a blueprint for accelerating the design, piloting, and deployment process of an
enterprise-level OpenStack cloud environment on Lenovo hardware. This reference architecture provides the
following benefits:

Consolidated and fully integrated hardware resources with balanced workloads for compute, network,
and storage.
An aggregation of compute and storage hardware, which delivers a single, virtualized resource pool
that can be customized to different compute and storage ratios to meet the requirements of various
solutions.

Ease of scaling (vertical and horizontal) based on business requirements. Compute and storage
resources can be extended at runtime.

Elimination of single points of failure in every layer, which delivers continuous access to virtual
machines (VMs).

Hardware redundancy and full utilization.

Rapid OpenStack cloud deployment, including updates, patches, security, and usability enhancements
with enterprise-level support from Red Hat and Lenovo.

Unified management and monitoring for kernel-based VMs (KVMs).


The reference architecture reduces the complexity of OpenStack by outlining a validated configuration that
scales and delivers an enterprise level of redundancy across servers, storage, and networking to help enable
HA. In this reference architecture, VMs can be migrated among clustered host servers to support resource
rebalancing and scheduled maintenance, which allows IT operations staff to minimize service downtime and
deployment risks.


3 Requirements
The functional and non-functional requirements for a robust cloud implementation are described in this section.

3.1 Functional requirements


Table 1 lists the functional requirements for a cloud implementation.
Table 1. Functional requirements

Requirement: Mobility
Description: Workload is not tied to any physical location
Supported by:
- VM can be booted from distributed storage and run on different hosts
- Live migration of running VMs
- Rescue mode support for host maintenance

Requirement: Resource provisioning
Description: VMs, virtual storage, and virtual network can be provisioned on demand
Supported by:
- OpenStack compute service
- OpenStack block storage service
- OpenStack network service

Requirement: Management portal
Description: Web-based dashboard for workloads management
Supported by:
- OpenStack dashboard (Horizon) for most routine management operations

Requirement: Multi-tenancy
Description: Resources are segmented based on tenancy
Supported by:
- Built-in segmentation and multi-tenancy in OpenStack

Requirement: Metering
Description: Collect measurements of used resources to allow billing
Supported by:
- OpenStack metering service (Ceilometer)

3.2 Non-functional requirements


Table 2 lists the non-functional requirements for MSP or CSP cloud implementation.
Table 2. Non-functional requirements

Requirement: OpenStack environment
Description: Supports the current OpenStack edition
Supported by:
- OpenStack Juno release through Red Hat Enterprise Linux OpenStack Platform 6.1

Requirement: Scalability
Description: Solution components can scale for growth
Supported by:
- Compute nodes and storage nodes can be scaled independently within a rack or across racks without service downtime

Requirement: Load balancing
Description: Workload is distributed evenly across servers
Supported by:
- Network interfaces are teamed and load balanced
- Use of the OpenStack scheduler for balancing compute and storage resources
- Data blocks are distributed across storage nodes and can be rebalanced on node failure

Requirement: High availability
Description: Single component failure will not lead to whole system unavailability
Supported by:
- Hardware architecture ensures that computing service, storage service, and network service are automatically switched to remaining components
- Controller node, compute node, and storage node are redundant
- Data is stored on multiple servers and accessible from any one of them; therefore, no single server failure can cause loss of data

Requirement: Physical footprint
Description: Compact solution
Supported by:
- Lenovo System x servers, network devices, and software are integrated into one rack with validated performance and reliability
- Provides a 1U compute node option

Requirement: Ease of installation
Description: Reduced complexity for solution deployment
Supported by:
- A dedicated deployment server with a web-based deployment tool provides greater flexibility and control over how you deploy OpenStack in your cloud

Requirement: Support
Description: Available vendor support
Supported by:
- Optional deployment services
- Hardware warranty and software support are included with component products
- Separately available commercial support from Red Hat for OpenStack issues

Requirement: Flexibility
Description: Solution supports variable deployment methodologies
Supported by:
- Hardware and software components can be modified or customized to meet various unique customer requirements

Requirement: Robustness
Description: Solution continuously works without routine supervision
Supported by:
- Provides local and shared storage for workload
- Red Hat Enterprise Linux OpenStack Platform 6.1 is integrated and validated on Red Hat Enterprise Linux 7
- Integration tests on hardware and software components

Requirement: Security
Description: Solution provides means to secure customer infrastructure
Supported by:
- Security is integrated in the Lenovo System x hardware with System x Trusted Platform Assurance, an exclusive set of industry-leading security features and practices
- Networks are isolated by virtual LAN (VLAN) and virtual extensible LAN (VXLAN) for virtual networks

Requirement: High performance
Description: Solution components are high-performance
Supported by:
- Provides 28 - 60 medium workloads (2 vCPU, 6 GB vRAM, 100 GB disk) per host


4 Architectural overview
Red Hat Enterprise Linux OpenStack Platform 6.1 is based on the OpenStack Juno release. This
platform offers the innovation of the OpenStack community project and provides the security, stability, and
enterprise-readiness of a platform that is built on Red Hat Enterprise Linux. Red Hat Enterprise Linux
OpenStack Platform supports the configuration of data center physical hardware into private, public, or hybrid
cloud platforms and includes the following benefits:

Fully distributed object storage

Persistent block-level storage

VM provisioning engine and image storage

Authentication and authorization mechanisms

Integrated networking

A web-based GUI for users and administration

Figure 1 shows the main components in Red Hat Enterprise Linux OpenStack Platform, which is a collection of
interacting services that control the compute, storage, and networking resources. Administrators use a
web-based interface to control, provision, and automate OpenStack resources. Also, programmatic access to
the OpenStack infrastructure is facilitated through an extensive REST API, which enables a rich set of add-on
capabilities.
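
As a minimal illustration of that programmatic access, the following Python sketch authenticates through Keystone and lists Nova instances and Glance images with the openstacksdk library; the endpoint and credentials are placeholders, not values from this reference architecture:

```python
# Minimal sketch of programmatic access to the OpenStack APIs through the
# openstacksdk library; the Keystone URL, project, and credentials below are
# placeholders for a real deployment.
import openstack

conn = openstack.connect(
    auth_url="http://controller-vip:5000/v3",  # placeholder Keystone endpoint
    project_name="demo",
    username="demo",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# List VM instances (Nova) and available images (Glance) via the REST API.
for server in conn.compute.servers():
    print("server:", server.name, server.status)
for image in conn.image.images():
    print("image:", image.name)
```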

Figure 1. Overview of Red Hat Enterprise Linux OpenStack platform


5 Component model
This section describes the components that are used in OpenStack, specifically the Red Hat Enterprise Linux
OpenStack Platform distribution.

5.1 Core OpenStack components


Figure 2 shows the core components of Red Hat Enterprise Linux OpenStack Platform. It does not include some
optional add-on components, which are listed in the Other components section in this document.

Figure 2. Components of Red Hat Enterprise Linux OpenStack platform


Table 3 lists the core components of Red Hat Enterprise Linux OpenStack Platform as shown in Figure 2.
Table 3. Core components

Compute service (Nova): Provisions and manages VMs, which creates a redundant and horizontally scalable
cloud computing platform. It is hardware- and hypervisor-independent and has a distributed and asynchronous
architecture that provides HA and tenant-based isolation.

Block storage (Cinder): Provides persistent block storage for VM instances. The ephemeral storage of deployed
instances is non-persistent; therefore, any data that is generated by the instance is destroyed after the instance
is terminated. Cinder uses persistent volumes that are attached to instances for data longevity, and instances
can boot from a Cinder volume rather than from a local image.

Virtual network (Neutron): OpenStack Networking is a pluggable networking-as-a-service framework for
managing networks and IP addresses. This framework supports several flexible network models, including
Dynamic Host Configuration Protocol (DHCP) and VLAN.

Image management (Glance): Provides discovery, registration, and delivery services for virtual disk images.
The images can be stored on multiple back-end storage units and are cached locally to reduce image staging
time.

Authentication (Keystone): Provides a central and unified authorization mechanism for all OpenStack users and
services across all projects. It supports integration with existing authentication services, such as Lightweight
Directory Access Protocol (LDAP).

Telemetry (Ceilometer): Provides infrastructure to collect measurements within OpenStack. Delivers a unique
point of contact for billing systems to acquire all of the needed measurements to establish customer billing
across all current OpenStack core components. An administrator can configure the type of collected data to
meet operating requirements.

Dashboard (Horizon): An extensible, web-based application that runs as a Hypertext Transfer Protocol (HTTP)
service, which enables cloud administrators and users to control and provision compute, storage, and
networking resources.

Table 4 lists the optional components in the Red Hat Enterprise Linux OpenStack Platform release. Cloud
operators can choose whether and how to use them based on the actual use cases of their cloud deployments.
Table 4. Optional core components

Object Storage (Swift): Cloud storage software that is built for scale and optimized for durability, availability,
and concurrency across the entire data set. It can store and retrieve lots of data with a simple API, and it is
ideal for storing unstructured data that can grow without bound. (Ceph is used in this reference architecture,
instead of Swift, to provide the object storage service.)

Orchestration (Heat): An orchestration engine to start multiple composite cloud applications that are based on
templates in the form of text files. Other templates also can be started, such as AWS CloudFormation.

Database as a Service (Trove): Provides scalable and reliable cloud database-as-a-service provisioning
functionality for relational and non-relational database engines.

OpenStack defines the concepts that are listed in Table 5 to help the administrator further manage the tenancy
or segmentation in a cloud environment.
Table 5. OpenStack tenancy concepts

Tenant: The OpenStack system is designed to have multi-tenancy on a shared system. Tenants (or projects,
as they are also called) are isolated resources that consist of separate networks, volumes, instances, images,
keys, and users. Quota controls can be applied to these resources on a per-tenant basis.

Availability Zone: In OpenStack, an availability zone allows a user to allocate new resources with defined
placement. The instance availability zone defines the placement for allocation of VMs, and the volume
availability zone defines the placement for allocation of virtual block storage devices.

Host Aggregate: A host aggregate further partitions an availability zone. It consists of key-value pairs that are
assigned to groups of machines and can be used in the scheduler to enable advanced scheduling.

Region: Regions segregate the cloud into multiple compute deployments. Administrators can use regions to
divide a shared-infrastructure cloud into multiple sites with separate API endpoints without coordination
between sites. Regions share the Keystone identity service, but each has a different API endpoint and a full
Nova compute installation.
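
As an illustration of the availability zone and host aggregate concepts, the following hedged Python sketch uses the openstacksdk compute proxy to create an aggregate that defines a zone, tag it with scheduler metadata, and add hosts to it; the cloud entry, aggregate name, and host names are placeholders:

```python
# Hedged sketch of the tenancy concepts above: create a host aggregate that
# also defines an availability zone, tag it for the scheduler, and add hosts.
# The cloud entry, aggregate name, and host names are placeholders.
import openstack

conn = openstack.connect(cloud="lenovo-osp")  # assumes a clouds.yaml entry

# An aggregate created with an availability_zone also defines that zone.
agg = conn.compute.create_aggregate(name="ssd-hosts",
                                    availability_zone="zone-1")

# Key-value metadata that scheduler filters can match against.
conn.compute.set_aggregate_metadata(agg, disk="ssd")

# Partition compute hosts into the aggregate.
conn.compute.add_host_to_aggregate(agg, "compute01")
conn.compute.add_host_to_aggregate(agg, "compute02")
```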

5.2 Other components


The following components also are available:

Red Hat Ceph Storage is a distributed object store and file system that provides a robust, highly
scalable block and object storage platform for enterprises deploying public or private clouds.

MariaDB is open source database software that is shipped with Red Hat Enterprise Linux as a
replacement for MySQL. MariaDB Galera cluster is a synchronous multi-master cluster for MariaDB. It
uses synchronous replication between every instance in the cluster to achieve an active-active
multi-master topology, which means that every instance can accept read and write requests and
failed nodes do not affect the function of the cluster.

RabbitMQ is a robust open source messaging system that is based on the AMQP standard. In Red Hat
Enterprise Linux OpenStack Platform 6, RabbitMQ is recommended and replaces Qpid as the default
message broker.

5.3 Red Hat Enterprise Linux OpenStack Platform 6 specific enhancements

Red Hat Enterprise Linux OpenStack Platform offers the innovation of the OpenStack community project and
provides the security, stability, and enterprise-readiness of a platform that is built on Red Hat Enterprise Linux,
which includes 3 years of production support.
Red Hat Enterprise Linux OpenStack Platform 6 is fully integrated with the Red Hat Enterprise Linux High
Availability Add-On to support highly available environments for customer deployments. For more information
about the High Availability Add-On, see the High Availability Add-On Overview website.
Red Hat Enterprise Linux OpenStack Platform also includes a wizard-based tool for the easy deployment of Red
Hat Enterprise Linux OpenStack Platform across a set of hardware. The installer builds upon the capabilities of
the Foreman deployment tool to make enterprise-grade installations much easier. For more information about
other enhancements over previous Red Hat Enterprise Linux OpenStack Platform releases, see the Release
Information website.


6 Operational model
Because OpenStack provides a great deal of flexibility, the operational model for OpenStack normally depends
upon a thorough understanding of the requirements and needs of the cloud users to design the best possible
configuration to meet the requirements. For more information, see the OpenStack Operations Guide.
This section describes the Red Hat Enterprise Linux OpenStack Platform operational model that is verified with
Lenovo hardware and software. It concludes with some example deployment models that use Lenovo servers
and Lenovo RackSwitch network switches.

6.1 Hardware
The OpenStack software was validated to run on all Lenovo hardware platforms, including System x and
ThinkServer. This reference architecture guide focuses on the Red Hat OpenStack Platform that is deployed on
the Lenovo System x3650 M5 and Lenovo System x3550 M5 servers that are installed with the Red Hat
Enterprise Linux 7 operating system and Red Hat Enterprise Linux OpenStack Platform version 6.1.1. The
network configuration consists of the RackSwitch network switches.

6.1.1 Rack servers


The following sections describe the validated server options for Red Hat Enterprise Linux OpenStack Platform.

Lenovo System x3650 M5


The Lenovo System x3650 M5 server (as shown in Figure 3 and Figure 4) is an enterprise class 2U two-socket
versatile server that incorporates outstanding reliability, availability, and serviceability (RAS), security, and high
efficiency for business-critical applications and cloud deployments. It offers a flexible, scalable design and
simple upgrade path to 26 2.5-inch hard disk drives (HDDs) or solid-state drives (SSDs), or 14 3.5-inch HDDs,
with doubled data transfer rate via 12 Gbps serial-attached SCSI (SAS) internal storage connectivity and up to
1.5TB of TruDDR4 Memory. Its onboard Ethernet solution provides four standard embedded Gigabit Ethernet
ports and two optional embedded 10 Gigabit Ethernet ports without occupying PCIe slots.
Combined with the Intel Xeon processor E5-2600 v3 product family, the Lenovo x3650 M5 server offers a
high density of workloads and performance that is targeted to lower the total cost of ownership (TCO) per VM.
Its flexible, pay-as-you-grow design and great expansion capabilities solidify dependability for any kind of
virtualized workload, with minimal downtime.
The Lenovo x3650 M5 server provides internal storage density of up to 100 TB (with up to 26 x 2.5-inch drives)
in a 2U form factor with its impressive array of workload-optimized storage configurations. The x3650 M5 offers
easy management and saves floor space and power consumption for the most demanding storage virtualization
use cases by consolidating the storage and server into one system.

Figure 3. Lenovo x3650 M5 (with 16 2.5-inch disk bays)


Figure 4. Lenovo x3650 M5 (with 3.5-inch disk bays)


For more information, see the following websites:

System x3650 M5 Overview

System x3650 M5 Product Guide

Lenovo System x3550 M5


The Lenovo System x3550 M5 server (as shown in Figure 5) is a cost- and density-balanced 1U two-socket
rack server. The x3550 M5 features a new, innovative, energy-smart design with up to two Intel Xeon
processors of the high-performance E5-2600 v3 product family of processors; up to 1.5TB of faster,
energy-efficient TruDDR4 memory; up to 12 12Gb/s SAS drives; and up to three PCI Express (PCIe) 3.0 I/O
expansion slots in an impressive selection of sizes and types. The improved feature set and exceptional
performance of the x3550 M5 is ideal for scalable cloud environments.

Figure 5. Lenovo x3550 M5


For more information, see the following websites:

System x3550 M5 - Overview

System x3550 M5 Product Guide

Lenovo ThinkServer RD550


The Lenovo ThinkServer RD550 (as shown in Figure 6) is a 1U two-socket server that features Intel Xeon
E5-2600 v3 processors and supports up to 768 GB of DDR4 memory, 18 cores, and 36 threads per socket. It
offers the capability to mix and match internal HDD and SSD storage with up to 12 2.5-inch drive bays,
1 GbE and 10 GbE networking capability, up to eight hot-swap dual-rotor fans, hot-swap redundant power
supplies, and a dedicated Gigabit Ethernet out-of-band management port in a dense 1U design.

Figure 6. Lenovo ThinkServer RD550


For more information, see the Lenovo ThinkServer RD550 Product Guide.


Lenovo ThinkServer RD650


The Lenovo ThinkServer RD650 (as shown in Figure 7) is a 2U, two-socket storage-rich server that provides
up to 74 TB of storage capacity. It features the unique Lenovo AnyBay design, which allows a hybrid option
with nine 3.5-inch and six 2.5-inch front-access drive bays plus two 2.5-inch drive bays in the back, in addition
to the standard 3.5-inch and 2.5-inch HDD chassis; this design is ideal for creating a tiered storage environment.
The RD650 provides Lenovo AnyRAID technology, which is a mid-plane RAID adapter design that connects
directly to the drive backplane without the use of a PCIe slot. SD card options are available to enable flexible
boot-drive choices.

Figure 7. Lenovo ThinkServer RD650


For more information, see the Lenovo ThinkServer RD650 Product Guide.

6.1.2 Network switches


The following sections describe the top-of-rack (TOR) switch options that can be used in this reference
architecture. The 10 Gb switches are used for the internal and external networks of the Red Hat Enterprise
Linux OpenStack Platform cluster, and the 1 Gb switch is used for out-of-band server management.

Lenovo RackSwitch G8124E


The Lenovo RackSwitch G8124E (as shown in Figure 8) delivers exceptional performance that is lossless and
low-latency. It provides HA and reliability with redundant power supplies and fans as standard. The RackSwitch
G8124E also delivers excellent cost savings and a feature-rich design when it comes to virtualization,
Converged Enhanced Ethernet (CEE)/Fibre Channel over Ethernet (FCoE), Internet Small Computer System
Interface (iSCSI), HA, and enterprise-class Layer 2 and Layer 3 functionality.

Figure 8. Lenovo RackSwitch G8124E


With support for 10 Gb, this 24-port switch is designed for clients who are using 10 Gb Ethernet or plan to do so.
The G8124E supports Lenovo Virtual Fabric, which provides the ability to dynamically allocate bandwidth per
virtual network interface card (vNIC) in increments of 100 MB, while adjusting over time without downtime.
For more information, see the RackSwitch G8124E Product Guide.


Lenovo RackSwitch G8264


Designed with top performance in mind, the Lenovo RackSwitch G8264 (as shown in Figure 9) is ideal for today's
big data, cloud, and optimized workloads. The G8264 switch offers up to 64 10 Gb SFP+ ports in a 1U form
factor and includes four 40 Gb QSFP+ ports for future use. It is an enterprise-class and full-featured data
center switch that delivers line-rate, high-bandwidth switching, filtering, and traffic queuing without delaying data.
Large data center grade buffers keep traffic moving. Redundant power and fans and numerous high-availability
features equip the switches for business-sensitive traffic.

Figure 9: Lenovo RackSwitch G8264


The G8264 switch is ideal for latency-sensitive applications, such as client virtualization. It supports Virtual
Fabric to help clients reduce the number of I/O adapters to a single dual-port 10 Gb adapter, which helps reduce
cost and complexity. The G8264 switch supports the newest protocols, including Data Center
Bridging/Converged Enhanced Ethernet (DCB/CEE) for support of FCoE, iSCSI, and NAS.
For more information, see the RackSwitch G8264 Product Guide.

Lenovo RackSwitch G7028


The Lenovo RackSwitch G7028 (as shown in Figure 10) is a 1 Gb top-of-rack switch that delivers line-rate Layer
2 performance at an attractive price. G7028 has 24 10/100/1000BASE-T RJ45 ports and four 10 Gb Ethernet
SFP+ ports. It typically uses only 45 W of power, which helps improve energy efficiency.

Figure 10. Lenovo RackSwitch G7028


For more information, see the RackSwitch G7028 Product Guide.


Lenovo RackSwitch G8052


The Lenovo System Networking RackSwitch G8052 (as shown in Figure 11) is an Ethernet switch that is
designed for the data center and provides a virtualized, cooler, and simpler network solution. The Lenovo
RackSwitch G8052 offers up to 48 1 GbE ports and up to four 10 GbE ports in a 1U footprint. The G8052 switch
is always available for business-sensitive traffic by using redundant power supplies, fans, and numerous
high-availability features.

Figure 11: Lenovo RackSwitch G8052


For more information, see the RackSwitch G8052 Product Guide.


6.2 Implementing an OpenStack cluster


Figure 12 shows how the components and concepts that are described in "Component model" on page 9 are
related to each other and how they form an OpenStack cluster.

Figure 12: Deployment model for Red Hat Enterprise Linux OpenStack Platform

The deployment node runs a GUI-based wizard, which is responsible for the initial deployment of the controller
node, compute node, and storage node by using PXE boot. It also performs the service configuration and wiring
between all deployed nodes and can perform incremental deployment of individual nodes after the initial
deployment.
The controller node cluster consists of three controller nodes. The nodes act as a central entrance for
processing all internal and external cloud operation requests. The controller nodes also manage the lifecycle of
all VM instances that are running on the compute nodes and provide essential services, such as authentication
and networking to the VM instances. The controller nodes rely on some support services, such as DHCP, DNS,
and NTP.
The Nova-Compute and Open vSwitch agents run on the compute nodes. The agents receive instrumentation
requests from the controller node via RabbitMQ messages to manage the compute and network virtualization of
instances that are running on the compute node. Compute nodes can be aggregated into pools of various sizes
for better management, performance, or isolation. A Ceph-based storage cluster is created on the storage node,
which is largely self-managed and is supervised by the Ceph monitor that is installed on the controller node. The
Ceph cluster provides block data storage for Glance image store and for VM instances via the Cinder service.


6.2.1 Compute pool


The compute pool (as shown on the lower-right side of Figure 12) consists of multiple compute servers that are
virtualized by OpenStack Nova to provide a redundant and scalable cloud computing environment. Red Hat
Enterprise Linux OpenStack Platform provides Nova drivers to enable virtualization on standard x86 servers
(such as the x3550 M5 and the x3650 M5) and offers support for multiple hypervisors. Network addresses are
automatically assigned to VM instances. Compute nodes inside the compute pool can be grouped into one or
more host aggregates according to business need.

6.2.2 Storage pool


A storage pool (as shown on the lower-left side of Figure 12) consists of multiple storage servers that provide
persistent block storage resources from their local drives.
Ceph is open source software from Red Hat that provides Exabyte-level scalable object, block, and file storage
from a completely distributed computer cluster with self-healing and self-managing capabilities. Ceph
virtualizes the pool of the block storage devices and stripes the virtual storage as objects across the servers.
Ceph is well integrated with OpenStack in the Juno release. The OpenStack Cinder storage component and
Glance image services can be implemented on top of the Ceph distributed storage.
OpenStack users and administrators can use the Horizon dashboard or the OpenStack command-line
interface to request and use the storage resources without requiring knowledge of where the storage is
deployed or how the block storage volume is allocated in a Ceph cluster.
The Nova, Cinder, and Glance services on the controller and compute nodes use the Ceph driver as the
underlying implementation for storing the actual VM or image data. Ceph divides the data into placement
groups to balance the workload of each storage device. Data blocks within a placement group are further
distributed to logical storage units called Object Storage Devices (OSDs), which often are physical disks or
drive partitions on a storage node.
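
Placement-group counts are commonly sized from the OSD count with a rule of thumb (roughly 100 placement groups per OSD, divided by the replica count and rounded up to a power of two). The following Python sketch illustrates that general guideline; it is a community heuristic, not a prescription from this reference architecture:

```python
# Rule-of-thumb sketch (community heuristic, not from this document): estimate
# a placement group count for a Ceph pool from the OSD and replica counts,
# rounded up to the next power of two for balanced data placement.
def pg_count(num_osds: int, replicas: int, target_pgs_per_osd: int = 100) -> int:
    raw = num_osds * target_pgs_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

# Example: 3 storage nodes x 12 OSDs each, with 2 replicas -> 2048 PGs.
print(pg_count(num_osds=36, replicas=2))
```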
The OpenStack services can use a Ceph cluster in the following ways:

VM Images: OpenStack Glance manages images for VMs. The Glance service treats VM images as
immutable binary blobs, which can be uploaded to or downloaded from a Ceph cluster accordingly.

Volumes: OpenStack Cinder manages volumes (that is, virtual block devices), which can be attached to
running VMs or used to boot VMs. Ceph serves as the back-end volume provider for Cinder.

VM Disks: By default, when a VM boots, its drive appears as a file on the file system of the hypervisor.
Alternatively, the VM disk can be in Ceph and the VM is started by using the boot-from-volume
functionality of Cinder or directly started without the use of Cinder. The latter option is advantageous
because it enables maintenance operations to be easily performed by using the live-migration process,
but only if the VM uses the RAW disk format.

If the hypervisor fails, it is convenient to trigger the Nova evacuate function and almost seamlessly run
the VM on another server. When the Ceph back end is enabled for both Glance and Nova, there is no need
to cache an image from Glance to a local file, which saves time and local disk space. In addition, Ceph can
implement a copy-on-write feature so that starting an instance from a Glance image does not actually use
any disk space.
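
The boot-from-volume flow described above can be sketched with the openstacksdk cloud layer; the cloud entry, image, flavor, and network names below are placeholders rather than values from this reference architecture:

```python
# Hedged sketch of the boot-from-volume flow with the openstacksdk cloud layer;
# the cloud entry, image, flavor, and network names are placeholders.
import openstack

conn = openstack.connect(cloud="lenovo-osp")   # assumes a clouds.yaml entry

# Create a Cinder volume from a Glance image. With a Ceph back end and a RAW
# image, this can be a copy-on-write clone instead of a full image copy.
volume = conn.create_volume(size=20, image="rhel7-raw", wait=True)

# Boot the instance from the Ceph-backed volume so that live migration and
# Nova evacuate do not depend on the hypervisor's local disk.
server = conn.create_server(
    name="vm-from-volume",
    flavor=conn.get_flavor("m1.medium"),
    boot_volume=volume.id,
    network="internal-net",
    wait=True,
)
print(server.name, server.status)
```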


Figure 13 shows the details of the integration between the OpenStack services and Ceph.

Figure 13. Ceph and OpenStack services integration


Ceph uses a write-ahead mode for local operations; a write operation hits the file system journal first and from
there it is copied to the backing file store. Therefore, SSDs are used here for the journal data to improve the
storage performance.
Note: Hardware RAID should not be used because Ceph provides software-based data protection by object
replication and data striping.

6.2.3 Controller cluster


The controller nodes (as shown on the upper-left side of Figure 12) are central administrative servers where
users can access and provision cloud-based resources from a self-service portal.
The HA of services that are running on controllers is primarily provided by using redundant controller nodes, as
shown in Figure 14. An odd number of controller nodes (minimum of three) is used because most HA capability
is based on a quorum system; for a three-node controller cluster, if two controllers disagree, the third is used
as the arbiter.


Figure 14: Controller cluster


The controller cluster also hosts proxy and message broker services for scheduling compute and storage pool
resources and provides the data store for cloud environment settings.
In addition, the controller servers act as network nodes that provide Software Defined Networking (SDN)
functionality to the VMs. Network nodes are responsible for managing the virtual networking that is needed for
administrators to create public or private networks and uplink VMs into external networks, which form the only
ingress and egress points for instances that are running on top of OpenStack.
Each service location is referred to by using a Virtual IP (VIP) and the mapping between physical IP and VIP is
automatically managed by Pacemaker; therefore, switching a service provider between different controller
nodes is transparent to service consumers. For a three-node controller cluster, each service needs three
physical IP addresses and one virtual IP address.
Because all OpenStack API services are stateless, the user can choose an active/passive HA model or an
active/active HA model with HAProxy to achieve load balancing. However, session stickiness for the Horizon
web application must be enabled.
RabbitMQ and Ceph include built-in clustering capabilities. MariaDB database uses the Galera library to
achieve HA deployment.

6.2.4 Deployment server


The Red Hat Enterprise Linux OpenStack Platform installer (Foreman) is an application to manage the
installation, configuration, and scalability of OpenStack environments. The application achieves automatic
installation by detecting bootable hosts and mapping OpenStack services to them through a web interface.
This tool simplifies the process of installing and configuring the Red Hat Enterprise Linux OpenStack Platform
while providing a means to scale in the future.
Puppet is declarative, model-driven configuration management software that automates system administration
tasks. Foreman uses many Puppet modules that help automate initial deployment or expansion of an
OpenStack cloud installation.


6.2.5 Support services


Support services (as shown on the upper-right side of Figure 12), such as Domain Name System (DNS),
Dynamic Host Configuration Protocol (DHCP), and Network Time Protocol (NTP) are needed for cloud
operations. These services can be installed on the same deployment server or reused from the customer's
environment.


6.3 Systems management


Lenovo XClarity Administrator is a centralized resource management solution that reduces complexity,
speeds up response, and enhances the availability of Lenovo server systems and solutions.
The Lenovo XClarity Administrator provides agent-free hardware management for Lenovo's System x rack
servers and Flex System compute nodes and components, including the Chassis Management Module
(CMM) and Flex System I/O modules. Figure 15 shows the Lenovo XClarity Administrator interface, where Flex
System components and rack servers are managed and are seen on the dashboard. Lenovo XClarity
Administrator is a virtual appliance that can be quickly imported into an existing virtualized environment.

Figure 15: XClarity Administrator interface


6.4 Deployment model


There are many models for deploying an OpenStack environment. To achieve the highest usability and flexibility,
you can assign each node a specific role and place the corresponding OpenStack components on it.
The controller node has the following roles in supporting OpenStack:

The controller node acts as an API layer and listens to all service requests (Nova, Glance, Cinder, and
so on). The requests first land on the controller node and are then forwarded to a compute node or
storage node through messaging services for the underlying compute or storage workload.

The controller node is the messaging hub through which all messages flow and are routed for
cross-node communication.

The controller node acts as the scheduler to determine the placement of a particular VM, which is
based on a specific scheduling driver.

The controller node acts as the network node, which manipulates the virtual networks that are created
in the cloud cluster.

However, it is not mandatory that all four roles are on the same physical server. OpenStack is a naturally
distributed framework by design. Such all-in-one controller placement is used for simplicity and ease of
deployment. In production environments, it might be preferable to move one or more of these roles and
relevant services to another node to address security, performance, or manageability concerns.
On the compute nodes, the installation includes the essential OpenStack computing service
(openstack-nova-compute), along with the neutron-openvswitch-agent, which enables
software-defined networking. An optional metering component (Ceilometer) can be installed to collect resource
usage information for billing or auditing.
On a storage node, the Ceph OSD service is installed. It communicates with the Ceph monitor services that are
running on controller nodes and contributes its local disks to the Ceph storage pool. The user of the Ceph
storage resources exchanges data objects with the storage nodes through its internal protocol.
Table 6 lists the placement of major services on physical servers.
Table 6. Components placement on physical servers

Deployment server (host name: deployer):
foreman, puppet, httpd

Controller node (host names: controller01 - controller03):
openstack-cinder-api, openstack-cinder-backup, openstack-cinder-scheduler, openstack-cinder-volume,
openstack-glance-api, openstack-glance-registry, openstack-keystone, openstack-nova-api,
openstack-nova-cert, openstack-nova-conductor, openstack-nova-consoleauth, openstack-nova-novncproxy,
openstack-nova-scheduler, neutron-dhcp-agent, neutron-l3-agent, neutron-metadata-agent,
neutron-openvswitch-agent, neutron-server, openstack-ceilometer-alarm-evaluator,
openstack-ceilometer-alarm-notifier, openstack-ceilometer-api, openstack-ceilometer-central,
openstack-ceilometer-collector, ceph-monitor, rabbitmq, mariadb, mongodb

Compute node (host names: compute01 - compute18):
neutron-openvswitch-agent, openstack-ceilometer-compute, openstack-nova-compute

Storage node (host names: storage01 - storage03):
ceph-osd


6.4.1 Deployment example 1: Full rack system


The Lenovo System x3650 M5 and Lenovo System x3550 M5 servers previously described in section 6.1.1 can
be combined into a full rack Red Hat Enterprise Linux OpenStack Platform cluster. Figure 16 shows a balanced
configuration of compute, storage, networking, and power. The VM capacity is calculated on a per-host density
basis. For more information about how to estimate the VM density, see "Performance and sizing considerations"
on page 29.
Components and capacity:

- Networking: 10 GbE
- VM capacity: >760 (mixed workload)
- KVM console and monitor
- Controller nodes: 3
- Compute nodes: 14
- Storage nodes: 3
- Deployment server: 1
- Local storage: 86 TB (RAID-10)
- Local IOPS: >100K
- External storage: 108 TB (with 2 replicas)

Figure 16. Deployment example 1: full rack system


Because it is impractical to take into account every possible combination of these options, this configuration can
be modified in a real scenario to best fit the workloads that are running on the target cloud environment.

6.4.2 Deployment example 2 with ThinkServer: Healthcare company example


A healthcare company plans to abstract its compute, network, and storage functionality into a pool of resources
that can be used as a service by its customers so that users can simply and expeditiously provision or
decommission an application. The applications must be automatically balanced across data center resources,
and the data center resources can be elastically scaled out to meet application demands.
In this case, it was determined that the ThinkServer RD650, ThinkServer RD550, and Red Hat Enterprise Linux
OpenStack Platform 6 with its Ceph Storage should be used. To keep the cloud services available all of the time,
three controller nodes are used to build the controller cluster and eight storage nodes are used to set up the
storage cluster with a capacity of up to 192 TB. In addition, all VM instances are booted from the storage cluster
to keep data consistency if there is a compute node failure. Table 7 lists the configuration for the healthcare
company example.
Table 7. Configuration for healthcare company example

Controller Node x 3:
- Lenovo ThinkServer RD550 with dual Xeon E5-2620 v3 processors, 64 GB RAM, two 32 GB SD cards (for OS), plus four 4 TB HDDs with ThinkServer RAID 720IX Adapter with 2 GB Supercapacitor Upgrade
- Emulex CNA OCe14102-UX 10Gb dual-port adapter
- Mezzanine Quad RJ45 1G port adapter

Compute Node x 16:
- Lenovo ThinkServer RD550 with dual Xeon E5-2650 v3 processors, 384 GB RAM, two 32 GB SD cards (for OS), and no HDD
- Emulex CNA OCe14102-UX 10Gb dual-port adapter
- Mezzanine Quad RJ45 1G port adapter

Storage Node x 8:
- Lenovo ThinkServer RD650 with single Xeon E5-2620 v3 processor, 64 GB RAM, two 2.5-inch SSDs, eight 3.5-inch 6 TB HDDs, and two 32 GB SD cards (for OS)
- Emulex CNA OCe14102-UX 10Gb dual-port adapter
- Mezzanine Quad RJ45 1G port adapter

10 GbE switch x 4: Lenovo RackSwitch G8264

1 Gb switch x 1: Lenovo RackSwitch G8052


7 Deployment considerations
This section describes the performance, sizing, and networking considerations for a Red Hat Enterprise Linux
OpenStack Platform deployment.

7.1 Server configurations


Table 8 lists three server configurations for OpenStack compute nodes that meet different performance and
density needs.
Table 8. Compute node configurations

Standard configuration: System x3650 M5 with dual Xeon E5-2650 v3 processors, 256 GB RAM, and
16 x 900 GB 2.5-inch HDDs in a 2U server

Enhanced configuration: System x3550 M5 with dual Xeon E5-2650 v3 processors, 256 GB RAM, and
eight 1.2 TB 2.5-inch HDDs plus two 200 GB SSDs in a RAID array in a 1U server

Advanced configuration: System x3650 M5 with dual Xeon E5-2680 v3 processors, 384 GB RAM, and
14 x 1.8 TB 2.5-inch HDDs plus two 400 GB SSDs in a RAID array in a 2U server

The standard configuration uses a System x3650 M5 dual-socket 2U rack server (with 2.5-inch disk bays) and is
configured for general workloads in this reference architecture. This configuration features balanced
computing, memory, and storage, and is cost-competitive on a per-VM basis. Memory or drives can be added to
accommodate workloads that require more capacity.
The enhanced configuration uses a System x3550 M5 dual-socket 1U server and is configured for I/O-intensive
workloads to maximize storage performance. Two Lenovo S3700 SSDs are used with LSI MegaRAID
CacheCade Pro 2.0 as a dedicated pool of RAID controller cache to improve the read/write performance of the
disk array of the server's eight HDDs. The Lenovo S3700 SATA 2.5-inch MLC Enterprise SSDs for System x use
MLC NAND flash memory with high endurance technology and a 12 Gbps SAS interface to provide an efficient
solution with industry-leading performance. These SSDs can be fully rewritten up to 10 times per day throughout
their entire five-year life expectancy. This technology offers an ideal combination of capacity and high
performance.
The advanced configuration uses a System x3650 M5 dual-socket 2U rack server and is suited to heavy
workloads. This server has faster processors, more memory, and higher available IOPS.
Table 9 lists the server configurations for the other types of nodes that are required for building an OpenStack
cloud.


Table 9. OpenStack node configurations

Deployment server: System x3550 M5 with a single Xeon E5-2650 v3 processor, 16 GB RAM, and two
900 GB 2.5-inch HDDs

Controller node: System x3650 M5 with dual Xeon E5-2650 v3 processors, 64 GB RAM, and two
900 GB 2.5-inch HDDs

Storage node: System x3650 M5 with a single Xeon E5-2650 v3 processor, 64 GB RAM, two 2.5-inch SSDs,
and up to twelve 3.5-inch 6 TB HDDs

The storage node uses a System x3650 M5 server (with 3.5-inch disk bays). It has up to 65 TB of storage
capacity with 12 3.5-inch HDDs and is ideal to fulfil requirements for external storage. The storage capacity is
shared with multiple compute nodes by making the local disks available through the Ceph storage system and
the OpenStack Cinder service. To achieve optimal performance, the two 2.5-inch SSDs in the rear are partitioned
for the operating system and for storing the Ceph journal data.
For more information about server configurations and a Bill of Materials, see "Appendix: Lenovo Bill of
Materials" on page 36.

7.2 Performance and sizing considerations


The expected performance of the OpenStack clusters relies on the actual workload that is running in the cloud
and the correct sizing of each component. This section provides a sizing guide for different workloads and
scenarios.
Table 10 lists the assumptions for the workload characteristics. The mixed workload consists of 45% small, 40%
medium, and 15% large VMs. This information serves as a baseline only to measure the user's real workload
and is not necessarily meant to represent any specific application. In any deployment scenario, workloads are
unlikely to have the same characteristics, and workloads might not be balanced between hosts. An appropriate
scheduler with real performance measurement of a workload must be used for compute and storage to meet the
target service level agreement (SLA).
Table 10. VM workload characteristics

Small: 1 vCPU, 2 GB vRAM, 20 GB storage, 15 IOPS

Medium: 2 vCPU, 6 GB vRAM, 60 GB storage, 35 IOPS

Large: 4 vCPU, 12 GB vRAM, 200 GB storage, 100 IOPS

When sizing the solution, calculate the amount of required resources based on the expected number of
workloads, multiply that amount by the maximum size of each workload, and add a 10% buffer for
management overhead.


For better resource utilization, consider putting similar workloads on the same guest OS type in the same host to
use memory or storage de-duplication, as shown in the following examples:

Virtual CPU (vCPU) = Physical Cores * CPU allocation ratio¹ (6:1)

Virtual Memory (vRAM) = Physical Memory * RAM allocation ratio² * (100% - OS reserved)

By using these formulas, usable virtual resources can be calculated from the hardware configuration of compute
nodes, as shown in Table 11.
Table 11. Virtual resources

Standard configuration (System x3650 M5):
- vCPU: 20 cores * 6 = 120 vCPUs
- vRAM: 256 GB * 150% * (100% - 10%) = 345 GB
- Storage space: 16 * 900 GB with RAID-10 = 6.6 TB

Enhanced configuration (System x3550 M5):
- vCPU: 20 cores * 6 = 120 vCPUs
- vRAM: 256 GB * 150% * (100% - 10%) = 345 GB
- Storage space: 8 * 1.2 TB with RAID-10 + SSD cache = 4.8 TB

By using a mixed workload requirement that was generated from Table 10 in which each VM requires 2 vCPU,
6 GB vRAM, 80 GB local disk, and 100 IOPS, it is easy to calculate the expected VM density (the smallest
number determines the number of VMs that can be supported per compute server). Table 12 shows 57 VMs per
host for the standard and enhanced configurations.
Table 12. Resource-based VM density for mixed workload

                         CPU          Memory       Storage Space      IOPS³ (67% read + 33% write)
Standard configuration   120/2 = 60   345/6 = 57   6.6T/80 GB = 82    ~8000/100 = 80
Enhanced configuration   120/2 = 60   345/6 = 57   4.8T/80 GB = 60    ~20000/100 = 200
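The density arithmetic in Table 12 can be cross-checked with a short sketch; the host capacities below are the standard-configuration values (120 vCPU, 345 GB vRAM, 6.6 TB of disk, and the measured ~8000 IOPS from footnote 3).

    # VM density per host is the smallest of the four per-resource quotients.
    # Mixed-workload demand per VM: 2 vCPU, 6 GB vRAM, 80 GB disk, 100 IOPS.
    def vm_density(vcpu, vram_gb, storage_gb, iops, per_vm=(2, 6, 80, 100)):
        return min(vcpu // per_vm[0], vram_gb // per_vm[1],
                   storage_gb // per_vm[2], iops // per_vm[3])

    print(vm_density(120, 345, 6600, 8000))   # 57; memory is the bottleneck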

However, the actual user experience of I/O performance is largely dependent on the workload pattern and the OpenStack configuration; for example, the number of concurrently running VM instances and the cache mode. By applying the calculated density of a single compute host, this reference architecture can be tailored to fit cloud clusters of various sizes.

¹ The CPU allocation ratio indicates the number of virtual cores that can be assigned to a node for each physical core. 6:1 is a balanced choice for performance and cost effectiveness on models with Xeon E5-2600 v3 series processors.

² The RAM allocation ratio allows virtual resources to be allocated in excess of what is physically available on a host through compression or de-duplication technology. The hypervisor uses it to improve infrastructure utilization, where RAM allocation ratio = virtual resource / physical resource.

³ The IOPS estimates for the standard and enhanced configurations are derived from measurements performed by using the FIO performance tool.


7.3 Networking
Combinations of physical and virtual isolated networks are configured at the host, switch, and storage layers to
meet isolation, security, and quality of service (QoS) requirements. Different network data traffic is physically
isolated into different switches according to the physical switch port and server port assignments.
Each physical host (x3550 M5 or x3650 M5) has five onboard 1 GbE Ethernet interfaces, including one port dedicated to the Integrated Management Module (IMM), plus an installed dual-port 10 GbE Ethernet device.
The dedicated 1 GbE IMM port is connected to the 1 GbE RackSwitch G7028 for out-of-band management, while the other 1 GbE ports are not connected, and the corresponding Red Hat Enterprise Linux 7 Ethernet devices eno[0-9] are not configured. Two G8124E 24-port 10 GbE Ethernet switches are used in pairs to provide redundant networking for the controller nodes, compute nodes, storage nodes, and deployment server. The dual-port 10 GbE Emulex Virtual Fabric Adapter delivers virtual I/O through vNICs, which allow the two physical ports to be divided into eight virtual NICs. Administrators can create dedicated virtual pipes between the 10 GbE network adapter and the TOR switch for optimal performance and better security. Linux NIC bonding is then applied to the vNICs to provide high availability for the communication networks. Ethernet traffic is isolated by type through the use of virtual LANs (VLANs) on the switch.
Table 13 lists the six logical networks in a typical OpenStack deployment.
Table 13. OpenStack Logical Networks

BMC Network: Used for out-of-band management of hosts and switches via the IMM.

Management Network: The network where the OpenStack APIs are exposed to users and the message hub that connects the various services inside OpenStack. The OpenStack Installer also uses this network for host discovery and initial deployment.

Internal Network: The subnet from which the VM private IP addresses are allocated. Through this network, the VM instances can talk to each other.

External Network: The subnet from which the VM public IP addresses are allocated. It is the only network where external users can access their VM instances.

Storage Network: The front-side storage network, where Ceph clients (through the Glance API, Cinder API, or Ceph CLI) access the Ceph cluster. Ceph Monitors also operate on this network.

Data Cluster Network: The back-side storage network, to which Ceph routes its heartbeat, object replication, and recovery traffic.

vNIC0 and vNIC4 are used to create bond0, vNIC1 and vNIC5 are used to create bond1, and so on. The 20 Gb of bandwidth from the dual 10 Gb connections is dynamically allocated across the four vNIC bonds and can be adjusted as required, in increments of 100 Mb, without downtime.
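The pairing and the allocation granularity can be modeled with a short sketch; the per-bond figures below are the controller-node values from Table 14, and the function names are illustrative only.

    # vNICi and vNIC(i+4) form bondi, as described above.
    def bond_members(i):
        return (f"vNIC{i}", f"vNIC{i + 4}")   # e.g. bond0 = vNIC0 + vNIC4

    PLAN_MB = {"bond0": 2000, "bond1": 2000, "bond2": 12000, "bond3": 4000}
    BUDGET_MB = 2 * 10000   # dual 10 GbE ports
    STEP_MB = 100           # bandwidth adjusts in 100 Mb increments

    assert all(bw % STEP_MB == 0 for bw in PLAN_MB.values())
    assert sum(PLAN_MB.values()) <= BUDGET_MB
    for i in range(4):
        name = f"bond{i}"
        print(name, bond_members(i), PLAN_MB[name] / 1000, "Gb/s")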


Figure 17 shows the recommended network topology diagram with VLANs and vNICs.

Figure 17. Network topology diagram


Table 14 lists the recommended VLAN and bandwidth usage for each type of node in an OpenStack deployment. The bandwidth allotted to each logical network is for general-purpose workloads only; these recommendations can be adjusted at runtime for the actual workload without service interruption.


Table 14. Recommended VLAN usage by OpenStack nodes

Role                 Logical Network   Usage                   Bandwidth   VLAN
Controller Node      Bond0             Management Network      2 Gb/s
                     Bond1             Internal Network        2 Gb/s
                     Bond2             Storage Network         12 Gb/s
                     Bond3             External Network        4 Gb/s
Compute Node         Bond0             Management Network      2 Gb/s
                     Bond1             Internal Network        6 Gb/s
                     Bond2             Storage Network         12 Gb/s
                     Bond3             Reserved                0 Gb/s      N/A
Storage Node         Bond0             Management Network      2 Gb/s
                     Bond1             Reserved                0 Gb/s      N/A
                     Bond2             Storage Network         12 Gb/s
                     Bond3             Data Cluster Network    8 Gb/s
Deployment Server    (no bonding)      Management Network      1 Gb/s

The Emulex VFA5 adapter offloads packet encapsulation for VxLAN networking overlays to the adapter, which results in higher server CPU effectiveness and higher server power efficiency.
Open vSwitch (OVS) is a fully featured, flow-based implementation of a virtual switch and can be used as a
platform in software-defined networking. Open vSwitch supports VxLAN overlay networks, and serves as the
default OpenStack network service in this reference architecture. Virtual Extensible LAN (VxLAN) is a network
virtualization technology that addresses scalability problems that are associated with large cloud computing
deployments by using a VLAN-like technique to encapsulate MAC-based OSI layer 2 Ethernet frames within
layer 3 UDP packets.


7.4 Expansion and scaling


This section describes multi-rack expansion and storage scale out.

7.4.1 Multiple rack expansion


If the workload demands exceed the capacity of one rack, span the OpenStack cloud across multiple racks to increase capacity. A common practice is to deploy, in each rack, controller nodes that run a separate set of API and scheduler services to manage that rack's compute and storage nodes. A single instance of the authentication (Keystone) and dashboard (Horizon) services is shared across all racks for centralized management, as shown in Figure 18.

Figure 18. Multiple rack deployment example


Each rack has its own Power Distribution Unit (PDU) and TOR switches and can operate separately, regardless
of the health status of other racks.
The two G8124E switches and one G7028 switch that were used in the previous full-rack example can manage up to 20 servers in a Red Hat Enterprise Linux OpenStack Platform networking environment. If the expected size of the Red Hat Enterprise Linux OpenStack Platform cloud exceeds 20 servers, substitute the G8264 and G8052 switches, respectively, because they have more ports and can manage up to 40 servers.

7.4.2 Storage scale out


To increase the shared storage space, either add storage nodes to the cluster (duplicating the configuration of an existing storage node) or add drives to a storage node. Then, run the ceph osd create Ceph management command to notify the monitors. The monitors automatically discover the increased storage capacity and add it to the storage pool without any administrative effort or restart of the storage service.
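As an illustrative sketch only, the notification step can be scripted; this assumes the new drives were already prepared as Ceph OSDs per the Ceph documentation, and the helper name is hypothetical.

    # Run "ceph osd create" once per added drive so that the monitors learn
    # about the new capacity; drive preparation is assumed to be done already.
    import subprocess

    def register_new_osds(drive_count):
        osd_ids = []
        for _ in range(drive_count):
            out = subprocess.check_output(["ceph", "osd", "create"])
            osd_ids.append(int(out.decode().strip()))  # the command prints the new OSD id
        return osd_ids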

7.5 Best practices


A set of proven planning and deployment techniques contributes significantly to a successful OpenStack deployment and operation. Proper planning includes sizing the needed server resources (CPU and memory), storage (space and IOPS), and core software services to support the infrastructure. The plan can then be implemented by using the following best practices to achieve optimal performance and availability for the solution.

• When the solution is implemented, measure the actual resource usage of the workload first and adjust the memory or disks accordingly to avoid imbalanced resource utilization.

• In the Red Hat Enterprise Linux OpenStack Platform installer, the Emulex VFA5 adapter cannot be recognized the first time after discovery. If this issue occurs, deploy the operating system first and then reassign the adapter into your deployment (Red Hat BZ 1182484).

• Use Red Hat Satellite to deploy Red Hat Enterprise Linux OpenStack Platform if network access is restricted.

• All nodes (and monitors) must be time-synchronized by using NTP at all times.

• Set the compute and storage quotas on each tenant properly so that resources are not exhausted by a misconfigured or resource-hungry VM.

• To allow for planned maintenance or unplanned failures, provide enough physical server resources to handle all VMs in an N-1 configuration (see the sketch after this list).

• Before planned maintenance, evacuate the instances on the server and gracefully shut down all running services to avoid inconsistent states or data corruption.

• Connect the two switches by using inter-switch links (ISLs), with at least two ports dedicated to the inter-switch connection.
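For the N-1 rule in the list above, a minimal capacity check (the VM counts are hypothetical; 57 VMs per host comes from Table 12):

    # With one host down for maintenance or failure, the remaining hosts
    # must still be able to run every VM.
    def fits_n_minus_1(total_vms, vms_per_host, hosts):
        return total_vms <= (hosts - 1) * vms_per_host

    print(fits_n_minus_1(total_vms=500, vms_per_host=57, hosts=10))  # True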

Combined with Lenovo's enterprise-class hardware and software, this reference architecture prepares IT administrators to successfully meet virtualization performance and growth objectives by deploying clouds efficiently and reliably.


8 Appendix: Lenovo Bill of Materials


This appendix contains the Bills of Materials (BOMs) for the different hardware configurations for Red Hat Enterprise Linux OpenStack Platform deployments. There are sections for compute nodes, the deployment server, controller nodes, storage nodes, networking, rack options, and the software list.

Part number   Description                                                              Quantity

Compute node: Standard configuration

5462G2x   Lenovo System x3650 M5, Xeon 10C E5-2650v3 105W 2.3GHz/2133MHz/25MB, 1x16GB, O/Bay HS 2.5in SATA/SAS, SR M5210, 550W p/s, Rack   1
00FK645   Intel Xeon Processor E5-2650 v3 10C 2.3GHz 25MB 2133MHz 105W   1
46W0796   16 GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM   15
47C8664   ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade for Lenovo System x   1
00NA251   900GB 10K 12Gbps SAS 2.5in G3HS 512e HDD   16
00D1996   Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for Lenovo System x   1
90Y9430   3m Passive DAC SFP+ Cable   2

Compute node: Enhanced configuration

5463G2x   Lenovo System x3550 M5, Xeon 10C E5-2650v3 105W 2.3GHz/2133MHz/25MB, 1x16GB, O/Bay HS 2.5in SATA/SAS, SR M5210, 550W p/s, Rack   1
00KA072   Intel Xeon Processor E5-2650 v3 10C 2.3GHz 25MB 2133MHz 105W   1
46W0796   16 GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM   15
47C8664   ServeRAID M5200 Series 2 GB Flash/RAID 5 Upgrade for Lenovo System x   1
00NA261   1.2 TB 10K 12Gbps SAS 2.5in G3HS 512e HDD   8
00FN379   Lenovo 200GB 12G SAS 2.5in MLC G3HS Enterprise SSD   2
00D1996   Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for Lenovo System x   1
90Y9430   3m Passive DAC SFP+ Cable   2

Compute node: Advanced configuration

5462J2x   Lenovo System x3650 M5, Xeon 12C E5-2680v3 120W 2.5GHz/2133MHz/30MB, 1x16GB, O/Bay HS 2.5in SATA/SAS, SR M5210, 900W p/s, Rack   1
00FK645   Intel Xeon Processor E5-2650 v3 10C 2.3GHz 25MB 2133MHz 105W   1
46W0796   16 GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM   23
47C8664   ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade for Lenovo System x   1
00NA271   1.8 TB 10K 12Gbps SAS 2.5in G3HS 512e HDD   14
00FN389   Lenovo 400GB 12G SAS 2.5in MLC G3HS Enterprise SSD   2
00D1996   Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for Lenovo System x   1
90Y9430   3m Passive DAC SFP+ Cable   2

Part number   Description                                                              Quantity

Deployment server

5463F2x   Lenovo System x3550 M5, Xeon 8C E5-2640v3 90W 2.6GHz/1866MHz/20MB, 1x16GB, O/Bay HS 2.5in SATA/SAS, SR M5210, 550W p/s, Rack   1
00AJ146   1.2 TB 10K 6Gbps SAS 2.5-inch SFF G3HS HDD   2
90Y9430   3m Passive DAC SFP+ Cable   2

Controller node

5463G2x   Lenovo System x3550 M5, Xeon 10C E5-2650v3 105W 2.3GHz/2133MHz/25MB, 1x16GB, O/Bay HS 2.5in SATA/SAS, SR M5210, 550W p/s, Rack   1
00KA072   Intel Xeon Processor E5-2650 v3 10C 2.3GHz 25MB 2133MHz 105W   1
46W0796   16 GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM   3
47C8664   ServeRAID M5200 Series 2 GB Flash/RAID 5 Upgrade for Lenovo System x   1
00AJ146   1.2 TB 10K 6Gbps SAS 2.5-inch SFF G3HS HDD   2
00D1996   Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for Lenovo System x   1
90Y9430   3m Passive DAC SFP+ Cable   2

Storage node

5462D4x   Lenovo System x3650 M5, Xeon 8C E5-2630v3 85W 2.4GHz/1866MHz/20MB, 1x16GB, O/Bay HS 3.5in SATA/SAS, SR M5210, 750W p/s, Rack   1
46W0796   16 GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM   3
47C8664   ServeRAID M5200 Series 2 GB Flash/RAID 5 Upgrade for Lenovo System x   1
00FN173   6TB 7.2K 6Gbps NL SATA 3.5in G2HS 512e HDD   12
00D1996   Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for Lenovo System x   1
00FN379   Lenovo 200GB 12G SAS 2.5in MLC G3HS Enterprise SSD   2
00FK658   System x3650 M5 Rear 2x 2.5in HDD Kit   1
90Y9430   3m Passive DAC SFP+ Cable   2

Part number   Description                                                              Quantity

1 GbE Networking

7159BAX   Lenovo RackSwitch G7028 (Rear to Front)   1
39Y7938   2.8m, 10A/100-250V, C13 to IEC 320-C20 Rack Power Cable

10 GbE Networking

7159BR6   Lenovo RackSwitch G8124E (Rear to Front)   2
39Y7938   2.8m, 10A/100-250V, C13 to IEC 320-C20 Rack Power Cable   4
90Y9427   1m Passive DAC SFP+ Cable   2

Console main

1754D1X   Global 2X2X16 Console Manager (GCM16)   1
43V6147   USB Conversion Option (UCO)   16

Keyboard/monitor

17238BX   1U 18.5in Standard Console Kit

Rack

93084PX   Lenovo 42U Enterprise Rack   1
46M4119   IBM 0U 24 C13 Switched and Monitored 32A PDU   2

Software

00JY341   Lenovo XClarity Administrator, per Mngd Server w/3 Yr SW S&S   13

9 Resources

For more information about the topics in this document, see the following resources:

• OpenStack Project:
  http://www.openstack.org

• OpenStack Operations Guide:
  http://docs.openstack.org/ops/

• Red Hat Enterprise Linux OpenStack Platform:
  https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/

• Lenovo System x3650 M5:
  http://www-03.ibm.com/systems/x/hardware/rack/x3650m5/index.html

• Lenovo System x3550 M5:
  http://www-03.ibm.com/systems/x/hardware/rack/x3550m5/index.html

• Ceph Storage System:
  http://www.ceph.com

Trademarks and special notices


© Copyright Lenovo 2015.
References in this document to Lenovo products or services do not imply that Lenovo intends to make them
available in every country.
Lenovo, the Lenovo logo, AnyBay, AnyRAID, BladeCenter, NeXtScale, RackSwitch, Rescue and Recovery,
System x, ThinkCentre, ThinkVision, ThinkVantage, ThinkPlus and XClarity are trademarks of Lenovo.
Red Hat, Red Hat Enterprise Linux and the Shadowman logo are trademarks of Red Hat, Inc., registered in the
U.S. and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.
The OpenStack mark is either a registered trademark/service mark or trademark/service mark of the OpenStack
Foundation, in the United States and other countries, and is used with the OpenStack Foundation's permission.
We are not affiliated with, endorsed, or sponsored by the OpenStack Foundation or the OpenStack community.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United
States, other countries, or both.
Intel, Intel Inside (logos), and Xeon are trademarks of Intel Corporation in the United States, other countries, or
both.
Other company, product, or service names may be trademarks or service marks of others.
Information is provided "AS IS" without warranty of any kind.
All customer examples described are presented as illustrations of how those customers have used Lenovo
products and the results they may have achieved. Actual environmental costs and performance characteristics
may vary by customer.
Information concerning non-Lenovo products was obtained from a supplier of these products, published
announcement material, or other publicly available sources and does not constitute an endorsement of such
products by Lenovo. Sources for non-Lenovo list prices and performance numbers are taken from publicly
available information, including vendor announcements and vendor worldwide homepages. Lenovo has not
tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to
non-Lenovo products. Questions on the capability of non-Lenovo products should be addressed to the supplier
of those products.
All statements regarding Lenovo future direction and intent are subject to change or withdrawal without notice,
and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized reseller for the
full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a definitive
statement of a commitment to specific levels of performance, function or delivery schedules with respect to any
future products. Such commitments are only made in Lenovo product announcements. The information is
presented here to communicate Lenovo's current investment and development activities as a good faith effort to
help with our customers' future planning.
Performance is based on measurements and projections using standard Lenovo benchmarks in a controlled
environment. The actual throughput or performance that any user will experience will vary depending upon
considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the
storage configuration, and the workload processed. Therefore, no assurance can be given that an individual
user will achieve throughput or performance improvements equivalent to the ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.
Any references in this information to non-Lenovo websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this Lenovo product and use of those websites is at your own risk.