Lenovo Cloud Reference Architecture for Red Hat Enterprise Linux OpenStack Platform
Kai Huang
Yixuan Huang
Srihari Angaluri
Krzysztof Janiszewski
Mike Perks
Table of Contents

1 Introduction
3 Requirements
   3.1 Functional requirements
4 Architectural overview
5 Component model
6 Operational model
   6.1 Hardware
7 Deployment considerations
   7.3 Networking
9 Resources
Appendix: Lenovo Bill of Materials
1 Introduction
OpenStack continues to gain significant traction in the industry because of the growing adoption of the cloud and the flexibility that OpenStack offers as an open source development environment. This document describes the reference architecture (RA) for Red Hat Enterprise Linux OpenStack Platform on Lenovo hardware.
Red Hat Enterprise Linux OpenStack Platform is an innovative, cost-effective cloud management solution that
also includes automation, metering, and security for the cloud environment. Red Hat Enterprise Linux
OpenStack Platform provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud
on top of Red Hat Enterprise Linux and offers a scalable, highly available platform for the management of
cloud-enabled workloads. Red Hat Enterprise Linux OpenStack Platform version 6.1 is based on the
OpenStack Juno release and includes other features, such as the Red Hat Enterprise Linux OpenStack
Platform installer and support for high availability (HA). It also improves the overall quality of the open source
version of OpenStack.
The Lenovo hardware platform provides an ideal infrastructure solution for cloud deployments. The Lenovo x86
server portfolio spans a broad spectrum of Intel based systems, combining the ThinkServer, System x, Flex
System, NeXtScale, and BladeCenter product lines. These systems provide the full range of features and
functions that are necessary to meet the needs of small businesses all the way up to large enterprises. Lenovo
also uses industry standards in systems management on all these platforms, which enables seamless
integration into cloud management tools, such as OpenStack. Lenovo also provides data center network
switches that are designed specifically for robust, scale-out servers and converged storage interconnect fabrics.
This document describes the Lenovo RA for Red Hat Enterprise Linux OpenStack Platform. The target
environments include Managed Service Providers (MSPs), Cloud Service Providers (CSPs), and enterprise
private clouds that require a complete solution for cloud IaaS deployment and management by using
OpenStack.
This document also features planning, design considerations, and reference configurations for implementing
Red Hat Enterprise Linux OpenStack Platform on the Lenovo hardware platform. The RA enables organizations
to easily adopt OpenStack to simplify the system architecture, reduce deployment and maintenance costs, and
address the lack of convergence in infrastructure management. The reference architecture focuses on
achieving optimal performance and workload density by using Lenovo hardware while minimizing procurement
costs and operational overhead.
OpenStack and other Red Hat branded software are deployed on the Lenovo hardware platform to instantiate
compute, network, storage, and management services. The Lenovo XClarity Administrator software is used
to consolidate systems management across multiple Lenovo servers that span the data center. XClarity
increases IT productivity and efficiency by visualizing, automating, and streamlining complex, repetitive
infrastructure management tasks via out-of-band, scalable management of hundreds of servers. XClarity
enables automation of firmware updates on servers via compliance policies, patterns for system configuration
settings, hardware inventory, baremetal OS and hypervisor provisioning, and continuous hardware monitoring.
XClarity can be easily extended via the published REST API to upwardly integrate into other management tools,
such as OpenStack.
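For illustration, the following Python sketch pulls the hardware inventory that XClarity exposes over its REST API by using the requests library. The appliance address and credentials are placeholders, and the response field names are assumptions based on the XClarity inventory schema rather than a verified listing.

import requests

XCLARITY = "https://xclarity.example.com"  # hypothetical appliance address

session = requests.Session()
session.verify = False  # lab setup with a self-signed certificate

# Authenticate; XClarity issues a session cookie on success.
session.post(XCLARITY + "/sessions",
             json={"UserId": "admin", "password": "secret"})

# Retrieve the managed-server inventory and print basic identity data.
# The "nodeList" key and per-node fields are assumed from the inventory schema.
nodes = session.get(XCLARITY + "/nodes").json().get("nodeList", [])
for node in nodes:
    print(node.get("name"), node.get("model"), node.get("serialNumber"))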
The intended audience for this document is IT professionals, technical architects, sales engineers, and
consultants. Readers are expected to have a basic knowledge of Red Hat Enterprise Linux and OpenStack.
This reference architecture delivers the following benefits:

• Consolidated and fully integrated hardware resources with balanced workloads for compute, network, and storage.

• An aggregation of compute and storage hardware, which delivers a single, virtualized resource pool that can be customized to different compute and storage ratios to meet the requirements of various solutions.

• Ease of scaling (vertical and horizontal) based on business requirements. Compute and storage resources can be extended at run time.

• Elimination of single points of failure in every layer, which delivers continuous access to virtual machines (VMs).

• Rapid OpenStack cloud deployment, including updates, patches, security, and usability enhancements with enterprise-level support from Red Hat and Lenovo.
The reference architecture reduces the complexity of OpenStack by outlining a validated configuration that
scales and delivers an enterprise level of redundancy across servers, storage, and networking to help enable
HA. In this reference architecture, VMs can be migrated among clustered host servers to support resource
rebalancing and scheduled maintenance, which allows IT operations staff to minimize service downtime and
deployment risks.
3 Requirements
The functional and non-functional requirements for a robust cloud implementation are described in this section.
The functional requirements include workload mobility (movement on demand, regardless of physical location), resource provisioning, a management portal, multi-tenancy (workload management based on tenancy), and metering. All of the functional requirements are supported by OpenStack.

The non-functional requirements include scalability of the OpenStack environment (including the choice of OpenStack edition), load balancing (the workload is distributed across resources), high availability (tolerance of component unavailability), a compact physical footprint, ease of installation and solution deployment, support, flexibility of deployment methodologies, robustness, security (a secure customer infrastructure), and high performance.
4 Architectural overview
Red Hat Enterprise Linux OpenStack Platform offers the innovation of the OpenStack community project and provides the security, stability, and enterprise readiness of a platform that is built on Red Hat Enterprise Linux. It supports the configuration of data center physical hardware into private, public, or hybrid cloud platforms and includes benefits such as integrated networking.
Figure 1 shows the main components in Red Hat Enterprise Linux OpenStack Platform, which is a collection of
interacting services that control the compute, storage, and networking resources. Administrators use a
web-based interface to control, provision, and automate OpenStack resources. Also, programmatic access to
the OpenStack infrastructure is facilitated through an extensive REST API, which enables a rich set of add-on
capabilities.
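As an illustration of that programmatic access, the following hedged Python sketch authenticates against Keystone's v2.0 API (the identity API version used by the Juno-based release) and lists a tenant's Nova instances. The controller host name, ports, and credentials are placeholders.

import requests

# Keystone identity service on the controller's API network (port 5000).
KEYSTONE = "http://controller.example.com:5000/v2.0"
auth = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {"username": "demo", "password": "secret"},
    }
}
resp = requests.post(KEYSTONE + "/tokens", json=auth).json()
token = resp["access"]["token"]["id"]
tenant_id = resp["access"]["token"]["tenant"]["id"]

# Nova's compute API is served by openstack-nova-api on port 8774.
NOVA = "http://controller.example.com:8774/v2/" + tenant_id
servers = requests.get(NOVA + "/servers",
                       headers={"X-Auth-Token": token}).json()
for server in servers["servers"]:
    print(server["id"], server["name"])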
5 Component model
This section describes the components that are used in OpenStack, specifically the Red Hat Enterprise Linux
OpenStack Platform distribution.
Table 3 lists the core components of the Red Hat Enterprise Linux OpenStack Platform release.

Table 3. Core components

Compute service (Nova): Provisions and manages on-demand virtual machines on the compute nodes.

Block storage (Cinder): Provides persistent block storage (volumes) that can hold any data and be attached to running VM instances.

Networking (Neutron): Provides network connectivity as a service between interfaces that are managed by other OpenStack services.

Image management (Glance): Stores, catalogs, and retrieves VM disk images.

Authentication (Keystone): Provides identity, token, and service catalog functions for all OpenStack services.

Telemetry (Ceilometer): Collects measurements of resource utilization for metering and monitoring.

Dashboard (Horizon): Provides a web-based self-service portal for managing OpenStack resources.
Table 4 lists the optional components in the Red Hat Enterprise Linux OpenStack Platform release. Administrators can choose whether and how to use them based on the actual use cases of their cloud deployments.

Table 4. Optional core components

Object Storage (Swift): Cloud storage software that is built for scale and optimized for durability, availability, and concurrency across the entire data set. It can store and retrieve large amounts of data with a simple API and is ideal for storing unstructured data that can grow without bound. (Ceph is used in this reference architecture, instead of Swift, to provide the object storage service.)

Orchestration (Heat): Orchestrates the creation of cloud resources, such as instances, volumes, and networks, from declarative templates.

Database as a Service (Trove): Provisions and manages relational database instances in the cloud.
OpenStack defines the concepts that are listed in Table 5 to help the administrator further manage tenancy and segmentation in a cloud environment.

Table 5. OpenStack tenancy concepts

Tenant: A container that groups or isolates resources, such as VMs, images, networks, and volumes. Quotas and access rights can be applied to these resources on a per-tenant basis.

Availability Zone: In OpenStack, an availability zone allows a user to allocate new resources with defined placement. The instance availability zone defines the placement for allocation of VMs, and the volume availability zone defines the placement for allocation of virtual block storage devices.

Host Aggregate: A grouping of compute hosts with associated metadata that the Nova scheduler can use to control the placement of VM instances.

Region: Regions segregate the cloud into multiple compute deployments. Administrators can use regions to divide a shared-infrastructure cloud into multiple sites with separate API endpoints without coordination between sites. Regions share the Keystone identity service, but each has a different API endpoint and a full Nova compute installation.
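For example, a host aggregate can be created and exposed as an availability zone with python-novaclient, which lets the scheduler place VMs by zone. The following is a minimal sketch; the host names, credentials, zone name, and Keystone endpoint are illustrative.

from novaclient import client

# Era-appropriate positional client constructor:
# (version, username, password, tenant, auth_url)
nova = client.Client("2", "admin", "secret", "admin",
                     "http://controller.example.com:5000/v2.0")

# Create an aggregate and expose it as availability zone "zone-ssd".
aggregate = nova.aggregates.create("ssd-hosts", "zone-ssd")

# Add selected compute hosts; users can then request this zone at boot time.
for host in ("compute01", "compute02"):
    nova.aggregates.add_host(aggregate.id, host)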
The distribution also relies on the following supporting components:

• Red Hat Ceph Storage is a distributed object store and file system that provides a robust, highly scalable block and object storage platform for enterprises deploying public or private clouds.

• MariaDB is open source database software that is shipped with Red Hat Enterprise Linux as a replacement for MySQL. MariaDB Galera Cluster is a synchronous multi-master cluster for MariaDB. It uses synchronous replication between every instance in the cluster to achieve an active-active multi-master topology; every instance can accept read and write requests, and failed nodes do not affect the operation of the rest of the cluster.

• RabbitMQ is a robust open source messaging system that is based on the AMQP standard. In Red Hat Enterprise Linux OpenStack Platform 6, RabbitMQ replaces Qpid as the recommended default message broker.

For more information about these components and other enhancements since the previous Red Hat Enterprise Linux OpenStack Platform release, see the Release Information website.
6 Operational model
Because OpenStack provides a great deal of flexibility, the operational model for OpenStack normally depends
upon a thorough understanding of the requirements and needs of the cloud users to design the best possible
configuration to meet the requirements. For more information, see the OpenStack Operations Guide.
This section describes the Red Hat Enterprise Linux OpenStack Platform operational model that is verified with
Lenovo hardware and software. It concludes with some example deployment models that use Lenovo servers
and Lenovo RackSwitch network switches.
6.1 Hardware
The OpenStack software was validated to run on all Lenovo hardware platforms, including System x and
ThinkServer. This reference architecture guide focuses on the Red Hat OpenStack Platform that is deployed on
the Lenovo System x3650 M5 and Lenovo System x3550 M5 servers that are installed with the Red Hat
Enterprise Linux 7 operating system and Red Hat Enterprise Linux OpenStack Platform version 6.1.1.
Figure 12: Deployment model for Red Hat Enterprise Linux OpenStack Platform
The deployment node runs a GUI-based wizard, which is responsible for the initial deployment of the controller
node, compute node, and storage node by using PXE boot. It also performs the service configuration and wiring
between all deployed nodes and can perform incremental deployment of individual nodes after the initial
deployment.
The controller node cluster consists of three controller nodes. The nodes act as a central entrance for
processing all internal and external cloud operation requests. The controller nodes also manage the lifecycle of
all VM instances that are running on the compute nodes and provide essential services, such as authentication
and networking to the VM instances. The controller nodes rely on some support services, such as DHCP, DNS,
and NTP.
The Nova-Compute and Open vSwitch agents run on the compute nodes. The agents receive instrumentation
requests from the controller node via RabbitMQ messages to manage the compute and network virtualization of
instances that are running on the compute node. Compute nodes can be aggregated into pools of various sizes
for better management, performance, or isolation. A Ceph-based storage cluster is created on the storage nodes; it is largely self-managed and is supervised by the Ceph monitors that are installed on the controller nodes. The Ceph cluster provides block data storage for the Glance image store and for VM instances via the Cinder service.
Ceph provides the following storage for the OpenStack services:

• VM images: OpenStack Glance manages images for VMs. The Glance service treats VM images as immutable binary blobs, which can be uploaded to or downloaded from a Ceph cluster accordingly.

• Volumes: OpenStack Cinder manages volumes (that is, virtual block devices), which can be attached to running VMs or used to boot VMs. Ceph serves as the back-end volume provider for Cinder.

• VM disks: By default, when a VM boots, its drive appears as a file on the file system of the hypervisor. Alternatively, the VM disk can reside in Ceph, and the VM is started by using the boot-from-volume functionality of Cinder or started directly without the use of Cinder. The latter option is advantageous because it enables maintenance operations to be easily performed through the live-migration process, but only if the VM uses the RAW disk format.

If a hypervisor fails, it is convenient to trigger the Nova evacuate function and almost seamlessly run the VM on another server. When the Ceph back end is enabled for both Glance and Nova, there is no need to cache an image from Glance to a local file, which saves time and local disk space. In addition, Ceph can implement a copy-on-write feature, so that starting an instance from a Glance image does not actually use any disk space.
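A hedged sketch of this boot-from-volume flow, using the era's python-cinderclient and python-novaclient, is shown below; the image UUID, flavor ID, volume size, and auth endpoint are placeholders.

from cinderclient import client as cinder_client
from novaclient import client as nova_client

AUTH = "http://controller.example.com:5000/v2.0"
cinder = cinder_client.Client("1", "demo", "secret", "demo", AUTH)
nova = nova_client.Client("2", "demo", "secret", "demo", AUTH)

# Create a 20 GB bootable volume from a RAW Glance image; with Ceph backing
# both services, this becomes a copy-on-write clone rather than a full copy.
volume = cinder.volumes.create(size=20, imageRef="IMAGE_UUID",
                               display_name="web01-root")

# Boot from the volume; no image needs to be cached on the hypervisor.
# The mapping format is "<volume-id>:<type>:<size>:<delete-on-terminate>".
nova.servers.create(name="web01", image=None, flavor="FLAVOR_ID",
                    block_device_mapping={"vda": volume.id + ":::0"})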
Figure 13 shows the details of the integration between the OpenStack services and Ceph.
The controller node assumes the following four roles:

• The controller node acts as an API layer that listens to all service requests (Nova, Glance, Cinder, and so on). The requests first land on the controller node and are then forwarded to a compute node or storage node through messaging services for the underlying compute or storage workload.

• The controller node is the messaging hub through which all messages flow and are routed for cross-node communication.

• The controller node acts as the scheduler that determines the placement of a particular VM based on a specific scheduling driver.

• The controller node acts as the network node, which manipulates the virtual networks that are created in the cloud cluster.
However, it is not mandatory that all four roles are on the same physical server. OpenStack is a naturally
distributed framework by design. Such all-in-one controller placement is used for simplicity and ease of
deployment. In production environments, it might be preferable to move one or more of these roles and
relevant services to another node to address security, performance, or manageability concerns.
On the compute nodes, the installation includes the essential OpenStack computing service
(openstack-nova-compute), along with the neutron-openvswitch-agent, which enables
software-defined networking. An optional metering component (Ceilometer) can be installed to collect resource
usage information for billing or auditing.
On a storage node, the Ceph OSD service is installed. It communicates with the Ceph monitor services that are running on the controller nodes and contributes its local disks to the Ceph storage pool. Clients of the Ceph storage resources exchange data objects with the storage nodes through Ceph's internal protocol.
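The following minimal sketch shows that protocol in action through the librados Python binding, which ships with Ceph; the pool name and configuration path are placeholders.

import rados

# Read cluster membership and keys from the standard Ceph configuration.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()  # contacts the Ceph monitors on the controller nodes

ioctx = cluster.open_ioctx("volumes")      # a pool served by the OSDs
ioctx.write_full("demo-object", b"hello")  # object lands on a storage node
print(ioctx.read("demo-object"))

ioctx.close()
cluster.shutdown()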
Table 6 lists the placement of the major services on the physical servers.

Table 6. Component placement on physical servers

Deployment server (deployer):
foreman, puppet, httpd

Controller node (controller01 - controller03):
openstack-cinder-api, openstack-cinder-backup, openstack-cinder-scheduler, openstack-cinder-volume, openstack-glance-api, openstack-glance-registry, openstack-keystone, openstack-nova-api, openstack-nova-cert, openstack-nova-conductor, openstack-nova-consoleauth, openstack-nova-novncproxy, openstack-nova-scheduler, neutron-dhcp-agent, neutron-l3-agent, neutron-metadata-agent, neutron-openvswitch-agent, neutron-server, openstack-ceilometer-alarm-evaluator, openstack-ceilometer-alarm-notifier, openstack-ceilometer-api, openstack-ceilometer-central, openstack-ceilometer-collector, ceph-monitor, rabbitmq, mariadb, mongodb

Compute node (compute01 - compute18):
neutron-openvswitch-agent, openstack-ceilometer-compute, openstack-nova-compute

Storage node (storage01 - storage03):
ceph-osd
The capacity highlights of this configuration include 10 GbE networking, 14 compute nodes alongside the controller nodes, storage nodes, and deployment server, 86 TB of local storage (RAID-10), and local IOPS of more than 100.

Rack Layout
Because it is impractical to take into account every possible combination of these options, this configuration can
be modified in a real scenario to best fit the workloads that are running on the target cloud environment.
The example rack includes the following machines:

• Compute node x 16
• Storage node x 8
• 10 GbE switch x 4
• 1 Gb switch x 1
7 Deployment considerations
This section describes the performance, sizing, and networking considerations for a Red Hat Enterprise Linux
OpenStack Platform deployment.
Three compute node configurations are defined in this reference architecture:

• Standard configuration
• Enhanced configuration
• Advanced configuration
The standard configuration uses a System x3650 M5 dual-socket 2U rack server (with 2.5-inch disk bays) and is configured for general workloads in this reference architecture. This configuration features balanced computing, memory, and storage and is cost-competitive on a per-VM basis. Memory or drives can be added to accommodate workloads that require more capacity.
The enhanced configuration uses a System x3550 M5 dual-socket 1U server and is configured for I/O-intensive workloads to maximize storage performance. Two Lenovo S3700 SSDs are used with LSI MegaRAID CacheCade Pro 2.0 as a dedicated pool of RAID controller cache to improve the read/write performance of the disk array of the server's eight HDDs. The Lenovo S3700 2.5-inch MLC Enterprise SSDs for System x use MLC NAND flash memory with high endurance technology and a 6 Gbps SATA interface to provide an efficient solution with industry-leading performance. These SSDs can be fully rewritten up to 10 times per day throughout their entire five-year life expectancy. This technology offers an ideal combination of capacity and high performance.
The advanced configuration uses a System x3650 M5 dual-socket 2U rack server and is suited to heavy
workloads. This server has faster processors, more memory, and higher available IOPS.
Table 9 lists the server configurations for the other types of nodes that are required for building an OpenStack
cloud.
Table 9. Server configurations for the deployment server, controller node, and storage node
The storage node uses a System x3650 M5 (with 3.5-inch disk bays) server. It has up to 65 TB of storage capacity with 12 3.5-inch HDDs and is ideal for fulfilling the requirements for external storage. The storage capacity is shared by multiple compute nodes, which access the local disks through the Ceph storage system and the OpenStack Cinder service. To achieve optimal performance, the two 2.5-inch SSDs in the rear are partitioned for the operating system and for storing the Ceph journal data.
For more information about the server configurations and a Bill of Materials, see "Appendix: Lenovo Bill of Materials."
Table 10 lists the VM sizes that are used for capacity planning.

Table 10. VM sizes

Size      vRAM     Storage    IOPS
Small     2 GB     20 GB      15
Medium    6 GB     60 GB      35
Large     12 GB    200 GB     100
When sizing the solution, estimate the number of each type of workload that is expected, multiply that number by the maximum size of the workload, and add a 10% buffer for management overhead.
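A small Python sketch of this arithmetic is shown below; the per-size resource figures come from Table 10, while the workload counts are assumed for illustration.

# Per-VM sizes from Table 10: (vRAM GB, storage GB, IOPS)
SIZES = {
    "small": (2, 20, 15),
    "medium": (6, 60, 35),
    "large": (12, 200, 100),
}
expected = {"small": 100, "medium": 40, "large": 10}  # assumed demand

BUFFER = 1.10  # 10% buffer for management overhead
for i, name in enumerate(("vRAM (GB)", "storage (GB)", "IOPS")):
    total = sum(count * SIZES[size][i] for size, count in expected.items())
    print(name, "required:", round(total * BUFFER))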
For better resource utilization, consider placing workloads that use the same guest OS type on the same host so that memory or storage de-duplication can be used. The usable virtual resources of a compute node are calculated as follows:

vCPU = physical cores * CPU allocation ratio (see note 1)
vRAM = physical memory * RAM allocation ratio (see note 2) * (100% - OS reserved)

By using these formulas, the usable virtual resources can be calculated from the hardware configuration of the compute nodes, as shown in Table 11.
Table 11. Virtual resources per compute node

                   Standard configuration    Enhanced configuration
Server             System x3650 M5           System x3550 M5
vCPU               120                       120
vRAM               345 GB                    345 GB
Storage space      6.6 TB                    4.8 TB
By using a mixed workload requirement that was generated from Table 10 in which each VM requires 2 vCPU,
6 GB vRAM, 80 GB local disk, and 100 IOPS, it is easy to calculate the expected VM density (the smallest
number determines the number of VMs that can be supported per compute server). Table 12 shows 57 VMs per
host for the standard and enhanced configurations.
Table 12. Resource-based VM density for mixed workload

                         CPU          Memory       Storage space      IOPS (see note 3)
Standard configuration   120/2 = 60   345/6 = 57   6.6T/80 GB = 82    ~8000/100 = 80
Enhanced configuration   120/2 = 60   345/6 = 57   4.8T/80 GB = 60    ~20000/100 = 200
However, the actual user experience of I/O performance is largely dependent on workload pattern and
OpenStack configuration; for example, the number of concurrent running VM instances and the cache mode. By
applying the calculated density of a single compute host, this reference architecture can be tailored to fit various
sizes of cloud cluster.
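The density rule can be expressed directly in Python: the smallest quotient across the CPU, memory, storage, and IOPS limits caps the number of VMs per host. The figures below reproduce Table 12 for the mixed workload (2 vCPU, 6 GB vRAM, 80 GB disk, 100 IOPS per VM).

def vm_density(vcpus, vram_gb, storage_gb, iops,
               vm_vcpu=2, vm_vram=6, vm_disk=80, vm_iops=100):
    # The scarcest resource determines how many VMs one host can support.
    return min(vcpus // vm_vcpu, vram_gb // vm_vram,
               storage_gb // vm_disk, iops // vm_iops)

# Standard configuration: 120 vCPU, 345 GB vRAM, 6.6 TB, ~8000 IOPS
print(vm_density(120, 345, 6600, 8000))   # -> 57, memory-bound
# Enhanced configuration: 120 vCPU, 345 GB vRAM, 4.8 TB, ~20000 IOPS
print(vm_density(120, 345, 4800, 20000))  # -> 57, memory-bound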
Notes:
1. The CPU allocation ratio indicates the number of virtual cores that can be assigned to a node for each physical core. 6:1 is a balanced choice for performance and cost effectiveness on models with Xeon E5-2600 v3 series processors.
2. The RAM allocation ratio (virtual resource/physical resource) allows virtual resources to be allocated in excess of what is physically available on a host. The hypervisor uses compression or de-duplication technology to improve the utilization of physical RAM.
3. The IOPS estimates for the standard and enhanced configurations are derived from measurements that were performed by using the FIO performance tool.
7.3 Networking
Combinations of physical and virtual isolated networks are configured at the host, switch, and storage layers to
meet isolation, security, and quality of service (QoS) requirements. Different network data traffic is physically
isolated into different switches according to the physical switch port and server port assignments.
There are five onboard 1 GbE Ethernet interfaces (including one dedicated port for the IMM) and an installed
dual-port 10 GbE Ethernet device for each physical host (x3550 M5 or x3650 M5).
The 1 GbE management port (dedicated to the Integrated Management Module, or IMM) is connected to the 1 GbE RackSwitch G7028 for out-of-band management, while the other 1 GbE ports are not connected, and the corresponding Red Hat Enterprise Linux 7 Ethernet devices eno[0-9] are not configured. Two G8124E 24-port 10 GbE Ethernet switches are used in pairs to provide redundant networking for the controller nodes, compute nodes, storage nodes, and deployment server. The dual-port 10 GbE Emulex Virtual Fabric Adapter delivers virtual I/O through its vNIC support, with which the two physical ports can be divided into eight virtual NICs.
Administrators can create dedicated virtual pipes between the 10 GbE network adapter and the TOR switch for
optimal performance and better security. Linux NIC bonding is then applied to the vNICs to provide HA to the
communication networks. Ethernet traffic is isolated by type through the use of virtual LANs (VLANs) on the
switch.
Table 13 lists the six logical networks in a typical OpenStack deployment.

Table 13. OpenStack logical networks

BMC Network: The network that is used for out-of-band hardware management through the dedicated IMM ports.

Management Network: The network where OpenStack APIs are shown to the users and the message hub that connects the various services inside OpenStack. The OpenStack Installer also uses this network for host discovery and initial deployment.

Internal Network: The subnet from which the VM private IP addresses are allocated. Through this network, the VM instances can talk to each other.

External Network: The subnet from which the VM public IP addresses are allocated. It is the only network where external users can access their VM instances.

Storage Network: The front-side storage network where Ceph clients (through the Glance API, Cinder API, or Ceph CLI) access the Ceph cluster. Ceph monitors also operate on this network.

Storage Cluster Network: The back-side storage network to which Ceph routes its heartbeat, object replication, and recovery traffic.
vNIC0 and vNIC4 are used to create bond0, vNIC1 and vNIC5 are used to create bond1, and so on. The 20 Gb of bandwidth from the dual 10 Gb connections is dynamically allocated to the four vNIC bonds and can be adjusted as required in increments of 100 Mb without downtime.
Figure 17 shows the recommended network topology diagram with VLANs and vNICs.
The logical network usage, bond assignment, and bandwidth allocation for each node type are as follows:

Controller node:
  Bond0: Management Network, 2 Gb/s
  Bond1: Internal Network, 2 Gb/s
  Bond2: Storage Network, 12 Gb/s
  Bond3: External Network, 4 Gb/s

Compute node:
  Bond0: Management Network, 2 Gb/s
  Bond1: Internal Network, 6 Gb/s
  Bond2: Storage Network, 12 Gb/s
  Bond3: Reserved, 0 Gb/s

Storage node:
  Bond0: Management Network, 2 Gb/s
  Bond1: Reserved, 0 Gb/s
  Bond2: Storage Network, 12 Gb/s
  Bond3: Storage Cluster Network, 8 Gb/s

Deployment server:
  Management Network (no bonding), 1 Gb/s
The Emulex VFA5 adapter is used to offload packet encapsulation for VxLAN networking overlays to the adapter,
which results in higher server CPU effectiveness and higher server power efficiency.
Open vSwitch (OVS) is a fully featured, flow-based implementation of a virtual switch and can be used as a
platform in software-defined networking. Open vSwitch supports VxLAN overlay networks, and serves as the
default OpenStack network service in this reference architecture. Virtual Extensible LAN (VxLAN) is a network
virtualization technology that addresses scalability problems that are associated with large cloud computing
deployments by using a VLAN-like technique to encapsulate MAC-based OSI layer 2 Ethernet frames within
layer 3 UDP packets.
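To make that encapsulation concrete, the following sketch builds a VxLAN-encapsulated frame with the scapy packet library (assumed to be available); the MAC and IP addresses and the VNI are illustrative.

from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner layer 2 frame: tenant traffic between two VM instances.
inner = (Ether(src="fa:16:3e:00:00:01", dst="fa:16:3e:00:00:02") /
         IP(src="192.168.10.5", dst="192.168.10.6"))

# Outer headers between the hypervisors' tunnel endpoints; UDP port 4789
# is the IANA-assigned VxLAN port, and the VNI is a 24-bit segment ID.
packet = (IP(src="10.0.0.11", dst="10.0.0.12") /
          UDP(dport=4789) /
          VXLAN(vni=1001) /
          inner)
packet.show()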
Consider the following best practices for deployment and operation:

• When the solution is implemented, measure the actual resource usage of the workloads first and adjust the memory or disks accordingly to avoid imbalanced resource utilization.

• In the Red Hat OpenStack Platform installer, the Emulex VFA5 adapter might not be recognized the first time after discovery. If this issue occurs, deploy the operating system first and then reassign the adapter into your deployment (Red Hat BZ 1182484).

• Use Red Hat Satellite when Red Hat Enterprise Linux OpenStack Platform is deployed if network access is restricted.

• All nodes (and monitors) must be time synchronized by using NTP at all times.

• Set the compute and storage quotas on each tenant properly so that the resources are not exhausted because of a misconfigured or resource-starved VM (see the sketch after this list).

• To allow for planned maintenance or unplanned failures, provide enough physical server resources to handle all VMs in an N-1 configuration.

• Before planned maintenance, evacuate the instances on the server and gracefully shut down all running services to avoid inconsistent states or data corruption.

• Two switches are connected by using an inter-switch link (ISL). At least two ports should be used for the inter-switch connection.
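A minimal sketch of the quota recommendation above, using python-novaclient; the tenant ID, limits, and endpoint are placeholders.

from novaclient import client

nova = client.Client("2", "admin", "secret", "admin",
                     "http://controller.example.com:5000/v2.0")

# Cap the tenant at 40 vCPUs, 96 GB of RAM, and 20 instances so that a
# misconfigured workload cannot exhaust the cluster's shared resources.
nova.quotas.update("TENANT_ID", cores=40, ram=96 * 1024, instances=20)
print(nova.quotas.get("TENANT_ID"))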
Combined with Lenovo's enterprise-class hardware and software, this reference architecture prepares IT administrators to successfully meet virtualization performance and growth objectives by deploying clouds efficiently and reliably.
Appendix: Lenovo Bill of Materials

The Bill of Materials lists the Lenovo part numbers for each node role and for the supporting infrastructure:

Compute node, standard configuration: 5462G2x, 00FK645, 46W0796, 47C8664, 00NA251, 00D1996, 90Y9430

Compute node, enhanced configuration: 00KA072, 46W0796, 47C8664, 00NA261, 00FN379, 00D1996, 90Y9430

Compute node, advanced configuration: 00FK645, 46W0796, 47C8664, 00NA271, 00FN389, 00D1996, 90Y9430

Deployment server: 5463F2x, 00AJ146, 90Y9430

Controller node: 5463G2x, 00KA072, 46W0796, 47C8664, 00AJ146, 00D1996, 90Y9430

Storage node: 5462D4x, 46W0796, 47C8664, 00FN173, 00D1996, 00FN379, 00FK658, 90Y9430

1 GbE networking: 7159BAX, 39Y7938

10 GbE networking: 7159BR6, 39Y7938, 90Y9427

Console: 1754D1X, 43V6147

Keyboard/monitor: 17238BX

Rack: 93084PX, 46M4119

Software: 00JY341
9 Resources
For more information about the topics in this document, see the following resources:
OpenStack Project:
http://www.openstack.org