Objectives
Network Function Virtualization
Virtualization
The Telco Cloud enables operators to deploy, operate, maintain, and upgrade traditional
telecom network elements, and deliver services in a simplified, cost-optimized,
harmonized, and flexible manner.
CMM in Network Function Virtualization (NFV) reference architecture
[Figure: NFV reference architecture. The Nokia CMM solution (Cloud Mobility Manager) runs as a virtualized network function on a virtualized layer (hypervisor: Linux-KVM/ESXi) over HW resources (computing, storage, network); the reference cloud infrastructure is Nokia AirFrame or generic IT HW.]
Nokia’s virtualized MME and SGSN, as well as its management solution, are fully compliant with the ETSI NFV reference architecture, as specified in ETSI GS NFV 002.
The CMM implements the role of a VNF, which is a virtualization of a network function
(the MME/SGSN in this case). The functions and external operational interfaces of a
VNF are the same as a non-virtualized MME/SGSN node.
A VNF can be composed of multiple internal components (VNF-Cs).
The VNF Manager (VNFM) is responsible for VNF lifecycle management, such as
deployment, update, and scaling. The VNFM used by the CMM in its reference
deployment is the CloudBand Application Manager (CBAM).
The Network Function Virtualization Infrastructure (NFVI) includes all SW and HW
components that provide the environment in which VNFs are deployed, managed, and
executed. Typically, this is achieved by deploying a virtualization layer (hypervisor, such
as Linux/KVM or ESXi) over IT HW (for example, x86 rack mount servers) such as the
Nokia AirFrame.
The Virtual Infrastructure Manager (VIM) comprises the functionalities used to control
and manage the interaction of a VNF with virtualized computing, storage, and networking
resources, and the virtualization layer (NFVI) itself. The VIMs supported by the CMM are
OpenStack and VMware.
The Element Management System (EMS) performs the typical management functions for one
or several VNFs. In the Nokia solution, these functions are provided by the NFM-P, which
has a northbound interface to NetAct.
Cloud run-time architecture
Virtualization is a technique that allows running more than one application on the same
hardware unit. Virtualization optimizes hardware usage by abstracting the hardware
layer from the software layer, providing a number of logical resources.
The resources available are dependent on the deployment, i.e. OpenStack or VMware.
In OpenStack, hardware is divided into the following logical resources:
• Compute nodes - provide computing capacity for the use of the guest applications.
• Controller nodes - provide supervision functions for the compute nodes and they also
provide networking as required for the guest applications.
• The undercloud - the main director node used in OpenStack. It is a single-system
OpenStack installation server compute and controller node that includes components
for provisioning and managing the OpenStack nodes that form the OpenStack
environment (referred to as the overcloud).
• The network is composed of the network interface cards (NIC) hosted by compute
and controller nodes and the L2/L3 switches that interconnect these nodes.
With a VMware deployment, the logical resources are:
• Physical Infrastructure Management Services (PIMS) - contains VM appliances,
which must run outside the CMS cluster to provision, monitor, manage, and back up
the server infrastructure.
• Cloud Management Services (CMS) - The overall cloud infrastructure management is
hosted on CMS and provides the basic services, operations and the Cloud front-end
service for IaaS consumers. The cloud compute clusters' VI and NSX managers are
also located on this cluster.
• Cloud Networking Services (CNS) - a collection of resources (compute, storage and
network) used for hosting the VM components required for the external connectivity of the
applications. CNS resource pool contains the tenant-level virtual Edge Services Gateways
(ESG VMs) to access external networks for different security zones.
• Cloud Computing Services (CCS) - provides resources for both CNS and Provider vDC
Resource Pools. vCloud Director deploys VNF applications and VNF level edge services
on to these clusters.
VNF (CMM) Management
[Figure: VNF (CMM) stack. Guest applications (IPDS, CPPS, DBS, ...) run on a guest OS (Linux or SROS) inside VMs on a hypervisor (Linux-KVM/ESXi) over the hardware; application management sits above the guest layer, and hardware management below it.]
Applications and the native operating system (OS) are called guest applications and guest OS,
respectively. In the CMM, guest applications are the functional units NECC, IPDS, CPPS, DBS,
PAPS, IPPS, and L3NS. The guest OS is Red Hat Linux for all the VNFCs, except for L3NS,
which runs on Nokia’s Service Router Operating System (SROS). Guest applications and the
guest OS are encapsulated inside a virtual machine, which provides the run-time environment for
them.
The underlying hypervisor is responsible for emulating hardware configurations to the guest
OS.
CMM VNF architecture
[Figure: CMM VNF architecture. The transaction layer comprises the Control Plane Processing Service (CPPS: MME 4G MM and SM) and the Packet Processing Service (PAPS: SGSN 2G/3G MM and SM), each N+1 redundant; components are marked as CMM common, MME only, or SGSN only.]
The CMM design provides the following features:
• Optimization for multi-core, virtualized compute space
• High performance and system reliability
• Common local user interface and EMS across the end-to-end EPC
• Business logic hardened and field proven in large, complex networks
• Flexible configuration from very small to very large traffic loads and network designs
The CMM is designed to run on top of standard, multi-purpose IT hardware and
generally available OpenStack or VMware distributions.
The CMM is designed according to the multi-tier paradigm where load balancing,
transaction processing, and database constitute individual tiers that can scale
independently. The functional layers of the CMM include the following:
• Layer-1, load balancer / user plane: The IPDS terminates the logical signaling
interfaces from the external network’s viewpoint. This layer is aware of working states
and the load of the transaction layer unit and it distributes the incoming transactions
to the next layer for processing. In large CMM deployments with multiple IPDS pairs,
L3NS is used to load balance signaling traffic among the IPDS instances. The IPPS
processes the 3G user plane data.
• Layer-2, transaction processing: The CPPS and the PAPS execute subscriber-related
transactions and maintain local cache of subscriber data to improve performance and
reduce latency associated with stateless call processing design.
• Layer-3, subscriber database: The DBS provides an always available storage area of
static and dynamic UE context data that eliminates the need for active/standby call
processing and on-demand recovery of the UE state on a per subscriber basis.
OAM functions are located in the NECC. The NECC provides configuration management,
SW update control, and the master role for redundancy management, along with collection
of statistical data and log information based on industry-standard data analytics,
report-builder, and dashboard tools. L3NS OAM functions are managed via the VM’s direct
OAM interface.
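The three-tier split described above can be sketched in miniature. This is an illustrative Python sketch under stated assumptions, not Nokia code; names such as `UeContextStore`, `TransactionWorker`, and `handle_attach` are invented for the example. The point it shows is why layer 2 can stay stateless: any worker can serve any subscriber because UE context lives in the database tier.

```python
# Illustrative sketch of the CMM's multi-tier idea: a load-balancer tier
# picks a transaction-processing worker, and workers stay stateless by
# loading and persisting UE context in a shared database tier.
import random


class UeContextStore:
    """Layer-3 (DBS-like): always-available store of per-subscriber context."""
    def __init__(self):
        self._ctx = {}

    def load(self, imsi):
        # Return a copy so workers never share mutable state.
        return dict(self._ctx.get(imsi, {"state": "DETACHED"}))

    def save(self, imsi, ctx):
        self._ctx[imsi] = dict(ctx)


class TransactionWorker:
    """Layer-2 (CPPS/PAPS-like): stateless transaction processing."""
    def __init__(self, name, store):
        self.name, self.store = name, store

    def handle_attach(self, imsi):
        ctx = self.store.load(imsi)   # fetch context on demand
        ctx["state"] = "ATTACHED"     # execute the transaction
        self.store.save(imsi, ctx)    # persist before replying
        return ctx["state"]


class LoadBalancer:
    """Layer-1 (IPDS-like): distributes incoming transactions to workers."""
    def __init__(self, workers):
        self.workers = workers

    def dispatch(self, imsi):
        # Any worker will do: subscriber state is held in layer 3.
        return random.choice(self.workers).handle_attach(imsi)


store = UeContextStore()
lb = LoadBalancer([TransactionWorker(f"cpps-{i}", store) for i in range(3)])
print(lb.dispatch("001010123456789"))          # ATTACHED
print(store.load("001010123456789")["state"])  # ATTACHED
```

Because no worker holds subscriber state between transactions, each tier can scale independently, which mirrors the document's point about eliminating active/standby call processing.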
Virtualization infrastructure
Requirements
Generic hardware:
• Intel x86 server (Intel Broadwell, Haswell, or Skylake) with a minimum of 12 cores per socket
Reference hardware and infrastructure manager
Rated capacity has been verified on reference hardware and infrastructure manager with
the specifications for node configurations listed here. Other HW configurations are
supported by design, but capacity can vary.
Nokia AirFrame compute node 2
Virtual infrastructure manager
OpenStack distribution:
• Nokia CloudBand infrastructure software 18.5
• OpenStack Newton distribution (RHELOSP 10.0)
• Host OS/Hypervisor: Red Hat® Enterprise Linux (RHEL 7.4)/KVM
• NUMA scheduled, Huge Page (1G) enabled
• Storage backend: CEPH
• CPU pinning (for guests) / isolation for host OS / CEPH
• VF Trust mode must be enabled for L3NS with Mellanox NICs
• VF spoofchk must be disabled for L3NS on Intel NICs (accomplished through Heat/HOT template
security setting)
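The NUMA, huge-page, and CPU-pinning requirements above are typically expressed as OpenStack flavor extra specs (the standard keys `hw:cpu_policy` and `hw:mem_page_size`). As a hedged illustration, the small checker below validates that a flavor carries them; `check_flavor_specs` is a hypothetical helper, not part of any Nokia or OpenStack tooling.

```python
# Illustrative sketch: verify that a flavor's extra specs carry the
# CPU-pinning and 1G-huge-page settings the reference VIM configuration
# calls for. The keys mirror standard OpenStack flavor extra specs.

REQUIRED_SPECS = {
    "hw:cpu_policy": "dedicated",   # CPU pinning for guests
    "hw:mem_page_size": "1GB",      # 1G huge pages enabled
}


def check_flavor_specs(extra_specs):
    """Return the extra-spec keys that are missing or have the wrong value."""
    return [key for key, want in REQUIRED_SPECS.items()
            if extra_specs.get(key) != want]


# A flavor with pinning but without huge pages fails one check.
print(check_flavor_specs({"hw:cpu_policy": "dedicated"}))  # ['hw:mem_page_size']
```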
VMware distribution:
• VMware NFV 2.0
• Hypervisor: ESXi 6.5A
• Networking based on VMXNET3; ESG and NSX firewalls are bypassed
CMM-a
CMM-a infrastructure
[Figure: CMM-a infrastructure variants CMM-a2 and CMM-a8.]
© Nokia 2019 Confidential
The CMM-a runs a fixed configuration of the CMM software directly on KVM.
To provide the system with CLI commands (for installing, upgrading, checking VM
status, hardware status, and so on) for the CMM application, the Airframe Infrastructure
Manager (AIM) runs at the host level. Once in service, AIM monitors all hardware
components and raises alarms upon failure and recovery of the components including
servers and links. In addition, the CMM application monitors the status of all VMs
running in the system and raises alarms upon failure and recovery of the VMs.
When deployed with the NSP, the CMM application and the AIM are managed under a
common resource group (CMM-a). The CMM application and AIM are configured with
individual management IP addresses.
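The raise-on-failure, clear-on-recovery behavior described above can be sketched as follows. This is a hypothetical illustration of the monitoring pattern, not the AIM implementation; the class and component names are invented.

```python
# Illustrative sketch: a monitor that raises an alarm when a watched
# component (server, link, or VM) goes down and clears it on recovery,
# as the text describes for AIM and the CMM application.

class AlarmMonitor:
    def __init__(self):
        self.active_alarms = set()

    def report(self, component, is_up):
        """Update component state; return the alarm action taken, if any."""
        if not is_up and component not in self.active_alarms:
            self.active_alarms.add(component)
            return f"RAISE alarm: {component} failed"
        if is_up and component in self.active_alarms:
            self.active_alarms.discard(component)
            return f"CLEAR alarm: {component} recovered"
        return None  # no state change, no alarm action


mon = AlarmMonitor()
print(mon.report("vm-necc-1", is_up=False))  # RAISE alarm: vm-necc-1 failed
print(mon.report("vm-necc-1", is_up=True))   # CLEAR alarm: vm-necc-1 recovered
```

Tracking active alarms as a set keeps the monitor idempotent: repeated failure reports for the same component do not flood the operator with duplicate alarms.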
Compute Node (CN) 1 specifications
Dimensions: W x H x D (inch): 17.244 x 1.7 x 29.21; W x H x D (mm): 438 x 43.2 x 742
Operating temperature and humidity: 5°C to 40°C, 80% RH
Power: 750 W AC or 48 V DC, redundant
Processors: Two Intel® Xeon® E5-2680 v4 Broadwell-EP processors (2.4 GHz, 14 cores)
RAM: 256 GB
Drives: Servers 1 to 3: two 960 GB or 1.2 TB SSDs; Servers 4 to 8: two 1 TB HDDs
NICs: One PCIe Ethernet card, two 10 Gbps SFP+ Intel 82599 Niantic; one OCP LAN mezzanine adapter, two 10 Gbps SFP+ Intel 82599 Niantic
CMM-a8 hardware
The CMM-a8 variant runs on a dedicated hardware platform consisting of eight AirFrame
RM Compute Node 1 (CN1) Broadwell servers and two Nuage 7850 Virtualized Services
Gateway (VSG) switches.
The table summarizes server configuration for the CMM-a8. For additional information,
see AirFrame Rackmount Server User Guide (Barebone AR-D51BP10-A, AR-D51B1U1-
A and AR-D51B2U1-A), DN09133656 Issue: 2 1 for server model ARD51BP10-A.
CMM-a2 is a smaller variant with two servers, available for MME-only deployments.
The CMM-a2 PNF setup does not include the two switches, as the two servers are directly
interconnected with cross cables and have ports to connect to the first-hop router.
Nuage Networks Virtualized Services Gateway specifications
Weight: 10 kg
Cooling: Front-to-back and back-to-front via fan tray (4+1) and power module (1+1) options