
Objectives

• Describe Network Function Virtualization


• Describe the CMM VNF architecture
• State the Virtualization infrastructure requirements
• Describe the CMM-a infrastructure

© Nokia 2019 Confidential

Network Function Virtualization

Virtualization

• Deploy, operate, maintain, and upgrade traditional telecom network elements
• Deliver services in a manner that is:
  • Simplified
  • Cost-optimized
  • Harmonized
  • Flexible


The Telco Cloud enables operators to deploy, operate, maintain, and upgrade traditional
telecom network elements, and deliver services in a simplified, cost-optimized,
harmonized, and flexible manner.

Network Function Virtualization, or NFV, refers to the process of separating network
functions from hardware to create a virtualized network that can run on commodity
hardware, allowing networks to be more pliable and more cost-effective. At the core of
NFV are virtual network functions (VNFs) that handle specific network functions.
Individual VNFs can be connected or combined together as building blocks to create a
fully virtualized environment. VNFs run on virtual machines (VMs) on top of the hardware
networking infrastructure. There can be multiple VMs on one hardware platform.

A VNF can be composed of multiple internal components (VNF-Cs).

CMM in Network Function Virtualization (NFV) reference architecture

[Figure: NFV reference architecture. OSS/BSS and the NFV Orchestrator (with its NFV service catalog, NFV catalog, NFV instances, and NFV resources, provided by CloudBand) sit at the top. The Element Management System (NFM-P/NetAct) manages the Virtualized Network Function, the Cloud Mobility Manager, which is the Nokia CMM solution. The VNF Manager (CBAM) handles VNF lifecycle. The Network Function Virtualization Infrastructure provides virtual computing (Nova/ESXi), virtual storage (Cinder/vSAN), and virtual networking (Neutron/NSX) on a virtualization layer, a hypervisor (Linux-KVM/ESXi), over HW resources (computing, storage, network) such as Nokia AirFrame or generic IT HW. The infrastructure is controlled by the Virtualized Infrastructure Manager: CBIS (OpenStack) or vCloud Director (VMware). Everything below the VNF forms the reference cloud infrastructure.]


The CMM implements the role of a VNF, a virtualization of a network function (the
MME/SGSN in this case), in the Network Function Virtualization (NFV) reference
architecture. The functions and external operational interfaces of the VNF are the same
as those of a non-virtualized MME/SGSN node.
Nokia’s virtualized MME and SGSN, as well as its management solution, are fully
compliant with the ETSI NFV reference architecture, as specified in ETSI GS NFV 002.
A VNF can be composed of multiple internal components (VNF-Cs).
The VNF Manager (VNFM) is responsible for VNF lifecycle management, such as
deployment, update, and scaling. The VNFM used by the CMM in its reference
deployment is the CloudBand Application Manager (CBAM).
The Network Function Virtualization Infrastructure (NFVI) includes all SW and HW
components that provide the environment in which VNFs are deployed, managed, and
executed. Typically, this is achieved by deploying a virtualization layer (hypervisor, such
as Linux/KVM or ESXi) over IT HW (for example, x86 rack mount servers) such as the
Nokia AirFrame.
The Virtual Infrastructure Manager (VIM) comprises the functionalities used to control
and manage the interaction of a VNF with virtualized computing, storage, and networking
resources, and the virtualization layer (NFVI) itself. The VIMs supported by the CMM are
OpenStack and VMware.
The Element Management System (EMS) performs the typical management functions for one
or several VNFs. In the Nokia solution, these functions are provided by the NFM-P, which
has a northbound interface to NetAct.

Cloud run-time architecture

In OpenStack, hardware is divided into the following logical resources:

Compute nodes - provide computing capacity for the use of the guest applications.
Controller nodes - provide supervision functions for the compute nodes and they also provide
networking as required for the guest applications.
The undercloud - the main director node used in OpenStack. It is a single-system OpenStack
installation server compute and controller node that includes components for provisioning and
managing the OpenStack nodes that form the OpenStack environment (referred to as the
overcloud).
The network is composed of the network interface cards (NIC) hosted by compute and
controller nodes and the L2/L3 switches that interconnect these nodes.

© Nokia 2019 Confidential

Virtualization is a technique that allows running more than one application on the same
hardware unit. Virtualization optimizes hardware usage by abstracting the hardware
layer from the software layer, providing a number of logical resources.
The resources available are dependent on the deployment, i.e. OpenStack or VMware.
With a VMware deployment, the logical resources are:
• Physical Infrastructure Management Services (PIMS) - contains VM appliances, which
must run outside the CMS cluster to provision, monitor, manage, and back up the
server infrastructure.
• Cloud Management Services (CMS) - hosts the overall cloud infrastructure
management and provides the basic services, operations, and the cloud front-end
service for IaaS consumers. The cloud compute clusters' VI and NSX managers are
also located on this cluster.
• Cloud Networking Services (CNS) - a collection of resources (compute, storage, and
network) used for hosting the VM components required for the external connectivity of the
applications. The CNS resource pool contains the tenant-level virtual Edge Services
Gateways (ESG VMs) to access external networks for different security zones.
• Cloud Computing Services (CCS) - provides resources for both CNS and Provider vDC
Resource Pools. vCloud Director deploys VNF applications and VNF-level edge services
onto these clusters.

VNF (CMM) Management
[Figure: VNF management stack. Guest applications (IPDS, CPPS, DBS, ...) run on a guest OS (Linux or Linux/SROS) inside virtual machines, which sit on a hypervisor (Linux-KVM/ESXi) over the hardware. Application management addresses the VNFs, cloud stack management addresses the hypervisor layer, and hardware management addresses the hardware.]
© Nokia 2019 Confidential

Applications and the native operating system (OS) are called guest applications and guest OS,
respectively. In the CMM, the guest applications are the functional units NECC, IPDS, CPPS, DBS,
PAPS, IPPS, and L3NS. The guest OS is Red Hat Linux for all the VNFCs except L3NS, which runs
on Nokia’s Service Router Operating System (SROS). Guest applications and the guest OS are
encapsulated inside a virtual machine, which provides the run-time environment for them.
The underlying hypervisor is responsible for emulating hardware configurations to the guest
OS.
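As an illustration of that emulation, a libvirt domain definition for a KVM guest declares the virtual hardware that the hypervisor presents to the guest OS. The fragment below is a generic sketch only; the VM name, sizes, image path, and bridge name are invented for the example, and the CMM's actual VM definitions are generated by the VNFM/VIM rather than written by hand:

```xml
<!-- Illustrative libvirt/KVM domain: the hypervisor presents this
     emulated hardware (CPU count, memory, virtio disk and NIC) to
     the guest OS. All names and sizes here are example values. -->
<domain type='kvm'>
  <name>example-guest-0</name>
  <memory unit='GiB'>16</memory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/example-guest-0.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br-oam'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```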

CMM VNF architecture

© Nokia 2019 Confidential

CMM VNF architecture

• OAM plane: Network Element Management (NECC) - OEM, business logic, open API and
centralized functions, event broker, time-series event database and event-processing
engine. Redundancy: 1+1 plus quorum.
• DB layer: Cloud Database (DBS) - open database technology for high-capacity database
solutions. Redundancy: 2N.
• Transaction layer: Control Plane Processing Service (CPPS) - MME 4G MM and SM.
Redundancy: N+1.
• Transaction layer: Packet Processing Service (PAPS) - SGSN 2G/3G MM and SM.
Redundancy: N+1.
• Signaling layer: IP Director Service (IPDS) - elastic transaction load balancing, LBS/UP
forwarding and interface aggregation. Redundancy: 2N.
• Signaling layer: IP Packet Processing Service (IPPS) - 3G user plane handling (3G/2G
GTP-U). Redundancy: N+1.
• Signaling layer: Layer 3 Network Steering Service (L3NS) - L3 load balancing of links
across multiple IPDS, with guaranteed delivery for SCTP endpoints. Redundancy: 1+1.

[The original figure distinguishes CMM-common, MME-only, and SGSN-only components,
and notes support for 4G/5G NSA, 3G, and 2G core.]

CMM design provides the following features: optimization for multi-core, virtualized
compute space, high performance and system reliability, common local user interface
and EMS across end-to-end EPC, business logic hardened and field proven in large,
complex networks, and flexible configuration from very small to very large traffic loads
and network designs.
The CMM is designed to run on top of standard, multi-purpose IT hardware and
generally available OpenStack or VMware distribution.
The CMM is designed according to the multi-tier paradigm where load balancing,
transaction processing, and database constitute individual tiers that can scale
independently. The functional layers of the CMM include the following:
• Layer-1, load balancer / user plane: The IPDS terminates the logical signaling
interfaces from the external network’s viewpoint. This layer is aware of working states
and the load of the transaction layer unit and it distributes the incoming transactions
to the next layer for processing. In large CMM deployments with multiple IPDS pairs,
L3NS is used to load balance signaling traffic among the IPDS instances. The IPPS
processes the 3G user plane data.
• Layer-2, transaction processing: The CPPS and the PAPS execute subscriber-related
transactions and maintain a local cache of subscriber data to improve performance and
reduce the latency associated with the stateless call-processing design.
• Layer-3, subscriber database: The DBS provides an always-available store of static
and dynamic UE context data that eliminates the need for active/standby call
processing and allows on-demand recovery of the UE state on a per-subscriber basis.

OAM functions are located in the NECC. The NECC provides configuration management,
SW update control, and the master role for redundancy management, along with collection
of statistical data and log information based on industry-standard data analytics,
report-builder, and dashboard tools. L3NS OAM functions are managed via the VM's direct
OAM interface.
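The multi-tier, stateless design described above can be sketched as a toy model: a load-balancing tier distributes transactions to stateless workers, which keep all durable per-subscriber (UE) state in a database tier, so any surviving worker can resume any subscriber after a failure. This is an illustrative sketch only; the class names, worker names, and events are invented and do not reflect CMM internals:

```python
class DatabaseTier:
    """Always-available store of per-subscriber (UE) context (Layer-3)."""
    def __init__(self):
        self._contexts = {}

    def load(self, imsi):
        # Unknown subscribers start detached.
        return self._contexts.get(imsi, {"state": "DETACHED"})

    def store(self, imsi, ctx):
        self._contexts[imsi] = ctx


class TransactionWorker:
    """Stateless processor (Layer-2): durable state lives in the DB tier."""
    def __init__(self, name, db):
        self.name = name
        self.db = db

    def handle(self, imsi, event):
        ctx = self.db.load(imsi)       # fetch UE context on demand
        if event == "attach":
            ctx["state"] = "ATTACHED"
        elif event == "detach":
            ctx["state"] = "DETACHED"
        self.db.store(imsi, ctx)       # persist before replying
        return ctx["state"]


class LoadBalancer:
    """Layer-1 stand-in: round-robin dispatch across workers."""
    def __init__(self, workers):
        self.workers = list(workers)
        self._next = 0

    def dispatch(self, imsi, event):
        worker = self.workers[self._next % len(self.workers)]
        self._next += 1
        return worker.handle(imsi, event)

    def fail(self, worker):
        # N+1 idea: remove a failed worker; the rest absorb its load.
        self.workers.remove(worker)


db = DatabaseTier()
workers = [TransactionWorker(f"worker-{i}", db) for i in range(3)]
lb = LoadBalancer(workers)

assert lb.dispatch("001010123456789", "attach") == "ATTACHED"
lb.fail(workers[0])  # simulate a worker failure
# A different worker recovers the UE state from the DB tier:
assert lb.dispatch("001010123456789", "detach") == "DETACHED"
```

Because the workers hold no durable state, failing one requires no active/standby switchover: the next transaction simply lands on another worker, which reloads the UE context from the database tier.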

Virtualization infrastructure

Requirements

Generic hardware:
• Intel x86 server (Intel Broadwell, Haswell, or Skylake) with a minimum of 12 cores per
socket
  Note: a minimum of 14 cores is recommended for VMware
• No vCPU/vRAM oversubscription (oversubscription rate 1:1)
• Hyper-threading enabled

OpenStack deployment:
• Deployment: OpenStack Newton or later
• Host OS/Hypervisor: Red Hat Enterprise or Canonical Ubuntu-based Linux distribution
• Hypervisor: KVM
• Networking options:
  – two 2 x 10 Gbit SR-IOV/DPDK-capable Intel Niantic NICs
  – two Mellanox MCX4121A-ACAT or MCX312A-XCBT ConnectX-3 EN Dual 10GbE
    SFP+ Ethernet Adapters

VMware deployment:
• vSphere 6.5
• VMXNET3


The CMM virtualization infrastructure requirements for generic hardware, OpenStack
deployment, and VMware deployment are shown here.

Reference hardware and infrastructure manager

• Nokia AirFrame compute node 2
• Nokia AirFrame RM17 density
• Nokia AirFrame OR18


Rated capacity has been verified on reference hardware and infrastructure manager with
the specifications for node configurations listed here. Other HW configurations are
supported by design, but capacity can vary.

Nokia AirFrame compute node 2

AirFrame RM 10G Compute Node 2 Set (AR-COMN2-SET):
• Processor: 2x E5-2680v3 Haswell, 12-core 2.5 GHz
• Memory: 256 GB (DDR4)
• Storage: 1x 6 TB 3.5” 7.2K SAS disk + 2x 1 TB 3.5” 7.2K SAS disks for OS + 1x 2.5” HDD/SSD (internal)
  for CEPH journaling
• Networking: 3x Intel 82599 Niantic NIC (2x 10 Gbit), DPDK/SR-IOV enabled

AirFrame RM 10G Compute Node 2 Set BDW (AR-CMN2-SB / AR-DCMN2-SB):
• Processor: 2x E5-2600v4 Broadwell, 14-core 2.4 GHz
• Memory: 256 GB (DDR4)
• Storage: 1x 6 TB 3.5” 7.2K SAS disk + 2x 1 TB 3.5” 7.2K SAS disks for OS + 1x 2.5” HDD/SSD (internal)
  for CEPH journaling
• Networking: 3x Intel 82599 Niantic NIC (2x 10 Gbit), DPDK/SR-IOV enabled


Compute Node 2 Sets are shown here.


Nokia AirFrame RM17 density

Nokia AirFrame (1U) server:
• Processor: 2x E5-2650v4 Broadwell, 14-core 2.4 GHz
• Memory: 256 GB (DDR4)
• Storage: 2x 400 GB 2.5” SATA SSD disks for OS and ephemeral storage
• Networking: 2x Mellanox ConnectX-4 NIC (2x 25 Gbit), or NIC + LOM port 1 used for IPMI and PXE
  Note: CMM will add an additional NIC

Nokia AirFrame (2U) server:
• Processor: 2x E5-2650v4 Broadwell, 12-core 2.2 GHz
• Memory: 256 GB (DDR4)
• Storage: 2x 400 GB 2.5” SATA SSD disks for OS, 12x 6 TB 3.5” 7.2K SAS disks for CEPH, 2x 400 GB
  NVMe SSD for CEPH tiering/journaling
• Networking: 1x Mellanox ConnectX-4 NIC (2x 25 Gbit) + LOM port 1 used for IPMI and PXE
• Storage controller: LSI 3108 SAS controller


The RM17 density HW IC17.5 configuration compute node is shown here.


Nokia AirFrame OR18

Nokia AirFrame (1U) server:
• Processor: 2x Intel 6130 Skylake, 16-core 2.1 GHz
• Memory: 192 GB (DDR4)
• Storage: 2x 400 GB 2.5” SATA SSD disks for OS and ephemeral storage
• Networking: 2x Mellanox ConnectX-4 NIC (2x 25 Gbit), or NIC + LOM port 1 used for IPMI and PXE


The OR18 configuration compute node is shown here.

Virtual infrastructure manager

OpenStack distribution:
• Nokia CloudBand infrastructure software 18.5
• OpenStack Newton distribution (RHELOSP 10.0)
• Host OS/Hypervisor: Red Hat® Enterprise Linux (RHEL 7.4)/KVM
• NUMA scheduled, Huge Page (1G) enabled
• Storage backend: CEPH
• CPU pinning (for guests) / isolation for host OS / CEPH
• VF Trust mode must be enabled for L3NS with Mellanox NICs
• VF spoofchk must be disabled for L3NS on Intel NICs (accomplished through Heat/HOT template
security setting)
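The VF trust and spoof-check settings for L3NS could, for example, be expressed in a Heat (HOT) template along the following lines. This is a hedged sketch only: the resource and parameter names are invented, and the exact property names accepted depend on the Neutron and Heat versions deployed:

```yaml
# Illustrative HOT fragment for an L3NS SR-IOV port.
# Names (l3ns_sriov_port, signaling_network) are example values.
heat_template_version: 2016-10-14   # Newton

parameters:
  signaling_network:
    type: string

resources:
  l3ns_sriov_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: signaling_network }
      binding:vnic_type: direct        # SR-IOV virtual function
      # Intel NICs: disabling port security disables VF spoof checking
      # (requires that no security groups are attached to the port)
      port_security_enabled: false
      # Mellanox NICs: VF trust mode requested via the binding profile
      value_specs:
        binding:profile: { trusted: true }
```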

VMware distribution:
• VMware NFV 2.0
• Hypervisor: ESXi 6.5A
• Networking based on VMXNET3; ESG and NSX firewalls are bypassed


The Virtual Infrastructure Manager (VIM) distributions are shown here.

CMM-a

CMM-a infrastructure

[Figure: CMM-a2 and CMM-a8 hardware configurations.]

The CMM-a runs a fixed configuration of the CMM software directly on KVM.

To provide the system with CLI commands (for installing, upgrading, checking VM
status, hardware status, and so on) for the CMM application, the Airframe Infrastructure
Manager (AIM) runs at the host level. Once in service, AIM monitors all hardware
components and raises alarms upon failure and recovery of the components including
servers and links. In addition, the CMM application monitors the status of all VMs
running in the system and raises alarms upon failure and recovery of the VMs.

When deployed with the NSP, the CMM application and the AIM are managed under a
common resource group (CMM-a). The CMM application and AIM are configured with
individual management IP addresses.

Compute Node (CN) 1 specifications

• Dimensions: W x H x D (inch): 17.244 x 1.7 x 29.21; W x H x D (mm): 438 x 43.2 x 742
• Operating temperature and humidity: 5°C to 40°C, 80% RH
• Power: 750 W AC or 48 V DC, redundant
• Processors: two Intel® Xeon® E5-2680 v4 Broadwell-EP processors (2.4 GHz, 14 cores)
• RAM: 256 GB
• Drives: servers 1 to 3: two 960 GB or 1.2 TB SSDs; servers 4 to 8: two 1 TB HDDs
• NICs: one PCIe Ethernet card, two 10 Gbps SFP+ Intel 82599 Niantic;
  one OCP LAN mezzanine adapter, two 10 Gbps SFP+ Intel 82599 Niantic
• Front I/O: two USB 2.0 ports
• Rear I/O: two USB 3.0 ports, one VGA port, one RS232 serial port, one ID LED,
  one 1GE dedicated management port

CMM-a8 hardware
The CMM-a8 variant runs on a dedicated hardware platform consisting of eight AirFrame
RM Compute Node 1 (CN1) Broadwell servers and two Nuage 7850 Virtualized Services
Gateway (VSG) switches.

The table summarizes the server configuration for the CMM-a8. For additional information,
see AirFrame Rackmount Server User Guide (Barebone AR-D51BP10-A, AR-D51B1U1-A
and AR-D51B2U1-A), DN09133656 Issue: 2, for server model AR-D51BP10-A.

CMM-a2 is a smaller variant with two servers and is available for MME-only deployments.
The CMM-a2 PNF setup does not include the two switches, as the two servers are directly
interconnected with cross cables and have ports to connect to the first-hop router.

Nuage Networks Virtualized Services Gateway specifications

Dimensions W x H x D (mm): 440 x 44 x 470

Weight 10kg

Operating temperature 0°C to 40°C

Operating relative humidity 5% to 85% RH

Power 475W redundant, AC or -48V DC

Cooling Front-to-back and back-to-front via fan tray (4+1) and power module (1+1) options
