BRKCRS-3447


Network Function

Virtualisation for
Enterprise Networks
James Sandgathe - Engineer, Technical Marketing
Enterprise Infrastructure and Solutions Group
Abstract
Network Function Virtualisation (NFV) is gaining increasing traction in the industry
based on the promise of reducing both CAPEX and OPEX using COTS hardware.
This session introduces the use-cases for virtualising Enterprise network
architectures, such as virtualising branch routers, LISP nodes, IWAN
deployments, or enabling enterprise hybrid cloud deployments. The session also
discusses the technology of virtualisation from both a system architecture and a
network architecture perspective. Particular focus is given to understanding the
impact of running routing functions on top of hypervisors, as well as the
placement and chaining of network functions. Performance of virtualised
functions is also discussed.
Agenda BRKCRS-3447
• Introduction & Motivation
• Deployment Models and Characteristics
• The Building Blocks of Virtualisation
• Introducing Enterprise NFV
• Demonstration - NFVIS Orchestration
• Demonstration – ESA Orchestration

• Conclusion
Some additional points …
Cisco launches Enterprise NFV
http://www.cisco.com/go/enfv

Enterprise NFV Technical Whitepaper


http://www.cisco.com/c/en/us/solutions/collateral/enterprise-networks/enterprise-network-
functions-virtualization-nfv/white-paper-c11-736783.html?cachemode=refresh
Some additional points …
Two new sessions have been added at Cisco Live Las Vegas 2016:
• BRKCRS-2006 – 2 Hour Breakout
• TECCRS-3006 – 8 Hour Deep Dive Tutorial and Hands-On Lab
These new sessions will focus on the Enterprise NFV solution
Introduction and
Motivation
Network Functions Virtualisation (NFV)
Announced at SDN World Congress, Oct 2012

• AT&T
• BT
• CenturyLink
• China Mobile
• Colt
• Deutsche Telekom
• KDDI
• NTT
• Orange
• Telecom Italia
• Telstra
• Verizon
• Others TBA…

What is NFV? A Definition

NFV decouples network functions such as NAT, Firewall, DPI, IPS/IDS, WAAS,
SBC, RR etc. from proprietary hardware appliances, so they can run in software.

It utilises standard IT virtualisation technologies that run on high-volume server,
switch and storage hardware to virtualise network functions.

It involves the implementation of network functions in software that can run on a
range of industry standard server hardware, and that can be moved to, or
instantiated in, various locations in the network as required, without the need for
installation of new equipment.

Sources:
https://www.sdncentral.com/which-is-better-sdn-or-nfv/
http://portal.etsi.org/nfv/nfv_white_paper.pdf
Motivation for Virtualising Network Functions
CAPEX
• Deploy on standard x86 servers
• Economies of scale
• Service Elasticity
• Simpler architectural paradigm
• Changes in management access?
• Changes in HA?
• Best-of-breed
Motivation for Virtualising Network Functions
OPEX
• Reduction of number of network elements
• Reduction of on-site visits
• Leveraging Virtualisation benefits
• Hardware oversubscription, vMotion, ..
• Increased potential for automated network operations
• Re-alignment of organisational boundaries
Deployment Models and
Characteristics
Virtualisation Architecture Taxonomy
Classifying the Architecture

• What type of function in the network is being virtualised?
• Where does the function reside?
• How is the function hosted?

Virtualisation Architecture Taxonomy
Type of Functions

Enterprise Network Virtualisation covers three types of functions:
• Network Control: control plane, network policy, orchestration and management
• Network Transport: data/forwarding plane, routing, packet diversion / service chaining
• Network Functions/Services: L3-L7 services such as DPI, NAT, compression
Virtualisation Architecture Taxonomy
Placement and Location

• Cloud: any number of virtual instances, many traffic volumes, location agnostic
• Enterprise Data Centre/Campus: mid to large number of instances (two to fifty or more), high traffic volumes
• Branch: potentially large number of locations (tens to tens of thousands), low to mid traffic volume
Virtualisation Architecture Taxonomy
Hosting of Functions

• Cloud: virtual machines, application/Linux containers; the end user does not see the hardware
• Enterprise Data Centre/Campus: virtual machines and containers on blade server clusters and high-density chassis servers; on-premise private cloud
• Branch: virtual machines, network element hosting/appliances, general purpose servers
Virtualisation – Architecture
Differences In Data Centre and Branch

Data Centre:
• Runs 10s of thousands of VMs
• Server hardware is high-end compute; fast storage using chassis SAN and SAN switching
• VM = 4C / 12GB / 100GB HDD
• 5% headroom = 2,000 VMs

Branch Site:
• Runs 2 to 6 VMs
• Server hardware is lower end; internal storage, possibly RAID
• VM = 1C / 8GB / 250GB HDD
• 5% headroom = 1-2 VMs
Virtualisation – Architecture
Cost Impact to Scaling Compute

Chart: DRAM cost per GB by DIMM size (8GB, 16GB, 32GB, 64GB); cost per GB ranges from roughly $11 to $26.
Virtualisation – Architecture
Cost Impact to Scaling Compute

Chart: CPU cost per core by cores per socket (6C, 8C, 12C, 16C, 18C); cost per core ranges from roughly $72 to $229 and rises with core density.
Virtualisation – Architecture
Cost Impact to Scaling Compute

Chart: HDD cost per TB by drive size (1TB, 2TB, 3TB, 4TB); cost per TB ranges from roughly $113 to $219.
Virtualisation of Control
Plane Functions
Enterprise Virtualisation Models
Network Control Plane Functions
• Virtualisation of control plane functions
– Route Reflectors
– PfR MC
– LISP MS/MR
– WLC
– …
• Can be on-premise, in larger Enterprise WAN PoPs, or in the cloud, as shared services reachable over the WAN from campus and branches
– Assuming VNFs are reachable by IP
• CSR 1000v offers functional and operational consistency
• Virtualised IOS XE
Example: vRR with CSR 1000v
• CSR 1000v offers full IOS XE route-reflector functionality, deployed as VMs in the customer premise, data centre, or SP aggregation/core

Scale comparison (approximate):
• ASR1001 / ASR1002-X (8GB): 7M IPv4, 6M VPNv4, 6M IPv6, 6M VPNv6 routes; 4000 BGP sessions
• ASR1001 / ASR1002-X (16GB): 13M IPv4, 12M VPNv4, 11M IPv6, 11M VPNv6 routes; 4000 BGP sessions
• CSR 1000v (8GB): 8.5M IPv4, 8.1M VPNv4, 7.4M IPv6, 7.3M VPNv6 routes; 4000 BGP sessions
• CSR 1000v (16GB): 24.8M IPv4, 23.9M VPNv4, 21.9M IPv6, 21.3M VPNv6 routes; 4000 BGP sessions
• ASR1000 RP2 (8GB): 8M IPv4, 7M VPNv4, 6M IPv6, 6M VPNv6 routes; 8000 BGP sessions
• ASR1000 RP2 (16GB): 24M IPv4, 18M VPNv4, 17M IPv6, 15M VPNv6 routes; 8000 BGP sessions
Cloud Virtualisation
Application Visibility in the Public Cloud
• Cloud network enhanced by sophisticated routing functionality
• Secure connectivity to the cloud (encryption) from the Enterprise data centre and remote sites / employees
• VPC-to-VPC connectivity
• Application Visibility
• WAAS

• VPCs become part of the enterprise network
• End-to-end Cisco network (including the AWS cloud)
• Application Visibility
Branch Virtualisation: Cloud Options

L3 Private-cloud Branch – 1:1
• L3 router remains in the branch but performs minimal functions
• L4-7 services (routing, QoS, FW, NAT, …) virtualised in the private cloud (DC)
• Branch router tightly coupled with a virtual router in the private cloud for services

L2 Private-cloud Branch – 1:1
• Small branches with low throughput and no WAAS, encryption or HA requirements
• Switch in the branch: transport, storm control, L2 CoS
• Routing & services: done in the PoP or in the SP DC, running on UCS
• Single tenant, but optionally single- or multi-site
• Suitability for applications with stringent bandwidth / delay / jitter requirements?
Virtualising Branch
Functions
Virtualisation of Branch Functions
Branch Appliances today
• Router: routing, ACL, NAT, SNMP, …
• Switch: port aggregation
• Services (e.g. CUBE) realised with appliances
• Full redundancy
• Could be multi-vendor (best of breed)

• Current branch infrastructure often contains physical appliances that complicate the architecture
• Typical appliances vary by branch size
• Remote office (1-5 users): firewall
• Small (5-50 users): switched infrastructure, small call control, firewall, IPS/IDS
• Medium (50-100 users): redundancy, local campus, call control, firewall, IPS, IDS, WAAS
• Large (100+ users): redundancy, local campus, call control, firewall, IPS, IDS, WAAS
• …In addition to end-points (phones, printers, local storage…)
Branch Virtualisation – On-premise Options

1. Branch router + integrated L4-7 services
• E.g. ISR + UCS-E
• Router performs transport functions
• Services (firewall, WAAS, …) virtualised on UCS-E

2. Branch router + virtualised L4-7 services
• Router performs transport functions (routing, ACL, NAT, SNMP, …)
• Services virtualised on an external server
• VNFs could be multi-vendor (best of breed)

3. Fully virtualised branch
• Physical router replaced by x86 compute
• Both transport and network services virtualised
• VNFs could be multi-vendor (best of breed)
The Building Blocks of
Virtualisation (Today)
ETSI NfV Reference Architecture

Diagram: the ETSI reference architecture places the NFV Infrastructure (NFVI) – virtual compute, storage and network provided by a virtualisation layer (hypervisor) over physical compute, storage and network hardware – underneath the Virtual Network Functions (VNF 1..3) and their Element Management Systems (EMS 1..3), with OSS/BSS on top. NFV Management and Orchestration comprises the Orchestrator, the VNF Manager(s) and the Virtualised Infrastructure Manager(s), plus service, VNF and infrastructure descriptions. Reference points include Os-Ma, Se-Ma, Or-Vnfm, Ve-Vnfm, Or-Vi, Vi-Vnfm, Vn-Nf, Nf-Vi and Vl-Ha.
Architecture Building Blocks for Enterprise Virtualisation
• Orchestration and Management (policy)
• Virtual Network Functions
• Virtual routers, firewalls, NATs…
• Hypervisors / containers (with PnP and life-cycle management on a host OS)
• A transport network (branches, WAN, DC)
• Physical hardware
• x86 servers
• Virtualisation-capable routers
• Service Chaining (optional)


Virtual Network
Functions
Available VNFs from Cisco for Enterprise (Sample)

Network Infrastructure:
• Virtual Router CE/CPE (CSR1Kv)
• Virtual Route Reflector (CSR1Kv)
• Virtual PE / IP Router (CSR1Kv, XRv)
• CML / VIRL
• AppNav and AVC (CSR1Kv)
• DHCP (CSR1Kv)
• Nexus 1000V – VXLAN (L2, L3), OTV, VPLS, LISP
• Network Analysis Module (NAM)
• Wide Area Application Service (WAAS)
• IP SLA (CSR1Kv)
• Wireless LAN Controller (WLC/MSE)

Security:
• Virtual Zone Based Firewall (CSR1Kv)
• Virtual ASA Firewall (ASAv)
• IPSec and SSL VPN (ASAv)
• IPSec VPNs – Flex, Easy, GET (CSR1Kv)
• vNGIPS (SourceFire)
• NAT (CSR1Kv)
• Web Security (vWSA)
• E-Mail Security (vESA)
• DMVPN (CSR1Kv)
• SSL VPN (CSR1Kv)
• Deep Packet Inspection (CSR1Kv)
• Identity Services Engine (vISE)

Management & Orchestration:
• Enterprise Network Controller (APIC-EM)
• Prime Network Registrar, IP Express
• Cisco Prime Infrastructure, Provisioning
• Prime Home
• Prime Collaboration
• Prime Access Registrar
• Prime Fulfillment, Prime Order Fulfillment
• Prime Service Catalog
• Prime Performance Manager, Prime Analytics
• Prime Network Service Controller
• Intelligent Automation for Cloud (IAC)
• UCS Director

Voice & Video:
• Cisco Unified Communications Manager, Unified Contact Center
• CUBE (CSR1Kv)
• Cisco VDS-IS
• Conferencing, Presence, Unity Express
• Roadmap: (MSE8K)
Cisco Virtual Network Functions
• Adaptations from physical systems / solutions
• Feature and operational consistency between physical and virtual systems
• E.g. CSR 1000v and ASR 1000 / ISR 44xx are all based on the SAME IOS XE
• Exposure of APIs (REST)
• Flexible Licensing models (perpetual, Smart Licensing, Cisco ONE)
• Flexible Performance
• ASAv: {100Mbps, 1Gbps, 2Gbps}
• CSR 1000v: {10Mbps, 50Mbps, 100Mbps, 250 Mbps, 500Mbps, 1Gbps, 5 Gbps, 10Gbps}
• WAAS: {200, 750, 1300, 2500, 6000, 12000, 50000}
Cisco CSR 1000V – Virtual IOS XE Networking
Cisco IOS Software in Virtual Form-Factor

CSR 1000V runs as a VM on a hypervisor / virtual switch on standard servers (VPC / vDC).

IOS XE Cloud Edition
• Selected features of IOS XE based on targeted use cases
Infrastructure Agnostic
• Not tied to any server or vSwitch; supports ESXi, KVM, Xen, AMI
Throughput Elasticity
• Delivers 10 Mbps to 20 Gbps throughput, consumes 1 to 8 vCPU
Multiple Licensing Models
• Term, Perpetual
Programmability
• RESTful APIs for automated management

Virtualised networking with rapid deployment and flexibility
Introducing vCUBE (CUBE on CSR 1000v)
Architecture
• CSR 1000v (Cloud Services Router) runs on a hypervisor – IOS XE without the router hardware
• Inside the CSR 1000v virtual container, the RP (control plane) runs IOS-XE, the Chassis Manager and the Forwarding Manager; the ESP (data plane) runs the FFP code, the QFP client/driver and the Forwarding Manager, all over a shared kernel
• CUBE signaling runs in the control plane; CUBE media processing runs in the data plane
• The VM's virtual CPU, memory, flash/disk, console, management ENET and Ethernet NICs are provided by the hypervisor / vSwitch / NIC on a multi-core x86 server
Hypervisors
CSR 1000v and Hypervisor Processing Relationships
• Example: 3 CSR 1000v VMs with different footprints (4 vCPU, 2 vCPU and 1 vCPU) scheduled on a 2-socket, 8-core x86 server
• Each CSR VM runs its own IOS / Fman / CMan / PPE / HQF processes on its vCPUs, with Rx IRQs, vNICs and a VM-kernel packet scheduler per VM
• Type 1 hypervisor – no additional host OS represented
• The hypervisor scheduler algorithm governs how vCPU / IRQ / vNIC / VM-kernel processes (and the vSwitch process queues) are allocated to physical CPUs
• Note the various schedulers (guest OS scheduler, hypervisor scheduler) – running ships-in-the-night
Virtual Switches / Bridges
• Virtual switches ensure connectivity between physical interfaces and Virtual Machines
• Can have multiple vSwitches per host
• May have L2 restrictions (some vSwitches are switches in name only)
• May impact performance
I/O Architecture
Virtualising I/O – KVM Architecture Example
• The hypervisor virtualises the NIC hardware to the multiple VMs
• The hypervisor scheduler is responsible for ensuring that I/O processes are served
• There is a single instance of physical NIC hardware, including queues, etc.
• Many-to-one relationship between the VMs' vNICs and the single physical NIC
• One vHost/VirtIO thread is used per configured interface (vNIC)
• May become a bottleneck at high data rates
• Packet path on an x86 KVM host: guest application -> guest I/O driver (e.g. VirtIO) -> vNIC (vHost, in QEMU user space) -> tap -> virtual switch / Linux bridge -> pNIC driver -> pNIC, with a packet copy at each boundary
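The vNIC-to-vHost-thread relationship can be observed directly on a KVM host. A minimal sketch, assuming a standard Linux/KVM host using vhost-net (thread names, PIDs and interface names will differ per system):

# List vhost kernel threads; roughly one per configured VirtIO vNIC
ps -ef | grep '\[vhost' | grep -v grep

# Show per-thread CPU usage to spot a vhost thread becoming an I/O bottleneck
top -b -n 1 -H | grep vhost

# Check which CPUs a given vhost thread is currently allowed to run on
taskset -pc <vhost-pid>    # <vhost-pid> taken from the ps output above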
I/O Optimisations: Direct-map PCI (PCI pass-through)
• Physical NICs are directly mapped to a VM
• Bypasses the hypervisor scheduler layer
• The PCI device (i.e. NIC) is no longer shared among VMs
• Typically, all ports on the NIC are associated with the VM
• Unless the NIC supports virtualisation
• Caveats:
• Limits the scale of the number of VMs per blade to the number of physical NICs per system
• Breaks live migration of VMs
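As an illustration only (generic KVM/libvirt, not specific to any NFVIS release), a NIC can be detached from the host and handed to a VM roughly as follows; the PCI address and VM name are placeholders:

# Find the PCI address of the NIC to pass through
lspci -nn | grep -i ethernet

# Detach the device from its host driver (example address 0000:06:00.0)
virsh nodedev-detach pci_0000_06_00_0

# Attach it to the VM via a <hostdev> definition referencing the same PCI address
virsh attach-device <vm-name> hostdev.xml --config

# Verify from inside the guest that the NIC is now visible
lspci | grep -i ethernet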
I/O Optimisations: Single Root I/O Virtualisation (SR-IOV)
with PCIe pass-through
• Allows a single PCIe device to appear to be multiple separate PCIe devices
• The NIC itself supports virtualisation
• Enables network traffic to bypass software switch layers
• Creates physical and virtual functions (PF/VF)
• PF: full-featured PCIe function
• VF: PCIe function without configuration resources
• Each PF/VF gets a PCIe requester ID such that IO memory management can be separated between different VFs
• Number of VFs depends on the NIC (on the order of tens)
• Ports with the same (e.g. VLAN) encap share the same L2 broadcast domain
• Requires support in BIOS/Hypervisor
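A minimal sketch of enabling and inspecting VFs on a Linux host, assuming an SR-IOV-capable NIC and BIOS/IOMMU support (the interface name enp6s0f0 is an example):

# How many VFs does the NIC/driver support?
cat /sys/class/net/enp6s0f0/device/sriov_totalvfs

# Create 4 virtual functions on the physical function
echo 4 > /sys/class/net/enp6s0f0/device/sriov_numvfs

# The VFs show up as additional PCIe devices
lspci | grep -i "virtual function"

# Optional: set a VLAN and MAC on VF 0 before handing it to a VM
ip link set enp6s0f0 vf 0 vlan 100
ip link set enp6s0f0 vf 0 mac 52:54:00:12:34:56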
Enterprise NFV
DEMONSTRATION: ESA

The Current Enterprise Branch Landscape
• Multiple Devices: routers, appliances, servers
• Difficult to Manage: device integration and operation
• Costly to Operate: upgrades, refresh cycles, site visits

The horsemen of the branch apocalypse: what can the system do for me? But a new attacker threatened the business… What if a new defence network can be up in minutes, everywhere at once?

Diagram: multiple offices, each running routing, FW/IPS and WLC as virtual network functions (vnets), driven by central orchestration and automation.
How does it make my life simpler?
It's simple really…

Here is your branch on hardware: router (route/firewall), WLC, wireless, WAN optimisation, proxy/cache as separate appliances.

This is your branch with Cisco Enterprise NFV: routing / path selection, FW/IDS, WAN-Opt, WLC and other vApps running on a KVM virtualisation layer, with life-cycle management, policy enforcement and automation, on top of an operating system, x86 processor, switch, NIC, NIM and BMC.

So why not just put a server at the branch and be done with it?
Managing the Hypervisor

Example topology: VMware vCenter at a central location manages a VMware ESXi host (UCS 240) at a remote store. The host runs a CSR and a Juniper SRX firewall VM connected through a Distributed vSwitch (DvSW-1) and a VMkernel port; the site is reached over an MPLS carrier (VZ) circuit and a local L2 VLAN / LAN switch.

1. VMware vCenter sends packet from central location (East coast)
2. Packet carried over MPLS (VZ) to store (Sunnyvale Lab)
3. Physical Ethernet connected to switch and frame forwarded to VMware Distributed vSwitch (DvSW)
4. DvSW forwards frame to CSR
5. CSR removes MPLS label and forwards to DvSW
6. Forwarded from DvSW to Juniper SRX FW
7. FW forwards to DvSW for VMkernel, going out to the EX and back
8. Packet arrives at VMkernel
Managing the Hypervisor

Same topology, with one /30 from each carrier for the WAN circuit. Now consider changes made in the CSR, in the FW, or in the port channel / VLAN:

• While changes are made to the FW, VLAN assignments, CSR, or FW connectivity, reachability to/from vCenter gets lost and begins to flap
• vCenter sometimes misses confirmation of changes made
• This is an issue, since management of the hypervisor becomes dependent on the stability of the VMs running in it
Managing the Hypervisor

• Virtualisation evolved as a DC technology, where high speed, near-zero latency, and straight IP access existed between the management console and the hypervisor instance
• Applying this to the WAN causes it to break, since managing the hypervisor becomes dependent on a VM and its stability
• This is a fundamental flaw in the architecture of virtualisation
Enterprise NFV Solution Architecture (Phase 1)

• VNF and application hosting: ISRv, ASAv, WAAS, vWLC, vNAM, … , 3rd-party VNFs and applications (App1 … Appn), with 3rd-party support
• NFVIS – the software host managing virtualisation and the hardware: API interface, platform management, hypervisor and virtual switching
• Orchestration and management across virtual and physical network: ESA + APIC-EM + Prime Infrastructure
• Various host options for different branch sizes: ISR-4K + x86 on UCS-E, UCS x86 server, other x86
• Mix of Cisco-supplied and 3rd-party-supplied software
• NFVIS = Network Function Virtualisation Infrastructure Software
ESA + APIC-EM + Prime Infrastructure
Enterprise Service Automation (ESA): Branch Profile Design

Custom-design a branch profile:
1. Upload devices to be shipped
2. Upload the locations
3. Define the branch profile
4. Select functions
5. Pick validated topologies
6. Associate the templates & attributes
7. Map to branch(es)
ESA + APIC-EM + Prime Infrastructure
Orchestration & Management – Day 0/1

Enterprise Services Automation (ESA) drives initial provisioning:
• Serial number and IP for the host; profile-to-serial-number mapping
• Day 0/1 configuration held in the Prime Infrastructure repository
• PnP server on APIC-EM; provisioning over REST
• ESC-Lite instantiates the VNFs (e.g. WAAS, IPS, IP services) on the NFVIS host and vSwitch in the office, connected to the WAN
ESA + APIC-EM + Prime Infrastructure
Orchestration & Management – Day 2

• Day 2 element management (config changes, fault monitoring etc.) is done by Prime Infrastructure, APIC-EM, and VNF-specific element managers (in case of 3rd party, or if the VNF is not supported by PI)
• ESA plays no role in day 2 operations
ESA + APIC-EM + Prime Infrastructure
Best-of-breed Trusted Services from Cisco

Consistent software across physical and virtual:
• ISRv: high performance, rich features, designed for NFV, proven software
• ASAv / FTD*: comprehensive protection, full DC-class featured functionality, end-to-end support
• vWAAS: application optimisation, superior caching with Akamai Connect
• vWLC: survivability & scale, consistency across the data centre and switches, built for small and medium branches, cost-effective with NFV

Windows 2012 and Linux Server also supported
* FirePOWER Threat Defense for ENFV June/July 2016


ESA + APIC-EM + Prime Infrastructure
Optimised for Network Services: Enterprise NFV Infrastructure Software (NFVIS)

• Network Hypervisor: enables segmentation of virtual networks; abstracts CPU, memory, storage resources
• Zero Touch Deployment: automatic connection to the PnP server; secure connection to the orchestration system; easy day 0 provisioning
• Life Cycle Management: provisioning and launch of VNFs; failure and recovery monitoring; stop and restart services; dynamically add and remove services
• Service Chaining: elastic service insertion; multiple independent service paths based on applications or user profiles
• Open API: programmable API for service orchestration; REST and NETCONF API
NFVIS – the POWER under the hood
Virtualisation
• Kernel Virtual Machine (KVM) to abstract service functions from hardware
• Virtual switching (Linux bridges, e.g. br1/br2) provides connectivity between service functions and to physical interfaces
• The API interface, PnP client and platform management sit alongside KVM on the Linux-based Network Function Virtualisation Infrastructure Software
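To make the virtual-switching idea concrete, the sketch below joins a service VM's tap interface and a physical port on a generic Linux bridge; this illustrates the mechanism only and is not the exact way NFVIS names or builds its bridges (bridge, tap and NIC names are placeholders):

# Create a bridge that will act as the LAN-side virtual switch
ip link add name br-lan type bridge
ip link set br-lan up

# Attach the physical LAN port to the bridge
ip link set dev enp2s0 master br-lan

# Create a tap interface for a VNF and attach it to the same bridge
ip tuntap add dev vnf1-tap0 mode tap
ip link set dev vnf1-tap0 master br-lan
ip link set vnf1-tap0 up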


NFVIS – the POWER under the hood
REST (HTTPS) and NETCONF (SSH)
• Register and deploy services
• Configure the platform
• Gather monitoring statistics
• PnP client for zero-touch deployment (ZTD)

Platform Management
• Controlling hardware specifics such as storage, memory, network interface connectivity
• Health monitoring
• Hardware performance features such as SR-IOV (PF = physical function, VF = virtual function)
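As a rough illustration of driving such an API from a script, the call below uses a generic HTTPS client; the host name, credentials, media type and resource path are placeholders and not confirmed NFVIS endpoint names:

# Illustrative only: query a management API over HTTPS with basic auth
# (replace host, user, password and path with the values documented for your release)
curl -k -u admin:password \
  -H "Accept: application/vnd.yang.data+json" \
  https://nfvis.example.com/api/operational/platform-detail

# NETCONF is reachable as an SSH subsystem on the management address, e.g.:
ssh -p 830 -s admin@nfvis.example.com netconf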


NFVIS Local Management – the POWER under the hood

• Enterprise NFV local management capabilities: local Web UI, VM life-cycle manager and programmable APIs
• Components: MANO agents (VNF lifecycle management agent), local WebUI, server platform management, initialization functions, service assurance software agents
• NFV infrastructure (NFVI) building blocks: security (secure boot / code signing), licensing, hardware accelerator SDK, PnP client, platform interface, hardware drivers, Linux, hypervisor, vSwitch, on a Cisco or 3rd-party x86 NFV platform with WAN/LAN network interfaces and hardware accelerators
• Local PnP agent
• Useful if WAN connectivity is unavailable, and for small deployments

All controls are written using public APIs!


Enterprise NFV: Scalable Services Compute Platforms

• The solution supports different form-factors and resources to meet varying demands (UCS C-Series, or ISR-4K with UCS E-Series)
• These provide the physical resources for NFVIS, VNFs and applications
• The Enterprise NFV solution runs on an x86-based host
• UCS-E
• Cisco UCS-C
Enterprise NFV: UCS-220-M4

• Designed for a wide range of workloads
• Dense 1RU modular general compute platform running NFVIS and VMs
• CPU: single/dual socket, 4 to 18 cores each
• Memory: up to 512GB
• Storage: 4 or 8 drives, up to 8TB (RAID 10)
• External interfaces: dual GE on-board, two PCIe slots (quad or dual GE)
• Cisco Integrated Management Controller (CIMC)
ESA + APIC-EM + Prime Infrastructure
An NFV Platform with Modular Options

An x86 host with GE connectivity running NFVIS and VNFs, with an SD-WAN solution (IWAN) built in, along with automation control: orchestration & automation reaches the NFVIS hosts and their VNFs over the WAN and the Internet.
Enterprise NFV: Modular Compute Platform

UCS® E-Series
• Best edge platform: reliable, integrated compute – up to 8 cores, with OIR support
• Revolutionary platform life-cycle: 5-7 years, one support cost, architecture support

Cisco ISR 4000
• Virtualised services framework, native L2-7 services (security, optimisation)
• Appliance-level performance
Enterprise NFV: UCS-E Compute Blade

• Supported in ISR models 4331, 4351, and 4451

UCS-E140S M2
• Processor: Intel Xeon (Ivy Bridge) E3-1105C v2 (1.8 GHz), 4 cores
• Memory: 8-16 GB DDR3 1333MHz
• Storage: 200 GB - 2 TB (2 HDD), SATA, SAS, SED, SSD
• RAID: RAID 0 & RAID 1
• Network ports: internal 2 GE; external 1 GE

UCS-E160D M2
• Processor: Intel Xeon (Ivy Bridge) E5-2418L v2 (2 GHz), 6 cores
• Memory: 8-48 GB DDR3 1333MHz
• Storage: 200 GB - 3 TB (3 HDD), SATA, SAS, SED, SSD
• RAID: RAID 0, RAID 1 & RAID 5
• Network ports: internal 2 GE; external 2 GE; PCIe card: 4 GE or 1 10 GE FCoE

UCS-E180D M2
• Processor: Intel Xeon (Ivy Bridge) E5-2428L v2 (1.8 GHz), 8 cores
• Memory: 8-96 GB DDR3 1333MHz
• Storage: 200 GB - 5.4 TB (3 HDD*), SATA, SAS, SED, SSD
• RAID: RAID 0, RAID 1 & RAID 5*
• Network ports: internal 2 GE; external 2 GE; PCIe card: 4 GE or 1 10 GE FCoE

A new model for late summer CY16 doubles the memory and adds 50% more CPU.
NFVIS Service Chaining – Today

• Each VNF may connect externally and/or to other NFV services (e.g. ISRv, WAAS, ASAv, vWLC on the KVM virtualisation layer)
• The service may be accessed in multiple ways:
• Directly by IP address (e.g. AP control traffic to the vWLC)
• Connected in the packet's forwarding path, or stitching (e.g. DIA traffic)
• Utilising other services to divert packets to it (e.g. optimised traffic diverted to WAAS)
Service Chaining – Connectivity with NSH (Network Services Header)

• NSH will follow to address more advanced needs for service chaining (NSH availability is planned for Phase 2)
• Offers new functionality and a dedicated service plane
• Provides traffic steering capabilities AND metadata passing
• Provides path identification, loop detection, service hop awareness, and service-specific OAM capabilities

How it works:
• Policy is sent from the orchestrator/controller to the Service Classifier
• Inbound packets are classified and encapsulated (an NSH header is added to the IP packet)
• The packet is forwarded to the VNFs (ISRv, WAAS, ASAv, vWLC, …) according to policy


Example Packet Flows: LAN -> WAN (UCS host)

1. Frame arrives on the LAN GE port with the CSR's MAC address as destination
2. The GE port is bridged to the NFVIS vSwitch
3. BR0 of the vSwitch connects to the ASAv
4. The ASAv processes the frame and sends it back to the vSwitch
5. vSwitch BR1 connects to the CSR
6. The CSR sends the frame back to BR1 with destination vWAAS
7. vWAAS processes (compresses) the packet and sends it back to the CSR via BR1
8. The CSR routes the frame to the WAN GE port
UCS Packet Flow: ARP by LAN Endpoint to WLC

1. ARP request sent by the endpoint into GE1
2. ARP passed by the GE port into BR0
3. ARP flooded out all ports: it reaches all interfaces, VNFs and applications connected to BR0
4. One of the ARP copies also passes to the ASAv
5. The ASAv forwarding path (transparent mode) forwards the ARP to BR1
6. ARP flooded in BR1 to reach the ISRv and WAAS
ISR + UCS-E Architecture – Enterprise NFV

• L3/L4 transport is always done in the ISR4K (IOS-XE FFP data plane + IOSd)
• WAN: NIM module (4G, T1s, etc.) or on-board GE
• LAN: GE via the MGF
• Model 1: UCS-E LAN
• Model 2: UCS-E LAN + NIM LAN
• The UCS-E blade runs NFVIS (KVM) with VNFs such as WLC, ASAv and Windows/Linux guests, connected through BR0/BR1 on the vSwitch and the internal NIC
• On-board virtualisation on the ISR-4K adds Snort or WAAS in a service container
UCS-E Packet Flow: Go-Through, LAN (UCS-E) <-> WAN

• Service path example: ASAv -> WAAS -> IOS XE
• WAAS insertion is done via AppNav
• LAN is connected to the UCS-E
• Traffic is WAN-optimised between the WAN interface and the WAAS VNF in the service container
UCS-E Packet Flow: Go-Through, LAN (ISR4K) <-> WAN

• Service chain example: ASAv -> WAAS -> IOS XE
• WAAS insertion is done via AppNav
• LAN is connected to the NIM
• Traffic is WAN-optimised between the WAN interface and the WAAS VNF in the service container
DEMONSTRATION: Local
GUI

Conclusion
Key Conclusions
1. Network Function Virtualisation is rapidly maturing and enabling first use-cases
TODAY for enterprise network functions
• Virtualisation of control plane functions
• Cloud-based network services
2. Virtualisation of enterprise network functions enables new architectural
approaches leading to potential CAPEX and OPEX savings
• No clear benefit from replacing existing transport infrastructure solutions just for the sake of it
• Orchestration and management are put into the spotlight
3. Architectural details both at the system and network level need to be well
understood and examined
• E.g. Service Chaining
Call to Action
• Visit the World of Solutions for
• Cisco Campus
• Walk in Labs

• Meet the Engineer


• Cisco Live Berlin Sessions
• BRKSPG-2063: Cisco vBNG Solution with CSR 1000v and ESC Orchestration
• LTRVIR-2100: Deploying Cisco Cloud Services Router in Public and Private Clouds
• BRKCRS-1244: SP Virtual Managed Services (VMS) for Intelligent WAN (IWAN)

• DevNet Zone
Complete Your Online Session Evaluation
• Complete your session surveys
through the Cisco Live mobile
app or your computer on
Cisco Live Connect.

Don’t forget: Cisco Live sessions will be available


for viewing on-demand after the event at
CiscoLive.com/Online
Thank you
Appendix A
Virtualisation Trade-Offs
and Research Topics
Main Trade-off and Research Areas
1. Cost of the virtualisation solution as a function of performance (CAPEX / OPEX)
2. Trading off performance for virtualisation flexibility
• Tuning performance may impact virtualisation elasticity
3. Architectural considerations
• Capacity planning, service function chains?
• Orchestration solution?
• High-availability requirements?
(Trade-off triangle: CAPEX/OPEX, Architecture, Performance)
Cost / Performance Trade-offs
• CAPEX Viability for virtualisation may require a minimum VM-packing density on
a server
• How many VMs can be deployed simultaneously to achieve a certain CAPEX goal?
• Particularly applicable for Cloud deployment architectures

• What are cost effective deployment models?


• Mixing of application VMs and VNFs on the same hardware?
• Single-tenant / Multi-tenant?
• Hypervisor type?
• Hyperthreading?
• SLA guarantees and acceptable loss rates?
• High-availability requirements and architectures?
Architectural
Considerations
Differences between Cloud and Branch Virtualisation Use-Cases

Cloud / DC (e.g. VDI, ERP, DB, DPI and Windows workloads on UCS):
• Focus on cloud orchestration and virtualisation features
• A mix of applications and VNFs may be hosted in the cloud
• Horizontal scaling -> smaller VM footprints
• Dynamic capacity & usage- / term-based billing

Branch (e.g. firewall and IPS on UCS behind the WAN):
• Focus on replacing hardware-based appliances
• Typically smaller x86 processing capacity in the branch
• Virtualised applications (firewall, NAT, WAAS, …) may consume a large proportion of the available hardware resources
• Larger VM footprints
• Cloud orchestration and automation has to be distributed over all branches
• Integration with existing OSS desirable for migration
Single-Branch vs. Multi-Branch VM Deployments

• Deployment of multi-tenant VMs can significantly improve the business case


– Leverage multi-tenancy feature set in IOS XE on CSR 1000v
• Leverages different footprint sizes of CSR 1000v, for example
– Deploy small footprint for single-branch & large footprint for multi-branch
• BUT:
– comes with a different operational model (Need to consider multi-tenancy for on-
boarding a new branch)
– Has different failure-radius implications

CSR 1000v as multi-tenant vCPE - Example
• Multi-tenant CSR 1000v deployed for 'vanilla' branches requiring 5 Mbps each
• Single-tenant CSR 1000v deployed for high-end branches requiring 50 Mbps each
– Note that the 44-VM scenario (Profile 2) is oversubscribed; however, the max bandwidth requirement per VM is only 50 Mbps

Profile 1 (multi-tenant): 1vCPU CSR – 400 Mbps; 200 VRFs @ 5 Mbps/VRF; QoS, DHCP server, static route, IP SLA, SNMP
Profile 2 (single-tenant): 1vCPU CSR – 50 Mbps; QoS, DHCP server, OSPF, IP SLA, IGMPv2, PIM SM, SNMP, ACL

Number of VM instances / server chassis: Profile 1 = 20, Profile 2 = 44
Number of branches / VNF instance: Profile 1 = 40, Profile 2 = 1
Total number of branches / server blade: Profile 1 = 800, Profile 2 = 44
Total aggregate bandwidth / server chassis: Profile 1 = 8 Gbps, Profile 2 = 2.2 Gbps
VNF High-Availability Architecture Considerations

Traditional networking: make all critical network services highly available
• Active-standby or active-active redundancy models
• Stateful redundancy for NAT, firewall (i.e. stateful services)
• Adds architectural complexity (HSRP, NSR, stateful HA features…)

Does a virtualised environment need HA?
• Depends on the PIN
• Branch: YES
• Cloud: MAYBE – can rely on reload / re-boot of VMs, as this happens much faster
• Also a function of VM scope (cf. single-branch VNFs)
Performance Aspects for
VNF Deployments
Performance Aspects for VNF Deployments

• Throughput / SLAs for VNFs are determined by a multitude of factors


• System architecture, in particular I/O
• Hypervisor type (VMWare ESXi, KVM, Microsoft HyperV, Citrix XEN..)

• Throughput can be increased significantly by hypervisor tuning and the use of


direct-I/O techniques
• Need to determine
• How many VMs to run on a server blade
• Acceptable frame loss rates
Hypervisor Impacts on Performance
• VMware ESXi and KVM schedulers can perform in the same order of magnitude with tuning
• BUT: tuning recommendations need to be applied, especially for KVM
• Most impactful tuning: I/O optimisations (e.g. VM-FEX, SR-IOV)
• KVM currently shows bottlenecks when un-tuned
• A descriptor ring restriction in KVM limits performance improvements for larger vCPU VMs

CSR 1000v IOS XE 3.16 single-feature throughput in Gbps (IMIX, 0.01% FLR, C240 M3, Cisco internal testing):

ESXi (1vCPU / 2vCPU / 4vCPU):
• CEF: 2.5 / 2.9 / 2.2
• ACL: 2.2 / 2.8 / 2.3
• NAT: 1.4 / 2.4 / 2.1
• Firewall: 1.7 / 2.7 / 2.4
• QoS: 2.4 / 3.0 / 2.3
• HQoS: 1.5 / 1.8 / 1.4
• IPSec single AES: 0.5 / 0.8 / 1.1
• IPSec crypto map: 0.1 / 0.2 / 0.2

KVM (1vCPU / 2vCPU / 4vCPU):
• CEF: 3.0 / 2.9 / 2.0
• ACL: 2.7 / 3.0 / 2.2
• NAT: 1.9 / 2.0 / 1.9
• Firewall: 2.2 / 2.3 / 1.9
• QoS: 2.6 / 2.5 / 2.0
• HQoS: 2.1 / 1.7 / 1.5
• IPSec single AES: 0.7 / 0.8 / 1.0
• IPSec crypto map: 0.2 / 0.2 / 0.2
REFERENCE
KVM Performance Tuning Recommendations
• Use a direct-path I/O technology (SR-IOV with PCIe pass-through) together with the CPU tuning below. Otherwise the following configurations are recommended:

CPU tuning:
• Disable Hyperthreading – can be done in the BIOS
• Find the I/O NUMA node – cat /sys/bus/pci/devices/0000:06:00.0/numa_node
• Enable isolcpus – check the topology with "numactl -H"
• Pin vCPUs – 'sudo virsh vcpupin test 0 6'
• Set the CPU in performance mode – run /etc/init.d/ondemand stop
• Set the processor into pass-through mode – virsh edit <vm name>, add <cpu mode='host-passthrough' />
• Enable / disable IRQ balance – "service irqbalance start" / "service irqbalance stop" (NOTE: only if IRQ pinning is done!)
• NUMA-aware VM – edit the VM config via virsh edit <VM name>: <vcpu placement='static' cpuset='8-15'>1</vcpu>
• IRQ pinning – find the specific NIC interrupt number in /proc/interrupts; set affinity to a different core than those pinned for vCPU and vHost
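A minimal sketch that strings the CPU-related steps above together for one VM; the VM name, core numbers and PCI address are examples and must be adapted to the actual host topology:

# Which NUMA node owns the NIC we care about?
cat /sys/bus/pci/devices/0000:06:00.0/numa_node
numactl -H                      # show sockets/cores per NUMA node

# Pin the VM's vCPUs to dedicated physical cores on that node
virsh vcpupin csr1000v-1 0 6
virsh vcpupin csr1000v-1 1 7

# Run the host CPUs at full clock (disable the ondemand governor)
/etc/init.d/ondemand stop

# Expose the host CPU model to the guest: add inside the VM definition
#   <cpu mode='host-passthrough'/>
virsh edit csr1000v-1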
REFERENCE
KVM Performance Tuning Recommendations (cont.)

I/O tuning:
• Pin vHost processes – 'sudo taskset -pc 4 <process number>', where the process number is found using 'ps -ef | grep vhost'
• Change the vnet tx queue length to 4000 (default is 500) – 'sudo ifconfig vnet1 txqueuelen 4000'
• Turn off TSO, GSO, GRO – 'ethtool -K vnet1 tso off gso off gro off'
• Physical NIC configuration – change rx interrupt coalescing to 100 for the 10G NICs

Linux tuning:
• Disable KSM – echo 0 > /sys/kernel/mm/ksm/run (NOTE: this setting may impact the number of VMs that can be instantiated on a server / blade)
• Disable memballoon – virsh edit <VM>, find memballoon in the VM config file and change it to <memballoon model='none'/>
• Disable ARP/IP filtering on bridges:
sysctl -w net.bridge.bridge-nf-call-arptables=0
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-ip6tables=0
• Optional Linux tuning:
sysctl -w net.core.netdev_max_backlog=20000
sysctl -w net.core.netdev_budget=3000
sysctl -w net.core.wmem_max=12582912
sysctl -w net.core.rmem_max=12582912
service iptables stop (if you don't want the Linux firewall)

NOTE: Tuning steps are most impactful for a small number of VMs instantiated on a host; the tuning impact diminishes with a large number of VMs.
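A companion sketch for the I/O and Linux items; vnet1 and the vhost PID are placeholders discovered at runtime:

# Find the vhost thread serving a given VM's vNIC and pin it to core 4
ps -ef | grep vhost
taskset -pc 4 <vhost-pid>

# Deepen the tap/vnet transmit queue and disable segmentation offloads
ifconfig vnet1 txqueuelen 4000
ethtool -K vnet1 tso off gso off gro off

# Disable kernel same-page merging and bridge netfilter hooks
echo 0 > /sys/kernel/mm/ksm/run
sysctl -w net.bridge.bridge-nf-call-arptables=0
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-ip6tables=0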
Sample Results of Different Performance Improvements
• Quantitative impact of various hypervisor tuning steps (KVM + Ubuntu with OVS, 2 vCPU CSR 1000v, XE 3.12 engineering image, IMIX traffic, UCS 220 2.7 GHz, 0.01% FLR)

Average throughput relative to the untuned default with hyperthreading (= 100%):
• Hyperthreading off: 145%
• vCPU pinning only: 174%
• txqueuelen of 4000 only: 509%
• txqueuelen of 4000 + vCPU pinning + vHost pinning + tx/rx offloads off + hyperthreading off: 952%
SR-IOV Virtualisation Caveats

• The following features are not available for virtual machines configured with SR-IOV:
• vSphere vMotion
• Storage vMotion
• vShield
• NetFlow
• VXLAN Virtual Wire
• vSphere High Availability
• vSphere Fault Tolerance
• vSphere DRS
• vSphere DPM
• Virtual machine suspend and resume
• Virtual machine snapshots
• MAC-based VLAN for passthrough virtual functions
• Hot addition and removal of virtual devices, memory, and vCPU
• Participation in a cluster environment
• Network statistics for a virtual machine NIC using SR-IOV passthrough
VMWare ESXi Fault Tolerance Caveats

• Only works for 1 vCPU VMs
• Fault Tolerance is not supported or is incompatible in combination with:
• Snapshots
• Storage vMotion
• Linked clones
• VM backups
• Virtual SAN
• Symmetric multiprocessor VMs
• Physical raw disk mapping
• Paravirtualized guests
• NIC passthrough
• Hot-plugging devices
• Serial or parallel ports
• IPv6
• …
ESXi + vSwitch Full Subscription (XE 3.13)

• Not tuning ESXi can lead to


performance degradations as
VMs are added on a server
• vSwitch maxes out between 3 Gbps
and 4 Gbps
• Highlights importance of direct I/O
techniques for full-subscription
• For a detailed study, see the
latest EANTC report
• http://www.lightreading.com/nfv/nfv-tests-and-
trials/validating-ciscos-nfv-infrastructure-pt-1/d/d-
id/718684?
Full Subscription Results under KVM+RH for NAT
and IPSec for IOS XE 3.16
• Graphs show total and average (per-VM)
throughput under a fully-loaded server
• NAT+QOS+ACL
• IPSec+QoS+ACL
• Adding VMs to a host does not
contribute linearly to system throughput
• OR: average per-VM throughput
declines as additional VMs are added
• Marginal differences between hyper-
threading on and off!
• Results are similar for 1vCPU CSR
1000v footprints
• Underlying OVS bottleneck not reached!
CSR 1000v vBRAS Throughput
• The CSR 1000v reaches 40 Gbps in a vBRAS configuration (VM-FEX, 1-8 VMs, FLR 0.001%, broadband traffic mix, 10% up / 90% down, IOS XE 3.16)
• Offered load of 5 Gbps per VM on average (a test design choice, not a CSR 1000v VM limit)
• Multiple VMs scale control and data plane in unison: system throughput grows from 5 Gbps / 676 kpps with 1 VM to 40 Gbps / 5408 kpps with 8 VMs
• Overall server utilisation is about 24% during the test (measured with mpstat), which translates to somewhere between 8-9 cores being utilised (out of 36)
• With most test iterations, periodic ingress buffer drops per VM were observed – overall the number of drops was <0.001%
Loss Rate Interpretation - Background
• Performance results vary depending on what acceptable frame loss is defined. Typical definitions for loss rates (FLR) range from:
• Absolutely 0 packets lost -> non-drop rate
• 5 packets lost
• 0.1% of PPS lost
• A small relaxation of the FLR definition can lead to significantly higher throughput
• Typically FLR test data is reported for 5 packets lost (to account for warm-up) with multiple consecutive 1-minute runs

Chart: throughput as a function of acceptable traffic loss (normalised, KVM, XE 3.13); as the acceptable loss per VM grows from 0% to 0.75%, throughput rises to roughly 160-180% of the non-drop rate.
REFERENCE
Determination of Desired Frame Loss Rate
• Throughput can be affected by the definition of acceptable loss rates
• Tests measure the % of dropped traffic for various traffic loads
– Offer a traffic load -> observe the loss -> reduce the offered load until the desired loss rate is reached
• BUT: it is difficult to get consistent data across multiple runs. How should the right loss rate be interpreted?
• Example (sample data):
– Highest rate at which an LR of 0.01% appears -> 475 Mbps
– Lowest rate below which an LR of 0.01% is ALWAYS observed -> 374 Mbps
– Loss rate 'violations' at {445, 435, 414, 384} Mbps
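A minimal sketch of the load-reduction search described above; measure_loss_pct is a stand-in for whatever traffic generator actually drives the test and is not a real tool:

#!/bin/bash
# Step an offered load down until the observed loss rate meets the target FLR.
target_flr=0.01        # acceptable loss in percent
rate_mbps=500          # starting offered load
step_mbps=10

measure_loss_pct() {   # placeholder: run the traffic generator at $1 Mbps and print the loss %
  echo "0.02"
}

while [ "$rate_mbps" -gt 0 ]; do
  loss=$(measure_loss_pct "$rate_mbps")
  # bc prints 1 when the comparison is true
  if [ "$(echo "$loss <= $target_flr" | bc -l)" -eq 1 ]; then
    echo "Result: ${rate_mbps} Mbps at ${loss}% loss"
    break
  fi
  rate_mbps=$((rate_mbps - step_mbps))
done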
Appendix B
Glossary
Abbreviation Description Abbreviation Description Abbreviation Description
ACL Access Control List CFS Completely Fair scheduler DPM Distributed Power Management
ANCP Access Node Control Protocol CFS Customer Facing Services DRS Dynamic Resource Scheduling
ARP Address Resolution Protocol CGN Carrier Grade NAT DSCP DiffServe code Point
ASA Application Security Appliance CLI command Line Interface Eap Extensible Authentication Protocol
AVC Application Visibility & Control CM Chassis Manager (in IOS XE) EOAM Ethernet OAM
BFD Bidirectional Forwarding Detection CoA RADIUS Change of Authorization ESA Email Security Appliance
BPDU Bridge Protocol Data Unit COS Class of Service ESC Elastic Services Controller
BRAS Broadband Remote Access Server COTS Common off-the-shelf ESXi VMWare hypervisor
BSS Business support system CPS Calls per second EVC Ethernet Virtual Circuit
CAPEX Capital Expenditures DC Data Center F/D/C Fibre / DSL / Cable
CDP Cisco Discovery Protocol DCI Data Center Interconnect FFP Fast Forwarding Plane (data plane in IOS XE)
CE Carrier Ethernet DHCP Dynamic host configuration Protocol FLR Frame Loss Rate
CE Customer Edge DNS Domain Name System FM Forwarding Manger (in IOS XE)
CEF Cisco Express Forwarding DPDK Data Path Development Kit FSOL First Sign of life
CFM Configuration and Fault Management DPI Deep Packet Inspection FT Fault Tolerance
Glossary
Abbreviation Description Abbreviation Description Abbreviation Description
FW Firewall IPoE IP over Ethernet LRO Large Receive Offload
GRE Generic Route Encapsulation IPS Intrusion Prevention System MC PfR Master Controller
GRT Global routing table IRQ Interrupt Request MP-BGP Multiprotocol BGp
GSO Generic Segmentation Offload ISG Intelligent Services Gateway MPLS EXP Multi-Protocol Label Switching EXP field
GTm Go-to-market ISG TC ISG Traffic class MS/MR LISP Map Server / Map Resolver
HA High Availability IWAN Intelligent WAN (Cisco Solution) MSP Managed Service Provider
HQF Hierarchical Queueing Framework KSM kernel same-page merging MST Multiple Spanning Tree
HQOS Hierarchical QOS KVM Kernel Virtual Machine NAT Network Address Translation
HSRP Hot Standby Routing Protocol L2TPv2 Layer 2 Transport Protocl version 2 NB Northbound
HT Hyperthreading LAC L2TP Access Concentrator NE Network Element
HV Hypervisor LAG Link Aggregation NF netflow
I/O Input / Output LB Loadbalancer NfV Network Function Virtualization
IDS Intrusion Detection System LCM Life-cycle manager (for VNFs) NFVI NFV Infrastructure
IP SLA IP Service Level Agreements LNS L2TP Network Server NFVO NFV Orchestrator
IPC inter-process communication LR Loss Rate NIC network Interface card
Glossary
Abbreviation Description Abbreviation Description Abbreviation Description
NID Network Interface Device PnP Plug and Play RSO Receive Segmentation Offload
NSO Network Services Orchestration POF Prime Order Fulfilment Rx Receive
NUMA non-uniform memory access PoP Point of presence SB Southbound
NVRAM Non-volatile Random Access Memory PPE Packet Processing Engine SBC Session Border Controller
OAM Operations, Administration and Maintenance PPS Packets per Second SC Service Chaining
OPEX Operational Expenditures PSC Prime Services Catalog SDN Software Defined Networking
OS OpenStack PTA PPP Termination and Aggregation SF Service Function (in SFC Architecture)
OSS Operations Support System PW Pseudowire SFC Service Function Chaining
OVS Open Virtual Switch PxTR Proxy Tunnel Router (LISP) SFF Service Function Forwarder (in SFC Architecture)
PBHK Port Bundle host key (ISG feature) QFP Quantum Flow Processor ( SGT Security Group Tag
PE Provider Edge QOS Quality of Service SIP SPA Interface Processor
PF Physical Function (in SR-IOV) RA Remote Access SLA service level agreement
PfR Performance Routing REST Representational State Transfer SLB Server Loadbalancing
PMD Pull mode driver RFS Resource Facing Services SMB small and medium Business
pNIC Physical NIC RR Route Reflector SNMP Simple Network Management Protocol
Glossary
Abbreviation Description Abbreviation Description Abbreviation Description
SP Service Provider VM Virtual Machine WAN Wide Area Network
SPA Shared Port Adapter vMS Virtual Managed Services WLAN Wireless LAN
SR-IOV single Root I/O virtualization VNF Virtual Network Function WLC Wireless LAN Controller
TCO Total Cost of Ownership VNFM VNF Manager WRED weighted random Early Detection
TOS Type of Service vNIC virtual NIC ZBFW Zone-based firewall
TPS transparent page sharing VPC Virtual Private Cloud ZTP Zero touch provisioning
TSO TCP Segmentation Offload vPE-F virtual PE Forwarding instance
TTM Time-to-market VPLS Virtual Private LAN service
UC Unified communication VPN Virtual Private Network
vCPE virtual CPE VRF virtual routing and forwarding
vCPU virtual CPU vSwitch virtual Switch
VF virtual Function (in SR-IOV) VTC Virtual Topology controller
vHost virtual host VTF Virtual Topology Forwarder
VIM Virtual Infrastructure Managers VTS Virtual Topology System
VLAN virtual Local area network WAAS Wide Area Application Services
Appendix C
Cisco ASAv Firewall and Management Features
Cisco® ASA Feature Set (ASAv10 / ASAv30)
 Parity with all other Cisco ASA platform features
 10 vNIC interfaces and VLAN tagging
 Virtualization displaces multiple-context mode and clustering (both removed)
 SDN (Cisco APIC) and traditional (Cisco ASDM and CSM) management tools
 Dynamic routing includes OSPF, EIGRP, and BGP
 IPv6 inspection support, NAT66, and NAT46/NAT64
 REST API for programmed configuration and monitoring
 Cisco TrustSec® PEP with SGT-based ACLs
 Zone-based firewall
 Equal-Cost Multipath
 Failover Active/Standby HA model
Protection Across the Attack Continuum with FirePOWERv

Attack Continuum: BEFORE (Discover, Enforce, Harden) / DURING (Detect, Block, Defend) / AFTER (Scope, Contain, Remediate)

• Visibility into virtual machine network communications
• Virtual machine discovery
• Enforce application policy
• Protect VMs even as they migrate across hosts
• Access control to segment security zones
• Intrusion prevention without hairpinning
• Single pane-of-glass across physical and virtual networks
• Automated response via integration with platform security controls
Virtual IPS Appliances

FirePOWERv
• Deployed as a virtual appliance
• Inline or passive deployment
• Full NGIPS capabilities
• Add-on capabilities: Control, Advanced Malware Protection, URL Filtering

Virtual Defense Center
• Deployed as a virtual appliance
• Manages up to 25 sensors, physical and virtual
• Single pane-of-glass
Virtualised WAAS
• Hypervisors: ESXi, Hyper-V, KVM
• Interception methods: AppNav, WCCP
• Platforms: UCS or other x86, service container on ISR-4000, UCS-E (ISR 4000 Series + UCS E-Series)
• Platform variants (TCP connection scale): vWAAS-200, -750, -1300, -2500, -6000, -12000, -50000
Branch Office - Local WLAN Controller
Overview
• Branches can have local controllers (vWLC, Cat-3850 or WLC25xx), with a backup central controller at the central site reachable over CAPWAP across the WAN
• Suited to small or mid-size branches

Advantages
• Cookie-cutter configuration for every branch site
• Independence from WAN quality
• FlexConnect mode: on premise or data centre
Virtualisation of
Transport/Forwarding
Enterprise Virtualisation Models
Transport Functions
• Virtualisation of transport plane functions
– L3 routing and packet forwarding
– Packet divert
• Can be on-premise (CSR 1000v on a hypervisor / virtual switch in a VPC/vDC), in larger Enterprise WAN PoPs, or in the cloud, serving shared services, WAN and campus
• IOS XRv
• CSR 1000v
• Virtual router forwarding engine
• AppNav clustering (WAAS)
• WCCP/PBR
• NSH*

* NSH estimate is for July/August 2016
Example: AX Transport and CSR 1000v
• CSR 1000v using AppNav for service insertion
• Branch ISRs connect over the WAN (ASR aggregation) and the Internet; the CSR terminates the branch VRFs (e.g. 10.1.1.1 in VRF A and VRF B) and uses AppNav to divert traffic to vWAAS instances
Q&A
Complete Your Online Session Evaluation
Give us your feedback and receive a
Cisco 2016 T-Shirt by completing the
Overall Event Survey and 5 Session
Evaluations.
– Directly from your mobile device on the Cisco Live
Mobile App
– By visiting the Cisco Live Mobile Site
http://showcase.genie-connect.com/ciscolivemelbourne2016/
– Visit any Cisco Live Internet Station located
throughout the venue
Learn online with Cisco Live!
Visit us online after the conference for full access to session videos and presentations: www.CiscoLiveAPAC.com
T-Shirts can be collected Friday 11 March at Registration.
Thank you
Appendix
Hypervisor Traversal Tax: Example KVM with OVS

• KVM with OVS consumes a vHost thread per configured VM interface
• The vHost thread is very CPU intensive and requires a dedicated physical core
• On a 16-core server, this allows only 3 CSR 1000v (2 vCPU, 2 interfaces each):
– Cores for CSR: 6
– Cores for vPE-F: 2
– Cores for vHost: 6
– Free: 2 (may not be fully utilised!)
– Hypervisor traversal tax = 8/16 = 50%
• Should be considered when service chaining
Hypervisors vs. Linux Containers
Containers share the OS kernel of the host and thus are lightweight; however, each container must have the same OS kernel. Containers are isolated, but share the OS and, where appropriate, libs/bins.

• Type 1 hypervisor: apps + bins/libs + guest operating system per virtual machine, with the hypervisor directly on the hardware
• Type 2 hypervisor: as above, but the hypervisor runs on a host operating system
• Linux Containers (LXC): apps + bins/libs per container, sharing one operating system on the hardware