BRKCRS-3447
Virtualisation for
Enterprise Networks
James Sandgathe - Engineer, Technical Marketing
Enterprise Infrastructure and Solutions Group
Abstract
Network Function Virtualisation (NfV) is gaining increasing traction in the industry
based on the promise of reducing both CAPEX and OPEX using COTS hardware.
This session introduces the use-cases for virtualising Enterprise network
architectures, such as virtualising branch routers, LISP nodes, IWAN
deployments, or enabling enterprise hybrid cloud deployments. The session also
discusses the technology of virtualisation from both a system architecture and a
network architecture perspective. Particular focus is given to understanding
the impact of running routing functions on top of hypervisors, as well as the
placement and chaining of network functions. Performance of virtualised
functions is also discussed.
Agenda BRKCRS-3447
• Introduction & Motivation
• Deployment Models and Characteristics
• The Building Blocks of Virtualisation
• Introducing Enterprise NFV
• Demonstration - NFVIS Orchestration
• Demonstration - ESA Orchestration
• Conclusion
Some additional points …
Cisco launches Enterprise NFV
http://www.cisco.com/go/enfv
• AT&T
• BT
• CenturyLink
• China Mobile
• Colt
• Deutsche Telekom
• KDDI
• NTT
• Orange
• Telecom Italia
• Telstra
• Verizon
• Others TBA…
What is NFV? A Definition
"NFV decouples network functions such as NAT, firewall, DPI, IPS/IDS, WAAS, SBC, RR, etc. from proprietary hardware appliances, so they can run in software."
"It utilises standard IT virtualisation technologies that run on high-volume server, switch and storage hardware to virtualise network functions."
"It involves the implementation of network functions in software that can run on a range of industry-standard server hardware, and that can be moved to, or instantiated in, various locations in the network as required, without the need for installation of new equipment."
Sources:
https://www.sdncentral.com/which-is-better-sdn-or-nfv/
http://portal.etsi.org/nfv/nfv_white_paper.pdf
Motivation for Virtualising Network Functions
CAPEX
• Deploy on standard x86 servers
• Economies of scale
• Service Elasticity
• Simpler architectural paradigm
• Changes in management access?
• Changes in HA?
• Best-of-breed
Motivation for Virtualising Network Functions
OPEX
• Reduction of number of network elements
• Reduction of on-site visits
• Leveraging Virtualisation benefits
• Hardware oversubscription, vMotion, ..
• Increased potential for automated network operations
• Re-alignment of organisational boundaries
Deployment Models and
Characteristics
Virtualisation Architecture Taxonomy
Classifying the Architecture
Network Functions/Services
• L3-L7 Services
• DPI, NAT, Compression
Virtualisation Architecture Taxonomy
Placement and Location
Cloud
• Any number of virtual instances
• Many traffic volumes
• Location agnostic
Branch
• Virtual Machines
• Network element hosting/appliances
• General purpose servers
Virtualisation – Architecture
Differences In Data Centre and Branch
[Chart: DIMM cost versus capacity for 8 GB, 16 GB, 32 GB and 64 GB DIMMs]
Virtualisation – Architecture
Cost Impact to Scaling Compute
[Chart: CPU cost per core - ASR1001 & ASR1002-X (8 GB and 16 GB) versus CSR1000v (8 GB and 16 GB)]
Branch Virtualisation: Cloud Options

L3 Private-cloud Branch (1:1)
• The L3 router remains in the branch but performs minimal functions (routing, QoS, FW, NAT)
• L4-7 services are virtualised in the private cloud
• The branch router is tightly coupled with a virtual router in the private cloud for services

Fully virtualised Branch
• The physical router is replaced by x86 compute
• Both transport and network services are virtualised
• VNFs could be multi-vendor (best of breed)
The Building Blocks of
Virtualisation (Today)
ETSI NFV Reference Architecture
[Diagram: OSS/BSS and the service, VNF and infrastructure description on the left; NFV Management and Orchestration on the right (Orchestrator, VNF Manager(s), Virtualised Infrastructure Manager(s)); NFVI below (virtual computing, storage and network over a virtualisation layer on compute, storage and network hardware); reference points Os-Ma, Se-Ma, Or-Vnfm, Vi-Vnfm, Vn-Nf, Nf-Vi and Vl-Ha connect the blocks]
Architecture Building Blocks: Enterprise Virtualisation
• Orchestration and Management
• Virtual Network Functions
• Hypervisors / Containers
• A transport network
• Physical hardware
• x86 servers
• Virtualisation-capable routers

Security VNF Examples
• Virtual Zone-Based Firewall (CSR1Kv)
• Virtual ASA Firewall (ASAv)
• vNGIPS (SourceFire)
• NAT (CSR1Kv)
• IPSec and SSL VPN (ASAv)
• IPSec VPNs (Flex, Easy, GET) (CSR1Kv)
• Web Security (vWSA)
• E-Mail Security (vESA)
• DMVPN (CSR1Kv)
• SSL VPN (CSR1Kv)
• Deep Packet Inspection (CSR1Kv)
• Identity Services Engine (vISE)
Scheduling CSR VMs on a Hypervisor
• Example: three CSR 1000v VMs scheduled on a 2-socket, 8-core x86 server
• Different CSR footprints shown (1vCPU and 2vCPU VMs, each with IOS, Fman/PPE, CMan, an HQF packet scheduler, Rx IRQ, vNICs and a VM kernel)
• Type 1 hypervisor: no additional Host OS represented
• The HV scheduler algorithm governs how vCPU/IRQ/vNIC/VMKernel processes are allocated to pCPUs
• Note the various schedulers (guest OS schedulers, packet schedulers, HV scheduler) running ships-in-the-night
Virtual Switches / Bridges
• Virtual switches ensure connectivity between physical interfaces and Virtual Machines
• Can have multiple vSwitches per host
• May have L2 restrictions (some vSwitches are switches in name only)
• May impact performance
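As a concrete sketch of the simplest case above, a Linux bridge acting as the vSwitch can connect a physical uplink to a VM's tap port using iproute2. The interface names (br0, eno1, tap-vm1) are examples, not from the session:

```shell
# Create a bridge to act as the vSwitch
ip link add name br0 type bridge
ip link set dev br0 up
# Attach the physical NIC as the uplink port
ip link set dev eno1 master br0
ip link set dev eno1 up
# Create a tap device to back a VM's vNIC and attach it to the bridge
ip tuntap add dev tap-vm1 mode tap
ip link set dev tap-vm1 master br0
ip link set dev tap-vm1 up
```

Note that such an in-kernel bridge is a switch "in name only": it learns MACs and floods unknown unicast, but lacks the VLAN/ACL/monitoring features of a fuller vSwitch.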
I/O Architecture
Virtualising I/O – KVM Architecture Example
• The hypervisor virtualises the NIC hardware to the multiple VMs
• The hypervisor scheduler is responsible for ensuring that I/O processes are served
• There is a single instance of physical NIC hardware, including queues, etc.
• Many-to-one relationship between the VMs' vNICs and the single physical NIC
• One vHost/VirtIO thread is used per configured interface (vNIC)
• A packet is copied several times on the path: guest application, I/O driver (e.g. VirtIO), vNIC (vHost/QEMU in user space), tap, virtual switch / Linux bridge, pNIC driver in the host kernel
• Caveats:
• Limits the scale of the number of VMs per blade to the number of physical NICs per system
• Breaks live migration of VMs
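To make the VirtIO/vHost path concrete, a VM could be launched so that each vNIC is a VirtIO device backed by an in-kernel vhost thread and a tap port. This is an illustrative invocation, not from the session; the image and interface names are placeholders:

```shell
# One -netdev/-device pair per vNIC; vhost=on moves the VirtIO data path
# into a kernel vhost worker (one thread per configured interface).
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
  -drive file=csr1000v.qcow2,if=virtio \
  -netdev tap,id=net0,ifname=tap-vm1,script=no,downscript=no,vhost=on \
  -device virtio-net-pci,netdev=net0,mac=52:54:00:00:00:01
```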
I/O Optimisations: Single Root I/O Virtualisation (SR-IOV)
with PCIe pass-through
• Allows a single PCIe device to appear as multiple separate PCIe devices
• The NIC itself supports virtualisation
• Enables network traffic to bypass the software switch layers
• Creates physical and virtual functions (PF/VF)
• PF: full-featured PCIe function
• VF: PCIe function without configuration resources
• Each PF/VF gets a PCIe requester ID so that I/O memory management can be separated between different VFs
• The number of VFs depends on the NIC (on the order of tens)
• Ports with the same encapsulation (e.g. VLAN) share the same L2 broadcast domain
• Requires support in the BIOS and hypervisor
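On a Linux host, VFs are typically created and inspected through sysfs. A minimal sketch (the interface name enp6s0f0 and the VF count are examples; the limit depends on the NIC):

```shell
# How many VFs does this NIC support?
cat /sys/class/net/enp6s0f0/device/sriov_totalvfs
# Create four VFs; they appear as new PCIe devices
echo 4 > /sys/class/net/enp6s0f0/device/sriov_numvfs
lspci | grep -i "Virtual Function"
# Optionally lock a VF's MAC and VLAN before handing it to a VM
ip link set enp6s0f0 vf 0 mac 52:54:00:00:00:10 vlan 100
```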
Enterprise NFV
DEMONSTRATION: ESA
The Current Enterprise Branch Landscape
• Multiple Devices: routers, appliances, servers
• Difficult to Manage: device integration and operation
• Costly to Operate: upgrades, refresh cycles, site visits
How does it make my life simpler?
It's simple, really: the branch services run as virtual appliances on a single x86 platform.
• Router: route/path selection, firewall
• FW / IDS
• Wireless (WLC)
• WAN optimisation
• Proxy/cache
• vApps
All run over a KVM virtualisation layer and operating system on an x86 processor, with life-cycle management, policy enforcement and automation on top.
Change in the FW
[Diagram: branch topology with CSR and FW VMs on VMware ESXi (UCS 240), distributed vSwitch DvSW-1, a VMkernel port to vCenter, an L2 VLAN to the LAN switch, and carrier WAN uplinks with one /30 from each carrier for the WAN circuit]
Change in the Port Channel or VLAN
• While changes were being made to the FW, VLAN assignments, or CSR, connectivity to/from vCenter gets lost and begins to flap
• vCenter sometimes misses confirmation of changes made
• This is an issue, since management of the hypervisor becomes dependent on the stability of the VMs running in it
Managing the Hypervisor
Change in the CSR
[Diagram: the same branch topology, highlighting configuration changes on the CSR and FW VMs]
Enterprise NFV Solution Architecture
[Diagram: Cisco VNFs (ISRv, ASAv, WAAS, vWLC, vNAM), 3rd-party VNFn (hosted with 3rd-party support) and applications App1…Appn running over virtual switching on an ISR-4K + x86 with the NFVIS hypervisor, a UCS x86 server, or a UCS-E server]
ESA + APIC-EM + Prime Infrastructure
[Diagram: Prime Infrastructure holds the profile-to-serial-number mapping, Day 0/1 configuration and the PnP repository; APIC-EM provides the PnP server and REST provisioning; together they orchestrate the branch office NFVIS system (ESC-Lite with WAAS, IPS and other VNFs over a vSwitch) across the WAN]
ESA + APIC-EM + Prime Infrastructure: Day 2 Operations
• Day 2 element management (config changes, fault monitoring, etc.) is done by PI, APIC-EM, and VNF-specific element managers (for 3rd-party VNFs, or if the VNF is not supported by PI)
• ESA plays no role in Day 2 operations
ESA + APIC-EM + Prime Infrastructure
Best-of-breed trusted services from Cisco: ISRv, ASAv, WAAS, vWLC, vNAM and 3rd-party VNFs, plus applications App1…Appn, running on an ISR-4K + x86 with the NFVIS hypervisor, a UCS x86 server, or a UCS-E server.
Enterprise NFV Infrastructure Software (NFVIS)
NFVIS = Network Function Virtualisation Infrastructure Software

Virtualisation
• Kernel-based Virtual Machine (KVM) abstracts the service functions from the hardware
• Virtual switching provides connectivity between the virtualised services, the management client, and the API interface
Platform Management
• Controls hardware specifics such as storage, memory and network interface connectivity
• Health monitoring
• Hardware performance features such as SR-IOV
Hardware Options
• Components: Virtual Network Functions (Cisco VNFs and 3rd-party VNFs) and applications
• The Enterprise NFV solution runs on an x86-based host:
• ISR-4K with x86 compute on UCS-E
• Cisco UCS-C
Enterprise NFV
[Diagram: the VNF stack (ISRv, ASAv, WAAS, vWLC, vNAM, 3rd-party VNFn, App1…Appn) over the NFVIS hypervisor with API interface, platform management and virtual switching, hosted on a UCS-220-M4 (GE, x86)]
[Diagram: orchestration and automation drive VNFs on NFVIS at the branch, which connects over IWAN transports (WAN and Internet)]
UCS® E-Series
• Reliable: best edge platform, 5-7 year life-cycle support
• Integrated compute with OIR support: up to 8 cores
• Revolutionary architecture: one support cost

Cisco ISR 4000
• Virtualised services framework
• Native L2-7 services: security, optimisation
• Appliance-level performance
Enterprise NFV: UCS-E Models
• Cores: 4 / 6 / 8
• Memory: 8-16 GB / 8-48 GB / 8-96 GB (all DDR3 1333 MHz)
• Storage: 200 GB-2 TB (2 HDD) / 200 GB-3 TB (3 HDD) / 200 GB-5.4 TB (3 HDD*); SATA, SAS, SED and SSD options
• RAID: RAID 0 & 1 / RAID 0, 1 & 5 / RAID 0, 1 & 5*
• A new model for late summer CY16 doubles the memory and adds 50% more CPU
Traffic Flows Today
[Diagram series: the branch VNF stack on NFVIS, shown with AP control traffic, DIA traffic, and optimised traffic paths highlighted in turn]
Service Chaining
[Diagram: an orchestrator/controller programs a service chain ISRv -> WAAS -> ASAv -> vWLC; a service classifier applies policy to each IP packet and imposes an NSH (Network Service Header) to steer it through the chain]
UCS Packet Flow (continued)
5. vSwitch BR1 connects to the CSR
6. The CSR sends the packet back to BR1 with the destination set
7. vWAAS processes (compresses) the packet and sends it back to the CSR via BR1
8. The CSR routes the frame to the WAN GE
[Diagram: bridges BR-WAN, BR0 and BR1; WAN NICs GE4/GE5, LAN NICs GE0-GE3, vWAAS]
UCS Packet Flow: ARP by LAN Endpoint to WLC
2. The ARP is passed by the GE into BR0
[Diagram: ISR-4K (IOSd, IOS-XE FFP data plane, WAN NIC GE0/GE1, management GE) with a NIM slot; NFVIS (KVM hypervisor) on the UCS-E with bridges BR0/BR1 and a vSwitch, connected over the internal NIC]
• WAN: NIM module (4G, T's, etc.) or on-board GE
• LAN: GE (MGF)
• On-board virtualisation adds a vSwitch (Snort or WAAS)
UCS-E Packet Flow: Go-Through LAN (UCS-E) <-> WAN
• Service path: ASAv -> WAAS -> IOS-XE
• WAAS insertion is done via AppNav
• LAN is connected to the UCS-E
[Diagram: WLC, Win/Lin and ASAv VNFs on NFVIS (bridges BR0/BR1, vSwitch) on the UCS-E, connected via the internal NIC to the ISR-4K (IOSd, IOS-XE FFP data plane, service container, WAN NIC GE0/GE1, management GE)]
UCS-E Packet Flow: Go-Through LAN (ISR4K) <-> WAN
• Service path: ASAv -> WAAS -> IOS-XE
• WAAS insertion is done via AppNav
• LAN is connected to the NIM
[Diagram: as above, with the LAN attached via the NIM on the ISR-4K]
DEMONSTRATION: Local
GUI
Conclusion
Key Conclusions
1. Network Function Virtualisation is rapidly maturing and enabling first use-cases
TODAY for enterprise network functions
• Virtualisation of control plane functions
• Cloud-based network services
2. Virtualisation of enterprise network functions enables new architectural
approaches leading to potential CAPEX and OPEX savings
• The benefit of replacing existing transport infrastructure solutions for its own sake is unclear
• Orchestration and management are put into the spotlight
3. Architectural details both at the system and network level need to be well
understood and examined
• E.g. Service Chaining
Call to Action
• Visit the World of Solutions for
• Cisco Campus
• Walk in Labs
• DevNet Zone
Complete Your Online Session Evaluation
• Complete your session surveys through the Cisco Live mobile app or on your computer via Cisco Live Connect.
CSR 1000v as multi-tenant vCPE - Example
• Multi-tenant CSR 1000v deployed for 'vanilla' branches requiring 5 Mbps each
• Single-tenant CSR 1000v deployed for high-end branches requiring 50 Mbps each
– Note that the 44 VM scenario (Profile 2) is oversubscribed, however the max bandwidth per VM
requirement is only 50Mbps
CSR 1000v Throughput (Gbps) with tuning
Feature:  CEF  ACL  NAT  Firewall  QoS  HQoS  IPSec Single AES  IPSec Crypto Map
4vCPU:    2.2  2.3  2.1  2.4      2.3  1.4   1.1               0.2
• BUT: the tuning recommendations need to be applied, especially for KVM (VM-FEX, SR-IOV)

CSR 1000v Throughput (Gbps)
Feature:  CEF  ACL  NAT  Firewall  QoS  HQoS  IPSec Single AES  IPSec Crypto Map
1vCPU:    3.0  2.7  1.9  2.2      2.6  2.1   0.7               0.2
2vCPU:    2.9  3.0  2.0  2.3      2.5  1.7   0.8               0.2
4vCPU:    2.0  2.2  1.9  1.9      2.0  1.5   1.0               0.2
• The descriptor ring restriction in KVM limits performance improvements for larger-vCPU VMs
REFERENCE
KVM Performance Tuning Recommendations
• Preferably use a direct-path I/O technology (SR-IOV with PCIe pass-through) together with the CPU tuning below. Otherwise the following configurations are recommended:
• Disable Hyperthreading (CPU): can be done in the BIOS
• Find the I/O NUMA node (CPU): cat /sys/bus/pci/devices/0000:06:00.0/numa_node; run "numactl -H"
• Enable isolcpus (CPU)
• Pin vCPUs (CPU): sudo virsh vcpupin test 0 6
• Set CPUs to performance mode (CPU): run /etc/init.d/ondemand stop
• Set the processor into pass-through (CPU): virsh edit <vm name> and add the line <cpu mode='host-passthrough' />
• Enable/disable IRQ balance (CPU): service irqbalance start / service irqbalance stop. NOTE: only if IRQ pinning is done!
• NUMA-aware VM (CPU): edit the VM config via virsh edit <VM name> and set <vcpu placement='static' cpuset='8-15'>1</vcpu>
• IRQ pinning (CPU): find the specific NIC interrupt number in /proc/interrupts; set its affinity to a core other than those pinned for the vCPUs and vHost
REFERENCE
KVM Performance Tuning Recommendations (cont.)
• Pin vHost processes (I/O): sudo taskset -pc 4 <process Number>, where <process Number> is found using ps -ef | grep vhost
• Change the vnet tx queue length to 4000 (I/O): the default tx queue length is 500; sudo ifconfig vnet1 txqueuelen 4000
• Turn off TSO, GSO and GRO (I/O): ethtool -K vnet1 tso off gso off gro off
• Physical NIC configuration (I/O): change rx interrupt coalescing to 100 for the 10G NICs
• Disable KSM (Linux): echo 0 > /sys/kernel/mm/ksm/run. NOTE: this setting may impact the number of VMs that can be instantiated on a server/blade
• Disable memballoon (Linux): virsh edit <VM>, find memballoon in the VM config file and change it to <memballoon model='none'/>
• Disable ARP/IP filtering (Linux): sysctl -w net.bridge.bridge-nf-call-arptables=0; sysctl -w net.bridge.bridge-nf-call-iptables=0; sysctl -w net.bridge.bridge-nf-call-ip6tables=0
• Optional Linux tuning (Linux): sysctl -w net.core.netdev_max_backlog=20000; sysctl -w net.core.netdev_budget=3000; sysctl -w net.core.wmem_max=12582912; sysctl -w net.core.rmem_max=12582912; service iptables stop (if you don't want the Linux firewall)
NOTE: The tuning steps are most impactful for a small number of VMs instantiated on a host; the tuning impact diminishes with a large number of VMs.
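Pulled together, the per-VM steps above might look like the following sketch. The VM name, core numbers and vnet interface are placeholders; run as root and adjust to your own NUMA layout:

```shell
#!/bin/sh
# Consolidated KVM tuning sketch for a single VM - illustrative only.
VM=csr1                              # hypothetical VM name
virsh vcpupin "$VM" 0 6              # pin vCPU 0 to pCPU 6
virsh vcpupin "$VM" 1 7              # pin vCPU 1 to pCPU 7
# Pin every vhost worker thread away from the vCPU cores
for PID in $(ps -ef | awk '/vhost-/ {print $2}'); do
  taskset -pc 4 "$PID"
done
ifconfig vnet1 txqueuelen 4000       # raise tx queue from the default 500
ethtool -K vnet1 tso off gso off gro off
echo 0 > /sys/kernel/mm/ksm/run      # disable KSM page merging
sysctl -w net.bridge.bridge-nf-call-arptables=0
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-ip6tables=0
```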
Sample Results of Different Performance Improvements
• Quantitative impact of various hypervisor tuning steps (average throughput, relative to the default):
• Default with Hyperthreading: 100%
• Hyperthreading off: 145%
• vCPU pinning only: 174%
• txqueuelen of 4000 only: 509%
• txqueuelen of 4000 + vCPU pinning + vHost pinning + tx/rx offloads off + Hyperthreading off: 952%
SR-IOV Virtualisation Caveats
• The following features are not available for virtual machines configured with SR-IOV:
• vSphere vMotion
• Storage vMotion
• vShield
• NetFlow
• VXLAN Virtual Wire
• vSphere High Availability
• vSphere Fault Tolerance
• vSphere DRS
• vSphere DPM
• Virtual machine suspend and resume
• Virtual machine snapshots
• MAC-based VLAN for passthrough virtual functions
• Hot addition and removal of virtual devices, memory, and vCPU
• Participation in a cluster environment
• Network statistics for a virtual machine NIC using SR-IOV passthrough
VMWare ESXi Fault Tolerance Caveats
ASAv (ASAv10 / ASAv30)
• Cisco SDN (Cisco APIC) and traditional (Cisco ASDM and CSM) management tools
• Dynamic routing including OSPF, EIGRP, and BGP
• IPv6 inspection support, NAT66, and NAT46/NAT64
• REST API for programmatic configuration and monitoring
• Cisco TrustSec® PEP with SGT-based ACLs
• Zone-based firewall
• Equal-Cost Multipath
• Failover Active/Standby HA model
• Removed: clustering and multiple-context mode
Protection Across the Attack Continuum with FirePOWERv
[Diagram: the attack continuum]

WLC25xx in the Data Centre
FlexConnect Mode: On Premise or Data Centre
Advantages
• Cookie-cutter configuration for every branch site
• Independence from WAN quality
[Diagram: remote sites A, B and C with APs in FlexConnect mode, controlled by a WLC25xx in the DC]
Virtualisation of Transport/Forwarding

Enterprise Virtualisation Models: Transport Functions
• Virtualisation of transport-plane functions
• L3 routing and packet forwarding
• Packet divert/diversion
• Can be on-premise, in larger Enterprise WAN PoPs, or in the cloud
• Examples: IOS XRv, CSR 1000v
• Virtual router forwarding engine
• AppNav clustering (WAAS)
• WCCP/PBR
• NSH*
[Diagram: CSR 1000V instances (App/OS) in a VPC/vDC on a cloud hypervisor with a virtual switch; WAN routing and diversion between campus, branch (ASR/ISR), shared services and the Internet; CSR + vWAAS pairs with VRF A / VRF B, 10.1.1.1 (VRF B)]
Q&A
Complete Your Online Session Evaluation
Give us your feedback and receive a
Cisco 2016 T-Shirt by completing the
Overall Event Survey and 5 Session
Evaluations.
– Directly from your mobile device on the Cisco Live
Mobile App
– By visiting the Cisco Live Mobile Site
http://showcase.genie-connect.com/ciscolivemelbourne2016/
– Visit any Cisco Live Internet Station located
throughout the venue
Learn online with Cisco Live!
Visit us online after the conference for full access to session videos and presentations: www.CiscoLiveAPAC.com
T-Shirts can be collected Friday 11 March at Registration.
Thank you
Appendix
Hypervisor Traversal Tax: Example KVM with OVS