
Deploying Cisco

SD-Access
(ENSDA) v1.1

By SDWAN-Networks
Course details

Learning Objectives

Upon completion of this course, you should be able to:

❑ Describe Cisco SD-Access and how it relates to Cisco DNA

❑ Orchestrate a Cisco SD-Access solution using the Cisco DNA Center™ orchestration platform

❑ Use the Network Data Platform to demonstrate the assurance and analytics capabilities of
SD-Access.
Prerequisites

Before taking this course, you should have the following knowledge and skills:

❑ You should have an understanding of network routing and switching principles equivalent to the Cisco CCNA® certification level.
Outline

❑ Cisco SD-Access Overview


o Exploring Cisco SD-Access
o Describing the Cisco SD-Access Architecture
o Exploring Cisco DNA Center
o Configuring Underlay Automation

❑ Cisco SD-Access Implementation


o ISE Integration in Cisco DNA Center
o Policy Provisioning Basics
o Navigating and Managing the Policy Application Workflows

❑ Cisco SD-Access Border Operations


o Cisco SD-Access Deployment Models
o Connecting the Fabric to External Domains
❑ Wireless Integration Orchestration
o Integrating Wireless with the Cisco SD-Access Solution
o Workflow of Cisco SD-Access Wireless
o Cisco SD-Access Wireless Network Design
o Cisco SD-Access Wireless Basic Operation

❑ Cisco SD-Access Assurance and Migration


o Cisco Network Data Platform
o Cisco SD-Access Migration Strategies

https://www.cisco.com/c/en/us/training-events/training-certifications/training/training-services/courses/deploying-cisco-sd-access-ensda.html
Thank you!!!
Cisco SD-Access Overview
•Exploring Cisco SD-Access
•Describing the Cisco SD-Access
Architecture
•Exploring Cisco DNA Center
•Configuring Underlay
Automation
Thank you!!!
Exploring Cisco
SD-Access
What Problems Does Software-Defined Access Solve?

Software-Defined Access looks to solve the following problems:

➢ Security/Segmentation

➢ Host Mobility

➢ Automation and Orchestration

➢ Network Visibility
1. Security/Segmentation

How it is done today:

➢ To create segmentation, customers are doing things like steering all the traffic through firewalls, using
VLANs, ACLs, VRF-lite or deploying MPLS.

➢ Some customers have deployed TrustSec into their networks, but have found it historically difficult to
implement.

➢ Security is also difficult and time-consuming to maintain, as customers have numerous firewall interfaces
with thousands of lines of access control lists to limit traffic flows.

➢ Today, security at the edge is handled by customers deploying 802.1X and ISE in their networks.

➢ We have seen great success among customers adopting and deploying 802.1X, but it can be a lengthy
process to create, test, and roll out 802.1X organization-wide.
How it is done with Software-Defined Access:
➢ Macro-segmentation is created by Software-Defined Access via VRFs and micro segmentation via
Cisco TrustSec, which uses a scalable group tag (SGT) to map a user’s identity to a tag rather than an
IP address.

➢ Writing security rules off this tag greatly improves the readability of the security policy, and
significantly reduces maintenance; going from thousands of ACL entries to tens of SGT entries.

➢ Software-Defined Access also automates the process of deploying 802.1X to the fabric edge.

➢ The deployment and maintenance is automated and managed via the DNA-Center appliance and
Identity Services Engine.
2. Host Mobility
How it is done today:

➢ Today, when a user roams throughout the network, their IP address changes, making it difficult to base
security policies on IP addresses and creating complex IP schemes to maintain.

How it is done with Software-Defined Access:

➢ Users can seamlessly roam over an L3 network while maintaining the same IP address by using LISP as the
control plane protocol.

➢ In this solution, security enforcement is achieved using SGTs that render IP addresses much less relevant.
With this model, you can use very large IP pools without worrying about broadcast storms or using IP
addresses for security enforcement.
3. Automation and Orchestration

How it is done today:

➢ Today, customers use many different systems to help automate and orchestrate the deployment of a
network.

➢ Most of our customers still manually deploy their networks via the CLI, which can take over a week to stand
up a site, including: provisioning VRFs, configuring dynamic routing protocols, configuring switching features
and configuring 802.1X.

How it is done with Software-Defined Access:

➢ The entire Software-Defined Access solution is deployed and managed via Cisco DNA Center, providing
customers with a single pane of glass for all network tasks.

➢ DNA Center can automate all of the deployment tasks, which would normally take over a week, into a few
hours.
4. Network Visibility

How it is done today:

➢ Today customers must use third-party tools to try to gain visibility into their network, but still generally lack
meaningful, actionable data.

➢ If a user calls into a customer’s help desk, gaining visibility into historical network issues is near impossible,
making troubleshooting a difficult and time-consuming task.

How it is done with Software-Defined Access:


➢ Cisco’s Network Data Platform (NDP), an optional component of the Software-Defined Access solution,
delivers end-to-end visibility, analytics and troubleshooting tools.

➢ With NDP, a help desk employee can get a 360-degree view of a client's or device's historical experience on
the network. NDP can show issues and trends related to things like device onboarding, app experience, and
more. For example, NDP could identify a wireless issue in the network that correlates to the user's ticket,
helping to identify and resolve the ticket quickly.
Software Defined Access
What is Cisco Software Defined Access (SD-Access)?
Cisco SD-Access is the foundation for a new era of Intent Based Networking (IBN).

Cisco SD-Access is an innovative fabric-based network infrastructure, to provide:


• Automated network and policy configuration
• Dynamic host mobility for wired and wireless
• Identity-based macro and micro-segmentation
• Virtualized multicast and Layer 2 broadcast

Cisco Software Defined Access
The Foundation for Cisco’s Intent-Based Network
Cisco DNA Center (Policy | Automation | Assurance)

• One Automated Network Fabric – Single fabric for Wired and Wireless with full automation
• Identity-Based Policy and Segmentation – Policy definition decoupled from VLAN and IP address
• AI-Driven Insights and Telemetry – Analytics and visibility into User and Application experience
• Client Mobility – Policy follows User
• SD-Access Extension – IoT Network and Employee Network on one fabric
What is SD-Access?
Campus Fabric + Cisco DNA Center (Automation & Assurance)
▪ SD-Access
o GUI approach provides automation & assurance of all Fabric configuration, management and group-based policy
o Cisco DNA Center integrates multiple management systems (APIC-EM/NCP 1.X, ISE, NDP) to orchestrate LAN, Wireless LAN and WAN access

▪ Campus Fabric
o CLI or API approach to build a LISP + VXLAN + CTS Fabric overlay for your enterprise Campus networks
o CLI provides backward compatibility, but management is box-by-box
o API provides some automation via NETCONF/YANG, also box-by-box
o Separate management systems
Cisco DNA Center
Enterprise Solution – Simple Workflows

DESIGN | PROVISION | POLICY | ASSURANCE

DNA Center
Identity Services Engine | Network Control Platform | Network Data Platform

Routers | Switches | Wireless Controllers | Wireless APs
SD-Access
What exactly is a Fabric?

A Fabric is an Overlay
An Overlay network is a logical topology used to virtually connect devices,
built over an arbitrary physical Underlay topology.
An Overlay network often uses alternate forwarding attributes to provide
additional services, not provided by the Underlay.

Examples of Network Overlays


• GRE, mGRE • LISP
• MPLS, VPLS • OTV
• IPSec, DMVPN • DFA
• CAPWAP • ACI
SD-Access
Fabric Terminology

• Overlay Network / Overlay Control Plane
• Encapsulation
• Edge Devices
• Hosts (End-Points)
• Underlay Network / Underlay Control Plane
SD-Access
Why Overlays?

Separate the “Forwarding Plane” from the “Services Plane”

IT Challenge (Business): Network Uptime IT Challenge (Employee): New Services


The Boss YOU The User

Simple Transport Forwarding Flexible Virtual Services


• Redundant Devices and Paths • Mobility - Map Endpoints to Edges
• Keep It Simple and Manageable • Services - Deliver using Overlay
• Optimize Packet Handling • Scalability - Reduce Protocol State
• Maximize Network Reliability (HA) • Flexible and Programmable
SD-Access
Types of Overlays

Hybrid L2 + L3 Overlays are the Best of Both Worlds

Layer 2 Overlays Layer 3 Overlays


• Emulate a LAN segment • Abstract IP connectivity
• Transport Ethernet Frames (IP & Non-IP) • Transport IP Packets (IPv4 & IPv6)
• Single subnet mobility (L2 domain) • Full mobility regardless of Gateway
• Exposure to Layer 2 flooding • Contain network related failures (floods)
• Useful in emulating physical topologies • Useful to abstract connectivity and policy
SD-Access
Fabric Underlay – Manual vs. Automated

Manual Underlay LAN Automation


You can reuse your existing IP Fully automated prescriptive IP
network as the Fabric Underlay! network Underlay Provisioning!
• Key Requirements • Key Requirements
• IP reach from Edge to Edge/Border/CP • Leverages standard PNP for Bootstrap
• Can be L2 or L3 – We recommend L3 • Assumes New / Erased Configuration
• Can be any IGP – We recommend ISIS • Uses a Global “Underlay” Address Pool

• Key Considerations • Key Considerations


• MTU (Fabric Header adds 50B) • Seed Device pre-setup is required
• Latency (RTT of =/< 100ms) • 100% Prescriptive (No Custom)

Underlay Network
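For orientation, the sketch below shows what a minimal manual-underlay uplink could look like on a fabric edge; it is illustrative only (interface names, IP addressing and the IS-IS NET are assumptions, not values from this course), and LAN Automation would generate the equivalent configuration for you.

! Illustrative manual underlay on a fabric edge switch (values are examples)
system mtu 9100                             ! allow for the ~50B fabric (VXLAN) header
!
interface Loopback0
 ip address 10.10.10.1 255.255.255.255      ! RLOC / router-id
!
interface TenGigabitEthernet1/0/1
 description Uplink to Distribution/Border
 no switchport
 ip address 10.10.1.1 255.255.255.254       ! routed point-to-point link
 ip router isis
!
router isis
 net 49.0000.0100.1001.0001.00
 is-type level-2-only
 metric-style wide
 passive-interface Loopback0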
Thank you!!!
Describing Cisco
SD-Access
Architecture
Fabric
Components
& Terminology
Part -1
Cisco SD-Access
Fabric components and terminology
▪ Network Automation – Simple GUI and APIs (Cisco DNA Center) for intent-based automation of wired and wireless fabric devices
▪ Network Assurance – Data Collectors analyze Endpoint to Application flows and monitor fabric device status
▪ Identity Services – NAC & ID Services (e.g. Cisco ISE) for dynamic Endpoint to Group mapping and Policy definition
▪ Control-Plane Nodes – Map System that manages Endpoint to Device relationships
▪ Fabric Border Nodes – A fabric device (e.g. Core) that connects External L3 network(s) to the SD-Access fabric
▪ Fabric Edge Nodes – A fabric device (e.g. Access or Distribution) that connects Wired Endpoints to the SD-Access fabric
▪ Fabric Wireless Controllers – A fabric device (WLC) that connects Fabric APs and Wireless Endpoints to the SD-Access fabric
▪ Intermediate Nodes (Underlay)
SD-Access Fabric
Control-Plane Nodes – A Closer Look
Control-Plane Node runs a Host Tracking Database to map location information
IP to RLOC: 1.2.3.4 → FE1 | MAC to RLOC: AA:BB:CC:DD → FE1 | Address Resolution: 1.2.3.4 → AA:BB:CC:DD

• A simple Host Database that maps Endpoint IDs to a current Location, along with other attributes
• Host Database supports multiple types of Endpoint ID lookup types (IPv4, IPv6 or MAC)
• Receives Endpoint ID map registrations from Edge and/or Border Nodes for “known” IP prefixes
• Resolves lookup requests from Edge and/or Border Nodes, to locate destination Endpoint IDs
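As a rough idea of what Cisco DNA Center provisions on a control-plane node, a much-simplified LISP map-server / map-resolver sketch is shown below; the locator-set, site name, key, instance-id and prefix are illustrative assumptions, and exact syntax varies by IOS-XE release.

! Illustrative control-plane node (map-server/map-resolver) sketch - DNA Center automates this
router lisp
 locator-set RLOC_SET
  IPv4-interface Loopback0 priority 10 weight 10
 !
 site CAMPUS
  authentication-key <shared-key>            ! edges/borders use this key to register
  eid-record instance-id 4099 10.2.0.0/16 accept-more-specifics
 !
 ipv4 map-server
 ipv4 map-resolver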
SD-Access Platforms
Fabric Control Plane

Catalyst 3K Catalyst 6K ISR 4K & ENCS ASR1K

• Catalyst 3650/3850 • Catalyst 6500/6800 • ISR 4430/4450 • ASR 1000-X


• 1/mG RJ45 • Sup2T/Sup6T • ISR 4330/4450 • ASR 1000-HX
• 1/10G SFP • C6800 Cards • ENCS 5400 • 1/10G RJ45
• 1/10/40G NM Cards • C6880/6840-X • ISRv / CSRv • 1/10G SFP
SD-Access @ Cisco DNA Center
Control-Plane Nodes
SD-Access Fabric
Edge Nodes – A Closer Look
Edge Node provides first-hop services for Users / Devices connected to a Fabric

IP to RLOC: 1.2.3.4 → FE1 | MAC to RLOC: AA:BB:CC:DD → FE1 | Address Resolution: 1.2.3.4 → AA:BB:CC:DD

• Responsible for Identifying and Authenticating Endpoints (e.g. Static, 802.1X, Active Directory)
• Registers specific Endpoint ID info (e.g. /32 or /128) with the Control-Plane Node(s)
• Provides an Anycast L3 Gateway for the connected Endpoints (same IP address on all Edge nodes)
• Performs encapsulation / de-encapsulation of data traffic to and from all connected Endpoints
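Since the edge node identifies and authenticates endpoints, an endpoint-facing port typically carries an 802.1X/MAB template. The sketch below is a hedged, legacy-style illustration only (DNA Center pushes its own templates via the ISE integration; the interface and RADIUS details are assumptions):

! Illustrative 802.1X / MAB edge-port template (values are examples)
aaa new-model
aaa authentication dot1x default group radius
aaa authorization network default group radius
dot1x system-auth-control
!
radius server ISE-1
 address ipv4 10.10.100.21 auth-port 1812 acct-port 1813
 key <shared-secret>
!
interface GigabitEthernet1/0/10
 description Endpoint-facing fabric edge port
 switchport mode access
 authentication host-mode multi-auth
 authentication port-control auto
 mab
 dot1x pae authenticator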
SD-Access Platforms
Fabric Edge Node

Catalyst 9200 Catalyst 9300 Catalyst 9400 Catalyst 9500 Catalyst 9600

• Catalyst 9200/L * • Catalyst 9300 • Catalyst 9400 • Catalyst 9500 • Catalyst 9600
• 1/mG RJ45 • 1/mG RJ45 • Sup1/Sup1XL • 1/10/25G SFP • Sup1
• 1G SFP (Uplinks) • 10/25/40/mG NM • 9400 Cards • 40/100G QSFP • 9600 Cards
SD-Access Platforms
Fabric Edge Node

Catalyst 3K Catalyst 4500E Catalyst 6K

• Catalyst 3650/3850 • Catalyst 4500E • Catalyst 6500/6800


• 1/mG RJ45 • Sup8E/Sup9E (Uplink) • Sup2T/Sup6T
• 1/10G SFP • 4600/4700 Cards (Host) • C6800 Cards
• 1/10/40G NM Cards • C6880/6840-X
SD-Access @ Cisco DNA Center
Edge Nodes
Thank you!!!
Fabric
Components
& Terminology
Part-2
SD-Access Fabric
Border Nodes

Border Node is an Entry & Exit point for data traffic going Into & Out of a Fabric

There are 3 types of Border Node:

• Internal Border (Rest of Company) – connects ONLY to the known areas of the company (e.g. Data Center, WAN)
• External Border (Outside) – connects ONLY to unknown areas outside the company (e.g. Internet)
• Internal + External Border (Anywhere) – connects to transit areas AND known areas of the company
SD-Access Platforms
Fabric Border Node

Catalyst 9300 Catalyst 9400 Catalyst 9500 Catalyst 9600

• Catalyst 9300 • Catalyst 9400 • Catalyst 9500 • Catalyst 9600


• 1/mG RJ45 • Sup1XL • 40/100G QSFP • Sup1
• 10/25/40/mG NM • 9400 Cards • 1/10/25G SFP • 9600 Cards
SD-Access Platforms
Fabric Border Node

Catalyst 3K Catalyst 6K Nexus 7K* ISR 4K ASR 1K

• Catalyst 3650/3850 • Catalyst 6500/6800 • Nexus 7700 • ISR 4300/4400 • ASR 1000-X/HX
• 1/mG RJ45 • Sup2T/Sup6T • Sup2E • AppX (AX) • AppX (AX)
• 1/10G SFP • C6800 Cards • M3 Cards • 1/10G RJ45 • 1/10G ELC/EPA
• 1/10/40G NM Cards • C6880/6840-X • LAN1K9 + MPLS • 1/10G SFP • 40G ELC/EPA
SD-Access @ Cisco DNA Center
Border Nodes
SD-Access Fabric
Border Nodes - Internal

Internal Border advertises Endpoints to outside, and known Subnets to inside

• Connects to any “known” IP subnets available from the outside network (e.g. DC, WLC, FW, etc.)
• Exports all internal IP Pools to the outside (as an aggregate), using a traditional IP routing protocol(s)
• Imports and registers (known) IP subnets from the outside into the Control-Plane Map System
• Hand-off requires mapping the context (VRF & SGT) from one domain to another
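To illustrate the hand-off, a simplified per-VN (VRF-lite over BGP) border configuration might resemble the following; the VLAN number, ASN, addressing and VRF name are assumptions, and DNA Center automates the equivalent:

! Illustrative per-VN hand-off from a Border to an external/fusion router (values are assumptions)
interface Vlan3002
 description Hand-off for VRF USERS
 vrf forwarding USERS
 ip address 192.168.255.1 255.255.255.252
!
router bgp 65001
 address-family ipv4 vrf USERS
  neighbor 192.168.255.2 remote-as 65002
  neighbor 192.168.255.2 activate
  aggregate-address 10.2.0.0 255.255.0.0 summary-only   ! advertise fabric IP pools as an aggregate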
SD-Access - Border Deployment
Internal Border : Connecting to Known Networks

Data
C Center

B B

Branch
Office

Known Networks
SD-Access - Border Deployment
Anywhere Border : SD-Access as a Transit Area

B B

External Domain 1 External Domain 2

SD-Access
Fabric
Thank you!!!
Fabric
Components
& Terminology
Part-3
SD-Access Fabric
Border Nodes - External

External Border is a “Gateway of Last Resort” for any unknown destinations

• Connects to any “unknown” IP subnets, outside of the network (e.g. Internet, Public Cloud)
• Exports all internal IP Pools to the outside (as an aggregate) into traditional IP routing protocol(s)
• Does NOT import unknown routes! It is a “default” exit, used if no entry is available in the Control-Plane
• Hand-off requires mapping the context (VRF & SGT) from one domain to another
SD-Access - Border Deployment
External Border : Connecting to Unknown Networks

C Public Cloud

B B

Internet

SD-Access Fabric

Unknown Networks
SD-Access - Border Deployment
Why? Internal Traffic with External Borders

Edge Node
IP Network B

External Border Internet

ALL non-fabric traffic MUST travel


to the External (Default) Border.

If other internal domains (e.g. WAN WAN Edge WAN/Branch


or DC) are only reachable via the
same IP network, traffic may follow
a sub-optimal path (e.g. hairpin).

DC Edge Data Center


SD-Access - Border Deployment
Why? Internal Traffic with Internal Borders

Edge Node
IP Network B

External Border Internet

B
Traffic to internal domains will go directly to the Internal Borders.

Any external traffic (e.g. Internet) Internal Border WAN/Branch


can still exit via the External Border.

Internal Border Data Center


SD-Access Fabric
Fabric Enabled Wireless – A Closer Look

Fabric Enabled WLC is integrated into Fabric for SD-Access Wireless clients
(Control plane: CAPWAP | Data plane: VXLAN)

• Fabric Enabled WLC connects to the Fabric via the Border (Underlay)
• Fabric Enabled APs connect to the WLC (CAPWAP) using a dedicated Host Pool (Overlay)
• Fabric Enabled APs connect to the Edge via VXLAN
• Wireless Clients (SSIDs) use regular Host Pools for data traffic and policy (same as Wired)
• Fabric Enabled WLC registers Clients with the Control-Plane (as located on local Edge + AP)
SD-Access Platforms
Fabric Enabled Wireless

AireOS WLC Catalyst 9800 Wifi 6, 11ac Wave 2 Wave 1*AP

• AIR-CT3504 • Catalyst 9800-40/80 • Catalyst 9100 • AIR-CAP1700, 2700


• AIR-CT5520 • Catalyst 9800-CL • AIR-CAP1800, 2800, and 3700
• AIR-CT8540 • C9K Embedded WLC 3800 and 4800 • AIR-CAP1540, 1560
• 802.11ax, 11ac Wave2 • 802.11ac Wave1*
SD-Access @ Cisco DNA Center
Fabric Wireless
SD-Access Extension for IoT
Securely Consolidate IT and IOT

DNA Center support for Extended Nodes: Beta in 1.2.5, GA in 1.3

Extended Node Portfolio: IE3300/3400, IE4000/4010, IE5000, Catalyst Digital Building, Catalyst 3560-CX Compact
(Extended Nodes connect to the Enterprise Campus fabric edge, e.g. in a REP Ring, to extend the fabric into the Extended Enterprise)

▪ Operational IoT simplicity (Automation)
▪ IT designed and managed – or – IT designed and OT managed
▪ Greater visibility of IoT devices (Assurance)
▪ Extended Segmentation & Policy (Security)
Thank you!!!
Fabric
Components
& Terminology
Part-4
SD-Access Fabric
Virtual Network– A Closer Look

Virtual Network maintains a separate Routing & Switching table for each instance

• Control-Plane uses the Instance ID to maintain separate VRF topologies (the “Default” VRF is Instance ID “4098”)
• Nodes add a VNID to the Fabric encapsulation
• Endpoint ID prefixes (Host Pools) are routed and advertised within a Virtual Network (e.g. Campus, IoT and Guest VNs)
• Uses standard “vrf definition” configuration, along with RD & RT for remote advertisement (Border Node)
SD-Access Fabric
How VNs work in SD-Access

• Fabric Devices (Underlay) connectivity is in the Global Routing Table (GRT)
• INFRA_VN is only for Access Points and Extended Nodes, and sits in the GRT
• DEFAULT_VN is an actual “User VN” provided by default
• User-Defined VN(s) (User VRFs) can be added or removed on-demand

Scope of Fabric at the Border: User-Defined VN(s) and DEFAULT_VN (User VRFs), INFRA_VN (for APs and Extended Nodes), and the GRT for the underlay devices.
SD-Access Fabric
How VNs work in SD-Access
SD-Access designs connecting to an existing Global Routing Table should use a “Fusion” router with MP-BGP & VRF route-target import/export.

Example VRF definitions:

ip vrf USERS
 rd 1:4099
 route-target export 1:4099
 route-target import 1:4099
 route-target import 1:4097
!
ip vrf DEFAULT_VN
 rd 1:4098
 route-target export 1:4098
 route-target import 1:4098
 route-target import 1:4097
!
ip vrf GLOBAL
 rd 1:4097
 route-target export 1:4097
 route-target import 1:4097
 route-target export 1:4099
 route-target export 1:4098

(Edge Node and Border Node run the fabric Control Plane and the underlay IGP, e.g. IS-IS/OSPF; the Border Node hands off each VRF (SVI + per-VRF address-family) to the Fusion Router, which leaks routes between the VRFs and the GRT via MP-BGP.)
Fabric Edge & Border - VRF Configuration
Edge-1# show vrf
Name Default RD Protocols Interfaces
DEFAULT_VN 1:4098 ipv4 LI0.4098
GUEST 1:4100 ipv4 LI0.4100
Mgmt-vrf <not set> ipv4,ipv6 Gi0/0
USERS 1:4099 ipv4 LI0.4099

CP-Border-1# show vrf


Name Default RD Protocols Interfaces
DEFAULT_VN 1:4098 ipv4 Vl3004
LI0.4098
GUEST 1:4100 ipv4 Vl3001
LI0.4100
Mgmt-vrf <not set> ipv4,ipv6 Gi0/0
USERS 1:4099 ipv4 Vl3002
LI0.4099
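A fusion router completing the picture above would typically carry matching VRF definitions and an MP-BGP session per VRF; a hedged sketch (VRF name, route-targets, ASN and addressing are assumptions aligned with the example above) could look like this:

! Illustrative fusion-router side (values are assumptions)
vrf definition USERS
 rd 1:4099
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4099
  route-target import 1:4097      ! leak shared/global services into USERS
!
router bgp 65002
 address-family ipv4 vrf USERS
  neighbor 192.168.255.1 remote-as 65001
  neighbor 192.168.255.1 activate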
SD-Access Fabric
Scalable Groups – A Closer Look

Scalable Group is a logical policy object to “group” Users and/or Devices

• Nodes use “Scalable Groups” to identify and assign a unique Scalable Group Tag (SGT) to Endpoints (e.g. SGT 4, 8, 17, 25)
• Nodes add the SGT to the Fabric encapsulation
• SGTs are used to manage address-independent “Group-Based Policies”
• Edge or Border Nodes use the SGT to enforce local Scalable Group ACLs (SGACLs)
SD-Access @ Cisco DNA Center
Virtual Networks and Scalable Groups
SD-Access Fabric
Host Pools – A Closer Look

Host Pool provides basic IP functions necessary for attached Endpoints

• Edge Nodes use a Switch Virtual Interface (SVI), with IP Address / Mask, etc. per Host Pool
• Fabric uses Dynamic EID mapping to advertise each Host Pool (per Instance ID)
• Fabric Dynamic EID allows Host-specific (/32, /128 or MAC) advertisement and mobility
• Host Pools can be assigned Dynamically (via Host Authentication) and/or Statically (per port)
SD-Access Fabric
Anycast Gateway – A Closer Look

Anycast GW provides a single L3 Default Gateway for IP capable endpoints

• Similar principle and behavior to HSRP / VRRP, with a shared “Virtual” IP and MAC address
• The same Switch Virtual Interface (SVI) is present on EVERY Edge, with the SAME Virtual IP and MAC
• The Control-Plane, with Fabric Dynamic EID mapping, maintains the Host to Edge relationship
• When a Host moves from Edge 1 to Edge 2, it does not need to change its Default Gateway ☺
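As an illustration of the concept, the same anycast gateway SVI definition appears on every edge node; the sketch below uses assumed values (DNA Center generates this per Host Pool, and the dynamic-EID name is hypothetical):

! Illustrative anycast gateway SVI, identical on every fabric edge (values are assumptions)
interface Vlan1021
 description Anycast gateway for Host Pool 10.2.1.0/24
 vrf forwarding USERS
 ip address 10.2.1.1 255.255.255.0
 mac-address 0000.0c9f.f45c          ! same virtual MAC on all edges
 ip helper-address 10.10.100.50      ! DHCP relay to shared services
 lisp mobility USERS_10_2            ! binds the SVI to a LISP dynamic-EID (illustrative name)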
SD-Access Fabric
Layer 3 Overlay – A Closer Look

Stretched Subnets allow an IP subnet to be “stretched” via the Overlay

• Host IP based traffic arrives on the local Fabric Edge (SVI) and is then transferred by the Fabric
• Fabric Dynamic EID mapping allows Host-specific (/32, /128, MAC) advertisement and mobility
• Host 1 connected to Edge A can now use the same IP subnet to communicate with Host 2 on Edge B
• No longer need a VLAN to connect Host 1 and 2 ☺
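Behind a stretched subnet sits a LISP dynamic-EID definition on the edge nodes; the heavily simplified sketch below is conceptual only (instance-id, names and prefix are assumptions, and the exact command nesting varies by IOS-XE release, so rely on the configuration DNA Center generates).

! Conceptual dynamic-EID stanza behind a stretched subnet (values are assumptions)
router lisp
 instance-id 4099
  dynamic-eid USERS_10_2
   database-mapping 10.2.0.0/16 locator-set RLOC_SET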
SD-Access @ Cisco DNA Center
Host Pools & Layer-2 Extension
Thank you!!!
Fabric operation
SD-Access Fabric
Campus Fabric - Key Components

1. Control-Plane based on LISP


2. Data-Plane based on VXLAN
3. Policy-Plane based on CTS
Key Differences
• L2 + L3 Overlay -vs- L2 or L3 Only
• Host Mobility with Anycast Gateway
• Adds VRF + SGT into the Data-Plane
• Virtual Tunnel Endpoints (Automatic)
• NO Topology Limitations (Basic IP)
Thank you!!!
Control-Plane
based on LISP
Fabric control plane

➢ The primary technology used for the fabric control plane is based on the Locator/ID Separation Protocol
(LISP).

➢ LISP is an IETF standard protocol (RFC-6830, etc.) based on a simple endpoint ID (EID) to routing locator
(RLOC) mapping system, to separate the “identity” (address) from its current “location” (attached router).

➢ LISP dramatically simplifies traditional routing environments by removing the need for each router to
process every possible IP destination address and route. It does this by moving remote destination
information to a centralized map database that allows each router to manage only its local routes (and query
the map system to locate destination endpoints).

➢ This technology provides many advantages for Cisco SD-Access, such as less CPU usage, smaller routing
tables (hardware and/or software), dynamic host mobility (wired and wireless), address-agnostic mapping
(IPv4, IPv6, and/or MAC), built-in network segmentation (Virtual Routing and Forwarding [VRF]), and others.

➢ In Cisco SD-Access, several enhancements to the original LISP specifications have been added, including
distributed Anycast Gateway, Virtual Network (VN) Extranet and Fabric Wireless.
SD-Access Fabric
Key Components - LISP
1. Control-Plane based on LISP (enabling Host Mobility)

BEFORE – routing protocols: IP Address = Location + Identity
• Every router carries Topology + Endpoint routes → Big Tables & More CPU, with a Local L3 Gateway

AFTER – LISP: the Identity (EID) is separated from the Location (RLOC)
• Endpoint routes (EID → RLOC mappings) are consolidated into the LISP Mapping Database
• Each router keeps Only Local Routes plus Topology Routes, and queries the Mapping Database for remote Endpoint Routes → Small Tables & Less CPU, with an Anycast L3 Gateway
Control plane Responsibilities

In the SD-Access fabric, the fabric control plane node operates as the database tracking all
endpoint connectivity to the fabric, and is responsible for the following functions:

➢ Registers all endpoints connected to the edge nodes, and tracks their location in the
fabric, i.e. which edge node the endpoints are located behind.

➢ Responds to queries from network elements about the location of endpoints in the fabric.

➢ Ensures that when endpoints move from one location to another, traffic is redirected to
the current location.
Fabric Operation
Control-Plane Roles & Responsibilities
LISP Map Server / Resolver (Control-Plane Node)
• Holds EID to RLOC mappings (e.g. a.a.a.0/24 → w.x.y.1, b.b.b.0/24 → x.y.w.2, c.c.c.0/24 → z.q.r.5, d.d.0.0/16 → z.q.r.5)
• Can be distributed across multiple LISP devices

LISP Tunnel Router – XTR (Edge & Internal Border)
• Registers EIDs with the Map Server
• Ingress / Egress Tunnel Router (ITR / ETR)

LISP Proxy Tunnel Router – PXTR (External Border)
• Provides a Default Gateway when no mapping exists
• Ingress / Egress Proxy Tunnel Router (PITR / PETR)

Terminology
• EID = Endpoint Identifier (Host Address or Subnet), in the EID Space
• RLOC = Routing Locator (Local Router Address), in the RLOC Space
Fabric Operation
Control Plane Register & Resolution
Edge nodes (ETRs) register their local endpoints with the Fabric Control Plane, e.g.
• Database Mapping Entry (on ETR 2.1.2.1): 10.2.2.2/32 → 2.1.2.1
• Database Mapping Entry (on ETR 3.1.2.1): 10.2.2.4/32 → 3.1.2.1

When a Branch Fabric Edge (ITR) asks “Where is 10.2.2.2?”, the Fabric Control Plane answers and the ITR caches the result:
• Cache Entry (on ITR): 10.2.2.2/32 → 2.1.2.1

Subnet 10.2.0.0 255.255.0.0 is stretched across the Fabric Edges.
Control plane operation

➢ Endpoint 1 on edge 1 will be registered to the fabric control plane node. The registration
includes End point 1's IP address, MAC address, and location.

➢ Endpoint 2 on edge 2 will be registered to the fabric control plane node. The registration
includes End point 2's IP address, MAC address, and location.
Control plane operation (CONT)

➢ When Endpoint 1 wants to communicate to End point 2, edge 1 will query the fabric control plane node
for the location of End point 2.

➢ Upon getting the reply (End point 2 location is behind edge 2 ) it will encapsulate the traffic from
Endpoint 1 using VXLAN, and send it to Endpoint 2 (via edge 2).

➢ Once this traffic arrives at edge 2, it will be decapsulated and forwarded along to Endpoint 2.

➢ The reverse applies when Endpoint 2 wants to communicate back to End point 1.
Fabric Operation
Fabric Internal Forwarding (Edge to Edge)

1. The source S (10.1.0.1, behind Fabric Edge 1.1.1.1) resolves the destination: DNS Entry D.abc.com A 10.2.2.2
2. S sends the packet 10.1.0.1 → 10.2.2.2 to its Fabric Edge
3. The Fabric Edge queries the Mapping System; Mapping Entry: EID-prefix 10.2.2.2/32, Locator-set: 2.1.2.1, priority: 1, weight: 100 (path preference controlled by the destination site)
4. The Fabric Edge encapsulates 1.1.1.1 → 2.1.2.1, carrying the original 10.1.0.1 → 10.2.2.2 across the IP Network
5. The destination Fabric Edge (2.1.2.1) decapsulates and forwards 10.1.0.1 → 10.2.2.2 to D

Subnet 10.2.0.0 255.255.0.0 is stretched across the Fabric Edges.
Fabric Operation
Forwarding from Outside (Border to Edge)

1. The external source S (192.3.0.1) resolves the destination: DNS Entry D.abc.com A 10.2.2.2
2. S sends the packet 192.3.0.1 → 10.2.2.2, which arrives at the Fabric Border (4.4.4.4)
3. The Border queries the Mapping System; Mapping Entry: EID-Prefix 10.2.2.2/32, Locator-Set: 2.1.2.1, priority: 1, weight: 100
4. The Border encapsulates 4.4.4.4 → 2.1.2.1, carrying the original 192.3.0.1 → 10.2.2.2 across the IP Network
5. The destination Fabric Edge (2.1.2.1) decapsulates and forwards 192.3.0.1 → 10.2.2.2 to D

Subnet 10.2.0.0 255.255.0.0 is stretched across the Fabric Edges.
Fabric Operation
Host Mobility – Dynamic EID Migration

• The host 10.2.1.10 is initially attached to a Fabric Edge in Campus Bldg 1 (RLOC 12.1.1.1); that Edge holds 10.2.1.0/24 – Local and 10.2.1.10/32 – Local, and the Fabric Control Plane holds the Map Register entry 10.2.1.10/32 – 12.1.1.1 (plus the Border entry 10.10.0.0/16 – 12.0.0.1 for DC1)
• When the host roams to a Fabric Edge in Campus Bldg 2 (RLOC 12.2.2.1), the new Edge detects it and sends a Map Register; the Mapping System updates the entry to 10.2.1.10/32 – 12.2.2.1
• The original Edge replaces its local /32 with a LISP0 entry (10.2.1.10/32 – LISP0), so traffic arriving from the Borders or other Edges is redirected to the host’s new location
SD-Access Fabric
Unique Control-Plane extensions compared to LISP

Capability          | Traditional LISP       | SD-Access Fabric
Layer 2 Extension   | Limited Support        | Fabric Control Plane extended to support MAC to IP binding and Layer 2 Overlays
Virtual Networks    | Layer-3 VN (VRF) only  | Both Layer-3 and Layer-2 VN (VRF) support (using VXLAN)
Fast Roaming        | Not Supported          | Fabric Control Plane extended to support fast roaming in =/< 50ms
Wireless Extensions | Not Supported          | Fabric Control Plane extended to support wireless extensions for: AP Onboarding, Wireless Guest, AP VXLAN functionality
Thank you!!!
Data-Plane
based on
VXLAN
Fabric data plane
➢ The primary technology used for the fabric data plane is based on Virtual Extensible LAN (VXLAN).

➢ VXLAN is an IETF standard encapsulation (RFC-7348, etc.).

➢ VXLAN encapsulation is IP/UDP-based, meaning that it can be forwarded by any IP-based network
(legacy or non-Cisco) and effectively creates the “overlay” aspect of the SD-Access fabric.

➢ VXLAN encapsulation is used (instead of LISP encapsulation) for two main reasons.

➢ VXLAN includes the source Layer 2 (Ethernet) header (LISP does not), and it also provides special
fields for additional information (such as virtual network [VN] ID and group [segment] ID).
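Because the VXLAN encapsulation adds roughly 50 bytes, the underlay MTU is usually raised to absorb it. A hedged example follows (platform commands differ and the values are assumptions; LAN Automation sets this for you):

! Illustrative underlay MTU adjustment for the ~50-byte fabric header
system mtu 9100                      ! global, on Catalyst switches
!
interface TenGigabitEthernet1/0/1
 mtu 9100                            ! per-interface, where supported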
SD-Access Fabric
Key Components – VXLAN

1. Control-Plane based on LISP
2. Data-Plane based on VXLAN

• Original packet: ETHERNET | IP | PAYLOAD
• LISP encapsulation: IP | UDP | LISP | (IP | PAYLOAD) – carries the IP packet only → supports L3 Overlay only
• VXLAN encapsulation: IP | UDP | VXLAN | (ETHERNET | IP | PAYLOAD) – carries the original Ethernet frame → supports L2 & L3 Overlay
LISP & VXLAN Headers
Similar Format - Different Payload

LISP Header – IP based | VXLAN Header – Ethernet based (MAC-in-IP with VN ID & Group ID), UDP port 4789

VXLAN-GPO encapsulation, outer to inner:

• Outer MAC Header (14 bytes, Underlay) – Dest. MAC = next-hop MAC (48), Source MAC = Src VTEP MAC (48), optional VLAN Type 0x8100 + VLAN ID (4 bytes), Ether Type 0x0800
• Outer IP Header (20 bytes, Underlay) – Misc. data (72), Protocol 0x11 (UDP), Header Checksum, Source IP = Src RLOC (32), Dest. IP = Dst RLOC (32)
• UDP Header (8 bytes) – Source Port = hash of inner L2/L3/L4 headers of the original frame (enables entropy for ECMP load balancing), Dest Port = 4789, UDP Length, Checksum 0x0000
• VXLAN Header (8 bytes, Overlay) – VXLAN Flags RRRRIRRR (8), Segment ID / Group Policy ID (16, allows 64K possible SGTs), VN ID (24, allows 16M possible VRFs), Reserved (8)
• Inner (Original) MAC Header, Inner (Original) IP Header, Original Payload
VXLAN-GPO Header
MAC-in-IP with VN ID & Group ID

What to look for in a packet capture?
Frame 1: 192 bytes on wire (1536 bits), 192 bytes captured (1536 bits)
Ethernet II, Src: CiscoInc_c5:db:47 (88:90:8d:c5:db:47), Dst: CiscoInc_5b:58:fb (0c:f5:a4:5b:58:fb)
Internet Protocol Version 4, Src: 10.2.120.1, Dst: 10.2.120.3
User Datagram Protocol, Src Port: 65354 (65354), Dst Port: 4789 (4789)
Source Port: 65354
Destination Port: 4789
OUTER
Length: 158 HEADER
Checksum: 0x0000 (none)
[Stream index: 0]

Virtual eXtensible Local Area Network


Flags: 0x0800, VXLAN Network ID (VNI)
OVERLAY
Group Policy ID: 50
VXLAN Network Identifier (VNI): 4098 HEADER
Reserved: 0

Ethernet II, Src: CiscoInc_c5:00:00 (88:90:8d:c5:00:00), Dst: ba:25:cd:f4:ad:38 (ba:25:cd:f4:ad:38)


Destination: ba:25:cd:f4:ad:38 (ba:25:cd:f4:ad:38)
Source: CiscoInc_c5:00:00 (88:90:8d:c5:00:00) INNER
Type: IPv4 (0x0800) HEADER
Internet Protocol Version 4, Src: 10.2.1.89, Dst: 10.2.1.99
Internet Control Message Protocol
Data-Plane Overview
Fabric Header Encapsulation

Fabric Data-Plane provides the following:
• Underlay address advertisement & mapping
• Automatic tunnel setup (Virtual Tunnel End-Points)
• Frame encapsulation (Outer/Inner) between Routing Locators

Support for LISP or VXLAN header format
• Nearly the same, with different fields & payload
• LISP header carries an IP payload (IP in IP)
• VXLAN header carries a MAC payload (MAC in IP)

Encap/Decap is triggered by LISP Control-Plane events
• ARP or NDP learning on L3 Gateways
• Map-Reply or Cache on Routing Locators
Data plane operation

VXLAN

➢ When Endpoint 1 wants to communicate to End point 2, edge 1 will query the fabric control plane node
for the location of End point 2.

➢ Upon getting the reply (End point 2 location is behind edge 2 ) it will encapsulate the traffic from
Endpoint 1 using VXLAN, and send it to Endpoint 2 (via edge 2).

➢ Once this traffic arrives at edge 2, it will be decapsulated and forwarded along to Endpoint 2.

➢ The reverse applies when Endpoint 2 wants to communicate back to End point 1.
SD-Access Fabric
Unique Data-Plane Extensions compared to VXLAN

Capability              | Traditional LISP/VXLAN | SD-Access Fabric
SGT Tag                 | No SGT                 | VXLAN-GPO uses the Reserved field to carry the SGT
Layer 3 Extension (VRF) | Yes                    | Yes, by mapping VRF → VNI
Layer 2 Extension       | Not Supported          | Fabric supports Layer 2 extension by mapping VLAN → VNI
Wireless                | Not Supported          | AP to Fabric Edge uses VXLAN; Fabric Edge to Edge/Border uses VXLAN for both Wired and Wireless (same)
Thank you!!!
Policy-Plane
based on CTS
Fabric policy plane
➢ The primary technology used for the fabric policy plane is based on Cisco TrustSec® .

➢ Cisco TrustSec, and specifically SGT and SGT Exchange Protocol (SXP), is an IETF draft protocol (SXP-006) that
provides logical group-based policy creation and enforcement by separating the actual endpoint “identity”
(group) from its actual network address (IP) using a new ID known as a Scalable [or security] Group Tag
(SGT).

➢ This technology provides several advantages for Cisco SD-Access, such as support for both network-based
(VRF/VN) and group-based segmentation (policies), the ability to create logical (address-agnostic) policies,
dynamic enforcement of group-based policies (regardless of location) for both wired and wireless traffic,
and the ability to provide policy constructs over a legacy or non-Cisco network (using VXLAN-GPO).

➢ In SD-Access, several enhancements to the original Cisco TrustSec specifications have been added, notably
combining the SGT and VN into the VXLAN-GPO header and enhancing Cisco TrustSec to include LISP VN
Extranet.
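Where inline tagging over VXLAN-GPO is not available (e.g. toward legacy or non-Cisco devices), SGT-to-IP bindings can be exchanged with SXP. A minimal hedged sketch follows; the peer address and password are assumptions:

! Illustrative SXP peering (values are assumptions)
cts sxp enable
cts sxp default password <sxp-password>
cts sxp connection peer 10.10.100.21 password default mode local listener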
SD-Access Fabric
Key Components – Group Based Policy

1. Control-Plane based on LISP


2. Data-Plane based on VXLAN
3. Policy-Plane based on CTS
Virtual Routing & Forwarding
Scalable Group Tagging
VRF + SGT

ETHERNET IP UDP VXLAN ETHERNET IP PAYLOAD


SD-Access Policy
Two Level Hierarchy - Macro Segmentation

Virtual Network (VN)
First-level segmentation ensures zero communication between forwarding domains (e.g. VN “A”, VN “B”, VN “C” in the SD-Access Fabric). Ability to consolidate multiple networks into one management plane, e.g. a Building Management VN and a Campus Users VN.
SD-Access Policy
Two Level Hierarchy - Micro Segmentation

Scalable Group (SG)
Second-level segmentation ensures role-based access control between two groups (e.g. SG1–SG9) within a Virtual Network. Provides the ability to segment the network into either lines of business or functional blocks, e.g. within the Building Management VN and the Campus Users VN.
SD-Access Policy
Policy Types

Access Control Application Traffic Copy


Policy Policy Policy
↓ ↓ ↓
Who can access What? How to treat Traffic? Need to Monitor Traffic?

Permit / Deny Rules QoS for Applications Enable SPAN Services


for Group-to-Group Access or Application Caching for specific Groups or Traffic


Group Assignment
Two ways to assign SGT

Dynamic Classification – the SGT is assigned as a result of endpoint authentication (e.g. 802.1X / MAB) at the Campus Access layer, with ISE across the Enterprise Backbone.

Static Classification – the SGT is mapped statically (e.g. on the WLC, Firewall, Hypervisor SW or DC Core / DC Access), see the examples below:
• L3 Interface (SVI) to SGT
• L2 Port to SGT
• VLAN to SGT
• Subnet to SGT
• VM (Port Profile) to SGT
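As a point of reference, static classification can be expressed directly in CLI; the lines below are illustrative only (values and interfaces are example assumptions, and ISE / DNA Center normally manage this):

! Illustrative static SGT classification (values are examples)
cts role-based sgt-map 10.1.100.0/24 sgt 30        ! Subnet to SGT
cts role-based sgt-map vlan-list 100 sgt 20        ! VLAN to SGT
!
interface GigabitEthernet1/0/20
 cts manual
  policy static sgt 10 trusted                     ! L2 Port to SGT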


Cisco TrustSec
Security Group Tags in ISE

Define SGTs under ‘Components’ section in TrustSec Work Center (from ISE 2.0+)
Cisco TrustSec
Define Authorization Policies for Users and Devices in ISE

Create an 802.1X or
MAB or Web Auth
policy to assign the
SGTs to the Users
and Devices, after
client Authentication
& Authorization

SD-Access Policy
Access Control Policies

Source Group (e.g. Guest Users) → Contract → Destination Group (e.g. Web Server)
(Cisco DNA Center | Cisco APIC-DC)

Contract example – CLASSIFIER: PORT, ACTION: DENY
• Classifier Types: Port Number, Protocol Name, Application Type
• Action Types: Permit, Deny, Copy

All groups in a Policy must belong to the same Virtual Network.
Cisco TrustSec
Security Group ACLs in ISE

SGACLs are
referenced
under the
Egress policy
Policy Enforcement
Ingress Classification with Egress Enforcement

• Ingress classification: the user authenticates (via ISE) and is classified as Marketing (SGT 5) at the access layer (e.g. Cat3850 / WLC5508); traffic crosses the Enterprise Backbone as SRC: 10.1.10.220, SGT: 5
• Destination classification: data-center resources are classified by destination, e.g. CRM: SGT 20 (DST: 10.1.100.52) and Web: SGT 30 (DST: 10.1.200.100); a FIB lookup maps Destination IP → SGT
• Egress enforcement (SGACL) occurs at the data-center edge (e.g. Cat6800 / Nexus 7000 / 5500 / 2248) using the SGT/DGT matrix:

  SRC ↓ / DST →     CRM (20)   Web (30)
  Marketing (5)     Permit     Deny
  BYOD (7)          Deny       Permit
Cisco TrustSec
Ingress Classification & Group Tagging
cts role-based sgt-map 80.1.1.2 sgt 80        ! Associated IP → Group Tag; Static (CLI) or Dynamic (ISE)

Example:
cts role-based sgt-map 80.1.1.2 sgt 80
cts role-based sgt-map 90.1.1.2 sgt 90
cts role-based sgt-map 100.1.1.2 sgt 100
..
cts role-based sgt-map 110.1.1.2 sgt 200

1. Each SGT binding consumes 1 IP Host entry in the Hash Table (Host Entries), stored as:
   IP SGT Binding: <ip, vrf> sgt, null adj

SGT to IP Binding is part of the Host Table (Not the ACL Table)


Cisco TrustSec
Egress Enforcement - SGACL
Switch(config)# ip access-list role-based allow_webtraff            ! SGACL definition
Switch(config-rb-acl)# 10 permit tcp dst eq 80
Switch(config-rb-acl)# 20 permit tcp dst eq 443
Switch(config-rb-acl)# 30 permit icmp
Switch(config-rb-acl)# 40 deny ip
!
Switch(config)# cts role-based permissions from 20 to 70 allow_webtraff   ! Source Group Tag (SGT) 20 → Dest. Group Tag (DGT) 70

Hardware usage:
1. Policy with SGT, DGT and SGACL reference (label): cts role-based permissions from <SGT> to <DGT> <SGACL>
2. Each SGT,DGT set consumes 1 ACL Label Entry in the Hash Table (SGT/DGT entries)
3. Each SGACL can consume 1+ Access Control Entries in the ACL TCAM (ACEs), e.g. permit tcp dst eq 80 / permit tcp dst eq 443 / permit icmp
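For verification, a few commonly used commands on the enforcement switch are listed below (a non-exhaustive, illustrative set; output is not reproduced here):

show cts role-based permissions    ! SGT/DGT policy matrix and referenced SGACL names
show cts role-based counters       ! per SGT/DGT cell permit/deny hit counters
show cts environment-data          ! environment data (SGT names, servers) downloaded from ISE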


Group Propagation
VN & SGT in VXLAN-GPO Encapsulation

Encapsulation at Edge Node 1 → IP Network → Decapsulation at Edge Node 2 (the VXLAN header carries both the VN ID and the SGT ID)

• Classification – Static or Dynamic VN and SGT assignments
• Propagation – Carry VN and Group context across the network
• Enforcement – Group-Based Policies: ACLs, Firewall Rules
SD-Access Fabric
Unique Policy-Plane Extensions compared to CTS

Capability            | Traditional CTS                                                 | SD-Access Policy
SGT Propagation       | Enabled hop-by-hop, or by Security-Group Exchange Protocol (SXP) sessions | Carried with the data traffic inside VXLAN-GPO (overlay) end-to-end
VN Integration        | Not Supported                                                   | VN + SGT-aware Firewalls
Access Control Policy | Yes                                                             | Yes
QoS (App) Policy      | Not Supported                                                   | App-based QoS policy, to optimize application traffic priority
Traffic Copy Policy   | Not Supported                                                   | SRC/DST based Copy policy (using ERSPAN) to capture data traffic
Thank you!!!
Exploring
Cisco DNA
Center
Cisco DNA Center overview
➢ Cisco DNA Center is a centralized operations platform for end-to-end automation and assurance of
enterprise LAN, WLAN, and WAN environments, as well as orchestration with external solutions and
domains.

➢ It allows the network administrator to use a single dashboard to manage and automate the network.

Some of the key highlights of Cisco DNA Center include:

➢ High availability — for both hardware component and software packages

➢ Backup and restore mechanism — to support disaster recovery scenarios

➢ Role-based access control mechanism, for differentiated access to users based on roles and scope

➢ Programmable interfaces to enable ecosystem partners and developers to integrate with Cisco DNA Center
Cisco DNA Center
SD-Access – Key Components

ISE Appliance DNA CenterAppliance


SNS 3600 Series DN2-HW-APL
Cisco DNA Center
API API
API Design | Policy | Provision | Assurance API

API

Cisco ISE
Identity 2.3
& Policy Automation Assurance
API API
Identity Services Engine Network Control Platform Network Data Platform

NETCONF
SNMP
SSH

AAA
RADIUS
TACACS
Campus Fabric NetFlow
Syslog
HTTPS

Cisco Switches | Cisco Routers | Cisco Wireless


Cisco DNA Center
Overall “Solution Scale” is driven by Cisco DNAC
Cisco DNAC 1.2.10
Cisco DNAC appliances: DN1-HW-APL (44 Core, UCS M4, *End of Sale*), DN2-HW-APL (44 Core, UCS M5), DN2-HW-APL-L (56 Core, UCS M5)

                              DN1-HW-APL | DN2-HW-APL | DN2-HW-APL-L
Switches, Routers & WLC          1000    |    1000    |    2000
Access Points                    4000    |    4000    |    6000
Endpoints (Wired + Wireless)   5K+ / 20K | 10K+ / 30K |  5K+ / 20K
Sites                             500    |     500    |    1000
Fabric Nodes                   500/Site  |  500/Site  |  600/Site
IP Pools                       100/Site  |  300/Site  | 1000/Site
Virtual Networks                64/Site  |   64/Site  |   64/Site
Cisco DNA Center
Overall “Solution Scale” is driven by Cisco DNAC
Cisco DNAC 1.3
Cisco DNAC appliances: DN2-HW-APL (44 Core, UCS M5), DN2-HW-APL-L (56 Core, UCS M5), DN2-HW-APL-XL (112 Core, UCS M5)

                              DN2-HW-APL | DN2-HW-APL-L | DN2-HW-APL-XL
Switches, Routers & WLC          1000    |     2000     |     5000
Access Points                    4000    |     6000     |    12000
Endpoints (Wired + Wireless)      25K    |      40K     |     100K
Sites                             500    |     1000     |     2000
Fabric Nodes                   500/Site  |   600/Site   |  1200/Site
IP Pools                       300/Site  |  1000/Site   |  1000/Site
Virtual Networks                64/Site  |    64/Site   |   256/Site
Cisco DNA Center
High Availability Cluster

• 1 or 3 appliance HA Cluster (more in future) – an odd number of nodes to achieve quorum of the distributed system
• Seen as 1 logical DNAC instance – connect to the Virtual (Cluster) IP; rare need to access individual nodes (e.g. via SSH)
• Distributed micro-services on a Maglev cluster – 2 nodes active/sharing + 1 redundant
  - Some services run multiple copies spread across nodes (e.g. databases)
  - Other services run a single copy and migrate from a failed node to the redundant node
• Single Appliance for Cisco DNA (Automation + Assurance)
Cisco DNA Center
Automated Provisioning and Telemetry Enrichment

Telemetry Intent
Alerts

Network Control Violations Network Data


Platform Inventory, Topology, Host, Group Platform
Network State changes
Path Trace information

Configuration Automation C Data Collection


Telemetry Configuration Telemetry Data
B B

Campus
Fabric
Cisco DNA Center
NDP System Components

Network Network Cost Network Segmentation & Change Impact Other 3rd party
Vulnerability Detection app Analytics app
Assurance Analytics app analytics apps

Northbound(NB) APIs NDP Extensions

Control
Network NDP Core Analytics Platform
Controller
Platform Inventory,
Topology
etc.

Infrastructure Services, Cluster Operations and Management

Configuration Telemetry

Distributed Processing
Network Elements
(Switches, Routers, Access Points, N/W Services, Identity providers)
Cisco DNA Center
NDP Analytics Architecture

Data collection and ingestion Data correlation and analysis Data visualization and action
Network Assurance
Complex
Router Switch WLC Sensor
Network correlation
telemetry
Metadata
SNMP NetFlow Syslog Streaming extraction
telemetry
...
Collector and analytics pipeline SDK
ISE AAA Topology Location PxGrid
Stream Data models and restful APIs
processing
DNS DHCP Inventory Policy IPAM
Time series analysis

Contextual data System management portal


Analytics Engine
SD-Access
CLI and API vs. GUI

Campus Fabric SD Access

• Command Line (CLI) • Programmable APIs • DNA Center GUI


• Templates / Macros • NETCONF / YANG • Cross-App REST APIs
• Customized Workflows • Automated Workflows • Automated Workflows
• Box-by-Box Management • Box-by-Box Management • Centralized Management
Cisco DNA Center
4 Step Workflow

Design Policy Provision Assurance

• Global Settings • Virtual Networks • Fabric Domains • Health Dashboard


• Site Profiles • ISE, AAA, Radius • CP, Border, Edge • 360 o Views
• DDI, SWIM, PNP • Endpoint Groups • Fabric WLC, AP • Net, Device, Client
• User Access • Group Policies • External Connect • Path Traces

System Settings & Integration

App Management & High Availability


Thank you!!!
Configuring
Underlay
Automation
What is Underlay Network?
Traditional Networks

Core

Core

Dist

Access

▪ Traditional LAN and WLAN network infrastructure and designs


▪ Variable network size – Three-Tier or Collapsed models
▪ Traditional network designs – Multilayer or Routed Access providing reachability
What is Underlay Automation
Automating Traditional Networks
Core

Core

Dist

Access

▪ Ease of new LAN network deployments for Campus or Branch networks


▪ Complete network automation to accelerate building SDA overlay networks
▪ Flexible software design to on-board new switch during network expansion
Underlay Automation Overview
Simplified Procedure
Plan Design Discover Provision




Verify Network Design Sites across geographic Discover Network devices Dynamic discovery & automation
Verify System support Global network services Physical Topology Optimized routing design
Prepare IP Services Design IP Address Pools Network Readiness Resilient underlay settings

4 Step process
SDA Ready Network
Thank you!!!
Underlay
Automation
Step – 1 : Plan
Plan Design Discover Provision

Plan – Understanding Device Roles


Cisco DNA Center

Core

Seed
Seed Seed

Underlay Automation Block

Seed Device
Intermediate system(s) between Core and
new network block
Key system to discover, automate andon-
board new Catalyst switches in network
Plan Design Discover Provision

Plan – Understanding Device Roles


Cisco DNA Center Cisco DNA Center

Core Core

Seed Seed Seed


Seed Seed

PnP-Agent
PnP Agent PnP Agent
Underlay Automation Block

PnP Agent PnP Agent PnP Agent

Seed Device PnP-Agent Device


Intermediate system(s) between Core and Catalyst switch with factory-default settings
new network block and waiting at startup-wizard state
Key system to discover, automate andon- Interconnect between Seed and another
board new Catalyst switches in network PnP-Agent device in the network
Plan Design Discover Provision

Plan – Underlay Automation Boundary


Cisco DNA Center

Core

2 Tier – Collapsed Core Design 3 Tier – Campus Design Extended Campus Design

Seed Seed Seed

PnP Agent PnP Agent PnP Agent

PnP Agent PnP Agent

PnP Agent

Layer 3
Underlay Automation Boundary Layer 2

Maximum Automation boundary limited to 2


hop count from Seed Device
Underlay Automation Boundary
Supporting common hierarchical and
structured Enterprise network designs
Plan Design Discover Provision

Plan – Network Support Cisco DNA Center

Core

2 Tier – Collapsed Core Design 3 Tier – Campus Design

Core
Seed Seed

Dist
Seed Seed PnP Agent PnP Agent

Access
PnP Agent PnP Agent PnP Agent PnP Agent PnP Agent PnP Agent

Underlay Network Discovery Flexible Discovery Support


Dynamic and on-demand network discovery Flexible Multi-tier network topologies
process support – Two or Three-Tier Designs
Layer 3
Seed system programmed to on-board new Day-2 Underlay Automation support for new Layer 2
Catalyst switches with zero configurations systems in P2P topologies
Underlay Automation Boundary
Plan Design Discover Provision

Plan – IP Address Plan Cisco DNA Center

Seed-1 Seed-2
Core
S1(config)# interface Loopback 0 S1(config)# interface Loopback 0
S1(config-if)# ip address <ip> <mask> S1(config-if)# ip address <ip> <mask>
! !

Seed Seed

10.128.0.0/16 IS-IS Routing Domain

PnP Agent PnP Agent

PnP Agent PnP Agent PnP Agent

IP Address Plan Interface Address Plan


Plan and identify Network Address range for Underlay Leverage existing Loopback interface or create new if
Automation network required

Manually configure IP subnet on inter-seed switch Loopback IP could be outside of domain Network
interfaces from Underlay network address range if there address range, but must be reachable to DNA-C
is interconnection
Seed devices must not use LAN Automation address
pool
Plan – Seed Switch IP Routing Configurations
Plan Design Discover Provision

Cisco DNA Center

Core

Seed Seed

10.128.0.0/16 IS-IS Routing Domain

PnP Agent PnP Agent

PnP Agent PnP Agent PnP Agent

IP Routing Configuration
Optional if IS-IS routing protocol in Core
Else, manually create IS-IS routing instance without
area tag and mutually redistribute between routing
domains. No additional IS-IS routing configurations
required.
Summarize Network range to Core
Plan – Seed Switch IP Routing Configurations
Plan Design Discover Provision

Cisco DNA Center

Seed-1 OSPF Core Seed-2 OSPF

S1(config)# router isis OSPF S2(config)# router isis


S1(config-router)# redistribute ospf <id> metric<count> S2(config-router)# redistribute ospf <id> metric<count>
! !
S1(config)# router ospf <id> S2(config)# router ospf <id>
S1(config-router)# redistribute connected route-map<name> S2(config-router)# redistribute connected route-map<name>
S1(config-router)# summary-address 10.128.0.0 255.255.0.0 S2(config-router)# summary-address 10.128.0.0 255.255.0.0

Seed Seed

10.128.0.0/16 IS-IS Routing Domain

PnP Agent PnP Agent

PnP Agent PnP Agent PnP Agent

IP Routing Configuration
Optional if IS-IS routing protocol in Core
Else, manually create IS-IS routing instance without
area tag and mutually redistribute between routing
domains. No additional IS-IS routing configurations
required.
Summarize Network range to Core
Plan – Seed Switch IP Routing Configurations
Plan Design Discover Provision

Cisco DNA Center

Seed-1 OSPF Core Seed-2 OSPF

S1(config)# router isis OSPF EIGRP S2(config)# router isis


S1(config-router)# redistribute ospf <id> metric<count> S2(config-router)# redistribute ospf <id> metric<count>
! !
S1(config)# router ospf <id> S2(config)# router ospf <id>
S1(config-router)# redistribute connected route-map<name> S2(config-router)# redistribute connected route-map<name>
S1(config-router)# summary-address 10.128.0.0 255.255.0.0 S2(config-router)# summary-address 10.128.0.0 255.255.0.0

Seed Seed

EIGRP 10.128.0.0/16 IS-IS Routing Domain


EIGRP
S2(config)# router isis
S1(config)# router isis S2(config-router)# redistribute eigrp <id> metric <count>
S1(config-router)# redistribute eigrp <id> metric <count> PnP Agent PnP Agent !
! S2(config)# interface <id>
S1(config)# interface <id> S2(config-if)# description CONNECTED TO CORE
S1(config-if)# description CONNECTED TO CORE S2(config-if)# ip summary-address eigrp <AS> 10.128.0.0 255.255.0.0
S1(config-if)# ip summary-address eigrp <AS> 10.128.0.0 255.255.0.0

PnP Agent PnP Agent PnP Agent

IP Routing Configuration
Optional if IS-IS routing protocol in Core
Else, manually create IS-IS routing instance without
area tag and mutually redistribute between routing
domains. No additional IS-IS routing configurations
required.
Summarize Network range to Core
Plan Design Discover Provision

Plan – DNA-C IP Routing Configurations


Single-Home Multi-Home
DNA-C Cisco DNA Center Cisco DNA Center DNA-C
Eth-0 Management Interface : Eth-0 Management Interface :
Eth-0
IP Address : <IP_Address> IP Address : <IP_Address_1>
Core Core
Netmask : <Mask> Eth-0 Netmask : <Mask>
Eth-1
Gateway : <Default_Gateway> Gateway : <Default_Gateway>

Eth-1 Interface :

Seed Seed
IP Address : <IP_Address_2>
10.128.0.0/16 IS-IS Routing Domain
Netmask : <Mask>

Gateway : <Skip>
PnP Agent PnP Agent
Static Route : <LAN_Automation-Net>/<mask>/GW

PnP Agent PnP Agent PnP Agent

DNA-C IP Routing Configuration


DNA-C must have end-to-end IP reachability
In Single-Home design the DNA-C performs host
function with Default Gateway providing IP routing.
In Multi-Home design, the DNA-C must have static route
to LAN Automation network(s) via secondary interface.
Plan – Endpoint Connections
Plan Design Discover Provision

Cisco DNA Center

Core

Seed Seed

10.128.0.0/16

PnP Agent PnP Agent


Layer 2 Domain

PnP Agent PnP Agent PnP Agent

Endpoint Integration

The PnP Agent may contend for DHCP address with attached
Endpoints

Underlay automation process may fail if the LAN Pool is


consumed by the Endpoints connected to PnP Agents

Recommended to connect Endpoints post successful Underlay


Automation procedure
Plan Design Discover Provision

Plan – Endpoint Connections


Cisco DNA Center

Core

Temp DHCP Server

Seed Seed

10.128.0.0/16

PnP Agent PnP Agent


Layer 2 Domain

PnP Agent PnP Agent PnP Agent

Before Underlay Automation Endpoint Integration


Not Recommended
The PnP Agent may contend for DHCP address with attached
Endpoints

Underlay automation process may fail if the LAN Pool is


consumed by the Endpoints connected to PnP Agents

Recommended to connect Endpoints post successful Underlay


Automation procedure
Plan – Endpoint Connections
Plan Design Discover Provision

Cisco DNA Center

Core

Seed Seed

10.128.0.0/16 IS-IS Routing Domain

After Underlay Automation – Recommended

Endpoint Integration
The PnP Agent may contend for a DHCP address with attached Endpoints.
The Underlay Automation process may fail if the LAN Pool is consumed by Endpoints connected to PnP Agents.
Connect Endpoints only after the Underlay Automation procedure completes successfully.
Plan Design Discover Provision

Plan – Seed Switch Feature Validation


✅ Verify that no conflicting Spanning-Tree CLI is present, e.g. "spanning-tree portfast default"

✅ Verify the Seed devices do not have any network address belonging to the LAN Automation IP Pool

✅ Pre-configure IS-IS routing without an Area Tag, with mutual route redistribution. No additional IS-IS
routing configuration is implemented.

✅ Verify SSH terminal access is configured. Telnet is unsupported
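As a sketch, these pre-checks map to standard IOS-XE show commands on each Seed switch (output varies by platform and release):

! Check for conflicting Spanning-Tree CLI
show running-config | include spanning-tree portfast default
! Confirm no configured address falls inside the LAN Automation IP Pool
show ip interface brief
! Review the pre-configured IS-IS instance and redistribution
show running-config | section router isis
! Confirm SSH terminal access (Telnet is unsupported)
show running-config | section line vty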


Thank you!!!
Underlay
Automation
Step – 2 :
Design
Plan Design Discover Provision

Design – Overview
Plan Design Discover Provision

Design – Configure Global Network Services

1 Add and Configure Server Address
2 Save Configuration

Network Services Configurations
Add all required network services.
Multiple servers can be added for load sharing and redundancy.

Configuration Compliance
The Provision step configures the systems.
Updates can be re-provisioned for Day-2 operation.
Plan Design Discover Provision

Design – Configure Global Device Credentials

1 Configure and Select Credentials
2 Configure and Select SNMP
3 Save Configuration

CLI Credential Configurations
Common login credentials for all devices under the selected hierarchy.
Multiple local login accounts can be created and automated.

SNMP Credentials
Automate SNMP community configuration.
Multiple SNMP communities are possible. Only one is active.
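As an illustration of the device-side result of this credential and SNMP intent, the pushed configuration resembles the following IOS-XE lines; the username, secret, and community string are placeholders, not values mandated by DNA-C:

username <cli-user> privilege 15 algorithm-type scrypt secret <cli-password>
snmp-server community <ro-community> RO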
Plan Design Discover Provision

Plan Design Discover Provision

Design – Configure Global Network Range

1 Assign unique IP Pool Name
2 Network Range for specific Area
3 Classful Network Mask
4 Gateway IP Address
5 Save

Global Network Range
Structured Enterprise IP network design.
Planned and divided regionally for optimal network communications.

Global IP Pool
IP address repository for multi-function distribution to Area, Site, etc.
Reserve an IP Pool from the Area to automate network intent for various operations.
Plan Design Discover Provision

Design – Configure LAN Pool at Site
Plan Design Discover Provision

1 Assign unique LAN Pool Name
2 Select LAN from menu
3 Select Area Network Range
4 Assign LAN Pool Address and Mask
5 Reserve to create new entry

Reserve LAN IP Pool
Configure Pool Name and Type = LAN.
One Fabric Domain = One LAN Pool.
Select the Parent Pool to reserve the Network Address Range.

LAN IP Assignments
Supported Netmask Range: /8 – /24.
Dynamic IP address assignment from the LAN pool.
Add more as the network grows.
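A hypothetical pool hierarchy, illustrating how a LAN Pool is carved from its parent range; all values are placeholders chosen to match the 10.128.0.0/16 example used throughout this module:

Global IP Pool        : 10.0.0.0/8
  Area Network Range  : 10.128.0.0/16
    LAN Pool (Type = LAN) : 10.128.16.0/20   (reserved for Underlay Automation loopbacks and point-to-point links)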
Plan Design Discover Provision

Design – Configuration Summary

Step-1 Build Network Hierarchy based on geographic locations

Step-2 Configure Network Services – Global | Area | Site level

Step-3 Configure Network Address Range – Global | Area | Site level

Step-4 Configure LAN IP Pool from Parent – Global | Area | Site level
Thank you!!!
Underlay
Automation
Step – 3 :
Discovery
Discovery – Overview
Plan Design Discover Provision

[Topology: Cisco DNA Center connected to the Core; 2 Tier – Collapsed Core Design (Core, Seed pair, PnP Agent Access) and 3 Tier – Campus Design (Core, Seed Distribution, PnP Agent Access); the Underlay Automation Boundary is marked on both designs]

Underlay Network Discovery
Dynamic and on-demand network discovery process.
Seed system programmed to on-board new Catalyst switches with zero-configuration systems.

Flexible Discovery Support
Flexible network topologies with Dual or Single Seed system.
Day-2 Underlay Automation support for new Layer 3 / Layer 2 systems.


Plan Design Discover Provision

Discovery – Seed System Discovery


Start Discovery
Plan Design Discover Provision

Discovery – Seed System Inventory


Plan Design Discover Provision

Discovery – Seed System Inventory


Seed System Discovery


Seed device is automatically added to the Inventory.
Discovers system information.
Prepares for Underlay network infrastructure discovery and automation.
Plan Design Discover Provision

Discovery – Configuration Summary

Step-1 Build Discovery Profile

Step-2 Assign Primary and Secondary Seed System IP address to discover

Step-3 Retain the remaining parameters unless a unique value is required


Thank you!!!
Underlay
Automation
Step – 4 :
Provision
Plan Design Discover Provision

Provision – Underlay Automation


Underlay Provision
DNA-C Provision supports Underlay and Overlay
network automation
All systems under Seed are dynamically
discovered and programmed using PnP function
Plan Design Discover Provision

Provision – Add Seed Systems to Site

1 Add Seed system to Site

2 Update Software if needed

Underlay Provision
After successful Step-2 discovery, the Seed systems are automatically added to the Provision table.
Add the Seed systems to a Building of the Site where they are deployed, for logical grouping.

Upgrade Software
Upgrade the Cisco IOS software on the Seed device(s) if a new version is required.
Optional step before proceeding further with Underlay Automation.
Plan Design Discover Provision

Provision – Device Inventory Views

Plan Design Discover Provision

Provision – Device Inventory Views

Change Topology View 1

Underlay Provision
Device Inventory provides two views with unique functions – Table and Topology.
The Table view provides the device inventory and states.
The Topology view provides the Provision function.
Plan Design Discover Provision

Provision – Initiate Discovery Process

S1 S2

1 Select Primary Seed-1 System

2 Select Secondary Seed-2 System

Select Seed Systems
Click each discovered seed system and select "Discover and Provision".
Both systems are programmed with all required parameters to successfully discover and automate all systems.
Plan Design Discover Provision

Provision – Start Automation

1 Click LAN Automation
2 Select Site
3 Select Seed Devices
4 Select Site LAN IP Pool
5 Optional. Configure Name Prefix
6 Select Underlay Network Interface
7 Start Underlay Automation

Start Automation Process
The Primary Seed is temporarily programmed with DHCP and options. Automatic failover to the Secondary occurs if the Primary fails during automation.
The selected ports are automated to discover directly and indirectly attached PnP-Agent switches.
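Conceptually, the temporary DHCP service on the Primary Seed resembles the IOS-XE fragment below. The pool name, scope, and option 43 string are illustrative placeholders only; DNA-C generates the actual values automatically and removes them when automation stops:

ip dhcp pool LAN_AUTOMATION_POOL
 network 10.128.16.0 255.255.240.0
 option 43 ascii "5A1D;B2;K4;I<DNA-C_IP>;J80"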
Plan Design Discover Provision

Provision – Stop Automation

1 Check Discovery Status
2 Stop Underlay Automation

Stop Automation Process ✅
All discovered and automated switches must reach the Completed status. Processing time may vary with network size.
Stopping the automation completes the process and transitions all switches to their final state.
Plan Design Discover Provision

Provision – Global Network Services

Global Service Provision
Provision all Global- or Area-configured services to the newly discovered switches.
The services configuration is supported over the non-Mgmt Core network infrastructure.
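As a sketch, these provisioned services typically land on each discovered switch as standard IOS-XE commands such as the following; all server addresses and the community string are placeholders:

ntp server <ntp-server-ip>
ip name-server <dns-server-ip>
logging host <syslog-server-ip>
snmp-server host <snmp-server-ip> version 2c <ro-community>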
Provision – Define System Roles
Plan Design Discover Provision

System Role
The Administrator must select each switch and define its network role – Access | Distribution | Core.
DNA-C auto-arranges the topology view based on the user selection.
Plan Design Discover Provision

Provision – Validate Configuration

Underlay Automation

Point-to-Point Interface configurations
Loopback Interface configurations
IS-IS Routing Protocol
BFD, IP Dampening, High Availability
IP Routing Security, Device Security
AAA, 802.1X, IP Device Tracking
SNMP Traps, Syslog, RADIUS
SSH, HTTP and OOB Management Access

Underlay Automation Configurations
DNA-C automates a broad set of network configurations on the Seed and PnP Agent switches.
All systems are programmed with a variety of technologies and best practices for a reliable underlay network infrastructure.
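The fragments below sketch what the automated underlay can look like on a PnP Agent switch. Interface names, addresses, timers, and the IS-IS NET are illustrative placeholders, not the exact values DNA-C generates:

interface Loopback0
 ip address 10.128.16.11 255.255.255.255
 ip router isis
!
interface TenGigabitEthernet1/0/1
 description Fabric Physical Link
 ip address 10.128.17.0 255.255.255.254
 ip router isis
 bfd interval 500 min_rx 500 multiplier 3
 dampening
!
router isis
 net 49.0000.0100.1281.6011.00
 is-type level-2-only
 metric-style wide
 bfd all-interfaces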
Plan Design Discover Provision

Provision – SD-Access Ready!

SD-Access Ready
DNA-C auto-arranges the topology view based on the user selection.
All systems are programmed and ready to build overlay networks.
💡 Resynchronize the Device Inventory if only a partial topology is discovered.
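Before building overlays, a few standard IOS-XE checks (illustrative only; output varies by platform) can confirm the underlay is healthy end to end:

show isis neighbors
show ip route isis
show ip interface brief | include Loopback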
Plan Design Discover Provision

Provision – Configuration Summary

Step-1 Add Seed systems to Site

Step-2 Start Underlay Network discovery and automation

Step-3 Stop Underlay Network discovery and automation

Step-4 Provision Global Network services

Step-5 Designate the System role to build a structured network topology


Plan Design Discover Provision

Provision – Network Expansion


Cisco DNA Center

Core

Access Network Expansion

Seed

PnP Agent PnP Agent

Access Network Expansion
Automate from the Parent Seed device as the Access network expands.
Transparent process, with existing switches sharing the same or a different LAN Pool.
Plan Design Discover Provision

Provision – Network Expansion


Cisco DNA Center

Core

Access Network Expansion Distribution Network Expansion

Seed Seed

PnP Agent PnP Agent PnP Agent

PnP Agent PnP Agent PnP Agent

PnP Agent PnP Agent PnP Agent

Access Network Expansion
Automate from the Parent Seed device as the Access network expands.
Transparent process, with existing switches sharing the same or a different LAN Pool.

Distribution Network Expansion
Automate a new network block from the Parent Seed device. Reuse or create a new LAN Pool.
Use the Distribution switch as the Seed if the Access layer expands.
Thank you!!!
