Deploying Cisco SD-Access (ENSDA), Part 1
SD-Access (ENSDA) v1.1
By SDWAN-Networks
Course details
Learning Objectives
❑ Orchestrate a Cisco SD-Access solution using the Cisco DNA Center™ orchestration platform
❑ Use the Network Data Platform to demonstrate the assurance and analytics capabilities of SD-Access
Prerequisites
Before taking this course, you should have the knowledge and skills listed at:
https://www.cisco.com/c/en/us/training-events/training-certifications/training/training-services/courses/deploying-cisco-sd-access-ensda.html
Thank you!!!
Cisco SD-Access Overview
• Exploring Cisco SD-Access
• Describing the Cisco SD-Access Architecture
• Exploring Cisco DNA Center
• Configuring Underlay Automation
Exploring Cisco SD-Access
What Problems Does Software-Defined Access Solve?
➢ Security/Segmentation
➢ Host Mobility
➢ Automation and Orchestration
➢ Network Visibility
1. Security/Segmentation
How it is done today:
➢ To create segmentation, customers steer all traffic through firewalls, use VLANs, ACLs, or VRF-lite, or deploy MPLS.
➢ Some customers have deployed TrustSec in their networks, but have historically found it difficult to implement.
➢ Security is also difficult and time-consuming to maintain, as customers have numerous firewall interfaces with thousands of lines of access control lists to limit traffic flows.
➢ Today, security at the edge is handled by customers deploying 802.1X and ISE in their networks.
➢ We have seen great success among customers adopting and deploying 802.1X, but it can be a time-consuming process to create, test, and roll out 802.1X organization-wide.
How it is done with Software-Defined Access:
➢ Software-Defined Access creates macro-segmentation via VRFs and micro-segmentation via Cisco TrustSec, which uses a scalable group tag (SGT) to map a user's identity to a tag rather than an IP address.
➢ Writing security rules against this tag greatly improves the readability of the security policy and significantly reduces maintenance, going from thousands of ACL entries to tens of SGT entries.
➢ Software-Defined Access also automates the process of deploying 802.1X to the fabric edge.
➢ Deployment and maintenance are automated and managed via the DNA Center appliance and Identity Services Engine.
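As a hedged illustration of the "tens of SGT entries" point, a group-based rule on an enforcement switch looks roughly like the sketch below. The SGT values and ACL name are hypothetical; in practice ISE and DNA Center generate and download this policy automatically.

```
! Hypothetical SGACL: group numbers and name are examples only
ip access-list role-based ALLOW_WEB
 permit tcp dst eq 443
 permit tcp dst eq 80
 deny ip
!
! Apply from source group 10 (e.g. Employees) to destination group 20 (e.g. Web servers)
cts role-based permissions from 10 to 20 ipv4 ALLOW_WEB
```

One such rule replaces many IP-based ACL entries, because it applies regardless of which IP addresses the group members hold.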
2. Host Mobility
How it is done today:
➢ Today, when a user roams throughout the network, their IP address changes, making it difficult to base security policies on IP addresses and creating complex IP schemes to maintain.
How it is done with Software-Defined Access:
➢ Users can seamlessly roam across an L3 network while keeping the same IP address, because LISP is used as the control-plane protocol.
➢ In this solution, security enforcement is achieved using SGTs, which make IP addresses much less relevant. With this model, you can use very large IP pools without worrying about broadcast storms or using IP addresses for security enforcement.
3. Automation and Orchestration
How it is done today:
➢ Today, customers use many different systems to help automate and orchestrate the deployment of a network.
➢ Most customers still deploy their networks manually via the CLI, and standing up a site can take over a week, including provisioning VRFs, configuring dynamic routing protocols, configuring switching features, and configuring 802.1X.
How it is done with Software-Defined Access:
➢ The entire Software-Defined Access solution is deployed and managed via Cisco DNA Center, providing customers with a single pane of glass for all network tasks.
➢ DNA Center automates all of the deployment tasks, reducing what would normally take over a week to a few hours.
4. Network Visibility
How it is done today:
➢ Today, customers must use third-party tools to try to gain visibility into their network, but still generally lack meaningful, actionable data.
➢ If a user calls into a customer's help desk, gaining visibility into historical network issues is nearly impossible, making troubleshooting a difficult and time-consuming task.
How it is done with Software-Defined Access:
➢ With NDP, a help-desk employee can get a 360-degree view of a client's and a device's historical experience on the network. NDP can show issues and trends related to things like device onboarding, application experience, and more. For example, NDP could identify a wireless issue in the network that correlates to the user's ticket, helping identify and resolve the ticket quickly.
Software Defined Access
What is Cisco Software Defined Access (SD-Access)?
Cisco SD-Access is the foundation for a new era of Intent-Based Networking (IBN).
Cisco Software Defined Access
The Foundation for Cisco's Intent-Based Network
▪ Cisco DNA Center sits on top of the fabric, providing Policy, Automation, and Assurance.
▪ One Automated Network Fabric: a single fabric for wired and wireless with full automation.
▪ Identity-Based Policy and Segmentation: policy definition decoupled from VLAN and IP address.
▪ AI-Driven Insights and Telemetry: analytics and visibility into user and application experience.
▪ SD-Access Extension and Client Mobility: policy follows the user across the IoT network and the employee network.
#CLUS © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public
What is SD-Access?
Campus Fabric + Cisco DNA Center (Automation & Assurance)
▪ SD-Access: a GUI approach provides automation & assurance of all fabric configuration, management, and group-based policy. Cisco DNA Center (evolved from APIC-EM 1.x) integrates multiple management systems to orchestrate LAN, Wireless LAN, and WAN access.
▪ Campus Fabric alone: driven through separate management systems; the Cisco DNA Center enterprise solution replaces these with simple workflows.
DNA Center = Identity Services Engine + Network Control Platform (NCP) + Network Data Platform (NDP)
SD-Access
What exactly is a Fabric?
A Fabric is an Overlay.
An Overlay network is a logical topology used to virtually connect devices, built over an arbitrary physical Underlay topology.
An Overlay network often uses alternate forwarding attributes to provide additional services not provided by the Underlay.
[Diagram: hosts (endpoints) connected by encapsulation tunnels across the Underlay network]
Describing Cisco SD-Access Architecture
Fabric Components & Terminology
Part 1
Cisco SD-Access
Fabric components and terminology
▪ Network Automation – simple GUI automation and APIs for intent-based automation of wired and wireless fabric devices (Cisco DNA Center)
▪ Network Assurance – data collectors analyze endpoint-to-application flows and monitor fabric device status
▪ Identity Services – NAC & identity services (e.g. Cisco ISE) for dynamic endpoint-to-group mapping and policy definition
▪ Control-Plane Nodes – a map system that manages endpoint-to-device relationships
▪ Fabric Border Nodes – a fabric device (e.g. Core) that connects external L3 network(s) to the SD-Access fabric
▪ Fabric Edge Nodes – a fabric device (e.g. Access or Distribution) that connects wired endpoints to the SD-Access fabric
▪ Fabric Wireless Controller – a fabric device (WLC) that connects fabric APs and wireless endpoints to the SD-Access fabric
(Intermediate nodes in the underlay interconnect these roles.)
SD-Access Fabric
Control-Plane Nodes – A Closer Look
The Control-Plane Node runs a Host Tracking Database to map location information:
• IP to RLOC: 1.2.3.4 → FE1
• MAC to RLOC: AA:BB:CC:DD → FE1
• Address Resolution: 1.2.3.4 → AA:BB:CC:DD
(Example host behind fabric edge FE1: IP 1.2.3.4/32, MAC AA:BB:CC:DD)
SD-Access Platforms
Fabric Edge Node
• Catalyst 9200/9200L: 1/mG RJ45, 1G SFP (uplinks)
• Catalyst 9300: 1/mG RJ45, 10/25/40/mG network modules
• Catalyst 9400: Sup1/Sup1XL, 9400 line cards
• Catalyst 9500: 1/10/25G SFP, 40/100G QSFP
• Catalyst 9600: Sup1, 9600 line cards
SD-Access Platforms
Fabric Border Node
A Border Node is an entry & exit point for data traffic going into & out of a fabric.
• External Border: connects ONLY to unknown areas outside the company.
Border platforms:
• Catalyst 3650/3850: 1/mG RJ45, 1/10G SFP, 1/10/40G NM cards
• Catalyst 6500/6800: Sup2T/Sup6T, C6800 cards, C6880/6840-X
• Nexus 7700: Sup2E, M3 cards, LAN1K9 + MPLS
• ISR 4300/4400: AppX (AX), 1/10G RJ45, 1/10G SFP
• ASR 1000-X/HX: AppX (AX), 1/10G ELC/EPA, 40G ELC/EPA
SD-Access @ Cisco DNA Center
Border Nodes
SD-Access Fabric
Border Nodes – Internal
An internal border connects the fabric to known networks inside the organization (e.g., Data Center, Branch Office).
• Exports all internal IP pools to the outside (as an aggregate), using a traditional IP routing protocol(s).
SD-Access – Border Deployment
Anywhere Border: SD-Access as a Transit Area
[Diagram: two border pairs carrying transit traffic through the SD-Access fabric]
Fabric Components & Terminology
Part 3
SD-Access Fabric
Border Nodes – External
An external border connects the fabric to unknown networks (e.g., Internet, Public Cloud).
• Exports all internal IP pools to the outside (as an aggregate) into traditional IP routing protocol(s).
SD-Access – Border Deployment
Why? Internal Traffic with External Borders
• Traffic to internal domains will go directly to the Internal Borders.
SD-Access Fabric
Fabric Wireless
A Fabric Enabled WLC is integrated into the fabric for SD-Access wireless clients.
• Fabric Enabled APs connect to the WLC (CAPWAP control plane) using a dedicated Host Pool (Overlay).
• Fabric Enabled APs connect to the Edge via VXLAN (data plane).
Extended Node Portfolio (Enterprise Campus)
Beta in 1.2.5, GA in 1.3: IE3300/3400, IE4000/4010, IE5000
A Virtual Network maintains a separate routing & switching table for each instance.
• Nodes add a VNID to the fabric encapsulation.
• User-defined VNs can be added or removed on demand.
SD-Access Fabric
How VNs work in SD-Access
SD-Access designs connecting to the existing Global Routing Table should use a "Fusion" router with MP-BGP & VRF import/export.

ip vrf USERS
 rd 1:4099
 route-target export 1:4099
 route-target import 1:4099
 route-target import 1:4097
!
ip vrf DEFAULT_VN
 rd 1:4098
 route-target export 1:4098
 route-target import 1:4098
 route-target import 1:4097

[Diagram: Edge Node and Border Node run IS-IS/OSPF with per-VRF SVIs; the Border Node and Fusion Router exchange per-VRF (VRF A, VRF B) and IPv4 address-families over MP-BGP; the Fusion Router hands off to a switch in the GRT.]
Fabric Edge & Border – VRF Configuration

Edge-1# show vrf
  Name        Default RD  Protocols  Interfaces
  DEFAULT_VN  1:4098      ipv4       LI0.4098
  GUEST       1:4100      ipv4       LI0.4100
  Mgmt-vrf    <not set>   ipv4,ipv6  Gi0/0
  USERS       1:4099      ipv4       LI0.4099
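The "Fusion" router approach (MP-BGP with VRF route-target import/export) can be sketched roughly as follows. The AS numbers and neighbor address are hypothetical, and the border side of the peering is normally provisioned by DNA Center; this is an illustration only.

```
! Hypothetical fusion-router side: leak the USERS VRF toward the shared services / GRT
vrf definition USERS
 rd 1:4099
 address-family ipv4
  route-target import 1:4099
  route-target export 1:4097
 exit-address-family
!
router bgp 65002
 ! per-VRF eBGP peering toward the fabric border node
 address-family ipv4 vrf USERS
  neighbor 10.0.0.1 remote-as 65001
  neighbor 10.0.0.1 activate
 exit-address-family
```

The route-target import/export pairs mirror the VRF definitions shown above, which is what allows selected prefixes to cross between a VN and the global routing table.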
SD-Access Fabric
Scalable Groups
• Nodes add an SGT to the fabric encapsulation (e.g., SGT 4, 8, 17, 25).
• SGTs are used to manage address-independent group-based policies.
SD-Access Fabric
Host Pools
• The fabric uses Dynamic EID mapping to advertise each Host Pool (per Instance ID), e.g., pools .4, .8, .17, .25.
• Fabric Dynamic EID allows host-specific (/32, /128 or MAC) advertisement and mobility.
SD-Access Fabric
Anycast Gateway
• The same Switch Virtual Interface (SVI) is present on EVERY Edge with the SAME virtual IP and MAC.
• Host IP-based traffic arrives on the local Fabric Edge and is forwarded toward known or unknown networks.
➢ The primary technology used for the fabric control plane is based on the Locator/ID Separation Protocol
(LISP).
➢ LISP is an IETF standard protocol (RFC-6830, etc.) based on a simple endpoint ID (EID) to routing locator
(RLOC) mapping system, to separate the “identity” (address) from its current “location” (attached router).
➢ LISP dramatically simplifies traditional routing environments by removing the need for each router to
process every possible IP destination address and route. It does this by moving remote destination
information to a centralized map database that allows each router to manage only its local routes (and query
the map system to locate destination endpoints).
➢ This technology provides many advantages for Cisco SD-Access, such as less CPU usage, smaller routing
tables (hardware and/or software), dynamic host mobility (wired and wireless), address-agnostic mapping
(IPv4, IPv6, and/or MAC), built-in network segmentation (Virtual Routing and Forwarding [VRF]), and others.
➢ In Cisco SD-Access, several enhancements to the original LISP specifications have been added, including
distributed Anycast Gateway, Virtual Network (VN) Extranet and Fabric Wireless.
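Although DNA Center generates all of this automatically, the LISP building blocks described above look roughly like the following sketch on a fabric edge. The addresses, instance ID, locator-set name, and key are hypothetical; treat this as an illustration of the EID-to-RLOC registration model, not a deployable configuration.

```
router lisp
 ! the instance-id ties an EID space to a VN/VRF
 instance-id 4099
  service ipv4
   eid-table vrf USERS
   ! register the local host pool with the control-plane node (map-server)
   database-mapping 10.2.1.0/24 locator-set FABRIC_RLOC
   itr map-resolver 192.168.10.1
   etr map-server 192.168.10.1 key <key>
  exit-service-ipv4
 exit-instance-id
```

The database-mapping line is what moves destination information out of the routing table and into the centralized map system described above.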
SD-Access Fabric
Key Components – LISP

1. Control-Plane based on LISP (Host Mobility)

BEFORE: IP address = Location + Identity. Routing protocols mean big tables & more CPU, with a local L3 gateway; every router carries the full set of endpoint routes, e.g.:

  Prefix           Next-hop
  189.16.17.89     171.68.226.120
  22.78.190.64     171.68.226.121
  172.16.19.90     171.68.226.120
  192.58.28.128    171.68.228.121

AFTER: Identity is separated from Location. LISP DB + Cache mean small tables & less CPU, with an Anycast L3 gateway; endpoint routes are consolidated into the LISP mapping database:

  Prefix           RLOC
  189.16.17.89     171.68.226.120
  22.78.190.64     171.68.226.121
  172.16.19.90     171.68.226.120
  192.58.28.128    171.68.228.121
Control-Plane Responsibilities
In the SD-Access fabric, the fabric control-plane node operates as the database tracking all endpoint connectivity to the fabric, and is responsible for the following functions:
➢ Registers all endpoints connected to the edge nodes, and tracks their location in the fabric (i.e., which edge node the endpoints are located behind).
➢ Responds to queries from network elements about the location of endpoints in the fabric.
➢ Ensures that when endpoints move from one location to another, traffic is redirected to the current location.
Fabric Operation
Control-Plane Roles & Responsibilities

Control-plane (map system) EID-to-RLOC table:

  EID           RLOC
  a.a.a.0/24    w.x.y.1
  b.b.b.0/24    x.y.w.2
  c.c.c.0/24    z.q.r.5
  d.d.0.0/16    z.q.r.5

Non-LISP routing table (reaching the multiple LISP devices):

  Prefix    Next-hop
  w.x.y.1   e.f.g.h
  x.y.w.2   e.f.g.h
  z.q.r.5   e.f.g.h

Database mapping entries (on the ETRs / fabric edges):
  10.2.2.4/32 → (3.1.2.1)
  10.2.2.2/32 → (2.1.2.1)
➢ Endpoint 1 on edge 1 is registered with the fabric control-plane node. The registration includes Endpoint 1's IP address, MAC address, and location.
➢ Endpoint 2 on edge 2 is registered with the fabric control-plane node. The registration includes Endpoint 2's IP address, MAC address, and location.
Control-Plane Operation (cont.)
➢ When Endpoint 1 wants to communicate with Endpoint 2, edge 1 queries the fabric control-plane node for the location of Endpoint 2.
➢ Upon getting the reply (Endpoint 2 is located behind edge 2), edge 1 encapsulates the traffic from Endpoint 1 using VXLAN and sends it to Endpoint 2 (via edge 2).
➢ Once this traffic arrives at edge 2, it is decapsulated and forwarded along to Endpoint 2.
➢ The reverse applies when Endpoint 2 wants to communicate back to Endpoint 1.
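On a live fabric, the registration and resolution steps above can be checked with a few show commands; a brief sketch (the instance ID 4099 is an example value):

```
! On the control-plane node: which edges have registered which EIDs
CP# show lisp site

! On a fabric edge: locally registered host routes (database) and
! resolved remote destinations (map-cache)
Edge-1# show lisp instance-id 4099 ipv4 database
Edge-1# show lisp instance-id 4099 ipv4 map-cache
```

An empty map-cache before traffic is sent, and a populated one afterward, is a direct way to observe the on-demand query behavior described above.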
Fabric Operation
Fabric Internal Forwarding (Edge to Edge)
1. The source host (10.1.0.1, Branch 10.1.0.0/24) resolves the destination: DNS entry D.abc.com A 10.2.2.2.
2. The source Fabric Edge (RLOC 1.1.1.1) receives traffic 10.1.0.1 → 10.2.2.2 and queries the Mapping System.
3. The mapping entry for EID-prefix 10.2.2.2/32 returns locator-set 2.1.2.1 (priority 1, weight 100); path preference is controlled by the destination site.
4. The Mapping System replies: 1.1.1.1 → 2.1.2.1.
5. The source edge encapsulates 10.1.0.1 → 10.2.2.2 across the IP network to the destination edge, which decapsulates and delivers to the host (10.2.2.2/16).

Fabric External Forwarding (Border to Edge)
The same steps apply to traffic entering the fabric from a non-fabric network: the Fabric Border (RLOC 4.4.4.4) queries the Mapping System for 192.3.0.1 → 10.2.2.2, receives 4.4.4.4 → 2.1.2.1, and encapsulates toward the destination edge, which decapsulates and delivers.

Host Mobility
When a host (10.2.1.10) moves from Campus Bldg 1 to Campus Bldg 2, the Mapping System is updated. The Fabric Border (e.g., DC1, RLOC 3.1.1.1) keeps the aggregate 10.2.1.0/24 in its routing table, while the host route follows the move: the original edge's "10.2.1.10/32 – Local" entry becomes "10.2.1.10/32 – LISP0", and the new edge registers "10.2.1.10/32 – Local".
SD-Access Fabric
Unique Control-Plane extensions compared to LISP
• Virtual Networks: standard LISP supports Layer-3 VN (VRF) only; SD-Access supports both Layer-3 and Layer-2 VNs (VRF) using VXLAN.
➢ VXLAN encapsulation is IP/UDP-based, meaning that it can be forwarded by any IP-based network
(legacy or non-Cisco) and effectively creates the “overlay” aspect of the SD-Access fabric.
➢ VXLAN encapsulation is used (instead of LISP encapsulation) for two main reasons.
➢ VXLAN includes the source Layer 2 (Ethernet) header (LISP does not), and it also provides special
fields for additional information (such as virtual network [VN] ID and group [segment] ID).
SD-Access Fabric
Key Components – VXLAN
The VXLAN-GPO header is MAC-in-IP with a VN ID & Group ID:
• Outer MAC header (Underlay): next-hop MAC address (48 bits), source MAC (48), VLAN ID (16), EtherType 0x0800 (16)
• Outer IP header (20 bytes): protocol 0x11 (UDP) (8), header checksum, source IP = source RLOC address (32), destination IP = destination RLOC address (32)
• Outer UDP header (8 bytes): source port (16), destination port 4789 (16)
• VXLAN-GPO header (8 bytes): Segment ID / Group ID (16), VN ID (24, allowing 16M possible VRFs), reserved (8)
• Inner header and original payload
What to look for in a packet capture? (outer headers)
Frame 1: 192 bytes on wire (1536 bits), 192 bytes captured (1536 bits)
Ethernet II, Src: CiscoInc_c5:db:47 (88:90:8d:c5:db:47), Dst: CiscoInc_5b:58:fb (0c:f5:a4:5b:58:fb)
Internet Protocol Version 4, Src: 10.2.120.1, Dst: 10.2.120.3
User Datagram Protocol, Src Port: 65354, Dst Port: 4789
  Source Port: 65354
  Destination Port: 4789
  Length: 158
  Checksum: 0x0000 (none)
  [Stream index: 0]
The Fabric Data-Plane provides the following:
• Underlay address advertisement & mapping
• Automatic tunnel setup (Virtual Tunnel End-Points)
• Frame encapsulation (outer/inner) between Routing Locators

LISP vs. VXLAN encapsulation:
• Nearly the same, with different fields & payload
• A LISP header carries an IP payload (IP in IP)
• A VXLAN header carries a MAC payload (MAC in IP)

Encapsulation/decapsulation is triggered by LISP Control-Plane events:
• ARP or NDP learning on L3 Gateways
• Map-Reply or Cache on Routing Locators
Data-Plane Operation (VXLAN)
The data plane follows the control-plane operation described above: edge 1 queries the control-plane node for Endpoint 2's location, encapsulates the traffic from Endpoint 1 in VXLAN toward edge 2, and edge 2 decapsulates and forwards it to Endpoint 2 (with the reverse for return traffic).
SD-Access Fabric
Unique Data-Plane Extensions compared to VXLAN
➢ Cisco TrustSec, and specifically SGT and SGT Exchange Protocol (SXP), is an IETF draft protocol (SXP-006) that
provides logical group-based policy creation and enforcement by separating the actual endpoint “identity”
(group) from its actual network address (IP) using a new ID known as a Scalable [or security] Group Tag
(SGT).
➢ This technology provides several advantages for Cisco SD-Access, such as support for both network-based
(VRF/VN) and group-based segmentation (policies), the ability to create logical (address-agnostic) policies,
dynamic enforcement of group-based policies (regardless of location) for both wired and wireless traffic,
and the ability to provide policy constructs over a legacy or non-Cisco network (using VXLAN-GPO).
➢ In SD-Access, several enhancements to the original Cisco TrustSec specifications have been added, notably
combining the SGT and VN into the VXLAN-GPO header and enhancing Cisco TrustSec to include LISP VN
Extranet.
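Where group information must cross a segment that cannot carry SGTs inline, SXP propagates the IP-to-SGT bindings over TCP. A minimal sketch, with a hypothetical peer address:

```
! Enable SXP and speak our locally learned IP-to-SGT bindings
! to a peer that cannot receive inline SGTs (address is an example)
cts sxp enable
cts sxp connection peer 10.10.10.1 password none mode local speaker
```

The peer would configure the mirror-image connection in listener mode, letting enforcement happen on devices that never see the VXLAN-GPO header.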
SD-Access Fabric
Key Components – Group-Based Policy

Virtual Network (VN): first-level segmentation. Ensures zero communication between forwarding domains (e.g., VNs "A", "B", "C" spanning known and unknown networks), with the ability to consolidate multiple networks into one management plane.

Scalable Group (SG): second-level segmentation. Ensures role-based access control between two groups (e.g., SG1 through SG9) within a Virtual Network, and provides the ability to segment the network into lines of business or functional blocks.
Group Assignment
Two steps to assign SGTs across the campus (access, distribution, core, DC) using 802.1X, MAB, or Web Auth:
• Define SGTs under the 'Components' section in the TrustSec Work Center (from ISE 2.0+).
• Define authorization policies for users and devices in ISE: create an 802.1X, MAB, or Web Auth policy that assigns the SGTs to users and devices after client authentication & authorization.
SD-Access Policy
Access Control Policies
• SGACLs are referenced under the egress policy (in ISE).

Policy Enforcement
Ingress classification with egress enforcement:
• Ingress: the user is authenticated and classified, e.g., as Marketing (SGT 5).
• Destination classification (ISE): CRM: SGT 20, Web: SGT 30.
• Egress: a FIB lookup maps the destination IP to its ISE-associated SGT (e.g., SGT 20).
• cts role-based permissions from <SGT> to <DGT> reference the downloaded SGACL entries, e.g.:
    permit tcp dst eq 443
    permit icmp
• The VN ID and SGT ID are carried in the VXLAN encapsulation and decapsulation across the IP network.
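The policy downloaded from ISE can be inspected on the enforcement switch; a short verification sketch:

```
! SGACL policy matrix downloaded from ISE
Switch# show cts role-based permissions

! Hit counters per source/destination group pair
Switch# show cts role-based counters

! IP-to-SGT bindings learned locally or from ISE
Switch# show cts role-based sgt-map all
```

Non-zero counters against the expected SGT/DGT cell confirm that ingress classification and egress enforcement are working as described.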
• QoS (App) Policy: not supported previously → SD-Access adds an app-based QoS policy to optimize application traffic priority.
• Traffic Copy Policy: not supported previously → SD-Access adds a SRC/DST-based copy policy (using ERSPAN) to capture data traffic.
Exploring Cisco DNA Center
Cisco DNA Center overview
➢ Cisco DNA Center is a centralized operations platform for end-to-end automation and assurance of enterprise LAN, WLAN, and WAN environments, as well as orchestration with external solutions and domains.
➢ It allows the network administrator to use a single dashboard to manage and automate the network.
➢ It provides a role-based access control mechanism for differentiated access to users based on roles and scope.
➢ It offers programmable interfaces that enable ecosystem partners and developers to integrate with Cisco DNA Center.
Cisco DNA Center
SD-Access – Key Components
• Cisco ISE 2.3 (Identity & Policy) integrates with DNA Center via API.
• DNA Center functions, each exposed through APIs: Automation (Network Control Platform), Assurance (Network Data Platform), and identity via the Identity Services Engine.
• Southbound to the campus fabric (switches, routers & WLCs, wired + wireless): NETCONF, SNMP, and SSH for configuration; AAA/RADIUS/TACACS via ISE; NetFlow, Syslog, and HTTPS for telemetry. Intent flows down; telemetry and alerts flow up.
• Appliance scale tiers: 1000 / 2000 / 5000 switches, routers & WLCs and 25K / 40K / 100K endpoints; the DN2-HW-APL-XL is a 112-core UCS M5.
Cisco DNA Center
NDP System Components
• Analytics apps on top: Network Assurance, Network Cost Analytics, Network Vulnerability Detection, Segmentation & Change Impact Analytics, and other 3rd-party analytics apps.
• NDP Core Analytics Platform alongside the Network Controller Platform (inventory, topology, etc.): configuration flows down, telemetry flows up, with distributed processing.
• Network elements: switches, routers, access points, network services, identity providers.
Cisco DNA Center
NDP Analytics Architecture
Three stages: data collection and ingestion → data correlation and analysis → data visualization and action.
• Collection: network telemetry from routers, switches, WLCs, and sensors (SNMP, NetFlow, Syslog, streaming telemetry), plus context from ISE/AAA (via PxGrid), topology, location, DNS, DHCP, inventory, policy, and IPAM.
• Correlation and analysis: the collector and analytics pipeline performs stream processing, complex correlation, metadata extraction, and time-series analysis.
• Visualization and action: Network Assurance, an SDK, and data models with RESTful APIs.
4-step process to an SDA-ready network:
✅ Plan: verify network design, verify system support, prepare IP services.
✅ Design: sites across geographies, global network services, design IP address pools.
✅ Discover: discover network devices, physical topology, network readiness.
✅ Provision: dynamic discovery & automation, optimized routing design, resilient underlay settings.
Underlay Automation
Step 1: Plan
Plan → Design → Discover → Provision
Plan – Seed Device
• The seed device is an intermediate system between the Core and the new network block.
• It is the key system used to discover, automate, and on-board new Catalyst switches (PnP agents) in the network.
Plan – Supported Designs
Underlay automation supports the following designs, with the underlay automation boundary sitting between Layer 3 and Layer 2:
• 2-Tier Collapsed Core design (Core/seed switches with PnP-agent access)
• 3-Tier Campus design (Core, seed distribution, PnP-agent access)
• Extended Campus design
Plan – Seed Switch Loopback Configuration (Seed-1 and Seed-2)

S1(config)# interface Loopback 0
S1(config-if)# ip address <ip> <mask>

• Manually configure IP subnets on inter-seed switch interfaces from the underlay network address range if there is an interconnection.
• The Loopback IP can be outside of the domain network address range, but must be reachable by DNA-C.
• Seed devices must not use addresses from the LAN Automation address pool.
Plan – Seed Switch IP Routing Configuration
• Optional if IS-IS is already the routing protocol in the Core; in that case no additional IS-IS routing configuration is required.
• Otherwise, manually create an IS-IS routing instance without an area tag and mutually redistribute between the routing domains.
• Summarize the network range toward the Core.
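The manual seed-side IS-IS setup described above can be sketched as follows. The NET, the summary range, and the choice of OSPF as the existing core IGP are all hypothetical examples:

```
! IS-IS instance without an area tag, as LAN Automation expects
S1(config)# router isis
S1(config-router)# net 49.0000.0000.0000.0001.00
S1(config-router)# metric-style wide
! Summarize the LAN Automation range toward the Core (example range)
S1(config-router)# summary-address 10.128.0.0 255.255.0.0
! Mutual redistribution with an existing OSPF core (example only)
S1(config-router)# redistribute ospf 1
S1(config-router)# exit
S1(config)# router ospf 1
S1(config-router)# redistribute isis level-2 subnets
```

If the core already runs IS-IS, none of this is needed; the seed simply joins the existing instance.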
Plan – DNA Center LAN Automation Interface (example, within a 10.128.0.0/16 IS-IS routing domain of seeds and PnP agents)
Eth-1 Interface:
  IP Address : <IP_Address_2>
  Netmask : <Mask>
  Gateway : <Skip>
  Static Route : <LAN_Automation-Net>/<mask>/GW
Plan – Endpoint Integration
• The PnP agent may contend for a DHCP address with attached endpoints.
✅ Verify the seed devices do not have any network address belonging to the LAN Automation IP pool.
✅ Pre-configure IS-IS routing without an area tag, with mutual route redistribution; no additional IS-IS routing configuration is implemented.
Design – Overview
Step 4: configure the LAN IP pool from the parent at the Global | Area | Site level, set the Gateway IP address, and Save.
Underlay Automation
Step 3: Discovery
Discovery – Overview
[Diagram: the seed devices discover and on-board the downstream PnP-agent switches across the supported 2-tier, 3-tier, and extended designs.]
Provision – Underlay Provision
• Device Inventory provides two views with distinct functions: Table and Topology.
• The Table view provides device inventory and states.
• The Topology view provides the Provision function.
Provision – select each discovered switch (e.g., S1, S2) and assign it to a site.
Provision – Stop Automation Process
• All discovered and automated switches must reach the Completed status; processing time may vary with network size.
• Stop Underlay Automation: stopping the automation completes the process and transitions all switches to their final state.
Plan Design Discover Provision
✅ ✅
✅ ✅
✅ ✅
✅ ✅
✅ ✅
✅ ✅
Plan Design Discover Provision
✅
Provision – System Role
• The administrator must select each switch and define its network role: Access | Distribution | Core.
• DNA-C auto-arranges the topology view based on the user's selection.
Provision – Configuration
• Underlay automation pushes the underlay configuration to each discovered switch.
• SD-Access Ready: all systems are programmed and ready to build overlay networks.
💡 Resynchronize the Device Inventory if a partial topology was discovered.