
Building ACI Multipod Fabric with GUI and API

Ver 1.2

Kris Sekula (ksekula@cisco.com)


Contents
Introduction
POD Physical Topology
IP Network configuration
Multipod setup using GUI
Bring the first APIC online
Register all nodes
Configure Node Management Addresses
Configure NTP provider
Configure BGP Policy
Create Pod Policy Group
Modify default Pod Profile
Define TEP Pool for nodes in POD2
Configure Multipod
Configure VLAN Pool for IPN Connectivity
Create External Routed Domain
Configure AAEP
Create Link Level Interface Policy
Create CDP Policy
Create Spine Interface Policy Group
Create Spine Interface Profile
Create Spine Profile
Create OSPF Interface Policy
Create Routed Outside for EVPN: Notes
Create Routed Outside for EVPN: Procedure
Register second APIC controller (in POD1)
Register third APIC controller (in POD2)
Configure your "Golf" devices: Procedure
Modify Routed Outside to support "Golf": Procedure
How to consume the "Golf" connectivity: Procedure
Configuration using the northbound API
Install Postman plugin
Load the Postman collection
Run all API calls to configure the fabric

Introduction

This document covers the step-by-step configuration of the ACI Fabric to support the multipod feature
introduced in code version 2.0(1m).

Multipod enables provisioning a more fault tolerant fabric comprised of multiple pods with isolated
control plane protocols. Also, multipod provides more flexibility with regard to the full mesh cabling
between leaf and spine switches. For example, if leaf switches are spread across different floors or
different buildings, multipod enables provisioning multiple pods per floor or building and providing
connectivity between pods through spine switches.

You have the option to follow the step-by-step configuration instructions using the GUI, or to use the
API approach with the “collection” for the Chrome “Postman” app that is distributed with this
document. Some basic configuration details for the IP network that interconnects the PODs are
also provided.

The document assumes you have a “clean” fabric that you are just about to provision; if this is not the
case, you will need to skip or adjust some steps.

The assumption is that all switches (nodes) and APIC servers are running the 2.0(1m) code or higher.
If this is not the case (especially for nodes in POD2), it may be worth plugging all nodes and APIC
servers into POD1, building a temporary single-pod topology in POD1, upgrading the switches and
APICs to 2.0(1m), and then decommissioning the nodes that will end up in POD2 (for that you can use
“eraseconfig setup” on the APIC servers, and “setup-clean-config.sh” followed by “reload” on the
switches).

POD Physical Topology

[Topology diagram: the two pods are interconnected by an IPN built from four Nexus 7000 VDCs
(i05-n7009-01_VDC6/VDC7 and i04-n7009-01_VDC6/VDC7) using point-to-point /30 subinterfaces in the
201.1.1.x and 202.1.1.x ranges (IPN ports E7/1-E7/4, spine ports E1/1 and E1/2). POD1 contains spines
Spine-201 (i02-n9336-01) and Spine-202 (i02-n9336-02); POD2 contains spines Spine-203 (i02-9336-03)
and Spine-204 (i02-9336-04). Leaf switches Leaf_101 to Leaf_105 (i02-9396-01, i02-9396-02,
i02-9372-01, i02-n9396-03, i02-9396-04) uplink to the spines on E1/53-E1/54, and the APICs
i02-APIC-01 (10.50.138.221), i02-APIC-02 (10.50.138.223) and i02-APIC-03 (10.50.138.225) attach to
the leaves. The diagram also carries a “London” site label.]

IP Network configuration:

The IP network that interconnects the PODs needs to support multicast so that bridge domain
Broadcast, Unknown unicast and Multicast (BUM) traffic can be transported between PODs. We are
using a pair of Nexus 7000 devices with multiple VDCs to create the topology; below are a few of the
configuration details that were used:

Example configuration of interface facing the spine node:

interface Ethernet7/3
mtu 9150
no shut

interface Ethernet7/3.4
mtu 9150
encapsulation dot1q 4
ip address 201.1.1.2/30
ip ospf network point-to-point
ip router ospf GLOBAL area 0.0.0.0
ip pim sparse-mode
ip dhcp relay address 10.1.0.1
no shutdown

The PIM configuration should include RP redundancy:

ip pim rp-address 192.168.100.2 group-list 225.0.0.0/8 bidir


ip pim rp-address 192.168.100.2 group-list 239.0.0.0/8 bidir

IPN Node 1

interface loopback1
description Bidir Phantom RP
ip address 192.168.100.1/30
ip ospf network point-to-point
ip router ospf IPN area 0.0.0.0
ip pim sparse-mode

IPN Node 2

interface loopback1
description Bidir Phantom RP
ip address 192.168.100.1/29
ip ospf network point-to-point
ip router ospf IPN area 0.0.0.0
ip pim sparse-mode

With a phantom RP, the RP address (192.168.100.2) is not configured on any device; each IPN node instead
advertises a loopback in the subnet that contains the RP address, using a different prefix length. The node
advertising the longest prefix (/30) is preferred, and the other node (/29) takes over if it fails.

Refer to this document for more information on Phantom RP configuration:

https://supportforums.cisco.com/document/55696/rp-redundancy-pim-bidir-phantom-rp

Multipod setup using GUI
Bring the first APIC online

We start by provisioning the first APIC server (highlighted are the non-default values, some output
has been omitted):

Cluster configuration ...


Enter the fabric name [ACI Fabric1]: i02-fabric-01
Enter the fabric ID (1-128) [1]:
Enter the number of controllers in the fabric (1-9) [3]:
Enter the POD ID (1-9) [1]:
Enter the controller ID (1-3) [1]:
Enter the controller name [apic1]: i02-apic-01
Enter address pool for TEP addresses [10.0.0.0/16]: 10.1.0.0/16
Note: The infra VLAN ID should not be used elsewhere in your environment
and should not overlap with any other reserved VLANs on other platforms.
Enter the VLAN ID for infra network (2-4094): 3966
Enter address pool for BD multicast addresses (GIPO) [225.0.0.0/15]:

Note: I’ve used the 10.1.0.0/16 subnet for POD1; we will provision 10.2.0.0/16 for POD2 later on.

Leave the APIC for a few minutes to become available before you try to log in via the GUI.
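
While you wait, you can also confirm that the APIC REST API is answering. Below is a minimal Python sketch using the requests library; the APIC address and credentials are placeholders, and aaaLogin is the same standard authentication call that the “1_LOGIN” request in the Postman collection performs:

import urllib3
import requests

urllib3.disable_warnings()                        # lab APIC with a self-signed certificate

APIC = "https://10.10.10.10"                      # placeholder: use your APIC OOB address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
session.verify = False

# aaaLogin returns a token and sets the APIC-cookie used by subsequent calls
resp = session.post(f"{APIC}/api/aaaLogin.json", json=AUTH)
resp.raise_for_status()
token = resp.json()["imdata"][0]["aaaLogin"]["attributes"]["token"]
print("APIC is up, token:", token[:16], "...")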

Register all nodes:
Fabric -> Inventory -> Fabric Membership

For POD1 nodes just use the Register Switch option like we normally do:

The nodes in POD2 are not visible at this point; they will be detected once the IPN is configured. You
can pre-register them now using the “Create Fabric Node Member” option; remember to specify
“Pod ID” as “2”:
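
For reference, the same pre-registration can be scripted against the API. The sketch below assumes the commonly used fabricNodeIdentP object under uni/controller/nodeidentpol, which carries the serial number, node ID, name and, for multipod, the podId; the serial number shown is a hypothetical placeholder, and the class and DN names should be checked against the Postman collection distributed with this document:

import requests

APIC = "https://10.10.10.10"   # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

# hypothetical serial number for spine 203 in POD2
node = {
    "fabricNodeIdentP": {
        "attributes": {
            "dn": "uni/controller/nodeidentpol/nodep-FDO12345678",
            "serial": "FDO12345678",
            "nodeId": "203",
            "name": "i02-9336-03",
            "podId": "2",
        }
    }
}

s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()
# post the pre-registration under the node identity policy container
s.post(f"{APIC}/api/mo/uni/controller/nodeidentpol.json", json=node).raise_for_status()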

It is recommended to configure NTP on all nodes; it saves you hassle with fabric discovery problems
later and is best practice. For NTP to work correctly, you will need to configure the OOB management
network first.

-9-
Ver 2.0
Configure Node Management Addresses:
Tenants -> mgmt -> Node Management Addresses -> Static Node Management Addresses

From Actions menu select “Create Static Node Management Addresses”:

Fill in the details for each node; see the example for node 101 below:

Configure NTP provider:
Navigate Fabric -> Fabric Policies -> Pod Policies -> Policies -> Date and Time -> Policy Default

Fill in the information; don’t forget to tick “Preferred” and to select the “Management EPG”.

Configure Timezone:

Navigate Fabric -> Fabric Policies -> Pod Policies -> Policies -> Date and Time -> default

Configure BGP Policy
Navigate Fabric -> Fabric Policies -> Pod Policies -> Policies -> BGP Route Reflector default

Specify the AS number and add Route Reflector Nodes.

I’ve configured spines 201 and 202 in POD1 and spines 203 and 204 in POD2 to be the route reflectors.
Remember that the control plane protocols run independently in each pod.

Note: This configuration will display some errors. As nodes 203 and 204 have not yet been detected,
the configuration can’t be delivered to them. The errors will clear once the nodes in POD2 are
detected.
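
The same policy can be pushed via the API. The sketch below assumes the usual object layout, where uni/fabric/bgpInstP-default holds the fabric AS number (bgpAsP) and the route reflector node list (bgpRRNodePEp); AS 100 matches the value seen later in the verification output, and the class names should be verified against the supplied Postman collection:

import requests

APIC = "https://10.10.10.10"   # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

bgp_policy = {
    "bgpInstPol": {
        "attributes": {"dn": "uni/fabric/bgpInstP-default"},
        "children": [
            # fabric BGP AS number (100 is the AS used in this document)
            {"bgpAsP": {"attributes": {"asn": "100"}}},
            # route reflector spines: 201/202 in POD1, 203/204 in POD2
            {"bgpRRP": {"attributes": {}, "children": [
                {"bgpRRNodePEp": {"attributes": {"id": str(n)}}}
                for n in (201, 202, 203, 204)
            ]}},
        ],
    }
}

s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()
s.post(f"{APIC}/api/mo/uni/fabric/bgpInstP-default.json", json=bgp_policy).raise_for_status()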

Create Pod Policy Group:
Navigate Fabric -> Fabric Policies -> Pod Policies -> Policy Groups

From “Actions” menu select “Create Pod Policy Group”

At a minimum you must select “default” for the “Date Time Policy” and the “BGP Route Reflector
Policy”.

I’ve selected the default policies for all other options.

Modify default Pod Profile:
Navigate Fabric -> Fabric Policies -> Pod Policies -> Profiles -> default

In the default Pod Profile, set the pod selector’s Fabric Policy Group to the Pod Policy Group created
in the previous step. After a few minutes you can verify that NTP has been applied to your nodes:

Navigate Fabric -> Fabric Policies -> Pod Policies -> Policies -> Date and Time -> Policy Default

Select your NTP provider and look at the “Operational” tab.

Now we need to configure multipod settings:

Define TEP Pool for nodes in POD2:
Navigate to:

Fabric -> Inventory -> POD Fabric Setup Policy

From the “Actions” menu select the “Setup PODs”

Add TEP address:
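
A scripted equivalent is sketched below. It assumes the pod setup policy container uni/controller/setuppol accepts a fabricSetupP child carrying the new pod ID and TEP pool; these class and DN names are taken from commonly shared multipod examples, so verify them against the supplied Postman collection before use:

import requests

APIC = "https://10.10.10.10"   # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

# TEP pool 10.2.0.0/16 for the new POD2, matching the GUI step above
pod2_tep = {
    "fabricSetupP": {
        "attributes": {
            "dn": "uni/controller/setuppol/setupp-[10.2.0.0/16]",
            "podId": "2",
            "tepPool": "10.2.0.0/16",
        }
    }
}

s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()
s.post(f"{APIC}/api/mo/uni/controller/setuppol.json", json=pod2_tep).raise_for_status()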

Configure Multipod
Fabric -> Inventory -> POD Fabric Setup Policy

From the “Actions” menu select the “Create Multipod”

Specify the Community: “extended:as2-nn4:29:12” and Peering Type: “Full Mesh”. In the “POD Connection
Profile”, define a dataplane TEP address for each POD (these represent the anycast VTEP shared across
the spines in a pod and are used as the EVPN next-hop when encapsulating traffic between spines in
separate PODs).

In the “Fabric External Routing Profile”, define the IP subnets used between the spines and the local IPN
nodes; specify name: “ext_routing_prof_1” and subnets “201.1.0.0/16, 202.1.0.0/16”.
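
The same settings map to a single API object in the infra tenant. The sketch below follows the structure used in publicly available multipod examples (an fvFabricExtConnP object with per-pod dataplane TEPs, a peering profile and the external routing profile); the dataplane TEP addresses are placeholders, and both the class names and the peering type string should be verified against the Postman collection before use:

import requests

APIC = "https://10.10.10.10"   # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

multipod = {
    "fvFabricExtConnP": {
        "attributes": {
            "dn": "uni/tn-infra/fabricExtConnP-1",
            "id": "1",
            "rt": "extended:as2-nn4:29:12",        # community from the GUI step
        },
        "children": [
            # anycast dataplane TEP per pod (placeholder addresses)
            {"fvPodConnP": {"attributes": {"id": "1"}, "children": [
                {"fvIp": {"attributes": {"addr": "203.1.1.1/32"}}}]}},
            {"fvPodConnP": {"attributes": {"id": "2"}, "children": [
                {"fvIp": {"attributes": {"addr": "204.1.1.1/32"}}}]}},
            # "Full Mesh" peering between the spines
            {"fvPeeringP": {"attributes": {"type": "automatic_with_full_mesh"}}},
            # subnets used on the spine-to-IPN links
            {"l3extFabricExtRoutingP": {"attributes": {"name": "ext_routing_prof_1"}, "children": [
                {"l3extSubnet": {"attributes": {"ip": "201.1.0.0/16", "scope": "import-security"}}},
                {"l3extSubnet": {"attributes": {"ip": "202.1.0.0/16", "scope": "import-security"}}}]}},
        ],
    }
}

s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()
s.post(f"{APIC}/api/mo/uni/tn-infra.json", json=multipod).raise_for_status()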

Your “Topology” should now look like this:

Notice the Inter-Pod Network and Pod2 sections.

Now we need to focus on configuring the interfaces towards the local IPN. In this document we use a
“manual” method of configuring all the interface policies that are required on the links towards the
IPN. Although this method may look complicated, it gives you full control over the naming of your
policies, and being exposed to all the configuration steps gives you the opportunity to learn where all
the options are.

As an alternative, you could use the built-in “quick start” wizards (Fabric -> Access Policies -> Quick
Start), but those are not covered in this document.

Configure VLAN Pool for IPN Connectivity:
Navigate

Fabric -> Access Policies -> Pools -> VLAN

From “Actions” menu select “Create VLAN Pool”

Create the VLAN pool; make sure the allocation mode is “Static” and there is a single VLAN in the pool:

Note: You must use VLAN 4, as this is the default VLAN the spines use to communicate with the IPN.
It is also important later, when you configure your IPN nodes, to specify “encapsulation dot1q 4” on
the subinterfaces.
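
If you are scripting the access policies instead, the VLAN pool maps to an fvnsVlanInstP object with a single encapsulation block for VLAN 4. A minimal sketch follows; the pool name is a placeholder I chose for illustration, so align it with whatever name you use in the GUI or in the Postman collection:

import requests

APIC = "https://10.10.10.10"   # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

# static VLAN pool containing only VLAN 4 (the VLAN the spines use towards the IPN)
vlan_pool = {
    "fvnsVlanInstP": {
        "attributes": {
            "dn": "uni/infra/vlanns-[MultiPod_VLANPool]-static",   # placeholder pool name
            "name": "MultiPod_VLANPool",
            "allocMode": "static",
        },
        "children": [
            {"fvnsEncapBlk": {"attributes": {"from": "vlan-4", "to": "vlan-4"}}}
        ],
    }
}

s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()
s.post(f"{APIC}/api/mo/uni/infra.json", json=vlan_pool).raise_for_status()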

Create External Routed Domain:
Navigate:

Fabric -> Access Policies -> Physical and External Domain

From “Actions” menu select “Create L3 Domain”

Specify name “MultiPod_ExtL3Dom” and reference the VLAN Pool we’ve created:

Configure AAEP:
Navigate:

Fabric -> Access Policies -> Global Policies -> Attachable Access Entity Profiles.

From “Actions” menu select “Create AAEP”

Specify name: MultiPod_AAEP, add our external L3 domain, continue with “Next”

Leave defaults in “Step 2”

Create Link Level Interface Policy
Navigate:

Fabric -> Access Policies -> Interface Policies -> Policies -> Link Level

From “Actions” menu select “Create Link Level Policy”

Specify name: 40G_LLP and select Speed: 40 Gbps

Create CDP Policy
Navigate:

Fabric -> Access Policies -> Interface Policies -> Policies -> CDP Interface

From “Actions” menu select “Create CDP Interface Policy”

Specify name: CDP_Enable, and Admin State: Enable

Create Spine Interface Policy Group
Navigate:

Fabric -> Access Policies -> Interface Policies -> Policy Groups -> Spine Policy Groups

From “Actions” menu select “Create Spine Access Policy Group”

Specify Name: SpineIPN_IntPolicyGroup, 40G_LLP, CDP_Enable, MultiPod_AAEP

Create Spine Interface Profile:
Navigate:

Fabric -> Access Policies -> Interface Policies -> Profiles -> Spine Profile

From “Actions” menu select “Create Spine Interface Profile”

Specify name: SpineIPN_IntProfile and add Interface Selector:

On the Interface Selector screen, specify name: Interface, Interface IDs: 1/1-2 and the Interface Policy
Group: SpineIPN_IntPolicyGroup

Finish by submitting the configuration.

Create Spine Profile
Navigate:

Fabric -> Access Policies -> Switch Policies -> Profiles -> Spine Profile

From “Actions” menu select “Create Spine Profile”

Specify name: Spines and add a Spine Selector for spines 201-204 (don’t use the drop-down box for
spine selection, as spines 203 and 204 are not part of the fabric yet and will not be listed).

Continue with “Next” to “Step 2”; here, tick our interface profile and click “Finish”.

This will configure interfaces e1/1 and e1/2 on all spines to run at 40 Gbps, enable CDP, and allow
VLAN 4.

Create OSPF Interface Policy
Navigate:

Tenants -> infra -> Networking -> Protocol Policies -> OSPF Interface

From “Actions” menu select “Create OSPF Interface Policy”

Specify name: “IPN_OSPFIntPolicy”, Point-to-point, Advertise subnet, MTU Ignore

Create Routed Outside for EVPN: Notes

Multipod uses MP-BGP EVPN between the spines in different pods; OSPF runs in the inter-pod network
(IPN) to provide the underlay reachability for this peering.

This part is fairly tricky to configure in the GUI, so I recommend doing it with an API call via the
“Postman” app for Chrome and the provided “collection” (see the instructions in the second part of this
document).

As a summary we need to follow these steps:

1. Create External Routed Network.


2. Update the External Routed Domain configuration.
3. Create Node Profile for POD2.
4. Add interface configuration.

Create Routed Outside for EVPN: Procedure
Navigate:

Tenants -> infra -> Networking -> External Routed Networks

From “Actions” menu select “Create Routed Outside for EVPN”

In “STEP 1” of the wizard specify Name: multipod, Area ID: 0, Area Type: Regular

Continue with “Next” to “STEP 2”

In “STEP 2” of the wizard we define the Nodes and Interfaces for the spines in POD1: specify the Node
Profile Name: “POD1-INode”, add Spines 201 and 202 and define their loopbacks, specify the Logical
Interface Profile Name: “LIFp_v4”, select the OSPF Interface Policy: IPN_OSPFIntPolicy and finally add
the IP addresses on node-201 e1/1, e1/2 and node-202 e1/1, e1/2 (these are our spine-to-IPN
links).

Finish by clicking the “Next” button.

Now highlight our external routed network “multipod” and set the “External Routed Domain” to
“MultiPod_ExtL3Dom”.

To verify the configuration so far you can navigate to:

Fabric -> Inventory -> Fabric Membership

If you have pre-registered the spines in POD2, their “Role” will now change from “unknown” to
“spine”. Notice they will not have IP addresses yet.

If you have not pre-registered the nodes in POD2, you will now see the two new spines being
detected, giving you the option to register them.

Now we can configure the IPN links from the spines in POD2.

Navigate:

Tenants -> infra -> Networking -> External Routed Networks -> multipod -> Logical Node Profiles

From the “Actions” menu select “Create Node Profile”

In the “Create Node Profile” dialog specify Name: “POD2-INode”, then add nodes 203 and 204 to the
node list, giving them loopback IP addresses 203.203.203.203 and 204.204.204.204.

Click the “+” in the OSPF Interface Profiles section to configure the interfaces; this will open another
screen.

On the “Create Interface Profile” wizard screen, specify Name: “LIFp_v4” and select the
IPN_OSPFIntPolicy. NOTE: you will not be able to configure the routed sub-interfaces at this point;
just finish the wizard with “OK” followed by the “Submit” button.

Go back to the newly created “POD2-INode” logical node profile, highlight the “LIFp_v4” logical
interface profile and add the Routed Sub-Interfaces using the “+” button on the right:

You should be able to specify each interface in POD2.

Repeat this step four times, once for each spine-to-IPN interface in POD2.

Your Logical Interface Profile for POD2 should look like this:

Review the “Fabric Membership”

If you pre-registered all nodes in POD2, they will now show up with an IP address assigned.

If you haven’t pre-registered the nodes in POD2, they will be discovered at this point and you can now
register them:

After registering, wait a moment for the nodes to get an IP address and be added to the topology
before you continue.

The topology will show all nodes in POD1 and POD2:

Notice only two of the IPN devices show in the Inter-Pod Network – that is normal.
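
You can also confirm the membership programmatically. The sketch below queries the fabricNode class and groups the registered switches by pod; the APIC address and credentials are placeholders:

import requests

APIC = "https://10.10.10.10"   # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# every registered switch appears as a fabricNode whose dn looks like
# topology/pod-<pod>/node-<id>
resp = s.get(f"{APIC}/api/node/class/fabricNode.json")
resp.raise_for_status()
for item in resp.json()["imdata"]:
    attrs = item["fabricNode"]["attributes"]
    pod = attrs["dn"].split("/")[1]               # e.g. "pod-1" or "pod-2"
    print(f'{pod}: node-{attrs["id"]} {attrs["name"]} ({attrs["role"]})')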

Register second APIC controller (in POD1):
Cluster configuration ...
Enter the fabric name [ACI Fabric1]: i02-fabric-01
Enter the fabric ID (1-128) [1]:
Enter the number of controllers in the fabric (1-9) [3]:
Enter the POD ID (1-9) [1]:
Enter the controller ID (1-3) [1]: 2
Enter the controller name [apic1]: i02-apic-02
Enter address pool for TEP addresses [10.0.0.0/16]: 10.1.0.0/16
Note: The infra VLAN ID should not be used elsewhere in your environment
and should not overlap with any other reserved VLANs on other platforms.
Enter the VLAN ID for infra network (2-4094) [2]: 3966

Remember that the fabric name, infra network VLAN ID and TEP address pool MUST match what was
used for APIC controller 1.

Register third APIC controller (in POD2):


Cluster configuration ...
Enter the fabric name [ACI Fabric1]: i02-fabric-01
Enter the fabric ID (1-128) [1]:
Enter the number of controllers in the fabric (1-9) [3]:
Enter the POD ID (1-9) [1]: 2
Enter the controller ID (1-3) [1]: 3
Enter the controller name [apic3]: i02-apic-03
Enter address pool for TEP addresses [10.0.0.0/16]: 10.1.0.0/16
Note: The infra VLAN ID should not be used elsewhere in your environment
and should not overlap with any other reserved VLANs on other platforms.
Enter the VLAN ID for infra network (2-4094): 3966

NOTE: this time specify POD ID “2” but you must still use the “10.1.0.0/16” TEP address range.

After a few minutes the topology diagram will reflect the newly provisioned APICs.
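
To confirm that the three controllers have formed a healthy cluster without clicking through the GUI, one option is to query the APIC cluster membership via the API. The sketch below assumes the commonly used infraWiNode class, whose health attribute is expected to report “fully-fit” once a controller has fully joined; treat the class and attribute names as assumptions to verify on your fabric:

import requests

APIC = "https://10.10.10.10"   # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# each APIC known to the cluster should appear as an infraWiNode entry
resp = s.get(f"{APIC}/api/node/class/infraWiNode.json")
resp.raise_for_status()
for item in resp.json()["imdata"]:
    attrs = item["infraWiNode"]["attributes"]
    print(attrs["dn"], "->", attrs["health"])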

Configure your “Golf” devices: Procedure
In this example we use a Nexus 7700 as the “Golf” device, with F3 line cards and 7.3(1)D1(1) code:

Enable required features:


feature-set mpls
feature-set fabric
feature fabric forwarding
nv overlay evpn
feature ospf
feature bgp
feature ipp
feature mpls l3vpn
feature mpls ldp
feature interface-vlan
system bridge-domain 100-3000
feature nv overlay
feature vni

Set BDs to VNIs:


system bridge-domain 100-3000
system fabric bridge-domain 2000-3000

Set up the infra connectivity; in our case the Golf device peers with the spines via the IPN network
used for multipod.

IPN facing interface configuration example:


interface Ethernet1/17
mtu 9216
ip address 192.168.10.1/24
ip ospf network point-to-point
ip router ospf IPN area 0.0.0.0
no shutdown

router ospf IPN


router-id 5.5.5.5

VXLAN configuration on the nexus golf device:


interface nve1
no shutdown
source-interface loopback0
host-reachability protocol bgp
unknown-peer-forwarding enable
vni assignment downstream all
!
vxlan udp port 48879
fabric forwarding switch-role dci-node border

EBGP configuration to peer with the ACI spines:


feature bgp

router bgp 3
router-id 200.200.200.201
address-family l2vpn evpn
allow-vni-in-ethertag
neighbor 11.11.11.11 (example for spine 1, repeat for all spines that are peering)
remote-as 100
update-source loopback0
ebgp-multihop 10
timers 1 3
address-family ipv4 unicast
address-family l2vpn evpn
send-community extended
import vpn unicast reoriginate

Create automation profiles:

VRF Profile:
configure profile vrf-common-mpls-l3vpn-dc-edge
vrf context $vrfName
vni $include_vrfSegmentId
rd auto
address-family ipv4 unicast
route-target import $include_client_import_ipv4_bgpRT_1 evpn
route-target export $include_client_export_ipv4_bgpRT_1 evpn
route-target import $include_client_import_ipv4_bgpRT_2 evpn
route-target export $include_client_export_ipv4_bgpRT_2 evpn
route-target import $include_client_import_ipv4_bgpRT_3 evpn
route-target export $include_client_export_ipv4_bgpRT_3 evpn
route-target import $include_client_import_ipv4_bgpRT_4 evpn
route-target export $include_client_export_ipv4_bgpRT_4 evpn
route-target import $include_client_import_ipv4_bgpRT_5 evpn
route-target export $include_client_export_ipv4_bgpRT_5 evpn
route-target import $include_client_import_ipv4_bgpRT_6 evpn
route-target export $include_client_export_ipv4_bgpRT_6 evpn
route-target import $include_client_import_ipv4_bgpRT_7 evpn
route-target export $include_client_export_ipv4_bgpRT_7 evpn
route-target import $include_client_import_ipv4_bgpRT_8 evpn
route-target export $include_client_export_ipv4_bgpRT_8 evpn
route-target import $include_client_import_ipv4_bgpRT_1
route-target export $include_client_export_ipv4_bgpRT_1
route-target import $include_client_import_ipv4_bgpRT_2
route-target export $include_client_export_ipv4_bgpRT_2
route-target import $include_client_import_ipv4_bgpRT_3
route-target export $include_client_export_ipv4_bgpRT_3
route-target import $include_client_import_ipv4_bgpRT_4
route-target export $include_client_export_ipv4_bgpRT_4
route-target import $include_client_import_ipv4_bgpRT_5
route-target export $include_client_export_ipv4_bgpRT_5
route-target import $include_client_import_ipv4_bgpRT_6
route-target export $include_client_export_ipv4_bgpRT_6
route-target import $include_client_import_ipv4_bgpRT_7
route-target export $include_client_export_ipv4_bgpRT_7
route-target import $include_client_import_ipv4_bgpRT_8
route-target export $include_client_export_ipv4_bgpRT_8
router bgp $asn
vrf $vrfName
address-family ipv4 unicast
advertise l2vpn evpn
label-allocation-mode per-vrf
address-family ipv6 unicast
advertise l2vpn evpn
label-allocation-mode per-vrf
interface nve $nveId
member vni $include_vrfSegmentId associate-vrf

MPLS L3VPN Universal profile:


configure terminal
configure profile defaultNetworkMplsL3vpnDcProfile
ipp tenant $vrfName $client_id
include profile any

VRF tenant profile:


configure terminal
configure profile vrf-tenant-profile
vni $vrfSegmentId
bridge-domain $bridgeDomainId
member vni $vrfSegmentId
interface bdi $bridgeDomainId
vrf member $vrfName
ip forward
no ip redirects
ipv6 forward
no ipv6 redirects

no shutdown
configure terminal

Define the Opflex peering:


feature ipp
ipp
profile-map profile defaultNetworkMplsL3vpnDcProfile include-profile vrf-common-mpls-l3vpn-dc-edge
local-vtep nve 1
bgp-as 3
identity 5.5.5.5
fabric 1
opflex-peer 11.11.11.11 8009 <-spine loopback addresses
opflex-peer 12.12.12.12 8009
opflex-peer 21.21.21.21 8009
opflex-peer 22.22.22.22 8009
ssl encrypted

Modify Routed Outside to support “Golf”: Procedure
Navigate:

Tenants -> infra -> Networking -> External Routed Networks

Add “provider” label “golf”

Configure the “BGP Infra Peer Connectivity” peering with the Golf devices in POD1 and POD2 by
creating a BGP connectivity profile:

How to consume the “Golf” connectivity: Procedure
Navigate to your target tenant:

Tenants -> Prod_Tenant -> Networking -> External Routed Networks and create “Routed outside”

You only need to configure the “Consumer Label” and the VRF.

The Consumer Label has to match the “provider label” configured in the steps above, so we use
“golf”:

Continue the wizard by clicking “Next”.

Define L3 out EPG:

Now modify the BD by associating the L3out_golf:

Modify the tenant’s VRF by adding a “BGP Route Target Profile”:

Create both “Import” and “Export” route targets for the IPv4 address family:

Note: use the following format: route-target:as4-nn2:x:x, where x has to be unique across all of your VRFs.

Enable the “OpFlex” protocol and specify the VRF name that will be pushed to your “Golf” devices:

Verify that the configuration has been pushed to the “Golf” device:


li10-n7710-01-DC1-G1# show ipp fabric
Global info:
config-profile defaultNetworkMplsL3vpnDcProfile
include-config-profile vrf-common-mpls-l3vpn-dc-edge
local-vtep nve 1
bgp-as 3
identity 5.5.5.5

Fabric 1 (Healthy)
opflex-peer 11.11.11.11:8009 (Connected and ready)
opflex-peer 12.12.12.12:8009 (Connected and ready)
opflex-peer 21.21.21.21:8009 (Connected and ready)
opflex-peer 22.22.22.22:8009 (Connected and ready)
ssl encrypted

Tenant Policies
3: Fabric Vrf: Prod_Tenant:vrf-1, Vrf: Prod_Tenant
RT v4:(5:5,5:5) v6:(nil,nil)
Id 10, HostId: 10
flags 0x0

framework_p: 0xf02006bc

li10-n7710-01-DC1-G1# show bgp l2vpn evpn summary


BGP summary information for VRF default, address family L2VPN EVPN
BGP router identifier 200.200.200.201, local AS number 3
BGP table version is 1702, L2VPN EVPN config peers 4, capable peers 4
136 network entries and 188 paths using 16064 bytes of memory
BGP attribute entries [10/1440], BGP AS path entries [1/6]
BGP community entries [0/0], BGP clusterlist entries [0/0]

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd


11.11.11.11 4 100 15837 15568 1702 0 0 04:21:58 26
12.12.12.12 4 100 15862 15566 1702 0 0 04:21:57 26
21.21.21.21 4 100 8477 8296 1702 0 0 01:22:05 20
22.22.22.22 4 100 8420 8242 1702 0 0 01:21:01 20

And other relevant commands:

li10-n7710-01-DC1-G1# show bgp l2vpn evpn


li10-n7710-01-DC1-G1# show ip route vrf Prod_Tenant

On ACI Spine nodes:

spine1201_i08-n9508-01# show bgp l2vpn evpn summary vrf overlay-1


BGP summary information for VRF overlay-1, address family L2VPN EVPN
BGP router identifier 11.11.11.11, local AS number 100
BGP table version is 3516, L2VPN EVPN config peers 6, capable peers 6
254 network entries and 270 paths using 39692 bytes of memory
BGP attribute entries [19/2736], BGP AS path entries [1/6]
BGP community entries [0/0], BGP clusterlist entries [2/8]

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd


5.5.5.5 4 3 17691 18156 3516 0 0 04:35:34 4 <- “golf” device
6.6.6.6 4 3 18453 18795 3516 0 0 05:10:45 4 <- “golf” device
7.7.7.7 4 3 10490 11012 3516 0 0 01:35:51 4 <- “golf” device
8.8.8.8 4 3 11168 11622 3516 0 0 01:35:42 4 <- “golf” device
21.21.21.21 4 100 316 362 3516 0 0 01:35:45 53 <- spine in pod2
22.22.22.22 4 100 312 347 3516 0 0 01:35:01 53 <- spine in pod2

spine1201_i08-n9508-01#

Configuration using the northbound API

Install Postman plugin


Postman is a RESTful API client plugin for the popular Chrome browser. To install Postman, open
Chrome and go to http://www.getpostman.com/. From there you will be redirected to the Chrome
Web Store, where you can add Postman to Chrome.

To open Postman, start Chrome and head to the Apps section:

Load the Postman collection

Embedded below and distributed with the document is the library of API calls needed for
configuration of our multipod setup.

MultiPodSetupExample_v1.4.postman_collection.json

(as of writing this document the embedded version of the file is 1.4; please check
https://communities.cisco.com/docs/DOC-55116, where the most up-to-date version is attached)

Before we import that library into Postman, it is worth opening it in a text editor and replacing all
instances of “10.10.10.10” with the IP address of your APIC server; this will save you time editing
the API URLs later.

Save the file and import into Postman:

To view the imported library, toggle the show/hide sidebar button and select “Collections”.

Before you use any of the API calls, make sure you connect to your APIC at least once from Chrome
to accept the certificate.

Run all API calls to configure the fabric.

The scripts assume you have provisioned the first APIC server and nothing else is configured. Run the
scripts in order:

1_LOGIN (to get authentication cookie)

2_REGISTER_NODES (you will need to edit the “Body” of the call with your serial numbers and hostnames)

3_NODE_MGMT_IP (again edit the body to specify your own OOB IP addresses)

4_POD_POLICIES

5_OSPF_POOLICY

At this point, wait a few minutes for POD1 to fully register:

Navigate to Fabric -> Inventory -> Pod 1 and wait for all nodes from POD1 to show up:

6_Configure L3 Interfaces (this is a folder at the top of the list with quite a few calls inside; run
them all, one by one)

7_TEP_POOLs

8_IPN_L3OUT

A few minutes after all those calls have been sent, the nodes from POD2 should join the topology and
your multipod fabric is ready to go with no errors.
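
If you ever need to replay the same sequence outside Postman, the calls are plain REST posts. The sketch below loops over exported JSON bodies in collection order; the file names and URL paths are hypothetical placeholders standing in for the payloads and URLs saved in the collection:

import json
from pathlib import Path

import requests

APIC = "https://10.10.10.10"   # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

# hypothetical (URL path, payload file) pairs exported from the Postman collection,
# listed in the same order the document runs them
CALLS = [
    ("/api/mo/uni/controller/nodeidentpol.json", "2_register_nodes.json"),
    ("/api/mo/uni/tn-mgmt.json", "3_node_mgmt_ip.json"),
    # ...continue with the remaining calls in collection order
]

s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

for path, body_file in CALLS:
    payload = json.loads(Path(body_file).read_text())
    r = s.post(f"{APIC}{path}", json=payload)
    print(body_file, "->", r.status_code)
    r.raise_for_status()

The order of sending is the same whether the calls come from Postman or from a script; only the transport differs.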
