MultiPodSetupWithGolf v1.0
Ver 2.0
Introduction
This document covers step-by-step configuration of the ACI fabric to support the Multipod feature introduced in code version 2.0(1m).
Multipod enables provisioning a more fault-tolerant fabric composed of multiple pods with isolated control-plane protocols. Multipod also provides more flexibility with regard to the full-mesh cabling between leaf and spine switches. For example, if leaf switches are spread across different floors or different buildings, multipod enables provisioning a pod per floor or building and providing connectivity between the pods through the spine switches.
You can either follow the step-by-step configuration instructions using the GUI, or use the API configuration approach with the “Postman” collection for Chrome that is distributed with this document. Some basic configuration details for the IP network that interconnects the PODs are also provided.
The document assumes you have a “clean” fabric that you’re just about to provision; if this is not the case, you will need to skip and/or adjust some steps.
The assumption is that all switches (nodes) and APIC servers are running the 2.0(1m) code or higher. If this is not the case (especially for the nodes in POD2), it might be worth plugging all nodes and APIC servers into POD1, building a temporary single-pod topology in POD1, upgrading the switches and APICs to 2.0(1m), and then decommissioning the nodes that will end up in POD2 (for that you can use “eraseconfig setup” on the APIC servers, and “setup-clean-config.sh” followed by “reload” on the switches).
POD Physical Topology
[Diagram: POD physical topology. Two pairs of Nexus 7000 VDCs form the IPN (i05-n7009-01_VDC6/VDC7 and i04-n7009-01_VDC6/VDC7), interconnected over E7/1–E7/4 subinterfaces. POD1 contains APICs i02-APIC-01 (10.50.138.221), i02-APIC-02 (10.50.138.223) and i02-APIC-03 (10.50.138.225), spines Spine-201/Spine-202 (i02-n9336-01/-02), and leaves Leaf_101–Leaf_105 (i02-9396-01, i02-9396-02, i02-9372-01, i02-n9396-03, i02-9396-04). POD2 contains spines Spine-203/Spine-204 (i02-9336-03/-04). The spine-to-IPN links (E1/1–E1/2 on each spine) use /30 subnets from 201.1.1.0/24 and 202.1.1.0/24.]
IP Network configuration:
The IP network that interconnects the PODs needs to support multicast, so that bridge domain Broadcast, Unknown-unicast and Multicast (BUM) traffic can be transported between PODs. We are using a pair of Nexus 7000 devices with multiple VDCs to create the topology; below are a few of the configuration details that have been used:
interface Ethernet7/3
  mtu 9150
  no shutdown

interface Ethernet7/3.4
  mtu 9150
  encapsulation dot1q 4
  ip address 201.1.1.2/30
  ip ospf network point-to-point
  ip router ospf GLOBAL area 0.0.0.0
  ip pim sparse-mode
  ip dhcp relay address 10.1.0.1
  no shutdown
IPN Node 1:
  interface loopback1
    description Bidir Phantom RP
    ip address 192.168.100.1/30
    ip ospf network point-to-point
    ip router ospf IPN area 0.0.0.0
    ip pim sparse-mode

IPN Node 2:
  interface loopback1
    description Bidir Phantom RP
    ip address 192.168.100.1/29
    ip ospf network point-to-point
    ip router ospf IPN area 0.0.0.0
    ip pim sparse-mode
https://supportforums.cisco.com/document/55696/rp-redundancy-pim-bidir-phantom-rp
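The failover behaviour of the phantom RP follows from plain longest-prefix routing: both loopback prefixes cover the RP address, but the /30 is more specific than the /29, so IPN node 1 attracts the RP-bound traffic while its loopback is up. A quick illustrative sketch with Python's standard ipaddress module (addresses taken from the loopback configs above; adjust to your own addressing):

```python
import ipaddress

# Loopback1 prefixes from the two IPN node configs above
n1 = ipaddress.ip_interface("192.168.100.1/30").network   # 192.168.100.0/30
n2 = ipaddress.ip_interface("192.168.100.1/29").network   # 192.168.100.0/29

rp = ipaddress.ip_address("192.168.100.1")   # phantom RP address
assert rp in n1 and rp in n2                 # both nodes cover the RP

# Routing follows the longest (most specific) match, so the node
# advertising the /30 is the active RP while that route is present.
active = max((n1, n2), key=lambda n: n.prefixlen)
print(active)  # 192.168.100.0/30 -> IPN node 1 wins while it is up
```

If node 1's /30 stops being advertised, the /29 from node 2 becomes the best match and the RP role moves over without any reconfiguration.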
Multipod setup using GUI
Bring the first APIC online
We start by provisioning the first APIC server (the non-default values are highlighted; some output has been omitted):
Note: I’ve used the 10.1.0.0/16 subnet for POD1; we will provision 10.2.0.0/16 for POD2 later on.
Leave the APIC for a few minutes to become available before you try to log in via the GUI.
Register all nodes:
Fabric -> Inventory -> Fabric Membership
For POD1 nodes just use the Register Switch option like we normally do:
The nodes in POD2 are not visible at this point; they will be detected once the IPN is configured. You
can pre-register them now using the “Create Fabric Node Member” option; remember to specify
“Pod ID” as “2”:
It is recommended to configure NTP on all nodes – it saves you hassle with fabric discovery problems later and is best practice anyway. For NTP to work correctly, you will need the OOB management network configured first.
Configure Node Management Addresses:
Tenants -> mgmt -> Node Management Addresses -> Static Node Management Addresses
Fill in details for each node, see example for node 101 below:
Configure NTP provider:
Navigate Fabric -> Fabric Policies -> Pod Policies -> Policies -> Date and Time -> Policy Default
Fill in the information; don’t forget to tick “Preferred” and to select the “Management EPG”.
Configure Timezone:
Navigate Fabric -> Fabric Policies -> Pod Policies -> Policies -> Date and Time -> default
Configure BGP Policy
Navigate Fabric -> Fabric Policies -> Pod Policies -> Policies -> BGP Route Reflector default
I’ve configured spines 201 and 202 in POD1 and spines 203 and 204 in POD2 as the route reflectors. Remember that the control plane is independent in each pod.
Note: this configuration will raise some errors. As nodes 203 and 204 have not yet been detected, the configuration can’t be delivered to them; the errors will clear once the nodes in POD2 are detected.
Create Pod Policy Group:
Navigate Fabric -> Fabric Policies -> Pod Policies -> Policy Groups
As a minimum you must select “default” for both the “Date Time Policy” and the “BGP Route Reflector Policy”.
Modify default Pod Profile:
Navigate Fabric -> Fabric Policies -> Pod Policies -> Profiles -> default
After a few minutes you can verify that NTP has been applied to your nodes:
Navigate Fabric -> Fabric Policies -> Pod Policies -> Policies -> Date and Time -> Policy Default
Define TEP Pool for nodes in POD2:
Navigate to:
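If you prefer the API route, the TEP pool for POD2 can be expressed as a small JSON payload. This is a sketch only: the class and DN below (fabricSetupP under uni/controller/setuppol) follow the 2.x object model, so verify them against the Postman collection distributed with this document before relying on them.

```python
import json

# TEP pool for POD2 -- podId and tepPool mirror the values used in the GUI.
# The dn layout is an assumption from the 2.x object model; check it with
# the API Inspector or the distributed Postman collection.
tep_pool = {"fabricSetupP": {"attributes": {
    "dn": "uni/controller/setuppol/setupp-2",
    "podId": "2",
    "tepPool": "10.2.0.0/16",
}}}

# POST this body (after aaaLogin) to:
#   https://<apic>/api/mo/uni/controller/setuppol.json
print(json.dumps(tep_pool, indent=2))
```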
Configure Multipod
Fabric -> Inventory -> POD Fabric Setup Policy
Specify the Community “extended:as2-nn4:29:12” and “Full Mesh”. In the “POD Connection Profile”, define a TEP address for each POD (these represent the “anycast VTEP” shared across the spines, which serves as the EVPN next-hop when encapsulating between spines in separate PODs).
In the “Fabric External Routing Profile”, define the IP subnets between the spines and the local IPN nodes; specify the name “ext_routing_prof_1” and the subnets “201.1.0.0/16, 202.1.0.0/16”.
Your “Topology” should now look like this:
Now we need to focus on configuring the interfaces towards the local IPN. In this document we use a “manual” method of configuring all the interface policies required on the links towards the IPN. Although this method may look complicated, it gives you full control over the naming of your policies, and being exposed to all the configuration steps gives you the opportunity to learn where all the options are.
As an alternative, you could use the built-in “quick start” wizards (Fabric -> Access Policies -> Quick Start), but those are not covered in this document.
Configure VLAN Pool for IPN Connectivity:
Navigate
Create a VLAN pool; make sure the allocation mode is “Static” and there is a single VLAN in the pool:
Note: You must use VLAN 4, as this is the default VLAN the spines will use to communicate with the IPN. It is also important later, when you configure your IPN nodes, to specify “encapsulation dot1q 4” on the interface.
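For reference, the equivalent REST payload for this pool looks roughly like the following. The class names (fvnsVlanInstP with an fvnsEncapBlk child) are the standard access-policy classes; the pool name is our own choice, so treat this as a sketch and cross-check it against the distributed Postman collection.

```python
import json

# Static VLAN pool containing only vlan 4, the fixed encap the spines use
# towards the IPN. "MultiPod_VlanPool" is an illustrative name.
vlan_pool = {"fvnsVlanInstP": {
    "attributes": {"name": "MultiPod_VlanPool", "allocMode": "static"},
    "children": [{"fvnsEncapBlk": {
        "attributes": {"from": "vlan-4", "to": "vlan-4"}}}],
}}

# POST this body (after aaaLogin) to https://<apic>/api/mo/uni/infra.json
print(json.dumps(vlan_pool, indent=2))
```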
Create External Routed Domain:
Navigate:
Specify name “MultiPod_ExtL3Dom” and reference the VLAN Pool we’ve created:
Configure AAEP:
Navigate:
Fabric -> Access Policies -> Global Policies -> Attachable Access Entity Profiles
Specify the name MultiPod_AAEP, add our external L3 domain, and continue with “Next”.
Create Link Level Interface Policy
Navigate:
Fabric -> Access Policies -> Interface Policies -> Policies -> Link Level
Create CDP Policy
Navigate:
Fabric -> Access Policies -> Interface Policies -> Policies -> CDP Interface
Create Spine Interface Policy Group
Navigate:
Fabric -> Access Policies -> Interface Policies -> Policy Groups -> Spine Policy Groups
Create Spine Interface Profile:
Navigate:
Fabric -> Access Policies -> Interface Policies -> Profiles -> Spine Profile
On the Interface Selector screen, specify the name: Interface, Interface IDs: 1/1-2 and the Interface Policy Group: SpineIPN_IntPolicyGroup
Create Spine Profile
Navigate:
Fabric -> Access Policies -> Switch Policies -> Profiles -> Spine Profile
Specify the name: Spines and add a Spine Selector for spines 201-204 (don’t use the drop-down box for spine selection, as spines 203 and 204 are not part of the fabric yet and will not be listed).
“Tick” our interface profile, and click “Finish”.
This will configure interfaces e1/1 and e1/2 on all spines to run at 40Gbps, enable CDP, and allow VLAN 4.
Create OSPF Interface Policy
Navigate:
Tenants -> infra -> Networking -> Protocol Policies -> OSPF Interface
Create Routed Outside for EVPN: Notes
Multipod uses the BGP EVPN protocol over OSPF in the inter-pod network (IPN) for communicating
between spines in different pods.
This part is pretty tricky to configure in the GUI; I recommend doing it with an API call via the “Postman” app for Chrome and the provided “collection” (see the instructions in the second part of this document).
Create Routed Outside for EVPN: Procedure
Navigate:
In “STEP 1” of the wizard specify Name: multipod, Area ID: 0, Area Type: Regular
In “STEP 2” of the wizard we define the nodes and interfaces for the spines in POD1: specify the Node Profile name “POD1-INode”, add spines 201 and 202 and define the loopbacks, specify the Logical Interface Profile name “LIFp_v4”, select the OSPF Interface Policy IPN_OSPFIntPolicy, and finally add the IP addresses on node-201 e1/1, e1/2 and node-202 e1/1, e1/2 (these are our spine-to-IPN links).
Now highlight our external routed network “multipod” and update the “External Routed Domain” to “MultiPod_ExtL3Dom”.
If you have pre-registered the spines in POD2, their “Role” will now change from “unknown” to “spine”. Notice that they will not have IP addresses yet.
If you have not pre-registered the nodes in POD2, you will now see the two new spines being detected, giving you the option to register them.
Now we can configure the IPN links from the spines in POD2.
Navigate:
Tenants -> infra -> Networking -> External Routed Networks -> multipod -> Logical Node Profiles
In the “Create Node Profile” dialog specify the Name: “POD2-INode”, and add nodes 203 and 204 to the node list, giving them the loopback IP addresses 203.203.203.203 and 204.204.204.204.
Click the “+” in the OSPF Interface Profiles section to configure the interfaces; this will open another screen.
On the “Create Interface Profile” wizard screen, specify the Name: “LIFp_v4” and select the IPN_OSPFIntPolicy. NOTE: you will not be able to configure the routed sub-interfaces at this point; just finish the wizard with “OK” followed by the “Submit” button.
Go back to the newly created “POD2-INode” logical node profile, highlight the “LIFp_v4” logical interface profile, and add the Routed Sub-Interfaces using the “+” button on the right:
Repeat this step four times, once for each of the interfaces on the spine nodes in POD2.
Your Logical Interface Profile for POD2 should look like this:
If you pre-registered all nodes in POD2, they will now show up with IP addresses assigned.
If you haven’t pre-registered the nodes in POD2, they will be discovered at this point and you can now register them:
After registering, wait a moment for the nodes to get IP addresses and be added to the topology before you continue.
The topology will show all nodes in POD1 and POD2:
Notice only two of the IPN devices show in the Inter-Pod Network – that is normal.
Register second APIC controller (in POD1):
Cluster configuration ...
Enter the fabric name [ACI Fabric1]: i02-fabric-01
Enter the fabric ID (1-128) [1]:
Enter the number of controllers in the fabric (1-9) [3]:
Enter the POD ID (1-9) [1]:
Enter the controller ID (1-3) [1]: 2
Enter the controller name [apic1]: i02-apic-02
Enter address pool for TEP addresses [10.0.0.0/16]: 10.1.0.0/16
Note: The infra VLAN ID should not be used elsewhere in your environment
and should not overlap with any other reserved VLANs on other platforms.
Enter the VLAN ID for infra network (2-4094) [2]: 3966
Remember that the fabric name, infra network VLAN ID and TEP address pool MUST match what was used for APIC controller 1.
NOTE: for an APIC controller placed in POD2 (e.g. the third controller), specify POD ID “2” but still use the “10.1.0.0/16” TEP address range.
After a few minutes the topology diagram will reflect the newly provisioned APICs.
Configure your “Golf” devices
In this example we use a Nexus 7700 as the “Golf” device, with F3 line cards and 7.3(1)D1(1) code.
Set up the infra connectivity; in our case the Golf devices peer with the spines via the IPN network used for multipod:
router bgp 3
  router-id 200.200.200.201
  address-family l2vpn evpn
    allow-vni-in-ethertag
  neighbor 11.11.11.11 (example for spine 1, repeat for all spines that are peering)
    remote-as 100
    update-source loopback0
    ebgp-multihop 10
    timers 1 3
    address-family ipv4 unicast
    address-family l2vpn evpn
      send-community extended
      import vpn unicast reoriginate
Create automation profiles:
VRF Profile:
configure profile vrf-common-mpls-l3vpn-dc-edge
  vrf context $vrfName
    vni $include_vrfSegmentId
    rd auto
    address-family ipv4 unicast
      route-target import $include_client_import_ipv4_bgpRT_1 evpn
      route-target export $include_client_export_ipv4_bgpRT_1 evpn
      route-target import $include_client_import_ipv4_bgpRT_2 evpn
      route-target export $include_client_export_ipv4_bgpRT_2 evpn
      route-target import $include_client_import_ipv4_bgpRT_3 evpn
      route-target export $include_client_export_ipv4_bgpRT_3 evpn
      route-target import $include_client_import_ipv4_bgpRT_4 evpn
      route-target export $include_client_export_ipv4_bgpRT_4 evpn
      route-target import $include_client_import_ipv4_bgpRT_5 evpn
      route-target export $include_client_export_ipv4_bgpRT_5 evpn
      route-target import $include_client_import_ipv4_bgpRT_6 evpn
      route-target export $include_client_export_ipv4_bgpRT_6 evpn
      route-target import $include_client_import_ipv4_bgpRT_7 evpn
      route-target export $include_client_export_ipv4_bgpRT_7 evpn
      route-target import $include_client_import_ipv4_bgpRT_8 evpn
      route-target export $include_client_export_ipv4_bgpRT_8 evpn
      route-target import $include_client_import_ipv4_bgpRT_1
      route-target export $include_client_export_ipv4_bgpRT_1
      route-target import $include_client_import_ipv4_bgpRT_2
      route-target export $include_client_export_ipv4_bgpRT_2
      route-target import $include_client_import_ipv4_bgpRT_3
      route-target export $include_client_export_ipv4_bgpRT_3
      route-target import $include_client_import_ipv4_bgpRT_4
      route-target export $include_client_export_ipv4_bgpRT_4
      route-target import $include_client_import_ipv4_bgpRT_5
      route-target export $include_client_export_ipv4_bgpRT_5
      route-target import $include_client_import_ipv4_bgpRT_6
      route-target export $include_client_export_ipv4_bgpRT_6
      route-target import $include_client_import_ipv4_bgpRT_7
      route-target export $include_client_export_ipv4_bgpRT_7
      route-target import $include_client_import_ipv4_bgpRT_8
      route-target export $include_client_export_ipv4_bgpRT_8
  router bgp $asn
    vrf $vrfName
      address-family ipv4 unicast
        advertise l2vpn evpn
        label-allocation-mode per-vrf
      address-family ipv6 unicast
        advertise l2vpn evpn
        label-allocation-mode per-vrf
  interface nve $nveId
    member vni $include_vrfSegmentId associate-vrf
    no shutdown
configure terminal
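The route-target section of the profile above is the same two lines stamped out for eight variable slots, first with the evpn keyword and then without. When templating this profile for several DC-edge devices, a small generator avoids copy-and-paste mistakes; this sketch just reproduces the lines shown above:

```python
def rt_lines(slots=8):
    """Reproduce the route-target lines of the vrf-common profile above."""
    lines = []
    # EVPN route-targets first, then the plain copies, as in the profile.
    for suffix in (" evpn", ""):
        for i in range(1, slots + 1):
            for direction in ("import", "export"):
                lines.append(
                    f"route-target {direction} "
                    f"$include_client_{direction}_ipv4_bgpRT_{i}{suffix}")
    return lines

print(len(rt_lines()))  # 32 lines, matching the profile body
```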
Modify Routed Outside to support “Golf”
Navigate:
Configure the “BGP Infra Peer Connectivity” peering with the Golf devices in POD1 and POD2 by creating a BGP connectivity profile:
How to consume the “Golf” connectivity
Navigate to your target tenant:
Tenants -> Prod_Tenant -> Networking -> External Routed Networks and create a “Routed Outside”.
The consumer label has to match the “provider label” configured in the steps above, so we use “golf”:
Continue the wizard by clicking “Next”.
Modify the tenant’s VRF by adding a “BGP Route Target Profile”:
Create both “Import” and “Export” route targets for the IPv4 address family:
Note: use the format route-target:as4-nn2:x:x, where x:x has to be unique across all of your VRFs.
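Because the x:x value has to be unique per VRF, it is easy to collide once many tenants are involved. A tiny helper that hands out the values sequentially keeps them unique; the as4-nn2 format matches the note above, while the numbering scheme (and the second tenant name) are our own invention:

```python
def allocate_route_targets(vrf_names, start=1):
    """Give each VRF a unique route-target:as4-nn2:x:x string."""
    return {vrf: f"route-target:as4-nn2:{n}:{n}"
            for n, vrf in enumerate(vrf_names, start=start)}

# Starting at 5 matches the RT v4:(5:5,5:5) shown in the Golf output below;
# "Test_Tenant:vrf-1" is a hypothetical second VRF.
rts = allocate_route_targets(["Prod_Tenant:vrf-1", "Test_Tenant:vrf-1"], start=5)
print(rts["Prod_Tenant:vrf-1"])  # route-target:as4-nn2:5:5
```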
Enable the “OpFlex” protocol and specify the VRF name that will be pushed to your “Golf” devices:
Fabric 1 (Healthy)
opflex-peer 11.11.11.11:8009 (Connected and ready)
opflex-peer 12.12.12.12:8009 (Connected and ready)
opflex-peer 21.21.21.21:8009 (Connected and ready)
opflex-peer 22.22.22.22:8009 (Connected and ready)
ssl encrypted
Tenant Policies
3: Fabric Vrf: Prod_Tenant:vrf-1, Vrf: Prod_Tenant
RT v4:(5:5,5:5) v6:(nil,nil)
Id 10, HostId: 10
flags 0x0
framework_p: 0xf02006bc
And other relevant commands:
spine1201_i08-n9508-01#
Configuration using the northbound API
To open Postman, start Chrome and head to the apps section:
Load the Postman collection
Embedded below and distributed with this document is the library of API calls needed to configure our multipod setup.
MultiPodSetupExample_v1.4.postman_collection.json
(as of writing this document, the embedded version of the file is 1.4; please check https://communities.cisco.com/docs/DOC-55116, where the most up-to-date version is attached)
Before we import the library into Postman, it’s worth opening it in a text editor and replacing all instances of “10.10.10.10” with the IP address of your APIC server – this will save you time editing the API URLs later.
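Instead of hand-editing the file, the substitution can be scripted. A minimal sketch (the target IP is just an example from the topology; use your own APIC address and collection filename):

```python
import pathlib
import tempfile

def point_collection_at_apic(path, apic_ip, placeholder="10.10.10.10"):
    """Replace every occurrence of the placeholder APIC address in the
    exported Postman collection file."""
    p = pathlib.Path(path)
    p.write_text(p.read_text().replace(placeholder, apic_ip))

# Demonstrate on a throwaway file; point it at your real collection instead.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write('{"url": "https://10.10.10.10/api/aaaLogin.json"}')
point_collection_at_apic(f.name, "10.50.138.221")
print(pathlib.Path(f.name).read_text())
# -> {"url": "https://10.50.138.221/api/aaaLogin.json"}
```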
To view the imported library, toggle the hide/show sidebar control and select Collections.
Before you use any of the API calls, make sure you connect to your APIC at least once from Chrome to accept the certificate.
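Every call in the collection rides on the session cookie returned by the standard aaaLogin call, which is why the login call must succeed first. For orientation, its request body looks like this (the credentials are placeholders):

```python
import json

# Body of the APIC REST login call. POST it to
# https://<apic>/api/aaaLogin.json; the response sets the APIC-cookie
# token that authenticates the remaining calls in the collection.
login_body = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
print(json.dumps(login_body))
```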
Run all API calls to configure the fabric.
The scripts assume you have provisioned the first APIC server and that nothing else is configured. Run the scripts in order:
2_REGISTER_NODES (you will need to edit the “Body” of the call with your own serial numbers and hostnames)
3_NODE_MGMT_IP (again, edit the body to specify your own OOB IP addresses)
4_POD_POLICIES
5_OSPF_POOLICY
At this point wait a few minutes, checking for POD1 to fully register:
Navigate to Fabric -> Inventory -> Pod 1 and wait for all nodes from POD1 to show up:
6_Configure L3 Interfaces (note that this is a folder at the top of the list with quite a few calls inside; run them all, one by one)
7_TEP_POOLs
8_IPN_L3OUT
After all those calls have been sent, the nodes from POD2 should join the topology a few minutes later, and your multipod fabric is ready to go with no errors.