Professional Documents
Culture Documents
VXLAN With Static Ingress Replication and Multicast Control Plane
Posted by Paris Arau on December 30, 2017 in Cisco, How To's, Technical
This is the first part of a series covering VXLAN on NEXUS devices, in which various control-plane approaches will be covered.
In this first part, the unicast and multicast control planes are discussed; in the next post, we'll discuss VXLAN using MP-BGP. Each of these has advantages and disadvantages.
The purpose of this series is to show how you can configure each method and how the traffic is forwarded.
VXLAN Tunnel Endpoint (VTEP): the end of a VXLAN segment that performs encapsulation and de-encapsulation.
The first part of this article covers simple VXLAN, using this topology:
The NEXUS devices all run an IGP for loopback interface reachability, and all traffic between the edge NEXUS devices must go through NX_OS_4.
These are the OSPF routes on NX_OS_4; similar output is found on all the other devices:
1.1.1.1/32, ubest/mbest: 1/0
    *via 10.10.14.1, Eth1/1, [110/41], 00:07:04, ospf-1, intra
1.1.1.2/32, ubest/mbest: 1/0
    *via 10.10.24.2, Eth1/2, [110/41], 00:10:01, ospf-1, intra
1.1.1.3/32, ubest/mbest: 1/0
    *via 10.10.34.3, Eth1/3, [110/41], 00:09:55, ospf-1, intra

NX_OS_4#
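For completeness, a minimal underlay sketch that would produce routes like these on NX_OS_1, assuming loopback0 and the point-to-point link are advertised into OSPF area 0 (the /24 link mask and process tag "1" are assumptions, though the tag matches the "ospf-1" seen in the output above):

```
feature ospf
router ospf 1
interface loopback0
  ip address 1.1.1.1/32
  ip router ospf 1 area 0.0.0.0
interface Ethernet1/1
  no switchport
  ip address 10.10.14.1/24
  ip router ospf 1 area 0.0.0.0
```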
So far, everything is as expected. To enable VXLAN, several things are required.
The first is to enable the VXLAN and overlay features and to map a VLAN to a VXLAN network identifier (VNI):
version 7.0(3)I6(1)
vlan 1,100
vlan 100
  vn-segment 10100

NX_OS_1#
And finally, create the overlay interface and specify the ingress replication type along with the peers.
This is for NX_OS_1:
version 7.0(3)I6(1)
feature nv overlay

interface nve1
  no shutdown
  source-interface loopback0
  member vni 10100
    ingress-replication protocol static
      peer-ip 1.1.1.2
      peer-ip 1.1.1.3

NX_OS_1#
An almost identical configuration is found on NX_OS_2 and NX_OS_3; the only difference is the peer IP addresses.
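For illustration, the equivalent overlay interface on NX_OS_2 would look roughly like this, a sketch assuming the per-device loopback addressing shown above (1.1.1.x):

```
interface nve1
  no shutdown
  source-interface loopback0
  member vni 10100
    ingress-replication protocol static
      peer-ip 1.1.1.1
      peer-ip 1.1.1.3
```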
Once this configuration is applied, each router will create two tunnels, one to each of the other two routers:
You can also check the VXLAN network identifier along with the peer status:
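The commands for this check are standard NX-OS show commands, though the exact output format varies by release:

```
NX_OS_1# show nve vni
NX_OS_1# show nve peers
```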
R1#ping 100.100.100.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 100.100.100.2, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 17/18/19 ms
R1#ping 100.100.100.3
As you can see, R1 gets the ARP entries as if all three routers were in a normal VLAN.
The MAC address table on NX_OS_1 looks like this; it helps to understand which MAC was learned via a direct connection (R1) and which ones were learned over the overlay interface, and from which peer:
You can clearly see the VXLAN header encapsulating the original frame received from R1 on
eth1/2.
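As a side note, the VXLAN header seen in such a capture is easy to decode by hand. Here is a minimal Python sketch of parsing the 8-byte header defined in RFC 7348; the sample bytes are constructed for the VNI used in this lab, not taken from the capture:

```python
import struct

def parse_vxlan_header(data: bytes) -> tuple[int, bool]:
    """Parse the 8-byte VXLAN header (RFC 7348) that follows the outer UDP header.

    Returns (vni, flag_ok): the 24-bit VNI and whether the I flag is set,
    which indicates the VNI field is valid."""
    if len(data) < 8:
        raise ValueError("a VXLAN header is 8 bytes")
    flags = data[0]                          # bit 0x08 is the I flag; the rest are reserved
    vni = int.from_bytes(data[4:7], "big")   # bytes 4-6 carry the VNI
    return vni, bool(flags & 0x08)

# A constructed header carrying VNI 10100, as used in this lab.
sample = struct.pack("!B3x", 0x08) + (10100).to_bytes(3, "big") + b"\x00"
vni, flag_ok = parse_vxlan_header(sample)
# vni == 10100, flag_ok == True
```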
And that is everything about VXLAN using unicast static ingress replication.
Next, we will cover the VXLAN implementation with a multicast control plane. From the underlay point of view, nothing changed, except that PIM was added, with NX_OS_4 as the RP for a group used for VXLAN.
This is the configuration on NX_OS_1; all the other devices have an identical configuration:
version 7.0(3)I6(1)

interface Ethernet1/1
  no switchport
  mtu 9216
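The PIM portion of the configuration is truncated above. A typical sparse-mode setup matching the description would look roughly like this; the RP address 1.1.1.4 and the group range are assumptions based on the text, not taken from the device:

```
feature pim
ip pim rp-address 1.1.1.4 group-list 226.0.0.0/8

interface Ethernet1/1
  ip pim sparse-mode
```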
The configuration pertaining to VXLAN using multicast is almost identical to the one using unicast.
The difference is that ingress replication was removed and a multicast group was added:
version 7.0(3)I6(1)
feature nv overlay

interface nve1
  no shutdown
  source-interface loopback0
  member vni 10100
    mcast-group 226.0.0.100

NX_OS_1#
Independent of the overlay interface configuration, the underlying PIM infrastructure should work. After verifying the PIM neighbors of NX_OS_4 (the RP), the multicast routing table on NX_OS_1 shows the state for the VXLAN group:
(1.1.1.2/32, 226.0.0.100/32), uptime: 00:16:34, ip mrib pim nve
  Incoming interface: Ethernet1/1, RPF nbr: 10.10.14.4, uptime: 00:16:34
  Outgoing interface list: (count: 1)
    nve1, uptime: 00:06:29, nve
...
(*, 232.0.0.0/8), uptime: 00:31:11, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0, uptime: 00:31:11
  Outgoing interface list: (count: 0)
NX_OS_1#
And this is the view from the RP. Observe, for instance, that a packet coming from 1.1.1.1 and destined for 226.0.0.100 should be forwarded out eth1/2 (NX_OS_2) and eth1/3 (NX_OS_3). Also, from any source towards 226.0.0.100, the packets should be forwarded to all the other NEXUS devices:
This is the VXLAN network identifier and now it shows the multicast group:
Also, the MAC address table looks the same like before:
9 1.1.1.3
10 * 100 fa16.3eae.df08 dynamic 00:03:21 F F (0x47000001) nve-peer1
11 1.1.1.2
12 * 100 fa16.3ebd.45fa dynamic 00:03:31 F F Eth1/2
13 NX_OS_1#
Again, the MAC type is dynamic, just as with the unicast control plane.
The following describes the traffic flow and VTEP discovery for an ARP Request/ARP Reply exchange.
The ARP Request is sent by the end host and reaches NX_OS_1.
NX_OS_1 sends the ARP Request encapsulated, using its loopback IP address as the source and the multicast group as the destination:
This is a packet capture on eth1/1 of NX_OS_1 showing the ARP Request leaving. Notice the source and destination IP of the packet:
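The encapsulation step described above can be sketched in a few lines of Python. This builds the 8-byte VXLAN header from RFC 7348 in front of the original frame; the outer IP/UDP wrapper (src 1.1.1.1, dst 226.0.0.100, UDP destination port 4789) is noted in comments but not constructed, and the sample frame bytes are illustrative, not from the capture:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.

    The result is what the VTEP places in the UDP payload of the outer
    packet (here, outer src 1.1.1.1 and outer dst 226.0.0.100)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    header = struct.pack("!B3x", 0x08)           # I flag set, reserved bits zero
    header += vni.to_bytes(3, "big") + b"\x00"   # 24-bit VNI plus a reserved byte
    return header + inner_frame

# A hypothetical broadcast ARP frame header: dst ff:ff:ff:ff:ff:ff,
# src fa:16:3e:bd:45:fa, EtherType 0x0806 (ARP); ARP payload omitted for brevity.
inner = bytes.fromhex("ffffffffffff" "fa163ebd45fa" "0806")
payload = vxlan_encapsulate(inner, 10100)
```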
Next, after the packet reaches the RP, the RP forwards it out every interface on which a PIM Join for the 226.0.0.100 group was received:
After the packet reaches NX_OS_3 (at this moment, NX_OS_3 learns about NX_OS_1), it is de-encapsulated and sent to R3, and R3 sends an ARP Reply to NX_OS_3. Next, NX_OS_3 encapsulates the ARP Reply in a unicast packet and sends it directly to NX_OS_1:
This is a packet capture on NX_OS_1 showing the ARP Reply coming from NX_OS_3:
And that is pretty much how VXLAN using multicast is implemented and how the data forwarding happens.
To sum up, some of the advantages and disadvantages of each approach:

Advantages:
  Unicast control plane:
    - Controlled deployment of VTEPs
    - Easier troubleshooting
  Multicast control plane:
    - Reduced operational overhead
    - Scalability
    - Simplicity

Disadvantages:
  Unicast control plane:
    - Increased operational burden
    - Prone to configuration errors
    - Each peer must be configured on every VTEP
  Multicast control plane:
    - Each VNI uses one multicast group
    - Possible increased complexity due to PIM usage