
VXLAN Part I. Why do we need VXLAN?

Introduction

This section examines the challenges that server virtualization causes for datacenter networks built on the traditional three-tier architecture, and how VXLAN responds to these challenges. At the end of this article, you can find a mind map as a memory builder.

Challenges for existing Datacenter networks


Figure 1-1 shows a hypothetical 3-tier Cloud Service Provider DC network
consisting of the following components.

• Access layer (L2): Twenty 48-port switches. Access-to-distribution links are 2 x 10 Gbps MEC (Multichassis EtherChannel).
• Distribution layer (L2/L3): Two distribution switches, which together form a virtualized switch. The default gateway for server segments resides on the distribution switches. Distribution-to-core links are L3.
• Core layer (L3): Two core switches.

Figure 1-1: The hypothetical Cloud SP Datacenter network.


Assume that there are 48 physical servers connected to each access switch. Each of these servers hosts five different tenants (dedicated virtualized environments), each with its own virtual routing and forwarding instance (VRF). One tenant consists of three broadcast domains: Presentation, Application, and Database, each with two virtual machines backing each other up. The customer manages their own tenant and can define the VLAN IDs, the MAC addresses of the virtual machines, and the IP addressing scheme. Virtual machine mobility is unrestricted. Based on this information, there can theoretically be (the arithmetic is verified in the sketch after the list):

• Physical servers: 960 => 20 (ToR switches) x 48 (ports per ToR)
• VMs / MAC addresses / ARP entries: 28,800 => 960 (hosts) x 30 (VMs per host)
• Broadcast domains: 14,400 => 960 (hosts) x 5 (tenants per host) x 3 (VLANs per tenant)
• Tenants / VRFs: 4,800 => 960 (hosts) x 5 (tenants per host)
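
A few lines of Python verify these scale figures; this is just a back-of-the-envelope sketch of the arithmetic above, not part of any vendor tooling:

```python
# Example topology: 20 ToR switches, 48 servers per switch, 5 tenants per
# server, 3 broadcast domains per tenant, 2 VMs per broadcast domain.
tor_switches, ports_per_tor = 20, 48
tenants_per_host, vlans_per_tenant, vms_per_domain = 5, 3, 2

hosts = tor_switches * ports_per_tor                                 # 960
vms = hosts * tenants_per_host * vlans_per_tenant * vms_per_domain   # 28,800
broadcast_domains = hosts * tenants_per_host * vlans_per_tenant      # 14,400
tenants = hosts * tenants_per_host                                   # 4,800

print(hosts, vms, broadcast_domains, tenants)  # 960 28800 14400 4800
```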

Although our example is purely theoretical, it illustrates the challenges that today's datacenter service providers face:

VLAN ID limitation: With a 12-bit VLAN ID, there can be only 4096 different VLANs. In small and medium-size data centers this is more than enough, but in massive public cloud service provider data centers it may not be: our example alone calls for 14,400 broadcast domains.
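
The shortfall is plain arithmetic; a two-line sketch:

```python
vlan_ids = 2 ** 12                    # 4096 possible VLAN IDs
broadcast_domains = 14_400            # required by the example above
print(broadcast_domains > vlan_ids)   # True: VLANs alone cannot cover it
```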

Multi-tenancy: In multi-tenant environments where customers can define both the VLAN IDs and the MAC addresses of their virtual machines, overlapping values may occur between tenants.

MAC table size: There are 28,800 virtual machines connected to our example network, which means that switches might have to hold 28,800 MAC addresses in their MAC address tables. Our example demonstrates that server virtualization can make the number of MAC entries on switches considerably large. If there are more MAC addresses than the switch MAC table can store, the switch cannot learn new MAC addresses before unused entries age out. This leads to unnecessary flooding of frames with unknown destination MAC addresses.

Note! Cisco Nexus 9500/9300 Series Switches have tested support for 90,000 MAC addresses.
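
The flooding behavior can be made concrete with a small Python sketch; this is a deliberately simplified model (a capped dictionary, no aging timers), not how any real switch is implemented:

```python
class MacTable:
    """Simplified switch MAC table: learns source MACs until full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # MAC address -> egress port

    def learn(self, src_mac, port):
        # A full table cannot learn new addresses until entries age out.
        if src_mac in self.entries or len(self.entries) < self.capacity:
            self.entries[src_mac] = port

    def forward(self, dst_mac):
        # Unknown destination MAC => flood out of every port.
        return self.entries.get(dst_mac, "FLOOD")

table = MacTable(capacity=2)
for mac, port in [("aa:aa", 1), ("bb:bb", 2), ("cc:cc", 3)]:
    table.learn(mac, port)

print(table.forward("cc:cc"))  # FLOOD: the table was full, cc:cc never learned
```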
ARP table size: In our example network, the gateway function is on the distribution layer switches. Server virtualization also increases the number of IP-MAC entries stored in the ARP table: there can be more than 28,000 of them on our distribution switches.

Note! Cisco Nexus 9500 Series Switches have tested support for 60,000 IPv4 ARP entries and 30,000 IPv6 ND entries. The corresponding figures for Nexus 9300 Series switches are 45,000 (IPv4 ARP) and 20,000 (IPv6 ND).

Spanning Tree (STP): In traditional Layer 2 networks, the control plane protocol is STP, which provides a loop-free L2 topology for the hosts. The data plane works on the "flood and learn" principle: switches learn MAC addresses from the source addresses of received Ethernet frames and flood BUM traffic (Broadcast, Unknown unicast, and Multicast). Without an STP-like loop prevention mechanism, the network could choke on excessive broadcast traffic. Since STP does not support load balancing between links, some of the links may not be actively utilized for forwarding. Load balancing can, however, be achieved with a proper STP design, where the root switches of different VLANs or MST instances are distributed across devices, or where Multichassis EtherChannel is used. In principle, the network could also be built with a routed-access model (L3 links between switches), but this would prevent virtual machines from moving between physical hosts located behind different switches.

How VXLAN responds to the challenges

VXLAN is a MAC-over-IP/UDP tunneling mechanism that allows Layer 2 network segments to be "stretched" over a Layer 3 network. Each stretched Layer 2 network is represented as a VXLAN segment identified by a 24-bit segment ID, the VXLAN Network Identifier (VNI). With a 24-bit VNI, we can identify over 16 million VXLAN segments. Virtual machines belonging to different VXLAN segments may have overlapping MAC addresses or VLANs, since only hosts inside the same VXLAN segment have Layer 2 connectivity with each other.
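
The 24-bit VNI lives in the 8-byte VXLAN header defined in RFC 7348. A minimal Python sketch of that header layout (the VNI value 10010 is an arbitrary example):

```python
import struct

def vxlan_header(vni):
    """Pack the 8-byte VXLAN header from RFC 7348: an 8-bit flags field
    (0x08 = 'VNI present'), 24 reserved bits, the 24-bit VNI, and one
    final reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

print(vxlan_header(10010).hex())  # 0800000000271a00
print(2 ** 24)                    # 16777216 possible segments
```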

Because VXLAN segments are tunneled over the Layer 3 network, no Spanning Tree Protocol is required. In a VXLAN-based DC, VLANs no longer have global significance, since a VLAN is switch specific or even switch-port specific: host A, on subnet 192.168.10.0/24 behind leaf switch 101, may belong to VLAN 200, while host B in the same subnet behind leaf switch 102 may belong to VLAN 201.
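
This local VLAN significance can be pictured as a per-leaf mapping table. The sketch below reuses the hosts from the example; the VNI value 10010 is an arbitrary choice:

```python
# The VLAN ID only matters on the local leaf; both hosts land in the
# same VXLAN segment (VNI 10010).
vlan_to_vni = {
    "leaf-101": {200: 10010},  # host A: 192.168.10.0/24, VLAN 200
    "leaf-102": {201: 10010},  # host B: same subnet, VLAN 201
}
assert vlan_to_vni["leaf-101"][200] == vlan_to_vni["leaf-102"][201]
```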

In a VXLAN-based DC network, the links from the leaf (access) switches to the spine switches (distribution + core) are Layer 3 connections, so switches other than the leaves are not aware of the virtual machines' MAC addresses.

VXLAN enables the use of an anycast gateway, where the routing of client networks is distributed across the leaf switches. This means that the gateway address of the network 192.168.10.0/24 (192.168.10.1) is found on every leaf switch. When a virtual machine moves to a new host connected to a different switch, its gateway is still directly connected. The distributed anycast gateway greatly reduces the number of entries in an individual switch's ARP table.
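
The idea can be sketched as every leaf hosting an identical gateway object; the virtual MAC below is a made-up example value:

```python
# Every leaf answers for the same gateway in 192.168.10.0/24.
anycast_gw = {"ip": "192.168.10.1", "mac": "00:00:5e:00:01:01"}
leafs = {f"leaf-{n}": anycast_gw for n in (101, 102, 103)}

# A VM moving from leaf-101 to leaf-103 still sees the same gateway:
assert leafs["leaf-101"] == leafs["leaf-103"]
```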

Why VXLAN - Mindmap

Figure 1-2: The Mind Map.
