
Layer 2 Data Center Interconnect – Reference Designs

eos.arista.com/layer-2-data-center-interconnect-reference-designs

Naveen Kumar Devaraj December 27, 2018

Contents

Introduction
Design 1: Multi-domain Overlay
Design 2: Single-domain Overlay
2.1. End to End EVPN
2.2. CVX + EVPN
Comparison of the two designs
Resources

Introduction
VxLAN is a popular choice for extending Layer 2 both intra- and inter-DC using overlays.
Arista offers multiple control plane choices for VxLAN: Static HER, CVX and EVPN. This
article discusses two approaches to designing a L2 DCI over a L3 underlay. High-level
technical details of each design approach are described first, followed by a comparison of the
two options along with their typical use cases.

Design 1: Multi-domain Overlay


In this design, two overlay domains are identified:

DC Fabric domain: This is the VxLAN domain within the DC Layer 3 Leaf-Spine Fabric
with Leafs acting as VTEPs.
DCI domain: This is the VxLAN domain across the DCI spanning multiple data centers
with DCI Leafs acting as VTEPs.

The two overlay domains have independent VxLAN control planes, and dot1q trunking is
used to stitch the data planes together. The Edge Leaf VTEPs mark the boundary of the DC
Fabric VxLAN domain and connect to the DCI Leaf VTEP pair via 802.1Q trunks, so the
overlay traffic traversing the links between the Edge and DCI Leafs carries no VxLAN
encapsulation. The replication domain for BUM traffic is localized within the DC, i.e., each
VTEP within the DC Fabric VxLAN domain only sees the DC-local VTEPs plus the local Edge
VTEPs in its flood list. This reduces the overall volume of BUM traffic traversing the
DCI.
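
As an illustration of the dot1q handoff, below is a minimal Edge Leaf sketch, assuming Static HER as the DC Fabric control plane; the interface names, VLAN/VNI numbers and VTEP addresses are hypothetical placeholders.

! Edge Leaf (DC Fabric VxLAN domain) - all values hypothetical
interface Loopback1
   ip address 10.1.1.11/32
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan vlan 10 vni 10010
   ! Static HER flood list with only DC-local VTEPs, keeping BUM replication inside the DC
   vxlan flood vtep 10.1.1.12 10.1.1.13
!
! 802.1Q trunk toward the DCI Leaf pair: overlay traffic leaves here without VxLAN encapsulation
interface Ethernet48
   switchport mode trunk
   switchport trunk allowed vlan 10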

Below are the VxLAN control plane choices for the two domains:

DC Fabric domain: Static HER or CVX or EVPN
DCI domain: Static HER or EVPN

Within each DC Fabric domain, you can run any flavor of VxLAN overlay routing: direct,
indirect or centralized. The DCI Leafs are strictly L2 with no overlay routing enabled, i.e., no
SVIs corresponding to the inter-DC extended VLANs.
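
For reference, below is a minimal sketch of one DCI Leaf, assuming Static HER in the DCI domain and VxLAN as the DCI encapsulation; the addresses, VLAN/VNI numbers and interface names are hypothetical placeholders.

! DCI Leaf (DCI VxLAN domain) - strictly L2, no SVIs for the extended VLANs; all values hypothetical
interface Loopback1
   ip address 10.255.1.1/32
!
! 802.1Q trunk toward the local Edge Leafs: the stitching point between the two overlay domains
interface Ethernet1
   switchport mode trunk
   switchport trunk allowed vlan 10
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan vlan 10 vni 10010
   ! Static HER flood list with only the remote DC's DCI Leaf VTEPs
   vxlan flood vtep 10.255.2.1 10.255.2.2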

In addition to VxLAN, MPLS is an alternative option for the data plane in the DCI domain.

Design 2: Single-domain Overlay


In this design, there's a single overlay domain spanning multiple DCs with a transparent DCI
that offers only underlay IP routing capabilities. The Edge Leafs connect to the remote DC
Edge Leafs via the DCI transport and have no overlay functions enabled. End-to-end
VxLAN tunnels are created, i.e., a tunnel originating on a Leaf VTEP in one DC terminates
directly on a Leaf VTEP in the remote DC. From an overlay perspective, this is a single BUM
replication domain, i.e., for a VLAN stretched across DCs, each VTEP sees all the remote DC
VTEPs in its flood list in addition to the local DC VTEPs.

With a single overlay domain, the following design choices are available:

2.1. End to End EVPN

In this design, the DC Fabric is built with EVPN as the VxLAN control plane. The Spines offer
EVPN transit router functionality and reflect the EVPN routes. The Spine transit routers
across DCs are logically meshed using multi-hop eBGP/EVPN peerings to interconnect the
control planes of the two DC Fabric domains.
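
As a minimal sketch of this inter-DC stitching, the configuration below shows one Spine transit router peering with a Spine in the remote DC; the AS numbers and addresses are hypothetical, and the sketch assumes the multi-agent routing protocol model used for EVPN on EOS.

! Spine transit router in DC1 - hypothetical AS numbers and addresses
service routing protocols model multi-agent
!
router bgp 65001
   router-id 10.0.1.1
   ! Multi-hop eBGP/EVPN peering to a Spine transit router in the remote DC
   neighbor 10.0.2.1 remote-as 65002
   neighbor 10.0.2.1 update-source Loopback0
   neighbor 10.0.2.1 ebgp-multihop 5
   neighbor 10.0.2.1 send-community extended
   !
   address-family evpn
      neighbor 10.0.2.1 activate
      ! Preserve the originating Leaf VTEP as next hop so the VxLAN tunnels stay end to end
      neighbor 10.0.2.1 next-hop-unchanged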

2.2. CVX + EVPN

In this design, the DC Fabric is built with CVX as the VxLAN control plane. In addition to
offering VCS (VxLAN Control Service) to the local VTEPs, the CVX nodes are also BGP/EVPN
speakers, and the control planes across DCs are stitched together using multi-hop eBGP/EVPN
peerings between the CVX nodes. This design does not support VRFs in the overlay, i.e., direct
routing/asymmetric IRB is the only supported overlay routing option.
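
For context, below is a minimal sketch of how a Leaf VTEP registers with CVX for VCS; the CVX address and VLAN/VNI numbers are hypothetical, and the multi-hop eBGP/EVPN peerings between the CVX nodes follow the same pattern as the Spine peering sketched earlier.

! Leaf VTEP pointing at the CVX cluster for VxLAN Control Service - hypothetical values
management cvx
   server host 10.10.10.10
   no shutdown
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan vlan 10 vni 10010
   ! Learn remote VTEP and MAC reachability from CVX instead of Static HER flood lists
   vxlan controller-client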

Comparison of the two designs

Multi-domain Overlay: Segmented approach with dot1q handoff to provide a clear demarcation
between the DC Fabric and DCI domains. This design also offers (1) a choice of control planes
in each domain (Static HER / CVX / EVPN) and (2) a choice of data plane encapsulation on the
DCI (VxLAN / MPLS).
Single-domain Overlay: Single overlay domain with an extended VxLAN control plane across DCs.

Multi-domain Overlay: Appropriate design choice for multi-site DCI and large deployments, to
restrict VTEP/MAC/ARP scale within a DC. The design can potentially be perceived as complex,
with additional devices and config knobs such as ARP reply relay.
Single-domain Overlay: Simple design appropriate for small and medium scale deployments where
scale is not a constraint. Scale optimizations are possible with symmetric IRB and selective
ARP learning.

Multi-domain Overlay: The BUM replication domain is isolated between the DC Fabric and DCI.
This facilitates more efficient use of the DCI bandwidth by reducing the volume of BUM traffic
traversing the DCI links (DCI Leafs only see the remote DCI Leafs in their flood list).
Single-domain Overlay: One BUM replication domain with no control of the dynamic flood list.
Each VTEP sees all the VTEPs in the remote DC(s) in its flood list in addition to the local
VTEPs.

Multi-domain Overlay: Separate VNI administrative domains. From the DCI standpoint, only
devices that are part of the DCI domain (DCI Leafs) need consistent VLAN-to-VNI mappings. The
mappings within each DC Fabric are local to the DC and need not be consistent across the board.
Single-domain Overlay: Single VNI administrative domain across all DCs, which reduces the
flexibility to translate VLAN/VNI.

Multi-domain Overlay: dot1q separation offers more flexibility in integrating with existing
brownfield deployments. Existing DCs can run any flavor of VxLAN control plane in the DC
Fabric or can even be a legacy L2LS type deployment.
Single-domain Overlay: The End-to-End EVPN design works well for greenfield deployments where
all DCs are built from scratch. The CVX + EVPN design facilitates federation of CVX based DCs
using BGP/EVPN and also offers an easier migration of CVX based fabrics to EVPN.

Multi-domain Overlay: Requires an additional pair of DCI Leaf VTEPs per DC.
Single-domain Overlay: No additional gear is needed.

Resources
1. Summary of Arista VxLAN control plane options
2. Federating CVX across multiple Data Centers using BGP-EVPN
3. EVPN VxLAN design guide
4. CVX Deployment Recommendations for VxLAN Control Service
5. ARP Reply Relay For VxLAN L3 Data Center Interconnect
