Building HP FlexFabric Data Centers, Rev 14.41 Student Guide Part3
HP ExpertOne
Rev. 14.41
HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Copyright 2014 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP products and
services are set forth in the express warranty statements accompanying such products and services. Nothing
herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial
errors or omissions contained herein.
This is an HP copyrighted work that may not be reproduced without the written permission of HP. You may not
use these materials to deliver training to any person outside of your organization without the written permission
of HP.
This course is an HP ExpertOne authorized course designed to prepare you for
the associated certification exam. All material to be used and studied in
preparation to pass the certification exam is included in this training.
Contents
Appendix 1 - Enhanced IRF......................................................................................................A1-1
Objectives.....................................................................................................................A1-1
Enhanced IRF Overview...............................................................................................A1-2
Network Virtualization Types.........................................................................................A1-3
Evolution of N:1 Virtualization.......................................................................................A1-4
EIRF..............................................................................................................................A1-6
Benefits of EIRF............................................................................................................A1-7
EIRF in the Data Center................................................................................................A1-8
EIRF in the Campus......................................................................................................A1-9
EIRF Operation Overview...........................................................................................A1-10
Terminology and Components....................................................................................A1-11
EIRF Implementation: PEX Port..................................................................................A1-12
EIRF Implementation: CB-PE Registration Flow.........................................................A1-13
EIRF Implementation: CB-PE Connection..................................................................A1-14
EIRF Implementation: Physical Port States................................................................A1-15
EIRF Implementation: PE Identification – New PE.........................................A1-16
EIRF Implementation: PE Removal............................................................A1-17
EIRF Implementation: CB Chassis Option...................................................A1-18
EIRF Implementation: CB Fixed-Port Option..............................................................A1-19
EIRF Implementation: Forwarding Modes...................................................................A1-20
EIRF Implementation: Central Forwarding Mode........................................................A1-21
EIRF Implementation: Local Forwarding Mode...........................................................A1-22
EIRF Implementation: Unicast 1.................................................................................A1-23
EIRF Implementation: Unicast 2.................................................................................A1-24
EIRF Implementation: Multicast/Broadcast 1..............................................................A1-25
EIRF Implementation: Multicast/Broadcast 2..............................................................A1-26
IRF and EIRF: PE-CB Connections............................................................................A1-27
EIRF Port Numbering..................................................................................................A1-28
EIRF Port Numbering Fixed-Port CB..........................................................................A1-29
EIRF Port Numbering Chassis CB..............................................................................A1-30
IRF and EIRF: Network Access for Servers................................................................A1-31
IRF and EIRF: Network Access for Servers - Example...............................................A1-32
IRF and EIRF: Split Brain Situation 1..........................................................................A1-33
Learning Activity: EIRF Operation Review..................................................................A1-34
Learning Activity: Answers...........................................................................A1-35
Design Considerations................................................................................................A1-36
Design Considerations: Deployment Planning............................................................A1-37
Configuration Steps for EIRF......................................................................................A1-38
Step 1: Configure IRF for the CB Devices..................................................................A1-39
Step 2: Prepare the PEX Firmware Image..................................................................A1-40
Step 2: Prepare the PEX Firmware Image 2...............................................................A1-41
Step 3: Create a PEX Port with Virtual Slot-ID............................................................A1-42
Step 4: Associate PEX Port with a Physical Interface.................................................A1-43
Step 5: Configure PE in PEX Mode Using Boot Menu................................................A1-44
Step 6: Verify...............................................................................................................A1-45
Step 6: Verify 2............................................................................................................A1-46
Step 6: Verify 3............................................................................................................A1-47
Learning Check...........................................................................................................A1-49
Learning Check Answers............................................................................................A1-50
Appendix 2 - EVB-VEPA...........................................................................................................A2-1
Objectives.....................................................................................................................A2-1
EVB Overview...............................................................................................................A2-2
EVB and VEPA.............................................................................................................A2-3
Review of Hypervisor Networking.................................................................................A2-4
Traffic Flow with Classic Hypervisor Networking............................................A2-5
Traffic Flow with Classic Hypervisor Networking............................................A2-6
Traffic Visibility..............................................................................................................A2-7
VLAN Network Management.........................................................................................A2-8
EVB Model....................................................................................................................A2-9
EVB VLAN Network Management..............................................................................A2-10
Terminology and Concepts 1......................................................................................A2-11
Terminology and Concepts 2......................................................................................A2-12
S-Channel Identifier....................................................................................................A2-13
S-Channel Sub-interfaces...........................................................................................A2-14
S-Channel Setup Negotiation......................................................................................A2-15
S-Channel Configuration Negotiation..........................................................................A2-16
EVB VSI Manager.......................................................................................................A2-17
EVB VSI Manager: VSI Templates.............................................................................A2-19
HP 5900v: EVB Interaction with EVB Station 1...........................................................A2-20
HP 5900v: EVB Interaction with EVB Station 2...........................................................A2-21
HP 5900v Components...............................................................................................A2-22
HP 5900v Communication 1.......................................................................................A2-23
HP 5900v Communication 2.......................................................................................A2-24
Learning Activity: EVB Operation and Component Review........................................A2-25
Learning Activity: Answers..........................................................................................A2-27
HP 5900v Installation Prerequisites............................................................................A2-28
Installation Flow Overview 1.......................................................................................A2-29
Installation Flow Overview 2.......................................................................................A2-30
Installation Flow Overview 3.......................................................................................A2-31
Initial Configuration – Device Discovery and VDS .....................................................A2-32
New Network/VM Configuration Flow Overview 1......................................................A2-33
New Network/VM Configuration Flow Overview 2......................................................A2-35
New Network/VM Configuration Flow Overview 3......................................................A2-36
New Network/VM Configuration Flow Overview 4......................................................A2-37
Operational VM Boot Process Flow 1.........................................................................A2-38
Operational VM Boot Process Flow 2.........................................................................A2-39
Operational VM Boot Process Flow 3.........................................................................A2-40
Operational VM Boot Process Flow 4.........................................................................A2-41
Operational VM Boot Process Flow 5.........................................................................A2-42
Operational VM Boot Process Flow 6.........................................................................A2-43
EVB Bridge Configuration Steps.................................................................................A2-44
Step 1: Configure Interface with EVB support.............................................................A2-45
Step 2: Configure LLDP..............................................................................................A2-46
Step 3: S-Channel Reflective Relay............................................................................A2-47
Step 4: VSI Manager Configuration............................................................................A2-48
Step 5: Verify...............................................................................................................A2-49
Summary.....................................................................................................................A2-50
Learning Check...........................................................................................................A2-51
Learning Check Answers............................................................................................A2-53
Appendix 3 - VXLAN.................................................................................................................A3-1
Objectives.....................................................................................................................A3-1
VXLAN Overview..........................................................................................................A3-2
VXLAN Introduction.......................................................................................A3-3
Supported Products......................................................................................................A3-4
VXLAN Operation..........................................................................................................A3-5
VXLAN Concepts..........................................................................................................A3-6
VXLAN: VTEP...............................................................................................................A3-7
VXLAN Tunnel and Multicast........................................................................................A3-8
VXLAN Packet Structure.............................................................................................A3-10
VXLAN Traffic Flow Overview.....................................................................................A3-11
VXLAN: Multi-Destination Delivery..............................................................................A3-12
VXLAN: Multi-Destination Delivery..............................................................................A3-13
VXLAN Unicast Delivery.............................................................................................A3-14
VXLAN Unicast Delivery.............................................................................................A3-15
Learning Activity: VXLAN Review...............................................................................A3-16
Learning Activity: Answers..........................................................................................A3-18
VXLAN to Physical Network........................................................................................A3-19
VXLAN to Physical Network: VM-Based IP Routing....................................A3-20
VXLAN to Physical Network: VMware Edge Gateway................................A3-22
VXLAN to Physical Network: Hardware Gateway.......................................A3-23
Hardware Gateways and OVSDB...............................................................................A3-24
Design Considerations................................................................................................A3-25
Configuration Steps for VXLAN...................................................................................A3-27
Step 1: Configure Global L2VPN................................................................................A3-28
Step 2: Configure VXLAN Tunnel...............................................................................A3-29
Step 3: Create VSI VXLAN + Bind VXLAN Tunnel.....................................................A3-30
Step 4: Create Service Instance.................................................................................A3-31
Step 5: Bind Service Instance to VSI..........................................................................A3-32
Step 6: Configure Transport IP Interface IGMP..........................................................A3-33
Step 7: Configure VXLAN VSI Multicast address........................................................A3-34
Step 8: Verify...............................................................................................................A3-35
Summary.....................................................................................................................A3-36
Learning Check...........................................................................................................A3-37
Learning Check Answers............................................................................................A3-38
Enhanced IRF
Appendix 1
Objectives
This module explores Enhanced IRF (EIRF). You will learn about the advantages,
operational aspects, and connectivity requirements of EIRF. Unicast and multicast
traffic handling mechanisms are then explored, followed by a discussion of EIRF
configuration tasks.
After completing this module, you should be able to:
Describe the Enhanced IRF feature
Describe use cases for Enhanced IRF
Describe the Enhanced IRF operation and components
Configure Enhanced IRF
NOTES
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Traditional network design could involve the use of two core switches with separate
redundancy mechanisms for Layers 2 and 3. The Layer 2 redundancy was provided
through STP. While each access switch might be dual-homed to upstream cores, one
of those links would be placed in a blocking mode to prevent loops. The dual
connections provided active-standby redundancy – the redundant links did not help to
carry traffic load.
For Layer 3 redundancy, VRRP could be used. With VRRP one core switch would
assume the master role, actively routing traffic. Another device would assume a
standby role to provide redundancy. This standby device can detect an outage, and
assume the master role if needed. This is another active-standby redundancy
mechanism.
IRF provided a significant evolution from previous techniques by grouping multiple
switches into a single entity. With IRF, multiple core switches can be bundled
together, as can access switches. Separate core and access IRF groups can be
interconnected using LACP, which bundles multiple, physical links into a single,
logical link.
This greatly simplifies the network design, while providing active-active data and
control plane redundancy, as opposed to the active-standby mechanisms provided by
STP and VRRP.
IRF also improved the management plane. If four physical chassis are grouped into a
single IRF group, then those four devices are all managed as a single unit. If nine
members are bundled into an IRF group, there is then only one-ninth of the
administrative configuration and management overhead.
However, each IRF system must still be individually managed. For example, if 200
switches are configured into multiple four-member IRF groups, then 50 logical IRF
units must be individually managed. While IRF provides significant advantages over
traditional networking, room for improvement remains.
The next generation of virtualization is the Enhanced IRF (EIRF). This technology
provides similar active-active redundancy for data and control planes as IRF. The
evolution is that switches of multiple design layers can now be grouped into a single,
logical entity. IRF provides a horizontal grouping of identical devices. This means that
IRF can group access switches into a logical entity, and it can group core switches
into a separate logical entity. EIRF can provide vertical grouping of both access and
core switches into a single entity.
With EIRF a complete, virtualized network can now be managed from a single IP
address, with complete active-active device and path redundancy. The number of
nodes to be managed is reduced to 1/30th, or possibly even 1/100th, depending on
the number of devices converged.
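To put numbers on this reduction, consider the 200-switch example used in this module. The arithmetic sketch below assumes, purely for illustration, that those 200 switches could be collapsed into two large EIRF fabrics:

```python
# Managed nodes for a hypothetical network of 200 physical switches.
physical_switches = 200

standalone = physical_switches            # each switch managed on its own
irf_groups = physical_switches // 4       # four-member IRF groups
eirf_fabrics = 2                          # assumption: two large EIRF fabrics

print(standalone, irf_groups, eirf_fabrics)   # 200 50 2
print(physical_switches // eirf_fabrics)      # 100 -> each managed node
                                              # represents 1/100th of the switches
```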
VLAN configuration is greatly simplified. In the IRF scenario described above, 200
physical switches were combined into 50 IRF groups. This means that a VLAN
addition or modification requires 50 configuration sessions – one for each IRF group.
With EIRF, the same task can be accomplished with a single configuration, since the
entire network is perceived as a single device.
Advantages are also realized with physical inter-switch cabling. With IRF, cables are
required between the different members of the same IRF system. Access switches
require multiple uplink connections to the core, as well as 10Gbps IRF connections
between members. With EIRF, these horizontal, intra-member connections are not
required at the access layer.
Also, EIRF greatly reduces the need for LACP configurations. LACP is not required to
interconnect the members of an EIRF group. It is only used to connect LACP-capable
endpoint devices to access layer switches.
EIRF simplifies and improves the efficiency of MAC address learning operations, and
also eases the migration to SDN. These advantages are the result of EIRF’s ability to
collapse the entire network into a much smaller number of logical entities.
EIRF
EIRF is based on IRF, which is a mature, proven technology. Traditional IRF can only
group identical switch-series models and types. EIRF provides a complete N:1
virtualization technology to collapse different device types at different deployment
layers into a single logical entity. With EIRF, various models of core and access
switches can each be perceived as the individual line cards of a single, centrally
managed system.
In the example at the top of the figure, IRF was used to group two identical chassis-
based core switches into a single entity. Two identical fixed-port access switches
were also paired into an IRF group, and another pair was also configured to act as a
single IRF entity.
In the bottom example, the same devices are now all integrated into a single EIRF.
They are centrally managed by the chassis switches as if they were remote line
cards. The entire system is perceived as one massive chassis device with multiple
line cards.
Endpoint devices can be connected to two separate access switches to maximize
redundancy. This is the same as connecting endpoints to different line cards of a
single physical device. The endpoint’s dual interfaces can simply be grouped together
using traditional link aggregation. The connections between access switches are no
longer needed, and so are removed. All communication between access switches
now passes through the chassis switches.
Benefits of EIRF
The single logical system created by EIRF greatly simplifies the network topology.
There is now only one entity to manage and configure. This reduces repetitive
configuration tasks and reduces configuration complexity.
For example, VLANs need only be configured one time, as opposed to once for each
of 200 physical switches or 50 IRF systems. Redundancy protocol
configurations such as VRRP and STP are no longer necessary, further simplifying
the configuration.
Firmware management is also streamlined. All access switch members are
automatically provisioned with the correct firmware by the main core switches.
The figure shows a deployment of three logical EIRF fabrics. The main, chassis-
based core switches in each fabric act as a Controlling Bridge (CB) device. These
distribution-layer CBs terminate all the top-of-rack switch connections.
Each EIRF fabric operates as a single logical device that is dual-homed to the Layer
3 core network. This routing core could be individual routing devices, or an IRF
system, running the routing protocol of choice, such as OSPF, BGP, or
IS-IS.
The top-of-rack access switches provide the EIRF Port Extender (PE) function,
since they extend the number of available ports in the EIRF fabric. Typically, servers
will be dual-homed to two PEs to ensure high availability.
Although the focus of this course is on data centers, it should be noted that EIRF can
also be deployed in a campus setting. In this scenario the entire campus network
becomes a single logical device. The CBs consist of an IRF fabric at the core layer.
These core devices provide Layer 3 forwarding, and act as the default gateway for
endpoints.
The access switches act as PEs, providing High Availability groups for endpoint
connectivity. Although not shown in the figure, it is also possible to connect other
switches to the PEs using link aggregation. This would provide an active-active
redundancy mechanism for these switch connections.
The CB is the highest layer of an EIRF fabric, and must be a chassis-based or high-
performance fixed-port device. Chassis that support EIRF include the 11900 and
12900. Fixed-port devices include the 5900/5930 models. The EIRF fabric perceives
one logical Controlling Bridge. For redundancy, two physical chassis can be grouped
into an IRF system.
The PE is the lowest layer in the EIRF model, deployed as a fixed-port device.
Chassis-based switch models cannot be used as PE devices. Various models are
available to meet different performance and scalability requirements.
PEs are connected to CBs with a Port Extender (PEX) port. This is a logical
connection, similar to a traditional IRF system’s logical IRF port. This logical PEX port
can be formed with multiple physical interfaces. To maximize redundancy, sets of
physical interfaces can be connected to different CBs.
In the figure, the CBs are depicted as high-performance, chassis-based devices.
Each PE has multiple physical connections to the CBs. This set of connections is
configured as a single logical PEX port.
The figure shows the CB-PE registration flow used to establish the EIRF. In the first
step, the PE device is manually converted to operate in PEX mode via a BOOTROM
menu option, and then reboots. Meanwhile, the CB periodically sends EIRF hello
packets out all PEX ports. When the PE device comes online, it receives these
packets and requests a Slot ID. The CB receives the request and responds with a
Slot ID assignment.
In the next step, the PE determines if it is running the correct firmware. If not, then
boot and system images are downloaded from the CB, and saved locally on the PE
device. The next time the PE boots it will already have the correct images.
Once booted, with images validated, the PE can register itself, and the CB pushes
the configuration down to it, as if configuring a chassis-based line card.
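The registration flow above can be summarized as an ordered sequence of events. The Python sketch below is purely conceptual (the event names are illustrative, not actual EIRF protocol messages), but it captures the firmware-check branch:

```python
def register_pe(pe_image: str, cb_image: str) -> list[str]:
    """Return the conceptual event sequence for one PE joining the EIRF."""
    events = [
        "PEX mode enabled via BOOTROM menu",
        "PE reboots",
        "EIRF hello received from CB",
        "Slot ID requested",
        "Slot ID assigned by CB",
    ]
    if pe_image != cb_image:  # firmware mismatch: sync images before registering
        events += [
            "boot/system images downloaded from CB",
            "images saved locally on PE",
            "PE reboots with correct images",
        ]
    events += ["PE registered", "configuration pushed by CB"]
    return events

# A PE with outdated firmware takes the longer path (version strings are made up):
assert "boot/system images downloaded from CB" in register_pe("7.1.045", "7.1.070")
```

A PE whose stored images already match the CB skips straight from Slot ID assignment to registration, which is why the second boot of a synchronized PE is faster.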
A logical PEX port is manually created on the CB to manage each PE. One logical
PEX port is defined for each physical PE device. Each PEX port is assigned a virtual
Slot ID and physical interfaces are bound to it. Optionally, a description can be
configured for the PEX port. This would typically describe the connected PE device.
Multiple physical CB-to-PE connections can be assigned to one logical PEX port. At
least two connections should be provisioned for redundancy, with additional
connections added as needed to meet bandwidth requirements. A hash-based load
balancing algorithm is used to distribute traffic over the physical PEX port
connections.
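The hash-based distribution works like any link-aggregation hash: fields from each frame are hashed to select one physical member link, so a given flow always uses the same link. A minimal sketch follows; the actual hash inputs and algorithm are platform-specific, and the interface names are hypothetical:

```python
import zlib

def pick_member_link(flow_fields: tuple, members: list[str]) -> str:
    """Map a flow to one physical member of a logical PEX port."""
    key = "|".join(map(str, flow_fields)).encode()
    return members[zlib.crc32(key) % len(members)]

members = ["FGE1/0/1", "FGE1/0/2", "FGE2/0/1", "FGE2/0/2"]  # hypothetical names
flow = ("0011-2233-4455", "aabb-ccdd-eeff", 100)            # src MAC, dst MAC, VLAN

# Per-flow stability: the same flow always hashes to the same physical link.
assert pick_member_link(flow, members) == pick_member_link(flow, members)
```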
A given PE’s uplinks can only be connected to physical CB interfaces that have been
assigned to the same logical PEX port. If a CB has an interface assigned to PEX
port1 and another interface assigned to PEX port2, then a PE should not be
connected to both of these interfaces. Instead, assign two or more physical ports to
be in PEX port1, and connect a PE’s interfaces to that. This is very similar to how
traditional IRF physical ports should be mapped to the same IRF logical port.
If a cabling mistake breaks this rule, it will be automatically detected and the interface
will be placed in a blocked state by the EIRF protocol.
Enhanced IRF
The three physical port states for an EIRF implementation are forwarding, down, and
blocked.
Forwarding: The interface is up, properly connected, and data can be sent over
the link.
Down: The physical link is disconnected, and traffic cannot be forwarded.
Blocked: The physical link is in an error condition, caused by a
misconfiguration or cabling error. The most common cause is a PE interface
connected to a physical CB port that is a member of a different logical PEX port
than the PE’s other interfaces.
The CB can be deployed using two fixed-port devices, configured as an IRF fabric.
Each fixed-port device has a single MPU; one device acts as the master MPU and
the other as the standby MPU. The EIRF fabric will have the same single point of
management as a chassis-based solution.
Each PE device is added and becomes visible as an additional member device of the
IRF system. Unlike the CBs, PEs do not receive a full synchronization of the control
plane.
PEs can operate in one of two forwarding modes. One option is to configure a PE to
operate in central forwarding mode, in which the CB processes all traffic. The other
option is local forwarding mode, which allows each PE to process traffic locally.
Local forwarding mode is most appropriate when you have high-performance PE
devices with advanced ASICs that can handle the features you need.
In this scenario, the CB creates and maintains all Layer 2 and Layer 3 forwarding
entries. These table entries are replicated to the PEs, enabling them to perform local
forwarding functions.
Each PE can therefore perform local table lookup. If a received frame’s destination is
out a local PE port, it can forward traffic autonomously, without CB involvement. If the
destination’s outgoing port is not local to the PE, it forwards the frame upstream to the
CB, which acts as a fabric interconnect. Local forwarding mode requires an
extensive ASIC feature set that is only available on higher-end PE device models.
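The local-forwarding decision described above can be sketched as follows. This is an illustration of the logic only; the table structure and port names are hypothetical, not real switch internals:

```python
# Sketch of a PE's forwarding decision in local forwarding mode
# (hypothetical table and port names; illustrates the logic only).

def forward(frame_dst_mac, local_table, local_ports):
    """Return where the PE sends a frame in local forwarding mode."""
    out_port = local_table.get(frame_dst_mac)
    if out_port in local_ports:
        # Destination is out a local PE port: forward autonomously,
        # without CB involvement.
        return ("local", out_port)
    # Otherwise send upstream; the CB acts as the fabric interconnect.
    return ("upstream-to-CB", None)

# Entries replicated down from the CB; names are for illustration only.
table = {"aa:bb:cc:00:00:01": "Gi100/0/5"}
print(forward("aa:bb:cc:00:00:01", table, {"Gi100/0/5", "Gi100/0/6"}))
print(forward("aa:bb:cc:00:00:99", table, {"Gi100/0/5", "Gi100/0/6"}))
```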
The figure shows a unicast forwarding scenario in centralized forwarding mode. When
a PE device receives a traditional Ethernet frame, it does not perform table lookups.
The PE adds an EIRF header to the frame and sends it to the CB. The CB removes
the EIRF header, and adds the source MAC address to its table with its associated
inbound source port. This is very much like classic Ethernet MAC address learning.
The CB then performs a table lookup, comparing the Ethernet frame’s destination
MAC address to its learned MAC address table to determine the appropriate
outbound port.
The CB could process the traffic using a typical Layer 2 table lookup, or it might
process the frame using Layer 3 routing or MPLS, depending on its configuration.
Either way, it makes a forwarding decision, selecting the appropriate PEX port toward
the destination PE.
An EIRF header is added to the frame before transmission. This header indicates the
appropriate outgoing port on the PE device. The PE receives this frame, removes the
EIRF header, and forwards the traffic out the egress interface indicated in a field of
the EIRF header.
For multi-destination traffic, the inbound traffic from PE to CB is processed the same
as unicast traffic. The PE receives the frame, adds an EIRF header, and sends it
upstream to the CB. The CB removes the EIRF header and performs address
learning.
It is the downlink traffic, from CB to destination PE, where a multicast replication
operation occurs.
If the CB receives a broadcast frame, it sends one copy of the traffic to each PE.
Broadcast frames are received by each PE and forwarded out all ports in the
destination VLAN.
If the CB receives a multicast frame, it parses its multicast routing table, and sends
one frame copy to each PE that is a participant in the multicast group. This could
include the PE that originally received the frame inbound from an endpoint.
Each PE that receives a multicast frame uses traditional multicast table lookups to
determine which of its ports are connected to multicast members. The PE then
forwards the frame out all of those ports.
When fixed-port switches are used to form a traditional IRF group, they use a three-
level port numbering scheme. The levels are the slot (or IRF Member ID), the sub-
slot, and the port.
Note
In a fixed-port switch, the sub-slot number will usually be zero, unless the
switch supports additional modules. These modules are typically installed into
the back of the switch.
Note
Most chassis-based line cards do not support a sub-slot, and so the number
will usually be zero. Some router-based line cards support the installation of
modules; in this case, the sub-slot number indicates the appropriate module
slot in the line card.
The port numbering schema used by traditional IRF is also used by EIRF.
Currently, fixed-port, CB-capable switches include the 5900 and 5930-series. EIRF
port numbering for the CB switches is identical to that used for traditional IRF.
Interface TenGi1/0/1 is the first, or left-most port in a physical switch that has a
member-ID of 1. As previously noted, the sub-slot is typically 0.
As an administrator, you can easily determine if a port is physically installed in a CB
or PE device. IRF ID numbers assigned to CB devices will be in the range of 1
through 9, and PE devices start at 100.
For example, a 5900G model could be deployed as a PE device. Its interfaces could
be referred to as Gi100/0/1, Gi100/0/2, and so on. A 5900XG could be installed as
another PE in this fabric, with ports TenGi101/0/1, TenGi101/0/2, and so on.
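The convention above can be expressed as a small helper. This is an illustration; the CB range of 1 through 9 and the PE range starting at 100 follow the text's description of fixed-port CB fabrics:

```python
# Classify an EIRF interface on a fixed-port CB fabric by its member ID.
# CB member IDs run 1-9; PE slot IDs start at 100 (three-level scheme).

def classify(interface):
    """Return (role, member, subslot, port) for a name like 'Gi100/0/1'."""
    # Skip the interface-type prefix (e.g. 'Gi', 'TenGi') to the digits.
    i = 0
    while not interface[i].isdigit():
        i += 1
    member, subslot, port = (int(x) for x in interface[i:].split("/"))
    role = "PE" if member >= 100 else "CB"
    return role, member, subslot, port

print(classify("TenGi1/0/1"))   # a CB port, member 1
print(classify("Gi100/0/2"))    # a port on the first PE device
```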
Examples of chassis-based CBs include the 11900 and 12900 series. These CB
devices use the same four-level schema as previously described for IRF. A difference
between IRF and EIRF is in the number of devices supported. While up to four
chassis-based switches can be deployed in a single IRF group, EIRF supports a
maximum of two.
The first number in the four-level numbering scheme is always the virtual chassis ID,
and there can be two chassis-based CBs in an EIRF fabric. Therefore CB port
numbers will start with either 1 or 2. The figure shows two port number examples:
TenGi1/2/0/31 and TenGi2/2/0/31.
The EIRF fabric’s PE devices use this same four-level identification method. The first
number of every PE device is always 9; this is the virtual IRF node number assigned
to all PEs connected to chassis-based CBs. This virtual IRF node is always online,
independent of IRF chassis ID 1 or 2.
Each PE is uniquely identified by the slot ID, starting with 1. The first PE may have
interfaces Gi9/1/0/1 – Gi9/1/0/24. The third PE could have interfaces TenGi9/3/0/1 –
24.
There are several options for connecting server endpoints to the EIRF fabric. The
best practice is to maximize redundancy by dual-homing the server to two physical
PEs. The connection to each PE is accomplished with multiple physical links in a
port aggregation group.
Another option is to configure a port aggregation group to a single PE. This is
supported, but of course redundancy is compromised. There is link redundancy due
to port aggregation, but there is no device redundancy.
You could also decide to connect a server using a single physical link. Again, this is
supported, but there is neither link nor device redundancy in this scenario.
Figure A1-30: IRF and EIRF: Network Access for Servers - Example
The figure provides an example for configuring EIRF fabric ports to accommodate
server access. This example shows ports from multiple physical PE devices being
configured for server connectivity. This configuration is the same as that used to
configure Multi-Chassis Link Aggregation for a traditional IRF deployment.
In the example, LACP link aggregation is configured by first defining a logical Bridge-
Aggregation interface 201, and then configuring it for dynamic negotiation mode.
Next, interfaces ten101/0/1 and ten102/0/1 are configured as members of this logical
link aggregation group.
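In Comware syntax, the configuration described above looks roughly like the following sketch (exact syntax may vary by software release):

```
interface Bridge-Aggregation 201
 link-aggregation mode dynamic
quit
interface Ten-GigabitEthernet101/0/1
 port link-aggregation group 201
quit
interface Ten-GigabitEthernet102/0/1
 port link-aggregation group 201
```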
From this configuration, you should be able to surmise two key facts. One is that this
configuration is being performed on an EIRF fabric built using fixed-port CB devices;
recall that with fixed-port CBs, PE port numbers use a three-level scheme starting
with 100. The other is that the LACP member ports are on different physical switches,
since one port is identified by slot ID 101, and the other by a unique slot ID of 102.
A split brain condition occurs when an IRF fabric is broken due to link failures. Both
portions of the split fabric use the same IP address, causing routing issues and IP
address conflicts.
The way to mitigate this condition is to configure Multi-Active Detection (MAD). This
feature is configured on the CB devices, enabling them to notify their connected PEs
that a split has occurred.
Each PE receives these messages and determines the number of members in each
IRF fragment to which it is attached. It joins the partial fabric with the most members;
if fragment sizes are identical, it joins the fabric with the lowest master ID. The PE
blocks ports connected to the other IRF fragment and continues to function.
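The PE's fragment-selection rule can be sketched as follows. This is an illustration only; the fragment records are hypothetical tuples, not actual protocol data:

```python
# Sketch of a PE's choice between IRF fragments after a split, using
# the tie-break described: most members first, then lowest master ID.

def choose_fragment(fragments):
    """fragments: list of (member_count, master_id); return the winner."""
    # Prefer the largest fragment; on a size tie, the lowest master ID.
    return max(fragments, key=lambda f: (f[0], -f[1]))

print(choose_fragment([(2, 1), (1, 2)]))   # larger fragment wins
print(choose_fragment([(2, 2), (2, 1)]))   # size tie: lowest master ID wins
```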
There are three variations of MAD: LACP MAD, Bidirectional Forwarding Detection
(BFD) MAD, and ARP MAD. LACP MAD is the recommended method for EIRF
fabrics. A detailed discussion of these methods is beyond the scope of this course.
Note
For more detailed information on MAD, see the Virtual Connect and HP A-
Series switches IRF Integration Guide.
http://h30507.www3.hp.com/hpblogs/attachments/hpblogs/143/797/1/VC-IRF-
intergration-white-paper-v2.pdf
Design Considerations
The figure summarizes some EIRF design considerations. Regarding cabling, PEs
should only have connections to CBs, and never to each other; PE-to-PE links would
cause a loop in the logical EIRF device.
Only one layer of PE devices is supported in an EIRF fabric. All PE devices must be
directly connected to the CB. It is not possible to connect PE devices to an
intermediate layer of PE devices, which then connect to the CB.
This in turn brings up an important cabling consideration related to port availability.
For a deployment that includes eight access switches, each one should have at
least two CB connections for redundancy, for a total of sixteen. CB devices must
therefore have sixteen available ports to successfully deploy an optimal EIRF
fabric.
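The port-availability check above reduces to simple arithmetic, sketched here:

```python
# CB ports needed for a redundant EIRF deployment: each PE should have
# at least two CB uplinks, one per physical CB member.

def cb_ports_needed(pe_count, links_per_pe=2):
    return pe_count * links_per_pe

print(cb_ports_needed(8))  # eight access switches -> 16 CB ports
```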
The figure summarizes a high-level deployment plan. The first step is to plan the
network by identifying CB member devices, PEs, and PEX ports. Next, the CB
devices should be virtualized into a single, logical IRF device. The third step is to
configure the PEX links for each PE. Each PEX port is assigned a virtual slot ID, and
then physical interfaces are assigned to the logical PEX port. This completes the
basic CB configuration.
The next step is to integrate PEs with the CB. The operating mode of each PE
must be manually changed to PEX mode from the BootROM. Once PE ports are
properly cabled and booted, it is a “Plug and Play” scenario. The EIRF protocol will
automatically perform the software check, and ensure correct firmware is
downloaded to each PE device. After any required download, each PE reboots and
comes online as part of the EIRF fabric.
The figure reveals the high-level configuration steps for EIRF. This scenario uses
two 5900-series devices as the CB, and a 5900 as a PE. Interconnections are via
10Gbps interfaces, with a 1Gbps link for server access.
The first step is to configure the two CB devices for traditional IRF. LACP MAD
should also be configured.
It is assumed that traditional switch configurations such as IRF are already
understood.
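As a reminder, a minimal Comware-style sketch of this step follows. The member number, interface names, and MAD aggregation are illustrative, and exact syntax may vary by release:

```
# On CB member 1: bind a physical IRF port (member 2 is configured similarly)
irf-port 1/1
 port group interface Ten-GigabitEthernet1/0/48
quit
irf-port-configuration active

# LACP MAD on an aggregation toward an external LACP-capable switch
interface Bridge-Aggregation 100
 mad enable
```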
Step 2 involves preparation of the PEX firmware image. This image must be
installed to flash on the CB. The PE will be able to download this PEX firmware
image during the initial boot process. Once downloaded, this PEX firmware is
saved in local flash on the PE. From that point the PE will boot from this local
image as a member of the EIRF fabric.
The figure shows the normal firmware images required for the 5900 CB device.
You can also see that images for the PE device have been copied to flash as well.
The top two files listed are the boot and the system images for PE devices.
PEs download this image during initial startup, and then reboot using this new
image. When they come back online they receive a Hello message from the CB.
This Hello message helps the PE confirm that its PEX firmware version is correct,
and that additional firmware downloads are not required.
Simply saving these PEX images to flash on the CB is not sufficient. The CB must
be able to select which files to send to each PE switch model that comes online.
Both the boot and system images will be bound to the PE device model.
In this example, the images are bound to a PEX device model called PEX-5900. To
do this, the CB command “boot loader” is used, along with a “pex” keyword,
followed by the device model. This indicates that the boot loader syntax is not for
the local CB device, but for 5900 PE devices that may come online.
The “display boot-loader pex” command can then be used to validate which image
files will be used for which PE device models.
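Based on the description above, the commands look roughly like the following sketch. The image file names are hypothetical, and the exact keyword order may differ by software release:

```
# Bind hypothetical PE boot/system images to the PEX-5900 device model
boot-loader file boot flash:/5900-pex-boot.bin system flash:/5900-pex-system.bin pex PEX-5900

# Validate which images each PE device model will receive
display boot-loader pex
```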
The third step on the CB involves defining the PEX port. The maximum number of
PEX ports that can be defined depends on the platform being deployed. If a PEX
port is removed and its slot ID is changed, the PE will reboot.
In the example, PEX Port1 is defined and a description is provided. The
description could include the rack or device number of the PE device. The
“associate 101” command assigns a virtual Slot ID to the PEX port. This
assignment is locally significant, and will be reflected in the interface port
numbering schema for the attached PE device.
Now that the logical PEX Port has been defined, physical interfaces can be bound
to it. Multiple interfaces are required for redundancy. It is recommended that at
least one interface from the PE be attached to each physical CB device member.
The maximum number of interfaces that can be connected is device dependent.
When interfaces are assigned to the PEX port, they revert to their default
configuration.
In this example PEX port 1 has been defined, and an interface from each physical
CB IRF member is assigned to this logical port. This includes interfaces Ten1/0/1
and Ten2/0/1.
The port group command shown here is identical to the command used for
traditional IRF configuration. All the commands shown thus far have been issued
on the CB.
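Pulled together, the CB-side PEX port configuration described in these steps looks roughly like the following sketch. Command names follow the text's description; the description string is illustrative, and exact syntax may vary by release:

```
pex-port 1
 description Rack12-PE1
 associate 101
 port group interface Ten-GigabitEthernet1/0/1
 port group interface Ten-GigabitEthernet2/0/1
```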
Step 6: Verify
The final step is to verify your configuration efforts. You can use the “display pex-
port” command to display logical EIRF ports, status, and associated slot ID, along
with the status of the EIRF fabric.
The “display pex-port verbose” command reveals which physical ports are
participating in each PEX port.
Step 6: Verify 2
Step 6: Verify 3
You can also use the “display log” option to see any messages related to EIRF.
Summary
EIRF is based on IRF, a mature, proven technology. While traditional IRF can
only group like devices, EIRF can group various models at the access and core
layers into a single, centrally managed system.
EIRF simplifies the network topology, reduces configuration tasks, decreases
configuration complexity, and eases initial deployments.
An EIRF fabric is composed of Controlling Bridges (CB) at the core layer, and Port
Extender (PE) devices at the access layer.
PEX ports are used as the PE-to-CB connections. These logical ports can have
multiple, link-aggregated ports connected to multiple physical CB member
switches.
Both unicast and multicast traffic are handled efficiently in the loop-free EIRF
fabric.
EIRF configuration is very straightforward, and is similar in many ways to a
traditional IRF configuration.
Learning Check
After each module, your facilitator will lead a class discussion to capture key
insights and challenges from the module and any accompanying lab activity. To
prepare for the discussion, answer each of the questions below.
1. EIRF is based on IRF, and so has which of the following features? (Choose three)
a. EIRF can only group identical switch-series models and types
b. EIRF can group different device types into a single logical entity
c. EIRF can group access and core switches into a single logical entity.
d. EIRF is appropriate for both Data Center and campus deployments.
e. EIRF requires that all access layer switches be directly connected to each
other.
2. Which three statements are true about EIRF components (Choose three)?
a. The CB function is only supported on chassis-based switches
b. Two physical switches can be grouped into an IRF system to serve the
CB function.
c. The PE function is only supported on fixed-port devices
d. Various models of chassis-based and fixed-port devices can serve as PE
devices.
e. A PEX port is a logical port that can contain multiple physical ports.
f. Each PE must only have a single connection to a CB to avoid STP loops.
3. How are new PEs identified in an existing EIRF deployment?
a. EIRF uses a virtual slot ID to identify PEs, and the PE must reboot for this
change to take effect.
b. When a PE is added, traffic flow for existing PEs is disrupted for about 3
seconds.
c. When a new PE is added, the CB automatically computes the new
topology to prevent loops
d. PE identification operates in an identical way to how the IRF member ID
is used to identify member devices.
4. What two statements are true about how EIRF forwards frames (choose two)?
a. Central forwarding mode is appropriate when you have PEs that either
lack the forwarding performance or capabilities that you require.
b. EIRF uses central, local, and broadcast forwarding modes.
c. With local forwarding mode, the CB creates and maintains forwarding
tables, and shares appropriate entries with each PE device.
d. With central forwarding mode, the CB is responsible for forwarding all
frames, based on table lookup information provided by the PE.
EVB - VEPA
Appendix 2
Objectives
This module introduces EVB and VEPA technologies. This suite of services
coordinates hypervisor systems, physical switches, and management
platforms to provide a more scalable, more easily managed data center
environment.
After completing this module, you should be able to:
Understand the EVB/VEPA protocol
Describe the advantages of the EVB model
Understand all components involved in a complete EVB solution
Understand the integration with the Hypervisor
Understand the role of the HP 5900v Distributed vSwitch
Describe the configuration process
Describe the operational process of EVB
EVB Overview
This module will introduce EVB, describe EVB operation, and review EVB
configuration.
Edge Virtual Bridging (EVB) is defined in the IEEE 802.1Qbg standard. The
purpose of this standard is to enable Virtual Machines (VMs) to share a common
bridge port for forwarding services.
This standard includes Virtual Ethernet Port Aggregator (VEPA) technologies, and
protocols to help automate the coordination and configuration of network
resources. This results in enhanced network visibility for VM-to-VM configuration
and communication.
The figure shows how VM-to-VM traffic flows within the same hypervisor host.
When VM-11 and VM-12 need to communicate, traffic never leaves the ESX1 host.
The internal vSwitch can handle this intra-host traffic.
This vSwitch has vPorts that connect to each VM’s vNIC. Each VM’s connected
vPort is assigned to a port group, which in turn can be assigned to a VLAN ID. In
this example, the vPorts for both VMs have been assigned to port group PG2,
which has been configured with VLAN 2.
Therefore, when VM-11 sends an untagged broadcast frame, all other VMs in the
same port group on ESX1 will receive this frame, since they are all in the same
broadcast domain. Unicast frames between hosts on the same VLAN are also
handled internally by the vSwitch, just as they would be with an external, physical
switch.
This also means that traffic between VM-11 and VM-12 never leaves the ESX1
hypervisor host. As a result, this communication is not visible to any physical
network devices, such as the HP 5900 switch in this scenario.
When VMs on different hypervisors must communicate, the scenario remains fairly
similar to the previous example. The vSwitches inside ESX1 and ESX2 have both
assigned the vPorts for their respective VMs to VLAN 2. VM-11 sends an untagged
frame to the vSwitch, which has not learned the destination MAC address for VM-13.
It adds a standard 802.1Q tag for VLAN 2 to the frame and sends it to the physical
HP 5900 switch. The physical switch has been performing normal network
forwarding, and so has likely already learned the destination MAC address. The
switch simply forwards the frame to the ESX2 hypervisor.
The target vSwitch removes the VLAN 2 tag and delivers the frame to VM-13. In
this case, the traffic is of course visible to the physical network.
Traffic Visibility
When the physical network has visibility to VM traffic flow, additional services can
be provided. Physical switches like the HP 5900 can offer many advanced
services, in a way that is familiar to network professionals. This includes services
like the following:
QoS provides preferred and/or low-latency delivery for certain data types.
sFlow offers network analysis reporting, allowing visibility into application
usage patterns and statistics.
ACLs can be applied to improve security.
Port mirroring enables advanced troubleshooting methods by allowing you to
see all packets between certain devices, or on certain VLANs.
Few, if any, of these features are available for traffic that remains inside the
hypervisor environment, which means the physical network no longer has control of a
large portion of network traffic.
The goal of the EVB solution is to ensure that the physical network handles all
traffic flows. This ensures that all traffic is handled in a consistent manner, and that
all traffic can be processed by the rich feature set available to physical switches.
EVB Model
The EVB model provides consistent traffic-flow handling because all traffic is handled by the physical switch. No VM-to-VM traffic is handled by the internal vSwitch. This includes traffic between VM-11 and VM-12, as well as traffic between VM-11 and VM-13.
Therefore, all traffic can be treated by the same network policies, and leverage the
same services. This includes QoS, sFlow, ACLs, traffic mirroring, and more.
With EVB, the physical switch uses a feature called reflective relay. A traditional
Ethernet switch will never forward traffic back out the interface on which it was
received. In the figure, the reflective relay feature is what allows the switch to
receive traffic inbound from VM-11 and forward it back out the same interface to
VM-12. Special software must be installed on the hypervisor to support this feature.
VLAN management is greatly improved with the EVB model, because VLANs need
only be deployed where they are actually needed. With EVB, port VLAN
membership is dynamically adjusted to accommodate the dynamic nature of VMs.
Products like VMware's vMotion make it easy for server administrators to move VMs to a different physical server. EVB ensures that the required network configuration for these moves happens automatically.
In the figure, the physical switch connection to ESX1 only supports VLAN 2, and
the connection to ESX2 only supports VLAN 3. If something like vMotion is used to
move VM-11 from ESX1 to ESX2, the EVB solution automatically adjusts the
VLAN support. The link to ESX2 would be automatically configured to support
VLAN 2. If VM-12 was also moved, and no other VMs on ESX1 required VLAN 2,
then VLAN 2 would automatically be removed from the trunk port on the physical
switch.
The Service Channel or S-Channel is a virtual link between the ER and EVB
Bridge. The link uses the VEPA protocol, which is very similar to the QinQ
mechanism of adding an outer VLAN tag to the original frame. The S-Channel is
negotiated between the ER and EVB Bridge using an LLDP extension called the
Channel Discovery and Configuration Protocol (CDCP).
The reflective relay feature must be enabled on the physical switch’s S-Channel
interface. This feature allows inbound frames to be sent back out their ingress
interface. A traditional Ethernet switch will never allow this, making this feature a
key enabler of EVB bridge functionality.
S-Channel Identifier
S-Channel Sub-interfaces
Note
A TPMR (Two-Port MAC Relay) is simply a bridge device that supports a subset of typical MAC bridge functions. It is transparent to all traffic flowing through it, except traffic addressed specifically to it or to neighbor agents, such as LLDP. The special MAC address used by CDCP ensures that its frames will traverse any such device without issue.
The figure summarizes the key factors related to CDCP's S-Channel negotiation, including the following:
- Reflective Relay: This setting enables the egress of frames back out their ingress port.
- Auto-configuration: CDCP will typically request the reflective relay feature automatically, eliminating the need to manually configure it.
- Manual configuration: If the hypervisor's ER is incapable of negotiating the reflective relay feature, the network administrator can manually configure it on the physical switch - the EVB Bridge device.
EVB Bridges service each VM’s vNIC, which is referred to as a Virtual Station
Interface (VSI). Sub-interfaces must be configured on the EVB Bridge to support
these connections. This configuration can be done manually or automatically.
Manual configuration of sub-interfaces is not recommended because it is difficult to
maintain configuration consistency. This is especially true in a typical data center
where VMs are moved between physical hosts. These moves would necessitate
manual changes to support VLANs and other configurations relevant to each VM.
A better solution is to allow sub-interfaces to be automatically configured by the
VSI manager. This tool integrates with the hypervisor management tool in order to
learn about VM starts, stops, and migrations. It then communicates with the EVB
Bridge, automatically modifying sub-interface configurations as appropriate.
HP’s Virtual Application Networks (VAN) Connection Manager is a software module for HP’s IMC management platform. It functions as a VSI manager by integrating with market-leading hypervisors, such as vSphere, Hyper-V, Xen, and KVM. A template-based approach is used to ensure consistent configuration across the network infrastructure.
You use IMC VAN to define VSI templates with specific VLANs, QoS rules, ACL
filters, and more. These templates are made available to the hypervisor, where you
define traditional network profiles, called port groups. You bind an appropriate VSI
template to this port group.
As a result, the ER knows which VSI template to use for each VM, and announces
this to the EVB Bridge. The EVB Bridge queries the VSI Manager, which delivers
the appropriate configuration. In this way physical switches are automatically
configured to accommodate a dynamic data center environment.
The templates that you define in the VSI Manager consist of two objects. One is a network object, which contains the VLAN assignment.
The other object is the VSI Type, which contains the actual configuration settings for ACLs, QoS, and more. Most of these configuration settings are optional; the only requirement is that the VSI Type be bound to a network object.
A properly completed template configuration is released as a VSI Type version.
For example, you might define a sales server template, to be applied to certain
VMs. When these VMs come online, sales template version 1 will be applied.
When you modify this template, you release it as a new VSI Type version, which
can then be applied to the VMs. Version control is automatically enforced.
HP’s IMC VAN system allows various roles to be defined for administrative staff
members. Role security settings can be used to define which network objects each
IMC administrator can access and control. This role security can also be used to
determine which VSI types a particular administrator can manage.
The hypervisor management platform is used to define port groups, and associate
them to a VSI type. The VSI Type version is bound to a network on the VAN
server. This ensures that the port group will be configured with the correct ACL,
QoS, and VLAN service.
Hypervisor administrators only need to configure vNICs and associate each one with a traditional port group, just as they did before the implementation of EVB. The actual control over which VSI type is assigned to which port group lies with the IMC VAN templates.
Standard hypervisor vSwitches do not support EVB or VEPA. The hypervisor can
only support this functionality by installing an EVB-capable product like the HP
5900v.
The HP 5900v is EVB and VEPA-capable, and so provides CDCP communication
with the EVB HP 5900 physical switch. 5900v configuration can be orchestrated
through the IMC VAN Connection Manager.
The HP 5900v can coexist with a traditional vSwitch in the hypervisor. At least one
physical NIC on the host server must be assigned to the HP 5900v for EVB and
VEPA functionality. A traditional vSwitch can have other VMs associated with it,
using a separate physical NIC on the host. The limitation is that the traditional
vSwitch and the HP 5900v cannot share a physical interface. Each must have
exclusive access to its own physical NIC on the hypervisor host.
The HP 5900v is responsible for negotiating the S-Channel with the EVB Bridge,
and also for announcing the addition or removal of VMs. A separate protocol is
used for each of these responsibilities.
CDCP is used for S-Channel set up between the ER and the EVB Bridge, as
previously discussed. The Edge Control Protocol (ECP) is used to announce VM
additions and removals, thus automating vNIC session setup and teardown. ECP’s
transition state machine includes the pre-associate, associate, and de-associate
states.
The ER sends a pre-associate message to the EVB Bridge for VMs that are in the process of coming online. This gives the EVB Bridge time to prepare and configure a new sub-interface for the VM.
The associate state is applicable when the VM is operational and can be used in
production. This state is also used when a VM is moved to another hypervisor.
During the transition, the VSI remains in the associate state with the original EVB
Bridge interface, while the target host sends a pre-associate announcement to the
new EVB Bridge interface.
When the transfer is complete, the original host sends a de-associate message to the original interface, thus removing the sub-interface from the EVB Bridge. The target host moves to an associate state with the new EVB Bridge interface. The transition is fairly seamless, since the target interface was already prepared and configured before the VM came online.
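The pre-associate, associate, and de-associate flow described above can be modeled as a small state machine. This is an illustration only, not the actual ECP implementation; the ABSENT state and the transition rules are assumptions added to make the sketch self-contained:

```python
from enum import Enum, auto

class VsiState(Enum):
    # Simplified model of the states described above (illustration only)
    ABSENT = auto()          # no sub-interface exists on this EVB Bridge port
    PRE_ASSOCIATE = auto()   # bridge is pre-provisioning a sub-interface
    ASSOCIATE = auto()       # VM is operational through this bridge port

# Legal transitions for one EVB Bridge port, per the flow in the text.
ALLOWED = {
    (VsiState.ABSENT, VsiState.PRE_ASSOCIATE),     # VM starting or arriving
    (VsiState.PRE_ASSOCIATE, VsiState.ASSOCIATE),  # VM comes online / move completes
    (VsiState.ASSOCIATE, VsiState.ABSENT),         # de-associate: VM moved away
}

def transition(state: VsiState, new_state: VsiState) -> VsiState:
    if (state, new_state) not in ALLOWED:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A vMotion: the source port stays ASSOCIATE while the target pre-associates,
# then the source de-associates and the target becomes ASSOCIATE.
src, dst = VsiState.ASSOCIATE, VsiState.ABSENT
dst = transition(dst, VsiState.PRE_ASSOCIATE)   # target prepared in advance
src = transition(src, VsiState.ABSENT)          # de-associate on the source
dst = transition(dst, VsiState.ASSOCIATE)       # target now carries the VM
```

Note that the target port is prepared before the VM arrives, which is why the move appears seamless to the VM.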
HP 5900v Components
The HP 5900v consists of a Virtual Forwarding Engine (VFE) and a Virtual Control Engine (VCE).
The VFE is the HP 5900v data plane component, and is installed on each
hypervisor, replacing the standard vSwitch for the EVB deployment. This
component has no user interface or any type of local management capabilities.
The VFE must be configured and managed by the VCE.
The VCE is the control plane of the HP 5900v, and runs as a VM on a hypervisor
host. This is shown as a separate logical server in the figure, which is running as a
VM on the physical host. Arrows pointing from the VCE to the VFEs indicate that
configuration information is being transferred to modify VFE operation.
HP 5900v Communication 1
HP 5900v Communication 2
The management plugin also defines the VMware port group to VSI Type version
mapping. The VCE component communicates this information to the vSphere
host, which sends port group configurations to ESX hosts. This configuration does
not include VLAN or QoS information, only the VSI type and version data. It is
simply an internal VSI type index number.
This information is distributed to the VFE instances as VMs are brought online.
This means that the VCE is not aware of the actual ACL and QoS rules to be
applied. It simply knows the internal identifier of the VSI type profile that should be
applied.
EVB Bridge:
The physical switch (for example, an HP 5900) that supports EVB. It terminates the S-Channels and applies network services to all VM traffic.
EVB Station:
The physical server that hosts the hypervisor and the Edge Relay, connected to the EVB Bridge.
Edge Relay (ER):
The VEPA component in the station that relays all VM traffic to the EVB Bridge instead of switching it locally.
Uplink Relay Port (URP):
The ER port that faces the EVB Bridge.
Downlink Relay Port (DRP):
An ER port that faces a VM's Virtual Station Interface.
Virtual Station Interface (VSI):
The vNIC of a VM, serviced by the EVB Bridge through a dedicated sub-interface.
S-Channel:
A virtual link between the ER and the EVB Bridge, negotiated with CDCP, that uses a QinQ-style outer tag to separate channels.
VSI Manager:
The system that delivers per-VSI configuration (VLAN, QoS, ACLs) to the EVB Bridge; for example, HP's IMC VAN Connection Manager.
The HP 5900v is deployed as a VM, using the vSphere “Deploy OVF Template”
option. The HP 5900v software package is available for download from the HP
web site in an OVF format. The OVF template deployment wizard streamlines the
installation.
In the screenshot shown, the configuration wizard requests IMC login credentials
and vCenter IP address and credentials. These settings will be pushed to the VM
as it comes online. In addition to installing the VM, this OVF Template also
includes the required vSphere plugin installation. This happens transparently. No
special deployment process is required for the vSphere plugin.
Once deployed, the HP 5900v VCE can be configured with new IMC or vCenter
information by directly accessing the IP address of the VCE.
As mentioned, the VCE deployment automatically installs the vSphere plugin. This
can be verified from the vSphere management interface. As shown in the figure,
an additional tab appears for the HP Virtual Distributed Switch (VDS) solution. This
tab includes an option for VFE installation.
The example shows that two ESXi hypervisor hosts are available. The administrator can select the host on which to install the VFE component. After selecting the check box next to the host, click the install button; the VFE components are then deployed as a background task.
After the VFEs are deployed, they must be configured. The initial configuration includes adding physical switches, such as Top-of-Rack HP 5900 EVB Bridges, to the deployment. ESXi and vSphere hosts are added using a SOAP/HTTP template, which allows IMC to use SOAP to query VMware for vMotion events and other VM status information.
A Virtual Distributed Switch (VDS) must be created on the vSphere host. One and
only one VDS is supported on each vSphere host. Then port groups can be
defined on this VDS for the uplink ports. In the bottom-right screenshot, two
hypervisor hosts are listed by their IP address, along with available uplink ports.
The network administrator can select which interfaces can be used as uplinks for
each server. The link aggregation type can also be specified.
Once the uplink has been configured, a default port group can be defined, as shown in the bottom-left screenshot.
The figure shows a new VM configuration process from the IMC VAN Connection
Manager. The first step is to define the network object. In this example, a new
network is defined for Customer C and is assigned to VLAN 31. A maximum
connection limit for this VLAN is set at 200. The 201st VM that attempts to connect
to this VLAN will be refused, and the VM will not come online. This is a convenient
way to prevent the overpopulation of an IP subnet.
Next, the VSI Type is created. In this example, a VSI Type for Customer C's front-end server is created and assigned to the Cust-C-31 network that was just defined, using VLAN 31. Specific service units can also be enabled, such as bandwidth control or VM access control, as in the example.
As the administrator enables these services, new configuration parameters
become available. For example, IP address/mask pairs can be configured for
filtering, as can the amount of bandwidth to be allowed.
These Service Unit features are not required. If the network administrator simply
wants to assign VLANs, all the service units can be unchecked. Some client VMs
may not need bandwidth or QoS control, so a simple VLAN assignment would
suffice.
The VAN Connection Manager automatically translates whatever options were selected into Comware CLI commands. This could include creating ACLs, traffic classifiers for QoS, and QoS traffic behaviors to rate-limit the matched traffic. The traffic classifiers and behaviors could be combined into a policy and applied to the S-Channel's sub-interface. This is all done dynamically and automatically by IMC during the pre-associate phase of the VM connection.
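As a hedged illustration, the generated configuration might resemble the following Comware-style syntax. All names, numbers, and addresses here are hypothetical, the sub-interface name is assumed, and the exact commands depend on the selected service units and the Comware release:

```
acl number 3001
 rule 0 permit ip source 10.1.31.0 0.0.0.255
traffic classifier Cust-C-Front operator and
 if-match acl 3001
traffic behavior Cust-C-Front
 car cir 10240
qos policy Cust-C-Front
 classifier Cust-C-Front behavior Cust-C-Front
interface Ten-GigabitEthernet1/0/1:1.0
 qos apply policy Cust-C-Front inbound
```

Here the classifier matches Customer C's subnet via the ACL, the behavior rate-limits it to roughly 10 Mbps (CIR in kbps), and the policy is applied inbound on the S-Channel sub-interface.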
Once a VSI Type has been released, it appears in the VAN Connection Manager. In this example, a VSI Type named Customer-C-Server-Front has been released as version 1. Only released versions can be deployed to VM port groups.
The next step happens from within the vSphere Network Configuration
environment. Using the HP plugin, you can create new Port Groups. This vSphere
plugin will query the IMC VAN Module for the list of VSI types, such as the ones
you created in the previous steps. These VSI types are made available as
selections in vSphere. Now, when a new port group is defined, an appropriate VSI
Type version can be bound to it.
In this example a new port group named PortGroup-VLAN600 has been defined.
The administrator has bound the VSI Type version named VLAN600-VSI(V1) to
this port group. This VSI Type version template contains all of the VLAN, QoS, and
security settings that you configured previously from IMC VAN.
As you recall, the port group was bound to a VSI Type version in the previous step. The final step is to bind the VM's vNIC to a port group. This operation is exactly the same as what VMware administrators have been doing for years, because the EVB port groups are listed alongside any traditional port groups that may have been locally defined.
The VCE now knows that the VM is booting and that the network must be
provisioned. The VCE informs the VFE of this fact. The VFE announces to the
local EVB Bridge that a VM is coming online. In essence, the VCE tells the VFE to
initiate an ECP session with the EVB Bridge.
The VFE begins an ECP exchange with the EVB Bridge. The figure shows a
packet trace of this communication. ECP includes information about the VSI type
ID and the virtual ID. This is simply an internal identifier for the VSI type. It also
includes information about the MAC address and VLAN ID of that VM.
The EVB Bridge now knows that a new sub-interface should be created. This sub-
interface will be used to process traffic for the VM’s MAC address
(00:10:95:00:00:02, in this example).
The actual sub-interface configuration is unknown at this point. The EVB Bridge
only knows that the sub-interface should be configured with VSI type ID 1, Version
1.
The EVB Bridge creates a new sub-interface for every new vNIC connection. If the switch has an S-Channel interface 1/0/1:1, then the first sub-interface created will be S-Channel 1/0/1:1.0. This sub-interface will be bound to the VLAN and MAC address of the VM.
This can be seen in the switch configuration as a VSI filter. This enables you to
use the switch configuration to determine which sub-interface is used by a
particular VM’s MAC address. This is created dynamically based on the ECP
exchange, since the HP 5900v provides the MAC address and VLAN used by the
VM.
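An illustrative sketch of such a dynamically created entry in the running configuration follows. The interface naming and filter syntax are assumptions based on the description above; check the EVB configuration guide for your Comware release:

```
interface Ten-GigabitEthernet1/0/1:1.0
 vsi filter mac 0010-9500-0002 vlan 2
```

The filter ties the sub-interface to the VM's MAC address and VLAN learned from the ECP exchange.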
The EVB Bridge now has a sub-interface for VM-11, but no configuration
parameters have been applied. For this to occur, the EVB Bridge must contact the
VSI manager.
Two options are available. The interface could be manually configured by the
network administrator. As previously stated, this is not a best practice. Instead, it is
best to leverage the centralized control provided by the IMC VAN Connection
Manager.
The EVB Bridge will send detailed configuration information to the VSI Manager. This includes the filter information (the VM's MAC address and VLAN), as well as the VSI-ID, the VSI type and version, and sub-interface details. Essentially the EVB Bridge tells the VSI manager, "I have a new sub-interface that should be configured with VSI 51, version 1."
The IMC VAN Connection Manager creates an entry for this VM connection in its
VAN database. When it receives the EVB Bridge configuration request, it performs
a lookup to find the appropriate VSI Type version configuration.
Next, the IMC VAN Connection Manager opens a Telnet session to the EVB Bridge and delivers the CLI configuration syntax for the sub-interface. This can include ACLs, traffic classifiers, behaviors, and QoS policies, which are then applied to the sub-interface.
Once this has been completed, the VSI Associate state is achieved. VM-11 is
online with a unique sub-interface configuration on the EVB Bridge. The running
configuration contains the operational VSI Type version configuration settings.
In this example, VM-11 is now logically connected to S-Channel 1/0/1:1.0. Note
that VSI Type version changes cannot be made on the fly. If changes were made
in the IMC VAN module, they would not be reflected in the configuration on the
EVB Bridge, and the two would be out of sync.
Whenever a VSI configuration is modified in the IMC VAN module, it must be
released as a new version and then assigned to the port group. At that point the
configuration change can become active.
The basic steps to configure the EVB Bridge include configuring the interface for
EVB support, configuring LLDP, and configuring the VSI Manager.
The interface must be configured as a trunk link, and EVB must be enabled on the
interface. The “evb enable” command also enables CDCP. VLANs are permitted
dynamically by the VSI manager. VLAN 1 is enabled by default and is used as a
service VLAN.
In the example, EVB is enabled on a physical interface. It can also be enabled on
Bridge Aggregation interfaces.
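Based on the steps described above, the interface configuration might look like the following Comware-style sketch; the interface number is illustrative, so verify the exact syntax against your software release:

```
system-view
interface Ten-GigabitEthernet1/0/1
 port link-type trunk
 evb enable
```

Note that no static VLAN permit entries are needed for the VM VLANs, since those are negotiated dynamically with the VSI manager.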
LLDP has to be enabled at the global level, and the interface must be configured
to use the non-TPMR destination MAC address. This is accomplished with the
“lldp agent” command, as shown in the figure.
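A minimal sketch of the LLDP configuration described above, assuming Comware 7 syntax (confirm the agent keyword spelling on your platform):

```
system-view
lldp global enable
interface Ten-GigabitEthernet1/0/1
 lldp agent nearest-nontpmr admin-status txrx
```

The nearest non-TPMR agent uses the 01-80-C2-00-00-03 destination MAC address, which is relayed by two-port MAC relays but not by ordinary bridges, so the LLDPDUs reach the adjacent EVB station.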
The VSI Manager is responsible for delivering VSI network configuration to the
EVB Bridge. A local manager could be used, but is not recommended. It is far
more effective to deploy a product like HP’s VAN Connection Manager to act as a
central VSI manager.
You must inform the EVB Bridge which VSI manager it should communicate with
for this purpose. This configuration can be done at the global level, as shown in
the figure. Configured this way, the specified VSI manager IP address and name
are used for all interfaces on the EVB Bridge. If you wanted some interfaces to
use a different VSI manager, you could configure it with similar syntax at the
interface level.
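As a hedged illustration only, a global VSI manager designation might look like the following sketch. The command keyword and the IP address here are assumptions for illustration, not verified syntax; consult the EVB command reference for your release:

```
system-view
evb default-manager ip 10.1.1.50
```

Interface-level overrides would use similar syntax in interface view for ports that need a different VSI manager.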
Step 5: Verify
The figure shows several commands that can be used to verify your configuration
efforts.
Summary
Edge Virtual Bridging (EVB) is defined in the IEEE 802.1Qbg standard. The
purpose of this standard is to enable Virtual Machines (VMs) to share a common
bridge port for forwarding services.
This standard encompasses Virtual Ethernet Port Aggregator (VEPA) technologies,
and protocols to help automate the coordination and configuration of network
resources. This results in enhanced network capabilities for VM-to-VM traffic,
including QoS, ACLs, sFlow, and port mirroring.
HP’s 5900v can replace a hypervisor’s native vSwitch to enable EVB and VEPA
services in the data center. It can work with physical switches like HP’s 5900
series.
In an EVB deployment, the physical switch is called an EVB Bridge, the HP 5900v
virtual switch is called an Edge Relay (ER), and a VM’s vNIC is called a Virtual
Station Interface (VSI). The EVB Bridge and ER communicate and connect over a
virtual link called the S-Channel. This channel is negotiated using CDCP.
HP’s IMC VAN Connection Manager serves as an EVB VSI Manager. This
component enables sub-interfaces to be automatically configured on an EVB
Bridge’s S-Channel interface to accommodate the movement, addition, and
removal of VMs in the data center.
The HP 5900v’s VFE component is installed on hypervisor hosts. Its VCE
component is installed as a VM on a hypervisor host. The physical EVB Bridge
must also be configured to support an EVB solution.
Learning Check
After each module, your facilitator will lead a class discussion to capture key
insights and challenges from the module and any accompanying lab activity. To
prepare for the discussion, answer each of the questions below.
1. Which of the following is an advantage of an EVB – VEPA solution?
a. More frames can be forwarded locally inside a hypervisor host.
b. EVB – VEPA solutions can be implemented purely inside a standard
switched environment, with no additional components required.
c. Services such as QoS, security ACL’s, and sFlow are extended into the
hypervisor environment and so are supported on internal vSwitches.
d. Since all frames are forwarded by the physical switch, consistent services
and visibility are available for all traffic flows.
e. EVB – VEPA can take advantage of a standard IMC solution. No
additional modules for IMC are required.
2. Which three statements are true about EVB VLAN management? (Choose
three.)
a. VLANs for all VMs must be provisioned on all data center switches.
b. VLANs need only be deployed where they are actually needed.
c. VLANs are automatically provisioned when they are defined in IMC
d. VLANs are automatically provisioned when a VM comes online
e. Communication between the IMC VAN module and the pSwitch help
facilitate VLAN management.
3. The S-Channel is a virtual link between the ER and EVB Bridge, and uses
sub-interfaces to support dynamic VLAN creation.
a. True.
b. False.
4. Which three statements are true about an EVB VSI manager? (Choose three.)
a. It allows sub-interfaces to be automatically configured on a switch by
integrating with a hypervisor management tool
b. An EVB VSI manager is included by default with an IMC installation.
c. The IMC VAN Connection Manager module adds EVB VSI manager
capabilities to IMC.
d. Automatic VLAN creation using an EVB VSI manager is the only method
for creating VLANs for Virtual Machines in an EVB deployment.
e. To aid in automatic VLAN creation, VSI Templates are defined with
networks and VSI types.
5. What are two features of the HP 5900v? (Choose two.)
a. Both a standard hypervisor vSwitch and the HP 5900v can be used for
VEPA functionality in an EVB deployment.
b. A hypervisor’s standard vSwitch can coexist with the HP 5900v, but only
the 5900v provides VEPA functionality.
c. The 5900v uses CDCP to communicate with an HP 5900 physical switch.
d. The 5900v is not responsible for announcing the addition or removal of
VLANs to an EVB Bridge.
VXLAN
Appendix 3
Objectives
This module introduces Virtual eXtensible LANs (VXLANs). You will learn how this
Layer 2 overlay technology functions and how it is used in a data center
environment. This includes an understanding of the solution objectives for
VXLAN and how those objectives are fulfilled, with capabilities such as MAC
learning inside the VXLAN and methods for handling multi-destination delivery.
You will also learn how VXLAN can be interconnected to traditional physical
networks, providing multiple deployment options. One such option includes the use
and configuration of VXLAN hardware-based gateways, such as the HP model
5930.
NOTES
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
VXLAN Overview
This module covers VXLAN introductory topics before moving into VXLAN theory
of operation and design considerations. The final section covers the configuration
of VXLAN functionality on HP switches, such as the HP 5930.
VXLAN introduction
Supported Products
VXLAN Operation
VXLAN Concepts
VXLAN: VTEP
The VXLAN Tunnel End Point (VTEP) provides an entry point into the VXLAN,
while interacting with the other VTEPs to provide connectivity. In the figure, each
physical ESX host has been assigned this role. The VTEPs are responsible for
encapsulating traffic from the VMs into the tunnel and sending the resulting
packets to their destination. The receiving, destination VTEP must decapsulate
the packet and deliver it toward the intended target.
Essentially, the VTEP provides an “on-ramp” to the VXLAN by performing this
frame encapsulation and decapsulation service.
When one VXLAN VTEP or tunnel end point communicates with another, a VXLAN
tunnel is established. A VXLAN tunnel is required for each destination VTEP in the
same VXLAN. In the figure, ESX 11, 12 and 21 are the physical devices hosting
VMs assigned to VXLAN ID 1001, and so each of those devices maintains a tunnel
with the other two. VMs assigned to VXLAN ID 2001 are hosted on servers ESX
11, 12 and 22, with a similar set of tunnels.
The tunnel is merely a transport mechanism through the IP network. One tunnel
can be used by multiple VXLAN IDs, because each VM’s encapsulated packet
contains its VNI; the receiving VTEP uses the VNI to deliver traffic to the
correct VXLAN.
The underlying IP transport network uses multicasting to deliver broadcast,
multicast, and unknown unicast frames for each VXLAN. Suppose VM1 sends a
broadcast. Since hosts ESX 12 and 21 contain VMs in the same VXLAN, they
must receive this broadcast. There are two ways to recreate this broadcast domain
functionality.
One method is to use head-end replication. With this method, host ESX 11
encapsulates the Layer 2 broadcast frame two times. It is encapsulated once as
an IP unicast to host ESX 12, and again as a unicast to ESX 21. This unicast-
based solution simplifies the deployment by eliminating the need for multicast
services. However, larger deployments could have scalability issues, since every
broadcast must be individually unicast to each host in the VXLAN.
The second method optimizes scalability by using multicasting. Using this solution,
ESX 11 can encapsulate each Layer 2 broadcast in a single IP multicast packet,
addressed to some multicast address, such as 239.1.1.1 in this scenario. All hosts
are configured to know that VNI 1001 is bound to this address. Hosts ESX 12 and
21 will use IGMP to join the multicast group, informing the IP transport network that
they need to receive traffic destined to 239.1.1.1. If properly configured for
multicasting, the IP network will deliver the original transmission to both ESX
hosts.
This method does add a bit of complexity to the transport network deployment,
since it must support multicast functionality. However, this saves resources and
improves scalability, since each Layer 2 broadcast need only be transmitted once
by the VTEP device, regardless of the VXLAN’s host count.
Another advantage to multicasting is that only ESX hosts which actually require a
VXLAN’s traffic will join the multicast group. Suppose an administrator were to
move VM5 from ESX-21 to some other physical server. ESX-21 would realize that
it is no longer hosting VMs on VXLAN 1001, and send an IGMP leave message to
the IP network, leaving the multicast group 239.1.1.1.
This VXLAN functionality provides a major advantage over traditional VLANs. The
network can respond to the changing relationship between Virtual Machines and
physical server connections, and send broadcast/multicast traffic only to required
destinations.
NOTES
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
The figure shows the VXLAN packet structure. The original Layer 2 frame is
encapsulated into a UDP/IP packet, with an 8-byte VXLAN header inserted that
contains the VXLAN ID.
This encapsulation adds fifty bytes of overhead to the transmission: the 8-byte
VXLAN header, an 8-byte UDP header, a 20-byte IP header, and 14 bytes for
Ethernet. For the original Layer 2 frame to support 1500-byte payloads, the MTU
of the IP infrastructure should be increased to at least 1550 bytes.
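On Comware switches in the transport path, the larger frame size can be accommodated with jumbo-frame support. A sketch follows; the interface number is illustrative, and the 1600-byte value is simply a round figure comfortably above the 1550-byte minimum:

```
system-view
interface Ten-GigabitEthernet1/0/2
 jumboframe enable 1600
```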
The figure introduces this section, which is focused on VXLAN traffic flow. The
objective is to understand how unicast, multicast, and broadcast traffic is
accommodated through a VXLAN-capable infrastructure.
We will examine a scenario that focuses only on VMWare ESX hypervisor
connectivity to the VXLAN, before delving into typical multicast and unicast
scenarios.
You will understand the packet flow that supports VXLAN communications,
including MAC learning and interconnecting a VXLAN-based system to traditional
networks.
Finally you will learn about Comware hardware-based VXLAN Gateway operation,
providing a solution to bridge VXLANs to physical VLANs.
NOTES
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
Figure A3-10:
The scenario above describes multi-destination delivery, showing how VM1 sends
an ARP broadcast intended for VM5.
VM1 sends an ARP broadcast, which is delivered through the virtual link to the
VTEP. ESX-11 receives this frame from the VM’s virtual port, which is assigned to
VXLAN 1001.
ESX-11 determines that the destination MAC address is a broadcast. ESX-11
encapsulates the packet into a VXLAN frame, with VXLAN ID 1001. The source IP
address is ESX-11’s IP address of 10.1.1.11, and the destination IP address is the
configured multicast address of 239.1.1.1. ESX-11 sends this encapsulated frame
into the transport IP network.
The transport IP network will deliver the IP Multicast packet to all the joined hosts.
This assumes that multicast routing has been properly configured on this
infrastructure. Another assumption is that the remote VTEPs have joined the IP
Multicast group using IGMP.
If so, the IP packet is delivered to ESX-12 and ESX-21, and to any other members
of the 239.1.1.1 multicast group. This scenario focuses on ESX-21, since the
same activity applies to ESX-12.
ESX-21 decapsulates the VXLAN packet and so discovers the source VM’s virtual
MAC address on the tunnel interface. It binds this MAC address to the ESX-11
source IP address of 10.1.1.11.
ESX-21 then reads the destination MAC address, which is a broadcast in this
case. Any multicast, broadcast, or unknown unicast destination is flooded to
all of ESX-21’s local virtual ports assigned to VXLAN 1001. In this case, only VM5
is active, and so it receives the ARP request sent by VM1.
VM5 has now received the ARP request and responds to VM1 with an ARP reply,
which is a unicast frame. ESX-21 receives the frame from VM5’s virtual port,
which is assigned to VXLAN 1001.
ESX-21 looks up the destination MAC address, which is VM1’s MAC address.
Since ESX-21 just received an ARP request from VM1, it has an entry in its MAC
address table for this host, pointing toward ESX-11’s IP address.
ESX-21 encapsulates the packet into a VXLAN frame tagged with VXLAN ID 1001.
Its own IP address is the source, and the destination IP address is ESX-11’s IP
address of 10.1.1.11. A multicast is not needed in this case, since the destination
is known. This unicast IP datagram is sent into the transport IP network.
Once encapsulated, VXLAN packets can be treated as any other IP-based traffic.
It will benefit from the various types of equal-cost load-balancing capabilities
offered by a typical routed IP infrastructure for intra-VXLAN delivery.
ESX-11 receives and decapsulates the VXLAN packet, learning VM5’s source
MAC address. It binds this MAC address to ESX-21’s source IP address as the
outgoing interface. It then reads the destination MAC address, which is VM1’s
address in this case, and forwards the packet to the local virtual port assigned
to VM1.
VM1 receives the ARP reply.
The result of the multicast and unicast flows just analyzed is that all VMs can
communicate at Layer 2, and all the VTEPs have built a table that associates
learned MAC addresses with VTEP IP addresses. Therefore, each source can
deliver frames directly to any valid destination host.
Refer to the figure above when necessary, and answer the following questions.
1. Which devices above must have a tunnel between them, and why? Assuming
two VXLANs must be supported, how many tunnels must be active?
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
2. For devices in the IP network to successfully support VXLAN, what should
their MTUs be set to, and why?
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
4. ESX-12 has just received a frame from vm1, destined for vm4. How will
ESX-12 keep track of vm1’s MAC address?
a. ESX-12 does not need to keep track of this, since it is handled by the
fabric
b. It maps the source MAC address on the outer-most Ethernet header to its
local interface
c. It maps the source MAC address on the original Ethernet header to its
local interface’s IP address
d. It maps the source MAC address of the original Ethernet header to ESX-
11’s IP address
e. It maps the FC-ID from the native frame to ESX-11’s IP address
5. When ESX-12 sends the response from vm4 back to vm1, what will it use as
the VXLAN ID, source IP address, and destination IP address?
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
Having explored how devices on the same VXLAN communicate, the focus is now
on routing between VXLANs. Some kind of gateway function must allow VMs to
access external networks, similar to how a host connected to a traditional
VLAN requires a default gateway on the same VLAN.
There are three possible solutions for inter-VXLAN routing:
VXLAN Layer 3 VM-Based: This solution places routing services inside the
ESX host, using the HP Virtual Service Router.
VXLAN Layer 3 VMWare Edge Gateway: This solution relies on a VMWare
product inside the ESX host to provide routing services.
VXLAN Layer 2 hardware: This solution exposes the VXLAN to an external
VLAN connection, using a hardware-based VXLAN-to-VLAN gateway, like the
HP 5930.
NOTES
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
With VM-Based IP routing, you create a virtual machine that has two or more
virtual NICs. One vNIC is bound to the VXLAN network object on the ESX server,
while the other vNIC is bound to a traditional VMNet VLAN in a traditional vSwitch
defined on the ESX server. This VMNet VLAN is associated with a physical NIC on
the ESX host, which is connected to a switch port configured to support that VLAN.
The figure highlights this scenario for the ESX-11 server. To support VM-based
routing, VM2 is created. This VM includes a G1/0 vNIC bound to VXLAN 1001,
with IP address 192.168.1.1. Its G2/0 interface is bound to VMNet VLAN 102, and
has an IP address of 192.168.2.1.
Host VM1 is configured on VXLAN 1001 as before, and its default gateway is the
192.168.1.1 address. As with traditional default gateways, the VM2 virtual gateway
device accepts frames from hosts and routes them out its G2/0 interface.
Traffic sent out VM2’s G2/0 interface includes an 802.1q VLAN tag of 102 as it
exits the physical NIC. The attached switch port is configured to support this
VLAN, and so accepts it inbound and forwards it on through the IP transport
network. Of course, this VM2 virtual router must exchange routes with the physical
routers on the IP transport network.
This example shows VM2 running on physical host ESX-11. However, this virtual
router could operate on any ESX host that has access to VXLAN 1001, and is
connected to a physical switch port that supports VLAN 102. If the virtual service
router is moved using vMotion, the logical router topology does not change.
This solution is viable with any operating system that allows routing. For example,
the HP Comware portfolio includes the HP Virtual Service Router, a software-
based router optimized for hypervisor environments. Currently, this solution
supports routed access only; it is not possible to connect the 192.168.1.1 VXLAN
directly to an external, traditional VLAN.
NOTES
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
This solution is very similar to the previous solution, but deployed using a VMWare
product. The VMWare Guest operating system can provide routing, filtering,
address translation, and firewall services. Fundamental operation of this solution is
as described in the previous example.
NOTES
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
Note
The next 7900 and 12900 LPUs will support VXLAN termination and routing on
the same device.
VTEP configurations on the ESX Host are created and maintained by the vSphere
management software, using the Open Virtual Switch Data Base (OVSDB) format.
The moment the virtual machines start, this management software knows that they
are bound to specific VXLANs. It notifies all other VTEP devices to create the
appropriate internal interfaces, providing a kind of centralized configuration
management.
NOTES
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
Design Considerations
NOTES
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
The figure introduces the steps to configure VXLAN, and depicts a sample
topology. In this scenario, the transport IP network is multicast-enabled. Switch
5930-1 is near the top of the diagram, while 5930-2 is near the bottom. These are
the two VTEPs. They will be configured to set up a tunnel interface so that VXLAN
1001 traffic can traverse the transport IP network. That specific VXLAN will also be
delivered to physical VLAN 101.
The tunnel interface on each switch will be bound to the VSI defined for VXLAN
1001. A Service Instance will be created to bind the physical interface VLAN traffic
to the VSI. The top server, with IP address 192.168.1.11, is connected to an
access port in VLAN 101. That switch will tag its uplink traffic when sending it to
5930-1, which maps VLAN 101 to VXLAN 1001. It delivers frames to the VSI,
which processes the traffic and sends it out over the Tunnel Interface.
Broadcasts from the 192.168.1.11 server are delivered to the tunnel interface over
the VXLAN network. This VXLAN encapsulated packet arrives at 5930-2, on the
tunnel interface on the VSI. The VSI performs local flooding, sending the packet on
the local wire, tagged with VLAN 101. This traffic is processed by the external
access switch, which recognizes it as a broadcast frame and floods it out all ports
in VLAN 101, including the port connected to the server at IP address
192.168.1.12.
The first step is to enable Layer 2 VPN globally. This is the same command that is
used for Layer 2 VPN/VPLS configuration. It enables the VSI model inside the
Comware 5930 switch hardware.
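The global command itself is brief:

```
system-view
l2vpn enable
```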
The next step is to create a VXLAN tunnel. This provides the VXLAN
encapsulation services over the transport IP network. To ensure a stable,
unchanging tunnel source IP address, a loopback interface is created and
specified as the tunnel source. These are point-to-point tunnels, so you must create a tunnel interface
for each remote VTEP. This scenario only uses a single remote 5930, so only one
tunnel endpoint is defined here.
As previously mentioned, this step must be done manually, since OVSDB
information cannot currently be shared between hardware gateways and ESX
hosts. Should this capability become available, then the vSphere management
host could dynamically create new tunnels on the hardware gateway.
In the figure, Interface loopback 0 is defined and an IP address is assigned to it.
The tunnel interface is created in VXLAN mode, and the tunnel’s local source
and remote destination loopback addresses are specified. On the 5930-2 switch,
the loopback address would be 10.2.0.22, and the tunnel’s source and destination
addresses would be reversed.
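Based on that description, the tunnel configuration on 5930-1 might look like the following sketch. The 5930-1 loopback address 10.1.0.11 is assumed for illustration; only 5930-2’s address (10.2.0.22) is given in the text. Verify the exact syntax for your software release.

```
# On 5930-1: loopback provides a stable tunnel source address
interface LoopBack 0
 ip address 10.1.0.11 32
#
# Point-to-point VXLAN tunnel to the remote VTEP (5930-2)
interface Tunnel 0 mode vxlan
 source 10.1.0.11
 destination 10.2.0.22
```

On 5930-2, the same configuration would be applied with the source and destination addresses swapped.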
Next, the VSI is defined. The VSI connects the local physical interface, which is
bound to it through the Service Instance, with the VXLAN tunnel interface. It is
the VSI that actually carries the VXLAN identifier.
In the figure, a VSI named Customer1 is created, and associated with VXLAN
1001. VXLAN 1001 is in turn bound to the previously created tunnel.
If multiple tunnels to multiple VTEP endpoints were required, more tunnel
interfaces would be created, and those tunnels would also be added to the VSI as
virtual ports.
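A sketch of that VSI configuration on Comware 7, using the names and IDs from the figure description (verify syntax for your release):

```
# VSI carries the VXLAN ID; the VXLAN is bound to the tunnel
vsi Customer1
 vxlan 1001
  tunnel 0
```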
A service instance defines which local traffic should be connected to the VXLAN
VSI. The Service Instance ID is locally significant to the interface only.
In the figure, interface Ten-GigabitEthernet 1/0/2 is associated with Service
Instance 10, which is configured to match VLAN 101.
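A sketch of that Service Instance, matching VLAN 101 tagged traffic on the physical port (Comware 7 style syntax; verify for your release):

```
interface Ten-GigabitEthernet 1/0/2
 # Service Instance ID 10 is significant only on this interface
 service-instance 10
  encapsulation s-vid 101
```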
At this point in the discussion, some might remember that the point of having
VXLANs was to have more than 4000 VLANs. However, once VXLANs are
mapped to traditional VLANs, the old VLAN limitation seems to resurface.
This doesn’t have to be the case. With this model, VXLANs can be bound to 4000
VLAN IDs on interface ten1/0/2, and another set of VXLANs can be bound to 4000
other VLANs on another physical interface.
The VLAN 101 on interface ten1/0/2 has nothing to do with the VLAN 101-tagged
traffic on interface ten1/0/3. Therefore, you can distribute blocks of 4000 VLANs
over different physical ports to different regions of the data center.
This is so because of the VSI-to-Service Instance mapping you will configure in the
next step.
Next, the Service Instance is bound to the VSI. This creates a kind of cross-
connect between the service instance and the VSI. Since this cross-connect
directly maps the physical interface to a VSI, the traffic is not processed globally by
the switch as VLAN 101 traffic. It is only processed inside the VSI.
Globally defined VLAN 101 operates entirely independently of the VLAN 101 that is
processed by the Service Instance on this physical interface.
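The binding itself is a single statement under the Service Instance; a sketch in Comware 7 style (verify syntax for your release):

```
interface Ten-GigabitEthernet 1/0/2
 service-instance 10
  # Cross-connect this interface's matched traffic into the VSI
  xconnect vsi Customer1
```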
Now that IGMP host functionality is enabled, the multicast group to join is
specified, along with the source IP address used to join that group.
It is recommended to use a unique multicast IP address per VXLAN. This
optimizes efficiency and reduces unnecessary processing of frames, as
previously explained.
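As a rough sketch only: the group address (239.1.1.1) and source address (10.1.0.11) below are illustrative values, not taken from the figure, and the exact IGMP host keywords vary by software release, so consult the configuration guide for your platform.

```
interface Tunnel 0
 # Join the per-VXLAN multicast group as an IGMP host
 igmp host enable
 igmp host group 239.1.1.1 source 10.1.0.11
```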
Step 8: Verify
The figure shows several display commands that can be used to validate your
configuration efforts. You will explore these commands during lab activities.
This includes validating the status of the tunnel interface and the VSI, as well as
checking MAC addresses.
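Typical verification commands for this configuration include the following (a sketch; the exact set shown in the figure may differ):

```
display interface Tunnel 0
display vxlan tunnel
display vsi verbose
display l2vpn mac-address
```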
Summary
In this module, you learned that VXLANs extend the scalability of traditional
VLANs to support over 16 million broadcast domains, while improving deployment
and management flexibility and efficiency for data center administrators.
Supported on the 5930 ToR and the 7900, 11900, and 12900 products, VXLAN is an IP-based
overlay technology that tunnels L2 frames inside IP datagrams. Each physical
server or switch has a tunnel endpoint called a VXLAN Tunnel Endpoint (VTEP).
Both Layer 2 broadcast and unicast frames are tunneled through a traditional IP
transport network to provide all the functionality of a single broadcast domain.
VXLAN and physical networks can be interconnected with Layer 3 routing
functionality deployed inside a hypervisor environment, or via a Layer 2 hardware
VXLAN-to-VLAN gateway, such as the HP 5930.
The IP transport network can support VXLANs using unicast or multicast services.
Unicast is simple to deploy, but has scalability and processing concerns. A
multicast deployment optimizes bandwidth and packet processing utilization.
Finally, you learned how to configure a 5930 to operate as a VXLAN-to-VLAN
gateway.
Learning Check
After each module, your facilitator will lead a class discussion to capture key
insights and challenges from the module and any accompanying lab activity. To
prepare for the discussion, answer each of the questions below.
1. What are three advantages of VXLAN (Choose three)?
a. It is an IEEE standards-based protocol.
b. Any existing IP routed infrastructure can be used.
c. The VXLAN ID is a 16-bit number, allowing for over 64,000 VLAN IDs.
d. It provides flexibility, since it is an IP-based Layer 2 overlay technology.
e. Scalability is further enhanced through the use of BGP extensions.
2. Choose three correctly described components of a typical VXLAN deployment
(Choose three).
a. The VTEP provides an entry point into the VXLAN.
b. Each participating VM host is assigned a VNI.
c. A VXLAN tunnel is formed between two VNI-assigned VMware hosts.
d. VXLAN can use a multicast IP address for Layer 2 multi-destination
delivery.
e. VXLAN requires IP multicast capability, since that is the only method of
transporting broadcast frames across the VXLAN fabric.
3. Because of the additional header information added by VXLAN systems, the
MTU of the IP infrastructure should be increased to at least 1550 bytes.
a. True.
b. False.
4. What are three possible solutions for routing between VXLANs (Choose
three)?
a. The internal, native vRouter on the hypervisor.
b. The HP Virtual Services router.
c. VXLAN Layer 3 VMware Edge Gateway.
d. A hardware-based VXLAN – VLAN gateway, like the HP 5930.
e. Any Layer 3-capable device.
5. What are two design considerations for VXLAN deployments (Choose two)?
a. Avoid multicast routing protocol configuration on the IP infrastructure.
b. IGMP must be configured to avoid Layer 2 flooding.
c. You need to configure IP multicast ranges for the VXLAN IDs.
d. Mapping multiple VXLANs to the same transport IP address can improve
the efficiency of packet delivery.
To learn more about HP Networking, visit
www.hp.com/networking
© 2014 Hewlett-Packard Development Company, L.P. The information contained herein is
subject to change without notice. The only warranties for HP products and services are set
forth in the express warranty statements accompanying such products and services. Nothing
herein should be construed as constituting an additional warranty. HP shall not be liable for
technical or editorial errors or omissions contained herein.