Building HP FlexFabric Data Centers

Learner guide - book 3 of 3

HP ExpertOne
Rev. 14.41

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
© Copyright 2014 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP products and
services are set forth in the express warranty statements accompanying such products and services. Nothing
herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial
errors or omissions contained herein.
This is an HP copyrighted work that may not be reproduced without the written permission of HP. You may not
use these materials to deliver training to any person outside of your organization without the written permission
of HP.

This course is an HP ExpertOne authorized course designed to prepare you for
the associated certification exam. All material to be used and studied in
preparation to pass the certification exam is included in this training.

HP ExpertOne provides training and certification for the most sought-after IT disciplines, including convergence, cloud computing, software-defined networking, and security. You get the hands-on experience you need to hit the ground running. And you learn how to design solutions that deliver business value.

HP ExpertOne gives you:

• A full range of skill levels, from foundational to master
• Personalized learning plans and resources through My ExpertOne
• Certifications that command some of the highest pay premiums in the industry
• A focus on end-to-end integration, open standards, and emerging technologies
• Maximum credit for certifications you already hold
• A supportive global community of IT professionals
• A curriculum of unprecedented breadth from HP, the world’s most complete technology company

Visit hp.com/go/ExpertOne to learn more about HP certifications and find the training you need to adopt new technologies that will further enhance your IT expertise and career.

Contents
Appendix 1 - Enhanced IRF......................................................................................................A1-1
Objectives.....................................................................................................................A1-1
Enhanced IRF Overview...............................................................................................A1-2
Network Virtualization Types.........................................................................................A1-3
Evolution of N:1 Virtualization.......................................................................................A1-4
EIRF..............................................................................................................................A1-6
Benefits of EIRF............................................................................................................A1-7
EIRF in the Data Center................................................................................................A1-8
EIRF in the Campus......................................................................................................A1-9
EIRF Operation Overview...........................................................................................A1-10
Terminology and Components....................................................................................A1-11
EIRF Implementation: PEX Port..................................................................................A1-12
EIRF Implementation: CB-PE Registration Flow.........................................................A1-13
EIRF Implementation: CB-PE Connection..................................................................A1-14
EIRF Implementation: Physical Port States................................................................A1-15
EIRF Implementation: PE identification – New PE ....................................................A1-16
EIRF Implementation: PE Removal............................................................................A1-17
EIRF Implementation: CB chassis Option...................................................................A1-18
EIRF Implementation: CB Fixed-Port Option..............................................................A1-19
EIRF Implementation: Forwarding Modes...................................................................A1-20
EIRF Implementation: Central Forwarding Mode........................................................A1-21
EIRF Implementation: Local Forwarding Mode...........................................................A1-22
EIRF Implementation: Unicast 1.................................................................................A1-23
EIRF Implementation: Unicast 2.................................................................................A1-24
EIRF Implementation: Multicast/Broadcast 1..............................................................A1-25
EIRF Implementation: Multicast/Broadcast 2..............................................................A1-26
IRF and EIRF: PE-CB Connections............................................................................A1-27
EIRF Port Numbering..................................................................................................A1-28
EIRF Port Numbering Fixed-Port CB..........................................................................A1-29
EIRF Port Numbering Chassis CB..............................................................................A1-30
IRF and EIRF: Network Access for Servers................................................................A1-31
IRF and EIRF: Network Access for Servers - Example...............................................A1-32
IRF and EIRF: Split Brain Situation 1..........................................................................A1-33
Learning Activity: EIRF Operation Review..................................................................A1-34
Learning Activity: Answers..........................................................................A1-35
Design Considerations................................................................................................A1-36
Design Considerations: Deployment Planning............................................................A1-37
Configuration Steps for EIRF......................................................................................A1-38

Step 1: Configure IRF for the CB Devices..................................................................A1-39
Step 2: Prepare the PEX Firmware Image..................................................................A1-40
Step 2: Prepare the PEX Firmware Image 2...............................................................A1-41
Step 3: Create a PEX Port with Virtual Slot-ID............................................................A1-42
Step 4: Associate PEX Port with a Physical Interface.................................................A1-43
Step 5: Configure PE in PEX Mode Using Boot Menu................................................A1-44
Step 6: Verify...............................................................................................................A1-45
Step 6: Verify 2............................................................................................................A1-46
Step 6: Verify 3............................................................................................................A1-47
Learning Check...........................................................................................................A1-49
Learning Check Answers............................................................................................A1-50
Appendix 2 - EVB-VEPA...........................................................................................................A2-1
Objectives.....................................................................................................................A2-1
EVB Overview...............................................................................................................A2-2
EVB and VEPA.............................................................................................................A2-3
Review of Hypervisor Networking.................................................................................A2-4
Traffic flow with Classic Hypervisor Networking............................................................A2-5
Traffic flow with Classic Hypervisor Networking............................................................A2-6
Traffic Visibility..............................................................................................................A2-7
VLAN Network Management.........................................................................................A2-8
EVB Model....................................................................................................................A2-9
EVB VLAN Network Management..............................................................................A2-10
Terminology and Concepts 1......................................................................................A2-11
Terminology and Concepts 2......................................................................................A2-12
S-Channel Identifier....................................................................................................A2-13
S-Channel Sub-interfaces...........................................................................................A2-14
S-Channel Setup Negotiation......................................................................................A2-15
S-Channel Configuration Negotiation..........................................................................A2-16
EVB VSI Manager.......................................................................................................A2-17
EVB VSI Manager: VSI Templates.............................................................................A2-19
HP 5900v: EVB Interaction with EVB Station 1...........................................................A2-20
HP 5900v: EVB Interaction with EVB Station 2...........................................................A2-21
HP 5900v Components...............................................................................................A2-22
HP 5900v Communication 1.......................................................................................A2-23
HP 5900v Communication 2.......................................................................................A2-24
Learning Activity: EVB Operation and Component Review........................................A2-25
Learning Activity: Answers..........................................................................................A2-27
HP 5900v Installation Prerequisites............................................................................A2-28
Installation Flow Overview 1.......................................................................................A2-29
Installation Flow Overview 2.......................................................................................A2-30

Installation Flow Overview 3.......................................................................................A2-31
Initial Configuration – Device Discovery and VDS .....................................................A2-32
New Network/VM Configuration Flow Overview 1......................................................A2-33
New Network/VM Configuration Flow Overview 2......................................................A2-35
New Network/VM Configuration Flow Overview 3......................................................A2-36
New Network/VM Configuration Flow Overview 4......................................................A2-37
Operational VM Boot Process Flow 1.........................................................................A2-38
Operational VM Boot Process Flow 2.........................................................................A2-39
Operational VM Boot Process Flow 3.........................................................................A2-40
Operational VM Boot Process Flow 4.........................................................................A2-41
Operational VM Boot Process Flow 5.........................................................................A2-42
Operational VM Boot Process Flow 6.........................................................................A2-43
EVB Bridge Configuration Steps.................................................................................A2-44
Step 1: Configure Interface with EVB support.............................................................A2-45
Step 2: Configure LLDP..............................................................................................A2-46
Step 3: S-Channel Reflective Relay............................................................................A2-47
Step 4: VSI Manager Configuration............................................................................A2-48
Step 5: Verify...............................................................................................................A2-49
Summary.....................................................................................................................A2-50
Learning Check...........................................................................................................A2-51
Learning Check Answers............................................................................................A2-53
Appendix 3 - VXLAN.................................................................................................................A3-1
Objectives.....................................................................................................................A3-1
VXLAN Overview..........................................................................................................A3-2
VXLAN introduction.......................................................................................................A3-3
Supported Products......................................................................................................A3-4
VXLAN Operation..........................................................................................................A3-5
VXLAN Concepts..........................................................................................................A3-6
VXLAN: VTEP...............................................................................................................A3-7
VXLAN Tunnel and Multicast........................................................................................A3-8
VXLAN Packet Structure.............................................................................................A3-10
VXLAN Traffic Flow Overview.....................................................................................A3-11
VXLAN: Multi-Destination Delivery..............................................................................A3-12
VXLAN: Multi-Destination Delivery..............................................................................A3-13
VXLAN Unicast Delivery.............................................................................................A3-14
VXLAN Unicast Delivery.............................................................................................A3-15
Learning Activity: VXLAN Review...............................................................................A3-16
Learning Activity: Answers..........................................................................................A3-18
VXLAN to Physical Network........................................................................................A3-19
VXLAN to Physical network: VM Based IP Routing....................................................A3-20

VXLAN to Physical Network: VMWare Edge Gateway...............................................A3-22
VXLAN to physical network: Hardware gateway.........................................................A3-23
Hardware Gateways and OVSDB...............................................................................A3-24
Design Considerations................................................................................................A3-25
Configuration Steps for VXLAN...................................................................................A3-27
Step 1: Configure Global L2VPN................................................................................A3-28
Step 2: Configure VXLAN Tunnel...............................................................................A3-29
Step 3: Create VSI VXLAN + Bind VXLAN Tunnel.....................................................A3-30
Step 4: Create Service Instance.................................................................................A3-31
Step 5: Bind Service Instance to VSI..........................................................................A3-32
Step 6: Configure Transport IP Interface IGMP..........................................................A3-33
Step 7: Configure VXLAN VSI Multicast address........................................................A3-34
Step 8: Verify...............................................................................................................A3-35
Summary.....................................................................................................................A3-36
Learning Check...........................................................................................................A3-37
Learning Check Answers............................................................................................A3-38

Appendix 1 - Enhanced IRF

Objectives
This module explores Enhanced IRF (EIRF). You will learn about the advantages, operational aspects, and connectivity requirements of EIRF. Unicast and multicast traffic handling mechanisms are then explored, followed by a discussion of EIRF configuration tasks.
After completing this module, you should be able to:
• Describe the Enhanced IRF feature
• Describe use cases for Enhanced IRF
• Describe Enhanced IRF operation and components
• Configure Enhanced IRF

Enhanced IRF Overview

Figure A1-1: Enhanced IRF Overview

This module includes an introduction to EIRF, along with a discussion of EIRF operation and concepts. Configuration is also reviewed.

Network Virtualization Types

Figure A1-2: Network Virtualization Types

In the context of this module, virtualization refers to representing multiple physical network devices as a single unit. One method of achieving this objective is to configure multiple chassis as members of an IRF group. IRF enables two or more physical chassis to be administratively managed and configured as one device. This IRF group is perceived as one unit by other devices through the use of multi-chassis link aggregation with LACP.
Another method to achieve this functionality is 1:N virtualization, in which multiple logical entities are hosted on one device. Examples include the definition of multiple VLANs in a single switch, or multiple VRFs defined inside a single router. This also includes MDC, in which multiple virtual switches are defined inside a single network entity.
Both of these methods can be used to provide complete network virtualization. IRF can combine two physical switches into one logical entity. This IRF entity can then be configured to support several MDCs, in which multiple VLANs can be defined. To route between them, a VRF can be deployed in each of the MDCs. In this way, N:1 and 1:N device virtualization types can be combined.
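As a concrete point of reference for the N:1 case, the following is a minimal sketch of joining two Comware-based switches into one IRF fabric. The member numbers, priority value, and interface names are illustrative assumptions; consult the IRF configuration guide for your platform before using them.

    # Member 1: raise the priority so this switch wins master election,
    # then bind a physical 10GbE link to logical IRF port 1/1.
    # (On many models the physical port must be shut down before binding.)
    system-view
    irf member 1 priority 32
    irf-port 1/1
     port group interface ten-gigabitethernet 1/0/49
     quit
    irf-port-configuration active

    # Member 2: renumber from the default member ID 1 to 2 (takes effect
    # after a reboot), then bind its uplink to irf-port 2/2 the same way.
    irf member 1 renumber 2

Once both members are cabled and active, the pair is managed as a single switch; this per-layer grouping is the behavior that EIRF extends across network layers.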

Evolution of N:1 Virtualization

Figure A1-3: Evolution of N:1 Virtualization

Traditional network design could involve the use of two core switches with separate redundancy mechanisms for Layer 2 and Layer 3. The Layer 2 redundancy was provided through STP. While each access switch might be dual-homed to upstream cores, one of those links would be placed in a blocking mode to prevent loops. The dual connections provided active-standby redundancy; the redundant links did not help to carry traffic load.
For Layer 3 redundancy, VRRP could be used. With VRRP, one core switch would assume the master role, actively routing traffic. Another device would assume a standby role to provide redundancy. This standby device can detect an outage and assume the master role if needed. This is another active-standby redundancy mechanism, as sketched below.
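For context, here is a minimal Comware-style VRRP sketch for one of the two cores; the VLAN, addresses, and priority are illustrative assumptions. The peer core would carry the same virtual IP with a lower priority, making it the standby.

    system-view
    interface vlan-interface 10
     ip address 10.1.10.2 255.255.255.0
     # Shared gateway address that endpoints use as their default gateway
     vrrp vrid 10 virtual-ip 10.1.10.1
     # Higher priority wins the master election (default is 100)
     vrrp vrid 10 priority 120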
IRF provided a significant evolution from previous techniques by grouping multiple switches into a single entity. With IRF, multiple core switches can be bundled together, as can access switches. Separate core and access IRF groups can be interconnected using LACP, which bundles multiple physical links into a single logical link.
This greatly simplifies the network design, while providing active-active data and control plane redundancy, as opposed to the active-standby mechanisms provided by STP and VRRP.
IRF also improved the management plane. If four physical chassis are grouped into a single IRF group, those four devices are all managed as a single unit. If nine members are bundled into an IRF group, there is only one-ninth of the administrative configuration and management overhead.
However, each IRF system must still be individually managed. For example, if 200 switches are configured into multiple four-member IRF groups, then 50 logical IRF units must be individually managed. While IRF provides significant advantages over traditional networking, room for improvement remains.
The next generation of virtualization is the Enhanced IRF (EIRF). This technology
provides similar active-active redundancy for data and control planes as IRF. The
evolution is that switches of multiple design layers can now be grouped into a single,
logical entity. IRF provides a horizontal grouping of identical devices. This means that
IRF can group access switches into a logical entity, and it can group core switches
into a separate logical entity. EIRF can provide vertical grouping of both access and
core switches into a single entity.
With EIRF, a complete virtualized network can now be managed from a single IP address, with complete active-active device and path redundancy. The number of nodes to be managed can be reduced to 1/30th, or even 1/100th, depending on the number of devices converged.
VLAN configuration is greatly simplified. In the IRF scenario described above, 200
physical switches were combined into 50 IRF groups. This means that a VLAN
addition or modification requires 50 configuration sessions – one for each IRF group.
With EIRF, the same task can be accomplished with a single configuration, since the
entire network is perceived as a single device.
Advantages are also realized with physical inter-switch cabling. With IRF, cables are required between the different members of the same IRF system. Access switches require multiple uplink connections to the core, as well as 10Gbps IRF connections between members. With EIRF, these horizontal, intra-member connections are not required at the access layer.
EIRF also greatly reduces the need for LACP configurations. LACP is not required to interconnect the members of an EIRF group. It is only used to connect LACP-capable endpoint devices to access layer switches.
EIRF simplifies and improves the efficiency of MAC address learning operations, and
also eases the migration to SDN. These advantages are the result of EIRF’s ability to
collapse the entire network into a much smaller number of logical entities.

EIRF

Figure A1-4: EIRF

EIRF is based on IRF, which is a mature, proven technology. Traditional IRF can only
group identical switch-series models and types. EIRF provides a complete N:1
virtualization technology to collapse different device types at different deployment
layers into a single logical entity. With EIRF, various models of core and access
switches can each be perceived as the individual line cards of a single, centrally
managed system.
In the example at the top of the figure, IRF was used to group two identical chassis-
based core switches into a single entity. Two identical fixed-port access switches
were also paired into an IRF group, and another pair was also configured to act as a
single IRF entity.
In the bottom example, the same devices are now all integrated into a single EIRF.
They are centrally managed by the chassis switches as if they were remote line
cards. The entire system is perceived as one massive chassis device with multiple
line cards.
Endpoint devices can be connected to two separate access switches to maximize
redundancy. This is the same as connecting endpoints to different line cards of a
single physical device. The endpoint’s dual interfaces can simply be grouped together
using traditional link aggregation. The connections between access switches are no longer needed, and so are removed. All communication between access switches now passes through the chassis switches.

Benefits of EIRF

Figure A1-5: EIRF

The single logical system created by EIRF greatly simplifies the network topology. There is now only one entity to manage and configure, which reduces repetitive configuration tasks and overall configuration complexity.
For example, VLANs need to be configured only once, rather than once for each of 200 physical switches or 50 IRF systems. Redundancy protocol configurations such as VRRP and STP are no longer necessary, further simplifying the configuration.
Firmware management is also streamlined. All access switch members are
automatically provisioned with the correct firmware by the main core switches.

EIRF in the Data Center

Figure A1-6: EIRF in the Data Center

The figure shows a deployment of three logical EIRF fabrics. The main, chassis-based core switches in each fabric act as a Controlling Bridge (CB) device. These distribution-layer CBs terminate all the top-of-rack switch connections.
Each EIRF fabric operates as a single logical device that is dual-homed to the Layer 3 core network. This routing core could be individual routing devices or an IRF system, running the routing protocol of choice, such as OSPF, BGP, or IS-IS.
The top-of-rack access switches provide the EIRF Port Extender (PE) function, since they extend the number of available ports in the EIRF fabric. Typically, servers are dual-homed to two PEs to ensure high availability.

EIRF in the Campus

Figure A1-7: EIRF in the Campus

Although the focus of this course is on data centers, it should be noted that EIRF can
also be deployed in a campus setting. In this scenario the entire campus network
becomes a single logical device. The CBs consist of an IRF fabric at the core layer.
These core devices provide Layer 3 forwarding, and act as the default gateway for
endpoints.
The access switches act as PEs, providing high-availability groups for endpoint connectivity. Although not shown in the figure, it is also possible to connect other switches to the PEs using link aggregation. This would provide an active-active redundancy mechanism for these switch connections.

EIRF Operation Overview

Figure A1-8: EIRF Operation Overview

This section describes EIRF operation, beginning with a discussion of terminology and components. This leads to a more detailed look at Port Extender (PEX) ports and CB-PE connectivity.
This is followed by an overview of CB chassis-based switches and top-of-rack models. You will also learn about various PE forwarding modes, port numbering systems, and the EIRF split-brain condition.

Terminology and Components

Figure A1-9: Terminology and Components

The CB is the highest layer of an EIRF fabric, and must be a chassis-based or high-performance fixed-port device. Chassis that support EIRF include the 11900 and 12900. Fixed-port devices include the 5900/5930 models. The EIRF fabric perceives one logical Controlling Bridge; for redundancy, two physical chassis can be grouped into an IRF system.
The PE is the lowest layer in the EIRF model, deployed as a fixed-port device. Chassis-based switch models cannot be used as PE devices. Various models are available to meet different performance and scalability requirements.
PEs are connected to CBs with a Port Extender (PEX) port. This is a logical connection, similar to a traditional IRF system’s logical IRF port. This logical PEX port can be formed with multiple physical interfaces. To maximize redundancy, sets of physical interfaces can be connected to different CBs.
In the figure, the CBs are depicted as high-performance, chassis-based devices. Each PE has multiple physical connections to the CBs. This set of connections is configured as a single logical PEX port.

EIRF Implementation: PEX Port

Figure A1-10: EIRF Implementation: PEX Port

Multiple physical interfaces are assigned and automatically aggregated into a logical PEX port. The cabling for PEX interfaces can be fiber-optic connections or dedicated direct-attach cables, running at 10Gbps or 40Gbps.
No direct links between the PE devices are allowed; PEs can only have physical connections with CBs. Multi-chassis link aggregation is not required for this connectivity, but is recommended to maximize redundancy.
As each PE comes online, it appears as an interface line card on the CB. When the PE goes offline, it appears as if the line card was removed. For this reason, PEX port connections can be thought of as similar to the chassis backplane connection in a traditional chassis-based switch.

EIRF Implementation: CB-PE Registration Flow

Figure A1-11: EIRF Implementation: CB-PE Registration Flow

The figure shows the CB-PE registration flow that establishes the EIRF. First, the PE device is manually converted to operate in PEX mode via a BOOTROM menu option, and then reboots. Meanwhile, the CB periodically sends EIRF hello packets out all PEX ports. When the PE device comes online, it receives these packets and requests a Slot ID. The CB responds with a Slot ID assignment.
Next, the PE determines whether it is running the correct firmware. If not, boot and system images are downloaded from the CB and saved locally on the PE device, so the next time the PE boots it will already have the correct images.
Once booted, with images validated, the PE registers itself, and the CB pushes the configuration down to it, as if configuring a chassis-based line card.
EIRF Implementation: CB-PE Connection

Figure A1-12: EIRF Implementation: CB-PE Connection

A logical PEX port is manually created on the CB to manage each PE; one logical PEX port is defined for each physical PE device. Each PEX port is assigned a virtual Slot ID, and physical interfaces are bound to it. Optionally, a description can be configured for the PEX port, typically describing the connected PE device.
Multiple physical CB-to-PE connections can be assigned to one logical PEX port. At
least two connections should be provisioned for redundancy, with additional
connections added as needed to meet bandwidth requirements. A hash-based load
balancing algorithm is used to distribute traffic over the physical PEX port
connections.
A given PE’s uplinks can only be connected to physical CB interfaces that have been
assigned to the same logical PEX port. If a CB has an interface assigned to PEX
port1 and another interface assigned to PEX port2, then a PE should not be
connected to both of these interfaces. Instead, assign two or more physical ports to
be in PEX port1, and connect a PE’s interfaces to that. This is very similar to how
traditional IRF physical ports should be mapped to the same IRF logical port.
If a cabling mistake breaks this rule, it will be automatically detected and the interface
will be placed in a blocked state by the EIRF protocol.
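The outline below illustrates this relationship as pseudo-CLI. It is a sketch, not verbatim Comware syntax: the command names, PEX port number, virtual slot ID, and interface names are all assumptions for illustration, and the platform-specific commands are covered in the configuration steps later in this appendix.

    # Hypothetical sketch: create a logical PEX port, give it a virtual
    # slot ID, and bind two physical CB interfaces to that same PEX port.
    pex-port 1                           # logical PEX port for PE-1 (hypothetical)
     description Link-to-PE-1
     associate slot 100                  # virtual slot ID (hypothetical syntax)
     quit
    interface ten-gigabitethernet 1/0/1
     port group pex-port 1               # first uplink to PE-1
    interface ten-gigabitethernet 2/0/1
     port group pex-port 1               # second uplink, same logical PEX port

The key rule the sketch captures is that every uplink from a given PE lands on CB interfaces bound to the same logical PEX port, never split across two PEX ports.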

EIRF Implementation: Physical Port States

Figure A1-13: EIRF Implementation: Physical Port States

The three physical port states for an EIRF implementation are forwarding, down, and blocked.
• Forwarding: The interface is up and properly connected, and data can be sent over the link.
• Down: The physical link is disconnected, and traffic cannot be forwarded.
• Blocked: The physical link is in an error condition, due to misconfiguration or a cabling error. Most likely, the PE interface is connected to a physical CB port that is a member of a different logical PEX port than its other interfaces.

EIRF Implementation: PE identification – New PE

Figure A1-14: EIRF Implementation: PE identification – New PE

New PE identification can be compared to traditional IRF, which uses a member ID concept to identify devices. With traditional IRF, this member ID is used as part of the port numbering scheme for the IRF fabric. IRF member ID changes require a reboot to take effect.
With EIRF, the virtual slot ID is used to identify the PE. This ID must be unique within each EIRF fabric. The virtual slot ID replaces the original slot ID portion of the device’s port numbers. Unlike IRF, the virtual slot ID takes effect immediately after it has been configured; there is no need to reboot the PE for this purpose.
In the figure, a new PE has been added to the EIRF fabric. This can occur while the EIRF is in operation, without impact to existing traffic flows. The CB automatically computes the topology to prevent loops. After successful topology validation, hello packets are sent over the new links, which are moved into a forwarding state. This is similar to installing new line cards into a traditional chassis, which also has no impact on switch operations.

EIRF Implementation: PE Removal

Figure A1-15: EIRF Implementation: PE Removal

PE removal is similar to board removal in a traditional chassis. The PE configuration is removed from the running configuration, but maintained in the operational configuration. The visible output of display commands will not include the configuration for the removed PE.
Should the running configuration be saved after PE removal, the saved configuration will no longer include the configuration for the removed PE. This is not an issue, because the complete operational configuration remains. If the PE device is replaced by a new device, the new device simply inherits the original configuration from the previous PE.
However, if after PE removal you save the configuration and then reboot the EIRF fabric, the removed PE device’s configuration is lost, and its interfaces will be added to the EIRF in their default configuration states.

EIRF Implementation: CB chassis Option

Figure A1-16: EIRF Implementation: CB chassis Option

CB devices can be deployed using higher-end top-of-rack switches or chassis-based devices. With the chassis-based option, the CB is configured as a two-member IRF system. Each chassis would typically have two MPUs. Of the four total MPUs, one will be the master, and the other three will be standby MPUs that remain fully synchronized with the master. This is simply a traditional IRF configuration between the two chassis-based switches.
In the figure, PE1-4 will be connected and brought online as if they were line cards on this IRF chassis. The line card numbering is different from a traditional chassis deployment.

EIRF Implementation: CB Fixed-Port Option

Figure A1-17: EIRF Implementation: CB Fixed-Port Option

The CB can be deployed using two fixed-port devices, configured as an IRF fabric. Each fixed-port device has a single MPU; one device acts as the master MPU and the other as a standby MPU. The EIRF fabric has the same single point of management as a chassis-based solution.
Each PE device is added and becomes visible as another member device of the IRF system. Unlike the CBs, the PEs do not receive a full synchronization of the control plane.

EIRF Implementation: Forwarding Modes

Figure A1-18: EIRF Implementation: Forwarding Modes

PEs can operate in one of two different forwarding modes. One option is to configure
a PE to operate in central forwarding mode, in which the CB processes all traffic.
Another option is local forwarding mode, which allows each PE to process traffic
locally.

EIRF Implementation: Central Forwarding Mode

Figure A1-19: EIRF Implementation: Central Forwarding Mode

Central forwarding mode is appropriate when you have relatively lower-performance PEs. This does not necessarily mean low bandwidth capability; it could simply mean that the ASICs are not designed to handle complex network protocols.
In this mode, PEs simply forward all traffic to the CB. The CB makes all forwarding decisions, selects an egress interface, and traffic is forwarded out that interface.
EIRF table sizes are limited by CB device maximums. For example, a PE device may have a maximum MAC address table limitation of 60,000 entries, while its CB supports 200,000 entries. The maximum MAC address capability of the EIRF fabric is then 200,000 MAC addresses. The PE limitation is not relevant, since the PE merely acts as a “dumb” line card; all decisions are made by the CB.
Effectively, PE interfaces have the same feature set as CB interfaces, since they are
all controlled by the same MPU. If the CB ASICs support features such as MPLS and
TRILL, then the PE interfaces also support these features. In this way, relatively
inexpensive PEs can acquire the advanced features inherent to the more capable CB
devices.

EIRF Implementation: Local Forwarding Mode

Figure A1-20: EIRF Implementation: Local Forwarding Mode

Local forwarding mode is most appropriate when you have high-performance PE devices, with advanced ASICs that can handle the features you need.
In this scenario, the CB creates and maintains all Layer 2 and Layer 3 forwarding entries. These table entries are replicated to the PEs, enabling them to perform local forwarding functions.
Each PE can therefore perform local table lookups. If a received frame’s destination is out a local PE port, the PE can forward the traffic autonomously, without CB involvement. If the destination’s outgoing port is not local to the PE, it forwards the frame upstream to the CB, which acts as a fabric interconnect. Local forwarding mode requires an extensive ASIC feature set that is only available on higher-end PE device models.

EIRF Implementation: Unicast 1

Figure A1-21: EIRF Implementation: Unicast 1

The figure shows a unicast forwarding scenario in centralized forwarding mode. When a PE device receives a traditional Ethernet frame, it does not perform table lookups. The PE adds an EIRF header to the frame and sends it to the CB. The CB removes the EIRF header and adds the source MAC address to its table with the associated inbound source port, very much like classic Ethernet MAC address learning.
The CB then performs a table lookup, comparing the Ethernet frame’s destination MAC address to its learned MAC address table to determine the appropriate outbound port.

EIRF Implementation: Unicast 2

Figure A1-22: EIRF Implementation: Unicast 2

The CB could process the traffic using a typical Layer 2 table lookup, or it might process the frame using Layer 3 routing or MPLS, depending on its configuration. Either way, it makes a forwarding decision, selecting the appropriate PEX port toward the destination PE.
An EIRF header is added to the frame before transmission. This header indicates the appropriate outgoing port on the PE device. The PE receives this frame, removes the EIRF header, and forwards the traffic out the egress interface indicated in a field of the EIRF header.

EIRF Implementation: Multicast/Broadcast 1

Figure A1-23: EIRF Implementation: Multicast/Broadcast 1

For multi-destination traffic, the inbound leg from PE to CB is processed the same as unicast traffic. The PE receives the frame, adds an EIRF header, and sends it upstream to the CB. The CB removes the EIRF header and performs address learning.
It is on the downlink traffic, from CB to destination PE, that a multicast replication operation occurs.

EIRF Implementation: Multicast/Broadcast 2

Figure A1-24: EIRF Implementation: Multicast/Broadcast 2

If the CB receives a broadcast frame, it sends one copy of the traffic to each PE.
Broadcast frames are received by each PE and forwarded out all ports in the
destination VLAN.
If the CB receives a multicast frame, it parses its multicast routing table and sends one frame copy to each PE that is a participant in the multicast group. This could include the PE that originally received the frame inbound from an endpoint.
Each PE that receives a multicast frame uses traditional multicast table lookups to
determine which of its ports are connected to multicast members. The PE then
forwards the frame out all of those ports.

IRF and EIRF: PE-CB Connections

Figure A1-25: IRF and EIRF: PE-CB Connections

The figure provides several examples of PE-CB connection methods. The recommended configuration is to create a two-chassis IRF system for the CB, with each PE having at least one physical link to each physical IRF member. To optimize redundancy, LACP port groups can be attached to each CB member.
The second scenario on the top row shows multiple ports aggregated to a single physical CB member. This is technically possible and supported, but redundancy and scalability are compromised. The third scenario is also supported, but even less redundant, since only a single physical link provides the PE-CB connection.
The bottom row in the figure shows various invalid configurations. Direct PE-to-PE connections are invalid and unsupported. It is not acceptable to group PE devices using IRF and then connect that IRF system to the CB. Regardless of physical PE-CB link configurations, only single, physical PE devices can be connected to the CB.

EIRF Port Numbering

Figure A1-26: EIRF Port Numbering

When fixed-port switches are used to form a traditional IRF group, they use a three-level port numbering schema. These levels are the slot (or IRF member ID), the sub-slot, and the port.

Note
In a fixed-port switch, the sub-slot number will usually be zero, unless the switch supports additional modules. These modules are typically installed into the back of the switch.

For example, a typical interface name for a fixed-port CB switch is TenGi1/0/1. Parsing the numbers from left to right, this port is physically located on IRF member 1, in sub-slot 0, port number 1. Interface TenGi2/0/1 is located on IRF member 2, in sub-slot 0, port number 1.
When chassis-based switches are deployed into an IRF group, they use a four-level numbering schema: IRF member ID, slot, sub-slot, and port number. For example, consider the interface named TenGi1/2/0/1. The first “1” indicates that this port is located in IRF member ID 1, with a line card installed into slot number 2 of the chassis, sub-slot number 0, port number 1.

Note
Most chassis-based line cards do not support a sub-slot, so the number will usually be zero. Some router-based line cards support the installation of modules; in this case, the sub-slot number indicates the appropriate module slot in the line card.

The port numbering schema used by traditional IRF is also used by EIRF.

EIRF Port Numbering Fixed-Port CB

Figure A1-27: EIRF Port Numbering Fixed-Port CB

Currently, fixed-port CB-capable switches include the 5900 and 5930 series. EIRF port numbering for the CB switches is identical to that used for traditional IRF. Interface TenGi1/0/1 is the first, or left-most, port in a physical switch that has a member ID of 1. As previously noted, the sub-slot is typically 0.
As an administrator, you can easily determine whether a port is physically installed in a CB or a PE device: IRF member IDs assigned to CB devices are in the range of 1 through 9, while PE device IDs start at 100.
For example, a 5900G model could be deployed as a PE device. Its interfaces could be referred to as Gi100/0/1, Gi100/0/2, and so on. A 5900XG could be installed as another PE in this fabric, with ports TenGi101/0/1, TenGi101/0/2, and so on. A breakdown of this numbering appears below.
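The annotated breakdown below summarizes how to read a fixed-port EIRF interface name, using the PE example from this page; every element comes from the numbering rules described above.

    Gi 100 / 0 / 1
        |    |   +-- port number on the device
        |    +------ sub-slot (usually 0 on fixed-port switches)
        +----------- member/slot ID: 1-9 indicates a CB, 100 and up a PE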

EIRF Port Numbering Chassis CB

Figure A1-28: EIRF Port Numbering Chassis CB

Examples of chassis-based CBs include the 11900 and 12900 series. These CB devices use the same four-level schema previously described for IRF. One difference between IRF and EIRF is in the number of devices supported: while up to four chassis-based switches can be deployed in a single IRF group, EIRF supports a maximum of two.
The first number in the four-level numbering scheme is always the virtual chassis ID, and there can be two chassis-based CBs in an EIRF fabric. Therefore, CB port numbers start with either 1 or 2. The figure shows two port number examples: TenGi1/2/0/31 and TenGi2/2/0/31.
The EIRF fabric’s PE devices use this same four-level identification method. The first number of every PE device is always 9. This is the virtual IRF node number assigned to all PEs connected to chassis-based CBs. This virtual IRF node is always online, independent of IRF chassis ID 1 or 2.
Each PE is uniquely identified by its slot ID, starting with 1. The first PE may have interfaces Gi9/1/0/1 through Gi9/1/0/24. The third PE could have interfaces TenGi9/3/0/1 through TenGi9/3/0/24. The breakdown below illustrates this scheme.
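For comparison with the fixed-port case, this annotated breakdown reads a PE interface name behind chassis-based CBs, again using only the numbering rules described above.

    TenGi 9 / 3 / 0 / 1
          |   |   |   +-- port number on the PE
          |   |   +------ sub-slot (usually 0)
          |   +---------- virtual slot ID of the PE (unique per PE, from 1)
          +-------------- virtual IRF node ID 9, shared by all PEs behind
                          chassis-based CBs (CB members themselves are 1 or 2)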

IRF and EIRF: Network Access for Servers

Figure A1-29: IRF and EIRF: Network Access for Servers

There are several options for connecting server endpoints to the EIRF fabric. The best practice is to maximize redundancy by dual-homing the server to two physical PEs. The connection to each PE is accomplished with multiple physical links in a port aggregation group.
Another option is to configure a port aggregation group to a single PE. This is
supported, but of course redundancy is compromised. There is link redundancy due
to port aggregation, but there is no device redundancy.
You could also decide to connect a server using a single physical link. Again, this is
supported, but there is neither link nor device redundancy in this scenario.

IRF and EIRF: Network Access for Servers - Example

Figure A1-30: IRF and EIRF: Network Access for Servers - Example

The figure provides an example of configuring EIRF fabric ports to accommodate
server access. This example shows ports from multiple physical PE devices being
configured for server connectivity. This configuration is the same as that used to
configure Multi-Chassis Link Aggregation for a traditional IRF deployment.
In the example, LACP link aggregation is configured by first defining a logical
Bridge-Aggregation interface 201, and then configuring it for dynamic negotiation
mode. Next, interfaces ten101/0/1 and ten102/0/1 are configured as members of
this logical link aggregation group.
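
A minimal Comware CLI sketch of the configuration just described might look like
this (the interface and aggregation numbers are taken from the figure; adjust them
to your own fabric):

system-view
# Create the logical aggregation and set dynamic (LACP) negotiation mode
interface Bridge-Aggregation 201
 link-aggregation mode dynamic
 quit
# Add one member port from each physical PE (member IDs 101 and 102)
interface Ten-GigabitEthernet 101/0/1
 port link-aggregation group 201
 quit
interface Ten-GigabitEthernet 102/0/1
 port link-aggregation group 201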
From this configuration, you should be able to surmise two key facts. First, the
configuration is being performed on an EIRF fabric built using fixed-port CB devices.
Recall that with fixed-port CBs, PE port numbers use a three-level scheme, with
member IDs starting at 100. Second, the LACP member ports are on different
physical switches, since one port is identified by 101, and the other by a unique
slot ID of 102.

IRF and EIRF: Split Brain Situation 1

Figure A1-31: IRF and EIRF: Split Brain Situation 1

A split brain condition occurs when an IRF fabric is broken due to link failures. Both
portions of the split fabric use the same IP address, causing routing issues and IP
address conflicts.
The way to mitigate this condition is to configure Multi-Active Detection (MAD). This
feature is configured on the CB devices, enabling them to notify their connected PEs
that a split has occurred.
Each PE receives these messages and determines the number of members in each
IRF fragment to which it is attached. It joins the partial fabric with the most
members. If fragment sizes are identical, it joins the fabric with the lowest master
ID. The PE blocks ports connected to the other IRF fragments and continues to
function.
There are three variations of MAD – LACP MAD, Bidirectional Forwarding Detection
(BFD) MAD, and ARP MAD. LACP MAD is the recommended method for EIRF
fabrics. Detailed discussion of these methods is not covered in this course.
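
As a point of reference, LACP MAD is enabled on a Comware switch under an
existing dynamic aggregation interface that connects to an LACP-capable neighbor;
a minimal sketch (the aggregation number is an assumption):

system-view
interface Bridge-Aggregation 1
 link-aggregation mode dynamic
 mad enable
# Depending on the software release, the switch may prompt for an IRF domain ID.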

Note
For more detailed information on MAD, see the Virtual Connect and HP A-
Series switches IRF Integration Guide.
http://h30507.www3.hp.com/hpblogs/attachments/hpblogs/143/797/1/VC-IRF-
intergration-white-paper-v2.pdf

Learning Activity: EIRF Operation Review


Write the letter of each description from the list below in the space provided under
the matching numbered EIRF component.

EIRF Components:

1. CB
_______________
_______________
_______________

2. PE
_______________
_______________
_______________
_______________
_______________
_______________

3. PEX port
_______________
_______________
_______________
_______________
_______________

EIRF Component Descriptions:

a. Can be two physical chassis grouped into an IRF
b. Manually created on the CB
c. Lowest layer in EIRF
d. Fixed-port device
e. No direct connections between them
f. Chassis or high-performance fixed-port device
g. Switches must be manually converted via BOOTROM menu and then rebooted
h. Identified by a virtual slot ID
i. Logical port between PE and CB
j. Can contain multiple physical ports
k. Each one requires a separate PEX port; can’t be connected to two PEX ports
l. Physical ports are aggregated automatically
m. Highest layer of EIRF fabric
n. Uses 10G or 40Gbps connections

Learning Activity: Answers


1. CB
Highest layer of EIRF fabric (m)
Chassis or high-performance fixed-port device (f)
Can be two physical chassis grouped into an IRF (a)
2. PE
Lowest Layer in EIRF (c)
Fixed-port device (d)
No direct connections between them (e)
Switches must be manually converted via BOOTROM menu and then rebooted (g)
Each one requires a separate PEX port, can’t be connected to two PEX ports (k)
Identified by a virtual slot ID (h)
3. PEX port
Logical port between PE and CB (i)
Can contain multiple physical ports (j)
Physical ports are aggregated automatically (l)
Uses 10G or 40Gbps connections (n)
Manually created on the CB (b)

Design Considerations

Figure A1-32: Design Considerations

The figure summarizes some EIRF design considerations. Regarding cabling, PEs
should only have connections to CBs, and never to each other. A PE-to-PE
connection would cause a loop in the logical EIRF device.
Only one layer of PE devices is supported in an EIRF fabric. All PE devices must be
directly connected to the CB. It is not possible to connect PE devices to an
intermediate layer of PE devices, which then connect to the CB.
This in turn brings up an important cabling consideration related to port availability.
For a deployment that includes eight access switches, each one should have at
least two CB connections for redundancy, for a total of sixteen. The CB devices
must therefore have sixteen available ports to successfully deploy an optimal EIRF
fabric.

Design Considerations: Deployment Planning

Figure A1-33: Design Considerations: Deployment Planning

The figure summarizes a high-level deployment plan. The first step is to plan the
network by identifying CB member devices, PEs, and PEX ports. Next, the CB
devices should be virtualized into a single, logical IRF device. The third step is to
configure the PEX links for each PE. Each PEX port is assigned a virtual slot ID,
and then physical interfaces are assigned to the logical PEX port. This completes
the basic CB configuration.
The next step is to integrate PEs with the CB. The operating mode of each PE
must be manually changed to PEX mode from the BootROM. Once PE ports are
properly cabled and the PE is booted, it is a “Plug and Play” scenario. The EIRF
protocol automatically performs a software check and ensures the correct firmware
is downloaded to each PE device. After any required download, each PE reboots
and comes online as part of the EIRF fabric.

Configuration Steps for EIRF

Figure A1-34: Configuration Steps for EIRF

The figure reveals the high-level configuration steps for EIRF. This scenario uses
two 5900-series devices as the CB, and a 5900 as a PE. Interconnections are via
10Gbps interfaces, with a 1Gbps link for server access.

Step 1: Configure IRF for the CB Devices

Figure A1-35: Step 1: Configure IRF for the CB Devices

The first step is to configure the two CB devices for traditional IRF. LACP MAD
should also be configured.
It is assumed that traditional switch configurations such as IRF are already
understood.
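
For orientation, a minimal Comware IRF sketch for the first CB might look like the
following (the member IDs, priority, and port numbers are assumptions for
illustration; the LACP MAD portion was sketched earlier in this appendix):

system-view
irf member 1 priority 32
interface Ten-GigabitEthernet 1/0/49
 shutdown
 quit
irf-port 1/1
 port group interface ten-gigabitethernet 1/0/49
 quit
interface Ten-GigabitEthernet 1/0/49
 undo shutdown
 quit
irf-port-configuration active
# Repeat on the second CB with member ID 2 and irf-port 2/2, then reboot that
# switch so it joins the fabric as the standby member.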

Step 2: Prepare the PEX Firmware Image

Figure A1-36: Step 2: Prepare the PEX Firmware Image

Step 2 involves preparation of the PEX firmware image. This image must be
installed to flash on the CB. The PE will be able to download this PEX firmware
image during the initial boot process. Once downloaded, this PEX firmware is
saved in local flash on the PE. From that point the PE will boot from this local
image as a member of the EIRF fabric.
The figure shows the normal firmware images required for the 5900 CB device.
You can see that images for the PE device have also been copied to flash. The
top two files listed are the boot and system images for PE devices.
PEs download this image during initial startup, and then reboot using this new
image. When they come back online they receive a Hello message from the CB.
This Hello message helps the PE confirm that its PEX firmware version is correct,
and that additional firmware downloads are not required.

Step 2: Prepare the PEX Firmware Image 2

Figure A1-37: Step 2: Prepare the PEX Firmware Image 2

Simply saving these PEX images to flash on the CB is not sufficient. The CB must
be able to select which files to send to each PE switch model that comes online.
Both the boot and system images are bound to the PE device model.
In this example, the images are bound to a PEX device model called PEX-5900. To
do this, the CB command “boot loader” is used, along with a “pex” keyword,
followed by the device model. This indicates that the boot loader syntax is not for
the local CB device, but for 5900 PE devices that may come online.
The “display boot-loader pex” command can then be used to validate which image
files will be used for which PE device models.
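
Based on the description above, the binding and verification could look something
like this (the file names are hypothetical, and the exact option order varies by
Comware release):

<CB> boot-loader file boot flash:/5900-pex-boot.bin system flash:/5900-pex-system.bin pex PEX-5900
<CB> display boot-loader pex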

Step 3: Create a PEX Port with Virtual Slot-ID

Figure A1-38: Step 3: Create a PEX Port with Virtual Slot-ID

The third step on the CB involves defining the PEX port. The maximum number of
PEX ports that can be defined depends on the platform being deployed. If a PEX
port is removed and its slot ID is changed, the PE will reboot.
In the example, PEX port 1 is defined and a description is provided. The
description could include the rack or device number of the PE device. The
“associate 101” command assigns a virtual slot ID to the PEX port. This
assignment is locally significant, and will be reflected in the interface port
numbering schema for the attached PE device.
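
A sketch of this step, following the description above (the PEX port number,
description text, and slot ID are illustrative):

system-view
pex-port 1
 description Rack12-PE1
 associate 101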

Step 4: Associate PEX Port with a Physical Interface

Figure A1-39: Step 4: Associate PEX Port with a Physical Interface

Now that the logical PEX port has been defined, physical interfaces can be bound
to it. Multiple interfaces are required for redundancy. It is recommended that at
least one interface from the PE be attached to each physical CB member device.
The maximum number of interfaces that can be connected is device dependent.
When interfaces are assigned to the PEX port, they revert to their default
configuration.
In this example, PEX port 1 has been defined, and an interface from each physical
CB IRF member is assigned to this logical port. This includes interfaces Ten1/0/1
and Ten2/0/1.
The port group command shown here is identical to the command used for
traditional IRF configuration. All the commands shown thus far have been issued
on the CB.
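
Continuing the sketch from step 3, the physical interfaces could be bound to the
same hypothetical PEX port like this:

pex-port 1
 port group interface ten-gigabitethernet 1/0/1
 port group interface ten-gigabitethernet 2/0/1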

Step 5: Configure PE in PEX Mode Using Boot Menu

Figure A1-40: Step 5: Configure PE in PEX Mode Using Boot Menu

The next step is to configure PE devices to operate in PEX mode. This is
accomplished through the boot menu of the PE device. As the PE device is
booting, you must use the “Ctrl + B” option to break out of the normal boot process
and enter the boot menu. Some switch models may use a different key sequence
to break into the boot menu.
In this menu, you can use the “Change Work Mode” option to configure the switch
to operate as a PE device. Once this is enabled, the switch will reboot. A fairly
“Plug-and-Play” type of scenario exists at this point. The PE automatically
becomes part of the EIRF fabric.

Step 6: Verify

Figure A1-41: Step 6: Verify

The final step is to verify your configuration efforts. You can use the “display
pex-port” command to display the logical EIRF ports, their status, and the
associated slot IDs, along with the status of the EIRF fabric.
The “display pex-port verbose” command reveals which physical ports are
participating in each PEX port.
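
Collected in one place, the verification commands named here are run from the CB
(output omitted):

<CB> display pex-port
<CB> display pex-port verbose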

Step 6: Verify 2

Figure A1-42: Step 6: Verify 2

The figure shows variations of the “working-mode” display commands. These
commands reveal that the CB-attached PE device is operating in PEX mode, and is
associated with slot ID 101.
The “display device” command allows you to see which physical CB devices are
operating as the Master and Standby units in the CB IRF group. All PE devices are
listed as line cards in the normal state. PE devices can never become the master of
the IRF.

Step 6: Verify 3

Figure A1-43: Step 6: Verify 3

You can also use the “display log” option to see any messages that may be related
to EIRF.

Summary

Figure A1-44: Summary

EIRF is based on IRF – a mature, proven technology. While traditional IRF can
only group like devices, EIRF can group various models at the access and core
layers into a single, centrally managed system.
EIRF simplifies the network topology, reduces configuration tasks, decreases
configuration complexity, and eases initial deployments.
An EIRF fabric is composed of Controlling Bridges (CBs) at the core layer and Port
Extender (PE) devices at the access layer.
PEX ports are used as the PE-to-CB connections. These logical ports can have
multiple, link-aggregated physical ports connected to multiple physical CB member
switches.
Both unicast and multicast traffic are handled efficiently in the loop-free EIRF
fabric.
EIRF configuration is very straightforward, and is similar in many ways to a
traditional IRF configuration.

Learning Check
After each module, your facilitator will lead a class discussion to capture key
insights and challenges from the module and any accompanying lab activity. To
prepare for the discussion, answer each of the questions below.
1. EIRF is based on IRF, and so has which of the following features? (Choose three.)
a. EIRF can only group identical switch-series models and types
b. EIRF can group different device types into a single logical entity
c. EIRF can group access and core switches into a single logical entity.
d. EIRF is appropriate for both Data Center and campus deployments.
e. EIRF requires that all access layer switches be directly connected to each
other.
2. Which three statements are true about EIRF components (Choose three)?
a. The CB function is only supported on chassis-based switches
b. Two physical switches can be grouped into an IRF system to serve the
CB function.
c. The PE function is only supported on fixed-port devices
d. Various models of chassis-based and fixed-port devices can serve as PE
devices.
e. A PEX port is a logical port that can contain multiple physical ports.
f. Each PE must only have a single connection to a CB to avoid STP loops.
3. How are new PEs identified in an existing EIRF deployment?
a. EIRF uses a virtual slot ID to identify PEs, and the PE must reboot for this
change to take effect.
b. When a PE is added, traffic flow for existing PEs is disrupted for about 3
seconds.
c. When a new PE is added, the CB automatically computes the new
topology to prevent loops
d. PE identification operates in an identical way to how the IRF member ID
is used to identify member devices.
4. What two statements are true about how EIRF forwards frames (choose two)?
a. Central forwarding mode is appropriate when you have PEs that either
lack the forwarding performance or capabilities that you require.
b. EIRF uses central, local, and broadcast forwarding modes.
c. With local forwarding mode, the CB creates and maintains forwarding
tables, and shares appropriate entries with each PE device.
d. With central forwarding mode, the CB is responsible for forwarding all
frames, based on table lookup information provided by the PE.

Learning Check Answers


1. b, c, d
2. b, c, e
3. c
4. a, c

EVB - VEPA
Appendix 2

Objectives
This module introduces EVB and VEPA technologies. This suite of services
coordinates virtual hypervisor systems, physical switches, and the management
platform to provide a more scalable, more easily managed data center
environment.
After completing this module, you should be able to:
 Understand the EVB/VEPA protocol
 Describe the advantages of the EVB model
 Understand all components involved in a complete EVB solution
 Understand the integration with the Hypervisor
 Understand the role of the HP 5900v Distributed vSwitch
 Describe the configuration process
 Describe the operational process of EVB

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

EVB Overview

Figure A2-1: EVB Overview

This module will introduce EVB, describe EVB operation, and review EVB
configuration.

EVB and VEPA

Figure A2-2: EVB and VEPA

Edge Virtual Bridging (EVB) is defined in the IEEE 802.1Qbg standard. The
purpose of this standard is to enable Virtual Machines (VMs) to share a common
bridge port for forwarding services.
This standard includes Virtual Ethernet Port Aggregator (VEPA) technologies, and
protocols to help automate the coordination and configuration of network
resources. This results in enhanced network visibility for VM-to-VM configuration
and communication.

Review of Hypervisor Networking

Figure A2-3: Review of Hypervisor Networking

A hypervisor “VM host” is a hardware device that is capable of creating and
running multiple VMs. Common hypervisor systems in the industry include
VMware’s ESX, Microsoft’s Hyper-V, and Linux KVM. All of these systems can
support multiple virtual environments inside a single physical host, or a collection
of physical hosts.
The hypervisor environment includes one or more software-based switches, or
vSwitches. Each VM has a virtual NIC, or vNIC, that connects to the hypervisor’s
vSwitch. This vSwitch provides connectivity between the VMs running on the same
hypervisor. The vSwitch is also bound to a physical NIC (pNIC), which provides
connectivity to the physical network infrastructure.
The figure shows a typical VMWare infrastructure that includes two physical ESX
hypervisor hosts named ESX1 and ESX2. The vSwitch inside each machine
connects to two VM servers and a physical NIC. The physical connection to an
external switch can be one or more Ethernet links. In this scenario, each ESX host
has a single connection to an external HP 5900 Switch Series.
This environment also includes vSphere, which is VMWare’s hypervisor
management system. vSphere provides a centralized platform from which you can
define, modify, and control all virtual environments across all physical hosts.
HP’s IMC management platform is also a part of this environment. The IMC
platform has been enhanced by installing a Virtual Application Network (VAN)
module. While IMC adds significant value, it is not required for a pure hypervisor
environment. However, if you are deploying an EVB solution with your hypervisor
environment, the IMC VAN Connection Manager is a required component.

Traffic flow with Classic Hypervisor Networking

Figure A2-4: Traffic flow with Classic Hypervisor Networking

The figure shows how VM-to-VM traffic flows within the same hypervisor host.
When VM-11 and VM-12 need to communicate, traffic never leaves the ESX1 host.
The internal vSwitch can handle this intra-host traffic.
This vSwitch has vPorts that connect to each VM’s vNIC. Each VM’s connected
vPort is assigned to a port group, which in turn can be assigned to a VLAN ID. In
this example, the vPorts for both VMs have been assigned to port group PG2,
which has been configured with VLAN 2.
Therefore, when VM-11 sends an untagged broadcast frame, all other VMs in the
same port group on ESX1 will receive this frame, since they are all in the same
broadcast domain. Unicast frames between hosts on the same VLAN are also
handled internally by the vSwitch, just as they would be with an external, physical
switch.
This also means that traffic between VM-11 and VM-12 never leaves the ESX1
hypervisor host. As a result, this communication is not visible to any physical
network devices, such as the HP 5900 switch in this scenario.

Traffic flow with Classic Hypervisor Networking

Figure A2-5: Traffic flow with Classic Hypervisor Networking

When VMs on different hypervisors must communicate, the scenario remains fairly
similar to the previous example. The vSwitches inside ESX1 and ESX2 have both
assigned their respective VMs’ vPorts to VLAN 2. VM-11 sends an untagged frame
to the vSwitch, which has not learned the destination MAC address for VM-13. It
adds a standard 802.1Q tag of VLAN 2 to the frame and sends it to the physical
HP 5900 Switch Series. The physical switch has been performing normal network
forwarding, and so has likely learned the destination MAC address. The switch
simply forwards the frame to the ESX2 hypervisor.
The target vSwitch removes the VLAN 2 tag and delivers the frame to VM-13. In
this case, the traffic is of course visible to the physical network.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Traffic Visibility

Figure A2-6: Traffic Visibility

When the physical network has visibility to VM traffic flow, additional services can
be provided. Physical switches like the HP 5900 can offer many advanced
services, in a way that is familiar to network professionals. This includes services
like the following:
 QoS provides preferred and/or low-latency delivery for certain data types.
 sFlow offers network analysis reporting, allowing visibility into application
usage patterns and statistics.
 ACLs can be applied to improve security.
 Port mirroring enables advanced troubleshooting methods by allowing you to
see all packets between certain devices, or on certain VLANs.
Few, if any, of these features are available for traffic that remains inside the
hypervisor environment, so the physical network no longer has control over a large
portion of network traffic.
The goal of the EVB solution is to ensure that the physical network handles all
traffic flows. This ensures that all traffic is handled in a consistent manner, and that
all traffic can be processed by the rich feature set available to physical switches.

VLAN Network Management

Figure A2-7: VLAN Network Management

In a traditional hypervisor model, VLANs are configured in the hypervisor
management tool, such as VMware’s vSphere product. This tool is used to
configure port groups, VLANs, and other aspects of vSwitch operation. These
vSwitches act as the access layer switches for all VMs.
The physical connections between the hypervisor vSwitch and the physical
switches are configured as trunk ports. Both the hypervisor and the physical
switch must be configured to support trunking with the appropriate VLANs. This
means that VLANs must now be managed from two disparate tools – the
hypervisor’s management tool and traditional network configuration and
management tools.
In a classic hypervisor environment, VLANs for VMs must be configured on all
physical switches, and enabled on all hypervisor-connected trunk ports. Even
when a host has no VM on a specific VLAN, this VLAN should still be enabled on
the interface connected to that ESX host.
In the example, ESX1 contains VM-11 and VM-12, both configured with port group
2 on VLAN 2. ESX2 is hosting VM-13 and VM-14, assigned to VLAN 3 in port group
3. It would seem the HP 5900 switch only needs to support VLAN 2 on the
connection to ESX1, and VLAN 3 toward ESX2.
However, this would hinder the ability to use vMotion, a feature that eases
transferring VMs to different physical hosts. It is often vital that an administrator
can move any VM to any physical host. For this reason, all VLANs should be
supported over all trunk links. In a larger environment with many ESX hosts, this
can create large broadcast domains, which wastes bandwidth, degrades
performance, and reduces network scalability.

EVB Model

Figure A2-8: EVB Model

The EVB model provides consistent traffic flow handling, because all traffic is
handled by the physical switch. No VM-to-VM traffic is handled by the internal
vSwitch. This includes traffic between VM-11 and VM-12, as well as traffic between
VM-11 and VM-13.
Therefore, all traffic can be treated by the same network policies, and leverage the
same services. This includes QoS, sFlow, ACLs, traffic mirroring, and more.
With EVB, the physical switch uses a feature called reflective relay. A traditional
Ethernet switch will never forward traffic back out the interface on which it was
received. In the figure, the reflective relay feature is what allows the switch to
receive traffic inbound from VM-11 and forward it back out the same interface to
VM-12. Special systems must be installed on the hypervisor to support this
feature.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

EVB VLAN Network Management

Figure A2-9: EVB VLAN Network Management

VLAN management is greatly improved with the EVB model, because VLANs need
only be deployed where they are actually needed. With EVB, port VLAN
membership is dynamically adjusted to accommodate the dynamic nature of VMs.
Products like VMWare’s vMotion make it easy for server administrators to move
VMs to a different physical server. EVB ensures the required network configuration
for these moves happens automatically.
In the figure, the physical switch connection to ESX1 only supports VLAN 2, and
the connection to ESX2 only supports VLAN 3. If something like vMotion is used to
move VM-11 from ESX1 to ESX2, the EVB solution automatically adjusts the
VLAN support. The link to ESX2 would be automatically configured to support
VLAN 2. If VM-12 was also moved, and no other VMs on ESX1 required VLAN 2,
then VLAN 2 would automatically be removed from the trunk port on the physical
switch.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Terminology and Concepts 1

Figure A2-10: Terminology and Concepts 1

The figure introduces EVB terminology and concepts, as described here:


 EVB Bridge: the physical switch connected to the hypervisor.
 EVB Station: the physical hypervisor host that connects to the EVB Bridge.
This physical EVB station can host multiple vSwitches, called Edge Relays in
an EVB setting.
 Edge Relay (ER): replaces the traditional vSwitch, and ensures that VM traffic
is egressed to the physical EVB Bridge. Multiple pNICs can be installed into
an EVB station, with an ER bound to each one.
 Uplink Relay Port (URP): one of two ER interface types, this is the
connection to the EVB Bridge. ERs can have one or more URPs for link-
aggregated redundancy.
 Downlink Relay Port (DRP): the second of two ER interface types, it connects
to a VM’s vNIC and is created on demand. One DRP is created per activated
VM.
 Virtual Station Interface (VSI): The EVB term for a VM’s vNIC, it connects to
the DRP of the Edge Relay.

NOTES

_______________________________________________________________

_______________________________________________________________

Terminology and Concepts 2

Figure A2-11: Terminology and Concepts 2

The Service Channel or S-Channel is a virtual link between the ER and EVB
Bridge. The link uses the VEPA protocol, which is very similar to the QinQ
mechanism of adding an outer VLAN tag to the original frame. The S-Channel is
negotiated between the ER and EVB Bridge using an LLDP extension called the
Channel Discovery and Configuration Protocol (CDCP).
The reflective relay feature must be enabled on the physical switch’s S-Channel
interface. This feature allows inbound frames to be sent back out their ingress
interface. A traditional Ethernet switch will never allow this, making this feature a
key enabler of EVB bridge functionality.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

S-Channel Identifier

Figure A2-12: S-Channel Identifier

Each ER-to-EVB Bridge S-Channel is identified by a configuration pair. The
S-Channel name is based on the host interface. If an S-Channel is activated on
interface Ten1/0/1, then the name of the logical interface is S-Channel 1/0/1. If the
S-Channel were enabled on bridge aggregation interface BAGG3, then the name
of the logical interface would be S-Channel Agg3.
The S-VLAN Identifier is the outer VLAN tag added to the S-Channel’s QinQ
tunnel. The name and S-VLAN Identifier together form the S-Channel ID.
The standard allows a device to support one or more S-Channels, operating in
one of two available modes. By default, VEPA mode is used to support a single
S-Channel on service VLAN 1. This is reflected in the configuration as a number
following the interface name, separated by a colon. For example, you might see
interface S-Channel 1/0/1:1. This means that the S-Channel is configured on
interface Ten1/0/1, using VLAN 1.
The other mode is Multichannel VEPA mode, which supports multiple S-Channels
over a single interface. This mode is not widely used, and will not be discussed
further in this course.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

S-Channel Sub-interfaces

Figure A2-13: S-Channel Sub-interfaces

The S-Channel is configured with sub-interfaces. It is these sub-interfaces that
provide a logical connection to a VM’s vNIC. When a vNIC is activated, a new
sub-interface is dynamically created on the EVB Bridge. This sub-interface is
dynamically removed from the S-Channel interface when the VM is stopped, or
moved to another host using vMotion.
On the switch, it is the sub-interface that is configured for things like the VLAN
assignment, QoS, and ACLs.
The figure above shows two VMs online. To accommodate this, the EVB Bridge
configuration will include interface S-Channel 1/0/1:1.1 for VM-11, and interface
S-Channel 1/0/1:1.2 for VM-12. This is how a single physical interface on the switch
is logically separated to accommodate reflective relay. While traffic between VM-11
and VM-12 arrives on the same physical interface, the switch perceives it as being
on separate, logical sub-interfaces.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

S-Channel Setup Negotiation

Figure A2-14: S-Channel Setup Negotiation

CDCP is responsible for negotiating S-Channel setup. This EVB-defined extension
to LLDP uses special TLVs to negotiate S-Channel parameters. These CDCP
exchanges occur between the ER and EVB Bridge using a special destination
MAC address of 0180C2-000003. This MAC address must be configured on the
physical switch.
Incidentally, this address is referred to as the “Nearest non-TPMR bridge address”.
TPMR stands for Two-Port MAC Relay. This is not of practical importance to EVB
deployments, and is only mentioned here due to its inclusion in many descriptions
of the standard.
This special MAC address helps CDCP to operate on the same interface as other
LLDP extensions without conflict. It also ensures that if there is a Two-Port MAC
Relay device between the EVB Bridge and the ER, it will transparently pass the
frame, instead of treating it as a typical LLDP frame, which would not be
forwarded.

Note
A TPMR is simply a bridge device that supports a subset of typical MAC bridge
functions. It is transparent to all traffic flowing through it, except traffic
addressed specifically to it, or for neighbor agents, such as LLDP. The special
MAC address used by CDCP ensures that its frames will traverse any such
device without issue.
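
On Comware switches, LLDP agents are enabled per interface; a minimal sketch of
enabling the nearest non-TPMR agent that CDCP rides on (the interface name is an
assumption):

system-view
lldp global enable
interface Ten-GigabitEthernet 1/0/1
 lldp agent nearest-nontpmr admin-status txrx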

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

S-Channel Configuration Negotiation

Figure A2-15: S-Channel Configuration Negotiation

The figure summarizes the key factors related to CDCP’s S-Channel negotiation,
including the following:
 Reflective Relay: This setting enables the egress of frames back out their
ingress port.
 Auto-configuration: CDCP will typically request the reflective relay feature
automatically, eliminating the need to manually configure it.
 Manual Configuration: If the hypervisor’s ER is incapable of negotiating the
reflective relay feature, the network administrator can manually configure it on
the physical switch – the EVB Bridge device.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

EVB VSI Manager

Figure A2-16: EVB VSI Manager

EVB Bridges service each VM’s vNIC, which is referred to as a Virtual Station
Interface (VSI). Sub-interfaces must be configured on the EVB Bridge to support
these connections. This configuration can be done manually or automatically.
Manual configuration of sub-interfaces is not recommended because it is difficult to
maintain configuration consistency. This is especially true in a typical data center
where VMs are moved between physical hosts. These moves would necessitate
manual changes to support VLANs and other configurations relevant to each VM.
A better solution is to allow sub-interfaces to be automatically configured by the
VSI manager. This tool integrates with the hypervisor management tool in order to
learn about VM starts, stops, and migrations. It then communicates with the EVB
Bridge, automatically modifying sub-interface configurations as appropriate.
HP’s Virtual Application Networks (VAN) Connection Manager is a software
module for HP’s IMC management platform. It functions as a VSI manager by
integrating with market-leading hypervisors, such as vSphere, Hyper-V, Xen, and
KVM. A template-based approach is used to ensure consistent configuration
across the network infrastructure.
You use IMC VAN to define VSI templates with specific VLANs, QoS rules, ACL
filters, and more. These templates are made available to the hypervisor, where you
define traditional network profiles, called port groups. You bind an appropriate VSI
template to this port group.
As a result, the ER knows which VSI template to use for each VM, and announces
this to the EVB Bridge. The EVB Bridge queries the VSI Manager, which delivers
the appropriate configuration. In this way physical switches are automatically
configured to accommodate a dynamic data center environment.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

EVB VSI Manager: VSI Templates

Figure A2-17: EVB VSI Manager: VSI Templates

The templates that you define in the VSI Manager consist of two objects. One is a
network object, which contains the VLAN assignment.
The other object is the VSI Type, which is bound to the network object. The VSI
Type contains the actual configuration settings for ACLs, QoS, and more. Most of
these configuration settings are optional, but the VSI Type itself must always be
bound to the network object.
A properly completed template configuration is released as a VSI Type version.
For example, you might define a sales server template, to be applied to certain
VMs. When these VMs come online, sales template version 1 will be applied.
When you modify this template, you release it as a new VSI Type version, which
can then be applied to the VMs. Version control is automatically enforced.
HP’s IMC VAN system allows various roles to be defined for administrative staff
members. Role security settings can be used to define which network objects each
IMC administrator can access and control. This role security can also be used to
determine which VSI types a particular administrator can manage.
The hypervisor management platform is used to define port groups, and associate
them to a VSI type. The VSI Type version is bound to a network on the VAN
server. This ensures that the port group will be configured with the correct ACL,
QoS, and VLAN service.
The hypervisor manager only needs to configure vNICs and associate each one to
a traditional Port Group, just as they did before the implementation of EVB. The
actual control over which VSI type is assigned to which port group is controlled by
IMC VAN templates.

HP 5900v: EVB Interaction with EVB Station 1

Figure A2-18: HP 5900v: EVB Interaction with EVB Station 1

Standard hypervisor vSwitches do not support EVB or VEPA. The hypervisor can
only support this functionality by installing an EVB-capable product like the HP
5900v.
The HP 5900v is EVB and VEPA-capable, and so provides CDCP communication
with the EVB HP 5900 physical switch. 5900v configuration can be orchestrated
through the IMC VAN Connection Manager.
The HP 5900v can coexist with a traditional vSwitch in the hypervisor. At least one
physical NIC on the host server must be assigned to the HP 5900v for EVB and
VEPA functionality. A traditional vSwitch can have other VMs associated with it,
using a separate physical NIC on the host. The limitation is that the traditional
vSwitch and the HP 5900v cannot share a physical interface. Each must have
exclusive access to its own physical NIC on the hypervisor host.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

HP 5900v: EVB Interaction with EVB Station 2

Figure A2-19: HP 5900v: EVB Interaction with EVB Station 2

The HP 5900v is responsible for negotiating the S-Channel with the EVB Bridge,
and also for announcing the addition or removal of VMs. A separate protocol is
used for each of these responsibilities.
CDCP is used for S-Channel set up between the ER and the EVB Bridge, as
previously discussed. The Edge Control Protocol (ECP) is used to announce VM
additions and removals, thus automating vNIC session setup and teardown. ECP’s
transition state machine includes the pre-associate, associate, and de-associate
states.
The ER sends a pre-associate message to the EVB Bridge for VMs that are in the
process of coming online. This gives the EVB Bridge time to prepare and configure
a new sub-interface for the VM.
The associate state is applicable when the VM is operational and can be used in
production. This state is also used when a VM is moved to another hypervisor.
During the transition, the VSI remains in the associate state with the original EVB
Bridge interface, while the target host sends a pre-associate announcement to the
new EVB Bridge interface.
When the transfer is complete, the original host sends a de-associate message to
the original interface, thus removing the sub-interface from the EVB Bridge. The
target host moves to an associate state with the new EVB Bridge interface. The
transition is fairly seamless, since the target interface was already prepared and
configured before the VM came online.

HP 5900v Components

Figure A2-20: HP 5900v Components

The HP 5900v consists of a Virtual Forwarding Engine (VFE) and a Virtual Control
Engine (VCE).
The VFE is the HP 5900v data plane component, and is installed on each
hypervisor, replacing the standard vSwitch for the EVB deployment. This
component has no user interface or any type of local management capability. The
VFE must be configured and managed by the VCE.
The VCE is the control plane of the HP 5900v, and runs as a VM on a hypervisor
host. It is shown as a separate logical server in the figure, running as a VM on the
physical host. Arrows pointing from the VCE to the VFEs indicate that
configuration information is being transferred to modify VFE operation.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

HP 5900v Communication 1

Figure A2-21: HP 5900v Communication 1

For EVB deployments, VMware’s vSphere product requires an HP 5900v plugin
module. This plugin is used to deploy the HP 5900v VFEs to ESX hosts. VFEs are
deployed to ESX hosts in a similar way to how traditional VMware vSwitches are
distributed. However, for HP 5900v support, VMware’s distributed vSwitch feature
must be licensed.
The plugin is also used to bind physical NICs to HP 5900v VFE instances. The
administrator can select which physical links on a host server will be used as
Uplink Relay Ports (URPs). Multiple interfaces can be bound to the HP 5900v for
redundancy.
The VFE acts as an Edge Relay on the host. It can be the sole virtual switch on
the host, or it can coexist with the traditional vSwitch. For example, the traditional
vSwitch could use physical interfaces 0 and 1 on the hypervisor host, while the
HP 5900v uses physical interfaces 2 and 3.

HP 5900v Communication 2

Figure A2-22: HP 5900v Communication 2

The management plugin also defines the VMware port group to VSI Type version
mapping. The VCE component communicates this information to the vSphere
host, which sends port group configurations to ESX hosts. This configuration does
not include VLAN or QoS information, only the VSI type and version data. It is
simply an internal VSI type index number.
This information is distributed to the VFE instances as VMs are brought online.
This means that the VCE is not aware of the actual ACL and QoS rules to be
applied. It simply knows the internal identifier of the VSI type profile that should be
applied.

Learning Activity: EVB Operation and Component Review
Refer to the figure. Write the letter pointing to the component in the figure next to
the appropriate component name listed below. Provide a brief description of each
component in the space provided.

 EVB Bridge:

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________
 EVB Station:

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________
 Edge Relay (ER):

_______________________________________________________________
 Uplink Relay Port (URP):

_______________________________________________________________
 Downlink Relay Port (DRP):

_______________________________________________________________
 Virtual Station Interface (VSI):

_______________________________________________________________

_______________________________________________________________
 S-Channel:

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________
 VSI Manager:

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Learning Activity: Answers


 EVB Bridge: (g) an HP 5900-series or similar pSwitch connected to the
hypervisor. Uses the reflective relay feature to switch frames out the same
interface on which they were received.
 EVB Station: (a) the physical hypervisor host that connects to the EVB
Bridge, which can host multiple Edge Relays (vSwitches).
 Edge Relay (ER): (e) the HP 5900v replaces the traditional vSwitch so VM
traffic is egressed to the EVB Bridge.
 Uplink Relay Port (URP): (c) the EVB Station-to-Bridge link.
 Downlink Relay Port (DRP): (d) The ER-to-vNIC link, one per active VM.
 Virtual Station Interface (VSI): (b) The EVB term for a VM’s vNIC
 S-Channel: (f) A virtual channel negotiated over the EVB station-to-Bridge
URP links. Interface S-Channel 1/0/1:12 indicates that physical interface
Ten1/0/1 is being used for a URP connection for an S-Channel, and has a
sub-interface to support an active VM that exists on VLAN 12. This sub-
interface is dynamically created and removed as VMs are activated and
deactivated, using an extension of LLDP called CDCP.
 VSI Manager: (h) IMC VAN Connection Manager integrates with vSphere or
similar hypervisor manager to learn about VM starts, stops, and migrations.
Templates on the VSI Manager control VLANs, QoS rules, ACLs applied to
each particular VM. The 5900v ER tells the 5900 EVB Bridge which template
to use. The EVB Bridge then queries IMC VAN Connection Manager for this
configuration.

HP 5900v Installation Prerequisites

Figure A2-23: HP 5900v Installation Prerequisites

The figure summarizes the prerequisites for installing an HP 5900v-based solution.
This scenario assumes the use of a VMware environment.
IMC 7.0 or later must be installed and running, along with the Virtual Network
Manager, which is part of the basic platform. The VAN Connection Manager must
also be installed.
VMware vCenter Server should be installed and running, as should the VMware
ESXi hosts.
Key information should also be gathered in preparation for the actual installation of
the HP 5900v. This includes the IMC server’s IP address, ports, and administrator
login credentials. You will also need the VMware vCenter IP address and login
credentials, as well as an IP address to assign to the VCE.

Installation Flow Overview 1

Figure A2-24: Installation Flow Overview 1

The HP 5900v is deployed as a VM, using the vSphere “Deploy OVF Template”
option. The HP 5900v software package is available for download from the HP
web site in an OVF format. The OVF template deployment wizard streamlines the
installation.
In the screenshot shown, the configuration wizard requests IMC login credentials
and vCenter IP address and credentials. These settings will be pushed to the VM
as it comes online. In addition to installing the VM, this OVF Template also
includes the required vSphere plugin installation. This happens transparently. No
special deployment process is required for the vSphere plugin.
Once deployed, the HP 5900v VCE can be configured with new IMC or vCenter
information by directly accessing the IP address of the VCE.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Installation Flow Overview 2

Figure A2-25: Installation Flow Overview 2

The administrator connects to the VCE at http://VCE-IP-Address:8080/gui. After
login, the configuration can be modified. This includes the local IP address, the
IMC and vCenter IP addresses, and their related credentials.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Installation Flow Overview 3

Figure A2-26: Installation Flow Overview 3

As mentioned, the VCE deployment automatically installs the vSphere plugin. This
can be verified from the vSphere management interface. As shown in the figure,
an additional tab appears for the HP Virtual Distributed Switch (VDS) solution. This
tab includes an option for VFE installation.
The example shows that two ESXi hypervisor hosts are available. The administrator
can select the host on which to install the VFE component. After selecting the
check box next to a host and clicking the install button, the VFE components are
deployed as a background task.
NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Initial Configuration – Device Discovery and VDS

Figure A2-27: Initial Configuration – Device Discovery and VDS

After the VFEs are deployed, they should be configured. The initial configuration
includes adding physical switches to the deployment, such as Top-of-Rack HP
5900 EVB Bridges. ESXi and vSphere hosts are added using a SOAP/HTTP
template. This allows IMC to query specific VMware information related to vMotion
and other VM status information using SOAP.
A Virtual Distributed Switch (VDS) must be created on the vSphere host. One and
only one VDS is supported on each vSphere host. Then port groups can be
defined on this VDS for the uplink ports. In the bottom-right screenshot, two
hypervisor hosts are listed by their IP address, along with available uplink ports.
The network administrator can select which interfaces can be used as uplinks for
each server. The link aggregation type can also be specified.
Once the uplink has been configured, a default port group can be defined, as
shown in the bottom-left screenshot.
NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


New Network/VM Configuration Flow Overview 1

Figure A2-28: New Network/VM Configuration Flow Overview 1

The figure shows a new VM configuration process from the IMC VAN Connection
Manager. The first step is to define the network object. In this example, a new
network is defined for Customer C and is assigned to VLAN 31. A maximum
connection limit for this VLAN is set at 200. The 201st VM that attempts to connect
to this VLAN will be refused, and the VM will not come online. This is a convenient
way to prevent the overpopulation of an IP subnet.
Next, the VSI Type should be created. In this example, a VSI Type for Customer C’s
front-end server is created and assigned to the Cust-C-31 network that was just
defined, using VLAN 31. Specific service units can also be enabled, such as
bandwidth control or VM access control, as in the example.
As the administrator enables these services, new configuration parameters
become available. For example, IP address/mask pairs can be configured for
filtering, as can the amount of bandwidth to be allowed.
These Service Unit features are not required. If the network administrator simply
wants to assign VLANs, all the service units can be unchecked. Some client VMs
may not need bandwidth or QoS control, so a simple VLAN assignment would
suffice.
The VAN Connection Manager automatically translates whatever options were
selected into Comware CLI commands. This could include creating ACLs, traffic
classifiers for QoS, and traffic behaviors to filter traffic and control interface
rates. The traffic classifiers and behaviors can be combined into a QoS policy
and applied to the S-Channel’s sub-interface. This is all done dynamically and
automatically by IMC during the pre-associate phase of the VM connection.
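To make this concrete, the pushed configuration might resemble the following
Comware sketch. The ACL number, classifier and behavior names, subnet, and CIR
value here are hypothetical illustrations, not values from the example:

    acl number 3001
     rule 0 permit ip source 10.1.31.0 0.0.0.255
    traffic classifier Cust-C-Front
     if-match acl 3001
    traffic behavior Cust-C-Front
     car cir 10240
    qos policy Cust-C-Front
     classifier Cust-C-Front behavior Cust-C-Front
    interface S-Channel1/0/1:1.0
     qos apply policy Cust-C-Front inbound

Here the ACL selects the permitted subnet, the classifier matches that ACL, the
behavior rate-limits matching traffic to 10 Mbps, and the resulting QoS policy is
applied inbound on the dynamically created S-Channel sub-interface.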


NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


New Network/VM Configuration Flow Overview 2

Figure A2-29: New Network/VM Configuration Flow Overview 2

Once a VSI Type has been released, it appears in the VAN Connection Manager. In
this example, we can see that a VSI Type named Customer-C-Server-Front has been
released as version 1. Only released versions can be deployed to VM port groups.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


New Network/VM Configuration Flow Overview 3

Figure A2-30: New Network/VM Configuration Flow Overview 3

The next step happens from within the vSphere Network Configuration
environment. Using the HP plugin, you can create new Port Groups. This vSphere
plugin will query the IMC VAN Module for the list of VSI types, such as the ones
you created in the previous steps. These VSI types are made available as
selections in vSphere. Now, when a new port group is defined, an appropriate VSI
Type version can be bound to it.
In this example a new port group named PortGroup-VLAN600 has been defined.
The administrator has bound the VSI Type version named VLAN600-VSI(V1) to
this port group. This VSI Type version template contains all of the VLAN, QoS, and
security settings that you configured previously from IMC VAN.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


New Network/VM Configuration Flow Overview 4

Figure A2-31: New Network/VM Configuration Flow Overview 4

As you recall, the port group was bound to a VSI Type version in the previous step.
The final step is to bind the VM’s vNIC to a port group. This operation is exactly
the same as what VMware administrators have been doing for years. This is because
the EVB port groups are all listed next to any traditional port groups that may have
been locally defined.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Operational VM Boot Process Flow 1

Figure A2-32: Operational VM Boot Process Flow 1

The figure summarizes the VM boot process.


Using vSphere management tools, the administrator starts VM-11. The vSphere
management tool tells ESX1 to start VM-11. vSphere also supplies the VCE with
port group information, including the associated VSI Type version to be applied.
It is important to understand that at this point vSphere is only providing the VSI
Type ID information. The ESX host is not providing the actual QoS and ACL
settings to the VCE. The VCE only receives the identifier (such as 1, 2, 3 or 20 for
example) for the VSI Type version that was bound to that VM’s assigned port
group.


Operational VM Boot Process Flow 2

Figure A2-33: Operational VM Boot Process Flow 2

The VCE now knows that the VM is booting and that the network must be
provisioned. The VCE informs the VFE of this fact. The VFE announces to the
local EVB Bridge that a VM is coming online. In essence, the VCE tells the VFE to
initiate an ECP session with the EVB Bridge.
The VFE begins an ECP exchange with the EVB Bridge. The figure shows a
packet trace of this communication. ECP includes information about the VSI type
ID and the virtual ID. This is simply an internal identifier for the VSI type. It also
includes information about the MAC address and VLAN ID of that VM.
The EVB Bridge now knows that a new sub-interface should be created. This sub-
interface will be used to process traffic for the VM’s MAC address
(00:10:95:00:00:02, in this example).
The actual sub-interface configuration is unknown at this point. The EVB Bridge
only knows that the sub-interface should be configured with VSI type ID 1, Version
1.


Operational VM Boot Process Flow 3

Figure A2-34: Operational VM Boot Process Flow 3

The EVB Bridge creates a new sub-interface for every new vNIC connection. If the
switch has an S-Channel Interface 1/0/1:1, then the first sub-interface created will
be S-Channel 1/0/1:1.0. This sub-interface will be bound to the VLAN and MAC
address of the VM.
This can be seen in the switch configuration as a VSI filter. This enables you to
use the switch configuration to determine which sub-interface is used by a
particular VM’s MAC address. This is created dynamically based on the ECP
exchange, since the HP 5900v provides the MAC address and VLAN used by the
VM.


Operational VM Boot Process Flow 4

Figure A2-35: Operational VM Boot Process Flow 4

The EVB Bridge now has a sub-interface for VM-11, but no configuration
parameters have been applied. For this to occur, the EVB Bridge must contact the
VSI manager.
Two options are available. The interface could be manually configured by the
network administrator. As previously stated, this is not a best practice. Instead, it is
best to leverage the centralized control provided by the IMC VAN Connection
Manager.
The EVB Bridge will send detailed configuration information to the VSI Manager.
This includes filter information, which includes the VM’s MAC address and VLAN,
as well as the VSI-ID, VSI type and version information, and sub-interface details.
Essentially the EVB Bridge tells the VSI manager, “I have a new sub-interface that
should be configured with VSI 51, version 1.”


Operational VM Boot Process Flow 5

Figure A2-36: Operational VM Boot Process Flow 5

The IMC VAN Connection Manager creates an entry for this VM connection in its
VAN database. When it receives the EVB Bridge configuration request, it performs
a lookup to find the appropriate VSI Type version configuration.
Next, the IMC VAN Connection Manager opens a Telnet session to the EVB Bridge
and delivers CLI configuration syntax to the sub-interface. This can include any
ACLs, traffic classifiers, traffic behaviors, and QoS policies. Any required
policies are then applied to the sub-interface.


Operational VM Boot Process Flow 6

Figure A2-37: Operational VM Boot Process Flow 6

Once this has been completed, the VSI Associate state is achieved. VM-11 is
online with a unique sub-interface configuration on the EVB Bridge. The running
configuration contains the operational VSI Type version configuration settings.
In this example, VM-11 is now logically connected to S-Channel 1/0/1:1.0. Note
that VSI Type version changes cannot be done on the fly. If changes were made
on the IMC VAN module, this would not be reflected in the configuration on the
EVB Bridge. The two would be out of sync.
Whenever a VSI configuration is modified in the IMC VAN module, it must be
released as a new version and then assigned to the port group. At that point the
configuration change can become active.


EVB Bridge Configuration Steps

Figure A2-38: EVB Bridge Configuration Steps

The basic steps to configure the EVB Bridge include configuring the interface for
EVB support, configuring LLDP, enabling reflective relay where needed, configuring
the VSI Manager, and verifying the configuration.


Step 1: Configure Interface with EVB support

Figure A2-39: Step 1: Configure Interface with EVB support

The interface must be configured as a trunk link, and EVB must be enabled on the
interface. The “evb enable” command also enables CDCP. VLANs are permitted
dynamically in coordination with the VSI manager. VLAN 1 is enabled by default
and is used as a service VLAN.
In the example, EVB is enabled on a physical interface. It can also be enabled on
Bridge Aggregation interfaces.
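For reference, the interface portion of this step might look like the following
minimal sketch (the interface number is hypothetical):

    interface Ten-GigabitEthernet1/0/1
     port link-type trunk
     evb enable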


Step 2: Configure LLDP

Figure A2-40: Step 2: Configure LLDP

LLDP has to be enabled at the global level, and the interface must be configured
to use the nearest non-TPMR destination MAC address. This is accomplished with the
“lldp agent” command, as shown in the figure.
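A minimal sketch of this step, assuming Comware 7 syntax for the nearest non-TPMR
bridge agent (the interface number is hypothetical):

    lldp global enable
    interface Ten-GigabitEthernet1/0/1
     lldp agent nearest-nontpmr admin-status txrx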


Step 3: S-Channel Reflective Relay

Figure A2-41: Step 3: S-Channel Reflective Relay

Typically, the reflective relay feature is automatically negotiated by CDCP. HP’s
5900v product acts as an Edge Relay (ER) device and negotiates this feature
automatically, which means that the “evb reflective-relay” command is not
necessary.
If some other ER device that lacked this capability were deployed, you would need
to enable the feature manually, as shown in the figure.
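In that case, the manual configuration is a single interface-level command
(interface number hypothetical):

    interface Ten-GigabitEthernet1/0/1
     evb reflective-relay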


Step 4: VSI Manager Configuration

Figure A2-42: Step 4: VSI Manager Configuration

The VSI Manager is responsible for delivering VSI network configuration to the
EVB Bridge. A local manager could be used, but is not recommended. It is far
more effective to deploy a product like HP’s VAN Connection Manager to act as a
central VSI manager.
You must tell the EVB Bridge which VSI manager it is to communicate with for this
purpose. This configuration can be done at the global level, as shown in the figure.
Configured this way, the specified VSI Manager IP address and name are used for
all interfaces on the EVB Bridge. If you wanted some interfaces to use a different
VSI manager, you could configure one using similar syntax at the interface level.
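As a sketch only, assuming an “evb default-manager” command that accepts the
manager’s IP address and name (both values here are hypothetical):

    evb default-manager ip 10.10.10.50 name IMC-VAN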


Step 5: Verify

Figure A2-43: Step 5: Verify

The figure shows several commands that can be used to verify your configuration
efforts.
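The exact command set varies by release, but display commands along the following
lines are a reasonable assumption for checking EVB state (treat the names as
assumptions, not confirmed syntax):

    display evb summary
    display evb s-channel
    display evb vsi
    display lldp neighbor-information list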


Summary

Figure A2-44: Summary

Edge Virtual Bridging (EVB) is defined in the IEEE 802.1Qbg standard. The
purpose of this standard is to enable Virtual Machines (VMs) to share a common
bridge port for forwarding services.
This standard encompasses Virtual Ethernet Port Aggregator (VEPA) technologies,
and protocols to help automate the coordination and configuration of network
resources. This results in enhanced network capabilities for VM-to-VM traffic,
including QoS, ACLs, sFlow, and port mirroring.
HP’s 5900v can replace a hypervisor’s native vSwitch to enable EVB and VEPA
services in the data center. It can work with physical switches like HP’s 5900
series.
In an EVB deployment, the physical switch is called an EVB Bridge, the HP 5900v
virtual switch is called an Edge Relay (ER), and a VM’s vNIC is called a Virtual
Station Interface (VSI). The EVB Bridge and ER communicate and connect over a
virtual link called the S-Channel. This channel is negotiated using CDCP.
HP’s IMC VAN Connection Manager serves as an EVB VSI Manager. This
component enables sub-interfaces to be automatically configured on an EVB
Bridge’s S-Channel interface to accommodate the movement, addition, and
removal of VMs in the data center.
The HP 5900v’s VFE component is installed on hypervisor hosts. Its VCE
component is installed as a VM on a hypervisor host. The physical EVB Bridge
must also be configured to support an EVB solution.


Learning Check
After each module, your facilitator will lead a class discussion to capture key
insights and challenges from the module and any accompanying lab activity. To
prepare for the discussion, answer each of the questions below.
1. Which of the following is an advantage of an EVB – VEPA solution?
a. More frames can be forwarded locally inside a hypervisor host.
b. EVB – VEPA solutions can be implemented purely inside a standard
switched environment, with no additional components required.
c. Services such as QoS, security ACLs, and sFlow are extended into the
hypervisor environment and so are supported on internal vSwitches.
d. Since all frames are forwarded by the physical switch, consistent services
and visibility are available for all traffic flows.
e. EVB – VEPA can take advantage of a standard IMC solution. No
additional modules for IMC are required.
2. Which three statements are true about EVB VLAN management? (Choose
three.)
a. VLANs for all VMs must be provisioned on all data center switches.
b. VLANs need only be deployed where they are actually needed.
c. VLANs are automatically provisioned when they are defined in IMC.
d. VLANs are automatically provisioned when a VM comes online.
e. Communication between the IMC VAN module and the pSwitch helps
facilitate VLAN management.
3. The S-Channel is a virtual link between the ER and EVB Bridge, and uses
sub-interfaces to support dynamic VLAN creation.
a. True.
b. False.
4. Which three statements are true about an EVB VSI manager? (Choose three.)
a. It allows sub-interfaces to be automatically configured on a switch by
integrating with a hypervisor management tool
b. An EVB VSI manager is included by default with an IMC installation.
c. The IMC VAN Connection Manager module adds EVB VSI manager
capabilities to IMC.
d. Automatic VLAN creation using an EVB VSI manager is the only method
for creating VLANs for Virtual Machines in an EVB deployment.
e. To aid in automatic VLAN creation, VSI Templates are defined with
networks and VSI types.
5. What are two features of the HP 5900v? (Choose two.)
a. Both a standard hypervisor vSwitch and the HP 5900v can be used for
VEPA functionality in an EVB deployment.


b. A hypervisor’s standard vSwitch can coexist with the HP 5900v, but only
the 5900v provides VEPA functionality.
c. The 5900v uses CDCP to communicate with an HP 5900 physical switch.
d. The 5900v is not responsible for announcing the addition or removal of
VLANs to an EVB Bridge.


Learning Check Answers


1. d
2. b, d, e
3. a
4. a, c, e
5. b, c

VXLAN
Appendix 3

Objectives
This module provides an introduction to Virtual eXtensible LANs, or VXLANs. You
will learn how this Layer 2 overlay technology functions and how it is used in a
data center environment. This includes an understanding of the solution objectives
for VXLAN, how those objectives are fulfilled with capabilities such as MAC
learning inside the VXLAN, and how multi-destination delivery is handled.
You will also learn how a VXLAN can be interconnected with traditional physical
networks, providing multiple deployment options. One such option is the use
and configuration of hardware-based VXLAN gateways, such as the HP 5930.

After completing this module, you should be able to:


 Describe the VXLAN feature
 Understand basic VXLAN operations
 Describe the MAC learning process in a VXLAN
 Describe the virtual VXLAN to physical VLAN network integration
 Understand the basic configuration of a VXLAN tunnel

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


VXLAN Overview

Figure A3-1: VXLAN Overview

This module includes a section on VXLAN introductory topics before moving into
VXLAN theory of operation and design considerations. The final section covers
the configuration of VXLAN functionality on HP switches, such as the HP 5930.


VXLAN introduction

Figure A3-2: VXLAN introduction

VXLAN functionality is based on an RFC draft document, which has yet to be
assigned an official RFC number. VXLAN provides an alternative to the traditional
VLAN concept.
Traditional VLANs are tagged inside an 802.1q header with a 12-bit number,
providing for a maximum of 4094 distinct VLANs. This is more than adequate for
typical corporate deployments. However, multi-tenant data centers can quickly
exhaust this resource.
Suppose a hosting site had several clients that each used two or three VLANs.
The hosting service would run out of VLANs at somewhere around 2000 clients. Of
course, several of these clients may require a much larger number of VLANs,
limiting the client base even further.
Another challenge with traditional VLANs is the possible over-expansion of Layer 2
broadcast domains. In a data center environment, many switches connect to
hypervisor servers that host Virtual Machines (VMs). In this scenario, it can be
difficult for network administrators to predict where VMs will actually be hosted. On
which physical servers, connected to which physical switches might any VM need
to be hosted?
To accommodate this uncertainty, every client VLAN might be supported among
every physical switch. This would enable any VM to operate on any server.
However, in a larger data center with hundreds of switches, this could create large
broadcast domains and unnecessary broadcast traffic. This is also less than
optimal from a security standpoint, as well as for limiting the scope of network
outages and errors.
VXLAN technology offers a solution to these limitations.


Supported Products

Figure A3-3: Supported Products

HP top-of-rack switches such as the HP 5930 support VXLAN, as do chassis-based
models like the 7900 and 12900 series switches.
The 48-port 10G FC module and the 24-port 40G FC module are also supported.


VXLAN Operation

Figure A3-4: VXLAN Operation

VXLAN technology is most advantageous in a data center environment, since
traditional campus networks do not experience the issues that VXLAN is designed
to resolve.
VXLAN is an IP-based Layer 2 overlay technology. It encapsulates Layer 2 traffic
inside an IP datagram for improved function and scalability. A format referred to as
“MAC in UDP” is used for this encapsulation. Once this Layer 2 traffic is
encapsulated, it can take advantage of any existing IP routed infrastructure inside
the data center, or between data centers. All of the redundancy, resiliency, and
load-sharing capabilities typical of routed infrastructures are therefore available to
VXLAN services.
The VXLAN ID is a 24-bit value – double the number of bits available with 802.1q.
This means that over 16 million VXLAN IDs are available, as compared to 4,000
VLAN IDs for traditional VLANs.


VXLAN Concepts

Figure A3-5: VXLAN Concepts

The figure depicts a transport IP network composed of a number of switches,
interconnected in a routed environment. Several physical server hosts are
connected to this infrastructure.
VXLAN functionality has yet to be fully deployed, with no physical devices
configured with this capability. The VMs hosted on these servers have, however,
been configured with a 24-bit VXLAN Network Identifier (VNI).
Two IP backbone subnets are indicated. The 10.1.1.0/24 subnet includes two
physical ESX servers named ESX-11 and ESX-12. The 10.1.2.0/24 subnet also has two
hosts named ESX-21 and ESX-22.
ESX-11 is hosting VMs 1, 3 and 11. The ESX Hypervisor was configured with
VXLAN 1001, which was bound to the virtual links for VM1 and VM3, while VM11
was assigned a VNI of 2001. Similar configurations exist on the other servers, as
shown.
The ESX-11 and -12 servers will encapsulate the Layer 2 frames into an IP packet,
including the appropriate VNI. The result is that VMs 1 through 5 are functionally
on the same Layer 2 multipoint subnet. Similarly, virtual network 2001 contains
hosts VM11, 12, and 13. VXLANs 1001 and 2001 operate like traditional VLANs
without any Layer 3 IP interface in them. They are totally isolated; the VMs
within a VXLAN can communicate with each other, but they cannot communicate with
any device outside of their own VXLAN.


VXLAN: VTEP

Figure A3-6: VXLAN: VTEP

The VXLAN Tunnel End Point (VTEP) provides an entry point into the VXLAN,
while interacting with the other VTEPs to provide connectivity. In the figure, each
physical ESX host has been assigned this role. The VTEPs are responsible for
encapsulating traffic from the VMs into the tunnel and sending the resulting
packets to their destination. The receiving (destination) VTEP must decapsulate
the packet and deliver it toward the intended target.
Essentially, the VTEP provides an “on-ramp” to the VXLAN by performing this
frame encapsulation and decapsulation service.


VXLAN Tunnel and Multicast

Figure A3-7: VXLAN Tunnel and Multicast

When one VXLAN VTEP or tunnel end point communicates with another, a VXLAN
tunnel is established. A VXLAN tunnel is required for each destination VTEP in the
same VXLAN. In the figure, ESX 11, 12 and 21 are the physical devices hosting
VMs assigned to VXLAN ID 1001, and so each of those devices maintains a tunnel
with the other two. VMs assigned to VXLAN ID 2001 are hosted on servers ESX
11, 12 and 22, with a similar set of tunnels.
The tunnel is merely a mechanism of transport through the IP network. One tunnel
can be used by multiple VXLAN IDs because each VM’s encapsulated packet
contains its VNI. This VNI is then used to send packets to appropriate tunnel
endpoints. This is why traffic from multiple VXLANs can use the same tunnel.
The underlying IP transport network uses multicasting to deliver broadcast,
multicast, and unknown unicast frames for each VXLAN. Suppose VM1 sends a
broadcast. Since hosts ESX 12 and 21 contain VMs in the same VXLAN, they
must receive this broadcast. There are two ways to recreate this broadcast domain
functionality.
One method is to use head-end replication. With this method, host ESX 11
encapsulates the Layer 2 broadcast frame two times. It is encapsulated once as
an IP unicast to host ESX 12, and again as a unicast to ESX 21. This unicast-
based solution simplifies the deployment by eliminating the need for multicast
services. However, larger deployments could have scalability issues, since every
broadcast must be individually unicast to each host in the VXLAN.
The second method optimizes scalability by using multicasting. Using this solution,
ESX 11 can encapsulate each Layer 2 broadcast in a single IP multicast packet,
addressed to some multicast address, such as 239.1.1.1 in this scenario. All hosts
are configured to know that VNI 1001 is bound to this address. Hosts ESX 12 and
21 will use IGMP to join the multicast group, informing the IP transport network that
they need to receive traffic destined to 239.1.1.1. If properly configured for
multicasting, the IP network will deliver the original transmission to both ESX
hosts.
This method does add a bit of complexity to the transport network deployment,
since it must support multicast functionality. However, this saves resources and
improves scalability, since each Layer 2 broadcast need only be transmitted once
by the VTEP device, regardless of the VXLAN’s host count.
Another advantage to multicasting is that only ESX hosts which actually require a
VXLAN’s traffic will join the multicast group. Suppose an administrator were to
move VM5 from ESX-21 to some other physical server. ESX-21 would realize that
it is no longer hosting VMs on VXLAN 1001, and send an IGMP leave message to
the IP network, leaving the multicast group 239.1.1.1.
This VXLAN functionality provides a major advantage over traditional VLANs. The
network can respond to the changing relationship between Virtual Machines and
physical server connections, and send broadcast/multicast traffic only to required
destinations.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


VXLAN Packet Structure

Figure A3-8: VXLAN Packet Structure

The figure shows the VXLAN packet structure. The original Layer 2 frame is
encapsulated into a UDP/IP packet, with an additional 8-byte VXLAN header
inserted that contains the VXLAN ID.
This encapsulation adds fifty bytes of overhead to the transmission: the 8-byte
VXLAN header, an 8-byte UDP header, a 20-byte IP header, and 14 bytes for the
outer Ethernet header. For the original Layer 2 frame to support 1500-byte
payloads, the MTU of the IP infrastructure should be increased to at least
1550 bytes.
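A quick tally confirms the 50-byte figure and the resulting MTU requirement:

    Outer Ethernet header   14 bytes
    Outer IP header         20 bytes
    Outer UDP header         8 bytes
    VXLAN header             8 bytes
    Total added overhead    50 bytes  ->  1500 + 50 = 1550-byte minimum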


VXLAN Traffic Flow Overview

Figure A3-9: VXLAN Traffic Flow Overview

The figure introduces this section, which is focused on VXLAN traffic flow. The
objective is to understand how unicast, multicast, and broadcast traffic is
accommodated through a VXLAN-capable infrastructure.
We will examine a scenario that focuses only on VMware ESX hypervisor
connectivity to the VXLAN, before delving into typical multicast and unicast
scenarios.
You will come to understand the packet flow that supports VXLAN communications.
This includes MAC learning and interconnecting a VXLAN-based system with
traditional networks.
Finally, you will learn about Comware hardware-based VXLAN gateway operation,
providing a solution to bridge VXLANs to physical VLANs.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


VXLAN: Multi-Destination Delivery

Figure A3-10: VXLAN: Multi-Destination Delivery

The scenario above describes multi-destination delivery, showing how VM1 sends
an ARP broadcast intended for VM5.
VM1 sends an ARP broadcast, which is delivered through the virtual link to the
VTEP. ESX-11 receives this frame from the VM’s virtual port, which is assigned to
VXLAN 1001.
ESX-11 determines that the destination MAC address is a broadcast. ESX-11
encapsulates the packet into a VXLAN frame, with VXLAN ID 1001. The source IP
address is ESX-11’s IP address of 10.1.1.11, and the destination IP address is the
configured multicast address of 239.1.1.1. ESX-11 sends this encapsulated frame
into the transport IP network.


VXLAN: Multi-Destination Delivery

Figure A3-11: VXLAN: Multi-Destination Delivery

The transport IP network will deliver the IP Multicast packet to all the joined hosts.
This assumes that multicast routing has been properly configured on this
infrastructure. Another assumption is that the remote VTEPs have joined the IP
Multicast group using IGMP.
If so, the IP packet is delivered to ESX-12 and ESX-21, and the other members of
the 239.1.1.1 multicast group. This scenario will focus on ESX-21, since the
same activity applies to ESX-12.
ESX-21 decapsulates the VXLAN packet and so discovers the source VM’s virtual
MAC address on the tunnel interface. It binds this MAC address to the ESX-11
source IP address of 10.1.1.11.
ESX-21 then reads the destination MAC address, which is a broadcast in this
case. Any multicast, broadcast, or unknown unicast destination will be flooded to
all of ESX-21’s local virtual ports assigned to VXLAN 1001. In this case, only VM5
is active, and so it receives the ARP request sent by VM1.


VXLAN Unicast Delivery

Figure A3-12: VXLAN Unicast Delivery

VM5 has now received the ARP request and responds to VM1 with an ARP reply,
which is formed as a Unicast packet. ESX-21 receives the packet from VM5’s
virtual port, which is assigned to VXLAN 1001.
ESX-21 looks up the destination MAC address, which is VM1’s MAC address.
Since ESX-21 just received an ARP request from VM1, it has an entry in its MAC
address table for this host. It will use its outbound port toward ESX-11’s IP
address.
ESX-21 encapsulates the packet into a VXLAN frame tagged with VXLAN ID 1001.
Its own IP address is the source, and the destination IP address will be ESX-11’s
IP address of 10.1.1.11. A multicast is not needed in this case, since the
destination is known. This unicast IP datagram is sent into the transport IP network.
Once encapsulated, VXLAN packets can be treated as any other IP-based traffic.
It will benefit from the various types of equal-cost load-balancing capabilities
offered by a typical routed IP infrastructure for intra-VXLAN delivery.


VXLAN Unicast Delivery

Figure A3-13: VXLAN Unicast Delivery

ESX-11 receives and decapsulates the VXLAN packet, learning VM5’s source
MAC address. It binds this MAC address to ESX-21’s source IP address as the
outgoing interface. It then reads the destination MAC address, which in this case
is VM1’s address, and forwards the packet to the local virtual port assigned to VM1.
VM1 receives the ARP reply.
The result of the multicast and unicast flows that we have just analyzed is that all
VMs can communicate at Layer 2, and all the VTEPs have built a table that
associates learned MAC addresses with VTEP IP addresses. Therefore, each source
can deliver frames directly to any valid destination host.


Learning Activity: VXLAN Review

Refer to the figure above when necessary, and answer the following questions.
1. Which devices above must have a tunnel between them, and why? Assuming
two VXLANs must be supported, how many tunnels must be active?

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________
2. For devices in the IP network to successfully support VXLAN, what should
their MTUs be set to, and why?

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


3. What are two approaches to supporting multi-destination traffic, and why
might you choose one over the other?

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________
4. ESX-12 has just received a frame from vm1, destined for vm4. How will ESX-
12 keep track of vm1’s MAC address?
a. ESX-12 does not need to keep track of this, since it is handled by the
fabric
b. It maps the source MAC address on the outer-most Ethernet header to its
local interface
c. It maps the source MAC address on the original Ethernet header to its
local interface’s IP address
d. It maps the source MAC address of the original Ethernet header to ESX-
11’s IP address
e. It maps the FC-ID from the native frame to ESX-11’s IP address
5. When ESX-12 sends the response from vm4 back to vm1, what will it use as
the VXLAN ID, source IP address, and destination IP address?

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Learning Activity: Answers


1. Which devices above must have a tunnel between them, and why? Assuming
two VXLANs must be supported, how many tunnels must be active?
Each VTEP must form a tunnel with the other VTEPs in the same VXLAN (in other
words, those with the same VNI assignment). Since physical hosts ESX-11, 12, and 21
all support VXLAN 1001, each must have a tunnel to the other two. Since hosts
ESX-11, 12, and 22 support VXLAN 2001, each must have a tunnel to the other
two.
Multiple VXLANs can be supported over a single tunnel, so only one tunnel needs
to form between VTEPs, regardless of the number of VXLANs supported.
2. For devices in the IP network to successfully support VXLAN, what should
their MTUs be set to, and why?
VXLAN adds a 20-byte IP header, an 8-byte UDP header, an 8-byte VXLAN header, and
a 14-byte Ethernet header; therefore, the MTU must be at least 1550 bytes on
all device interfaces in the IP network.
3. What are two approaches to supporting multi-destination traffic, and why
might you choose one over the other?
Head-end replication is one solution, in which each multi-destination frame is
replicated for each VTEP that supports the VXLAN. This is a less scalable method,
due to the overhead associated with processing multiple unicast frames. However,
for smaller deployments it allows for a simple IP network, since multicast support
need not be configured.
For larger networks, multicast support can be configured on the IP network, and
then leveraged by VXLAN to more efficiently forward frames. The added
complexity of having to configure multicast support is likely worth the additional
efficiency for large and growing deployments.
4. ESX-12 has just received a frame from vm1, destined for vm4. How will ESX-
12 keep track of vm1’s MAC address?
The answer is d: it maps the source MAC address of the original Ethernet header
to ESX-11’s IP address.
5. When ESX-12 sends the response from vm4 back to vm1, what will it use as
the VXLAN ID, source IP address, and destination IP address?
Both VMs in this scenario are on VXLAN 1001, so that would be used. The source
IP address would be 12.1.2.12, and the destination IP address would be 10.1.1.11.


VXLAN to Physical Network

Figure A3-14: VXLAN to Physical Network

Having explored how devices on the same VXLAN communicate, the focus is now
on moving traffic between a VXLAN and external networks. Some kind of gateway
function must allow VMs to access external networks. This is similar to how a host
connected to a traditional VLAN requires a default gateway on the same VLAN.
There are three possible solutions for connecting a VXLAN to a physical network:
 VXLAN Layer 3 VM-Based: This solution places routing services inside the
ESX host, using the HP Virtual Service Router.
 VXLAN Layer 3 VMware Edge Gateway: This solution relies on a VMware
product inside the ESX host to provide routing services.
 VXLAN Layer 2 hardware: This solution exposes the VXLAN to an external
VLAN connection, using a hardware-based VXLAN-to-VLAN gateway, such as the
HP 5930.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


VXLAN to Physical Network: VM-Based IP Routing

Figure A3-15: VXLAN to Physical Network: VM-Based IP Routing

With VM-Based IP routing, you create a virtual machine that has two or more
virtual NICs. One vNIC is bound to the VXLAN network object on the ESX server,
while the other vNIC is bound to a traditional VMNet VLAN in a traditional vSwitch
defined on the ESX server. This VMNet VLAN is associated with a physical NIC on
the ESX host, which is connected to a switch port configured to support that VLAN.
The figure highlights this scenario for the ESX-11 server. To support VM-based
routing, VM2 is created. This VM includes a G1/0 vNIC bound to VXLAN 1001,
with IP address 192.168.1.1. Its G2/0 interface is bound to VMNet VLAN 102, and
has an IP address of 192.168.2.1.
Host VM1 is configured on VXLAN 1001 as before, and its default gateway is the
192.168.1.1 address. As with traditional default gateways, the VM2 virtual gateway
device accepts frames from hosts and routes them out its G2/0 interface.
Traffic sent out VM2’s G2/0 interface includes an 802.1q VLAN tag of 102 as it
exits the physical NIC. The attached switch port is configured to support this
VLAN, and so accepts it inbound and forwards it on through the IP transport
network. Of course, this VM2 virtual router must exchange routes with the physical
routers on the IP transport network.
This example shows VM2 running on physical host ESX-11. However, this virtual
router could operate on any ESX host that has access to VXLAN 1001, and is
connected to a physical switch port that supports VLAN 102. If the virtual service
router is moved using vMotion, the logical router topology does not change.
This solution is viable with any operating system that allows routing. For example,
the HP Comware portfolio includes the HP Virtual Service Router (VSR), a software-
based router optimized for hypervisor environments. Currently, this solution
supports routed access only. It is not possible to connect the 192.168.1.1 VXLAN
directly to an external, traditional VLAN.
NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


VXLAN to Physical Network: VMware Edge Gateway

Figure A3-16: VXLAN to Physical Network: VMware Edge Gateway

This solution is very similar to the previous one, but is deployed using a VMware
product. The VMware guest operating system can provide routing, filtering,
address translation, and firewall services. The fundamental operation of this
solution is as described in the previous example.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


VXLAN to physical network: Hardware gateway

Figure A3-17: VXLAN to physical network: Hardware gateway

The third solution available for VXLAN-to-physical network connectivity uses
external hardware, such as the HP Comware 5930. The 5930 supports this
functionality due to its newer Trident II chipset. The 5900, with its earlier
Trident+ chipset, cannot support this VXLAN functionality.
With this solution, hosts in a VXLAN are directly mapped to a traditional VLAN,
using a Virtual Switch Instance (VSI). This is a Layer 2 connection. There is no
longer a software gateway, with its associated IP addressing, required to connect
the virtual and physical environments. The VXLAN is simply mapped to an external
VLAN.
IP routing services are not provided by the 5930 Layer 2 gateway; this requires an
external routing device. Routing can be provided internally by a VM routing
service, such as the aforementioned HP VSR or the VMware Edge Gateway.
Alternatively, routing could be provided by a physical IP routing switch. This
provides two ways to deploy the solution, and it is up to the network administrator
which method to prefer.

Note
The next generation of 7900 and 12900 LPUs will support VXLAN termination and
routing on the same device.


Hardware Gateways and OVSDB

Figure A3-18: Hardware Gateways and OVSDB

VTEP configurations on the ESX Host are created and maintained by the vSphere
management software, using the Open vSwitch Database (OVSDB) format.
The moment the virtual machines start, this management software knows that they
are bound to specific VXLANs. It notifies all other VTEP devices to create the
appropriate internal interfaces, providing a kind of centralized configuration
management.

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Design Considerations

Figure A3-19: Design Considerations

The figure introduces some design considerations related to VXLAN connectivity.
First, the IP transport network should support IP multicast. This means that PIM
must be configured to accommodate routed links between the VTEPs. Also, IGMP
must be configured to avoid Layer 2 flooding. Of course, if all hosts are on the
same subnet, there is no routing, and therefore the need for multicast routing is
eliminated.
Also, you must provision IP multicast ranges for the VXLAN IDs. Multiple VXLANs
can be mapped to the same transport multicast IP address. However, this will
result in sub-optimal delivery for VXLAN multi-destination traffic. For example,
suppose that both VXLAN 1001 and 1002 are mapped to multicast address
239.1.1.1. The broadcast traffic for VXLAN 1001 would physically arrive on all the
tunnel end points of both VXLAN 1001 and 1002.
This effectively increases the size of the broadcast domain, and causes undue
processing on endpoints. VTEPs that only have VMs on VXLAN 1001 will receive
broadcasts for VXLAN 1002. They will decapsulate these packets, realize that they
are not needed, and then discard them.
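As a rough sketch of the transport-side requirement on a Comware device (the
interface number and feature placement are illustrative assumptions, not taken
from the figure):

    multicast routing
    #
    interface Vlan-interface10
     pim sm
     igmp enable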

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________


Configuration Steps for VXLAN

Figure A3-20: Configuration Steps for VXLAN

The figure introduces the steps to configure VXLAN, and depicts a sample
topology. In this scenario, the transport IP network is multicast-enabled. Switch
5930-1 is near the top of the diagram, while 5930-2 is near the bottom. These are
the two VTEPs. They will be configured to set up a tunnel interface so that VXLAN
1001 traffic can traverse the transport IP network. That specific VXLAN will also be
delivered to physical VLAN 101.
The tunnel interface on each switch will be bound to the VSI defined for VXLAN
1001. A Service Instance will be created to bind the physical interface VLAN traffic
to the VSI. The top server, with IP address 192.168.1.11, is connected to an
access port in VLAN 101. That switch will tag its uplink traffic when sending it to
5930-1, which maps VLAN 101 to VXLAN 1001. It delivers frames to the VSI,
which processes the traffic and sends it out over the Tunnel Interface.
Broadcasts from the 192.168.1.11 server are delivered to the tunnel interface over
the VXLAN network. This VXLAN-encapsulated packet arrives at 5930-2 on the
tunnel interface bound to the VSI. The VSI performs local flooding, sending the
packet on the local wire, tagged with VLAN 101. This traffic is processed by the
external access switch, which recognizes it as a broadcast frame and floods it out
all ports
in VLAN 101, including the port connected to the server at IP address
192.168.1.12.


Step 1: Configure Global L2VPN

Figure A3-21: Step 1: Configure Global L2VPN

The first step is to enable Layer 2 VPN globally. This is the same command that is
used for Layer 2 VPN/VPLS configuration, and it enables the VSI model inside the
Comware-based 5930 switch.
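As a minimal sketch, the global enablement would look something like this (Comware 7 CLI; the device prompt name is illustrative):

    <5930-1> system-view
    [5930-1] l2vpn enable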


Step 2: Configure VXLAN Tunnel

Figure A3-22: Step 2: Configure VXLAN Tunnel

The next step is to create a VXLAN tunnel, which provides the VXLAN encapsulation
service over the transport IP network. To ensure a stable, unchanging tunnel
source IP address, a loopback interface is created and specified as the tunnel
source. These are point-to-point tunnels, so you must create a tunnel interface
for each remote VTEP. This scenario uses only a single remote 5930, so only one
tunnel endpoint is defined here.
As previously mentioned, this step must be done manually, since OVSDB
information cannot currently be shared between hardware gateways and ESX
hosts. Should this capability become available, the vSphere management host could
dynamically create new tunnels on the hardware gateway.
In the figure, interface Loopback 0 is defined and an IP address is assigned to
it. The tunnel interface is created in VXLAN mode, and the tunnel's local source
and remote destination loopback addresses are specified. On the 5930-2 switch, the
loopback address would be 10.2.0.22, and the tunnel's source and destination
addresses would be reversed.
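A sketch of this step on 5930-1 is shown below (Comware 7 CLI). The local loopback address 10.1.0.11 is an assumption for illustration, since the text only specifies 10.2.0.22 for 5930-2; lines beginning with # are annotations rather than commands:

    # Stable tunnel source address
    [5930-1] interface loopback 0
    [5930-1-LoopBack0] ip address 10.1.0.11 32
    [5930-1-LoopBack0] quit
    # Point-to-point VXLAN-mode tunnel toward the remote VTEP
    [5930-1] interface tunnel 0 mode vxlan
    [5930-1-Tunnel0] source 10.1.0.11
    [5930-1-Tunnel0] destination 10.2.0.22
    [5930-1-Tunnel0] quit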

NOTES

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

_______________________________________________________________

Rev. 14.41 A3-29

HP Employee self-study use only. Reproduction or transfer outside of HP in whole or in part is prohibited.
Building HP FlexFabric Data Centers

Step 3: Create VSI VXLAN + Bind VXLAN Tunnel

Figure A3-23: Step 3: Create VSI VXLAN + Bind VXLAN Tunnel

Next, the VSI is defined. The VSI connects the local physical interface, which is
bound to it through the service instance, with the VXLAN tunnel interface, and it
is the VSI that actually carries the VXLAN identifier.
In the figure, a VSI named Customer1 is created and associated with VXLAN 1001.
VXLAN 1001 is in turn bound to the previously created tunnel.
If multiple tunnels to multiple VTEP endpoints were required, more tunnel
endpoints would be created, and those tunnels would also be added as virtual ports
to the virtual switches.
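A sketch of this step, using the VSI name and VXLAN ID from the figure (Comware 7 CLI; the nested prompt format is illustrative):

    [5930-1] vsi Customer1
    [5930-1-vsi-Customer1] vxlan 1001
    [5930-1-vsi-Customer1-vxlan-1001] tunnel 0
    [5930-1-vsi-Customer1-vxlan-1001] quit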


Step 4: Create Service Instance

Figure A3-24: Step 4: Create Service Instance

A service instance defines which local traffic should be connected to the VXLAN
VSI. The service instance ID is locally significant to the interface only.
In the figure, interface Ten-GigabitEthernet 1/0/2 is associated with service
instance 10, which is configured to match VLAN 101.
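A sketch of this step (Comware 7 CLI; the exact service-instance prompt format may vary by release):

    [5930-1] interface ten-gigabitethernet 1/0/2
    [5930-1-Ten-GigabitEthernet1/0/2] service-instance 10
    [5930-1-Ten-GigabitEthernet1/0/2-srv10] encapsulation s-vid 101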
At this point in the discussion, some might remember that the point of having
VXLANs was to have more than 4000 VLANs. However, once VXLANs are mapped to
traditional VLANs, the old VLAN limitation seems to resurface.
This does not have to be the case. With this model, VXLANs can be bound to 4000
VLAN IDs on interface ten1/0/2, and another set of VXLANs can be bound to 4000
other VLANs on another physical interface.
VLAN 101 on interface ten1/0/2 has nothing to do with the VLAN 101-tagged traffic
on interface ten1/0/3. Therefore, you can distribute blocks of 4000 VLANs over
different physical ports to different regions of the data center.
This is possible because of the VSI-to-service-instance mapping you will configure
in the next step.


Step 5: Bind Service Instance to VSI

Figure A3-25: Step 5: Bind Service Instance to VSI

Next, the service instance is bound to the VSI. This creates a kind of cross-
connect between the service instance and the VSI. Because this cross-connect
directly maps the physical interface to a VSI, the traffic is not processed
globally by the switch as VLAN 101 traffic; it is only processed inside the VSI.
The globally defined VLAN 101 operates totally independently of the VLAN 101
traffic that is processed by the service instance on this physical interface.
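Continuing the sketch from the previous step, the binding itself is a single command entered in service-instance view:

    [5930-1] interface ten-gigabitethernet 1/0/2
    [5930-1-Ten-GigabitEthernet1/0/2] service-instance 10
    [5930-1-Ten-GigabitEthernet1/0/2-srv10] xconnect vsi Customer1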


Step 6: Configure Transport IP Interface IGMP

Figure A3-26: Step 6: Configure Transport IP Interface IGMP

The next step is to configure the transport IP interface and IGMP. An IP
connection to the remote VTEP loopback address is required, and this interface
needs to support an IGMP client function.
In this example, interface 1/0/1 is assigned an IP address, IGMP is enabled, and
the IGMP host function is set. This causes the interface to act as an IP multicast
endpoint and use IGMP to join a multicast group, just like an actual client
endpoint would.
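A sketch of this step, following the figure's description (Comware 7 CLI). The interface type and IP address are assumptions for illustration, and depending on the transport design, multicast routing may also need to be enabled globally:

    [5930-1] interface ten-gigabitethernet 1/0/1
    [5930-1-Ten-GigabitEthernet1/0/1] ip address 10.1.1.11 24
    [5930-1-Ten-GigabitEthernet1/0/1] igmp enable
    [5930-1-Ten-GigabitEthernet1/0/1] igmp host enable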


Step 7: Configure VXLAN VSI Multicast address

Figure A3-27: Step 7: Configure VXLAN VSI Multicast address

Now that IGMP host functionality is enabled, the multicast group to join is
specified, along with the source IP address used to join that group.
It is recommended to use a unique multicast IP address per VXLAN. This optimizes
efficiency and reduces the unnecessary processing of frames, as previously
explained.
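A sketch of this step, entered in the VXLAN view of the VSI. The group address 239.1.1.1 matches the example discussed earlier; the source address 10.1.0.11 is the assumed local loopback from Step 2:

    [5930-1] vsi Customer1
    [5930-1-vsi-Customer1] vxlan 1001
    [5930-1-vsi-Customer1-vxlan-1001] group 239.1.1.1 source 10.1.0.11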


Step 8: Verify

Figure A3-28: Step 8: Verify

The figure shows several display commands that can be used to validate your
configuration efforts; you will explore these commands during the lab activities.
Validation includes checking the status of the tunnel interface and the VSI, as
well as the learned MAC addresses.
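Commands along these lines can be used for verification (Comware 7 display commands; exact output varies by release):

    <5930-1> display interface tunnel 0
    <5930-1> display vxlan tunnel
    <5930-1> display l2vpn vsi verbose
    <5930-1> display l2vpn mac-address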


Summary

Figure A3-29: Summary

In this module, you learned that VXLANs extend the scalability of traditional
VLANs to support over 16 million broadcast domains, while improving deployment
and management flexibility and efficiency for data center administrators.
Supported on the 5930 ToR switch and the 7900, 11900, and 12900 products, VXLAN is
an IP-based overlay technology that tunnels Layer 2 frames inside IP datagrams.
Each physical server or switch has a tunnel endpoint called a VXLAN Tunnel
Endpoint (VTEP). Both Layer 2 broadcast and unicast frames are tunneled through a
traditional IP transport network to provide all the functionality of a single
broadcast domain.
VXLAN and physical networks can be interconnected with Layer 3 routing
functionality deployed inside a hypervisor environment, or via a Layer 2 hardware
VXLAN-to-VLAN gateway, such as the HP 5930.
The IP transport network can support VXLANs using unicast or multicast services.
Unicast is simple to deploy, but has scalability and processing concerns. A
multicast deployment optimizes bandwidth and packet processing utilization.
Finally, you learned how to configure a 5930 to operate as a VXLAN-to-VLAN
gateway.


Learning Check
After each module, your facilitator will lead a class discussion to capture key
insights and challenges from the module and any accompanying lab activity. To
prepare for the discussion, answer each of the questions below.
1. What are three advantages of VXLAN? (Choose three.)
a. It is an IEEE standards-based protocol.
b. Any existing IP routed infrastructure can be used.
c. The VXLAN ID is a 16-bit number, allowing for over 64,000 VLAN IDs.
d. It provides flexibility, since it is an IP-based Layer 2 overlay technology.
e. Scalability is further enhanced through the use of BGP extensions.
2. Which of the following correctly describe components of a typical VXLAN
deployment? (Choose three.)
a. The VTEP provides an entry point into the VXLAN.
b. Each participating VM host is assigned a VNI.
c. A VXLAN tunnel is formed between two VNI-assigned VMware hosts.
d. VXLAN can use a multicast IP address for Layer 2 multi-destination
delivery.
e. VXLAN requires IP multicast capability, since that is the only method of
transporting broadcast frames across the VXLAN fabric.
3. Because of the additional header information added by VXLAN systems, the
MTU of the IP infrastructure should be increased to at least 1550 bytes.
a. True.
b. False.
4. What are three possible solutions for routing between VXLANs? (Choose three.)
a. The internal, native vRouter on the hypervisor.
b. The HP Virtual Services router.
c. VXLAN Layer 3 VMware Edge Gateway.
d. A hardware-based VXLAN – VLAN gateway, like the HP 5930.
e. Any Layer 3-capable device.
5. What are two design considerations for VXLAN deployments? (Choose two.)
a. Avoid multicast routing protocol configuration on the IP infrastructure.
b. IGMP must be configured to avoid Layer 2 flooding.
c. You need to configure IP multicast ranges for the VXLAN IDs.
d. Mapping multiple VXLANs to the same transport IP address can improve
the efficiency of packet delivery.


Learning Check Answers


1. b, d, e
2. a, b, d
3. a
4. b, c, d
5. b, c

To learn more about HP Networking, visit
www.hp.com/networking
