Dell Technologies PowerEdge MX

Networking
Deployment Guide
H18548.8

Abstract
This document provides an overview of the architecture, features, and functionality of
the Dell Technologies PowerEdge MX networking infrastructure, including the steps for
configuring and troubleshooting the PowerEdge MX networking switches in Full Switch
and SmartFabric modes.

Dell Technologies Solutions

April 2023
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2023 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents

Chapter 1: Dell Technologies PowerEdge MX Platform Overview..................................................10


Dell Technologies Demo Center..................................................................................................................................... 10
Dell PowerEdge MX models and components............................................................................................................ 10
Introduction................................................................................................................................................................... 10
Hardware.........................................................................................................................................................................11
PowerEdge MX7000 - front............................................................................................................................................12
Dell PowerEdge MX7000 - rear......................................................................................................................................17
PowerEdge MX compute slot to I/O slot mapping...................................................................................................26
OpenManage Enterprise - Modular Edition............................................................................... 28
Introduction...................................................................................................................................................................28
PowerEdge MX initial deployment ......................................................................................................................... 28

Chapter 2: PowerEdge MX Scalable Fabric Architecture..............................................................29


Scalable Fabric Architecture.......................................................................................................................................... 29
Complex Scalable Fabric topologies.............................................................................................................................. 31
Quad-port Ethernet NICs................................................................................................................................................ 32
Interfaces and port groups............................................................................................................................................. 38
Recommended port order for MX7116n FEM connectivity..................................................................................... 43
Embedded top-of-rack switching..................................................................................................................................44
MX Chassis management wiring....................................................................................................................................45

Chapter 3: Dell SmartFabric OS10............................................................................................... 48


Operating modes............................................................................................................................................................... 48
Full Switch mode......................................................................................................................................................... 48
SmartFabric mode....................................................................................................................................................... 49
Changing operating modes............................................................................................................................................. 49
VLAN restrictions...............................................................................................................................................................51
LLDP for iDRAC..................................................................................................................................................................51
Virtual Link Trunking.........................................................................................................................................................52
Storage networking.......................................................................................................................................................... 52
NPIV Proxy Gateway.................................................................................................................................................. 52
Direct attached (F_Port).......................................................................................................................................... 53
FCoE Transit or FIP Snooping Bridge.....................................................................................................................53
iSCSI............................................................................................................................................................................... 54
NVMe/TCP...................................................................................................................................................................55
Host FCoE session load balancing................................................................................................................................ 55
OS10 version 10.5.2.4 or later.................................................................................................................................. 56
OS10 version 10.5.1.9 and earlier............................................................................................................................. 56
PowerEdge MX IOM operations.................................................................................................................................... 56
Switch Management page overview...................................................................................................................... 56
Switch Overview..........................................................................................................................................................57
Hardware tab................................................................................................................................................................58
View port status.......................................................................................................................................................... 59
Firmware tab................................................................................................................................................................. 61

Upgrading Dell SmartFabric OS10............................................................................................................................61
Alerts tab....................................................................................................................................................................... 62
Settings tab.................................................................................................................................................................. 63
OS10 privileged accounts................................................................................................................................................ 64
NIC teaming guidelines.................................................................................................................................................... 65

Chapter 4: Full Switch Mode....................................................................................................... 67


VLAN scaling guidelines for Full Switch mode........................................................................................................... 67
Managing Fibre Channel Zones on MX9116n FSE..................................................................................................... 67
Configure FC aliases for server and storage adapter WWPNs........................................................................ 68
Create FC zones..........................................................................................................................................................68
Create zone set........................................................................................................................................................... 69
Activate zone set........................................................................................................................................................ 69
Full Switch mode IO module replacement process................................................................................................... 69
VLAN stacking....................................................................................................................................................................70

Chapter 5: Overview of SmartFabric Services for PowerEdge MX............................................... 75


Functional overview..........................................................................................................................................................75
OS10 operating mode differences................................................................................................................................. 75
CLI commands available in SmartFabric mode...........................................................................................................76
IOM slot placement in SmartFabric mode................................................................................................................... 77
Two MX9116n Fabric Switching Engines in different chassis........................................................................... 77
Two MX5108n Ethernet switches in the same chassis...................................................................................... 78
Two MX9116n Fabric Switching Engines in the same chassis.......................................................................... 78
Switch-to-switch (VLTi) cabling................................................................................................................................... 79
VLT backup link............................................................................................................................................................ 79
Configuring port speed and breakout.......................................................................................................................... 80
VLAN scaling guidelines....................................................................................................................................................81
Maximum Transmission Unit behavior..........................................................................................................................82
Layer 2 Multicast, IGMP, and MLD snooping.............................................................................................................82
IGMP snooping.............................................................................................................................................................83
MLD snooping.............................................................................................................................................................. 83
Configuring L2 Multicast in SmartFabric mode................................................................................................... 83
Validation....................................................................................................................................................................... 84
Upstream network requirements...................................................................................................................................85
Physical connectivity.................................................................................................................................................. 85
Supported slot configurations for IOMs................................................................................................................ 86
Other restrictions and guidelines...................................................................................................................................88
Ethernet – No Spanning Tree uplink............................................................................................................................ 88
Spanning Tree Protocol - legacy Ethernet uplink......................................................................................................90
Networks and automated QoS....................................................................................................................................... 91
Server templates, profiles, virtual identities, networks, and deployment............................................................92
Templates...................................................................................................................................................................... 92
Profiles........................................................................................................................................................................... 93
Virtual identities and identity pools......................................................................................................................... 93
Deployment................................................................................................................................................................... 93
VMware vCenter integration - OpenManage Network Integration...................................................................... 93
OpenManage Integration for VMware vCenter......................................................................................................... 94

Chapter 6: SmartFabric Creation.................................................................................................95
Steps to create a SmartFabric.......................................................................................................................................95
Physically cable PowerEdge MX chassis and upstream switches.........................................................................95
Define VLANs..................................................................................................................................................................... 95
Define VLANs for FCoE............................................................................................................................................. 96
Create the SmartFabric................................................................................................................................................... 97
Optional steps.................................................................................................................................................................... 98
Forward error correction........................................................................................................................................... 98
Configure uplink port speed or breakout............................................................................................................. 100
Configure Ethernet ports......................................................................................................................................... 101
Create Ethernet – No Spanning Tree uplink.............................................................................................................102
Ethernet – No Spanning Tree upstream switch configuration.............................................................................104
Optional - Configure Fibre Channel............................................................................................................................ 105
Configure Fibre Channel universal ports..............................................................................................................105
Create Fibre Channel uplinks.................................................................................................................................. 105
Enable support for larger VLAN counts..................................................................................................................... 106
Uplink failure detection.................................................................................................................................................. 109
Verifying UFD configuration.....................................................................................................................................112
Configuring the upstream switch and connecting uplink cables...........................................................................112

Chapter 7: Server Deployment.................................................................................................... 113


Deploying a server............................................................................................................................................................ 113
Server preparation........................................................................................................................................................... 113
Create a server template................................................................................................................................................113
Create identity pools....................................................................................................................................................... 115
Associate server template with networks – no FCoE............................................................................................. 116
Associate server template with networks - with FCoE...........................................................................................117
Deploy a server template................................................................................................................................................119
Profile deployment.......................................................................................................................................................... 120

Chapter 8: SmartFabric Deployment Validation.......................................................................... 125


Validate the SmartFabric health.................................................................................................................................. 125
Validation of quad-port NIC topologies......................................................................................................................126
Validate with OME-M............................................................................................................................................... 126
Validation through switch CLI.................................................................................................................................129
Validating Ethernet - No Spanning Tree uplinks...................................................................................................... 129
Upstream switch validation - SmartFabric OS10............................................................................................... 130
Upstream switch validation - Cisco.......................................................................................................................132

Chapter 9: SmartFabric Operations............................................................................................135


Viewing SmartFabric health and status..................................................................................................................... 135
Edit a SmartFabric...........................................................................................................................................................136
Edit uplinks........................................................................................................................................................................ 137
Edit VLANs........................................................................................................................................................................ 138
Edit VLANs on deployed servers with OME-M 1.20.00 and later.................................................................. 138
Edit VLANs on a deployed server with OME-M 1.10.20 and earlier.............................................140
Delete SmartFabric.......................................................................................................................................................... 141
Connect non-MX Ethernet devices to a SmartFabric.............................................................................................141

Expanding from a single-chassis to dual-chassis configuration........................................................................... 142
Step 1: Cable Management module....................................................................................................................... 142
Step 2: Create Multichassis Management Group.............................................................................................. 142
Step 3: Add second MX Chassis to the MCM Group....................................................................................... 142
Step 4: Move MX9116n FSE from first chassis to second chassis................................................................ 143
Step 5: Validation.......................................................................................................................................................144
SmartFabric mode IOM replacement process.......................................................................................................... 144
MXG610 Fibre Channel switch module replacement process...............................................................................147
Chassis Backup and Restore.........................................................................................................................................147
Backing up the chassis............................................................................................................................................. 148
Restoring chassis........................................................................................................................................................151
Manual backup of IOM configuration through the CLI.....................................................................................153

Chapter 10: General Troubleshooting......................................................................................... 154


View or extract logs using OME-M............................................................................................................................. 154
Troubleshooting MCM topology errors...................................................................................................................... 154
Troubleshooting VLT and vPC configuration on upstream switches..................................................................155
Troubleshooting FEM and compute sled discovery................................................................................................ 156
Troubleshooting FC and FCoE..................................................................................................................................... 156
Rebalancing FC and FCoE sessions............................................................................................................................ 158
Common CLI troubleshooting commands for Full Switch and SmartFabric modes.........................................161

Chapter 11: SmartFabric Troubleshooting.................................................................................. 166


Troubleshooting SmartFabric issues........................................................................................................................... 166
Troubleshoot port group breakout errors..................................................................................................................166
Troubleshooting VLTi between switches...................................................................................................................170
Troubleshooting uplink errors........................................................................................................................................ 171
Troubleshooting legacy Ethernet uplink with STP...................................................................................................173
Troubleshooting common issues.................................................................................................................................. 174
SmartFabric Services troubleshooting commands.................................................................................................. 176

Chapter 12: Configuration Scenarios.......................................................................................... 182


Scenario 1: SmartFabric deployment with S5232F-ON upstream switches with Ethernet - No
Spanning Tree uplink...................................................................................................................................................183
Configure SmartFabric............................................................................................................................................. 183
Dell PowerSwitch S5232F-ON configuration..................................................................................................... 184
Dell PowerSwitch S5232F-ON validation...............................................................185
Scenario 2: SmartFabric connected to Cisco Nexus 3232C switches with Ethernet - No Spanning
Tree uplink..................................................................................................................................................................... 187
Configure SmartFabric..............................................................................................................................................187
Cisco Nexus 3232C switch configuration............................................................................................................188
Configuration validation........................................................................................................................................... 190
Scenario 3: SmartFabric deployment with S5232F-ON upstream switches with legacy Ethernet uplink.193
Dell PowerSwitch S5232F-ON configuration..................................................................................................... 194
Dell PowerSwitch S5232F-ON validation............................................................................................................ 195
Scenario 4: SmartFabric connected to Cisco Nexus 3232C switches with legacy Ethernet uplink............197
Cisco Nexus 3232C switch configuration............................................................................................................ 197
Configuration validation........................................................................................................................................... 199
Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV Proxy Gateway mode......................202
Scenario 6: Connect MX9116n FSE to Fibre Channel storage - FC Direct Attach.........................................206

Scenario 7: Connect MX5108n to Fibre Channel storage - FSB...........................................................................211
Scenario 8: Configure boot from SAN........................................................................................................................215
Configure NIC boot device...................................................................................................................................... 216
Configure BIOS settings...........................................................................................................................................217
Connect FCoE LUN................................................................................................................................................... 218
Set up and install media connection......................................................................................................................218
Use Lifecycle Controller to set up operating system driver for media installation.................................... 218

Chapter 13: PowerEdge MX 100 GbE solution with external Fabric Switching Engine................. 219
PowerEdge MX models and components for 100 GbE.......................................................................................... 219
Dell Networking MX8116n Fabric Expander Module..........................................................................................219
Dell PowerSwitch Z9432F-ON.............................................................................................................................. 220
Dell PowerEdge MX760c compute sled.............................................................................................................. 220
PowerEdge Scalable Fabric Architecture.................................................................................................................. 221
100 GbE deployment options..................................................................................................................................222
Topologies for 25 GbE............................................................................................................................................. 225
Topologies for 25 GbE and 100 GbE in the same scalable fabric..................................................................226
MX Chassis management wiring........................................................................................................................... 228
MX8116n management...................................................................................................................................................228
MX8116n within OME-Modular.............................................................................................................................. 229
Z9432F-ON management............................................................................................................................................. 232
MX8116n FEM port mapping on the Z9432F-ON................................................................................................... 232
Compute sleds with 100 GbE dual port mezzanine cards............................................................................... 232
Compute sleds with 25 GbE quad port mezzanine cards............................................................................... 233
Compute sleds with 25 GbE dual port mezzanine cards.................................................................................235
100 GbE solution configuration examples................................................................................................................. 237
100 GbE solution example....................................................................................................................................... 237
25 GbE solution example.........................................................................................................................................242
100 GbE solution configuration validation................................................................................................................. 247
Show Interface Status............................................................................................................................................. 247
Show Port Group...................................................................................................................................................... 249
Show LLDP neighbors..............................................................................................................................................249
Show interface port channel summary................................................................................................................250
Show VLAN................................................................................................................................................................ 250
Show VLT................................................................................................................................................................... 250
100 GbE combined deployment with legacy IOMs.................................................................................................. 251
Single chassis combined deployment....................................................................................................................251
Multi-chassis combined deployment..................................................................................................................... 251
Networking configuration management of combined deployments............................................................. 252
Combined deployment restrictions....................................................................................................................... 252
100 GbE deployment with rack servers.....................................................................................................................253

Chapter 14: Advanced NPAR...................................................................................................... 254


Hardware and software requirements....................................................................................................................... 254
Restrictions and limitations.......................................................................................................................................... 254
Advanced NPAR feature restrictions...................................................................................................................255
Scalability restrictions..............................................................................................................................................255
Advanced NPAR solution for MX Platform.............................................................................................................. 256
Broadcom 57504 quad port 25 GbE mezzanine card...................................................................................... 257

Advanced NPAR on MX SmartFabric mode.............................................................................................................258
Configure NPAR device settings and NIC partitioning.................................................................................... 258
Advanced Quad Port NIC NPAR status.............................................................................................................. 260
SmartFabric configuration in OME-Modular...................................................................................................... 263
Advanced NPAR Quad Port NIC in Full Switch mode............................................................................................267
S5232F-ON configuration.......................................................................................................................................267
MX9116n configuration in Full switch mode....................................................................................................... 269
Configuration validation.................................................................................................................................................272
Show LLDP................................................................................................................................................................. 272
Show Eth-npar........................................................................................................................................................... 273

Appendix A: Additional Tasks..................................................................................................... 276


Reset SmartFabric OS10 switch to factory defaults..............................................................................................276
Reset Cisco Nexus 3232C to factory defaults........................................................................................................ 276
Connect to IO Module console port using RACADM..............................................................................................276
MX I/O module OS10 installation using ONIE.......................................................................................................... 277
Manual installation.....................................................................................................................................................277
Automatic installation............................................................................................................................................... 278
MXG610s FC switch upgrade and downgrade.................................................................279
MXG610s switch details validation............................................................................................................................. 280

Appendix B: Additional Information........................................................................................... 282


PTM port mapping..........................................................................................................................................................282
Supported cables and optical connectors.................................................................................................................283
PowerEdge MX IOM slot support matrix.................................................................................................................. 289

Appendix C: Dell PowerSwitch S4148U-ON Configuration in Scenario 7..................................... 290


Switch configuration commands.................................................................................................................................290

Appendix D: Dell PowerStore 1000T...........................................................................................293


About Dell PowerStore 1000T..................................................................................................................................... 293
Configure PowerStore 1000T FC storage................................................................................................................ 293
Create a host............................................................................................................................................................. 293
Create host groups and add hosts........................................................................................................................294
Create volume groups..............................................................................................................................................296
Create volumes..........................................................................................................................................................296
Determine PowerStore 1000T storage array FC WWPNs.................................................................................... 298
Determine CNA FCoE port WWPNs.......................................................................................................................... 299

Appendix E: Hardware and Version Information.......................................................................... 301


Hardware used in this guide..........................................................................................................................................301
Dell PowerSwitch S3048-ON................................................................................................................................. 301
Dell PowerSwitch S5232F-ON............................................................................................................................... 301
Dell PowerSwitch S4148U-ON...............................................................................................................................302
Dell PowerSwitch Z9264F-ON.............................................................................................................................. 302
Dell PowerStore 1000T............................................................................................................................................302
Cisco Nexus 3232C.................................................................................................................................................. 303
Software and firmware versions used....................................................................................................................... 303
Scenarios 1 through 4.............................................................................................................................................. 303

Scenarios 5 through 8..............................................................................................................................................304

Appendix F: References.............................................................................................................306
Dell Technologies documentation............................................................................................................................... 306
OME-M and OS10 compatibility and documentation....................................................................................... 306
Dell Technologies Networking Infrastructure Solutions documentation..................................................... 307
Feedback and technical support................................................................................................................................. 307

Chapter 1: Dell Technologies PowerEdge MX Platform Overview
Dell Technologies Demo Center
The Dell Technologies Demo Center is a highly scalable, cloud-based service that provides 24/7 self-service access to virtual
labs, hardware labs, and interactive product simulations. Several interactive demos are available on the Demo Center for
PowerEdge MX platform deployments. Go to Dell Technologies Interactive Demo: OpenManage Enterprise Modular for MX
solution management to quickly become familiar with deploying MX Networks.

Dell PowerEdge MX models and components

Introduction
The vision of Dell Technologies is to be the essential technology company from the edge, to the core, and to the cloud. Dell
Technologies ensures modernization for today's applications and the emerging cloud-native world. Dell Networking is committed
to disrupting the fundamental economics of the market with an open strategy that gives you the freedom of choice for
networking operating systems and top-tier merchant silicon. The Dell Technologies strategy enables business transformations
that maximize the benefits of collaborative software and standards-based hardware, including lowered costs, flexibility, freedom,
and security. Dell Technologies provides further customer enablement through validated deployment guides that demonstrate
these benefits while maintaining a high standard of quality, consistency, and support.
The Dell PowerEdge MX platform is a unified, high-performance data center infrastructure. It provides the agility, resiliency,
and efficiency to optimize a wide variety of traditional and new, emerging data center workloads and applications. With its
kinetic architecture and agile management, PowerEdge MX dynamically configures compute, storage, and fabric; increases team
effectiveness; and accelerates operations. The responsive design delivers the innovation and longevity that customers need for
their IT and digital business transformations.
As part of the PowerEdge MX platform, the Dell SmartFabric OS10 network operating system includes SmartFabric Services
(SFS), a network automation and orchestration solution that is fully integrated with the MX platform.
NOTE: This guide may contain language that is not consistent with Dell's current guidelines. Dell plans to update this guide
over subsequent releases to revise the language accordingly.
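The behavior of each MX switch is governed by its OS10 operating mode, Full Switch or SmartFabric, which is covered in detail later in this guide. As a quick orientation, the current mode can be displayed from the OS10 CLI. The following is a minimal sketch; the hostname is illustrative and the output is abbreviated:

MX9116n-A1# show switch-operating-mode

Switch-Operating-Mode : Full Switch Mode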



Figure 1. PowerEdge MX7000 chassis

Hardware
This section contains information about the hardware and options available in the Dell PowerEdge MX7000. The section is
divided into two parts:
● The front of the MX7000 chassis, containing compute and storage sleds
● The back of the MX7000 chassis, containing networking, storage, and management components



PowerEdge MX7000 - front

Overview
The following figure shows the front view of the PowerEdge MX7000 chassis. The left side of the chassis can have one of three
control panel options:
● LED status light panel
● Touch screen LCD panel
● Touch screen LCD panel equipped with Dell EMC PowerEdge iDRAC Quick Sync 2
The bottom of the figure shows six hot-pluggable, redundant, 3,000-watt power supplies. Above the power supplies are eight
single-width slots that support compute and storage sleds. In the example below, the slots contain:
● Four Dell EMC PowerEdge MX740c, MX750c, and MX760c sleds in slots one through four
● One Dell EMC PowerEdge MX840c sled in slots five and six
● Two Dell EMC PowerEdge MX5016s sleds in slots seven and eight

Figure 2. PowerEdge MX7000 – front



PowerEdge MX740c and MX750c compute sleds
The PowerEdge MX740c and MX750c are two-socket, full-height, single-width compute sleds that offer impressive
performance and scalability. The MX740c and MX750c are ideal for dense virtualization environments and can serve as a
foundation for collaborative workloads. The MX7000 chassis supports up to eight MX740c or MX750c sleds, in any combination.
Key features include:
● Single-width slot design
● Two CPU sockets
● 24 (MX740c) or 32 (MX750c) DDR4 DIMM slots
● Boot options include BOSS-S1 or Internal Dual SD Modules (IDSDM)
● Up to six SAS/SATA SSD/HDD and NVMe PCIe SSDs
● Two PCIe mezzanine card slots for connecting to network Fabric A and B
● One PCIe mini-mezzanine card slot for connecting to storage Fabric C
● iDRAC with Lifecycle Controller

Figure 3. PowerEdge MX740c sled with six 2.5-inch SAS drives

Dell PowerEdge MX760c compute sled


The Dell PowerEdge MX760c is a two-socket, full-height, single-width compute sled that offers impressive performance and
scalability. The MX760c is ideal for dense virtualization environments and can serve as a foundation for collaborative workloads.
Up to eight MX760c sleds can be installed in a single MX7000 chassis, where they can be combined with compute sleds from
other generations.
Key features of the PowerEdge MX760c include:
● Single-width slot design
● Single or dual CPUs (up to 56 cores per socket, with four UPI links at 24 GT/s)
● 32 DDR5 DIMM slots with eight memory channels
● Eight E3.S NVMe (Gen5 x4) SSDs, six 2.5-inch SAS/SATA SSDs, or six NVMe (Gen4) SSDs
● BOSS-N1 hardware RAID for boot (two internal M.2 NVMe drives)
● H965i performance RAID controller for SAS/SATA or NVMe RAID
● iDRAC9 with Lifecycle Controller
● Dual-port 100 GbE mezzanine cards on Fabrics A and B
● Dual-port and quad-port 25 GbE mezzanine cards on Fabrics A and B
● Dual-port 32 Gb Fibre Channel mezzanine cards on Fabric C

Figure 4. Dell PowerEdge MX760c sled with eight E3.S SSDs

NOTE: The 100 GbE Dual Port Mezzanine card is also available on the MX750c.



PowerEdge MX840c compute sled
The PowerEdge MX840c is a powerful four-socket, full-height, double-width server that features dense compute, exceptionally
large memory capacity, and a highly expandable storage subsystem. It is the ultimate scale-up server that excels at running a
wide range of database applications, substantial virtualization, and software-defined storage environments. The MX7000 chassis
supports up to four MX840c compute sleds.
Key features of the MX840c include:
● Dual-width slot design
● Four CPU sockets
● 48 DIMM slots of DDR4 memory
● Boot options include BOSS-S1 or IDSDM
● Up to eight SAS/SATA SSD/HDD and NVMe PCIe SSDs
● Four PCIe mezzanine card slots for connecting to network Fabric A and B
● Two PCIe mini-mezzanine card slots for connecting to storage Fabric C
● iDRAC9 with Lifecycle Controller

Figure 5. PowerEdge MX840c sled with eight 2.5-inch SAS drives



PowerEdge MX5016s storage sled
The PowerEdge MX5016s storage sled delivers scale-out, direct-attached storage within the PowerEdge MX architecture. The
MX5016s provides customizable 12 Gb/s direct-attached SAS storage with up to 16 SAS HDDs/SSDs. The MX740c, MX750c,
and MX840c compute sleds can share drives with the MX5016s using the dedicated PowerEdge MX5000s SAS switch.
Internal server drives may be combined with up to seven MX5016s sleds in one chassis for extensive scalability. The MX7000
chassis supports up to seven MX5016s storage sleds.

Figure 6. PowerEdge MX5016s sled with the drive bay extended



Dell PowerEdge MX7000 - rear

Overview
The Dell PowerEdge MX7000 includes three I/O fabrics and the Management Modules. Fabrics A and B are for Ethernet and
future I/O module connectivity, and Fabric C is for SAS and Fibre Channel (FC) connectivity. Each fabric provides two slots
for redundancy. Management Modules contain the chassis intelligence, which oversees and orchestrates the operations of the
various components. The following example figure shows the rear of the PowerEdge MX7000 chassis. From top to bottom, the
chassis is configured with:
● One Dell Networking MX9116n Fabric Switching Engine (FSE) installed in fabric slot A1
● One Dell Networking MX7116n Fabric Expander Module (FEM) installed in fabric slot A2
● Two Dell Networking MX5108n Ethernet switches installed in fabric slots B1 and B2
● Two Dell Networking MXG610s Fibre Channel switches installed in fabric slots C1 and C2
● Two Dell PowerEdge MX9002m modules installed in management slots MM1 and MM2

Figure 7. Dell PowerEdge MX7000 – rear

The Dell PowerEdge MX7000 now includes a 100 GbE solution by deploying a 100 GbE Fabric Expander Module (FEM), the Dell
Networking MX8116n. The 100 GbE FEM is installed in Fabrics A and B and operates with an external Fabric Switching Engine
(FSE), the Dell PowerSwitch Z9432F-ON. For additional information on the 100 GbE solution, see PowerEdge MX 100 GbE
solution with external Fabric Switching Engine.



Dell PowerEdge MX9002m management module
The Dell PowerEdge MX9002m management module controls the overall chassis power and cooling, and hosts the OpenManage
Enterprise Modular console. Two external Gigabit Ethernet ports are provided to enable management connectivity and to
connect additional MX7000 chassis in a single logical chassis. The MX7000 chassis supports two MX9002m modules for
redundancy. The following figure shows a single MX9002m module and its components.

Figure 8. Dell PowerEdge MX9002m module

The following MX9002m module components are labeled in the figure:


1. Handle release
2. Gigabit Ethernet port 1
3. Gigabit Ethernet port 2
4. ID button and health status LED
5. Power status LED
6. Micro-B USB serial port



Dell PowerEdge MX9116n Fabric Switching Engine
The Dell PowerEdge MX9116n Fabric Switching Engine (FSE) is a scalable, high-performance, low latency 25 Gbps Ethernet
switch, purpose-built for the PowerEdge MX platform. The MX9116n FSE provides enhanced capabilities and cost-effectiveness
for enterprise, mid-market, Tier 2 cloud, and NFV service providers with demanding compute and storage traffic environments.
The MX9116n FSE provides:
● Sixteen internal 25 GbE server-facing ports, ports 1 through 16, connected to compute sleds
● Twelve QSFP28-Double Density (DD) ports for fabric expansion and uplinks, ports 17 through 40. These ports can be
operated as 2x 100 GbE, 2x 40 GbE, 8x 25 GbE, or 8x 10 GbE.
● Two 100 GbE QSFP28 ports, used for Ethernet uplinks, ports 41 and 42
● Two 100 GbE QSFP28 unified ports, used for Ethernet and Fibre Channel connections, ports 43 and 44
For more information about port-mapping and virtual ports, see Interfaces and port groups.
The two standard QSFP28 ports can be used for Ethernet uplinks. The QSFP28 unified ports can support Ethernet or native
Fibre Channel connectivity, supporting both NPIV Proxy Gateway (NPG) and direct attach FC capabilities.
The twelve QSFP28-DD ports provide additional uplinks, VLTi links, and connections to rack servers at 10 GbE or 25 GbE using
breakout cables. The QSFP28-DD ports also provide fabric expansion connections for up to nine additional MX7000 chassis
using the MX7116n Fabric Expander Module. The MX7000 chassis supports up to four MX9116n FSEs in Fabric A, Fabric B, or
both. See the PowerEdge MX IOM slot support matrix for more information about supported slot configurations and the
PowerEdge MX I/O Guide for more information about cable selection.
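In Full Switch mode, port speed and breakout for these ports are set through OS10 port groups. The following is a minimal CLI sketch, assuming port-group 1/1/1 maps to a QSFP28-DD port pair and port-group 1/1/16 maps to a unified port; the group numbers and mode keywords are illustrative and should be verified against the installed OS10 release:

MX9116n-A1(config)# port-group 1/1/1
MX9116n-A1(conf-pg-1/1/1)# mode Eth 25g-8x
MX9116n-A1(conf-pg-1/1/1)# exit
MX9116n-A1(config)# port-group 1/1/16
MX9116n-A1(conf-pg-1/1/16)# mode FC 32g-4x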

Figure 9. MX9116n FSE

The following MX9116n FSE components are labeled in the figure:


1. Express service tag
2. Storage USB port
3. Micro-B USB console port
4. Power and indicator LEDs
5. Handle release
6. Two QSFP28 ports
7. Two QSFP28 unified ports
8. Twelve QSFP28-DD ports



The following table shows a port-mapping example for the internal and external interfaces on the MX9116n FSE. The MX9116n
FSE maps dual-port mezzanine cards to odd-numbered ports. The MX7116n FEM, connected to the MX9116n FSE, maps to
sequential virtual ports, with each virtual port representing a compute sled attached to the MX7116n FEM.

Table 1. Port-mapping example for Fabric A


MX7000 slot MX9116n FSE ports MX7116n FEM virtual ports
1 Ethernet 1/1/1 Ethernet 1/71/1
2 Ethernet 1/1/3 Ethernet 1/71/2
3 Ethernet 1/1/5 Ethernet 1/71/3
4 Ethernet 1/1/7 Ethernet 1/71/4
5 Ethernet 1/1/9 Ethernet 1/71/5
6 Ethernet 1/1/11 Ethernet 1/71/6
7 Ethernet 1/1/13 Ethernet 1/71/7
8 Ethernet 1/1/15 Ethernet 1/71/8
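This mapping can be checked from the MX9116n FSE CLI. As a minimal sketch (hostname illustrative, output omitted), show discovered-expanders lists the attached MX7116n FEMs and their virtual slot IDs, while show interface status and show lldp neighbors confirm which physical and virtual ports are up and what is connected to them:

MX9116n-A1# show discovered-expanders
MX9116n-A1# show interface status
MX9116n-A1# show lldp neighbors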

Dell Networking MX7116n Fabric Expander Module


The Dell Networking MX7116n Fabric Expander Module (FEM) acts as an Ethernet repeater, taking signals from an attached
compute sled and repeating them to the associated lane on the external QSFP28-DD connector. The MX7116n FEM provides
two external QSFP28-DD interfaces, each providing up to eight 25 Gbps connections, and 16 internal server-facing ports.
There is no operating system or switching ASIC on the MX7116n FEM, so it rarely requires an upgrade. There is also no
management or user interface, making the MX7116n FEM almost maintenance-free. The MX7000 chassis supports up to four
MX7116n FEMs in Fabric A, Fabric B, or both. See PowerEdge MX IOM slot support matrix for more information about supported
slot configurations, and the PowerEdge MX I/O Guide for more information about cable selection.

Figure 10. MX7116n FEM

The following MX7116n FEM components are labeled in the figure:


1. Express service tag
2. Supported optic LED
3. Power and indicator LEDs
4. Module insertion and removal latch
5. Two QSFP28-DD fabric expander ports
The following figure shows how the MX7116n FEM can act as a pass-through module: the external port breaks out to
connect to ToR switches (using SFP+, SFP28, QSFP+, or QSFP28 connections), while the internal connections go to compute
sleds with dual-port mezzanine cards. When connecting to QSFP+ or QSFP28 interfaces, the interface must be configured as
4x 10 GbE or 4x 25 GbE, respectively.
NOTE: For an MX7116n FEM acting as a pass-through module, only Dell ToR switches are supported for external
connections.



Figure 11. Ethernet MX7116n-FEM mezzanine mapping

The following figure shows different uplink options for the MX7116n FEM to act as a pass-through module operating at 25 GbE.
The MX7116n FEM should be connected to an upstream switch at 25 GbE. Support for 10 GbE is available as of OME-Modular
1.20.00.
If the MX7116n FEM port connects to QSFP28 ports, a QSFP28-DD to 2x QSFP28 cable is used. If the MX7116n FEM port
connects to SFP28 ports, a QSFP28-DD to 8x SFP28 cable is used. These cables can be DAC, AOC, or optical transceiver plus
passive fiber. See the PowerEdge MX I/O Guide for more information about cable selection.
NOTE: If connecting the FEM to a QSFP+/QSFP28 port on a ToR switch, ensure that the port is configured to break out
to 4x 10 GbE or 4x 25 GbE and not 40 GbE or 100 GbE.



Figure 12. Topologies for MX7116n FEM as pass-through module

NOTE: The MX7116n FEM cannot act as a stand-alone switch and must be connected to the MX9116n FSE or other Dell
ToR switches to function. Connecting the MX7116n FEM to non-Dell switches is not supported.

Dell Networking MX5108n Ethernet switch


The Dell Networking MX5108n Ethernet switch is targeted at PowerEdge MX deployments of one or two chassis. While not
a scalable switch, it still provides high performance and low latency with a nonblocking switching architecture. The MX5108n
provides line-rate 25 Gbps Layer 2 and Layer 3 forwarding capacity to all connected servers with no oversubscription.
In addition to eight internal 25 GbE ports, the MX5108n provides:
● One 40 GbE QSFP+ port
● Two 100 GbE QSFP28 ports
● Four 10 GbE RJ45 Base-T ports
These ports can be used to provide a combination of network uplink, VLT interconnect (VLTi), or FCoE connectivity. The
MX5108n supports FCoE Initialization Protocol (FIP) Snooping Bridge (FSB) mode, but does not support NPG or direct-attach
FC capabilities. The MX7000 chassis supports up to four MX5108n Ethernet switches in Fabric A, Fabric B, or both. See the
PowerEdge MX IOM slot support matrix for more information about supported slot configurations and the PowerEdge MX I/O
Guide for more information about cable selection.

Figure 13. MX5108n Ethernet switch

The following MX5108n components are labeled in the figure:


1. Express service tag
2. Storage USB port
3. Micro-B USB console port
4. Power and indicator LEDs
5. Module insertion and removal latch



6. One QSFP+ port
7. Two QSFP28 ports
8. Four 10GBase-T ports
NOTE: Compute sleds with quad-port mezzanine cards are not supported with MX5108n Ethernet switches.

PowerEdge MX Ethernet Pass-Through Modules


There are two Ethernet Pass-Through Modules (PTMs) providing nonswitched Ethernet connections to ToR switches. Each
PTM provides 16 internal ports mapped directly to 16 external ports. The MX7000 chassis supports up to four PTMs in Fabric A,
Fabric B, or both. See the PowerEdge MX IOM slot support matrix for more information about supported slot configurations and the
PowerEdge MX I/O Guide for more information about cable selection. For more information about PTM port to compute sled
mapping, see PTM port mapping.
The following figure shows the 25 GbE Ethernet PTM. The 25 GbE PTM provides 16 external SFP28 ports that can operate at
10 GbE or 25 GbE.

Figure 14. 25 GbE Ethernet PTM

The following 25 GbE PTM components are labeled in the figure:


1. Express service tag
2. Power and indicator LEDs
3. Module insertion and removal latch
4. 16 SFP28 ports
The 10GBase-T Ethernet PTM, shown in the following figure, provides 16 external RJ45 Base-T ports that operate at 10 GbE.

Figure 15. 10GBase-T Ethernet PTM

The following 10GBase-T Ethernet PTM components are labeled in the figure:
1. Express service tag
2. Power and indicator LEDs



3. Module insertion and removal latch
4. 16 10GBase-T ports

Dell Networking MXG610s Fibre Channel switch


The Dell Networking MXG610s is a high-performance, 32 Gbps Fibre Channel switch based on Brocade technology. It is ideal for
connectivity to all-flash SAN storage solutions and is designed for maximum flexibility and value with pay-as-you-grow scalability
using a Ports on Demand (POD) license model. The MXG610s is compatible with Brocade and Cisco FC switches. The MXG610s
runs the Brocade FOS operating system and Brocade tools are used to manage the switch.
In addition to 16 internal 32-GFC ports, the MXG610s provides:
● Eight external SFP+ ports
● Two 4x 32 Gbps external QSFP ports
The internal and external port information is as follows:
● Internal ports support 16-Gbps or 32-Gbps speed
● Internal ports support F_Port mode and N_Port mode for NPIV connections
● External ports support F_Port, N_Port, D_Port, and E_Port modes
● SFP+ ports auto-negotiate to 8 Gbps, 16 Gbps, or 32 Gbps speeds when 32 Gbps SFP+ transceivers are used
● SFP+ ports auto-negotiate to 8 Gbps or 16 Gbps speeds when 16 Gbps SFP+ transceivers are used
● QSFP ports auto-negotiate to 16 Gbps or 32 Gbps speeds when 32 Gbps QSFP transceivers are used
● QSFP ports support breakout cables
● QSFP ports support ISL connections only. Interchassis link (ICL) connections are not supported
● Dynamic Ports on Demand (POD) support with increments of 8-port licenses
The external ports support the connection of the MX7000 chassis to existing SAN switches, or the connection of a FC storage
array directly to the switch.
NOTE: The MX7000 chassis requires redundant MXG610s in Fabric C. The operation of a single MXG610s switch is not
supported.

NOTE: For information about the optical transceivers and cables used with the MXG610s, see the MXG610s Fibre Channel
Switch Module Installation Guide.

Figure 16. MXG610s Fibre Channel switch module

The following MXG610s components are labeled in the figure:


1. Express service tag
2. Module insertion and removal latch
3. Micro-B USB console port
4. Power and indicator LEDs
5. Eight external SFP+ ports
6. Two 4x 32-GFC QSFP ports



Dell Networking MXG610s Fibre Channel switch models and licenses
The Dell Networking MXG610s FC switch can be purchased in two configurations:
● Sixteen activated ports and four 32 Gbps SFP+ SWL optical transceivers
● Sixteen activated ports, eight 32 Gbps SFP+ SWL optical transceivers, and the Enterprise software bundle
Enterprise software bundle
The Enterprise bundle includes ISL Trunking, Fabric Vision, and Extended Fabric licenses:

ISL Trunking: Allows you to aggregate multiple physical links into one logical link for enhanced network performance and
fault tolerance. ISL Trunking also enables Brocade Access Gateway ISL Trunking (N_Port Trunking).

Fabric Vision: Enables MAPS (Monitoring and Alerting Policy Suite), Flow Vision, IO Insight, VM Insight, and ClearLink
(D_Port) diagnostics to non-Brocade devices:
● MAPS enables rules-based monitoring and alerting capabilities, and provides comprehensive dashboards to troubleshoot
problems in Brocade SAN environments.
● Flow Vision enables host-to-LUN flow monitoring, application flow mirroring for offline capture and deeper analysis, and
test traffic flow generation for SAN infrastructure validation.
● IO Insight automatically detects degraded storage IO performance with integrated device latency and IOPS monitoring
embedded in the hardware.
● ClearLink (D_Port) to non-Brocade devices allows extensive diagnostic testing of links to devices other than Brocade
switches and adapters.
NOTE: This functionality requires the support of the attached device, and the ability for the user to check the device.

Extended Fabric: Provides greater than 10 km of switched fabric connectivity at full bandwidth over long distances.

NOTE: The features described above are only available as part of the Enterprise software bundle. Individual feature licenses are
not available.
Ports on Demand
You can purchase Ports on Demand (POD) licenses to activate up to 24 additional ports in 8-port increments. The
switch module supports dynamic POD license allocation, where two port licenses are assigned to ports 0 and 17 at the factory.
The remaining licenses are assigned to active ports on a first-come, first-served basis. After the licenses are installed, you can
move them from one port to another, making port licensing flexible.
Broadcom software licensing upgrades
To obtain software licenses for the MXG610s, you must register the switch on the Broadcom support portal at
https://support.broadcom.com/.

NOTE: Run the chassisshow command to obtain the required Factory Serial Number.

To obtain upgrades for MXG610s software, contact Dell Technical Support.

Dell PowerEdge MX5000s SAS module


The Dell PowerEdge MX5000s SAS module supports four SAS internal connections to all eight front-facing slots in the
PowerEdge MX7000 chassis. The MX5000s uses T10 SAS zoning to provide multiple SAS zones/domains for the compute
sleds. Storage management is conducted through the OpenManage Enterprise Modular console.

NOTE: The external (rear-facing) ports on MX5000s SAS switches are not currently enabled.

The MX5000s provides Fabric C SAS connectivity to each compute sled and one or more MX5016s storage sleds. Compute
sleds connect to the MX5000s using either SAS Host Bus Adapters (HBA) or a PowerEdge RAID Controller (PERC) in the
mini-mezzanine PCIe slot.
The MX5000s switches are deployed as redundant pairs to offer multiple SAS paths to the individual SAS disk drives. The
MX7000 chassis supports redundant MX5000s in Fabric C.

NOTE: An MX5000s SAS module and an MXG610s are not supported in the same MX7000 chassis.



Figure 17. MX5000s SAS module

The following MX5000s components are labeled in the figure:


1. Express service tag
2. Module insertion and removal latch
3. Power and indicator LEDs
4. Six SAS ports

PowerEdge MX compute slot to I/O slot mapping

Overview
The PowerEdge MX7000 chassis includes two general-purpose I/O fabrics, Fabric A and B. The vertically aligned compute
sleds in slots one through eight connect to the horizontally aligned I/O modules (IOMs) in fabric slots A1, A2, B1, and B2. This
orthogonal connection method results in a midplane-free design and enables the adoption of new I/O technologies without the
burden of having to upgrade the midplane.

Figure 18. MX7000 orthogonal connection

Mezzanine cards
The MX740c, MX750c, and MX760c support up to two mezzanine cards, which are installed in slots A1 and B1, and the MX840c
supports up to four mezzanine cards, which are installed in slots A1, A2, B1, and B2. Each mezzanine card provides redundant
connections to each fabric, A or B, as shown in the following figure. A mezzanine card connects orthogonally to the pair of IOMs



installed in the corresponding fabric slot. For example, port one of mezzanine card A1 connects to fabric slot A1, which holds
an MX9116n FSE (not shown). The second port of mezzanine card A1 connects to fabric slot A2, which holds an MX7116n FEM
(not shown).

Figure 19. MX740c mezzanine cards

Mini-mezzanine card
The MX7000 chassis also provides Fabric C, shown in the following figure, supporting redundant MXG610s FC switches, or
MX5000s SAS modules. This fabric uses a midplane connecting the C1 and C2 modules to each compute or storage sled. The
MX740c supports one mini-mezzanine card, which is installed in slot C1, and the MX840c supports two mini-mezzanine cards,
which are installed in slots C1 and C2.

Figure 20. MX740c mini-mezzanine card



Open Manage Enterprise - Modular Edition
Introduction
The Dell PowerEdge MX9002m management module hosts the OpenManage Enterprise - Modular Edition (OME-M) console.
OME-M is the latest addition to the Dell OpenManage Enterprise suite of tools and provides a centralized management interface
for the PowerEdge MX platform. The OME-M console features include:
● Manage up to 20 chassis from a single web or REST API endpoint using multichassis management groups
● End-to-end life cycle management for servers, storage, and networking
● Monitoring and management of the entire PowerEdge MX platform
● Integration with OpenManage Mobile for configuration and troubleshooting, including wireless server vKVM
● Integration with OpenManage Enterprise for multi-datacenter management of PowerEdge systems

PowerEdge MX initial deployment


Initial PowerEdge MX deployment begins with assigning network settings for OME-M and completing the Chassis Deployment
Wizard.
There are three methods for initial configuration:
● Using the LCD touchscreen on the front-left of the MX7000 chassis (if installed)
● Setting the initial OME-M console IP address through the KVM ports on the front-right side of the MX7000 chassis
● Setting the initial OME-M console IP address through the serial port on the MX9002m module
The Deployment Wizard is displayed on first login to the console and enables configuration of the following:
● Time
● Alerting
● iDRAC9 quick deployment settings
● Network IOM access settings
● Firmware updates
● Network proxy settings
● MCM group definition
NOTE: For more information regarding the initial deployment of the MX7000, see the PowerEdge MX7000 -
Documentation site.



Chapter 2: PowerEdge MX Scalable Fabric Architecture
Scalable Fabric Architecture
Overview
A multichassis group enables multiple chassis to be managed as if they were a single chassis. A PowerEdge MX Scalable Fabric
enables multiple chassis to behave like a single chassis from a networking perspective.
A Scalable Fabric consists of two main components: the MX9116n FSE and the MX7116n FEM. A typical configuration includes
one MX9116n FSE and one MX7116n FEM in each of the first two chassis, and additional pairs of MX7116n FEMs in the remaining
chassis. Each MX7116n FEM connects to the MX9116n FSE corresponding to its fabric and slot. This hardware-enabled
architecture applies regardless of whether the switch is running in Full Switch or SmartFabric mode.
The following figure shows up to ten MX7000 chassis in a single Scalable Fabric. The first two chassis house MX9116n FSEs,
while chassis 3 through 10 only house MX7116n FEMs. All connections in the following figure use QSFP28-DD connections.
NOTE: For information on the Scalable Fabric Architecture with the 100 GbE solution with the MX8116n, see PowerEdge
MX 100 GbE solution with external Fabric Switching Engine.

NOTE: The following diagrams show the connections for a scalable fabric on multiple chassis between the FSE and
FEM components. The diagrams do not show the VLTi connections required for operating in SmartFabric mode or as
recommended when in Full Switch mode.

Figure 21. Scalable Fabric example using Fabric A



NOTE: To expand from single-chassis to dual-chassis configuration, see Expanding from a single-chassis to dual-chassis
configuration.
The following table shows the recommended IOM slot placement when creating a Scalable Fabric Architecture.

Table 2. Scalable Fabric Architecture maximum recommended design


MX7000 chassis Fabric slot IOM module
Chassis 1 A1 MX9116n FSE
A2 MX7116n FEM
Chassis 2 A1 MX7116n FEM
A2 MX9116n FSE
Chassis 3–10 A1 MX7116n FEM
A2 MX7116n FEM

To provide further redundancy and throughput to each compute sled, Fabric B can be used to create an additional Scalable
Fabric Architecture. Utilizing Fabric A and B can provide up to eight 25-Gbps connections to each MX740c or sixteen 25-Gbps
connections to each MX840c.

Figure 22. Two Scalable Fabrics spanning two MX7000 chassis

Restrictions and guidelines


The following restrictions and guidelines are in place when building a Scalable Fabric:
● All MX7000 chassis in the same Scalable Fabric must be in the same multichassis group.
● Mixing IOM types in the same Scalable Fabric (for example, MX9116n FSE in fabric slot A1 and MX5108n in fabric slot A2) is
not supported. See PowerEdge MX IOM slot support matrix for more information about IOM placement.
● All participating MX9116n FSEs and MX7116n FEMs must be in MX7000 chassis that are part of the same MCM group. For
more information, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation
table.
● When using both Fabric A and B for a Scalable Fabric, the following restrictions apply:
○ IOM placement for each fabric must be the same in each chassis. For instance, if an MX9116n FSE is in chassis 1 fabric
slot A1, then the second MX9116n FSE should be in chassis 1 fabric slot B1.
○ Chassis 3 through 10, which contain only MX7116n FEMs, must connect to the MX9116n FSE that is in the same
group.
NOTE: For information about the recommended MX9116n FSE port connectivity order, see the Additional Information
section.



Complex Scalable Fabric topologies
Beginning with OME-M 1.20.00 and SmartFabric OS10.5.0.7, additional Scalable Fabric topologies are supported in Full Switch
and SmartFabric modes. These topologies are more complex than the ones presented in previous sections. These designs enable
physical NIC redundancy using a pair of switches instead of two pairs, providing a significant cost reduction.
These complex topologies support connections between MX9116n FSEs in Fabric A and MX7116n FEMs in Fabric B across single
and multiple chassis, up to a total of five chassis. When connecting the FSE and FEMs, ensure that the slot numbers match.
For example, an MX9116n FSE in slot A1 can be connected to an MX7116n FEM in slot B1 (same chassis), or slot
A1 (second chassis), or slot B1 (second chassis), and so on.
NOTE: Cabling multiple chassis together with these topologies can become very complex. Care must be taken to correctly
connect each component.

The complex scalable fabric topologies in this section apply to dual-port Ethernet NICs.
These complex topologies are described as follows.
NOTE: The following diagrams show the connections for a scalable fabric on multiple chassis between the FSE and
FEM components. The diagrams do not show the VLTi connections required for operating in SmartFabric mode or as
recommended when in Full Switch mode.
Single chassis:
● MX9116n FSE in slot A1 is connected to MX7116n FEM in slot B1.
● MX9116n FSE in slot A2 is connected to MX7116n FEM in slot B2.

Figure 23. Single chassis topology

Dual chassis:
● MX9116n FSE in Chassis 1 slot A1 is connected to MX7116n FEMs in Chassis 1 slot B1, Chassis 2 slot A1, Chassis 2 slot B1.
● MX9116n FSE in Chassis 2 slot A2 is connected to MX7116n FEMs in Chassis 1 slot A2, Chassis 1 slot B2, Chassis 2 slot B2.



Figure 24. Dual chassis topology

Multiple chassis:
The topology with multiple chassis is similar to the dual-chassis topology. Make sure to connect the FSE and FEM using the
same numeric slot number. For example, connecting the FSE in Chassis 1 slot A1 to a FEM in Chassis 2 slot B2 is not supported.

Figure 25. Multiple chassis topology

Quad-port Ethernet NICs


PowerEdge MX 1.20.10 adds support for the Broadcom 57504 quad-port Ethernet adapter. For chassis with MX7116n FEMs, the
first QSFP28-DD port is used when attaching dual-port NICs. The first and second QSFP28-DD ports of the MX7116n FEM are
used when attaching quad-port NICs. When both QSFP28-DD ports are connected, a server with a dual-port NIC will only use
the first port on each FEM. With quad-port NICs, both ports are used.



NOTE: The MX5108n Ethernet switch does not support quad-port adapters.

NOTE: The Broadcom 57504 quad-port Ethernet adapter is not a converged network adapter and does not support FCoE
or iSCSI offload.
The MX9116n FSE has sixteen 25 GbE server-facing ports, ethernet1/1/1 through ethernet1/1/16, which are used when the
PowerEdge MX server sleds are in the same chassis as the MX9116n FSE.
With only dual-port NICs in all server sleds, only the odd-numbered server-facing ports are active. If the server has a quad-port
NIC, but the MX7116n FEM has only one port connected to the MX9116n FSE, only half of the NIC ports will be connected and
show a link up.



The following table shows the MX server sled to MX9116n FSE interface mapping for dual-port NIC servers which are directly
connected to the switch.

Table 3. Interface mapping for dual-port NIC servers


Sled number MX9116n FSE server interface
Sled 1 ethernet 1/1/1
Sled 2 ethernet 1/1/3
Sled 3 ethernet 1/1/5
Sled 4 ethernet 1/1/7
Sled 5 ethernet 1/1/9
Sled 6 ethernet 1/1/11
Sled 7 ethernet 1/1/13
Sled 8 ethernet 1/1/15

With quad-port NICs in all server sleds, both the odd- and even-numbered server-facing ports will be active. The following table
shows the MX server sled to MX9116n FSE interface mapping for quad-port NIC servers which are directly connected to the
switch.

Table 4. Interface mapping for quad-port NIC servers


Sled number MX9116n FSE server interface
Sled 1 ethernet 1/1/1, ethernet 1/1/2
Sled 2 ethernet 1/1/3, ethernet 1/1/4
Sled 3 ethernet 1/1/5, ethernet 1/1/6
Sled 4 ethernet 1/1/7, ethernet 1/1/8
Sled 5 ethernet 1/1/9, ethernet 1/1/10
Sled 6 ethernet 1/1/11, ethernet 1/1/12
Sled 7 ethernet 1/1/13, ethernet 1/1/14
Sled 8 ethernet 1/1/15, ethernet 1/1/16

When using multiple chassis and MX7116n FEMs, virtual slots are used to maintain a continuous mapping between the NIC and
physical port. For more information on virtual slots, see Virtual ports and slots.
In a multiple chassis Scalable Fabric, the interface numbers for the first two chassis are mixed, as one NIC connection is to the MX9116n
in the same chassis as the server, and the other NIC connection is to the MX7116n. In this example, the following table shows
the server interface mapping for Chassis 1 using quad-port adapters.

Table 5. Interface mapping for multiple chassis


Chassis 1 sled number Chassis 1 MX9116n server interface Chassis 2 MX9116n server interface
Sled 1 ethernet 1/1/1, ethernet 1/1/2 ethernet 1/71/1, ethernet 1/71/9
Sled 2 ethernet 1/1/3, ethernet 1/1/4 ethernet 1/71/2, ethernet 1/71/10
Sled 3 ethernet 1/1/5, ethernet 1/1/6 ethernet 1/71/3, ethernet 1/71/11
Sled 4 ethernet 1/1/7, ethernet 1/1/8 ethernet 1/71/4, ethernet 1/71/12
Sled 5 ethernet 1/1/9, ethernet 1/1/10 ethernet 1/71/5, ethernet 1/71/13
Sled 6 ethernet 1/1/11, ethernet 1/1/12 ethernet 1/71/6, ethernet 1/71/14
Sled 7 ethernet 1/1/13, ethernet 1/1/14 ethernet 1/71/7, ethernet 1/71/15
Sled 8 ethernet 1/1/15, ethernet 1/1/16 ethernet 1/71/8, ethernet 1/71/16

Quad-port NIC restrictions and guidelines



● If the server has a quad-port NIC, but the MX7116n FEM has only one port connected to the MX9116n FSE, only half of the
NIC ports will be connected and show a link up.
● Both ports on the MX7116n FEM must be connected to the same MX9116n FSE.
NOTE: Do not connect one MX7116n FEM port to one MX9116n FSE and the other MX7116n FEM port to another
MX9116n FSE. This is not supported. The Unsupported configuration for quad-port NICs figure shows the unsupported
configuration.
● If a Scalable Fabric has some chassis with quad-port NICs and some with only dual-port NICs, only the chassis with
quad-port NICs require the second MX7116n FEM port to be connected, as shown in the Multiple chassis topology with
quad-port and dual-port NICs – single fabric figure.
● It is supported to have a dual-port NIC in Fabric A and a quad-port NIC in Fabric B (or the inverse), or have a quad-port NIC
in both Fabric A and Fabric B.
● Up to five chassis with quad-port NICs are supported in a single Scalable Fabric.
The following set of figures show the basic supported topologies when using quad-port Ethernet adapters.
NOTE: The following diagrams show the connections for a scalable fabric on multiple chassis between the FSE and
FEM components. The diagrams do not show the VLTi connections required for operating in SmartFabric mode or as
recommended when in Full Switch mode.
The following figure shows a single-chassis topology with quad-port NICs. Make sure to connect both ports on the MX7116n
FEM to the same MX9116n FSE.

Figure 26. Single-chassis topology with quad-port NICs - dual fabric

The following figure shows the two-chassis topology with quad-port NICs in each chassis. Only a single fabric is configured.
Make sure to connect both ports on the MX7116n FEM to the same MX9116n FSE.

Figure 27. Two-chassis topology with quad-port NICs – single fabric

The following figure shows the two-chassis topology with quad-port NICs. Dual fabrics are configured.



Figure 28. Two-chassis topology with quad-port NICs – dual fabric

The following figure shows the multiple chassis topology with quad-port NICs. Only a single fabric is configured.

Figure 29. Multiple chassis topology with quad-port NICs – single fabric

The following figure shows the multiple chassis topology with quad-port NICs in two chassis and dual-port NICs in one chassis.
Only a single fabric is configured. Make sure to connect both ports on the MX7116n FEM to the same MX9116n FSE with the
quad-port card. Do not connect the second port on the MX7116n FEM when configured with a dual-port NIC.



Figure 30. Multiple chassis topology with quad-port and dual-port NICs – single fabric

The following figure shows one example of an unsupported topology. The ports on the MX7116n FEMs must never be connected
to different MX9116n FSEs.

Figure 31. Unsupported configuration for quad-port NICs



Interfaces and port groups
On the MX9116n FSE and MX5108n, server-facing interfaces are internal and are enabled by default. To view the backplane port
connections to servers, use the show inventory media command.
In the output, a server-facing interface displays INTERNAL as its media. A FIXED port does not use external transceivers and
always displays as Dell EMC Qualified true.

OS10# show inventory media


--------------------------------------------------------------------------------
System Inventory Media
--------------------------------------------------------------------------------
Node/Slot/Port Category Media Serial-Number Dell EMC-Qualified
--------------------------------------------------------------------------------
1/1/1 FIXED INTERNAL true
1/1/2 FIXED INTERNAL true
1/1/3 FIXED INTERNAL true
1/1/4 FIXED INTERNAL true
1/1/5 FIXED INTERNAL true
1/1/6 FIXED INTERNAL true
1/1/7 FIXED INTERNAL true
1/1/8 FIXED INTERNAL true
1/1/9 FIXED INTERNAL true
1/1/10 FIXED INTERNAL true
1/1/11 FIXED INTERNAL true
1/1/12 FIXED INTERNAL true
1/1/13 FIXED INTERNAL true
1/1/14 FIXED INTERNAL true
1/1/15 FIXED INTERNAL true
1/1/16 FIXED INTERNAL true
1/1/17 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489D0007 true
1/1/18 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489D0007 true
1/1/19 Not Present
1/1/20 Not Present
1/1/21 Not Present
--------------------- Output Truncated ----------------------------------------
1/1/37 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489J0021 true
1/1/38 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489J0021 true
1/1/39 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489J0024 true
1/1/40 QSFP28-DD QSFP28-DD 200GBASE 2SR4 AOC TW04829489J0024 true
1/1/41 QSFP28 QSFP28 100GBASE CR4 2M CN0APX0084G1F05 true
1/1/42 QSFP28 QSFP28 100GBASE CR4 2M CN0APX0084G1F49 true
--------------------- Output Truncated ----------------------------------------

To view the server-facing interface port status, use the show interface status command. Server-facing ports are
numbered 1/1/1 to 1/1/16.
For the MX9116n FSE, servers that have a dual-port NIC connect only to odd-numbered internal Ethernet interfaces; for
example, a MX740c in slot one would be 1/1/1, and a MX840c in slots five and six occupies 1/1/9 and 1/1/11.

NOTE: Even-numbered Ethernet ports between 1/1/1–1/1/16 are reserved for quad-port NICs.

A port group is a logical port that consists of one or more physical ports and provides a single interface. Only the MX9116n FSE
supports the following port groups:
● QSFP28-DD – Port groups 1 through 12
● QSFP28 – Port groups 13 and 14
● QSFP28 Unified – Port groups 15 and 16
The following figure shows these port groups along the top, and the bottom shows the physical ports in each port group. For
instance, QSFP28-DD port group 1 has member ports 1/1/17 and 1/1/18, and unified port group 15 has a single member, port
1/1/43.



Figure 32. MX9116n FSE port groups
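
In Full Switch mode, the current breakout mode and member ports of each port group can be verified with the show
port-group command. The output below is abbreviated and illustrative; exact columns vary by OS10 release:

OS10# show port-group
Port-group           Mode               Ports
port-group1/1/1      fabric-expander    17 18
port-group1/1/13     Eth 100g-1x        41
port-group1/1/15     Eth 100g-1x        43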

QSFP28-DD port groups


On the MX9116n FSE, QSFP28-DD port groups are 1 through 12, which contain ports 1/1/17 through 1/1/40 and are used to:
● Connect to a MX7116n FEM to extend the Scalable Fabric
● Connect to an Ethernet rack server or storage device
● Connect to another networking device, typically an Ethernet switch
By default, QSFP28-DD port groups 1 through 9 are in fabric-expander-mode and QSFP28-DD port groups 10 through 12 are in
2x 100 GbE breakout mode. Fabric Expander mode is an 8x 25 GbE interface that is used only to connect to MX7116n FEMs in
additional chassis. The interfaces from the MX7116n FEM appear as standard Ethernet interfaces from the perspective of the
MX9116n FSE.
The following figure illustrates how the QSFP28-DD cable provides 8x 25 GbE lanes between the MX9116n FSE and a MX7116n
FEM.

Figure 33. QSFP28-DD connection between MX9116n FSE and MX7116n FEM

NOTE: Compute sleds with dual-port NICs require only MX7116n FEM port 1 to be connected.

In addition to fabric-expander-mode, QSFP28-DD port groups support the following Ethernet breakout configurations:
● Using QSFP28-DD optics/cables:
○ 2x 100 GbE – Breakout a QSFP28-DD port into two 100-GbE interfaces
○ 2x 40 GbE – Breakout a QSFP28-DD port into two 40-GbE interfaces
○ 8x 25 GbE – Breakout a QSFP28-DD port into eight 25-GbE interfaces



○ 8x 10 GbE – Breakout a QSFP28-DD port into eight 10-GbE interfaces
● Using QSFP28 optics/cables:
○ 1x 100 GbE – Breakout a QSFP28-DD port into one 100-GbE interface
○ 4x 25 GbE – Breakout a QSFP28-DD port into four 25-GbE interfaces
● Using QSFP+ optics/cables:
○ 1x 40 GbE – Breakout a QSFP28-DD port into one 40-GbE interface
○ 4x 10 GbE – Breakout a QSFP28-DD port into four 10-GbE interfaces
NOTE: Before changing the port breakout configuration from one setting to another, the port must first be set back to the
hardware default setting.

NOTE: QSFP28-DD ports are backwards compatible with QSFP28 and QSFP+ optics and cables.
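
For example, the following Full Switch mode session changes QSFP28-DD port group 12 from its default 2x 100 GbE
breakout to 8x 25 GbE. This is a sketch assuming the standard OS10 port-group breakout syntax; per the note above,
return the port group to its hardware default before applying a new breakout:

OS10# configure terminal
OS10(config)# port-group 1/1/12
OS10(conf-pg-1/1/12)# mode Eth 25g-8x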

Single-density QSFP28 port groups


On the MX9116n FSE, single-density QSFP28 port groups are 13 and 14, contain ports 1/1/41 and 1/1/42 respectively, and are
used to connect to upstream networking devices. By default, both port groups are set to 1x 100 GbE. Port groups 13 and 14
support the following Ethernet breakout configurations:
● 4x 10 GbE – Breakout a QSFP28 port into four 10-GbE interfaces
● 1x 40 GbE – Set a QSFP28 port to 40 GbE mode
● 4x 25 GbE – Breakout a QSFP28 port into four 25-GbE interfaces
● 2x 50 GbE – Breakout a QSFP28 port into two 50-GbE interfaces
● 1x 100 GbE – Reset the port back to the default, 100-GbE mode

Unified port groups


Unified port groups operate as either Ethernet or FC. By default, both unified port groups, 15 and 16, are set to 1x 100 GbE. To
activate the two port groups as FC interfaces in Full Switch mode, use the command mode fc. Both port groups are enabled
as Ethernet or FC together. You cannot have port group 15 as Ethernet and port group 16 as Fibre Channel.
The MX9116n FSE unified port groups support the following Ethernet breakout configurations:
● 4x 10 GbE – Breakout a QSFP28 port into four 10-GbE interfaces
● 1x 40 GbE – Set a QSFP28 port to 40 GbE mode
● 4x 25 GbE – Breakout a QSFP28 port into four 25-GbE interfaces
● 2x 50 GbE – Breakout a QSFP28 port into two 50-GbE interfaces
● 1x 100 GbE – Reset the unified port back to the default, 100-GbE mode
The MX9116n FSE unified port groups support the following FC breakout configurations:
● 4x 8 Gb – Breakout a unified port group into four 8-Gb FC interfaces
● 2x 16 Gb – Breakout a unified port group into two 16-Gb FC interfaces
● 4x 16 Gb – Breakout a unified port group into four 16-Gb FC interfaces
● 1x 32 Gb – Breakout a unified port group into one 32-Gb FC interface
● 2x 32 Gb – Breakout a unified port group into two 32-Gb FC interfaces
● 4x 32 Gb – Breakout a unified port group into four 32-Gb FC interfaces, rate limited
NOTE: After enabling FC on the unified ports, these ports will be set administratively down and must be enabled in order to
be used.
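
For example, the following Full Switch mode session sets the unified port groups to Fibre Channel and brings up one of
the resulting FC interfaces. This is a sketch assuming standard OS10 syntax; the 2x 32 Gb breakout and the interface
number (port group 15 maps to port 1/1/43) are examples:

OS10# configure terminal
OS10(config)# port-group 1/1/15
OS10(conf-pg-1/1/15)# mode FC 32g-2x
OS10(conf-pg-1/1/15)# exit
OS10(config)# interface fibrechannel 1/1/43:1
OS10(conf-if-fc1/1/43:1)# no shutdown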

Rate limited 32 Gb Fibre Channel


When using 32-Gb FC, the actual data rate is 28 Gbps due to 64b/66b encoding. The following figure shows unified port group
15. The port group is set to 4x 32 Gb FC mode. However, each of the four lanes is 25 Gbps, not 28 Gbps. When these lanes are
mapped from the Network Processing Unit (NPU) to the FC ASIC for conversion to FC signaling, the four 32 Gb FC interfaces
are mapped to four 25 Gbps lanes. With each lane operating at 25 Gbps, not 28 Gbps, the result is rate limited to 25 Gbps.


Figure 34. 4x 32 Gb FC breakout mode, rate limit of 25 Gbps

While each 32 Gb FC connection is providing 25 Gbps, the overall FC bandwidth available is 100 Gbps per unified port group,
or 200 Gbps for both ports. However, if an application requires the maximum 28 Gbps throughput per port, use the 2x 32 Gb
breakout mode. This mode configures the connections between the NPU and the FC ASIC, as shown in the following figure.


Figure 35. 2x 32 Gb FC breakout mode

In 2x 32 Gb FC breakout mode, the MX9116n FSE binds two 50 Gbps links together to provide a total of 100 Gbps bandwidth
per lane to the FC ASIC. This results in the two FC ports operating at 28 Gbps. The overall FC bandwidth available is 56 Gbps
per unified port, or 112 Gbps for both (compared to the 200 Gbps using 4x 32-Gb FC).
NOTE: Rate limited ports are not oversubscribed ports. There is no FC frame drop on these ports, and buffer-to-buffer
credit exchanges ensure flow consistency.

Virtual ports and slots


A virtual port is a logical interface that connects to a downstream server and has no physical location on the switch. Virtual
ports are created when a MX9116n FSE onboards (discovers and configures) a MX7116n FEM.



If a MX7116n is moved and cabled to a different QSFP28-DD port on the MX9116n, all software configurations on the virtual
ports are maintained. Only the QSFP28-DD breakout interfaces mapped to the virtual ports change.
A virtual slot contains all provisioned virtual ports across one or both FEM connections. On the MX9116n FSE, virtual slots 71
through 82 are pre-provisioned, and each virtual slot has eight virtual ports. For example, virtual slot 71 contains virtual ports
ethernet 1/71/1 through 1/71/8. When a quad-port adapter is used, that virtual slot will expand to 16 virtual ports, for example
ethernet 1/71/1 through 1/71/16.
If the MX9116n FSE is in SmartFabric mode, the MX7116n FEM is automatically configured with a virtual slot ID and virtual ports
that are mapped to the physical interfaces. The following table shows how the physical ports are mapped to the virtual slot and
ports.
If the MX9116n FSE is in Full Switch mode, it automatically discovers the MX7116n FEM when the following conditions are met:
● The MX7116n FEM is connected to the MX9116n FSE by attaching a Dell qualified cable between the QSFP28-DD ports on
both devices.
● The interface for the QSFP28-DD port group connected to the MX9116n FSE is in 8x 25 GbE FEM mode.
● At least one blade server is inserted into the MX7000 chassis containing the MX7116n FEM.
The FEM is automatically discovered and provisioned into a virtual slot when operating in SmartFabric mode. In Full Switch
mode, this mapping is done with the unit-provision command; see the show unit-provision command for more information.
To verify that a MX7116n FEM is communicating with the MX9116n FSE, enter the show discovered-expanders
command.

MX9116n-FSE # show discovered-expanders


Service Model Type Chassis Chassis-slot Port-group Virtual
tag service-tag Slot-Id
--------------------------------------------------------------------------
D10DXC2 MX7116n FEM 1 SKY002Z A1 1/1/1 71

Table 6. Virtual Port mapping example 1


MX7116n service tag   MX9116n QSFP28-DD port group   MX9116n physical interface   MX7116n virtual slot (ID)   MX7116n virtual ports
12AB3456              portgroup1/1/1                 1/1/17:1                     71                          1/71/1
                                                     1/1/17:2                                                 1/71/2
                                                     1/1/17:3                                                 1/71/3
                                                     1/1/17:4                                                 1/71/4
                                                     1/1/18:1                                                 1/71/5
                                                     1/1/18:2                                                 1/71/6
                                                     1/1/18:3                                                 1/71/7
                                                     1/1/18:4                                                 1/71/8

Use the same command to show the list of MX7116n FEMs in a quad-port NIC scenario, in which each MX7116n FEM
creates two connections to the MX9116n FSE. In this dual-chassis scenario, the MX7116n FEMs are connected to the MX9116n
FSE on port group 1 and port group 7, as shown below. For example, if the quad-port NIC is configured on compute sled 1, then
virtual ports ethernet 1/71/1 and 1/71/9 will be up.

MX9116N-1# show discovered-expanders


Service Model Type Chassis Chassis-slot Port-group Virtual
tag service-tag Slot-Id
--------------------------------------------------------------------------
D10DXC2 MX7116n FEM 1 SKY002Z A1 1/1/1 71
D10DXC2 MX7116n FEM 1 SKY002Z A1 1/1/7 71
D10DXC4 MX7116n FEM 1 SKY003Z A1 1/1/2 72



Table 7. Virtual Port mapping example 2
MX7116n service tag   MX9116n QSFP28-DD port group   MX9116n physical interface   MX7116n virtual slot (ID)   MX7116n virtual ports
12AB3456              portgroup1/1/1                 1/1/17:1                     71                          1/71/1
                                                     1/1/17:2                                                 1/71/2
                                                     1/1/17:3                                                 1/71/3
                                                     1/1/17:4                                                 1/71/4
                                                     1/1/18:1                                                 1/71/5
                                                     1/1/18:2                                                 1/71/6
                                                     1/1/18:3                                                 1/71/7
                                                     1/1/18:4                                                 1/71/8
                      portgroup1/1/7                 1/1/29:1                                                 1/71/9
                                                     1/1/29:2                                                 1/71/10
                                                     1/1/29:3                                                 1/71/11
                                                     1/1/29:4                                                 1/71/12
                                                     1/1/30:1                                                 1/71/13
                                                     1/1/30:2                                                 1/71/14
                                                     1/1/30:3                                                 1/71/15
                                                     1/1/30:4                                                 1/71/16

The MX9116n physical interfaces mapped to the MX7116n virtual ports display dormant (instead of up) in the show
interface status output until a virtual port starts to transmit server traffic.

MX9116n-FSE # show interface status


Port Description Status Speed Duplex Mode Vlan
Eth 1/1/17:1 dormant
Eth 1/1/17:2 dormant
<output truncated>

Recommended port order for MX7116n FEM connectivity

While any QSFP28-DD port can be used for any purpose, the following table and figure outline the recommended, but not
required, port order for connecting chassis with MX7116n FEM modules to the MX9116n FSE to optimize NPU utilization.
NOTE: If you are using the connection order shown in the following table, you must change the port group 10 breakout
type to FabricExpander (port groups 1 through 9 default to fabric-expander mode, while port group 10 defaults to
2x 100 GbE).

Table 8. Recommended PowerEdge MX7000 chassis connection order


Chassis MX9116n FSE port group Physical port numbers
1/2 Port group 1 17 and 18
3 Port group 7 29 and 30
4 Port group 2 19 and 20
5 Port group 8 31 and 32
6 Port group 3 21 and 22
7 Port group 9 33 and 34
8 Port group 4 23 and 24
9 Port group 10 35 and 36
10 Port group 5 25 and 26

Figure 36. Recommended MX7000 chassis connection order
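
If this connection order is used in Full Switch mode, the following sketch changes port group 10 from its default
2x 100 GbE breakout to fabric-expander mode (assuming the OS10 mode FabricExpander keyword; in SmartFabric mode,
make the equivalent change in the OME-M UI):

OS10# configure terminal
OS10(config)# port-group 1/1/10
OS10(conf-pg-1/1/10)# mode FabricExpander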

Embedded top-of-rack switching


Most environments with blade servers also have rack servers. The following figure shows a typical design having rack servers
connecting to their respective top-of-rack (ToR) switches and blade chassis connecting to a different set of ToR switches. If
the storage array is Ethernet-based, it is typically connected to the core/spine. This design is inefficient and expensive.

Figure 37. Traditional mixed blade/rack networking

Communication between rack and blade servers must traverse the core, increasing latency, and the storage array consumes
expensive core switch ports. All of this results in increased operations cost from the increased number of managed switches.
Embedded ToR functionality is built into the MX9116n FSE. Configure any QSFP28-DD port to break out into 8x 10 GbE or
8x 25 GbE and connect the appropriate cables and optics. This enables all servers and storage to connect directly to the
MX9116n FSE, and communication between all devices is kept within the switch. This provides a single point of management
and network security while reducing cost and improving performance and latency.
The preceding figure shows eight switches in total. In the following figure, using embedded ToR, switch count is reduced to the
two MX9116n FSEs in the two chassis:

Figure 38. MX9116n FSE embedded ToR

MX Chassis management wiring


You can use the automatic uplink detection and network loop prevention features in OME-Modular to connect multiple chassis
with cables. This cabling or wiring method is called stacking. Stacking reduces port usage on the data center management
switches while preserving management access to each chassis in the network.
While wiring a chassis, connect one network cable from each management module to the out-of-band (OOB) management
switch of the data center. Ensure that both ports on the OOB management switch are enabled and are in the same network and
VLAN.
The following image is a representation of the individual chassis wiring:



Figure 39. Individual chassis management wiring

The following image is a representation of the two-chassis wiring:

Figure 40. Two-chassis management wiring

The following image is a representation of the multi-chassis wiring:



Figure 41. Multi-chassis management wiring



Chapter 3: Dell SmartFabric OS10
The networking market is transitioning from a closed, proprietary stack to open hardware supporting various operating systems.
Dell SmartFabric OS10 is designed to allow multilayered disaggregation of network functionality. While OS10 contributions to
open source give users the freedom and flexibility to pick their own third-party networking, monitoring, management, and
orchestration applications, SmartFabric OS10 bundles an industry-hardened networking stack featuring standard Layer 2 and
Layer 3 protocols over a well-accepted CLI.

Figure 42. Dell SmartFabric OS10 high-level architecture

Operating modes
The Dell Networking MX9116n Fabric Switching Engine (FSE) and MX5108n Ethernet Switch operate in one of two modes:
● Full Switch mode (Default) – All switch-specific SmartFabric OS10 capabilities are available and managed through the CLI.
● SmartFabric mode – Switches operate as a Layer 2 I/O aggregation fabric and are managed through the OpenManage
Enterprise - Modular (OME-M) console.

Full Switch mode


In Full Switch mode, all SmartFabric OS10 features and functions that are supported by the hardware are available to the user.
In other words, the switch operates the same way as any other SmartFabric OS10 switch. Configuration is primarily done using
the CLI; however, the following items can be configured or managed using the OME-M UI:
● Initial switch deployment: Configure hostname, password, SNMP, NTP, and so on
● Monitor health, logs, alerts, and events
● Update the SmartFabric OS10 firmware
● View physical topology



● Switch power management
Full Switch mode is typically used when a desired feature or function is not available when operating in SmartFabric mode. For
more information about Dell SmartFabric OS10 operations, find the relevant version of the User Guide in the OME-M and OS10
compatibility and documentation table.

SmartFabric mode
A SmartFabric is a logical entity that consists of a collection of physical resources, such as servers and switches, and logical
resources such as networks, templates, and uplinks. The OpenManage Enterprise – Modular (OME-M) console provides a
method to manage these resources as a single unit.
For more information about SmartFabric mode, see Overview of SmartFabric Services for PowerEdge MX.

Changing operating modes


In both Full Switch and SmartFabric modes, only configuration changes you make using the OME-M UI are retained when you
switch modes. The graphical user interface is used for switch configuration in SmartFabric mode and the OS10 CLI is used for
switch configuration in Full Switch mode.
By default, a switch is in Full Switch mode. When that switch is added to a fabric, it automatically changes to SmartFabric mode.
When you change from Full Switch to SmartFabric mode, all Full Switch CLI configurations are deleted except for the subset of
CLI commands that are supported in SmartFabric mode.



Figure 43. Switch settings saved when switching between operating modes



To change a switch from SmartFabric to Full Switch mode, you must delete the fabric. At that time, only the configuration
changes such as the admin password, hostname, and management IP address are retained.
NOTE: There is no CLI command to switch between operating modes. Delete the fabric to change from SmartFabric to Full
Switch mode.
The CLI command show switch-operating-mode displays the currently configured operating mode of the switch. This
information is also available on the switch landing page in the OME-M UI.
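
For example (output illustrative):

OS10# show switch-operating-mode
Switch-Operating-Mode : Full Switch Mode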

VLAN restrictions
VLANs 4004 and 4020 are reserved for internal switch communication and cannot be assigned to any interface in Full Switch
or SmartFabric mode. VLAN 4020 is automatically created by the system as the Management VLAN. Do not remove this VLAN,
and do not remove the VLAN tag or edit the Management VLAN on the Edit Uplink page if it is running in SmartFabric mode.
The VLAN and subnet that are assigned to OME-M cannot be used in the data path or fabric of the MX-IOMs. Ensure the
management network used for OME-M does not conflict with networks configured on the fabric. All other VLANs are allowed on
the data plane and can be assigned to any interface.

LLDP for iDRAC


To understand the physical network topology, SmartFabric OS10 discovers end-host devices based on specific custom originator
TLVs in LLDP PDUs sent out through the connected ports by the iDRAC, regardless of whether the switches are in Full Switch
or SmartFabric mode. The types of information provided are shown in the following table.
For servers connected to switches in SmartFabric mode, the iDRAC LLDP topology feature must be enabled. Without it, the
fabric does not recognize the compute sled and the user cannot deploy networks to the sled.
NOTE: Topology LLDP is enabled by default for PowerEdge MX servers and disabled for all other Dell servers. To enable
or disable the feature, open the iDRAC console and navigate to iDRAC Settings > Connectivity > Network > Common
Settings > Topology LLDP.

Table 9. iDRAC LLDP TLVs and subtypes


TLV Subtype Description
Originator 1 Indicates the iDRAC string that is used as originator. This string enables external
switches to identify iDRAC LLDP PDUs.
Port type 2 The following are the applicable port types:
● iDRAC port (dedicated)
● iDRAC and NIC port (shared)
Port FQDD 3 Port number that uniquely identifies a NIC port within a server.
Server service tag 4 Service tag ID of the server.
Server model name 5 Model name of the server.
Server slot number 6 Slot number of the server. For example: 1, 2, 3, 1a, and 1b.
Chassis service tag 7 Service tag ID of the chassis (applicable only to MX servers).
Chassis model 8 Model name of the chassis (applicable only to MX servers).
IOM service tag 9 Service tag ID of the IOM device (applicable only to MX servers).
IOM model name 10 Model name of the IOM device (applicable only to MX servers).
IOM slot label 11 Slot label of the IOM device. For example: A1, B1, A2, and B2 (applicable only to MX
servers).
IOM port number 12 Port number of the NIC. For example: 1, 2, 3, and so on.



For additional information about LLDP and TLVs, see the Link Layer Discovery Protocol section of the Dell SmartFabric OS10
User Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
See the Common CLI troubleshooting commands for Full Switch and SmartFabric modes section for examples of the show
lldp neighbors command, which provides information about connected devices.

Virtual Link Trunking


Virtual Link Trunking (VLT) aggregates two identical physical switches to form a single logical extended switch. However, each
of the VLT peers has its own control and data planes and can be configured individually for port, protocol, and management
behaviors. Though the dual physical units act as a single logical unit, the control and data plane of both switches remain isolated,
ensuring high availability and high resilience for all its connected devices. This differs from the legacy stacking concept, where
there is a single control plane across all switches in the stack, creating a single point of failure.
With the critical need for high availability in modern data centers and enterprise networks, VLT plays a vital role, providing
rapid convergence, seamless traffic flow, efficient load balancing, and loop-free operation.
With instantaneous synchronization of MAC and ARP entries, both nodes remain active/active and continue to forward
data traffic seamlessly.
VLT is required when operating in SmartFabric mode.
For more information about VLT, see the Virtual Link Trunking chapter in the Dell SmartFabric OS10 User Guide. Find the
relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
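
As a minimal Full Switch mode sketch, the following configures a VLT domain between two MX9116n FSEs. It assumes
QSFP28-DD ports 1/1/37-1/1/40 are cabled as the VLTi and already configured as no switchport; the domain ID and backup
destination address are examples. See the User Guide referenced above for the complete procedure:

OS10# configure terminal
OS10(config)# vlt-domain 1
OS10(conf-vlt-1)# discovery-interface ethernet1/1/37-1/1/40
OS10(conf-vlt-1)# backup destination 100.67.100.11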

Storage networking
PowerEdge MX Ethernet I/O modules support Fibre Channel (FC) connectivity in different ways:
● Direct Attach, also called F_Port
● NPIV Proxy Gateway (NPG)
● FIP Snooping Bridge (FSB)
● Internet Small Computer Systems Interface, or iSCSI
The method to implement depends on the existing infrastructure and application requirements. Consult your Dell representative
for more information.
Configuring FC connectivity in SmartFabric mode is simple and is almost identical across the three connectivity types.
NOTE: The PowerEdge MX Platform supports all Dell PowerStore storage appliance models. This document provides
example deployments that include the PowerStore 1000T appliance. For specific details on PowerStore appliance models,
see the Dell PowerStore T page.

NPIV Proxy Gateway


The most common connectivity method, NPIV Proxy Gateway mode (NPG) is used when connecting PowerEdge MX to a
storage area network that hosts a storage array. NPG mode is simple to implement as there is little configuration that must be
done. The NPG switch converts FCoE from the server to native FC and aggregates the traffic into an uplink. The NPG switch is
effectively transparent to the FC SAN, which “sees” the hosts themselves. This mode is supported only on the MX9116n FSE.
OS10 supports configuring N_Port mode on an Ethernet port that connects to converged network adapters (CNAs). A node
port (N_Port) is a port on a network node that acts as a host or initiator device and is used in FC point-to-point or FC
switched-fabric topologies. N_Port ID Virtualization (NPIV) allows multiple N_Port IDs to share a single physical N_Port.
In the deployment example shown below, MX9116n IOMs are configured as NPGs connected with pre-configured FC switches
using port 1/1/44 on each MX9116n to allow connectivity to a Dell PowerStore 1000T storage array. Port-group 1/1/16 is
configured as 4x 16 GFC to convert physical port 1/1/44 into 4x 16 GFC connections. MX9116n FSE unified ports 1/1/44:1 and
1/1/44:2 are used for FC connections and operate in N_Port mode to connect to the FC switches. The FC Gateway uplink type enables
N_Port functionality on the MX9116n unified ports, converting FCoE traffic to native FC traffic and passing that traffic to a
storage array through FC switches.
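
A condensed Full Switch mode sketch of the switch-side NPG configuration follows; it assumes OS10's feature fc npg
global mode and the 4x 16 GFC breakout described above (vfabric and FCoE VLAN details are omitted):

OS10# configure terminal
OS10(config)# feature fc npg
OS10(config)# port-group 1/1/16
OS10(conf-pg-1/1/16)# mode FC 16g-4x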


Figure 44. Fibre channel NPG network to Dell PowerStore 1000T SAN

NOTE: For more information about configuration and deployment, see Scenario 5: Connect MX9116n FSE to Fibre Channel
storage - NPIV Proxy Gateway mode.

Direct attached (F_Port)


Direct Attached mode, or F_Port, is used when FC storage needs to be directly connected to the MX9116n FSE. The MX9116n
supports the required FC services such as name server and zoning that are typical of standard FC switches.
This example demonstrates the direct attachment of the Dell PowerStore 1000T storage array. MX9116n FSE unified ports
1/1/44:1 and 1/1/44:2 are used for FC connections and operate in F_Port mode, which allows an FC storage array to be
connected directly to the MX9116n FSE. The uplink type enables F_Port functionality on the MX9116n unified ports, converting
FCoE traffic to native FC traffic and passing that traffic to a directly attached FC storage array.
This mode is supported only on the MX9116n FSE.
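
A corresponding Full Switch mode sketch for direct attach enables fabric services instead, assuming OS10's feature fc
domain-id command (the domain ID and breakout are examples; zoning configuration is omitted):

OS10# configure terminal
OS10(config)# feature fc domain-id 100
OS10(config)# port-group 1/1/16
OS10(conf-pg-1/1/16)# mode FC 32g-2x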


Figure 45. Fibre Channel (F_Port) direct attach network to Dell PowerStore 1000T SAN

NOTE: For more information on configuration and deployment, see Scenario 6: Connect MX9116n FSE to Fibre Channel
storage - FC Direct Attach.

FCoE Transit or FIP Snooping Bridge


The FCoE Transit, or FIP Snooping Bridge (FSB) mode is used when connecting the Dell PowerEdge MX to an upstream switch,
such as the Dell PowerSwitch S4148U that accepts FCoE and converts it to native FC. This mode is typically used when
an existing FCoE infrastructure is in place that PowerEdge MX must connect to. In the following example, the PowerSwitch
S4148U-ON receives FCoE traffic from the MX5108n Ethernet switch, converts that FCoE traffic to native FC, and passes that
traffic to an external FC switch.
When operating in FSB mode, the switch snoops Fibre Channel over Ethernet (FCoE) Initialization Protocol (FIP) packets on
FCoE-enabled VLANs, and discovers the following information:



● End nodes (ENodes)
● Fibre channel forwarders (FCFs)
● Connections between ENodes and FCFs
● Sessions between ENodes and FCFs
Using the discovered information, the switch installs ACL entries that provide security and point-to-point link emulation. This
mode is supported on both the MX9116n FSE and the MX5108n Ethernet Switch.

Figure 46. FCoE (FSB) network to Dell PowerStore 1000T SAN through S4148U-ON NPG switch

NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface from the MX IOM to the existing FCoE switch, such as the Dell PowerSwitch S4148U shown in the diagram above.

NOTE: For more information about configuration and deployment, see Scenario 7: Connect MX5108n to Fibre Channel
storage - FSB.

NOTE: Ensure that the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments with MX IOM uplinks connected to a switch running Dell SmartFabric OS10 with Rapid-PVST, the bridge priority can be configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with OS10 switches, see the Dell SmartFabric OS10 User Guide.
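As an illustration, the following sketch, run on the upstream OS10 switch, assigns the lowest bridge priority to that switch for a given VLAN; the VLAN ID (10) is illustrative:

OS10# configure terminal
! Priority 0 is the lowest value, forcing this switch to become the STP root bridge for VLAN 10
OS10(config)# spanning-tree vlan 10 priority 0
OS10(config)# end
OS10# show spanning-tree brief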

iSCSI
iSCSI is a transport layer protocol that embeds SCSI commands inside TCP/IP packets. TCP/IP transports the SCSI commands from the host (initiator) to the storage array (target). iSCSI traffic can run on a shared or dedicated network, depending on application performance requirements.
In the example below, MX9116n FSEs are connected to Dell PowerStore 1000T storage array controllers SP A and SP B through ports 1/1/41:1-2. If there are multiple paths from host to target, iSCSI can use multiple sessions, one for each path. Each path from the initiator to the target has its own session and connection. This connectivity method is often referred to as “port binding”. Dell Technologies recommends using the port binding method for connecting the MX environment to the PowerStore 1000T storage array. Configure multiple iSCSI targets on the PowerStore 1000T and establish connectivity from the host initiators (MX compute sleds) to each of the targets. When Logical Unit Numbers (LUNs) are successfully created on the target, host initiators can connect to the target through iSCSI sessions. For more information, see the Dell PowerStore T page.
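As a minimal Full Switch mode sketch of the switch side of such a deployment, the following creates a dedicated iSCSI VLAN and enables jumbo frames on one storage-facing interface; the VLAN ID (30) is illustrative and should match your iSCSI SAN design:

MX9116n-A1# configure terminal
MX9116n-A1(config)# interface vlan 30
MX9116n-A1(conf-if-vl-30)# description iSCSI-SAN-A
MX9116n-A1(conf-if-vl-30)# exit
! Tag the iSCSI VLAN on the PowerStore-facing port and allow jumbo frames
MX9116n-A1(config)# interface ethernet 1/1/41:1
MX9116n-A1(conf-if-eth1/1/41:1)# switchport mode trunk
MX9116n-A1(conf-if-eth1/1/41:1)# switchport trunk allowed vlan 30
MX9116n-A1(conf-if-eth1/1/41:1)# mtu 9216
MX9116n-A1(conf-if-eth1/1/41:1)# end

In SmartFabric mode, the equivalent configuration is performed through the OME-M UI by assigning a Storage - iSCSI VLAN type to the appropriate ports.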



Figure 47. iSCSI network to Dell PowerStore 1000T

NVMe/TCP

OME-M 1.40.20 NVMe/TCP support


With the release of OME-M 1.40.20, the MX platform supports NVMe/TCP.
NVMe/TCP and SFSS solutions with PowerEdge MX require PowerEdge MX Baseline 22.03.00 (1.40.20) and are supported in Full Switch mode only. Converged FCoE and NVMe/TCP on the same IOM is not currently supported.

NVMe/TCP support on 25 GbE IOMs, including the MX9116n FSE, MX7116n FEM, and MX5108n
NVMe/TCP and SFSS solutions with PowerEdge MX require PowerEdge MX Baseline 22.09.00 (2.00.00 and later) and are
supported in SmartFabric mode and Full Switch mode.
When operating in SmartFabric Mode, the Storage - NVMe/TCP VLAN type for NVMe/TCP traffic is required.
Converged FCoE and NVMe/TCP on the same IOM is not currently supported.

NVMe/TCP support on the 100 GbE solution with external Z9432F-ON FSE and MX8116n FEM, for operation at both 25 GbE and 100 GbE
NVMe/TCP and SFSS solutions with PowerEdge MX require PowerEdge MX Baseline 23.05.00 (2.10.00 and later) and are supported in Full Switch mode.
FCoE is not supported on the MX8116n-based solution; therefore, converged FCoE and NVMe/TCP is not supported.
All network mezzanine cards supported on the MX8116n-based solution are supported for NVMe/TCP and SFSS.
For more information, refer to the following resources:

● SFSS Deployment Guide: Demonstrates the planning and deployment of SmartFabric Storage Software (SFSS) for NVMe/TCP.
● NVMe/TCP Host/Storage Interoperability Simple Support Matrix: Provides information about the NVMe/TCP host/storage interoperability support matrix.
● NVMe/TCP Supported Switches Simple Support Matrix: Provides information about the NVMe/TCP supported switches support matrix.

Host FCoE session load balancing


Host FCoE session load balancing differs depending on the version of OS10 that is being used.



OS10 version 10.5.2.4 or later
The FC uplinks from the MX9116n follow industry-standard protocols. Unlike the Ethernet LACP Link Aggregation Group (LAG) protocol, there is no industry-standard mechanism for bonding multiple FC uplinks together. Because of this, Fibre Channel switch manufacturers independently developed their own proprietary mechanisms that are not interoperable. This prevents the MX9116n FC uplinks from being bonded using native or proprietary protocols.
Instead, load balancing is achieved through a single Fibre Channel Forwarder (FCF) per vFabric. The logical FCF behaves as follows:
● All available operational Fibre Channel uplinks in a fabric are presented as a single logical unit: one logical Fibre Channel Forwarder (FCF) to the end points connected to that fabric.
● Better load balancing is achieved during boot-up and bulk configuration because only FC uplinks that have successfully completed the initial login with the upstream switch at the time of timer expiry are considered.
NOTE: Set the timeout value using the CLI command fcoe delay fcf-adv timeout.
● The system finds the optimally loaded FC uplink; the load-balancing algorithm uses the link's session count and the link speed as factors for session rebalancing.
● End devices do not have control over the link chosen for session establishment. This behavior ensures better load balancing across the available uplinks. After the session is established, the FCoE/FC data traffic is redirected to the port with which the login request was associated.
NOTE: As of OME-M 1.20.00 and OS10.5.0.7, it is possible to rebalance FCoE sessions across FCFs. For more information, see Rebalancing FC and FCoE sessions.

OS10 version 10.5.1.9 and earlier


The FC uplinks from the MX9116n follow industry-standard protocols. Unlike the Ethernet LACP Link Aggregation Group (LAG) protocol, there is no industry-standard mechanism for bonding multiple FC uplinks together. Because of this, Fibre Channel switch manufacturers independently developed their own proprietary mechanisms that are not interoperable. This prevents the MX9116n FC uplinks from being bonded using native or proprietary protocols.
Instead, the following load-balancing rules are used:
● Load is calculated based on the number of server sessions connected to the Fibre Channel Forwarder (FCF). The FCF runs in OS10 and provides the FC gateway functionality. There is one FCF for each physical uplink.
● If only one FCF is available, all the servers form FCoE sessions with that FCF.
● If there are multiple FCFs, the NPG module running in OS10 provides the least-loaded FCF available at that time to the next server that logs in to the FC fabric.
● Load balancing is performed only during the server login process.
● If a new FCF/uplink is created, existing server sessions are not automatically balanced across the new uplink. New server sessions leverage the new FCF.
● Once a server is logged in to an FCF, it does not shift to the least-loaded FCF until there is a disruption to the existing session.
NOTE: As of OME-M 1.20.00 and OS10.5.0.7, it is possible to rebalance FCoE sessions across FCFs. For more information, see Rebalancing FC and FCoE sessions.

PowerEdge MX IOM operations


Dell PowerEdge MX switches can be managed using the OME-M console. From the Switch Management page, you can
view activity, health, and alerts. The Switch Management page also allows you to perform operations such as power control,
firmware update, and port configuration. Many of these operations can also be performed in Full Switch mode.

Switch Management page overview


To access the Switch Management page:
1. Open the OME-M console.
2. From the navigation menu, click Devices > I/O Modules.
3. Select the preferred switch.



NOTE: In the following example, the MX9116n FSE IOM-A1 is selected.

Figure 48. IOM Overview page on OME-M

Switch Overview
The Overview page provides a convenient location to view the pertinent data on the IOM such as:
● Chassis information
● Recent alerts
● Recent activity
● IOM subsystems
● Environment
The Power Control drop-down button provides three options:
● Power Off: Turns off the IOM
● Power Cycle: Power cycles the IOM
● System Reset: Initiates a cold reboot of the IOM



Figure 49. Power Control options

The Blink LED drop-down button provides an option to turn the ID LED on the IOM on or off. To turn on the ID LED, select
Blink LED > Turn On. This selection activates a blinking blue LED which provides easy identification. To turn off the blinking ID
LED, select Blink LED > Turn Off.

Figure 50. Blink LED button

Hardware tab
The Hardware tab provides information about the following IOM hardware:
● FRU
● Device Management Info
● Installed software
● Port information



Figure 51. Hardware tab

In SmartFabric mode, the Port Information tab provides useful operations such as:
● Configuring port-group breakout
● Toggling the admin state of ports
● Configuring MTU of ports
● Toggling Auto Negotiation
● Setting the port description
NOTE: Do not use the OME-M UI to manage ports of a switch in Full Switch mode.

Figure 52. Port Information

View port status


The OME-M console can be used to show the port status. In this example, the figure displays ports for an MX9116n FSE.
1. Open the OME-M console.
2. From the navigation menu, click Devices > I/O Modules.
3. Select an IOM and click the View Details button to the right of the Inventory screen. The IOM Overview displays for that
device.
4. From IOM Overview, click Hardware.
5. Click the Port Information tab.
The image below shows Ethernet 1/1/1, 1/1/3, 1/71/1, and 1/72/1 in the correct operational status, which is Up. The interfaces
correspond to the MX740c compute sleds in slots 1 and 2 in both chassis. The figure also shows the VLT connection (port
channel 1000) and the uplinks (port channel 1) to the S5232F-ON leaf switches.



Figure 53. IOM port information



Firmware tab
The Firmware tab provides options to manage the firmware on the IOM. For more information about updating switch firmware,
see Upgrading Dell SmartFabric OS10.

Figure 54. Firmware tab

Upgrading Dell SmartFabric OS10


Upgrading the IOMs in the fabric should be done using the OME-M console. The upgrade is carried out using a Dell Update Package (DUP). A DUP is a self-contained package format that updates a single element on a system. Using DUPs, you can update a wide range of system components simultaneously and apply scripts to similar sets of Dell systems to bring them to the same version levels. As of OME-M 1.30.00 and OS10.5.2.4, the OS10 DUP is carried in the online firmware catalog and can be installed as part of a firmware baseline. Earlier versions of the OS10 DUP must be downloaded from https://www.dell.com/support/ and are not carried in the online firmware catalog.
NOTE: To access the complete inventory of drivers and other downloads specific for your system, sign in to your Dell
Support account.

NOTE: The following phased update order helps you to manually orchestrate MX component updates with no workload disruption. Update the components in the following order:
1. OME-Modular application
2. Network IOMs (SmartFabrics and Full Switch) and SAS IOMs
3. Servers, as a phased update (depending on clustering solution)

NOTE: When upgrading OS10, always perform the upgrade as part of an overall MX baseline. Follow the installation instructions in the OME-M User's Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.



Figure 55. Download page file for MX9116n FSE

NOTE: If an IOM is in SmartFabric mode, all the switches that are part of the fabric are updated in sequence automatically.
Do not select both of the switches in the fabric to update.

NOTE: If an IOM is in Full Switch mode, the firmware upgrade is completed only on the specific IOMs that are selected in
the UI.
For step-by-step instructions about how to upgrade OS10 on PowerEdge MX IO modules along with a version-to-version
upgrade matrix, see the OME-M User's Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility
and documentation table.

Alerts tab
The Alerts tab provides information about alerts and notifies the administrator. The advanced filter option can be leveraged to
quickly filter out alerts. Various operations can be performed on an alert or several alerts such as:
● Acknowledge
● Unacknowledge
● Ignore
● Export
● Delete

Figure 56. Alerts tab

Settings tab
The Settings tab provides options to configure the following settings for the IOMs:
● Network
● Management
● Monitoring
● Advanced Settings

Figure 57. Settings tab

Network
The Network option includes configuring IPv4, IPv6, DNS Server, and Management VLAN settings.

Figure 58. Network settings

Management
The Management option includes setting the hostname and admin account password.
NOTE: Beginning with OME-M 1.20.00 and OS10.5.0.7, this field sets the admin account password. For OME-M 1.10.20 and OS10.5.0.5 and earlier, the field, named Root Password, sets the OS10 linuxadmin account password. The default username for CLI access is admin and the password is admin.



Figure 59. Management settings

Monitoring
The Monitoring section provides options for SNMP settings.

Figure 60. Monitoring settings option

Advanced Settings
The Advanced Settings tab offers the option for time configuration replication and alert replication. Select the Replicate
Time Configuration from Chassis check box to replicate the time settings that are configured in the chassis to the IOM.
Select the Replicate Alert Destination Configuration from Chassis check box to replicate the alert destination settings that
are configured in the chassis to the IOM.

Figure 61. Advanced settings option

OS10 privileged accounts


OS10 uses two privileged user accounts:
● For day-to-day operations, the default administrative account user name is 'admin', and 'admin' is the default password.
● For specific troubleshooting needs, Dell Technologies support may have you log in to the Linux shell.



NOTE: The Linux shell account is linuxadmin and the default password is linuxadmin.

NOTE: You cannot delete the default linuxadmin user name. The default admin user name can only be deleted if at
least one OS10 user with the sysadmin role is configured.
For more information on OS10 privileged accounts, find the relevant version of the User Guide in the OME-M and OS10
compatibility and documentation table.

Setting the OS10 admin account password using OME-M


To configure the OS10 ‘admin’ account password, access the OME-M UI. Choose Devices > I/O Module > Select an IOM and
choose Settings.

Figure 62. Set password on OME-M

NOTE: Passwords require a minimum of nine characters.

NOTE: OME-M versions prior to 1.20.00 will set the linuxadmin password, instead of the 'admin' password, when using
this page.
If the MXG610s I/O module is selected, this procedure sets the admin account password for the Fabric OS running on the IOM.

Failure to set the password message


The following error message displays if the password requirements are not met.

Figure 63. Error message for password requirements failure

Validate password configuration


SSH to the switch and log in using the new password to ensure that the new password has been set.

NIC teaming guidelines


While NIC teaming is not required, it is suggested for redundancy unless a specific implementation recommends against it.



There are two main kinds of NIC teaming:

● Switch dependent: Also referred to as LACP, 802.3ad, or Dynamic Link Aggregation, this teaming method uses the LACP protocol to understand the teaming topology. It provides active/active teaming and requires the switch to support LACP teaming.
● Switch independent: This method uses the operating system and NIC device drivers on the server to team the NICs. Each NIC vendor may provide slightly different implementations with different pros and cons.

NIC Partitioning (NPAR) can impact how NIC teaming operates. Based on restrictions that the NIC vendors implement and that
are related to NIC partitioning, certain configurations preclude certain types of teaming.
The following restrictions are in place for both Full Switch and SmartFabric modes:
● If NPAR is not in use, both switch-dependent (LACP and static LAG) and switch-independent teaming methods are
supported.
● If NPAR is in use, only switch-independent teaming methods are supported. Switch-dependent teaming (LACP and static
LAG) is not supported.
If switch dependent (LACP) teaming is used, the following restrictions are in place:
● The iDRAC shared LAN on motherboard (LOM) feature can only be used if the Failover option on the iDRAC is enabled.
● If the host operating system is Microsoft Windows, the LACP timer MUST be set to Slow, also referred to as Normal.
Refer to the network adapter or operating system documentation for detailed NIC teaming instructions.
● Microsoft Windows 2012 R2: refer to the Instructions section
● Microsoft Windows 2016: refer to the Instructions section
NOTE: For deployments utilizing NPAR on the MX Platform with VMware solutions, contact Dell Support.

The following table shows the options that the MX Platform provides for NIC teaming:

Table 10. NIC teaming options on the MX Platform

Teaming option   Description
No teaming       No NIC bonding, teaming, or switch-independent teaming
LACP teaming     LACP (also called 802.3ad or dynamic link aggregation)
Other            Other
                 NOTE: If using the Broadcom 57504 Quad-Port NIC and two separate LACP
                 groups are needed, select this option and configure the LACP groups in the
                 operating system. Otherwise, this setting is not recommended as it can have a
                 performance impact on link management.

NOTE: LACP Fast timer is not currently supported.



4
Full Switch Mode
VLAN scaling guidelines for Full Switch mode
When running RSTP with IGMP snooping disabled, the following table indicates the total number of Port VLAN (PV) combinations that are supported. This number is calculated by multiplying the total number of VLANs provisioned on the switch by the number of active ports, including VLTi and uplink port channels. For example, a switch with 20 active ports and 200 provisioned VLANs has a PV value of 4,000 (20 x 200). SmartFabric OS10 includes a command, scale-profile vlan, that enables a larger PV value. On OS10 version 10.5.2.4 and earlier, IGMP/MLD snooping cannot be enabled when scale-profile vlan is enabled. For more information on this command and its use, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
NOTE: Enabling scale-profile vlan can be done without a reboot of the switch; however, any VLANs created prior to enabling it will not support the additional VLAN scale capabilities until the switch has been rebooted.

NOTE: Prior to enabling scale-profile vlan, add the mode L3 command to VLAN 4020 and any VLANs with FCoE or routing enabled. Failure to do this will disrupt network traffic on those VLANs, including access to the management interface on the switch. For more information on this command and its use, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
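The following minimal sketch shows the order of operations from the notes above; VLAN 10 stands in for any VLAN with FCoE or routing enabled in your environment:

OS10# configure terminal
! First, set mode L3 on VLAN 4020 and any FCoE- or routing-enabled VLANs
OS10(config)# interface vlan 4020
OS10(conf-if-vl-4020)# mode L3
OS10(conf-if-vl-4020)# exit
OS10(config)# interface vlan 10
OS10(conf-if-vl-10)# mode L3
OS10(conf-if-vl-10)# exit
! Then enable the larger PV scale profile
OS10(config)# scale-profile vlan
OS10(config)# end
OS10# write memory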

Table 11. Supported Port VLAN values

OS10 version              Platform   With scale-profile vlan enabled   Without scale-profile vlan enabled
10.5.5.1 (factory         MX5108n    45,000 PV                         10,000 PV
installed) and 10.5.5.2   MX9116n    200,000 PV                        30,000 PV
10.5.4.1                  MX5108n    45,000 PV                         10,000 PV
                          MX9116n    200,000 PV                        30,000 PV
10.5.3.1                  MX5108n    45,000 PV                         10,000 PV
                          MX9116n    180,000 PV                        30,000 PV
10.5.2.6 and 10.5.2.9     MX5108n    30,000 PV                         10,000 PV
                          MX9116n    60,000 PV                         30,000 PV
10.5.1.6 and 10.5.1.7     MX5108n    30,000 PV                         10,000 PV
                          MX9116n    60,000 PV                         30,000 PV
10.5.0.7                  MX5108n    20,000 PV                         10,000 PV
                          MX9116n    60,000 PV                         20,000 PV

NOTE: When the PV value becomes very large, some show commands may take additional time to execute. This delay does
not impact switching performance, only the CLI display function.

Managing Fibre Channel Zones on MX9116n FSE


When a storage array is directly connected to the MX9116n FSE, Fibre Channel Zones can be used to improve security and
performance.



Preparation of the servers is the same as mentioned in Server preparation. Determine the FC WWPNs for the compute sleds
and storage array as discussed in Dell PowerStore 1000T.
NOTE: FC zoning is supported in both SmartFabric mode and Full Switch mode. In each mode, the FC zones are configured
through the CLI as shown in the example below.
These examples assume that the storage array has been successfully connected to the MX9116n FSE’s FC uplinks and there are
no errors.
Below are examples of the steps and commands to configure FC Zoning.
NOTE: For more information about the Dell SmartFabric OS10 Fibre Channel capabilities and commands, find the relevant
version of the User Guide in the OME-M and OS10 compatibility and documentation table.
These examples are valid for both Full Switch and SmartFabric modes.
NOTE: For the default zone settings to work properly, ensure that the maximum number of logged-in FC and FCoE nodes is
less than 120.

Configure FC aliases for server and storage adapter WWPNs


An FC alias is a human-defined name that references a WWN, which allows users to refer to devices by an easy-to-remember alias instead of the long WWN. In this example, aliases for two MX740c compute sleds and a Dell PowerStore 1000T storage array are defined.
The WWNs for the servers are obtained using the OME-M console.

MX9116n-A1:

configure terminal
fc alias mx740c-1p1
 member wwn 20:01:00:0E:1E:09:A2:3A
fc alias mx740c-2p1
 member wwn 20:01:00:0E:1E:09:B8:F6
fc alias SpA-0
 member wwn 50:06:01:66:47:E0:1B:19
fc alias SpB-0
 member wwn 50:06:01:6E:47:E0:1B:19

MX9116n-A2:

configure terminal
fc alias mx740c-1p2
 member wwn 20:01:00:0E:1E:09:A2:3B
fc alias mx740c-2p2
 member wwn 20:01:00:0E:1E:09:B8:F7
fc alias SpA-1
 member wwn 50:06:01:67:47:E0:1B:19
fc alias SpB-1
 member wwn 50:06:01:6F:47:E0:1B:19

Create FC zones
Server and storage adapter WWPNs, or their aliases are combined into zones to allow communication between devices in the
same zone. Dell Technologies recommends single-initiator zoning. In other words, no more than one server HBA port per zone.
For high availability, each server HBA port should be zoned to at least one port from SP A and one port from SP B. In this
example, one zone is created for each server HBA port. The zone contains the server port and the two storage processor ports
that are connected to the same MX9116n FSE.

NOTE: The maximum number of members in an FC zone is 255.

MX9116n-A1:

fc zone mx740c-1p1zone
 member alias-name mx740c-1p1
 member alias-name SpB-0
 member alias-name SpA-0
fc zone mx740c-2p1zone
 member alias-name mx740c-2p1
 member alias-name SpB-0
 member alias-name SpA-0

MX9116n-A2:

fc zone mx740c-1p2zone
 member alias-name mx740c-1p2
 member alias-name SpB-1
 member alias-name SpA-1
fc zone mx740c-2p2zone
 member alias-name mx740c-2p2
 member alias-name SpB-1
 member alias-name SpA-1



Create zone set
A zone set is a collection of zones. A zone set named zoneset1 is created on each switch, and the zones are added to it.

MX9116n-A1:

fc zoneset zoneset1
 member mx740c-1p1zone
 member mx740c-2p1zone
 exit

MX9116n-A2:

fc zoneset zoneset1
 member mx740c-1p2zone
 member mx740c-2p2zone
 exit

Activate zone set


Once the zone set is created and members are added, activating the zone set is the last step in the process. After the zone set
is activated, save the configuration using the write memory command.

MX9116n-A1:

vfabric 1
 zoneset activate zoneset1
 exit
write memory

MX9116n-A2:

vfabric 1
 zoneset activate zoneset1
 exit
write memory
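To confirm the result, the active zone set and the devices logged in to the name server can be checked on each switch; this is a sketch assuming the OS10 FC show command set, and the output varies by environment:

MX9116n-A1# show fc zoneset active
MX9116n-A1# show fc ns switch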

Full Switch mode IO module replacement process


NOTE: If you are replacing an I/O module (IOM) in SmartFabric mode prior to OME-M version 1.30.00, the process used
depends on the version of OS10 installed and should be run with Dell Technical Support engaged. For technical support, go
to https://www.dell.com/support or call (USA) 1-800-945-3355. With OME-M 1.30.00 and later, see the SmartFabric mode
IOM replacement process section.

NOTE: A new replacement IOM will have a factory default configuration. All port interfaces in the default configuration are
in the no shutdown state.

In Full Switch mode, the Dell PowerEdge MX platform gives you the option to replace an I/O module in the case of persistent errors or failures. The MX9116n FSE and MX5108n can each be replaced with another I/O module of the same type. In the case of errors or failures, replace the old IOM with a new IOM.
Follow the instructions in this section to replace a failed I/O module.
Prerequisites:
● The replacement IOM must be a new device within the chassis deployment. Do not use an IOM that was previously deployed
within the MCM group.
● The other IOM in Full Switch mode must be up, running, and healthy; otherwise a complete traffic outage may occur.
● The new IOM must have the same OS10 version as the faulty IOM.
NOTE: OS10 is factory-installed in the MX9116n FSE or MX5108n Ethernet Switch. If the faulty IOM has an upgraded
version of OS10, you must upgrade the new IOM to the same version.
The following is an overview of the module replacement process:
1. Back up the IOM configuration.
2. Physically replace the IOM.
3. Verify firmware versions and configure the IOM settings.
4. Restore the IOM configuration.
5. Connect the cables to the new IOM.



Back up the IOM configuration
If possible, obtain a current backup of the running configuration for the IOM being replaced. The running configuration contains
the current OS10 system configuration and consists of a series of OS10 commands.
For instructions on how to back up the switch configuration, find the relevant version of the User Guide in the OME-M and
OS10 compatibility and documentation table.
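As a minimal sketch, the running configuration can be copied to a remote server before the replacement; the TFTP server address and file name here are illustrative:

OS10# copy running-configuration tftp://192.168.1.10/MX9116n-A1-backup.cfg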

Physically replace the IOM


Perform the following steps to physically replace an IOM:
1. Identify the faulty IOM to replace.
2. Carefully record the cable and port connections to ensure that the correct cables are connected to the correct ports once
the replacement IOM is installed. Disconnect the cables connected to the faulty IOM.
3. Remove the faulty IOM and set it aside.
4. Insert the new IOM in the same slot as the failed IOM.
NOTE: The model of the new IOM must be the same, and the new IOM must have the same version of SmartFabric
OS10 as the old IOM.
5. Confirm that the new IOM has been recognized by OME-M before proceeding further.

Verify firmware versions and configure the IOM settings


Verify the firmware version on the new IOM using the show version command. If required, upgrade the firmware on the new
IOM. To view a pending firmware upgrade, use the show image firmware command. For more information, see the Install
firmware upgrade section in the Dell SmartFabric OS10 User Guide. Find the relevant version of the User Guide in the OME-M
and OS10 compatibility and documentation table.
Configure the hostname and IP management protocols (such as SNMP and NTP) on the new IOM and then restore the
configuration to the new switch. For more information, see the System management chapter in the Dell SmartFabric OS10 User
Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
NOTE: When you remove the faulty IOM in Full Switch mode, the CLI configurations are lost. Reapply the configurations in
the new IOM using OS10 CLI.

Restore the IOM configuration


To restore a backup configuration, copy a local or remote file to the startup configuration and reload the switch.
See the Dell SmartFabric OS10 User Guide for instructions on how to restore the switch configuration. Find the relevant version
of the User Guide in the OME-M and OS10 compatibility and documentation table.
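One possible sequence is sketched below, assuming the backup was taken with copy running-configuration as shown earlier; the TFTP server address and file name are illustrative, and you should consult the User Guide for the procedure appropriate to your OS10 version:

OS10# copy tftp://192.168.1.10/MX9116n-A1-backup.cfg running-configuration
OS10# write memory
OS10# reload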

Connect the cables to the new IOM


The I/O module is now ready to be used. Connect the network cables in the same configuration that was used on the failing I/O
module.

VLAN stacking
Dell Technologies introduces VLAN stacking in Dell SmartFabric OS10.5.4.0. This feature, commonly called Q-in-Q, is available
for use on the Dell PowerEdge MX platform in Full Switch mode starting with version OS10.5.4.1.
VLAN stacking is often recommended for the service provider use case. It enables service providers to offer separate VLANs to customers with no coordination between customers and only minimal coordination between customers and the provider. VLAN stacking allows service providers to add their own VLAN tag to data or control frames traversing the provider network, so the provider can differentiate customers even if those customers use the same VLAN ID. The provider's network forwarding decisions are based on the provider VLAN tag only. This tag enables the provider to map traffic through the core independent of the customer; the customer and provider only coordinate at the provider edge.



At the access point of a VLAN-stacking network, service providers add a VLAN tag, the S-Tag, to each frame before the 802.1Q
tag. From this point on, the frame is double tagged. The service provider uses the S-Tag to forward frame traffic across its
network. At the egress edge, the provider removes the S-Tag so that the customer receives the frame in its original condition,
as shown in the following figure.

Figure 64. Addition (ingress) and removal (egress) of the S-Tag before the original 802.1Q header

Another use case, more suited to the Dell PowerEdge MX platform, is to allow the MX7000 chassis, or MX Scalable Fabric, to be treated as a single workload from the perspective of the top-of-rack (ToR) leaf pair. VLAN stacking is used to allow many workloads with unique VLANs to be represented by a single stack VLAN on the uplink of the MX IOMs. This allows VLAN changes to occur within the MX Scalable Fabric on each server without networking admins needing to change the configuration in the overall data center, and it gives the PowerEdge MX platform additional flexibility for VLAN management and scaling.
The following diagrams demonstrate a few topologies:



Figure 65. VLAN stacking to a data center - One leaf pair

Figure 66. VLAN stacking to a data center - Multiple leaf pairs



Figure 67. 802.1Q header and port types for VLAN stacking to a data center



For more information about VLAN stacking, see the VLAN Stacking section in the Dell SmartFabric OS10 User Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.



5
Overview of SmartFabric Services for
PowerEdge MX
A SmartFabric is a logical entity that consists of a collection of physical resources, such as servers and switches, and logical
resources such as networks, templates, and uplinks. The OpenManage Enterprise – Modular (OME-M) console provides a
method to manage these resources as a single unit.

Functional overview
SmartFabric mode provides the following functionality:
● Data center modernization
○ I/O aggregation
○ Plug-and-play fabric deployment
○ Single interface to manage all switches in the fabric
● Lifecycle management
○ Fabric-wide SmartFabric OS10 updates
○ Automated or user-enforced rollback to last well-known state
● Fabric automation
○ Physical topology compliance
○ Server networking managed using templates
○ Automated QoS assignment per VLAN
○ Automated storage networking
● Failure remediation
○ Dynamically adjusts bandwidth across all interswitch links in the event of a link failure
○ Automatically detects fabric misconfigurations or link level failure conditions
○ Automatically heals the fabric on failure condition removal
NOTE: In SmartFabric mode, MX series switches operate entirely as a Layer 2 network fabric. Layer 3 protocols are not
supported.

OS10 operating mode differences


The following table outlines the differences between the two operating modes; these differences apply to both the MX9116n FSE and the MX5108n switches.

Table 12. OS10 operating mode differences

Full Switch mode: Configuration changes are persistent during power cycle events.
SmartFabric mode: Only the configuration changes made using the following OS10 commands are persistent across power cycle events; all other CLI configuration commands are disabled: alarm, alias, batch, boot, clean-reset, clear, cli, clock, commit, configure, copy, crypto, debug, delete, dir, disable, discard, enable, errdisable, event, exit, fefd, generate, help, image, kill-session, license, lock, move, no, nve, password-change, ping, ping6, re-balance, reload, show, start, support-assist, support-assist-activity, system, terminal, traceroute, uds, undebug, unlock, validate, write, ztd, spanning-tree, and vlan.

Full Switch mode: All switch interfaces are assigned to VLAN 1 by default and are in the same Layer 2 bridge domain.
SmartFabric mode: Layer 2 bridging is disabled by default. Interfaces must join a bridge domain (VLAN) before being able to forward frames.

Full Switch mode: All configuration changes are saved in the running configuration by default. To display the current configuration, use the show running-configuration command.
SmartFabric mode: Verify configuration changes using feature-specific show commands, such as show interface and show vlan, instead of show running-configuration.

CLI commands available in SmartFabric mode


When operating in SmartFabric mode, access to CLI commands is restricted to SmartFabric OS10 show commands and the
following subset of CLI configuration commands:

alarm Alarm commands


alias Set alias for a command
batch Batch Mode
boot Tell the system where to access the software image at bootup
clean-reset Boot system mode clean-reset/normal
clear Clear command
cli Cli command
clock Configure the system clock



commit Commit candidate configuration
configure Enter configuration mode
copy Perform a file copy operation
crypto Cryptography commands
debug Debug command
delete Perform a file delete operation on local file system
dir Show the list of files for the specified system folder
disable Turn off privileged commands at a specific level
discard Discard candidate configuration
enable Turn on privileged commands at a specific level
errdisable Reset errdisable settings
event Event commands
exit Exit from the CLI
fefd Reset error disabled interface(s)
generate Command to generate executed functionality
help Display available commands
image Image commands
kill-session Kill a CLISH session
license License and digital fulfillment commands
lock Lock candidate configuration
move Perform a file move/rename operation on local filesystem
no No commands under exec mode
nve NVE Controller exec command
password-change change password for your login credentials
ping ping -h shows help
ping6 ping6 -h shows help
re-balance rebalance
reload Reboot Networking Operating System
show Show running system information
start Activate transaction based configuration
support-assist Support Assist command
support-assist-activity Support Assist related activity
system System command
terminal Set terminal settings
traceroute traceroute --help shows help
uds UDS functionality
undebug Disable debug for all modules
unlock Unlock candidate configuration
validate Validate candidate configuration
write Copy from current system configuration
ztd ZTD Commands.

spanning-tree
disable Disable spanning-tree globally
mac-flush-timer Set the maximum time in which mac flushes will be optimized
mode Spanning tree type to enable
rstp Set rstp port parameters
vlan Select vlan range option

IOM slot placement in SmartFabric mode


SmartFabric mode supports three specific switch placement options. Placements other than those described here are not supported and may result in unpredictable behavior and/or data loss.
A SmartFabric cannot be split across physical fabric slots. For example, you cannot create a SmartFabric with switches in slots A1 and B1. They must be in A1/A2 or B1/B2.

NOTE: The cabling shown in this section is the VLTi connection between the MX switches.

Two MX9116n Fabric Switching Engines in different chassis


This is the required IOM placement when creating a SmartFabric on top of a Scalable Fabric Architecture. Placing the FSE modules in different chassis provides redundancy in the event of a chassis failure. This configuration supports placement in Chassis 1 Slot A1 and Chassis 2 Slot A2, and/or Chassis 1 Slot B1 and Chassis 2 Slot B2. A SmartFabric cannot include a switch in Fabric A and a switch in Fabric B.



Figure 68. IOM placement – 2x MX9116n in different chassis

Two MX5108n Ethernet switches in the same chassis


The MX5108n Ethernet Switch is only supported in single chassis configurations, with the switches in either slots A1/A2 or slots
B1/B2. A SmartFabric cannot include a switch in Fabric A and a switch in Fabric B.

Figure 69. IOM placement – 2x MX5108n in the same chassis

Two MX9116n Fabric Switching Engines in the same chassis


This placement should only be used in environments with a single chassis, with the switches in either slots A1/A2 or slots B1/B2.
A SmartFabric cannot include a switch in Fabric A and a switch in Fabric B.
As of OME-M 1.20.00, an MX deployment can start with a single MX7000 chassis with a pair of MX9116n FSEs and grow to
two or more chassis. The instructions for this can be found in this document in Expanding from a single-chassis to dual-chassis
configuration.



Figure 70. IOM placement – 2x MX9116n in the same chassis

Switch-to-switch (VLTi) cabling


When operating in SmartFabric mode, each switch pair runs a VLT interconnect (VLTi) between them. For the MX9116n FSE,
QSFP28-DD port groups 11 and 12 (eth1/1/37-1/1/40) are used.
For the MX5108n, ports 9 and 10 are used. Port 10 operates at 40 GbE instead of 100 GbE because all VLTi links must run at the
same speed.

NOTE: The VLTi ports are not user selectable, and the SmartFabric engine enforces the connection topology.

Figure 71. MX9116n SmartFabric VLTi cabling

Figure 72. MX5108n SmartFabric VLTi cabling

VLT backup link


A pair of cables is used to provide redundancy for the primary VLTi link. A third redundancy mechanism, a VLT backup link, is automatically created when the SmartFabric is created. This link exchanges VLT heartbeat information between the two switches using the management network to avoid a split-brain scenario should the external VLTi links go down. Based on the node liveliness information, the VLT LAG/port is in the up state on the primary VLT peer and in the down state on the secondary VLT peer. When only the VLTi link fails but the peer is alive, the secondary VLT peer shuts down the VLT ports. When the node in the primary peer fails, the secondary becomes the primary peer.
To see the status of the VLT backup link, run show vlt <domain-id> backup-link.
For example:

OS10# show vlt 255 backup-link

VLT Backup Link
------------------------
Destination           : fde1:53ba:e9a0:de14:2204:fff:fe00:a267
Peer Heartbeat status : Up
Heartbeat interval    : 30
Heartbeat timeout     : 90
Destination VRF       : default

Configuring port speed and breakout


If you need to change the default port speed and/or breakout configuration of an uplink port, you must complete this task
before creating the uplink.
For example, the QSFP28 interfaces that belong to port groups 13, 14, 15, and 16 on MX9116n FSE are typically used for uplink
connections. By default, the ports are set to 1x 100 GbE. The QSFP28 interface supports the following Ethernet breakout
configurations:
● 1x 100 GbE – One 100 GbE interface
● 1x 40 GbE – One 40 GbE interface
● 2x 50 GbE – Breakout a QSFP28 port into two 50 GbE interfaces
● 4x 25 GbE – Breakout a QSFP28 port into four 25 GbE interfaces
● 4x 10 GbE – Breakout a QSFP28 port into four 10 GbE interfaces
The MX9116n FSE also supports fibre channel (FC) capabilities using universal ports on port-groups 15 and 16. For more
information about configuring FC storage on the MX9116n FSE, see Scenario 5 and Scenario 6 in the Configuration scenarios
section.
For more information on interface breakouts, find the relevant version of the User Guide in the OME-M and OS10 compatibility
and documentation table.
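In Full Switch mode, the same change is made from the CLI; the following minimal sketch breaks a QSFP28 port group out into four 25 GbE interfaces (the port-group number is illustrative):

OS10# configure terminal
OS10(config)# port-group 1/1/13
! Break the 100 GbE port out into four 25 GbE interfaces
OS10(conf-pg-1/1/13)# mode Eth 25g-4x
OS10(conf-pg-1/1/13)# end
OS10# show port-group

In SmartFabric mode, the breakout is configured from the OME-M UI (IOM Hardware > Port Information) rather than the CLI, as described in the Switch Management page overview.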



VLAN scaling guidelines
Because SmartFabric mode provides network automation capabilities that Full Switch mode does not, the number of supported
VLANs differs between the modes. The following table provides the recommended maximum number of VLANs per fabric,
uplink, and server port. For a SmartFabric created with OME-M version 1.20.10 or earlier, you must enable support for VLAN
counts larger than 256 per fabric. A SmartFabric created with OME-M version 1.30.00 or later has this support automatically
enabled. See the Enable support for larger VLAN counts for more information.
If the number of configured VLANs is more than 500, it is recommended to enable IGMP/MLD snooping only on the VLANs that require it, not exceeding 500 snooping-enabled VLANs. If fewer than 500 VLANs are configured, disable IGMP/MLD snooping globally.
Beginning with OME-M 1.30.00, IGMP/MLD snooping can be enabled in SmartFabric mode. To enable IGMP/MLD Snooping,
see the Layer 2 Multicast, Internet Group Management Protocol (IGMP) snooping, Multicast Listener Discovery Protocol (MLD)
snooping section.

NOTE: These are recommendations, not enforced maximums.

Table 13. Recommended maximum number of VLANs in SmartFabric mode

OS10 versions 10.5.5.1 (factory install) and 10.5.5.2:
● Recommended max VLANs per fabric: 3000
● Recommended max VLANs per uplink: 3000
● Recommended max VLANs per server port: 1500
● Maximum number of MX9116n FSEs in a single MCM group: 12
● Maximum number of MX5108n Ethernet switches in a single MCM group: 8

OS10 version 10.5.4.1:
● Recommended max VLANs per fabric: 3000
● Recommended max VLANs per uplink: 3000
● Recommended max VLANs per server port: 1500
● Maximum number of MX9116n FSEs in a single MCM group: 12
● Maximum number of MX5108n Ethernet switches in a single MCM group: 8

OS10 version 10.5.3.1:
● Recommended max VLANs per fabric: 3000
● Recommended max VLANs per uplink: 3000
● Recommended max VLANs per server port: 1024
● Maximum number of MX9116n FSEs in a single MCM group: 12
● Maximum number of MX5108n Ethernet switches in a single MCM group: 8

OS10 version 10.5.2.4:
● Recommended max VLANs per fabric: 1536
● Recommended max VLANs per uplink: 512 across all uplinks
● Recommended max VLANs per server port: 512 across all uplinks
● Maximum number of MX9116n FSEs in a single MCM group: 12
● Maximum number of MX5108n Ethernet switches in a single MCM group: 8

OS10 versions 10.5.1.6 and 10.5.1.7:
● Recommended max VLANs per fabric: 512
● Recommended max VLANs per uplink: 512 across all uplinks
● Recommended max VLANs per server port: 256
● Maximum number of MX9116n FSEs in a single MCM group: 12 (a)
● Maximum number of MX5108n Ethernet switches in a single MCM group: 8

OS10 versions 10.5.0.1 through 10.5.0.7:
● Recommended max VLANs per fabric: 256
● Recommended max VLANs per uplink: 64 across all uplinks
● Recommended max VLANs per server port: 64

OS10 versions 10.4.0.R3S and 10.4.0.R4S:
● Recommended max VLANs per fabric: 128
● Recommended max VLANs per uplink: 128 across all uplinks
● Recommended max VLANs per server port: 32

a. From SmartFabric OS10.5.1.6 and later, twelve FSEs in a single MCM group and eight MX5108n switches in a single MCM group are supported, but twelve FSEs and eight MX5108n switches (20 total) together in a single MCM group are not supported.

NOTE: VLANs 4004 and 4020 are reserved for internal switch communication and cannot be assigned to any interface in
Full Switch or SmartFabric mode. VLAN 4020 is a Management VLAN and is enabled by default. Do not remove this VLAN,
and do not remove the VLAN tag or edit Management VLAN on the Edit Uplink page. In Full Switch mode, you can create
a VLAN, enable it, and define it as a Management VLAN in global configuration mode on the switch. All other VLANs are
allowed on data plane and can be assigned to any interface. For more information on Configuring VLANs in Full Switch
mode, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.

NOTE: In SmartFabric mode, a VLAN can be created using the CLI, but cannot be deleted or removed. Therefore, all VLAN
configuration must be done in the OME-M UI while in SmartFabric mode.

Maximum Transmission Unit behavior


Beginning with OS10.5.1.6, the default maximum transmission unit (MTU) size is 9216 bytes. Earlier versions default to 1512
bytes. When a SmartFabric is created, the default MTU for the switch is set to jumbo (9216 bytes), even if manually changed
prior to creating the SmartFabric. This introduces the following behaviors:
● If the MTU is not individually set on a specific interface, the MTU is 9216 bytes.
● If the MTU has been specifically set on an individual interface, the MTU is the value that has been specified.
● If an FCoE VLAN is assigned to an interface, the MTU is set to 2500 bytes, even if the MTU was manually set to a different value before the FCoE VLAN was assigned. It is recommended that you set the MTU back to 9216 bytes after the FCoE VLAN is assigned.
See Configure Ethernet ports for instructions on setting the MTU.
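In Full Switch mode, the interface MTU can also be set directly from the CLI; a minimal sketch (the interface shown is illustrative):

OS10# configure terminal
OS10(config)# interface ethernet 1/1/1
! Restore jumbo MTU, for example after an FCoE VLAN assignment
OS10(conf-if-eth1/1/1)# mtu 9216
OS10(conf-if-eth1/1/1)# end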

Layer 2 Multicast, IGMP, and MLD snooping


Multicast is a technique that allows networking devices to send data to a group of interested receivers in a single transmission.
Multicast allows you to more efficiently use network resources, specifically for bandwidth-consuming services. Dell SmartFabric
OS10 supports the multicast feature in IPv4 and IPv6 networks and uses the following protocols for multicast distribution:
● Internet Group Management Protocol (IGMP)
● Protocol Independent Multicast (PIM)
To enable multicast routing in Full Switch mode, see the Dell SmartFabric OS10 User Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table. Beginning with OME-M 1.30.00 and later, configuring Layer 2 Multicast in a SmartFabric is supported.



IGMP snooping
IGMP is a communications protocol that establishes multicast group memberships to neighboring switches and routers using
IPv4 networks. OS10 supports IGMPv1, IGMPv2, and IGMPv3 to manage the multicast group memberships on IPv4 networks.
IGMP snooping uses the information in IGMP packets to generate a forwarding table that associates ports with multicast
groups. When switches receive multicast frames, they forward them to their intended receivers. OS10 supports IGMP snooping
on VLAN interfaces.
To enable IGMP snooping in Full Switch mode, see the Dell SmartFabric OS10 User Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.

MLD snooping
IPv6 uses the MLD protocol to manage multicast groups. OS10 supports MLDv1 and MLDv2 to manage multicast group memberships on IPv6 networks.
MLD snooping enables switches to use the information in MLD packets and generate a forwarding table that associates ports
with multicast groups. When switches receive multicast frames, they forward them to their intended receivers. OS10 supports
MLD snooping on VLAN interfaces.
To enable MLD snooping in Full Switch mode, see the Dell SmartFabric OS10 User Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.

Configuring L2 Multicast in SmartFabric mode


To enable L2 Multicast, IGMP snooping and MLD snooping in SmartFabric mode, follow the steps mentioned below:
1. Access OME-M Console.
2. Go to Devices > Fabric and click on the desired Fabric.
3. Select the Multicast VLANs tab.
NOTE: This tab shows current IGMP version, MLD version and Flood restrict configuration. Flood restrict enables the
switch to forward unknown multicast packets to a multicast router. For it to be effective on the VLAN, IGMP and MLD
snooping must be enabled on the VLAN.

Figure 73. L2 Multicast option under Fabric


4. Select L2 Multicast.
5. Under IGMP, select the VLAN(s) in Available VLANs and move them to Selected VLANs as required.



Figure 74. Select VLANs for IGMP snooping
6. Select the Add selected VLANs to MLD configuration option for the same VLANs to be configured for MLD snooping.
7. Click Next.
8. Select the VLANs for MLD snooping then click Finish.

Figure 75. Selected VLANs for IGMP and MLD snooping

Validation
Run the following commands on MX IOMs in the Fabric to validate the IGMP and MLD snooping.
The show ip igmp snooping summary command shows the maximum number of instances and the total number of interfaces with IGMP snooping enabled.

MX9116N-A1# show ip igmp snooping summary


Maximum number of IGMP and MLD Instances: 512
Total Number of interface with IGMP Snooping enabled is: 1

The show ip igmp snooping interface command shows VLANs, IGMP version and all other IGMP snooping details.

MX9116N-A1# show ip igmp snooping interface


Vlan10 is up, line protocol is up
IGMP version is 3
IGMP snooping is enabled on interface
IGMP snooping query interval is 60 seconds
IGMP snooping querier timeout is 130 seconds
IGMP snooping last member query response interval is 1000 ms
IGMP Snooping max response time is 10 seconds
IGMP snooping fast-leave is disabled on this interface
IGMP snooping querier is disabled on this interface
Multicast snooping flood-restrict is enabled on this interface



Upstream network requirements
This section describes the requirements and guidelines for connecting a SmartFabric to an upstream network.

Physical connectivity
All physical Ethernet connections within an uplink from a SmartFabric are automatically grouped into a single LACP LAG. All
related ports on the upstream switches must also be in a single LACP LAG. Failure to do so may create network loops.
A minimum of one physical uplink from each MX switch to each upstream switch is required and the uplinks must be connected
in a mesh design. For example, if you have two upstream switches, you need two uplinks from each MX9116n FSE, as shown in
the following figure.
Starting with Dell SmartFabric OS10.5.2.4, a SmartFabric supports a maximum of four Ethernet - No Spanning Tree uplinks or three legacy Ethernet uplinks. With Dell SmartFabric OS10.5.1.6 or earlier, a SmartFabric supports a maximum of three Ethernet - No Spanning Tree uplinks or three legacy Ethernet uplinks.
NOTE: If multiple uplinks are going to be used, you cannot use the same VLAN ID on more than one uplink without creating
a network loop.

NOTE: The upstream switch ports must be in a single LACP LAG as shown in the figure below. Creating multiple LAGs
within a single uplink results in a network loop and is not supported.

Figure 76. Required upstream network connectivity
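For reference, the following is a minimal sketch of the upstream side when the leaf pair runs Dell SmartFabric OS10 with VLT: all ports facing the SmartFabric go into one LACP port channel that is mirrored on the VLT peer. The interface numbers, port-channel ID, and VLAN IDs are illustrative:

Leaf1# configure terminal
Leaf1(config)# interface port-channel 1
! Mirror this port channel on the VLT peer with the same vlt-port-channel ID
Leaf1(conf-if-po-1)# vlt-port-channel 1
Leaf1(conf-if-po-1)# switchport mode trunk
Leaf1(conf-if-po-1)# switchport trunk allowed vlan 10,20
Leaf1(conf-if-po-1)# exit
Leaf1(config)# interface range ethernet 1/1/1-1/1/2
! Place both SmartFabric-facing ports in the single LACP LAG
Leaf1(conf-range-eth1/1/1-1/1/2)# channel-group 1 mode active
Leaf1(conf-range-eth1/1/1-1/1/2)# end

Apply the same configuration on the second leaf so that both VLT peers present one LAG to the SmartFabric.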

The maximum number of uplinks supported in a SmartFabric is detailed in the following table.

Table 14. Number of uplinks supported

OME-M version        Uplink type supported                   Number of uplinks
2.10.00              Ethernet - No Spanning Tree             4
                     Legacy Ethernet (with Spanning Tree)    3
                     Fibre Channel                           1 (from each switch)
2.00.00              Ethernet - No Spanning Tree             4
                     Legacy Ethernet (with Spanning Tree)    3
                     Fibre Channel                           1 (from each switch)
1.40.00              Ethernet - No Spanning Tree             4
                     Legacy Ethernet (with Spanning Tree)    3
                     Fibre Channel                           1 (from each switch)
1.30.00              Ethernet - No Spanning Tree             4
                     Legacy Ethernet (with Spanning Tree)    3
                     Fibre Channel                           1 (from each switch)
1.20.10              Ethernet - No Spanning Tree             3
                     Legacy Ethernet (with Spanning Tree)    3
                     Fibre Channel                           1 (from each switch)
1.20.00              Ethernet - No Spanning Tree             2 (only QSFP28 interfaces are
                                                             supported for Ethernet - No
                                                             Spanning Tree uplinks)
                     Legacy Ethernet (with Spanning Tree)    3
                     Fibre Channel                           1 (from each switch)
1.10.20 and earlier  Ethernet                                3
                     Fibre Channel                           1 (from each switch)

Dell Technologies has tested uplinks with the following combinations of switch models and operating system versions.

Table 15. Tested upstream switches and operating system versions

Manufacturer   Switch model           Operating system version
Cisco          Nexus C93180YC-EX      NX-OS 9.2.4
               FEX C2232-PP
Cisco          Nexus C93180YC-EX      ACI 14.0(3d)
               Nexus C9332C
               FEX C2232-PP
Arista         DCS-7280SR2K-48C6-F    4.23.0F

Supported slot configurations for IOMs


The following table lists the supported IOM slot configurations.

Table 16. Supported IOM slot matrix for 100 GbE solution
Slot A1 Slot A2 Slot B1 Slot B2
MX8116n MX8116n Empty Empty
Empty Empty MX8116n MX8116n
MX8116n MX8116n MX8116n MX8116n
MX8116n Empty MX8116n Empty
Empty MX8116n Empty MX8116n



NOTE: The above table shows supported slot configurations for the 100 GbE solution only. Deploying 25 GbE IOMs together with the 100 GbE solution is supported; see the section 100 GbE combined deployment with legacy IOMs.

Table 17. Supported IOM slot matrix for 25 GbE solution


Slot A1 Slot A2 Slot B1 Slot B2
MX9116n Empty Empty Empty
MX5108n Empty Empty Empty
MX7116n Empty Empty Empty
25G PTM Empty Empty Empty
10GBT PTM Empty Empty Empty
MX9116n Empty MX9116n Empty
MX5108n Empty MX5108n Empty
MX7116n Empty MX7116n Empty
25G PTM Empty 25G PTM Empty
10GBT PTM Empty 10GBT PTM Empty
MX9116n MX9116n MX9116n Empty
MX5108n MX5108n MX5108n Empty
MX7116n MX7116n MX7116n Empty
25G PTM 25G PTM 25G PTM Empty
10GBT PTM 10GBT PTM 10GBT PTM Empty
MX9116n MX9116n MX5108n MX5108n
MX9116n MX9116n 25G PTM 25G PTM
MX9116n MX9116n 10GBT PTM 10GBT PTM
MX9116n MX7116n MX5108n MX5108n
MX7116n MX9116n MX5108n MX5108n
MX9116n MX7116n 25G PTM 25G PTM
MX7116n MX9116n 25G PTM 25G PTM
MX9116n MX7116n 10GBT PTM 10GBT PTM
MX7116n MX9116n 10GBT PTM 10GBT PTM
MX7116n MX7116n MX5108n MX5108n
MX7116n MX7116n 25G PTM 25G PTM
MX7116n MX7116n 10GBT PTM 10GBT PTM
MX5108n MX5108n MX9116n MX9116n
MX5108n MX5108n MX7116n MX7116n
MX5108n MX5108n MX9116n MX7116n
MX5108n MX5108n MX7116n MX9116n
MX5108n MX5108n 25G PTM 25G PTM
MX5108n MX5108n 10GBT PTM 10GBT PTM
25G PTM 25G PTM MX9116n MX9116n
25G PTM 25G PTM MX7116n MX7116n

25G PTM 25G PTM MX9116n MX7116n
25G PTM 25G PTM MX7116n MX9116n
25G PTM a 25G PTM a 10GBT PTM a 10GBT PTM a
10GBT PTM 10GBT PTM MX9116n MX9116n
10GBT PTM 10GBT PTM MX7116n MX7116n
10GBT PTM 10GBT PTM MX9116n MX7116n
10GBT PTM 10GBT PTM MX7116n MX9116n
10GBT PTM a 10GBT PTM a 25G PTM a 25G PTM a

a. Combining two types of Pass-Through Modules (PTMs) is supported.

Other restrictions and guidelines


The following additional restrictions and guidelines are in place when operating in SmartFabric mode:
● Interconnecting switches in Slots A1/A2 with switches in Slots B1/B2, regardless of chassis, is not supported.
● When operating with multiple chassis, switches in Slots A1/A2 or Slots B1/B2 in one chassis must be interconnected only
with other Slots A1/A2 or Slots B1/B2 switches respectively. Connecting switches that reside in Slots A1/A2 in one chassis
with switches in Slots B1/B2 in another is not supported.
● Physical uplinks must be symmetrical. If one switch in a SmartFabric has two uplinks, the other switch must have two uplinks
of the same speed. Single-armed uplinks are not currently supported.
● You cannot have a pair of switches in SmartFabric mode uplink to another pair of switches in SmartFabric mode. A
SmartFabric can uplink to a pair of switches in Full Switch mode.
● VLANs 4004 and 4020 are reserved for internal switch communication and must not be assigned to an interface.
● In SmartFabric mode, you can use the CLI to create non-restricted VLANs, but you cannot assign interfaces to
them. For this reason, do not use the CLI to create VLANs in SmartFabric mode.
● VLAN 1 is automatically created as the Default/Native VLAN, but it is not required to be used. See Define VLANs for more
information.
● Do not create a VLAN or subnet on the Fabric that is in use for the management network on the MX Chassis or MX IOMs.

Ethernet – No Spanning Tree uplink


OME-M 1.20.00 and OS10.5.0.7 and later support a new uplink type: Ethernet - No Spanning Tree. This uplink type allows
Ethernet uplinks to represent a SmartFabric to the upstream network as an end host with multiple adapters, with spanning
tree disabled on the uplink interfaces.
A loop-free topology without STP is achieved by not allowing overlapping VLANs across uplinks. Supported use cases are shown
in the following figures.
NOTE: For PowerEdge MX systems using OME-M 1.20.00 and OS10.5.0.7 and later, Ethernet - No Spanning Tree uplinks
should be used instead of the legacy Ethernet uplink.
Supported scenarios:
The Ethernet - No Spanning Tree feature supports uplinks to both Dell and non-Dell switches in a vPC/VLT. Each uplink must be
in a single LACP LAG.
Guidelines:
● On an existing SmartFabric, all legacy Ethernet uplinks must be deleted before creating Ethernet - No Spanning Tree uplinks
to avoid the possibility of creating a network loop.
● Ethernet-No Spanning Tree uplinks cannot co-exist with legacy Ethernet uplinks in the same SmartFabric.
● VLAN IDs (tagged/untagged) must not overlap.
● FCoE Uplinks require separate untagged VLAN IDs.

● With OME-M 1.20.00, only QSFP28 interfaces on the MX9116n FSE are supported for Ethernet - No Spanning Tree uplinks.
With OME-M 1.20.10 and later, QSFP28-DD interfaces for Ethernet - No Spanning Tree are also supported.
Use Case 1: Standard uplink configuration (maximum of 2 uplinks)

Figure 77. Standard uplink configuration

Use Case 2: Uplink with FC gateway

Figure 78. Uplink with FC gateway

Use Case 3: Uplink with direct attached FC

Figure 79. Uplink with direct attached FC

Use Case 4: Ethernet - No Spanning Tree uplink with FCoE FSB

Figure 80. Uplink in FSB scenario

Configuring Ethernet - No Spanning Tree uplinks


Creating an Ethernet – No Spanning Tree uplink follows the same process as creating a legacy Ethernet uplink; only the
upstream switch configuration differs. Configuration examples for upstream switches can be found in this guide under
Configuration Scenarios. Instructions for how to create an uplink are included in this guide under Create Ethernet – No
Spanning Tree uplink.

Spanning Tree Protocol - legacy Ethernet uplink


Dell Technologies does not recommend the legacy Ethernet uplink type when creating a new SmartFabric; use the Ethernet - No
Spanning Tree uplink instead.
By default, SmartFabric OS10 uses Rapid per-VLAN Spanning Tree Plus (RPVST+) across all switching platforms including
PowerEdge MX networking IOMs. SmartFabric OS10 also supports RSTP.
NOTE: Dell Technologies recommends using RSTP instead of RPVST+ when more than 64 VLANs are required in a fabric to
avoid performance problems.
Use caution when connecting an RPVST+ environment to an existing RSTP environment. RPVST+ creates a spanning tree
topology per VLAN and uses the default VLAN, typically VLAN 1, for the Common Spanning Tree (CST) with RSTP.
For non-native VLANs, all bridge protocol data unit (BPDU) traffic is tagged and forwarded by the upstream, RSTP-enabled
switch on the associated VLAN. These BPDUs use a protocol-specific multicast address. Any other RPVST+ tree that is
attached to the RSTP tree might process these packets, potentially leading to unexpected spanning tree topologies.
NOTE: When connecting to an existing environment that is not using RPVST+, Dell Technologies recommends changing to
the existing spanning tree protocol before connecting a SmartFabric OS10 switch. This change ensures that the same type
of Spanning Tree is run on the SmartFabric OS10 MX switches and the upstream switches.
To switch from RPVST+ to RSTP, use the spanning-tree mode rstp command:

MX9116N-A1(config)# spanning-tree mode rstp
MX9116N-A1(config)# end

To validate the STP configuration, use the show spanning-tree brief command:

MX9116N-A1# show spanning-tree brief
Spanning tree enabled protocol rstp with force-version rstp
Executing IEEE compatible Spanning Tree Protocol
Root ID Priority 0, Address 4c76.25e8.f2c0
Root Bridge hello time 2, max age 20, forward delay 15
Bridge ID Priority 32768, Address 2004.0f00.cd1e
Configured hello time 2, max age 20, forward delay 15
Flush Interval 200 centi-sec, Flush Invocations 95
Flush Indication threshold 0 (MAC flush optimization is disabled)

NOTE: STP is required when using legacy Ethernet uplinks. MSTP is not supported. Operating a SmartFabric with STP
disabled and the legacy Ethernet uplink may create a network loop and is not supported. Use the Ethernet - No Spanning
Tree uplink instead.

NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
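For example, on an upstream Dell PowerSwitch running OS10 with Rapid-PVST, the following sketch forces that switch to
become the root bridge for VLAN 10 by assigning the lowest priority; the prompt and VLAN ID are placeholders for your
environment:

S5232F-Leaf1(config)# spanning-tree vlan 10 priority 0
S5232F-Leaf1(config)# end
S5232F-Leaf1# show spanning-tree brief

Repeat the priority command for each VLAN carried on the uplink, and confirm that the MX IOMs do not report themselves
as root.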

Networks and automated QoS


In addition to assigning VLANs to server profiles, SmartFabric automates QoS settings based on the Network Type specified.
The following figure shows that when defining a VLAN, several options are pre-defined.

Figure 81. Network types available in SmartFabric mode

The following table lists the network types and related settings. The QoS group is the numerical value for the queues available in
SmartFabric mode. Available queues include 2 through 5. Queues 1, 6, and 7 are reserved.
NOTE: In SmartFabric mode, an administrator cannot change the default weights for the queues. Weights for each queue
can be seen using the show queuing weights interface ethernet command that is described in Common CLI
troubleshooting commands for Full Switch and SmartFabric modes.
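For example, the read-only queue weights on a port can be inspected from the IOM CLI with an invocation of the following
form; the interface ID is a placeholder for a port in your fabric:

MX9116N-A1# show queuing weights interface ethernet 1/1/1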

Table 18. Network types and default QoS settings

Network type                Description                                                  QoS group
General Purpose (Bronze)    Used for low-priority data traffic                           2
General Purpose (Silver)    Used for standard/default-priority data traffic              3
General Purpose (Gold)      Used for high-priority data traffic                          4
General Purpose (Platinum)  Used for extremely high-priority data traffic                5
Cluster Interconnect        Used for cluster heartbeat VLANs                             5
Hypervisor Management       Used for hypervisor management connections such as the       5
                            ESXi management VLAN
Storage - NVMe/TCP          Used for NVMe/TCP storage traffic                            4
Storage - iSCSI             Used for iSCSI VLANs                                         5
Storage - FCoE              Used for FCoE VLANs                                          5
Storage - Data Replication  Used for VLANs supporting storage data replication such as   5
                            for VMware VSAN
VM Migration                Used for VLANs supporting vMotion and similar technologies   5
VMware FT Logging           Used for VLANs supporting VMware Fault Tolerance             5

Server templates, profiles, virtual identities, networks, and deployment
For detailed information about server templates, profiles, virtual identities, and deployment, see the OpenManage Enterprise -
Modular documentation.

Templates
A template is a set of system configuration settings referred to as attributes. A template may contain a small set of attributes
for a specific purpose, or all the attributes for a full system configuration. Templates allow for multiple servers to be configured
quickly and automatically without the risk of human error.
Networks (VLANs) are assigned to NICs as part of the server template. When the template is deployed, those networks are
programmed on the fabric for the servers that are associated with the template.
NOTE: Network assignment through template only functions for servers connected to a SmartFabric. If a template with
network assignments is deployed to a server connected to a switch in Full Switch mode, the network assignments are
ignored.
The OME-M UI provides the following options for creating templates:
● Most frequently, templates are created by getting the current system configuration from a server that has been configured
to the exact specifications required. This is referred to as a Reference Server.
● Templates may be cloned, copied, and edited.
● A template can be created by importing a Server Configuration Profile (SCP) file. The SCP file may be from a server or
exported by OpenManage Essentials, OpenManage Enterprise, or OME-M.
● OME-M comes prepopulated with several templates for specific purposes.

Profiles
A server profile is a combination of template and identity settings that is applied to a specific server or to multiple servers.
When a server template is deployed successfully, OME-M automatically creates a server profile and applies it to the target
server. OME-M also allows you to manually create a server profile that you can apply to the designated compute sleds.
Instead of deleting and recreating server templates, profiles can be used to deploy a server template with some or all of its
attributes modified.

Virtual identities and identity pools


Some of the attributes that are in a template are referred to as identity attributes. Identity attributes identify a device and
distinguish it from all other devices on the network. Since identity attributes must uniquely identify a device, it is imperative that
each device has a unique network identity. Otherwise, devices cannot communicate with each other over the network.
Devices come with unique manufacturer-assigned identity values preinstalled, such as a factory-assigned MAC address. Those
identities are fixed and never change. However, a device can assume a set of alternate identity values, called a "virtual
identity," and function on the network as if the virtual identity were its factory-installed identity. The use of virtual
identities is the basis for stateless operations.
OME-M uses identity pools to manage the set of values that can be used as virtual identities for discovered devices. It controls
the assignment of virtual identity values, selecting values for individual deployments from predefined ranges of possible values.
This allows the customer to control the set of values which can be used for identities. The customer does not have to enter all
needed identity values with every deployment request, or remember which values have or have not been used. Identity pools
make configuration deployment and migration easier to manage.
Identity pools are used with template deployment and profile operations. They provide sets of values that can be used for
virtual identity attributes during deployment. After a template is created, an identity pool may be associated with it; the
template then draws identity values from that pool whenever it is deployed to a target device. The same identity pool can be
associated with, or used by, any number of templates, but only one identity pool can be associated with a given template.
Each template has specific virtual identity needs, based on its configuration. For example, one template may have iSCSI
configured, so it needs the appropriate virtual identities for iSCSI operations. Another template may not have iSCSI configured,
but may have FCoE configured, so it needs virtual identities for FCoE operations but not for iSCSI operations.

Deployment
Deployment is the process of applying a full or partial system configuration on a specific target device. In OME-M, templates are
the basis for all deployments. Templates contain the system configuration attributes that get provisioned to the target server,
then the iDRAC on the target server applies the attributes contained in the template and reboots the server if necessary. Often,
templates contain virtual identity attributes. As mentioned above, identity attributes must have unique values on the network.
Identity Pools facilitate the assignment and management of unique virtual identities.

VMware vCenter integration - OpenManage Network Integration
Dell OpenManage Network Integration (OMNI) is an external plug-in for VMware vCenter that is designed to complement
SmartFabric Services (SFS) by integrating with VMware vCenter to perform fabric automation. With the release of OMNI
2.0, this integration is extended to SFS that runs on PowerEdge MX. This integration automates VLAN changes that occur in
VMware vCenter and propagates those changes into the related SFS instances running on the MX platform as shown in the
following figure.
The combination of OMNI and Cisco ACI vCenter integration creates a fully automated solution. OMNI and the Cisco APIC
recognize changes in vCenter and automatically propagate the changes to the MX SmartFabric and ACI fabric respectively. This
allows a VLAN change to be made in vCenter, and it will flow through the entire solution without any manual intervention.
For more information about OMNI, see the SmartFabric Services for OpenManage Network Integration User Guide on the Dell
OpenManage Network Integration for VMware vCenter documentation page.

NOTE: OMNI 2.0 and 2.1 only support VLAN automation with one uplink per SmartFabric.

Figure 82. OMNI integration workflow

OpenManage Integration for VMware vCenter


The Dell OpenManage systems management solutions portfolio provides full-lifecycle management of PowerEdge servers and
associated infrastructure. The foundational technologies are the integrated Dell Remote Access Controller (iDRAC) and the
OpenManage Enterprise console.
The Dell OpenManage Integration for VMware vCenter (OMIVV) is designed to streamline the management processes in your
data center environment by allowing you to use VMware vCenter Server to manage your full server infrastructure - both
physical and virtual.
The following list of OMIVV capabilities applies to the full portfolio of Dell OpenManage systems management solutions:
● Monitor PowerEdge hardware inventory directly in Host and Cluster views and the OMIVV dashboard within vCenter
● Bubble up hardware system alerts for configurable actions in vCenter
● Manage firmware alongside vSphere Lifecycle Manager in vSphere 7.0 and higher
● Set baselines for server configuration and firmware levels with cluster aware updates for non-vSphere Lifecycle Manager
vSphere and vSAN clusters
● Speed deployment of ESXi to new PowerEdge servers and quickly add them to managed vCenters
OMIVV provides a unified PowerEdge and VMware inventory, monitoring, and update solution. Specifically, for the Dell
PowerEdge MX platform, OMIVV provides the following:
● Inventory, monitoring, and alerting directly within vCenter
● Manage server lifecycle updates in vCenter

Chapter 6: SmartFabric Creation
Steps to create a SmartFabric
The procedures in this section make the following assumptions:
● All MX7000 chassis and management modules are cabled correctly and in a MultiChassis Management group.
● The VLTi cables between switches have been connected.
● Open Manage Enterprise - Modular is at version 1.20.00 and OS10 is version 10.5.0.7 or later.
● The entire platform is healthy.
NOTE: All server, network, and chassis hardware must be updated to the latest firmware. See Software and firmware
versions used for the minimum recommended firmware versions.
To walk through the steps of creating a SmartFabric yourself, see the interactive demos for MX at Dell Technologies Interactive
Demo: OpenManage Enterprise Modular for MX solution management.

Physically cable PowerEdge MX chassis and upstream switches
There are multiple areas of cabling for the PowerEdge MX chassis that must be completed. It is recommended to cable the
PowerEdge MX chassis and upstream switches before creating the SmartFabric.

Table 19. Cable requirements and instructions

Cable requirement                            Instructions
Management module cabling                    MX Chassis management wiring:
                                             https://www.dell.com/support/manuals/en-us/poweredge-mx7000/omem_1_30_10_ug/revision-history?guid=guid-891bbdd9-3032-4b85-9f92-63ac8c002d9b&lang=en-us
VLTi cabling options                         See IOM placement – 2x MX9116n in different chassis, IOM placement –
                                             2x MX5108n in the same chassis, and IOM placement – 2x MX9116n in
                                             the same chassis.
Cabling the PowerEdge MX chassis upstream    See the example topologies in Configuration Scenarios.
Console cable access, in-band and            Management Networks for Dell Networking
out-of-band management networks

For more information about network cabling on PowerEdge MX, see Supported cables and optical connectors.

Define VLANs
Before creating the SmartFabric, the initial set of VLANs should be created. The first VLAN to be created should be the default,
or native VLAN, typically VLAN 1. The default VLAN must be created for any untagged traffic to cross the fabric.

NOTE: VLAN 1 will be created as a Default VLAN when the first fabric is created.

To define VLANs using the OME-M console, perform the following steps.
1. Open the OME-M console.
2. From the navigation menu, click Configuration > VLANs.
NOTE: In OME-M 1.10.20 and earlier, the VLANs screen is titled Networks.

3. In the VLANs pane, click Define.


4. In the Define Network window, complete the following:
a. Enter a name for the VLAN in the Name box. In this example, VLAN0010 was used.
b. Optionally, enter a description in the Description box. In this example, the description was entered as “Company A
General Purpose”.
c. Enter the VLAN number in the VLAN ID box. In this example, 10 was entered.
d. From the Network Type list, select the desired network type. In this example, General Purpose (Bronze) was used.
e. Click Finish.
The following figure shows VLAN 1 and VLAN 10 after being created using the previous steps.

Figure 83. Defined VLANs list
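Once the fabric is deployed and the VLANs are in use, they can also be confirmed from the IOM CLI; a quick check,
assuming the VLAN 10 example above:

MX9116N-A1# show vlan

VLAN 1 and VLAN 10 should appear in the output; port membership is populated as uplinks and server profiles that use
the VLANs are deployed.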

Define VLANs for FCoE

NOTE: Define VLANs for FCoE if implementing Fibre Channel configurations. Skip this section if not required.

A standard Ethernet uplink carries assigned VLANs on all physical uplinks. When implementing FCoE, traffic for SAN path A and
SAN path B must be kept separate. The storage arrays have two separate controllers which create two paths, SAN path A and
SAN path B, connected to the MX9116n FSE. For storage traffic to be redundant, two separate VLANs are created for that
traffic.
Using the same process described in Define VLANs, create two additional VLANs for FCoE traffic.

Table 20. FCoE VLAN attributes

Name   Description  Network type    VLAN ID  SAN
FC A1  FCOE A1      Storage - FCoE  30       A
FC A2  FCOE A2      Storage - FCoE  40       B

Figure 84. Defined FCoE VLANs list

NOTE: To create VLANs for FCoE, from the Network Type list, select Storage – FCoE, and then click Finish. VLANs to be
used for FCoE must be configured as the Storage – FCoE network type.

NOTE: In OME-M 1.10.20 and earlier, the VLANs screen is titled as Networks.

Create the SmartFabric


To create a SmartFabric using the OME-M console, perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. In the Fabric pane, click Add Fabric.
4. In the Create Fabric window, complete the following:
a. Enter a name for the fabric in the Name box. In this example, "SmartFabric" was entered.
b. Optionally, enter a description in the Description box. In this example, the description was entered as “SmartFabric
using MX9116n/MX7116n in Fabric A.”
c. Click Next.
d. From the Design Type list, select the appropriate type. In this example, “2x MX9116n Fabric Switching Engine in
different chassis” was selected.
e. From the Chassis-X list, select the first MX7000 chassis.
f. From the Switch-A list, select Slot-IOM-A1.
g. From the Chassis-Y list, select the second MX7000 chassis to join the fabric.
h. From the Switch-B list, select Slot-IOM-A2.
i. Click Next.
j. On the Summary page, verify the proposed configuration and click Finish.
NOTE: From the Summary window, a list of the physical cabling requirements can be printed.

Figure 85. SmartFabric deployment design window

The SmartFabric deploys. The process of Fabric creation can take up to 20 minutes to complete. During this time, all related
switches are rebooted, and the operating mode changes from Full Switch to SmartFabric mode.

NOTE: After the fabric is created, the fabric health is critical until at least one uplink is created.

The following figure shows the new SmartFabric object and some basic information about the fabric.

Figure 86. SmartFabric post-deployment without defined uplinks
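The fabric state can also be checked from the CLI of either IOM using the SmartFabric Services show commands; a sketch,
assuming OS10.5.0.7 or later:

MX9116N-A1# show smartfabric cluster
MX9116N-A1# show smartfabric cluster member

The output should list both IOMs as members of the same cluster domain, with one switch holding the MASTER role.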

Optional steps
The configuration of forward error correction, uplink port speed and breakout, MTU, and autonegotiation is optional.

Forward error correction


NOTE: Users should only use this feature if needed.

Forward error correction (FEC) is a technique for controlling errors in data transmission at high speeds. With FEC, the
source sends redundant error-correcting code along with the data frame, and the destination uses it to detect and correct
errors without requiring retransmission. This extends the usable range of the signal and enhances data reliability.
Available FEC modes:
● CL91-RS - Supports 100 GbE
● CL108-RS - Supports 25 GbE and 50 GbE
● CL74-FC - Supports 25 GbE and 50 GbE
● Auto
● Off
In SmartFabric mode, configuring FEC is supported on OME-M 1.20.00 and later. FEC options CL91-RS, CL108-RS, CL74-FC,
Auto, and Off are available. The options displayed in the UI vary depending on the speed of the selected interface.

The following table shows the default FEC and auto negotiation values for optics and cables for the QSFP28-DD and QSFP28
ports at 200 GbE and 100 GbE speeds.

Table 21. Media, Auto negotiation, and default FEC values for 200 GbE and 100 GbE
Media Auto negotiation FEC
200 GbE and 100 GbE DAC Enabled CL91-RS
200 GbE and 100 GbE Fiber or AOC, except LR-related optics Disabled CL91-RS
200 GbE and 100 GbE LR-related optics Disabled Disabled

The following table shows the default FEC and auto negotiation values for optics and cables for the QSFP28-DD and QSFP28
ports at 200, 100, 50, and 25 GbE speeds.

Table 22. Media, cable type, auto negotiation, and default FEC values

Media                                        DAC cable type  Auto negotiation  FEC
200, 100, 50, and 25 GbE DAC                 CR-L            Enabled           CL108-RS
                                             CR-S            Enabled           CL74-FC
                                             CR-N            Enabled           Disabled
200, 100, 50, and 25 GbE Fiber or AOC,       N/A             Disabled          CL108-RS
except LR-related optics
200, 100, 50, and 25 GbE LR-related optics   N/A             Disabled          Disabled

To configure FEC in Full Switch mode, find the relevant version of the Dell SmartFabric OS10 User Guide in the OME-M and
OS10 compatibility and documentation table.
To configure FEC in SmartFabric mode on the OME-M console, perform the following steps.
Steps
1. Access the OME-M console.
2. Choose Devices > I/O Modules, and then click an I/O Module.
3. In the I/O Module page, choose Hardware > Port Information. This lists the IOM ports and their information.
4. Select a port to configure FEC on, and click the Configure FEC option at the top.
NOTE: FEC options are not supported for compute sled facing ports and FEM ports (breakout FEM, virtual ports).

Figure 87. Configure FEC option

5. The Current and Auto Negotiated FEC settings are displayed. Choose the FEC Type for the selected port from the list.

Figure 88. Select FEC Type

Verify FEC configuration


FEC can be verified from the I/O Module CLI in both Full Switch and SmartFabric modes.
The show interface ethernet 1/1/41 command shows the current and negotiated FEC for port 1/1/41:

MX9116N-A1# show interface ethernet 1/1/41
Ethernet 1/1/41 is up, line protocol is up
Port is part of Port-channel 2
Hardware is Eth, address is 20:04:0f:21:d4:f1
    Current address is 20:04:0f:21:d4:f1
Pluggable media present, QSFP28 type is QSFP28 100GBASE-SR4-NOF
    Wavelength is 850
    Receive power reading is not available
Interface index is 112
Internet address is not set
Mode of IPv4 Address Assignment: not set
Interface IPv6 oper status: Disabled
MTU 9216 bytes, IP MTU 9184 bytes
LineSpeed 100G, Auto-Negotiation off
Configured FEC is cl91-rs, Negotiated FEC is cl91-rs
(Output Truncated)

Configure uplink port speed or breakout

NOTE: Users should only perform this task if needed.

If the uplink ports must be reconfigured to a different speed or breakout setting from the default, you must complete this before
creating the uplink.
To configure the Ethernet breakout on port groups using OME-M Console, perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > I/O Modules.
3. Select the switch that you want to manage. In this example, an MX9116n FSE in slot IOM-A1 is selected.
4. Choose Hardware > Port Information.
5. In the Port Information pane, choose the desired port group. In this example port-group1/1/13 is selected.
6. Select Configure Breakout. In the Configure Breakout dialog box, select the required Breakout option. In the example
provided, the Breakout Type for port-group1/1/13 is selected as 1x 40GE.
NOTE: Before choosing the breakout type, you must set the Breakout Type to HardwareDefault and then set the
desired configuration. If the desired breakout type is selected before setting HardwareDefault, an error occurs.
7. Click Finish.

Figure 89. Select the desired breakout type
8. Configure the remaining breakout types on additional uplink port groups as needed.
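For reference, the equivalent Full Switch mode operation is performed from the CLI with the port-group mode command. A
minimal sketch, assuming port group 1/1/13 and the 1x 40GE breakout used in the example above:

MX9116N-A1(config)# port-group 1/1/13
MX9116N-A1(conf-pg-1/1/13)# mode Eth 40g-1x
MX9116N-A1(conf-pg-1/1/13)# end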

Configure Ethernet ports


Use the OME-M console to configure settings such as port breakout, MTU size, and auto negotiation. Perform the following
steps to modify these settings.
NOTE: In SmartFabric mode, do not configure interfaces using the CLI; use the OME-Modular UI instead. In Full Switch
mode, configuring interfaces through the OME-Modular UI is not supported; use the CLI instead.
1. From the Switch management page, choose Hardware > Port Information.

Figure 90. IOM Overview page on OME-M

Figure 91. Port information section
2. To configure MTU, select the port that is listed under the respective port-group.
3. Click Configure MTU. Enter MTU size in bytes.

Figure 92. Configure MTU


4. Click Finish.
5. To configure Auto Negotiation, select the port that is listed under the respective port-group and then click Toggle
AutoNeg. This changes the Auto Negotiation of the port to Disabled/Enabled.
6. Click Finish.

Figure 93. Enable/Disable Auto Negotiation


7. To configure the administrative state (shut/no shut) of a port, select the port that is listed under the respective port-group.
Click Toggle Admin State. This toggles the port administrative state between Disabled and Enabled.
8. Click Finish.

Create Ethernet – No Spanning Tree uplink


As of OME-M 1.20.00 and OS10.5.0.7, the preferred uplink type is the Ethernet - No Spanning Tree Protocol uplink. The legacy
Ethernet uplink is still available but is no longer recommended. The process for creating a legacy Ethernet uplink is the same as
below except for selecting Ethernet as the uplink type.
An Ethernet - No Spanning Tree uplink represents a SmartFabric as an end host with multiple adapters to the upstream
network. For this, STP is disabled on the uplink interfaces. A loop-free topology without STP is achieved by not allowing
overlapping VLANs across uplinks.

NOTE: Ethernet – No Spanning Tree uplink is supported with Dell and non-Dell switches in a vPC/VLT. Each uplink must be
a single LACP LAG.

NOTE: To change the port speed or breakout configuration, see Configure uplink port speed or breakout and make those
changes before creating the uplinks.
After initial deployment, the new fabric shows Uplink Count as ‘zero’ and shows a warning (yellow triangle with exclamation
point). The lack of a fabric uplink results in a failed health check (red circle with x). To create the uplink, perform the following
steps.
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Click on the fabric name. In this example, SmartFabric is selected.
4. In the Fabric Details pane, click Uplinks.
5. Click the Add Uplinks button.
6. In the Add Uplink window, complete the following:
a. Enter a name for the uplink in the Name box. In this example, Uplink01 is entered.
b. Optionally, enter a description in the Description box.
c. From the Uplink Type list, select the desired type of uplink. In this example, Ethernet – No Spanning Tree is selected.

Figure 94. Create Ethernet – No Spanning Tree uplink

NOTE: For more information on Uplink Failure Detection, see the Uplink failure detection section.
d. Click Next.
e. From the Switch Ports list, select the uplink ports on both MX9116n FSEs. In this example, ethernet 1/1/41 and
ethernet 1/1/42 are selected for both MX9116n FSEs.
NOTE: The show inventory CLI command can be used to find the I/O Module service tag information (for example,
8XRJ0T2).

f. From the Tagged Networks list, select the desired tagged VLANs. In this example, VLAN0010 is selected.
g. From the Untagged Network list, select the untagged VLAN. In this example, VLAN0001 is selected.

Figure 95. Create Ethernet uplink

7. Click Finish.
At this point, SmartFabric creates the uplink object and the status for the fabric changes to OK (green box with checkmark).
NOTE: VLAN 1 is assigned as the Untagged Network by default.
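After the uplink is created and the cables are connected, the uplink LAG can be sanity-checked from either IOM CLI; for
example:

MX9116N-A1# show port-channel summary
MX9116N-A1# show lldp neighbors

The automatically created uplink port channel should show the selected members (ethernet 1/1/41-1/1/42 in this example)
as up, and LLDP should list the upstream switch ports.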

Ethernet – No Spanning Tree upstream switch configuration
If using Ethernet – No Spanning Tree uplinks, refer to the following table to configure your uplink switches. Configurations for
Dell Networking OS10 (S5232F-ON) and Cisco Nexus 9000-series were used for these examples.

Table 23. Dell OS10 and Cisco Nexus Ethernet – No Spanning Tree configuration

Dell Networking OS10

Global Settings:
spanning-tree mode rstp

Port-channel:
switchport mode trunk
switchport trunk allowed vlan xy
spanning-tree bpdu guard enable
spanning-tree guard root
spanning-tree port type edge

Interface:
no shutdown
no switchport
channel-group <channel-group-id> mode active

Cisco Nexus OS

Global Settings:
spanning-tree port type edge bpduguard default
spanning-tree port type network default

Port-channel:
switchport mode trunk
switchport trunk allowed vlan xy

Interface:
switchport mode trunk
switchport trunk allowed vlan xy
channel-group <channel-group-id> mode active
spanning-tree port type edge trunk
spanning-tree bpduguard enable
spanning-tree guard root
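On the Cisco Nexus side, the vPC and LAG state can be verified with standard NX-OS commands once the uplink cables are
connected; for example:

NX-OS-1# show vpc brief
NX-OS-1# show port-channel summary

The port channel facing the SmartFabric should be up with its member ports bundled, and the vPC should report a
successful peer adjacency.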

Optional - Configure Fibre Channel


Depending on the deployment, configuration of the Fibre Channel ports and uplinks is optional.

Configure Fibre Channel universal ports

NOTE: Configure Fibre Channel universal ports only if implementing Fibre Channel. Skip this section if not required.

NOTE: Fibre Channel port speed must be specifically configured. Auto negotiation is not currently supported.

On the MX9116n FSE, port-group 1/1/15 and 1/1/16 are universal ports capable of connecting to FC devices at various speeds
depending on the optic being used. In this example, we are configuring the universal port speed as 4x32G FC. To enable FC
capabilities, perform the following steps on each MX9116n FSE.

NOTE: Port-group 1/1/16 is used for FC connections in this example.

1. Open the OME-M console.


2. From the navigation menu click Devices, then click I/O Modules.
3. In the Devices panel, click to select the IOM to configure.
4. In the IOM panel, click Hardware, then click Port Information.
NOTE: See the SmartFabric Services for PowerEdge MX Port-Group Configuration Errors video for more information
about configuration errors.

5. Click the port-group 1/1/16 check box, then click Configure breakout.
6. In the Configure breakout panel, select 4X32GFC as the breakout type used in this example.
NOTE: With OME-M 1.20.10 and earlier, you must set the Breakout Type to HardwareDefault first and then set the
desired configuration. If the desired breakout type is selected before setting HardwareDefault, an error occurs.
7. Click Finish.
NOTE: When enabling Fibre Channel ports, they are set administratively down by default. Select the ports and click the
Toggle Admin State button. Click Finish to administratively set the ports to up.

NOTE: The MX9116n supports FC speeds of 8G, 16G, and 32G FC.
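The resulting port mode can be verified from the IOM CLI with the show port-group command; a quick check, assuming the
4x32GFC example above:

MX9116N-A1# show port-group

Port group 1/1/16 should report the FC breakout mode, and the resulting FC interfaces appear once the ports are
administratively enabled.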

Create Fibre Channel uplinks


Before creating a Fibre Channel uplink, make sure you have configured the universal ports as FC ports using the steps in the
previous Configure Fibre Channel universal ports section.

NOTE: Create Fibre Channel uplinks only if implementing Fibre Channel configurations in your deployment.

NOTE: The steps in this section allow you to connect to an existing FC switch using NPG mode, or directly attach an
FC storage array. The uplink type is the only setting within the MX7000 chassis that distinguishes between the two
configurations.
To create uplinks, perform the following steps.

1. Open the OME-M console.
2. From the navigation menu click Devices, then click Fabric.
3. Click the SmartFabric fabric name.
4. In the Fabric Details panel, click Uplinks, then click the Add Uplinks button.
5. From the Add Uplinks window, use the information in the following table to enter an uplink name in the Name box.
6. Optionally, enter a description in the Description box.
7. From the Uplink Type list, select the uplink type, then click Next. In this example, FCoE is selected. Choose the uplink
type required for your configuration: FC Gateway, FC Direct Attach, or FCoE.
8. From the Switch Ports list, select the FC ports as defined in the following table. Select the appropriate port for the
connected uplink.
9. From the Tagged Networks list, select VLAN defined in the following table, then click Finish. SmartFabric creates the
uplink object, and the status for the fabric changes to OK.
NOTE: Fibre Channel ports are administratively disabled by default. Make sure to set the Fibre Channel ports to Enabled
by toggling the Admin State of the ports. This is done by choosing Devices > I/O Modules > MX9116n FSE switch >
Hardware and Port Information. Select the port and choose Toggle Admin State.

NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface on an MX IOM to the
existing FCoE switch, such as the Dell PowerSwitch S4148U used in the FSB configuration scenario.

NOTE: Ensure the MTU is set on the internal Ethernet ports carrying FCoE. If the MTU is not set, configure it by
selecting the port under Port Information and choosing Configure MTU. Enter an MTU size between 2500 and 9216
bytes.

NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
For the examples shown in Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV Proxy Gateway mode and
Scenario 6: Connect MX9116n FSE to Fibre Channel storage - FC Direct Attach, the uplink attributes are defined in the following
table.

Table 24. Uplink attributes

Uplink name  Description                      Ports                   VLAN (tagged)
FCoE A1      FC Uplink for switch in Slot A1  Switch model dependent  30
FCoE A2      FC Uplink for switch in Slot A2  Switch model dependent  40

NOTE: Do not assign the same FCoE VLAN to both switches. They must be kept separate.

Enable support for larger VLAN counts


A SmartFabric created on PowerEdge MX version 1.20.10 or later has support for large VLAN counts enabled by default. For
a SmartFabric created with OME-M prior to version 1.20.10, you must manually enable support for VLAN counts larger than
256 per fabric; upgrading to OME-M 1.20.10 or later does not by itself enable it on those SmartFabrics.

NOTE: If your environment has fewer than 256 VLANs, this support does not need to be enabled.

To enable this support, perform the following steps:


1. Download the script titled Set-ScaleVLANProfile.ps1 from the GitHub repository.
2. Copy this script to any folder or directory.
3. Open PowerShell and change the path to the directory where the script was copied.
4. Execute the script.

Figure 96. PowerShell - execute script
5. Enter the IP address of the OME-M instance that manages the fabric. In this example, the OME-M instance
IP is 100.67.XX.XX.

Figure 97. PowerShell - enter OME-M instance IP address


6. Provide the credentials for the OME-M instance.

Figure 98. PowerShell - enter credentials


7. When prompted, enter Enabled to enable the scale-vlan-profile. To disable the profile, enter Disabled.

Figure 99. PowerShell - enable scale-vlan-profile


8. Using the cursor, select the SmartFabric to enable the scale-vlan-profile on.

Figure 100. PowerShell - select fabric

The script indicates whether the operation completed successfully.

Figure 101. PowerShell - execution of script successful

9. To verify that the scale-vlan-profile has been enabled, access a switch CLI that is part of the fabric and execute the show
running-configuration command. If successful, the entry scale-profile vlan will be listed in the configuration.

Figure 102. Show running-configuration output on IOM to verify
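The check can also be narrowed by filtering the output; a sketch:

MX9116N-A1# show running-configuration | grep scale-profile
scale-profile vlan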

Uplink failure detection


Uplink failure detection (UFD) detects the loss of upstream connectivity from switch uplinks to the next-hop switch. If the
switch loses upstream connectivity, the related downstream server-facing interfaces are shut down so that the host can use a
different path to send data out of the fabric. Without UFD, attached hosts continue to send traffic to that switch even though
it no longer has a direct path to the destination, because the downstream devices receive no indication that upstream
connectivity was lost while connectivity to the local switch remains operational. The VLTi link to the peer switch can
temporarily carry traffic during a network outage, but relying on it is not considered a best practice.
NOTE: In the case of a loss of all VLTi links, the VLT secondary peer IOM brings down its VLT port channels. In SmartFabric
mode, UFD will not bring down those associated interfaces since there is an operational uplink. Ensure the server facing
ports are in a VLT port channel for proper behavior.
An uplink state group is configured on each switch, which creates an association between the uplinks to the upstream devices
and the downlink interfaces. In the event that all uplinks fail on a switch, UFD automatically shuts down the downstream
interfaces. This propagates to the hosts attached to the switch. Each host then uses its link to the remaining switch to continue
sending traffic across the network. An interface in an uplink-state group can be a physical interface or a port channel (LAG)
aggregation of physical interfaces.
In SmartFabric mode, UFD is automatically enabled with OME-M 1.10.20. UFD is user-configurable with OME-M 1.20.00 and
later. In Full Switch mode, UFD is NOT enabled by default and must be configured at the switch CLI. Enabling UFD is
recommended.

Figure 103. UFD topology

For example, in the MX scenario that is mentioned in Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV
Proxy Gateway mode, when an uplink is set as FC gateway, UFD associates the set of downstream interfaces which are part
of the corresponding FCoE VLAN into a UFD group. In this scenario, the VLANs are VLAN 30 and VLAN 40 on each switch
respectively. The downstream interfaces are the ones connected to the MX740c compute sleds.
In SmartFabric mode with an FC uplink failure situation, where all FC uplink ports are down (for example, removing the fibre
channel transceiver from the switch), the switch operationally disables the downstream interfaces which belong to that UFD
group AND have the FCoE VLAN provisioned to them. A server that does not have an impacted FCoE VLAN is not disturbed.
Once the downstream ports are set operationally down, the traffic on these server ports is stopped, giving the operating system
the ability to fail traffic over to the other path. In a scenario with MX9116n FSEs, a maximum of eight FC ports can be part of an
FC Gateway uplink.
UFD resolves this by shutting down only the corresponding compute sled downstream ports, allowing the compute sleds to
fail over to an alternate path. Bring up at least one FC port that is part of the FC Gateway uplink so that FCoE traffic can
transition through another FC port on the NIC or an IOM in the fabric. Remove FCoE VLANs from Ethernet-only downstream
ports to avoid an impact on Ethernet traffic.

Figure 104. UFD in an MX scenario

NOTE: In SmartFabric mode, one uplink-state-group is created and is enabled by default. In Full Switch mode, up to 16
uplink-state groups can be created, the same as any SmartFabric OS10 switch. By default, no uplink-state groups are
created in Full Switch mode. Physical port channels can be assigned to an uplink-state group.
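In Full Switch mode, a minimal UFD configuration from the CLI looks like the following sketch; the group number and
interface IDs are placeholders for your uplink and server-facing ports:

MX9116N-A1(config)# uplink-state-group 1
MX9116N-A1(conf-uplink-state-group-1)# enable
MX9116N-A1(conf-uplink-state-group-1)# upstream ethernet1/1/41
MX9116N-A1(conf-uplink-state-group-1)# downstream ethernet1/1/1
MX9116N-A1(conf-uplink-state-group-1)# end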
To include uplinks into a UFD group in SmartFabric mode, perform the following steps.
Steps
1. Access the OME-M console.
2. Select Devices > Fabric. Choose created fabric.
3. An uplink can be included in the UFD group in two ways. If uplinks have not been created, select Add Uplink. Enter the
Name, Description, and Uplink type.
4. Mark the check box Include in Uplink Failure Detection Group.

Figure 105. UFD under Add Uplink


5. If uplinks have already been created, choose an uplink and select Edit.
6. Under Edit Uplink, mark the check box Include in Uplink Failure Detection Group.
7. This enables UFD and includes the uplink in the UFD group.

Figure 106. UFD under Edit Uplink

Verifying UFD configuration


To verify UFD on a switch, run the following CLI commands.

MX9116n-1# show uplink-state-group 1
Uplink State Group: 1, Status: Enabled, Up

MX9116n-1# show uplink-state-group detail
(Up): Interface up  (Dwn): Interface down
(Dis): Interface disabled  (NA): Not Available
*: VLT Port-channel  V: VLT Status  P: Peer Operational Status  ^: Tracking Status

Uplink State Group: 1, Status: Enabled, up
Upstream Interfaces:   Fc 1/1/44:1(Up), Fc 1/1/44:2(Up)
Downstream Interfaces: Eth 1/1/1(Up), Eth 1/1/3(Up), Eth 1/71/2(Up), Eth 1/71/7(Up)

Configuring the upstream switch and connecting uplink cables
The upstream switch ports must be configured in a single LACP LAG. This document provides eight example configurations.
● Scenario 1: SmartFabric deployment with S5232F-ON upstream switches with Ethernet - No Spanning Tree uplink
● Scenario 2: SmartFabric connected to Cisco Nexus 3232C switches with Ethernet - No Spanning Tree uplink
● Scenario 3: SmartFabric deployment with S5232F-ON upstream switches with legacy Ethernet uplink
● Scenario 4: SmartFabric connected to Cisco Nexus 3232C switches with legacy Ethernet uplink
● Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV Proxy Gateway mode
● Scenario 6: Connect MX9116n FSE to Fibre Channel storage - FC Direct Attach
● Scenario 7: Connect MX5108n to Fibre Channel storage - FSB
● Scenario 8: Configure boot from SAN

Chapter 7: Server Deployment
Deploying a server
Before beginning, ensure that all server firmware, especially the NIC/CNA, has been updated to the latest version. For additional
information about components and firmware used in this guide, see Software and firmware versions used.

Server preparation
The examples in this guide reference the Dell PowerEdge MX740c compute sled with QLogic QL41262 Converged Network
Adapters (CNA) installed. CNAs are required to achieve FCoE connectivity. Use the steps below to prepare each CNA by setting
them to factory defaults (if required) and configuring NIC partitioning (NPAR) if needed. Not every implementation requires
NPAR.
NOTE: iDRAC steps in this section may vary depending on hardware, software, and browser versions used. See the
documentation for your Dell server for instructions on connecting to the iDRAC.

Create a server template


Before creating the template, select a server to be the reference server and configure the hardware to the exact settings
required for the implementation.

NOTE: In SmartFabric mode, you must use a template to deploy a server and to configure networking.

A server template contains parameters that are extracted from a server and allows these parameters to be quickly applied to
multiple compute sleds. A server template contains all server settings for a specific deployment type including BIOS, iDRAC,
RAID, NIC/CNA, and so on. The template is captured from a reference server and can then be deployed to multiple servers
simultaneously. The server template allows an administrator to associate VLANs to compute sleds.
The templates contain settings for the following categories:
● Local access configuration
● Location configuration
● Power configuration
● Chassis network configuration
● Slot configuration
● Setup configuration
To create a server template, perform the following steps.
1. Open the OME-M console.
2. From the navigation menu, click Configuration, then click Templates.
NOTE: With OME-M 1.20.10 and earlier, the Templates option is called Deploy.
3. From the center panel, click Create Template, then click From Reference Device to open the Create Template window.

Figure 107. Create a server template
4. In the Template Name box, enter a name. In this example, MX740c with FCOE CNA is entered.

Figure 108. Create Template dialog box

5. Optionally, enter a description in the Description box, then click Next.


6. In the Device Selection section, click Select Device.

Figure 109. Device Selection screen

7. From the Select Devices window, choose the previously configured server or the server whose settings need to be applied
to the target servers, then click Finish.

Figure 110. Devices selected


8. From the Elements to Clone list, select all the elements, and then click Finish.

Figure 111. Select the elements to clone

A job starts and the new server template displays on the list. When complete, the Completed successfully status displays.

Create identity pools


Identity pools are recommended, but not required. Virtual identity pools are used in conjunction with server templates to
automate network onboarding of compute sleds. Perform the following steps to create an ID pool.
For more information about identity pools, find the relevant version of the User Guide in the OME-M and OS10 compatibility and
documentation table.
1. Open the OME-M console.
2. From the navigation menu, click Configuration, then click Identity Pools.
3. In the Network panel, click Create. The Create Identity Pool window displays.
4. Type Ethernet CNA into the Pool Name box.
5. Optionally, enter a description in the Description box.
6. Click Next.
7. Click to select the Include Ethernet Virtual MAC Addresses option.
8. In the Starting MAC Address box, type a unique MAC address (for example, 06:3C:F9:A4:CC:00).
9. Type 255 in the Number of Virtual MAC Identities box, click Next, then click Next again.

10. Select the Include FCoE Identity option if using FCoE.
11. In the Starting MAC Address box, type a unique MAC address (for example, 06:3C:F9:A4:CD:00).
12. Type 255 in the Number of FCoE identities box for FCoE scenarios.

Figure 112. Include FCoE identity

13. Click Finish, then click Finish again.

Associate server template with networks – no FCoE


After successfully creating a new template, associate the template with a network.
1. From the Templates pane, select the template to be associated with VLANs. In this example, the FCOE CNA server
template is selected.
NOTE: With OME-M 1.20.10 and earlier, the Templates option is called Deploy.
2. Click Edit Network.
3. In the Edit Network window, complete the following:
a. Optionally, from the Identity Pool list, choose the desired identity pool. In this example, the Ethernet ID Pool is
selected.
b. Click Next.

Figure 113. IO Pool Assignment screen


c. Assign bandwidth to the ports and the partitions as required by your configuration, then click Next.
d. Optionally, choose the desired NIC Teaming option.
e. The NIC teaming option can be No Teaming, LACP, or Other, as detailed in NIC teaming guidelines.
f. For both ports, from the Untagged Network list, select the untagged VLAN. In this example, VLAN0001 is selected.
g. For both ports, from the Tagged Network list, select the tagged VLAN. In this example, VLAN0010 is selected.
h. Click Finish.
The following figure shows the associated networks for the server template with OME-M 1.30.00 and later.

Figure 114. Server template network settings - no FCoE with OME-M 1.30.00 and later

The following figure shows the associated networks for the server template with OME-M 1.20.10 and earlier.

Figure 115. Server template network settings - no FCoE with OME-M 1.20.10 and earlier

Associate server template with networks - with FCoE


After successfully creating a template, associate the template with a network.
1. From the Templates pane, select the template to be associated with VLANs. In this example, the MX740c with FCOE CNA
server template is selected.
NOTE: With OME-M 1.20.10 and earlier, the Templates option is called Deploy.
2. Click Edit Network.
3. In the Edit Network window, complete the following:
a. To choose FCoE VLANs, select the appropriate Identity Pool list provided, then click Next.
b. Assign the bandwidth to the ports and partitions as necessary for your configuration, then click Next.
c. From the Untagged Network list, select the untagged VLAN for both ports.
NOTE: In this example, VLAN0001 is selected.
d. For NIC in Mezzanine 1A Port 1, select FC A1 from the Tagged Network list.

Figure 116. Select VLANs
e. For NIC in Mezzanine 1A Port 2, select FC A2 from the Tagged Network list.
f. Click Finish.
The following figure shows the associated networks for the server template with OME-M 1.30.00 and later.

Figure 117. Server template network settings - FCoE with OME-M 1.30.00 and later

The following figure shows the associated networks for the server template with OME-M 1.20.10 and earlier.

Figure 118. Server template network settings - FCoE with OME-M 1.20.10 and earlier

Deploy a server template


To deploy the server template, perform the following steps.
NOTE: To deploy a server template with OME-M 1.20.10 and earlier, see the Steps to deploy server template with
OME-M 1.20.10 and earlier section below.
Steps to deploy server template with OME-M 1.30.00 and later
1. From the Templates pane, select the template to be deployed. In this example, the MX740c with FCOE CNA server
template is selected.
2. Click Deploy Template.
3. In the Deploy Template window, complete the following:
a. Click the Deploy to Devices or Attach to Slots button and choose which slots or compute sleds the template needs to
be deployed to. These are target servers.
b. Select the Do not forcefully reboot the host OS option then click Next.
c. Keep other settings set to Default.
d. From iDRAC Management IP settings, choose Don’t change IP settings option then click Next.
e. Choose Target Attributes as required for your configuration then click Next.
f. Click to Reserve identities from the Identity Pool then click Next.
4. Click Next then select Run Now.
5. Click Finish.

Steps to deploy server template with OME-M 1.20.10 and earlier


1. From the Deploy pane, select the template to be deployed. In this example, the MX740c with FCOE CNA server template is
selected.
2. Click Deploy Template.
3. From the Deploy Template window, complete the following:
a. Click the Select button to choose which slots or compute sleds to deploy the template to.
b. Select the Do not forcefully reboot the host OS option.
4. Click Next, then select Run Now.
5. Click Finish.
The interfaces on the switch are updated automatically. SmartFabric configures each interface with an untagged VLAN and any
tagged VLANs. Also, SmartFabric deploys the associated QoS settings. See the Networks and automated QoS section for more
information.

To monitor the deployment progress, go to Monitor > Jobs > Select Job > View Details. This shows the progress of the
server template deployment.

Figure 119. Job details displaying deployment of server template

Profile deployment
The PowerEdge MX environment supports Profiles with OME-M 1.30.00 and later. OME-M creates and automatically assigns a
profile once a server template is deployed successfully. If a server template is not deployed, OME-M allows the user to create
server profiles and apply them to a compute sled or slot.

Profiles with server template deployment


Once the server template is deployed successfully, OME-M automatically creates a profile. In this example, a profile from the
template MX740c with FCOE CNA has been created and deployed as shown in the figure below.

Figure 120. Profile created with server template deployment

NOTE: The server template cannot be deleted until it is Unassigned from a profile. To unassign server templates from a
profile, see the Unassign a profile section. To delete a profile, see the Delete a profile section.

Create a profile
If the server template is not deployed, OME-M allows the user to create server profiles and apply them to a compute sled or slot.
To create a profile, perform the following steps:
1. Open OME-M console and select Configuration.
2. From the drop-down menu, select Profiles.
3. From the Profiles pane, choose Create.
4. From Select Template window, choose MX740c with FCOE CNA then click Next.

NOTE: Ensure that you attach the server template to a virtual identity pool. Deploying the profile without an identity
pool attachment will not change the virtual network addresses on the target devices.

Figure 121. Select template under Profiles


5. On the Details tab, enter the Name Prefix, Description, and Profile Count of the profile and click Next.
NOTE: You can create a maximum of 100 profiles at a time.

Figure 122. Details for profiles


6. Select Boot to Network ISO and enter the following file share information.
a. Share Type—Select CIFS or NFS as required
b. ISO Information—Enter the ISO path
c. Share Information—Enter the Share IP Address, Workgroup, Username, and Password
d. Time to Attach ISO—Select the time duration to attach ISO from the drop-down
e. Test Connection—Displays the test connection status
7. Click Next. The iDRAC Management IP tab displays.
8. Click Finish.

View a profile
You can view a profile and its network details under this option. On the Profiles page, select a profile, click View, and select View Profile. The View Profile wizard is displayed.

View Profile: You can view Boot to Network ISO, iDRAC Management IP, Target Attribute, and Virtual Identities information that is related to the profile.
View Network: You can view Bandwidth and VLANs information that is related to the profile.

Edit a profile
The Edit Profile feature allows the user to change the Profile name, Network options, iDRAC management IP, Target attributes, and Unassigned virtual identities. The user can edit the profile characteristics that are unique to the device or slot.
To edit a profile, perform the following steps:
1. From the OME-M console, click Configurations > Profiles and select the profile to be edited.
2. Select Edit > Edit Profile.

3. On the Details tab, edit name and description of the profile and click Next.

Figure 123. Edit Profile description


4. From the Boot to network ISO tab, edit the information already entered while creating a profile, then click Next.
5. Select the Target IP settings, then select one of the following options:
● Don't change IP settings
● Set as DHCP
● Set static IP
6. Click Next.

Figure 124. iDRAC Management IP settings


7. From the Target Attributes screen, select the components or attributes in the iDRAC, NIC, and System sections to
include in the template, then click Finish.

Figure 125. Edit Target Attributes

Assign a profile
The Assign a profile function allows the user to assign and deploy a profile on target devices.
To assign a profile, perform the following steps:
1. From the OME-M console, click Configurations > Profiles and select a profile to assign.

2. Click Assign.
3. On the Details tab, verify the details and click Next.
4. Select Attach to Slots or Deploy to Devices and click Select Slots or Sleds to choose the target servers.

Figure 126. Deploy Profile screen


5. Select the target server or servers where the profile is being deployed.

Figure 127. Select target servers


6. Choose the Do not forcefully reboot the host OS option and click Next.

Figure 128. Target servers deployed


7. Select Boot to Network ISO, enter the file share information as needed, then click Next.
8. Select iDRAC Management IP settings then click Next.
9. Select the Target Attributes under the iDRAC, NIC, and System options then click Next.
10. Click Run Now or Enable Schedule then click Finish.
NOTE: The Enable Schedule option is disabled for slot-based profile deployment.

CAUTION: When you select Enable Schedule, the profile deployment runs at the scheduled time, even if you have already performed a Run Now operation before the schedule. In that case, the Deploy Profile job fails when it runs at the scheduled time, and an error message is displayed.

Unassign a profile
Use the Unassign a profile function to disassociate profiles from the selected targets.

NOTE: You can only select the profiles that are in an Assigned or Deployed state.

To unassign the profile:


1. From the OME-M console, click Configurations > Profiles then select a profile to unassign.
2. From the Actions menu, click Unassign. The Unassign Profile window displays.
3. In the Unassign Profile wizard, the Force Reclaim Identities option is checked by default. This action reclaims the identities from the device, and the server is forcefully rebooted. All the VLANs configured on the server are removed.
4. Click Finish.
NOTE: The Unassign profile job is not created when the action is performed on an assigned profile whose Last Action Status is Scheduled for a device-based deployment.

Delete a profile
You can delete profiles that are not running any profile actions and are in the unassigned state. To delete the profile:
1. From the Profiles page, select the profile or profiles that you want to delete and click Delete.
2. Click Yes to confirm the deletion.

Chapter 8: SmartFabric Deployment Validation
Validate the SmartFabric health
The OME-M console can be used to show the overall health of the SmartFabric.
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Select SmartFabric1 to expand the details of the fabric. The following figure shows the status of the fabric.

Figure 129. Fabric status details


The Overview tab shows the current inventory, including switches, servers, and interconnects between the MX9116n FSEs in
the fabric. The image below shows the SmartFabric switch in a healthy state.

Figure 130. SmartFabric switch inventory

The following image shows the participating servers in a healthy state.

Figure 131. SmartFabric server inventory

The image below shows the content of the Topology tab and the VLTi that the SmartFabric mode created.

Figure 132. SmartFabric Topology overview

Within the Topology tab, you can also view the Wiring Diagram table as shown in the image below.

Figure 133. Wiring Diagram table

Validation of quad-port NIC topologies

Validate with OME-M


Validation of quad-port NICs can be done on OME-M by performing the following steps:
1. Access the OME-M Console.
2. Go to Devices > Compute.
3. Select a compute sled. Choose Hardware and then Network Devices. The following figure shows the quad-port NIC in an
OME-M Console.

Figure 134. Ports on quad-port NIC shows on OME-M UI
4. Expand one of the ports to see details about Product name, Link status, and MAC Address.

Figure 135. Details of quad-port NIC

The Topology view on the Home Screen of the OME-M console shows connections for the quad-port NIC. To access this,
perform the following steps:
a. Access the OME-M Console.
b. Go to Home > View Topology. This will show connections between MX7116n FEMs and MX9116n FSEs, similar to
Two-chassis topology with quad-port NICs – dual fabric.

Figure 136. View Topology

NOTE: Make sure that the compute sled iDRAC is at the latest version to ensure an accurate Group Topology view.

5. Once connections are established and validated, access the Port Information on I/O Modules by performing the following
steps:
a. Access the OME-M Console.
b. Go to Devices > I/O Modules.
c. Select an IOM > Hardware > Port Information. This shows two port groups each with eight internal ports.
For example, if Compute Sled 1 is configured with a dual-port NIC, then only one port group with eight ports can be seen
on OME-M. These internal ports are numbered 1/71/1 through 1/71/8. For Compute Sled 1 with a dual-port NIC, port
1/71/1 is Up.

Figure 137. Port Information for dual-port NIC

If Compute Sled 1 is configured with a quad-port NIC, then two port groups each with eight ports can be seen on
OME-M. These internal ports are numbered 1/71/1 through 1/71/16. For Compute Sled 1 with a quad-port NIC, ports
1/71/1 and 1/71/9 are Up.

Figure 138. Port Information for quad-port NIC

Validation through switch CLI


The show discovered-expanders command is only available on the MX9116n FSE and displays all connected MX7116n FEMs, their service tags, and the associated port groups and virtual slots. With a quad-port NIC, each MX7116n FEM creates two connections with the MX9116n FSE, on port group 1/1/1 and port group 1/1/7, as shown in the following output.

MX9116N-1# show discovered-expanders


Service  Model        Type  Chassis      Chassis-slot  Port-group  Virtual
tag                         service-tag                            Slot-Id
---------------------------------------------------------------------------
D10DXC2  MX7116n FEM  1     SKY002Z      A1            1/1/1       71
D10DXC2  MX7116n FEM  1     SKY002Z      A1            1/1/7       71
D10DXC4  MX7116n FEM  1     SKY003Z      A1            1/1/2       72
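The internal port states described in the previous section can also be confirmed with the show interface status command, filtered on the virtual slot. The following is a minimal sketch, assuming Compute Sled 1 with a quad-port NIC; the interface numbers and 25G speed are illustrative:

MX9116N-1# show interface status | grep 1/71
Eth 1/71/1     up     25G     auto    -
Eth 1/71/9     up     25G     auto    -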

Validating Ethernet - No Spanning Tree uplinks


If using Ethernet – No Spanning Tree uplinks, use the CLI commands in this section to validate the configuration.

show port-channel summary on MX9116n FSE


From the MX I/O module, use the show port-channel summary command to confirm that the port-channel is created for the uplink with spanning tree disabled on the MX switches.

MX9116N-A1# show port-channel summary

Flags: D - Down  I - member up but inactive  P - member up and active
       U - Up (port-channel)  F - Fallback Activated

------------------------------------------------------------------
Group Port-Channel Type Protocol Member Ports
--------------------------------------------------------------------------------
2 port-channel2 (U) Eth DYNAMIC 1/1/41(P)
1000 port-channel1000 (U) Eth STATIC 1/1/37(P) 1/1/38(P) 1/1/39(P) 1/1/40(P)

Upstream switch validation - SmartFabric OS10

show port-channel summary


From the upstream switch, run the show port-channel summary CLI command to verify that the port-channel is up and that no STP BPDUs are received on the upstream switch. This command is for SmartFabric OS10. If the upstream switch is not running OS10, run the equivalent command for that switch model.

Figure 139. show port-channel summary command
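Because the figure above is a screen capture, the following text sketch shows representative healthy output on an upstream switch running SmartFabric OS10; the port-channel ID and member port are illustrative. The key points are a port-channel status of U and active (P) members:

S5232-Leaf1# show port-channel summary

Flags: D - Down  I - member up but inactive  P - member up and active
       U - Up (port-channel)  F - Fallback Activated
--------------------------------------------------------------------------------
Group Port-Channel       Type  Protocol  Member Ports
--------------------------------------------------------------------------------
1     port-channel1 (U)  Eth   DYNAMIC   1/1/3(P)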

show spanning-tree interface port-channel


From the upstream switch, run the show spanning-tree interface port-channel CLI command to verify that no BPDUs are received on the port.

Figure 140. show spanning-tree interface port-channel command

show running-configuration interface port-channel
From the upstream switches, use the show running-configuration interface port-channel CLI command to verify that spanning tree is enabled on the port-channel interface. Then run the show lldp neighbors command to confirm that no BPDU packets are received on the interface.

Figure 141. show running-configuration interface port-channel command
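A minimal sketch of the expected upstream configuration follows, assuming SmartFabric OS10, port-channel 1, and an illustrative VLAN 10. Spanning tree remains enabled as long as no spanning-tree disable statement appears on the interface or globally:

S5232-Leaf1# show running-configuration interface port-channel1
!
interface port-channel1
 no shutdown
 switchport mode trunk
 switchport trunk allowed vlan 10
 vlt-port-channel 1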

show lldp neighbors
After running the show running-configuration interface port-channel command above, use the show lldp
neighbors CLI command to verify that no BPDU packets are received on the interface.

Figure 142. show lldp neighbors command

Upstream switch validation - Cisco

show port-channel summary


From an upstream Cisco Nexus switch, run the show port-channel summary CLI command to verify that the port-channel is up and that no STP BPDUs are received on the upstream switch.

Figure 143. show port-channel summary

show running-configuration interface port-channel
Run the show running-config interface port-channel {port-channel ID} command on the Cisco Nexus to
show spanning tree configuration.

Figure 144. show running-config interface port-channel command

Chapter 9: SmartFabric Operations

Viewing SmartFabric health and status


View the SmartFabric using OME-M. The green checkmark next to the fabric name indicates that the fabric is in a healthy state. In this example, the fabric created is named Fabric01.
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. To view the Fabric components, select the fabric. This can also be achieved by clicking the View Details button on the
right.

Figure 145. SmartFabric details screen


Fabric components include:
● Uplinks
● Switches
● Servers
● ISL links
Uplinks connect the MX9116n switches with upstream switches. In this example, the uplink is named Uplink1.

Figure 146. Uplinks information within Fabric Details

Switches lists the I/O modules that are part of the fabric. In this example, the fabric has two MX9116n switches.

NOTE: Fabric Expander Modules are transparent and therefore do not appear on the Fabric Details page.

Figure 147. Switches listing within Fabric Details

Servers lists the compute sleds that are part of the fabric. In this example, two PowerEdge MX740c compute sleds are part of
the fabric.

Figure 148. Servers listing within Fabric Details

ISL Links lists the VLT interconnects between the two switches. The ISL links must be connected on port groups 11 and 12 on
MX9116n switches, and ports 9 and 10 on MX5108n switches.

CAUTION: This connection is required. Failure to connect the defined ports results in a fabric validation error.

Figure 149. ISL Links within Fabric Details
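The VLTi health can also be spot-checked from the MX9116n CLI. The following minimal sketch assumes the VLT domain ID of 255 that appears elsewhere in this guide for SmartFabric mode, and the output is abbreviated:

MX9116N-1# show vlt 255
Domain ID : 255
Unit ID : 1
Role : primary
Version : 1.0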

Edit a SmartFabric
A fabric has four components:
● Uplinks
● Switches
● Servers
● ISL Links
To edit the fabric that is discussed in this section, edit the fabric name and description using the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. On the right, click the Edit button.

Figure 150. Edit fabric name and description screen
4. In the Edit Fabric dialog box, change the name and description, then click Finish.

Edit uplinks
Perform the following steps to edit uplinks on a SmartFabric:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Select the fabric.
4. Select the Uplink to edit and click Edit.
NOTE: In this example, Uplink1 is selected.
5. In the Edit Uplink dialog box, modify the Name and Description as necessary.
NOTE: The uplink type cannot be modified once the fabric is created. If the uplink type must be changed after the
fabric is created, delete the uplink and create a new uplink with the wanted uplink type.

Figure 151. Edit Uplink dialog box

NOTE: The Include in Uplink Failure Detection Group box under Uplink Type will only be seen on OME-M 1.20.00 and
later.
6. Click Next.

7. Edit the uplink ports on the MX switches that connect to the upstream switches. In this example, ports 41 and 42 on the MX9116n switches connect to the upstream switches and are displayed.
NOTE: Carefully modify the uplink ports on both MX switches. Select the IOM to display the respective uplink switch
ports.

Figure 152. Edit uplink ports and VLAN networks


8. If necessary, modify the tagged and untagged VLANs.
NOTE: If you have changed OME-M to use a VLAN other than the default, make sure that you do not add that VLAN to
an uplink.

NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface on an MX IOM to the
existing FCoE switch such as the Dell PowerSwitch S4148U shown in the diagram above.
9. Click Finish.

Edit VLANs
The following sections describe this task for deployed servers with different versions of OME-M.

Edit VLANs on deployed servers with OME-M 1.20.00 and later


OME-M 1.20.00 adds the ability to edit VLANs on multiple servers at the same time. This section describes how to edit VLANs and deploy settings from a reference server to multiple target servers in SmartFabric mode. After the SmartFabric and server templates are deployed, network settings can be changed using the following instructions.
1. Open the OME-M console.
2. From the navigation menu, click Device > Fabric.
3. Select the fabric.
4. Select Servers from the left pane.
5. Choose Edit Networks.

Figure 153. Edit Networks
6. Select the Reference Server and click Next. The reference server settings will be deployed to one or more target servers in the fabric. In this example, Sled-1 is chosen as the Reference Server.

Figure 154. Select Reference Server


7. Choose NIC teaming from LACP, No Teaming, and Other options.
8. Modify the VLAN selections as required by defining the tagged and untagged VLANs.
9. Select VLANs on Tagged and Untagged Network for each Mezzanine card port. Click Next.

Figure 155. Modify VLANs


10. Select the target server or servers.
11. To select multiple servers, click Add and choose the servers from the list. Click Add again.

Figure 156. Select multiple target servers
12. Select the servers.

Figure 157. Select target servers


13. Click Finish.
NOTE: VLAN settings will be pushed to the selected servers and will overwrite any existing settings.

Edit VLANs on a deployed Server with OME-M 1.10.20 and earlier


NOTE: Instructions in this section are supported up to OME-M 1.10.20. If you are using firmware OME-M 1.20.00 or later, follow the instructions in the previous section.
The OME-M console is used to add or remove VLANs on the deployed servers in a SmartFabric. Perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Device > Fabric.
3. Select the fabric.
4. Select Servers from the left pane.

Figure 158. Add and remove VLANs


5. Choose the wanted server. In this example, the PowerEdge MX740c with service tag 8XQP0T2 is selected.

6. Choose Edit Networks.
7. Choose NIC teaming from LACP, No Teaming, and Other options.
8. Modify the VLAN selections as required by defining the tagged and untagged VLANs.
9. Select VLANs on Tagged and Untagged Network for each Mezzanine card port.
10. Click Save.

Figure 159. Modify VLANs

NOTE: Only one server can be selected at a time in the UI.

Delete SmartFabric
To remove the SmartFabric using the OME-M console, perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Select SmartFabric.
4. Click the Delete button.
5. In the delete fabric dialog box, click Yes.
All participating switches reboot to Full Switch mode.
CAUTION: Any configuration that is not completed by the OME-M console is lost when switching between IOM
operating modes.

Connect non-MX Ethernet devices to a SmartFabric


As of SmartFabric OS10.5.0.1 and OME-M 1.10.00, PowerEdge MX Ethernet switches allow the connection of non-MX devices, such as rack servers, to the fabric, as long as the device provides a physical interface that is supported by the switch. Once connected, VLANs must be assigned to each port to which the device is connected. This capability does not allow non-MX devices to support FCoE.
To connect a non-MX device to a switch running in SmartFabric mode, perform the following steps:
1. Open the OME-M console.
2. To configure the breakout on the port-group, see the Configure uplink port speed or breakout section, if needed.
3. Once the breakout on the port-group is complete, select the port.
NOTE: Make sure that the port is not in use for any other purpose.
4. Click Edit VLANs and then select Default VLAN 1, which is shown as Untagged Network in the example below.
5. Select any of the other VLANs as the Tagged Network.

Figure 160. Selection of VLANs in Edit VLANs section
6. Click Finish.
7. Repeat these steps for any other port or IOM.

Expanding from a single-chassis to a dual-chassis configuration
Starting with OME-Modular 1.20.00 and OS10.5.0.7, a single MX7000 chassis with a pair of MX9116n switches can be expanded to two MX7000 chassis with MX9116n FSEs and MX7116n FEMs while running in SmartFabric mode. As shown in the following steps, this process does not require any reconfiguration, is not destructive, and can be performed with the system online as long as network redundancy is configured correctly.
NOTE: Before beginning this process, ensure that server redundancy is configured and working correctly. While this process is not destructive, it will disrupt the network path for NIC ports connected to the switch being moved.

Step 1: Cable Management module


Connect network cables to the MX7000 Management Modules on both chassis. For more information on Management Module
cabling, see the PowerEdge MX Chassis Management Networking Cabling White Paper.
NOTE: Make sure that both chassis are powered on and that an IP address is assigned to each Management Module using the LCD panel or KVM ports.

Step 2: Create Multichassis Management Group


Create a Multichassis Management (MCM) Group on the single MX chassis configuration. For a scalable fabric that uses more
than one MX chassis, the chassis must be in an MCM Group.

NOTE: This step can be skipped if MCM Group is already created.

Step 3: Add second MX Chassis to the MCM Group


Perform the following steps:
1. Access the OME-M UI.
2. Select Chassis. Choose Configure > Add member.
3. Select the second MX7000 Chassis from the available chassis to be added as a member to the existing MCM group.
4. Click Finish.

Step 4: Move MX9116n FSE from first chassis to second chassis
Access the OME-M UI from the lead MX chassis. Choose I/O Modules under Devices.

Figure 161. Select IOM under Devices > I/O Modules

Select I/O Module in slot A2 from the first chassis. Power off the IOM from the Power Control drop-down menu.

Figure 162. Power off the IOM

1. Once the MX9116n FSE in Chassis 1-Slot A2 is powered off, physically move the switch to Slot A2 of the second MX7000 chassis, but do NOT insert it completely at this time.
2. Insert an MX7116n FEM in Chassis 1-Slot A2 and another FEM in Chassis 2-Slot A1.
3. Connect QSFP28-DD cables between FSE and FEM, as shown in the following figure.
NOTE: The following diagram shows the connections for a scalable fabric on multiple chassis between the FSE and
FEM components. The diagram does not show the VLTi connections required for operating in SmartFabric mode or as
recommended when in Full Switch mode.

Figure 163. Connection between FSE and FEM


4. Once cabled, fully insert the MX9116n FSE in Chassis 2-Slot A2 and it will power on automatically.
5. These steps can be repeated for IOMs in slots B1/B2 as well.

Step 5: Validation
Perform the following steps to validate the environment.
1. Make sure that all MX9116n FSEs and MX7116n FEMs on both chassis appear in the OME-M UI. Restart the second MX9116n
FSE if you do not see it in the correct chassis.
2. Check the SmartFabric configuration to ensure that nothing has changed.
3. Make sure all internal switch ports on the MX9116n FSE and MX7116n FEMs are enabled and up. Check the link lights for the external ports to make sure that they are illuminated. A CLI spot-check sketch follows these steps.
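The checks in steps 1 and 3 can be spot-verified from the CLI of each MX9116n FSE using commands covered in the General Troubleshooting chapter; the interface filter below is illustrative:

MX9116N-2# show discovered-expanders
MX9116N-2# show interface status | grep 1/71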

SmartFabric mode IOM replacement process


NOTE: The Dell PowerEdge MX platform gives you the ability to replace an I/O module in a SmartFabric if required. The
process used depends on the version of OS10 installed and should be run with Dell Technical Support engaged before
starting and throughout the process of IOM replacement. For technical support, go to https://www.dell.com/support or
call (USA) 1-800-945-3355.

NOTE: A new replacement IOM will have a factory default configuration. All port interfaces in the default configuration are
in the no shutdown state.

With OME-M 1.30.00 and later, the Dell PowerEdge MX platform gives you the option to replace I/O modules in SmartFabric mode through the OME-M console in the case of persistent errors or failures. This process can only be performed in OME-M after the SmartFabric is created.
Prerequisites:
● The MX9116n FSE and MX5108n can only be replaced with another I/O Module of the same type. Ensure that you have the
same Dell SmartFabric OS10 version on the switch that is to be replaced, and on the new switch.
● The replacement IOM must be a new device within the chassis deployment. Do not use an IOM that was previously deployed
within the MCM group.
● The other IOM in SmartFabric mode must be up, running, and healthy; otherwise a complete traffic outage may occur.

NOTE: OS10 is factory-installed in the MX9116n FSE or MX5108n Ethernet Switch. If the faulty IOM has an upgraded
version of OS10, you must upgrade the new IOM to the same version.
To replace the IOM through OME-M, follow the steps provided in this section.
CAUTION: Carefully follow the steps indicated in the OME-M prompts. Performing the steps out of order or
missing a step could cause a failure and may require a replacement of the switch.
1. Open OME-M console.
2. From Navigation pane, choose Devices > Fabric.
3. Select the already created Fabric and select the Replace Switch option.

Figure 164. Replace Fabric Switch Introduction screen


4. Click Next.
5. Copy the Current Running Configurations from the switch that is to be replaced.
NOTE: See the Dell SmartFabric OS10 User Guide for more information. Find the relevant version of the User Guide in
the OME-M and OS10 compatibility and documentation table.
6. Click Next.

Figure 165. Copy Current Configuration screen


7. Carefully remove the cables from the switch that is to be replaced.
8. Remove the switch that is to be replaced from the chassis and insert the new switch in the same slot.
CAUTION: Do not connect the cables yet. Wait for the switch to boot and ensure that the OS10 version on
new switch is same as the switch that is being replaced.
9. Confirm the OS10 version on OME-M then click Next.

Figure 166. Replace Switch Hardware screen
10. Configure the new switch by applying the settings that were copied from the switch that is being replaced.
NOTE: For more information about the application of the settings from the switch that is being replaced to the new
switch, see the Dell SmartFabric OS10 User Guide. Find the relevant version of the User Guide in the OME-M and OS10
compatibility and documentation table.

CAUTION: Do not connect the cables at this time.


11. After you have configured the software settings and have verified the configuration of the new switch, click to place a
check in the Confirm New Switch Settings box, then click Next.

Figure 167. Configure New Switch screen


12. From the Activate New Switch screen, click the drop-down to select the Old Switch and New Switch in the fields
provided.

Figure 168. Activate New Switch screen
13. After you have confirmed that each of the steps required to recreate the SmartFabric using the new switch is complete,
click to place a check in the Confirm SmartFabric Configuration box.
14. Click Finish, then click Yes to complete the process.

MXG610 Fibre Channel switch module replacement process
NOTE: The Dell PowerEdge MX platform gives you the ability to replace an I/O module in a SmartFabric if required. This
process depends on the operating system version that is installed and should be run with Dell Technical Support engaged
before starting and throughout the process of IOM replacement. For technical support, go to https://www.dell.com/
support or call (USA) 1-800-945-3355.

NOTE: Before beginning this process, you must have a replacement switch module or filler blade available. Never leave
the slot on the blade server chassis open for an extended time period. To maintain proper airflow, fill the slot with either a
replacement switch module or filler blade.
1. Back up the switch module configuration to an FTP or TFTP server using the configUpload command. The configUpload command uploads the switch module configuration to the server and makes it available for downloading to the replacement switch module if necessary. To ensure that a complete configuration is available for downloading to a replacement switch module, back up the configuration regularly. An illustrative invocation is shown after this list.
2. Stop all activities on the ports that the switch module uses. To verify that there is no activity, view the switch module LEDs.
3. Disconnect all cables from the SFP+/QSFP ports and remove the SFP+ or QSFP optical transceivers from the switch
module external ports.
4. Press the Release latch and gently pull the release lever out from the switch module.
5. Slide the switch module out of the I/O module bay and set it aside.
6. Insert the replacement switch module in the I/O module bay of the blade server chassis.
NOTE: Complete this step within 60 seconds.
7. Insert the SFP+ or QSFP optical transceivers.
8. Reconnect the cables and establish a connection to the blade server management module.
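For step 1, the following is a hedged sketch of a non-interactive configUpload invocation, assuming a hypothetical FTP server at 192.0.2.10, user backupuser, and target path /backups/mxg610.cfg; verify the exact syntax against the Fabric OS documentation for your firmware release, or run configUpload without arguments for an interactive prompt:

MXG610:admin> configupload -all -p ftp 192.0.2.10,backupuser,/backups/mxg610.cfg,password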

Chassis Backup and Restore


Backing up the configurations on the IOMs is supported in two ways:
● Chassis backup for SmartFabric
● Manual backup through the CLI

NOTE: The Chassis backup for SmartFabric does not provide backup for Ethernet switch settings like Hostname,
Password, Management network, Spanning tree configurations, and other CLI configurations. Manual backup through the
CLI is also recommended when performing a chassis backup.

Backing up the chassis


Back up the chassis and compute sled configuration for later use. To back up the chassis, you must have administrator access with the device configuration privilege. The chassis configuration contains the following settings:
● Application settings
○ Setup configuration
○ Power configuration
○ Chassis network configuration
○ Local access configuration
○ Location configuration
○ Slot configuration
○ OME Modular network settings
○ Users settings
○ Security settings
○ Alert settings
● System configuration
○ Templates
○ Profiles
○ Identity pools and VLANs
● Catalogs and baselines
● Alert policies
● SmartFabric
● MCM configuration
NOTE: Backup and Restore operations are supported in FIPS-enabled configuration. The FIPS attribute is not part of
backup files by default. You must toggle the required FIPS mode before initiating the restore process.
You can use the backed-up configuration in other chassis.
To create a chassis backup:
1. Manually back up all IOM startup configurations. Refer to Manual backup of IOM configuration through the CLI.
2. On the chassis Overview page, click More Actions > Backup.

The Chassis Backup window is displayed.


3. On the Introduction section, read the process and click Next.
The Backup File Settings section is displayed.
4. In Backup File Location, select the Share Type where you want to store the chassis backup file.
The available options are:
● CIFS
● NFS

5. Enter the Network Share Address and Network Share Filepath.
6. Enter a name for the Backup File.
The backup file name should not contain a file extension. It can contain alphanumeric characters and the special characters hyphen (-), period (.), and underscore (_).
7. If the Share Type is CIFS, enter the Domain, User Name, and Password. Otherwise, go to step 8.
8. In the Sensitive Data section, select the Include Passwords check box to include passwords while taking the backup. These passwords are encrypted and are applied when the backup file is restored on the same chassis. For additional information on sensitive data, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
9. In Backup File Password, enter the Encryption Password and Confirm Encryption Password.
The backup file is encrypted and cannot be edited.

NOTE: The password must be 8 to 32 characters long and must be a combination of an uppercase, a lowercase, a
special character (+, &, ?, >, -, }, |, ., !, (, ', ,, _, [, ", @, #, ), *, ;, $, ], /, §, %, =, <, :, {, I) , and a number.

10. In the Ethernet Switch Backup Instructions, select the check box to confirm the Ethernet switch backup settings.
NOTE: Chassis backup is not supported on Ethernet switch settings like Hostname, Password, Management network,
Spanning tree configurations, IOMs that are in full switch mode, and some CLI configurations. For the list of CLI
configurations that are not supported, find the relevant version of the User Guide in the OME-M and OS10 compatibility
and documentation table.

NOTE: Back up the IOM startup.xml file from all the IOMs before you perform a chassis backup. See Manual backup of IOM configuration through the CLI.

11. Click Finish. Click Learn More to see more information about the Ethernet switch backup instructions.
A message is displayed indicating that the backup is successful, and the chassis Overview page is displayed.
You can check the status and details of the backup process on the Monitoring > Jobs page.
NOTE: Backup and restore operations cannot be performed when you have initiated any job and the job status is
in-progress.

Sensitive data
This option allows you to include passwords while taking the backup.
If you do not select the Include Password option, passwords for the following components are not included.

Table 25. Sensitive data

Category                      Description
Network                       Proxy password
Alerts                        Email username and password
Alerts                        SNMP Destination V3 user credentials
Network Services              SNMP Agent V3 user credentials
Local Access: Power Button    Disabled button LCD override PIN
Catalogs                      CIFS or HTTPS username and password
Templates (a)                 All user-created templates
Users                         AD or LDAP password and bind password
Users                         OIDC registration username and password

a. The secured attributes for templates include the following:


iDRAC Config
● USB Management
○ USB 1 Password for zip file on USB
● RAC Remote Hosts
○ RemoteHosts 1 SMTP Password
● Auto Update
○ AutoUpdate 1 Password
○ AutoUpdate 1 ProxyPassword
● Remote File Share
○ RFS 1 Remote File Share Password
● RAC VNC Server
○ VNCServer 1 Password
● SupportAssist
○ SupportAssist 1 Default Password
● LDAP

○ LDAP 1 LDAP Bind Password

NOTE: When you reenter the SNMP agent v3 user credentials to complete the restore task, reenter the other network
services settings too.

Restoring chassis
You can restore the configuration of a chassis using a backup file. You must have the chassis administrator role with device
configuration privilege to restore the chassis.
Catalogs and associated baselines cannot be restored when downloads.dell.com is not reachable. Catalogs with proxy settings cannot be restored on a different chassis because the proxy password is not restored there, which leaves downloads.dell.com unreachable. Configure the proxy password manually, then rerun all catalog and baseline jobs to complete the restore process. If the source of a catalog is validated firmware, you must manually re-create the catalog and all baselines that are associated with the catalog to complete the restoration.
Based on the HTTPS network share configuration, the catalogs for HTTPS are restored with or without a password after a backup file that excludes sensitive data is restored. If entering the username and password for the HTTPS share is not mandatory, the catalog is restored; otherwise, the catalog is restored with a job status of "failed". Enter the username and password manually after the restore task for the status to display as "completed".
SmartFabric restore operation is not supported if:
● It is restored on a different chassis.
● There is any difference between the current setup of the IOM hardware and the backup file.
NOTE: The chassis backup and restore feature is supported only if the OME-M firmware version in the backup file and the
chassis during the restore process are identical. The restore functionality is not supported if the OME-M versions do not
match.
To restore a chassis:
1. Ethernet switch settings that are associated with a SmartFabric must be restored prior to starting the MX Chassis restore
process. Refer to Manual backup of IOM configuration through the CLI.
2. To ensure the restored startup configuration is loaded into the running configuration, reload the IOMs immediately after
restoring the startup configuration.
NOTE: The running configuration is automatically written to the startup.xml every 30 minutes. Reloading the IOM
immediately after each startup configuration restore avoids the startup.xml being overwritten.

3. On the chassis Overview page, click More Actions > Restore.


The Restore Chassis window is displayed.
4. On the Introduction section, read the process and click Next.
The Upload File section is displayed.
NOTE: Click Learn More to see more information about the Ethernet switch restore. The restore process must be
completed as part of step 1.
5. Under Restore File Location, select the Share Type where the configuration backup file is located.
NOTE: If the current MM link configuration setup is different from the backup file, you must match the TOR (Top of
Rack) connection to the MM link configuration before the restore operation.

NOTE: During the SmartFabric restore, all the IOMs are converted to the operating mode recorded in the backup file.

NOTE: All the IOMs that go through the fabric restore are reloaded. The IOMs are reloaded twice if the operating mode in the backup file differs from the current IOM mode.

6. Enter the Network Share Address, and Network Share Filepath where the backup file is stored.
7. Enter the name of the Backup File and Extension.

8. If the Share Type is CIFS, enter the Domain, Username, and Password to access the shared location. Otherwise, go to step 9.
9. In the Restore File Password section, enter the Encryption Password to open the encrypted backup file.
NOTE: The password must be 8 to 32 characters long and must be a combination of an uppercase, a lowercase, a
special character (+, &, ?, >, -, }, |, ., !, (, ', ,, _, [, ", @, #, ), *, ;, $, ], /, §, %, =, <, :, {, I) , and a number.

NOTE: If the restore operation is done excluding passwords or on a different chassis with proxy settings, the proxy
dependent tasks like the repository task try to connect to the external share. Rerun the tasks after configuring the
proxy password.

10. Click Validate to upload and validate the chassis configuration.


The Optional Component section is displayed.
11. (Optional) From the Optional components, you can choose to restore files on the selected components.
● Restore File Validation Status—Displays the validation status of the restore files.
NOTE: The status indicates whether the restore file validation status is successful. If the validation is not successful,
an error message is displayed with the recommended action.
● Optional Components—Displays the components that you can select for the restore operation. The available options
are:
○ Templates, Profiles, Identity Pools, and VLAN Configurations
○ Application and Chassis Settings
○ Catalogs and Baselines
○ Alert Policies
○ SmartFabric Settings
NOTE: The list of Optional Components is based on the backup chassis settings. The components that are not part of the chassis backup are listed under the Unavailable Components section below.
● Mandatory Components—Displays the mandatory components for the restore operation.
● Unavailable Components—Displays the components that are unavailable for the restore operation.

Figure 169. Restore chassis optional components

Figure 170. Restore chassis confirmation

12. Click Restore to restore the chassis.

Manual backup of IOM configuration through the CLI


The running configuration contains the current OS10 system configuration and consists of a series of OS10 commands. Copy the
configuration to a remote server or local directory as a backup or for viewing and editing. The running configuration is copied as
a text file that you can view and edit with a text editor.
Manual backup of IOM configuration provides a backup of the running configuration. To back up the chassis, including the
SmartFabric settings, use the instructions in Backing up the chassis.
Copy running configuration to startup configuration
To display the configured settings in the current OS10 session, use the show running-configuration command. To save new configuration settings across system reboots, copy the running configuration to the startup configuration file.

OS10# copy running-configuration startup-configuration

Back up startup file to local directory

OS10# copy config://startup.xml config://backup-9-28.xml

Restore startup file from backup

OS10# copy config://backup-9-28.xml config://startup.xml


OS10# reload
System configuration has been modified. Save? [yes/no]:no

There are several options to copy files from the IOM to a remote server through many protocols. These options can be found in
the Dell SmartFabric OS10 User Guide.
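For example, the running configuration can be copied directly to a remote server. The following is a minimal sketch assuming a hypothetical TFTP server at 192.0.2.20; SCP and SFTP URIs follow the same pattern with credentials added (see the OS10 User Guide for the full list of supported protocols):

OS10# copy running-configuration tftp://192.0.2.20/mx-backup-9-28.conf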

Chapter 10: General Troubleshooting
View or extract logs using OME-M
This section briefly describes a method for collecting Extract Logs to troubleshoot any hardware or firmware issues in an
MX environment. Dell PowerEdge MX7000 comes with a Management Module that provides chassis management. An integral
feature of the management firmware is to keep a detailed log of events from managed devices and software events in the
management firmware. Firmware logs collected from Management Module components, which can be used for troubleshooting,
are grouped as Extract Logs.
It is important to note that Extract Logs are on-demand (user-initiated) from the Management Module and are always stored in
a network share that the customer configures.
For step-by-step instructions about how to view and collect these logs, find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.

Troubleshooting MCM topology errors


The OME-M console can be used to show the physical cabling of the SmartFabric.
1. Open the OME-M console.
2. In the left navigation panel, click View Topology.
3. Click the lead chassis and then click Show Wiring.
4. To show the cabling, click the light-blue checkmark icons.

Figure 171. SmartFabric cabling


The following figure shows the validation errors displayed when a VLTi cable is connected incorrectly.

Figure 172. SmartFabric cabling error

Troubleshooting VLT and vPC configuration on upstream switches
Configuring a single VLT domain with Dell upstream switches or a single vPC domain with Cisco upstream switches is required.
Creating two VLT/vPC domains may cause a network loop. See Scenario 1: SmartFabric deployment with S5232F-ON upstream
switches with Ethernet - No Spanning Tree uplink and Scenario 2: SmartFabric connected to Cisco Nexus 3232C switches with
Ethernet - No Spanning Tree uplink for the topology that is used in the deployment example.
The following example shows a mismatch of the VLT domain IDs on VLT peer switches. To resolve this issue, ensure that a
single VLT domain is used across the VLT peers.

S5232-Leaf1# show vlt 1


Domain ID : 1
Unit ID : 1
Role : primary
Version : 1.0
Local System MAC address : 4c:76:25:e8:f2:c0

S5232-Leaf2# show vlt 30


Domain ID : 30
Unit ID : 1
Role : primary
Version : 1.0

The following example shows a mismatch of the vPC domain IDs on vPC peer switches. To resolve this issue, ensure that a
single vPC domain is used across the vPC peers.

Nexus-3232C-Leaf1# show vpc


Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 1
Peer status : peer link is down
vPC keep-alive status : peer is alive, but domain IDs do not match

---- OUTPUT TRUNCATED -----

3232C-Leaf2# show vpc


Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 255
Peer status : peer link is down
vPC keep-alive status : peer is alive, but domain IDs do not match
---- OUTPUT TRUNCATED -----

Troubleshooting FEM and compute sled discovery


Verify the following if server or FEM discovery does not happen:
● Verify that the compute sled is properly seated in the compute slot in the MX7000 chassis.
● Verify that at least one compute sled in the chassis is powered on.
● If the connected FSE port does not show a link up, toggle the auto negotiation settings for that port.
● Confirm that all of the firmware on the compute sleds is up to date and aligned with the installed MX baseline.
● If a QLogic/Marvell 41262 or 41232 adapter is used in the compute sled, the link speed setting on the adapter should be set
to SmartAN.
● Check the Topology LLDP setting. You can verify the setting by selecting iDRAC Settings > Connectivity from the iDRAC UI on the compute sled. Check that this setting is set to Enabled, as shown in the following figure; a RACADM check sketch follows the figure.

Figure 173. Ensure that Topology LLDP is enabled
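The same setting can also be checked out-of-band with the iDRAC RACADM CLI. The following sketch assumes the attribute name iDRAC.NIC.TopologyLldp, which may vary by iDRAC release; confirm the exact attribute name in the iDRAC attribute registry for your firmware:

racadm get iDRAC.NIC.TopologyLldp
[Key=iDRAC.Embedded.1#NIC.1]
TopologyLldp=Enabled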

Troubleshooting FC and FCoE


When troubleshooting FC and FCoE, consider the following:
● Verify that the firmware and drivers are up to date on the CNAs.
● Check the support matrix to confirm that the CNAs are supported by the storage that is used in the deployment. For the
support matrix for Dell storage platforms, see the following:
○ Dell Technologies E-Lab Navigator
○ Dell Storage Compatibility Matrix for SC Series, PS Series, and FS Series storage solutions
● Verify that port group breakout mode is configured correctly.
● Ensure that the FC port-groups that are broken out on the unified ports in the MX9116n switches are set administratively up
after the ports are changed from Ethernet to FC.

● MX9116n switches operating in SmartFabric mode support various commands to verify the configuration. Use the following
commands to verify FC configurations from MX9116n CLI:

OS10# show fc
alias Show FC alias
ns Show FC NS Switch parameters
statistics Show FC Switch parameters
switch Show FC Switch parameters
zone Show FC Zone
zoneset Show fc zoneset

● Use the following commands to verify FCoE configurations from MX9116n CLI:

OS10# show fcoe


enode Show FCOE enode information
fcf Show FCOE fcf information
sessions Show FCOE session information
statistics Show FCOE statistics information
system Show FCOE system information
vlan Show FCOE vlan information

● Verify that the FC ports are up, for example:

OS10# show interface status | grep 1/43


Fc 1/1/43:1 up 16G auto -
Fc 1/1/43:2 up 16G auto -
Fc 1/1/43:3 down 0 auto -
Fc 1/1/43:4 down 0 auto -

The show vfabric command output provides various information including the default zone mode, the active zone set, and
interfaces that are members of the vfabric.

OS10# show vfabric


Fabric Name New vfabric
Fabric Type FPORT
Fabric Id 1
Vlan Id 30
FC-MAP 0xEFC00
Vlan priority 3
FCF Priority 128
FKA-Adv-Period Enabled,8
Config-State ACTIVE
Oper-State UP
==========================================
Switch Config Parameters
==========================================
Domain ID 1
==========================================
Switch Zoning Parameters
==========================================
Default Zone Mode: Allow
Active ZoneSet: None
==========================================
Members
fibrechannel1/1/44:1
ethernet1/1/1
ethernet1/71/1
ethernet1/71/2

The show fcoe sessions command shows active FCoE sessions. The output includes MAC addresses, Ethernet interfaces,
the FCoE VLAN ID, FC IDs, and WWPNs of logged-in CNAs.

NOTE: Due to the width of the command output, each line of output is shown on two lines below.

OS10# show fcoe sessions


Enode MAC Enode Interface FCF MAC FCF interface VLAN FCoE
MAC FC-ID PORT WWPN PORT WWNN
-----------------------------------------------------------------------------------------
----------------------------------------------------------------
06:c3:f9:a4:cd:03 Eth 1/71/1 20:04:0f:00:ce:1d ~ 30

0e:fc:00:01:01:00 01:01:00 20:01:06:c3:f9:a4:cd:03 20:00:06:c3:f9:a4:cd:03
f4:e9:d4:73:d0:0c Eth 1/1/1 20:04:0f:00:ce:1d ~ 30
0e:fc:00:01:02:00 01:02:00 20:01:f4:e9:d4:73:d0:0c 20:00:f4:e9:d4:73:d0:00

NOTE: For more information about FC and FCoE, find the relevant version of the Dell SmartFabric OS10 User Guide in the
OME-M and OS10 compatibility and documentation table.

Rebalancing FC and FCoE sessions


Beginning with OME-M 1.20.00 and OS10.5.0.7, the ability to rebalance FC and FCoE sessions across FC uplinks has been
added. This can be validated in Scenario 5: Connect MX9116n FSE to Fibre Channel storage - NPIV Proxy Gateway mode.
The system performs end-node-based rebalancing when the CLI command is run. The factors for rebalancing are the current session count on the uplink, which includes Fabric Login (FLOGI) and Fabric Discovery (FDISC) requests, and the speed of the uplink. Rebalancing can be applied once the FC fabric is up and running and uplink sessions are established.
Prior to the release of Dell SmartFabric OS10.5.1, NPG implementations exposed one Fibre Channel Forwarder (FCF) for each
physical FC uplink to end nodes. Starting with Dell SmartFabric OS10.5.2.4, all physical uplink interfaces within a vFabric are
represented as a single logical FCF. This improves session management and failover as the CNA no longer has to select a
different FCF during a link event.

Requirements and configuration guidelines


When a new physical uplink is added to a vFabric operating in NPG mode, or when a physical uplink with established FC/FCoE sessions goes down, the system goes into an unbalanced state. A manual rebalance can be performed when the system is found to be unbalanced.
The newly added uplink must be operationally up before the rebalance is triggered. When an uplink goes down, all the sessions associated with that uplink are interrupted, then reestablished and load balanced across the other available uplinks. Rebalancing is done at the vFabric level.
NOTE: Sessions that are interrupted and reestablished appear to the host as an FC path failure until the session is
reestablished. Ensure that MPIO functionality on the host is operational before performing the rebalance.
Because FC session rebalancing is path disruptive, the command provides the ability to perform a dry run to provide a list of
servers that will be affected.
Below are the steps to perform rebalancing of uplinks.

System in unbalanced state


Run the show npg device brief and show npg uplink-interfaces commands to see the unbalanced state of the system. In the following figure, interface Fc 1/1/23 has two FC sessions and interface Fc 1/1/24 has zero.

Figure 174. System in unbalanced state

Perform a dry run of the rebalance command
To preview the changes a rebalance will make before making them, run the re-balance npg sessions vfabric 10 dry-run command.

Figure 175. Review the rebalance using the dry-run command

Run rebalance command
To perform the rebalance, run the command re-balance npg sessions vfabric 10.

Figure 176. Rebalance with actual run
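Because Figures 175 and 176 are screen captures, the bare command sequence is summarized below; the vFabric ID of 10 matches the example shown in the figures, and output is omitted:

OS10# re-balance npg sessions vfabric 10 dry-run
OS10# re-balance npg sessions vfabric 10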

NOTE: Once the rebalance is complete, a syslog message is generated in the MX console.

System in a balanced state


The following figure shows the system in a balanced state. Interface Fc 1/1/23 now has one FC session and interface Fc
1/1/24 has one.

Figure 177. System in balanced state

Beginning with Dell SmartFabric OS10.5.2, the show npg uplink-interfaces command adds an fcf-info option to display the FCF availability status, the fabric name of the connected upstream FC switch, the error reason, the remaining FCF advertisement delay timeout, and the duplicate FC ID assignment counter.

MX9116N-A1# show npg uplink-interfaces fcf-info


Vfabric-Id : 10
FAD Timeout Left : 0 second(s)
FCF Availability Status : Yes
Uplink Duplicate
Intf Upstream Fabric-name Error Reason FC Id(s)
--------------------------------------------------------------------------
Fc 1/1/24 10:00:14:18:77:20:7f:cf NONE 0
Fc 1/1/23 10:00:14:18:77:20:7f:cf NONE 0

Common CLI troubleshooting commands for Full Switch and SmartFabric modes
show switch-operating-mode
Use the show switch-operating-mode command to display the current operating mode:

MX9116N-1# show switch-operating-mode

Switch-Operating-Mode : Smart Fabric Mode

show discovered-expanders
The show discovered-expanders command is only available on the MX9116n FSE and displays the attached MX7116n FEMs, their service tags, the associated port group, and the virtual slot.

MX9116N-1# show discovered-expanders


Service  Model        Type  Chassis      Chassis-slot  Port-group  Virtual
tag                         service-tag                            Slot-Id
---------------------------------------------------------------------------
D10DXC2  MX7116n FEM  1     SKY002Z      A1            1/1/1       71

show unit-provision
The show unit-provision command is only available on the MX9116n FSE and displays the unit ID, the provision name, and
the discovered name of the MX7116n FEM that is attached to the MX9116n FSE.

MX9116N-1# show unit-provision


Node ID | Unit ID | Provision Name | Discovered Name | State |
---------+---------+---------------------------------+-------|
1 | 71 | D10DXC2 | D10DXC2 | up |

show lldp neighbors


The show lldp neighbors command shows information about LLDP neighbors. The iDRAC that is in the PowerEdge MX
compute sled produces LLDP topology packets that contain specific information that the SmartFabric Services engine uses to
determine the physical network topology regardless of whether a switch is in Full Switch or SmartFabric mode. For servers that
are connected to switches in SmartFabric mode, the iDRAC LLDP topology feature is required. Without it, the fabric does not
recognize the compute sled and the user cannot deploy networks to the sled.

The iDRAC MAC address can be verified by selecting iDRAC Settings > Overview > Current Network Settings from the
iDRAC UI of a compute sled as shown in the following example:

Figure 178. IOM Port information

Alternately, the iDRAC MAC information can be obtained from the System Information on the iDRAC Dashboard page.

Figure 179. System Information on iDRAC Dashboard

When viewing the LLDP neighbors, both the iDRAC MAC address and the NIC MAC address of the respective mezzanine card are shown.

MX9116N-1# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
--------------------------------------------------------------------------------
ethernet1/1/1 Not Advertised 98:03:9b:65:73:b2 98:03:9b:65:73:b4
ethernet1/1/1 iDRAC-8XQP0T2 8XQP0T2 NIC.Mezzanine.1A-1-1 d0:94:66:87:ab:40
---- OUTPUT TRUNCATED -----

In the example deployment validation of LLDP neighbors, ethernet1/1/1 and ethernet1/1/3 represent the two MX740c sleds in one chassis. For each, the first entry is the iDRAC for the compute sled; the iDRAC uses connectivity to the mezzanine card to advertise LLDP information. The second entry is the mezzanine card itself. Ethernet1/71/1 and ethernet1/71/2 represent the MX740c compute sleds connected to the MX7116n FEM in the other chassis.
The range ethernet1/1/37-1/1/40 contains the VLTi interfaces for the SmartFabric. Lastly, ethernet1/1/41-1/1/42 are the links in a port channel connected to the Dell PowerSwitch S5232F-ON leaf switches.

MX9116N-1# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
----------------------------------------------------------------------------
ethernet1/1/1 iDRAC-CBMP9N2 CBMP9N2 NIC.Mezzanine.1A-1-1 d0:94:66:2a:07:2f
ethernet1/1/1 Not Advertised 24:6e:96:9c:e3:50 24:6e:96:9c:e3:50
ethernet1/1/3 iDRAC-1S35MN2 1S35MN2 NIC.Mezzanine.1A-1-1 d0:94:66:29:fa:f4

ethernet1/1/3 Not Advertised 24:6e:96:9c:e5:48 24:6e:96:9c:e5:48
ethernet1/1/37 C160A2 ethernet1/1/37 20:04:0f:00:a1:9e
ethernet1/1/38 C160A2 ethernet1/1/38 20:04:0f:00:a1:9e
ethernet1/1/39 C160A2 ethernet1/1/39 20:04:0f:00:a1:9e
ethernet1/1/40 C160A2 ethernet1/1/40 20:04:0f:00:a1:9e
ethernet1/1/41 S5232-Leaf1 ethernet1/1/3 4c:76:25:e8:f2:c0
ethernet1/1/42 S5232-Leaf2 ethernet1/1/3 4c:76:25:e8:e8:40
ethernet1/71/1 Not Advertised 24:6e:96:9c:e5:d8 24:6e:96:9c:e5:d8
ethernet1/71/1 iDRAC-CF52XM2 CF52XM2 NIC.Mezzanine.1A-1-1 d0:94:66:29:fe:b4
ethernet1/71/2 Not Advertised 24:6e:96:9c:e5:da 24:6e:96:9c:e5:da
ethernet1/71/2 iDRAC-1S34MN2 1S34MN2 NIC.Mezzanine.1A-1-1 d0:94:66:29:ff:27

show qos system


The show qos system command displays the QoS configuration that is applied to the system. The command is useful to
verify the service policy that is created manually or automatically by a SmartFabric deployment.

MX9116N-1# show qos system


Service-policy (input): PM_VLAN
ETS Mode : off

show policy-map
Using the service policy from show qos system, the show policy-map type qos PM_VLAN command displays QoS policy
details including associated class maps, for example, CM10, and QoS queue settings, qos-group 2.

MX9116N-1# show policy-map type qos PM_VLAN


Service-policy (qos) input: PM_VLAN
Class-map (qos): CM10
set qos-group 2

show class-map
The show class-map command displays details for all the configured class-maps. For example, the association between CM10
and VLAN 10 is shown.

MX9116N-1# show class-map


Class-map (application): class-iscsi
Class-map (qos): class-trust
Class-map (qos): CM10(match-any)
Match: mac vlan 10
Class-map (qos): CM2(match-any)

show vlt domain-id vlt-port-detail


The show vlt domain-id vlt-port-detail command shows the VLT port channel status for both VLT peers. The VLT
in this example is connected to the Cisco ACI vPC. It is automatically configured in port channel 1, and it consists of two ports
on each switch.

MX9116n-1# show vlt 255 vlt-port-detail


vlt-port-channel ID : 1
VLT Unit ID Port-Channel Status Configured ports Active ports
-------------------------------------------------------------------------------
* 1 port-channel1 up 2 2
2 port-channel1 up 2 2



show interface port-channel summary
The show interface port-channel summary command shows the LAG number (VLT port channel 1 in this example),
the mode, status, and ports used in the port channel.

MX9116n-1# show interface port-channel summary


LAG Mode Status Uptime Ports
1 L2-HYBRID up 00:29:20 Eth 1/1/43 (Up)
Eth 1/1/44 (Up)

show queuing weights interface ethernet


The show queuing weights interface ethernet command shows the weight, as a percentage, assigned to each
queue. These queues belong to the QoS groups mentioned in Networks and automated QoS. For example, queue 2 belongs
to Bronze and queue 3 belongs to Silver.

MX9116N-1# show queuing weights interface ethernet 1/1/41


Interface ethernet1/1/41
Queue Weight(In percentage)
--------------------------------
0 1
1 2
2 3
3 4
4 5
5 10
6 25
7 50

The mapping between each QoS group, its related queue, and its weight is shown here:

QoS Group Queue Weight(In percentage)
--------------------------------
0 0 1
1 1 2
2(Bronze) 2 3
3(Silver) 3 4
4(Gold) 4 5
5(Platinum) 5 10
6 6 25
7 7 50

show lldp dcbx interface ethernet ets detail


The show lldp dcbx interface ethernet ets detail command shows each priority group, its priorities, and its
bandwidth allocation for the admin, remote, and local parameters. Ensure that DCBx is enabled before running the command.
Bandwidth is shown as a percentage. The minimum and maximum bandwidth can be changed in OME-M under the Edit
Network option for the created server template.

MX9116N-1# show lldp dcbx interface ethernet 1/1/1 ets detail


Interface ethernet1/1/1
Max Supported PG is 8
Number of Traffic Classes is 8
Admin mode is on

Admin Parameters :
------------------
Admin is enabled

PG-grp Priority# Bandwidth TSA
------------------------------------------------
0 0,1,2,5,6,7 1% ETS
1 0% SP
2 0% SP
3 3 98% ETS



4 4 1% ETS
5 0% SP
6 0% SP
7 0% SP

Remote Parameters :
-------------------
Remote is enabled
PG-grp Priority# Bandwidth TSA
------------------------------------------------
0 0,1,2,5,6,7 1% ETS
1 0% SP
2 0% SP
3 3 98% ETS
4 4 1% ETS
5 0% SP
6 0% SP
7 0% SP

Remote Willing Status is enabled


Local Parameters :
-------------------
Local is enabled

PG-grp Priority# Bandwidth TSA
------------------------------------------------
0 0,1,2,5,6,7 1% ETS
1 0% SP
2 0% SP
3 3 98% ETS
4 4 1% ETS
5 0% SP
6 0% SP
7 0% SP

Oper status is init


ETS DCBX Oper status is Up
State Machine Type is Asymmetric
Conf TLV Tx Status is enabled
Reco TLV Tx Status is enabled

4 Input Conf TLV Pkts, 55 Output Conf TLV Pkts, 2 Error Conf TLV Pkts
0 Input Reco TLV Pkts, 55 Output Reco TLV Pkts, 0 Error Reco TLV Pkts



11
SmartFabric Troubleshooting
Troubleshooting SmartFabric issues
This section provides information about errors that might be encountered while working with a SmartFabric. Troubleshooting
and remediation actions are also included to help with resolving errors.

Troubleshoot port group breakout errors


The creation of a SmartFabric requires you to perform steps in a specific order. The SmartFabric deployment consists of four
main steps that are performed using the OME-M console:
1. Create the VLANs to be used in the fabric.
2. Select the switches and create the fabric based on the preferred physical topology.
3. Create uplinks from the fabric to the existing network and assign VLANs to those uplinks.
4. Create and deploy the appropriate server templates to the compute sleds.
For cases where changing the port speed or breakout configuration of port groups is required, the ports must be configured
after the SmartFabric is created and before the uplinks are added.
With OME-M 1.30.00 and later, the port breakout can be configured directly to the desired breakout type, as shown in the
figure below.



Figure 180. Recommended order of steps for port breakout for OME-M 1.30.10 and later

With OME-M 1.20.10 and earlier, you must set the Breakout Type to HardwareDefault first and then set the desired
configuration as shown in the figure below.



Figure 181. Recommended order of steps for port breakout for OME-M 1.20.10 and earlier

If the recommended order of steps is not followed, you may encounter the following errors:

Table 26. Troubleshooting port group breakout errors

● Configuration of the breakout requires you to create the SmartFabric first. When attempting to configure the breakout
before creating a SmartFabric, an error displays.
● With OME-M 1.20.10 and earlier, configuration of the breakout requires you to select the HardwareDefault breakout type
first. If the breakout type is directly selected without first selecting HardwareDefault, an error displays.
● Once the uplinks are added, they are most often associated with tagged or untagged VLANs. When attempting to configure
the breakout on the uplink port groups after adding uplinks associated with VLANs to the fabric, an error displays.



Troubleshooting VLTi between switches
NOTE: The example below shows the MX9116n FSE; however, the process is the same for the MX5108n Ethernet switch.

After the SmartFabric is created, you may see the following errors: Warning: Unable to validate the fabric because the
design link ICL-1_REVERSE not connected as per design and Unable to validate the fabric because the design link
ICL-1_FORWARD not connected as per design.
There are two common reasons why you may receive these errors:
● QSFP28 cables are being used between the MX9116n switches instead of QSFP28-DD cables.
● The VLTi cables are not connected to the correct physical ports.
An example is shown below. To see the warning message, go to the OME-M UI, click Devices > Fabric, and choose View
Details next to Warning. You can view the details of the warning message by choosing the SmartFabric that was created and
clicking Topology. The warnings are displayed in the Validation Errors section.

Figure 182. Warning for VLTi connections using QSFP28 100 GbE cables

Figure 183. Warning messages

This occurs because the VLTi connections between the two MX9116n FSEs are using QSFP28 cables instead of QSFP28-DD
cables. Make sure QSFP28-DD cables are connected between port groups 11 and 12 (ports 1/1/37 through 1/1/40) on both
FSEs for the VLTi connections.
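
After correcting the cabling, the VLTi state can be confirmed from the SmartFabric OS10 CLI. A minimal check is sketched
below; it assumes the VLT domain ID of 255 that SmartFabric mode configures automatically (as in the show vlt 255
vlt-port-detail example earlier), the VLTi port-channel number is illustrative, and the output is abbreviated:

MX9116N-1# show vlt 255
---- OUTPUT TRUNCATED -----
VLTi Link Status
port-channel1000 : up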



Troubleshooting uplink errors
Toggle auto negotiation
Enabling or disabling auto negotiation from the OME-M console can bring up the uplinks connecting to the upstream switches.
For example, when deploying the SmartFabric with the Cisco Nexus 3232C (see Scenario 2: SmartFabric connected to Cisco
Nexus 3232C switches with Ethernet - No Spanning Tree uplink), disable auto negotiation on the uplink ports on the MX
switches to bring the link up.
The OME-M console is used to disable or enable auto negotiation on MX switch ports. The following steps illustrate disabling
auto negotiation on ports 41 and 42 of an MX9116n.
1. From the switch management page, choose Hardware > Port Information.
2. Select the ports on which auto negotiation must be disabled. In this example, ports 1/1/41 and 1/1/42 are selected.
3. Click Toggle AutoNeg > Finish.

Figure 184. Toggle AutoNeg dialog box

Set uplink ports to administratively up


The uplink ports on the switch might be administratively down. Enabling the uplink ports can be carried out from the OME-M
console. The uplink ports can become administratively down after a port group breakout, especially for FC breakouts.
The OME-M console can be used to disable or enable the ports on MX switches. The following steps illustrate toggling the
administrative state on ports 41 and 42 of an MX9116n.
1. From the switch management page, choose Hardware > Port Information.
2. Select the ports.
NOTE: In this example, ports 1/1/41 and 1/1/42 are selected.
3. Click Toggle Admin State > Finish.

Figure 185. Toggle Admin State dialog box

Verify MTU size


Set the same MTU size on the ports that connect the MX switches, the ports on the upstream switches, and the server NICs.
To set the MTU size from the OME-M console, see Configure Ethernet ports.
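
On the upstream SmartFabric OS10 switches, the MTU can also be set from the CLI. The following is a minimal sketch using
the jumbo MTU of 9216 that appears in the uplink examples in the Configuration Scenarios chapter; the hostname and
interface are illustrative:

S5232-Leaf1# configure terminal
S5232-Leaf1(config)# interface ethernet 1/1/1
S5232-Leaf1(conf-if-eth1/1/1)# mtu 9216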



Verify auto negotiation settings on upstream switches
Verify the auto negotiation settings on the upstream switches. If the auto negotiation settings have been modified, the links
might not come up; change the auto negotiation settings on the upstream switches to resolve the issue.
For example, if auto negotiation was disabled on the Cisco Nexus upstream switches, the setting can be turned back on. To
enable auto negotiation on an Ethernet interface on Cisco Nexus switches, run the following commands:

switch# configure terminal


switch(config)# interface ethernet <interface-number>
switch(config-if)# negotiate auto

The following example shows interface ethernet 1/2 with auto negotiation enabled on the interface:

Nexus-3232C-Leaf1(config-if)# do show int eth 1/2


Ethernet1/2 is down (XCVR not inserted)
admin state is down, Dedicated Interface
Hardware: 40000/100000 Ethernet, address: 00fe.c8ca.f367 (bia 00fe.c8ca.f36c)
MTU 1500 bytes, BW 100000000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, medium is broadcast
auto-duplex, auto-speed
Beacon is turned off
Auto-Negotiation is turned on, FEC mode is Auto
---- OUTPUT TRUNCATED -----

Verify LACP
The interface status of the upstream switches can provide valuable information about why a link is down. The following
example shows interfaces 1 and 3 on the upstream Cisco Nexus switches as members of port channel 1:

3232C-Leaf2# show interface status


--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- connected routed full 1000 --
Eth1/1 To MX Chassis 1 suspended trunk full 100G QSFP-100G-SR4
Eth1/2 -- xcvrAbsen routed auto auto --
Eth1/3 To MX Chassis 2 suspended trunk full 100G QSFP-100G-SR4
---- OUTPUT TRUNCATED -----

Checking interface 1 reveals that the ports are not receiving the LACP PDUs as shown in the following example:

3232C-Leaf2# show int eth 1/1


Ethernet1/1 is down (suspended(no LACP PDUs))
admin state is up, Dedicated Interface
Belongs to Po1
---- OUTPUT TRUNCATED -----

NOTE: On Dell PowerSwitch devices, use the show interface status command to view the interfaces and associated
status information. Use the show interface ethernet <interface-number> command to view the interface details.
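
For reference, a hypothetical (truncated) example of show interface status output on an MX9116n is sketched below;
the exact columns and values vary by OS10 release and configuration:

MX9116N-1# show interface status
--------------------------------------------------------------------------------
Port           Description  Status  Speed  Duplex  Mode  Vlan  Tagged-Vlans
Eth 1/1/41                  up      100G   full    T     1     10
---- OUTPUT TRUNCATED -----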

In the following example, the errors listed above occurred because an uplink was not created on the fabric.

Figure 186. Fabric topology with no uplinks



The following image shows the topology with a QSFP28 100 GbE connection on ports 37 and 39 instead of a QSFP28-DD
connection, which is an unsupported configuration.

Figure 187. Fabric topology with uplinks and QSFP28 100 GbE VLTi connection

The resolution is to add one or more uplinks and verify that the fabric becomes healthy.

Figure 188. Healthy fabric

Troubleshooting legacy Ethernet uplink with STP


When using the legacy Ethernet uplink type, it is essential to prevent network loops by running the appropriate Spanning Tree
Protocol (STP) on the MX and upstream switches. Loops can occur when multiple redundant paths are available between the
switches, and various types of STP are available to prevent the network from going down due to loops.
When using the Ethernet – No Spanning Tree Protocol uplink, STP is not required on the upstream switch interfaces.
NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
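
For example, to make an upstream SmartFabric OS10 switch the root bridge for VLAN 10 using a low priority value such as
4096 (the hostname is illustrative; the same command appears in the scenario chapters later in this guide):

S5232-Leaf1(config)# spanning-tree vlan 10 priority 4096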

Verify STP is enabled on upstream switches


STP is required when connecting a SmartFabric to the upstream network when using the legacy Ethernet uplink. Turning off
Spanning Tree in the upstream switches will result in network loops and may cause downtime. Enable the appropriate STP type
on the upstream switches.



Verify STP type is identical on MX and upstream switches
If STP is enabled on the upstream switch, verify that the type of STP matches the type of STP running on the MX switches.
By default, the MX switches run RPVST+, as shown below:

OS10# show spanning-tree brief


Spanning tree enabled protocol rapid-pvst
VLAN 1
Executing IEEE compatible Spanning Tree Protocol
---- OUTPUT TRUNCATED -----

The following example shows that the STP on the upstream switches, Cisco Nexus 3232C, is configured to run MST:

Nexus-3232C-Leaf1(config)# do show spanning-tree summary


Switch is in mst mode (IEEE Standard)
Root bridge for: MST0000
Port Type Default is disable
---- OUTPUT TRUNCATED -----

The recommended course of action is to change the STP type to RPVST+ on the upstream Cisco Nexus switches.

Nexus-3232C-Leaf1(config)# spanning-tree mode rapid-pvst


Nexus-3232C-Leaf1(config)# do show spanning-tree summary
Switch is in rapid-pvst mode
--- OUTPUT TRUNCATED -----

Alternatively, you can change the spanning tree type on the MX switches operating in SmartFabric mode to match the STP
type on the upstream switches. Make the change using the SmartFabric OS10 CLI. The available STP types are as follows:

MX9116N-A1(config)# spanning-tree mode ?


rstp Enable rstp
rapid-pvst Enable rapid-pvst
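
For example, if the upstream switches run RSTP, a minimal sketch of matching it on the MX switch is:

MX9116N-A1(config)# spanning-tree mode rstp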

Troubleshooting common issues


This section discusses various issues that you may encounter when configuring the scenarios and examples mentioned in this
guide. A problem statement is given for each scenario, along with one or more possible solutions.

Table 27. Problem and resolution examples

Problem: MX7116n FEMs are not discovered when creating a SmartFabric.

Scenario: Two MX7000 chassis are connected in an MCM group with MX9116n FSEs and MX7116n FEMs. The MX9116n FSEs
are connected to upstream switches, the upstream switches are connected to rack servers, and vCenter is deployed in this
scenario. VMs are also deployed on the ESXi hosts, MX compute sleds, and rack servers. The Link Layer Discovery Protocol
(LLDP) advertisements from the blade NICs may not be visible to the IOMs, so running the show lldp neighbors command
from the IOM does not list the NICs. In the blade iDRAC, the NIC status shows as Unknown, and the Switch Connection ID and
Switch Port Connection ID are shown as Not Applicable. This issue may prevent the MX7116n from being discovered when
creating a SmartFabric.

Solution: When resolving the issue, consider the following:
1. Do not enable LLDP under the Discovery option in the distributed virtual switch settings. LLDP is not a supported discovery
protocol on a Distributed Virtual Switch in ESXi on the MX platform.
2. Disable Beacon Probing and revert to Link Status only on all port groups. This can be done under Port-group settings >
Teaming and Failover.
3. If the NICs are configured for Jumbo Frames, try turning this off.
4. Set up Traffic Filtering (ACL) to drop LLDP packets in the ingress and egress direction. Verify that the same ACL does not
exist on any physical switch or virtual switch where the SmartFabric is expected to be interconnected.

Problem: Dropped packets between VMs for 15 seconds after the switch reboots.

Scenario: Two MX7000 chassis are connected in an MCM group with MX9116n FSEs and MX7116n FEMs. The MX9116n FSEs
are connected to upstream switches, the upstream switches are connected to rack servers, and vCenter is deployed in this
scenario. VMs are also deployed on the ESXi hosts, MX compute sleds, and rack servers. Verify that STP is enabled. Rebooting
the MX9116n FSE on the MX7000 chassis while passing traffic between the VMs deployed on the MX compute sleds and the
VMs deployed on rack servers causes three to five request time outs and dropped packets for up to 15 seconds.

Solution: The issue occurs when one of the MX9116n FSEs on an MX7000 chassis becomes the Spanning Tree root when using
the legacy Ethernet uplink type. To resolve this issue, make an upstream switch the STP root, not the MX9116n FSE. In the
topology mentioned here, a lower priority number on a bridge increases the likelihood that it becomes the STP root. Run the
commands mentioned in the Dell SmartFabric OS10 User Guide to make the upstream switch the STP root. Find the relevant
version of the User Guide in the OME-M and OS10 compatibility and documentation table.

Problem: Not able to set QoS on a compute sled connected to an MX9116n FSE or MX5108n.

Scenario: An MX9116n FSE or MX5108n I/O module is connected to an MX740c compute sled with an Intel XXV710 Ethernet
controller, and the IOMs are connected to upstream switches. Running the show lldp dcbx interface ethernet
<node/slot/port> pfc detail command shows that the Remote Willing Status is disabled on server-facing ports:

OS10# show lldp dcbx interface ethernet 1/1/1 pfc detail
Interface ethernet1/1/15
Admin mode is on
Admin is enabled, Priority list is 4,5,6,7
Remote is enabled, Priority list is 4,5,6,7
Remote Willing Status is disabled
(Output Truncated)

The NIC on the server that is attached to the switch is not configured to receive DCBx or any QoS configurations, which is
what causes the Remote Willing Status is disabled message. Some server NICs will only receive a QoS configuration
(scheduling, bandwidth, priority queues, and so on) from the switch they are attaching to. The drivers for these NICs do not
support this configuration via software, but only from a peer via the DCBx protocol.

Solution: By default, the MX9116n FSE and MX5108n IOMs support the DCBx protocol and can be used to push their QoS
configuration to the server NIC. The NIC must be configured to accept these QoS settings from the switch by setting its
Remote Willing Status to Enable. In Full Switch mode, the user can configure DCBx as mentioned in the Dell SmartFabric OS10
User Guide. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table. In
SmartFabric mode, the DCBx configuration is tied to the FCoE uplink and is enabled only after an FCoE uplink is configured on
the switch. Once the DCBx configuration is applied on the switch side, it is pushed to the remote end, and the remote end must
accept this configuration by showing Remote Willing Status Enabled.

Problem: Removing the management VLAN tag under Edit Uplinks removes the management VLAN.

Scenario: To reproduce the scenario with MX IOMs connected to upstream switches:
1. Create the management VLAN.
2. After creating the SmartFabric and adding uplinks, the VLANs can be edited from the Edit Uplinks page.
3. Go to OME-M Console > Devices > Fabric > Select a fabric > Select uplink > Edit.
4. Click Next to access the Edit Uplink page.
5. Add Network and add the management VLAN.
6. Tag the management VLAN. The UI accepts the change, but there is no change on the device. Access the CLI to confirm.
7. Remove the tag on the management VLAN; this in turn deletes the management VLAN as well.

Solution: In Full Switch mode, the user can create a VLAN, enable it, and define it as a management VLAN in global
configuration mode on the switch. For more information on configuring VLANs in Full Switch mode, find the relevant version of
the User Guide in the OME-M and OS10 compatibility and documentation table. In SmartFabric mode, management VLAN 4020
is created by default. Make sure not to add the management VLAN using Add Network or to remove the tag on the
management VLAN, as this removes the management VLAN itself.

SmartFabric Services troubleshooting commands


The following commands allow the user to view various SmartFabric Services configuration information. These commands can
also be used for troubleshooting purposes on SmartFabric OS10.
For information related to Support release for commands, find the relevant version of the User Guide in the OME-M and OS10
compatibility and documentation table.

show smartfabric personality


The show smartfabric personality command is used on a node to view the configured SmartFabric Services
personality. The possible values are PowerEdge MX, Isilon, VxRail, and L3 fabric.
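
A hypothetical invocation on a PowerEdge MX IOM is sketched below; the exact output format varies by OS10 release:

OS10# show smartfabric personality
Personality : MX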

show smartfabric cluster


The show smartfabric cluster command is used to see if the node is part of a cluster. It displays the cluster
information of the node, such as the cluster domain ID, virtual IP address, role, and service tag. It can also be used to verify
whether the role of the node is BACKUP or MASTER.

OS10# show smartfabric cluster

----------------------------------------------------------
CLUSTER DOMAIN ID : 50
VIP : fde1:53ba:e9a0:de14:0:5eff:fe00:150
ROLE : MASTER
SERVICE-TAG : CBJXLN2

NOTE: New features may not appear in the MSM UI until the master is upgraded to the version that supports the new
features. The example above shows how the show smartfabric cluster command determines which I/O module is
the master and which I/O module is the backup.



show smartfabric cluster member
The show smartfabric cluster member command is used to see the member details of the cluster. It displays cluster
member information such as the service tag, IP address, status, role, and type of each node, and the service tag and slot of the
chassis that the node belongs to.

OS10# show smartfabric cluster member


Service-tag IP Address Status Role Type
Chassis-Service-Tag Chassis-Slot
-----------------------------------------------------------------------------------------
--------------------------------
CBJXLN2 fde1:53ba:e9a0:de14:2204:fff:fe00:cde7 ONLINE MASTER MX5108n
SKY002Z A1
BZTQPK2 fde1:53ba:e9a0:de14:2204:fff:fe00:19e5 ONLINE BACKUP MX5108n
SKY002Z B1
6L59XM2 fde1:53ba:e9a0:de14:2204:fff:fe00:3de5 ONLINE BACKUP MX5108n
SKY002Z B2
F13RPK2 fde1:53ba:e9a0:de14:2204:fff:fe00:a267 ONLINE BACKUP MX5108n
SKY003Z A2

show smartfabric details


The show smartfabric details command is used to see all configured fabric details. This command displays the nodes
that are part of the fabric, the status of the fabric, and the design type associated with the fabric.

OS10# show smartfabric details


----------------------------------------------------------
Name : Fabric 1
Description :
ID : 74b3d3a4-7804-4c15-b6d3-5e4e7c364f82
DesignType : 2xMX9116n_Fabric_Switching_Engines_in_different_chassis
Validation Status: VALID
VLTi Status : VALID
Placement Status : VALID
Nodes : CBJXLN2, F13RPK2
----------------------------------------------------------

show smartfabric uplinks


The show smartfabric uplinks command is used to verify the uplinks configured across the nodes in the fabric. This
command displays the following information that is associated with the fabric:
● Name
● Description
● ID
● Media type
● Native VLAN
● Configured interfaces
● Network profile

OS10# show smartfabric uplinks


----------------------------------------------------------
Name : FCoE Path A
Description :
ID : 1b328dc2-b99c-466e-b87c-b84c9c342225
Media Type : FC
Native Vlan : 0
Untagged-network :
Networks : 6a161bae-788f-4d65-8b0c-69b404c477dc
Configured-Interfaces : CBJXLN2:fibrechannel1/1/44:1, CBJXLN2:fibrechannel1/1/44:2
----------------------------------------------------------
----------------------------------------------------------
Name : Uplink1
Description :



ID : d493fee2-9680-41c7-989d-cf0347aab4fd
Media Type : ETHERNET
Native Vlan : 1
Untagged-network :
Networks : e6189b88-7f19-4b05-98b5-0c05ff7ff8c8, 284dae93-b91f-4593-9cff-
c8521cd7ae90
Configured-Interfaces : CBJXLN2:ethernet1/1/42:1, F13RPK2:ethernet1/1/41:1,
F13RPK2:ethernet1/1/42:1, CBJXLN2:ethernet1/1/41:1
----------------------------------------------------------
----------------------------------------------------------
Name : FCoE Path B
Description :
ID : 0f7ad3a2-e59e-4a07-9a74-4e57558f0a4d
Media Type : FC
Native Vlan : 0
Untagged-network :
Networks : e2c35ec5-c177-46f1-9a69-75d8b202d739
Configured-Interfaces : F13RPK2:fibrechannel1/1/44:1, F13RPK2:fibrechannel1/1/44:2

show smartfabric networks


The show smartfabric networks command is used to view the various network profiles configured. The command
displays the VLANs that are configured, QoS Priority, and the network type for each network profile.

OS10# show smartfabric networks


Name Type QosPriority Vlan

--------------------------------------------------------------------------------
FCoE A1 STORAGE_FCOE PLATINUM 998
VLAN1 GENERAL_PURPOSE BRONZE 1
FCoE A2 STORAGE_FCOE PLATINUM 999
VLAN10 GENERAL_PURPOSE SILVER 10
UPLINK VLAN GENERAL_PURPOSE SILVER 2491

show smartfabric validation-error


The show smartfabric validation-error command displays all the information about the validation errors, such as the
category, subcategory, recommended action, severity, timestamp, and a recommended link for each error.

show smartfabric nodes


The show smartfabric nodes command is used to view the details of the nodes that are part of the cluster. This
command helps the user to view the status of a node and chassis details.

OS10# show smartfabric nodes


Service-Tag Type Status Mode Chassis-Service
Chassis-Slot
Tag
--------------------------------------------------------------------------
F13RPK2 MX9116n ONLINE FABRIC SKY003Z A2
110DXC2 MX7116n NOT-APPLICABLE SKY002Z A2
CBJXLN2 MX9116n ONLINE FABRIC SKY002Z A1
6L59XM2 MX5108n ONLINE FULL-SWITCH SKY002Z B2
D10DXC2 MX7116n NOT-APPLICABLE SKY003Z A1
BZTQPK2 MX5108n ONLINE FULL-SWITCH SKY002Z B1



show smartfabric configured-server
The show smartfabric configured-server command displays the list of deployed servers and details such as the
service tag, compute sled model (MX740c/MX840c), chassis slot, and chassis service tag. It also shows whether the compute
sled has been discovered, onboarded, and configured.

OS10# show smartfabric configured-server


**********************************************************
Service-Tag : 8XQP0T2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : 8XXJ0T2
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE
**********************************************************
**********************************************************
Service-Tag : DTQHMR2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : 8XXJ0T2
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE
**********************************************************
**********************************************************
Service-Tag : 8XRH0T2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : F7PQ0T2
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE
**********************************************************
**********************************************************
Service-Tag : ST0000W
Server-Model : PowerEdge MX840c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : F7PQ0T2
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE

show smartfabric configured-server configured-server-interface
The show smartfabric configured-server configured-server-interface <compute-sled service
tag> command shows the details of one deployed server, such as the NIC ID, switch interface, and fabric. It also shows the
tagged and untagged VLANs on the NIC mezzanine card ports.

OS10# show smartfabric configured-server configured-server-interface DTQHMR2


**********************************************************
Service-Tag : DTQHMR2
----------------------------------------------------------
Nic-Id : NIC.Mezzanine.1A-2-1
Switch-Interface : 87QNMR2:ethernet1/71/2
Fabric : SF (abdeec7f-3a83-483a-929e-aa102429ae86)
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE
NicBonded : FALSE
Native-vlan : 1
Static-onboard-interface:



Networks : 40, 1611

----------------------------------------------------------
Nic-Id : NIC.Mezzanine.1A-1-1
Switch-Interface : 8XRJ0T2:ethernet1/1/3
Fabric : SF (abdeec7f-3a83-483a-929e-aa102429ae86)
Is-Discovered : TRUE
Is-Onboarded : TRUE
Is-Configured : TRUE
NicBonded : FALSE
Native-vlan : 1
Static-onboard-interface:
Networks : 30, 1611

show smartfabric discovered-server


The show smartfabric discovered-server command shows the list of servers present in the cluster and discovered by
the IOMs.

OS10# show smartfabric discovered-server


----------------------------------------------------------
Service-Tag : 8XQP0T2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : 8XXJ0T2
----------------------------------------------------------
----------------------------------------------------------
Service-Tag : DTQHMR2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : 8XXJ0T2
----------------------------------------------------------
----------------------------------------------------------
Service-Tag : 8XRH0T2
Server-Model : PowerEdge MX740c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : F7PQ0T2
----------------------------------------------------------
----------------------------------------------------------
Service-Tag : ST0000W
Server-Model : PowerEdge MX840c
Chassis-Slot : 1
Chassis-Model : PowerEdge MX7000
Chassis-Service-Tag : F7PQ0T2

show smartfabric discovered-server discovered-server-interface
The show smartfabric discovered-server discovered-server-interface <compute-sled service
tag> command shows the list of NIC connections for a discovered server.

OS10# show smartfabric discovered-server discovered-server-interface DTQHMR2


Nic-Id : Switch-Interface
------------------------------------------------------
NIC.Mezzanine.1A-1-1 8XRJ0T2:ethernet1/1/3
NIC.Mezzanine.1A-2-1 87QNMR2:ethernet1/71/2



show smartfabric upgrade-status
The show smartfabric upgrade-status command shows the current upgrade status of an I/O module in SmartFabric
mode.

OS10# show smartfabric upgrade-status

Opaque-id : 53f953f5-91ae-4009-b457-ef0f531cdc15
Upgrade Protocol : PUSH
Upgrade start time : 2021-02-11 14:48:51.595000
Status : INPROGRESS
Nodes to Upgrade : FD59H13
Reboot Sequence : FD59H13

Node Current-Action Current-Status Status-Message


-----------------------------------------------------------------------------------------
--------
FD59H13 REBOOT REBOOTING [Action : Reboot] Successfully sent the
request for rebooting the node.

show logging smartfabric


The show logging smartfabric command shows the event log information for SmartFabric Services.

OS10# show logging smartfabric


2021-02-11 20:06:14.335 OS10 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT] Cluster Group
INIT Group UUID/vrid:(78ff7f40-ef99-46b0-b760-c7c248abd1fc:18) from db
2021-02-11 14:09:01.527 MX9116N-A1 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT]
Processing FA Ready CPS event stag:8XRJ0T2 ready:True
2021-02-11 14:09:01.528 MX9116N-A1 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT] Processing
FA ready event stag:8XRJ0T2 ready:True
2021-02-11 14:09:01.578 MX9116N-A1 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT] [Starting
MDNS manager] intf:br4004
2021-02-11 14:09:02.294 MX9116N-A1 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT] Processing
FA connection state connect:False
2021-02-11 14:09:02.295 MX9116N-A1 python3[notice]: [SFS_EVENT_LOG:DNV-CAGT] processing
MDNS update message:{'group-uuid': '78ff7f40-ef99-46b0-b760-c7c248abd1fc', 'chassis-
model': '', 'device-type': '', 'chassis-name': '', 'group-vrid': '18', 'group-name': '',
'group-vrid-state': 'reserved', 'chassis-service-tag': '8XXJ0T2', 'group-type': 'LEAD'}
Output Truncated



12
Configuration Scenarios
This chapter discusses different topology configurations and scenarios.
● Scenarios 1 through 4 discuss Ethernet configurations with Ethernet - No Spanning Tree and legacy Ethernet uplinks.
● Scenarios 5 through 8 discuss storage networking scenarios with Dell PowerEdge MX connected to a storage array. The
scenarios also contain configurations with NPG, FSB, and direct attached modes.



Scenario 1: SmartFabric deployment with S5232F-ON
upstream switches with Ethernet - No Spanning Tree
uplink
The following figure shows a topology using a pair of Dell PowerSwitch S5232F-ON upstream switches, but any SmartFabric
OS10 switches can be used. This section details the configuration of the S5232F-ON with the Ethernet - No Spanning Tree
uplink and validation of the S5232F-ON configuration. It also includes instructions on how to configure the SmartFabric.

Figure 189. SmartFabric with Dell PowerSwitch S5232F-ON leaf switches

NOTE: See QSFP28 double density connectors for more information about the QSFP28-DD cables.

Configure SmartFabric
Perform the following steps to configure SmartFabric:



1. Physically cable the MX9116n FSE to the S5232F-ON upstream switch. Make sure that chassis are in a Multi-Chassis
Management group. For instructions, find the relevant version of the User Guide in the OME-M and OS10 compatibility and
documentation table.
2. Define VLANs to use in the Fabric. For instructions, see Define VLANs.
3. Create the SmartFabric as per instructions in Create the SmartFabric.
4. Configure uplink port speed or breakout. For more instructions, see Configuring port speed and breakout.
5. After the SmartFabric is created, create the Ethernet - No Spanning Tree uplink. See Create Ethernet – No Spanning Tree
uplink for more information.
6. Set the MX I/O modules' global spanning tree configuration to Rapid Spanning Tree Protocol (RSTP), as shown in the
example after this list.
7. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment for more information.
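
For step 6, the global spanning tree mode can be set from the SmartFabric OS10 CLI on each MX IOM, using the
spanning-tree mode options shown in the SmartFabric Troubleshooting chapter. A minimal sketch (the hostname is
illustrative):

MX9116N-A1# configure terminal
MX9116N-A1(config)# spanning-tree mode rstp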

Dell PowerSwitch S5232F-ON configuration


This section outlines the configuration commands issued to the Dell PowerSwitch S5232F-ON switches with Ethernet - No
Spanning Tree uplink connected from MX9116n FSE to S5232F-ON. The switches start with their factory default settings as
indicated in the Reset SmartFabric OS10 switch to factory defaults section.
NOTE: With Ethernet - No Spanning Tree uplink, spanning tree is disabled on the upstream port channel on the MX
I/O modules. To disable spanning tree on ports connected to MX I/O modules, run the commands below on the Dell
PowerSwitch S5232F-ON.

NOTE: For information related to the same scenario using the legacy Ethernet uplink with Spanning Tree Protocol, see
Scenario 3: SmartFabric deployment with S5232F-ON upstream switches with legacy Ethernet uplink.
There are four steps to configure the S5232F-ON upstream switches:
1. Set the switch hostname and management IP address. Enable spanning-tree mode as RSTP.
2. Configure the VLT between the switches.
3. Configure the VLANs.
4. Configure the port channels to connect to the MX switches.
Use the following commands to set the hostname, and to configure the OOB management interface and default gateway.

S5232-ON Leaf 1:

configure terminal
hostname S5232-Leaf1
spanning-tree mode rstp

interface mgmt 1/1/1
no ip address dhcp
no shutdown
ip address 100.67.XX.XX/24

management route 0.0.0.0/0 100.67.XX.XX

S5232-ON Leaf 2:

configure terminal
hostname S5232-Leaf2
spanning-tree mode rstp

interface mgmt 1/1/1
no ip address dhcp
no shutdown
ip address 100.67.YY.YY/24

management route 0.0.0.0/0 100.67.YY.YY

Configure the VLT between switches using the following commands. VLT configuration involves setting a discovery interface
range and discovering the VLT peer in the VLTi. The vlt-domain command configures the peer leaf-2 switch as a back-up
destination.

S5232-ON Leaf 1 S5232-ON Leaf 2

interface range ethernet1/1/29-1/1/31 interface range ethernet1/1/29-1/1/31


description VLTi description VLTi
no shutdown no shutdown
no switchport no switchport

vlt-domain 1 vlt-domain 1
backup destination 100.67.YY.YY backup destination 100.67.XX.XX
discovery-interface ethernet1/1/29-1/1/31 discovery-interface ethernet1/1/29-1/1/31



Configure the required VLANs on each switch. In this deployment example, the VLAN used is VLAN 10 and the Untagged VLAN
used is VLAN 1.

S5232-ON Leaf 1 and Leaf 2 (identical configuration on both switches):

interface vlan1
description "Default VLAN"
no shutdown

interface vlan10
description "Company A General Purpose"
no shutdown

Configure the port channels that connect to the downstream switches. The LACP protocol is used to create the dynamic LAG.
Trunk ports allow tagged VLANs to traverse the trunk link. In this example, the trunk is configured to allow VLAN 10. Disable
the spanning tree on port channels and run the commands related to Ethernet - No Spanning Tree uplinks as mentioned in the
following.

S5232-ON Leaf 1 and Leaf 2 (identical configuration on both switches):

interface port-channel1
description "To MX Chassis"
no shutdown
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 10
vlt-port-channel 1
mtu 9216
no shutdown
spanning-tree bpduguard enable
spanning-tree guard root
spanning-tree disable
spanning-tree port type edge

interface ethernet1/1/1
description "To MX Chassis-1"
no shutdown
no switchport
channel-group 1 mode active

interface ethernet1/1/3
description "To MX Chassis-2"
no shutdown
no switchport
channel-group 1 mode active

end
write memory

Dell PowerSwitch S5232-ON validation


This section contains validation commands for the Dell PowerSwitch S5232-ON leaf switches.

show vlt
The show vlt command validates the VLT configuration status. The VLTi Link Status must be up. The role of one switch in
the VLT pair is primary, and its peer switch (not shown) is assigned the secondary role.

S5232F-Leaf1# show vlt 1


Domain ID : 1
Unit ID : 1
Role : primary
Version : 1.0
Local System MAC address : 4c:76:25:e8:f2:c0
VLT MAC address : 4c:76:25:e8:f2:c0



IP address : fda5:74c8:b79e:1::1
Delay-Restore timer : 90 seconds
Peer-Routing : Disabled
Peer-Routing-Timeout timer : 0 seconds
VLTi Link Status
port-channel1000 : up

VLT Peer Unit ID System MAC Address Status IP Address Version


--------------------------------------------------------------------------------
2 4c:76:25:e8:e8:40 up fda5:74c8:b79e:1::2 1.0

show lldp neighbors


The show lldp neighbors command provides information about connected devices. In this case, ethernet1/1/1 and
ethernet1/1/3 connect to the two MX9116n FSEs, C160A2 and C140A1. The remaining links, ethernet1/1/29 and
ethernet1/1/31, represent the VLTi connection.

S5232F-Leaf1# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
----------------------------------------------------------------
ethernet1/1/1 C160A2 ethernet1/1/41 20:04:0f:00:a1:9e
ethernet1/1/3 C140A1 ethernet1/1/41 20:04:0f:00:cd:1e
ethernet1/1/29 S5232F-Leaf2 ethernet1/1/29 4c:76:25:e8:e8:40
ethernet1/1/31 S5232F-Leaf2 ethernet1/1/31 4c:76:25:e8:e8:40

show smartfabric uplinks


The show smartfabric uplinks command is used to verify the uplinks configured across the nodes in the fabric. It
displays the name, description, ID, media type, native VLAN, configured interfaces, and network profile associated with the
fabric. Run this command on the MX9116n FSE. The following output shows that the uplink created is an Ethernet - No
Spanning Tree uplink.

MX9116n-A1# show smartfabric uplinks


----------------------------------------------------------
Name : Uplink 1
Description :
ID : 3d4f2222-f082-43c1-b034-b14a8df3a172
Media Type : Ethernet - No Spanning Tree
Native Vlan : 1
Untagged-network :
Networks : 9418125b-5f1f-48d7-8b5d-648b0977c643
Configured-Interfaces : 87QNMR2:ethernet1/1/41, 87QNMR2:ethernet1/1/42
8XRJ0T2:ethernet1/1/41, 8XRJ0T2:ethernet1/1/42
----------------------------------------------------------



Scenario 2: SmartFabric connected to Cisco Nexus
3232C switches with Ethernet - No Spanning Tree
uplink
The figure below shows a topology using a pair of Cisco Nexus 3232C switches as leaf switches, but other Cisco Nexus
switches may be used. This section details the configuration of the Cisco Nexus switches with the Ethernet - No Spanning Tree
uplink, validation of the topology with the Cisco Nexus switches, and creation of a SmartFabric with the corresponding uplinks.

Figure 190. SmartFabric with Cisco Nexus 3232C leaf switches

NOTE: See the QSFP28 double density connectors for more information about the QSFP28-DD cables.

Configure SmartFabric
Perform the following steps to configure SmartFabric:
1. Physically cable the MX9116n FSE to the Cisco Nexus upstream switch. Make sure that chassis are in a Multi-Chassis
Management group. For instructions, find the relevant version of the User Guide in the OME-M and OS10 compatibility and
documentation table.
2. Define VLANs to use in the Fabric. For instructions, see Define VLANs.
3. Create the SmartFabric as per instructions in Create the SmartFabric.



4. Configure uplink port speed or breakout. For more instructions, see Configuring port speed and breakout.
5. After the SmartFabric is created, create the Ethernet - No Spanning Tree uplink. See Create Ethernet – No Spanning Tree
uplink for more information.
6. Set MX I/O modules global spanning tree configurations to Rapid Spanning Tree Protocol (RSTP).
7. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment for more information.

Cisco Nexus 3232C switch configuration


The following section outlines the configuration commands that are issued to the Cisco Nexus 3232C leaf switches with
Ethernet - No Spanning Tree uplink connected from MX9116n FSE to the Cisco Nexus switch.
NOTE: While this configuration example is specific to the Cisco Nexus 3232C switch, the same concepts apply to other
Cisco Nexus and IOS switches.
The switches start at their factory default settings, as described in Reset Cisco Nexus 3232C to factory defaults.
NOTE: With Ethernet - No Spanning Tree Uplink, spanning tree is disabled on upstream port channel on MX I/O modules.
To disable spanning tree on ports connected to MX I/O modules, run the commands below on the Cisco Nexus switches.
In this deployment example, the default VLAN is VLAN 1, and the created VLAN is VLAN 10. See the Cisco Nexus 3000 Series
NX-OS Configuration Guide for more details.

NOTE: For information related to the same scenario using the legacy Ethernet uplink with Spanning Tree Protocol, see
Scenario 4: SmartFabric connected to Cisco Nexus 3232C switches with legacy Ethernet uplink.
There are four steps to configure the 3232C upstream switches:
1. Set switch hostname, management IP address, enable features vPC, LLDP, LACP, and interface-vlan.
2. Configure vPC between the switches.
3. Configure the VLANs.
4. Configure the downstream port channels to connect to the MX switches.
Enter the following commands to set the hostname and enable the required features, and to configure the management
interface and default gateway. Also run the global Spanning Tree Protocol settings shown below.
NOTE: The MX IOMs run Rapid per-VLAN Spanning Tree Plus (RPVST+) by default, while Cisco Nexus switches run RSTP by
default. Ensure that the Dell and non-Dell switches are both configured to use RSTP. For the Ethernet - No Spanning Tree
uplinks from the MX9116n FSE to the Cisco Nexus switches, spanning tree must be disabled on the Cisco Nexus ports
connected to the MX switches.

Cisco Nexus 3232C Leaf 1:

configure terminal
hostname 3232C-Leaf1

feature vpc
feature lldp
feature lacp
feature interface-vlan
spanning-tree port type edge bpduguard default
spanning-tree port type network default

interface mgmt0
vrf member management
ip address 100.67.XX.XX/24

vrf context management
ip route 0.0.0.0/0 100.67.XX.XX

Cisco Nexus 3232C Leaf 2:

configure terminal
hostname 3232C-Leaf2

feature vpc
feature lldp
feature lacp
feature interface-vlan
spanning-tree port type edge bpduguard default
spanning-tree port type network default

interface mgmt0
vrf member management
ip address 100.67.YY.YY/24

vrf context management
ip route 0.0.0.0/0 100.67.YY.YY

Enter the following commands to create a virtual port channel (vPC) domain and assign the keepalive destination to the peer
switch management IP. Then create a port channel for the vPC peer link and assign the appropriate switchport interfaces.



Cisco Nexus 3232C Leaf 1:

vpc domain 255
peer-keepalive destination 100.67.YY.YY

interface port-channel255
switchport
switchport mode trunk
vpc peer-link

interface Ethernet1/29
description vPC Interconnect
switchport
switchport mode trunk
channel-group 255 mode active
no shutdown

interface Ethernet1/31
description vPC Interconnect
switchport
switchport mode trunk
channel-group 255 mode active
no shutdown

Cisco Nexus 3232C Leaf 2 is configured identically, except that its peer-keepalive destination is 100.67.XX.XX.

Configure the required VLANs on each switch. In this deployment example, the tagged VLAN used is VLAN 10, and the
untagged VLAN used is VLAN 1. Disable spanning tree on the VLANs.

Cisco Nexus 3232C Leaf 1 and Leaf 2 (identical configuration on both switches):

interface vlan1
description "Default VLAN"
no spanning-tree mode
no shutdown

interface vlan10
description "Company A General Purpose"
no spanning-tree mode
no shutdown

Enter the following commands to configure the port channels to connect to the downstream MX9116n FSEs. Then, exit
configuration mode and save the configuration. Disable spanning tree on the port channel connected to MX9116n FSE.

Cisco Nexus 3232C Leaf 1 and Leaf 2 (identical configuration on both switches):

interface port-channel1
description To MX Chassis
switchport
switchport mode trunk
switchport trunk allowed vlan 1,10
spanning-tree bpduguard enable
spanning-tree port type edge
spanning-tree guard root
vpc 255

interface Ethernet1/1
description To MX Chassis 1
switchport
switchport mode trunk
switchport trunk allowed vlan 1,10
channel-group 1 mode active
no shutdown

interface Ethernet1/3
description To MX Chassis 2
switchport
switchport mode trunk
switchport trunk allowed vlan 1,10
channel-group 1 mode active
no shutdown

end
copy running-configuration startup-configuration

NOTE: If the connections to the MX switches do not come up, see SmartFabric Troubleshooting for troubleshooting steps.

Trunk ports on switches allow tagged traffic to traverse the links. All flooded traffic for the VLAN is sent across trunk ports
to all the switches, even if those switches do not have an associated VLAN. This takes up the network bandwidth with
unnecessary traffic. VLAN or VTP Pruning is the feature that can be used to eliminate this unnecessary traffic by pruning the
VLANs.
Pruning restricts the flooded traffic to only those trunk ports with associated VLANs to optimize the usage of network
bandwidth. If the existing environment is configured for Cisco VTP or VLAN pruning, ensure that the Cisco upstream switches
are configured appropriately. See the Cisco Nexus 3000 Series NX-OS Configuration Guide for additional information.
NOTE: Do not use switchport trunk allow vlan all on the Cisco interfaces. The VLANs must be explicitly
assigned to the interface.

Configuration validation
This section covers the validation of the Cisco Nexus 3232C leaf switches. For information about the Dell Networking MX
switch validation commands, see Common CLI troubleshooting commands for Full Switch and SmartFabric modes.

show vpc
The show vpc command validates the vPC configuration status. The peer adjacency should be formed OK, and the peer
should show as alive. The end of the output shows which VLANs are active across the vPC.

NX3232C-Leaf1# show vpc


Legend:
(*) - local vPC is down, forwarding via vPC peer-link

vPC domain id : 255


Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 inconsistency reason : Consistency Check Not Performed
vPC role : secondary, operational primary
Number of vPCs configured : 1
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
Auto-recovery status : Disabled
Delay-restore status : Timer is off.(timeout = 30s)
Delay-restore SVI status : Timer is off.(timeout = 10s)

vPC Peer-link status


---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------------------------
1 Po255 up 1,10

vPC status
----------------------------------------------------------------------
id Port Status Consistency Reason Active vlans
-- ---- ------ ----------- ------ ------------
255 Po1 up success success 1,10



show vpc consistency-parameters
The show vpc consistency-parameters command displays the configured values on all interfaces in the vPC. The
displayed configurations are only those that can prevent the vPC peer link and the vPC from coming up.

NX3232C-Leaf1# show vpc consistency-parameters vpc 255


Legend:
Type 1 : vPC will be suspended in case of mismatch

Name Type Local Value Peer Value


------------- ---- ---------------------- -----------------------
STP Port Type 1 Normal Port Normal Port
STP Port Guard 1 Default Default
STP MST Simulate PVST 1 Default Default
lag-id 1 [(1000, [(1000,
20-4-f-0-cd-1e, 1, 0, 20-4-f-0-cd-1e, 1, 0,
0), (7f9b, 0), (7f9b,
0-23-4-ee-be-ff, 80ff, 0-23-4-ee-be-ff, 80ff,
0, 0)] 0, 0)]
mode 1 active active
delayed-lacp 1 disabled disabled
Speed 1 100 Gb/s 100 Gb/s
Duplex 1 full full
Port Mode 1 trunk trunk
Native Vlan 1 1 1
MTU 1 1500 1500
Dot1q Tunnel 1 no no
Switchport Isolated 1 0 0
vPC card type 1 N9K TOR N9K TOR
Allowed VLANs - 1,10 1,10
Local suspended VLANs - - -

show lldp neighbors


The show lldp neighbors command provides information about LLDP neighbors. In this example, Eth1/1 and Eth1/3 are
connected to the two MX9116n FSEs, C160A2 and C140A1. The remaining links, Eth1/29 and Eth1/31, represent the vPC
connection.

NX3232C-Leaf1(config)# show lldp neighbors


Capability codes:
(R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
(W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other
Device ID Local Intf Hold-time Capability Port ID
S3048-ON mgmt0 120 PBR ethernet1/1/45
C160A2 Eth1/1 120 PBR ethernet1/1/41
C140A1 Eth1/3 120 PBR ethernet1/1/41
NX3232C-Leaf2 Eth1/29 120 BR Ethernet1/29
NX3232C-Leaf2 Eth1/31 120 BR Ethernet1/31
Total entries displayed: 5

show smartfabric uplinks


The show smartfabric uplinks command is used to verify the uplinks configured across the nodes in the fabric. It
displays the name, description, ID, media type, native VLAN, configured interfaces, and network profile associated with the
fabric. Run this command on the MX9116n FSE. The following output shows that the uplink created is an Ethernet - No
Spanning Tree uplink.

MX9116n-A1# show smartfabric uplinks


----------------------------------------------------------
Name : Uplink 1
Description :
ID : 3d4f2222-f082-43c1-b034-b14a8df3a172
Media Type : Ethernet - No Spanning Tree
Native Vlan : 1
Untagged-network :
Networks : 9418125b-5f1f-48d7-8b5d-648b0977c643



Configured-Interfaces : 87QNMR2:ethernet1/1/41, 87QNMR2:ethernet1/1/42
8XRJ0T2:ethernet1/1/41, 8XRJ0T2:ethernet1/1/42
----------------------------------------------------------



Scenario 3: SmartFabric deployment with S5232F-ON
upstream switches with legacy Ethernet uplink
The following figure shows a topology using a pair of Dell PowerSwitch S5232F-ON switches as upstream switches; any
SmartFabric OS10 switches can be used. This section walks through configuring the S5232F-ON and validating the
configuration.
NOTE: For information related to the same scenario using the Ethernet - No Spanning Tree uplink (recommended), see
Scenario 1: SmartFabric deployment with S5232F-ON upstream switches with Ethernet - No Spanning Tree uplink.

Figure 191. SmartFabric with Dell PowerSwitch S5232F-ON leaf switches

NOTE: See the Supported cables and optical connectors for more information about the QSFP28-DD cables.



Dell PowerSwitch S5232F-ON configuration
This section outlines the configuration commands issued to the Dell PowerSwitch S5232F-ON switches. The switches start with
their factory default settings as indicated in the Reset SmartFabric OS10 switch to factory defaults section.
NOTE: The MX IOMs run Rapid Per-VLAN Spanning Tree Plus (RPVST+) by default. RPVST+ runs RSTP on each VLAN
while RSTP runs a single instance of spanning tree across the default VLAN. The Dell PowerSwitch S5232F-ON used in this
example runs SmartFabric OS10 and has RPVST+ enabled by default.

NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
There are four steps to configure the S5232F-ON upstream switches:
1. Set the switch hostname and management IP address.
2. Configure the VLT between the switches.
3. Configure the VLANs.
4. Configure the port channels to connect to the MX switches.
Use the following commands to set the hostname, and to configure the OOB management interface and default gateway.

S5232F-ON Leaf 1:

configure terminal
hostname S5232-Leaf1

interface mgmt 1/1/1
no ip address dhcp
no shutdown
ip address 100.67.XX.XX/24

management route 0.0.0.0/0 100.67.XX.XX

S5232F-ON Leaf 2:

configure terminal
hostname S5232-Leaf2

interface mgmt 1/1/1
no ip address dhcp
no shutdown
ip address 100.67.YY.YY/24

management route 0.0.0.0/0 100.67.YY.YY

NOTE: Use the spanning-tree {vlan vlan-id priority priority-value} command to set the bridge
priority for the upstream switches. The bridge priority ranges from 0 to 61440 in increments of 4096. For example, to
make S5232F-ON Leaf 1 the root bridge for VLAN 10, enter the command spanning-tree vlan 10 priority 4096.
Configure the VLT between the switches using the following commands. VLT configuration involves setting a discovery interface
range and discovering the VLT peer across the VLTi. The vlt-domain block configures the peer switch as the backup destination.

S5232F-ON Leaf 1 S5232F-ON Leaf 2

interface range ethernet1/1/29-1/1/31 interface range ethernet1/1/29-1/1/31


description VLTi description VLTi
no shutdown no shutdown
no switchport no switchport

vlt-domain 1 vlt-domain 1
backup destination 100.67.YY.YY backup destination 100.67.XX.XX
discovery-interface ethernet1/1/29-1/1/31 discovery-interface ethernet1/1/29-1/1/31

Configure the required VLANs on each switch. In this deployment example, the VLAN used is VLAN 10.

S5232F-ON Leaf 1 S5232F-ON Leaf 2

interface vlan10 interface vlan10


description “Company A General Purpose” description “Company A General Purpose”
no shutdown no shutdown

Configure the port channels that connect to the downstream switches. The LACP protocol is used to create the dynamic LAG.
Trunk ports allow tagged VLANs to traverse the trunk link. In this example, the trunk is configured to allow VLAN 10.

S5232F-ON Leaf 1 S5232F-ON Leaf 2

interface port-channel1 interface port-channel1


description "To MX Chassis" description "To MX Chassis"
no shutdown no shutdown
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan10 switchport trunk allowed vlan10
vlt-port-channel 1 vlt-port-channel 1

interface ethernet1/1/1 interface ethernet1/1/1


description "To MX Chassis-1" description "To MX Chassis-1"
no shutdown no shutdown
no switchport no switchport
channel-group 1 mode active channel-group 1 mode active

interface ethernet1/1/3 interface ethernet1/1/3


description "To MX Chassis-2" description "To MX Chassis-2"
no shutdown no shutdown
no switchport no switchport
channel-group 1 mode active channel-group 1 mode active

end end
write memory write memory
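Before moving on to validation, the LACP port channel state can be spot-checked on either leaf. A minimal sketch, assuming SmartFabric OS10 output formatting similar to the following:

S5232-Leaf1# show port-channel summary

Flags: D - Down  I - member up but inactive  P - member up and active  U - Up (port-channel)
--------------------------------------------------------------------------------
Group Port-Channel        Type     Protocol  Member Ports
--------------------------------------------------------------------------------
1     port-channel1 (U)   Eth      DYNAMIC   1/1/1(P) 1/1/3(P)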

Dell PowerSwitch S5232F-ON validation


This section contains validation commands for the Dell PowerSwitch S5232F-ON leaf switches.

show vlt
The show vlt command validates the VLT configuration status when the VLTi Link Status is up. The role of one switch in the
VLT pair is primary, and its peer switch (not shown) is assigned the secondary role.

S5232-Leaf1# show vlt 1


Domain ID : 1
Unit ID : 1
Role : primary
Version : 1.0
Local System MAC address : 4c:76:25:e8:f2:c0
VLT MAC address : 4c:76:25:e8:f2:c0
IP address : fda5:74c8:b79e:1::1
Delay-Restore timer : 90 seconds
Peer-Routing : Disabled
Peer-Routing-Timeout timer : 0 seconds
VLTi Link Status
port-channel1000 : up

VLT Peer Unit ID System MAC Address Status IP Address Version


--------------------------------------------------------------------------------
2 4c:76:25:e8:e8:40 up fda5:74c8:b79e:1::2 1.0

show lldp neighbors
The show lldp neighbors command provides information about connected devices. In this case, ethernet1/1/1 and
ethernet1/1/3 connect to the two MX9116n FSEs, C160A2 and C140A1. The remaining links, ethernet1/1/29 and
ethernet1/1/31, represent the VLTi connection.

S5232-Leaf1# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
----------------------------------------------------------------
ethernet1/1/1 C160A2 ethernet1/1/41 20:04:0f:00:a1:9e
ethernet1/1/3 C140A1 ethernet1/1/41 20:04:0f:00:cd:1e
ethernet1/1/29 S5232-Leaf2 ethernet1/1/29 4c:76:25:e8:e8:40
ethernet1/1/31 S5232-Leaf2 ethernet1/1/31 4c:76:25:e8:e8:40

show spanning-tree brief


The show spanning-tree brief command validates that STP is enabled on the leaf switches. All the interfaces are
forwarding (FWD), as shown in the Sts column.

S5232-Leaf1# show spanning-tree brief


Spanning tree enabled protocol rapid-pvst
VLAN 1
Executing IEEE compatible Spanning Tree Protocol
Root ID Priority 32768, Address 2004.0f00.a19e
Root Bridge hello time 2, max age 20, forward delay 15
Bridge ID Priority 32769, Address 4c76.25e8.f2c0
Configured hello time 2, max age 20, forward delay 15
Flush Interval 200 centi-sec, Flush Invocations 432
Flush Indication threshold 0 (MAC flush optimization is disabled)
Interface Designated
Name PortID Prio Cost Sts Cost Bridge ID PortID
--------------------------------------------------------------------------------
port-channel1 128.2517 128 50 FWD 0 32768 2004.0f00

Interface
Name Role PortID Prio Cost Sts Cost Link-type Edge
--------------------------------------------------------------------------------
port-channel1 Root 128.2517 128 50 FWD 0 AUTO No

VLAN 10
Executing IEEE compatible Spanning Tree Protocol
Root ID Priority 32778, Address 4c76.25e8.e840
Root Bridge hello time 2, max age 20, forward delay 15
Bridge ID Priority 32778, Address 4c76.25e8.f2c0
Configured hello time 2, max age 20, forward delay 15
Flush Interval 200 centi-sec, Flush Invocations 5
Flush Indication threshold 0 (MAC flush optimization is disabled)
Interface Designated
Name PortID Prio Cost Sts Cost Bridge ID PortID
--------------------------------------------------------------------------------
port-channel1 128.2517 128 50 FWD 1 32768 2004.0f00
Interface
Name Role PortID Prio Cost Sts Cost Link-type Edge
--------------------------------------------------------------------------------
port-channel1 Root 128.2517 128 50 FWD 1 AUTO No

Scenario 4: SmartFabric connected to Cisco Nexus
3232C switches with legacy Ethernet uplink
The figure below shows a topology using a pair of Cisco Nexus 3232C switches as leaf switches, but other Cisco Nexus switches
may be used. This section details the configuration of the Cisco Nexus 3232C switches and the creation of a SmartFabric with
the corresponding legacy Ethernet uplinks.
NOTE: For information related to the same scenario using Ethernet - No Spanning Tree uplink, see Scenario 2: SmartFabric
connected to Cisco Nexus 3232C switches with Ethernet - No Spanning Tree uplink.

Figure 192. SmartFabric with Cisco Nexus 3232C leaf switches

NOTE: See Supported cables and optical connectors for more information about the QSFP28-DD cables.

Cisco Nexus 3232C switch configuration


This section outlines the configuration commands that are issued to the Cisco Nexus 3232C leaf switches.
NOTE: While this configuration example is specific to the Cisco Nexus 3232C switch, the same concepts apply to other
Cisco Nexus and IOS switches.
The switches start at their factory default settings, as described in the Reset Cisco Nexus 3232C to factory defaults section.

NOTE: The MX IOMs run Rapid per-VLAN Spanning Tree Plus (RPVST+) by default. Ensure the Cisco and Dell switches
are configured to use compatible STP protocols. The mode of STP on the Cisco switch can be set using the command
spanning-tree mode, which is shown below. In this deployment example, the default VLAN is VLAN 1 and the created VLAN is
VLAN 10. See the Cisco Nexus 3000 Series NX-OS Configuration Guide for more details.

NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.
There are four steps to configure the 3232C upstream switches:
1. Set the switch hostname and management IP address, and enable the required features and spanning tree mode.
2. Configure vPC between the switches.
3. Configure the VLANs.
4. Configure the downstream port channels to connect to the MX switches.
Enter the following commands to set the hostname, enable required features, and enable RPVST spanning tree mode. Configure
the management interface and default gateway.

Cisco Nexus 3232C Leaf 1 Cisco Nexus 3232C Leaf 2

configure terminal configure terminal

hostname 3232C-Leaf1 hostname 3232C-Leaf2

feature vpc feature vpc


feature lldp feature lldp
feature lacp feature lacp

spanning-tree mode rapid-pvst spanning-tree mode rapid-pvst

interface mgmt0 interface mgmt0


vrf member management vrf member management
ip address 100.67.XX.XX/24 ip address 100.67.YY.YY/24

vrf context management vrf context management


ip route 0.0.0.0/0 100.67.XX.XX ip route 0.0.0.0/0 100.67.YY.YY

Enter the following commands to create a virtual port channel (vPC) domain and assign the keepalive destination to the peer
switch management IP. Then create a port channel for the vPC peer link and assign the appropriate switchport interfaces.

Cisco Nexus 3232C Leaf 1 Cisco Nexus 3232C Leaf 2

vpc domain 255 vpc domain 255


peer-keepalive destination 100.67.YY.YY peer-keepalive destination 100.67.XX.XX

interface port-channel255 interface port-channel255


switchport switchport
switchport mode trunk switchport mode trunk
vpc peer-link vpc peer-link

interface Ethernet1/29 interface Ethernet1/29


description vPC Interconnect description vPC Interconnect
switchport switchport
switchport mode trunk switchport mode trunk
channel-group 255 mode active channel-group 255 mode active
no shutdown no shutdown

interface Ethernet1/31 interface Ethernet1/31


description vPC Interconnect description vPC Interconnect
switchport switchport
switchport mode trunk switchport mode trunk

channel-group 255 mode active channel-group 255 mode active
no shutdown no shutdown

Enter the following commands to configure the port channels to connect to the downstream MX9116n FSEs. Then, exit
configuration mode and save the configuration.

Cisco Nexus 3232C Leaf 1 Cisco Nexus 3232C Leaf 2

interface port-channel1 interface port-channel1


description To MX Chassis description To MX Chassis
switchport switchport
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 1,10 switchport trunk allowed vlan 1,10
vpc 255 vpc 255

interface Ethernet1/1 interface Ethernet1/1


description To MX Chassis 1 description To MX Chassis 1
switchport switchport
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 1,10 switchport trunk allowed vlan 1,10
channel-group 1 mode active channel-group 1 mode active
no shutdown no shutdown

interface Ethernet1/3 interface Ethernet1/3


description To MX Chassis 2 description To MX Chassis 2
switchport switchport
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 1,10 switchport trunk allowed vlan 1,10
channel-group 1 mode active channel-group 1 mode active
no shutdown no shutdown

end end
copy running-configuration startup- copy running-configuration startup-
configuration configuration

NOTE: If the connections to the MX switches do not come up, see SmartFabric Troubleshooting for troubleshooting steps.

Trunk ports on switches allow tagged traffic to traverse the links. All flooded traffic for a VLAN is sent across trunk ports to
every switch, even switches that do not carry that VLAN, consuming network bandwidth with unnecessary traffic. VLAN or VTP
pruning can be used to eliminate this unnecessary traffic.
Pruning restricts the flooded traffic to only those trunk ports with associated VLANs to optimize the usage of network
bandwidth. If the existing environment is configured for Cisco VTP or VLAN pruning, ensure that the Cisco upstream switches
are configured appropriately. See the Cisco Nexus 3000 Series NX-OS Configuration Guide for additional information.
NOTE: Do not use switchport trunk allowed vlan all on the Cisco interfaces. The VLANs must be explicitly
assigned to the interface.
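For example, a minimal sketch of explicitly adding a VLAN to the existing trunk (VLAN 20 is a hypothetical addition, not part of this deployment):

vlan 20
interface port-channel1
 switchport trunk allowed vlan add 20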

Configuration validation
This section covers the validation of the Cisco Nexus 3232C leaf switches. For information about the Dell Networking MX
switch validation commands, see Common CLI troubleshooting commands for Full Switch and SmartFabric modes.

show vpc
The show vpc command validates the vPC configuration status. The peer adjacency should be formed OK, and the peer should
show as alive. The end of the output shows which VLANs are active across the vPC.

NX3232C-Leaf1# show vpc


Legend:

(*) - local vPC is down, forwarding via vPC peer-link

vPC domain id : 255


Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 inconsistency reason : Consistency Check Not Performed
vPC role : secondary, operational primary
Number of vPCs configured : 1
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
Auto-recovery status : Disabled
Delay-restore status : Timer is off.(timeout = 30s)
Delay-restore SVI status : Timer is off.(timeout = 10s)

vPC Peer-link status


---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------------------------
1 Po255 up 1,10

vPC status
----------------------------------------------------------------------
id Port Status Consistency Reason Active vlans
-- ---- ------ ----------- ------ ------------
255 Po1 up success success 1,10

show vpc consistency-parameters


The show vpc consistency-parameters command displays the configured values on all interfaces in the vPC. Only those
parameters that can prevent the vPC peer link and the vPC from coming up are displayed.

NX3232C-Leaf1# show vpc consistency-parameters vpc 255


Legend:
Type 1 : vPC will be suspended in case of mismatch

Name Type Local Value Peer Value


------------- ---- ---------------------- -----------------------
STP Port Type 1 Normal Port Normal Port
STP Port Guard 1 Default Default
STP MST Simulate PVST 1 Default Default
lag-id 1 [(1000, [(1000,
20-4-f-0-cd-1e, 1, 0, 20-4-f-0-cd-1e, 1, 0,
0), (7f9b, 0), (7f9b,
0-23-4-ee-be-ff, 80ff, 0-23-4-ee-be-ff, 80ff,
0, 0)] 0, 0)]
mode 1 active active
delayed-lacp 1 disabled disabled
Speed 1 100 Gb/s 100 Gb/s
Duplex 1 full full
Port Mode 1 trunk trunk
Native Vlan 1 1 1
MTU 1 1500 1500
Dot1q Tunnel 1 no no
Switchport Isolated 1 0 0
vPC card type 1 N9K TOR N9K TOR
Allowed VLANs - 1,10 1,10
Local suspended VLANs - - -

show lldp neighbors
The show lldp neighbors command provides information about LLDP neighbors. In this example, Eth1/1 and Eth1/3 are
connected to the two MX9116n FSEs, C160A2 and C140A1. The remaining links, Eth1/29 and Eth1/31, represent the vPC
connection.

NX3232C-Leaf1(config)# show lldp neighbors


Capability codes:
(R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
(W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other
Device ID Local Intf Hold-time Capability Port ID
S3048-ON mgmt0 120 PBR ethernet1/1/45
C160A2 Eth1/1 120 PBR ethernet1/1/41
C140A1 Eth1/3 120 PBR ethernet1/1/41
NX3232C-Leaf2 Eth1/29 120 BR Ethernet1/29
NX3232C-Leaf2 Eth1/31 120 BR Ethernet1/31
Total entries displayed: 5

show spanning-tree summary


The show spanning-tree summary command validates that STP is enabled on the leaf switches. All interfaces are shown
as forwarding.

NX3232C-Leaf1# show spanning-tree summary


Switch is in rapid-pvst mode
Root bridge for: VLAN0010
Port Type Default is disable
Edge Port [PortFast] BPDU Guard Default is disabled
Edge Port [PortFast] BPDU Filter Default is disabled
Bridge Assurance is enabled
Loopguard Default is disabled
Pathcost method used is short
STP-Lite is disabled

Name Blocking Listening Learning Forwarding STP Active


---------------------- -------- --------- -------- ---------- ----------
VLAN0001 0 0 0 2 2
VLAN0010 0 0 0 2 2
---------------------- -------- --------- -------- ---------- ----------
2 vlans 0 0 0 4 4

Scenario 5: Connect MX9116n FSE to Fibre Channel
storage - NPIV Proxy Gateway mode
This section discusses a method for connecting the MX9116n FSE to an FC storage array through existing FC switches, using
NPIV Proxy Gateway (NPG) mode. NPG mode allows for larger SAN deployments by aggregating I/O traffic at the NPG switch.

Figure 193. FC (NPG) network to Dell PowerStore 1000T

SmartFabric mode
This scenario shows attachment to an existing FC switch infrastructure. Configuration of the existing FC switches is beyond the
scope of this document.

NOTE: The MX5108n Ethernet Switch does not support this feature.

This example assumes that an existing SmartFabric has been created and is fully operational. For instructions on creating a
SmartFabric, see SmartFabric Creation.
To configure NPG mode on an existing SmartFabric, the following steps are completed using the OME-M console:
1. Connect the MX9116n FSE to the FC SAN.
CAUTION: Ensure that the cables do not criss-cross between the switches.

Make sure that chassis are in a Multi-Chassis Management group. For instructions, find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.
2. Define the FCoE VLANs to use in the fabric. For instructions, see Define VLANs.
3. If necessary, create the Identity Pools. See Create identity pools for more information about how to create identity pools.
4. Configure the physical switch ports for FC operation. See Configure Fibre Channel universal ports for instructions.
5. Create the FC Gateway uplinks. For instructions, see Create Fibre Channel uplinks.
6. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment for more information.
Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC SAN. The system is
now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T for setting up storage logical unit numbers (LUNs).
NOTE: For information related to use cases and configuring Ethernet – No Spanning Tree uplink with different tagged and
untagged VLANs, see Ethernet – No Spanning Tree uplink.

NOTE: When MX9116n FSEs are in NPG mode, connecting to more than one SAN is possible only in Full Switch mode, by
creating multiple vFabrics, each with its own NPG gateway. However, an individual server can connect to only one vFabric
at a time, so one server cannot see both SANs.

Full switch mode
This section contains the Full Switch mode switch configuration of MX I/O modules in NPG mode. Configuration of the existing
FC switches is beyond the scope of this document.

NOTE: The MX5108n Ethernet Switch does not support this feature.

To configure the MX IOMs in Full Switch mode through the CLI, follow these steps:
1. Verify that the MX9116n FSE is in Full Switch mode by running the show switch-operating-mode command (a sample check is shown after these steps).
2. Connect the MX9116n FSE to the FC SAN.
CAUTION: Ensure that the cables do not criss-cross between the switches.
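A minimal check for step 1, assuming the hostname configured later in this section; the output reflects typical SmartFabric OS10 formatting:

MX9116-B1# show switch-operating-mode
Switch-Operating-Mode : Full Switch Mode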

Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC SAN. The system is
now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T for setting up storage logical unit numbers (LUNs).
NOTE: When MX9116n FSEs are in NPG mode, connecting to more than one SAN is possible only in Full Switch mode, by
creating multiple vFabrics, each with its own NPG gateway. However, an individual server can connect to only one vFabric
at a time, so one server cannot see both SANs.
Configure global switch settings
Run the following commands to configure the switch hostname, OOB management IP address, and OOB management default
gateway.

MX9116-B1 MX9116-B2

configure terminal configure terminal

hostname MX9116-B1 hostname MX9116-B2

interface mgmt 1/1/1 interface mgmt 1/1/1


no ip address dhcp no ip address dhcp
ip address 100.67.XX.XX/24 ip address 100.67.YY.YY/24
no shutdown no shutdown

management route 0.0.0.0/0 100.67.XX.XX management route 0.0.0.0/0 100.67.YY.YY

Configure FC port group and speed


Configure the port group for the FC interfaces used to connect to storage. In the deployment example here, port-group 1/1/16
is configured for breakout from 1x64 GFC to 4x16 GFC.

MX9116-B1 MX9116-B2

configure terminal configure terminal


port-group 1/1/16 port-group 1/1/16
mode FC 16g-4x mode FC 16g-4x
exit exit
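The resulting mode can be verified with the show port-group command; confirm that port-group 1/1/16 reports mode FC 16g-4x (the exact column layout varies by OS10 release):

MX9116-B1# show port-group | grep 1/1/16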

Configure VLTi
Configure VLTi on ports 37 through 40 on the MX9116n FSE. This establishes the connection between the two MX IOMs.

MX9116-B1 MX9116-B2

interface range ethernet 1/1/37-1/1/40 interface range ethernet 1/1/37-1/1/40


description VLTi description VLTi
no switchport no switchport

vlt-domain 1 vlt-domain 1
backup destination 100.67.YY.YY backup destination 100.67.XX.XX
discovery-interface ethernet 1/1/37-1/1/40 discovery-interface ethernet 1/1/37-1/1/40
peer-routing peer-routing

NPG FC or FCoE configuration
For each IOM, define the VLANs and virtual fabrics. The global feature fc npg command enables NPG mode on the switch.
Create the FCoE VLANs and a vFabric for each SAN.

MX9116-B1 MX9116-B2

dcbx enable dcbx enable


feature fc npg feature fc npg

interface vlan 30 interface vlan 40


description FC_B1 description FC_B2
no shutdown no shutdown

vfabric 101 vfabric 102


vlan 30 vlan 40
fcoe fcmap 0xEFC00 fcoe fcmap 0xEFC01

Configure upstream interfaces


Configure the IOM FC uplink connections to the existing FC switches. In this deployment example, FC ports 1/1/44:1 and
1/1/44:2 are configured for upstream FC switch connections.

MX9116-B1 MX9116-B2

interface fibrechannel 1/1/44:1 interface fibrechannel 1/1/44:1


description uplink1_to_FC_switch description uplink1_to_FC_switch
vfabric 101 vfabric 102
no shutdown no shutdown

interface fibrechannel 1/1/44:2 interface fibrechannel 1/1/44:2


description uplink2_to_FC_switch description uplink2_to_FC_switch
vfabric 101 vfabric 102
no shutdown no shutdown

Configure downstream interfaces


Configure the IOM ports connected to the MX compute sleds. In this deployment example, ports 1/1/1 and 1/1/3 are
configured for downstream connections.

MX9116-B1 MX9116-B2

interface ethernet 1/1/1 interface ethernet 1/1/1


description MX_ComputeSled_1 description MX_ComputeSled_1
switchport access vlan 1 switchport access vlan 1
vfabric 101 vfabric 102
no shutdown no shutdown

interface ethernet 1/1/3 interface ethernet 1/1/3


description MX_ComputeSled_2 description MX_ComputeSled_2
Switchport access vlan 1 Switchport access vlan 1
vfabric 101 vfabric 102
no shutdown no shutdown

Configure UFD
Uplink Failure Detection, or UFD, is recommended on all server-facing interfaces and upstream interfaces.

MX9116-B1 MX9116-B2

uplink-state-group 1 uplink-state-group 1
name "UFD_Group_1" name "UFD_Group_1"
downstream ethernet1/1/1-1/1/3 downstream ethernet1/1/1-1/1/3
upstream fibrechannel1/1/44:1-1/1/44:2 upstream fibrechannel1/1/44:1-1/1/44:2
enable enable
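The UFD state can then be confirmed on each IOM; a minimal sketch, assuming the group number configured above (output formatting varies by OS10 release):

MX9116-B1# show uplink-state-group 1
Uplink State Group: 1, Name: UFD_Group_1, Status: Enabled, Up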

Configuration validation
show fcoe sessions
The show fcoe sessions command shows active FCoE sessions. The output includes MAC addresses, Ethernet interfaces,
the FCoE VLAN ID, FC IDs, and WWPNs of logged-in CNAs.

NOTE: Due to the width of the command output, each line of output is shown on two lines below.

C140A1# show fcoe sessions


Enode MAC Enode Interface FCF MAC FCF interface VLAN FCoE
MAC FC-ID PORT WWPN PORT WWNN
-----------------------------------------------------------------------------------------
----------------------------------------------------------------
06:c3:f9:a4:cd:03 Eth 1/71/1 20:04:0f:00:ce:1d ~ 30
0e:fc:00:01:01:00 01:01:00 20:01:06:c3:f9:a4:cd:00 20:00:06:c3:f9:a4:cd:00
06:3c:f9:a4:cd:01 Eth 1/1/1 20:04:0f:21:d5:7f Fc 1/1/43:2 30
0e:fc:00:01:04:01 01:04:01 20:01:06:3c:f9:a4:cd:01 20:00:06:3c:f9:a4:cd:01

show vfabric
The show vfabric command output provides various information, including the fabric type, the FCoE VLAN and FC-MAP, and
the interfaces that are members of the vfabric.

C140A1# show vfabric


Fabric Name New vfabric
Fabric Type NPG
Fabric Id 101
Vlan Id 30
FC-MAP 0xEFC00
Vlan priority 3
FCF Priority 128
FKA-Adv-Period Enabled,8
Config-State ACTIVE
Oper-State UP
==========================================
Members
fibrechannel1/1/43:1
fibrechannel1/1/43:2
ethernet1/1/1

show fc switch
The show fc switch command verifies the switch mode (NPG in this example) for FC traffic.

C140A1# show fc switch


Switch Mode : NPG
Switch WWN : 10:00:20:04:0f:21:d4:80

Scenario 6: Connect MX9116n FSE to Fibre Channel
storage - FC Direct Attach
This chapter discusses a method for connecting an FC storage array directly to the MX9116n FSE.
On the PowerEdge MX platform, the difference between configuring NPG mode and FC Direct Attach mode on the MX9116n FSE
is the uplink type selected.

Figure 194. Fibre Channel (F_Port) Direct Attach to Dell PowerStore 1000T

SmartFabric mode
This example shows directly attaching a Dell PowerStore 1000T storage array to the MX9116n FSE using universal ports 44:1 and
44:2.

NOTE: The MX5108n Ethernet Switch does not support this feature.

This example assumes that an existing SmartFabric has been created and is fully operational. For instructions on creating a
SmartFabric, see SmartFabric Creation.
To configure FC Direct Attach mode on an existing SmartFabric, the following steps are completed using the OME-M console:
1. Connect the storage array to the MX9116n FSE. Each storage controller is connected to each MX9116n FSE.
● Define FCoE VLANs to use in the fabric. For instructions, see Define VLANs.
● Make sure that chassis are in a Multi-Chassis Management group. For instructions, find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.
2. If necessary, create Identity Pools. See the Create identity pools section for more information about how to create identity
pools.
3. Configure the physical switch ports for FC operation. See the Configure Fibre Channel universal ports section for
instructions.
4. Create the FC Direct Attached uplinks. For more information about creating uplinks, see the Create Fibre Channel uplinks
section.
5. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment for more information.
6. Configure zones and zone sets. See the Managing Fibre Channel Zones on MX9116n FSE section for instructions.
Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC SAN. The system is now
ready to connect to Fibre Channel storage. See Dell PowerStore 1000T for how to create host groups and map volumes to the
target host.

NOTE: The configuration of FC Zones through the CLI is supported while using SmartFabric mode.

NOTE: For information related to use cases and configuring Ethernet - No Spanning Tree uplink with different tagged and
untagged VLANs, see the Ethernet – No Spanning Tree uplink section.

Full switch mode
This section contains the Full Switch mode switch configuration of MX I/O modules connected directly to Dell PowerStore
1000T in Direct-Attached mode.

NOTE: The MX5108n Ethernet Switch does not support this feature.

1. Verify that the MX9116n FSE is in Full Switch mode by running the show switch-operating-mode command.
2. Connect the MX9116n FSE to the FC SAN.
Once the server operating system loads the FCoE, the WWN appears on the fabric and on the FC SAN. The system is now
ready to connect to Fibre Channel storage. See Dell PowerStore 1000T for how to create host groups and map volumes to the
target host.
Configure global switch settings
Configure the switch hostname, OOB management IP address, and OOB management default gateway.

MX9116-B1 MX9116-B2

configure terminal configure terminal

hostname MX9116-B1 hostname MX9116-B2

interface mgmt 1/1/1 interface mgmt 1/1/1


no ip address dhcp no ip address dhcp
ip address 100.67.XX.XX/24 ip address 100.67.YY.YY/24
no shutdown no shutdown
management route 0.0.0.0/0 100.67.XX.XX management route 0.0.0.0/0 100.67.YY.YY

Configure FC port group and speed


Configure the port group for the FC interfaces used to connect to storage. In the deployment example here, port-group 1/1/16
is configured for breakout from 1x64 GFC to 4x16 GFC.

MX9116-B1 MX9116-B2

configure terminal configure terminal


port-group 1/1/16 port-group 1/1/16
mode FC 16g-4x mode FC 16g-4x
exit exit

Configure VLTi
Configure VLTi on ports 37 through 40 on the MX9116n FSE. This establishes the connection between the two MX IOMs.

MX9116-B1 MX9116-B2

interface range ethernet 1/1/37-1/1/40 interface range ethernet 1/1/37-1/1/40


description VLTi description VLTi
no switchport no switchport

vlt-domain 1 vlt-domain 1
backup destination 100.67.YY.YY backup destination 100.67.XX.XX
discovery-interface ethernet 1/1/37-1/1/40 discovery-interface ethernet 1/1/37-1/1/40
peer-routing peer-routing

Direct attached FC or FCoE configuration


For each IOM, define the VLANs and virtual fabrics. The global feature fc domain-id 1 command enables
direct-attached (F_Port) mode on the switch. Create the FCoE VLANs and a vFabric for each SAN.

MX9116-B1 MX9116-B2

dcbx enable dcbx enable


feature fc domain-id 1 feature fc domain-id 1

interface vlan 30 interface vlan 40


description FC_B1 description FC_B2
no shutdown no shutdown

vfabric 101 vfabric 102


vlan 30 vlan 40
fcoe fcmap 0xEFC00 fcoe fcmap 0xEFC01

Configure upstream interfaces


Configure the IOM FC uplink connections to the Dell PowerStore 1000T array. In this deployment example, FC ports 1/1/44:1
and 1/1/44:2 are configured for upstream storage array connections.

MX9116-B1 MX9116-B2

interface fibrechannel 1/1/44:1 interface fibrechannel 1/1/44:1


description uplink1_to_PowerStore description uplink1_to_PowerStore
vfabric 101 vfabric 102
no shutdown no shutdown

interface fibrechannel 1/1/44:2 interface fibrechannel 1/1/44:2


description uplink2_to_PowerStore description uplink2_to_PowerStore
vfabric 101 vfabric 102
no shutdown no shutdown

Configure downstream interfaces


Configure the IOM ports connected to the MX compute sleds. In this deployment example, ports 1/1/1 and 1/1/3 are
configured for downstream connections.

MX9116-B1 MX9116-B2

interface ethernet 1/1/1 interface ethernet 1/1/1


description MX_ComputeSled_1 description MX_ComputeSled_1
switchport access vlan 1 switchport access vlan 1
vfabric 101 vfabric 102
no shutdown no shutdown

interface ethernet 1/1/3 interface ethernet 1/1/3


description MX_ComputeSled_2 description MX_ComputeSled_2
switchport access vlan 1 switchport access vlan 1
vfabric 101 vfabric 102
no shutdown no shutdown

Configure UFD
Uplink Failure Detection, or UFD, is recommended on all server-facing interfaces and upstream interfaces.

MX9116-B1 MX9116-B2

uplink-state-group 1 uplink-state-group 1
name "UFD_Group_1" name "UFD_Group_1"
downstream ethernet1/1/1-1/1/3 downstream ethernet1/1/1-1/1/3
upstream fibrechannel1/1/44:1-1/1/44:2 upstream fibrechannel1/1/44:1-1/1/44:2
enable enable

To configure the Fibre Channel zoning on MX IOMs, see the Managing Fibre Channel zones on MX9116n FSE section.
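As a minimal sketch of that workflow (the zone and zone set names are hypothetical, and the member WWN is the CNA WWPN from the validation output below; see the referenced section for the authoritative procedure):

fc zone zone1
 member wwn 20:01:f4:e9:d4:73:d0:0c
fc zoneset zoneset1
 member zone1
vfabric 101
 zoneset activate zoneset1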

Configuration validation
show fc ns switch
The show fc ns switch command shows all device ports that are logged in to the fabric. In the output below, three devices are
logged in to this switch: one storage port and two CNA ports.

C140A1# show fc ns switch

Total number of devices = 3


Switch Name 10:00:20:04:0f:00:cd:1e
Domain Id 1
Switch Port fibrechannel1/1/44:1
FC-Id 01:00:00
Port Name 58:cc:f0:90:49:20:0c:e7
Node Name 58:cc:f9:90:c9:20:0c:e7
Class of Service 8
Symbolic Port Name PowerSt::::SPA::FC::::::
Symbolic Node Name PowerSt::::SPA::FC::::::
Port Type N_PORT
Registered with NameServer Yes
Registered for SCN Yes

Switch Name 10:00:20:04:0f:00:cd:1e


Domain Id 1
Switch Port ethernet1/71/1
FC-Id 01:01:00
Port Name 20:01:06:c3:f9:a4:cd:03
Node Name 20:00:06:c3:f9:a4:cd:03
Class of Service 8
Symbolic Port Name
Symbolic Node Name
Port Type N_PORT
Registered with NameServer Yes
Registered for SCN Yes

Switch Name 10:00:20:04:0f:00:cd:1e


Domain Id 1
Switch Port ethernet1/1/1
FC-Id 01:02:00
Port Name 20:01:f4:e9:d4:73:d0:0c
Node Name 20:00:f4:e9:d4:73:d0:0c
Class of Service 8
Symbolic Port Name QLogic qedf v8.24.8.0
Symbolic Node Name QLogic qedf v8.24.8.0
Port Type N_PORT
Registered with NameServer Yes
Registered for SCN Yes

show fcoe sessions


The show fcoe sessions command shows active FCoE sessions. The output includes MAC addresses, Ethernet interfaces,
the FCoE VLAN ID, FC IDs, and WWPNs of logged-in CNAs.

NOTE: Due to the width of the command output, each line of output is shown on two lines below.

C140A1# show fcoe sessions


Enode MAC Enode Interface FCF MAC FCF interface VLAN FCoE
MAC FC-ID PORT WWPN PORT WWNN
-----------------------------------------------------------------------------------------
----------------------------------------------------------------
06:c3:f9:a4:cd:03 Eth 1/71/1 20:04:0f:00:ce:1d ~ 30
0e:fc:00:01:01:00 01:01:00 20:01:06:c3:f9:a4:cd:03 20:00:06:c3:f9:a4:cd:03
f4:e9:d4:73:d0:0c Eth 1/1/1 20:04:0f:00:ce:1d ~ 30
0e:fc:00:01:02:00 01:02:00 20:01:f4:e9:d4:73:d0:0c 20:00:f4:e9:d4:73:d0:0c

show vfabric

The show vfabric command output provides various information including the default zone mode, the active zone set, and
interfaces that are members of the vfabric.

C140A1# show vfabric


Fabric Name New vfabric
Fabric Type FPORT
Fabric Id 1
Vlan Id 30
FC-MAP 0xEFC00
Vlan priority 3
FCF Priority 128
FKA-Adv-Period Enabled,8
Config-State ACTIVE
Oper-State UP
==========================================
Switch Config Parameters
==========================================
Domain ID 1
==========================================
Switch Zoning Parameters
==========================================
Default Zone Mode: Allow
Active ZoneSet: None
==========================================
Members
fibrechannel1/1/44:1
ethernet1/1/1
ethernet1/71/1
ethernet1/71/2

show fc switch
The show fc switch command verifies the switch mode (for example, F_Port) for FC traffic.

C140A1# show fc switch


Switch Mode : FPORT
Switch WWN : 10:00:e4:f0:04:6b:04:42

Scenario 7: Connect MX5108n to Fibre Channel
storage - FSB
This chapter provides instructions for connecting either the MX5108n or MX9116n to a Fibre Channel SAN using native FCoE
uplinks. This connection type would be used in an environment where an existing switch such as the Dell PowerSwitch S4148U
has the capability to accept native FCoE and connect to native FC.
Dell SmartFabric OS10 uses a FIP Snooping Bridge (FSB) to detect and manage FCoE traffic and discovers the following
information:
● End nodes (E_Nodes)
● Fibre Channel forwarder (FCF)
● Connections between E_Nodes and FCFs
● Sessions between E_Nodes and FCFs
Using the discovered information, the switch installs ACL entries that provide security and point-to-point link emulation to
ensure that FCoE traffic is handled appropriately.
NOTE: The examples in this chapter use the Dell Networking MX5108n. The same instructions also apply to the MX9116n.

NOTE: An FCoE uplink from the MX5108n or MX9116n must contain only a single port interface from an MX IOM to the
existing FCoE switch, such as the Dell PowerSwitch S4148U shown in the following figures.
The FSB switch can connect to an upstream switch operating in NPG mode:
NOTE: Ensure the STP root bridge is not assigned to any MX IOM when using the legacy Ethernet uplink or FCoE uplink
types in SmartFabric mode, or when using Spanning Tree Protocol (STP) in Full Switch operating mode. For deployments
with MX IOM uplinks connected to a switch with Dell SmartFabric OS10 utilizing Rapid-PVST, the bridge priority can be
configured using the command spanning-tree {vlan vlan-id priority priority-value}. Set the external
switch to the lowest priority-value to force its assignment as the STP root bridge. For additional details on using STP with
OS10 switches, see the Dell OS10 SmartFabric User Guide.

Figure 195. FCoE (FSB) Network to Dell PowerStore 1000T through NPG mode switch

Or operating in F_Port mode:

Figure 196. FCoE (FSB) Network to Dell PowerStore 1000T through F_Port mode switch

NOTE: See the Dell SmartFabric OS10 User Guide for configuring FSB mode globally on the Dell Networking S4148U-ON
switches. Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.

SmartFabric mode
This example assumes that an existing SmartFabric has been created and is fully operational. For instructions on creating a
SmartFabric, see SmartFabric Creation.
To configure FCoE mode on an existing SmartFabric, the following steps are completed using the OME-M console:
1. Connect the MX switch to the S4148U.
CAUTION: Verify that the cables do not criss-cross between the switches.

Make sure that chassis are in a Multi-Chassis Management group. For instructions, find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.
2. Define FCoE VLANs to use in the fabric. For instructions, see the Define VLANs section for more information about defining
the VLANs.
3. If necessary, create Identity Pools. See the Create identity pools for more information.
4. Create the FCoE uplinks. See the Create Fibre Channel uplinks section for more information about creating uplinks.
5. Create and deploy the appropriate server templates to the compute sleds. See Server Deployment for more information.
6. Configure the S4148U switch. See the Dell Networking Fibre Channel Deployment with S4148U-ON in F_port Mode
knowledge base article for more information.
Once the server operating system loads the FCoE driver, the WWN displays on the fabric and on the FC SAN. The system is
now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T to create host groups and map volumes to the
target host.
To validate the configuration, use the same commands that are mentioned in SmartFabric Deployment Validation.

Full switch mode


This section contains the Full Switch mode configuration of the MX I/O modules in FSB mode. Configuration of the
existing FC switches is beyond the scope of this document.
To configure the MX IOMs in Full Switch mode through the CLI, follow these steps:
1. Verify that the MX IOM is in Full Switch mode by running the show switch-operating-mode command.
2. Connect the MX IOM to the upstream S4148U-ON switches.
3. Configure the S4148U switch. See the Dell Networking Fibre Channel Deployment with S4148U-ON in F_port Mode
knowledge base article for more information.
Once the server operating system loads the FCoE driver, the WWN displays on the fabric and on the FC SAN. The system is
now ready to connect to Fibre Channel storage. See Dell PowerStore 1000T to create host groups and map volumes to the
target host.
Configure global switch settings

Configure the switch hostname, OOB management IP address, and OOB management default gateway. Configure the breakout
for the Ethernet interfaces used to connect to the upstream S4148U-ON switches. In this deployment example, port 1/1/11 is
configured to break out from 1x40 GbE to 4x10 GbE.

MX5108-A1 MX5108-A2

configure terminal configure terminal

hostname MX5108-A1 hostname MX5108-A2

interface mgmt 1/1/1 interface mgmt 1/1/1


no ip address dhcp no ip address dhcp
ip address 100.67.XX.XX/24 ip address 100.67.YY.YY/24
no shutdown no shutdown
management route 0.0.0.0/0 100.67.XX.XX management route 0.0.0.0/0 100.67.YY.YY

interface breakout 1/1/11 map 10g-4x interface breakout 1/1/11 map 10g-4x

Configure VLTi
Configure VLTi on ports 9 and 10 on the MX5108n Ethernet switches. By default, port 9 is 40 GbE. Configure breakout on port 10
from 1x100 GbE to 1x40 GbE so that both VLTi links operate at the same speed.

MX5108-A1 MX5108-A2

interface breakout 1/1/10 map 40g-1x interface breakout 1/1/10 map 40g-1x

interface range ethernet 1/1/9-1/1/10 interface range ethernet 1/1/9-1/1/10


description VLTi description VLTi
no switchport no switchport

vlt-domain 1 vlt-domain 1
backup destination 100.67.YY.YY backup destination 100.67.XX.XX
discovery-interface ethernet 1/1/9-1/1/10 discovery-interface ethernet 1/1/9-1/1/10
peer-routing peer-routing

FSB FC or FCoE configuration


On each of the MX IOMs, enable FSB mode by running the feature fip-snooping with-cvl command.

NOTE: This command is mandatory for FSB cascading, port-pinning, and standalone FSB.

MX5108-A1 MX5108-A2

dcbx enable dcbx enable


feature fip-snooping with-cvl feature fip-snooping with-cvl

VLAN configuration
For each IOM, define the VLANs.

MX5108-A1 MX5108-A2

interface Vlan 30 interface Vlan 40


description FC-A1 description FC-A2
fip-snooping enable fip-snooping enable

no shutdown no shutdown

QoS and CoS configuration
The following class, policy, and QoS maps match FCoE traffic on qos-group 3, enable PFC on CoS 3 so that FCoE traffic remains
lossless, and use ETS to allocate 70 percent of bandwidth to the LAN queue and 30 percent to the SAN queue.
MX5108-A1 MX5108-A2

class-map type network-qos fcoematch class-map type network-qos fcoematch


match qos-group 3 match qos-group 3

policy-map type network-qos PFC policy-map type network-qos PFC


class fcoematch class fcoematch
pause pause
pfc-cos 3 pfc-cos 3

class-map type queuing lan class-map type queuing lan


match queue 1 match queue 1
class-map type queuing san class-map type queuing san
match queue 3 match queue 3

policy-map type queuing ETS policy-map type queuing ETS


class lan class lan
bandwidth percent 70 bandwidth percent 70
class san class san
bandwidth percent 30 bandwidth percent 30

qos-map traffic-class TC-Q qos-map traffic-class TC-Q


queue 1 qos-group 0-2,4-7 queue 1 qos-group 0-2,4-7
queue 3 qos-group 3 queue 3 qos-group 3

Configure upstream interfaces

MX5108-A1 MX5108-A2

interface ethernet 1/1/11:1 interface ethernet 1/1/11:1


description "S4148U1_F-Port-1" description "S4148U2_F-Port-1"
switchport access vlan 1 switchport access vlan 1
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 30 switchport trunk allowed vlan 40
priority-flow-control mode on priority-flow-control mode on
service-policy input type network-qos PFC service-policy input type network-qos PFC
service-policy output type queuing ETS service-policy output type queuing ETS
ets mode on ets mode on
qos-map traffic-class TC-Q qos-map traffic-class TC-Q
fip-snooping port-mode fcf fip-snooping port-mode fcf
no shutdown no shutdown

Configure downstream interfaces


Configure the IOM ports connected to the MX compute sleds. In this deployment example, ports 1/1/1 and 1/1/3 are
configured for downstream connections.

MX5108-A1 MX5108-A2

interface ethernet 1/1/1 interface ethernet 1/1/1


description MX_ComputeSled_1 description MX_ComputeSled_1
switchport access vlan 1 switchport access vlan 1
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 30 switchport trunk allowed vlan 40
service-policy input type network-qos PFC service-policy input type network-qos PFC
service-policy output type queuing ETS service-policy output type queuing ETS
qos-map traffic-class TC-Q qos-map traffic-class TC-Q
no shutdown no shutdown

interface ethernet 1/1/3 interface ethernet 1/1/3


description MX_ComputeSled_2 description MX_ComputeSled_2
switchport access vlan 1 switchport access vlan 1
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 30 switchport trunk allowed vlan 40
service-policy input type network-qos PFC service-policy input type network-qos PFC
service-policy output type queuing ETS service-policy output type queuing ETS

qos-map traffic-class TC-Q qos-map traffic-class TC-Q
no shutdown no shutdown

Configure UFD
Uplink Failure Detection, or UFD, is recommended on all server-facing interfaces and upstream interfaces.

MX5108-A1 MX5108-A2

uplink-state-group 1 uplink-state-group 1
name "UFD_Group_1" name "UFD_Group_1"
downstream ethernet1/1/1-1/1/3 downstream ethernet1/1/1-1/1/3
upstream ethernet1/1/11:1-1/1/11:2 upstream ethernet1/1/11:1-1/1/11:2
enable enable
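A dedicated validation section is not repeated here; the FIP-snooped sessions can be checked with the show fcoe sessions command used in the previous scenarios, and the DCBX, PFC, and ETS negotiation with the upstream switch can be reviewed per interface. A minimal sketch, using the uplink interface configured above:

MX5108-A1# show fcoe sessions
MX5108-A1# show lldp dcbx interface ethernet 1/1/11:1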

Scenario 8: Configure boot from SAN


The host operating system of an MX compute sled can boot from a remote FC storage array through the IOMs. Booting an
operating system through FC Direct Attach (F_Port), FC (NPG), or FCoE (FSB) is supported.

Figure 197. Boot from SAN (three methods: Direct Attached (F_port) and FC (NPG) through the MX9116n, and FCoE (FSB) through the MX5108n)

The figure below shows the example topology that is used in this chapter to demonstrate boot from SAN. Steps are provided
to configure the NIC partitioning, system BIOS, FCoE LUN, and operating system install media device required for boot from
SAN.

Figure 198. FCoE boot from SAN

NOTE: See the Dell SmartFabric OS10 User Guide for configuring NPG mode globally on the S4148U-ON switches. Find the
relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.

Configure NIC boot device
In this section, each QLogic CNA port is partitioned into one Ethernet and one FCoE partition.
NOTE: This is only done on CNA ports that carry converged traffic. In this example, these are the two 25 GbE QLogic CNA
ports on each server that attach to the switches internally through an orthogonal connection.
1. Connect to the server's iDRAC in a web browser and launch the virtual console.
2. In the virtual console, select BIOS Setup from the Next Boot menu.
3. Reboot the server.
4. On the System Setup Main Menu, select Device Settings.
5. Select the first CNA port.
6. Select Device Level Configuration.
7. Set the Virtualization Mode to NPAR (if not already set), and click Back.

Figure 199. Virtualization mode to NPAR


8. Choose NIC Partitioning Configuration.
9. Select Partition 1 Configuration.
10. Set NIC + RDMA Mode to Disabled.

Figure 200. Set the value of NIC and RDMA mode


11. Click Back to return.
12. Select Partition 2 Configuration.
13. Set FCoE Mode to Enabled as shown.

Figure 201. FCoE mode to Enabled


14. Click Back and select Back to go to Main Configuration Page.

15. Select NIC Configuration, then set the Boot Protocol to UEFI FCoE, and then click Back.

Figure 202. Set value of Boot Protocol to UEFI FCoE


16. If present, select Partition 3 Configuration in NIC Partitioning Configuration.
17. Set all modes to Disabled and then click Back.
18. If present, select Partition 4 Configuration in NIC Partitioning Configuration.
19. Set all modes to Disabled and then click Back.
20. Select FCoE Configuration.
NOTE: It is not required to set up a VLAN ID in the CNA, as the CNA uses FIP discovery on the untagged
VLAN to obtain the FCoE VLAN.
21. Set Connect 1 to Enabled.
22. Set World Wide Port Name Target 1 to the WWPN of the connected port on the PowerStore 1000T.

Figure 203. FCoE configuration


23. Click Back and then click Finish.
24. When prompted, answer Yes to save changes and click OK in the Success window.
25. Select the second CNA port and repeat the steps in this section for port 2.
26. Click Finish to exit to the System Setup Main Menu.

Configure BIOS settings


To allow boot from SAN, perform the following steps in the system BIOS settings to disable the PXE devices.
1. Select System BIOS from the System Setup Main Menu.
2. Select Network Settings.
3. Click Disable for all PXE Devices.
4. Click Back.
5. Click Finish, click Finish again, then select Yes to exit and reboot.

NOTE: As previously documented, this server configuration may be used to generate a template to deploy to other servers
with identical hardware. When a template is not used, repeat the steps in this chapter for each MX server sled that requires
access to the FC storage.

Connect FCoE LUN


The server should be provisioned to connect to an FCoE boot LUN before moving on. Follow the procedures in Dell PowerStore
1000T to configure and connect to FCoE volumes. Once connected, continue to the steps below to complete the Boot from
SAN configuration.

Set up and install media connection


NOTE: The steps in this section were completed using the iDRAC Java Virtual Console.

1. Connect to the server’s iDRAC in a web browser and launch the virtual console.
2. In the virtual console, from the Virtual Media menu, select Virtual Media.
3. In the virtual console, from the Virtual Media menu, select Map CD/DVD.
4. Click Browse to find the location of the operating system install media then click Map Device.
5. In the virtual console, from the Next Boot menu, select Lifecycle Controller.
6. Reboot the server.

Use Lifecycle Controller to set up operating system driver for


media installation
The installation media for some operating systems does not contain the necessary FCoE drivers to boot from an FCoE LUN. Use
this procedure to create an internal operating system install media device.

NOTE: For VMware ESXi, see the Dell customized media instructions provided on the Dell Technologies Support website.

1. In Lifecycle Controller, select OS Deployment, then select Deploy OS.


2. From the Select an Operating System screen, verify that Boot mode is set to UEFI.
3. Select an operating system to install to the boot LUN.

Figure 204. Lifecycle Controller operating system deployment menu


4. Click Next.
5. Click the Manual Install check box, then click Next.
6. Click Next on the Insert OS Media screen.
7. Click Finish when prompted on the Reboot System screen.
8. The system reboots to the virtual media. Press any key to boot from the install media when prompted.
9. Follow the operating system prompts to install the operating system to the FCoE storage volumes.

Chapter 13: PowerEdge MX 100 GbE solution with external Fabric Switching Engine
The Dell PowerEdge MX platform is advancing its position as the leading high-performance data center infrastructure by
introducing a 100 GbE networking solution. This evolved networking architecture not only provides the benefit of 100 GbE
speed, but also increases the number of MX7000 chassis within a Scalable Fabric.
The 100 GbE networking solution brings a new type of architecture, starting with an external Fabric Switching Engine (FSE).
The Dell PowerSwitch Z9432F-ON is a high-performance external switch, designed for rack-mounting. It features high-density
100/400 GbE ports and serves as the FSE.
The Dell Networking MX8116n Fabric Expander Module (FEM) features two QSFP56-DD ports that each aggregate four
100 GbE server-facing ports. The MX7000 chassis supports up to four MX8116n FEMs, with each FEM connecting to the
Z9432F-ON FSE.

PowerEdge MX models and components for 100 GbE


This section describes the key hardware components for 100 GbE operation within the MX Platform.
For more details on the MX7000 chassis, 25 GbE IOMs, or previous generation compute sleds, see the Dell Technologies
PowerEdge MX Platform Overview section of this document.

Dell Networking MX8116n Fabric Expander Module


The Dell Networking MX8116n FEM acts as an Ethernet repeater, taking signals from an attached compute sled and repeating
them to the associated lane on the external QSFP56-DD connector. The MX8116n FEM provides eight internal 100 GbE
server-facing ports and two external QSFP56-DD interfaces, each carrying up to four 100 Gbps connections out of the chassis.
The MX7000 chassis supports up to four MX8116n FEMs in Fabric A and Fabric B.
For more information about supported slot configurations, see the Supported slot configurations for IOMs. Additionally, for more
information about cable selection, see the PowerEdge MX I/O Guide.

Figure 205. MX8116n FEM

The following MX8116n FEM components are labeled in the preceding figure:
1. Express service tag
2. Power and indicator LEDs
3. Module insertion and removal latch
4. Two QSFP56-DD fabric expander ports

Dell PowerSwitch Z9432F-ON
The Dell PowerSwitch Z9432F-ON fixed switch serves as the designated FSE of the MX platform and can support MX chassis
deployed with 100 GbE or 25 GbE-based compute sleds. The switch comes equipped with 32 QSFP56-DD ports that provide
uplinks, the Virtual Link Trunking interconnect (VLTi), and fabric expansion connections.
The Z9432F-ON provides state-of-the-art, high-density 100/400 GbE ports and a broad range of functionality to meet the
growing demands of modern data center environments. This compact switch offers industry-leading density of either 32 ports
of 400 GbE in QSFP56-DD form factor, 128 ports of 100 GbE, or up to 144 ports of 10/25/50 GbE (through breakout) in a 1RU
design.

Figure 206. Dell PowerSwitch Z9432F-ON

The following are key features of the Z9432F-ON:


● Multi-rate 400 GbE ports support 10/25/40/50/100 GbE
● 25.6 Tbps non-blocking (full duplex) switching fabric delivers line-rate performance under full load
● L2 multipath support using Virtual Link Trunking (VLT) and Routed VLT
● Dell SmartFabric OS10 software enables Dell Technologies Layer 2 and 3 switching and routing protocols with integrated IP
services, quality of service, manageability, and automation features
● Scalable L2 and L3 Ethernet switching with QoS and a full complement of standards-based IPv4 and IPv6 features, including
OSPF and BGP routing support

Dell PowerEdge MX760c compute sled


The Dell PowerEdge MX760c is a two-socket, full-height, single-width compute sled that offers impressive performance and
scalability. The MX760c is ideal for dense virtualization environments and can serve as a foundation for collaborative workloads.
Businesses can install up to eight MX760c sleds in a single MX7000 chassis, which can be combined with compute sleds from
different generations.
Key features of the PowerEdge MX760c include:
● Single-width slot design
● Single or dual CPU (up to 56 cores per processor/socket, with 4 x UPI at 24 GT/s)
● 32 DDR5 DIMM slots with eight memory channels
● 8 x E3.S NVMe (Gen5 x4), 6 x 2.5" SAS/SATA SSDs, or 6 x NVMe (Gen4) SSDs
● BOSS-N1 hardware RAID for boot (2 x M.2 NVMe, internal)
● H965i Performance RAID, SAS/SATA or NVMe RAID
● iDRAC9 with Lifecycle Controller
● Dual-port Mezz 100 GbE on fabrics A/B
● Dual-port and quad-port Mezz 25 GbE on fabrics A/B
● Dual-port FC32G on fabric C

Figure 207. Dell PowerEdge MX760c sled with eight E3.S SSD drives

NOTE: The 100 GbE Dual Port Mezzanine card is also available on the MX750c. For more information, see PowerEdge
MX7000 - front.

PowerEdge Scalable Fabric Architecture


A Multi-Chassis Management group enables multiple chassis to be managed as if they were a single chassis. A PowerEdge MX
Scalable Fabric enables multiple chassis to behave like a single chassis from a networking perspective.
A Scalable Fabric consists of two main components: the Z9432F-ON FSE and the MX8116n FEM. Configurations include a single
MX8116n FEM or a pair of MX8116n FEMs in each chassis for each fabric slot in use, for up to four FEMs per chassis. Each
MX8116n FEM connects to the Z9432F-ON FSE.
There are four deployment options for a Scalable Fabric with the Z9432F-ON FSE. Each option supports a different maximum
number of chassis in a single Scalable Fabric, based on the number of fabrics in each chassis that connect to a single pair of
Z9432F-ON switches.
NOTE: The options described below only apply to deployments containing the MX8116n FEM with Z9432F-ON FSE. For
deployments with a combination of the MX7116n FEM with MX9116n FSE and the MX8116n FEM with Z9432F-ON FSE, see
section 100 GbE combined deployment with legacy IOMs.

100 GbE deployment


Single fabric: All chassis contain compute sleds with 100 GbE dual port NICs on either Fabric A or Fabric B.
Dual fabric: All chassis contain compute sleds with 100 GbE dual port NICs using both Fabrics A and B.

Dual port 25 GbE deployment
Single fabric: All chassis contain compute sleds with 25 GbE dual port NICs on either Fabric A or Fabric B.
Dual fabric: All chassis contain compute sleds with 25 GbE dual port NICs using both Fabrics A and B.

Quad port 25 GbE deployment


Single fabric: All chassis contain compute sleds with 25 GbE quad port NICs on either Fabric A or Fabric B.
Dual fabric: All chassis contain compute sleds with 25 GbE quad port NICs using both Fabrics A and B.

Mixed deployment
A mixed deployment comprises a combination of chassis that include 100 GbE dual port NICs, 25 GbE dual port NICs, or
25 GbE quad port NICs. Each chassis can be deployed with single or dual fabrics.
NOTE: Each individual chassis must only contain compute sleds with NICs at the same speed and number of ports.
The following table lists the maximum number of chassis supported for each deployment option:

Table 28. Maximum number of chassis supported in a Scalable Fabric


Deployment option                  Maximum number of chassis supported
100 GbE, single fabric             14
100 GbE, dual fabric               7
25 GbE dual port, single fabric    14
25 GbE dual port, dual fabric      7
25 GbE quad port, single fabric    8
25 GbE quad port, dual fabric      4
Mixed deployment                   N/A; depends on the type of NICs used in each chassis

100 GbE deployment options


In each of the following topologies, all servers are built with a dual port 100 GbE mezzanine card. The mezzanine card can be
installed in mezzanine slot A, slot B, or both.
When using the 100 GbE mezzanine card, the Z9432F-ON port group should be in unrestricted mode with the port mode set to
100g-4x, as sketched below. For further configuration details, see the examples in the Full Switch section in this chapter.
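For reference, a minimal port-group stanza for a FEM-facing port pair is shown below. The port-group and port numbers are
arbitrary placeholders; complete worked examples appear in the 100 GbE solution configuration examples section of this chapter.

port-group 1/1/1
profile unrestricted
port 1/1/1 mode Eth 100g-4x
port 1/1/2 mode Eth 100g-4x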
NOTE: The following diagrams show the connections for a scalable fabric between the FSE and FEM components. The
diagrams do not show the VLTi connections recommended when in Full Switch mode.

Single fabric
The 100 GbE single fabric topology in the following diagram shows the basic connections for up to eight MX760c server
modules. Each MX760c server module has a 100 GbE mezzanine card installed in its mezzanine A slot.
This example shows Fabric A populated with two MX8116n FEMs. Additional chassis can be added to the deployment using the
same methodology: connecting successive chassis to the Z9432F-ON FSE pair.
For detailed information on MX8116n port mapping, see the MX8116n port mapping section in this chapter.

Figure 208. 100 GbE single fabric topology

Dual fabric, combined fabrics


The 100 GbE dual fabric topology in the following diagram shows the basic connections for up to eight MX760c server modules.
This example shows Fabric A and Fabric B populated with two MX8116n FEMs in each fabric. Additional chassis can be added to
the deployment using the same methodology: connecting successive chassis to the Z9432F-ON FSE pair.
For detailed information on MX8116n port mapping, see the MX8116n port mapping section in this chapter.

Figure 209. 100 GbE dual fabric topology, combined fabrics

Dual fabric, separate fabrics


The 100 GbE dual fabric topology in the following diagram shows the basic connections for up to eight MX760c server modules.
This example shows Fabric A and Fabric B populated with two MX8116n FEMs in each fabric.
This example shows Fabric A and Fabric B connecting to two different networks. The MX760c server module in this case has
two mezzanine cards, with each card connected to a separate network.
Additional chassis can be added to the deployment using the same methodology: connecting successive chassis to the Z9432F-
ON FSE pair.
For detailed information on MX8116n port mapping, see the MX8116n port mapping section in this chapter.

Figure 210. 100 GbE dual fabric, separate fabrics

Dual fabric, single MX8116n in each fabric, separate fabrics


The 100 GbE dual fabric topology in the following diagram shows the basic connections for up to eight MX760c server modules.
This example shows Fabric A and Fabric B populated with a single MX8116n FEM in each fabric. For this option, a single port is in
use on each mezzanine card while the other port is not connected.
This example shows Fabric A and Fabric B connecting to two different networks. The MX760c server module in this case has
two mezzanine cards, each connected to a separate network.
Additional chassis can be added to the deployment using the same methodology: connecting successive chassis to the Z9432F-
ON FSE pair.
For detailed information on MX8116n port mapping, see the MX8116n port mapping section in this chapter.

Figure 211. 100 GbE dual fabric, single FEM, separate fabrics

Topologies for 25 GbE
In each of the following topologies, all servers are built with a dual or quad port 25 GbE mezzanine card. The mezzanine card
can be installed in mezzanine slot A, slot B, or both.
NOTE: QLogic 25 GbE mezzanine cards are not supported with the Z9432F-ON and MX8116n architecture.
When using the 25 GbE dual port mezzanine card, the Z9432F-ON port group should be in unrestricted mode with the port
mode set to 25g-4x. When using the 25 GbE quad port mezzanine card, the Z9432F-ON port group should be in restricted
mode with the port mode set to 25g-8x. The difference is sketched below; for further configuration details, see the examples in
the Full Switch section in this chapter.
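For reference, the two profiles differ as shown below; the port-group and port numbers are arbitrary placeholders, and
complete worked examples appear later in this chapter. In a restricted port group, only the odd port can carry the 25g-8x
breakout, so the even port is left at its default 400g-1x mode.

Dual port 25 GbE NICs (unrestricted profile):

port-group 1/1/1
profile unrestricted
port 1/1/1 mode Eth 25g-4x
port 1/1/2 mode Eth 25g-4x

Quad port 25 GbE NICs (restricted profile):

port-group 1/1/2
profile restricted
port 1/1/3 mode Eth 25g-8x
port 1/1/4 mode Eth 400g-1x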

Single fabric. Applies to both dual port and quad port NICs.
The 25 GbE single fabric topology in the following diagram shows the basic connections for up to eight MX760c server modules.
Each MX760c server module has a 25 GbE mezzanine card installed in its mezzanine A slot.
This example shows Fabric A populated with two MX8116n FEMs. Additional chassis can be added to the deployment using the
same methodology: connecting successive chassis to the Z9432F-ON FSE pair.
For detailed information on MX8116n port mapping, see the MX8116n port mapping section in this chapter.

Figure 212. 25 GbE dual port, single fabric topology

Dual fabric, combined fabrics. Applies to both dual port and quad port NICs.
The 25 GbE dual fabric topology in the following diagram shows the basic connections for up to eight MX760c server modules.
This example shows Fabric A and Fabric B populated with two MX8116n FEMs in each fabric. Additional chassis can be added to
the deployment using the same methodology: connecting successive chassis to the Z9432F-ON FSE pair.
For detailed information on MX8116n port mapping, see the MX8116n port mapping section in this chapter.

Figure 213. 25 GbE dual port, dual fabric topology

Dual fabric, separate fabrics. Applies to both dual port and quad port NICs.
The 25 GbE dual fabric topology in the following diagram shows the basic connections for up to eight MX760c server modules.
This example shows Fabric A and Fabric B populated with two MX8116n FEM in each fabric.
This example shows Fabric A and Fabric B connecting to two different networks. The MX760c server module in this case has
two mezzanine cards, each connected to a separate network.
Additional chassis can be added to the deployment using the same methodology: connecting successive chassis to the Z9432F-
ON FSE pair.
For detailed information on MX8116n port mapping, see the MX8116n port mapping section in this chapter.

Figure 214. 25 GbE dual fabric, separate fabrics

Topologies for 25 GbE and 100 GbE in the same Scalable Fabric
The 100 GbE solution with external Z9432F-ON FSE can support both 25 GbE and 100 GbE within the same deployment. The
Z9432F-ON port groups and port modes can be configured to operate with a variety of speeds and breakouts. The restrictions
on the port groups and operation mode might affect interface capability and deployment options. For configuration examples on
Z9432F-ON port groups and modes, see the Full Switch section in this chapter.

Support for 25 GbE and 100 GbE within the same deployment is limited at the chassis level: every server module in an
individual chassis must use mezzanine cards of the same speed, so a chassis contains either all 25 GbE or all 100 GbE
mezzanine cards, as the sketch below illustrates.
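As a rough sketch (port-group and interface numbers are arbitrary placeholders), a single Z9432F-ON can serve an all-100 GbE
chassis and an all-25 GbE quad port chassis at the same time by giving each FEM-facing port group its own profile and port
mode:

port-group 1/1/6
profile unrestricted
port 1/1/11 mode Eth 100g-4x

port-group 1/1/2
profile restricted
port 1/1/3 mode Eth 25g-8x
port 1/1/4 mode Eth 400g-1x

Here, port 1/1/11 would connect to an MX8116n in the 100 GbE chassis and port 1/1/3 to an MX8116n in the 25 GbE quad port
chassis.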

Single fabric. Applies to dual port 100 GbE and dual port and quad port 25 GbE NICs.
The 100 GbE and 25 GbE single fabric topology in the following diagram shows the basic connections for up to eight MX760c
server modules in each chassis. Each MX760c server module has a 100 GbE or 25 GbE mezzanine card installed in its mezzanine
A slot for each chassis.
This example shows Fabric A populated with two MX8116n FEMs. Additional chassis can be added to the deployment using the
same methodology: connecting successive chassis to the Z9432F-ON FSE pair.

Figure 215. 100 GbE chassis and 25 GbE chassis, single fabric topology

Dual fabric, combined fabrics. Applies to dual port 100 GbE and dual port and quad
port 25 GbE NICs.
The 100 GbE and 25 GbE dual fabric topology in the following diagram shows the basic connections for up to eight MX760c
server modules. Each MX760c server module has a 100 GbE or 25 GbE mezzanine card installed in its mezzanine A and B slots
for each chassis.
This example shows Fabric A and Fabric B populated with two MX8116n FEMs in each fabric. Additional chassis can be added to
the deployment using the same methodology: connecting successive chassis to the Z9432F-ON FSE pair.
For detailed information on MX8116n port mapping, see the MX8116n port mapping section in this chapter.

Figure 216. 100 GbE chassis and 25 GbE chassis, dual fabric topology, combined fabrics

Dual fabric, separate fabrics. Applies to dual port 100 GbE and dual port and quad
port 25 GbE NICs.
The 100 GbE and 25 GbE dual fabric topology in the following diagram shows the basic connections for up to eight MX760c
server modules. Each MX760c server module has a 100 GbE or 25 GbE mezzanine card installed in its mezzanine A and B slots
for each chassis.
This example shows Fabric A and Fabric B connecting to two different networks. The MX760c server module in this case has
two mezzanine cards, with each connected to a separate network.
Additional chassis can be added to the deployment using the same methodology: connecting successive chassis to the Z9432F-
ON FSE pair.
For detailed information on MX8116n port mapping, see the MX8116n port mapping section in this chapter.

Figure 217. 100 GbE chassis and 25 GbE chassis, dual fabric topology, separate fabrics

MX Chassis management wiring


The new 100 GbE solution with external Z9432F-ON FSE has no effect on the MX chassis management wiring for multi-chassis
deployments. The same procedure used to connect multiple chassis into a multi-chassis management (MCM) group applies to
the 100 GbE solution. For details on wiring multiple chassis, see MX Chassis management wiring.

MX8116n management
The MX8116n FEM is the only networking module for the 100 GbE solution that is installed within the MX chassis. The MX8116n
FEM acts as an Ethernet repeater, carrying traffic from the server-facing ports to the external links that connect to the
Z9432F-ON. This section describes some of the management options and update procedures available for the MX8116n.

MX8116n within OME-Modular
The 100 GbE solution for the MX platform does not include a SmartFabric feature. The solution is supported only through Full
Switch mode of the Z9432F-ON external rack switch. However, the MX8116n can still be viewed within the inventory of I/O
Modules.
To view the information available in the OME-M GUI, perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > I/O Modules.
3. Select an MX8116n and click the View Details button to the right of the Inventory screen. The IOM Overview displays for the
device.
The following figures show the standard information on the selected MX8116n, along with the health and power status of the
device. This is the standard overview page for all IOMs within OME-M.

Figure 218. MX8116n Overview details

Figure 219. MX8116n Health and Power status

4. Click the Hardware tab and then select FRU, Device Management Info, and Installed Software to see each detail.

Figure 220. MX8116n FRU information

Figure 221. MX8116n Device Management information

Figure 222. MX8116n Installed Software information

Upgrading the firmware on the MX8116n FEM

The MX8116n FEM runs a Linux-based OS and can be accessed through the RACADM command-line utility. The network
interface of the MX8116n is not configured by default. For details on accessing the Linux prompt, see the Dell OpenManage
Enterprise Modular Edition Version 2.10.00 for PowerEdge MX7000 Chassis RACADM Command Line Reference Guide on the
Documentation tab of the PowerEdge MX7000 support site.
To perform a firmware update with the RACADM command-line utility, use the following steps (a rough sketch of the flow
follows the steps):
1. Access the MX8116n through the RACADM command-line utility.
2. Log in using the OME-M chassis credentials.
3. Follow the instructions in the PowerEdge MX8116n IOM Release Notes to set up the IP interface on the MX8116n and
perform the firmware upgrade.
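As a rough outline of this flow only: the exact connect syntax and slot labels are defined in the RACADM reference guide, and
the command forms below are assumptions for illustration, not verified syntax.

ssh root@<MM-mgmt-IP>    (log in to the chassis management module)
racadm connect IOM-A1    (assumed form; opens a console session to the FEM installed in slot A1)

From the resulting Linux prompt, set up the IP interface and run the firmware upgrade as directed by the release notes.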

Z9432F-ON management
The Z9432F-ON is a standalone rack switch operating in Full Switch mode. The main configuration, status, and life-cycle
maintenance tasks are all performed through the switch's command-line interface (CLI). The CLI is accessed in the same way
as on traditional rack switches and on the current MX9116n and MX5108n IOMs: through the switch's management interface.
For guidance on accessing the CLI and configuring the management interface, see the Dell SmartFabric OS10 User Guide on
the Documentation tab of the SmartFabric OS10 support site.
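After logging in through the management interface, a few standard SmartFabric OS10 commands confirm the platform and its
management settings (the output layout varies by OS10 release):

show version
show system
show running-configuration interface mgmt 1/1/1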
NOTE: The switch replacement process for the Z9432F-ON is the same as the Full Switch IOM replacement process.
See Full Switch mode IO module replacement process for information. Additional details can also be found in the Dell
SmartFabric OS10 User Guide.

Find the relevant version of the User Guide in the Dell Technologies documentation table.

MX8116n FEM port mapping on the Z9432F-ON


The MX8116n FEM can operate at 25 GbE and 100 GbE. The 25 GbE solution can support both dual and quad port NICs, while
the 100 GbE solution is dual port only.
The following sections describe the port mapping for each port mode of operation, showing an example of interface 1/1/1 and
interface 1/1/31 on the Z9432F-ON to connect the MX8116n. The interfaces used are arbitrary for this example and are not a
recommendation for connection order.

Compute sleds with 100 GbE dual port mezzanine cards


The example below shows a single MX8116n that can be installed in any fabric slot, in either fabric A or B. The diagram and table show
only one slot of the fabric connecting to a single Z9432F-ON. Typically, a second MX8116n and Z9432F-ON are part of the
deployment, but they are not shown in this example. The information would be the same for the second pair.
On the Z9432F-ON, the following port group settings are required for 100 GbE dual port mezzanine cards. Each port group
contains two interfaces. The following example shows port groups 1/1/1 and 1/1/16, which contain the port interfaces 1/1/1 and
1/1/31. Within the port-group, the port mode for each port interface can be configured.
The following configuration shows the final state required for 100 GbE dual port mezzanine cards:

port-group 1/1/1
profile unrestricted
port 1/1/1 mode Eth 100g-4x
port 1/1/2 mode Eth 100g-4x

port-group 1/1/16
profile unrestricted
port 1/1/31 mode Eth 100g-4x
port 1/1/32 mode Eth 100g-4x

Once the port modes are configured and the connections are made, the MX8116n ports auto-negotiate to match the port
operating mode of the Z9432F-ON interfaces. The internal server facing ports of the MX8116n auto-negotiate with the
mezzanine card port speed of 100 GbE.
The following figure shows the interface numbering for each sled and corresponding MX8116n port when using a QSFP56-DD
based optic or cable:

Figure 223. Z9432F-ON Port mapping for 100 GbE solution

Sled 1 through sled 4 use Port 2 on the MX8116n, while sled 5 through sled 8 use Port 1. In this example, the interfaces used on
the Z9432F-ON are arbitrary. QSFP56-DD interfaces on the Z9432F-ON can be connected in any order.
NOTE: For the 100 GbE dual port mezzanine card solution, a QSFP56-DD based optic or cable must be used.

Compute sleds with 25 GbE quad port mezzanine cards


This example shows a single MX8116n that can be installed in any fabric slot in either A or B. The diagram and table show
only one slot of the fabric connecting to a single Z9432F-ON. Typically, a second MX8116n and Z9432F-ON are part of the
deployment, but they are not shown in this example. The information would be the same for the second pair.
On the Z9432F-ON, the following port group settings are required for 25 GbE quad port mezzanine cards. Each port group
contains two interfaces. The following example shows port groups 1/1/1 and 1/1/16, which contain the port interfaces 1/1/1 and
1/1/31. For the required 25g-8x port mode operation, the profile must first be set to restricted. This restriction means that the
second (even) port interface in the port group can only operate in a 1x mode, which makes the even ports unsuitable for
connections to the MX8116n. Therefore, only the odd ports can be used for connections to the MX8116n.
Within the port-group, both the profile and the port mode for each port interface can be configured. The following configuration
shows the final state required for 25 GbE quad port mezzanine cards:

port-group 1/1/1
profile restricted
port 1/1/1 mode Eth 25g-8x
port 1/1/2 mode Eth 400g-1x

port-group 1/1/16
profile restricted
port 1/1/31 mode Eth 25g-8x
port 1/1/32 mode Eth 400g-1x

The even ports can be used for links to external networks, and any available port mode can be used; the port mode shown for
the even ports in the example does not need to be kept at that setting.
Once the port modes are configured and the connections are made, the MX8116n ports auto-negotiate to match the port
operating mode of the Z9432F-ON interfaces. The internal server facing ports of the MX8116n auto-negotiate with a mezzanine
card port speed of 25 GbE.
The following figure shows the interface numbering for each sled and corresponding MX8116n port when using a QSFP56-DD
based optic or cable:

Figure 224. Z9432F-ON Port mapping for 25 GbE quad port solution for QSFP56-DD based optics and cables

Sled 1 through sled 4 use Port 2 on the MX8116n, while sled 5 through sled 8 use Port 1. The interfaces used on the
Z9432F-ON in this example are arbitrary. Odd-numbered QSFP56-DD interfaces on the Z9432F-ON can be connected in any
order.
For the 25 GbE quad port mezzanine card solution, there is an option to use QSFP28-DD based optics and cables. Within the
port-group, both the profile and the port mode for each port interface can be configured. The following configuration shows
the final state required for 25 GbE quad port mezzanine cards:

port-group 1/1/1
profile restricted
port 1/1/1 mode Eth 25g-8x
port 1/1/2 mode Eth 400g-1x

port-group 1/1/16
profile restricted
port 1/1/31 mode Eth 25g-8x
port 1/1/32 mode Eth 400g-1x

Once the port modes are configured and the connections are made, the MX8116n ports auto-negotiate to match the port
operating mode of the Z9432F-ON interfaces. The internal server facing ports of the MX8116n auto-negotiate with the
mezzanine card port speed of 25 GbE.
The following figure shows the interface numbering for each sled and corresponding MX8116n port when using a QSFP28-DD
based optic or cable:

Figure 225. Z9432F-ON Port mapping for 25 GbE quad port solution with QSFP28-DD based optics and cables

Compute sleds with 25 GbE dual port mezzanine cards


This example shows a single MX8116n that can be installed in any fabric slot in either A or B. The diagram and table show
only one slot of the fabric connecting to a single Z9432F-ON. Typically, a second MX8116n and Z9432F-ON are part of the
deployment, but they are not shown in this example. The information would be the same for the second pair.
On the Z9432F-ON, the following port group settings are required for 25 GbE dual port mezzanine cards. Each port group
contains two interfaces. The following example shows port groups 1/1/1 and 1/1/16, which contain the port interfaces 1/1/1 and
1/1/31. For the required 25g-4x port mode operation, the profile should stay in the default unrestricted setting. Unlike quad port
deployments, dual port deployments can use both even and odd ports on the Z9432F-ON.
Within the port-group, both the profile and the port mode for each port interface can be configured. The following configuration
shows the final state required for 25 GbE dual port mezzanine cards:

port-group 1/1/1
profile unrestricted
port 1/1/1 mode Eth 25g-4x
port 1/1/2 mode Eth 25g-4x

port-group 1/1/16
profile unrestricted
port 1/1/31 mode Eth 25g-4x
port 1/1/32 mode Eth 25g-4x

Once the port modes are configured and the connections are made, the MX8116n ports auto-negotiate to match the port
operating mode of the Z9432F-ON interfaces. The internal server facing ports of the MX8116n auto-negotiate with the
mezzanine card port speed of 25 GbE.
The following figure shows the interface numbering for each sled and corresponding MX8116n port when using a QSFP56-DD
based optic or cable:

Figure 226. Z9432F-ON Port mapping for 25 GbE dual port solution for QSFP56-DD based optics and cables

Sled 1 through sled 4 use Port 2 on the MX8116n, while sled 5 through sled 8 use Port 1. The interfaces used on the Z9432F-ON
in this example are arbitrary. QSFP56-DD interfaces on the Z9432F-ON can be connected in any order.
For the 25 GbE dual port mezzanine card solution, there is an option to use QSFP28-DD based optics and cables. Within the
port-group, both the profile and the port mode for each port interface can be configured. The following configuration shows
the final state required for 25 GbE dual port mezzanine cards:

port-group 1/1/1
profile unrestricted
port 1/1/1 mode Eth 25g-4x
port 1/1/2 mode Eth 400g-1x

port-group 1/1/16
profile unrestricted
port 1/1/31 mode Eth 25g-4x
port 1/1/32 mode Eth 400g-1x

Once the port modes are configured and the connections are made, the MX8116n ports auto-negotiate to match the port
operating mode of the Z9432F-ON interfaces. The internal server facing ports of the MX8116n auto-negotiate with the
mezzanine card port speed of 25 GbE.
The following figure shows the interface numbering for each sled and corresponding MX8116n port when using a QSFP28-DD
based optic or cable:

Figure 227. Z9432F-ON Port mapping for 25 GbE dual port solution with QSFP28-DD based optics and cables

100 GbE solution configuration examples


The 100 GbE solution, based on the MX8116n and Z9432F-ON, is exclusively a Full Switch configuration. The configuration of
the fabric in the MX chassis is completed on the Z9432F-ON FSE, which is an external rack-mounted switch. For guidance
on accessing the CLI and configuring the management interface, see the Dell SmartFabric OS10 User Guide. Find the relevant
version of the User Guide in the OME-M and OS10 compatibility and documentation table.
This section shows two example deployments. The first example shows a deployment with compute sleds using the 100 GbE
Broadcom BCM57508 dual port mezzanine card. The second example shows a deployment with the 25 GbE Broadcom
BCM57504 quad port mezzanine card. In each deployment example, the Z9432F-ON has uplinks to a ToR switch pair using the
S5232F-ON.

100 GbE solution example


The diagram below shows the connections from the MX8116n inside the MX chassis to the Z9432F-ON switches external to the
MX chassis. Two compute sleds are used, and both sleds contain 100 GbE mezzanine cards. The compute sled in Chassis-01 is
installed in slot 2, and the compute sled in Chassis-02 is installed in slot 7.

Figure 228. 100 GbE solution wiring diagram

NOTE: The diagram above shows only one connection on each MX8116n for simplicity. See the port mapping in the previous
section for details on which MX8116n port serves each compute sled slot.

S5232F-ON Configuration, External Network ToR

The following configuration examples are for the ToR switch pair. The configuration is limited to the minimum basic interfaces
and features for connecting to the MX fabric on the Z9432F-ON switch pair.
NOTE: The interface numbers and VLANs used in these example configurations are arbitrary. Change the configuration
details to suit your deployment requirements.

Configure global switch settings


Configure the switch hostname, OOB management IP address, OOB management default gateway, and NTP server IP address.

S5232-1:

configure terminal
hostname S5232-1
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.XX.XX/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.XX.XX
ntp server 100.67.XX.XX

S5232-2:

configure terminal
hostname S5232-2
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.YY.YY/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.YY.YY
ntp server 100.67.YY.YY

Configure VLTi
For this example deployment, interfaces Ethernet 1/1/29 and 1/1/31 are configured as VLTi ports.

S5232-1:

configure terminal
interface ethernet1/1/29
description VLTi
no shutdown
no switchport
flowcontrol receive off
interface ethernet1/1/31
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet1/1/29,1/1/31
peer-routing

S5232-2:

configure terminal
interface ethernet1/1/29
description VLTi
no shutdown
no switchport
flowcontrol receive off
interface ethernet1/1/31
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet1/1/29,1/1/31
peer-routing

Configure downstream interfaces


To configure downstream interfaces, perform the following steps:
1. Configure the port channel for the Ethernet interfaces connected to the downstream Z9432F-ON switches.
2. Create the in-band production VLANs. A single VLAN-1811 is created for this example and is configured as an access VLAN
on port-channel 10. Interface ports 1/1/9 and 1/1/10 are configured as a member of port-channel 10.
3. Save the running-config to the start-up config.

S5232-1:

configure terminal
interface vlan1811
no shutdown
interface port-channel10
description To-Z9432
no shutdown
mtu 9216
switchport mode access
switchport access vlan 1811
interface range ethernet1/1/9-1/1/10
description To-Z9432
no shutdown
channel-group 10 mode active
no switchport
mtu 9216
flowcontrol receive off
flowcontrol transmit on
write memory

S5232-2:

configure terminal
interface vlan1811
no shutdown
interface port-channel10
description To-Z9432
no shutdown
mtu 9216
switchport mode access
switchport access vlan 1811
interface range ethernet1/1/9-1/1/10
description To-Z9432
no shutdown
channel-group 10 mode active
no switchport
mtu 9216
flowcontrol receive off
flowcontrol transmit on
write memory

Z9432F-ON Configuration, FSE

Configure global switch settings


Configure the switch hostname, OOB management IP address, OOB management default gateway, and NTP server IP address.

Z9432-1:

configure terminal
hostname Z9432-1
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.XX.XX/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.XX.XX
ntp server 100.67.XX.YY

Z9432-2:

configure terminal
hostname Z9432-2
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.YY.YY/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.YY.YY
ntp server 100.67.XX.YY

Port-Groups breakout
To configure the port-groups, configure the following:
● Set breakout port-group 1/1/16 for VLTi interfaces 1/1/31 and 1/1/32 to mode Eth 100g-1x.
● Set breakout port-group 1/1/5 for upstream interfaces 1/1/9 and 1/1/10 to mode Eth 100g-1x.
● Set breakout port-groups 1/1/6 and 1/1/7 for downstream compute sled interfaces 1/1/11 and 1/1/13 to mode Eth 100g-4x.
For this deployment example, two compute sleds are used, each with a 100 GbE NIC, so the downstream port-groups are set
to break out to 100g-4x.

Z9432-1:

configure terminal
port-group 1/1/6
profile unrestricted
port 1/1/11 mode Eth 100g-4x
port-group 1/1/7
profile unrestricted
port 1/1/13 mode Eth 100g-4x
port-group 1/1/5
profile unrestricted
port 1/1/9 mode Eth 100g-1x
port 1/1/10 mode Eth 100g-1x
port-group 1/1/16
profile unrestricted
port 1/1/31 mode Eth 100g-1x
port 1/1/32 mode Eth 100g-1x

Z9432-2:

configure terminal
port-group 1/1/6
profile unrestricted
port 1/1/11 mode Eth 100g-4x
port-group 1/1/7
profile unrestricted
port 1/1/13 mode Eth 100g-4x
port-group 1/1/5
profile unrestricted
port 1/1/9 mode Eth 100g-1x
port 1/1/10 mode Eth 100g-1x
port-group 1/1/16
profile unrestricted
port 1/1/31 mode Eth 100g-1x
port 1/1/32 mode Eth 100g-1x

Configure VLTi
Configure interfaces Ethernet 1/1/31:1 and 1/1/32:1 as VLTi ports.

Z9432-1:

configure terminal
interface ethernet1/1/31:1
description VLTi
no shutdown
no switchport
flowcontrol receive off
interface ethernet1/1/32:1
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet1/1/31:1,1/1/32:1

Z9432-2:

configure terminal
interface ethernet1/1/31:1
description VLTi
no shutdown
no switchport
flowcontrol receive off
interface ethernet1/1/32:1
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet1/1/31:1,1/1/32:1

Configure upstream interfaces


In this deployment example, port 1/1/9 and port 1/1/10 are configured as a member of port-channel 10.
To configure the upstream interfaces, configure the port channel for the Ethernet interfaces connected to upstream switches
S5232. Then, create the in-band production VLANs.
A single VLAN-1811 is created for this example and is configured as an access VLAN on port-channel 10.

Z9432-1:

configure terminal
interface vlan1811
no shutdown
interface port-channel10
description To-S5232
no shutdown
switchport access vlan 1811
vlt-port-channel 1
interface ethernet1/1/9:1
description To-S5232
no shutdown
channel-group 10 mode active
no switchport
flowcontrol receive off
interface ethernet1/1/10:1
description To-S5232
no shutdown
channel-group 10 mode active
no switchport
flowcontrol receive off

Z9432-2:

configure terminal
interface vlan1811
no shutdown
interface port-channel10
description To-S5232
no shutdown
switchport access vlan 1811
vlt-port-channel 1
interface ethernet1/1/9:1
description To-S5232
no shutdown
channel-group 10 mode active
no switchport
flowcontrol receive off
interface ethernet1/1/10:1
description To-S5232
no shutdown
channel-group 10 mode active
no switchport
flowcontrol receive off

Configure downstream interfaces


To configure the downstream interfaces, configure the Ethernet interfaces connected to the downstream MX8116n FEMs.
In this deployment example, port 1/1/11 under port-group 1/1/6 is set to break out to 100g-4x for the compute sled in the lead
chassis. For the other compute sled, in the backup chassis, port 1/1/13 under port-group 1/1/7 is set to break out to 100g-4x.
In both cases, configure the interfaces as switchport mode trunk for VLAN 1811.

Z9432-1:

configure terminal
interface ethernet1/1/11:3
description LD-B1-Sled2-P1
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on
interface ethernet1/1/13:5
description BK-B1-Sled7-P1
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on

Z9432-2:

configure terminal
interface ethernet1/1/11:3
description LD-B2-Sled2-P2
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on
interface ethernet1/1/13:5
description BK-B2-Sled7-P2
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on

25 GbE solution example


The diagram below shows the connections from the MX8116n within the MX chassis to the Z9432F-ON switches external to the
MX chassis. Two compute sleds are used, both containing 25 GbE quad port mezzanine cards. The compute sled in Chassis-01 is
installed in slot 2, and the compute sled in Chassis-02 is installed in slot 7.

Figure 229. 25 GbE solution diagram

NOTE: The diagram above shows only one connection on each MX8116n for simplicity. See the port mapping in the previous
section for details on which MX8116n port serves each compute sled slot.

S5232F-ON Configuration, External Network ToR

The following configuration examples are for the ToR switch pair. The configuration is limited to the minimum basic interfaces
and features for connecting to the MX fabric on the Z9432F-ON switch pair.
NOTE: The interface numbers and VLANs used in these example configurations are arbitrary. Change the configuration
details to suit your deployment requirements.

Configure global switch settings


Configure the switch hostname, OOB management IP address, OOB management default gateway, and NTP server IP address.

S5232-1:

configure terminal
hostname S5232-1
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.XX.XX/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.XX.XX
ntp server 100.67.XX.XX

S5232-2:

configure terminal
hostname S5232-2
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.YY.YY/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.YY.YY
ntp server 100.67.YY.YY

Configure VLTi
For this example deployment, interfaces Ethernet 1/1/29 and 1/1/31 are configured as VLTi ports.

S5232-1:

configure terminal
interface ethernet1/1/29
description VLTi
no shutdown
no switchport
flowcontrol receive off
interface ethernet1/1/31
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet1/1/29,1/1/31
peer-routing

S5232-2:

configure terminal
interface ethernet1/1/29
description VLTi
no shutdown
no switchport
flowcontrol receive off
interface ethernet1/1/31
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet1/1/29,1/1/31
peer-routing

Configure downstream interfaces


To configure downstream interfaces, perform the following steps:
1. Configure the port channel for the Ethernet interfaces connected to the downstream Z9432F-ON switches.
2. Create the in-band production VLANs. A single VLAN-1811 is created for this example and is configured as an access VLAN
on port-channel 10. Interface ports 1/1/9 and 1/1/10 are configured as a member of port-channel 10.
3. Save the running-config to the start-up config.

S5232-1:

configure terminal
interface vlan1811
no shutdown
interface port-channel10
description To-Z9432
no shutdown
mtu 9216
switchport mode access
switchport access vlan 1811
interface range ethernet1/1/9-1/1/10
description To-Z9432
no shutdown
channel-group 10 mode active
no switchport
mtu 9216
flowcontrol receive off
flowcontrol transmit on
write memory

S5232-2:

configure terminal
interface vlan1811
no shutdown
interface port-channel10
description To-Z9432
no shutdown
mtu 9216
switchport mode access
switchport access vlan 1811
interface range ethernet1/1/9-1/1/10
description To-Z9432
no shutdown
channel-group 10 mode active
no switchport
mtu 9216
flowcontrol receive off
flowcontrol transmit on
write memory

Z9432F-ON configuration in Full Switch mode

Configure global switch settings

Configure the switch hostname, OOB management IP address, OOB management default gateway, and NTP server IP address.

Z9432-1:

configure terminal
hostname Z9432-1
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.XX.XX/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.103.254
ntp server 100.67.10.20

Z9432-2:

configure terminal
hostname Z9432-2
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.YY.YY/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.103.254
ntp server 100.67.10.20

Port-Groups breakout
To configure the port-groups, configure the following:
● Break out port-group 1/1/16 for VLTi interfaces 1/1/31 and 1/1/32 to mode Eth 100g-1x.
● Break out port-group 1/1/5 for upstream interfaces 1/1/9 and 1/1/10 to mode Eth 100g-1x.
● Break out port-groups 1/1/2 and 1/1/4 for the downstream compute sled interfaces (1/1/3:3, 1/1/3:4, 1/1/7:3, and 1/1/7:4)
to mode Eth 25g-8x.
For this deployment example, two compute sleds are used, each with a 25 GbE quad port NIC, so the downstream port-groups
are set to break out to 25g-8x.

Z9432-1:

configure terminal
port-group 1/1/2
profile restricted
port 1/1/3 mode Eth 25g-8x
port 1/1/4 mode Eth 400g-1x
port-group 1/1/4
profile restricted
port 1/1/7 mode Eth 25g-8x
port 1/1/8 mode Eth 400g-1x
port-group 1/1/5
profile unrestricted
port 1/1/9 mode Eth 100g-1x
port 1/1/10 mode Eth 100g-1x
port-group 1/1/16
profile unrestricted
port 1/1/31 mode Eth 100g-1x
port 1/1/32 mode Eth 100g-1x

Z9432-2:

configure terminal
port-group 1/1/2
profile restricted
port 1/1/3 mode Eth 25g-8x
port 1/1/4 mode Eth 400g-1x
port-group 1/1/4
profile restricted
port 1/1/7 mode Eth 25g-8x
port 1/1/8 mode Eth 400g-1x
port-group 1/1/5
profile unrestricted
port 1/1/9 mode Eth 100g-1x
port 1/1/10 mode Eth 100g-1x
port-group 1/1/16
profile unrestricted
port 1/1/31 mode Eth 100g-1x
port 1/1/32 mode Eth 100g-1x

Configure VLTi
Interfaces Ethernet 1/1/31:1 and 1/1/32:1 are configured as VLTi ports.

Z9432-1:

configure terminal
interface ethernet1/1/31:1
description VLTi
no shutdown
no switchport
flowcontrol receive off
interface ethernet1/1/32:1
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet1/1/31:1,1/1/32:1

Z9432-2:

configure terminal
interface ethernet1/1/31:1
description VLTi
no shutdown
no switchport
flowcontrol receive off
interface ethernet1/1/32:1
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet1/1/31:1,1/1/32:1

Configure upstream interfaces


Configure the port channel for the ethernet interfaces connected to upstream switches S5232. Then, create the in-band
production VLANs. A single VLAN-1811 is created for this example and is configured as an access VLAN on port-channel 10.
In this deployment example, port 1/1/9 and port 1/1/10 are configured as a member of port-channel 10.

Z9432-1:

configure terminal
interface vlan1811
no shutdown
interface port-channel10
description To-S5232
no shutdown
switchport access vlan 1811
vlt-port-channel 1
interface ethernet1/1/9:1
description To-S5232
no shutdown
channel-group 10 mode active
no switchport
flowcontrol receive off
interface ethernet1/1/10:1
description To-S5232
no shutdown
channel-group 10 mode active
no switchport
flowcontrol receive off

Z9432-2:

configure terminal
interface vlan1811
no shutdown
interface port-channel10
description To-S5232
no shutdown
switchport access vlan 1811
vlt-port-channel 1
interface ethernet1/1/9:1
description To-S5232
no shutdown
channel-group 10 mode active
no switchport
flowcontrol receive off
interface ethernet1/1/10:1
description To-S5232
no shutdown
channel-group 10 mode active
no switchport
flowcontrol receive off

Configure downstream interfaces


To configure the downstream interfaces, configure the Ethernet interfaces connected to the downstream MX8116n FEMs.
In this deployment example, port 1/1/3 under port-group 1/1/2 is set to break out to 25g-8x for the compute sled in the lead
chassis, sled 2. For the other compute sled, sled 7 in the backup chassis, port 1/1/7 under port-group 1/1/4 is set to break out
to 25g-8x. In both cases, configure the interfaces as switchport mode trunk for VLAN 1811.

Z9432-1:

configure terminal
interface ethernet1/1/3:3
description LD-B1-Sd2-P1
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on
interface ethernet1/1/3:4
description LD-B1-Sd2-P3
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on
interface ethernet1/1/7:3
description BK-B1-Sd7-P1
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on
interface ethernet1/1/7:4
description BK-B1-Sd7-P3
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on

Z9432-2:

configure terminal
interface ethernet1/1/3:3
description LD-B2-Sd2-P2
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on
interface ethernet1/1/3:4
description LD-B2-Sd2-P4
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on
interface ethernet1/1/7:3
description BK-B2-Sd7-P2
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on
interface ethernet1/1/7:4
description BK-B2-Sd7-P4
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
flowcontrol receive off
flowcontrol transmit on

100 GbE solution configuration validation


The following show command output is captured from the Z9432-1 switch only. This deployment guide does not capture show
command output from the ToR S5232F-ON switches or from the Z9432-2 switch.

Show Interface Status

The show interface status command shows port information, including description, status, speed, duplex, mode, access VLAN,
and tagged VLANs. In the deployment example described in this guide, ports 1/1/3:3-1/1/3:4 and 1/1/7:3-1/1/7:4 are connected
to an MX750c sled server NIC through MX8116n port 2.

Z9432-1# show interface status

--------------------------------------------------------------------------------------------------
Port Description Status Speed Duplex Mode Vlan Tagged-Vlans
--------------------------------------------------------------------------------------------------
Eth 1/1/1 down 0 full A 1 -
Eth 1/1/2 down 0 full A 1 -
Eth 1/1/3:1 down 0 full A 1 -
Eth 1/1/3:2 down 0 full A 1 -
Eth 1/1/3:3 LD-B1-Sd2-P1 up 25G full T 1 1811
Eth 1/1/3:4 LD-B1-Sd2-P3 up 25G full T 1 1811
Eth 1/1/3:5 down 0 full A 1 -
Eth 1/1/3:6 down 0 full A 1 -
Eth 1/1/3:7 down 0 full A 1 -
Eth 1/1/3:8 down 0 full A 1 -
Eth 1/1/4 down 0 full A 1 -
Eth 1/1/5 down 0 full A 1 -
Eth 1/1/6 down 0 full A 1 -
Eth 1/1/7:1 down 0 full A 1 -
Eth 1/1/7:2 down 0 full A 1 -
Eth 1/1/7:3 BK-B1-Sd7-P1 up 25G full T 1 1811
Eth 1/1/7:4 BK-B1-Sd7-P3 up 25G full T 1 1811
Eth 1/1/7:5 down 0 full A 1 -
Eth 1/1/7:6 down 0 full A 1 -
Eth 1/1/7:7 down 0 full A 1 -
Eth 1/1/7:8 down 0 full A 1 -
Eth 1/1/8 down 0 full A 1 -
Eth 1/1/9:1 To-S5232 up 100G full -
Eth 1/1/10:1 To-S5232 up 100G full -
Eth 1/1/11:1 down 0 full A 1 -
Eth 1/1/11:3 LD-B1-Sd2-P1 up 100G full T 1 1811
Eth 1/1/11:5 down 0 full A 1 -
Eth 1/1/11:7 down 0 full A 1 -
Eth 1/1/12 down 0 full A 1 -
Eth 1/1/13:1 down 0 full A 1 -
Eth 1/1/13:3 down 0 full -
Eth 1/1/13:5 BK-B1-Sd7-P1 up 100G full T 1 1811
Eth 1/1/13:7 down 0 full A 1 -
Eth 1/1/14 down 0 full A 1 -
Eth 1/1/15 down 0 full A 1 -
Eth 1/1/16 down 0 full A 1 -
Eth 1/1/17 down 0 full A 1 -
Eth 1/1/18 down 0 full A 1 -
Eth 1/1/19 down 0 full A 1 -
Eth 1/1/20 down 0 full A 1 -
Eth 1/1/21 down 0 full A 1 -
Eth 1/1/22 down 0 full A 1 -
Eth 1/1/23 down 0 full A 1 -
Eth 1/1/24 down 0 full A 1 -
Eth 1/1/25 down 0 full A 1 -
Eth 1/1/26 down 0 full A 1 -
Eth 1/1/27 down 0 full A 1 -
Eth 1/1/28 down 0 full A 1 -
Eth 1/1/29 down 0 full A 1 -
Eth 1/1/30 down 0 full A 1 -
Eth 1/1/31:1 VLTi up 100G full -
Eth 1/1/32:1 VLTi up 100G full -
Eth 1/1/33 down 0 full A 1 -
Eth 1/1/34 down 0 full A 1 -
--------------------------------------------------------------------------------------------------

Show Port Group
The show port-group command shows the port-group breakout modes. In the deployment example described in this guide,
port 1/1/3 in port-group 1/1/2 and port 1/1/7 in port-group 1/1/4 are broken out to 25g-8x.

Z9432-1# show port-group

hybrid-group profile Ports Mode


port-group1/1/1 restricted 1/1/1 Eth 400g-1x
1/1/2 Eth 400g-1x
port-group1/1/2 restricted 1/1/3 Eth 25g-8x
1/1/4 Eth 400g-1x
port-group1/1/3 restricted 1/1/5 Eth 400g-1x
1/1/6 Eth 400g-1x
port-group1/1/4 restricted 1/1/7 Eth 25g-8x
1/1/8 Eth 400g-1x
port-group1/1/5 unrestricted 1/1/9 Eth 100g-1x
1/1/10 Eth 100g-1x
port-group1/1/6 unrestricted 1/1/11 Eth 100g-4x
1/1/12 Eth 400g-1x
port-group1/1/7 unrestricted 1/1/13 Eth 100g-4x
1/1/14 Eth 400g-1x
port-group1/1/8 unrestricted 1/1/15 Eth 400g-1x
1/1/16 Eth 400g-1x
port-group1/1/9 unrestricted 1/1/17 Eth 400g-1x
1/1/18 Eth 400g-1x
port-group1/1/10 unrestricted 1/1/19 Eth 400g-1x
1/1/20 Eth 400g-1x
port-group1/1/11 unrestricted 1/1/21 Eth 400g-1x
1/1/22 Eth 400g-1x
port-group1/1/12 unrestricted 1/1/23 Eth 400g-1x
1/1/24 Eth 400g-1x
port-group1/1/13 unrestricted 1/1/25 Eth 400g-1x
1/1/26 Eth 400g-1x
port-group1/1/14 unrestricted 1/1/27 Eth 400g-1x
1/1/28 Eth 400g-1x
port-group1/1/15 unrestricted 1/1/29 Eth 400g-1x
1/1/30 Eth 400g-1x
port-group1/1/16 unrestricted 1/1/31 Eth 100g-1x
1/1/32 Eth 100g-1x

Show LLDP neighbors


The show lldp neighbors command shows information about LLDP neighbors. The iDRAC in the PowerEdge MX compute
sled produces LLDP topology packets. These packets contain specific information that the SmartFabric Services engine uses to
determine the physical network topology, regardless of whether a switch is in Full Switch or SmartFabric mode.
When viewing the LLDP neighbors, the output shows the iDRAC MAC address in addition to the NIC MAC address of the
respective mezzanine card.

Z9432-1# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
--------------------------------------------------------------------------------------
ethernet1/1/3:3 Broadcom Adv. Qua... bc:97:e1:0c:43:20 bc:97:e1:0c:43:20
ethernet1/1/3:3 PowerEdge MX750c ... FZL52G3 NIC.Mezzanine.1B-1-1 30:d0:42:d8:b7:d6
ethernet1/1/3:4 Broadcom Adv. Qua... bc:97:e1:0c:43:22 bc:97:e1:0c:43:22
ethernet1/1/3:4 PowerEdge MX750c ... FZL52G3 NIC.Mezzanine.1B-3-1 30:d0:42:d8:b7:d6
ethernet1/1/7:3 Broadcom Adv. Qua... bc:97:e1:0c:40:b0 bc:97:e1:0c:40:b0
ethernet1/1/7:3 PowerEdge MX750c ... FZL62G3 NIC.Mezzanine.1B-1-1 30:d0:42:d8:b7:e2
ethernet1/1/7:4 Broadcom Adv. Qua... bc:97:e1:0c:40:b2 bc:97:e1:0c:40:b2
ethernet1/1/7:4 PowerEdge MX750c ... FZL62G3 NIC.Mezzanine.1B-3-1 30:d0:42:d8:b7:e2
ethernet1/1/9:1 S5232-Leaf-1 ethernet1/1/9 3c:2c:30:49:21:80
ethernet1/1/10:1 S5232-Leaf-1 ethernet1/1/10 3c:2c:30:49:21:80
ethernet1/1/11:3 Broadcom BCM57508... 84:16:0c:6a:5a:30 84:16:0c:6a:5a:30
ethernet1/1/11:3 PowerEdge MX750c ... FZL62G3 NIC.Mezzanine.1B-1-1 30:d0:42:d8:b7:e2
ethernet1/1/13:5 Broadcom BCM57508... 84:16:0c:6a:4a:60 84:16:0c:6a:4a:60
ethernet1/1/13:5 PowerEdge MX760c ... CBVRFT3 NIC.Mezzanine.1B-1-1 90:8d:6e:fd:62:b2

ethernet1/1/31:1 Z9432-2 ethernet1/1/31:1 e8:b5:d0:92:9a:eb
ethernet1/1/32:1 Z9432-2 ethernet1/1/32:1 e8:b5:d0:92:9a:eb
mgmt1/1/1 Rack103-N2048 Gi1/0/24 28:f1:0e:ef:c4:98
Z9432-1#

Show interface port channel summary


The show interface port-channel summary command shows the LAG number (VLT port channel 1 in this example),
mode, status, and ports used in the port channel.

Z9432-1# show interface port-channel summary


LAG Mode Status Uptime Ports
10 L2 up 00:04:43 Eth 1/1/9:1 (Up)
Eth 1/1/10:1 (Up)
Z9432-1#

Show VLAN
The show vlan command shows the VLAN details for all configured VLANs.

Z9432-1# show vlan


Codes: * - Default VLAN, M - Management VLAN, R - Remote Port Mirroring VLANs,
@ - Attached to Virtual Network, P - Primary, C - Community, I - Isolated,
S - VLAN-Stack VLAN
Q: A - Access (Untagged), T - Tagged
NUM Status Description Q Ports
* 1 Active A
Eth1/1/1-1/1/2,1/1/3:1-1/1/3:2,1/1/3:5-1/1/3:8,1/1/4-1/1/6,1/1/7:1-1/1/7:2,1/1/7:5-1/1/7:
8,1/1/8,1/1/11-1/1/30,1/1/33-1/1/34
A Po1000
1811 Active T Eth1/1/11:3,1/1/13:5
T Eth1/1/3:3-1/1/3:4,1/1/7:3-1/1/7:4
T Po1000
A Po10-11
4094 Active T Po1000
Z9432-1#

Show VLT
The show vlt command shows the VLT details for the VLT domain entered.

Z9432-1# show vlt 1


Domain ID : 1
Unit ID : 1
Role : primary
Version : 3.1
Local System MAC address : e8:b5:d0:92:3e:6b
Role priority : 32768
VLT MAC address : e8:b5:d0:92:3e:6b
IP address : fda5:74c8:b79e:1::1
Delay-Restore timer : 90 seconds
Peer-Routing : Disabled
Peer-Routing-Timeout timer : 0 seconds
Multicast peer-routing timer : 300 seconds
VLTi Link Status
port-channel1000 : up

VLT Peer Unit ID System MAC Address Status IP Address Version


----------------------------------------------------------------------------------
1 e8:b5:d0:92:9a:eb up fda5:74c8:b79e:1::2 3.1
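Two related checks are often useful at this point; both are standard OS10 commands, although the output layout varies by
release. The first reports the state of the VLT backup link across the OOB management network, and the second reports the
status of each VLT port channel on both peers.

Z9432-1# show vlt 1 backup-link
Z9432-1# show vlt 1 vlt-port-detail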

100 GbE combined deployment with legacy IOMs
Customers with existing deployments can expand by adding the new 100 GbE MX networking solution to the same chassis, or
by adding new chassis deployed with both solutions. Each networking solution operates independently. This section shows
examples of deployment topologies and provides details about how to manage each networking solution.

Single chassis combined deployment


The following diagram shows a single chassis deployment with the MX8116n FEM with Z9432F-ON FSE solution in fabric A and
the traditional MX9116n solution in fabric B. Each solution is connected to a separate external network ToR pair of switches.

Figure 230. Combined deployment with single chassis

Multi-chassis combined deployment


The following diagram shows a two-chassis deployment. The first solution includes the MX8116n FEM with Z9432F-ON FSE in
Fabric A. The second solution includes the traditional MX9116n FSE with MX7116n FEM in Fabric B. Each solution is connected
to a separate external network ToR pair of switches.
For each type of fabric solution, the restrictions on the number of chassis supported apply. For the MX8116n FEM with the
Z9432F-ON FSE (with single fabric 100 GbE), the deployment is limited to 14 chassis. With the traditional MX9116n FSE with
MX7116n FEM, the maximum number of chassis supported is 10. If a deployment has 14 chassis with the 100 GbE solution, only
10 of those chassis can also have the traditional MX9116n FSE and MX7116n FEM solution. The limits are based on the number
of physical ports available to connect each FEM to the respective FSE for each solution to form a Scalable Fabric.
The maximum number of chassis supported in a Scalable Fabric for MX8116n based solutions can be seen in the section
PowerEdge Scalable Fabric Architecture.

The maximum number of chassis supported in a Scalable Fabric for MX9116n based solutions can be seen in the section Scalable
Fabric Architecture.
NOTE: Each individual chassis must only contain compute sleds with NICs at the same speed and number of ports.

Figure 231. Combined deployment with multi-chassis

Networking configuration management of combined deployments


The management of each solution does not change based on the deployment of a combined fabric solution. The MX8116n FEM
based 100 GbE solution configuration is exclusively managed on the Z9432F-ON FSE external switches in Full Switch mode. The
traditional MX9116n FSE and MX7116n FEM 25 GbE solution configuration can be managed either through SmartFabric mode or
Full Switch mode.
For details on the network configuration management of the MX8116n FEM with Z9432F-ON FSE, see the configuration details
within this chapter. Full configuration examples are in the section titled 100 GbE solution configuration examples.
For details about the network configuration management of the MX9116n FSE with MX7116n FEM, see SmartFabric Creation for
SmartFabric mode. For details about the Full Switch mode, see Full Switch Mode.

Combined deployment restrictions

The following restrictions are applied when deploying combined solutions in the same chassis:
● The latest baseline firmware for OME-Modular, including all the chassis' components such as the IOMs and the Z9432F-ON,
must be installed.

● The Advanced NPAR feature for the MX9116n FSE and MX7116n FEM solution cannot be implemented on deployments with
MX8116n based solutions in the same chassis.

100 GbE deployment with rack servers


The MX8116n FEM with Z9432F-ON solution can be deployed with rack servers and other appliances. The Z9432F-ON operates
in Full Switch mode, and its ports support the various speeds and features available on a Dell SmartFabric OS10 based
PowerSwitch.
There is no restriction on the number of additional rack servers or appliances connected to the Z9432F-ON. However, each port
consumed by servers or appliances reduces the maximum number of MX chassis in the Scalable Fabric. In addition, the port
mode and port profile used on the Z9432F-ON interfaces may limit the number of ports available for additional MX chassis; a
sketch follows below. See the Dell SmartFabric OS10 User Guide for details on the port modes and profiles for the Z9432F-ON.
Find the relevant version of the User Guide in the OME-M and OS10 compatibility and documentation table.
The diagram below shows an example of a deployment with an MX Chassis and rack-mounted servers connected to the same
Z9432F-ON.

Figure 232. 100 GbE deployment with rack servers

Chapter 14: Advanced NPAR
Network Partitioning (NPAR) is a standard feature on the MX platform and has served as a switch-independent feature for
several releases. With the release of OME-M 2.10.00, Advanced NPAR is now available in both SmartFabric and Full Switch
modes.
Advanced NPAR is a switch-dependent feature that splits both the mezzanine card ports and the Ethernet ports on the IOMs
into logical partitions. These partitions on both the mezzanine card and the IOM can be treated as physical interfaces. The main
advantage of Advanced NPAR is the ability to assign distinct VLANs to each partition. Previously, users could only assign a single
VLAN to any partition on a mezzanine card physical port.
NOTE: The Advanced NPAR feature is only supported on the 25 GbE based IOMs in the MX chassis. Advanced NPAR is not
supported on the MX8116n FEM with Z9432F-ON FSE solution.
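As a preview of the switch-side configuration (a minimal Full Switch mode sketch using the commands shown later in this chapter; the interface and VLAN numbers are illustrative):

configure terminal
interface ethernet1/1/13
npar partition-4
switchport mode trunk
interface ethernet1/1/13/1
switchport mode trunk
switchport trunk allowed vlan 201
interface ethernet1/1/13/2
switchport mode trunk
switchport trunk allowed vlan 202

Each partition interface (ethernet1/1/13/1 through ethernet1/1/13/4) can carry its own VLANs, which is the capability that standard, switch-independent NPAR does not provide.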

Hardware and software requirements


The following hardware and minimum firmware releases are supported for Advanced NPAR:
IOMs:
● MX9116n - firmware version 10.5.5.1 (factory installed) and 10.5.5.2
● MX5108n - firmware version 10.5.5.1 (factory installed) and 10.5.5.2
● MX7116n FEM - in multi-chassis deployments with MX9116n FSE
MX Chassis:
● OME-Modular - version 2.10.00
Compute sleds:
● MX760c - baseline 23.05.00
● MX750c - baseline 23.05.00
● MX740c - baseline 23.05.00
● MX840c - baseline 23.05.00
Mezzanine card:
● Broadcom 57504 - firmware 22.31.13.70, baseline 23.05.00
Server host operating system:
● VMware ESXi 7.0 U3
● VMware ESXi 8.0
Guest OS:
● Windows and Linux

Restrictions and limitations


Advanced NPAR solution deployments include several feature restrictions and scalability limitations compared to standard
deployments.



Advanced NPAR feature restrictions
The following restrictions apply to all Advanced NPAR deployments in both Full Switch and SmartFabric modes. Ignore any
recommended action for features that are not supported or user-configurable in SmartFabric mode; those actions apply only
to Full Switch configurations.

FCoE: FCoE is not supported on ports with Advanced NPAR partitions.

Dynamic or static port-channels: Port channels and switch-dependent port teaming are not supported on ports with Advanced NPAR partitions.

IGMP snooping: IGMP snooping is not supported on a VLAN that has NPAR partitions as a member.

Spanning Tree: RSTP is the only supported Spanning Tree protocol.

Server templates: Bandwidth allocation configuration is not valid for NPAR interfaces.

Uplink Failure Detection (UFD): UFD is not supported on NPAR partition interfaces.

PVLAN and VXLAN: PVLAN and VXLAN cannot be configured when NPAR is enabled.

Layer 3: Interfaces with NPAR partitions cannot have Layer 3 configurations.

QoS map and trust maps: Changes in the qos-map and trust-map of a parent (physical) interface are inherited by the children (partitions).

MTU: Changes to the MTU configuration on the parent (physical) interface are inherited by the children (partitions).

LLDP: Disabling LLDP transmit or receive on the parent (physical) interface impacts logical interface functionality. As a best practice, leave LLDP enabled on the parent interface or globally.

Switch mode change: Advanced NPAR interfaces are not retained when changing switch modes. This applies to a mode change from Full Switch to SmartFabric and from SmartFabric to Full Switch.

VLAN Stacking (Q-in-Q): Advanced NPAR and VLAN Stacking cannot be deployed or configured simultaneously. Advanced NPAR uses the mandatory 0x88A8 tag protocol identifier (TPID).

VLAN configuration: Partitions (children) under the same physical (parent) port cannot be assigned the same VLAN. If the same VLAN is assigned, broadcast, multicast, and unknown unicast (BUM) traffic can egress out of the same physical port and cause a loop storm. As a best practice, configure nonoverlapping VLANs across a parent and its children.

Egress packet counters: Egress packet counters on the logical NPAR interfaces are updated even if packets are source suppressed. These packets are not sent out of the partition interfaces; the packet count is incremented, but the byte count is not. As a best practice, view the byte count to track the exact number of bytes transmitted.

TPID: Advanced NPAR uses the mandatory 0x88A8 TPID. As a best practice, do not configure the switchport trunk TPID on any interface that is NPAR partitioned.

Scalability restrictions
NOTE: For a detailed description of Full Switch VLAN scaling and PV value calculation, see VLAN scaling guidelines for Full
Switch mode.

Table 29. Full Switch VLAN scaling

Advanced NPAR supported (OS10 10.5.5.1, factory installed, and 10.5.5.2):
● MX5108n: 45,000 PV total with scale-profile vlan enabled (15,000 for Advanced NPAR ports, 30,000 for non-Advanced NPAR ports); 10,000 PV total without scale-profile vlan enabled (used for Advanced NPAR or non-Advanced NPAR ports)
● MX9116n: 200,000 PV total with scale-profile vlan enabled (10,000 for Advanced NPAR ports, 190,000 for non-Advanced NPAR ports); 30,000 PV total without scale-profile vlan enabled (10,000 for Advanced NPAR ports, 20,000 for non-Advanced NPAR ports)

Advanced NPAR not supported (OS10 10.5.4.1):
● MX5108n: 45,000 PV with scale-profile vlan enabled; 10,000 PV without
● MX9116n: 200,000 PV with scale-profile vlan enabled; 30,000 PV without
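To put the PV limits in perspective, consider a hypothetical calculation (assuming each VLAN assigned to each port or partition counts as one PV): 40 Advanced NPAR partition interfaces that each carry 250 VLANs consume 40 × 250 = 10,000 PV, which is the full Advanced NPAR budget of an MX9116n with scale-profile vlan enabled.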

Table 30. Advanced NPAR parameters and values

The following parameters apply both to OS10 10.5.5.1 (factory installed) and 10.5.5.2 (Advanced NPAR supported) and to OS10 10.5.4.1 (Advanced NPAR not supported):
● Recommended maximum VLANs per fabric: 3000
● Recommended maximum VLANs per uplink: 3000
● Recommended maximum VLANs per server port: 1500
● Maximum number of MX9116n FSEs in a single MCM group: 12
● Maximum number of MX5108n Ethernet switches in a single MCM group: 8

Advanced NPAR solution for MX Platform


The following diagram shows the topology for the Advanced NPAR solution for MX platform.



Figure 233. Advanced NPAR topology

The Advanced NPAR settings and configuration workflow includes the following steps:
1. In the server BIOS device level configuration, select Advanced NPAR and enable NParEP mode.
2. Ensure that Advanced NPAR is enabled by checking its status in the server iDRAC, in VMware ESXi, and
in the Port Information on the I/O Modules.
3. Create the uplink in SmartFabric mode from MX IOM to upstream switches.
4. Create and deploy server template.

Broadcom 57504 quad port 25 GbE mezzanine card


Starting with OME-Modular 2.10.00, Advanced NPAR is supported. This section details the available options.
The Broadcom 57504 quad port mezzanine card can support no partitions (Advanced NPAR not enabled), two partitions per port,
or four partitions per port (with NParEP mode enabled).
The number of partitions created on each port is always the same. The following combinations are supported:
● No partitions: (1, 1, 1, 1)
○ Advanced NPAR is disabled
● Eight partitions, two per port: (2, 2, 2, 2)
○ Advanced NPAR enabled
● 16 partitions, four per port: (4, 4, 4, 4)
○ Advanced NPAR enabled, NParEP mode enabled
The following figure shows the NIC setting in the BIOS System Setup. To access the page in the BIOS, follow these steps:
System Setup > Device Settings > [choose any port] > Device Level Configuration



Figure 234. Advanced NPAR setting in System Setup Device Settings

Set the Advanced NPAR mode on one port and click Back > Finish to apply the configuration.
If only two partitions are used per port, disable NParEP Mode during the setup process. This setting is located directly
below the Virtualization Mode setting.

Advanced NPAR on MX SmartFabric mode


The example in this section provides instructions on how to configure Advanced NPAR when using SmartFabric mode.

Configure NPAR device settings and NIC partitioning


With the System Setup wizard, complete NPAR device settings and NIC Partitioning on the MX compute sled.
The Broadcom Advanced Quad 25Gb NIC example shown in this section supports four partitions per quad NIC port.
In this example, four NIC partitions were created for each port. Only one partition per port is configured with a VLAN for
Ethernet traffic. All 16 partitions can be used for up to 16 different Ethernet networks.

System setup wizard

To enable and configure NPAR on a server NIC through the System Setup wizard:
1. In OME-M, select Compute.
2. Select the required compute sled.
3. In the URL field, enter the IP address for the server.
4. Open the Virtual Console.
5. From the menu at the top of the window, select Next Boot.
6. Select BIOS Setup, and then click Yes.
7. To reboot the server:
a. From the menu at the top of the window, click Power.
b. Select Reset System (warm boot), and then click Yes.



Device settings

To configure the device settings:


1. From the System Setup main menu, select Device Settings.
2. Select Port-1 from mezzanine 1A of the quad NIC. The Main Configuration page displays.
3. To enable Virtualization Mode:
a. Click Device Level Configuration.
b. From the Virtualization Mode drop-down list, select Advanced NPar.
c. To enable NParEP Mode, select Enabled. Enabling NParEP Mode creates four partitions per NIC port.
d. Leave the rest of the settings as is, and then click Back.

Figure 235. Advanced Quad NIC Device Level Configuration

NIC partitioning configuration

To configure NIC partitioning for Partition 1:


1. Click NIC Partitioning Configuration.
2. Click Global Bandwidth Allocation. The default bandwidth for each partition is set to 100. You can edit the global
bandwidth for each partition.
3. Click Back.
4. Click Partition 1 Configuration.
5. Verify NIC + RDMA Mode is set to Disabled.
6. Click Back, and then click Finish.



Figure 236. Advanced Quad NIC Global Bandwidth Allocation

To configure the remaining three partitions of the same Advanced Quad NIC port, repeat NIC Partitioning Configuration
steps 1 through 6.

Verify the configuration on the remaining three ports of the Advanced Quad NIC partitions:

7. Click Finish.
8. To save the changes, click Yes.
9. On the Success window, click OK. The Device Settings page displays.
10. To return to the System Setup Main Menu, click Finish.
11. When the Confirm Exit window appears, click Yes.

Advanced Quad Port NIC NPAR status


The MX compute sled NIC is now configured for Advanced NPAR. The following sections describe how to confirm the NIC
status and ensure that Advanced NPAR is enabled.

Server iDRAC

To confirm the Advanced NPAR status in the server iDRAC:


1. Open the server iDRAC.
2. Click System > Overview > Network Devices.
3. Select NIC Mezzanine 1A.
Port-1 to Port-4 of NIC Mezzanine 1A have four partitions for each NIC port. The following figure shows the partitions of Port-1:



Figure 237. Advanced Quad NIC partitions in iDRAC

VMware ESXi

To confirm the Advanced NPAR status in the VMware ESXi server:


1. Log in to the VMware ESXi server.
2. Click Configure Management Network, and then click Network Adapters.
The partitions for Port-1 to Port-4 of NIC Mezzanine 1A are all UP and connected, as shown in the following figures:

Figure 238. Advanced Quad NIC partitions on VMware ESXi server



Figure 239. Advanced Quad NIC partitions on VMware ESXi server (continued)
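As an additional check from the ESXi shell (a sketch; the vmnic numbering depends on the host), each partition should be listed as a separate vmnic with its own MAC address:

esxcli network nic list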

MX IOMs

To confirm the Advanced NPAR status in MX IOMs:


1. Log in to MX OME-M.
2. In the Devices drop-down list, select I/O Modules.
3. Click Chassis 1 MX9116n IOM > Hardware > Port Information. See the Interface mapping for dual-port NIC servers table
in Quad-port Ethernet NICs.



Figure 240. Advanced Quad NIC partitions Interfaces on Chassis 1 IOM

4. In the Devices drop-down list, select I/O Modules.


5. Click Chassis 2 MX9116n IOM > Hardware > Port Information. See the Interface mapping for dual-port NIC servers
table in Quad-port Ethernet NICs.

Figure 241. Advanced Quad NIC partitions Interfaces on Chassis 2 IOM

SmartFabric configuration in OME-Modular


This section provides instructions on how to configure SmartFabric with Advanced NPAR enabled on the mezzanine cards. This
example uses the same quad port mezzanine card shown in the previous section. Before configuring the SmartFabric, make sure
the environment meets the following requirements:
● Quad port NIC is physically installed in the MX compute sled.



● All physical connections from MX IOMs to upstream switches are in place.
● All required VLANs are defined, and the SmartFabric is created in OME-M.
● VMware ESXi has been deployed and configured.
For information about physical component installation and connections, see PowerEdge Scalable Fabric Architecture.
For additional instructions on configuring VLANs and creating a SmartFabric, see SmartFabric Creation.
The VMware ESXi server MGMT IP, VLAN, In-Band IP, and DNS configurations are beyond the scope of this deployment guide.

Create uplink

Create an uplink from the MX IOMs to the upstream switches. In this example, Port-41 and Port-42 are configured as uplinks to
upstream switches.
1. Log into OME-M.
2. In the Devices drop-down list, select Fabric.
3. Click the created SmartFabric name.
4. Click Add Uplink.
5. Assign a Name (mandatory) and Description (optional).
6. In the Uplink Type: drop-down list, select Ethernet - No Spanning Tree.
7. Click Next.
8. Select the switch ports that uplink to the upstream switches.
9. For each Advanced NPAR VLAN, select the Tagged Network checkbox.
10. Leave VLAN 1 as Untagged Network as shown in the following figure.



Figure 242. VLAN selection for uplink

11. Click Finish.

NOTE: After the uplink deploys, ensure that the SmartFabric Status is healthy.



Figure 243. Advanced Quad NIC SmartFabric uplink

Create and deploy server template

Perform the following steps to create and deploy a server template:


1. Log in to OME-M.
2. In the Configuration tab drop-down list, select Templates.
3. Click Create Template, and then select From Reference Device.
4. Assign a Name (mandatory) and a Description (optional).
5. Click Next.
6. Click Select Device and select a compute sled.
7. Click Finish.
8. After the newly created server template status displays Completed, select the newly created server template, and then
click Edit Network.
9. Click Next > Next.
10. Assign VLAN 1 as Untagged Network to one or all partitions.
11. Assign Advanced NPAR VLANs to each partition as shown in the following figure.



Figure 244. Assign Advanced NPAR VLANs to each partition

12. Click Finish.


13. Select the newly created server template, and then click Deploy Template.
14. When the server deployment completes, click the Profiles tab to see one or more deployed servers.

Advanced NPAR Quad Port NIC in Full Switch mode


The example in this section provides instructions on how to configure Advanced NPAR when using Full Switch mode.

S5232F-ON configuration
The following configuration example is for the ToR switch pair. The configuration is limited to the minimum basic interfaces and
features for connecting to the MX fabric on the MX9116n IOMs.

Configure global switch settings

Configure switch hostname, OOB management IP address, OOB management default gateway, and NTP server IP address.



Table 31. Global switch configuration example

S5232-1:

configure terminal
hostname S5232-1
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.XX.XX/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.XX.XX
ntp server 100.67.XX.XX

S5232-2:

configure terminal
hostname S5232-2
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.YY.YY/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.YY.YY
ntp server 100.67.YY.YY

Configure VLTi

Interfaces Ethernet 1/1/29 and 1/1/31 are configured as VLTi ports for this example.

Table 32. VLTi configuration example

S5232-1:

configure terminal
interface ethernet1/1/29
description VLTi
no shutdown
no switchport
flowcontrol receive off
interface ethernet1/1/31
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet1/1/29,1/1/31
peer-routing

S5232-2:

configure terminal
interface ethernet1/1/29
description VLTi
no shutdown
no switchport
flowcontrol receive off
interface ethernet1/1/31
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet1/1/29,1/1/31
peer-routing

Configure downstream interfaces

Configure the port channel for the Ethernet interfaces connected to the downstream MX IOMs. Create a generic internal in-band
management VLAN, VLAN 1611, which is applied in this configuration example with switchport mode access on the port channel.
In this example, port 1/1/1 and port 1/1/3 are configured as members of port-channel 20. Save the configuration from the
running configuration to the startup configuration.

Table 33. Port channel configuration example

S5232-1:

configure terminal
interface vlan1611
description MGMT-VLAN
no shutdown
ip address 100.67.41.252/24
vrrp-group 11
virtual-address 100.67.41.254
interface port-channel20
description To-MX9116n
no shutdown
mtu 9216
switchport mode access
switchport access vlan 1611
interface ethernet1/1/1
description To-MX9116n-1
no shutdown
channel-group 20 mode active
no switchport
mtu 9216
flowcontrol receive off
flowcontrol transmit on
interface ethernet1/1/3
description To-MX9116n-2
no shutdown
channel-group 20 mode active
no switchport
mtu 9216
flowcontrol receive off
flowcontrol transmit on
write memory

S5232-2:

configure terminal
interface vlan1611
description MGMT-VLAN
no shutdown
ip address 100.67.41.253/24
vrrp-group 11
virtual-address 100.67.41.254
interface port-channel20
description To-MX9116n
no shutdown
mtu 9216
switchport mode access
switchport access vlan 1611
interface ethernet1/1/1
description To-MX9116n-1
no shutdown
channel-group 20 mode active
no switchport
mtu 9216
flowcontrol receive off
flowcontrol transmit on
interface ethernet1/1/3
description To-MX9116n-2
no shutdown
channel-group 20 mode active
no switchport
mtu 9216
flowcontrol receive off
flowcontrol transmit on
write memory

MX9116n configuration in Full switch mode


The following configuration example is for the MX9116n IOMs. The configuration in this example is limited to the minimum basic
interfaces and features for connecting to the compute sleds and the uplinks to the S5232F-ON ToR pair.

Configure global switch settings

Configure switch hostname, OOB management IP address, OOB management default gateway, and NTP server IP address.

Table 34. Global switch setting configuration example

MX9116n-1:

configure terminal
hostname MX9116n-1
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.XX.XX/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.XX.XX
ntp server 100.67.XX.XX

MX9116n-2:

configure terminal
hostname MX9116n-2
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.YY.YY/24
no shutdown
ipv6 address autoconfig
management route 0.0.0.0/0 100.67.YY.YY
ntp server 100.67.YY.YY



Configure VLTi

Interfaces Ethernet 1/1/37-1/1/38 and 1/1/39-1/1/40 are configured as VLTi ports for this example.

Table 35. VLTi configuration example

MX9116n-1:

configure terminal
interface range ethernet1/1/37-1/1/38
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 255
backup destination 100.67.XX.XX
discovery-interface ethernet1/1/37-1/1/38

MX9116n-2:

configure terminal
interface range ethernet1/1/39-1/1/40
description VLTi
no shutdown
no switchport
flowcontrol receive off
vlt-domain 255
backup destination 100.67.YY.YY
discovery-interface ethernet1/1/39-1/1/40

Configure upstream interfaces

Configure the port channel for the Ethernet interfaces connected to the upstream S5232F-ON switch pair. Create a generic
internal in-band management VLAN, VLAN 1611, which is applied in this configuration example with switchport mode access
on the port channel. In this example, ports 1/1/41 and 1/1/42 on each MX I/O Module are configured as members of
port-channel 20.

Table 36. Upstream interfaces configuration example

MX9116n-1 and MX9116n-2 (the configuration is identical on both IOMs):

configure terminal
interface vlan1611
no shutdown
interface port-channel20
description To-S5232
no shutdown
mtu 9216
switchport access vlan 1611
interface ethernet1/1/41
description To-S5232
no shutdown
channel-group 20 mode active
no switchport
mtu 9216
flowcontrol receive off
interface ethernet1/1/42
description To-S5232
no shutdown
channel-group 20 mode active
no switchport
mtu 9216
flowcontrol receive off



Configure downstream interfaces

Configure the internal interfaces connected from the MX I/O Module to the MX compute sled parent NIC ports. The Dell
SmartFabric OS10 CLI has two options to split the NIC parent port into sub-partitions.
● partition-2: 2 partitions per physical port
● partition-4: 4 partitions per physical port
For this example:
● The partition-4 option is selected to split each NIC port into four partitions
● The MX760c compute sled is installed in MX Chassis-2 at slot-7
● MX9116n-2 port 1/1/13 has a breakout of 25g-8x and is connected to NIC port-2
● MX9116n-2 port 1/1/14 has a breakout of 25g-8x and is connected to NIC port-4
● MX9116n-1 port 1/71/7 has a breakout of 25g-8x and is connected to NIC port-1
● MX9116n-1 port 1/71/15 has a breakout of 25g-8x and is connected to NIC port-3
Configure a generic internal in-band management VLAN, VLAN 1611, which is carried in this configuration example with
switchport mode trunk on the ports from the IOMs to the sled parent NIC ports.
NOTE: For port mapping on multiple chassis, see the Interface mapping for dual-port NIC servers table in Quad-port
Ethernet NICs.

Table 37. Switchport mode trunk configuration example

MX9116n-1:

configure terminal
interface ethernet1/71/7
description NIC-Port-1
no shutdown
fec off
npar partition-4
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 1611
flowcontrol receive off
interface ethernet1/71/15
description NIC-Port-3
no shutdown
fec off
npar partition-4
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 1611
flowcontrol receive off

MX9116n-2:

configure terminal
interface ethernet1/1/13
description NIC-Port-2
no shutdown
fec off
npar partition-4
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 1611
flowcontrol receive off
interface ethernet1/1/14
description NIC-Port-4
no shutdown
fec off
npar partition-4
switchport mode trunk
switchport access vlan 1
switchport trunk allowed vlan 1611
flowcontrol receive off

Configure the NIC port partitions for different networks using different VLANs, as shown in the following configuration.
NOTE: The VLAN configuration used on the port partition interfaces is not detailed in the example in this section.
Ensure all VLANs used are configured prior to configuring the interfaces.
In the following configuration, the partition interfaces are shown as, for example, 1/71/7/1. The interface 1/71/7 is further
segmented into the /1, /2, /3, and /4 partitions, which are appended to the end of the interface name. The interfaces use the
following format: node/slot/port/partition.

Table 38. NIC port partition configuration example

MX9116n-1:

configure terminal
interface ethernet1/71/7/1
description NIC-Port-1-P1
no shutdown
switchport mode trunk
switchport trunk allowed vlan 201
interface ethernet1/71/7/2
description NIC-Port-1-P2
no shutdown
switchport mode trunk
switchport trunk allowed vlan 202
interface ethernet1/71/7/3
description NIC-Port-1-P3
no shutdown
switchport mode trunk
switchport trunk allowed vlan 203
interface ethernet1/71/7/4
description NIC-Port-1-P4
no shutdown
switchport mode trunk
switchport trunk allowed vlan 204
interface ethernet1/71/15/1
description NIC-Port-3-P1
no shutdown
switchport mode trunk
switchport trunk allowed vlan 209
interface ethernet1/71/15/2
description NIC-Port-3-P2
no shutdown
switchport mode trunk
switchport trunk allowed vlan 210
interface ethernet1/71/15/3
description NIC-Port-3-P3
no shutdown
switchport mode trunk
switchport trunk allowed vlan 211
interface ethernet1/71/15/4
description NIC-Port-3-P4
no shutdown
switchport mode trunk
switchport trunk allowed vlan 212

MX9116n-2:

configure terminal
interface ethernet1/1/13/1
description NIC-Port-2-P1
no shutdown
switchport mode trunk
switchport trunk allowed vlan 205
interface ethernet1/1/13/2
description NIC-Port-2-P2
no shutdown
switchport mode trunk
switchport trunk allowed vlan 206
interface ethernet1/1/13/3
description NIC-Port-2-P3
no shutdown
switchport mode trunk
switchport trunk allowed vlan 207
interface ethernet1/1/13/4
description NIC-Port-2-P4
no shutdown
switchport mode trunk
switchport trunk allowed vlan 208
interface ethernet1/1/14/1
description NIC-Port-4-P1
no shutdown
switchport mode trunk
switchport trunk allowed vlan 213
interface ethernet1/1/14/2
description NIC-Port-4-P2
no shutdown
switchport mode trunk
switchport trunk allowed vlan 214
interface ethernet1/1/14/3
description NIC-Port-4-P3
no shutdown
switchport mode trunk
switchport trunk allowed vlan 215
interface ethernet1/1/14/4
description NIC-Port-4-P4
no shutdown
switchport mode trunk
switchport trunk allowed vlan 216

Configuration validation

Show LLDP
The following show command details the LLDP information for connected neighbors.

MX9116N-A2# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
--------------------------------------------------------------------------------------
ethernet1/1/1 Not Advertised f4:c7:aa:22:ad:29 f4:c7:aa:22:ad:29
ethernet1/1/1 PowerEdge MX750c FZL42G3 NIC.Mezzanine.1A-2-1 30:d0:42:d8:bd:8e
ethernet1/1/3 Broadcom Adv. Qua... bc:97:e1:0c:49:81 bc:97:e1:0c:49:81
ethernet1/1/3 PowerEdge MX750c ... FZL62G3 NIC.Mezzanine.1A-2-1 30:d0:42:d8:b7:e2
ethernet1/1/4 Broadcom Adv. Qua... bc:97:e1:0c:49:83 bc:97:e1:0c:49:83
ethernet1/1/4 PowerEdge MX750c ... FZL62G3 NIC.Mezzanine.1A-4-1 30:d0:42:d8:b7:e2



ethernet1/1/9 Not Advertised 34:80:0d:86:8c:c1 34:80:0d:86:8c:c1
ethernet1/1/9 iDRAC-FD85H13 FD85H13 NIC.Mezzanine.1A-2-1 4c:d9:8f:99:00:18
ethernet1/1/13 Broadcom Adv. Qua... 00:62:0b:7b:47:1d 00:62:0b:7b:47:1d
ethernet1/1/13 PowerEdge MX760c ... CBVRFT3 NIC.Mezzanine.1A-2-1 90:8d:6e:fd:62:b2
ethernet1/1/14 Broadcom Adv. Qua... 00:62:0b:7b:47:1f 00:62:0b:7b:47:1f
ethernet1/1/14 PowerEdge MX760c ... CBVRFT3 NIC.Mezzanine.1A-4-1 90:8d:6e:fd:62:b2
ethernet1/1/17:1 Not Advertised f4:c7:aa:22:a7:bf f4:c7:aa:22:a7:bf
ethernet1/1/17:1 PowerEdge MX750c FZL22G3 NIC.Mezzanine.1A-2-1 30:d0:42:d8:af:ba
ethernet1/1/17:2 Broadcom Adv. Qua... bc:97:e1:0c:43:c1 bc:97:e1:0c:43:c1
ethernet1/1/17:2 PowerEdge MX750c ... FZL52G3 NIC.Mezzanine.1A-2-1 30:d0:42:d8:b7:d6
ethernet1/1/17:4 Not Advertised 34:80:0d:88:a3:0f 34:80:0d:88:a3:0f
ethernet1/1/17:4 iDRAC-8R7X233 8R7X233 NIC.Mezzanine.1A-2-1 4c:d9:8f:a8:a2:16
ethernet1/1/18:3 Broadcom Adv. Qua... 00:62:0b:7b:31:81 00:62:0b:7b:31:81
ethernet1/1/18:3 PowerEdge MX760c ... 3BVRFT3 NIC.Mezzanine.1A-2-1 90:8d:6e:fd:5d:90
ethernet1/1/19:2 Broadcom Adv. Qua... bc:97:e1:0c:43:c3 bc:97:e1:0c:43:c3
ethernet1/1/19:2 PowerEdge MX750c ... FZL52G3 NIC.Mezzanine.1A-4-1 30:d0:42:d8:b7:d6
ethernet1/1/20:3 Broadcom Adv. Qua... 00:62:0b:7b:31:83 00:62:0b:7b:31:83
ethernet1/1/20:3 PowerEdge MX760c ... 3BVRFT3 NIC.Mezzanine.1A-4-1 90:8d:6e:fd:5d:90
ethernet1/1/37 MX9116N-A1 ethernet1/1/37 20:04:0f:0c:c1:ae
ethernet1/1/38 MX9116N-A1 ethernet1/1/38 20:04:0f:0c:c1:ae
ethernet1/1/39 MX9116N-A1 ethernet1/1/39 20:04:0f:0c:c1:ae
ethernet1/1/40 MX9116N-A1 ethernet1/1/40 20:04:0f:0c:c1:ae
ethernet1/1/41 S5232-Leaf-1 ethernet1/1/3 3c:2c:30:49:21:80
ethernet1/1/42 S5232-Leaf-2 ethernet1/1/3 3c:2c:30:49:23:00
MX9116N-A2#

Show Eth-npar
The following show command details the NPAR interfaces.

MX9116N-A2# show interface eth-npar


Eth-npar 1/1/13/1 is up, line protocol is up
Address is 20:04:0f:5f:1c:53, Current address is 20:04:0f:5f:1c:53
Interface index is 318
Internet address is not set
Mode of IPv4 Address Assignment: not set
Interface IPv6 oper status: Disabled
MTU 9216 bytes, IP MTU 9184 bytes
Last clearing of "show interface" counters: 00:09:01
Input statistics:
0 packets, 0 octets
Output statistics:
17 packets, 4840 octets
Time since last interface status change: 00:02:25

Eth-npar 1/1/13/2 is up, line protocol is up


Address is 20:04:0f:5f:1c:53, Current address is 20:04:0f:5f:1c:53
Interface index is 319
Internet address is not set
Mode of IPv4 Address Assignment: not set
Interface IPv6 oper status: Disabled
MTU 9216 bytes, IP MTU 9184 bytes
Last clearing of "show interface" counters: 00:09:01
Input statistics:
0 packets, 0 octets
Output statistics:
17 packets, 4840 octets
Time since last interface status change: 00:02:25

Eth-npar 1/1/13/3 is up, line protocol is up


Address is 20:04:0f:5f:1c:53, Current address is 20:04:0f:5f:1c:53
Interface index is 320
Internet address is not set
Mode of IPv4 Address Assignment: not set
Interface IPv6 oper status: Disabled
MTU 9216 bytes, IP MTU 9184 bytes
Last clearing of "show interface" counters: 00:09:01
Input statistics:



0 packets, 0 octets
Output statistics:
17 packets, 4840 octets
Time since last interface status change: 00:02:25

Eth-npar 1/1/13/4 is up, line protocol is up


Address is 20:04:0f:5f:1c:53, Current address is 20:04:0f:5f:1c:53
Interface index is 321
Internet address is not set
Mode of IPv4 Address Assignment: not set
Interface IPv6 oper status: Disabled
MTU 9216 bytes, IP MTU 9184 bytes
Last clearing of "show interface" counters: 00:09:01
Input statistics:
0 packets, 0 octets
Output statistics:
17 packets, 4840 octets
Time since last interface status change: 00:02:25

Eth-npar 1/1/14/1 is up, line protocol is up


Address is 20:04:0f:5f:1c:54, Current address is 20:04:0f:5f:1c:54
Interface index is 322
Internet address is not set
Mode of IPv4 Address Assignment: not set
Interface IPv6 oper status: Disabled
MTU 9216 bytes, IP MTU 9184 bytes
Last clearing of "show interface" counters: 00:08:51
Input statistics:
2 packets, 144 octets
Output statistics:
20 packets, 5705 octets
Time since last interface status change: 00:08:51

Eth-npar 1/1/14/2 is up, line protocol is up


Address is 20:04:0f:5f:1c:54, Current address is 20:04:0f:5f:1c:54
Interface index is 323
Internet address is not set
Mode of IPv4 Address Assignment: not set
Interface IPv6 oper status: Disabled
MTU 9216 bytes, IP MTU 9184 bytes
Last clearing of "show interface" counters: 00:08:51
Input statistics:
26 packets, 1872 octets
Output statistics:
20 packets, 5705 octets
Time since last interface status change: 00:08:51

Eth-npar 1/1/14/3 is up, line protocol is up


Address is 20:04:0f:5f:1c:54, Current address is 20:04:0f:5f:1c:54
Interface index is 324
Internet address is not set
Mode of IPv4 Address Assignment: not set
Interface IPv6 oper status: Disabled
MTU 9216 bytes, IP MTU 9184 bytes
Last clearing of "show interface" counters: 00:08:51
Input statistics:
0 packets, 0 octets
Output statistics:
20 packets, 5705 octets
Time since last interface status change: 00:08:51

Eth-npar 1/1/14/4 is up, line protocol is up


Address is 20:04:0f:5f:1c:54, Current address is 20:04:0f:5f:1c:54
Interface index is 325
Internet address is not set
Mode of IPv4 Address Assignment: not set
Interface IPv6 oper status: Disabled
MTU 9216 bytes, IP MTU 9184 bytes



Last clearing of "show interface" counters: 00:08:51
Input statistics:
0 packets, 0 octets
Output statistics:
20 packets, 5705 octets
Time since last interface status change: 00:08:51

MX9116N-A2#



Appendix A: Additional Tasks
Reset SmartFabric OS10 switch to factory defaults
To reset SmartFabric OS10 switches back to the factory default configuration, enter the following commands:

OS10# delete startup-configuration

Proceed to delete startup-configuration [yes/no(default)]:yes


OS10# reload

System configuration has been modified. Save? [yes/no]:no

Proceed to reboot the system? [confirm yes/no]:yes

The switch reboots with default configuration settings.

Reset Cisco Nexus 3232C to factory defaults


To reset the Cisco Nexus 3232C switches to the factory default configuration, enter the following commands:

3232C# write erase


Warning: This command will erase the startup-configuration.
Do you wish to proceed anyway? (y/n) [n] y

After the next reboot, the switch loads with default configuration settings.

Connect to IO Module console port using RACADM


To connect to an IOM console port, first connect to the OME-Modular IP address using SSH, with the same credentials used to
log in to the OME-M UI.
Use the RACADM command from the MX9002m management module:

racadm connect [-b] -m <module>

-b is for Binary mode.


-m is the Module option. The module option can be one of the following:
● server-<n>: where n = 1 to 8
● switch-<n>: where n = 1 to 6 or <a1 | a2 | b1 | b2 | c1 | c2>
For example:
● Connect to I/O Module 1 serial console:

racadm connect -m switch-1

● Connect to Server 1 serial console:

racadm connect -m server-1



MX I/O module OS10 installation using ONIE
The Dell SmartFabric OS10 can be installed using the Open Network Install Environment (ONIE) on MX I/O modules in two
ways:
● Manual installation - Manually configure network information if a DHCP server is not available or install the OS10 software
image using USB media.
● Automatic installation - ONIE discovers network information including the Dynamic Host Configuration Protocol (DHCP)
server, connects to an image server, and downloads and installs an image automatically.

System setup
Connect the chassis management port on the management module to the network to download an image.
Before installation, verify that the system is connected correctly. To connect to and access the I/O module on the MX chassis,
see the Connect to IO Module console port using RACADM section. Alternatively, you can SSH directly to the IOM IP address
if one has been assigned through the management module.

Install OS10
For an ONIE-enabled switch, go to the ONIE boot menu. An ONIE-enabled switch boots with preloaded diagnostics (DIAGs) and
ONIE software.

+-------------------------------+
|*ONIE: Install OS |
| ONIE: Rescue |
| ONIE: Uninstall OS |
| ONIE: Update ONIE |
| ONIE: Embed ONIE |
| ONIE: Diag ONIE |
+-------------------------------+

Install OS: Boots to the ONIE prompt and installs an OS10 image using the Automatic Discovery process. When ONIE
installs a new OS image, the previously installed image and OS10 configuration are deleted.
Rescue: Boots to the ONIE prompt and enables manual installation of an OS10 image or an ONIE update.
Uninstall OS: Deletes the contents of all disk partitions, including the OS10 configuration, except ONIE and diagnostics.
Update ONIE: Installs a new ONIE version.
EDA DIAG: Runs the system diagnostics.

After the ONIE process installs an OS10 image and you reboot the switch in ONIE: Install OS mode (default), ONIE takes
ownership of the system and remains in Install mode until an OS10 image successfully installs again. To boot the switch from
ONIE for any reason other than installation, select the ONIE: Rescue or ONIE: Update ONIE option from the ONIE boot menu.
The OS10 installer image creates several partitions. After the installation is complete, the switch automatically reboots and loads
an OS10 active image. The other image becomes the standby image. Both the Active and Standby images are of the same
version.
NOTE: During an automatic or manual OS10 installation, if an error condition occurs that results in an unsuccessful
installation, perform Uninstall OS first to clear the partitions if there is an existing OS on the device. If the problem
persists, contact Dell Technologies Technical Support.

Manual installation
If you do not use the ONIE-based automatic installation of an OS10 image and if a DHCP server is not available, you can
manually install the image. Configure the Management port and the software image file to start the installation.



Manual installation using SCP, TFTP, or FTP server
1. Save the OS10 software image on an SCP, TFTP, or FTP server.
2. Power on the switch and select ONIE: Rescue for manual installation.
3. Enter the onie-discovery-stop command to stop the DHCP discovery.
4. Configure the IP address on the Management port, where x.x.x.x represents your internal IP address. After you configure
the Management port, the command reports the interface as up.

ifconfig eth0 x.x.x.x netmask 255.255.0.0 up

5. Enter the onie-nos-install image_url command to install the software on the device.
NOTE: The installation command accesses the OS10 software from the specified SCP, TFTP, or FTP URL, creates
partitions, verifies installation, and reboots itself.
The following is an example of the installation command:

ONIE:/ # onie-nos-install ftp://a.b.c.d/PKGS_OS10–Enterprise-x.x.xx.bin

NOTE: a.b.c.d represents the location to download the image file from, and x.x.xx represents the version number
of the software to install.

Manual installation using USB drive


You can install the OS10 software image using a USB device. Verify that the USB device supports a FAT or EXT2 file system.
1. Copy OS10 image file PKGS_OS10–Enterprise-x.x.xx.bin to USB storage device.
2. Plug the USB storage device into the USB storage port on the switch.
3. Power on the switch to automatically boot using the ONIE: Rescue option.
4. Optionally, enter the onie-discovery-stop command to stop ONIE discovery if the device boots to ONIE: Install OS.
5. Run the mkdir /mnt/media command to create a USB mount location on the system.
6. Enter the fdisk -l command to identify the path to the USB drive.
7. Run the mount -t vfat usb-drive-path /mnt/media command to mount the USB media plugged in the USB port
on the device.
8. Enter the onie-nos-install /mnt/media/image_file command to install the software from the USB,
where /mnt/media specifies the path where the USB partition is mounted.
The ONIE auto-discovery process discovers the image file at the specified USB path, loads the software image, and reboots the
switch to OS10 active image.

Automatic installation
You can automatically install an OS10 image on a Dell ONIE-enabled device. This process is known as zero-touch install. After
the device boots to ONIE: Install OS, ONIE auto-discovery follows these steps to locate the installer file and uses the first
successful method:
1. Use a statically configured path that is passed from the boot loader.
2. Search file systems on locally attached devices, such as USB.
3. Search the exact URLs from a DHCPv4 server.
4. Search the inexact URLs based on the DHCP responses.
5. Search IPv6 neighbors.
6. Start a TFTP waterfall.
The ONIE automatic discovery process locates the stored software image, downloads and installs it, and reboots the device with
the new image. Automatic discovery repeats until a successful software image installation occurs and reboots the switch.
If DHCPv4 server is used, ONIE auto-discovery obtains the hostname, domain name, Management interface IP address,
and the IP address of the domain name server (DNS) from the DHCP server and DHCP options. It also searches SCP, FTP, or
TFTP servers with the default DNS of the ONIE server. DHCP options are not used to provide the server IP.
If USB storage device is used, ONIE searches only FAT or EXT2 file systems for an OS10 image.



MXG610s FC switch upgrade and downgrade

Upgrade
To upgrade the Firmware on MXG610s FC switch, perform the following steps:
1. Validate the current Fabric OS version and other build information by running the version command on the MXG610s IOM.

Figure 245. MXG610s current firmware version


2. Back up your switch configuration before the firmware downloads. Enter the supportsave command to collect all
current core files. Also, include all serial consoles and any open network connection sessions, such as TELNET, with any
troubleshooting reports.

Figure 246. Back up configuration

NOTE: The FTP protocol is being deprecated starting with Fabric OS version 9.0.1a. Uploads or downloads using FTP
may not be supported. For release notes and MXG610s software, contact Dell Technical Support.
3. Enter the firmwaredownload command to download the firmware. You are prompted for the Server Name or IP
Address, File Name, User Name, Network Protocol (1-auto-select, 2-FTP, 3-SCP, 4-SFTP), and Password.

Figure 247. Firmware download


4. When the command is issued with the path to the directory where the firmware is stored, it automatically searches for the
correct package file type associated with the switch.
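An illustrative session is shown below; the prompt wording and values are hypothetical and vary by Fabric OS version:

MXG610:admin> firmwaredownload
Server Name or IP Address: 100.67.xx.xx
User Name: admin
File Name: /firmware/FOS9.x
Network Protocol (1-auto-select, 2-FTP, 3-SCP, 4-SFTP) [1]: 3
Password: ********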
Firmware upgrades are available for customers with support service contracts and for partners on the Dell Technologies website
https://www.dell.com/support/home/en-us?app=drivers.
NOTE: When upgrading multiple switch modules, complete the steps above on each switch module before upgrading to the
next one. Do not copy one switch configuration to another switch. Save each switch configuration on file and restore each
switch with the corresponding switch configuration.

Downgrade
To downgrade the firmware on the MXG610s, perform the following steps:

NOTE: Once upgraded to Gen 7, you cannot downgrade to a Fabric OS lower than Fabric OS 9.0.0.

1. Enter the firmwareshow command to validate the current firmware on the switch.



Figure 248. Current firmware
2. Enter firmwaredownloadstatus to confirm that there is no firmware download already in progress. If there is a
download in progress, wait until that download process is complete.
3. Enter the switchshow command to verify that no ports are running as G_Ports.

Figure 249. Switch ports information


4. Enter configupload to save the configuration file to your FTP or SSH server or to a USB memory device.
5. Enter supportsave to retrieve all current core files.
NOTE: The information provided in the supportsave command is useful to troubleshoot the firmware download
process if a problem occurs.
6. Enter errclear to clear all existing messages, including internal messages.
7. Enter supportsave -R (uppercase R). This action clears all core and trace files. Continue with the firmware download.

MXG610s switch details validation


NOTE: When cabling SFP+ optical transceivers, start from port 0, then port 17, and then the other ports.

To validate that the transceivers are supported and working correctly, use the sfpshow command. It displays the port
information, transceiver information, and speed information.



Figure 250. Transceiver information

The switchshow command displays switch hostname, switch type, online status, switch role, and all other switch-related
information, as shown in the figure below.

Figure 251. Switch information

The fabricshow command displays Switch ID, Worldwide Name, and Management IP address of the switch.

Figure 252. Fabric information



Appendix B: Additional Information
PTM port mapping
The following figures show the port mapping between compute sleds and Pass-Through Module (PTM) interfaces. This mapping
applies to both 25 GbE and 10 GbE PTMs.

Figure 253. Ethernet PTM dual-port NIC mapping

Figure 254. Ethernet PTM quad-port NIC mapping

NOTE: Ports 9 through 14 are reserved for future expansion.



Supported cables and optical connectors

PowerEdge MX7000 supported optics and cables


The PowerEdge MX7000 supports various optics and cables. The sections in this appendix provide a summary of the specified
industry standards and their use cases for the chassis. The following table shows the supported cable types.

NOTE: Additional information about supported cables and optics can be found in the PowerEdge MX IO Guide.

Table 39. Cable types


Cable type Description
DAC (copper) ● Direct attach copper
● Copper wires and shielding
● 2-wires/channel
AOC (optical) Active Optical Cable
MMF (optical) ● Multi-mode fiber
● Large core fiber (~50 µm)
● 100 m reach
● Transceivers are low cost
● Fiber is 3x the cost of SMF
SMF (optical) ● Single-mode fiber
● Tiny core fiber (~9 µm)
● 2/10 km reach
● Transceivers are expensive

The following table shows the different optical connectors and a brief description of the standard.

Table 40. Optical connectors

Small Form-factor Pluggable (SFP):
● SFP = 1 Gb, SFP+ = 10 Gb, SFP28 = 25 Gb
● 1 channel, 2 fibers or wires
● 1-1.5 W
● Duplex LC optical connector
● MMF or SMF

Quad Small Form-factor Pluggable (QSFP):
● QSFP+ = 40 Gb, QSFP28 = 100 Gb
● 4 channels, 8 fibers or wires
● 3.5-5 W
● MPO12 8-fiber parallel optical connector

Quad Small Form-factor Pluggable Double-Density (QSFP-DD):
● QSFP28-DD = 2x 100 Gb, QSFP56-DD = 2x 200 Gb
● 8 channels, 16 fibers or wires
● 10 W
● MPO12DD 16-fiber parallel optical connector

The following table shows the model of IOM where each type of media is relevant.



Table 41. Media associations

● SFP+: 25 GbE PTM
● SFP28: 25 GbE PTM
● QSFP+: MX9116n and MX5108n
● QSFP28: MX9116n and MX5108n
● QSFP28-DD: MX9116n and MX7116n

Each type of media has a specific use case in the MX7000, and each type supports various applications. The
following sections outline where in the chassis each type of media is relevant.
NOTE: See the Dell Networking Transceivers and Cables document for more information about supported optics and
cables.

SFP+/SFP28
As seen in the preceding table, SFP+ is a 10 GbE transceiver and SFP28 is a 25 GbE transceiver, both of which can use either
fiber or copper media to achieve 10 GbE or 25 GbE communication in each direction. While the MX5108n has four 10GBase-T
copper interfaces, the focus is on optical connectors.
The SFP+ media type is typically seen in the PowerEdge MX7000 using the 25 GbE Pass-Through Module (PTM) and using
breakout cables from the QSFP+ and QSFP28 ports. The following are supported on the PowerEdge MX7000:
● Direct Attach Copper (DAC)
● LC fiber optic cable with SFP+ transceivers
The use of SFP+/SFP28 media with QSFP+ and QSFP28 ports is discussed in those sections.

NOTE: The endpoints of the connection need to be set to 10 GbE if SFP+ media is being used.

Figure 255. SFP+/SFP28 media: Direct Attach Copper (DAC)



Figure 256. SFP+/SFP28 media: LC fiber optic cable

Figure 257. SFP+/SFP28 media: SFP+/SFP28 transceiver

The preceding figures show examples of SFP+ cables and transceivers. Also, the SFP+ form factor can be seen referenced in
the QSFP+ and QSFP28 sections using breakout cables.

QSFP+
QSFP+ is a 40 Gb standard that uses either fiber or copper media to achieve communication in each direction. This standard
has four individual 10-Gb lanes that can be used together to achieve 40 GbE throughput or separately as four individual 10 GbE
connections (using breakout connections). One variant of the Dell QSFP+ transceiver is shown in the following figure.

Figure 258. QSFP+ transceiver

The QSFP+ media type has several uses in the MX7000. While the MX9116n does not have interfaces that are dedicated to
QSFP+, ports 41 through 44 can be broken out to 1x 40 GbE that enables QSFP+ media to be used in those ports. The MX5108n
has one dedicated QSFP+ port and two QSFP28 ports that can be configured for 1x 40 GbE.



The following figures show examples of QSFP+ cables. The Direct Attach Copper (DAC) is a copper cable with a QSFP+
transceiver on either end. The Multi-fiber Push On (MPO) cable is a fiber cable that has MPO connectors on either end; these
connectors attach to QSFP+ transceivers. The third variant is an Active Optical Cable (AOC) that is similar to the DAC with a
fixed fiber optic cable in between the attached QSFP+ transceivers.

Figure 259. QSFP+ cables: Direct Attach Copper (DAC)

Figure 260. QSFP+ cables: Multi-fiber Push On (MPO) cable

Figure 261. QSFP+ cables: Active Optical Cable (AOC)

The MX7000 also supports the use of QSFP+ to SFP+ breakout cables. This offers the ability to use a QSFP+ port and connect
to four SFP+ ports on the terminating end.
The following figures show the DAC and MPO cables, which are two variations of breakout cables. The MPO cable in this
example attaches to one QSFP+ transceiver and four SFP+ transceivers.



Figure 262. QSFP+ to SFP+ Breakout cables: Direct Attach Copper (DAC) breakout

Figure 263. QSFP+ to SFP+ breakout cables: Multi-fiber Push On (MPO) breakout cable

NOTE: The MPO breakout cable uses a QSFP+ transceiver on one end and four SFP+ transceivers on the terminating end.

QSFP28
QSFP28 is a 100 Gb standard that uses either fiber or copper media to achieve communication in each direction. The QSFP28
transceiver has four individual 25-Gb lanes which can be used together to achieve 100 GbE throughput or separately as four
individual 25 GbE connections (using four SFP28 modules). One variant of the Dell QSFP28 transceiver is shown in the following
figure.

Figure 264. QSFP28 transceiver

There are three variations of cables for QSFP28 connections. The variations are shown in the following figures.



Figure 265. QSFP28 cables: Direct Attach Copper (DAC)

Figure 266. QSFP28 cables: Multi-fiber Push On (MPO) cable

Figure 267. QSFP28 cables: Active Optical Cable (AOC)

NOTE: The QSFP28 form factor can use the same MPO cable as QSFP+. The DAC and AOC cables are different in that the
attached transceiver is a QSFP28 transceiver rather than QSFP+.
QSFP28 supports the following breakout configurations (a configuration sketch follows the list):
● 1x 40 Gb with QSFP+ connections, using either a DAC, AOC, or MPO cable and transceiver.
● 2x 50 Gb with a fully populated QSFP28 end and two depopulated QSFP28 ends, each with 2x 25 GbE lanes. This product is
only available as DAC cables.
● 4x 25 Gb with a QSFP28 connection and using four SFP28 connections, using either a DAC, AOC, or MPO breakout cable
with associated transceivers.
● 4x 10 Gb with a QSFP28 connection and using four SFP+ connections, using either a DAC, AOC, or MPO breakout cable with
associated transceivers.
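On a SmartFabric OS10 PowerSwitch, a breakout such as the 4x 25 Gb option can be configured as in the following sketch (the port number is illustrative, and the supported breakout maps vary by platform; some platforms use port-groups instead):

OS10(config)# interface breakout 1/1/41 map 25g-4x

The four lanes then appear as individual interfaces, for example ethernet1/1/41:1 through ethernet1/1/41:4.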

QSFP28 double density connectors


A key technology that enables the Scalable Fabric Architecture is the QSFP28 double-density (DD) connector. The QSFP28-DD
form factor expands on the QSFP28 pluggable form factor by doubling the number of available lanes from four to eight. With
each lane operating at 25 Gbps, the result is 200 Gbps for each connection.



The following figure shows that the QSFP28-DD connector is slightly longer than the QSFP28 connector. This is to enable the
second row of pads that carry the additional four 25-Gbps lanes.
NOTE: A 100 GbE QSFP28 optic can be inserted into a QSFP28-DD port, resulting in 100 GbE of available bandwidth. The
other 100 GbE will not be available.

Figure 268. QSFP28-DD and QSFP28 physical interfaces

QSFP28-DD cables and optics build on the current QSFP28 naming convention. For example, the current 100 GbE short range
transceiver has the following description:
Q28-100G-SR4: Dell Networking Transceiver, 100GbE QSFP28, SR4, MPO12, MMF
The equivalent QSFP28-DD description is easily identifiable:
Q28DD-200G-2SR4: Dell Networking Transceiver, 2x100GbE QSFP28-DD, 2SR4, MPO12-DD, MMF

PowerEdge MX IOM slot support matrix


For information about the recommended PowerEdge MX IOM slot configurations, see Supported slot configurations for IOMs.



Appendix C: Dell PowerSwitch S4148U-ON Configuration in Scenario 7
In Scenario 7: Connect MX5108n to Fibre Channel storage - FSB, S4148U-ON switches are connected to the MX9116n FSE in
the MX7000 chassis and to the FC switches. This appendix covers the switch configuration for S4148U-ON switches running
OS10. Run the commands in the following sections to complete the configuration of both leaf switches.

Switch configuration commands


Run the following commands to configure the hostname, OOB management IP address, and default gateway.

General settings
NOTE: The MX I/O Modules run Rapid Per-VLAN Spanning Tree Plus (RPVST+) by default. RPVST+ runs RSTP on each
VLAN while RSTP runs a single instance of spanning tree across the default VLAN. The Dell PowerSwitch S4148U-ON
used in this example runs SmartFabric OS10 and has RPVST+ enabled by default. See the Spanning Tree Protocol
recommendations in the Dell SmartFabric OS10 User Guide for more information. Find the relevant version of the User
Guide in the OME-M and OS10 compatibility and documentation table.

S4148U-ON Leaf 1:

configure terminal
hostname S4148U-Leaf1
interface mgmt 1/1/1
no ip address dhcp
no shutdown
ip address 100.67.XX.XX/24
management route 0.0.0.0/0 100.67.XX.XX

S4148U-ON Leaf 2:

configure terminal
hostname S4148U-Leaf2
interface mgmt 1/1/1
no ip address dhcp
no shutdown
ip address 100.67.YY.YY/24
management route 0.0.0.0/0 100.67.YY.YY

NOTE: Use the spanning-tree {vlan vlan-id priority priority-value} command to set the bridge
priority for the upstream switches. The bridge priority ranges from 0 to 61440, in increments of 4096. The switch with the
lowest bridge priority becomes the STP root.
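For example, a sketch that makes Leaf 1 the preferred root for VLAN 30 (the VLAN ID and priority value are illustrative):

S4148U-Leaf1(config)# spanning-tree vlan 30 priority 4096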

Configure VLANs
Run the commands in this section to configure VLANs. In this deployment example, the VLANs used are VLAN 30 and VLAN 40.
Set the MTU to 9216 bytes.

S4148U-ON Leaf 1:

interface vlan40
mtu 9216
no shutdown

S4148U-ON Leaf 2:

interface vlan30
mtu 9216
no shutdown



Configure DCBx, NPG, and vFabric
Configure and enable the DCBx feature, NPG as the FC feature, and the vFabric.
NOTE: Remove all the FC configuration, vFabric global configuration, and vFabric configuration under interface or port
channels prior to configuring the FC feature.

S4148U-ON Leaf 1:

dcbx enable
feature fc npg
vfabric 101
vlan 40
fcoe fcmap 0xEFC00

S4148U-ON Leaf 2:

dcbx enable
feature fc npg
vfabric 102
vlan 30
fcoe fcmap 0xEFC01

Configure QoS
Configure the class maps and policy maps, and define the QoS parameters. In this example, queue 3 is defined as the output
queue in the policy map, and its bandwidth is set to 50%. Configure the QoS parameters as shown in the following example.

S4148U-ON Leaf 1 and Leaf 2 (the configuration is identical on both switches):

class-map type network-qos class_Dot1p_3
match qos-group 3
class-map type queuing map_ETSQueue_0
match queue 0
class-map type queuing map_ETSQueue_3
match queue 3
trust dot1p-map map_Dot1pToGroups
qos-group 0 dot1p 0-2,4-7
qos-group 3 dot1p 3
qos-map traffic-class map_GroupsToQueues
queue 0 qos-group 0
queue 3 qos-group 3
policy-map type network-qos policy_Input_PFC
class class_Dot1p_3
pause
pfc-cos 3
policy-map type queuing policy_Output_BandwidthPercent
class map_ETSQueue_0
bandwidth percent 50
class map_ETSQueue_3
bandwidth percent 50
system qos
trust-map dot1p map_Dot1pToGroups
qos-map traffic-class map_GroupsToQueues



Configure interfaces
In this topology, interfaces 1/1/1 and 1/1/3 on both leafs are connected to the FC switches. Interfaces 1/1/11 and 1/1/12
are connected to the MX9116n FSEs. Configure the interfaces as shown below, and make sure to configure the port groups
before configuring the interfaces.

S4148U-ON Leaf 1:

interface fibrechannel 1/1/1
no shutdown
vfabric 101

interface ethernet1/1/11
no shutdown
switchport access vlan 1
priority-flow-control mode on
service-policy input type network-qos policy_Input_PFC
service-policy output type queuing policy_Output_BandwidthPercent
ets mode on
vfabric 101

interface ethernet1/1/12
no shutdown
switchport access vlan 1
priority-flow-control mode on
service-policy input type network-qos policy_Input_PFC
service-policy output type queuing policy_Output_BandwidthPercent
ets mode on
vfabric 101

end
write memory

S4148U-ON Leaf 2:

interface fibrechannel 1/1/1
no shutdown
vfabric 102

interface ethernet1/1/11
no shutdown
switchport access vlan 1
priority-flow-control mode on
service-policy input type network-qos policy_Input_PFC
service-policy output type queuing policy_Output_BandwidthPercent
ets mode on
vfabric 102

interface ethernet1/1/12
no shutdown
switchport access vlan 1
priority-flow-control mode on
service-policy input type network-qos policy_Input_PFC
service-policy output type queuing policy_Output_BandwidthPercent
ets mode on
vfabric 102

end
write memory
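
Once both leaf switches are configured and the CNAs have logged in, the FCoE sessions established through the NPG can be checked with commands such as:

S4148U-Leaf1# show fcoe sessions
S4148U-Leaf1# show interface status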



D
Dell PowerStore 1000T
About Dell PowerStore 1000T
This section shows how an administrator can configure a Dell PowerStore 1000T to create hosts, add volumes, determine the
Worldwide Port Names (WWPNs) of the Converged Network Adapters (CNAs), and map storage volumes to the target hosts. The
WWPNs are used to connect FC storage targets to specific servers for data storage or OS boot.
NOTE: The configuration steps and screenshots in this section were taken from the PowerStore OS version that was current
at the time of publication. For the latest instructions, see the PowerStore 1000T Documentation Page, where you will find
the latest networking and configuration guides.

Configure PowerStore 1000T FC storage


This section covers configuration of a Dell PowerStore 1000T storage array. See the Dell PowerStore Quick Start Guide for
more detail about how to set up the storage array for the first time.
Once the initial storage array cluster configuration is complete and all the network devices are connected, perform the
following steps to create:
● FC storage array hosts
● Host groups
● Volume groups
● Volumes within those groups

Create a host
Perform the following steps to create a host.
1. Connect to the PowerStore 1000T UI in a web browser and log in using the required credentials.
2. Click on Compute and select the Hosts & Host Groups option.

Figure 269. Create host

3. Click Add Host. Enter a host name and select the Operating System. Click Next.



Figure 270. Add host name and operating system

4. Select the Protocol Type. In this example, Fibre Channel is selected. Click Next.
5. The initiator WWPNs are discovered automatically. Select the initiator identifier WWPN and click Next.
6. Review selections on the Summary page and click Add Host to create the host as shown in the following figure.
The host is displayed on the Compute > Hosts & Host Groups page.

Figure 271. Fibre channel host created
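
Host creation can also be automated through the PowerStore REST API. The following is a minimal sketch using curl; the endpoint and field names reflect the PowerStore REST API but should be verified against the API reference for your PowerStore OS version, and the management address, credentials, host name, and WWPN are placeholders:

curl -k -u admin:<password> -X POST https://<PowerStore-mgmt-IP>/api/rest/host \
  -H "Content-Type: application/json" \
  -d '{"name": "MX740c-1", "os_type": "ESXi", "initiators": [{"port_name": "20:01:F4:E9:D4:0C:24:F2", "port_type": "FC"}]}'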

Create host groups and add hosts


Perform the following steps to create host groups and add hosts to the group.
1. Once the host is created, click Add Host Group as shown in the following figure.



Figure 272. Add Host Group

2. Enter the name of the host group, select the Protocol Type, and select the appropriate host to add.
3. Click Create.

Figure 273. Host group created

Additional hosts may be added to the same host group as needed by clicking the (+ Add Host) button on the Host Groups
page.



Create volume groups
Perform the following steps to create volume groups.
1. Click on the Storage tab and select Volume Groups.
2. Enter the name for the volume group. Leave other options as default.
3. Click Create.

Figure 274. Volume group created

Create volumes
Perform the following steps to create the volumes under Volume Groups.
1. Once the volume group is created, click ADD VOLUMES.
2. Select Add New Volumes.



Figure 275. Add volumes to volume group

3. Enter the name of the volume. Select the desired quantity and size of each volume. In this example, a quantity of two
volumes with a size of 10 GB each is selected. Leave the other options as default.
4. Click Next as shown in the following figure.

Figure 276. Create volumes

5. Select the appropriate host group to map the volumes to. Leave the other options as default, as shown in the following figure.



Figure 277. Map volumes to host

NOTE: To modify a volume's name or size, click Storage > Volumes, select the volume, and then click Modify to make changes.
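
Volume creation can likewise be scripted with the same REST API; a minimal sketch, with the size given in bytes and all values placeholders (adding the volume to a volume group and mapping it to a host group are separate API calls):

curl -k -u admin:<password> -X POST https://<PowerStore-mgmt-IP>/api/rest/volume \
  -H "Content-Type: application/json" \
  -d '{"name": "MX-Volume-1", "size": 10737418240}'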

Determine PowerStore 1000T storage array FC WWPNs
The WWPNs of FC adapters in storage arrays are also used for FC configuration. Perform the following steps to determine
WWPNs on PowerStore 1000T storage arrays.
1. Connect to the PowerStore 1000T UI in a web browser and log in.
2. Click Compute and select the Hosts & Host Groups option.
3. Select the host group and click the host. The Fibre Channel Ports page is displayed as shown in the following figure.

Figure 278. Fibre Channel Ports

Two WWNs are listed for each port. The World Wide Node Name (WWNN), outlined in black, identifies the PowerStore
1000T storage array node. The WWPNs, outlined in blue, identify the individual ports associated with the corresponding
array node.
Record the WWPNs as shown in the following table:



Table 42. Storage array FC adapter WWPNs
Service processor Physical port WWNN WWPN
PS CTRL A FC 0 58:cc:f9:90:c9:20:0c:e7 58:cc:f9:90:49:21:0c:e7
PS CTRL A FC 1 58:cc:f9:90:c9:20:0c:e7 58:cc:f9:90:49:22:0c:e7
PS CTRL B FC 0 58:cc:f9:98:c9:20:0c:e7 58:cc:f9:98:49:21:0c:e7
PS CTRL B FC 1 58:cc:f9:98:c9:20:0c:e7 58:cc:f9:98:49:22:0c:e7

Determine CNA FCoE port WWPNs


In this example, the MX740c server's FCoE adapter WWPNs are used for FC connection configuration. Perform the following
steps to determine adapter WWPNs.
1. Connect to the MX compute sled's iDRAC in a web browser and log in.
2. Select System, then click Network Devices.
3. Click the CNA. In this example, NIC Mezzanine 1A is used. Under Ports and Partitioned Ports, the FCoE partition for
each port is displayed as shown in the following figure.

Figure 279. The FCoE partition displayed for each port



Figure 280. FCoE partitions in iDRAC

4. The first FCoE partition is Port 1, Partition 2. Click the (+) icon to view the MAC Addresses as shown in the following figure.

Figure 281. MAC address and FCoE WWPN for CNA port 1

5. Record the MAC Address and WWPN.

NOTE: A convenient method is to copy and paste these values into a text file.

6. Repeat steps 4 and 5 for the FCoE partition on port 2.


7. Repeat the steps in this section for the remaining MX740c servers.
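
As an alternative to the iDRAC GUI, the partition WWPNs and MAC addresses can also be retrieved with the racadm utility. A minimal sketch; the FQDD shown for the port 1 FCoE partition is illustrative and varies by slot and adapter:

racadm -r <iDRAC-IP> -u root -p <password> hwinventory NIC.Mezzanine.1A-1-2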
The FCoE WWPNs and MAC addresses used in this deployment example are shown in the following table:

Table 43. Server CNA FCoE port WWPNs and MACs

Server Port WWPN MAC
MX740c-1 1 20:01:F4:E9:D4:0C:24:F2 F4:E9:D4:0C:24:F2
MX740c-1 2 20:01:F4:E9:D4:0C:24:F3 F4:E9:D4:0C:24:F3
MX740c-2 1 20:01:34:80:0D:86:80:62 34:80:0D:86:80:62
MX740c-2 2 20:01:34:80:0D:86:80:63 34:80:0D:86:80:63



E
Hardware and Version Information
Hardware used in this guide
This section covers the rack-mounted networking switches used in the examples in this guide.

Table 44. Hardware and roles


Hardware Role
Dell PowerSwitch S3048-ON One S3048-ON switch supports out-of-band (OOB) management traffic for all
examples.
Dell PowerSwitch S5232F-ON A pair of S5232F-ON switches are used as leaf switches in Scenario 1: SmartFabric
deployment with S5232F-ON upstream switches with Ethernet - No Spanning Tree
uplink.
Dell PowerSwitch S4148U-ON Two S4148U-ON switches support storage traffic, and are the first of two leaf switch
options.
Dell PowerSwitch Z9264F-ON This switch may be used as a leaf or spine switch in a leaf-spine topology. It is
optimized for nonblocking 100 GbE leaf/spine fabrics and high-density 25/50 GbE
in-rack server and storage connections. It provides up to 64 ports of 100 GbE QSFP28
or up to 128 1/10/25/40/50 GbE ports using breakout cables.
Dell PowerStore 1000T storage array This array is used for the FC connections. Additional 2U Disk Array Enclosures (DAEs)
may be added, providing twenty-five additional drives each.
Cisco Nexus 3232C A pair of Cisco Nexus 3232C switches are used as leaf switches in Scenario 2:
SmartFabric connected to Cisco Nexus 3232C switches with Ethernet - No Spanning
Tree uplink.

More detail about each of these devices is provided in the following sections.
For detailed information about hardware components related to the MX platform, see Software and firmware versions used.

Dell PowerSwitch S3048-ON


The Dell PowerSwitch S3048-ON is a 1U switch with forty-eight 1 GbE BASE-T ports and four 10 GbE SFP+ ports.

Figure 282. PowerSwitch S3048-ON

Dell PowerSwitch S5232F-ON


The Dell PowerSwitch S5232F-ON is a 1U, multilayer switch with 32x 100 GbE QSFP28 ports and 2x 10 GbE SFP+ ports.



Figure 283. Dell PowerSwitch S5232F-ON

Dell PowerSwitch S4148U-ON


The Dell PowerSwitch S4148U-ON is a 1U switch with 48x SFP+ ports, 2x QSFP+ ports, and 4x QSFP28 ports.

Figure 284. Dell PowerSwitch S4148U-ON

Dell PowerSwitch Z9264F-ON


The Dell PowerSwitch Z9264F-ON is a 2U, multilayer switch with 64x 100 GbE QSFP28 ports and 2x 10 GbE SFP+ ports.

Figure 285. PowerSwitch Z9264F-ON

Dell PowerStore 1000T


The PowerStore 1000T is a versatile storage platform utilizing Intel Xeon Scalable processors and advanced storage
technologies, including end-to-end NVMe flash, dual-ported Intel Optane™ SSDs, and NVMe-FC. It supports NAS, iSCSI,
FC, and NVMe-FC. The base enclosure is a 2U, two-node enclosure with twenty-five 2.5" NVMe drive slots.

Figure 286. Dell PowerStore 1000T front view



Figure 287. Dell PowerStore 1000T rear view

Cisco Nexus 3232C


The Cisco Nexus 3232C is a 1U fixed form-factor 100 GbE switch with thirty-two QSFP28 ports supporting 10/25/40/50/100
GbE.

Software and firmware versions used


Scenarios 1 through 4
The following tables include the hardware components and supported software and firmware versions for Scenario 1, Scenario 2,
Scenario 3, and Scenario 4.

Dell PowerSwitch
Table 45. Dell PowerSwitch switches and OS versions – Scenarios 1 through 4
Qty Item Software version
2 Dell PowerSwitch S5232F-ON leaf switches 10.5.4.4
1 Dell PowerSwitch S3048-ON OOB management switch 10.5.4.4

Dell PowerEdge MX7000 chassis and components


Table 46. Dell PowerEdge MX7000 chassis and components – Scenarios 1 through 4
Qty Item Software version
2 Dell PowerEdge MX7000 chassis -
4 Dell PowerEdge MX740c sled See the following table
4 Dell PowerEdge M9002m modules 2.10.00
2 Dell Networking MX9116n FSE 10.5.5.2
2 Dell Networking MX7116n FEM -

Table 47. Minimum software and firmware requirements - MX9116n


Software Minimum release version requirement
ONIE 3.35.5.1-24
BIOS 3.35.0.1-5
CPLD system 0.13



Dell PowerEdge MX740c chassis and components
Table 48. Dell PowerEdge MX740c compute sled details – Scenarios 1 through 4
Qty per sled Item Firmware version
1 Intel(R) Xeon(R) Silver 4114 CPU @ 2.20 GHz -
12 16 GB DDR4 DIMMs (192 GB total) -
3 600 GB SAS HDD -
1 Intel(R) Ethernet 25 G 2P XXV710 mezzanine card 21.5.9
- BIOS 2.17.1
- iDRAC with Lifecycle Controller 6.10.30.05

Cisco Nexus switches


Table 49. Nexus switches and OS versions – Scenarios 1 through 4
Qty Item Software version
2 Cisco Nexus 3232C 7.0(3)I4(1)

Scenarios 5 through 8
The tables in this section include the hardware components and supported software and firmware versions for Scenario 5
through Scenario 8 in this document.

Table 50. Minimum software and firmware requirements - MX9116n


Software Minimum release version requirement
ONIE 3.35.5.1-24
BIOS 3.35.0.1-5
CPLD system 0.13

Table 51. Minimum software and firmware requirements - MX5108n


Software Minimum release version requirement
ONIE 3.35.5.1-24
BIOS 3.35.0.1-5
CPLD system 0.13

Table 52. Dell Switches and OS versions - Scenarios 5 through 8


Qty Item Software version
1 Dell PowerSwitch S3048-ON management switch 10.5.4.4
2 Dell Networking MX9116n FSE 10.5.5.2
2 Dell Networking MX5108n 10.5.5.2
2 Dell PowerSwitch S4148U-ON 10.5.4.4
2 Dell Networking MX7116n FEM -



Table 53. Dell PowerEdge MX-series components - Scenarios 5 through 8
Qty Item Software version
4 Dell PowerEdge M9002m modules 2.10.00
4 Dell PowerEdge MX740c compute sleds See the following table

Table 54. Dell PowerEdge MX740c compute sled details - Scenarios 5 through 8
Qty per sled Item Firmware version
1 QLogic QL41262HMKR (25 G) mezzanine CNA 16.10.00
2 Intel(R) Xeon(R) Silver 4114 CPU @ 2.20 GHz -
12 16 GB DDR4 DIMMs (192 GB total) -
3 600 GB SAS HDD -
- BIOS 2.17.1
- iDRAC with Lifecycle Controller 6.10.30.05



F
References
Dell Technologies documentation
The following Dell Technologies documentation provides additional and relevant information. Access to these documents may
depend on your login credentials. If you do not have access to a document, contact your Dell Technologies representative.
● Dell Networking Guides
● Dell PowerEdge MX IO Guide
● Dell SmartFabric OS10 User Guide
● Dell PowerStore Guides
● Dell Technologies Interactive Demo: OpenManage Enterprise Modular for MX solution management
● Dell PowerEdge MX SmartFabric and Cisco ACI Integration Guide
● Dell Fabric Design Center
● Manuals and documents for Dell Networking MX5108n
● Manuals and documents for Dell Networking MX9116n
● Manuals and documents for Dell PowerEdge MX7000
● Manuals and documents for Dell PowerSwitch S3048-ON
● Manuals and documents for Dell PowerSwitch S5232-ON
● Manuals and documents for Dell PowerSwitch S4148U-ON
● Fibre Channel Deployment with S4148U-ON in F_Port Mode
● FCoE-to-Fibre Channel Deployment with S4148U-ON in F_Port Mode

OME-M and OS10 compatibility and documentation


This section includes the compatibility matrix of OME-M and OS10 and provides links to OME-M and OS10 user guides and
release notes for all versions.

OME-M and OS10 compatibility


OME-M version OS10 version
1.10.00 10.5.0.1
1.10.20 10.5.0.5
1.20.00 10.5.0.7, 10.5.0.9
1.20.10 10.5.1.6, 10.5.1.7, 10.5.1.9
1.30.00 10.5.2.3 (factory only), 10.5.2.4, 10.5.2.6
1.30.10 10.5.2.6
1.40.00, 1.40.10, 1.40.20 10.5.3.1
2.00.00 10.5.4.1
2.10.00 10.5.5.1 (factory installed), 10.5.5.2

OME-M and OS10 documentation


The following OME-M documents are available on the Documentation tab of the PowerEdge MX7000 support site.
● Dell OpenManage Enterprise-Modular Edition for PowerEdge MX7000 Chassis User's Guide

● Dell OpenManage Enterprise-Modular Edition for PowerEdge MX7000 Chassis Release Notes
The following OS10 documents are available on the Documentation tab of the SmartFabric OS10 Software support site.
● Dell SmartFabric OS10 User Guide
● SmartFabric OS10 Release Notes for PowerEdge MX

Dell Technologies Networking Infrastructure Solutions documentation
The following documentation provides additional networking solutions information.
NOTE: Access to the documentation may require user credentials. If you do not have access to a document, contact your
Dell Technologies representative.
Networking solutions: https://infohub.delltechnologies.com/t/networking-solutions-57/

Feedback and technical support


We encourage readers to provide feedback on the quality and usefulness of this publication by sending an email to
Dell_Networking_Solutions@Dell.com.
For technical support, visit https://www.dell.com/support.
