
Leaf-Spine Cluster Installation Guide

Cluster Installation
November 2021

• Leaf-Spine topology
• Workflow for installing a new Leaf-Spine cluster
• Workflow for Leaf-Spine cluster growth
• Upgrade OneFS
• Upgrade the switch operating system
• Install rails for the Dell Z9100 switches
• Install rails for the Dell Z9264 switches
• Install Dell Z9264 switches in the rack
• Racking guidelines for Leaf-Spine clusters
• Install Dell switches in the rack
• Cable management for Leaf-Spine clusters
• Where to get help
• Additional options for getting help
Leaf-Spine topology
OneFS 9.0.0.0 and later releases support the Leaf-Spine network topology for the internal networks that connect the nodes in clusters of up to 252 nodes. The Leaf-Spine topology is recommended for large clusters that are intended to grow significantly over time.
NOTE: To connect 252 nodes on a OneFS 8.2.2 Leaf-Spine cluster, you must update the cluster with the latest OneFS
8.2.2 rollup patch. For more information, see the Current OneFS patches guide.

Architecture
In a Leaf-Spine topology, Z9264 and Z9100-ON switches are arranged in a two-level hierarchy. The bottom-level switches with the nodes connected are called Leaf switches. The Leaf switches are connected to the top-level switches, called Spine switches; each Leaf switch connects to every Spine switch so that the entire cluster of nodes is networked. OneFS requires two independent Leaf-Spine networks for intracluster communication. These networks are known as Int-A and Int-B, respectively.
NOTE: Using both Z9100 and Z9264 switches in the same Leaf-Spine configuration is not currently supported. The two networks must be built using only Z9100 switches or only Z9264 switches.
The following table lists the main Leaf-Spine components in a cluster.

Table 1. Leaf-Spine network components


Component | Description | Connection considerations
Spine | Z9264 64-port 100 GbE (QSFP28) switch | Back-end network with 100 GbE uplinks; connects to the Leaf switches
Leaf | Z9264 64-port 100 GbE (QSFP28) switch | ● Up to 10 x 100 GbE uplinks to the Spine ● 54 performance nodes (40 GbE/100 GbE) OR 108 nonperformance nodes (10 GbE/25 GbE) using breakout cables[1]
Spine | Z9100-ON 32-port 100 GbE (QSFP28) switch | Back-end network with 100 GbE uplinks; connects to the Leaf switches
Leaf | Z9100-ON 32-port 100 GbE (QSFP28) switch | ● 22 performance nodes or 88 archive nodes with breakout cables for downlinks from the Leaf switch ● Up to 10 x 100 GbE uplinks from the Leaf switch
Performance - 100 GbE and 40 GbE back-end | F600, F800, F810, F900, H500, H600, H700, H5600, and H7000 Isilon nodes | F600*, F900, H700*, and H7000* nodes support a 40 GbE or 100 GbE connection to the Leaf switch
Archive - 25 GbE and 10 GbE back-end | F200, F600, A200, A2000, A300, A3000, and H400 Isilon nodes | ● Archive nodes support a 10 GbE connection to the Leaf switch; use a breakout cable to connect up to four nodes to a single Leaf port ● A300, A3000, F200, F600*, H700*, and H7000* nodes support a 10 GbE or 25 GbE connection to the Leaf switch
Breakout cable | 40 GbE cable that breaks out to four 10 GbE cables, OR 100 GbE cable that breaks out to four 25 GbE cables | Connects a Leaf switch 40 GbE port to four 10 GbE nodes, or a Leaf switch 100 GbE port to four 25 GbE nodes
Uplink | Leaf to Spine connection | There must be the same number of uplinks on every Leaf switch. That number should be the number of uplinks that are required by the Leaf switch with the most downlinks.
Downlink | Leaf to node connection |

[1] Only 27 breakout ports are available due to a Z9264 limitation: only every other port can be used for breakout (64 total ports - 10 uplinks = 54 ports; 54 divided by 2 = 27), as reproduced in the sketch after this table.
* If equipped with a 100 GbE NIC
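
The Z9264 Leaf figures above can be reproduced with a short shell calculation (a minimal sketch; the port counts are taken from footnote [1]):

TOTAL_PORTS=64
UPLINKS=10
PERFORMANCE_PORTS=$(( TOTAL_PORTS - UPLINKS ))   # 54 ports left for 40 GbE/100 GbE performance nodes
BREAKOUT_PORTS=$(( PERFORMANCE_PORTS / 2 ))      # 27 ports usable with breakout (only every other port)
echo "$PERFORMANCE_PORTS performance nodes or $(( BREAKOUT_PORTS * 4 )) archive nodes per Leaf"   # 54 or 108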

Dell Z9100-ON switches


The Dell Z9100-ON switch is supported for Leaf-Spine network topology and is compatible with all Generation 6 and PowerScale
nodes.

Z9100-ON
NOTE: The Dell Z9100-ON switches arrive autoconfigured for installation. Do not make any configuration changes to the
switches or to the firmware before cabling them to the back-end network.
The Dell Z9100-ON is:
● A 1U, 32-port (QSFP28) switch supporting up to 100 GbE connections
● The method through which PowerScale nodes communicate with one another for intracluster traffic

Figure 1. Z9100-ON I/O-side view

1. MicroUSB-B console port
2. RS-232/RJ45 serial console port
3. Thirty-two 10/25/40/50/100 QSFP28 ports
4. 1G/10G SFP+ ports
5. Stack ID
6. USB Type A storage
7. 10/100/1000BaseT Ethernet management port
8. Luggage tag

Figure 2. Z9100-ON PSU-side view

1. Fan modules
2. Power supply units

Table 2. Z9100-ON switch details


Z9100-ON feature | Description
Density | High-density 10/25/40/100 GbE core switch with 32 100 GbE QSFP28 ports, expandable to:
● 128 x 10 GbE ports using QSFP+ to SFP+ breakout
● 128 x 25 GbE ports using QSFP28 to SFP28 breakout
● 32 x 40 GbE ports using QSFP+ optics/DACs
● 32 x 100 GbE ports using QSFP28 optics/DACs
Each port can be broken out independently of the breakout selection for other ports in the switch.
Design | Compact form factor with a 1RU design
Power consumption | Low power consumption at 605 W (maximum) / 288 W (typical) per system, with all ports running at line rate with AOC optics
Airflow | PSU to I/O is the default airflow for the Isilon Leaf-Spine architecture.
High availability | Redundant, hot-swappable power supplies and fans

Z9100-ON switch requirements


The following table lists the number of Z9100-ON switches that are required to create a cluster for a given number of nodes. The numbers of switches that are listed are for one network only. Because the back-end consists of two similar networks, the numbers must be doubled to get the total number of switches. The node counts that are listed are performance nodes; each performance node can be replaced by up to four archive nodes by using breakout cables (see the sizing sketch after the table). It is recommended that the total number of nodes on any Leaf switch does not exceed 32.

Table 3. Z9100-ON switch requirements for Leaf-Spine clusters


Maximum number of nodes | Number of Spine switches | Number of Leaf switches | Cables between each pair of Leaf and Spine switches

All 40 GbE ports
44 | 1 | 2 | 9
66 | 1 | 3 | 9
88 | 2 | 4 | 5
110 | 2 | 5 | 5
132 | 2 | 6 | 5
144 | 3 | 7 | 3
252 | 5 | 12 | 2

All 100 GbE ports
32 | 1 | 2 | 16
64 | 2 | 4 | 8
112 | 4 | 7 | 4
128 | 4 | 8 | 4
135 | 5 | 9 | 3
150 | 5 | 10 | 3
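
For example, a minimal shell sketch of the sizing rules above, applied to the first row of Table 3 (44 performance nodes):

SPINES=1
LEAVES=2
echo "switches per network: $(( SPINES + LEAVES ))"                      # 3
echo "total switches for Int-A and Int-B: $(( (SPINES + LEAVES) * 2 ))"  # 6
echo "maximum archive nodes if every slot uses breakout: $(( 44 * 4 ))"  # 176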

Dell Z9264 switches
The Dell Z9264 switch is supported for Leaf-Spine network topology and is compatible with all Generation 6 and PowerScale
nodes.

Z9264
NOTE: The Dell Z9264 switches arrive autoconfigured for installation. Do not make any configuration changes to the
switches or to the firmware before cabling them to the back-end network.
The Dell Z9264 is:
● A 2U, 64-port (QSFP28) switch supporting up to 100 GbE connections
● The method through which PowerScale nodes communicate with one another for intracluster traffic

Figure 3. Z9264 I/O-side view

1. MicroUSB-B console port
2. RJ45 console port
3. Sixty-four QSFP28 ports
4. Two SFP+ ports
5. USB Type A port
6. RJ45 management port
7. Luggage tag

Figure 4. Z9264 PSU-side view

1. Fan modules
2. Power supply units

Table 4. Z9264 switch details


Z9264 feature | Description
Density | High-density 10/25/40/100 GbE core switch with 64 100 GbE QSFP28 ports, expandable to:
● 64 x 40 GbE ports using QSFP+ optics/DACs
● 64 x 100 GbE ports using QSFP28 optics/DACs
Each port can be broken out independently of the breakout selection for other ports in the switch.
Design | Compact form factor with a 2RU design
Power consumption | Low power consumption at 605 W (maximum) / 288 W (typical) per system, with all ports running at line rate with AOC optics
Airflow | PSU to I/O is the default airflow for the Isilon Leaf-Spine architecture.
High availability | Redundant, hot-swappable power supplies and fans

Z9264 switch requirements


The following table lists the number of Z9264 switches that are required to create a cluster for a given number of nodes. The numbers of switches that are listed are for one network only. Because the back-end consists of two similar networks, the numbers must be doubled to get the total number of switches. The node counts that are listed are performance nodes; each performance node can be replaced by up to four archive nodes by using breakout cables. It is recommended that the total number of nodes on any Leaf switch does not exceed 32. (A port-count check for the first row of the table follows the table.)

Table 5. Z9264 switch requirements for Leaf-Spine clusters
Maximum number of nodes | Number of Spine switches | Number of Leaf switches | Cables between each pair of Leaf and Spine switches

All 40 GbE ports
88 | 1 | 2 | 18
132 | 1 | 3 | 18
160 | 1 | 4 | 16
176 | 2 | 4 | 9
220 | 2 | 5 | 9
252 | 3 | 6 | 6

All 100 GbE ports
64 | 1 | 2 | 32
128 | 2 | 4 | 16
150 | 3 | 5 | 10
180 | 3 | 6 | 10
252 | 4 | 8 | 8
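
As an illustrative port-count check of the first 40 GbE row above (a shell sketch; the figures come from the table):

PORTS_PER_LEAF=64
NODES_PER_LEAF=$(( 88 / 2 ))                                            # 44 downlink ports in use on each Leaf
echo "ports left for uplinks: $(( PORTS_PER_LEAF - NODES_PER_LEAF ))"   # 20, so the 18 uplinks per Leaf fit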

Workflow for installing a new Leaf-Spine cluster


A brief workflow for installing a cluster with Leaf-Spine topology is described below.
It is recommended to reserve ports 1 through 10 for uplinks and ports 11 through 32 for downlinks.
If you install a new cluster, or scale a cluster, to include 32 or more performance nodes with 40 GbE or 100 GbE back-end ports, or more than 96 archive nodes with 10 GbE or 25 GbE back-end ports, use the Leaf-Spine topology to configure the back-end network. See Table 1 for node types.
To install a new Leaf-Spine cluster, follow this workflow.
1. Install the switch rails.
2. Install the Spine switches followed by the Leaf switches.
3. Cable the leaf switches to the spine switches and then to the nodes for both the networks.
4. Ensure that the personality on the switches is correct; that is, Leaf switches show the leaf personality and Spine switches show the spine personality.
NOTE: Do not connect Leaf to Leaf or Spine to Spine switches.
5. Ensure that the switch operating system version is 10.5.0.6.C2.
The Upgrade the switch operating system section in this guide provides detailed instructions on how to check and upgrade switches.
6. Confirm that OneFS 9.0.0.0 or later is installed on the cluster.
The Upgrade OneFS section in this guide provides detailed instructions on how to check and upgrade OneFS.
7. Create a cluster by using any four similar nodes on the first Leaf switch.
The Run the configuration wizard section of the PowerScale Generation 6 Installation Guide provides detailed instructions.
8. Add the remaining nodes to the cluster that was created in step 7.
The Run the configuration wizard section of the PowerScale Generation 6 Installation Guide provides detailed instructions.
9. Confirm the cluster installation by running isi status to verify that all nodes are present and that there are no outstanding CELOG alerts (a verification sketch follows this procedure).

NOTE: The reported events can be related to links introduced between two or more Leaf-switch-to-node connections (downlinks) or between two or more Leaf-to-Spine switch connections (uplinks). Incorrect cabling is also reported in events.

The PowerScale OneFS Event Reference Guide provides instructions on how to view events.
10. If this network is only Int-A, repeat these steps on Int-B.
The Leaf-Spine cluster is installed.
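
The checks in steps 4, 5, 6, and 9 can be run as the following minimal console sketch, which reuses only commands shown in this guide. Run the switch commands from each Leaf and Spine switch console, and the isi commands from any node as root.

On each Leaf and Spine switch:

show smartfabric personality
show version

On any node in the cluster:

isi version
isi status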
Best practices and examples for Leaf-Spine clusters are available in the white paper Best Practices For a Dell EMC Isilon
Leaf-Spine Network.

Workflow for Leaf-Spine cluster growth


You can expand the storage capacity of a Leaf-Spine cluster by adding switches and nodes.
Use the following tables to determine the number of switches and cables required to expand the Leaf-Spine cluster.
CAUTION: To ensure that adding switches and nodes to an existing Leaf-Spine cluster is nondisruptive:
● Do not power off any of the switches in the cluster.
● Only move excess links between Leaf and Spine switches.
● Cable one network, wait for a few minutes, and then cable the other network.

CAUTION: If you are reusing switches that were previously used in another cluster, reimage them before adding them
to a new fabric. Reimage them with the same version that is deployed in the existing fabric.

NOTE: To connect 252 nodes on a OneFS 8.2.2 Leaf-Spine cluster, you must update the cluster with the latest OneFS
8.2.2 rollup patch. For more information, see the Current OneFS patches guide.
Best practices and examples for Leaf-Spine clusters are available in the white paper Best Practices For a Dell EMC Isilon Leaf-Spine Network.
1. Determine the number of Leaf or Spine switches required for adding the new nodes to the cluster. The tables Z9100-ON switch requirements for Leaf-Spine clusters and Z9264 switch requirements for Leaf-Spine clusters list the switch requirements.
2. If additional Leaf or Spine switches are not required, connect the new nodes to free ports on the Leaf switches. Ensure that
you connect the new nodes by using ports 11 through 32 as recommended.
3. If additional Leaf or Spine switches are required:
a. Install the new switches in the rack.
b. All switches must have the same operating system version. The Dell Switch Operating System Upgrade Guide for OneFS 8.2 and later provides instructions.
c. Determine the number of cables required to connect between each pair of Leaf and Spine switches by using the tables.
NOTE: The number of cables that are needed between each pair of Leaf and Spine switches decreases when a new Spine switch is added. As a result, some of the existing cables can be moved to connect to the new Spine.

d. If a new Spine switch is being added, connect the existing Leaf switches to the new Spine switch. Ensure that you connect any new Leaf switches to each Spine switch as recommended.
4. Check the personality on the newly installed switches to ensure that they are the same, by running the command show smartfabric personality.
Example output:

ENG-NC-BEZ9100-AI126-48# show smartfabric personality
Personality : L3 Fabric
Role : LEAF
ICL :

5. Power on the new nodes and join them to the cluster by using the Configuration Wizard. The Run the Configuration Wizard section of the PowerScale F200 and F600 Node Installation Guide provides detailed instructions.

Upgrade OneFS
You can use the command-line interface to upgrade OneFS on the cluster. OneFS 8.2 or later is required for Leaf-Spine cluster
configurations.
Follow the pre-upgrade steps in the OneFS Upgrade Planning and Process Guide to confirm cluster health, and resolve any
compatibility issues before upgrading OneFS.
NOTE: To connect 252 nodes on a OneFS 8.2.2 Leaf-Spine cluster, you must update the cluster with the latest OneFS 8.2.2 rollup patch. For more information, see the Current OneFS patches guide.
Follow these steps to upgrade OneFS from the command-line interface. Download the OneFS installation image from the Dell
EMC Product Support site. The OneFS Upgrade Planning and Process Guide provides complete details.
1. Open a secure shell (SSH) connection to the lowest-numbered node in the cluster, and log in with the root account.
2. Verify the version of OneFS that is currently installed on the cluster.
isi version
If OneFS 8.2 or later is installed on the cluster, skip steps 3 and 4 as the cluster already supports Leaf-Spine clusters.
3. To perform the upgrade, run the following command, where <install-image-path> is the file path of the upgrade install
image. The file path must be accessible in an /ifs directory.
NOTE: The --simultaneous option takes all nodes in the cluster out of service at the same time. The cluster is
unavailable until the upgrade completes. The upgrade completes one node at a time and is nondisruptive if you omit the
--simultaneous option from the command.
isi upgrade cluster start <install-image-path> --simultaneous
The isi upgrade cluster command runs asynchronously, sets up the upgrade process, and returns quickly. To view
the progress of the upgrade, run the following command:
isi upgrade view
4. Commit the upgrade by running the following command:
isi upgrade cluster commit
The progress of the upgrade can be monitored by running the following command:
isi upgrade view
NOTE: If there is an issue, contact Support.
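
Taken together, a typical nondisruptive (rolling) upgrade session looks like the following sketch. The <install-image-path> placeholder is described in step 3, and --simultaneous is intentionally omitted so that nodes are upgraded one at a time.

isi version
isi upgrade cluster start <install-image-path>
isi upgrade view
isi upgrade cluster commit
isi upgrade view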

Upgrade the switch operating system


Leaf-Spine network configurations with up to 252 nodes are supported with Dell Z9100-ON or Z9264 switches running
operating system version 10.5.0.6.C2. This section describes the steps to check the switch operating system version and to
upgrade it for Leaf-Spine clusters.
NOTE: All switches must have the same operating system version. If all the switches have the same version and that
version is 10.5.0.6.C2, skip this section as switch operating system upgrade is not required.
To check the switch operating system version, perform step 1 on all switches.
1. Check the switch operating system version by performing the following steps on each switch.
a. Connect to the switch console.
The Dell Switch Operating System Upgrade Guide for OneFS 8.2 and later provides complete details.

b. Log in with the username admin and the password admin.


c. Check the switch operating system version by running the following command:
show version
The screen displays output similar to the following:

Dell EMC Networking OS10-Enterprise
Copyright (c) 1999-2020 by Dell Inc. All Rights Reserved.
OS Version: 10.5.0.6
Build Version: 10.5.0.6.685
Build Time: 2020-03-01T10:51:29-0800
System Type: Z9100-ON
Architecture: x86_64
Up Time: 1 day 00:02:03

The OS Version row displays the version, which should be 10.5.0.6.C2. If the Build Version is 10.5.0.6.685 or later, skip steps 2, 3, and 4.
2. Save the license file and the configuration.
See the Dell Switch Operating System Upgrade Guide for OneFS 8.2 and later for instructions.
3. Access ONIE.
See the Dell Switch Operating System Upgrade Guide for OneFS 8.2 and later for instructions.
4. Install the upgrade.
See the section Install configuration - DNOS 10.5.0.6.C2 in the Dell Switch Operating System Upgrade Guide for OneFS 8.2 and later for instructions.
5. To install the configuration file on a switch, run the following commands, depending on the role for which you are configuring
the switch:
● For all flat TOR setups of switches S4112, S4148, Z9100, and Z9264, run the following commands to configure all
switches to have the Leaf role:

configure terminal
smartfabric l3fabric enable role LEAF

● For Z9100 and Z9264 Leaf-Spine setups, run the following commands to configure the switch for the spine role:

configure terminal
smartfabric l3fabric enable role SPINE

● For Leaf-Spine setups, run the following commands to configure the leaf switches:

configure terminal
smartfabric l3fabric enable role LEAF

6. The following prompt appears: Reboot to change the personality? [yes/no]
Enter yes.
7. Install the license file and reconfigure.
See the Dell Switch Operating System Upgrade Guide for OneFS 8.2 and later for instructions.
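
As a quick reference, the check-and-configure flow on a single Leaf switch in a Leaf-Spine setup looks like the following sketch. It reuses only the commands shown above; the reboot prompt is the one described in step 6.

show version
configure terminal
smartfabric l3fabric enable role LEAF
(enter yes at the prompt: Reboot to change the personality? [yes/no])
show smartfabric personality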

Install rails for the Dell Z9100 switches


Install Dell EMC rails to support Dell switches in the rack.
1. Separate the inner and outer rails.
Attach the inner rails to the switch before installing it in the rack.
2. Clip the rail to the front of the rack.
There is a label on the rail that reads INSTALL DEVICE THIS END. Slide the switch into the rack from the front, so that the label is at the front of the rack.

a. Align the support screws with the correct location on the rack.
b. Press down on the label that reads PUSH to open the rail clip.
c. Guide the support screws at the end of the rail into the rack holes until the clip snaps to the rack and holds the rail in
place.

3. Extend the rail until the back end of the rail clips to the back of the rack.
4. Repeat these steps to install the second rail on the other side of the rack.
5. Repeat until all switch rails are installed.

Install rails for the Dell Z9264 switches


Install Dell EMC rails to support Dell Z9264 switches in the rack.
NOTE: There are two sets of rails: the tool-less Dell 2U Ready Rails and the 2U Dell EMC designed rails that require
mounting screws to secure the front. Determine which rail kit version you are using. If tools are required, skip this section.
1. Separate the inner and outer rails.
Attach the inner rails with the desired offset to the switch before installing it in the rack.
2. Take the tool-less install side of the rail and press it into the rear NEMA until it has securely clipped in. Extend the rail forward so that the screw holes are aligned to the inside of the front NEMA.
NOTE: Do not slide the switch into the rails until both mounting screws for each of the outer rails are in place in the
front.

Figure 5. Outer rail installation

a. Pull the rails forward from the rear NEMA after clipping them in so that they meet the front NEMA.
b. Insert a screw in the second and fourth hole positions from the top on the front of the rail to secure it in place during the installation process.
3. Repeat these steps to install the second rail on the other side of the rack.
4. Repeat until all switch rails are installed.

Install Dell Z9264 switches in the rack


Install Dell EMC rails to support Dell Z9264 switches in the rack.
Mount the inner rails on the switches and then install them in the rack (rear facing). Use the default 31-inch mounting points for the inner rails. For the Z9264, there are three mounting offsets to choose from: 22 inches, 28 inches, and 31 inches. The 22-inch offset uses its own inner rail. The 28-inch and 31-inch rail offsets can be adjusted by removing two screws in the front of the rail, sliding the adjustable portion forward or backward, and reinserting the two screws to lock the rails into that offset position. This adjustment can be made before or after securing the inner rail to the switch.
1. Attach the inner rails to the switch. Ensure the ears at the end of the rails are pointing outward and are installed at the back
end of the switch.
2. To install either the 22-inch rail or the 28-inch/31-inch rail, line up the 10 holes in the rail with the 10 holes on the side of the switch and insert a screw in each location. Ensure that the rail ears are pointing out.
NOTE: Do not slide the switch with inner rails into outer rails until all mounting screws for the inner rails and switch have
been inserted and secured.

NOTE: The rails have a distinct left and right, which correspond to their orientation when viewed from the front of the
rack when installed.
There are Left and Right labels on the front of the inner rails. Slide the switch into the rack from the front so that the labels
are at the front of the rack.

Figure 6. Installation step sequence 1

3. From the front of the rack, slide the switch with inner rails onto the outer rails until the front ears of the inner rails meet the
front NEMAs.
4. If installing a bezel, any clips or pods that are used to mount the bezel should be installed at this step. After the clips or pods
are installed, check that all remaining exposed mounting holes on the front of the switch rails are secured to the rack with
screws.

Figure 7. Installation step sequence 3 and 4

5. Repeat until all switch rails are installed.

Racking guidelines for Leaf-Spine clusters


The guidelines in this section help you plan your racking arrangement for building or scaling Leaf-Spine clusters. Only install Dell EMC equipment in the rack.
● To ensure that you can scale your cluster in the future, leave enough rack space for switches and/or nodes.
● Place the Spine switches at a central location where you have room to grow the cluster on either side. Build your Int-A back-end network from one side of the Spine switches, and your Int-B back-end network from the other side.

● Try to place the nodes in the same rack as the Leaf switches to which they are connecting. If that is not possible, place the
nodes in an adjacent rack.
If you plan to expand the cluster, additional Spine switches and extra cabling are required. When expanding from one Spine switch to two Spine switches, half of the existing connections to Spine 1 are moved to Spine 2, as the sketch below illustrates. To simplify the expansion, it is a best practice to add switches within the same rack. For example, to add 3 nodes to the rack, because Gen6 nodes are 4U, enough space for 12U is required.
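
A rough shell sketch of the uplink rebalancing described above, assuming a Z9100-ON fabric with 10 uplinks per Leaf (matching the 88-node row of Table 3):

UPLINKS_PER_LEAF=10
SPINES=2
echo "cables from each Leaf to each Spine: $(( UPLINKS_PER_LEAF / SPINES ))"   # 5, so half of the links that originally went to Spine 1 move to Spine 2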

Install Dell switches in the rack


Mount inner rails on the switches and then install them in the rack (rear facing).
There are two mounting points for the inner rails, one 28" and the other 31" from the front rail. Use the default 31" mounting
points for the inner rails.
1. Attach the inner rails to the switch.
Ensure the ears at the end of the rails are pointing outward and are installed at the back end of the switch.
a. Insert the mounting posts on the side of the switch into the large holes in the rail.
b. Slide the rails toward the back of the switch to secure the mounting posts in the smaller holes in the rail.

2. Slide the switch into the rails that are mounted in the rack.

3. Slide the switch into the rack until you can secure the back ears of the rail to the rack with the two black screws on the rail. Secure the rail to the rack by tightening the black thumbscrew on each rail.

Sequence of tasks
Complete the tasks for adding a node to a cluster in the following sequence to ensure a more orderly installation.
If you are only adding a new node into an existing cluster, you can skip all steps related to installing the rails and chassis.
1. If you are going to be racking nodes with all drives installed, confirm that a mechanical lift is available.
The mechanical lift must be rated for at least 300 lbs (136 kg). If a mechanical lift is not available, label the drive sleds when you remove them so that you can return them safely to their original slots. The Isilon Generation 6 Installation Guide provides more details.
NOTE: Node enclosures with all drives installed can weigh up to 300 lbs (60 x 4 lbs) or 136 kg.

2. Install rails for the new enclosure.


3. Install the enclosure.
4. Install the cable management arms for the chassis.
5. Install the power and network cables.
6. Complete the hardware installation, including applying system tags and labels.
7. Bring the system online and include the new node enclosures.
8. Return shipping materials.

Third-party rack requirements


You must confirm that all third-party racks meet the minimum requirements for installation.
Third-party racks must meet the requirements listed in the following table.

Category | Requirements

Cabinet
● 44 inches minimum rack depth without a rear door; 52 inches minimum rack depth with a rear door.
● The node-attached cable management arms extend approximately 7 inches out the back of a 44-inch deep rack.
● The 2U PDU extends approximately 3 inches out the back of a 44-inch deep rack.
● The standard server rack depth is 37 inches; the server rack depth with the 2U PDU is 44 inches.
● A 24-inch wide cabinet is recommended to provide room for cable routing on the sides of the cabinet.
● If you are installing nodes in a 24-inch wide rack, you are required to install 2U horizontal PDUs.
● If you are installing nodes in a 30-inch wide rack, you can install vertical PDUs, but the PDUs must be rear-facing.
● Sufficient contiguous space anywhere in the rack to install the components in the required relative order.
● If you install a front door, it must maintain a minimum of 1.2 inches of clearance to the bezels, it must be perforated with 50% or more evenly distributed air opening, it should enable easy access for service personnel, and it must allow the LEDs to be visible through it.
● The cable management arms may extend out the back of the cabinet and do not allow for rear doors if the cabinet is not 52 inches deep.
● If you install a rear door, it must be perforated with 50% or more evenly distributed air opening.
● Use blanking panels as required to prevent air recirculation inside the cabinet.
● A recommended minimum of 42 inches of clearance in the front and 36 inches of clearance in the rear of the cabinet allows for service area and proper airflow. A minimum of 60 inches of clearance is required in the front of the cabinet, and a minimum of 36 inches of clearance is recommended in the rear of the cabinet, to allow for service area and proper airflow.
NOTE: Make note of the significant clearance that is required at the front of a node. The node slides out roughly two floor tiles away from the rack when servicing drives.

NEMA rails
● NEMA round and square hole rails are supported.
● NEMA threaded hole rails are NOT supported.
● NEMA round holes must accept M5 size screws.
● The optimal front-to-rear NEMA rail spacing is 29 inches, with a minimum of 27 inches and a maximum of 34 inches.
● The optimal front-to-rear NEMA rail spacing is 29 inches, with a minimum of 23.25 inches and a maximum of 34 inches.

Power
● The customer rack must have redundant power zones, one on each side of the rack with separate PDU power strips. Each redundant power zone should have capacity for the maximum power load.
● Use the power calculator to refine the power requirements based on the hardware configuration and the customer-provided PDU.
● For customer-provided PDUs, the Dell EMC power cords on the servers and switches expect C13/C14 connections.
NOTE: Dell EMC is not responsible for any failures, issues, or outages resulting from failure of the customer-provided PDUs.

Cabling
● Cables for the product must be routed in such a way that they mimic the standard offering coming from the factory. This includes dressing cables to the sides to prevent drooping and interfering with service of field replaceable units (FRUs).
● Optical cables should be dressed to maintain a 1.5-inch bend radius.
● Cables for third-party components in the rack cannot cross or interfere with components in such a way that they block front-to-back airflow or individual FRU service activity.

Weight
● The customer rack and data center floor must be capable of supporting the weight of the equipment.
● Use the power and weight calculator to refine the weight requirements based on the hardware configuration and the customer-provided cabinet and PDU.

Cable management for Leaf-Spine clusters


To set up and manage cables for Leaf-Spine clusters, follow these guidelines.
● Use breakout cables if you are planning to add nodes with 10 GbE or 25 GbE back-end interfaces to the cluster. Breakout
cables enable you to connect a single cable to the switch that breaks out to four archive nodes.
● If cable cost and length are a consideration, rack the nodes as close to the Leaf switches as possible. Shorter passive cables are more cost effective than the longer active cables. Also, passive cables encounter fewer problems and errors.
● Do not connect Leaf to Leaf, or Spine to Spine switches. CELOG detects the existence of any invalid connections in the
cluster.

Supported cable length options


● 100 GbE cable options for uplinks and downlinks:
○ Passive copper - 1, 3, and 5 meters
○ Active optical - 3, 7, 10, and 30 meters
○ Optical - 1, 3, 5, 10, 30, 50, 100, and 150 meters

● 40 GbE cable options for downlinks:
○ Copper - 1, 3, and 5 meters
○ Optical - 1, 3, 5, 10, 30, 50, 100, and 150 meters
● Breakout cables 4x10GbE and 4x25GbE:
○ Copper - 1, 3, and 5 meters
○ Optical - breakout cables do not require optics.

Example Leaf-Spine configurations


Follow the general design principles used in these examples to deploy an Isilon Leaf-Spine cluster in your datacenter.

Table 6. Leaf-Spine to node maximum connections


Spine switches | Leaf switches | Maximum ports
1 | 2-3 | 66
2 | 2-6 | 132
3 | 3-7 | 144
5 | up to 12 | 252

NOTE: Dell EMC does not recommend using 4 Spine switches, since that requires using just 8 uplinks. The Leaf switches are then limited to supporting 22 or fewer ports for downlinks; any more than 22 downlink ports oversubscribes the back-end networks. This requires recabling some of the nodes to different Leaf switches to grow the cluster. For example, to grow from 3 Spine switches to 4 Spine switches, you must recable the nodes to different Leaf switches (unless the nodes never connected to more than 22 ports).
40 node configuration
It is assumed that this cluster will not grow beyond 44 performance nodes with 40 GbE, or 176 archive nodes with 10 GbE by using breakout cables. Although this configuration does not initially require a Leaf-Spine architecture, the target growth of the cluster exceeds what a single Z9100-ON switch supports.

Table 7. Leaf to Spine port recommendations


Leaf switch | Spine connection
L1 | Ports 1, 2, 3, 4, 5, 6, 7, 8, and 9 on Spine 1
L2 | Ports 10, 11, 12, 13, 14, 15, 16, 17, and 18 on Spine 1

The configuration with 40 nodes (20 performance nodes and 20 archive nodes) includes the following (the counts are checked in the sketch after this list):
● Six Dell Z9100-ON switches (3 per side)
○ 2 Spine switches
○ 4 Leaf switches
● 36 QSFP28 100 GbE uplink cables (9 uplink cables per Leaf)
● 40 QSFP+ twin-ax or MPO back-end cables
● 80 optics (if 40 MPO cables are used, one optic for each end of the cable)
● 10 QSFP to SFP+ breakout cables
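
As a quick sanity check of these counts (a minimal shell sketch; the figures come from the list above):

LEAVES=4
UPLINKS_PER_LEAF=9
echo "uplink cables: $(( LEAVES * UPLINKS_PER_LEAF ))"   # 36
MPO_CABLES=40
echo "optics: $(( MPO_CABLES * 2 ))"                     # 80, one optic at each end of every MPO cable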

Figure 8. 40 node configuration

The following are design considerations for this Leaf-Spine configuration:


● Organize the nodes per type, so that all Performance nodes are on rack 1 and all Archive nodes are on rack 2.
● To simplify and organize the cabling, place both an Int-A and an Int-B Leaf switch in the same rack, so that all nodes connect to switches within the same rack and rack-to-rack cabling consists only of uplinks (Leaf to Spine).
● Group the uplink cables connected to the Spine in a logical way. As you can see from the magnified drawing above, the first 9 ports on the Spine are used by Leaf #1 and the next 9 ports by Leaf #2.
● Reserve the first 10 ports on Leaf switches for uplink connections.
Best practices and additional examples for Leaf-Spine clusters are available in the following white papers:
● Best Practices for a Dell EMC PowerScale Leaf-Spine Network
● Dell EMC PowerScale Back-End Network Overview

Where to get help
The Dell Technologies Support site (https://www.dell.com/support) contains important information about products and
services including drivers, installation packages, product documentation, knowledge base articles, and advisories.
A valid support contract and account might be required to access all the available information about a specific Dell Technologies
product or service.

Additional options for getting help


This section contains resources for getting answers to questions about PowerScale products.

Dell Technologies support
● https://www.dell.com/support/incidents-online/en-us/contactus/product/isilon-onefs
Telephone support
● United States: 1-800-SVC-4EMC (1-800-782-4362)
● Canada: 1-800-543-4782
● Worldwide: 1-508-497-7901
● Local phone numbers for a specific country or region are available at https://www.dell.com/support/incidents-online/en-us/contactus/product/isilon-onefs.
PowerScale OneFS Documentation Info Hubs
● https://www.dell.com/support/kbdoc/en-us/000152189/powerscale-onefs-info-hubs
Dell Community Board for self-help
● https://www.dell.com/community

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the
problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2018 - 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
