PowerScale - Isilon - A3000 and A300-PowerScale Node Installation Guide
Topic: A3000 and A300

Selections:
PowerScale A3000 and A300: PowerScale Node Installation Guide
PowerScale Node Installation Guide: Notes, cautions, and warnings
PowerScale Node Installation Guide: Node installation introduction
PowerScale Node Installation Guide: Node installation for A3000, A300, H7000, and H700 nodes
PowerScale Node Installation Guide: Attaching network and power cables
PowerScale Node Installation Guide: Before you begin
PowerScale Node Installation Guide: Configure the node
PowerScale Node Installation Guide: Front panel LCD menu
PowerScale Node Installation Guide: Update the install database
PowerScale Node Installation Guide: Node installation for F900, F600, F200, B100, and P100 nodes
PowerScale Node Installation Guide: Node configuration
REPORT PROBLEMS
If you find any errors in this procedure or have comments regarding this application, send email to
SolVeFeedback@dell.com
Copyright © 2023 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell
EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be
trademarks of their respective owners.
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of
any kind with respect to the information in this publication, and specifically disclaims implied warranties of
merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable
software license.
This document may contain certain words that are not consistent with Dell's current language guidelines.
Dell plans to update the document over subsequent future releases to revise these words accordingly.
This document may contain language from third party content that is not under Dell's control and is not
consistent with Dell's current guidelines for Dell's own content. When such third party content is updated
by the relevant third parties, this document will be revised accordingly.
Contents

Preliminary Activity Tasks
Read, understand, and perform these tasks
Drive types
Unpack and verify components
Rail kit components for 2U systems
Rail kit components for 1U systems
Install the rails
Secure the rail assemblies to the cabinet
Install the system in the cabinet
Install the front bezel
Connect and route cords and cables
Node ports
Dell Switch configuration
Node configuration
Configure the node
Federal installations
SmartLock compliance mode
Connect to the node using a serial cable
Run the configuration wizard
Preformat SED Nodes (Optional)
Updating node firmware
Licensing and remote support
Configure the Integrated Dell Remote Access Controller
Front panel LCD display
View the Home screen
Setup menu
View menu
Join a cluster by using buttons and the LCD display
Update the install database
Where to get help
Additional options for getting help
Preliminary Activity Tasks
This section may contain tasks that you must complete before performing this procedure.
Table 1 List of cautions, warnings, notes, and/or KB solutions related to this activity
2. This is a link to the top trending service topics. These topics may or may not be related to this activity. This is merely a proactive attempt to make you aware of any KB articles that may be associated with this product.
Note: There may not be any top trending service topics for this product at any given time.
General Information for Removing and Installing FRUs
This section describes precautions you must take and general procedures you must follow when
removing, installing, or storing field-replaceable units (FRUs). The procedures in this section apply to FRU
handling during hardware upgrades as well as during general replacement.
FRUs are designed to be replaced while the system remains powered up. This means you can accomplish FRU replacements
and most hardware upgrades while the cabinet is powered up. To maintain proper airflow for cooling and
to ensure EMI compliance, make sure all front bezels, filler panels, and filler modules are reinstalled after
the FRU replacement or hardware upgrade is completed.
IMPORTANT: These procedures are not a substitute for the use of an ESD kit. You should follow them
only in the event of an emergency.
• Before touching any FRU, touch a bare (unpainted) metal surface of the enclosure.
• Before removing any FRU from its antistatic bag, place one hand firmly on a bare metal surface of the
enclosure, and at the same time, pick up the FRU while it is still sealed in the antistatic bag. Once you
have done this, do not move around the room or contact other furnishings, personnel, or surfaces
until you have installed the FRU.
• When you remove a FRU from the antistatic bag, avoid touching any electronic components and
circuits on it.
• If you must move around the room or touch other surfaces before installing a FRU, first place the
FRU back in the antistatic bag. When you are ready again to install the FRU, repeat these
procedures.
NOTE:
A NOTE indicates important information that helps you make better use of your product.
CAUTION:
A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING:
A WARNING indicates a potential for property damage, personal injury, or death.
Node installation introduction
About PowerScale nodes
Node Description
• Are 1U models that can be added to an existing cluster in single node increments.
• Can be included in a maximum 252 node cluster.
• Support inline software data compression (3:1 depending on the workload and the
dataset).
• Support data deduplication.
• Provide additional compute, memory, and networking resources to a cluster but do not
provide additional storage.
• Enable 2-way NDMP backup and restore from third-party fibre channel-attached tape
libraries.
• Are 1U models that can be added to an existing cluster in single node increments.
• Can be included in a maximum 252 node cluster.
• Support inline software data compression (3:1 depending on the workload and the
dataset).
• Support data deduplication.
• Provide additional compute, memory, and networking resources to a cluster but do not
provide additional storage.
Before you begin
WARNING:
• Before you begin, read and follow the safety instructions in any Safety, Environmental, and
Regulatory information document shipped with the system.
• To avoid injury, do not attempt to lift the system by yourself.
• The figures in this document do not represent a specific system.
• The rail kit is compatible with square, unthreaded round, and threaded round hole racks.
WARNING:
Do not install A3000, A300, H7000, or H700 nodes alongside Gen6 nodes in existing Gen6 chassis installations. The higher-powered A3000, A300, H7000, and H700 nodes can cause a fuse to open on the Gen6 chassis midplane, which then requires a chassis replacement. A3000, A300, H7000, and H700 nodes can only be installed into the chassis they ship in from the factory, or into other chassis in which like nodes were shipped.
Drive types
This information applies to nodes that contain any of the following drive types: self-encrypting drives
(SEDs), hard disk drives (HDDs), and solid state drives (SSDs).
If you are performing this procedure with a node containing SSDs, follow the additional steps that are
provided in this document to ensure compatibility with the cluster.
CAUTION:
Only install the drives that were shipped with the node. Do not mix drives of different capacities in your node.
If you remove drive sleds from the chassis during installation, be sure to label the sleds clearly. Replace the drive sleds in the same sled bays that you removed them from. If drive sleds are mixed between nodes, even before configuration, the system is inoperable.
If you are working with a node containing SEDs, the node might take up to two hours longer to join the
cluster than a node with standard drives. Do not power off the node during the join process.
NOTE:
To avoid personal injury or damage to the hardware, always use multiple people to lift and move heavy equipment.
Installation types
You may be able to skip certain sections of this procedure based on the type of installation you are
performing.
New cluster
If you are installing a new cluster, follow every step in this procedure. Repeat the procedure for each
chassis you install.
If you are installing a new cluster with more than 22 nodes, or if you are growing an existing cluster to
include more than 22 nodes, follow the instructions in Install a new cluster using Leaf-Spine configuration
in the Leaf-Spine Cluster Installation Guide. See the PowerScale Site Preparation and Planning Guide for
more information about the Leaf-Spine network topology.
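The 22-node threshold described above can be expressed as a trivial check. This is an illustrative sketch only, not OneFS code:

```python
def needs_leaf_spine(node_count: int) -> bool:
    """Per the guidance above, clusters growing beyond 22 nodes
    require the Leaf-Spine network topology."""
    return node_count > 22

print(needs_leaf_spine(22))  # False: a 22-node cluster uses the standard topology
print(needs_leaf_spine(23))  # True: follow the Leaf-Spine Cluster Installation Guide
```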
New chassis
If you are adding a new Generation 6 chassis to an existing cluster, follow every step in this procedure.
NOTE:
Check the depth of the racks to ensure that they fit the depth of the chassis being installed. The Generation 6 Site Preparation and
Planning Guide provides details.
The two rails are packaged separately inside the chassis shipping container.
2. Remove the mounting screws from the back section of the rail.
The back section is the thinner of the two rail sections. There are three mounting screws that are
attached to the back bracket. There are also two smaller alignment screws. Do not remove the alignment screws.
3. Attach the back section of the rail to the rack with the three mounting screws.
Ensure that the locking tab is on the outside of the rail.
4. Remove the mounting screws from the front section of the rail.
The front section is the wider of the two rail sections. There are three mounting screws that are
attached to the front bracket. There are also two smaller alignment screws. Do not remove the alignment screws.
5. Slide the front section of the rail onto the back section that is secured to the rack.
6. Adjust the rail until you can insert the alignment screws on the front bracket into the rack.
7. Attach the front section of the rail to the rack with only two of the mounting screws.
Attach the mounting screws in the holes between the top and bottom alignment screws. You will
install mounting screws in the top and bottom holes after the chassis is installed, to secure the
chassis to the rack.
8. Repeat these steps to install the second rail in the rack.
NOTE:
A chassis that contains drives and nodes can weigh up to 285 pounds. We recommend that you attach the chassis to a lift to install
it in a rack. If a lift is not available, you must remove all drive sleds and nodes from the chassis before you attempt to lift it. Even
when the chassis is empty, only attempt to lift and install the chassis with multiple people.
CAUTION:
If you remove drive sleds from the chassis during installation, make sure to label the sleds clearly. You must replace the drive sleds
in the same sled bay you removed them from. If drive sleds are mixed between nodes, even prior to configuration, the system will
be inoperable.
1. Align the chassis with the rails that are attached to the rack.
2. Slide the first few inches of the back of the chassis onto the supporting ledge of the rails.
3. Release the lift casters and carefully slide the chassis into the cabinet as far as the lift will allow.
4. Secure the lift casters on the floor.
5. Carefully push the chassis off the lift arms and into the rack.
CAUTION:
Make sure to leave the lift under the chassis until the chassis is safely balanced and secured within the cabinet.
6. Install two mounting screws at the top and bottom of each rail to secure the chassis to the rack.
7. If you removed the drives and nodes prior to installing the chassis, re-install them now.
CAUTION:
Remember that you must install drive sleds with the compute module they were packaged with on arrival to the site. If you removed
the compute nodes and drive sleds to rack the chassis, you must replace the drive sleds and compute modules in the same bays
from which you removed them. If drive sleds are mixed between nodes, even before configuration, the system is inoperable.
If all compute nodes and drive sleds are already installed in the chassis, you can skip this section.
1. At the back of the chassis, locate the empty node bay where you install the node.
2. Pull the release lever away from the node.
Keep the lever in the open position until the node is pushed all the way in to the node bay.
3. Slide the node into the node bay.
NOTE:
Support the compute node with both hands until it is fully inserted in the drive bay.
4. Push the release lever in against the node. You can feel the lever pull the node into place in the bay. If you do not feel the lever pull the node into the bay, pull the lever back into the open position, make sure that the node is pushed all the way into the node bay, then push the lever in against the node again.
5. Tighten the thumbscrew on the release lever to secure the lever in place. The node automatically powers up when you insert it into the bay.
6. At the front of the chassis, locate the empty drive sled bays where you install the drive sleds that
correspond to the compute module you installed.
7. Make sure the drive sled handle is open before inserting the drive sled.
8. With two hands, slide the drive sled into the sled bay.
9. Push the drive sled handle back into the face of the sled to secure the drive sled in the bay.
10. Repeat the previous steps to install all drive sleds for the corresponding compute module.
11. Repeat all the steps in this section to install other nodes.
Back panel
The back panel provides connections for power, network access, and serial communication, as well as
access to the power supplies and cache SSDs.
1. 1 GbE management and SSH port
6. Multifunction button
NOTE:
The 1 GbE management interface on Generation 6 hardware is designed to handle SSH traffic only.
CAUTION:
Only trained support personnel should connect to the node with the USB or HDMI debugging ports. For direct access to the node,
connect to the console connector.
CAUTION:
Do not connect mobile devices to the USB connector for charging.
Multifunction button
You can perform two different functions with the multifunction button. With a short press of the button,
you can begin a stack dump. With a long press of the button, you can force the node to power off.
NOTE:
Power off nodes from the OneFS command line. Only power off a node with the multifunction button if the node does not respond
to the OneFS command.
Supported switches
Switches ship with the proper rails or tray to install the switch in the rack.
The following internal network switches ship with rails to install the switch. The switch rails are adjustable
to fit NEMA front rail to rear rail spacing ranging from 22 in. to 34 in.
Z9264F-ON 128-port 64x100 GbE, 64x40 GbE, 128x10 GbE, 128 x 25GbE (with
breakout cables)
The Z9264F-ON is a fixed 2U Ethernet switch. The Z9264F-ON provides either 64 ports of 100 GbE or 40 GbE in QSFP28, or 128 ports of 25 GbE or 10 GbE by breakout. Breakout cables are only used in the odd-numbered ports, and using one in an odd-numbered port disables the corresponding even-numbered port. For example, breaking out all 32 odd-numbered ports yields 128 connections of 10 GbE or 25 GbE (32 x 4:1 breakouts). You can then mix and match: removing 2x 40 GbE or 100 GbE connections frees a port pair for 4x 10 GbE or 25 GbE connections, and conversely.
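The odd/even breakout rule above can be sketched in Python. The helper below is purely illustrative (not a switch or OneFS API); it only encodes the port arithmetic described in this section:

```python
def z9264_capacity(broken_out_pairs: int) -> dict:
    """Illustrative Z9264F-ON port math: breaking out an odd-numbered
    QSFP28 port disables its even-numbered partner, so each breakout
    consumes a pair of the 64 ports and yields four 10/25 GbE links."""
    if not 0 <= broken_out_pairs <= 32:
        raise ValueError("the switch has 32 odd/even port pairs")
    return {
        "native_40_100gbe_ports": 64 - 2 * broken_out_pairs,
        "breakout_10_25gbe_links": 4 * broken_out_pairs,
    }

# All 32 pairs broken out: no native ports left, 128 x 10/25 GbE links
print(z9264_capacity(32))
# Trading one pair back: 2 native ports regained, 4 breakout links lost
print(z9264_capacity(31))
```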
Z9100-ON 128-port 32x100 GbE, 32x40 GbE, 128x10 GbE (with breakout cables),
128 x 25GbE
The Z9100-ON fixed 1U Ethernet switch can accommodate high port density (lower and upper RUs). The
switch accommodates multiple interface types (32 ports of 100 GbE or 40 GbE in QSFP28 or 128 ports of
25 GbE or 10 GbE with breakout).
NOTE:
In OneFS 8.2.0 and later, the Z9100-ON switch is required for Leaf-Spine networking of large clusters.
S5232 128-port 32x100 GbE, 32x40 GbE, 128x10 GbE (with breakout cables), 128 x
25GbE (with breakout cables)
Only 124 10/25 GbE nodes can be supported on the S5232 through breakout.
The S4148F-ON is the next-generation family of 10 GbE (48-port) top-of-rack, aggregation, or router products that aggregate 10 GbE server or storage devices. The switch provides multi-speed uplinks for maximum flexibility and simple management.
S4112F-ON 12-port 3x100 GbE (with breakout, connect 12x10 GbE nodes using the
3x100 GbE ports) 12x10 GbE
The S4112F-ON supports 10/100 GbE with 12 fixed SFP+ ports that implement 10 GbE and three fixed QSFP28 ports that implement 4x10 or 4x25 GbE using breakout. This provides a total of 24 10 GbE connections when the three fixed QSFP28 ports use 4x10 breakout cables.
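The 24-connection total above follows from simple arithmetic, sketched here for illustration:

```python
SFP_PLUS_10GBE = 12        # fixed SFP+ ports, one 10 GbE connection each
QSFP28_PORTS = 3           # fixed QSFP28 ports
BREAKOUT_PER_QSFP28 = 4    # 4x10 GbE links per QSFP28 port with breakout cables

# Total 10 GbE connections when all three QSFP28 ports use 4x10 breakout
total_10gbe = SFP_PLUS_10GBE + QSFP28_PORTS * BREAKOUT_PER_QSFP28
print(total_10gbe)  # 24
```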
3. To PDU 1
4. To PDU 2
Work with the site manager to determine external network connections, and bundle the additional
network cables together with the internal network cables from the same node pair.
It is important to keep future maintenance in mind as you dress the network and power cables. Cables
must be dressed loosely enough to allow you to:
• remove any of the four compute nodes from the back of the Generation 6 chassis.
• remove power supplies from the back of compute nodes.
In order to avoid dense bundles of cables, you can dress the cables from the node pairs to either side of
the rack. For example, dress the cables from nodes 1 and 2 toward the lower right corner of the chassis,
and dress the cables from nodes 3 and 4 toward the lower left corner of the chassis.
Wrap network cables and power cables into two separate bundles to avoid EMI (electromagnetic
interference) issues, but make sure that both bundles easily shift together away from components that
need to be removed during maintenance, such as compute nodes and power supplies.
LCD Interface
The LCD interface is located on the node front panel. The interface consists of the LCD screen, a round
button labeled ENTER for making selections, and four arrow buttons for navigating menus.
There are also four LEDs across the bottom of the interface that indicate which node you are
communicating with. You can change which node you are communicating with by using the arrow buttons.
The LCD screen is dark until you activate it. To activate the LCD screen and view the menu, press the ENTER selection button.
Press the right arrow button to move to the next level of a menu.
Attach menu
The Attach menu contains the following sub-menu:
Drive
Adds a drive to the node. After you select this command, you can select the drive bay that contains the
drive you would like to add.
Status menu
The Status menu contains the following sub-menus:
Alerts
Displays the number of critical, warning, and informational alerts that are active on the cluster.
Cluster
Details
Displays the cluster name, the version of OneFS installed on the cluster, the health status of the cluster,
and the number of nodes in the cluster.
Capacity
Displays the total capacity of the cluster and the percentage of used and available space on the cluster.
Throughput
Node
Details
Displays the node ID, the node serial number, the health status of the node, and the node uptime as
<days>, <hours>:<minutes>:<seconds>
Capacity
Displays the total capacity of the node and the percentage of used and available space on the node.
Network
Throughput
Disk/CPU
Displays the current access status of the node, either Read-Write or Read-Only. Also displays the
current CPU throttling status, either Unthrottled or Throttled.
Drives
Hardware
Displays the current hardware status of the node as <cluster name>-<node number>:<status>.
Statistics
Displays a list of hardware components. Select one of the hardware components to view statistics
related to that component.
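The node uptime shown in the Details screen above uses the <days>, <hours>:<minutes>:<seconds> format. The sketch below illustrates that rendering; the zero-padding of minutes and seconds is an assumption, not confirmed by this guide:

```python
def format_uptime(total_seconds: int) -> str:
    """Render an uptime as <days>, <hours>:<minutes>:<seconds>,
    the LCD display format described above (zero-padding assumed)."""
    days, rem = divmod(total_seconds, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{days}, {hours}:{minutes:02d}:{seconds:02d}"

# 90,061 seconds = 1 day, 1 hour, 1 minute, 1 second
print(format_uptime(90061))  # 1, 1:01:01
```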
Update menu
The Update menu allows you to update OneFS on the node. Press the selection button to confirm that
you would like to update the node. You can press the left navigation button to back out of this menu
without updating.
Service menu
The Service menu contains the following sub-menus:
Throttle
Unthrottle
Read-Only
Read-Write
UnitLED On
UnitLED Off
Shutdown menu
The Shutdown menu allows you to shut down or reboot the node. This menu also allows you to shut
down or reboot the entire cluster. Press the up or down navigation button to cycle through the four shut
down and reboot options, or to cancel out of the menu.
Press the selection button to confirm the command. You can press the left navigation button to back out
of this menu without shutting down or rebooting.
Update the install database
After all work is complete, update the install database.
Node installation for F900, F600, F200, B100, and P100 nodes
This chapter describes how to install F900, F600, F200, B100, and P100 nodes into an equipment
cabinet.
Drive types
This information applies to nodes that contain NVMe or SAS drives and use Instant Secure Erase (ISE) or self-encrypting drives (SEDs), non-FIPS or FIPS, as their security method.
CAUTION:
Only install the drives that were shipped with the node. Do not mix drives of different capacities in the node. If you remove drive
carriers from the chassis during installation, ensure that the carriers are labeled clearly. Replace the drive carriers in the same bay
from which they were removed. If drive carriers are mixed between nodes, even before configuration, the system is inoperable.
NOTE:
To avoid personal injury or damage to the hardware, always use multiple people to lift and move heavy equipment.
Figure 1. Sliding rail assembly - 2U systems
Install the rails
The rails are labeled left and right and cannot be interchanged. The front side of each rail is labeled Left
Front or Right Front when viewed from the cabinet front.
1. Determine where to mount the system and use masking tape or a felt-tip pen to mark the
location at the front and back of the cabinet.
NOTE:
Install the left rail assembly first.
2. Fully extend the rear sliding bracket of the rail.
3. Position the rail end piece that is labeled Left Front facing inward and orient the rear end piece to
align with the holes on the rear cabinet flanges.
4. Push the rail straight toward the rear of the rack until the latch locks in place.
5. Rotate the front-end piece latch outward. Pull the rail forward until the pins slide into the flange.
Release the latch to secure the rail in place.
Figure 2. Installing the front end of the rail
NOTE:
For square hole cabinets, install the supplied conical washer before installing the screw. For unthreaded round hole cabinets,
install only the screw without the conical washer.
1. Align the screws with the designated U spaces on the front and rear rack flanges.
Ensure that the screw holes on the tab of the system retention bracket are seated on the
designated U spaces.
2. Insert and tighten the two screws using the Phillips #2 screwdriver.
Figure 1. Installing screws
CAUTION:
The system is heavy and should be installed in a cabinet by two people. To avoid personal injury and/or damage to the equipment,
do not attempt to install the system in a cabinet without a mechanical lift and/or help from another person.
1. At the front of the cabinet, pull the inner slide rails out of the rack until they lock into place.
2. Locate the rear rail standoff on each side of the system. Position the system above the rails and
lower the rear rail standoffs into the rear J-slots on the slide assemblies.
3. Rotate the system downward until all the rail standoffs are seated in the J-slots.
4. Push the system inward until the lock levers click into place.
5. Pull the blue slide release lock tabs forward on both rails and slide the system into the cabinet.
The slam latches will engage to secure the system in the cabinet.
NOTE:
Ensure that the inner rail slides completely into the middle rail. The middle rail locks if the inner rail is not fully engaged.
Figure 3. Slide the system into the cabinet
NOTE:
Ensure that there is enough space for the cables to move when you slide the system out of the rack.
3. Thread the straps through the CMA bracket slots on each side of the system to hold the
cable bundles.
Node ports
The back-end ports are the private network connections to the nodes. Port 1 from all nodes connects to
one switch, and port 2 from all the nodes connects to a second switch. Both back-end switches are
provided.
The front-end ports are for the client network connections.
NOTE:
In the F900 and F600 nodes, the rNDC does not provide network connectivity. In the F200, the rNDC can provide 10 GbE or 25
GbE connections for front-end networking.
Figure 4. F900 back-end ports
configure terminal
For Leaf and Spine network configuration, see the PowerScale Leaf-Spine Installation Guide.
2. The following prompt appears: Reboot to change the personality? [yes/no]
Type yes.
Node configuration
Configure the node
Before using the node, you must either create a new cluster or add the node to an existing cluster.
Federal installations
Configure nodes to comply with United States federal regulations.
If you are installing the nodes that are included in this guide in a United States federal agency, configure the external network with IPv6 addresses. If the OneFS cluster is configured for IPv6, enabling link-local addresses is required to comply with federal requirements.
As part of the installation procedure, configure the external cluster for IPv6 addresses in the Isilon
configuration wizard after a node is powered on.
After you install the cluster, enable link-local addresses by following the instructions in the KB article How
to enable link-local addresses for IPv6.
CAUTION:
Once you select to run a node in SmartLock compliance mode, you cannot leave compliance mode without reformatting the node.
• vCenter
• VMware vSphere API for Storage Awareness (VASA)
• VMware vSphere API for Array Integration (VAAI) NAS Plug-In
1. Connect a null modem serial cable to the serial port of a computer, such as a laptop.
2. Connect the other end of the serial cable to the serial port on the back panel of the node.
3. Start a serial communication utility such as Minicom (UNIX) or PuTTY (Windows).
4. Configure the connection utility to use the following port settings:
Setting Value
Data bits 8
Parity None
Stop bits 1
Run the configuration wizard
The configuration wizard starts automatically when a new node is powered on. The wizard provides step-
by-step guidance for configuring a new cluster or adding a node to an existing cluster.
The following procedure assumes that there is an open serial connection to a new node.
NOTE:
You can type back at most prompts to return to the previous step in the wizard.
1. To create a new cluster, join a node to an existing cluster, or prepare a node to run in SmartLock compliance mode, choose one of the following options:
o To create a cluster, type 1.
o To join the node to an existing cluster, type 2.
o To exit the wizard and configure the node manually, type 3.
o To restart the node in SmartLock compliance mode, type 4.
CAUTION:
If you choose to restart the node in SmartLock compliance mode, the node restarts and returns to this step.
Selection 4 changes to enable you to disable SmartLock compliance mode. Selection 4 is the last opportunity
to back out of compliance mode without reformatting the node.
2. Follow the prompts to configure the node.
For new clusters, the following table lists the information necessary to configure the cluster. To
ensure that the installation process is not interrupted, it is recommended that you collect this
information before installation.
Setting Description
SmartLock compliance A valid SmartLock license for clusters in compliance mode only
license
Root password The password for the root user for clusters in compliance mode do not allow a
root user and request a compliance administrator (comp admin) password.
Cluster name The name used to identify the cluster. Cluster names must begin with a letter
and can contain only numbers, letters, and hyphens.
NOTE:
If the cluster name is longer than 11 characters, the following warning is displayed: WARNING: Limit
cluster name to 11 characters or less when the NetBIOS Name Service is
enabled to avoid name truncation. Isilon uses up to 4 characters for
individual node names.
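The naming rules above can be checked ahead of time. The following is a minimal sketch; the helper function is hypothetical (not part of OneFS) and simply mirrors the rules stated in this table:

```python
import re

# A cluster name must begin with a letter and contain only
# letters, numbers, and hyphens.
NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9-]*$")

def check_cluster_name(name: str) -> list:
    """Return a list of problems with a proposed cluster name."""
    problems = []
    if not NAME_RE.match(name):
        problems.append("must begin with a letter and contain only "
                        "letters, numbers, and hyphens")
    if len(name) > 11:
        # NetBIOS truncation warning: Isilon uses up to 4 characters
        # for individual node names.
        problems.append("longer than 11 characters; may be truncated "
                        "when the NetBIOS Name Service is enabled")
    return problems

print(check_cluster_name("isi-prod"))  # []
print(check_cluster_name("1cluster"))  # reports the letter-first rule
```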
int-a network settings
o Netmask
o IP range
The int-a network settings are for communication between nodes. The int-a network must be
configured with IPv4 and must be on a separate subnet from an int-b/failover network.
int-b/failover network settings
o Netmask
o IP range
The int-b/failover network settings are optional. The int-b network is for communication between
nodes and provides redundancy with the int-a network. The int-b network must be configured with
IPv4.
External network settings
o Netmask
o MTU
o IP range
The external network settings are for client access to the cluster. The 25 Gb and 100 Gb ports can
be configured from the wizard. The default external network can be configured with IPv4 or IPv6
addresses. The MTU choices are 1500 or 9000. To configure the external network with IPv6
addresses, enter an integer less than 128 for the netmask value; the standard external netmask
value for IPv6 addresses is 64. If you enter a netmask value in dot-decimal notation, use IPv4
addresses for the IP range.
In the configuration wizard, the available external interfaces are presented as options.
NOTE:
The 100gige is an option on F900 and F600 nodes.
Default gateway
The IP address of the optional gateway server through which the cluster communicates with clients
outside the subnet. Enter an IPv4 or IPv6 address, depending on how the external network is
configured.
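The netmask rule above (a bare integer below 128 selects IPv6; dot-decimal notation selects IPv4) can be illustrated with a short sketch. The helper function is hypothetical and only mirrors the wizard behavior described in this table:

```python
import ipaddress

def netmask_family(value: str) -> str:
    """Classify a netmask entry per the wizard rule described above."""
    if "." in value:
        # Dot-decimal notation (for example 255.255.255.0) implies IPv4,
        # so the IP range must use IPv4 addresses.
        ipaddress.IPv4Network("0.0.0.0/" + value)  # raises if invalid
        return "IPv4"
    if int(value) < 128:
        # A bare integer below 128 is an IPv6 prefix length;
        # 64 is the standard external value.
        return "IPv6"
    raise ValueError("IPv6 prefix length must be less than 128")

print(netmask_family("255.255.255.0"))  # IPv4
print(netmask_family("64"))             # IPv6
```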
Date and time settings
o Time zone
o Day and time
The day and time settings for the cluster.
Cluster join mode
The method that the cluster uses to add new nodes. Choose one of the
following options:
o Manual join: Enables configured nodes in the cluster, or new nodes, to request to join the
cluster.
o Secure join
NOTE:
If you are installing a node that contains SEDs (self-encrypting drives), the node formats the drives now. The formatting
process might take up to two hours to complete.
1. Connect to each node using the serial console and enter Preformat in the configuration wizard
main menu.
Once preformat is complete on each node, the configuration wizard is displayed again and the
preformat option is no longer available.
2. Connect to the first node using the serial console again and use the configuration wizard to create
a new cluster.
3. Connect to each subsequent node using the serial console again. Use the configuration wizard
to join an existing cluster.
Updating node firmware
To make sure that the most recent firmware is installed on a node, update the node firmware.
Follow the instructions in the most current Node Firmware Release Notes to update your node to the
most recent Node Firmware Package.
NOTE:
Although iDRAC is pre-installed in F900, F600, F200, B100, and P100 nodes, caution is recommended when using iDRAC. Some
iDRAC features and functionality are accessible with the iDRAC interface but are not supported. OneFS only supports the following
IPMI commands with the iDRAC interface:
NOTE:
iDRAC applies only to F900, F600, F200, B100, and P100 node types.
iDRAC does not require any additional software installation.
After connecting the network cables and powering on the node, iDRAC is available for use. For
iDRAC, the RJ45 (1 GbE) port connects to the external network switch.
o Username: root
o Password: calvin
NOTE:
F900, F600, F200, B100, and P100 nodes can be ordered with the default username and password (root, calvin) or with
a random password option. If the nodes were ordered with the random password option, the username and password
differ. The random password is located on the bottom of the luggage tag.
The following lists the status and conditions of the LCD display:
NOTE:
If the system is connected to a power source and an error is detected, the LCD is amber whether the system is turned on or off.
Item  Button or Display  Description
3 Right Moves the cursor forward in one-step increments during message scrolling:
1. Press and hold the right button to increase scrolling speed.
2. To stop scrolling, release the button.
NOTE:
The display stops scrolling when the button is released. After 45 s of inactivity, the display starts
scrolling.
4 LCD Display Displays system information, status, and error messages, or the iDRAC address.
Setup menu
NOTE:
When you select an option in the Setup menu, confirm the option before going to the next action.
Option Description
iDRAC: Select DHCP or Static IP to configure the network mode. If Static IP is selected, the available
fields are IP, Subnet (Sub), and Gateway (Gtw). Select Setup DNS to enable DNS and to view
domain addresses. Two separate DNS entries are available.
Set error: Select SEL to view LCD error messages in a format that matches the IPMI description in the
SEL. You can match an LCD message with an SEL entry. Select Simple to view LCD error
messages in a simplified description. For information about the generated event and error
messages in the system firmware and agents that monitor system components, see the Error
View menu
NOTE:
When you select an option in the View menu, confirm the option before going to the next action.
Option Description
iDRAC IP: Displays the IPv4 or IPv6 addresses for iDRAC9. Addresses include DNS (Primary and
Secondary), Gateway, IP, and Subnet (IPv6 does not have Subnet).
MAC: Displays the MAC addresses for iDRAC, iSCSI, or network devices.
Name: Displays the name of the Host, Model, or User String for the system.
Number: Displays the Asset tag or the Service tag for the system.
Power: Displays the power output of the system in BTU/hr or Watts. The display format can be
configured in the Set home submenu of the Setup menu.
Temperature: Displays the temperature of the system in Celsius or Fahrenheit. The display format can be
configured in the Set home submenu of the Setup menu.
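The two power display formats (and the two temperature formats) map to each other with standard unit conversions. As a small illustrative sketch, not part of any Dell tooling:

```python
def watts_to_btu_per_hr(watts: float) -> float:
    # 1 W is approximately 3.412 BTU/hr
    return watts * 3.412

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

print(round(watts_to_btu_per_hr(500), 1))  # 1706.0
print(celsius_to_fahrenheit(25))           # 77.0
```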
When the node starts and is unconfigured, the LCD display reads Unconfigured and launches a
wizard. The wizard joins the node to a cluster that is connected to the back-end network.
To join the node to a cluster when the LCD display reads Unconfigured:
NOTE:
Some clusters might not have enough IP addresses. If so, the attempt to join the node fails.
4. To join the displayed cluster, press Select.
5. To return to the scan menu in Step 3, select <Return>.
6. The LCD display reads Joining….
   a. If the node joins the cluster successfully, the LCD displays the hostname of the node.
   b. If the node fails to join the cluster, the LCD displays Failed to join…. Return to Step 4.
   c. To try another cluster, press Select.
Dell Community Board for self-help: https://www.dell.com/community