ECS - ECS Capacity Expansion Procedures-Add Disk(s)
Topic
ECS Capacity Expansion Procedures
Selections
Which capacity expansion activity will you be performing?: Add Disk(s)
Select Disk Activity: EX500 Storage Disk Expansion
REPORT PROBLEMS
If you find any errors in this procedure or have comments regarding this application, send email to
SolVeFeedback@dell.com
Copyright © 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell
EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be
trademarks of their respective owners.
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of
any kind with respect to the information in this publication, and specifically disclaims implied warranties of
merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable
software license.
This document may contain certain words that are not consistent with Dell's current language guidelines.
Dell plans to update the document over subsequent future releases to revise these words accordingly.
This document may contain language from third party content that is not under Dell's control and is not
consistent with Dell's current guidelines for Dell's own content. When such third party content is updated
by the relevant third parties, this document will be revised accordingly.
Contents
Preliminary Activity Tasks
Read, understand, and perform these tasks
Preliminary Activity Tasks
This section may contain tasks that you must complete before performing this procedure.
Table 1 List of cautions, warnings, notes, and/or KB solutions related to this activity
2. This is a link to the top trending service topics. These topics may or may not be related to this activity. This is merely a proactive attempt to make you aware of any KB articles that may be associated with this product.
Note: There may not be any top trending service topics for this product at any given time.
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells
you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
Revision history
Table 2. Revision history
June 2021 Rev 1.1 Updates for 16 TB support on ECS version 3.5 and 2 TB and 4 TB support on
ECS version 3.7.
• Pre-site requirements
• Validate disk expansion kits
• Locate the racks to be expanded
• Connect service laptop to the ECS racks to be expanded
• Execute xDoctor to check health of ECS racks to be expanded
• Hardware—Add expansion disks to all chassis in racks
• Software—Expand and Verify Storage Disks in racks
• Validate ECS health using xDoctor of the expanded racks
• Ensure that the customer validates ECS UI
If there are five or more nodes in the system (VDC) with the same disk size and total node disk capacity, the minimum node expansion is one node. This expansion is possible when the expansion node has the same disk size and total disk capacity as the existing nodes. For example, if you have an EX500 appliance with five nodes containing 24x8TB disks (192 TB), you can add a single node with 24x8TB disks, or a total disk capacity of 192 TB.
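The rule above can be expressed as a quick check. This is a sketch only; the disk counts and sizes below are assumptions taken from the example:

```shell
#!/bin/sh
# Sketch: single-node expansion is allowed only when the new node's disk size
# and total capacity match the existing nodes (values below are assumptions).
existing_disks=24; existing_disk_tb=8   # five existing nodes of 24x8TB
new_disks=24; new_disk_tb=8             # candidate expansion node

if [ "$new_disk_tb" -eq "$existing_disk_tb" ] &&
   [ $((new_disks * new_disk_tb)) -eq $((existing_disks * existing_disk_tb)) ]; then
    echo "single-node expansion allowed ($((new_disks * new_disk_tb)) TB matches)"
else
    echo "add a minimum of five nodes"
fi
```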
If you are adding nodes that have a different disk size or node capacity than the nodes currently in the system, you must add a minimum of five nodes at a time. For example, if you have an EX500 appliance with five nodes containing 24x8TB drives and you want to add nodes of a different drive size (12 TB or 16 TB), you must add five or more nodes at a time.
NOTE: All drives within a node must be of the same drive size, but there can be nodes of differing
drive sizes within a rack.
Table 3. EX500 disk upgrade kits. Lists the disk upgrade kit details.
Disk Capacity Expansion Rules
• Rack/Multi-rack VDC:
o All EX500 nodes in a rack with the same disk size must have the same number of disks
(12 or 24).
o EX500 nodes in a single / multi-rack VDC can have different disk sizes and qualities as
long as there is a minimum of 5 nodes with the same disk size and quality.
o Expand from the bottom chassis to the top chassis.
• Node/Server:
o EX500 does not support mixing of disk capacities within a server.
o All empty slots must contain fillers.
Node or Rack | Minimum or Maximum | Nodes | Disk Enclosures per Node | Disks in Enclosure (BP2 Back) | Disks in Enclosure (BP1 Front) | Gross storage capacity

Gross storage capacity:
12 TB = 720 TB
16 TB = 960 TB
20 TB = 1.20 PB
The following graphic represents the recommended racking. The graphic below depicts a 0U PDU.
Figure 1. EX500 minimum and maximum configurations
Update ECS VDC license based on disk capacity added
The VDC license may require updating through Dell EMC Software Licensing Central. The Project Manager should review the current VDC license to determine whether additional storage capacity must be added to the existing VDC license after the disk capacity expansion.
Presite Tasks
Learn about the tasks you must carry out before you arrive at the customer site.
The VDC license may require updating using Software Licensing Central. If additional storage capacity must be added to the existing VDC license, the Project Manager informs the customer that the existing license must be regenerated and reapplied to the VDC.
Steps
1. Determine the service login user and password (default) required for this engagement. See the ECS Service User Access document in ECS SolVe > ECS > How to Procedures for default information.
If the CLI user or password has been changed from the default, contact the customer for details.
2. Obtain from the Project Manager the Configuration Guide detailing the disk capacity expansion information required to successfully perform the capacity expansion engagement. Verify that the following details are included:
o Disk expansion option kits, quantities and capacity (2 TB, 4 TB, 8 TB, 12 TB, 16 TB, or
20 TB).
o Rack PSNT serial numbers to be expanded.
o Quantity of disks to be added per EX500 Chassis per rack.
o Virtual Data Center (VDC) disk capacity anticipated post disk capacity expansion.
3. Contact the Dell EMC Project Manager to validate that the material has arrived at the customer site and to confirm the implementation schedule.
Tools
Ensure that you have the following tools to complete the procedure:
• Service laptop
• 25' Ethernet cable
• ESD gloves or ESD wristband
Procedure
Learn about the steps required to update ECS VDC license, based on disk capacity added.
Steps
3. Connect the service laptop to the ECS appliance:
Access to private network IP addresses (192.168.219.1 to 16 and 192.168.219.101 to 116) is limited to the nodes connected to the rack back-end 1/10/25GbE fox management switch.
Private NAN network IP addresses (169.254.x.x) of all nodes in all racks in the ECS Virtual Data Center (VDC) are accessible from any node in the ECS VDC once you SSH in to a node using a private IP address (192.168.219.x).
Access to public network IP addresses for all ECS racks is available once you SSH to one of the ECS nodes, if security lockdown is not enabled.
1. Connect your service laptop to the VDC:
o If the cabinet contains a service shelf with a red network cable: Open the service shelf and connect the red network cable to the service laptop. The red cable connects to port 34 on the fox switch. The fox switch is the bottom back-end switch in a dual switch configuration.
o If the cabinet does not contain a service shelf with a red network cable, or if you want to connect the service laptop at the back of the rack: Locate port 36 on the fox switch. The fox switch is the bottom back-end switch in a dual switch configuration. Port 36 has a 1GB SFP that you can connect your service laptop to with a Cat6 cable.
2. Set the network interface on the laptop to the static address 192.168.219.99, subnet
mask 255.255.255.0, with no gateway required.
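On a Linux service laptop, step 2 can be performed from a shell. This is a sketch only; the interface name eth0 is an assumption, and the commands require root:

```shell
# Assign the static service address on the laptop's wired interface (assumed eth0).
ip addr add 192.168.219.99/24 dev eth0   # /24 = subnet mask 255.255.255.0, no gateway
ip link set eth0 up
```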
3. Use the ping command to verify that the temporary network between the laptop and
rack private management network is functioning.
If 192.168.219.1 does not answer, try 192.168.219.2. If neither responds, verify the
laptop IP or subnet mask, network connection, and switch port connection. If the service
laptop is connected to the Dell VPN, ping to 192.168.219.x does not return a response.
For example:
C:\>ping 192.168.219.1
Pinging 192.168.219.1 with 32 bytes of data:
Reply from 192.168.219.1: bytes=32 time<1ms TTL=64
Reply from 192.168.219.1: bytes=32 time<1ms TTL=64
Reply from 192.168.219.1: bytes=32 time<1ms TTL=64
Reply from 192.168.219.1: bytes=32 time<1ms TTL=64
4. Verify that xDoctor is installed and is at the latest version across the ECS systems:
1. If you are not connected to the VDC, establish a secure shell (SSH) session using
PuTTy.
2. Authenticate with service credentials. See the ECS Service User Access document in ECS SolVe > ECS > How to Procedures for default information.
3. Connect to node 1 of the rack (192.168.219.1). If node 1 is not available, connect to
node 2 (192.168.219.2).
4. Run the following command to check the xDoctor version: sudo xdoctor --sysversion
If the xDoctor version is not uniform on all nodes, xDoctor automatically updates any node that is not at the uniform version.
5. If the installed version of xDoctor listed in the above step is not the latest version as
documented in the ECS xDoctor Users Guide available in ECS SolVe, see the section in
the guide on upgrading or reinstalling xDoctor.
6. If all nodes have the latest version, then go to the next step in this procedure.
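The uniformity check in substep 4 can also be done mechanically. This sketch assumes the --sysversion output ends each line with the node's version string (the format is an assumption):

```shell
#!/bin/sh
# Report whether all nodes show the same xDoctor version.
# Assumes one node per input line with the version as the last field.
check_uniform() {
    versions=$(awk '{print $NF}' | sort -u)
    if [ "$(printf '%s\n' "$versions" | wc -l)" -eq 1 ]; then
        echo "uniform: $versions"
    else
        echo "NOT uniform:"
        printf '%s\n' "$versions"
    fi
}

# Example input standing in for `sudo xdoctor --sysversion` output:
printf 'node1 4.6-46\nnode2 4.6-46\n' | check_uniform
```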
5. Use xDoctor to check the ECS rack health.
1. Run the following command to log in to the current Rack Master: ssh master.rack
2. Launch xDoctor and perform a Full Diagnosis Suite using the system scope (default).
For example:
# sudo xdoctor
2018-11-12 19:42:44,421: xDoctor_4.6-46 - INFO: Initializing xDoctor v4.6-46
...
2018-11-12 19:42:45,058: xDoctor_4.6-46 - INFO: Removing orphaned session -
session_1542051003.670
2018-11-12 19:42:45,059: xDoctor_4.6-46 - INFO: Removing orphaned session -
session_1542051684
2018-11-12 19:42:45,060: xDoctor_4.6-46 - INFO: Starting xDoctor
session_1542051764.325 ... (SYSTEM)
2018-11-12 19:42:45,060: xDoctor_4.6-46 - INFO: Master Control Check ...
2018-11-12 19:42:45,135: xDoctor_4.6-46 - INFO: xDoctor Composition - Full
Diagnostic Suite for ECS
2018-11-12 19:42:45,136: xDoctor_4.6-46 - INFO: Session limited to 0:40:00
2018-11-12 19:42:45,423: xDoctor_4.6-46 - INFO: -------------------------------
------------
2018-11-12 19:42:45,424: xDoctor_4.6-46 - INFO: ECS Version: 3.2 SP2 Patch 1 -
3.2.2.1
2018-11-12 19:42:45,424: xDoctor_4.6-46 - INFO: -------------------------------
------------
2018-11-12 19:42:45,432: xDoctor_4.6-46 - INFO: xDoctor Pre Features
2018-11-12 19:42:45,433: xDoctor_4.6-46 - INFO: Cron Activation
2018-11-12 19:42:45,433: xDoctor_4.6-46 - INFO: xDoctor already active ...
2018-11-12 19:42:45,433: xDoctor_4.6-46 - INFO: --------------------
2018-11-12 19:42:45,533: xDoctor_4.6-46 - INFO: Validating System Version ...
2018-11-12 19:42:46,159: xDoctor_4.6-46 - INFO: |- xDoctor version is sealed to
4.6-46
2018-11-12 19:42:46,159: xDoctor_4.6-46 - INFO: |- System version is sealed to
3.2.2.0-1960.7545e90.40
2018-11-12 19:42:46,159: xDoctor_4.6-46 - INFO: Distributing xDoctor session
files ...
2018-11-12 19:42:46,399: xDoctor_4.6-46 - INFO: Collecting data on designated
nodes, please stand by ... (update every 5 to 30 seconds)
2018-11-12 19:42:46,400: xDoctor_4.6-46 - INFO: Collection Limit: 0:32:00,
Pacemaker Limit: 900 sec
2018-11-12 19:42:51,406: xDoctor_4.6-46 - INFO: Collecting data ... at 0:00:05
2018-11-12 19:43:01,407: xDoctor_4.6-46 - INFO: Collecting data ... at 0:00:15
2018-11-12 19:43:16,422: xDoctor_4.6-46 - INFO: Collecting data ... at 0:00:30
3. Run the following command to determine the report archive for the xDoctor session run in the previous step: sudo xdoctor -r | grep -a1 Latest
For example:
Latest Report:
xdoctor -r -a 2015-10-27_183001
4. Use the output from the command in step 5.c above to view the latest xDoctor report. Add the -WEC option to display only Warning, Error, and Critical events. For example:
sudo xdoctor -r -a <archive date_time> -WEC
The following example shows a report that contains an ERROR event:
Displaying xDoctor Report (2015-10-27_183001) Filter:['CRITICAL', 'ERROR',
'WARNING'] ...
Timestamp = 2015-10-27_210554
Category = health
Source = fcli
Severity = ERROR
Message = Object Main Service not Healthy
Extra = 10.241.172.46
RAP = RAP014
Solution = 204179
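The -WEC option restricts the report to the three highest severities. A rough equivalent of that filter, using the "Severity = LEVEL" line format from the example above (the format assumption is mine), is:

```shell
#!/bin/sh
# Keep only Warning, Error, and Critical severity lines, as -WEC does.
filter_wec() {
    grep -E 'Severity = (WARNING|ERROR|CRITICAL)'
}

# Example: an INFO line is dropped, an ERROR line is kept.
printf 'Severity = INFO\nSeverity = ERROR\n' | filter_wec
```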
5. Resolve any Warning, Error, or Critical events that the report returns before continuing to the next step, unless they are associated with the acknowledged failure. Contact ECS Remote Support for assistance as required.
6. Expand the EX500 nodes in order of rack and node.
For each rack to be expanded, start at the bottom-most EX500 node to be expanded and proceed one node at a time, finishing with the top-most node to be expanded in the rack.
• Ensure ESD precautions (wristband or gloves) are in place before installing disks.
• Identify the disks to be installed and their locations:
Steps
4. If there are disk fillers in the slots where the new drives are going to be added, remove them.
Figure 2. EX500 front view, hard drive bay
5. Insert disks one at a time into hard drive slots 0 through 11 in disk enclosure bay 1 until each carrier connects with the backplane.
NOTE: Do not force the hard drive into the slot and backplane connections. The backplane can be
permanently damaged. Slowly and carefully insert the drive into the slot until the cam lever
engages. Ensure that the cam lever on the carrier engages properly.
6. With the disk carrier latch fully open, align the module with the guides and gently insert the disk
into the slot until the carrier handle begins to close.
7. Close the hard drive carrier handle to lock the hard drive in place.
8. Continue until all the expansion kits (drives) for the nodes have been added.
9. Verify that the disk fault LED (amber) is off on all the added disks.
a. If any added disk's fault LED (amber) is lit, reseat the disk.
b. Document any fault that is not resolved and any fault LED failures in the table in Post Deployment Actions Required for Defective Disks.
10. Repeat the disk installation steps for any additional nodes in the rack.
11. Verify that the expansion storage disks are physically detected in the chassis and rack:
a. Run the following command:
viprexec -i 'cs_hal list disks | grep -i total'
where the total is the number of disks expected based on the initial disks plus the disks added per chassis. For example, if 12 disks are added to an existing 12 disks, the expected total is 24.
b. If the totals for all chassis are as expected, go to Expand storage disk for object software.
c. Document failures (IP address and count of missing disks) in the table in "Postexpansion actions that are required if disks are defective," then continue the procedure to expand disks.
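The per-chassis comparison in step 11 can be sketched as a script. The expected total and the "IP total" input format are assumptions for illustration:

```shell
#!/bin/sh
# Compare each chassis's reported disk total against the expected count.
expected=24   # initial disks + disks added per chassis (assumption)

check_totals() {
    status=0
    while read -r node total; do
        if [ "$total" -eq "$expected" ]; then
            echo "$node: OK ($total disks)"
        else
            echo "$node: MISSING $((expected - total)) disk(s)"
            status=1
        fi
    done
    return $status
}

# Example input standing in for the parsed viprexec output:
printf '192.168.219.1 24\n192.168.219.2 23\n' | check_totals || true
```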
Expand storage disk for object software
Steps
The initial display may show most of the added disks in the Unallocated state. For example:
Every 2.0s: sudo -i fcli disks list Wed Nov 14 20:43:41 2018
The following example shows disks transitioning from unallocated state to object-main:
Every 2.0s: sudo -i fcli disks list Wed Nov 14 20:50:41 2018
bb72eba0-7db3-4652-885a-43a22fb3caa6 layton-green object-main 12000GB HDD 52 0 0 0
bb72eba0-7db3-4652-885a-43a22fb3caa6 layton-green Unallocated 12000GB HDD 8 0 0 0
The following example shows all disks transitioning from unallocated state to object-main:
Every 2.0s: sudo -i fcli disks list Wed Nov 14 21:05:41 2018
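The transition can be watched mechanically by summarizing disk states. This sketch assumes the column layout from the example above (state in column 3, disk count in column 6):

```shell
#!/bin/sh
# Summarize `fcli disks list` style output: total disks per state, sorted.
summarize_states() {
    awk '{count[$3] += $6} END {for (s in count) print s, count[s]}' | sort
}

# Example rows standing in for `sudo -i fcli disks list` output:
printf '%s\n' \
  'bb72eba0-7db3-4652-885a-43a22fb3caa6 layton-green object-main 12000GB HDD 52 0 0 0' \
  'bb72eba0-7db3-4652-885a-43a22fb3caa6 layton-green Unallocated 12000GB HDD 8 0 0 0' |
summarize_states
```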
6. Document failure(s) (IP address and count of missing disks) in the table in Post Deployment
Actions Required for Defective Disks.
Check ECS rack health with xDoctor
Steps
1. Launch xDoctor and perform a Full Diagnosis Suite using the system scope (default). Run:
sudo xdoctor
For example:
# sudo xdoctor
2. Determine the report archive for the xDoctor session executed in step 1. Run:
sudo xdoctor -r | grep -a1 Latest
For example:
# sudo xdoctor -r | grep -a1 Latest
Latest Report:
xdoctor -r -a 2015-10-27_183001
3. Use the output from step 2 to view the latest xDoctor report. Add the -WEC option to display only Warning, Error, and Critical events.
sudo xdoctor -r -a <archive date_time> -WEC
The following example shows a report that contains an ERROR event:
Timestamp = 2015-10-27_210554
Category = health
Source = fcli
Severity = ERROR
Message = Object Main Service not Healthy
Extra = 10.241.172.46
RAP = RAP014
Solution = 204179
4. If the report returns any Warning, Error, or Critical events, resolve those events, unless they are associated with the acknowledged failure, before you continue with step 5.
5. End the SSH session. Run:
# logout
6. Disconnect the service laptop from the fox switch.
7. Perform one of the following:
o If this is a single-rack system, go to Validate ECS UI. You have completed storage disk expansion.
o If this is a multi-rack system, for all racks in the VDC:
a. Repeat Update ECS VDC license based on disk capacity added starting at step 2.
b. Repeat Physically install expansion disks.
c. Repeat Expand storage disk for object software.
d. Repeat Check ECS rack health with xDoctor.
Results
Next steps
After you complete storage disk expansion for all the racks in the VDC for which you plan to carry out
storage disk expansion, go to Validate ECS UI.
Validate ECS UI
When all disk expansion additions are successfully completed, the customer must validate that the
correct number of disks and capacity appears in the ECS UI portal.
Steps
1. Open a browser to the ECS UI portal on one of the nodes in the VDC that had disks expanded and navigate to the Dashboard.
2. In the Node & Disks and Capacity Utilization panels, verify that the Disks quantity and Total Capacity display the expected values.
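The expected Total Capacity can be computed before comparing against the dashboard. The node count, disks per node, and disk size below are assumptions for illustration:

```shell
#!/bin/sh
# Expected gross capacity = nodes x disks per node x disk size (TB).
nodes=5; disks_per_node=24; disk_tb=8
echo "Expected gross capacity: $((nodes * disks_per_node * disk_tb)) TB"
```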
If an installed disk is defective and detected as FAILED upon deployment, see the ECS SSD Replacement Guide.
If ECS does not detect a disk, that is, if a disk is not listed under ECS UI Manage > Maintenance, contact ECS Remote Support for follow-up.