
P3AM-7642-25ENZ0

FUJITSU Storage
ETERNUS DX100 S4/DX200 S4,
ETERNUS DX100 S3/DX200 S3
Hybrid Storage Systems

Design Guide (Basic)

System configuration design


Table of Contents

1. Function Overview 14

2. Basic Functions 16

RAID Functions.............................................................................................................................. 16
Supported RAID .....................................................................................................................................................16
User Capacity (Logical Capacity)............................................................................................................................22
RAID Group............................................................................................................................................................24
Volume..................................................................................................................................................................26
Hot Spares.............................................................................................................................................................29

Data Protection............................................................................................................................. 31
Data Block Guard ..................................................................................................................................................31
Disk Drive Patrol....................................................................................................................................................33
Redundant Copy....................................................................................................................................................34
Rebuild..................................................................................................................................................................35
Fast Recovery ........................................................................................................................................................36
Copyback/Copybackless .........................................................................................................................................37
Protection (Shield) ................................................................................................................................................39
Reverse Cabling.....................................................................................................................................................41

Operations Optimization (Virtualization/Automated Storage Tiering)........................................... 42


Thin Provisioning ..................................................................................................................................................42
Flexible Tier ..........................................................................................................................................................48
Extreme Cache Pool ..............................................................................................................................................54

Optimization of Volume Configurations ........................................................................................ 55


RAID Migration......................................................................................................................................................57
Logical Device Expansion ......................................................................................................................................59
LUN Concatenation ...............................................................................................................................................60
Wide Striping ........................................................................................................................................................63

Data Encryption ............................................................................................................................ 64


Encryption with Self Encrypting Drive (SED) ..........................................................................................................65
Firmware Data Encryption.....................................................................................................................................66
Key Management Server Linkage..........................................................................................................................67

User Access Management ............................................................................................................. 70


Account Management ...........................................................................................................................................70
User Authentication ..............................................................................................................................................72

2
FUJITSU Storage ETERNUS DX100 S4/DX200 S4, ETERNUS DX100 S3/DX200 S3 Hybrid Storage Systems Design Guide (Basic)
Copyright 2019 FUJITSU LIMITED
P3AM-7642-25ENZ0
Audit Log ..............................................................................................................................................................74

Environmental Burden Reduction ................................................................................................. 75


Eco-mode..............................................................................................................................................................75
Power Consumption Visualization .........................................................................................................................78

Operation Management/Device Monitoring.................................................................................. 79


Operation Management Interface .........................................................................................................................79
Performance Information Management................................................................................................................80
Event Notification .................................................................................................................................................82
Device Time Synchronization.................................................................................................................................85

Power Control ............................................................................................................................... 86


Power Synchronized Unit.......................................................................................................................................86
Remote Power Operation (Wake On LAN) .............................................................................................................87

Backup (Advanced Copy) .............................................................................................................. 88


Backup (SAN) ........................................................................................................................................................89

Performance Tuning.................................................................................................................... 102


Striping Size Expansion .......................................................................................................................................102
Assigned CMs ......................................................................................................................................................103

Smart Setup Wizard..................................................................................................................... 104

3. SAN Functions 110

Operations Optimization (Deduplication/Compression) .............................................................. 110


Deduplication/Compression ................................................................................................................................110

Improving Host Connectivity ....................................................................................................... 118


Host Affinity ........................................................................................................................................................118
iSCSI Security .......................................................................................................................................................120

Stable Operation via Load Control............................................................................................... 120


Quality of Service (QoS).......................................................................................................................................120
Host Response ....................................................................................................................................................122
Storage Cluster ....................................................................................................................................................123

Data Migration............................................................................................................................ 126


Storage Migration ...............................................................................................................................................126

Non-disruptive Storage Migration............................................................................................... 128


Server Linkage Functions ............................................................................................................ 130
Oracle VM Linkage ..............................................................................................................................................130
VMware Linkage..................................................................................................................................................131


Veeam Storage Integration .................................................................................................................................136


Microsoft Linkage................................................................................................................................................139
OpenStack Linkage .............................................................................................................................................140
Logical Volume Manager (LVM) ..........................................................................................................................141

4. Connection Configuration 142

SAN Connection .......................................................................................................................... 142


Host Interface .....................................................................................................................................................142
Access Method ....................................................................................................................................................145

Remote Connections ................................................................................................................... 148


Remote Interfaces ...............................................................................................................................................149
Connectable Models............................................................................................................................................151

LAN Connection .......................................................................................................................... 152


LAN for Operation Management (MNT Port) .......................................................................................................152
LAN for Remote Support (RMT Port) ....................................................................................................................154
LAN Control (Master CM/Slave CM)......................................................................................................................157
Network Communication Protocols .....................................................................................................................159

Power Supply Connection............................................................................................................ 161


Input Power Supply Lines ....................................................................................................................................161
UPS Connection ...................................................................................................................................................161

Power Synchronized Connections................................................................................................ 162


Power Synchronized Connections (PWC) .............................................................................................................162
Power Synchronized Connections (Wake On LAN) ...............................................................................................165

5. Hardware Configurations 166

Configuration Schematics ........................................................................................................... 167


Optional Product Installation Conditions..................................................................................... 174
Controller Module ...............................................................................................................................................174
Memory Extension ..............................................................................................................................................175
Host Interfaces ....................................................................................................................................................176
Unified License....................................................................................................................................................177
Drive Enclosures..................................................................................................................................................178
I/O Module ..........................................................................................................................................................178
Drives..................................................................................................................................................................179

Standard Installation Rules ......................................................................................................... 182


Controller Module ...............................................................................................................................................182


Host Interface .....................................................................................................................................................183


Drive Enclosure ...................................................................................................................................................185
I/O Module ..........................................................................................................................................................185
Drive ...................................................................................................................................................................186

Recommended RAID Group Configurations ................................................................................. 191

6. Maintenance/Expansion 194

Hot Swap/Hot Expansion ............................................................................................................ 194


User Expansion ........................................................................................................................... 197
SSD Sanitization.......................................................................................................................... 197

A. Function Specification List 198

List of Supported Protocols.......................................................................................................... 198


Target Pool for Each Function/Volume List .................................................................................. 199
Target RAID Groups/Pools of Each Function .........................................................................................................199
Target Volumes of Each Function ........................................................................................................................200

Combinations of Functions That Are Available for Simultaneous Executions............................... 202


Combinations of Functions That Are Available for Simultaneous Executions.......................................................202
Number of Processes That Can Be Executed Simultaneously ...............................................................................204
Capacity That Can Be Processed Simultaneously .................................................................................................204

List of Figures

Figure 1 RAID0 Concept..........................................................................................................................................17


Figure 2 RAID1 Concept..........................................................................................................................................17
Figure 3 RAID1+0 Concept......................................................................................................................................18
Figure 4 RAID5 Concept..........................................................................................................................................18
Figure 5 RAID5+0 Concept......................................................................................................................................19
Figure 6 RAID6 Concept..........................................................................................................................................20
Figure 7 RAID6-FR Concept.....................................................................................................................................21
Figure 8 Example of a RAID Group .........................................................................................................................25
Figure 9 Volume Concept .......................................................................................................................................26
Figure 10 Hot Spares................................................................................................................................................29
Figure 11 Data Block Guard......................................................................................................................................31
Figure 12 Disk Drive Patrol.......................................................................................................................................33
Figure 13 Redundant Copy Function ........................................................................................................................34
Figure 14 Rebuild.....................................................................................................................................................35
Figure 15 Fast Recovery ...........................................................................................................................................36
Figure 16 Copyback..................................................................................................................................................37
Figure 17 Copybackless ............................................................................................................................................38
Figure 18 Protection (Shield) ...................................................................................................................................39
Figure 19 Reverse Cabling........................................................................................................................................41
Figure 20 Storage Capacity Virtualization.................................................................................................................43
Figure 21 TPV Balancing (When Allocating Disproportionate TPV Physical Capacity Evenly) ....................................45
Figure 22 TPV Balancing (When Distributing Host Accesses Evenly after TPP Expansion) ........................................46
Figure 23 TPV/FTV Capacity Optimization .................................................................................................................47
Figure 24 Flexible Tier..............................................................................................................................................49
Figure 25 FTV Configuration .....................................................................................................................................50
Figure 26 FTRP Balancing.........................................................................................................................................53
Figure 27 Extreme Cache Pool..................................................................................................................................54
Figure 28 RAID Migration (When Data Is Migrated to a High Capacity Drive)...........................................................57
Figure 29 RAID Migration (When a Volume Is Moved to a Different RAID Level) ......................................................57
Figure 30 RAID Migration.........................................................................................................................................58
Figure 31 Logical Device Expansion (When Expanding the RAID Group Capacity)....................................................59
Figure 32 Logical Device Expansion (When Changing the RAID Level).....................................................................59
Figure 33 LUN Concatenation ..................................................................................................................................60
Figure 34 LUN Concatenation (When the Concatenation Source Is a New Volume)..................................................61
Figure 35 LUN Concatenation (When the Existing Volume Capacity Is Expanded) ...................................................61
Figure 36 Wide Striping............................................................................................................................................63
Figure 37 Data Encryption with Self Encrypting Drives (SED) ...................................................................................65
Figure 38 Firmware Data Encryption ........................................................................................................................66
Figure 39 Key Management Server Linkage .............................................................................................................68
Figure 40 Account Management ..............................................................................................................................70
Figure 41 Audit Log..................................................................................................................................................74
Figure 42 Eco-mode .................................................................................................................................................75
Figure 43 Power Consumption Visualization ............................................................................................................78
Figure 44 Event Notification ....................................................................................................................................82
Figure 45 Device Time Synchronization....................................................................................................................85
Figure 46 Power Synchronized Unit..........................................................................................................................86
Figure 47 Wake On LAN ...........................................................................................................................................87


Figure 48 Example of Advanced Copy ......................................................................................................................88


Figure 49 REC...........................................................................................................................................................91
Figure 50 Restore OPC..............................................................................................................................................94
Figure 51 EC or REC Reverse .....................................................................................................................................94
Figure 52 Targets for the Multi-Copy Function .........................................................................................................95
Figure 53 Multi-Copy................................................................................................................................................95
Figure 54 Multi-Copy (Including SnapOPC+) ............................................................................................................96
Figure 55 Multi-Copy (Using the Consistency Mode) ................................................................................................96
Figure 56 Multi-Copy (Case 1: When Performing a Cascade Copy for an REC Session in Consistency Mode) .............97
Figure 57 Multi-Copy (Case 2: When Performing a Cascade Copy for an REC Session in Consistency Mode) .............97
Figure 58 Cascade Copy............................................................................................................................................98
Figure 59 Cascade Copy (Using Three Copy Sessions).............................................................................................101
Figure 60 Cascade Copy (Using Four Copy Sessions)...............................................................................................101
Figure 61 Assigned CMs .........................................................................................................................................103
Figure 62 RAID Configuration Example (When 12 SSDs Are Installed) ...................................................................107
Figure 63 RAID Configuration Example (When 15 SAS Disks Are Installed) ............................................................109
Figure 64 Deduplication/Compression Overview ....................................................................................................111
Figure 65 Deduplication Overview .........................................................................................................................111
Figure 66 Compression Overview ...........................................................................................................................112
Figure 67 Details of the Deduplication/Compression Function ...............................................................................116
Figure 68 Host Affinity ...........................................................................................................................................118
Figure 69 Associating Host Groups, CA Port Groups, and LUN Groups.....................................................................119
Figure 70 QoS.........................................................................................................................................................120
Figure 71 Copy Path Bandwidth Limit ....................................................................................................................121
Figure 72 Host Response........................................................................................................................................122
Figure 73 Storage Cluster .......................................................................................................................................123
Figure 74 Mapping TFOVs, TFO Groups, and CA Port Pairs ......................................................................................124
Figure 75 Storage Migration ..................................................................................................................................126
Figure 76 Non-disruptive Storage Migration ..........................................................................................................128
Figure 77 Oracle VM Linkage .................................................................................................................................130
Figure 78 VMware Linkage.....................................................................................................................................131
Figure 79 VVOL (Operational Configuration) ..........................................................................................................133
Figure 80 VVOL (System Configuration) .................................................................................................................134
Figure 81 Veeam Storage Integration ....................................................................................................................136
Figure 82 Microsoft Linkage...................................................................................................................................139
Figure 83 Logical Volume Manager (LVM) .............................................................................................................141
Figure 84 Single Path Connection (When a SAN Connection Is Used — Direct Connection) .....................................145
Figure 85 Single Path Connection (When a SAN Connection Is Used — Switch Connection) ....................................145
Figure 86 Multipath Connection (When a SAN Connection Is Used — Basic Connection Configuration)...................146
Figure 87 Multipath Connection (When a SAN Connection Is Used — Switch Connection).......................................146
Figure 88 Multipath Connection (When a SAN Connection Is Used — for Enhanced Performance)..........................147
Figure 89 Example of Non-Supported Connection Configuration (When Multiple Types of Remote Interfaces Are Installed in the Same ETERNUS DX/AF)..........148
Figure 90 Example of Supported Connection Configuration (When Multiple Types of Remote Interfaces Are Installed in the Same ETERNUS DX/AF)..........148
Figure 91 An FC Connection for a Remote Copy between ETERNUS DX/AF Storage Systems (When Redundant Paths Are Used)..........149
Figure 92 An FC Connection for a Remote Copy between ETERNUS DX/AF Storage Systems (When Lines Are Used)..........149
Figure 93 An iSCSI Connection for a Remote Copy between ETERNUS DX/AF Storage Systems (When Lines Are Used)..........150

7
FUJITSU Storage ETERNUS DX100 S4/DX200 S4, ETERNUS DX100 S3/DX200 S3 Hybrid Storage Systems Design Guide (Basic)
Copyright 2019 FUJITSU LIMITED
P3AM-7642-25ENZ0
Figure 94 Connection Example without a Dedicated Remote Support Port..........153
Figure 95 Connection Example When the IP Address of the Slave CM Is Set (and a Dedicated Remote Support Port Is Not Used)..........153
Figure 96 Overview of the AIS Connect Function ....................................................................................................154
Figure 97 Security Features....................................................................................................................................155
Figure 98 Connection Example with a Dedicated Remote Support Port..................................................................156
Figure 99 Connection Example When the IP Address of the Slave CM Is Set (and a Dedicated Remote Support Port Is Used)..........157
Figure 100 LAN Control (Switching of the Master CM)..............................................................................................158
Figure 101 LAN Control (When the IP Address of the Slave CM Is Set)......................................................................158
Figure 102 Power Supply Control Using a Power Synchronized Unit (When Connecting One or Two Servers) ...........162
Figure 103 Power Supply Control Using a Power Synchronized Unit (When Connecting Three or More Servers).......164
Figure 104 Power Supply Control Using Wake On LAN .............................................................................................165
Figure 105 Minimum Configuration Diagram: ETERNUS DX100 S4/DX200 S4...........................................................167
Figure 106 Minimum Configuration Diagram: ETERNUS DX100 S3/DX200 S3...........................................................168
Figure 107 Maximum Configuration Diagram: ETERNUS DX100 S4/DX200 S4 ..........................................................169
Figure 108 Maximum Configuration Diagram: ETERNUS DX100 S3/DX200 S3 ..........................................................171
Figure 109 Enclosure Connection Path (When Only One Controller Is Installed).......................................................173
Figure 110 Enclosure Connection Path (When Two Controllers Are Installed) ..........................................................173
Figure 111 Controller Installation Order ...................................................................................................................182
Figure 112 Installation Diagram for Host Interfaces (When Only One Controller Is Installed) ..................................183
Figure 113 Host Interface Installation Diagram 1 (When Two Controllers Are Installed in the ETERNUS DX100 S4/DX100 S3)..........183
Figure 114 Host Interface Installation Diagram 2 (When Two Controllers Are Installed in the ETERNUS DX100 S4/DX100 S3)..........184
Figure 115 Host Interface Installation Diagram (When Two Controllers Are Installed in the ETERNUS DX200 S4/DX200 S3)..........184
Figure 116 I/O Module Installation Order .................................................................................................................185
Figure 117 Drive Installation Diagram for High-Density Drive Enclosures ................................................................187
Figure 118 Installation Diagram for 2.5" Drives .......................................................................................................189
Figure 119 Installation Diagram for 3.5" Drives .......................................................................................................190
Figure 120 Drive Combination 1 ..............................................................................................................................191
Figure 121 Drive Combination 2 ..............................................................................................................................191
Figure 122 Drive Combination 3 ..............................................................................................................................192
Figure 123 Drive Combination 4 ..............................................................................................................................192
Figure 124 Drive Combination 5 ..............................................................................................................................193

List of Tables

Table 1 Basic Functions ........................................................................................................................................14


Table 2 SAN Functions ..........................................................................................................................................15
Table 3 RAID Level Comparison ............................................................................................................................21
Table 4 Formula for Calculating User Capacity for Each RAID Level .......................................................................22
Table 5 User Capacity per Drive.............................................................................................................................23
Table 6 RAID Group Types and Usage....................................................................................................................24
Table 7 Recommended Number of Drives per RAID Group ....................................................................................25
Table 8 Number of Volumes That Can Be Created .................................................................................................26
Table 9 Volumes That Can Be Created...................................................................................................................27
Table 10 Hot Spare Installation Conditions .............................................................................................................29
Table 11 Hot Spare Selection Criteria .....................................................................................................................30
Table 12 TPP Maximum Number and Capacity........................................................................................................43
Table 13 Chunk Size According to the Configured TPP Capacity...............................................................................43
Table 14 Levels and Configurations for a RAID Group That Can Be Registered in a TPP...........................................44
Table 15 TPP Thresholds .........................................................................................................................................44
Table 16 TPV Thresholds .........................................................................................................................................45
Table 17 Chunk Size and Data Transfer Unit ..........................................................................................................49
Table 18 The Maximum Number and the Maximum Capacity of FTSPs ...................................................................50
Table 19 Levels and Configurations for a RAID Group That Can Be Registered in a FTSP .........................................51
Table 20 FTRP Thresholds .......................................................................................................................................52
Table 21 FTV Thresholds .........................................................................................................................................52
Table 22 Optimization of Volume Configurations....................................................................................................55
Table 23 Functional Comparison between the SED Authentication Key (Common Key) and Key Management Server Linkage..........67
Table 24 Available Functions for Default Roles .......................................................................................................71
Table 25 Client Public Key (SSH Authentication).....................................................................................................72
Table 26 Eco-mode Specifications...........................................................................................................................76
Table 27 ETERNUS Web GUI Operating Environment ..............................................................................................79
Table 28 Levels and Contents of Events That Are Notified ......................................................................................82
Table 29 SNMP Specifications .................................................................................................................................83
Table 30 Control Software (Advanced Copy) ...........................................................................................................88
Table 31 List of Functions (Copy Methods) .............................................................................................................89
Table 32 Characteristics of SnapOPC/SnapOPC+ Operations with Each Type of Copy Destination Logical Volume..........90
Table 33 REC Data Transfer Mode ...........................................................................................................................91
Table 34 Available Cascade Copy Combinations (When a Cascade Copy Performs Session 1 Followed by Session 2)..........98
Table 35 Available Cascade Copy Combinations (When a Cascade Copy Performs Session 2 Followed by Session 1)..........99
Table 36 Available Stripe Depth............................................................................................................................102
Table 37 Guideline for the Number of Drives and User Capacities (When 1.92TB SSDs Are Installed) ...................104
Table 38 Guideline for the Number of Drives and User Capacities (When 1.2TB SAS Disks Are Installed)..............107
Table 39 Deduplication/Compression Function Specifications...............................................................................112
Table 40 Method for Enabling the Deduplication/Compression Function..............................................................113
Table 41 Volumes That Are to Be Created depending on the Selection of "Deduplication" and "Compression"......114
Table 42 Deduplication/Compression Setting for TPPs Where the Target Volumes Can Be Created .......................115
Table 43 Target Deduplication/Compression Volumes of Each Function ...............................................................117
Table 44 Storage Cluster Function Specifications ..................................................................................................124


Table 45 Specifications for Paths and Volumes between the Local Storage System and the External Storage System..........128
Table 46 Maximum VVOL Capacity........................................................................................................................135
Table 47 VVOL Management Information Specifications ......................................................................................135
Table 48 Volume Types That Can Be Used with Veeam Storage Integration..........................................................138
Table 49 Ethernet Frame Capacity (Jumbo Frame Settings)..................................................................................143
Table 50 Connectable Models and Available Remote Interfaces ...........................................................................151
Table 51 LAN Port Availability...............................................................................................................................159
Table 52 Number of Installable Drive Enclosures..................................................................................................178
Table 53 Drive Characteristics ...............................................................................................................................181
Table 54 Number of Installable Drives..................................................................................................................181
Table 55 Hot Swap and Hot Expansion Availability for Components (ETERNUS DX100 S4/DX200 S4) ...................194
Table 56 Hot Swap and Hot Expansion Availability for Components (ETERNUS DX100 S3/DX200 S3) ...................196
Table 57 List of Supported Protocols.....................................................................................................................198
Table 58 Combinations of Functions That Can Be Executed Simultaneously (1/2) ................................................202
Table 59 Combinations of Functions That Can Be Executed Simultaneously (2/2) ................................................202

Preface

Fujitsu would like to thank you for purchasing the FUJITSU Storage ETERNUS DX100 S4/DX200 S4, ETERNUS
DX100 S3/DX200 S3 (hereinafter collectively referred to as ETERNUS DX).
The ETERNUS DX is designed to be connected to Fujitsu servers (Fujitsu SPARC Servers, PRIMEQUEST, PRIMERGY,
and other servers) or non-Fujitsu servers.
This manual provides the system design information for the ETERNUS DX storage systems.
This manual is intended for use of the ETERNUS DX in regions other than Japan.
This manual applies to the latest controller firmware version.

Twenty-Fifth Edition
April 2019


Trademarks

Third-party trademark information related to this product is available at:


http://www.fujitsu.com/global/products/computing/storage/eternus/trademarks.html

About This Manual

Intended Audience
This manual is intended for field engineers or system administrators who design ETERNUS DX systems or use the
ETERNUS DX.

Related Information and Documents


The latest version of this manual and the latest information for your model are available at:
http://www.fujitsu.com/global/support/products/computing/storage/disk/manuals/
Refer to the following manuals of your model as necessary:
"Overview"
"Site Planning Guide"
"Product List"
"Configuration Guide (Basic)"
"ETERNUS Web GUI User's Guide"
"ETERNUS CLI User's Guide"
"Configuration Guide -Server Connection-"

Document Conventions

■ Third-Party Product Names


• Oracle Solaris may be referred to as "Solaris", "Solaris Operating System", or "Solaris OS".
• Microsoft® Windows Server® may be referred to as "Windows Server".

■ Notice Symbols
The following notice symbols are used in this manual:

Indicates information that you need to observe when using the ETERNUS storage system.
Make sure to read the information.
Indicates information and suggestions that supplement the descriptions included in this
manual.


Warning Signs
Warning signs are shown throughout this manual in order to prevent injury to the user and/or material damage.
These signs are composed of a symbol and a message describing the recommended level of caution. The following explains the symbol, its level of caution, and its meaning as used in this manual.

This symbol indicates the possibility of serious or fatal injury if the ETERNUS DX is not used
properly.

This symbol indicates the possibility of minor or moderate personal injury, as well as damage to the ETERNUS DX and/or to other users and their property, if the ETERNUS DX is not used properly.

This symbol indicates IMPORTANT information for the user to note when using the ETERNUS
DX.

The following symbols are used to indicate the type of warnings or cautions being described.

The triangle emphasizes the urgency of the WARNING and CAUTION contents. Inside the
triangle and above it are details concerning the symbol (e.g. Electrical Shock).

The barred "Do Not..." circle warns against certain actions. The action which must be avoided is both illustrated inside the barred circle and written above it (e.g. No Disassembly).

The black "Must Do..." circle indicates actions that must be taken. The required action is
both illustrated inside the black disk and written above it (e.g. Unplug).

How Warnings are Presented in This Manual


A message is written beside the symbol indicating the caution level. This message is marked with a vertical ribbon in the left margin, to distinguish this warning from ordinary descriptions.
A display example is shown here.
Example warning

CAUTION (warning level indicator; the symbol above it, e.g. "Do", is the warning type indicator)

• To avoid damaging the ETERNUS storage system, pay attention to the
  following points when cleaning the ETERNUS storage system (warning details):
  - Make sure to disconnect the power when cleaning.
  - Be careful that no liquid seeps into the ETERNUS storage system
    when using cleaners, etc.
  - Do not use alcohol or other solvents to clean the ETERNUS storage system.

The warning is marked with a warning layout ribbon in the left margin.

1. Function Overview

The ETERNUS DX provides various functions to ensure data integrity, enhance security, reduce cost, and optimize
the overall performance of the system.
The ETERNUS DX integrates block data (SAN area) and file data (NAS area) in a single device and also provides
advanced functions according to each connection.
These functions enable the storage system to respond to problems in various situations.
The ETERNUS DX has functions such as the SAN function (supports block data access), the NAS function (supports file data access), and basic functions that can be used without needing to distinguish between the SAN and the NAS connection.
For more details about the basic functions, refer to "2. Basic Functions" (page 16). For more details about the
functions that are used for a SAN connection, refer to "3. SAN Functions" (page 110).
Table 1 Basic Functions

Overview Function
Data protection "Data Block Guard" (page 31)
Functions that ensure data integrity to improve data reliability. "Disk Drive Patrol" (page 33)
It is possible to detect and fix drive failures early. "Redundant Copy" (page 34)
"Rebuild" (page 35)
"Fast Recovery" (page 36)
"Copyback/Copybackless" (page 37)
"Protection (Shield)" (page 39)
"Reverse Cabling" (page 41)
Resource utilization (virtualization/Automated Storage Tier- "Thin Provisioning" (page 42)
ing) "Flexible Tier" (page 48)
Functions that deliver effective resource utilization. "Extreme Cache Pool" (page 54)
• Data capacity expansion "RAID Migration" (page 57)
Functions that expand or relocate a RAID group or a volume "Logical Device Expansion" (page 59)
in order to flexibly meet any increases in the amount of data. "LUN Concatenation" (page 60)
• Guarantee of performance "Wide Striping" (page 63)
A function that creates a volume that is striped in multiple
RAID groups in order to improve performance.
Security measures (data encryption) "Encryption with Self Encrypting Drive (SED)" (page 65)
Functions that encrypt data in the drive media to prevent the "Firmware Data Encryption" (page 66)
data from being fraudulently decoded. "Key Management Server Linkage" (page 67)
Security measures (user access management) "Account Management" (page 70)
Functions to prevent information leakage that are caused by a "User Authentication" (page 72)
malicious access. "Audit Log" (page 74)
Environmental burden reduction "Eco-mode" (page 75)
Functions that adjust the operating time and the environment "Power Consumption Visualization" (page 78)
of the installation location in order to reduce power consumption.
Operation management (device monitoring) "Operation Management Interface" (page 79)
Functions that reduce the load on the system administrator, and "Performance Information Management" (page 80)
that improve system stability and increase operating ratio of "Event Notification" (page 82)
the system. "Device Time Synchronization" (page 85)
Power control "Power Synchronized Unit" (page 86)
Power control functions that are used to link power-on and "Remote Power Operation (Wake On LAN)" (page 87)
power-off operations with servers and perform scheduled operations.


Overview Function
• High-speed backup "Backup (SAN)" (page 89)
• Continuous business
Data can be duplicated at any point without affecting other operations.
Performance tuning "Striping Size Expansion" (page 102)
A function that can perform tuning in order to improve performance. "Assigned CMs" (page 103)
Simple configuration "Smart Setup Wizard" (page 104)
A wizard that simplifies the configuration of Thin Provisioning.

Table 2 SAN Functions

Overview Function
Operations Optimization (Deduplication/Compression) "Deduplication/Compression" (page 110)
A function that eliminates duplicated data and compresses the
data to reduce the amount of written data.
Security measures (unauthorized access prevention) "Host Affinity" (page 118)
Functions that prevent unintentional storage access. "iSCSI Security" (page 120)
Stable operation "Quality of Service (QoS)" (page 120)
For stable operation of server connections, the appropriate response "Host Response" (page 122)
action and the processing priority can be specified for "Storage Cluster" (page 123)
each server.
If an error occurs in the storage system during operations, the
connected storage system is switched automatically and operations can continue.
Data relocation "Storage Migration" (page 126)
A function that migrates data between ETERNUS storage systems.
Non-disruptive data relocation "Non-disruptive Storage Migration" (page 128)
A function that migrates data between ETERNUS storage systems without stopping the business server.
Information linkage (function linkage with servers) "Oracle VM Linkage" (page 130)
Functions that cooperate with a server to improve performance "VMware Linkage" (page 131)
in a virtualized environment. Beneficial effects such as centralized "Veeam Storage Integration" (page 136)
management of the entire storage system and a reduction of "Microsoft Linkage" (page 139)
the load on servers can be realized.
"OpenStack Linkage" (page 140)
"Logical Volume Manager (LVM)" (page 141)

2. Basic Functions

This chapter describes the functions that control the storage system.

RAID Functions
This section explains the points to note before configuring a system using the ETERNUS DX.

Supported RAID
The ETERNUS DX supports the following RAID levels.
• RAID0 (striping)
• RAID1 (mirroring)
• RAID1+0 (striping of pairs of drives for mirroring)
• RAID5 (striping with distributed parity)
• RAID5+0 (double striping with distributed parity)
• RAID6 (striping with double distributed parity)
• RAID6-FR (provides the high speed rebuild function, and striping with double distributed parity)

Remember that a RAID0 configuration is not redundant. This means that if a RAID0 drive fails, the data will
not be recoverable.

This section explains the concepts and purposes (RAID level selection criteria) of the supported RAID levels.

When Nearline SAS disks that have 6TB or more are used, the available RAID levels are RAID0, RAID1, RAID6,
and RAID6-FR.


■ RAID Level Concept


A description of each RAID level is shown below.

● RAID0 (Striping)
Data is split into blocks and stored across multiple drives.
Figure 1 RAID0 Concept
Data writing request

A B C D

A B
C D

Drive#0 Drive#1
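The block-to-drive mapping that striping performs can be sketched as follows. This is a minimal illustration, not taken from this manual; the function name and the two-drive layout (matching Figure 1) are only for explanation:

```python
def raid0_map(block: int, drives: int) -> tuple[int, int]:
    """Map a logical block number to (drive index, stripe row) under RAID0 striping."""
    return block % drives, block // drives

# Blocks A(0), B(1), C(2), D(3) on the two-drive array of Figure 1:
layout = [raid0_map(b, drives=2) for b in range(4)]
# A and C land on Drive#0 (rows 0 and 1); B and D land on Drive#1
```

Because consecutive blocks alternate between drives, a sequential transfer is serviced by both drives in parallel, which is where the RAID0 performance gain comes from.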

● RAID1 (Mirroring)
The same data is stored on two drives (a mirrored pair) at the same time.
If one drive fails, the other drive continues operation.
Figure 2 RAID1 Concept
Data writing request

A B C D

A A
B B
C C
D D
Drive#0 Drive#1


● RAID1+0 (Striping of Pairs of Drives for Mirroring)


RAID1+0 combines the high I/O performance of RAID0 (striping) with the reliability of RAID1 (mirroring).
Figure 3 RAID1+0 Concept
Data writing request

A B C D

(Diagram: blocks A to D are striped across the drive pairs Drive#0/Drive#4, Drive#1/Drive#5, Drive#2/Drive#6, and Drive#3/Drive#7; each pair holds a block and its mirror copy. Striping (RAID0) is applied across the pairs, and mirroring (RAID1) within each pair.)
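A RAID1+0 write therefore goes to one drive of a striped pair and to its mirror. The following sketch is hypothetical (the numbering scheme, with mirrors on Drive#4 to Drive#7, follows Figure 3 and is not a statement about the ETERNUS DX's internal layout):

```python
def raid10_map(block: int, pairs: int) -> tuple[int, int, int]:
    """Map a logical block to (primary drive, mirror drive, stripe row)
    in a RAID1+0 array built from `pairs` mirrored drive pairs."""
    primary = block % pairs
    mirror = primary + pairs  # mirrors numbered after the primaries, as in Figure 3
    return primary, mirror, block // pairs

# Block D (number 3) in the four-pair array of Figure 3:
# written to Drive#3 and mirrored to Drive#7
```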

● RAID5 (Striping with Distributed Parity)


Data is divided into blocks and allocated across multiple drives together with parity information created from
the data in order to ensure the redundancy of the data.
Figure 4 RAID5 Concept
Data writing request

A B C D

A B C D Create parity data

A B C D P A, B, C, D
E F G P E, F, G, H H
I J P I, J, K, L K L
M P M, N, O, P N O P
Drive#0 Drive#1 Drive#2 Drive#3 Drive#4

Parity for data A to D: P A, B, C, D


Parity for data E to H: P E, F, G, H
Parity for data I to L: P I, J, K, L
Parity for data M to P: P M, N, O, P
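The parity above is a bytewise XOR of the data blocks in a stripe, which is what allows any single missing block to be reconstructed. A minimal sketch (the block contents are hypothetical):

```python
from functools import reduce

def xor_parity(blocks: list[bytes]) -> bytes:
    """RAID5-style parity: bytewise XOR of every data block in the stripe."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Parity for one stripe of four data blocks
stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_parity(stripe)

# If the drive holding "CCCC" fails, XOR of the survivors and the parity
# reproduces the lost block:
restored = xor_parity([b"AAAA", b"BBBB", b"DDDD", parity])
# restored == b"CCCC"
```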


● RAID5+0 (Double Striping with Distributed Parity)


Multiple RAID5 volumes are RAID0 striped. For large capacity configurations, RAID5+0 provides better performance, better reliability, and shorter rebuilding times than RAID5.
Figure 5 RAID5+0 Concept
Data writing request

A B C D

Striping (RAID0)
A B C D

A C
B D

Striping with
A B Create parity data C D Create parity data
distributed parity
(RAID5)

A B P A, B C D P C, D
E P E, F F G P G, H H
P I, J I J P K, L K L
M N P M, N O P P O, P
Drive#0 Drive#1 Drive#2 Drive#3 Drive#4 Drive#5

The two RAID5 groups (Drive#0 to Drive#2 and Drive#3 to Drive#5) are striped together (RAID0), with distributed parity (RAID5) within each group.


● RAID6 (Striping with Double Distributed Parity)


Allocating two different parities on different drives (double parity) makes it possible to recover from up to two
drive failures.
Figure 6 RAID6 Concept
Data writing request

A B C D

A B C D Create parity data

A B C D P1 A, B, C, D P2 A, B, C, D
E F G P1 E, F, G, H P2 E, F, G, H H
I J P1 I, J, K, L P2 I, J, K, L K L
M P1 M, N, O, P P2 M, N, O, P N O P
Drive#0 Drive#1 Drive#2 Drive#3 Drive#4 Drive#5

Parity for data A to D: P1 A, B, C, D and P2 A, B, C, D


Parity for data E to H: P1 E, F, G, H and P2 E, F, G, H
Parity for data I to L: P1 I, J, K, L and P2 I, J, K, L
Parity for data M to P: P1 M, N, O, P and P2 M, N, O, P
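In typical RAID6 implementations the two parities are computed differently: P is a plain XOR (as in RAID5), while Q weights each block with powers of a generator in the Galois field GF(2^8), so that P and Q together can solve for two unknowns. The manual does not describe the ETERNUS DX's internal coding, so the following is only a generic sketch of P/Q generation, assuming the commonly used polynomial 0x11D and generator 2:

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo the polynomial 0x11D."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D  # reduce by the low byte of 0x11D
    return product

def pq_parities(data: list[int]) -> tuple[int, int]:
    """P/Q parities for one byte taken from each data drive in a stripe:
    P = d0 ^ d1 ^ ..., Q = d0 ^ g*d1 ^ g^2*d2 ^ ... with generator g = 2."""
    p = q = 0
    weight = 1
    for d in data:
        p ^= d
        q ^= gf_mul(weight, d)
        weight = gf_mul(weight, 2)
    return p, q
```

Because Q uses a different weighting than P, the two equations are independent, which is what makes recovery from two simultaneous drive failures possible.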


● RAID6-FR (Provides the High Speed Rebuild Function, and Striping with Double Distributed Parity)
Distributing multiple data groups and reserved space equivalent to hot spares across the constituent drives makes it possible to recover from up to two drive failures. RAID6-FR requires less rebuild time than RAID6.
Figure 7 RAID6-FR Concept
Data writing request

A B C D

A B C D

Create parity data Create parity data

(Diagram: the data blocks, their parities (P1/P2), and fast recovery hot spare space (FHS) are distributed across Drive#0 to Drive#10.)
RAID6-FR ((3D+2P) × 2 + 1HS)

Parity for data A, B, C: P1 A, B, C and P2 A, B, C


Parity for data D, E, F: P1 D, E, F and P2 D, E, F
Parity for data G, H, I: P1 G, H, I and P2 G, H, I
Parity for data J, K, L: P1 J, K, L and P2 J, K, L
Parity for data M, N, O: P1 M, N, O and P2 M, N, O
Parity for data P, Q, R: P1 P, Q, R and P2 P, Q, R
Parity for data S, T, U: P1 S, T, U and P2 S, T, U
Parity for data V, W, X: P1 V, W, X and P2 V, W, X
:
Fast recovery Hot Spare: FHS

■ Reliability, Performance, and Capacity for Each RAID Level

Table 3 compares the reliability, performance, and capacity of each RAID level.
Table 3 RAID Level Comparison

RAID level   Reliability   Performance (*1)   Capacity
RAID0        ×             ◎                  ◎
RAID1        ○             ○                  △
RAID1+0      ○             ◎                  △
RAID5        ○             ○                  ○
RAID5+0      ○             ○                  ○
RAID6        ◎             ○                  ○
RAID6-FR     ◎             ○                  ○

◎: Very good  ○: Good  △: Reasonable  ×: Poor

*1: Performance may differ according to the number of drives and the processing method from the host.


■ Recommended RAID Level


Select the appropriate RAID level according to the usage.
• Recommended RAID levels are RAID1, RAID1+0, RAID5, RAID5+0, RAID6, and RAID6-FR.
• When importance is placed upon read and write performance, a RAID1+0 configuration is recommended.
• For read-only file servers and backup servers, RAID5, RAID5+0, RAID6, or RAID6-FR can also be used for higher efficiency. However, note that if a drive fails, data restoration from parities and the rebuilding process may result in a loss in performance.
• For SSDs, a RAID5 configuration or a more fault-tolerant RAID6 configuration is recommended because SSDs operate much faster than other types of drives. For large capacity SSDs, using a RAID6-FR configuration, which provides excellent performance for the rebuild process, is recommended.
• Using a RAID6 or RAID6-FR configuration is recommended when Nearline SAS disks that have 6TB or more are used. For details on the RAID levels that can be configured with Nearline SAS disks that have 6TB or more, refer to "Supported RAID" (page 16).

User Capacity (Logical Capacity)


User Capacity for Each RAID Level
The user capacity depends on the capacity of drives that configure a RAID group and the RAID level.
Table 4 shows the formula for calculating the user capacity for each RAID level.
Table 4 Formula for Calculating User Capacity for Each RAID Level

RAID level   Formula for user capacity computation
RAID0        Drive capacity × Number of drives
RAID1        Drive capacity × Number of drives ÷ 2
RAID1+0      Drive capacity × Number of drives ÷ 2
RAID5        Drive capacity × (Number of drives - 1)
RAID5+0      Drive capacity × (Number of drives - 2)
RAID6        Drive capacity × (Number of drives - 2)
RAID6-FR     Drive capacity × (Number of drives - (2 × N) - Number of hot spares) (*1)

*1: "N" is the number of RAID6 configuration sets. For example, if a RAID6 group is configured with "(3D+2P) × 2 + 1HS", N is "2".
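As a rough sketch, the formulas in Table 4 can be expressed as a small function (the helper name is hypothetical; capacities may be in any consistent unit):

```python
def user_capacity(raid_level, drive_capacity, num_drives,
                  num_hot_spares=0, num_sets=1):
    """User capacity of a RAID group per the Table 4 formulas (sketch).

    drive_capacity is the per-drive user capacity. num_sets is "N" for
    RAID6-FR: the number of RAID6 configuration sets in the group.
    """
    if raid_level == "RAID0":
        return drive_capacity * num_drives
    if raid_level in ("RAID1", "RAID1+0"):
        return drive_capacity * num_drives // 2
    if raid_level == "RAID5":
        return drive_capacity * (num_drives - 1)
    if raid_level in ("RAID5+0", "RAID6"):
        return drive_capacity * (num_drives - 2)
    if raid_level == "RAID6-FR":
        return drive_capacity * (num_drives - 2 * num_sets - num_hot_spares)
    raise ValueError(raid_level)
```

For example, a (3D+2P) × 2 + 1HS RAID6-FR group has 11 drives, N = 2, and one hot spare, so the capacity of 6 drives is usable.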


User Capacity of Drives


Table 5 shows the user capacity for each drive.
The supported drives vary between the ETERNUS DX100 S4/DX200 S4 and the ETERNUS DX100 S3/DX200 S3. For
details about drives, refer to "Overview" of the currently used storage systems.
Table 5 User Capacity per Drive

Product name (*1) User capacity


400GB SSD 374,528MB
800GB SSD 750,080MB
960GB SSD 914,432MB
1.6TB SSD 1,501,440MB
1.92TB SSD 1,830,144MB
3.84TB SSD 3,661,568MB
7.68TB SSD 7,324,416MB
15.36TB SSD 14,650,112MB
30.72TB SSD 29,301,504MB
300GB SAS disk 279,040MB
600GB SAS disk 559,104MB
900GB SAS disk 839,168MB
1.2TB SAS disk 1,119,232MB
1.8TB SAS disk 1,679,360MB
2.4TB SAS disk 2,239,744MB
1TB Nearline SAS disk 937,728MB
2TB Nearline SAS disk 1,866,240MB
3TB Nearline SAS disk 2,799,872MB
4TB Nearline SAS disk 3,733,504MB
6TB Nearline SAS disk (*2) 5,601,024MB
8TB Nearline SAS disk (*2) 7,468,288MB
10TB Nearline SAS disk (*2) 9,341,696MB
12TB Nearline SAS disk (*2) 11,210,496MB
14TB Nearline SAS disk (*2) 13,079,296MB

*1: The capacity of the product names for the drives is based on the assumption that 1MB = 1,000² bytes,
while the user capacity for each drive is based on the assumption that 1MB = 1,024² bytes. Furthermore,
OS file management overhead will reduce the actual usable capacity.
The user capacity is constant regardless of the drive size (2.5"/3.5"), the SSD type (Value SSD and MLC SSD),
or the encryption support (SED).
*2: For details on the RAID levels that can be configured with Nearline SAS disks that have 6TB or more, refer
to "Supported RAID" (page 16).


RAID Group
This section explains RAID groups.
A RAID group is a group of drives and is the unit in which RAID is configured. RAID groups with the same RAID level and RAID groups with different RAID levels can coexist in the ETERNUS DX. After a RAID group is created, the RAID level can be changed and drives can be added.
Table 6 RAID Group Types and Usage

RAID group
  Usage: Areas to store normal data. Volumes (Standard, WSV, SDV, SDPV) for work and Advanced Copy can be created in a RAID group.
  Maximum capacity per RAID group: Approximately 363TB (*1) / Approximately 726TB (*2)
  Maximum capacity per storage system: Depends on the number of installable drives

REC Disk Buffer
  Usage: Areas that are dedicated for the REC Consistency mode to temporarily back up copy data.
  Maximum capacity per RAID group: Approximately 55TB (*3) / Approximately 111TB (*4)
  Maximum capacity per storage system: 110TB (*3) (*8) / 222TB (*4) (*9)

Thin Provisioning Pool (TPP) (*5)
  Usage: RAID groups that are used for Thin Provisioning in which the areas are managed as a Thin Provisioning Pool (TPP). Thin Provisioning Volumes (TPVs) can be created in a TPP.
  Maximum capacity per storage system: 2,048TB (*7)

Flexible Tier Sub Pool (FTSP) (*6)
  Usage: RAID groups that are used for the Flexible Tier function in which the areas are managed as a Flexible Tier Sub Pool (FTSP). Larger pools (Flexible Tier Pools: FTRPs) are comprised of layers of FTSPs. Flexible Tier Volumes (FTVs) can be created in an FTSP.
  Maximum capacity per storage system: 2,048TB (*7)

*1: This value is for a 15.36TB SSD RAID6-FR ([13D+2P] × 2 + 1HS) configuration in the ETERNUS DX100 S3/DX200 S3.
For details on the number of configuration drives for each RAID level and recommended configurations, refer to Table 7.
*2: This value is for a 30.72TB SSD RAID6-FR ([13D+2P] × 2 + 1HS) configuration in the ETERNUS DX100 S4/DX200 S4.
For details on the number of configuration drives for each RAID level and recommended configurations, refer to Table 7.
*3: This value is for a 15.36TB SSD RAID1+0 (4D+4M) configuration in the ETERNUS DX100 S3/DX200 S3.
*4: This value is for a 30.72TB SSD RAID1+0 (4D+4M) configuration in the ETERNUS DX100 S4/DX200 S4.
*5: For details on the number of configuration drives for each RAID level and recommended configurations, refer to Table 14.
*6: For details on the number of configuration drives for each RAID level and recommended configurations, refer to Table 19.
*7: Total of the Thin Provisioning Pool capacity and the FTSP capacity.
*8: The maximum capacity of an ETERNUS DX100 S3/DX200 S3 with one controller is 55TB.
*9: The maximum capacity of an ETERNUS DX100 S4/DX200 S4 with one controller is 111TB.


The same size drives (2.5", 3.5") and the same kind of drives (SAS disks, Nearline SAS disks, SSDs, or SEDs) must
be used to configure a RAID group.
Figure 8 Example of a RAID Group
[Figure: RAID group 1 consists of five 600GB SAS disks, and RAID group 2 consists of four 400GB SSDs.]

• SAS disks and Nearline SAS disks can be installed together in the same group. Note that SAS disks and Nearline SAS disks cannot be installed with SSDs or SEDs.
• Use drives that have the same size, capacity, rotational speed (for disks), Advanced Format support, interface speed (for SSDs), and drive enclosure transfer speed (for SSDs) to configure RAID groups.
- If a RAID group is configured with drives that have different capacities, all the drives in the RAID group are
recognized as having the same capacity as the drive with the smallest capacity in the RAID group and the
rest of the capacity in the drives that have a larger capacity cannot be used.
- If a RAID group is configured with drives that have different rotational speeds, the performance of all of
the drives in the RAID group is reduced to that of the drive with the lowest rotational speed.
- If a RAID group is configured with SSDs that have different interface speeds, the performance of all of the
SSDs in the RAID group is reduced to that of the SSD with the lowest interface speed.
- 3.5" SAS disks are handled as being the same size type as the drives for high-density drive enclosures. For
example, 3.5" Nearline SAS disks and Nearline SAS disks for high-density drive enclosures can exist to-
gether in the same RAID group.
- When a RAID group is configured with SSDs in both the high-density drive enclosure (6Gbit/s), and the
3.5" type drive enclosure or the high-density drive enclosure (12Gbit/s), because the interface speed of
the high-density drive enclosure (6Gbit/s) is 6Gbit/s, all of the SSDs in the RAID group operate at 6Gbit/s.
• For details on the RAID levels that can be configured with Nearline SAS disks that have 6TB or more, refer to
"Supported RAID" (page 16).

Table 7 shows the recommended number of drives that configure a RAID group.
Table 7 Recommended Number of Drives per RAID Group

RAID level   Number of configuration drives   Recommended number of drives (*1)
RAID1        2                                2 (1D+1M)
RAID1+0      4 to 32                          4 (2D+2M), 6 (3D+3M), 8 (4D+4M), 10 (5D+5M)
RAID5        3 to 16                          3 (2D+1P), 4 (3D+1P), 5 (4D+1P), 6 (5D+1P)
RAID5+0      6 to 32                          3 (2D+1P) × 2, 4 (3D+1P) × 2, 5 (4D+1P) × 2, 6 (5D+1P) × 2
RAID6        5 to 16                          5 (3D+2P), 6 (4D+2P), 7 (5D+2P)
RAID6-FR     11 to 31                         17 ((6D+2P) × 2 + 1HS)

*1: D = Data, M = Mirror, P = Parity, HS = Hot Spare


• Sequential access performance hardly varies with the number of drives for the RAID group.
• Random access performance tends to be proportional to the number of drives for the RAID group.
• Use of higher capacity drives will increase the time required for the drive rebuild process to complete.
• For RAID5, RAID5+0, and RAID6, ensure that a single RAID group is not being configured with too many
drives.
If the number of drives increases, the time to perform data restoration from parities and Rebuild/Copyback
when a drive fails also increases.
For details on the recommended number of drives, refer to Table 7.
• The RAID level that can be registered in REC Disk Buffers is RAID1+0. The drive configurations that can be registered in REC Disk Buffers are 2D+2M and 4D+4M.
For details on the Thin Provisioning function and the RAID configurations that can be registered in Thin Provisioning Pools, refer to "Storage Capacity Virtualization" (page 42).
For details on the Flexible Tier functions and the RAID configurations that can be registered in Flexible Tier Pools, refer to "Automated Storage Tiering" (page 49).

An assigned CM is allocated to each RAID group. For details, refer to "Assigned CMs" (page 103).
For the installation locations of the drives that configure the RAID group, refer to "Recommended RAID Group
Configurations" (page 191).

Volume
This section explains volumes.
Logical drive areas in RAID groups are called volumes.
A volume is the basic RAID unit that can be recognized by the server.
Figure 9 Volume Concept
[Figure: volumes (Volume 1, Volume 2, Volume 3) are logical areas allocated within RAID group 1 and RAID group 2.]

A volume may be up to 128TB. However, the maximum capacity of a volume varies depending on the OS of the server.
The number of volumes that can be created in the ETERNUS DX is shown below. Volumes can be created until
the combined total for each volume type reaches the maximum number of volumes.
Table 8 Number of Volumes That Can Be Created

Model                         Number of volumes (max.)
ETERNUS DX100 S4/DX100 S3     2,048 (*1) / 4,096 (*2)
ETERNUS DX200 S4/DX200 S3     4,096 (*1) / 8,192 (*2)


*1: The values if the controller firmware version is earlier than V10L60 or if the "Expand Volume Mode" is disabled.
*2: The values if the controller firmware version is V10L60 or later and if the "Expand Volume Mode" is enabled.
A volume can be expanded or moved if required. Multiple volumes can be concatenated and treated as a single volume. For availability of expansion, displacement, and concatenation for each volume, refer to "Target Volumes of Each Function" (page 200).
The types of volumes that are listed in the table below can be created in the ETERNUS DX.
The types of volumes that are listed below can be created in the ETERNUS DX.
Table 9 Volumes That Can Be Created

Standard (Open)
  Usage: A standard volume is used for normal usage, such as file systems and databases. The server recognizes it as a single logical unit. "Standard" is displayed as the type for this volume in ETERNUS Web GUI/ETERNUS CLI and "Open" is displayed in ETERNUS SF software.
  Maximum capacity: 128TB (*1)

Snap Data Volume (SDV)
  Usage: This area is used as the copy destination for SnapOPC/SnapOPC+. There is an SDV for each copy destination.
  Maximum capacity: 24 [MB] + copy source volume capacity × 0.1 [%] (*2)

Snap Data Pool Volume (SDPV)
  Usage: This volume is used to configure the Snap Data Pool (SDP) area. The SDP capacity equals the total capacity of the SDPVs. A volume is supplied from an SDP when the amount of updates exceeds the capacity of the copy destination SDV.
  Maximum capacity: 2TB

Thin Provisioning Volume (TPV)
  Usage: This virtual volume is created in a Thin Provisioning Pool area.
  Maximum capacity: 128TB

Flexible Tier Volume (FTV)
  Usage: This volume is a target volume for layering. Data is automatically redistributed in small block units according to the access frequency. An FTV belongs to a Flexible Tier Pool.
  Maximum capacity: 128TB

Virtual Volume (VVOL)
  Usage: A VVOL is a VMware vSphere dedicated capacity virtualization volume. Operations can be simplified by associating VVOLs with virtual disks. Its volume type is FTV.
  Maximum capacity: 128TB

Deduplication/Compression Volume
  Usage: This volume is a virtual volume that is recognized by the server when the Deduplication/Compression function is used. It can be created by enabling the Deduplication/Compression setting for a volume that is to be created. The data is seen by the server as being non-deduplicated and uncompressed. The volume type is TPV.
  Maximum capacity: 128TB

Wide Striping Volume (WSV)
  Usage: This volume is created by concatenating distributed areas from 2 to 64 RAID groups. Processing speed is fast because data access is distributed.
  Maximum capacity: 128TB

ODX Buffer volume
  Usage: An ODX Buffer volume is a dedicated volume that is required to use the Offloaded Data Transfer (ODX) function of Windows Server 2012 or later. It is used to save the source data when data is updated while a copy is being processed. Only one can be created per ETERNUS DX. Its volume type is Standard, TPV, or FTV.
  Maximum capacity: 1TB

*1: When multiple volumes are concatenated using the LUN Concatenation function, the maximum capacity is also 128TB.
*2: The capacity differs depending on the copy source volume capacity.
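The SDV sizing rule from Table 9 (24MB plus 0.1% of the copy source capacity) can be sketched as follows (the helper name is hypothetical):

```python
def sdv_capacity_mb(copy_source_mb):
    # SDV size = 24MB + 0.1% of the copy source volume capacity (Table 9).
    return 24 + copy_source_mb * 0.001
```

For example, a 1TB (1,048,576MB) copy source needs an SDV of about 1,073MB.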
After a volume is created, formatting automatically starts. A server can access the volume while it is being formatted. Wait for the format to complete if high performance access is required for the volume.


• In the ETERNUS DX, volumes have different stripe sizes that depend on the RAID level and the stripe depth parameter. For details about the stripe sizes for each RAID level and the stripe depth parameter values, refer to "ETERNUS Web GUI User's Guide". Note that the available user capacity can be fully utilized if an exact multiple of the stripe size is set for the volume size. If an exact multiple of the stripe size is not set for the volume size, the capacity is not fully utilized and some areas remain unused.
• When a Thin Provisioning Pool (TPP) is created, a control volume is created for each RAID group that configures the relevant TPP. Therefore, the maximum number of volumes that can be created in the ETERNUS DX decreases by the number of RAID groups that configure a TPP.
• When the Flexible Tier function is enabled, 32 work volumes are created. The maximum number of volumes that can be created in the ETERNUS DX decreases by the number of work volumes that are created.
• When a Flexible Tier Sub Pool (FTSP) is created, a control volume is created for each RAID group that configures the relevant FTSP. Therefore, the maximum number of volumes that can be created in the ETERNUS DX decreases by the number of RAID groups that configure an FTSP.
• When using the VVOL function, a single volume for the VVOL management information is created the moment a VVOL is created. The maximum number of volumes that can be created in the ETERNUS DX decreases by the number of volumes for the VVOL management information that are created.
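The stripe-size note above can be illustrated: when the volume size is not an exact multiple of the stripe size, the tail of the last stripe remains unused (the helper name is hypothetical; sizes may be in any consistent unit):

```python
def unused_stripe_capacity(volume_size, stripe_size):
    # If the volume size is an exact multiple of the stripe size,
    # nothing is wasted; otherwise the remainder of the last
    # allocated stripe goes unused.
    remainder = volume_size % stripe_size
    return 0 if remainder == 0 else stripe_size - remainder
```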


Hot Spares
Hot spares are used as spare drives for when drives in a RAID group fail, or when drives are in error status.
Figure 10 Hot Spares
[Figure: when a drive in a RAID group fails, a hot spare is incorporated into the RAID group in its place.]

When the RAID level is RAID6-FR, data in a failed drive can be restored to a reserved space in a RAID group
even when a drive error occurs because a RAID6-FR RAID group retains a reserved space for a whole drive in
the RAID group. If the reserved area is in use and an error occurs in another drive (2nd) in the RAID group,
then the hot spare is used as a spare.

■ Types of Hot Spares


The following two types of hot spare are available:
• Global Hot Spare
This is available for any RAID group. When multiple hot spares are installed, the most appropriate drive is automatically selected and incorporated into a RAID group.
• Dedicated Hot Spare
This is only available to the specified RAID group (one RAID group).
The Dedicated Hot Spare cannot be registered in a RAID group that is registered in TPPs, FTRPs, or REC Disk
Buffers.

Assign "Dedicated Hot Spares" to RAID groups that contain important data, in order to preferentially improve
their access to hot spares.

■ Number of Installable Hot Spares


The number of required hot spares is determined by the total number of drives.
The following table shows the recommended number of hot spares for each drive type.
Table 10 Hot Spare Installation Conditions

                               Total number of drives
Model                          Up to 120   Up to 240   Up to 264
ETERNUS DX100 S4/DX100 S3      1           2           —
ETERNUS DX200 S4/DX200 S3      1           2           3
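Table 10 can be expressed as a small lookup (hypothetical helper; the model keys are shorthand for the two model families):

```python
def recommended_hot_spares(model, total_drives):
    # Table 10: recommended number of hot spares by total installed drives.
    limits = {"DX100": ((120, 1), (240, 2)),
              "DX200": ((120, 1), (240, 2), (264, 3))}
    for max_drives, spares in limits[model]:
        if total_drives <= max_drives:
            return spares
    raise ValueError("drive count exceeds the model maximum")
```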


■ Types of Drives
If a combination of SAS disks, Nearline SAS disks, SSDs, and SEDs is installed in the ETERNUS DX, each different
type of drive requires a corresponding hot spare.
2.5" and 3.5" drive types are available. The drive type for high-density drive enclosures is 3.5".
There are two types of rotational speeds for SAS disks: 10,000rpm and 15,000rpm. If a drive error occurs and a hot spare is configured in a RAID group with different rotational speed drives, the performance of all the drives in the RAID group is determined by the drive with the slowest rotational speed. When using SAS disks with different rotational speeds, prepare hot spares that correspond to the different rotational speed drives if required. Even if a RAID group is configured with SAS disks that have different interface speeds, performance is not affected.
There are two types of interface speeds for SSDs: 6Gbit/s and 12Gbit/s. If a drive error occurs and a hot spare is configured in a RAID group with different interface speed SSDs, the performance of all the SSDs in the RAID group is determined by the SSDs with the slowest interface speed. Preparing SSDs with the same interface speed as the hot spare is recommended.
The capacity of each hot spare must be equal to the largest capacity of the same-type drives.

■ Selection Criteria
When multiple Global Hot Spares are installed, the following criteria are used to select which hot spare will re-
place a failed drive:
Table 11 Hot Spare Selection Criteria

Selection order   Selection criteria
1                 A hot spare with the same type, same capacity, and same rotational speed (for disks) or same interface speed (for SSDs) as the failed drive
2                 A hot spare with the same type, and same rotational speed (for disks) or same interface speed (for SSDs) as the failed drive but with a larger capacity (*1)
3                 A hot spare with the same type and same capacity as the failed drive but with a different rotational speed (for disks) or a different interface speed (for SSDs)
4                 A hot spare with the same type as the failed drive but with a larger capacity and a different rotational speed (for disks) or a different interface speed (for SSDs) (*1)

*1: When there are multiple hot spares with a larger capacity than the failed drive, the hot spare with the smallest capacity among them is used first.
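The selection order in Table 11 can be sketched as follows. The drive representation (dicts with 'type', 'capacity', and 'speed' keys) is a hypothetical simplification; 'speed' stands for rotational speed on disks and interface speed on SSDs:

```python
def select_hot_spare(failed, spares):
    """Pick a Global Hot Spare for a failed drive per Table 11 (sketch)."""
    # Only spares of the same drive type with at least the failed
    # drive's capacity are usable at all.
    candidates = [s for s in spares
                  if s["type"] == failed["type"]
                  and s["capacity"] >= failed["capacity"]]

    def rank(s):
        same_speed = s["speed"] == failed["speed"]
        same_cap = s["capacity"] == failed["capacity"]
        # Selection order 1-4 from Table 11; among larger-capacity
        # spares, the smallest capacity wins (footnote *1).
        order = {(True, True): 1, (True, False): 2,
                 (False, True): 3, (False, False): 4}[(same_speed, same_cap)]
        return (order, s["capacity"])

    return min(candidates, key=rank, default=None)
```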


Data Protection

Data Block Guard


When a write request is issued by a server, the Data Block Guard function adds check codes to all of the data that is to be stored. The data is verified at multiple checkpoints on the transmission paths to ensure data integrity.
When data is written from the server, the Data Block Guard function adds an 8-byte check code to each block (every 512 bytes) of the data and verifies the data at multiple checkpoints to ensure data consistency. This function can detect an error when data is destroyed or data corruption occurs. When data is read by the server, the check codes are confirmed and then removed, ensuring that data consistency is verified in the whole storage system.
If an error is detected while data is being written to a drive, the data is read again from the data that is duplicated in the cache memory. This data is checked for consistency and then written.
If an error is detected while data is being read from a drive, the data is restored using RAID redundancy.
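The per-block check coding described above can be sketched as follows. The 512-byte block size and 8-byte code length come from the text, but the CRC32-based code is a hypothetical stand-in; the actual check-code algorithm is not documented here:

```python
import zlib

BLOCK = 512  # bytes per logical block
CC_LEN = 8   # check-code bytes appended to each block

def protect(data: bytes) -> bytes:
    # Append an 8-byte check code to every 512-byte block (as on a write).
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        out += block + zlib.crc32(block).to_bytes(4, "big") + b"\x00" * 4
    return bytes(out)

def verify_and_strip(protected: bytes) -> bytes:
    # Confirm each block's check code, then remove it (as on a read).
    out = bytearray()
    for i in range(0, len(protected), BLOCK + CC_LEN):
        block = protected[i:i + BLOCK]
        code = protected[i + BLOCK:i + BLOCK + CC_LEN]
        if code != zlib.crc32(block).to_bytes(4, "big") + b"\x00" * 4:
            raise ValueError("check code mismatch: data corrupted")
        out += block
    return bytes(out)
```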
Figure 11 Data Block Guard
[Figure: on a write, the controller adds a check code (CC) to each block of user data (A0, A1, A2) before storing it in cache memory and writing it to the drive; on a read, the check codes are confirmed and then removed.]
1. The check codes are added
2. The check codes are confirmed
3. The check codes are confirmed and removed

Also, the T10-Data Integrity Field (T10-DIF) function is supported. T10-DIF is a function that adds a check code to
data that is to be transferred between the Oracle Linux server and the ETERNUS DX, and ensures data integrity
at the SCSI level.
The server generates a check code for the user data in the host bus adapter (HBA), and verifies the check code
when reading data in order to ensure data integrity.
The ETERNUS DX double-checks data by using the data block guard function and by using the supported T10-DIF
to improve reliability.
Data is protected at the SCSI level on the path to the server. Therefore, data integrity can be ensured even if data
is corrupted during a check code reassignment.
By linking the Data Integrity Extensions (DIX) function of Oracle DB, data integrity can be ensured in the entire
system including the server.
The T10-DIF function can be used when connecting with HBAs that support T10-DIF with an FC interface.


The T10-DIF function can be enabled or disabled for each volume when the volumes are created. This function
cannot be enabled or disabled after a volume has been created.

• The T10-DIF function can be enabled only in the Standard volume.
• LUN concatenation cannot be performed for volumes where the T10-DIF function is enabled.


Disk Drive Patrol


In the ETERNUS DX, all of the drives are checked in order to detect drive errors early and to restore drives from errors or disconnect them.
The Disk Drive Patrol function regularly diagnoses and monitors the operational status of all drives that are installed in the ETERNUS DX. Drives are checked (read check) regularly as a background process.
For drive checking, a read check is performed sequentially for a part of the data in all the drives. If an error is detected, data is restored using drives in the RAID group and the data is written back to another block of the drive in which the error occurred.
Figure 12 Disk Drive Patrol
[Figure: data is read and checked sequentially; when an error is detected in data D1, the data is reconstructed from the remaining data (D2, D3) and the parity (P) in the RAID group and written back to another block of the drive in which the error occurred. Read checking is performed during the diagnosis. D1 to D3: Data, P: Parity.]


These checks are performed in blocks (default 2MB) for each drive sequentially and are repeated until all the
blocks for all the drives have been checked. Patrol checks are performed every second, 24 hours a day (default).

Drives that are stopped by Eco-mode are checked when the drives start running again.

The Maintenance Operation privilege is required to set detailed parameters.
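The block-wise patrol described above can be sketched as a loop (a hypothetical simplification; the real patrol paces its read checks out over time rather than running them back to back):

```python
def patrol_pass(drives, chunk_mb=2):
    """One full patrol cycle: read-check every drive in 2MB chunks (default).

    `drives` maps a drive name to its capacity in MB.
    """
    checked = []
    for name, capacity_mb in drives.items():
        for offset_mb in range(0, capacity_mb, chunk_mb):
            # A real read_check(name, offset_mb, chunk_mb) would go here.
            checked.append((name, offset_mb))
    return checked
```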


Redundant Copy
Redundant Copy is a function that copies the data of a drive that shows a possible sign of failure to a hot spare. When the Disk Drive Patrol function decides that preventative maintenance is required for a drive, the data of the maintenance target drive is re-created from the remaining drives and written to the hot spare. The Redundant Copy function enables data to be restored while maintaining data redundancy.
Figure 13 Redundant Copy Function
[Figure: when a drive in a redundant RAID5 group shows a sign of failure, data is created from the drives other than the maintenance target drive and written to the hot spare; the maintenance target drive is then disconnected and switched to the hot spare, keeping the RAID group redundant throughout.]

If a bad sector is detected when a drive is checked, an alternate track is automatically assigned. This drive is not recognized as having a sign of drive failure during this process. However, the drive will be disconnected by the Redundant Copy function if the spare sector is insufficient and the problem cannot be solved by assigning an alternate track.

• Redundant Copy speed


Giving priority to Redundant Copy over host access can be specified. By setting a higher Rebuild priority, the
performance of Redundant Copy operations may improve.
However, it should be noted that when the priority is high and a Redundant Copy operation is performed
for a RAID group, the performance (throughput) of this RAID group may be reduced.


Rebuild
Rebuild processes recover data in failed drives by using other drives. If a free hot spare is available when one of
the RAID group drives has a problem, data of this drive is automatically replicated in the hot spare. This ensures
data redundancy.
Figure 14 Rebuild
[Figure: the failed drive is disconnected from the ETERNUS storage system, data is created from the drives other than the failed drive and written to the hot spare, and the hot spare is then configured into the RAID group, restoring redundancy.]

When no hot spares are registered, rebuilding processes are only performed when a failed drive is replaced or
when a hot spare is registered.

• Rebuild Speed
Giving priority to rebuilding over host access can be specified. By setting a higher rebuild priority, the performance of rebuild operations may improve.
However, it should be noted that when the priority is high and a rebuild operation is performed for a RAID group, the performance (throughput) of this RAID group may be reduced.


Fast Recovery
This function recovers data quickly by relocating data in the failed drive to the other remaining drives when a
drive error is detected.
For a RAID group that is configured with RAID6-FR, Fast Recovery is performed for the reserved area that is
equivalent to hot spares in the RAID group when a drive error occurs.
If a second drive fails when the reserved area is already used by the first failed drive, a normal rebuild (hot spare
rebuild in the ETERNUS DX) is performed.
For data in a failed drive, redundant data and reserved space are allocated in different drives according to the area. A fast rebuild can be performed because multiple rebuild processes are performed for different areas simultaneously.
Figure 15 Fast Recovery
[Figure: in a RAID6-FR ((3D+2P) × 2 + 1HS) group, the failed drive is disconnected from the ETERNUS storage system and its data is created from the redundant data in the normal drives and written simultaneously to the reserved space (FHS) areas, achieving a high-speed rebuild and restoring redundancy. FHS: Fast recovery Hot Spare.]

For the Fast Recovery function that is performed when the first drive fails, a copyback is performed after the
failed drive is replaced even if the Copybackless function is enabled.
For a normal rebuild process that is performed when the reserved space is already being used and the second
drive fails, a copyback is performed according to the settings of the Copybackless function.


Copyback/Copybackless
A Copyback process copies data in a hot spare to the new drive that is used to replace the failed drive.
Figure 16 Copyback
[Figure: after rebuilding has completed, the failed drive is replaced with a new drive, and the data is copied from the hot spare to the new drive; the RAID group remains redundant throughout.]

• Copyback speed
Giving priority to Copyback over host access can be specified. By setting a higher Rebuild priority, the performance of Copyback operations may improve.
However, it should be noted that when the priority is high and a Copyback operation is performed for a RAID group, the performance (throughput) of this RAID group may be reduced.

If the Copybackless function is enabled, the drive that is registered as a hot spare becomes one of the RAID group configuration drives after a rebuild or a redundant copy to the hot spare is completed. The failed drive is disconnected from the RAID group configuration drives and then registered as a hot spare. A copyback of the data is not performed even when the failed drive is replaced by a new drive, because the replacement drive is used as a hot spare.
The Copybackless function applies (that is, a copyback is not performed) when the following attributes of the copybackless target drive (hot spare) and the failed drive are the same:
• Drive type (SAS disks, Nearline SAS disks, SSDs, and Self Encrypting Drives [SEDs])
• Size (2.5" and 3.5" [including high-density drive enclosures])


• Capacity
• Rotational speed (15,000rpm, 10,000rpm, and 7,200rpm) (*1)
• Interface speed (12Gbit/s and 6Gbit/s) (*2)
• Drive enclosure transfer rate (12Gbit/s and 6Gbit/s) (*2)

*1: For SAS disks or Nearline SAS disks (including SEDs) only.
*2: For SSDs only.
If different types of drives have been selected as the hot spare, copyback is performed after replacing the drives
even when the Copybackless function is enabled.
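The attribute match described above can be modeled as a small decision function. This is a hedged sketch: the field names and the function itself are hypothetical, and the actual firmware check is not published; only the compared attributes come from the list above.

```python
# Sketch: does a copyback run after drive replacement? (illustrative model)

REQUIRED_MATCHES = ("drive_type", "size", "capacity")  # always compared

def copyback_needed(failed, spare, copybackless_enabled=True):
    """Return True when a copyback runs after the failed drive is replaced."""
    if not copybackless_enabled:
        return True
    for attr in REQUIRED_MATCHES:
        if failed[attr] != spare[attr]:
            return True                      # mismatched spare -> copyback
    # Rotational speed only matters for SAS/Nearline SAS disks (incl. SEDs);
    # interface/enclosure transfer rate only matters for SSDs (notes *1/*2).
    if failed["drive_type"] in ("SAS", "Nearline SAS"):
        if failed["rpm"] != spare["rpm"]:
            return True
    elif failed["drive_type"] == "SSD":
        if failed["link_rate"] != spare["link_rate"]:
            return True
    return False                             # spare matches: copybackless applies

failed = {"drive_type": "SAS", "size": '2.5"', "capacity": 1200, "rpm": 10000}
spare = dict(failed)                         # identical hot spare
print(copyback_needed(failed, spare))        # -> False (copybackless applies)
```

With an identical hot spare the function returns False (no copyback); any attribute mismatch, or disabling the function, forces a copyback.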
The Copybackless function can be enabled or disabled. This function is enabled by default.
Figure 17 Copybackless
(After rebuilding is complete, the hot spare takes the place of the failed drive as a RAID group configuration drive. The failed drive is then replaced with a new drive, and the new drive becomes a hot spare in the storage system. The RAID5 group remains redundant throughout.)

• To set the Copybackless function for each storage system, use the subsystem parameter settings. These set-
tings can be performed with the system management/maintenance operation privilege. After the settings
are changed, the ETERNUS DX does not need to be turned off and on again.
• If the Copybackless function is enabled, the new drive that replaces the failed drive does not rejoin the original RAID group configuration; it becomes a hot spare instead. Take this into consideration when enabling or disabling the Copybackless function.


Protection (Shield)
The Protection (Shield) function diagnoses temporary drive errors. A drive can continue to be used if it is determined to be normal. The target drive temporarily changes to diagnosis status when drive errors are detected by the Disk Drive Patrol function or error notifications.
For a drive that configures a RAID group, data is moved to a hot spare by a rebuild or redundant copy before the
drive is diagnosed. For a drive that is disconnected from a RAID group, whether the drive has a permanent error
or a temporary error is determined. The drive can be used again if it is determined that the drive has only a
temporary error.
The target drives of the Protection (Shield) function are all the drives that are registered in RAID groups or registered as hot spares. Note that the Protection (Shield) function is not available for unused drives.
The Protection (Shield) function can be enabled or disabled. This function is enabled by default.
Figure 18 Protection (Shield)
(When a particular error message is reported, a redundant copy creates the data from the drives that are not targets of the Protection (Shield) function and writes it to the hot spare. The target drive is then temporarily disconnected and diagnosed. If the drive is determined to be normal after the diagnosis is performed, the drive is reconnected to the storage system (*1). The RAID5 group remains redundant throughout.)

*1: If copybackless is enabled, the drive is used as a hot spare disk. If copybackless is disabled, the drive is
used as a RAID group configuration drive and copyback starts. The copybackless setting can be enabled or
disabled until the drive is replaced.


• The target drives are deactivated and then reactivated during temporary drive protection. Even though a
system status error may be displayed during this period, this phenomenon is only temporary. The status
returns to normal after the diagnosis is complete.
The following phenomena may occur during temporary drive protection.
- The Fault LEDs (amber) on the operation panel and the drive turn on
- An error status is displayed by ETERNUS Web GUI and ETERNUS CLI
  - Error or Warning is displayed as the system status
  - Error, Warning, or Maintenance is displayed as the system status
• Target drives of the Protection (Shield) function only need to be replaced when drive reactivation fails.
If drive reactivation fails, a drive failure error is reported as an event notification message (such as SNMP/REMCS). When drive reactivation is successful, no message is sent by default; to have this event reported, use the event notification settings.
• To set the Protection (Shield) function for each storage system, use the subsystem parameter settings. The
maintenance operation privilege is required to perform this setting.
After the settings are changed, the ETERNUS DX does not need to be turned off and on again.


Reverse Cabling
Because the ETERNUS DX uses reverse cabling connections for data transfer paths between controllers and
drives, continued access is ensured even if a failure occurs in a drive enclosure.
If a drive enclosure fails for any reason, access to the drives in the enclosures that follow the failed one can be maintained because normal access paths are secured by reverse cabling.
Figure 19 Reverse Cabling
(The controller enclosure [CE, with CM#0 and CM#1] is connected to drive enclosures DE#01 to DE#06 in the normal direction and also to the last drive enclosure by reverse cabling. If a failure occurs in a drive enclosure [DE#03 in the example], continued access is available to the drive enclosures that follow the failed one [DE#04 to DE#06] through the reverse path.)


Operations Optimization (Virtualization/Automated Storage Tiering)

A single controller configuration differs from a dual controller configuration in the following ways:
• The Thin Provisioning function cannot be used.
• The Flexible Tier function cannot be used.

Thin Provisioning
The Thin Provisioning function has the following features:
• Storage Capacity Virtualization
The physical storage capacity can be reduced by allocating virtual drives to a server, which allows efficient use of the storage capacity. By setting virtual volumes to the capacity that will eventually be required, a total volume capacity that exceeds the capacity of all the installed drives can be allocated.
• TPV Balancing
I/O access to the virtual volume can be distributed among the RAID groups in a pool by relocating and balancing the physical allocation status of the virtual volume.
• TPV/FTV Capacity Optimization (Zero Reclamation)
Data in physically allocated areas is checked in blocks, and unnecessary areas (areas where 0 is allocated to all of the data in each block) are released to unallocated areas.

Storage Capacity Virtualization


Thin Provisioning improves the usability of the drives by managing the physical drives in a pool and sharing the unused capacity among the virtual volumes in the pool. The volume capacity that is seen from the server is virtualized to allow the server to recognize a larger capacity than the physical volume capacity. Because a large capacity virtual volume can be defined, the drives can be used in a more efficient and flexible manner.


Initial cost can be reduced because less drive capacity is required even if the capacity requirements cannot be estimated. The power consumption requirements can also be reduced because fewer drives are installed.
Figure 20 Storage Capacity Virtualization
(A virtual volume is allocated to the server; the physical drives in the RAID group are allocated and mapped as required, only for the data that is actually written.)

In the Thin Provisioning function, the RAID group, which is configured with multiple drives, is managed as a Thin
Provisioning Pool (TPP). When a Write request is issued, a physical area is allocated to the virtual volume. The
free space in the TPP is shared among the virtual volumes which belong to the TPP, and a virtual volume, which
is larger than the drive capacity in the ETERNUS DX, can be created. A virtual volume to be created in a TPP is
referred to as a Thin Provisioning Volume (TPV).
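The allocate-on-write behavior described above can be sketched as a toy model. The class and names below are purely illustrative (real chunk management in the storage system is far more involved); the sketch only shows that physical chunks are consumed from the shared pool on first write, so unwritten virtual capacity costs nothing.

```python
# Minimal model of allocate-on-write in a TPP (illustrative only).

class ThinPool:
    def __init__(self, physical_chunks):
        self.free = list(range(physical_chunks))   # shared free chunk list
        self.map = {}                              # (tpv, vchunk) -> pchunk

    def write(self, tpv, vchunk):
        key = (tpv, vchunk)
        if key not in self.map:                    # first write allocates
            self.map[key] = self.free.pop(0)
        return self.map[key]

pool = ThinPool(physical_chunks=4)
pool.write("TPV0", 0)      # first write: allocates a physical chunk
pool.write("TPV0", 0)      # re-write of the same chunk: no new allocation
pool.write("TPV1", 7)      # another TPV draws from the same shared free space
print(len(pool.free))      # -> 2
```

Both TPVs could each advertise a capacity far larger than the four physical chunks; only the two chunks actually written consume pool space.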
• Thin Provisioning Pool (TPP)
A TPP is a physical drive pool which is configured with one or more RAID groups. TPP capacity can be expanded
in the units of RAID groups. Add RAID groups with the same specifications (RAID level, drive type, and number
of member drives) as those of the existing RAID groups.
The following table shows the maximum number and the maximum capacity of TPPs that can be registered in
the ETERNUS DX.
Table 12 TPP Maximum Number and Capacity

Item ETERNUS DX100 S4/DX100 S3 ETERNUS DX200 S4/DX200 S3


Number of pools (max.) 72 (*1) 132 (*1)
Pool capacity (max.) 2,048TB (*2)

*1: The maximum total number of Thin Provisioning Pools and FTSPs.
*2: The maximum pool capacity is the capacity that combines the FTSP capacity and the Thin Provisioning
Pool capacity in the ETERNUS DX.
The following table shows the TPP chunk size that is applied when TPPs are created.
Table 13 Chunk Size According to the Configured TPP Capacity

Setting value of the maximum pool capacity Chunk size (*1)


Up to 256TB 21MB
Up to 512TB 42MB
Up to 1,024TB 84MB
Up to 2,048TB 168MB


*1: Chunk size is for delimiting data. The chunk size is automatically set according to the maximum pool capacity.
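The automatic selection rule in Table 13 can be expressed as a simple lookup. The function name below is illustrative; the capacity limits and chunk sizes come straight from the table.

```python
# Chunk size selection per Table 13 (capacities in TB, chunk sizes in MB).

def tpp_chunk_size_mb(max_pool_capacity_tb):
    for limit_tb, chunk_mb in ((256, 21), (512, 42), (1024, 84), (2048, 168)):
        if max_pool_capacity_tb <= limit_tb:
            return chunk_mb
    raise ValueError("maximum pool capacity exceeds 2,048TB")

print(tpp_chunk_size_mb(300))   # -> 42 (falls in the "up to 512TB" band)
```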
To perform encryption, either specify firmware-based encryption when creating a TPP, or configure the TPP with Self Encrypting Drives (SEDs) when creating it.
The following table shows the RAID configurations that can be registered in a TPP.
Table 14 Levels and Configurations for a RAID Group That Can Be Registered in a TPP

RAID level   Number of configurable drives                                                  Recommended configurations
RAID0        4 (4D)                                                                         —
RAID1        2 (1D+1M)                                                                      2 (1D+1M)
RAID1+0      4 (2D+2M), 8 (4D+4M), 16 (8D+8M), 24 (12D+12M)                                 8 (4D+4M)
RAID5        4 (3D+1P), 5 (4D+1P), 7 (6D+1P), 8 (7D+1P), 9 (8D+1P), 13 (12D+1P)             4 (3D+1P), 8 (7D+1P)
RAID6        6 (4D+2P), 8 (6D+2P), 9 (7D+2P), 10 (8D+2P)                                    8 (6D+2P)
RAID6-FR     13 ((4D+2P)×2+1HS), 17 ((6D+2P)×2+1HS), 31 ((8D+2P)×3+1HS), 31 ((4D+2P)×5+1HS) 17 ((6D+2P)×2+1HS)

• Thin Provisioning Volume (TPV)


The maximum capacity of a TPV is 128TB. Note that the total TPV capacity must be smaller than the maximum
capacity of the TPP.
When creating a TPV, the Allocation method can be selected.
- Thin
When data is written from the host to a TPV, a physical area is allocated to the created virtual volume. The
capacity size (chunk size) that is applied is the same value as the chunk size of the TPP where the TPV is
created. The physical storage capacity can be reduced by allocating a virtualized storage capacity.
- Thick
When creating a volume, the physical area is allocated to the entire volume area. This can be used for volumes in the system area to prevent a system stoppage due to a pool capacity shortage during operations.
In general, selecting "Thin" is recommended. The Allocation method can be changed after a TPV is created. Perform a TPV/FTV capacity optimization if the Allocation method is changed from "Thick" to "Thin". By optimizing the capacity, the area that was allocated to the TPV is released and becomes usable. If a TPV/FTV capacity optimization is not performed, the usage of the TPV does not change even after the Allocation method is changed.
The capacity of a TPV can be expanded after it is created.
For details on the number of TPVs that can be created, refer to "Volume" (page 26).

● Threshold Monitoring of Used Capacity


When the used capacity of a TPP reaches a threshold, a notification is sent to the notification destination (SNMP Trap, e-mail, or Syslog) specified using the [Setup Event Notification] function. There are two types of thresholds: "Attention" and "Warning". A different value can be specified for each threshold type.
Also, ETERNUS SF Storage Cruiser can be used to monitor the used capacity.
• TPP Thresholds
There are two TPP usage thresholds: Attention and Warning.
Table 15 TPP Thresholds

Threshold   Selectable range   Default   Setting conditions
Attention   5 (%) to 80 (%)    75 (%)    Attention threshold ≤ Warning threshold
Warning     5 (%) to 99 (%)    90 (%)    The "Attention" threshold can be omitted.


• TPV Thresholds
There is only one TPV usage threshold: Attention. When the physically allocated capacity of a TPV reaches the threshold, a response is returned to the host as sense information. The threshold is determined by the ratio of the free space in the TPP to the unallocated TPV capacity.
Table 16 TPV Thresholds

Threshold Selectable range Default


Attention 1 (%) to 100 (%) 80 (%)
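The two-level TPP check can be sketched as follows, using the defaults from Table 15. The function and its return values are illustrative assumptions; the actual monitoring and notification (SNMP Trap, e-mail, Syslog) are handled by the storage system itself.

```python
# Sketch of the two-level TPP usage check (defaults from Table 15).

def tpp_alarm(used_tb, total_tb, attention=75, warning=90):
    """Return "Warning", "Attention", or None for the current pool usage.
    Pass attention=None to model an omitted Attention threshold."""
    usage = 100 * used_tb / total_tb
    if usage >= warning:
        return "Warning"
    if attention is not None and usage >= attention:
        return "Attention"
    return None

print(tpp_alarm(80, 100))    # -> Attention
print(tpp_alarm(95, 100))    # -> Warning
```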

• Use of TPVs is not recommended when the OS writes meta information to the whole LUN during file system creation.
• TPVs should be backed up as sets of their component files. While backing up a whole TPV is not difficult, unallocated areas will also be backed up as dummy data. If the TPV then needs to be restored from the backup, the dummy data is also "restored". This requires allocation of the physical drive area for the entire TPV capacity, which negates the effects of thin provisioning.
• For advanced performance tuning, use standard RAID groups.
• Refer to the applicable OS and file system documentation before dynamically expanding the volume capacity because expanded volumes may not be recognized by some types and versions of server-side platforms (OSs).
• If a TPP includes one or more RAID groups that are configured with Advanced Format drives, all TPVs created in the relevant TPP are treated as Advanced Format volumes. In this case, the write performance may be reduced when accessing the relevant TPV from an OS or an application that does not support Advanced Format.

TPV Balancing
A drive is allocated when a write is issued to a virtual volume (TPV). Depending on the order and the frequency
of writes, more drives in a specific RAID group may be allocated disproportionately. Also, the physical capacity is
unevenly allocated among the newly added RAID group and the existing RAID groups when physical drives are
added to expand the capacity.
Balancing of TPVs can disperse the I/O access to virtual volumes among the RAID groups in the Thin Provisioning
Pool (TPP).
• When allocating disproportionate TPV physical capacity evenly
Figure 21 TPV Balancing (When Allocating Disproportionate TPV Physical Capacity Evenly)
(Before balancing, when I/O access is performed to the allocated area in TPV#0, only RAID group #0 is accessed. After TPV#0 is balanced, RAID group #0, RAID group #1, and RAID group #2 are accessed evenly when I/O access is performed to the allocated area in TPV#0.)


• When distributing host accesses evenly after TPP expansion (after drives are added)
Figure 22 TPV Balancing (When Distributing Host Accesses Evenly after TPP Expansion)
(After RAID groups are added to the TPP, balancing redistributes the allocated physical capacity from the existing RAID groups #0 to #2 across the added RAID groups as well.)

Balance Thin Provisioning Volume is a function that evenly relocates the physically allocated capacity of TPVs
among the RAID groups that configure the TPP.
Balancing of TPV allocation can be performed for TPVs in the same TPP. TPV balancing cannot be performed at the same time as a RAID Migration that moves the target TPV to a different TPP.
When a write is issued to a virtual volume, a drive is allocated. When data is written to multiple TPVs in the TPP,
physical areas are allocated by rotating the RAID groups that configure the TPP in the order that the TPVs were
accessed. When using this method, depending on the write order or frequency, TPVs may be allocated unevenly
to a specific RAID group. In addition, when the capacity of a TPP is expanded, the physical capacity is unevenly
allocated among the newly added RAID group and the existing RAID groups.

● Balancing Level
The TPV balance status is displayed as one of three levels: "High", "Middle", and "Low". "High" indicates that the physical capacity of the TPV is allocated evenly among the RAID groups registered in the TPP. "Low" indicates that the physical capacity is allocated unevenly, concentrated in a specific RAID group in the TPP.
TPV balancing may not be available when other functions are being used in the storage system or the target volume. Refer to "Combinations of Functions That Are Available for Simultaneous Executions" (page 202) for details on the functions that can be executed simultaneously, the number of processes that can run simultaneously, and the capacity that can be processed concurrently.
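As an illustration only, a balance level could be graded from the per-RAID-group allocation like this. The min/max ratio metric and the 0.9/0.6 cut-offs are assumptions made for this sketch; the actual criteria used by the storage system are not published.

```python
# Hypothetical grading of how evenly TPV chunks spread over RAID groups.

def balancing_level(chunks_per_rg):
    """chunks_per_rg: allocated chunk counts, one per RAID group in the TPP."""
    ratio = min(chunks_per_rg) / max(chunks_per_rg)   # 1.0 = perfectly even
    if ratio >= 0.9:
        return "High"
    if ratio >= 0.6:
        return "Middle"
    return "Low"

print(balancing_level([100, 98, 101]))   # -> High
print(balancing_level([100, 10, 5]))     # -> Low
```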

• When a TPP has RAID groups that are unavailable for balancing due to lack of free space, etc., the physical allocation capacity is balanced among the remaining RAID groups within the TPP. In this case, the balancing level after the balancing is completed may not be "High".
• By performing the TPV balancing, areas for working volumes (the migration destination TPVs with the same
capacity as the migration source) are secured for the TPP to which the TPVs belong. If this causes the total
logical capacity of the TPVs in all the TPPs that include these working volumes to exceed the maximum pool
capacity, a TPV balancing cannot be performed.
In addition, this may cause a temporary alarm state ("Caution" or "Warning", which indicates that the
threshold has been exceeded) in the TPP during a balancing execution. This alarm state is removed once
balancing completes successfully.
• While TPV balancing is being performed, the balancing level may become lower than before balancing was
performed if the capacity of the TPP to which the TPVs belong is expanded.


TPV/FTV Capacity Optimization


TPV/FTV capacity optimization can increase the unallocated areas in a pool (TPP/FTRP) by changing physical areas where 0 is allocated for all of the data into unallocated areas, which improves capacity efficiency.
Once an area is physically allocated to a TPV/FTV, the area is never automatically released. If operations are performed after all of the areas have been physically allocated, the used capacity that is recognized by the server and the capacity that is actually allocated might differ.
The following operations are examples of operations that create allocated physical areas with sequential data to
which only 0 is allocated:
• Restoration of data for RAW image backup
• RAID Migration from Standard volumes to TPVs/FTVs
• Creation of a file system in which writing is performed to the entire area
The TPV/FTV capacity optimization function belongs to Thin Provisioning. This function can be started after a target TPV/FTV is selected via ETERNUS Web GUI or ETERNUS CLI. This function is also available when the RAID Migration destination is a TPP or an FTRP.
TPV/FTV capacity optimization reads and checks the data in each area allocated by the Thin Provisioning function. This function releases the allocated physical areas to unallocated areas if data that contains all zeros is detected.
Figure 23 TPV/FTV Capacity Optimization
(The data in each physically allocated area of the TPV/FTV is checked from LBA0 onward. Allocated areas that contain data other than ALL0 remain allocated; allocated areas whose data is ALL0 are released and become unallocated areas. The size of each allocated area, for example 21MB, varies depending on the TPP/FTRP capacity.)

TPV/FTV capacity optimization may not be available when other functions are being used in the device or the
target volume.
For details on the functions that can be executed simultaneously, refer to "Combinations of Functions That Are
Available for Simultaneous Executions" (page 202).
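The zero-reclamation pass described above can be sketched with an in-memory toy model. Chunk granularity and the data layout here are illustrative; the real function works on pool chunks inside the storage system.

```python
# Sketch of zero reclamation: release every allocated chunk whose data is ALL0.

def optimize(allocated_chunks):
    """allocated_chunks: dict mapping chunk id -> bytes. Returns the ids
    that were released (ALL0 data) and drops them from the mapping."""
    released = [cid for cid, data in allocated_chunks.items()
                if not any(data)]            # every byte is zero
    for cid in released:
        del allocated_chunks[cid]            # area becomes unallocated
    return released

chunks = {0: b"\x00" * 8, 1: b"\x01" + b"\x00" * 7, 2: b"\x00" * 8}
print(optimize(chunks))      # -> [0, 2]
print(sorted(chunks))        # -> [1]
```

Only the chunk that contains a non-zero byte stays allocated; the all-zero chunks return to the pool's free space.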


Flexible Tier
The Flexible Tier function has the following functions:
• Automated Storage Tiering
This function automatically reallocates data according to the data access frequency and optimizes performance and cost.
• FTRP Balancing
I/O access to a virtual volume can be distributed among the RAID groups in a pool by relocating and balancing
the physical allocation status of the volume.
• TPV/FTV Capacity Optimization
Data in physically allocated areas is checked in blocks, and unnecessary areas (areas where 0 is allocated to all of the data in each block) are released to unallocated areas.
For details on these functions, refer to "TPV/FTV Capacity Optimization" (page 47).
• QoS automation function
The QoS for each volume can be controlled by using the ETERNUS SF Storage Cruiser's QoS management option. For details on the QoS automation function, refer to the ETERNUS SF Storage Cruiser manual.


Automated Storage Tiering


The ETERNUS DX uses the Automated Storage Tiering function of ETERNUS SF Storage Cruiser to automatically
change data allocation during operations according to any change in status that occurs. ETERNUS SF Storage
Cruiser monitors data and determines the redistribution of data. The ETERNUS DX uses the Flexible Tier function
to move data in the storage system according to requests from ETERNUS SF Storage Cruiser.
The Flexible Tier function automatically redistributes data in the ETERNUS DX according to access frequency in
order to optimize performance and reduce operation cost. Storage tiering (SSDs, SAS disks, Nearline SAS disks) is
performed by moving frequently accessed data to high speed drives such as SSDs and less frequently accessed
data to cost effective disks with large capacities. Data can be moved in blocks (252MB) that are smaller than the
volume capacity.
The data transfer unit differs depending on the chunk size. The following table shows the relationship between
the data transfer unit and the chunk size.
Table 17 Chunk Size and Data Transfer Unit

Chunk size Transfer unit


21MB 252MB
42MB 504MB
84MB 1,008MB
168MB 2,016MB
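The relationship in Table 17 reduces to a constant factor: each transfer unit is 12 chunks (252MB / 21MB, 504MB / 42MB, and so on). It can be expressed directly; the helper name is illustrative.

```python
# Flexible Tier data transfer unit, derived from Table 17.

CHUNKS_PER_TRANSFER = 12   # 252/21 = 504/42 = 1008/84 = 2016/168

def transfer_unit_mb(chunk_size_mb):
    return chunk_size_mb * CHUNKS_PER_TRANSFER

print(transfer_unit_mb(21))   # -> 252
```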

By using the Automated Storage Tiering function, installation costs can be reduced because large capacity Nearline SAS disks can be used while performance is maintained. Furthermore, because data is reallocated automatically, the workload on the administrator for designing storage performance is reduced.
Figure 24 Flexible Tier
(ETERNUS SF Storage Cruiser on the management server monitors the data access frequency and optimizes performance: data with a high access frequency is placed on high speed SSDs, less frequently accessed data on high speed SAS disks, and the least frequently accessed data on large capacity, inexpensive Nearline SAS disks in the ETERNUS DX.)
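A toy model of this redistribution policy is sketched below. The capacities, names, and the sort-based placement are illustrative assumptions; the real placement decisions are made by ETERNUS SF Storage Cruiser based on its performance analysis.

```python
# Illustrative tier placement: hottest chunks to SSD, coldest to Nearline SAS.

def assign_tiers(access_counts, ssd_slots, sas_slots):
    """access_counts: dict chunk -> access frequency. Returns chunk -> tier."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    placement = {}
    for i, chunk in enumerate(ranked):
        if i < ssd_slots:
            placement[chunk] = "SSD"
        elif i < ssd_slots + sas_slots:
            placement[chunk] = "SAS"
        else:
            placement[chunk] = "Nearline SAS"
    return placement

counts = {"a": 900, "b": 50, "c": 400, "d": 2}
print(assign_tiers(counts, ssd_slots=1, sas_slots=2))
# -> {'a': 'SSD', 'c': 'SAS', 'b': 'SAS', 'd': 'Nearline SAS'}
```

Rerunning the assignment as access counts change over time is what moves data between tiers during operations.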


The Flexible Tier function uses pools configured from multiple RAID groups (Flexible Tier Sub Pools: FTSPs) and larger pools composed of layered Flexible Tier Sub Pools (Flexible Tier Pools: FTRPs). A volume used by the Flexible Tier function is referred to as a Flexible Tier Volume (FTV).
Settings and operation management for the Flexible Tier function are performed with ETERNUS SF Storage Cruiser. For more details, refer to "ETERNUS SF Storage Cruiser Operation Guide for Optimization Option".
Figure 25 FTV Configuration
(An FTV consists of chunks that are placed in the FTSPs of an FTRP [parent pool]: an SSD-based FTSP [high tier], a SAS disk-based FTSP [middle tier], and a Nearline SAS disk-based FTSP [low tier]. Each FTSP is made up of RAID groups.)

• Flexible Tier Pool (FTRP)


An FTRP is a management unit in which FTSPs are layered. Up to three FTSPs can be registered in one FTRP, which means that the maximum number of tiers is three.
A priority order can be set for each FTSP within an FTRP. Frequently accessed data is stored in an FTSP with a higher priority. Because FTSPs share resources with TPPs, the maximum number of FTSPs that can be created decreases when TPPs are created.
For data encryption, specify encryption for a pool when creating an FTRP, or create an FTSP with a Self Encrypting Drive (SED).
• Flexible Tier Sub Pool (FTSP)
An FTSP consists of one or more RAID groups. The FTSP capacity is expanded in units of RAID groups. Add RAID
groups with the same specifications (RAID level, drive type, and number of member drives) as those of the
existing RAID groups.
The following table shows the maximum number and the maximum capacity of FTSPs that can be registered
in an ETERNUS DX.
Table 18 The Maximum Number and the Maximum Capacity of FTSPs

Item ETERNUS DX100 S4/DX100 S3 ETERNUS DX200 S4/DX200 S3


The maximum number of Flexible Tier Pools 30 30
The maximum number of Flexible Tier Sub Pools 72 (*1) 132 (*1)


The maximum capacity of the Flexible Tier Sub Pool 2,048TB (*2)
Total capacity of the Flexible Tier Volume 2,048TB

*1: The maximum total number of Thin Provisioning Pools and FTSPs.
*2: The maximum pool capacity is the capacity that combines the FTSP capacity and the Thin Provisioning
Pool capacity in the ETERNUS DX. The maximum pool capacity of an FTRP is the same as the maximum
pool capacity of a Flexible Tier Sub Pool.
The RAID levels and the configurations, which can be registered in the FTSP, are the same as those of a TPP.
The following table shows the RAID configurations that can be registered in an FTSP.
Table 19 Levels and Configurations for a RAID Group That Can Be Registered in a FTSP

RAID level   Number of configurable drives                                                  Recommended configurations
RAID0        4 (4D)                                                                         -
RAID1        2 (1D+1M)                                                                      2 (1D+1M)
RAID1+0      4 (2D+2M), 8 (4D+4M), 16 (8D+8M), 24 (12D+12M)                                 8 (4D+4M)
RAID5        4 (3D+1P), 5 (4D+1P), 7 (6D+1P), 8 (7D+1P), 9 (8D+1P), 13 (12D+1P)             4 (3D+1P), 8 (7D+1P)
RAID6        6 (4D+2P), 8 (6D+2P), 9 (7D+2P), 10 (8D+2P)                                    8 (6D+2P)
RAID6-FR     13 ((4D+2P)×2+1HS), 17 ((6D+2P)×2+1HS), 31 ((8D+2P)×3+1HS), 31 ((4D+2P)×5+1HS) 17 ((6D+2P)×2+1HS)

• Flexible Tier Volume (FTV)


An FTV is a management unit of volumes to be layered. The maximum capacity of an FTV is 128TB. Note that
the total capacity of FTVs must be less than the maximum capacity of FTSPs.
When creating an FTV, the Allocation method can be selected.
- Thin
When data is written from the host to an FTV, the physical area is allocated to a created virtual volume. The
physical storage capacity can be reduced by allocating a virtualized storage capacity.
- Thick
When creating a volume, the physical area is allocated to the entire volume area. This can be used for volumes in the system area to prevent a system stoppage due to a pool capacity shortage during operations.
In general, selecting "Thin" is recommended. The Allocation method can be changed after an FTV is created. Perform a TPV/FTV capacity optimization if the Allocation method is changed from "Thick" to "Thin". By optimizing the capacity, the area that was allocated to the FTV is released and becomes usable. If a TPV/FTV capacity optimization is not performed, the usage of the FTV does not change even after the Allocation method is changed.
The capacity of an FTV can be expanded after it is created.
For details on the number of FTVs that can be created, refer to "Volume" (page 26).

● Threshold Monitoring of Used Capacity


When the used capacity of an FTRP or an FTV reaches the threshold, an alarm notification can be sent from ETERNUS SF Storage Cruiser. There are two types of thresholds: "Attention" and "Warning". A different value can be specified for each threshold type.
Make sure to add drives before free space in the FTRP runs out, and add FTSP capacity from ETERNUS SF Storage
Cruiser.


• FTRP Thresholds
There are two FTRP usage thresholds: Attention and Warning.
Table 20 FTRP Thresholds

Threshold   Selectable range   Default   Setting conditions
Attention   5 (%) to 80 (%)    75 (%)    Attention threshold ≤ Warning threshold
Warning     5 (%) to 99 (%)    90 (%)    The "Attention" threshold can be omitted.

• FTV Thresholds
There is only one FTV usage threshold: Attention. An alarm notification is sent when the free space in the pool is insufficient for the unallocated capacity of the FTV. The threshold is determined by the ratio of the free space in the FTSP to the unallocated FTV capacity.
Table 21 FTV Thresholds

Threshold Selectable range Default


Attention 1 (%) to 100 (%) 80 (%)

• When the Flexible Tier function is enabled, 32 work volumes (physical capacity of 0MB) are created. The maximum number of volumes that can be created in the ETERNUS DX is reduced by the number of these work volumes.
• If an FTSP or an FTRP includes one or more RAID groups that are configured with Advanced Format drives,
the write performance may be reduced when accessing FTVs created in the relevant FTSP or FTRP from an
OS or an application that does not support Advanced Format.
• The FTRP capacity that can be used for VVOLs differs from the maximum Thin Provisioning Pool capacity. For
details, refer to "VMware VVOL" (page 133).


FTRP Balancing
When drives are added to a pool, the physical capacity may be allocated unevenly among the RAID groups in the
pool. The Flexible Tier Pool balancing function balances the allocated physical capacity, and therefore the usage
rate of the physical disks, across the pool. Balancing is performed by selecting the FTRP to be balanced in
ETERNUS Web GUI or ETERNUS CLI.
Figure 26 FTRP Balancing
[Figure: RAID groups are added to the FTSPs of an FTRP; balancing then distributes the physical capacity evenly
amongst the RAID groups in each FTSP]

FTRP balancing is a function that evenly relocates the physically allocated capacity of FTVs amongst the RAID
groups that configure an FTSP.
FTSP allocation is determined by a performance analysis of the Automated Storage Tiering function of ETERNUS
SF and plays an important role in performance. The FTRP balancing function evenly relocates the physically
allocated capacity among the RAID groups that configure the same FTSP. Note that balancing is not performed
if it would migrate a physical area to a different FTSP.

● Balancing Level
"High", "Middle", or "Low" is displayed for the balance level of each FTSP.
"High" indicates that the physical capacity is allocated evenly in the RAID groups registered in the FTSP. "Low"
indicates that the physical capacity is allocated unequally to a specific RAID group in the FTSP.
FTRP balancing may not be available when other functions are being used in the device or on the target volume.
Refer to "Combinations of Functions That Are Available for Simultaneous Executions" (page 202) for details on
the functions that can be executed simultaneously, the number of processes that can be executed
simultaneously, and the capacity that can be processed concurrently.
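The manual does not publish the exact criteria behind the balancing level, but the idea can be sketched as an evenness metric over the physically allocated capacity per RAID group. The min/max ratio and the cut-off values below are illustrative assumptions, not the firmware's actual algorithm.

```python
def balance_level(allocated_gb):
    """Rate how evenly physical capacity is spread over an FTSP's RAID groups.

    The min/max ratio and the 0.9/0.5 cut-offs are illustrative assumptions;
    the actual classification used by the ETERNUS DX is not published.
    """
    if not allocated_gb or max(allocated_gb) == 0:
        return "High"  # nothing allocated yet, trivially balanced
    evenness = min(allocated_gb) / max(allocated_gb)
    if evenness >= 0.9:
        return "High"
    if evenness >= 0.5:
        return "Middle"
    return "Low"
```

Under this model, a pool whose RAID groups hold 100GB, 100GB, and 95GB rates "High", while one holding 100GB and 10GB rates "Low".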


• If the free capacity in the FTSP becomes insufficient while FTRP balancing is being performed, an error occurs
  and the balancing session ends abnormally. Note that insufficient physical capacity cannot be supplemented
  from other FTSPs.
• When FTRP balancing is performed, an area for the work volume (a destination FTV with the same capacity
  as the source FTV) is secured in the FTRP to which the FTV belongs. As a result, the FTRP may temporarily
  enter an alarm status (the FTRP usage exceeds the "Attention" or "Warning" threshold). This alarm status is
  cleared once balancing completes successfully.
• If the capacity of the FTRP is expanded during an FTRP balancing process, the balancing level might be lower
  than before.
• Depending on the physical allocation status of the FTVs, FTRP balancing may not be performed regardless of
  the balancing level.

Extreme Cache Pool


The Extreme Cache Pool function uses SSDs in enclosures as a secondary cache to improve read access
performance from the server. Self Encrypting SSDs can be used in addition to SSDs.
Frequently accessed areas are written asynchronously to the SSDs that are specified for the Extreme Cache Pool.
When a read request is issued from the server, data is read from the faster SSDs to speed up the response.
Figure 27 Extreme Cache Pool
[Figure: frequently accessed areas are written from cache memory to the Extreme Cache Pool (SSDs); when a
read request arrives, the response is faster because data is read from the SSDs instead of the disks]

• For the ETERNUS DX100 S4/DX200 S4
  The maximum capacity that can be used as an Extreme Cache Pool is 800GB for each controller. Specify one
  or two SSDs to be used for each controller.
  400GB SSDs (MLC SSDs) can be used for Extreme Cache Pools. Value SSDs cannot be used.
  To specify two SSDs per controller, the controller firmware version must be V10L82 or later, and "Extreme
  Cache Pool (Expanded)" must be enabled.
  A RAID group (RAID0) that is dedicated to the Extreme Cache Pool is configured with the specified SSDs, and
  volumes for the Extreme Cache Pool are created in the RAID group.
  One volume for the Extreme Cache Pool is created for each controller. Different capacities can be set for each
  controller.


To expand the Extreme Cache Pool capacity, disable "Extreme Cache Pool". After that, enable "Extreme Cache
Pool (Expanded)", increase the number of member drives, and redefine the SSDs used for the Extreme Cache
Pool.
• For the ETERNUS DX100 S3/DX200 S3
The capacity that can be used as an Extreme Cache Pool is 400GB for each controller. Specify a single SSD to be
used for each controller.
SSDs with a capacity of 400GB, 800GB, and 1.6TB can be used for Extreme Cache Pools. If an SSD larger than
400GB is selected, the remaining area cannot be used.
Volumes for the Extreme Cache Pool are created in the specified SSD. One volume for the Extreme Cache Pool
is created for each controller.
The Extreme Cache Pool function can be enabled or disabled for each volume. Note that the Extreme Cache Pool
function cannot be enabled for Deduplication/Compression Volumes, or volumes that are configured with SSDs.

• SSDs that are already in use cannot be specified for Extreme Cache Pools.
• The Extreme Cache Pool function mainly improves random read access performance.
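Conceptually, the read path behaves like a read-through secondary cache: blocks that are read repeatedly are promoted to SSD so that later reads skip the disk. The following toy model illustrates the idea only; the class, the promotion policy, and the two-read threshold are all assumptions, not the controller's implementation (which promotes data asynchronously).

```python
class ExtremeCachePoolSketch:
    """Toy model of a secondary read cache: frequently read blocks are
    promoted to 'SSD' so later reads skip the 'disk'. Promotion after two
    reads is an arbitrary illustrative policy."""

    def __init__(self, disk, promote_after=2):
        self.disk = disk          # dict: lba -> data (stands in for the disks)
        self.ssd = {}             # secondary cache contents
        self.read_counts = {}
        self.promote_after = promote_after

    def read(self, lba):
        if lba in self.ssd:       # fast path: served from the SSD
            return self.ssd[lba]
        data = self.disk[lba]     # slow path: served from the disk
        self.read_counts[lba] = self.read_counts.get(lba, 0) + 1
        if self.read_counts[lba] >= self.promote_after:
            self.ssd[lba] = data  # written asynchronously in the real system
        return data
```

After two reads of the same block, subsequent reads of it are served from the SSD map rather than the disk map.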

Optimization of Volume Configurations


The ETERNUS DX allows for the expansion of volumes and RAID group capacities, migration among RAID groups,
and changing of RAID levels according to changes in the operation load and performance requirements. There
are several expansion functions.
Table 22 Optimization of Volume Configurations

  Function/usage             Volume expansion         RAID group    Migration among   Changing the RAID       Striping for
                                                      expansion     RAID groups       level                   RAID groups
  RAID Migration             ○ (Adding capacity       ×             ○                 ○                       ×
                             during migration) (*1)
  Logical Device Expansion   ×                        ○             ×                 ○ (Adding drives to     ×
                                                                                      existing RAID groups)
  LUN Concatenation          ○ (Concatenating         ×             ×                 ×                       ×
                             free spaces)
  Wide Striping              ×                        ×             ×                 ×                       ○

○: Possible, ×: Not possible

*1: For TPVs or FTVs, the capacity cannot be expanded during a migration.

● Expansion of Volume Capacity


• RAID Migration (with increased migration destination capacity)
When volume capacity is insufficient, a volume can be moved to a RAID group that has enough free space.
This function is recommended for use when the desired free space is available in the destination.
• LUN Concatenation
Adds areas of free space to an existing volume to expand its capacity. This uses free space from a RAID group
to efficiently expand the volume.


● Expansion of RAID Group Capacity


• Logical Device Expansion
  Adds new drives to an existing RAID group to expand the RAID group capacity. This is used to expand the
  existing RAID group capacity instead of adding a new RAID group when volumes must be added.

● Migration among RAID Groups


• RAID Migration
  When performance requirements change, the current RAID groups may no longer perform satisfactorily
  because volumes compete for the same drives. Use RAID Migration to improve performance by redistributing
  the volumes amongst multiple RAID groups.

● Changing the RAID Level


• RAID Migration (to a RAID group with a different RAID level)
  Migrating to a RAID group with a different RAID level changes the RAID level of the volumes. This is used to
  convert a given volume to a different RAID level.
• Logical Device Expansion (changing the RAID level when adding new drives)
  The RAID level of a RAID group can be changed, and drives can be added at the same time. This is used to
  convert the RAID level of all the volumes belonging to a given RAID group.

● Striping for Multiple RAID Groups


• Wide Striping
  Distributing a single volume across multiple RAID groups makes I/O access from the server more efficient
  and improves performance.


RAID Migration
RAID Migration is a function that moves a volume to a different RAID group while guaranteeing data integrity.
This allows easy redistribution of volumes among RAID groups in response to customer needs. RAID Migration
can be carried out while the system is running, and may also be used to move data to a different RAID level
(from RAID5 to RAID1+0, for example).

To migrate volumes to FTRPs with ETERNUS CLI, use the Flexible Tier Migration function.

• Volumes moved from a 300GB drive configuration to a 600GB drive configuration


Figure 28 RAID Migration (When Data Is Migrated to a High Capacity Drive)
[Figure: LUN0 on a RAID5 (3D+1P) 300GB x 4 group is migrated to an unused 600GB x 4 area, which becomes a
RAID5 (3D+1P) 600GB x 4 group; the original 300GB x 4 group becomes unused]

• Volumes moved to a different RAID level (RAID5 → RAID1+0)


Figure 29 RAID Migration (When a Volume Is Moved to a Different RAID Level)
[Figure: LUN0 on a RAID5 (3D+1P) 600GB x 4 group is migrated to an unused 600GB x 8 area and becomes a
RAID1+0 volume; the original group becomes unused]

The volume number (LUN) does not change before and after the migration, so the host can continue to access
the volume without being affected by the migration.
The following changes can be performed by RAID migration.
• Changing the volume type
A volume is changed to the appropriate type for the migration destination RAID groups or pools (TPP and
FTRP).

57
FUJITSU Storage ETERNUS DX100 S4/DX200 S4, ETERNUS DX100 S3/DX200 S3 Hybrid Storage Systems Design Guide (Basic)
Copyright 2019 FUJITSU LIMITED
P3AM-7642-25ENZ0
2. Basic Functions
Optimization of Volume Configurations

• Changing the encryption attributes


The encryption attribute of the volume is changed according to the encryption setting of the volume or the
encryption attribute of the migration destination pool (TPP and FTRP).
• Changing the number of concatenations and the Wide Stripe Size (for WSV)
• Enabling the Deduplication/Compression function for existing volumes
The following processes can also be specified.
• Capacity expansion
When migration between RAID groups is performed, capacity expansion can also be performed at the same
time. However, the capacity cannot be expanded for TPVs or FTVs.
• TPV/FTV Capacity Optimization
When the migration destination is a pool (TPP or FTRP), TPV/FTV capacity optimization after the migration can
be set.
For details on the features of TPV/FTV capacity optimization, refer to "TPV/FTV Capacity Optimization" (page
47).

Figure 30 RAID Migration
[Figure: unencrypted and encrypted volumes (Standard, WSV, FTV, and TPVs with Deduplication/Compression
enabled or disabled) are migrated between RAID groups, TPPs (unencrypted/encrypted), and FTRPs
(unencrypted/encrypted); the volume type depends on the migration destination, and a volume becomes
encrypted when moved to an encrypted pool]

Specify unused areas in the migration destination (RAID group or pool) with a capacity larger than the migration
source volume. Note that RAID groups that are registered as REC Disk Buffers cannot be specified as a migration
destination.
RAID migration may not be available when other functions are being used in the ETERNUS DX or the target vol-
ume.
Refer to "Combinations of Functions That Are Available for Simultaneous Executions" (page 202) for details on
the functions that can be executed simultaneously, the number of processes that can be executed
simultaneously, and the capacity that can be processed concurrently.

During RAID Migration, the access performance for the RAID groups that are specified as the RAID Migration
source and RAID Migration destination may be reduced.
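The destination constraints described above can be summarized as a simple precondition check. The helper below is illustrative only, not a product API; it assumes capacities are expressed in the same unit.

```python
def valid_migration_destination(source_volume_gb, dest_free_gb,
                                dest_is_rec_disk_buffer):
    """True if the destination RAID group or pool can accept the volume:
    enough unused capacity for the source volume, and the destination is
    not registered as an REC Disk Buffer (illustrative helper)."""
    return dest_free_gb >= source_volume_gb and not dest_is_rec_disk_buffer
```

For example, a 100GB volume cannot be migrated to a destination with only 50GB of unused space, nor to a RAID group registered as an REC Disk Buffer.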


Logical Device Expansion


Logical Device Expansion (LDE) allows the capacity of an existing RAID group to be dynamically expanded by
changing the RAID level or the drive configuration of the RAID group. Drives can also be added at the same
time. By using LDE to expand the capacity of an existing RAID group, a new volume can be added without
having to add new RAID groups.
• Expand the RAID group capacity (RAID5(3D+1P) → RAID5(5D+1P))
Figure 31 Logical Device Expansion (When Expanding the RAID Group Capacity)
[Figure: LUN0 on a RAID5 (3D+1P) 600GB x 4 group is expanded to RAID5 (5D+1P) 600GB x 6 by adding two
unused drives]

• Change the RAID level (RAID5(3D+1P) → RAID1+0(4D+4M))


Figure 32 Logical Device Expansion (When Changing the RAID Level)
[Figure: LUN0 on a RAID5 (3D+1P) 600GB x 4 group is expanded to RAID1+0 (4D+4M) 600GB x 8 by adding four
unused drives and changing the RAID level]

LDE works in terms of RAID group units. If a target RAID group contains multiple volumes, all of the data in the
volumes is automatically redistributed when LDE is performed. Note that LDE cannot be performed if it causes
the number of data drives to be reduced in the RAID group.
In addition, LDE cannot be performed for RAID groups in which the following conditions apply.
• RAID groups that belong to TPPs or FTRPs
• The RAID group that is registered as an REC Disk Buffer
• RAID groups in which WSVs are registered
• RAID groups that are configured with RAID5+0 or RAID6-FR
LDE may not be available when other functions are being used in the ETERNUS DX or the target RAID group.


For details on the functions that can be executed simultaneously and the number of processes that can be
executed simultaneously, refer to "Combinations of Functions That Are Available for Simultaneous Executions"
(page 202).

• If drives of different capacities exist in a RAID group that is to be expanded while adding drives, the smallest
  capacity becomes the standard for the RAID group after expansion, and all other drives are regarded as
  having the same capacity as the smallest drive. In this case, the remaining drive space is not used.
  - If drives of different rotational speeds exist in a RAID group, the access performance of the RAID group is
    reduced by the slower drives.
  - Using the same interface speed is recommended when using SSDs.
  - When installing SSDs in high-density drive enclosures, using SSDs that have the same drive enclosure
    transfer speed is recommended.
• Because data cannot be recovered if LDE fails, back up all the data of the volumes in the target RAID group
  to another area before performing LDE.
• If RAID groups are configured with Advanced Format drives, the write performance may be reduced when
  accessing volumes created in the relevant RAID group from an OS or an application that does not support
  Advanced Format.
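The effect of the first note can be shown with a small worked example. The helper below is an illustrative model of RAID5 only (one drive's worth of capacity holds parity); it is not a product calculation.

```python
def raid5_capacity_after_lde(drive_capacities_gb):
    """Usable RAID5 capacity when drives of mixed sizes share a RAID group:
    every drive is treated as having the smallest drive's capacity, and the
    equivalent of one drive is consumed by parity (illustrative model)."""
    effective_gb = min(drive_capacities_gb)
    return effective_gb * (len(drive_capacities_gb) - 1)
```

Four 600GB drives give 1800GB of usable capacity; replace one with a 300GB drive and every drive counts as 300GB, leaving only 900GB usable.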

LUN Concatenation
LUN Concatenation is a function that adds new areas to a volume to expand the volume capacity that is
available to the server. This function enables the reuse of leftover free areas in a RAID group and can be used
to solve capacity shortages.
Unused areas, which may be either part or all of a RAID group, are used to create new volumes that are then
added together (concatenated) to form a single large volume.
The capacity can be expanded during operation.
Figure 33 LUN Concatenation
[Figure: an unused area in a second RAID5 (3D+1P) 300GB x 4 group is concatenated into LUN2, so that LUN2
spans both RAID groups alongside LUN0 and LUN1]

LUN Concatenation expands a volume's capacity by concatenating volumes. Up to 16 volumes, each with a
minimum capacity of 1GB, can be concatenated.
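This rule can be expressed as a small sketch. The helper is hypothetical; only the 16-volume and 1GB limits come from the text above.

```python
def concatenated_capacity(volume_capacities_gb):
    """Total capacity of a LUN concatenation: 2 to 16 volumes of at
    least 1GB each (illustrative helper, not a product API)."""
    if not 2 <= len(volume_capacities_gb) <= 16:
        raise ValueError("a concatenation joins 2 to 16 volumes")
    if any(gb < 1 for gb in volume_capacities_gb):
        raise ValueError("each concatenated volume must be at least 1GB")
    return sum(volume_capacities_gb)
```

Concatenating 10GB, 20GB, and 30GB volumes yields a single 60GB volume, matching Figure 34 below.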


Concatenation can be performed regardless of the RAID types of the concatenation source volume and the
concatenation destination volume.
Volumes on SAS disks or Nearline SAS disks can be concatenated with volumes on either SAS disks or Nearline
SAS disks. For SSDs and SEDs, the drives for the concatenation source and destination volumes must be the
same type (SSD or SED).
From a performance perspective, using RAID groups with the same RAID level and the same drives (type, size,
capacity, rotational speed (for disks), interface speed (for SSDs), and drive enclosure transfer speed (for SSDs))
is recommended as the concatenation source.
If the RAID groups are configured with SEDs, the same key group setting is recommended for the RAID group to
which the concatenation source volumes belong and the RAID group to which the concatenation destination
volumes belong.
A concatenated volume can be used as an OPC, EC, or QuickOPC copy source or copy destination. It can also be
used as a SnapOPC/SnapOPC+ copy source.
The LUN stays the same before and after the concatenation. Because the server-side LUNs are not changed, an
OS reboot is not required. Data can be accessed from the host in the same way regardless of the concatenation
status (before, during, or after concatenation). However, how the expanded volume capacity is recognized
varies depending on the OS type.
• When the concatenation source is a new volume
A new volume can be created by selecting a RAID group with unused capacity.
Figure 34 LUN Concatenation (When the Concatenation Source Is a New Volume)
[Figure: new volumes of 10GB, 20GB, and 30GB created from unused areas are concatenated into a single
60GB volume]

• When expanding the capacity of an existing volume
  An existing volume can be expanded by concatenating it with new volumes created from unused capacity.
Figure 35 LUN Concatenation (When the Existing Volume Capacity Is Expanded)
[Figure: an existing 10GB volume is concatenated with 20GB and 30GB volumes created from unused areas to
form a single 60GB volume]

Only Standard type volumes can be used for LUN Concatenation. A concatenated volume has the same
encryption status as the volumes that are concatenated.
LUN Concatenation may not be available when other functions are being used in the device or on the target
volume.
For details on the functions that can be executed simultaneously, refer to "Combinations of Functions That Are
Available for Simultaneous Executions" (page 202).


• It is recommended that the data on the volumes that are to be concatenated be backed up first.
• Refer to the applicable OS and file system documentation before dynamically expanding the volume
  capacity because expanded volumes may not be recognized by some types and versions of server-side
  platforms (OSs).
• When a volume that uses ETERNUS SF AdvancedCopy Manager to run backups is expanded via LUN
  Concatenation, the volume needs to be registered with ETERNUS SF AdvancedCopy Manager again.
• When specifying a volume in a RAID group configured with Advanced Format drives as a concatenation
  source or a concatenation destination to expand the capacity, the write performance may be reduced when
  accessing the expanded volumes from an OS or an application that does not support Advanced Format.


Wide Striping
Wide Striping is a function that concatenates multiple RAID groups by striping and uses many drives
simultaneously to improve performance. This function is effective when high Random Write performance is
required.
I/O accesses from the server are distributed to multiple drives by increasing the number of drives that configure
a LUN, which improves processing performance.
Figure 36 Wide Striping
[Figure: I/O from the server is striped by controllers CM#0 and CM#1 across the drives of multiple RAID groups
in the ETERNUS DX]

Wide Striping creates a WSV that is concatenated across 2 to 64 RAID groups.
The number of RAID groups that are to be concatenated is defined when creating a WSV and cannot be changed
afterward. To change the number of concatenated RAID groups or to expand the capacity, perform RAID
Migration.
Other volumes (Standard, SDVs, SDPVs, or WSVs) can be created in the free area of a RAID group that is
concatenated by Wide Striping.
WSVs cannot be created in RAID groups with the following conditions.
• RAID groups that belong to TPPs or FTRPs
• The RAID group that is registered as an REC Disk Buffer
• RAID groups with different stripe size values
• RAID groups that are configured with different types of drives
• RAID groups that are configured with RAID6-FR

If one or more RAID groups that are configured with Advanced Format drives exist among the RAID groups that
are to be concatenated by striping to create a WSV, the write performance may be reduced when accessing the
created WSVs from an OS or an application that does not support Advanced Format.
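The striping idea can be sketched as a simple address mapping. This is a round-robin model for illustration only; the firmware's exact layout is defined by the number of concatenations and the Wide Stripe Size, and is not reproduced here.

```python
def wsv_location(lba, stripe_blocks, n_raid_groups):
    """Map a logical block of a WSV to (raid_group, block_within_group)
    under round-robin striping -- an illustrative model, not the
    firmware's exact layout."""
    stripe_index = lba // stripe_blocks
    raid_group = stripe_index % n_raid_groups
    block_in_group = ((stripe_index // n_raid_groups) * stripe_blocks
                      + lba % stripe_blocks)
    return raid_group, block_in_group
```

With a stripe of 4 blocks over 2 RAID groups, consecutive stripes alternate between the groups, so sequential I/O keeps both groups' drives busy.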


Data Encryption
Encrypting data as it is written to the drive prevents information leakage caused by fraudulent decoding. Even
if a drive is removed and stolen by a malicious third party, the data cannot be decoded.
This function only encrypts the data stored on the drives, so server access results in the transmission of plain
text. Therefore, this function does not prevent data leakage from server access. It only prevents data leakage
from drives that are physically removed.
The following two types of data encryption are supported:
• Self Encrypting Drive (SED)
  This drive type has an encryption function. Data is encrypted when it is written. Encryption using SEDs is
  recommended because SEDs do not affect system performance.
  SEDs are locked the instant that they are removed from the storage system, which ensures no data can be
  read or written with these drives. This encryption prevents information leakage from drives that are stolen
  or replaced for maintenance. This function also reduces disposal costs because SEDs do not need to be
  physically destroyed.
• Firmware Data Encryption
  Data is encrypted on a volume basis by the controllers (CMs) of the ETERNUS DX. Data is encrypted and
  decrypted in the cache memory when data is written or read.
  AES (*1) or Fujitsu Original Encryption can be selected as the encryption method. The Fujitsu Original
  Encryption method uses a Fujitsu original algorithm that has been specifically created for ETERNUS DX
  storage systems.

*1: Advanced Encryption Standard (AES)
    A standard encryption method selected by the National Institute of Standards and Technology (NIST). The
    key length of AES is 128 bits, 192 bits, or 256 bits. A longer key length provides higher encryption strength.
The following table shows the functional comparison of SED and firmware data encryption.

  Function specification          Self Encrypting Drive (SED)               Firmware data encryption
  Type of key                     Authentication key                        Encryption key
  Encryption unit                 Drive                                     Volume, Pool
  Encryption method               AES-256                                   Fujitsu Original Encryption/AES-128/AES-256
  Influence on performance        None (equivalent to unencrypted drives)   Yes
  Key management server linkage   Yes                                       No


Encryption with Self Encrypting Drive (SED)


An SED has a built-in encryption function, and data is encrypted by controlling the encryption function of the
SED from the controller. An SED uses encryption keys when encrypting and storing data. Encryption keys cannot
be taken out of the drive. Furthermore, because SEDs cannot be decrypted without an authentication key,
information cannot be leaked from drives that have been replaced during maintenance, even if they are not
physically destroyed.
Once an SED authentication key is registered to an ETERNUS DX, no additional encryption configuration is
necessary when a drive is added.
SED data encryption places no encryption load on the controller, so data access performance is equivalent to
that of unencrypted drives.
Figure 37 Data Encryption with Self Encrypting Drives (SED)
[Figure: in the ETERNUS DX, access performance with self-encrypting drives is the same as with
non-self-encrypting drives, and no encryption setting is required when new drives are added]

The controller accesses the drives after performing authentication, using the authentication key that is stored
in the controller or retrieved from the key server. The authentication key that is registered in the ETERNUS DX
can be created automatically by using the settings in ETERNUS Web GUI or ETERNUS CLI.
By linking with a key server, the authentication key of an SED can be managed from the key server. Creating
and storing an authentication key in a key server makes it possible to manage the authentication key more
securely.
By consolidating the authentication keys for multiple ETERNUS DX storage systems in the key server, the
management cost of authentication keys can be reduced.
Key management server linkage can be used together with SED authentication key operation.
Only one unique SED authentication key can be registered in each ETERNUS DX.

• The firmware data conversion encryption function cannot be used for volumes that are configured with
SEDs.
• Register the SED authentication key (common key) before installing SEDs in the ETERNUS DX.
If an SED is installed without registering the SED authentication key, data leakage from the SED is possible
when it is physically removed.
• Only one key can be registered in each ETERNUS DX. This common key is used for all of the SEDs that are
installed. Once the key is registered, the key cannot be changed or deleted. The common key is used to
authenticate RAID groups when key management server linkage is not used.


Firmware Data Encryption


The firmware in the ETERNUS DX provides the firmware data encryption function. This function encrypts a
volume when it is created, or converts an existing volume into an encrypted volume.
Because firmware data encryption is performed by the controller in the ETERNUS DX, performance is degraded
compared with unencrypted data access.
The encryption method can be selected from the world-standard AES-128 and AES-256 methods and the Fujitsu
Original Encryption method. The Fujitsu Original Encryption method, which is based on AES technology, uses a
Fujitsu original algorithm that has been specifically created for ETERNUS DX storage systems. It has practically
the same security level as AES-128, and its conversion speed is faster than AES. Although AES-256 has a higher
encryption strength than AES-128, its Read/Write access performance is lower. If importance is placed upon
encryption strength, AES-256 is recommended. However, if importance is placed upon performance, or if a
standard encryption method is not particularly required, the Fujitsu Original Encryption method is
recommended.
Figure 38 Firmware Data Encryption
[Figure: encryption is set for each LUN; servers access both encrypted and unencrypted volumes in the
ETERNUS DX normally, while data on encrypted volumes cannot be decoded by third parties]

Encryption is performed when data is written from the cache memory to the drive. When encrypted data is
read, the data is decrypted in the cache memory. Cache memory data is not encrypted.
For Standard volumes, SDVs, SDPVs, and WSVs, encryption is performed for each volume. For TPVs and FTVs,
encryption is performed for each pool.
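This write/read path can be modeled as follows. A XOR stream stands in for the real AES/Fujitsu Original ciphers purely to show where encryption and decryption happen; the class and method names are illustrative, and cache data stays plaintext throughout.

```python
from itertools import cycle

class FirmwareEncryptionSketch:
    """Toy model of the firmware data encryption path: data is plaintext in
    cache memory, enciphered when written from cache to the drive, and
    deciphered back into cache on read. XOR is a stand-in for AES-128/
    AES-256/Fujitsu Original Encryption, which are not reproduced here."""

    def __init__(self, key: bytes):
        self.key = key
        self.drive = {}  # lba -> ciphertext as stored on the drive

    def _cipher(self, data: bytes) -> bytes:
        # XOR with a repeating key is symmetric: applying it twice restores the data.
        return bytes(b ^ k for b, k in zip(data, cycle(self.key)))

    def write_from_cache(self, lba: int, plaintext: bytes) -> None:
        self.drive[lba] = self._cipher(plaintext)

    def read_into_cache(self, lba: int) -> bytes:
        return self._cipher(self.drive[lba])
```

The data on the "drive" differs from the plaintext, yet a read returns the original bytes, mirroring how only the on-drive copy is encrypted.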


• The encryption method for encrypted volumes cannot be changed. Encrypted volumes cannot be changed
to unencrypted volumes.
To change the encryption method or cancel the encryption for a volume, back up the data in the encrypted
volume, delete the encrypted volume, and restore the backed up data.
• If a firmware encrypted pool (TPP or FTRP) or volume exists, the encryption method cannot be changed re-
gardless of whether the volume is registered to a pool.
• It is recommended that the copy source volume and the copy destination volume use the same encryption
  method for Remote Advanced Copy between encrypted volumes.
• When copying encrypted volumes (using Advanced Copy or copy operations via the server), transfer
  performance may not be as good as when copying unencrypted volumes.
• SDPVs cannot be encrypted after they are created. To create an encrypted SDPV, set encryption when creat-
ing a volume.
• TPVs cannot be encrypted individually. The encryption status of the TPVs depends on the encryption status
of the TPP to which the TPVs belong.
• FTVs cannot be encrypted individually. The encryption status of the FTVs depends on the encryption status
of the FTRP to which the FTVs belong.
• The firmware data encryption function cannot be used for volumes that are configured with SEDs.
• The volumes in a RAID6-FR RAID group cannot be converted to encrypted volumes.
When creating an encrypted volume in a RAID6-FR RAID group, specify the encryption setting when creating
the volume.

Key Management Server Linkage


Security for authentication keys that are used for authenticating encryption from Self Encrypting Drives (SEDs)
can be enhanced by managing the authentication key in the key server.
• Key life cycle management
A key is created and stored in the key server. A key can be obtained by accessing the key server from the
ETERNUS DX when required. A key cannot be stored in the ETERNUS DX. Managing a key in an area that is
different from where an SED is stored makes it possible to manage the key more securely.
• Key management consolidation
When multiple ETERNUS DX storage systems are used, a different authentication key for each ETERNUS DX can
be stored in the key server.
The key management cost can be reduced by consolidating key management.
• Key renewal
A key is automatically renewed before it expires by setting a key expiration date. Security against information
leakage can be enhanced by regularly changing the key.
The key is automatically changed after the specified period of time. Key operation costs can be reduced by
changing the key automatically. The key can also be forcibly changed manually.
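The automatic renewal behavior can be sketched as follows. This is a minimal illustration only: the names (`KeyRecord`, `needs_renewal`) and the seven-day renewal margin are assumptions made for the sketch, not part of the ETERNUS DX or key server interfaces.

```python
from datetime import datetime, timedelta

# Illustrative sketch of renewing a key before its expiration date.
# The renewal margin is an assumed value, not a product setting.
RENEWAL_MARGIN = timedelta(days=7)

class KeyRecord:
    def __init__(self, key_id, expires_at):
        self.key_id = key_id
        self.expires_at = expires_at

def needs_renewal(record, now):
    # True once the current time enters the margin before expiry,
    # at which point a new key would be obtained from the key server.
    return now >= record.expires_at - RENEWAL_MARGIN

rec = KeyRecord("sed-key-1", datetime(2019, 6, 10))
print(needs_renewal(rec, datetime(2019, 6, 8)))   # → True (inside the 7-day margin)
print(needs_renewal(rec, datetime(2019, 5, 1)))   # → False
```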
The following table shows functions for SED authentication keys and key management server linkage.
Table 23  Functional Comparison between the SED Authentication Key (Common Key) and Key Management Server Linkage

  Function                     SED authentication key    Key Management Server Linkage
  Key creation                 In the storage system     Key server
  Key storage                  In the storage system     Key server
  Key renewal (auto/manual)    No                        Yes
  Key compromise (*1)          No                        Yes
  Key backup                   No                        Yes
  Target RAID groups           RAID groups (Standard, WSV, SDV), REC Disk Buffers, SDPs, TPPs, FTRPs, and FTSPs (*2)

*1: The key becomes unavailable in the key server.
*2: The SED key group must be enabled after a pool or REC Disk Buffer is created, or after a pool capacity is
expanded.
An authentication key to access data of the RAID groups that are registered in a key group can be managed by
the key server.
RAID groups that use the same authentication key must be registered in the key group in advance.
Authentication for accessing the RAID groups that are registered in the key group is performed by acquiring the
key automatically from the key server when an ETERNUS DX is started.
As a key server for the key management server linkage, use a server that has the key management software
"ETERNUS SF KM" installed. IBM Security Key Lifecycle Manager can also be used as the key management
software.
Figure 39  Key Management Server Linkage
(The figure shows an ETERNUS DX using the authentication key that is stored in the key server in order to
unlock the encryption: the RAID groups that are registered in the key group use an exclusive authentication
key for the group, while RAID groups outside the key group and Global Hot Spares use the common key.)

SEDs (RAID group) that are not registered in a key server are encrypted by using the authentication key
(common key) that is stored in the ETERNUS DX.
A hot spare cannot be registered in a key group.
For Global Hot Spares, an authentication key can be specified according to the setting of the key group for the
RAID groups when a Global Hot Spare is configured as a secondary drive for the RAID groups that are registered
in the key group.
For Dedicated Hot Spares, an authentication key can be specified according to the setting of the key group for
the target RAID group when a Dedicated Hot Spare is registered.

68
FUJITSU Storage ETERNUS DX100 S4/DX200 S4, ETERNUS DX100 S3/DX200 S3 Hybrid Storage Systems Design Guide (Basic)
Copyright 2019 FUJITSU LIMITED

• If a LAN connection cannot be secured during SED authentication, authentication fails because the
authentication key that is managed by the key server cannot be obtained.
To use the key server linkage function, a continuous connection to the LAN must be secured.
• To use the authentication key in a key server, a key group needs to be created. Multiple RAID groups can be
registered in a key group. Note that only one key group can be created in each ETERNUS DX. One
authentication key can be specified for each key group. The authentication key for a key group can be changed.
• Setting a period of time for the validity of the authentication key in the key server by using the ETERNUS DX
enables the key to be automatically updated by obtaining a new key from the key server before the validity
of the key expires. Access from the host (server) can be maintained even if the SED authentication key is
changed during operation.
• When linking with the key management server, the ETERNUS DX obtains the SED authentication key from
the key server and performs authentication when key management settings are performed, when key
management information is displayed, or when any of the following operations are performed.
- Turning on the ETERNUS DX
- Expanding the RAID group capacity (Logical Device Expansion)
- Forcibly enabling a RAID group
- Creating the key group
- Recovering SEDs
- Performing maintenance of drive enclosures
- Performing maintenance of drives
- Applying disk firmware
- Registering Dedicated Hot Spares
- Rebuilding and performing copy back (when using Global Hot Spares)
- Performing a redundant copy (when using Global Hot Spares)
- Turning on the disk motor with the Eco-mode


User Access Management

Account Management
The ETERNUS DX allocates roles and access authority when a user account is created, and sets which functions
can be used depending on the user privileges.
Since the authorized functions of the storage administrator are classified according to usage and only the
minimum privileges are given to each administrator, security is improved while operational mistakes and
management hours are reduced.
Figure 40  Account Management
(The figure illustrates users with different roles: Monitor for status display, Admin for system management,
and StorageAdmin for RAID group management, as well as AccountAdmin for user account management,
SecurityAdmin for security management, and Maintainer for storage system management. By setting which
functions can be used by each user, unnecessary access is reduced.)

Up to 60 user accounts can be set in the ETERNUS DX.


Up to 16 users can be logged in at the same time using ETERNUS Web GUI or ETERNUS CLI.
The menu that is displayed after logging on varies depending on the role that is added to a user account.


• Roles and available functions


Seven default roles are provided in the ETERNUS DX. The following table shows the roles and the available
functions (categories).
Table 24  Available Functions for Default Roles

                                          Roles
  Categories                     Monitor  Admin  Storage  Account  Security  Maintainer  Software
                                                 Admin    Admin    Admin                 (*1)
  Status Display                 ○        ○      ○        ×        ○         ○           ×
  RAID Group Management          ×        ○      ○        ×        ×         ○           ×
  NAS Management                 ×        ○      ○        ×        ×         ○           ×
  Volume - Create / Modify       ×        ○      ○        ×        ×         ○           ×
  Volume - Delete / Format       ×        ○      ○        ×        ×         ○           ×
  Host Interface Management      ×        ○      ○        ×        ×         ○           ×
  Advanced Copy Management       ×        ○      ○        ×        ×         ○           ×
  Copy Session Management        ×        ○      ○        ×        ×         ○           ×
  Storage Migration Management   ×        ○      ○        ×        ×         ○           ×
  Storage Management             ×        ○      ×        ×        ×         ○           ×
  User Management                ×        ○      ×        ○        ×         ×           ×
  Authentication / Role          ×        ○      ×        ○        ×         ×           ×
  Security Setting               ×        ○      ×        ×        ○         ×           ×
  Maintenance Information        ×        ○      ×        ×        ○         ○           ×
  Firmware Management            ×        ○      ×        ×        ×         ○           ×
  Maintenance Operation          ×        ×      ×        ×        ×         ○           ×

○: Supported category  ×: Not supported

*1: This is the role that is used for external software. A user account with a "Software" role cannot be used
with ETERNUS Web GUI or ETERNUS CLI.

• To use functions that require a license, a category that supports the function used to register the required
license must be selected.
• The default roles cannot be deleted or edited.
• The function categories for the roles cannot be changed.
• A role must be assigned when creating a user account.
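Table 24 can be read as a simple permission lookup. The sketch below transcribes the table into code so a check such as "can this role use this category?" becomes a one-line query; the data layout and helper names are illustrative, not part of any ETERNUS interface.

```python
# Function categories, in the row order of Table 24.
CATEGORIES = [
    "Status Display", "RAID Group Management", "NAS Management",
    "Volume - Create / Modify", "Volume - Delete / Format",
    "Host Interface Management", "Advanced Copy Management",
    "Copy Session Management", "Storage Migration Management",
    "Storage Management", "User Management", "Authentication / Role",
    "Security Setting", "Maintenance Information", "Firmware Management",
    "Maintenance Operation",
]

# One "o"/"x" flag per category (same order as CATEGORIES), per Table 24.
ROLE_ROWS = {
    "Monitor":       "oxxxxxxxxxxxxxxx",
    "Admin":         "ooooooooooooooox",
    "StorageAdmin":  "oooooooooxxxxxxx",
    "AccountAdmin":  "xxxxxxxxxxooxxxx",
    "SecurityAdmin": "oxxxxxxxxxxxooxx",
    "Maintainer":    "ooooooooooxxxooo",
    "Software":      "xxxxxxxxxxxxxxxx",  # external software only, no GUI/CLI use
}

def can_use(role, category):
    """True if the default role supports the given function category."""
    return ROLE_ROWS[role][CATEGORIES.index(category)] == "o"

print(can_use("Maintainer", "Maintenance Operation"))  # → True
print(can_use("Admin", "Maintenance Operation"))       # → False
```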


User Authentication
Internal Authentication and External Authentication are available as logon authentication methods. RADIUS
authentication can be used for External Authentication.
The user authentication functions described in this section can be used when performing storage management
and operation management, and when accessing the ETERNUS DX via the operation management LAN.

● Internal Authentication
Internal Authentication is performed using the authentication function of the ETERNUS DX.
The following authentication functions are available when the ETERNUS DX is connected via a LAN using
operation management software.
• User account authentication
User account authentication uses the user account information that is registered in the ETERNUS DX to verify
user logins. Up to 60 user accounts can be set to access the ETERNUS DX.
• SSL authentication
ETERNUS Web GUI and SMI-S support HTTPS connections using SSL/TLS. Since data on the network is encrypted,
security can be ensured. The server certificates that are required for connection are automatically created in
the ETERNUS DX.
• SSH authentication
Since ETERNUS CLI supports SSH connections, data that is sent or received on the network can be encrypted.
The server key for SSH varies depending on the ETERNUS DX. When the server certificate is updated, the
server key is updated as well.
Password authentication and client public key authentication are available as authentication methods for SSH
connections.
The supported client public keys are shown below.
Table 25 Client Public Key (SSH Authentication)

Type of public key Complexity (bits)


IETF style DSA for SSH v2 1024, 2048, and 4096
IETF style RSA for SSH v2 1024, 2048, and 4096
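A client public key supplied for SSH authentication is typically an OpenSSH-format line whose base64 blob begins with a length-prefixed algorithm name. The following sketch checks only that declared name against the key types of Table 25; a full check would also verify the key length in bits. The helper names are illustrative, and the demonstration blob is hand-built, not a real key.

```python
import base64
import struct

SUPPORTED_TYPES = {"ssh-rsa", "ssh-dss"}  # IETF-style RSA/DSA for SSH v2

def key_type(openssh_line):
    """Return the algorithm name embedded in an OpenSSH public-key line."""
    b64 = openssh_line.split()[1]                # "<type> <base64> [comment]"
    blob = base64.b64decode(b64)
    (name_len,) = struct.unpack(">I", blob[:4])  # uint32 length prefix
    return blob[4:4 + name_len].decode("ascii")

def is_supported(openssh_line):
    return key_type(openssh_line) in SUPPORTED_TYPES

# Build a minimal blob by hand for demonstration (not a real key).
blob = struct.pack(">I", 7) + b"ssh-rsa"
line = "ssh-rsa " + base64.b64encode(blob).decode() + " demo"
print(is_supported(line))   # → True
```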

● External Authentication
External Authentication uses the user account information (user name, password, and role name) that is
registered on an external authentication server. RADIUS authentication supports ETERNUS Web GUI and
ETERNUS CLI login authentication for the ETERNUS DX, and authentication for connections to the ETERNUS DX
through a LAN using operation management software.
• RADIUS authentication
RADIUS authentication uses the Remote Authentication Dial-In User Service (RADIUS) protocol to consolidate
authentication information for remote access.
An authentication request is sent to the RADIUS authentication server that is outside the ETERNUS system
network. The authentication method can be selected from CHAP and PAP. Two RADIUS authentication servers
(the primary server and the secondary server) can be connected to balance user account information and to
create a redundant configuration. When the primary RADIUS server fails to authenticate, the secondary
RADIUS server attempts to authenticate.


User roles are specified in the Vendor Specific Attribute (VSA) of the Access-Accept response from the server.
The following table shows the syntax of the VSA-based account role on the RADIUS server.

  Item                 Size (octets)   Value              Description
  Type                 1               26                 Attribute number for the Vendor Specific Attribute
  Length               1               7 or more          Attribute size (calculated by server)
  Vendor-Id            4               211                Fujitsu Limited (SMI Private Enterprise Code)
  Vendor type          1               1                  Eternus-Auth-Role
  Vendor length        1               2 or more          Attribute size described after Vendor type
                                                          (calculated by server)
  Attribute-Specific   1 or more       ASCII characters   One or more assignable role names for successfully
                                                          authenticated users (*1)

*1: The server-side role names must be identical to the role names of the ETERNUS DX. Match the letter case
when entering the role names.
[Example] RoleName0
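Following the table above, a server-side configuration would emit the role name inside a standard RADIUS Vendor-Specific Attribute (attribute 26). The sketch below uses the usual RFC 2865 layout with Fujitsu's Vendor-Id 211 and vendor type 1 (Eternus-Auth-Role); the exact length bookkeeping a given RADIUS server performs may differ, so treat it as an illustration rather than a wire-exact specification.

```python
import struct

FUJITSU_VENDOR_ID = 211   # SMI Private Enterprise Code (from the table above)
ETERNUS_AUTH_ROLE = 1     # Vendor type for Eternus-Auth-Role

def encode_role_vsa(role_name):
    """Encode a Vendor-Specific Attribute (RADIUS attribute 26) carrying a role name.

    Layout per RFC 2865: Type, Length, Vendor-Id, then a
    vendor-type / vendor-length / value triple.
    """
    value = role_name.encode("ascii")
    vendor_data = struct.pack("!BB", ETERNUS_AUTH_ROLE, 2 + len(value)) + value
    length = 2 + 4 + len(vendor_data)     # Type + Length + Vendor-Id + vendor data
    return struct.pack("!BBI", 26, length, FUJITSU_VENDOR_ID) + vendor_data

attr = encode_role_vsa("RoleName0")
print(attr.hex())
```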

• If RADIUS authentication fails when "Do not use Internal Authentication" has been selected for
"Authentication Error Recovery" on ETERNUS Web GUI, ETERNUS CLI, or SMI-S, logging on to ETERNUS Web GUI
or ETERNUS CLI will not be available.
When the setting to use Internal Authentication for errors caused by network problems is configured, Internal
Authentication is performed if RADIUS authentication fails on both the primary and secondary RADIUS servers,
or if at least one of these failures is due to a network error.
• As long as there is no RADIUS authentication response, the ETERNUS DX keeps retrying to authenticate the
user for the entire "Timeout" period that is set on the "Set RADIUS Authentication (Initial)" menu. If
authentication does not succeed before the "Timeout" period expires, RADIUS Authentication is considered to
be a failure.
• When using RADIUS authentication, if the role that is received from the server is unknown (not set) for the
device, RADIUS authentication fails.


Audit Log
The ETERNUS DX can send information such as access records by the administrator and setting changes to
Syslog servers as audit logs.
Audit logs are audit trail information that record the operations that are executed for the ETERNUS DX and
the response from the system. This information is required for auditing.
The audit log function enables monitoring of all operations and any unauthorized access that may affect the
system.
Syslog protocols (RFC3164 and RFC5424) are supported for audit logs.
Information that is to be sent is not saved in the ETERNUS DX and the Syslog protocols are used to send out the
information. Two Syslog servers can be set as the destination servers in addition to the Syslog server that is used
for event notification.
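An RFC 5424 audit-log entry of the kind described above can be composed as follows. The facility/severity values and the message layout are illustrative assumptions for the sketch; the ETERNUS DX defines its own audit log content.

```python
def rfc5424_audit_message(ts, hostname, app, msg, facility=13, severity=6):
    """Build a minimal RFC 5424 syslog line (no MSGID or structured data).

    facility 13 (log audit) and severity 6 (informational) are assumed
    defaults for illustration only.
    """
    pri = facility * 8 + severity   # PRI = facility * 8 + severity
    # <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD MSG
    return f"<{pri}>1 {ts} {hostname} {app} - - - {msg}"

line = rfc5424_audit_message(
    "2019-04-01T12:00:00Z", "DX200-1", "audit",
    "user admin logged in; result=success")
print(line)   # → <110>1 2019-04-01T12:00:00Z DX200-1 audit - - - user admin logged in; result=success
```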
Figure 41  Audit Log
(The figure shows a system administrator logging in to the ETERNUS DX, changing settings, and logging out,
while the ETERNUS DX sends an audit log to the Syslog server containing information such as the storage
system name, the user/role, the process time, the process details, and the process results.)


Environmental Burden Reduction

Eco-mode
Eco-mode is a function that reduces power consumption for disks with limited access by stopping disk rotation
during specified periods or by powering off the disks.
Disk spin-up and spin-down schedules can be set for each RAID group or TPP. These schedules can also be set to
allow backup operations.
Figure 42  Eco-mode
(The figure shows a daily cycle in which disk control is linked to usage: during the working phases (5:00 to
24:00), the SAS disks spin while the Nearline SAS backup disks are stopped; the backup disks spin up for the
backup phase (0:00 to 5:00) and spin down again afterwards, running for only five hours.)

The Eco-mode of the ETERNUS DX is a power saving function based on Massive Arrays of Idle Disks (MAID)
technology. The operational state for stopping a disk can be selected from two modes: "stop motor" or "turn
off drive power".
The disks to be controlled are SAS disks and Nearline SAS disks.
Eco-mode cannot be used for the following drives:
• Global Hot Spares (Dedicated Hot Spares are possible)
• SSDs
• Unused drives (that are not used by RAID groups)
The Eco-mode schedule cannot be specified for the following RAID groups or pools:
• RAID groups or pools in which no volumes are registered
• RAID groups or pools that are configured with SSDs
• RAID groups to which a volume with a Storage Migration path belongs
• RAID groups that are registered as an REC Disk Buffer
• TPPs where the Deduplication/Compression function is enabled
• FTSP

75
FUJITSU Storage ETERNUS DX100 S4/DX200 S4, ETERNUS DX100 S3/DX200 S3 Hybrid Storage Systems Design Guide (Basic)
Copyright 2019 FUJITSU LIMITED
P3AM-7642-25ENZ0
2. Basic Functions
Environmental Burden Reduction

• FTRP
For RAID groups with the following conditions, the Eco-mode schedule can be set but the disk motors cannot
be stopped and the power supply cannot be turned off:
• SDPVs are registered
• ODX Buffer volumes are registered
If disk access occurs while the disk motor is stopped, the disk is immediately spun up and can be accessed within
one to five minutes.
The Eco-mode function can be used with the following methods:
• Schedule control
Controls the disk motors by configuring the Eco-mode schedule on ETERNUS Web GUI or ETERNUS CLI. The
operation time schedules are set and managed for each RAID group and TPP.
• External application control (software interaction control)
Disk motors are controlled for each RAID group by ETERNUS SF software.
The disk motors are controlled by interacting with applications installed on the server side and responding to
instructions from the applications. The applications that can be interacted with are as follows:
- ETERNUS SF Storage Cruiser
- ETERNUS SF AdvancedCopy Manager
The following hierarchical storage management software can also be linked with Eco-mode.
When using the Eco-mode function with these products, an Eco-mode disk operating schedule does not need to
be set. A drive in a stopped condition starts running when it is accessed.
• IBM Tivoli Storage Manager for Space Management
• IBM Tivoli Storage Manager HSM for Windows
• Symantec Veritas Storage Foundation Dynamic Storage Tiering (DST) function
The following table shows the specifications of Eco-mode.
Table 26 Eco-mode Specifications

  Item                                Description            Remarks
  Number of registrable schedules     64                     Up to 8 events (during disk operation) can be set
                                                             for each schedule.
  Host I/O Monitoring Interval (*1)   30 minutes (default)   The monitoring time can be set from 10 to 60
                                                             minutes. The monitoring interval setting can be
                                                             changed by users with the maintenance operation
                                                             privilege.
  Disk Motor Spin-down Limit Count    25 (default)           The number of times the disk is stopped can be set
  (per day)                                                  from 1 to 25. When it exceeds the upper limit,
                                                             Eco-mode becomes unavailable, and the disks keep
                                                             running.
  Target drive                        SAS disks (*2)         SSDs are not supported.
                                      Nearline SAS disks

*1: The monitoring time period used to check whether a disk has not been accessed for a given length of time
before stopping the drive.
*2: Self Encrypting Drives (SEDs) are also included.


• To set Eco-mode schedule, use ETERNUS Web GUI, ETERNUS CLI, ETERNUS SF Storage Cruiser, or ETERNUS SF
AdvancedCopy Manager. Note that schedules that are created by ETERNUS Web GUI or ETERNUS CLI and
schedules that are created by ETERNUS SF Storage Cruiser or ETERNUS SF AdvancedCopy Manager cannot be
shared. Make sure to use only one type of software to manage a RAID group.
• Use ETERNUS Web GUI or ETERNUS CLI to set Eco-mode for TPPs. ETERNUS SF Storage Cruiser or ETERNUS SF
AdvancedCopy Manager cannot be used to set the Eco-mode for TPPs and FTRPs.
• Specify the same Eco-mode schedule for all the RAID groups that configure a WSV. If different Eco-mode
schedules are specified, stopped disks must be activated when host access occurs, and the response time
may increase.
• The operation time of disks varies depending on the Eco-mode schedule and the disk access.
- Access to a stopped disk outside of the scheduled operation time period causes the motor of the stopped
disk to be spun up, allowing normal access in about one to five minutes. When a set time elapses since
the last access to a disk, the motor of the disk is stopped.
- If a disk is activated from the stopped state more than a set amount of times in a day, the Eco-mode
schedule is not applied and disk motors are not stopped by the Eco-mode.
(Example 1) Setting the Eco-mode schedule via ETERNUS Web GUI
The operation schedule is set as 9:00 to 21:00 and there are no accesses outside of the scheduled period.
The disk motor starts rotating 10 minutes before the scheduled operation starts (at 8:50), and the disk stops
10 minutes after the scheduled operation ends (at 21:10). The disk remains stopped for the rest of the day.

(Example 2) Setting the Eco-mode schedule via ETERNUS Web GUI
The operation schedule is set as 9:00 to 21:00 and there are accesses outside of the scheduled period.
As in Example 1, the motor starts rotating 10 minutes before the scheduled operation and the disk stops
10 minutes after the scheduled operation. When an access occurs while the disk is stopped, the disk becomes
accessible in 1 to 5 minutes and keeps operating until the access stops, after which the disk stops again.
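The scheduled behavior in the two examples can be modeled with a small helper. The ten-minute lead/lag values come from the examples above; the function name is illustrative, and out-of-schedule access handling (spin-up within 1 to 5 minutes) is not modeled.

```python
from datetime import datetime, time, timedelta

SPIN_UP_LEAD = timedelta(minutes=10)    # motor starts 10 min before the schedule
SPIN_DOWN_LAG = timedelta(minutes=10)   # disk stops 10 min after the schedule

def motor_should_run(now, start=time(9, 0), end=time(21, 0)):
    """True while the motor runs under a 9:00-21:00 schedule (as in Example 1)."""
    day = now.replace(hour=0, minute=0, second=0, microsecond=0)
    run_from = day + timedelta(hours=start.hour, minutes=start.minute) - SPIN_UP_LEAD
    run_to = day + timedelta(hours=end.hour, minutes=end.minute) + SPIN_DOWN_LAG
    return run_from <= now <= run_to

print(motor_should_run(datetime(2019, 4, 1, 8, 49)))   # → False (before 8:50)
print(motor_should_run(datetime(2019, 4, 1, 8, 51)))   # → True
print(motor_should_run(datetime(2019, 4, 1, 21, 11)))  # → False (after 21:10)
```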


• Eco-mode schedules are executed according to the date and time that are set in the ETERNUS DX. To turn
the disk motors on and off according to the schedule that is set, use the Network Time Protocol (NTP) server
in the date and time setting in ETERNUS Web GUI to set automatic adjustment of the date and time.
• If the number of drives that are activated in a single drive enclosure is increased, system activation may
take longer (about 1 to 5 minutes) because all of the disks cannot be activated at the same time.
• Even if the disk motor is turned on and off repeatedly according to the Eco-mode schedule, the failure rate
is not affected compared to the case where the motor is always on.

Power Consumption Visualization


The power consumption and the temperature of the ETERNUS DX can be visualized with a graph by using the
ETERNUS SF Storage Cruiser integrated management software in a storage system environment. The ETERNUS
DX collects information on power consumption and the ambient temperature in the storage system. Collected
information is notified using SNMP and graphically displayed on the screens by ETERNUS SF Storage Cruiser.
Cooling efficiency can be improved by understanding local temperature rises in the data center and reviewing
the location of air-conditioning.
Understanding, from the access frequency of RAID groups, which drives are used only at specific times enables
the Eco-mode schedule to be adjusted accordingly.
Figure 43  Power Consumption Visualization
(The figure shows ETERNUS SF Storage Cruiser, running on a server, collecting power consumption and
temperature data from each ETERNUS DX storage system and displaying them as graphs.)


Operation Management/Device Monitoring

Operation Management Interface


Operation management software for the ETERNUS DX can be selected according to the environment of the user.
ETERNUS Web GUI and ETERNUS CLI are embedded in the ETERNUS DX controllers.
Shared folder (NFS and CIFS) operations can be performed with ETERNUS Web GUI or ETERNUS CLI for the NAS
environment settings.
The setting and display functions can also be used with ETERNUS SF Web Console.

■ ETERNUS Web GUI


ETERNUS Web GUI is a program for settings and operation management that is embedded in the ETERNUS DX
and accessed by using a web browser via http or https.
ETERNUS Web GUI has an easy-to-use design that makes intuitive operation possible.
The settings that are required for the ETERNUS DX initial installation can be easily performed by following the
wizard and inputting the parameters for the displayed setting items.
SSL v3 and TLS are supported for https connections. However, when using https connections, a server
certificate must be registered in advance or self-generated. Self-generated server certificates are not
certified by an official certification authority that is registered in web browsers. Therefore, some web
browsers will display warnings. Once the server certificate is installed in a web browser, the warning will not
be displayed again.
When using ETERNUS Web GUI to manage operations, prepare a Web browser in the administration terminal.
The following table shows the supported Web browsers.
Table 27 ETERNUS Web GUI Operating Environment

Software Guaranteed operating environment


Web browser Microsoft Internet Explorer 9.0, 10.0 (desktop version), 11.0 (desktop version)
Mozilla Firefox ESR 60

When using ETERNUS Web GUI to connect the ETERNUS DX, the default port number is 80 for http.

■ ETERNUS CLI
ETERNUS CLI supports Telnet or SSH connections. The ETERNUS DX can be configured and monitored using
commands and command scripts.
With the ETERNUS CLI, SSH v2 encrypted connections can be used. SSH server keys differ for each storage system,
and must be generated by the SSH server before using SSH.
Password authentication and client public key authentication are supported as authentication methods for SSH.
For details on supported client public key types, refer to "User Authentication" (page 72).

79
FUJITSU Storage ETERNUS DX100 S4/DX200 S4, ETERNUS DX100 S3/DX200 S3 Hybrid Storage Systems Design Guide (Basic)
Copyright 2019 FUJITSU LIMITED
P3AM-7642-25ENZ0
2. Basic Functions
Operation Management/Device Monitoring

■ ETERNUS SF
ETERNUS SF can manage a storage environment centered on Fujitsu storage products. An easy-to-use interface
enables complicated storage environment design and setting operations, which allows easy installation of a
storage system without needing high level skills.
ETERNUS SF ensures stable operation by managing the entire storage environment.
With ETERNUS SF Storage Cruiser, integrated operation management for both SAN and NAS is possible.

■ SMI-S
Storage systems can be managed collectively using the general storage management application that supports
Version 1.6 of Storage Management Initiative Specification (SMI-S). SMI-S is a storage management interface
standard of the Storage Network Industry Association (SNIA). SMI-S can monitor the ETERNUS DX status and
change configurations such as RAID groups, volumes, and Advanced Copy (EC/REC/OPC/SnapOPC/SnapOPC+).

Performance Information Management


The ETERNUS DX supports a function that collects and displays the performance data of the storage system via
ETERNUS Web GUI or ETERNUS CLI. The collected performance information shows the operation status and load
status of the ETERNUS DX and can be used to optimize the system configuration.
ETERNUS SF Storage Cruiser can be used to easily understand the operation status and load status of the
ETERNUS DX by graphically displaying the collected information on the GUI. ETERNUS SF Storage Cruiser can
also monitor the performance threshold and retain performance information for a user-specified duration.
When performance monitoring is operated from ETERNUS SF Storage Cruiser, ETERNUS Web GUI, or ETERNUS CLI,
performance information of each type is obtained at specified intervals (30 to 300 seconds) in the ETERNUS
DX.
The performance information can be displayed, stored, and exported in text file format from ETERNUS Web
GUI. The performance information that can be obtained is indicated as follows.

● Volume Performance Information for Host I/O


• Read IOPS (the read count per second)
• Write IOPS (the write count per second)
• Read Throughput (the amount of transferred data that is read per second)
• Write Throughput (the amount of transferred data that is written per second)
• Read Response Time (the average response time per host I/O during a read)
• Write Response Time (the average response time per host I/O during a write)
• Read Process Time (the average process time in the storage system per host I/O during a read)
• Write Process Time (the average process time in the storage system per host I/O during a write)
• Read Cache Hit Rate (cache hit rate for read)
• Write Cache Hit Rate (cache hit rate for write)
• Prefetch Cache Hit Rate (cache hit rate for prefetch)
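Rates such as Read IOPS and averages such as Read Response Time are typically derived by differencing two cumulative-counter samples taken one monitoring interval (30 to 300 seconds) apart. The sketch below shows that calculation; the counter names are illustrative assumptions, not the storage system's actual counter names.

```python
def read_perf(prev, curr, interval_s):
    """Derive Read IOPS and average Read Response Time (ms) from two
    cumulative-counter samples taken interval_s seconds apart."""
    if not 30 <= interval_s <= 300:
        raise ValueError("monitoring interval must be 30-300 seconds")
    reads = curr["read_count"] - prev["read_count"]
    busy_us = curr["read_time_us"] - prev["read_time_us"]
    iops = reads / interval_s
    # Average response time per host I/O during a read, in milliseconds.
    resp_ms = (busy_us / reads) / 1000 if reads else 0.0
    return iops, resp_ms

prev = {"read_count": 0, "read_time_us": 0}
curr = {"read_count": 3000, "read_time_us": 6_000_000}
print(read_perf(prev, curr, 30))   # → (100.0, 2.0)
```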

● Volume Performance Information for the Advanced Copy Function


• Read IOPS (the read count per second)
• Write IOPS (the write count per second)
• Read Throughput (the amount of transferred data that is read per second)


• Write Throughput (the amount of transferred data that is written per second)
• Read Cache Hit Rate (cache hit rate for read)
• Write Cache Hit Rate (cache hit rate for write)
• Prefetch Cache Hit Rate (cache hit rate for prefetch)

● Controller Performance Information


• Busy Ratio (CPU usage)
• CPU core usage

● CA Port Performance Information


• Read IOPS (the read count per second)
• Write IOPS (the write count per second)
• Read Throughput (the amount of transferred data that is read per second)
• Write Throughput (the amount of transferred data that is written per second)

● RA Port Performance Information


• Send IOPS (the number of data transmissions per second)
• Receive IOPS (the number of times data is received per second)
• Send throughput (the amount of transferred data that is sent per second)
• Receive throughput (the amount of transferred data that is received per second)

● Host-LU QoS Performance Information


• Average IOPS (the average number of I/Os per second)
• Minimum IOPS (the minimum number of I/Os per second)
• Maximum IOPS (the maximum number of I/Os per second)
• Average throughput (average MB/s value)
• Minimum throughput (minimum MB/s value)
• Maximum throughput (maximum MB/s value)
• Total delay time (total delay time of commands by QoS control)
• Average delay time (average delay time per command by QoS control)

● Drive Performance Information


• Busy Ratio (drive usage)

• When the ETERNUS DX is rebooted, the performance monitoring process is stopped.


• If performance monitoring is started from ETERNUS SF Storage Cruiser, ETERNUS Web GUI or ETERNUS CLI
cannot stop the process.
• If performance monitoring is started from ETERNUS Web GUI or ETERNUS CLI, the process can be stopped
from ETERNUS SF Storage Cruiser.


Event Notification
When an error occurs in the ETERNUS DX, the event notification function notifies the administrator of the
event information. This means the administrator can be informed that an error has occurred without constantly
monitoring the screen.
The methods to notify an event are e-mail, SNMP Trap, syslog, remote support, and host sense.
Figure 44  Event Notification
(The figure shows the ETERNUS DX notifying events by e-mail to a mail server, by SNMP Trap to an SNMP
manager, by syslog to a Syslog server, by host senses to the server (host), and by REMCS/AIS Connect to the
remote support center.)

The notification methods and levels can be set as required.


The following events are notified.
Table 28 Levels and Contents of Events That Are Notified

  Level                        Level of importance                   Event contents
  Error                        Maintenance is necessary              Component failure, temperature error, end of
                                                                     battery life (*1), rebuild/copyback, etc.
  Warning                      Preventive maintenance is necessary   Module warning, battery life warning (*1),
                                                                     etc.
  Notification (information)   Device information                    Component restoration notification, user
                                                                     login/logout, RAID creation/deletion, storage
                                                                     system power on/off, firmware update, etc.

*1: Battery related events are notified only for the ETERNUS DX100 S4/DX200 S4.

● E-Mail
When an event occurs, an e-mail is sent to the specified e-mail address.
The ETERNUS DX supports "SMTP AUTH" and "SMTP over SSL" for user authentication. The authentication method can be selected from CRAM-MD5, PLAIN, LOGIN, or AUTO, which automatically selects one of these methods.
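The mechanics of an authenticated "SMTP over SSL" notification can be sketched with Python's standard library. The host name, credentials, addresses, and subject format below are placeholders for illustration, not values or behavior defined by the ETERNUS DX.

```python
import smtplib
from email.message import EmailMessage

def build_event_mail(level: str, event: str) -> EmailMessage:
    """Format an event notification mail (subject/body layout is illustrative)."""
    msg = EmailMessage()
    msg["Subject"] = f"[ETERNUS DX] {level}: {event}"
    msg["From"] = "storage@example.com"   # placeholder sender address
    msg["To"] = "admin@example.com"       # placeholder administrator address
    msg.set_content(f"Level: {level}\nEvent: {event}")
    return msg

def send_over_ssl(msg: EmailMessage, host: str, user: str, password: str) -> None:
    # SMTP over SSL (implicit TLS, port 465); login() negotiates an AUTH
    # mechanism with the server, similar in spirit to the AUTO setting.
    with smtplib.SMTP_SSL(host, 465) as server:
        server.login(user, password)
        server.send_message(msg)

mail = build_event_mail("Warning", "battery life warning")
# The subject line an administrator would see:
assert mail["Subject"] == "[ETERNUS DX] Warning: battery life warning"
```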


● Simple Network Management Protocol (SNMP)


Using the SNMP agent function, management information is sent to the SNMP manager (network management/
monitoring server).
The ETERNUS DX supports the following SNMP specifications.
Table 29 SNMP Specifications

  Item          Specification          Remarks
  SNMP version  SNMP v1, v2c, v3       —
  MIB           MIB II                 Only the information managed by the ETERNUS DX can be sent with the GET command. The SET command send operation is not supported.
                FibreAlliance MIB 2.2  This is a MIB which is defined for the purpose of FC base SAN management. Only the information managed by the ETERNUS DX can be sent with the GET command. The SET command send operation is not supported.
                Unique MIB             This is a MIB in regard to the hardware configuration of the ETERNUS DX.
  Trap          Unique Trap            A trap number is defined for each category (such as a component disconnection and a sensor error), and a message with a brief description of the event is provided as additional information.

● Syslog
When a syslog destination server is registered in the ETERNUS DX, the various events that the ETERNUS DX detects are sent to that server as event logs.
The ETERNUS DX supports the syslog protocol conforming to RFC3164 and RFC5424.
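The shape of an RFC3164-style event log line can be sketched as follows. The mapping of ETERNUS event levels to syslog severities and the "eternus" tag are illustrative assumptions, not the storage system's actual output format.

```python
import time

# RFC3164 severity codes (0 = Emergency ... 7 = Debug); the mapping from
# event levels to severities here is an illustrative assumption.
SEVERITY = {"Error": 3, "Warning": 4, "Notification": 6}

def rfc3164_message(level, host, text, facility=1, now=None):
    """Build an RFC3164-style syslog line: <PRI>TIMESTAMP HOSTNAME TAG: MSG."""
    pri = facility * 8 + SEVERITY[level]          # PRI = facility * 8 + severity
    ts = time.strftime("%b %d %H:%M:%S", time.localtime(now))
    return f"<{pri}>{ts} {host} eternus: {text}"

# A real sender would ship this line over UDP port 514 to the syslog server:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
#       line.encode(), (server, 514))
```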

● Remote Support
Errors that occur in the ETERNUS DX are reported to the remote support center. The ETERNUS DX also sends additional information (logs and system configuration information) for checking the error, which shortens the time needed to collect information.
Remote support has the following maintenance functions.
• Failure notice
This function reports the various failures that occur in the ETERNUS DX to the remote support center, so that a maintenance engineer is notified of a failure immediately.
• Information transfer
This function sends information such as logs and configuration information to be used when checking a failure. This shortens the time to collect the information that is necessary to check errors.
• Firmware download
The latest firmware in the remote support center is automatically registered in the ETERNUS DX. This function
ensures that the latest firmware is registered in the ETERNUS DX, and prevents known errors from occurring.
Firmware can also be registered manually.
However, NAS system firmware is not automatically downloaded.


● Host Sense
The ETERNUS DX returns host senses (sense codes) to notify the server of specific statuses. Detailed information such as error contents can be obtained from the sense code.

• Note that the ETERNUS DX cannot check whether the event log is successfully sent to the syslog server. Even
if a communication error occurs between the ETERNUS DX and the syslog server, event logs are not sent
again. When using the syslog function (enabling the syslog function) for the first time, confirm that the
syslog server has successfully received the event log of the relevant operation.
• Using the ETERNUS Multipath Driver to monitor the storage system by host senses is recommended.
Sense codes that cannot be detected in a single-path configuration can also be reported.


Device Time Synchronization


The ETERNUS DX treats the time that is specified in the Master CM as the system standard time and distributes
that time to other modules to synchronize the storage time. The ETERNUS DX also supports the time correction
function by using the Network Time Protocol (NTP). The ETERNUS DX corrects the system time by obtaining the
time information from the NTP server during regular time correction.
The ETERNUS DX has a clock function and manages time information of date/time and the time zone (the region
in which the ETERNUS DX is installed). This time information is used for internal logs and for functions such as
Eco-mode, remote copy, and remote support.
Automatic time correction by NTP is recommended to synchronize time across the whole system.
When using NTP, specify an NTP server or an SNTP server. The ETERNUS DX supports NTP protocol v4. The
time correction mode is Step mode (immediate correction). Once NTP is set, the time is corrected regularly
every three hours.

• If an error occurs in a system that has a different date and time for each device, analyzing the cause of this
error may be difficult.
• Make sure to set the date and time correctly when using Eco-mode.
The stop and start process of the disk motors does not operate according to the Eco-mode schedule if the
date and time in the ETERNUS DX are not correct.
Using NTP to synchronize the time in the ETERNUS DX and the servers is recommended.

Figure 45 Device Time Synchronization
(The ETERNUS DX obtains the time from the NTP server via NTP, and manages the date and time (yyyy mm dd xx:xx:xx), the time zone (for example, GMT+09:00), and the Daylight Saving Time setting.)
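Step-mode correction obtains the time from the server and applies it immediately. A minimal SNTP exchange of the kind used for each periodic correction can be sketched as follows; this is only a conceptual illustration and does not reflect the ETERNUS DX's internal implementation.

```python
import socket
import struct

NTP_UNIX_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert an NTP timestamp (seconds since 1900) to Unix time."""
    return ntp_seconds - NTP_UNIX_OFFSET

def sntp_query(server: str, timeout: float = 2.0) -> int:
    """Ask an (S)NTP server for the current time; returns Unix seconds."""
    # First byte 0x23: LI=0, version 4, mode 3 (client), matching the
    # NTPv4 protocol that the ETERNUS DX supports.
    packet = b"\x23" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        reply, _ = sock.recvfrom(512)
    # Transmit timestamp, integer part (bytes 40-43 of the reply).
    transmit_ts = struct.unpack("!I", reply[40:44])[0]
    return ntp_to_unix(transmit_ts)
```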


Power Control

Power Synchronized Unit


A power synchronized unit detects changes in the AC power output of the Uninterruptible Power Supply (UPS)
unit that is connected to the server and automatically turns the ETERNUS DX on and off.
Figure 46 Power Synchronized Unit
(Servers are connected to their UPS units by AC cables. The power synchronized unit detects the AC output of the UPS units and turns the ETERNUS DX on and off via an RS232C cable.)


Remote Power Operation (Wake On LAN)


Wake On LAN is a function that turns on the ETERNUS DX via a network.
When "magic packet" data is sent from an administration terminal, the ETERNUS DX detects the packet and the
power is turned on.
To perform Wake On LAN, utility software for Wake On LAN such as Systemwalker Runbook Automation is
required, and the Wake On LAN settings must be configured.
The MAC address for the ETERNUS DX can be checked on ETERNUS CLI.
ETERNUS Web GUI or ETERNUS CLI can be used to turn off the power of an ETERNUS DX remotely.
Figure 47 Wake On LAN
(An administration terminal running a Wake On LAN utility sends a magic packet over the LAN, and the ETERNUS DX detects the packet and turns on.)
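The "magic packet" format is standard: 6 bytes of 0xFF followed by the target MAC address repeated 16 times, typically sent as a UDP broadcast. The sketch below shows the mechanism that the utility software normally handles for you; the MAC address must be the one reported by ETERNUS CLI, and port 9 is simply the conventional discard port.

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A magic packet is 6 bytes of 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255",
                      port: int = 9) -> None:
    """Broadcast the magic packet on the LAN."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))
```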


Backup (Advanced Copy)


The Advanced Copy function (high-speed copying function) enables data backup (data replication) at any point
without stopping the operations of the ETERNUS DX.
For an ETERNUS DX backup operation, data can be replicated without placing a load on the business server.
Because the timing and business access can be controlled, replication of large amounts of data can be
performed with data protection handled separately from operation processes.
An example of an Advanced Copy operation using ETERNUS SF AdvancedCopy Manager is shown below.
Figure 48 Example of Advanced Copy
(With conventional backup, backup software copies a volume to tape while operation is stopped, causing system down time. With the Advanced Copy function, ETERNUS SF AdvancedCopy Manager copies the volume to a backup volume (disk backup function) and from there to tape (tape backup function), reducing the system down time by using high-speed backup.)

There are two types of Advanced Copy: a local copy that is performed within a single ETERNUS DX and a remote
copy that is performed between multiple ETERNUS DX storage systems.
Local copy functions include One Point Copy (OPC), QuickOPC, SnapOPC, SnapOPC+, and Equivalent Copy (EC),
and remote copy functions include Remote Equivalent Copy (REC).
The following table shows ETERNUS related software for controlling the Advanced Copy function.
Table 30 Control Software (Advanced Copy)

  Control software                 Feature
  ETERNUS Web GUI / ETERNUS CLI    The copy functions can be used without optional software.
  ETERNUS SF AdvancedCopy Manager  ETERNUS SF AdvancedCopy Manager supports various OSs and ISV applications, and enables the use of all the Advanced Copy functions. This software can also be used for backups that interoperate with Oracle, SQL Server, Exchange Server, or Symfoware Server without stopping operations.
  ETERNUS SF Express               ETERNUS SF Express allows easy management and backup of systems with a single product.


The following functions (copy methods) are available.


Table 31 List of Functions (Copy Methods)

  ETERNUS DX100 S4/DX100 S3
    Number of available sessions: 1,024 (*1) / 2,048 (*2)
    ETERNUS Web GUI / ETERNUS CLI: SnapOPC+
    ETERNUS SF AdvancedCopy Manager: SnapOPC, SnapOPC+, QuickOPC, OPC, EC, REC
    ETERNUS SF Express: SnapOPC+
  ETERNUS DX200 S4/DX200 S3
    Number of available sessions: 2,048 (*1) / 4,096 (*2)
    ETERNUS Web GUI / ETERNUS CLI: SnapOPC+
    ETERNUS SF AdvancedCopy Manager: SnapOPC, SnapOPC+, QuickOPC, OPC, EC, REC
    ETERNUS SF Express: SnapOPC+

*1: The values if the controller firmware version is earlier than V10L60 or if the "Expand Volume Mode" is disabled.
*2: The values if the controller firmware version is V10L60 or later and if the "Expand Volume Mode" is enabled.
A copy is executed for each LUN. With ETERNUS SF AdvancedCopy Manager, a copy can also be executed for each
logical disk (which is called a partition or a volume depending on the OS).
A copy cannot be executed if another function is running in the storage system or the target volume. For details
on the functions that can be executed simultaneously, refer to "Combinations of Functions That Are Available for
Simultaneous Executions" (page 202).

Backup (SAN)
Local Copy
The Advanced Copy functions offer the following copy methods: "Mirror Suspend", "Background Copy", and "Copy-
on-Write". The function names that are given to each method are as follows: "EC" for the "Mirror Suspend" meth-
od, "OPC" for the "Background Copy" method, and "SnapOPC" for the "Copy-on-Write" method.
When a physical copy is performed for the same area after the initial copy, OPC offers "QuickOPC", which
performs a physical copy of only the data that has been updated since the previous copy. The SnapOPC+ function
copies only the data that is about to be updated and performs generation management of the copy source volume.
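The "Copy-on-Write" idea behind SnapOPC can be modeled in a few lines: before a source block is overwritten, the old data is saved to a pool, and reading the snapshot returns the saved block where one exists. This is a conceptual sketch only, not the ETERNUS DX implementation.

```python
class CopyOnWriteSnapshot:
    """Conceptual Copy-on-Write snapshot: save old blocks only when they change."""

    def __init__(self, source: list):
        self.source = source   # live volume (list of blocks)
        self.saved = {}        # pool of pre-update blocks (index -> old data)

    def write(self, index: int, data):
        # Save the previous contents the first time a block is updated.
        if index not in self.saved:
            self.saved[index] = self.source[index]
        self.source[index] = data

    def read_snapshot(self, index: int):
        # Snapshot view: the saved block if it changed, otherwise the live block.
        return self.saved.get(index, self.source[index])


volume = ["a", "b", "c"]
snap = CopyOnWriteSnapshot(volume)
snap.write(1, "B")                    # only block 1 is saved to the pool
assert snap.read_snapshot(1) == "b"   # the snapshot still shows the old data
assert volume == ["a", "B", "c"]      # the live volume has the update
```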

● OPC
All of the data in a volume at a specific point in time is copied to another volume in the ETERNUS DX.
OPC is suitable for the following usages:
• Performing a backup
• Performing system test data replication
• Restoring backup data (restoration after replacing a drive when the copy source drive has failed)

● QuickOPC
QuickOPC copies all data as initial copy in the same way as OPC. After all of the data is copied, only updated data
(differential data) is copied. QuickOPC is suitable for the following usages:


• Creating a backup of the data that is updated in small amounts


• Performing system test data replication
• Restoration from a backup
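The differential behavior of QuickOPC can be modeled with a dirty-block set: after the initial full copy, only blocks flagged as updated are re-copied. Again, this is a conceptual sketch, not the actual controller logic.

```python
class QuickOPCModel:
    """Conceptual QuickOPC: full initial copy, then differential re-copies."""

    def __init__(self, source: list):
        self.source = source
        self.dest = [None] * len(source)
        # Everything is pending before the initial copy.
        self.dirty = set(range(len(source)))

    def write(self, index: int, data):
        self.source[index] = data
        self.dirty.add(index)          # remember which blocks changed

    def copy(self) -> int:
        """Physically copy only the dirty blocks; returns how many were copied."""
        copied = len(self.dirty)
        for index in self.dirty:
            self.dest[index] = self.source[index]
        self.dirty.clear()
        return copied


q = QuickOPCModel(["a", "b", "c", "d"])
assert q.copy() == 4        # the initial copy moves all blocks
q.write(2, "C")
assert q.copy() == 1        # a later copy moves only the updated block
```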

● SnapOPC/SnapOPC+ (*1)
As updates occur in the source data, SnapOPC/SnapOPC+ saves the data prior to change to the copy destination
(SDV/TPV/FTV). The data, prior to changes in the updated area, is saved to an SDP/TPP/FTRP. Create an SDPV for
the SDP when performing SnapOPC/SnapOPC+ by specifying an SDV as the copy destination.
SnapOPC/SnapOPC+ is suitable for the following usages:
• Performing temporary backup for tape backup
• Performing a backup of the data that is updated in small amounts (generation management is available for
SnapOPC+)
• SnapOPC/SnapOPC+ operations that use an SDV/TPV/FTV as the copy destination logical volume have the fol-
lowing characteristics. Check the characteristics of each volume type before selecting the volume type.
Table 32 Characteristics of SnapOPC/SnapOPC+ Operations with Each Type of Copy Destination Logical Volume

  Ease of operation settings
    SDV: The operation setting is complex because a dedicated SDV and SDP must be set.
    TPV/FTV: The operation setting is easy because a dedicated SDV and SDP are not required.
  Usage efficiency of the pool
    SDV: Higher, because the allocated size of the physical area is small (8KB).
    TPV/FTV: Lower, because the allocated size of the physical area is large, with a chunk size of 21MB / 42MB / 84MB / 168MB.

*1: The difference between SnapOPC and SnapOPC+ is that SnapOPC+ manages the history of updated data as
opposed to SnapOPC, which manages updated data for a single generation only. While SnapOPC manages
updated data in units per session thus saving the same data redundantly, SnapOPC+ has updated data as
history information which can provide multiple backups for multiple generations.
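The pool-efficiency difference in Table 32 comes from allocation granularity: a pool always allocates whole chunks, so saving one small pre-update block consumes one full chunk. A quick illustration, using the sizes from the table (the helper itself is not an ETERNUS API):

```python
def allocated_bytes(data_bytes: int, chunk_bytes: int) -> int:
    """Pool space consumed when data is saved with a given allocation chunk size."""
    chunks = -(-data_bytes // chunk_bytes)   # ceiling division
    return chunks * chunk_bytes

KB, MB = 1024, 1024 * 1024

# Saving a single 8KB pre-update block:
assert allocated_bytes(8 * KB, 8 * KB) == 8 * KB     # SDV/SDP: exactly 8KB
assert allocated_bytes(8 * KB, 21 * MB) == 21 * MB   # TPV/FTV with 21MB chunks
```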

● EC
EC first creates mirrored data from the copy source in the copy destination, and then suspends the copy so that
each volume can be handled independently.
When copying is resumed, only data updated in the copy source is copied to the copy destination. If the copy
destination data has been modified, the copy source data is copied again in order to maintain equivalence
between the copy source data and the copy destination data. EC is suitable for the following usages:
• Performing a backup
• Performing system test data replication

• Prepare an encrypted SDP when an encrypted SDV is used.


• If the SDP capacity is insufficient, a copy cannot be performed. In order to avoid this situation, an operation
that notifies the operation administrator of event information according to the remaining SDP capacity is
recommended. For more details on event notification, refer to "Event Notification" (page 82).
• For EC, the data in the copy destination cannot be referenced or updated until the copy session is suspended.
If the monitoring software (ServerView Agents) performs I/O access to the data in the copy destination, an I/O
access error message is output to the server log message and other destinations. To prevent error messages
from being output, consider using other monitoring methods.


Remote Copy
Remote copy is a function that copies data between different storage systems in remote locations by using the
"REC". REC is an enhancement of the EC mirror suspend method of the local copy function. Mirroring, snapshots,
and backup between multiple storage systems can be performed by using an REC.
An REC can be used to protect data against disaster by duplicating the database and backing up data to a re-
mote location.
Older models of the ETERNUS Hybrid Storage Systems and the ETERNUS Disk Storage Systems can also be
connected.

● REC
REC is used to copy data among multiple devices using the EC copy method. REC is suitable for the following
usages:
• Performing system test data replication
• Duplicating databases on multiple ETERNUS DX/AF storage systems
• Backing up data to remote ETERNUS DX/AF storage systems
Figure 49 REC
(A management server at the main site and another at the backup site each connect to an ETERNUS DX/AF through a SAN. Remote copy (REC) transfers data over the WAN from the operating volume at the main site to the backup volume at the backup site.)

REC has two data transfer modes: the synchronous transfer mode and the asynchronous transfer mode. Select a
mode according to whether importance is placed on I/O response time or on a complete backup of the data up
to the point when a disaster occurs.
Table 33 REC Data Transfer Mode

  Data transfer mode              I/O response                        Updated log status in the case of disaster
  Synchronous transmission mode   Affected by transmission delay      Data is completely backed up until the point when a disaster occurs.
  Asynchronous transmission mode  Not affected by transmission delay  Data is backed up until a few seconds before a disaster occurs.

■ Synchronous Transmission Mode


Data that is updated in the copy source is immediately copied to the copy destination. A write completion signal
for a server write request is returned only after both the write to the copy source and the copy to the copy
destination are complete. Synchronizing the data copy with the data that is written to the copy source
guarantees the contents of the copy source and the copy destination at the time of completion.


■ Asynchronous Transmission Mode


Data that is updated in a copy source is copied to the copy destination after a completion signal to the write
request is returned.
The Stack mode and the Consistency mode are available in the Asynchronous transmission mode. Selection of
the mode depends on the usage pattern of the remote copy. The Through mode is used to stop data transfer by
the Stack mode or the Consistency mode.
• Stack mode
Only the updated block positions are recorded before the completion signal is returned to the server, so the
effect on server response time is small. Data transfer of the recorded blocks can be performed by an
independent transfer process.
The Stack mode can be used for a copy even when the line bandwidth is small. Therefore, this mode is mainly
used for remote backup.
• Consistency mode
This mode guarantees the sequential transmission of updates to the remote copy destination device in the
same order as the writes occurred. Even if a problem occurs with the data transfer order due to a transmission
delay in the WAN, the update order in the copy destination is controlled to be maintained.
The Consistency mode is used to perform mirroring for data with multiple areas such as databases in order to
maintain the transfer order for copy sessions.
This mode uses part of the cache memory as a buffer (REC Buffer). A copy via the REC Buffer stores multiple
REC session I/Os in the REC Buffer for a certain period of time. Data for these I/Os is copied in blocks.
When a capacity shortage for the REC Buffer occurs, the REC Disk Buffer can also be used. A REC Disk Buffer is
used as a temporary destination to save copy data.
• Through mode
After an I/O response is returned, this mode copies the data that has not been transferred as an extension of
the process.
The Through mode is not used for normal transfers. When stopping or suspending the Stack mode or the
Consistency mode, this mode is used to transfer the data that has not yet been transferred or to resume
transfers.
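The difference between the synchronous mode and the Stack (asynchronous) mode comes down to when the write acknowledgement is returned relative to the remote transfer. The sketch below is a conceptual model only; it ignores buffering, ordering, and failure handling.

```python
class RemoteCopyModel:
    """Conceptual REC transfer modes: ack timing vs. remote transfer timing."""

    def __init__(self):
        self.local = {}
        self.remote = {}
        self.pending = set()          # dirty blocks recorded by Stack mode

    def write_synchronous(self, block: int, data):
        self.local[block] = data
        self.remote[block] = data     # the remote copy happens before the ack
        return "ack"                  # the ack is delayed by the WAN round trip

    def write_stack(self, block: int, data):
        self.local[block] = data
        self.pending.add(block)       # only the block position is recorded
        return "ack"                  # the ack returns immediately

    def transfer_pending(self):
        # An independent transfer process ships the recorded blocks later.
        for block in self.pending:
            self.remote[block] = self.local[block]
        self.pending.clear()


rc = RemoteCopyModel()
rc.write_synchronous(0, "A")
assert rc.remote[0] == "A"            # already at the remote site when acked
rc.write_stack(1, "B")
assert 1 not in rc.remote             # acked, but not yet transferred
rc.transfer_pending()
assert rc.remote[1] == "B"
```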


• When an REC is performed over a WAN, a bandwidth that supports the amount of updates from the server
must be secured. Regardless of the amount of updates from the server, a bandwidth of at least 50Mbit/s is
required for the Synchronous mode and a bandwidth of at least 2Mbit/s for the Consistency mode (when
data is not being compressed by network devices).
• When an REC is performed over a WAN, the round-trip time for data transmissions must be 100ms or less. A
setup in which the round-trip time is 10ms or less is recommended for the synchronous transmission mode
because the effect upon the I/O response is significant.
• For REC, the data in the copy destination cannot be referenced or updated until the copy session is suspended.
If the monitoring software (ServerView Agents) performs I/O access to the data in the copy destination, an I/O
access error message is output to the server log message and other destinations. To prevent error messages
from being output, consider using other monitoring methods.
• When a firmware update is performed, copy sessions must be suspended.
• The following models support REC Disk Buffers.
- ETERNUS DX100 S4/DX200 S4
- ETERNUS DX500 S4/DX600 S4
- ETERNUS DX8900 S4
- ETERNUS DX100 S3/DX200 S3
- ETERNUS DX500 S3/DX600 S3
- ETERNUS DX8100 S3/DX8700 S3/DX8900 S3
- ETERNUS AF250 S2/AF650 S2
- ETERNUS AF250/AF650
- ETERNUS DX200F
- ETERNUS DX90 S2
- ETERNUS DX400/DX400 S2 series
- ETERNUS DX8000/DX8000 S2 series
• To use REC Disk Buffers, the controller firmware version of the ETERNUS DX must be V10L60-6000 or later,
or V10L61-6000 or later.
• When the ETERNUS DX90, the ETERNUS DX400 series, or the ETERNUS DX8000 series is used as the copy
destination, REC cannot be performed between encrypted volumes and unencrypted volumes.
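The WAN requirements above can be captured in a small pre-check. The thresholds (50Mbit/s and 2Mbit/s minimum bandwidth, 100ms maximum round-trip time, 10ms recommended for synchronous transfers) come from this section; the helper itself is illustrative only and does not account for compression by network devices.

```python
def rec_wan_check(mode: str, bandwidth_mbps: float, rtt_ms: float) -> list:
    """Return a list of warnings for a planned REC link (empty list = OK)."""
    minimum = {"synchronous": 50.0, "consistency": 2.0}
    problems = []
    if bandwidth_mbps < minimum[mode]:
        problems.append(
            f"bandwidth below the {minimum[mode]:g}Mbit/s minimum for {mode} mode")
    if rtt_ms > 100.0:
        problems.append("round-trip time exceeds the 100ms limit")
    elif mode == "synchronous" and rtt_ms > 10.0:
        problems.append(
            "round-trip time above the 10ms recommendation for synchronous mode")
    return problems


assert rec_wan_check("consistency", 10.0, 50.0) == []
assert rec_wan_check("synchronous", 100.0, 30.0) == [
    "round-trip time above the 10ms recommendation for synchronous mode"
]
```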


Available Advanced Copy Combinations


Different Advanced Copy types can be combined and used together.

● Restore OPC
For OPC, QuickOPC, SnapOPC, and SnapOPC+, restoration of the copy source from the copy destination is
completed immediately upon request.
Figure 50 Restore OPC
(An OPC, QuickOPC, SnapOPC, or SnapOPC+ session copies from the copy source to the copy destination; Restore OPC performs restoration in the opposite direction, from the copy destination to the copy source.)

● EC or REC Reverse
Restoration can be performed by switching the copy source and destination of the EC or the REC.
Figure 51 EC or REC Reverse
(An EC or REC session from the copy source to the copy destination is reversed, so that the roles of the copy source and copy destination are switched.)


● Multi-Copy
Multiple copy destinations can be set for a single copy source area to obtain multiple backups.
In the multi-copy shown in Figure 52, the entire range that is copied for copy session 1 will be the target for the
multi-copy function.
When copy sessions 1 and 2 are EC/REC, updates to area A in the copy source (update 1) are copied to both copy
destination 1 and copy destination 2.
Updates to areas other than A in the copy source (update 2) are copied only to copy destination 2.
Figure 52 Targets for the Multi-Copy Function
(Copy session 1 copies area A of the copy source to copy destination 1, and copy session 2 copies the entire copy source to copy destination 2. Update 1, in area A, is copied to both destinations; update 2, outside area A, is copied only to copy destination 2.)

Up to eight OPC, QuickOPC, SnapOPC, EC, or REC sessions can be set for a multi-copy.
Figure 53 Multi-Copy
(A single copy source has eight copy destinations; some are in the same ETERNUS DX/AF and others are in different ETERNUS DX/AF storage systems.)

For SnapOPC+, the maximum number of SnapOPC+ copy session generations can be set for a single copy
source area when seven or fewer multi-copy sessions are already set.
Figure 54 Multi-Copy (Including SnapOPC+)
(A single copy source has copy destinations 1 to 4 in multiple ETERNUS DX/AF storage systems, plus copy destinations 5 to 7 that hold SnapOPC+ generation data.)

Note that when the Consistency mode is used, a multi-copy from a single copy source area to two or more copy
destination areas in a single copy destination storage system cannot be performed. Although multiple
multi-copy destinations cannot be set in the same storage system, a multi-copy from the same copy source area
to different copy destination storage systems can be performed.
Figure 55 Multi-Copy (Using the Consistency Mode)
(REC (Consistency) sessions run from one copy source to copy destinations 1, 2, and 3, each located in a different ETERNUS DX/AF storage system.)


When performing a Cascade Copy for an REC session in Consistency mode, the copy source of the session must
not be related to another REC session in Consistency mode with the same destination storage system.
Figure 56 Multi-Copy (Case 1: When Performing a Cascade Copy for an REC Session in Consistency Mode)
(An OPC/QuickOPC/EC session cascades into a copy source that has REC (Consistency) sessions to copy destinations 1, 2, and 3 in different ETERNUS DX/AF storage systems.)
Figure 57 Multi-Copy (Case 2: When Performing a Cascade Copy for an REC Session in Consistency Mode)
(A second configuration example that combines an OPC/QuickOPC/EC cascade with REC (Consistency) sessions to copy destinations 1, 2, and 3.)

● Cascade Copy
A copy destination with a copy session that is set can be used as the copy source of another copy session.
A Cascade Copy is performed by combining two copy sessions.
In Figure 58, "Copy session 1" refers to a copy session in which the copy destination area is also used as the copy
source area of another copy session and "Copy session 2" refers to a copy session in which the copy source area is
also used as the copy destination area of another copy session.
For a Cascade Copy, the copy destination area for copy session 1 and the copy source area for copy session 2
must be identical or the entire copy source area for copy session 2 must be included in the copy destination area
for copy session 1.


A Cascade Copy can be performed when all of the target volumes are the same size or when the copy destination
volume for copy session 2 is larger than the other volumes.
Figure 58 Cascade Copy
(Copy session 1 (OPC/QuickOPC/EC/REC) copies from the copy source to an intermediate volume that is both a copy destination and a copy source. Copy session 2 (OPC/QuickOPC/SnapOPC/SnapOPC+/EC/REC) then copies from that intermediate volume to the final copy destination.)
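The volume-size rule above can be expressed as a simple validity check: either all three volumes are the same size, or only the final destination (the copy destination for copy session 2) is larger. The helper below is only an illustration of the rule as stated here, not an ETERNUS function.

```python
def cascade_sizes_ok(source: int, intermediate: int, destination: int) -> bool:
    """Check the volume-size rule for a two-session Cascade Copy."""
    if source == intermediate == destination:
        return True   # all target volumes are the same size
    # Only the copy destination for copy session 2 may be larger.
    return source == intermediate and destination > intermediate


assert cascade_sizes_ok(100, 100, 100)
assert cascade_sizes_ok(100, 100, 200)     # larger final destination is allowed
assert not cascade_sizes_ok(100, 200, 200) # a larger intermediate volume is not
```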

Table 34 shows the supported combinations when adding a copy session to a copy destination volume where a
copy session has already been configured.
Table 34 Available Cascade Copy Combinations (When a Cascade Copy Performs Session 1 Followed by Session 2)

  (Columns show copy session 1; rows show copy session 2. "REC sync." is the REC synchronous transmission mode.)

  Copy session 2     OPC     QuickOPC    SnapOPC  SnapOPC+  EC      REC sync.  REC Stack  REC Consistency
  OPC                ○ (*1)  ○ (*1)      ×        ×         ○       ○          ○          ○
  QuickOPC           ○ (*1)  ○ (*1)(*2)  ×        ×         ○       ○          ○          ○
  SnapOPC            ○ (*1)  ○ (*1)      ×        ×         ○       ○          ○          ○
  SnapOPC+           ○ (*1)  ○ (*1)      ×        ×         ○       ○          ○          ○
  EC                 ○       ○           ×        ×         ○       ○          ○          ○
  REC sync.          ○ (*3)  ○ (*3)      ×        ×         ○ (*3)  ○ (*3)     ○ (*3)     ○ (*3)(*4)
  REC Stack          ○       ○           ×        ×         ○       ○          ○          ○
  REC Consistency    ○ (*3)  ○ (*3)      ×        ×         ○ (*3)  ○          ○ (*3)     ○ (*3)(*4)

○: Possible, ×: Not possible

*1: When copy session 2 is an OPC, QuickOPC, SnapOPC, or SnapOPC+ session, data in the copy destination of
copy session 1 is backed up. Data is not backed up in the copy source of copy session 1.
*2: This combination is supported only if the copy size in both the copy source volume and the copy destination
volume is less than 2TB.
If the copy size is 2TB or larger, perform the following operations instead.


• When performing a temporary recovery


Use a Cascade Copy of QuickOPC (copy session 1) and OPC (copy session 2).
• When backing up two generations
Use a multi-copy that is configured with QuickOPC and QuickOPC.
*3: A Cascade Copy cannot be performed when a copy source and destination volume is in an older ETERNUS
storage system model.
*4: When copy session 1 uses the REC Consistency mode, the data transmission sequence of copy session 1 is
guaranteed, but the data transmission sequence of copy session 2 is not guaranteed.
Table 35 shows the supported combinations when adding a copy session to a copy source volume where a copy
session has already been configured.
Table 35 Available Cascade Copy Combinations (When a Cascade Copy Performs Session 2 Followed by Session 1)

  (Columns show copy session 2; rows show copy session 1. "REC sync." is the REC synchronous transmission mode.)

  Copy session 1     OPC  QuickOPC  SnapOPC  SnapOPC+  EC  REC sync.  REC Stack  REC Consistency
  OPC                ○    ○         ○        ○         ○   ○          ○          ○
  QuickOPC           ○    ○ (*1)    ○        ○         ○   ○          ○          ○
  SnapOPC            ×    ×         ×        ×         ×   ×          ×          ×
  SnapOPC+           ×    ×         ×        ×         ×   ×          ×          ×
  EC                 ○    ○         ○        ○         ○   ○          ○          ○
  REC sync.          ○    ○         ○        ○         ○   ○          ○          ○
  REC Stack          ○    ○         ○        ○         ○   ○          ○          ○
  REC Consistency    ○    ○         ○        ○         ○   ○ (*2)     ○          ○ (*2)

○: Possible, ×: Not possible

*1: This combination is supported only if the copy size in both the copy source volume and the copy destination
volume is less than 2TB.
If the copy size is 2TB or larger, perform the following operations instead.
• When performing a temporary recovery
Use a Cascade Copy of QuickOPC (copy session 1) and OPC (copy session 2).
• When backing up two generations
Use a multi-copy that is configured with QuickOPC and QuickOPC.
*2: When copy session 1 uses the REC Consistency mode, the data transmission sequence of copy session 1 is
guaranteed, but the data transmission sequence of copy session 2 is not guaranteed.


• To suspend a Cascade Copy where session 1 is performed before session 2 and session 2 is an EC or REC
session, perform the Suspend command after the physical copy for copy session 1 is complete.
• A Cascade Copy can be performed when the copy type for copy session 1 is XCOPY or ODX. The copy destination
area for XCOPY or ODX and the copy source area for copy session 2 do not have to be completely identical.
For example, a Cascade Copy can be performed when the copy source area for copy session 2 is only part of
the copy destination area for copy session 1.
XCOPY or ODX cannot be set as the copy type for copy session 2 in a Cascade Copy.
• For more details on XCOPY and ODX, refer to "Server Linkage Functions" (page 130).
• To acquire valid backup data in the copy destination for copy session 2, a physical copy must be completed
or suspended in all of the copy sessions that configure the Cascade Copy. Check the copy status for copy
sessions 1 and 2 when using the backup data.
However, if a Cascade Copy performs session 1 before session 2, and copy session 1 is an OPC or QuickOPC
session and copy session 2 is an OPC, QuickOPC, SnapOPC, or SnapOPC+ session, the data in the copy
destination for copy session 2 is available even during a physical copy.
• If copy session 1 is an EC or REC session and copy session 2 is an OPC, QuickOPC, SnapOPC, or SnapOPC+
session, setting copy session 2 after setting copy session 1 to an equivalent or suspended state is recom-
mended.
• When stopping an OPC or QuickOPC session for copy session 1 during a physical copy, stop copy session 2 in
advance if copy session 2 is an OPC, QuickOPC, SnapOPC, or SnapOPC+ session.
• If copy session 2 is an EC or REC session, copy session 2 does not transition to an equivalent state until the
physical copy for copy session 1 is complete. For an EC session, a copy session cannot be suspended until
the session transitions to an equivalent state.
• If a Cascade Copy performs session 1 before session 2, and copy session 1 is an OPC or QuickOPC session, the
logical data in the intermediate volume when copy session 2 is started (the copy destination volume for
copy session 1) is copied to the copy destination volume for copy session 2. A logical data copy is shown
below.
[Figure: copy session 1 (OPC/QuickOPC) copies the copy source to an intermediate volume; copy session 2 (OPC/QuickOPC/SnapOPC/SnapOPC+/EC/REC) copies the intermediate volume to the final copy destination]
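The session-type rules in this section can be collected into a small validity check. The following Python sketch encodes only the rules stated above; the function names and rule sets are illustrative, not part of any ETERNUS tool, and session-1 types are limited to those this section mentions:

```python
# Illustrative encoding of the Cascade Copy rules described above.
SESSION1_TYPES = {"OPC", "QuickOPC", "EC", "REC", "XCOPY", "ODX"}
SESSION2_TYPES = {"OPC", "QuickOPC", "SnapOPC", "SnapOPC+", "EC", "REC"}  # no XCOPY/ODX

def cascade_allowed(session1, session2):
    """True if the pair of copy types can form a Cascade Copy (session 1 -> session 2)."""
    return session1 in SESSION1_TYPES and session2 in SESSION2_TYPES

def dest2_usable_during_copy(session1, session2):
    """True if the session-2 destination data is valid even while physical
    copies are still running (session 1 performed before session 2)."""
    return (session1 in {"OPC", "QuickOPC"}
            and session2 in {"OPC", "QuickOPC", "SnapOPC", "SnapOPC+"})

print(cascade_allowed("QuickOPC", "ODX"))                # False: ODX cannot be session 2
print(dest2_usable_during_copy("QuickOPC", "SnapOPC+"))  # True
print(dest2_usable_during_copy("EC", "OPC"))             # False: wait for equivalent/suspend
```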


A Cascade Copy that uses three copy sessions can be performed with the following configuration.
Figure 59 Cascade Copy (Using Three Copy Sessions)
[Figure: Copy source → copy session 1 (OPC/QuickOPC/EC) → copy destination and source → copy session 2 (REC, Stack mode) → copy destination and source → copy session 3 (OPC/QuickOPC/SnapOPC/SnapOPC+/EC) → copy destination]

A Cascade Copy that uses four copy sessions can be performed with the following configuration.
However, two EC sessions in the copy destination ETERNUS device cannot be "Active" at the same time.
Figure 60 Cascade Copy (Using Four Copy Sessions)
[Figure: Two configurations, each spanning two ETERNUS DX/AF storage systems with volumes A to E.
Configuration 1: A → B (EC, copy session 1), B → C (REC Stack mode, copy session 2), C → D (EC, copy session 3), D → E (EC, copy session 4).
Configuration 2: A → B (QuickOPC, copy session 1), B → C (REC Stack mode, copy session 2), C → D (EC, copy session 3), D → E (EC, copy session 4).]


Performance Tuning

Striping Size Expansion


Striping Size Expansion is a function that expands the stripe depth by specifying a stripe depth value when creating a RAID group.
Expanding the stripe size enables advanced performance tuning. For normal operations, the default value does not need to be changed.
A larger stripe depth reduces the number of drives accessed per I/O. Because fewer commands are issued to the drives, the access performance of the corresponding RAID1+0 RAID groups improves. Note, however, that a larger stripe depth may reduce the sequential write performance for RAID5.
The stripe depth values that are available for each RAID type are shown below.
Table 36 Available Stripe Depth

RAID type                      Drive configuration (*1)   Available stripe depth
Mirroring (RAID1)              1D+1M                      —
High performance (RAID1+0) /   All drive configurations   64KB, 128KB, 256KB, 512KB, and 1,024KB
Striping (RAID0)
High capacity (RAID5)          2D+1P to 4D+1P             64KB, 128KB, 256KB, and 512KB
                               5D+1P to 8D+1P             64KB, 128KB, and 256KB
                               9D+1P to 15D+1P            64KB and 128KB
Reliability (RAID5+0) /        All drive configurations   64KB
High reliability (RAID6) /
High reliability (RAID6-FR)

*1: D: Data, M: Mirror, P: Parity

• Changing this setting can enhance random access read/write performance; however, note that performance can also degrade, depending on the system in use.
• The following restrictions apply to RAID groups with expanded stripe sizes:
- Encryption and Logical Device Expansion cannot be performed on the volumes that belong to the RAID group.
- RAID groups configured with different stripe sizes cannot coexist in the same TPP or FTSP.
- A WSV cannot be configured by concatenating RAID groups with different stripe sizes.
• "Stripe Depth 512KB" cannot be specified for a "RAID5 (4D+1P)" configuration that is used for TPPs and FTSPs.
• "Stripe Depth 256KB" cannot be specified for a "RAID5 (8D+1P)" configuration that is used for TPPs and FTSPs.
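The effect of stripe depth on the number of drives accessed can be illustrated with a rough Python sketch. The layout below is deliberately simplified (it ignores parity rotation and other real RAID details); the function names are mine:

```python
# Simplified striping model: logical offsets are distributed across the
# data drives of a RAID group in units of the stripe depth.

def locate(offset_kb, stripe_depth_kb, data_drives):
    """Map a logical offset (KB) to (drive index, offset within the stripe)."""
    stripe_no = offset_kb // stripe_depth_kb
    return stripe_no % data_drives, offset_kb % stripe_depth_kb

def drives_touched(start_kb, length_kb, stripe_depth_kb, data_drives):
    """Count the distinct data drives a sequential I/O touches."""
    touched = {locate(kb, stripe_depth_kb, data_drives)[0]
               for kb in range(start_kb, start_kb + length_kb)}
    return len(touched)

# A 256KB sequential read on a group with 4 data drives:
print(drives_touched(0, 256, 64, 4))    # 4 drives at 64KB stripe depth
print(drives_touched(0, 256, 512, 4))   # 1 drive at 512KB stripe depth
```

With a 64KB depth the I/O is split across all four data drives; at 512KB it stays on one drive, which is why a larger depth reduces the command count per drive.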


Assigned CMs
To balance the load within the ETERNUS DX, a controller that controls access is assigned to each RAID group. The controller that controls a RAID group is called its assigned CM.
Figure 61 Assigned CMs
[Figure: Servers access the storage through switches connected to CM#0 and CM#1 in the ETERNUS DX. Each of RAID group #0, #1, and #2 has an assigned CM (CM#0 or CM#1), and the assigned CM of each RAID group controls access to it.]

When the load is unbalanced between the controllers, change the assigned CM.
If an assigned controller is disconnected for any reason, another controller takes over as the assigned CM. After the disconnected controller is installed again and returns to normal status, it becomes the assigned CM again.
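The takeover behavior described above can be pictured with a minimal sketch (the function, the CM names, and the "first surviving controller" fallback are illustrative, not the product's internal logic):

```python
# Conceptual model of assigned-CM ownership: a RAID group is controlled by
# its preferred CM while that CM is online, and by a surviving CM otherwise.

def assigned_cm(preferred, online_cms):
    """Return the CM that currently controls the RAID group."""
    return preferred if preferred in online_cms else online_cms[0]

print(assigned_cm("CM#0", ["CM#0", "CM#1"]))  # CM#0 (normal state)
print(assigned_cm("CM#0", ["CM#1"]))          # CM#1 (CM#0 disconnected)
print(assigned_cm("CM#0", ["CM#0", "CM#1"]))  # CM#0 (ownership returns on recovery)
```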


Smart Setup Wizard


The Smart Setup Wizard is a wizard that simplifies the creation of Thin Provisioning Pools and configuration of
host affinity for configurations enabled with Thin Provisioning.
For the procedure on configuration using the Smart Setup Wizard, refer to "Configuration Guide (Basic)".

If a Thin Provisioning Pool has not been created, the Thin Provisioning Pool configuration is automatically
determined based on the type of drives and the number of drives installed in the ETERNUS DX.
• The priority for selecting drive types is as follows.
SSD > SSD SED > SAS > SAS SED > Nearline SAS > Nearline SAS SED
If multiple drive types exist, the drive type with the highest priority is selected to create a Thin Provisioning Pool.
This wizard cannot be used to create another Thin Provisioning Pool with the unselected drive types; use the dedicated function provided by this storage system instead.
• The RAID levels and the number of drives for RAID groups that configure the Thin Provisioning Pool are as
follows.

Drive type                                         RAID level   Number of drives
SSD and SSD SED                                    RAID5        5 to 48
SAS, SAS SED, Nearline SAS, and Nearline SAS SED   RAID6        7 to 48

• A Global Hot Spare is registered for each Thin Provisioning Pool.
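The drive-type selection order above can be sketched in a few lines of Python (the function name and the set-based interface are mine, not the wizard's actual API):

```python
# Drive-type priority used by the Smart Setup Wizard, highest first.
PRIORITY = ["SSD", "SSD SED", "SAS", "SAS SED", "Nearline SAS", "Nearline SAS SED"]

def pick_drive_type(installed):
    """Return the installed drive type with the highest selection priority."""
    for drive_type in PRIORITY:
        if drive_type in installed:
            return drive_type
    return None  # no supported drive types installed

print(pick_drive_type({"SAS", "Nearline SAS"}))      # SAS
print(pick_drive_type({"SSD SED", "Nearline SAS"}))  # SSD SED
```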

The following shows an example of creating a Thin Provisioning Pool using the Smart Setup Wizard.

● For SSDs and SSD SEDs


RAID groups are created with RAID5, which has high storage efficiency.
Table 37 shows a guideline for the number of drives and user capacities when 1.92TB SSDs are installed and
Figure 62 shows an example RAID configuration.
Table 37 Guideline for the Number of Drives and User Capacities (When 1.92TB SSDs Are Installed)
The user data area is shown as the equivalent number of drives.

Number of          RAID configuration that is to be created               User capacity
installed drives   RAID group   User data area   Hot spare   Unused drive   Per RAID group   Per storage system
4 or less          RAID groups cannot be created
5                  RAID5 × 1    3                1           0              Approx. 5.2TB    Approx. 5.2TB
6                  RAID5 × 1    4                1           0              Approx. 6.9TB    Approx. 6.9TB
7                  RAID5 × 1    4                1           1              Approx. 6.9TB    Approx. 6.9TB
8                  RAID5 × 1    6                1           0              Approx. 10.4TB   Approx. 10.4TB
9                  RAID5 × 1    7                1           0              Approx. 12.2TB   Approx. 12.2TB
10                 RAID5 × 1    8                1           0              Approx. 13.9TB   Approx. 13.9TB
11                 RAID5 × 2    8                1           0              Approx. 6.9TB    Approx. 13.9TB
12                 RAID5 × 2    8                1           1              Approx. 6.9TB    Approx. 13.9TB
13                 RAID5 × 3    9                1           0              Approx. 5.2TB    Approx. 15.7TB
14                 RAID5 × 3    9                1           1              Approx. 5.2TB    Approx. 15.7TB
15                 RAID5 × 2    12               1           0              Approx. 10.4TB   Approx. 20.9TB
16                 RAID5 × 2    12               1           1              Approx. 10.4TB   Approx. 20.9TB
17                 RAID5 × 2    14               1           0              Approx. 12.2TB   Approx. 24.4TB
18                 RAID5 × 2    14               1           1              Approx. 12.2TB   Approx. 24.4TB
19                 RAID5 × 2    16               1           0              Approx. 13.9TB   Approx. 27.9TB
20                 RAID5 × 2    16               1           1              Approx. 13.9TB   Approx. 27.9TB
21                 RAID5 × 4    16               1           0              Approx. 6.9TB    Approx. 27.9TB
22                 RAID5 × 3    18               1           0              Approx. 10.4TB   Approx. 31.4TB
23                 RAID5 × 3    18               1           1              Approx. 10.4TB   Approx. 31.4TB
24                 RAID5 × 3    18               1           2              Approx. 10.4TB   Approx. 31.4TB
25                 RAID5 × 3    21               1           0              Approx. 12.2TB   Approx. 36.6TB
26                 RAID5 × 3    21               1           1              Approx. 12.2TB   Approx. 36.6TB
27                 RAID5 × 3    21               1           2              Approx. 12.2TB   Approx. 36.6TB
28                 RAID5 × 3    24               1           0              Approx. 13.9TB   Approx. 41.8TB
29                 RAID5 × 4    24               1           0              Approx. 10.4TB   Approx. 41.8TB
30                 RAID5 × 4    24               1           1              Approx. 10.4TB   Approx. 41.8TB
31                 RAID5 × 6    24               1           0              Approx. 6.9TB    Approx. 41.8TB
32                 RAID5 × 6    24               1           1              Approx. 6.9TB    Approx. 41.8TB
33                 RAID5 × 4    28               1           0              Approx. 12.2TB   Approx. 48.8TB
34                 RAID5 × 4    28               1           1              Approx. 12.2TB   Approx. 48.8TB
35                 RAID5 × 4    28               1           2              Approx. 12.2TB   Approx. 48.8TB
36                 RAID5 × 5    30               1           0              Approx. 10.4TB   Approx. 52.3TB
37                 RAID5 × 4    32               1           0              Approx. 13.9TB   Approx. 55.8TB
38                 RAID5 × 4    32               1           1              Approx. 13.9TB   Approx. 55.8TB
39                 RAID5 × 4    32               1           2              Approx. 13.9TB   Approx. 55.8TB
40                 RAID5 × 4    32               1           3              Approx. 13.9TB   Approx. 55.8TB
41                 RAID5 × 5    35               1           0              Approx. 12.2TB   Approx. 61.0TB
42                 RAID5 × 5    35               1           1              Approx. 12.2TB   Approx. 61.0TB
43                 RAID5 × 6    36               1           0              Approx. 10.4TB   Approx. 62.8TB
44                 RAID5 × 6    36               1           1              Approx. 10.4TB   Approx. 62.8TB
45                 RAID5 × 6    36               1           2              Approx. 10.4TB   Approx. 62.8TB
46                 RAID5 × 5    40               1           0              Approx. 13.9TB   Approx. 69.8TB
47                 RAID5 × 5    40               1           1              Approx. 13.9TB   Approx. 69.8TB
48                 RAID5 × 5    40               1           2              Approx. 13.9TB   Approx. 69.8TB


Figure 62 RAID Configuration Example (When 12 SSDs Are Installed)

RAID5 × 2, Hot spare × 1, Unused drive × 1
- RAID5 group 1 (*1): Drive#0 to Drive#4
- RAID5 group 2 (*1): Drive#5 to Drive#9
- Hot spare: Drive#10
- Unused drive: Drive#11

*1: The capacity of the user data area is equivalent to four drives.

● For SAS Disks, SAS SEDs, Nearline SAS Disks, and Nearline SAS SEDs
RAID groups are created with RAID6, which has high reliability.
Table 38 shows a guideline for the number of drives and user capacities when 1.2TB SAS disks are installed and
Figure 63 shows an example RAID configuration.
Table 38 Guideline for the Number of Drives and User Capacities (When 1.2TB SAS Disks Are Installed)
The user data area is shown as the equivalent number of drives.

Number of          RAID configuration that is to be created               User capacity
installed drives   RAID group   User data area   Hot spare   Unused drive   Per RAID group   Per storage system
6 or less          RAID groups cannot be created
7                  RAID6 × 1    4                1           0              Approx. 4.2TB    Approx. 4.2TB
8                  RAID6 × 1    4                1           1              Approx. 4.2TB    Approx. 4.2TB
9                  RAID6 × 1    6                1           0              Approx. 6.4TB    Approx. 6.4TB
10                 RAID6 × 1    7                1           0              Approx. 7.4TB    Approx. 7.4TB
11                 RAID6 × 1    8                1           0              Approx. 8.5TB    Approx. 8.5TB
12                 RAID6 × 1    8                1           1              Approx. 8.5TB    Approx. 8.5TB
13                 RAID6 × 2    8                1           0              Approx. 4.2TB    Approx. 8.5TB
14                 RAID6 × 2    8                1           1              Approx. 4.2TB    Approx. 8.5TB
15                 RAID6 × 2    8                1           2              Approx. 4.2TB    Approx. 8.5TB
16                 RAID6 × 2    8                1           3              Approx. 4.2TB    Approx. 8.5TB
17                 RAID6 × 2    12               1           0              Approx. 6.4TB    Approx. 12.8TB
18                 RAID6 × 2    12               1           1              Approx. 6.4TB    Approx. 12.8TB
19                 RAID6 × 2    14               1           0              Approx. 7.4TB    Approx. 14.9TB
20                 RAID6 × 2    14               1           1              Approx. 7.4TB    Approx. 14.9TB
21                 RAID6 × 2    16               1           0              Approx. 8.5TB    Approx. 17.0TB
22                 RAID6 × 2    16               1           1              Approx. 8.5TB    Approx. 17.0TB
23                 RAID6 × 2    16               1           2              Approx. 8.5TB    Approx. 17.0TB
24                 RAID6 × 2    16               1           3              Approx. 8.5TB    Approx. 17.0TB
25                 RAID6 × 3    18               1           0              Approx. 6.4TB    Approx. 19.2TB
26                 RAID6 × 3    18               1           1              Approx. 6.4TB    Approx. 19.2TB
27                 RAID6 × 3    18               1           2              Approx. 6.4TB    Approx. 19.2TB
28                 RAID6 × 3    21               1           0              Approx. 7.4TB    Approx. 22.4TB
29                 RAID6 × 3    21               1           1              Approx. 7.4TB    Approx. 22.4TB
30                 RAID6 × 3    21               1           2              Approx. 7.4TB    Approx. 22.4TB
31                 RAID6 × 3    24               1           0              Approx. 8.5TB    Approx. 25.6TB
32                 RAID6 × 3    24               1           1              Approx. 8.5TB    Approx. 25.6TB
33                 RAID6 × 4    24               1           0              Approx. 6.4TB    Approx. 25.6TB
34                 RAID6 × 4    24               1           1              Approx. 6.4TB    Approx. 25.6TB
35                 RAID6 × 4    24               1           2              Approx. 6.4TB    Approx. 25.6TB
36                 RAID6 × 4    24               1           3              Approx. 6.4TB    Approx. 25.6TB
37                 RAID6 × 4    28               1           0              Approx. 7.4TB    Approx. 29.8TB
38                 RAID6 × 4    28               1           1              Approx. 7.4TB    Approx. 29.8TB
39                 RAID6 × 4    28               1           2              Approx. 7.4TB    Approx. 29.8TB
40                 RAID6 × 4    28               1           3              Approx. 7.4TB    Approx. 29.8TB
41                 RAID6 × 4    32               1           0              Approx. 8.5TB    Approx. 34.1TB
42                 RAID6 × 4    32               1           1              Approx. 8.5TB    Approx. 34.1TB
43                 RAID6 × 4    32               1           2              Approx. 8.5TB    Approx. 34.1TB
44                 RAID6 × 4    32               1           3              Approx. 8.5TB    Approx. 34.1TB
45                 RAID6 × 4    32               1           4              Approx. 8.5TB    Approx. 34.1TB
46                 RAID6 × 5    35               1           0              Approx. 7.4TB    Approx. 37.3TB
47                 RAID6 × 5    35               1           1              Approx. 7.4TB    Approx. 37.3TB
48                 RAID6 × 5    35               1           2              Approx. 7.4TB    Approx. 37.3TB

Figure 63 RAID Configuration Example (When 15 SAS Disks Are Installed)

RAID6 × 2, Hot spare × 1, Unused drive × 2
- RAID6 group 1 (*1): Drive#0 to Drive#5
- RAID6 group 2 (*1): Drive#6 to Drive#11
- Hot spare: Drive#12
- Unused drives: Drive#13, Drive#14

*1: The capacity of the user data area is equivalent to four drives.

3. SAN Functions

This chapter describes the functions that are available when a SAN connection is used.

Operations Optimization (Deduplication/Compression)

• In a single controller configuration, the Deduplication/Compression function cannot be used.
• The Deduplication/Compression function cannot be used if the Unified kit/Unified License is installed.

Deduplication/Compression
The Deduplication/Compression function analyzes the write data from the server in 4KB units for duplicates and writes duplicated data only once. After the first write, the data is referenced instead of being written again, which reduces the total write size. The Compression function achieves further data reduction.
The Deduplication/Compression function can be used with the ETERNUS DX200 S4/DX200 S3.
The function can perform both deduplication and compression at the same time, or perform only deduplication or only compression.
Overviews of the Deduplication/Compression function, the Deduplication function, and the Compression function are described below.
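The 4KB-unit deduplication described above can be illustrated with a toy sketch. This is not the product algorithm — the fingerprinting scheme, data structures, and names are mine — but it shows why a duplicate block is stored only once and referenced afterwards:

```python
# Toy block-level deduplication: split write data into 4KB blocks, store each
# unique block once, and record references for repeated blocks.
import hashlib

BLOCK = 4096
store = {}        # fingerprint -> unique block contents (physical storage)
volume_map = []   # logical block index -> fingerprint (reference table)

def write(data):
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)   # stored once; later writes only reference it
        volume_map.append(fp)

write(b"A" * BLOCK + b"B" * BLOCK)   # Data 1: blocks A, B
write(b"A" * BLOCK + b"C" * BLOCK)   # Data 2: blocks A, C (A is a duplicate)

print(len(volume_map))  # 4 logical blocks written by the servers
print(len(store))       # 3 unique blocks physically stored
```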


● Deduplication/Compression Function
This function removes duplicate data blocks, compresses the remaining data blocks, and then stores the data.
Figure 64 Deduplication/Compression Overview
[Figure: Business server 1 writes Data 1 (blocks A, B, C, D) and business server 2 writes Data 2 (blocks A, B, E, F). The writes are analyzed, deduplication removes the duplicate blocks A and B, compression compresses the remaining blocks, and blocks A, B, C, D, E, F are stored once in the target volume for deduplication/compression in the ETERNUS DX.]

● Deduplication Function
This function removes duplicate data blocks and stores the data.
Figure 65 Deduplication Overview
[Figure: The same writes as Figure 64. Deduplication removes the duplicate blocks A and B, and blocks A, B, C, D, E, F are stored uncompressed in the target volume for deduplication in the ETERNUS DX.]


● Compression Function
This function compresses each data block and stores the data.
Figure 66 Compression Overview
[Figure: Data 1 (blocks A, B, C, D) and Data 2 (blocks A, B, E, F) are each compressed block by block and stored. No duplicate removal is performed, so all eight blocks are stored in compressed form in the target volume for compression in the ETERNUS DX.]

The following table provides the function specifications for the Deduplication/Compression.
Table 39 Deduplication/Compression Function Specifications

Item                                                             Specification
Model                                                            ETERNUS DX200 S4/DX200 S3
Number of TPPs available for Deduplication/Compression           4
settings
Maximum logical capacity that can be a deduplication/            When one pool has one RAID group: up to five times the
compression target (*1)                                          DEDUP_SYS Volume (*2)
                                                                 When one pool has two or more RAID groups: up to ten
                                                                 times the DEDUP_SYS Volume (*2)
Logical capacity of the DEDUP_SYS Volume (*3)                    Expandable from 8TB (default) to 128TB (maximum)
Logical capacity of the DEDUP_MAP Volume (*3)                    Fixed (5,641,339MB)
Volume type: TPV                                                 ○ (*4)
Volume type: Standard / FTV / WSV / SDV / SDPV / VVOL / ODX      ×

*1: To perform an efficient load balance of the Deduplication/Compression process, configuring two or more RAID groups per pool is recommended.
*2: If a Deduplication/Compression Volume is created or expanded, expand the DEDUP_SYS Volume according to the total capacity of the Deduplication/Compression Volumes. If the efficiency of the Deduplication/Compression function cannot be estimated, the recommended total capacity of Deduplication/Compression Volumes is a capacity smaller than the logical capacity of the DEDUP_SYS Volume.
*3: The Deduplication/Compression function can create Deduplication/Compression Volumes whose capacity is equal to or larger than a DEDUP_SYS Volume. In environments where deduplication/compression is not effective, a write operation to the Deduplication/Compression Volume may fail due to a capacity shortage of the DEDUP_SYS Volume.


In addition, if the DEDUP_SYS Volume capacity runs out or is close to running out, an SNMP Trap is sent.
*4: NAS user volumes are not supported.
The Memory Extension must be installed to use the Deduplication/Compression function. The Memory Extension
is installed as standard in the controller module for standard models of the ETERNUS DX200 S4.
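The sizing rule from Table 39 can be sketched as a quick calculation. The function name is illustrative, and the multipliers (5× for one RAID group per pool, 10× for two or more) are the guideline values from the table, not hard limits enforced by this snippet:

```python
# Guideline from Table 39: maximum logical capacity that can be a
# deduplication/compression target, relative to the DEDUP_SYS Volume size.

def max_dedup_target_tb(dedup_sys_tb, raid_groups_in_pool):
    factor = 5 if raid_groups_in_pool == 1 else 10
    return dedup_sys_tb * factor

print(max_dedup_target_tb(8, 1))    # 40   (default 8TB DEDUP_SYS, one RAID group)
print(max_dedup_target_tb(128, 2))  # 1280 (maximum 128TB, two or more RAID groups)
```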

● Performance When Using the Deduplication/Compression Function


The ETERNUS DX performs data deduplication/compression in synchronization with the I/O from the server.
• Using this function in environments where random access occurs is recommended. Because data is appended, it is stored intermittently in Deduplication/Compression Volumes.
• I/O response may significantly degrade compared to systems that do not use the Deduplication/Compression function.
• Using this function in environments where the I/O size is 32KB or smaller is recommended. Performance is affected in environments with large I/O sizes because data is deduplicated and compressed in 4KB units.
• If the I/O size or the I/O address boundaries are not multiples of 4KB, performance is affected because the partial 4KB areas must first be read within the ETERNUS DX.
• If I/Os are issued to Deduplication/Compression Volumes, the CPU usage rate increases. The performance of non-Deduplication/Compression Volumes may also be affected.

• Performance may decline when the Deduplication/Compression function is enabled. Using the Deduplication/Compression function is not recommended for volumes that store performance-sensitive data.
• Batch process (or sequential access) performance significantly degrades because data is written to the drives intermittently and a large number of references and updates occur. In environments where sequential access occurs, using the Deduplication/Compression function is not recommended.
• The Deduplication/Compression function is a disadvantage in terms of performance and capacity if a volume that stores data for which deduplication/compression is not effective (such as videos) is set as a Deduplication/Compression Volume. In such cases, enable only the Deduplication function or only the Compression function.
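The 4KB-alignment point above can be made concrete with a small sketch: an I/O whose offset or size is not a multiple of 4KB only partially covers the blocks at its ends, and those blocks must be read before the write can be deduplicated/compressed. The function and its counting scheme are illustrative:

```python
# Count 4KB blocks that an I/O only partially covers (and so must be read first).
BLOCK = 4096

def partial_blocks(offset, size):
    head = 1 if offset % BLOCK else 0
    end = offset + size
    tail = 1 if end % BLOCK and (end // BLOCK != offset // BLOCK or not head) else 0
    return head + tail

print(partial_blocks(0, 8192))     # 0: fully aligned, no extra reads
print(partial_blocks(2048, 4096))  # 2: both ends straddle 4KB boundaries
print(partial_blocks(512, 1024))   # 1: a single block is partially covered
```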

● Configuration Method
• Enabling the Deduplication/Compression function
From ETERNUS Web GUI or ETERNUS CLI, enable the Deduplication/Compression function for the TPP. Not only
can the Deduplication/Compression function be enabled, but the Deduplication function or the Compression
function can be individually enabled.
The Deduplication/Compression function can be enabled by performing one of the methods below.
Table 40 Method for Enabling the Deduplication/Compression Function

Condition of the TPP                                    Chunk size (*1)    Creation method
A newly created TPP                                     21MB               Select AUTO mode and specify an option for the
                                                                           Deduplication/Compression function to enable the
                                                                           function (*2)
A TPP that is created with a controller firmware        21MB               Select a target TPP to enable the Deduplication/
version earlier than V10L70                                                Compression function (*3)
A TPP that is created with a controller firmware        21MB               Select a target TPP to enable the Deduplication/
version V10L70 and later (the Dedup Ready setting                          Compression function (*3) (*4)
is specified when the TPP is created)
A TPP that is created with a controller firmware        21MB               Select a target TPP to enable the Deduplication/
version V10L70 and later (the Dedup Ready setting                          Compression function (*3) (*4)
is not specified when the TPP is created)               Other than 21MB    The Deduplication/Compression function cannot be
                                                                           enabled (*5)

*1: The chunk size can be checked in the detail display of the TPP.
*2: This setting is available only if the TPP is created with the AUTO mode. In consideration of the load balancing for the Deduplication/Compression process, using this creation method is recommended. If the TPP can only be configured with one RAID group, the Deduplication/Compression function cannot be enabled.
*3: The Deduplication/Compression function can be enabled even if the TPP is configured with only one RAID group. However, if the TPP is configured with one RAID group, the load of the Deduplication/Compression processes cannot be balanced efficiently. Enabling the Deduplication/Compression function for a TPP that is configured with two or more RAID groups is recommended.
*4: To create a TPP with a chunk size of 21MB, specify the Dedup Ready option when creating the TPP.
*5: The Deduplication/Compression function cannot be enabled if the chunk size of the TPP is not 21MB.
• Configuration method for the Deduplication/Compression function
Select the TPP where the Deduplication/Compression function is enabled, and create Deduplication/Compression Volumes (TPVs) in the selected TPP.
If I/O load exists in the ETERNUS DX, enabling or disabling the Deduplication/Compression function for a TPP may take time. In this case, changing the Deduplication/Compression setting one TPP at a time is recommended.
Specify whether to enable or disable the Deduplication/Compression function for each TPV. TPVs (or Deduplication/Compression Volumes) where the Deduplication/Compression function is enabled and disabled can exist together within the same TPP. However, these two types of TPVs should be located in separate TPPs.
Deduplication is performed for Deduplication/Compression Volumes within the same TPP. Deduplication is not performed for data in different TPPs. In some cases, deduplication might not be performed even within the same TPP.
To enable the Deduplication/Compression function for existing volumes, use the RAID Migration function.
Volumes that are to be created, and the Deduplication/Compression setting for TPPs where the target volumes can be created, vary depending on the selection of "Deduplication" and "Compression".
• Volumes that are to be created
Table 41 Volumes That Are to Be Created Depending on the Selection of "Deduplication" and "Compression"

Deduplication   Compression   Volumes that are to be created
Enable          Enable        Deduplication/Compression Volumes where both Deduplication and Compression are enabled
Enable          Disable       Deduplication/Compression Volumes where only Deduplication is enabled
Disable         Enable        Deduplication/Compression Volumes where only Compression is enabled
Disable         Disable       TPVs for SAN where both Deduplication and Compression are disabled


• Deduplication/Compression setting for TPPs where the volumes can be created


Table 42 Deduplication/Compression Setting for TPPs Where the Target Volumes Can Be Created

Volume condition               Deduplication/Compression setting for the destination TPP
Deduplication   Compression    Both enabled   Only Deduplication enabled   Only Compression enabled   Both disabled
Enable          Enable         ○              ×                            ×                          ×
Enable          Disable        ×              ○                            ×                          ×
Disable         Enable         ×              ×                            ○                          ×
Disable         Disable        ○              ○                            ○                          ○

○: Volumes can be created
×: Volumes cannot be created

TPPs with the Deduplication/Compression function enabled have one of the following attributes: deduplication/compression, deduplication only, or compression only. The Deduplication/Compression Volume conforms to the attribute of the TPP where the volume is created. TPVs where the Deduplication/Compression function is enabled and disabled can exist together within each TPP.

● Deduplication/Compression System Volumes


The following internal volumes are created for each TPP where the Deduplication/Compression function is enabled.
• One DEDUP_SYS Volume
• Two DEDUP_MAP Volumes
A single DEDUP_MAP Volume is created if the TPP only has one RAID group.
Check that the remaining area in the pool is sufficient before enabling the Deduplication/Compression function for TPPs, because the DEDUP_SYS Volume and the DEDUP_MAP Volumes are created within the maximum pool capacity.
Because the data after a deduplication/compression is stored in the DEDUP_SYS Volume, add RAID groups to the TPP or expand the DEDUP_SYS Volume before the usage rate of the TPP or the usage rate of the DEDUP_SYS Volume reaches 100%.
DEDUP_SYS Volumes cannot be expanded to a capacity larger than 128TB. If the required capacity of the DEDUP_SYS Volume exceeds 128TB, use the RAID Migration function to migrate the Deduplication/Compression Volumes in the TPP to non-Deduplication/Compression Volumes (TPVs) or other TPPs.
Apart from the data after deduplication/compression, control information is written to the DEDUP_SYS Volume and the DEDUP_MAP Volume. The physical capacity used for the control information is the total of a fixed capacity of up to 4GB and a variable capacity (1 - 15%) according to the written size from the server.
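A hedged sketch of the capacity guidance above: warn before TPP or DEDUP_SYS Volume usage reaches 100% so RAID groups can be added or the volume expanded in time. The function name and the 90% warning threshold are illustrative choices, not product defaults:

```python
# Simple usage-ratio check for capacity monitoring of the DEDUP_SYS Volume
# (or the TPP as a whole). Expand before the ratio reaches 100%.

def needs_expansion(used_tb, total_tb, warn_ratio=0.9):
    return used_tb / total_tb >= warn_ratio

print(needs_expansion(7.4, 8.0))     # True:  92% of an 8TB DEDUP_SYS Volume used
print(needs_expansion(64.0, 128.0))  # False: 50% used
```

In practice this kind of check would be driven by the SNMP notifications mentioned above rather than polled by hand.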

● Deduplication/Compression Volumes
The physical capacity may temporarily be larger than the logical capacity that is written because data is appended in Deduplication/Compression Volumes. If the I/O load is high, the physical capacity may run out. Monitoring the physical capacity on a regular basis and enabling SNMP notifications are recommended.


• The deduplication rate may temporarily decrease due to a CM failure, a firmware update, or a power outage.
• If a failure occurs in a RAID group configuring the TPP, or a bad sector occurs in the DEDUP_SYS Volume or the DEDUP_MAP Volume, the data of all the Deduplication/Compression Volumes in the TPP may be deleted.

● Functional Details
Figure 67 Details of the Deduplication/Compression Function
[Figure: The server issues I/O requests to a Deduplication/Compression Volume (a virtual volume visible from the server) in a Thin Provisioning Pool (TPP). The DEDUP_MAP0 and DEDUP_MAP1 Volumes (TPVs) map the blocks of the virtual volume to the actual written data after deduplication/compression, which is stored once in the DEDUP_SYS Volume (TPV). Non-Deduplication/Compression Volumes in the same pool store their data directly.]

Note the following when using the Advanced Copy functions for Deduplication/Compression Volumes.
• The CPU usage rate may increase depending on the EC/OPC priority setting. Be aware that I/O performance may be reduced.
• The copy performance may be significantly reduced compared to non-Deduplication/Compression Volumes (TPVs).
• When copying between ETERNUS DX storage systems, the data is sent to the copy destination without deduplication and compression. In addition, the bandwidth of the remote lines might not be fully utilized in some cases.


● Operation of the Deduplication/Compression Volumes

The following table shows which management functions can be applied to each volume type related to Deduplication/Compression.
Table 43 Target Deduplication/Compression Volumes of Each Function

Action                               Deduplication/Compression  DEDUP_SYS  DEDUP_MAP
                                     Volumes                    Volumes    Volumes
Creation                             ○                          × (*1)     × (*1)
Deletion                             ○                          × (*2)     × (*2)
Rename                               ○                          ×          ×
Format                               ○                          ○          × (*3)
Eco-mode                             ×                          ×          ×
TPV capacity expansion               ○                          ○          ×
RAID Migration                       ○                          ×          ×
Balancing                            ×                          ×          ×
TPV/FTV capacity optimization        ×                          ○          ○
Modify threshold                     ×                          ×          ×
Encrypt volume (*4)                  ○                          × (*1)     × (*1)
Decrypt volume (*5)                  ○                          ×          ×
Advanced Copy function (Local copy)  ○                          ×          ×
Advanced Copy function (Remote Copy) ○                          ×          ×
Forbid Advanced Copy                 ○                          ×          ×
Release reservation                  ○                          ×          ×
Performance monitoring               ○                          ×          ×
Modify cache parameters              ×                          ○          ○
Create a LUN while rebuilding        ○                          × (*1)     × (*1)
LUN mapping                          ○                          ×          ×
QoS                                  ○                          ×          ×
Create ODX Buffer volume             ×                          ×          ×
Storage Migration                    ○                          ×          ×
Non-disruptive Storage Migration     ○                          ×          ×
Storage Cluster                      ○                          ×          ×
Extreme Cache Pool                   ×                          ○          ○

○: Possible, ×: Not possible

*1: Automatically created when the Deduplication/Compression function is enabled for TPPs.
*2: Automatically deleted when the Deduplication/Compression function is disabled for TPPs.
*3: When DEDUP_SYS Volumes are formatted, DEDUP_MAP Volumes are also formatted.
*4: Encryption is performed by creating a volume in an encrypted pool, or by migrating a volume to an encrypted pool.
*5: Volumes are decrypted by specifying "Unencrypted" for the migration destination when migrating them.


Improving Host Connectivity

Host Affinity
The host affinity function prevents data corruption caused by inadvertent storage access. By defining which servers can access each volume, security is ensured when multiple servers are connected.
Figure 68 Host Affinity

[Figure: Servers A through D connect through switches to two ETERNUS DX ports. Host affinity grants each server its own LUN-to-volume mapping: Server A sees LUN#0 to LUN#255 as Volume#0 to Volume#255, Server B as Volume#256 to Volume#511, Server C as Volume#512 to Volume#767, and Server D as Volume#768 to Volume#1023.]


The host affinity can be set by associating "Host Groups", "CA Port Groups", and "LUN Groups".
Figure 69 Associating Host Groups, CA Port Groups, and LUN Groups

[Figure: Servers A and B form host group 1 and reach LUN group 1 (Vol#0 to Vol#2) through CA port group 1; Servers C and D form host group 2 and reach LUN group 2 (Vol#10 and Vol#11) through CA port group 2. Each server connects through two HBAs and switches to the ETERNUS DX ports.]

The host affinity can also be set by directly specifying the host and the CA port without creating host groups and
CA port groups.

● Host Group
A host group is a group of hosts that have the same host interface type and that access the same LUN group.
HBAs in multiple hosts can be configured in a single host group.

● CA Port Group
A CA port group is a group of the same CA type ports that are connected to a specific host group. A CA port group
is configured with ports that access the same LUN group, such as ports that are used for multipath connection to
the server or for connecting to the cluster configuring server. A single CA port group can be connected to multi-
ple host groups.

● LUN Group
A LUN group is a group of LUNs that can be recognized by the host and the LUN group can be accessed from the
same host group and CA port groups.
A LUN group is mapping information for LUNs and volumes.

• Host access must be stopped when changing or deleting existing host affinity settings. When adding a new LUN to the host affinity settings, stopping host access is not necessary.
• When redundant servers in a cluster configuration share a single ETERNUS DX, cluster control software is required.
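The three-way association of host groups, CA port groups, and LUN groups can be modeled as a simple lookup. This is an illustrative sketch with made-up names, not the actual ETERNUS data structures: a host reaches a volume only if its host group and the port's CA port group are associated with the same LUN group.

```python
# Simplified host affinity model: (host group, CA port group) -> LUN group.
host_groups = {"hg1": {"ServerA", "ServerB"}, "hg2": {"ServerC", "ServerD"}}
ca_port_groups = {"pg1": {"CA0", "CA1"}, "pg2": {"CA2", "CA3"}}
lun_groups = {"lg1": {0: "Vol#0", 1: "Vol#1", 2: "Vol#2"},
              "lg2": {0: "Vol#10", 1: "Vol#11"}}
affinity = {("hg1", "pg1"): "lg1", ("hg2", "pg2"): "lg2"}

def resolve(host, port, lun):
    """Return the volume mapped to `lun` for this host/port pair, or None if denied."""
    for (hg, pg), lg in affinity.items():
        if host in host_groups[hg] and port in ca_port_groups[pg]:
            return lun_groups[lg].get(lun)
    return None

print(resolve("ServerA", "CA0", 1))  # -> Vol#1
print(resolve("ServerC", "CA0", 1))  # -> None (no affinity for this pairing)
```

Direct host-to-CA-port specification without groups, which the text also mentions, would simply be a degenerate case of this lookup with single-member groups.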


iSCSI Security
For an iSCSI interface, the iSCSI authentication function can be used when the initiator accesses the target. The iSCSI authentication function is available for host connections and remote copying.
The Challenge Handshake Authentication Protocol (CHAP) is supported for iSCSI authentication. For CHAP authentication, unidirectional CHAP or bidirectional CHAP can be selected. With unidirectional CHAP, the target authenticates the initiator to prevent fraudulent access. With bidirectional CHAP, the initiator additionally authenticates the target to prevent impersonation.
Note that the Internet Storage Name Service (iSNS) is also supported for iSCSI name resolution.
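The CHAP exchange described above works roughly as follows. This is a minimal sketch of the standard CHAP response calculation (RFC 1994), not ETERNUS-specific code; the secret and challenge values are made up.

```python
import hashlib, os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(identifier || shared secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Unidirectional CHAP: the target authenticates the initiator.
secret = b"initiator-secret"          # shared out of band
challenge, ident = os.urandom(16), 1  # sent by the target at login
resp = chap_response(ident, secret, challenge)           # computed by the initiator
assert resp == chap_response(ident, secret, challenge)   # verified by the target
# Bidirectional CHAP repeats the exchange in the reverse direction with a
# second secret, so the initiator also authenticates the target.
```

The secret itself never crosses the wire; only the challenge and the digest do, which is why a fraudulent initiator (or, with bidirectional CHAP, an impersonated target) cannot pass verification.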

Stable Operation via Load Control

Quality of Service (QoS)

● QoS
The performance of high priority servers is guaranteed by configuring a performance limit for each connected server.
In a storage consolidation environment, when the load from one application is high and sufficient resources cannot be secured to process other operations, performance may be reduced.
The QoS function guarantees performance by limiting the resources for low priority applications so that resources are secured for high priority applications.
16 priority levels of bandwidth limits (maximum performance limits) can be configured for hosts, CA ports, volumes, and LUN groups. The performance configuration patterns of the bandwidth limits can be changed individually via ETERNUS CLI.
In addition, scheduled operation (when setting a bandwidth limit for hosts, CA ports, and LUN groups) is possible by setting a duration using ETERNUS CLI.
Linking with ETERNUS SF Storage Cruiser significantly reduces the administrator's workload when applying the QoS function because performance design and tuning are performed automatically.
Figure 70 QoS
[Figure: Server A (low priority) and Server B (high priority) share the ETERNUS DX. When QoS is not applied, a surge in processing requests from A degrades the required performance for B. When QoS is applied, A's workload cannot exceed its configured upper limit even when its requests increase, so the required performance for high priority B is maintained.]
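The capping behavior can be illustrated with a toy model. The server names and limits below are made up for illustration; the real function offers 16 configurable bandwidth-limit levels per host, CA port, volume, or LUN group.

```python
# Toy QoS model: each target gets a bandwidth upper limit (MB/s), and demand
# above the cap is throttled down to it.
limits = {"ServerA": 100, "ServerB": 400}   # upper limits in MB/s (illustrative)

def granted(demand: dict) -> dict:
    """Grant each server the smaller of its demand and its configured cap."""
    return {srv: min(mb, limits[srv]) for srv, mb in demand.items()}

# Server A's demand spikes, but it cannot exceed its cap, so Server B
# keeps the bandwidth it needs.
print(granted({"ServerA": 900, "ServerB": 300}))
# -> {'ServerA': 100, 'ServerB': 300}
```

The key design point is that the limit applies to the low priority target, which indirectly protects the high priority target without any explicit reservation for it.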


● REC Bandwidth Limit (Remote Copy QoS)

A bandwidth upper limit can be set for each copy path in a remote copy session.
Even if a specific path fails, the line bandwidth can be maintained without concentrating the load on the remaining paths.
The bandwidth limit is specified in increments of Mbit/s.
Figure 71 Copy Path Bandwidth Limit

[Figure: with the REC path setting, QoS limits each of two paths to 200 Mbit/s on a line with an effective speed of 400 Mbit/s. During normal operation both paths carry 200 Mbit/s; if one path fails, the surviving path keeps its steady 200 Mbit/s load instead of absorbing the failed path's traffic.]
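The per-path arithmetic from the figure can be sketched as follows. This is an illustrative model, not the REC implementation; the path names and caps are made up to match the figure's 400 Mbit/s example.

```python
# Toy model of the REC bandwidth limit: each copy path has its own cap
# (Mbit/s), so losing a path removes that path's share instead of shifting
# its load onto the survivors.
paths = {"path0": 200, "path1": 200}     # a 400 Mbit/s effective line, split

def rec_bandwidth(alive: set) -> int:
    """Total copy bandwidth is the sum of the caps of the surviving paths."""
    return sum(cap for name, cap in paths.items() if name in alive)

print(rec_bandwidth({"path0", "path1"}))  # -> 400 (normal operation)
print(rec_bandwidth({"path1"}))           # -> 200 (path0 failed; path1 keeps its cap)
```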


Host Response
The response from the ETERNUS DX can be optimized by switching the host response setup information for each connected server.
The server requirements for supported functions, LUN addressing, and the method for command responses vary depending on the connection environment, such as the server OS and the driver that is used. A function that absorbs these differences in server requirements is supported. This function can specify the appropriate operation mode for the connection environment and convert the host responses that the ETERNUS DX returns to the server.
The host response settings can be specified for the server or for the port to which the server connects. For details on the settings, refer to "Configuration Guide -Server Connection-".
Figure 72 Host Response

[Figure: Servers A, B, and C connect to the ETERNUS DX, which holds separate host response settings for each server.]

• If the host response settings are not set correctly, a volume may not be recognized or the desired performance may not be achieved. Make sure to select appropriate host response settings.
• The maximum number of LUNs that can be mapped to a LUN group varies depending on the connection operation mode in the host response settings.


Storage Cluster
Storage Cluster is a function that allows continuous operations by using redundant connections to two ETERNUS DX/AF storage systems so that if the Primary storage fails, operations are switched to the Secondary storage. Operations can continue without stopping access from the server if there are unexpected problems or if the storage system is in an error state due to severe failures.
Volumes that are accessed from business servers remain accessible with the same drive or mount point even after switching to the other ETERNUS DX/AF, so access from business servers stays transparent. Reallocating volumes or switching mount points is not required.
Storage Cluster is set for each logical volume. A paired configuration for the target volume is created by mirroring between the ETERNUS DX/AF storage systems.
If a failover occurs during operation, a link down occurs in the CA port on the Primary storage, and the CA port on the Secondary storage takes over. A maximum of 10 seconds is required for the automatic switchover, but operations can continue with the server's I/O retry.
To use Storage Cluster, the ETERNUS SF Storage Cruiser Storage Cluster option is necessary.
Figure 73 Storage Cluster
[Figure: a business server accesses the Primary storage over the SAN while the Storage Cluster controller monitors both storage systems over the LAN. When a device failure occurs in the Primary storage, I/O fails over from its CA port to the CA port on the Secondary storage.]

For Storage Cluster settings, create TFO groups that include target volumes (TFOV), and specify the connection
configuration and the policy for each group.

123
FUJITSU Storage ETERNUS DX100 S4/DX200 S4, ETERNUS DX100 S3/DX200 S3 Hybrid Storage Systems Design Guide (Basic)
Copyright 2019 FUJITSU LIMITED
P3AM-7642-25ENZ0
3. SAN Functions
Stable Operation via Load Control

TFOVs are mirrored and maintained as an identical pair in the Primary storage and the Secondary storage. Because synchronous remote copy technology is used to mirror data between the storage systems, the copy path must be set in addition to the Storage Cluster-related settings.
Figure 74 Mapping TFOVs, TFO Groups, and CA Port Pairs

[Figure: TFOV#0 to TFOV#2 on the Primary storage are associated, as a TFO group, with TFOV#0 to TFOV#2 on the Secondary storage. CA#0 and CA#1 on each storage system form the corresponding CA port pairs.]

• TFOV
A Transparent Failover Volume (TFOV) is a volume for which the Storage Cluster setting is performed. Server access remains possible even when a failover occurs.
• TFO group
A Transparent Failover (TFO) group is the unit of failover operation within a single ETERNUS DX/AF; a Storage Cluster failover is performed for each group.
A TFO group has two states: "Active" indicates that access from a business server is enabled, and "Standby" indicates that access from a business server is disabled.
• CA port pair
By sharing the WWPN/WWNN for FC, or the IP address and iSCSI name for iSCSI, between the CA ports of the two ETERNUS DX/AF storage systems, the Storage Cluster function performs a failover by controlling the link status of each CA port.
This pair of CA ports that share the WWPN and WWNN, or the IP address and iSCSI name, is called a CA port pair.

If a different IP address is used for the CA port pair with iSCSI, a failover and failback must be performed manually so that the Primary storage and the Secondary storage recognize the path.
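The TFO group states and the link control of a CA port pair can be sketched as a small state model. The class and attribute names below are hypothetical, not the ETERNUS firmware logic: because both ports share the same WWPN/WWNN (or IP address and iSCSI name), a failover amounts to bringing the Active side's link down and the Standby side's link up.

```python
# Toy model of a Storage Cluster failover: the CA port pair shares one
# identity, and only the port of the Active TFO group is linked up.
class TfoGroup:
    def __init__(self, name: str, state: str):
        self.name, self.state = name, state      # "Active" or "Standby"
        self.ca_link_up = (state == "Active")    # only the Active side is linked

primary = TfoGroup("primary", "Active")
secondary = TfoGroup("secondary", "Standby")

def failover(frm: TfoGroup, to: TfoGroup):
    frm.state, frm.ca_link_up = "Standby", False   # link down on the failed side
    to.state, to.ca_link_up = "Active", True       # the shared WWPN comes up here

failover(primary, secondary)
print(secondary.state, secondary.ca_link_up)   # -> Active True
```

Because the port identity does not change, the server's multipath driver simply sees the same target come back after a short link down, which is why its I/O retry is enough to resume operations.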

The following table provides the function specifications for the Storage Cluster.
Table 44 Storage Cluster Function Specifications

Item                                                           Specifications
Number of connectable storage systems                          1
Business server connections                                    Switch connections only
Copy path (connection between storage systems)                 Direct connection and remote connection
Maximum configurable capacity (per storage system) (*1)        2,048TB
Maximum number of TFO groups (per ETERNUS DX)                  32
Failover                         Automatic                     ○
                                 Manual                        ○
Failback                         Automatic                     ○
                                 Manual                        ○
Automatic failover triggers      ETERNUS DX/AF failure         ○
                                 Power failure/shutdown        ○
                                 RAID failure/RAID blockage    ○
                                 CA port link down             ○
Storage Cluster Continuous Copy (*2)                           ○

○: Possible, ×: Not possible

*1: The total available capacity of TFOVs for an ETERNUS DX can be expanded by setting an expansion for the total capacity of TFOVs. If the installed memory is not sufficient, the expansion setting cannot be performed.
For details on setting an expansion for the total capacity of TFOVs, refer to "ETERNUS CLI User's Guide".
*2: Storage Cluster Continuous Copy is an ETERNUS DX/AF function that performs a copy on the Primary storage and the Secondary storage simultaneously and maintains consistency between the two storage systems. This is achieved by operating Advanced Copy from a TFOV to a TFOV.

• After an error occurs in the Primary storage or a RAID group, the Primary and Secondary storage cannot be accessed while the Primary storage is being switched to the Secondary storage (a maximum of 10 seconds). Business applications should therefore expect an I/O response delay of up to 10 seconds.
• Data mirroring between the Primary storage and the Secondary storage is performed by Storage Cluster during normal operations. When a Write is performed from the server to the Primary storage, the data is also transferred to the Secondary storage, and the Write completion is returned to the server only after the transfer is complete. Therefore, when Storage Cluster is installed, the Write response degrades compared to an environment without Storage Cluster.
• Using OPC or QuickOPC for Advanced Copy from a TFOV is recommended.
• In environments that use an iSCSI configuration, the switchover of the storage systems that is required for a failover or a failback takes from 30 seconds to 120 seconds to complete. Server I/O must also be restarted in some cases.
• The connection interface for business servers is FC or iSCSI (they cannot be mixed). In addition, only the switch connection topology is supported.
• For details on the environment required for Storage Cluster (OS, HBA, multipath driver, and cluster software), refer to "Support Matrix".


Data Migration

Storage Migration
Storage Migration is a function that migrates volume data from an old storage system to volumes in a new storage system without using a host, in cases such as when replacing a storage system.
The migration source storage system and the migration destination ETERNUS DX are connected using FC cables. Data read from the target volume in the migration source is written to the migration destination volume in the ETERNUS DX.
Since Storage Migration is controlled by the ETERNUS DX controllers, no additional software is required.
The connection interface is FC. Both the direct connection and switch connection topologies are supported.
Online Storage Migration and offline Storage Migration are supported.
• Offline method
The server is stopped during the data migration. Host access becomes available after the data migration to the migration destination volume is complete. This method prevents host access from affecting the ETERNUS DX and can shorten the migration time. It is suitable for cases requiring a quick data migration.
• Online method
Host access becomes available as soon as the data migration to the migration destination volume starts, so operations can be performed during the data migration and the time operations are stopped is shortened. This method is suitable for cases requiring continued host access during the data migration.
Figure 75 Storage Migration
[Figure: the source storage system is connected to the ETERNUS DX by FC, and Storage Migration pulls the volume data across.]

The Storage Migration function migrates whole volumes at the block level. A data migration can be started from ETERNUS Web GUI by specifying a text file with the migration information described in a dedicated format.
The path between the migration source and the migration destination is called a migration path. The maximum number of migration volumes for each migration path is 512.
Up to 16 migration source devices can be specified, and up to eight migration paths can be created for each migration source device.
The capacity of a volume that is specified as the migration destination area must be larger than the migration source volume capacity.
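The configuration limits above can be expressed as a validation sketch. The constants come from this section, while the function name and data layout are made up for illustration and are not an ETERNUS interface; the capacity check uses "at least as large", since online migrations require equal capacities and offline migrations require a larger destination.

```python
# Storage Migration limits from this section (per configuration).
MAX_SOURCE_DEVICES = 16
MAX_PATHS_PER_DEVICE = 8
MAX_VOLUMES_PER_PATH = 512

def validate(migration: dict) -> bool:
    """migration: {device: {path: [(src_capacity, dst_capacity), ...]}}"""
    if len(migration) > MAX_SOURCE_DEVICES:
        return False
    for paths in migration.values():
        if len(paths) > MAX_PATHS_PER_DEVICE:
            return False
        for volumes in paths.values():
            if len(volumes) > MAX_VOLUMES_PER_PATH:
                return False
            # The destination must be at least as large as the source.
            if any(dst < src for src, dst in volumes):
                return False
    return True

print(validate({"dev0": {"path0": [(100, 100), (50, 80)]}}))  # -> True
print(validate({"dev0": {"path0": [(100, 90)]}}))             # -> False
```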


• For online Storage Migration, the capacity of the migration destination volume must be the same as that of the migration source volume.
• For offline Storage Migration, stop server access to both the migration source volume and the migration destination volume during the migration.
For online Storage Migration, stop server access to the migration source volume and the migration destination volume before starting the migration. In addition, do not access the migration source volume from the server during the migration.
• Online Storage Migration can be manually resumed on the following volumes after the corresponding process (such as deleting a copy session) is complete.
- TPV/FTV capacity optimization is running
- Flexible Tier Migration is running
- An Advanced Copy session exists
• For the migration destination device, the FC port mode needs to be switched to "Initiator" and the port parameters also need to be set.
• Make sure to delete the migration path after Storage Migration is complete.


Non-disruptive Storage Migration


Non-disruptive Storage Migration is a function that migrates volume data from an old storage system to volumes in a new storage system without stopping the business server, in cases such as when replacing a storage system.
The connection interface between the migration source storage system (external storage system) and the migration destination storage system (local storage system) is FC cable only. Both the direct connection and switch connection topologies are supported.
Figure 76 Non-disruptive Storage Migration
[Figure: a business server accesses the migration destination storage system (local storage system), which imports the migration target volume from the migration source storage system (external storage system) over an FC cable.]

Table 45 Specifications for Paths and Volumes between the Local Storage System and the External Storage System

Item                                                                         Quantity
Maximum number of multipath connections between the local storage system
and the external storage system (per external storage system)                8 paths
Maximum number of ports in the external storage system that can be
connected from the local storage system (per FC-Initiator port)              32 ports
Maximum number of migration target volumes that can be imported to the      2,048 (DX100 S4/DX100 S3)
local storage system (*1)                                                    4,096 (DX200 S4/DX200 S3)
Maximum number of migration target volumes in the external storage system
that can be imported simultaneously to the local storage system              512

*1: The number of migration target volumes that are imported to the local storage system is added to the number of volumes in the local storage system.
Connect the external storage system to the local storage system (ETERNUS DX) using FC cables. After the connection is established, add multipath connections between the local storage system and the business server to prepare for the data migration.
After disconnecting the multipath connection between the external storage system and the business server, use RAID Migration to read the data from the migration target volume in the external storage system and write it to the migration destination volume in the local storage system.


Data consistency is ensured because the data remains in the migration source volume for consolidated management during the data migration.

• Only FC ports (connected in the FC-Initiator mode) are supported for connections with external storage systems.
• The Non-disruptive Storage Migration License must be registered to use this function.
For details on the license, contact your sales representative.
• Only data migrations from the external storage system to the local storage system are supported.
Data migrations from the local storage system to the external storage system or between external storage systems are not supported.
• The local storage system volume returns the same information as the External Volume even after a migration is completed.
• Do not set the copy operation suppression, the cache parameters, or the Volume QoS for the External Volume.
• The only functions that can be used for External Volumes are delete, rename, and RAID Migration. Other functions cannot be used until the data migration is successfully completed.
• Migration destination volumes in the local storage system cannot be used for Storage Cluster even after the migration is completed.
• Make sure to delete the Non-disruptive Storage Migration License after the Non-disruptive Storage Migration is complete.


Server Linkage Functions

Oracle VM Linkage
"Oracle VM Manager", the user interface of the "Oracle VM" server virtualization software, can provision the ETERNUS DX.
"ETERNUS Oracle VM Storage Connect Plug-in" is required to use this function.
The Oracle VM Storage Connect framework enables Oracle VM Manager to directly use the resources and functions of the ETERNUS DX in an Oracle VM environment. Native storage services such as Logical Unit Number (LUN) creation, deletion, expansion, and snapshots are supported.
Figure 77 Oracle VM Linkage

[Figure: an operation server runs Oracle VM with the ETERNUS Oracle VM Storage Connect Plug-in, connected over the SAN to the ETERNUS DX; a management server runs Oracle VM Manager. Through the plug-in, Oracle VM Manager can create clones, create, delete, and expand LUNs, and set and delete access groups (host affinity).]


VMware Linkage
By linking with "VMware vSphere" (which virtualizes platforms) and "VMware vCenter Server" (which supports integrated management of VMware vSphere), the resources of the ETERNUS DX can be used effectively and system performance can be improved.
In addition, by supporting the Virtual Volumes of VMware vSphere 6, the system can be operated efficiently.
Figure 78 VMware Linkage

[Figure: an operation management server runs ETERNUS SF Storage Cruiser and ETERNUS VASA Provider, which provide Profile-Driven Storage and Storage DRS to VMware over VASA. The VMware server uses the VAAI features (Block Zeroing, Hardware Assisted Locking, and Full Copy (XCOPY), which copies within the storage system) against the ETERNUS DX over the SAN. A client PC running VMware Web Client obtains vCenter Server information, and ETERNUS vCenter Plug-in adds ETERNUS information to the vSphere Client management screen.]


■ VMware VASA
vStorage APIs for Storage Awareness (VASA) is an API that enables vCenter Server to link with the storage system and obtain storage system information. With VASA, VMware integrates the virtual infrastructure of the storage, and enhances the Distributed Resource Scheduling (DRS) function and the troubleshooting efficiency.
ETERNUS VASA Provider is required to use the VASA function.
ETERNUS VASA Provider obtains and monitors information from the ETERNUS DX by using functions of ETERNUS SF Storage Cruiser.
• Profile-Driven Storage
The Profile-Driven Storage function classifies volumes according to the service level so that virtual machines are allocated the most suitable volumes.
• Distributed Resource Scheduler (Storage DRS)
The Storage DRS function moves data in virtual machines to the most suitable storage area according to the access volume. Storage DRS balances the loads on multiple physical servers in order to eliminate the need for performance management on each virtual machine.

■ VMware VAAI
vStorage APIs for Array Integration (VAAI) are APIs that improve system performance and scalability by using the
storage system resources more effectively.
The ETERNUS DX supports the following features.
• Full Copy (XCOPY)
Data copying processes can be performed in the ETERNUS DX without the use of a server such as when repli-
cating or migrating the virtual machine. With Full Copy (XCOPY), the load on the servers is reduced and the
system performance is improved.
• Block Zeroing
When allocating storage areas to create new virtual machines, it is necessary to zero out these storage areas
for the initialization process. This process was previously performed on the server side. By performing this
process on the ETERNUS DX side instead, the load on the servers is reduced and the dynamic capacity alloca-
tion (provisioning) of the virtual machines is accelerated.
• Hardware Assisted Locking
This control function enables the use of smaller blocks that are stored in the ETERNUS DX for exclusive control
of specific storage areas.
Compared to LUN (logical volume) level control that is implemented in "VMware vSphere", enabling access
control in block units minimizes the storage areas that have limited access using exclusive control and im-
proves the operational efficiency of virtual machines.

■ VMware vCenter Server

• vCenter linkage
Various information from the ETERNUS DX can be displayed on vSphere Web Client by extending the user interface of VMware Web Client. Because the storage side information is better visualized, integrated management of the infrastructure in a virtual environment can be realized and usability is improved.
ETERNUS vCenter Plug-in is required to use this function.


VMware VVOL
The ETERNUS DX supports Virtual Volumes (VVOLs), which are VMware vSphere dedicated logical volumes.
When VVOLs are used, they are automatically created and copied in the ETERNUS DX while VMs are operated from vSphere Client. As a result, operations are simplified because logical volumes and backups do not need to be configured in the storage system.

■ Operational Configuration
VVOL configuration and management are performed from ETERNUS SF Storage Cruiser. ETERNUS VASA Provider
(software) is required in the storage management server to coordinate vSphere with the ETERNUS DX. For details
on ETERNUS VASA Provider, refer to "VMware Linkage" (page 131).
Figure 79 VVOL (Operational Configuration)

[Figure: a VM management server runs VMware vCenter Server and VMware vSphere Client; vSphere hosts run the VMs. A storage management server runs ETERNUS VASA Provider and ETERNUS SF Storage Cruiser, connected over the LAN, and the vSphere hosts access the ETERNUS DX over the SAN.]


■ System Configuration
vSphere accesses VVOLs through Protocol Endpoints (PE). The server recognizes PEs as logical volumes. VVOLs
are created in a pool called a storage container. Storage containers and VVOLs correspond to FTRPs and FTVs of
the ETERNUS DX.
Figure 80 VVOL (System Configuration)

[Figure: vSphere hosts access VVOLs in the ETERNUS DX through PEs. VVOLs and the VVOL management information reside in storage containers.]

● PE
PEs are control volumes for the integrated management of multiple VVOLs.

● Storage Container (VVOL Datastore)
Storage containers are pools for creating VVOLs. The ETERNUS DX uses FTRPs as storage containers. Multiple storage containers can be created in the ETERNUS DX.

● VVOL
VVOLs are logical volumes that are created in FTRPs. Multiple VVOLs can be created in a storage container.


● Maximum VVOL Capacity

The following table shows the maximum capacity that can be used for VVOLs out of the maximum Thin Provisioning Pool capacity that is set in the ETERNUS DX.
Table 46 Maximum VVOL Capacity

Item                               ETERNUS DX100 S4/DX200 S4, ETERNUS DX100 S3/DX200 S3
Maximum TPP capacity (reference)   2,048TB
Maximum VVOL capacity              256TB

● VVOL Management Information


Additional information called VVOL management information (metadata) is necessary for VVOLs. VVOL management information is usually synchronized with the master information in ETERNUS SF Storage Cruiser and saved to a dedicated FTV within the storage container of the ETERNUS DX. The dedicated FTV is automatically created in the storage container when creating a VVOL.
Table 47 VVOL Management Information Specifications

Item                             Explanation
Volume name (fixed)              $VVOL_META
Volume type                      FTV
Usage                            System
Usage details                    VVOL management information
Capacity                         1,040MB
Volume number (per ETERNUS DX)   1

• When using VVOLs, LUN#224 to LUN#255 (or LUN numbers that are recognized on the server side) cannot
be used as Virtual Machine File System (VMFS) volumes because they are used for management.
• Use ETERNUS SF Storage Cruiser to change the VVOL settings. Do not change any of the settings from ETERNUS Web GUI or ETERNUS CLI, except to enable the VVOL function.
• If a VVOL management information dedicated FTV fails because of errors such as a RAID group failure, recover it by recovering the RAID group, deleting and recreating the VVOL management information dedicated FTV using ETERNUS CLI, and then performing a backup (or synchronizing the VVOL management information) using ETERNUS SF Storage Cruiser.
• If the VVOL management information dedicated FTV is formatted, performing a backup (or synchronizing the VVOL management information) and recovering the VVOL management information dedicated FTV using ETERNUS SF Storage Cruiser is required.
• If the FTRP has already been created and the maximum Thin Provisioning Pool capacity and the chunk size
are changed, before setting VVOL using ETERNUS SF Storage Cruiser, execute the "set vvol-mode" command
of ETERNUS CLI to enable the VVOL function. For details on the chunk size, refer to "Thin Provisioning" (page
42).
• When registering multiple FTRPs in a storage container, do not mix FTRPs with different chunk sizes in the
same storage container.
• To use VMware vSphere Replication, the controller firmware version of the ETERNUS DX must be V10L80 or
later.

Veeam Storage Integration


The operability and efficiency of virtual machine backups in virtual environments (VMware) are improved by using the ETERNUS DX storage snapshot integration with Veeam Backup & Replication provided by Veeam Software.
Veeam Storage Integration is available for the ETERNUS DX100 S4/DX200 S4.
Figure 81 Veeam Storage Integration

[Diagram: a Veeam backup server (job and component management) with FUJITSU Plug-In for Veeam Backup & Replication controls backup proxies (backup and restore of virtual machines, data transfer) over a LAN. The backup proxies access the VMware servers and the ETERNUS DX over a SAN (FC, iSCSI) and store the data in backup repositories.]

• The controller firmware version of the ETERNUS DX must be V10L86 or later.


• The Veeam Storage Integration license must be obtained and registered in the ETERNUS DX.
• iSCSI and FC host interfaces are supported in Veeam Storage Integration for the connection between backup
proxies and the ETERNUS DX.
• To connect a backup proxy to the ETERNUS DX via FC, the host affinity settings must be configured for the backup proxy using ETERNUS CLI. For more details, refer to "ETERNUS CLI User's Guide".
• To enable the ETERNUS DX storage snapshot integration with Veeam Backup & Replication, FUJITSU Plug-In for Veeam Backup & Replication must be installed on the Veeam backup server.
• If a volume has several snapshot generations and these snapshots have been created with different resolutions, only the oldest snapshot generation can be deleted.
• The following volumes cannot be managed or operated by Veeam Backup & Replication:
- Volumes used for the Storage Cluster function
- Virtual Volumes (VVOLs)
- Volumes with Advanced Copy sessions except SnapOPC+ sessions
- Volumes with SnapOPC+ sessions created by ETERNUS SF AdvancedCopy Manager
• Veeam Backup & Replication jobs or operations may fail during a RAID migration, a Thin Provisioning Volume balancing, or a Flexible Tier Pool balancing.
• SnapOPC+ is used for Veeam Storage Integration.
Thin Provisioning Volumes (TPVs) or Flexible Tier Volumes (FTVs) are used as SnapOPC+ copy destination volumes.
Configure an appropriate maximum pool capacity for the Thin Provisioning function by taking the total capacity of volumes used for Veeam Storage Integration and the number of snapshot generations into consideration. For more details about the maximum pool capacity setting, refer to "Thin Provisioning Pool Management" in "ETERNUS Web GUI User's Guide".

Guidelines for the maximum pool capacity for the Thin Provisioning function:
Maximum pool capacity ≥ total capacity of TPVs and FTVs + total capacity of volumes for Veeam Storage Integration × (number of snapshot generations + 1)
• Using multiple Veeam Backup & Replication servers to manage a single ETERNUS DX is not recommended.
In such a configuration, jobs that are executed from multiple Veeam Backup & Replication servers may conflict with each other and fail.
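As a sanity check, the pool capacity guideline above can be computed directly. This is a minimal sketch; the capacities used are illustrative values, not sizing recommendations.

```python
def min_pool_capacity_tb(tpv_ftv_total_tb, veeam_volume_total_tb, snapshot_generations):
    """Guideline: total TPV/FTV capacity plus the capacity of volumes used for
    Veeam Storage Integration multiplied by (snapshot generations + 1)."""
    return tpv_ftv_total_tb + veeam_volume_total_tb * (snapshot_generations + 1)

# Example: 10 TB of other TPVs/FTVs, 4 TB of volumes backed up by Veeam,
# and 7 snapshot generations to keep.
print(min_pool_capacity_tb(10, 4, 7))  # 42 -> the pool should be at least 42 TB
```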

• Veeam Storage Integration supports the following volumes.


Table 48 Volume Types That Can Be Used with Veeam Storage Integration

Volume type   Copy source   Copy destination
Standard      ○             ×
WSV           ○             ×
TPV           ○ (*1)        ○ (*1)
FTV           ○             ○
SDV           ×             ×
SDPV          ×             ×

○: Supported ×: Not supported

*1: Deduplication/Compression Volumes (TPVs) are not supported.


• Copy destination TPVs/FTVs are automatically created when snapshots are created with Veeam Backup &
Replication.

Microsoft Linkage
The ETERNUS DX supports integrated management of virtualized platforms and cloud linkage by using functions
in Windows Server and System Center.
Figure 82 Microsoft Linkage

[Diagram: Windows Server environments (server applications, backup software, VSS with ETERNUS VSS Hardware Provider, and Hyper-V hosts) connect to the ETERNUS DX via a SAN, using the Space Reclamation instruction and Offloaded Data Transfer (ODX) with copying performed in the storage system. An SCVMM management server and console manage the ETERNUS DX via SMI-S.]

■ Windows Server
The ETERNUS DX supports the following functions in Windows Server.
• Offloaded Data Transfer (ODX)
The ODX function of Windows Server 2012 or later offloads the processing load for copying and transferring
files from the CPU of the server to the storage system.
• Thin Provisioning Space Reclamation
The Thin Provisioning Space Reclamation function of Windows Server 2012 or later automatically releases
areas in the storage system that are no longer used by the OS or applications. A notification function for the
host is provided when the amount of allocated blocks of the TPV reaches the threshold.
• Hyper-V
Hyper-V is virtualization software for Windows Server.
By using Hyper-V virtual Fibre Channel, direct access to the SAN environment from a guest OS can be performed. The volumes in the ETERNUS DX can be directly recognized and mounted from the guest OS.
• Volume Shadow Copy Service (VSS)
VSS works in combination with backup software and server applications that are compatible with Windows Server VSS, while online backups are performed via the Advanced Copy function of the ETERNUS DX.
ETERNUS VSS Hardware Provider is required to use this function.
SnapOPC+ and QuickOPC can be used as the copy method.

To use the ODX function, the controller firmware version of the ETERNUS DX must be V10L80-2000 or later, or
V10L81-2000 or later.

■ System Center Virtual Machine Manager (SCVMM)


System Center is a platform to manage operations of data centers and clouds. This platform also provides an
integrated tool set for the management of applications and services.
SCVMM is a component of System Center 2012 or later that performs integrated management of virtualized environments. The ETERNUS DX can be managed from SCVMM by using the SMI-S functions of the ETERNUS DX.

OpenStack Linkage
ETERNUS OpenStack VolumeDriver is a program that supports linkage between the ETERNUS DX and OpenStack.
By using the VolumeDriver for the ETERNUS DX, the ETERNUS DX can be used as block storage for Cinder. Creating volumes in the ETERNUS DX and assigning the created volumes to VM instances can be performed via the OpenStack standard interface (Horizon).

Logical Volume Manager (LVM)


The Logical Volume Manager is a management function that groups storage areas in multiple drives and partitions, and manages them as one logical drive. Adding drives and expanding logical volumes can be performed without stopping the system. This function can be used on UNIX OSs (including Linux).
LVM has a snapshot function. This function obtains any logical volume data as a snapshot and saves the snapshot as a different logical volume.
To configure an LVM with LUNs in the ETERNUS DX, register the LUNs in the ETERNUS DX as physical volumes.
Figure 83 Logical Volume Manager (LVM)
[Diagram: on the business server, LVM groups physical volumes into a physical group within a volume group that provides a logical volume. Each physical volume corresponds to a LUN on a RAID group (#0 to #2) in the ETERNUS DX.]
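The grouping described above can be sketched as a toy model: a logical volume is an ordered list of fixed-size extents, each pointing at an extent on some physical volume (LUN). The LUN names and the 4 MB extent size are invented for illustration; real LVM implementations add metadata, striping, and snapshots on top of this idea.

```python
EXTENT_MB = 4  # assumed extent size for the sketch

class LogicalVolume:
    def __init__(self):
        self.extent_map = []          # logical extent -> (pv_name, pv_extent)

    def extend(self, pv_name, n_extents):
        # Growing the volume online just appends extents from another PV.
        self.extent_map += [(pv_name, i) for i in range(n_extents)]

    def locate(self, offset_mb):
        # Translate a logical offset to the backing physical volume extent.
        return self.extent_map[offset_mb // EXTENT_MB]

lv = LogicalVolume()
lv.extend("LUN0", 2)     # extents backed by a LUN on RAID group #0
lv.extend("LUN1", 2)     # online expansion backed by a LUN on RAID group #1
print(lv.locate(0))      # ('LUN0', 0)
print(lv.locate(9))      # ('LUN1', 0): offset 9 MB falls in the third extent
```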

4. Connection Configuration

This chapter explains the connection configuration of the ETERNUS DX.

SAN Connection
FC, iSCSI, FCoE, and SAS are available as host interfaces. The server and the ETERNUS DX can be connected directly or via a switch.

Host Interface
This section describes each host interface.
The supported host interfaces vary between the ETERNUS DX100 S4/DX200 S4 and the ETERNUS DX100 S3/DX200
S3. For details about host interfaces, refer to "Overview" of the currently used storage systems.
When switches are used, zoning should be set for the switches to ensure the security of data.

■ Fibre Channel (FC)


The FC connection topologies that are supported are Fibre Channel Arbitrated Loop (FC-AL) and Fabric. Direct
connections and switch connections to the servers are available.
The following types of host interfaces are available:
• FC 32Gbit/s
This host interface is supported only for the ETERNUS DX100 S4/DX200 S4.
• FC 16Gbit/s
• FC 8Gbit/s
One of the following transfer rates can be specified:
• For FC 32Gbit/s
- 32Gbit/s
- 16Gbit/s
- 8Gbit/s
• For FC 16Gbit/s
- 16Gbit/s
- 8Gbit/s
- 4Gbit/s
• For FC 8Gbit/s
- 8Gbit/s
- 4Gbit/s

■ iSCSI
Direct connections and switch connections to servers are available.
The following types of host interfaces are available:
• iSCSI 10Gbit/s (10GBASE-SR/10GBASE-CR)
The transfer rate is fixed at 10Gbit/s.
10GBASE-CR is a form of communication that uses Twinax cables and is 10GBASE-SR-compliant.

• iSCSI 10Gbit/s (10GBASE-T)


One of the following transfer rates can be specified:
- 10Gbit/s
- 1Gbit/s
• iSCSI 1Gbit/s
In order to maintain iSCSI performance, the iSCSI network should be physically separated from other types of
networks (such as networks for Internet access and file transfers).
• Operation Mode
The iSCSI 10Gbit/s operation mode is 10GBASE-SR, 10GBASE-CR, or 10GBASE-T.
The iSCSI 1Gbit/s operation mode is 1000BASE-T Full Duplex (FULL).
• CHAP
CHAP authentication can prevent unauthorized access. The following CHAP authentication methods are supported:
- Unidirectional CHAP
- Bidirectional CHAP
• Tag VLAN
The tag VLAN function is supported. 16 tags (VLAN ID) can be used for each port.
• Jumbo Frame
Enabling Jumbo Frame makes data transfer more efficient by increasing the amount of data that can be transferred in each frame.
Table 49 Ethernet Frame Capacity (Jumbo Frame Settings)

Jumbo Frame settings   Ethernet frame capacity
Enabled                Up to 9000 bytes
Disabled               Up to 1500 bytes

Server-side CPU load can be reduced by using Jumbo Frame. However, I/O performance may be reduced by
10% to 30%.

• Security Architecture for Internet Protocol (IPsec)


The IPsec function is not supported. Connect the server using a LAN switch that has the IPsec function as required.
• Internet Protocol
IPv4 and IPv6 are supported.
• Data Center Bridging (DCB)
iSCSI 10Gbit/s interfaces support the Data Center Bridging (DCB) function.
DCB is an enhanced function of traditional Ethernet and a standard for Fabric connections in data centers. The
DCB function allows connections to Converged Enhanced Ethernet (CEE) environments.
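Unidirectional CHAP, mentioned above, follows the challenge/response scheme of RFC 1994: the target issues a random challenge, and the initiator proves knowledge of the shared secret by returning MD5(identifier || secret || challenge), so the secret itself never crosses the network. The sketch below illustrates that exchange; the secret value is made up for the example. Bidirectional CHAP simply repeats the exchange in the reverse direction, with the initiator also challenging the target.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: Response = MD5(Identifier || secret || Challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"example-chap-secret"   # shared between initiator and target (made up)
challenge = os.urandom(16)        # issued by the target for each login attempt
ident = 1

resp = chap_response(ident, secret, challenge)            # computed by the initiator
assert resp == chap_response(ident, secret, challenge)    # recomputed by the target
assert resp != chap_response(ident, b"wrong-secret", challenge)
```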
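The efficiency gain from Jumbo Frame can be illustrated with a rough count of frames per transfer. The 40-byte figure assumes plain IPv4 and TCP headers without options; real per-frame overhead varies, so treat this as an order-of-magnitude sketch.

```python
def frames_needed(payload_bytes: int, mtu: int) -> int:
    """Frames required for a payload, assuming 40 bytes of IPv4 + TCP
    headers consume part of each frame's MTU (options ignored)."""
    usable = mtu - 40
    return -(-payload_bytes // usable)  # ceiling division

data = 100 * 1024 * 1024  # a 100 MiB transfer
print(frames_needed(data, 1500))  # 71821 frames with standard frames
print(frames_needed(data, 9000))  # 11703 frames with Jumbo Frames, roughly 6x fewer
```

Fewer frames means fewer per-frame interrupts and header bytes, which is where the server-side CPU savings described above come from.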

■ FCoE
This host interface is supported only for the ETERNUS DX100 S3/DX200 S3.
Fabric is supported as a connection topology. The transfer rate is 10Gbit/s. When using an FCoE interface, connect the ETERNUS DX to the FCoE switch. Direct connections to servers are not supported.

■ SAS
A simple, cost-effective, and high-performance network storage environment can be configured. Direct connections and switch connections to servers are available.
The following types of host interfaces are available:
• SAS 12Gbit/s
This host interface is supported only for the ETERNUS DX100 S4/DX200 S4.
• SAS 6Gbit/s
This host interface is supported only for the ETERNUS DX100 S3/DX200 S3.
One of the following transfer rates can be specified:
• For SAS 12Gbit/s
- 12Gbit/s
- 6Gbit/s
- 3Gbit/s
• For SAS 6Gbit/s
- 6Gbit/s
- 3Gbit/s
- 1.5Gbit/s

Access Method
This section explains the connection configurations between server Host Bus Adapters (HBAs) and ETERNUS DX
host interface ports.

■ Single Path Connection


A single path configuration connects the ETERNUS DX to a server via a single path.
The server cannot access an ETERNUS DX when a component (such as a controller, HBA, switch, or cable) on the
path has a problem. The system must be stopped when a failed component on a path needs to be replaced or
when the controller firmware needs to be updated.
In a single path connection configuration, the path failover and load balancing functions are not supported.
A multipath connection configuration is recommended to maintain availability when a problem occurs.
Figure 84 Single Path Connection (When a SAN Connection Is Used — Direct Connection)

[Diagram: each server connects through one HBA to a single CA port on CM#0 or CM#1 of the ETERNUS DX.]

Figure 85 Single Path Connection (When a SAN Connection Is Used — Switch Connection)

[Diagram: servers connect through one HBA each to a switch, which connects to CA ports on CM#0 and CM#1 of the ETERNUS DX.]

■ Multipath Configuration
A multipath configuration connects the ETERNUS DX to a server via multiple paths (multipath). System reliability
is improved due to the path redundancy.
If a path fails, access can continue by using the path failover function that switches access from the failed path
to another path.
Figure 86 Multipath Connection (When a SAN Connection Is Used — Basic Connection Configuration)

[Diagram: each server connects through two HBAs, with one path to CA#0 on CM#0 and one path to CA#0 on CM#1 of the ETERNUS DX.]

Figure 87 Multipath Connection (When a SAN Connection Is Used — Switch Connection)

[Diagram: servers with two HBAs each connect through two switches to CA ports on CM#0 and CM#1 of the ETERNUS DX.]
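The failover behavior described above can be sketched as a toy model. The path names are invented, and real multipath drivers also handle load balancing and path restoration; the point here is only that I/O continues over a surviving path to the other controller.

```python
class MultipathDevice:
    """Toy model of path failover for a multipath connection."""

    def __init__(self, paths):
        self.healthy = {p: True for p in paths}   # path -> healthy?

    def fail(self, path):
        self.healthy[path] = False

    def submit_io(self):
        # I/O is sent down the first healthy path; failover is just
        # skipping over paths that have failed.
        for path, ok in self.healthy.items():
            if ok:
                return path
        raise IOError("all paths failed")

dev = MultipathDevice(["HBA0-CM#0", "HBA1-CM#1"])
print(dev.submit_io())   # HBA0-CM#0
dev.fail("HBA0-CM#0")    # e.g. a cable or switch failure on the first path
print(dev.submit_io())   # HBA1-CM#1: access continues without stopping the system
```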

When the ETERNUS DX is accessed from both servers, performance can be secured by connecting each server to the ETERNUS DX using one host interface port on each of the four host interfaces.
Figure 88 Multipath Connection (When a SAN Connection Is Used — for Enhanced Performance)

[Diagram: two servers with two HBAs each connect to one port on each of the four host interfaces (CA#0 and CA#1 on CM#0 and CM#1) of the ETERNUS DX.]

• When configuring multipathing for reliability, make sure to configure a redundant connection to the controllers of the ETERNUS DX. Configure paths to connect to different controllers (CM#0 and CM#1). Combinations of host interface numbers (CA#0 and CA#1) in controllers do not need to be taken into consideration.
• Paths from a single server should be connected to different host interfaces in case of a host interface failure.

■ Cluster Configuration
When servers are duplicated and connected using a cluster configuration to share a single ETERNUS DX among
multiple servers, cluster control software is required.

■ Storage Cluster Configuration


An ETERNUS DX/AF is duplicated and connected in a cluster configuration by using Storage Cluster.
The connection interface is FC or iSCSI. In addition, only the switch connection topology is supported. Direct connections are not supported.
For more details on Storage Cluster, refer to "Storage Cluster" (page 123).

Remote Connections
An FC or iSCSI interface is available for a remote connection.
When a remote connection is used, change the host interface port mode setting from "CA" to "RA".

For remote connections, different types of interfaces (FC, iSCSI 10Gbit/s (10GBASE-SR), iSCSI 10Gbit/s (10GBASE-CR), iSCSI 10Gbit/s (10GBASE-T), iSCSI 1Gbit/s) cannot exist together on a REC path (a connection between a local ETERNUS DX/AF and a remote ETERNUS DX/AF).
When different types of remote interfaces exist in the same ETERNUS DX/AF, make sure to use the same type of interface for each REC path. For example, the following configuration is not supported because different types of interfaces (FC and iSCSI) exist together on a REC path.
Figure 89 Example of Non-Supported Connection Configuration (When Multiple Types of Remote Interfaces
Are Installed in the Same ETERNUS DX/AF)
[Diagram: two ETERNUS DX/AF storage systems, each with FC-RA and iSCSI-RA ports, where a single REC path mixes FC-RA and iSCSI-RA connections — not supported.]

The following configuration is supported because the same type of interface is used for each REC path.
Figure 90 Example of Supported Connection Configuration (When Multiple Types of Remote Interfaces Are In-
stalled in the Same ETERNUS DX/AF)
[Diagram: three ETERNUS DX/AF storage systems; one REC path uses only FC-RA ports and a separate REC path uses only iSCSI-RA ports — supported.]
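The rule above can be expressed as a small check: every REC path must pair remote interfaces of the same type, while different paths may use different types. The interface names follow the figures; treat this as an illustration of the rule, not a configuration tool.

```python
def rec_paths_valid(paths):
    """paths: list of (local_interface, remote_interface) pairs, one per REC path."""
    return all(local == remote for local, remote in paths)

# FC on one REC path and iSCSI on another REC path: supported.
print(rec_paths_valid([("FC-RA", "FC-RA"), ("iSCSI-RA", "iSCSI-RA")]))   # True
# FC and iSCSI mixed on the same REC path: not supported.
print(rec_paths_valid([("FC-RA", "iSCSI-RA")]))                          # False
```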

Remote Interfaces
Change the port mode of the host interface ports to use them as remote interfaces. For details on changing the
port mode, refer to "ETERNUS Web GUI User's Guide".
This section describes each remote interface.

■ Fibre Channel (FC)


Data transfer is performed between multiple ETERNUS DX/AF storage systems by using the host interface. The ETERNUS DX/AF can be connected to the destination storage system directly or via a switch. A digital service unit is required for a remote connection that uses a line.
Host interfaces with three different maximum transfer rates (32Gbit/s, 16Gbit/s, and 8Gbit/s) are available for
the ETERNUS DX100 S4/DX200 S4.
Host interfaces with two different maximum transfer rates (16Gbit/s and 8Gbit/s) are available for the ETERNUS
DX100 S3/DX200 S3.
Figure 91 An FC Connection for a Remote Copy between ETERNUS DX/AF Storage Systems (When Redundant
Paths Are Used)
[Diagram: copy source and copy destination ETERNUS DX/AF storage systems connected over a SAN (FC) through redundant pairs of FC switches, using FC-RA ports on both sides.]

Figure 92 An FC Connection for a Remote Copy between ETERNUS DX/AF Storage Systems (When Lines Are Used)
[Diagram: copy source and copy destination ETERNUS DX/AF storage systems connected over a WAN (IP line) through digital service units, using FC-RA ports on both sides.]

■ iSCSI
Data transfer is performed between multiple ETERNUS DX/AF storage systems by using the host interface. Direct
connections to a WAN are possible.
Host interfaces with two different maximum transfer rates (10Gbit/s and 1Gbit/s) are available.
Figure 93 An iSCSI Connection for a Remote Copy between ETERNUS DX/AF Storage Systems (When Lines Are
Used)
[Diagram: copy source and copy destination ETERNUS DX/AF storage systems connected over a WAN (IP line) through switches, using iSCSI-RA ports on both sides.]

The IPsec function is not supported for iSCSI interfaces.
To use IPsec for a remote copy, select a LAN switch that provides the IPsec function.

Connectable Models
The following table shows the models that can be connected with a remote copy, and the interfaces that can be
used.
Table 50 Connectable Models and Available Remote Interfaces

                                           Remote interface
Connectable model                          FC (*1) from           FC (*1) from           iSCSI
                                           DX100 S4/DX200 S4      DX100 S3/DX200 S3
ETERNUS DX100 S4/DX200 S4                  ○                      ○                      ○
ETERNUS DX500 S4/DX600 S4                  ○                      ○                      ○
ETERNUS DX8100 S4/DX8900 S4                ○                      ○                      ○
ETERNUS DX100 S3/DX200 S3                  ○                      ○                      ○
ETERNUS DX500 S3/DX600 S3                  ○                      ○                      ○
ETERNUS DX8100 S3/DX8700 S3/DX8900 S3      ○                      ○                      ○
ETERNUS AF250 S2/AF650 S2                  ○                      ○                      ○
ETERNUS AF250/AF650                        ○                      ○                      ○
ETERNUS DX200F                             ○                      ○                      ○
ETERNUS DX90 S2                            ○                      ○                      ○
ETERNUS DX410 S2/DX440 S2                  ○                      ○                      ○ (*2)
ETERNUS DX8100 S2/DX8700 S2                ○                      ○                      ○ (*2)
ETERNUS DX90                               ×                      ○                      ×
ETERNUS DX410/DX440                        ×                      ○                      ×
ETERNUS DX8100/DX8400/DX8700               ×                      ○                      ×

○: Supported ×: Not supported

*1: A firmware update may be required. For information about firmware versions, contact your sales representative.
*2: This model cannot be connected to a remote interface (iSCSI-RA) of the ETERNUS DX S2 series. Use a host interface.

LAN Connection
The ETERNUS DX requires a LAN connection for operation management.
In addition, information such as ETERNUS DX failures is reported to the remote support center.

Make sure to connect each controller to the LAN for operation management.

Specifications for the LAN ports of the ETERNUS DX are shown below.
• Operation Mode
Ethernet (1000BASE-T/100BASE-TX/10BASE-T)
• Internet Protocol
IPv4 and IPv6 are supported.

■ IP Addresses for the ETERNUS DX


In order to connect to the LAN for operation management, an IP address for ETERNUS DX must be prepared in
advance.

LAN for Operation Management (MNT Port)


The system administrator logs on to the ETERNUS DX via a LAN to set the RAID configuration, manage operations, and perform maintenance.
In addition, failures that occur in the ETERNUS DX are reported to the remote support center. The remote support uses MNT ports for a network connection by default. In this case, the network connection for remote support is routed via the LAN for operation management. When the network connection for remote support needs to be separated from the LAN for operation management, refer to "LAN for Remote Support (RMT Port)" (page 154) and use the RMT ports to connect to the remote support center via a different network.

■ Connection Configurations
MNT ports are used for connecting a LAN for operation management.
Figure 94 Connection Example without a Dedicated Remote Support Port

[Diagram: the MNT ports of CM#0 and CM#1 (IP address A) connect through a LAN switch to the LAN for operation management, which hosts the administration terminal, SNMP manager, e-mail server, NTP server, syslog server, and authentication server, and reaches the remote support center. The RMT ports are not used.]

The following figure provides connection examples for when setting the IP address of the Slave CM.
Figure 95 Connection Example When the IP Address of the Slave CM Is Set (and a Dedicated Remote Support
Port Is Not Used)

[Diagram: the same configuration as Figure 94, with the IP addresses of both the Master CM (IP address A) and the Slave CM (IP address B) set on the MNT ports. The RMT ports are not used.]

LAN for Remote Support (RMT Port)


When the network connection for the remote support needs to be separated from the company LAN, use the
RMT ports to connect to the remote support center via a different network.

■ AIS Connect Function

● Overview of the AIS Connect Function


Figure 96 Overview of the AIS Connect Function
[Diagram: the ETERNUS DX in the customer network connects through the customer LAN, firewalls, and a demilitarized zone (DMZ) to the Internet, reaching the AIS Connect monitoring and diagnosis servers at the remote support center through an encrypted HTTPS tunnel.]

The diagram above describes the overview of the AIS Connect function. The left-hand side represents the customer and the right-hand side represents the service provider. The connection setup initiative always comes from the customer side, based on regular AIS Connect agent contacts (Simple Object Access Protocol (SOAP) messages) with the AIS Connect server that can be reached via the Internet.
AIS Connect agent requests to the AIS Connect server can be handled directly and immediately. AIS Connect server requests to the AIS Connect agent (remote access) cannot be sent until the next contact has been set up. If remote access of AIS Connect is enabled by the customer, the AIS Connect agent executes requests from customer operations, such as setting up a tunnel for remote access or initiating a file transfer.
Contact setup and request processing are performed via an HTTPS tunnel. Under certain circumstances, the firewall must be configured on the customer side to enable this type of tunnel to be set up. Likewise, proxies (plus ID and password) can be specified during Internet access configuration.
AIS Connect agent can perform the following actions:
• Notifying events (Information event, Warning event, or Error event) in the ETERNUS DX to the AIS Connect
server
• Sending ETERNUS DX logs to the AIS Connect server
• Remote access from the AIS Connect server to the ETERNUS DX

● Security Features
Figure 97 Security Features

[Diagram: the ETERNUS DX behind a firewall initiates connections to the AIS Connect server (allowed); connections initiated from the AIS Connect server side are rejected by the firewall.]

Connection from an AIS Connect agent to the AIS Connect server can be set up via a SOAP message that is based on HTTPS. Access can only be initiated by an AIS Connect agent at the customer's ETERNUS DX site, as illustrated by the diagram above. The AIS Connect server offers a certificate and the AIS Connect agent verifies this certificate for every connection setup. All transferred data is protected against spying and manipulation.
For the setup procedure for remote support (by AIS Connect), refer to "Configuration Guide (Basic)".

In some regions, the usage of AIS Connect is limited to contract customers.


Contact the Support Department for details.

■ REMCS
For the setup procedure for remote support (by REMCS), refer to "Configuration Guide (Basic)".
The sections shown below explain how to configure the ETERNUS DX for remote support. For details on the settings, refer to "Configuration Guide (Web GUI)".

■ Connection Configurations
For the ETERNUS DX, two IP addresses are required (one IP address for the MNT port and one IP address for the
RMT port).
Figure 98 Connection Example with a Dedicated Remote Support Port

[Diagram: the MNT ports (IP address A) connect through a LAN switch to the LAN for operation management (administration terminal, SNMP manager, e-mail server, NTP server, syslog server, and authentication server). The RMT ports (IP address C) connect through a router to the remote support center. Different LANs are used for the MNT ports and the RMT ports.]

The following figure provides connection examples and explains the necessary preparation for when the IP
address of the Slave CM is set.
For the ETERNUS DX, three IP addresses are required (two IP addresses for the MNT ports and one IP address
for the RMT port).
Figure 99 Connection Example When the IP Address of the Slave CM Is Set (and a Dedicated Remote Support
Port Is Used)

[Diagram: the same configuration as Figure 98, with the IP addresses of both the Master CM (IP address A) and the Slave CM (IP address B) set on the MNT ports and a separate IP address (IP address C) on the RMT port connecting through a router to the remote support center. Different LANs are used for the MNT ports and the RMT ports.]

LAN Control (Master CM/Slave CM)


This section explains how the LAN control controller of the ETERNUS DX operates.
When an ETERNUS DX has two controllers, the controller (CM) that is given the authority to manage the LAN is
called the Master CM and the other CM is called the Slave CM.
When an error occurs in the Master CM or LAN, the Master CM is switched automatically.

IP addresses of the LAN ports are not assigned to each CM; they are assigned to the Master or Slave role. If the Master CM is switched, the same IP addresses are reused. Therefore, even if the Master CM is switched and the physical port changes, access can be maintained via the same IP addresses. The MAC address, however, is not inherited.
Figure 100 LAN Control (Switching of the Master CM)

[Figure: CM#0 is initially the Master CM, with IP address A assigned to its MNT port. When CM#0 fails, CM#1 becomes the Master CM, and the IP addresses of the previous Master CM, including IP address A, are taken over by the new Master CM.]

• Each CM has an LED that lights up green to identify when it is the Master CM.
• Setting the IP address of the Slave CM ensures that ETERNUS Web GUI or ETERNUS CLI can be used from the
Slave CM if an error occurs on the LAN path for the Master CM.
The Master CM and the Slave CM perform different functions. The Slave CM can only switch the Master CM
and display the status of the ETERNUS DX.
The IP address of the Slave CM does not need to be set for normal operation.
Figure 101 LAN Control (When the IP Address of the Slave CM Is Set)

[Figure: CM#0 is the Master CM (IP address A) and CM#1 is the Slave CM (IP address B). When a LAN path error occurs on the Master CM side, the IP address of the Slave CM is used to switch the Master CM and display the status of the ETERNUS DX.]


Network Communication Protocols


The usable LAN ports and available functions differ depending on the usage and protocol.
The following table shows how the LAN ports may be used.
Table 51 LAN Port Availability

| Usage | Protocol | tcp/udp | Port number | Direction | Master CM MNT | Master CM RMT | Slave CM MNT | Slave CM RMT | Remarks |
|---|---|---|---|---|---|---|---|---|---|
| ETERNUS Web GUI | http / https | tcp | 80 / 443 | from | ○ | ○ | △ (*1) | △ (*1) | Accessed from a Web browser |
| ETERNUS CLI | telnet / ssh | tcp | 23 / 22 | from | ○ | ○ | △ (*1) | △ (*1) | − |
| ETERNUS CLI | ftp (client) | tcp | 21 | to | ○ | ○ | △ (*1) | △ (*1) | − |
| SNMP (agent) | snmp | udp | 161 | from | ○ | ○ | ○ | ○ | − |
| SNMP (trap) | snmp trap | udp | Must be set | to | ○ (*2) | ○ (*2) | × | × | − |
| SMI-S | http / https | tcp | 5988 / 5989 | from | ○ | × | × | × | Used for SMI-S client communication |
| SMI-S | http / https | tcp | Must be set | to | ○ | × | × | × | Used for event communications with the SMI-S listener, etc. |
| SMI-S | SLP | tcp | 427 | from/to | ○ | × | × | × | Used for service inquiry communication from the SMI-S client |
| E-mail | smtp (client) | tcp | 25 (*3) | to | ○ (*2) | ○ (*2) | × | × | Used for failure notification, etc. |
| NTP | NTP (client) | udp | 123 | to | ○ (*2) | ○ (*2) | × | × | − |
| REMCS (remote support) | smtp | tcp | Must be set | to | ○ (*2) | ○ (*2) | × | × | Used for failure notification, etc. |
| REMCS (remote support) | http (client) | tcp | Must be set | to | ○ (*2) | ○ (*2) | × | × | Used for firmware download, etc. |
| AIS Connect (remote support) | https (client) | tcp | 443 | to | ○ (*2) | ○ (*2) | × | × | − |
| Syslog (event notification and audit log sending) | Syslog | udp | Must be set | to | ○ (*2) | ○ (*2) | × | × | − |
| RADIUS | Radius | udp | Must be set | to | ○ (*2) | ○ (*2) | × | × | − |
| ping | ICMP | − | − | from | ○ (*2) | ○ (*2) | × | × | − |
| KMIP (key management) | SSL | tcp | 5696 (*3) | to | ○ (*2) | ○ (*2) | × | × | − |
| ETERNUS DX Discovery | Unique protocol | udp | 9686 | from | ○ | × | × | × | − |

○: Available / △: Available for some functions / ×: Not available

*1: Only the following functions are available:
• Checking the ETERNUS DX status
• Switching the Master CM
*2: Either the MNT port or the RMT port may be used.
*3: Modifiable
For details on the port numbers for the Storage Foundation Software ETERNUS SF, refer to the manual of each
Storage Foundation Software ETERNUS SF.
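When verifying firewall or LAN switch settings against the ports in Table 51, a simple reachability probe can save time. The sketch below is illustrative only (the address is a documentation placeholder, not an actual ETERNUS DX setting); it checks whether a TCP port accepts connections:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the ETERNUS Web GUI ports from Table 51 on the Master CM
# MNT port (the address below is a placeholder).
# mnt_ip = "192.0.2.10"
# for port in (80, 443):
#     print(port, tcp_port_open(mnt_ip, port))
```

Note that this only confirms TCP reachability; UDP services such as SNMP or NTP need protocol-specific checks.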


Power Supply Connection


Connect the power cords (AC cables) of the ETERNUS DX to the power sockets, the UPS sockets, or the power control unit sockets.
For details about the types and the number of power sockets that can be used, refer to "Power Socket Specifications" in the "Site Planning Guide".

Input Power Supply Lines


For details on input power supply lines, refer to "Site Planning Guide".

UPS Connection
It is recommended that an Uninterruptible Power Supply System (UPS) is used as the power supply source for the ETERNUS DX to cope with power outages and momentary voltage drops in the normal power supply.
Note that when connecting an ETERNUS DX to a single UPS, the total power requirement of all the enclosures must not exceed the UPS output capacity.
When one of the power supply lines fails in a redundant UPS configuration with two power supply lines, all of the power for the ETERNUS DX must be supplied from the remaining line. Select a UPS that can supply sufficient power so that the total power requirement does not exceed the UPS output capacity even when only one power supply line is available.
For details about the necessary UPS output capacity, refer to the specifications of the UPS that is used.
A UPS must satisfy the following conditions:

● Rating Capacity
Secure a sufficient rating capacity for the total value of the maximum power requirements for the enclosures
that are to be installed.
To find the maximum power level requirements for each enclosure, refer to "Installation Specifications" in "Site
Planning Guide".
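The rating capacity condition is simple arithmetic: the sum of the maximum power requirements of all enclosures must not exceed the UPS output capacity. A minimal sketch, assuming placeholder wattages (take the real per-enclosure values from "Installation Specifications" in the "Site Planning Guide"):

```python
def ups_capacity_sufficient(enclosure_watts, ups_output_watts, margin=1.0):
    """Return True if the UPS output capacity covers the total maximum power
    requirement of all installed enclosures (optionally with a safety margin)."""
    total = sum(enclosure_watts) * margin
    return total <= ups_output_watts

# Placeholder figures for one controller enclosure and two drive enclosures;
# substitute the real maximum power requirements for your configuration.
# print(ups_capacity_sufficient([650, 450, 450], ups_output_watts=2000))
```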

● Supply Time
The battery must be able to supply power for at least the total time needed for the server to shut down and for the ETERNUS DX to power off.

● Switching Time during Power Failure


The normal power supply must be switched to UPS output within 15ms after a power failure occurs.

● Socket Type
If the power plug type and the socket type of the UPS do not match, the UPS (AC output) must be equipped with an appropriate type of socket. Request a qualified electrician to perform the work required to make terminal block connections available.

● Power Supply Configuration


If a UPS is used, make sure that it supplies power to all the enclosures.
Configurations where the controller enclosure is powered by the UPS while the drive enclosures are powered directly from AC are not supported.


Power Synchronized Connections


This section describes the connections used to automatically power the ETERNUS DX on and off together with a server.
To control powering the ETERNUS DX on and off from servers, the power control of the ETERNUS DX must be linked with all of the connected servers.

Power Synchronized Connections (PWC)

■ Power Synchronized Unit


A power synchronized unit enables the ETERNUS DX to be powered on and off together with a server. The power synchronized unit detects changes in the AC power output of a UPS unit that is connected to a server (server UPS unit) and automatically turns the ETERNUS DX on and off. In addition to server UPS units, units that control the AC socket power output can also be connected. When three or more servers are connected, power can be synchronized by adding an AC sensor unit.

● Power Synchronization via a Server UPS Connection


The power synchronized unit detects the AC power output of the target devices for power synchronization and commands the ETERNUS DX to synchronize its power state with those devices.
When the power synchronized unit detects AC power output from any server UPS unit, it commands the ETERNUS DX to turn on.
When the power synchronized unit detects no AC power output from any of the server UPS units, it commands the ETERNUS DX to turn off.
The management software of the server UPS unit must be able to control the AC power output according to when the server powers on and off. The server UPS unit must also have one unused outlet to connect to the power synchronized unit.
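The rule described above reduces to: command power-on while at least one server UPS unit still has AC output, and command power-off only when none do. A sketch of that decision rule (an illustration only, not the power synchronized unit's actual firmware):

```python
def eternus_should_be_on(ups_ac_outputs):
    """Decide the commanded power state of the ETERNUS DX from the AC-output
    state of each monitored server UPS unit: on while any unit has AC output,
    off once none of them do."""
    return any(ups_ac_outputs)

# Two server UPS units, one still supplying AC output:
# eternus_should_be_on([True, False])   -> True  (storage stays on)
# eternus_should_be_on([False, False])  -> False (storage is turned off)
```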
• When connecting one or two servers
Figure 102 Power Supply Control Using a Power Synchronized Unit (When Connecting One or Two Servers)

[Figure: The server's UPS management software controls the AC output of the server UPS unit. The power synchronized unit detects the AC output state of the UPS and is connected to the PWC port of the ETERNUS DX with an RS232C cable; an instruction to turn the ETERNUS DX on or off is issued when AC output or no AC output is detected. The numbered steps correspond to the powering on/off sequences described below.]


Powering on sequence
1 Power on of all the server UPS units

2 Server startup
The server OS startup is suspended until the ETERNUS DX startup is complete (*1).

3 Issuance of command to turn on the ETERNUS DX from the power synchronized unit

4 ETERNUS DX startup

*1: The server must be set to suspend server OS startup until the ETERNUS DX startup is complete.
Powering off sequence
1 Shutdown of all the servers

2 Shutdown of all the server UPS units

3 Issuance of command to turn off the ETERNUS DX from the power synchronized unit

4 ETERNUS DX shutdown


• When connecting three or more servers


Figure 103 Power Supply Control Using a Power Synchronized Unit (When Connecting Three or More Servers)

[Figure: Four servers (Server#0 through Server#3), each with its own UPS unit. The additional server UPS units are monitored through AC sensor units (AC sensor unit#0 and #1) connected to power synchronized unit#0, which is connected to the PWC port of the ETERNUS DX. The numbered steps correspond to the powering on/off sequences described below.]

Powering on sequence
1 Power on of all the server UPS units

2 Server startup
The server OS startup is suspended until the ETERNUS DX startup is complete (*1).

3 Issuance of command to turn on the ETERNUS DX from the power synchronized unit

4 ETERNUS DX startup

*1: The server must be set to suspend server OS startup until the ETERNUS DX startup is complete.


Powering off sequence


1 Shutdown of all the servers

2 Shutdown of all the server UPS units

3 Issuance of command to turn off the ETERNUS DX from the power synchronized unit

4 ETERNUS DX shutdown
Refer to the manual provided with the power synchronized unit for details about connection configurations and required settings.

Power Synchronized Connections (Wake On LAN)


By using Wake On LAN, an instruction to power on the storage system can be issued via the LAN.
A "magic packet" is sent by the utility software for Wake On LAN. The ETERNUS DX detects this packet and powers on.
Figure 104 Power Supply Control Using Wake On LAN

[Figure: The administration terminal issues a power-on instruction (1) over the LAN to the MNT ports of CM#0 and CM#1 in the ETERNUS DX, which then starts up (2).]

Powering on sequence
1 A power on instruction is issued to the ETERNUS DX via the LAN.

2 ETERNUS DX startup
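The magic packet format is standard: six 0xFF bytes followed by the target MAC address repeated 16 times, typically sent as a UDP broadcast. As an illustration only (the MAC address below is a placeholder, and the utility software supplied for Wake On LAN should normally be used instead), the packet can be built and sent as follows:

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a Wake On LAN magic packet: 6 x 0xFF followed by the
    target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255",
                      port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast (port 9 is the common choice)."""
    packet = make_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Example (hypothetical MAC address of an MNT port):
# send_magic_packet("00:19:99:aa:bb:cc")
```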

5. Hardware Configurations

Multiple options can be selected for the ETERNUS DX according to the customer's requirements. This chapter describes the installation conditions and standard installation rules for each component.


Configuration Schematics
The following diagrams show minimum and maximum configurations for ETERNUS DX storage systems.

■ Minimum Configuration
Figure 105 Minimum Configuration Diagram: ETERNUS DX100 S4/DX200 S4

[Figure: A controller enclosure (CE) with one controller module (CM#0) containing a CA, CPU, memory, BUD, IOC, BBU, and EXP, together with an operation panel, two PSUs, and a mid plane.]

PANEL: Operation Panel
CE: Controller Enclosure
CM: Controller Module
CA: Channel Adapter (host interface)
Memory: System Memory
BUD: Bootup and Utility Device (backup area in case of power outage; storage area for firmware)
BBU: Battery Backup Unit (backup power source in case of power outage)
IOC: I/O Controller (controls I/O)
EXP: SAS Expander (expander chip for SAS connections)
MP: Mid Plane (board located between the front of the enclosure and the rear of the enclosure, that is, the controller (CM) or I/O module (IOM) side)
PSU: Power Supply Unit


Figure 106 Minimum Configuration Diagram: ETERNUS DX100 S3/DX200 S3

[Figure: A controller enclosure (CE) with one controller module (CM#0) containing a CA, CPU, memory, BUD, IOC, SCU, and EXP, together with an operation panel, two PSUs, and a mid plane.]

PANEL: Operation Panel
CE: Controller Enclosure
CM: Controller Module
CA: Channel Adapter (host interface)
Memory: Cache Memory
BUD: Bootup and Utility Device (backup area in case of power outage; storage area for firmware)
SCU: System Capacitor Unit (backup power source in case of power outage)
IOC: I/O Controller (controls I/O)
EXP: SAS Expander (expander chip for SAS connections)
MP: Mid Plane (board located between the front of the enclosure and the rear of the enclosure, that is, the controller (CM) or I/O module (IOM) side)
PSU: Power Supply Unit


■ Maximum Configuration
Figure 107 Maximum Configuration Diagram: ETERNUS DX100 S4/DX200 S4

[Figure: A controller enclosure (CE) with two controller modules (CM#0 and CM#1), each containing two CAs, a CPU, memory, BUD, IOC, BBU, and EXP; a high-density drive enclosure (HD-DE) with two I/O modules (IOM#0 and IOM#1), four PSUs, and two FEMs; and a drive enclosure (DE) with two I/O modules and two PSUs.]

PANEL: Operation Panel
CE: Controller Enclosure
CM: Controller Module
CA: Channel Adapter (host interface)
Memory: System Memory
BUD: Bootup and Utility Device (backup area in case of power outage; storage area for firmware)
BBU: Battery Backup Unit (backup power source in case of power outage)
IOC: I/O Controller (controls I/O)
EXP: SAS Expander (expander chip for SAS connections)
DE: Drive Enclosure
IOM: I/O Module (controls I/O between controllers and drives)
HD-DE: High-density Drive Enclosure
FEM: Fan Expander Module (cooling fan module installed in a high-density drive enclosure)
MP: Mid Plane (board located between the front of the enclosure and the rear of the enclosure, that is, the controller (CM) or I/O module (IOM) side)
PSU: Power Supply Unit


Figure 108 Maximum Configuration Diagram: ETERNUS DX100 S3/DX200 S3

[Figure: A controller enclosure (CE) with two controller modules (CM#0 and CM#1), each containing two CAs, a CPU, memory, BUD, IOC, SCU, and EXP; a high-density drive enclosure (HD-DE) with two I/O modules (IOM#0 and IOM#1), four PSUs, and two FEMs; and a drive enclosure (DE) with two I/O modules and two PSUs.]

PANEL: Operation Panel
CE: Controller Enclosure
CM: Controller Module
CA: Channel Adapter (host interface)
Memory: Cache Memory
BUD: Bootup and Utility Device (backup area in case of power outage; storage area for firmware)
SCU: System Capacitor Unit (backup power source in case of power outage)
IOC: I/O Controller (controls I/O)
EXP: SAS Expander (expander chip for SAS connections)
DE: Drive Enclosure
IOM: I/O Module (controls I/O between controllers and drives)
HD-DE: High-density Drive Enclosure
FEM: Fan Expander Module (cooling fan module installed in a high-density drive enclosure)
MP: Mid Plane (board located between the front of the enclosure and the rear of the enclosure, that is, the controller (CM) or I/O module (IOM) side)
PSU: Power Supply Unit


■ Enclosure Connection Path


In the ETERNUS DX, multiple paths are used to connect a controller enclosure (CE) to drive enclosures (DE).
A drive enclosure has two independent drive interface ports. Path redundancy is maintained by connecting each drive enclosure directly to both controllers. This configuration allows operation to continue even if one of the connection paths fails.
Connect the controller enclosure to the drive enclosures as the following figures show.
Connection paths are not duplicated when only one controller is installed.
Figure 109 Enclosure Connection Path (When Only One Controller Is Installed)
[Figure: The DI (OUT) port of the CE connects to the DI (IN) port of DE#01, and the DI (OUT) port of each drive enclosure connects to the DI (IN) port of the next one (DE#02 through DE#0x, up to DE#0A), forming a single, non-redundant cascade.]

DI (OUT): Drive interface (OUT) port
DI (IN): Drive interface (IN) port

Figure 110 Enclosure Connection Path (When Two Controllers Are Installed)

[Figure: With two controllers, each drive enclosure (DE#01 through DE#0A) is connected over two independent paths, one from each controller's drive interface ports, so that all drive enclosures remain accessible even if one connection path fails.]

DI (OUT): Drive interface (OUT) port
DI (IN): Drive interface (IN) port


Optional Product Installation Conditions


This section describes the types and number of optional products that are available and explains the policies for estimating optional products.
Installation conditions for the following optional products are described in this section.
• Controller modules
• Memory Extension
• Host interfaces
• Unified License
• Drive enclosures
• I/O modules
• Drives

To install the following optional products, a specific firmware version may be required for the ETERNUS DX.
For details, refer to the "Firmware Release Information" section in "Overview" or refer to "Product List".
• ETERNUS DX100 S4/DX200 S4
- Host interfaces (FC 32Gbit/s)
• ETERNUS DX100 S3/DX200 S3
- Memory Extension
- Host interfaces (FC 16Gbit/s)
- Host interfaces (FC 8Gbit/s)
- Host interfaces (SAS 6Gbit/s)
- Host interfaces (iSCSI 10Gbit/s, 10GBASE-T)
- Unified kit
- Unified License
- High-density drive enclosures (12Gbit/s)
- Active optical cables
- Nearline SAS self-encrypting disks
- SSDs (12Gbit/s)
- Self encrypting SSDs
- Advanced Format disks

Controller Module
This section explains the installation conditions for controller modules.

■ Types of Controller Modules


The following types of controller are available: a controller that has no host interfaces and a controller that has host interfaces.


■ Number of Installable Controller Modules


Up to two controller modules can be installed in a controller enclosure.

Memory Extension
This section explains the installation conditions for the Memory Extension.

When only one controller is installed in the controller enclosure, Memory Extension cannot be installed.

● ETERNUS DX100 S4
Memory Extension is required to use the Unified License.
This option cannot be installed if the Unified License (NAS function) is not used.
Only one Memory Extension can be installed for each ETERNUS DX.
The Memory Extension is installed as standard in the controller module for Unified models.

● ETERNUS DX200 S4
Memory Extension is required to use the Deduplication/Compression function or the Unified License.
This option cannot be installed if the Deduplication/Compression function or the Unified License (NAS function)
is not used.
Only one Memory Extension can be installed for each ETERNUS DX.
The Memory Extension is installed as standard in the controller module for standard models and the controller
module for Unified models.

● ETERNUS DX100 S3
Memory Extension is required to use the Unified License.
This option cannot be installed if the Unified kit/Unified License (NAS function) is not used.
Only one Memory Extension can be installed for each ETERNUS DX.
Memory Extension (ETFMCC-L / ETDMCCU-L) can be installed to expand the NAS function if a Unified kit (ETFLN1U / ETFLN1U-L / ETDLN1U / ETDLN1U-L) or Memory Extension (ETFMCA / ETFMCA-L / ETDMCAU / ETDMCAU-L) is already installed.

● ETERNUS DX200 S3
Memory Extension is required to use the Deduplication/Compression function or the Unified License.
Memory Extension cannot be installed if the Deduplication/Compression function or the Unified kit/Unified License (NAS function) is not used.
Only one Memory Extension can be installed for each ETERNUS DX.
Memory Extension (ETFMCB-L / ETDMCBU-L) can be installed to expand the NAS function if a Unified kit
(ETFLN2U / ETFLN2U-L / ETDLN2U / ETDLN2U-L) is already installed.


Host Interfaces
This section explains the installation conditions for host interfaces.

■ Types of Host Interfaces


The following types are available for SAN or NAS connections.

● ETERNUS DX100 S4/DX200 S4


• SAN connection
- FC 32Gbit/s
- FC 16Gbit/s
- FC 8Gbit/s
- iSCSI 10Gbit/s (10GBASE-SR (*1)/10GBASE-CR)
- iSCSI 10Gbit/s (10GBASE-T)
- iSCSI 1Gbit/s (1000BASE-T)
- SAS 12Gbit/s
• NAS connection
- Ethernet 10Gbit/s (*1)
- Ethernet 1Gbit/s

*1: The SFP+ modules are not installed. SFP+ modules are required to connect the cables.

● ETERNUS DX100 S3/DX200 S3


• SAN connection
- FC 16Gbit/s
- FC 8Gbit/s
- iSCSI 10Gbit/s (10GBASE-SR)
- iSCSI 10Gbit/s (10GBASE-CR)
- iSCSI 10Gbit/s (10GBASE-T)
- iSCSI 1Gbit/s (1000BASE-T)
- FCoE 10Gbit/s
- SAS 6Gbit/s
• NAS Connection
- Ethernet 10Gbit/s
- Ethernet 1Gbit/s

The features of each host interface are described below.

● FC
Fibre Channel (FC) enables high speed data transfer over long distances by using optical fibers and coaxial ca-
bles. FC is used for database servers where enhanced scalability and high performance are required.


● iSCSI
iSCSI is a communication protocol that transfers SCSI commands by encapsulating them in IP packets over Ethernet.
Since iSCSI can be installed at a lower cost and its network configuration is easier to change than FC, iSCSI is commonly used by divisions of large companies and by small and medium-sized companies where scalability and cost-effectiveness are valued over performance.

● FCoE
Since Fibre Channel over Ethernet (FCoE) encapsulates FC frames and transfers them over Ethernet, a LAN environment and an FC-SAN environment can be integrated. When there are networks for multiple I/O interfaces (e.g. in a data center), the networks can be integrated and managed.

● SAS
Serial Attached SCSI (SAS) enables high speed data transfers by upgrading existing reliable SCSI connection standards to allow serial transfers. SAS is commonly used for small-sized systems where performance and cost-effectiveness are valued over scalability.

● Ethernet
Ethernet is a NAS connection interface. An Ethernet network can be accessed as a file server from multiple connected clients with different operating systems, for easy sharing of data. The ETERNUS DX has an internal file system for connection to an existing LAN, allowing Ethernet to be installed relatively easily. However, the load on the network increases, and therefore Ethernet is not suitable for applications that require high transfer rates.
A Unified kit is required to install an Ethernet host interface. Registering the Unified kit license to the storage system enables the NAS function.

■ Number of Installable Host Interfaces


Each controller can be fitted with two host interfaces, one base and one expansion. When one controller is installed, up to two host interfaces per storage system can therefore be installed; when two controllers are installed, up to four host interfaces per storage system can be installed.
Different types of host interfaces can coexist in the same ETERNUS DX.

Unified License
Installing the Unified License requires the Memory Extension and an Ethernet host interface, or a controller
module that is equipped with an Ethernet host interface.
The Memory Extension is installed as standard in the ETERNUS DX100 S4/DX200 S4 controller module for Unified
models and the ETERNUS DX200 S4 controller module for standard models.
The Memory Extension must be ordered if the ETERNUS DX100 S4 controller module for SAN connections, the
ETERNUS DX200 S4 controller module for economy models, or the ETERNUS DX100 S3/DX200 S3 is used.

Unified kit/Unified License cannot be installed if the Deduplication/Compression function is used.


Drive Enclosures
This section explains the installation conditions for drive enclosures.

When only one controller is installed in the controller enclosure, high-density drive enclosures cannot be installed.

■ Types of Drive Enclosures


There are three types of drive enclosures that correspond to available drive sizes (2.5", 3.5", and high-density
3.5").
Twenty-four 2.5" drives can be installed in a single 2.5" type drive enclosure.
Twelve 3.5" drives can be installed in a single 3.5" type drive enclosure.
Sixty high-density 3.5" drives can be installed in a high-density drive enclosure.

■ Number of Installable Drive Enclosures


Up to ten drive enclosures can be installed.
The number of each type of drive enclosure that can be installed is shown below.
Table 52 Number of Installable Drive Enclosures

| Type | ETERNUS DX100 S4/DX100 S3 | ETERNUS DX200 S4/DX200 S3 |
|---|---|---|
| 2.5" type drive enclosures | 5 | 10 |
| 3.5" type drive enclosures | 10 | 10 |
| High-density drive enclosures | 2 | 4 |

If different types of drive enclosures are installed, up to ten drive enclosures can be installed in the ETERNUS DX. Drive enclosures can be installed until the total number of drive slots reaches the maximum number that can be installed (144 for the ETERNUS DX100 S4/DX100 S3 and 264 for the ETERNUS DX200 S4/DX200 S3). When the number of drive slots reaches the limit, additional drive enclosures cannot be added.
For example, if four 2.5" type drive enclosures are installed in an ETERNUS DX100 S3 (2.5" type CE), up to two 3.5" type drive enclosures can be added. However, a high-density drive enclosure cannot be added.
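The slot arithmetic above can be expressed as a quick feasibility check: each 2.5" type DE adds 24 slots, each 3.5" type DE adds 12, and each high-density DE adds 60. The sketch below assumes a 24-slot 2.5" type CE and ignores the per-type limits of Table 52, so it is only a first-pass check:

```python
SLOTS_PER_DE = {"2.5": 24, "3.5": 12, "hd": 60}

def config_fits(ce_slots, de_types, max_slots, max_des=10):
    """Return True if the CE drive slots plus the slots of all listed drive
    enclosures stay within the system's maximum drive slot count and the
    maximum number of drive enclosures."""
    total = ce_slots + sum(SLOTS_PER_DE[t] for t in de_types)
    return total <= max_slots and len(de_types) <= max_des

# The example from the text: an ETERNUS DX100 S3 with a 2.5" type CE
# (24 slots assumed) and a 144-slot maximum. Four 2.5" DEs plus two
# 3.5" DEs fit exactly, but a high-density DE would exceed the limit.
# config_fits(24, ["2.5"] * 4 + ["3.5"] * 2, 144)  -> True
# config_fits(24, ["2.5"] * 4 + ["hd"], 144)       -> False
```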

I/O Module
Up to two I/O modules can be installed in a drive enclosure.


Drives
This section explains the installation conditions for drives.

■ Types of Drives
The ETERNUS DX supports the latest drives that have the high-speed SAS (Max. 12Gbit/s) interface.
The following drive types can be installed in the ETERNUS DX. Some drive types have a data encryption function.
• SAS disks
• Nearline SAS disks
• SSDs
2.5" and 3.5" drive sizes are available.
Since 2.5" drives are lighter and require less power than 3.5" drives, the total weight and power consumption when 2.5" drives are installed are lower than when the same number of 3.5" drives is installed.
When data I/O is compared per enclosure (2.5" drives: 24, 3.5" drives: 12), the Input Output Per Second (IOPS) performance of a 2.5" drive configuration is superior to that of a 3.5" drive configuration, since more 2.5" drives than 3.5" drives can be installed in an enclosure.

● SAS Disks
SAS disks are high-performance and high-reliability disks. They are used to store high performance databases
and other frequently accessed data.

• The following disks are Advanced Format (512e) disks.


- 1.8TB SAS disks
- 2.4TB SAS disks
- 2.5" SAS self encrypting disks (2.4TB)
• When using Advanced Format (512e) disks, make sure that Advanced Format (512e) is supported by the
server OS and the applications. If the server OS and applications do not support Advanced Format (512e),
random write performance may be reduced.

● Nearline SAS Disks


Nearline SAS disks are high capacity, cost effective disks for data backup and archive use. They can store data that is accessed infrequently at a reasonable speed and more cost effectively than SAS disks.


• Nearline SAS disks are used to store data that does not need the access performance of SAS disks. They are
far more cost effective than SAS disks. It is recommended that SAS disks be used for data that is constantly
accessed or when high performance/reliability is required.
• If the ambient temperature exceeds the operating environment conditions, Nearline SAS disk performance
may be reduced.
• Nearline SAS disks can be used as Advanced Copy destinations and for the storage of archived data.
• When Nearline SAS disks are used as an Advanced Copy destination, delayed access responses and slower
copy speeds may be noticed, depending on the amount of I/O and the number of copy sessions.
• The following disks are Advanced Format (512e) disks.
- 2.5" Nearline SAS disks (2TB)
- 3.5" Nearline SAS disks (6TB)
- 3.5" Nearline SAS disks (8TB)
- 3.5" Nearline SAS disks (10TB)
- 3.5" Nearline SAS disks (12TB)
- 3.5" Nearline SAS disks (14TB)
- 3.5" Nearline SAS self encrypting disks (8TB)
- 3.5" Nearline SAS self encrypting disks (12TB)
• When using Advanced Format (512e) disks, make sure that Advanced Format (512e) is supported by the
server OS and the applications. If the server OS and applications do not support Advanced Format (512e),
random write performance may be reduced.
• For details on the RAID levels that can be configured with Nearline SAS disks that have 6TB or more, refer to
"Supported RAID" (page 16).

● SSDs
SSDs are reliable drives with high performance. SSDs are used to store high performance databases and other
frequently accessed data.
Flash memory as a storage medium provides better random access performance than disks such as SAS disks and Nearline SAS disks. Because SSDs contain no motors or other moving parts, they are highly resistant to impact.
The ETERNUS DX supports SSDs that have a high-level wear-leveling function and sufficient reserved space. Note that if the expected total write capacity is exceeded, the frequency of errors gradually increases, which may reduce write performance. The ETERNUS DX has a function that shows the remaining write endurance as a percentage (health) of the expected total write capacity.
The number of rewrites may exceed the limit within the product warranty period if SSDs are used in a RAID1 configuration that has a high I/O access load. Using a RAID1+0, RAID5, RAID5+0, or RAID6 configuration is recommended.
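As a rough illustration of the health indicator described above, the remaining endurance can be modeled as the unused fraction of the expected total write capacity. This is a minimal sketch; the function name and the example endurance rating are hypothetical, not values taken from the ETERNUS firmware or any SSD specification.

```python
# Rough model of the "health" indicator described above: the remaining
# write endurance as a percentage of the expected total write capacity.
# The function name and the example endurance rating are hypothetical.

def ssd_health_percent(bytes_written: float, expected_total_write_bytes: float) -> float:
    """Return remaining endurance: 100% when unused, 0% at the rated limit."""
    if expected_total_write_bytes <= 0:
        raise ValueError("expected total write capacity must be positive")
    remaining = 1.0 - bytes_written / expected_total_write_bytes
    return max(0.0, min(100.0, remaining * 100.0))

# Example: a drive with a (hypothetical) 2,920 TB expected total write
# capacity that has absorbed 730 TB of writes still reports 75% health.
TB = 10**12
print(ssd_health_percent(730 * TB, 2920 * TB))  # 75.0
```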
The following types of SSDs are supported in the ETERNUS DX.
• MLC SSDs (with a 12Gbit/s or a 6Gbit/s interface speed) that are compatible with Extreme Cache Pools
• Value SSDs (with a 12Gbit/s interface speed) that are incompatible with Extreme Cache Pools
Value SSDs are available at a lower cost per capacity than conventional SSDs by optimizing the guaranteed write
endurance and the reserved space.
When random access (especially with write access) continues for several hours, the performance of Value SSDs
may be reduced compared with MLC SSDs that support Extreme Cache Pool.
Note that the product's actual capacity of a Value SSD differs from the capacity that is displayed in ETERNUS Web
GUI and ETERNUS CLI. For example, 2.00TB is displayed for a 1.92TB Value SSD.


If SSDs are installed in the high-density drive enclosure (6Gbit/s), the SSDs operate at 6Gbit/s regardless of
the interface speed.

The drive characteristics of SAS disks, Nearline SAS disks, and SSDs are shown below.
Table 53 Drive Characteristics

Type | Reliability | Performance | Price (*1)
SAS disks | ○ | ○ | ○
Nearline SAS disks | △ | △ | ◎
SSDs | ◎ | ◎ | △

◎: Very good ○: Good △: Reasonable

*1: Symbols compare the relative prices of each drive type; the price rises in the order ◎, ○, and △.

■ Number of Installable Drives


The maximum number of installable drives is 144 for the ETERNUS DX100 S4/DX100 S3 and 264 for the ETERNUS
DX200 S4/DX200 S3.
The following table shows the number of installable drives when the maximum number of drive enclosures is installed.
Table 54 Number of Installable Drives

Type ETERNUS DX100 S4/DX100 S3 ETERNUS DX200 S4/DX200 S3


2.5" type drive enclosures 144 (*1) 264 (*1)
3.5" type drive enclosures 132 (*2) 132 (*2)
High-density drive enclosures 132 (*2) 252 (*2)

*1: Number of drives when 2.5" controller enclosures are used.
*2: Number of drives when 3.5" controller enclosures are used.
For details on the number of required hot spares, refer to "■ Number of Installable Hot Spares" (page 29) in "Hot
Spares" (page 29).


Standard Installation Rules


This section describes the standard installation rules for the following optional products.
• Controller modules
• Host interfaces
• Drive enclosures
• I/O modules
• Drives

Controller Module
This section describes the standard installation rules for controller modules.
Installing controller modules allows the controller enclosure to connect with drive enclosures even if no drives are installed.

■ Installation Order
Install CM#0 first, followed by CM#1.
Figure 111 Controller Installation Order

Controller 0 (CM#0) Controller 1 (CM#1)

Rear view of a controller enclosure


Host Interface
This section describes the standard installation rules for host interfaces.

■ When Only One Controller Is Installed


Install CA#0 first, and then install CA#1.
Figure 112 Installation Diagram for Host Interfaces (When Only One Controller Is Installed)

Port Port Port Port


#0 #1 #0 #1
CA#0 CA#1

Expansion
Controller 0 (CM#0)

Rear view of a controller enclosure

■ When Two Controllers Are Installed

● Installation Order for the ETERNUS DX100 S4/DX100 S3


Install the host interfaces in CA#0 first, and then install CA#1.
The configuration is the same for Controller 0 (CM#0) and Controller 1 (CM#1).
Note that the positions of CA#0 and CA#1 depend on the controller type.
If an FC 16Gbit/s or FC 8Gbit/s is installed in the controller as the standard host interface, CA#0 is located on the
right side of the controller.
Figure 113 Host Interface Installation Diagram 1 (When Two Controllers Are Installed in the ETERNUS DX100 S4/
DX100 S3)

(Rear view of a controller enclosure: each of Controller 0 (CM#0) and Controller 1 (CM#1) has an Expansion port and two host interface slots with Port#0 and Port#1 each; CA#0 is on the right side and CA#1 is on the left side of each controller.)


If an FC 16Gbit/s or FC 8Gbit/s that is sold separately from the controller is installed as the standard host interface, or if an FC 32Gbit/s, an iSCSI, an SAS, an FCoE, or an Ethernet is installed in the controller as the standard host interface, CA#0 is located on the left side of the controller.
Figure 114 Host Interface Installation Diagram 2 (When Two Controllers Are Installed in the ETERNUS DX100 S4/
DX100 S3)

Port Port Port Port Port Port Port Port


#0 #1 #0 #1 #0 #1 #0 #1
CA#0 CA#1 CA#0 CA#1
Expansion Expansion
Controller 0 (CM#0) Controller 1 (CM#1)

Rear view of the controller enclosure

● Installation Order for the ETERNUS DX200 S4/DX200 S3


Install the host interfaces in CA#0 first, and then install CA#1.
The configuration is the same for Controller 0 (CM#0) and Controller 1 (CM#1).
Figure 115 Host Interface Installation Diagram (When Two Controllers Are Installed in the ETERNUS DX200 S4/
DX200 S3)

Port Port Port Port Port Port Port Port


#0 #1 #0 #1 #0 #1 #0 #1
CA#0 CA#1 CA#0 CA#1
Expansion Expansion
Controller 0 (CM#0) Controller 1 (CM#1)

Rear view of the controller enclosure


Drive Enclosure
This section describes the installation order for drive enclosures.
2.5" type, 3.5" type, and high-density drive enclosures can be installed together in the ETERNUS DX.
Drive enclosures can connect with other enclosures even if drives are not installed.
The installation priority order for drive enclosures varies depending on the controller enclosure type.

■ 2.5" Type Controller Enclosure


1 High-density drive enclosures (12Gbit/s)
2 High-density drive enclosures (6Gbit/s)
3 2.5" type drive enclosures
4 3.5" type drive enclosures

■ 3.5" Type Controller Enclosure


1 High-density drive enclosures (12Gbit/s)
2 High-density drive enclosures (6Gbit/s)
3 3.5" type drive enclosures
4 2.5" type drive enclosures
Drive enclosures are installed above the controller enclosure according to the priority order.

I/O Module
This section describes the standard installation rules for I/O modules.
By installing I/O modules, the drive enclosure can connect with other enclosures even if drives are not installed.

■ Installation Order
Install IOM#0 first, and then install IOM#1.
Figure 116 I/O Module Installation Order

I/O module 0 (IOM#0) I/O module 1 (IOM#1)

Rear view of a drive enclosure


Drive
This section explains the installation rules for drives.
The supported drives vary between the ETERNUS DX100 S4/DX200 S4 and the ETERNUS DX100 S3/DX200 S3. For
details about drives, refer to "Overview" of the currently used storage systems.

■ Drives for High-Density Drive Enclosures


The installation priority order for drives in high-density drive enclosures is shown below.
1 3.5" SSDs for high-density drive enclosures (1.6TB)
2 3.5" SSDs for high-density drive enclosures (1.92TB)
3 3.5" SSDs for high-density drive enclosures (3.84TB)
4 3.5" SAS disks for high-density drive enclosures (1.2TB/10krpm)
5 3.5" Nearline SAS disks for high-density drive enclosures (2TB/7.2krpm)
6 3.5" Nearline SAS disks for high-density drive enclosures (3TB/7.2krpm)
7 3.5" Nearline SAS disks for high-density drive enclosures (4TB/7.2krpm)
8 3.5" Nearline SAS disks for high-density drive enclosures (6TB/7.2krpm)
9 3.5" Nearline SAS disks for high-density drive enclosures (8TB/7.2krpm)
10 3.5" Nearline SAS disks for high-density drive enclosures (10TB/7.2krpm)
11 3.5" Nearline SAS disks for high-density drive enclosures (12TB/7.2krpm)
12 3.5" Nearline SAS disks for high-density drive enclosures (14TB/7.2krpm)
13 3.5" Nearline SAS self encrypting disks for high-density drive enclosures (4TB/7.2krpm)
14 3.5" Nearline SAS self encrypting disks for high-density drive enclosures (8TB/7.2krpm)
15 3.5" Nearline SAS self encrypting disks for high-density drive enclosures (12TB/7.2krpm)


Install drives in the slots of a high-density drive enclosure from Slot#0 to Slot#59 in ascending order according to
the installation order.
Figure 117 Drive Installation Diagram for High-Density Drive Enclosures
(Rear view: IOM/PSU/FEM modules. Front view: drive slots Slot#0 through Slot#59, arranged in rows of twelve; install drives in ascending slot-number order.)


■ 2.5" Drives
The installation priority order for 2.5" drives is shown below.
1 2.5" SSDs (400GB) (MLC SSDs)
2 2.5" SSDs (400GB) (Value SSDs)
3 2.5" SSDs (800GB)
4 2.5" SSDs (960GB)
5 2.5" SSDs (1.6TB)
6 2.5" SSDs (1.92TB)
7 2.5" SSDs (3.84TB)
8 2.5" SSDs (7.68TB)
9 2.5" SSDs (15.36TB)
10 2.5" SSDs (30.72TB)
11 2.5" self encrypting SSDs (400GB)
12 2.5" self encrypting SSDs (800GB)
13 2.5" self encrypting SSDs (1.6TB)
14 2.5" self encrypting SSDs (1.92TB)
15 2.5" self encrypting SSDs (3.84TB)
16 2.5" self encrypting SSDs (7.68TB)
17 2.5" SAS disks (300GB/15krpm)
18 2.5" SAS disks (600GB/15krpm)
19 2.5" SAS disks (900GB/15krpm)
20 2.5" SAS disks (300GB/10krpm)
21 2.5" SAS disks (600GB/10krpm)
22 2.5" SAS disks (900GB/10krpm)
23 2.5" SAS disks (1.2TB/10krpm)
24 2.5" SAS disks (1.8TB/10krpm)
25 2.5" SAS disks (2.4TB/10krpm)
26 2.5" SAS self encrypting disks (900GB/10krpm)
27 2.5" SAS self encrypting disks (1.2TB/10krpm)
28 2.5" SAS self encrypting disks (2.4TB/10krpm)
29 2.5" Nearline SAS disks (1TB/7.2krpm)
30 2.5" Nearline SAS disks (2TB/7.2krpm)


According to the installation order, install drives in the slots of a controller enclosure from Slot#0 to Slot#23 in
ascending order. Then, install drives in the slots of a drive enclosure from Slot#0 to Slot#23 in ascending order.
Figure 118 Installation Diagram for 2.5" Drives
(Front view of an enclosure: drive slots Slot#0 through Slot#23, arranged left to right; install drives in ascending slot-number order.)


■ 3.5" Drives
The installation priority order for 3.5" drives is shown below.
1 3.5" SSDs (400GB) (MLC SSDs)
2 3.5" SSDs (400GB) (Value SSDs)
3 3.5" SSDs (800GB)
4 3.5" SSDs (960GB)
5 3.5" SSDs (1.6TB)
6 3.5" SSDs (1.92TB)
7 3.5" SSDs (3.84TB)
8 3.5" self encrypting SSDs (400GB)
9 3.5" self encrypting SSDs (800GB)
10 3.5" self encrypting SSDs (1.6TB)
11 3.5" self encrypting SSDs (1.92TB)
12 3.5" self encrypting SSDs (3.84TB)
13 3.5" Nearline SAS disks (2TB/7.2krpm)
14 3.5" Nearline SAS disks (3TB/7.2krpm)
15 3.5" Nearline SAS disks (4TB/7.2krpm)
16 3.5" Nearline SAS disks (6TB/7.2krpm)
17 3.5" Nearline SAS disks (8TB/7.2krpm)
18 3.5" Nearline SAS disks (10TB/7.2krpm)
19 3.5" Nearline SAS disks (12TB/7.2krpm)
20 3.5" Nearline SAS disks (14TB/7.2krpm)
21 3.5" Nearline SAS self encrypting disks (4TB/7.2krpm)
22 3.5" Nearline SAS self encrypting disks (8TB/7.2krpm)
23 3.5" Nearline SAS self encrypting disks (12TB/7.2krpm)
According to the installation order, install drives in the slots of a controller enclosure from Slot#0 to Slot#11 in
ascending order. Then, install drives in the slots of a drive enclosure from Slot#0 to Slot#11 in ascending order.
Figure 119 Installation Diagram for 3.5" Drives

Slot#8 Slot#9 Slot#10 Slot#11

Slot#4 Slot#5 Slot#6 Slot#7

Slot#0 Slot#1 Slot#2 Slot#3
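The priority lists above amount to a sort key: when a mixed batch of drives is installed, the higher-priority model fills the lower-numbered slots first. A minimal sketch of that ordering, using an abbreviated, illustrative subset of the 3.5" priority list (the (type, capacity) tuples are not official model identifiers):

```python
# Sketch of the installation ordering implied by the priority lists
# above. PRIORITY is an abbreviated, illustrative subset of the
# documented 3.5" order; the tuples are not official identifiers.

PRIORITY = [
    ("SSD", "400GB"),
    ("SSD", "800GB"),
    ("SSD", "1.92TB"),
    ("Nearline SAS", "2TB"),
    ("Nearline SAS", "8TB"),
    ("Nearline SAS SED", "12TB"),
]

def install_order(drives):
    """Return the drives sorted by installation priority (Slot#0 first)."""
    return sorted(drives, key=PRIORITY.index)

batch = [("Nearline SAS", "8TB"), ("SSD", "1.92TB"), ("SSD", "400GB")]
print(install_order(batch))
# [('SSD', '400GB'), ('SSD', '1.92TB'), ('Nearline SAS', '8TB')]
```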


Recommended RAID Group Configurations


No restrictions on the installation location of drives apply if the same types of drives are used to create RAID
groups.
To improve reliability, consider the installation locations of the drives that make up a RAID group.
Distributing the drives of a RAID group across a larger number of enclosures improves data reliability if an enclosure fails.
For details on the recommended number of drives that configure a RAID group, refer to "RAID Group" (page 24).

■ Mirroring Configuration
RAID1+0(4D+4M) is used in the following examples to explain how drives are installed to configure a mirroring
RAID level.
The drive number is determined by the DE-ID of the enclosure and the slot number in which the drive is installed. Starting from the smallest drive number in the configuration, half of the drives are allocated to one group and the remaining drives to the other group. Drives at corresponding positions in the two groups are paired for mirroring.
Example 1: All drives are installed in a single enclosure
Figure 120 Drive Combination 1

DE#00

A B C D A' B' C' D'

Mirroring

Example 2: Paired drives are installed in two different enclosures


Figure 121 Drive Combination 2

DE#00

A B C D

Mirroring

DE#01

A' B' C' D'


Example 3: Paired drives are installed in three different enclosures


Figure 122 Drive Combination 3

DE#00

A B

Mirroring

DE#01

C A'

Mirroring

DE#02

B' C'
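The pairing rule described above can be sketched in a few lines: order the drives by drive number (DE-ID, then slot), split the ordered list in half, and mirror drives at the same position in each half. The drive tuples here are illustrative, not a controller API.

```python
# Sketch of the RAID1+0(4D+4M) pairing rule described above: drives are
# ordered by drive number (DE-ID, then slot), the first half forms one
# group, the second half the other, and drives at the same position in
# each group are mirrored pairs.

def mirror_pairs(drives):
    """drives: list of (de_id, slot) tuples; returns mirrored pairs."""
    ordered = sorted(drives)
    half = len(ordered) // 2
    return list(zip(ordered[:half], ordered[half:]))

# Example 2 above: four drives in DE#00 mirrored against four in DE#01.
drives = [(0, s) for s in range(4)] + [(1, s) for s in range(4)]
for data, mirror in mirror_pairs(drives):
    print(data, "<->", mirror)  # (0, 0) <-> (1, 0), (0, 1) <-> (1, 1), ...
```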

■ Double Striping Configuration with Distributed Parity


RAID5+0(2D+1P)×2 is used in the following examples to explain how to install drives that are configured for double striping with distributed parity.
The drive number is determined by the DE-ID of the enclosure and the slot number in which the drive is installed. Drives are divided into two redundant set groups in ascending order of drive numbers.
Example 4: Drives are installed in two different enclosures
Figure 123 Drive Combination 4

DE#00

A D B

DE#01

E C F

Redundancy Group 1
:
Redundancy Group 2
:


Example 5: Drives are installed in three different enclosures


Figure 124 Drive Combination 5

DE#00

A D

DE#01

B E

DE#02

C F

Redundancy Group 1
:
Redundancy Group 2
:
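One consistent reading of Examples 4 and 5 is that drives, ordered by drive number (DE-ID, then slot), are assigned to the two redundant sets alternately; this reproduces groups {A, B, C} and {D, E, F} from both figures. The alternating assignment is inferred from the examples, not quoted from the controller's algorithm.

```python
# Sketch of the division shown in Examples 4 and 5: ordering the drives
# by drive number and assigning them alternately yields the two
# redundancy groups from the figures. The alternating assignment is
# inferred from the examples, not a documented controller algorithm.

def redundancy_groups(drives):
    """drives: list of (de_id, slot, label); returns the two group label lists."""
    ordered = sorted(drives)              # ascending drive-number order
    group1 = [label for _, _, label in ordered[0::2]]
    group2 = [label for _, _, label in ordered[1::2]]
    return group1, group2

# Example 5 above: two drives in each of three enclosures.
drives = [(0, 0, "A"), (0, 1, "D"),
          (1, 0, "B"), (1, 1, "E"),
          (2, 0, "C"), (2, 1, "F")]
print(redundancy_groups(drives))  # (['A', 'B', 'C'], ['D', 'E', 'F'])
```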

6. Maintenance/Expansion

Hot Swap/Hot Expansion


"Hot swap" allows components to be replaced, or allows the firmware to be updated while the system is running.
"Hot expansion" allows components to be added while the system is running.

■ Hot Swap/Hot Expansion (ETERNUS DX100 S4/DX200 S4)


The table below shows whether hot swap or hot expansion for components of the ETERNUS DX100 S4/DX200 S4
is possible.
Table 55 Hot Swap and Hot Expansion Availability for Components (ETERNUS DX100 S4/DX200 S4)

Component | Hot swap | Hot expansion | Remarks

Controller enclosure (CE) | × | — | Replace the controller enclosure (CE) when the MP (*1) fails.
Controller module (CM) | ○ (*2) (*3) | ○ | —
System memory | ○ (*2) | — | —
Memory Extension | ○ (*2) | ○ (*2) | —
BBU | ○ (*2) | — | —
BUD | ○ (*2) | — | —
Controller firmware | ○ (*2) (*3) (*4) | — | Make sure to stop I/Os before implementation in a Unified configuration.
Host interface (FC-CA) | ○ (*2) | ○ (*2) | —
Host interface (10G iSCSI-CA) | ○ (*2) | ○ (*2) | —
Host interface (1G iSCSI-CA) | ○ (*2) | ○ (*2) | —
Host interface (SAS-CA) | ○ (*2) | ○ (*2) | —
Host interface (10G Ethernet-CA) | ○ (*2) (*3) | ○ (*2) (*3) | —
Host interface (1G Ethernet-CA) | ○ (*2) (*3) | ○ (*2) (*3) | —
Power supply unit (PSU) | ○ | — | —
Disk (HDD) | ○ | ○ | —
SSD | ○ | ○ | —
Operation panel (PANEL) | × (*5) | — | —
Disk firmware | ○ (*4) | — | —
Drive enclosure (DE)/high-density drive enclosure (HD-DE) | ○ | ○ | Replace the drive enclosure (DE)/high-density drive enclosure (HD-DE) when the MP (*1) fails.
Power supply unit (PSU) | ○ | — | —
Disk (HDD) | ○ | ○ | —
SSD | ○ | ○ | —
Operation panel (PANEL) | × (*5) | — | —
I/O module (IOM) | ○ | ○ (*6) | —
Fan Expander Module (FEM) (*7) | ○ | — | —
Disk firmware | ○ (*4) | — | —


○: Allowed / ×: Not allowed (cold swap is possible) / —: Not applicable

*1: Mid Plane. This is a board that is located between the front (drive side) and rear (controller (CM) or I/O
module (IOM) side) of the ETERNUS DX.
*2: All of the host interfaces on the CM that undergoes maintenance or expansion go offline. When a multipath configuration is used, switch to the host paths of the CM that is not undergoing maintenance to continue operation.
*3: System volumes must be created if Unified License is installed.
*4: Depending on the changes in the firmware, this function may require I/Os to be temporarily stopped.
*5: Operation can be continued during a failure. The status of the ETERNUS DX can be monitored via ETERNUS
Web GUI or ETERNUS CLI.
*6: Hot expansion can only be performed when it is expanded together with a CM in a single-controller config-
uration.
*7: FEMs are installed in high-density drive enclosures.


■ Hot Swap/Hot Expansion (ETERNUS DX100 S3/DX200 S3)


The table below shows whether hot swap or hot expansion for components of the ETERNUS DX100 S3/DX200 S3
is possible.
Table 56 Hot Swap and Hot Expansion Availability for Components (ETERNUS DX100 S3/DX200 S3)

Component | Hot swap | Hot expansion | Remarks

Controller enclosure (CE) | × | — | Replace the controller enclosure (CE) when the MP (*1) fails.
Controller module (CM) | ○ (*2) (*3) | ○ | —
Cache memory | ○ (*2) | — | —
Memory Extension | ○ (*2) | ○ (*2) (*4) | —
Controller firmware | ○ (*2) (*3) (*5) | — | Make sure to stop I/Os before implementation in a Unified configuration.
Host interface (FC-CA) | ○ (*2) | ○ (*2) | —
Host interface (10G iSCSI-CA) | ○ (*2) | ○ (*2) | —
Host interface (1G iSCSI-CA) | ○ (*2) | ○ (*2) | —
Host interface (FCoE-CA) | ○ (*2) | ○ (*2) | —
Host interface (SAS-CA) | ○ (*2) | ○ (*2) | —
Host interface (10G Ethernet-CA) | ○ (*2) (*3) | ○ (*2) (*3) | —
Host interface (1G Ethernet-CA) | ○ (*2) (*3) | ○ (*2) (*3) | —
Power supply unit (PSU) | ○ | — | —
Disk (HDD) | ○ | ○ | —
SSD | ○ | ○ | —
Operation panel (PANEL) | × (*6) | — | —
Disk firmware | ○ (*5) | — | —
Drive enclosure (DE)/high-density drive enclosure (HD-DE) | ○ | ○ | Replace the drive enclosure (DE)/high-density drive enclosure (HD-DE) when the MP (*1) fails.
Power supply unit (PSU) | ○ | — | —
Disk (HDD) | ○ | ○ | —
SSD | ○ | ○ | —
Operation panel (PANEL) | × (*6) | — | —
I/O module (IOM) | ○ | ○ (*7) | —
Fan Expander Module (FEM) (*8) | ○ | — | —
Disk firmware | ○ (*5) | — | —

○: Allowed / ×: Not allowed (cold swap is possible) / —: Not applicable

*1: Mid Plane. This is a board that is located between the front (drive side) and rear (controller (CM) or I/O
module (IOM) side) of the ETERNUS DX.
*2: All of the host interfaces on the CM that undergoes maintenance or expansion go offline. When a multipath configuration is used, switch to the host paths of the CM that is not undergoing maintenance to continue operation.
*3: System volumes must be created if Unified kit/Unified License is installed.


*4: For an ETERNUS DX that is installed with one of the following.


• Unified kit
• Memory Extension (ETFMCA / ETFMCA-L / ETDMCAU / ETDMCAU-L)
*5: Depending on the changes in the firmware, this function may require I/Os to be temporarily stopped.
*6: Operation can be continued during a failure. The status of the ETERNUS DX can be monitored via ETERNUS
Web GUI or ETERNUS CLI.
*7: Hot expansion can only be performed when it is expanded together with a CM in a single-controller config-
uration.
*8: FEMs are installed in high-density drive enclosures.

User Expansion
Customers can expand (add) the following components.
To add components that are not described in this section, contact your sales representative or maintenance engineer.
• Drives
• Drive enclosures
• Long Wave SFP+ Modules (16Gbit/s)

SSD Sanitization
The SSD sanitization function deletes data on the SSDs by using the sanitization capability built into the SSDs. It can be used to delete user data in cases such as the disposal of the SSDs.
Executing the SSD sanitization function (or the sanitization command) initializes the entire NAND (or the data/alternate areas) of the SSDs. The SSDs can be used again after the SSD sanitization function is executed.

The Maintenance Operation policy is required to sanitize the SSDs. Also, change the status of the ETERNUS DX
to the maintenance status.

A. Function Specification List

This appendix shows the combinations of functions that can be executed at the same time and the targets for
each function.

List of Supported Protocols


Table 57 List of Supported Protocols

Item | LAN for Operation Management | iSCSI (SAN) | Ethernet (NAS)
Operation mode | 1000BASE-T/100BASE-TX/10BASE-T | 10GBASE-SR/10GBASE-CR/10GBASE-T/1000BASE-T | 10GBASE-SR/10GBASE-CR/1000BASE-T
ETERNUS Web GUI: http | ○ | × | ×
ETERNUS Web GUI: https (SSL v3, TLS) | ○ | × | ×
ETERNUS CLI: SSH v2 | ○ | × | ×
ETERNUS CLI: telnet | ○ | × | ×
ETERNUS CLI: ftp (client) | ○ | × | ×
SMI-S: http / https | ○ | × | ×
SMI-S: SLP | ○ | × | ×
NTP (time): NTP v4 | ○ | × | ×
E-mail: SMTP (client) | ○ | × | ×
SNMP: SNMP v1, v2c, v3 | ○ | × | ×
Event notification and audit log sending: Syslog | ○ | × | ×
KMIP (key management): SSL | ○ | × | ×
Ping: ICMP | ○ | ○ | ○
Network address: IPv4, IPv6 | ○ | ○ | ○
Routing: RIP v1, v2, RIPng | × | × | ○
File sharing (common): FTP/FXP | × | × | ○
File sharing (UNIX/Linux): NFSv2, v3, v4.0 | × | × | ○
File sharing (Windows): CIFS (SMB1.0, 2.0, 3.0) | × | × | ○
Authentication: Kerberos v5 | × | × | ○
Authentication: RADIUS | ○ | × | ×
Authentication: CHAP | × | ○ | ×

○: Supported ×: Not supported


Target Pool for Each Function/Volume List


This section describes the functions which can be performed on RAID groups, pools, and volumes.

Target RAID Groups/Pools of Each Function


Action | RAID group | REC Disk Buffer | SDP | TPP | FTRP (*1) | FTSP (*1)
Components | Standard, SDV, WSV | — | SDPV | TPV | FTSP | FTV
Max. number (ETERNUS DX100 S4/DX100 S3) | 72 | Up to two can be specified per REC Buffer | 1 | 72 (*2) | 15 | 72 (*2)
Max. number (ETERNUS DX200 S4/DX200 S3) | 132 | Up to two can be specified per REC Buffer | 1 | 132 (*2) (*3) | 30 | 132 (*2)
Create | ○ | ○ | — (*4) | ○ | ○ | ○
Delete | ○ | ○ | — (*5) | ○ (*6) | ○ (*6) (*7) | — (*7)
Rename | ○ | ○ | × | ○ | ○ | ○
Expand capacity | ○ (by LDE) | × | ○ (by adding an SDPV) | ○ (by adding a RAID group) | △ (by adding a child pool) | ○ (by adding a RAID group)
Migration | ○ | × | × | × | × | ○
Logical Device Expansion (LDE) | ○ | × | ○ | × | × | ×
Format (All area) | × | ○ | × | ○ | ○ | — (*8)
Format (Unformatted area) | × | × | × | ○ | ○ | — (*8)
Modify threshold | × | × | ○ | ○ | ○ | — (*8)
Eco-mode setup | ○ | × | × | ○ | ○ | ×
Switch controlling CM | ○ | ○ | ○ | ○ | — | ○
Allocate REC Buffer | × | ○ | × | × | × | ×
Key management server linkage | ○ | ○ | ○ | ○ | ○ | ○

○: Possible ×: Impossible —: N/A

*1: Only ETERNUS SF Storage Cruiser can perform operations on FTRP and FTSP.
*2: The maximum total number of TPPs and FTSPs.
*3: The Deduplication/Compression function can be enabled for a maximum of four TPPs.
*4: If an SDPV is created, an SDP is automatically created.
*5: If all of the SDPVs are deleted, SDPs are automatically deleted.
*6: When a volume exists in a pool, the pool cannot be deleted.
*7: If an FTRP is deleted, the FTSP, which is included in the FTRP, is also deleted. The FTRP needs to be deleted
to delete the FTSP.
*8: This operation is possible in units of parent pools (FTRPs).


Target Volumes of Each Function


Action | Standard (Single) | Standard (Concatenated) | SDV | SDPV | TPV | FTV (*1) | WSV | ODX Buffer volume (*2)
Create | ○ | ○ (*3) | ○ | ○ | ○ | ○ | ○ | ○
Delete | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○
Rename | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○
Format | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○
Eco-mode (*4) | ○ | ○ | ○ | × | ○ | × | × | ×
TPV capacity expansion | × | × | × | × | ○ | × | × | ○
RAID Migration | ○ | △ (*5) | × | × | ○ (*6) | ○ (*6) | ○ (*7) | ○
Logical Device Expansion (LDE) | ○ | ○ | ○ | ○ | × | × | × | ○
LUN Concatenation | ○ (*8) (*9) | ○ (*8) (*9) | × | × | × | × | × | ○
Balancing | × | × | × | × | ○ | ○ (*10) | × | ○
Tiering | × | × | × | × | × | ○ | × | ○
TPV/FTV capacity optimization | × | × | × | × | ○ | ○ | × | ×
Modify threshold | × | × | × | △ (*11) | ○ | △ (*11) | × | ○
Encrypt volume (*12) | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ○
Decrypt volume (*13) | ○ | ○ | × | × | ○ | ○ | ○ | ○
Advanced Copy function (Local copy) | ○ | ○ | ○ (*14) | × | ○ | ○ | ○ | ○
Advanced Copy function (Remote Copy) | ○ | ○ | ○ (*14) | × | ○ | ○ | ○ | ○
Forbid Advanced Copy | ○ | ○ | × | × | ○ | ○ | ○ | ×
Reserved and forced deletion | × | × | × | ○ | × | × | × | ×
Release reservation | ○ | ○ | × | × | ○ | ○ | ○ | ×
Performance monitoring | ○ | ○ | ○ | × | ○ | ○ | ○ | ○
Modify cache parameters | ○ (*15) | ○ (*15) | ○ | ○ | ○ (*15) | ○ | ○ | ○
Extreme Cache Pool | ○ | ○ | × | × | ○ | ○ | ○ | ×
Create a LUN while rebuilding | ○ | ○ | ○ | ○ | ○ | ○ | ○ | ×
Storage Migration | ○ | ○ | × | × | ○ | ○ | ○ | ×

○: Possible, △: Partially possible, ×: Not possible


*1 : FTV operations can only be executed with ETERNUS SF Storage Cruiser. FTRP balancing can be executed with either ETERNUS Web GUI or ETERNUS CLI.
*2 : An ODX Buffer volume is an exclusive volume that is required to perform ODX. Standard volumes, TPVs, or
FTVs can be used for the ODX Buffer volume.
*3 : A concatenated volume cannot be created at the same time as a volume is created. Existing volumes can be concatenated by using LUN Concatenation.
*4 : Eco-mode is configured for Standard volumes/SDVs in RAID group units that include the target volume,
and TPVs in pool (TPP) units.
*5 : If multiple volumes have been concatenated using the LUN Concatenation function, RAID Migration can
be executed only on concatenation source volumes.
*6 : Capacity expansion and RAID Migration cannot be performed at the same time.
*7 : The volume capacity can be expanded by specifying a relatively large capacity for the destination volume
when RAID Migration is performed.
*8 : The maximum number of concatenated volumes is 16.
*9 : If T10-DIF is enabled, this function cannot be executed.
*10: Specify in units of FTRPs. FTVs that need balancing are automatically selected and balanced for each FTSP.
*11: A threshold can be set to the pool which includes the target volume.
*12: Encryption of Standard volumes and SDVs can be performed when the volumes are created, or after the
volumes have been created.
Encryption of SDPVs can be performed only when the volumes are created. SDPVs cannot be encrypted
after they are created.
Encryption of TPVs and FTVs is performed by creating a volume in the encrypted pool, or migrating a vol-
ume to an encrypted pool.
*13: Decryption of volumes is performed by specifying "Unencrypted" for the migration destination when mi-
grating the volumes.
*14: SDVs are used as the copy destination for SnapOPC/SnapOPC+. SDVs are also used as the copy source if the
copy destination of SnapOPC/SnapOPC+ is also set as the copy source (Cascade Copy/restore).
*15: This is part of the cache parameter settings, and cannot be configured in the units of volumes.


Combinations of Functions That Are Available for Simultaneous Executions
This section describes the availability of simultaneous execution with other functions, the number of processes
that can be executed concurrently, and the capacity that can be processed concurrently for each function.

Combinations of Functions That Are Available for Simultaneous Executions


There are functions which cannot be performed concurrently when another function is being executed in the
ETERNUS DX.
The following table shows which function combinations can be executed at the same time.
Table 58 Combinations of Functions That Can Be Executed Simultaneously (1/2)

Process to be run | Already running process: Rebuild/Copyback/Redundant Copy | Fast Recovery | Format Volume | RAID Migration | Logical Device Expansion (LDE) | LUN Concatenation | Advanced Copy (*1)
Format Volume | ○ | ○ | ○ (*3) | ○ (*4) | ○ (*4) | ○ (*3) | ○ (*4)
RAID Migration | ○ | ○ | ○ (*4) | ○ (*4) | ○ (*4) | ○ (*5) | ○ (*6)
Logical Device Expansion | ○ (*7) | ○ | ○ (*7) | ○ (*7) | × | ○ (*7) | ○
LUN Concatenation | ○ | ○ | ○ | ○ (*4) | ○ (*4) | ○ | ○ (*8)
Advanced Copy (*2) | ○ | ○ | ○ | ○ (*9) | ○ | ○ | ○ (*10)
Volume Conversion Encryption | ○ (*4) | ○ | ○ (*4) | ○ (*4) | ○ (*4) | ○ (*4) | ○
Switch controlling CM | ○ (*7) | ○ (*7) | ○ (*7) | × | × | ○ (*7) | ○ (*11)
TPV capacity expansion | ○ | ○ | ○ | ○ | ○ | ○ | ○ (*8)
TPV Balancing | ○ | ○ | ○ (*4) | ○ (*4) | ○ | ○ | ○ (*6)
TPV/FTV capacity optimization | ○ | ○ | ○ | ○ (*4) | ○ | ○ | ○
FTRP Balancing | ○ | ○ | ○ | ○ | ○ | ○ | ○

○: Possible, ×: Not possible

Table 59 Combinations of Functions That Can Be Executed Simultaneously (2/2)

| Process to be run \ Already running process | Volume Conversion Encryption | Disk Patrol | Eco-mode (motor is stopped) | TPV Balancing | TPV/FTV capacity optimization | Storage Cluster |
|---|---|---|---|---|---|---|
| Format Volume | ○ (*3) | ○ | ○ (*12) | ○ (*4) | ○ | ○ (*4) |
| RAID Migration | ○ (*4) | ○ | ○ (*12) | ○ (*4) | ○ (*4) | ○ |
| Logical Device Expansion | ○ (*7) | ○ | ○ (*12) | ○ | ○ | ○ |
| LUN Concatenation | ○ (*4) | ○ | ○ (*12) | ○ | ○ | ○ (*4) |
| Advanced Copy (*2) | ○ | ○ | ○ (*13) | ○ | ○ | ○ |
| Volume Conversion Encryption | ○ (*4) | ○ | ○ (*12) | ○ | ○ | ○ |
| Switch controlling CM | ○ (*7) | ○ | ○ | ○ | ○ | ○ |
| TPV capacity expansion | ○ | ○ | ○ (*12) | ○ (*4) | ○ | ○ |
| TPV Balancing | ○ | ○ | ○ (*12) | ○ (*4) | ○ (*4) | ○ |
| TPV/FTV capacity optimization | ○ | ○ | ○ (*12) | ○ (*4) | × | ○ |
| FTRP Balancing | ○ | ○ | ○ (*12) | ○ (*4) | ○ (*4) | ○ |

○: Possible, ×: Not possible

*1 : This indicates that the copy session is being set or the copy session is already set.
*2 : This indicates the copy session setting operation.
*3 : Using the "Format Volume" function erases all data in the target volumes.
*4 : The function cannot be executed when the same volume is set as the execution target.
*5 : LUN concatenated volumes can be set as the RAID Migration source.
Note that LUN concatenated volumes cannot be set as the RAID Migration destination.
*6 : RAID Migration that expands the capacity cannot be performed on any volumes with copy sessions in LUN
units.
*7 : The function cannot be executed when the same RAID group is set as the execution target.
*8 : When a copy session whose copy area is specified in LUN units exists, volume capacity expansion cannot be performed because the expanded area cannot be copied. When a copy session is specified for a logical disk, volume capacity expansion can be performed.
*9 : Copy sessions in LUN units cannot be performed on a volume that is being expanded with RAID Migration.
*10: Complies with Advanced Copy specifications (multi-copy/Cascade Copy).
*11: The REC session needs to be suspended or stopped before the operation.
*12: The motor-off state of Eco-mode is canceled, and the drive motors are activated (spun up).
*13: EC/REC can be performed (the motor-off state of Eco-mode is canceled).
OPC/QuickOPC/SnapOPC/SnapOPC+ cannot be performed (the ETERNUS DX returns an error).
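For operational scripting, the unconditional exclusions (the × cells) in Tables 58 and 59 can be captured in a small lookup. The sketch below is illustrative only, not an ETERNUS API or CLI; it encodes just the "Not possible" pairs, while the footnoted restrictions (*3 to *13) are conditional and are deliberately not modeled.

```python
# Illustrative sketch (not an ETERNUS API): the "Not possible" (x) cells of
# Tables 58 and 59, as (process to be run, already running process) pairs.
BLOCKED = {
    ("Logical Device Expansion", "Logical Device Expansion (LDE)"),
    ("Switch controlling CM", "RAID Migration"),
    ("Switch controlling CM", "Logical Device Expansion (LDE)"),
    ("TPV/FTV capacity optimization", "TPV/FTV capacity optimization"),
}

def may_run(process: str, running: str) -> bool:
    """True if `process` may start while `running` is in progress.
    Footnoted, conditional restrictions (for example, same-volume or
    same-RAID-group cases) are NOT checked here."""
    return (process, running) not in BLOCKED

print(may_run("Format Volume", "Disk Patrol"))             # True
print(may_run("Switch controlling CM", "RAID Migration"))  # False
```

A set of tuples keeps the check O(1) per lookup; extending it to the conditional footnotes would require passing the target volume and RAID group as well.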


Number of Processes That Can Be Executed Simultaneously


The following upper limits are applied to the number of processes that are to be executed.
• Only one Logical Device Expansion (LDE) process can be executed at a time; two or more LDE processes cannot be executed simultaneously in the same device.
• The total number of TPV balancing, FTRP balancing, RAID Migration, and Flexible Tier Migration processes that
can be performed at the same time is 32.

Capacity That Can Be Processed Simultaneously


The following upper limit is applied to the capacity of processes that are to be executed.
• The total capacity of TPV balancing, FTRP balancing, RAID Migration, and Flexible Tier Migration processes that can be performed at the same time is 128 TB.
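The two limits above (32 concurrent processes, 128 TB total capacity) can be combined into a single admission check before starting another balancing or migration process. The following sketch is illustrative only; the function and constant names are not part of any ETERNUS interface.

```python
# Illustrative sketch (not an ETERNUS API): admission check for the documented
# limits on concurrent TPV/FTRP balancing, RAID Migration, and
# Flexible Tier Migration processes.
MAX_CONCURRENT_PROCESSES = 32     # total number of such processes
MAX_CONCURRENT_CAPACITY_TB = 128  # total capacity of such processes

def can_start(running_capacities_tb: list[float], new_capacity_tb: float) -> bool:
    """Return True if one more balancing/migration process may be started."""
    if len(running_capacities_tb) >= MAX_CONCURRENT_PROCESSES:
        return False  # process-count limit reached
    total = sum(running_capacities_tb) + new_capacity_tb
    return total <= MAX_CONCURRENT_CAPACITY_TB

# 31 running processes of 4 TB each leave room for one more 4 TB process,
# because 31 * 4 + 4 = 128 TB exactly meets the capacity limit.
print(can_start([4.0] * 31, 4.0))   # True
print(can_start([4.0] * 32, 1.0))   # False: 32 processes already running
```

Both limits must hold at once: a 33rd process is rejected even if capacity remains, and a process that would push the total past 128 TB is rejected even if fewer than 32 are running.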

FUJITSU Storage ETERNUS DX100 S4/DX200 S4, ETERNUS DX100 S3/DX200 S3 Hybrid Storage Systems
Design Guide (Basic)
P3AM-7642-25ENZ0

Date of issuance: April 2019


Issuance responsibility: FUJITSU LIMITED

• The content of this manual is subject to change without notice.


• This manual was prepared with the utmost attention to detail.
However, Fujitsu shall assume no responsibility for any operational problems as the result of errors, omissions, or the
use of information in this manual.
• Fujitsu assumes no liability for damages to third party copyrights or other rights arising from the use of any information
in this manual.
• The content of this manual may not be reproduced or distributed in part or in its entirety without prior permission from
Fujitsu.
