V8.3.0.x Configuration Limits and Restrictions for IBM FlashSystem 9100 family

Preventive Service Planning

Abstract
This document lists the configuration limits and restrictions specific to IBM FlashSystem 9100 family software version 8.3.0.x

Content
The use of WAN optimisation devices such as Riverbed is not supported in native Ethernet IP partnership configurations containing FlashSystem 9100 enclosures.

REST API

Customers using the REST API to list more than 2000 objects may experience a loss of service from the API as it restarts due to memory constraints.

It is not possible to access the REST API using a cluster's IPv6 address.
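
As a practical way to stay under the 2000-object response limit, listings can be retrieved in smaller slices. The following Python sketch is illustrative only: it assumes the documented token flow on port 7443 via /rest/auth and the lsvdisk filtervalue parameter, and the cluster address and credentials shown are placeholders. Verify the exact endpoints and parameters against the Knowledge Center for your code level.

import requests

# Hypothetical cluster address and credentials -- replace with your own.
SYSTEM = "https://cluster.example.com:7443/rest"

def get_token(user, password):
    # Authenticate and obtain a token (standard /rest/auth flow).
    r = requests.post(f"{SYSTEM}/auth",
                      headers={"X-Auth-Username": user, "X-Auth-Password": password},
                      verify=False)  # use a proper CA bundle in production
    r.raise_for_status()
    return r.json()["token"]

def list_volumes_by_pool(token):
    # Query volumes one storage pool at a time so that no single response
    # has to return more than 2000 objects.
    headers = {"X-Auth-Token": token}
    pools = requests.post(f"{SYSTEM}/lsmdiskgrp", headers=headers, verify=False).json()
    volumes = []
    for pool in pools:
        body = {"filtervalue": f"mdisk_grp_name={pool['name']}"}
        r = requests.post(f"{SYSTEM}/lsvdisk", headers=headers, json=body, verify=False)
        r.raise_for_status()
        volumes.extend(r.json())
    return volumes

if __name__ == "__main__":
    token = get_token("restuser", "password")
    print(f"Retrieved {len(list_volumes_by_pool(token))} volumes")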

NVMe over Fibre Channel

Hosts using the NVMe protocol cannot be mapped to HyperSwap or stretched volumes.

Volumes accessed by hosts using the NVMe protocol cannot be configured with multiple access I/O groups due to a limitation of the NVMe
protocol.

DRAID Strip Size

For candidate drives with a capacity greater than 4TB, a strip size of 128 cannot be specified for either RAID-5 or RAID-6 DRAID arrays. For
these drives a strip size of 256 should be used.

Transparent Cloud Tiering

Transparent cloud tiering on the system is subject to configuration limitations and rules. Please see the following link for details:
https://www.ibm.com/support/knowledgecenter/STSLR9_8.3.0/com.ibm.fs9100_830.doc/svc_tctmaxlimitsconfig.html

The following restrictions apply for Transparent Cloud Tiering:

a. OBAC is not supported under TCT in v8.3. You cannot set up ownership groups and then use TCT commands as OBAC users. If you want to
use TCT, you need to use a non-OBAC user to execute the TCT commands either via the GUI or CLI;

b. When a cloud account is created, it must continue to use the same encryption type throughout the life of the data in that cloud account. Even if
the cloud account object is removed and remade on the system, the encryption type for that cloud account may not be changed while backup
data for that system exists in the cloud provider;

c. When performing re-key operations on a system that has an encryption enabled cloud account, perform the commit operation immediately after
the prepare operation. Remember to retain the previous system master key (on USB or in Keyserver) as this key may still be needed to retrieve
your cloud backup data when performing a T4 recovery or an import;

d. The restore_uid option should not be used when a backup is imported to a new cluster;

e. Import of TCT data is only supported from systems whose backup data was created at v7.8.0.1;

f. Transparent cloud tiering uses Sig V2, when connecting to Amazon regions, and does not currently support regions that require Sig V4.

Encryption and TCT

There is an extremely small possibility that, on a system using both Encryption and Transparent Cloud Tiering, the system can enter a state where
an encryption re-key operation is stuck in the 'prepared' or 'prepare_failed' state, and a cloud account is stuck in the 'offline' state.
The user will be unable to cancel or commit the encryption rekey, because the cloud account is offline, and will be unable to remove the
cloud account, because an encryption rekey is in progress.
The system can only be recovered from this state using a T4 Recovery procedure.
It is also possible that SAS-attached storage arrays go offline.
There are two scenarios in which this can happen:

Scenario A

1. Using USB encryption and Cloud;

2. A new USB key is prepared using chencryption -usb newkey -key prepare;
3. The new presumptive key is deleted from all USB sticks before the new key is committed;

4. All nodes in the system are rebooted;


5. The cloud account will now be offline as it cannot get the presumptive key. The cloud account cannot be removed, and the encryption rekey
cannot be completed or cancelled. The system will remain stuck in these cloud and encryption states;

6. Any SAS-attached arrays will be offline and locked;

7. The system can be restored by T4 to a previous config backup.

Scenario B

1. Using key server encryption and Cloud;

2. A new key server key is prepared using chencryption -keyserver newkey -key prepare;

3. The new presumptive key is deleted from the key server before the new key is committed;

4. All nodes in the system are rebooted;

5. The cloud account will now be offline as it cannot get the presumptive key. The cloud account cannot be removed, and the encryption rekey
cannot be completed or cancelled. The system will remain stuck in these cloud and encryption states;

6. SAS-attached arrays are not affected;

7. The system can be restored by T4 to a previous config backup.

NPIV (N_Port ID Virtualization)

The Spectrum Virtualize family of products introduced support for NPIV (N_Port ID Virtualization) for Fibre Channel fabric attachment in version 7.7.
The following recommendations and restrictions should be followed when implementing the NPIV feature.

Operating systems not currently supported for use with NPIV:

• RHEL6 and earlier on IBM Power

• HPUX 11iV2

• Veritas DMP multipathing on Windows with RAID-5 volumes in VxVM

General requirements

Required SDD versions for IBM AIX and Microsoft Windows Environments:

1. IBM AIX Operating Systems require a minimum SDDPCM version of 2.6.8.0

2. Microsoft Windows requires a minimum SDDDSM version 2.4.7.0. The latest recommended level which resolves issues listed below is 2.4.7.1.

Path Optimization

User intervention may be required when changing the NPIV state from "Transitional" to "Disabled". All paths to a LUN with SDDDSM or
SDDPCM may remain "Non-Optimized" when NPIV is changed to "Disabled" from the "Transitional" state.

To resolve this issue please use the following instructions:

IBM AIX
For SDDPCM:
Run "pcmpath chgprefercntl device <device number>/<device number range>" on AIX. This will restore both Optimized and Non-
Optimized paths for all the LUNs correctly.

Windows 2008 and 2012


For SDDDSM:
Run "datapath rescanhw" on Windows. This will restore both Optimized and Non-Optimized paths for all the LUN's correctly.
This issue is resolved with SDDDSM version 2.4.7.1

Windows 2008 and 2012 Non-Preferred Paths with SDDDSM


When NPIV enters the Transitional state from Disabled, with all the SDDDSM paths in the Non-Preferred state, the paths to the virtual ports also
become Non-Preferred. This path configuration might cause IO failures as soon as NPIV moves into the "Enabled" state.
As a workaround, the user should configure at least one preferred path to each LUN while NPIV is in the "Disabled" state.
This issue is resolved with SDDDSM version 2.4.7.1

Solaris
Emulex HBA Settings:
1. When implementing NPIV on Solaris 11, the default disk IO timeout needs to be changed to 120s by adding "set sd:sd_io_time=120" to the
/etc/system file. A system reboot is required for the change to take effect.
2. When ports on the host HBA are connected to a 16Gb SAN, NPIV is not supported.

Other Operating Systems


Other operating systems may also experience the same issue when changing the NPIV state from "Transitional" to "Disabled", in which case
the operating-system-specific rescan command should be used.

Fabric Attachment

NPIV mode on the FlashSystem 9100 family is only supported when used with Brocade or Cisco Fibre Channel SAN switches that are NPIV
capable.

Nodes in an IO group cannot be replaced by nodes with less memory when compressed volumes are present

If a customer must migrate from 64GB to 32GB memory node canisters in an IO group, they will have to remove all compressed volume copies in
that IO group. This restriction applies to 7.7.0.0 and newer software.

A customer must not:


1. Create an IO group with node canisters with 64GB of memory.

2. Create compressed volumes in that IO group.


3. Delete both node canisters from the system with CLI or GUI.

4. Install new node canisters with 32GB of memory and add them to the configuration in the original IO group with CLI or GUI.

HyperSwap

When using the HyperSwap function with software version 7.8.0.0 or later, please configure your host multipath driver to use an ALUA-based
path policy.

Due to the requirement for multiple access IO groups, SAS attached host types are not supported with HyperSwap volumes.

AIX Live Partition Mobility (LPM)

AIX LPM is supported with the HyperSwap function and AIX 7.

Clustered Systems

A FlashSystem 9100 system at version 8.2.0.0 or later requires native Fibre Channel SAN or alternatively 8Gbps/16Gbps Direct Attach Fibre
Channel connectivity for communication between all nodes in the local cluster.

Partnerships between systems for Metro Mirror or Global Mirror replication can be used with both Fibre Channel and Native Ethernet
connectivity. Distances greater than 300 metres are only supported when using an FCIP link or Fibre Channel between source and target.

16Gbps Fibre Channel Canister Connection

Please visit the IBM System Storage Inter-operation Center (SSIC) for the 16Gbps Fibre Channel configurations supported with 16Gbps
node hardware. Note that 16Gbps node hardware is supported when connected to Brocade and Cisco 8Gbps or 16Gbps fabrics only. Direct
connections to 2Gbps or 4Gbps SAN, or direct host attachment to 2Gbps or 4Gbps ports, are not supported. Other configured switches which are not
directly connected to the 16Gbps node hardware can be any supported fabric switch as currently listed in SSIC.

25Gbps Ethernet Canister Connection


Two optional 2-port 25Gbps Ethernet adapters are supported in each node canister after v8.1.1.1 for iSCSI communication with iSCSI-capable
Ethernet ports in hosts via Ethernet switches. However, using two 25Gbps Ethernet adapters per node canister will prevent adding this control
enclosure to an existing system, or adding another control enclosure to a system made from this controller (sometimes known as clustering), until a
future software release adds support for clustering via the 25Gbps Ethernet ports. These 2-port 25Gbps Ethernet adapters do not support FCoE.

There are two types of 25Gbps Ethernet adapter feature supported:

1. RDMA over Converged Ethernet (RoCE)

2. Internet Wide-area RDMA Protocol(iWARP)

Either will work for standard iSCSI communications, i.e. not using Remote Direct Memory Access (RDMA). A future software release will add
RDMA links using new protocols that support RDMA, such as NVMe over Ethernet.

When use of RDMA with a 25Gbps Ethernet adapter becomes possible, RDMA links will only work between RoCE ports or between iWARP
ports, i.e. from a RoCE node canister port to a RoCE port on a host, or from an iWARP node canister port to an iWARP port on a host.

The 25Gbps adapters come with SFP28 fitted, which can be used to connect to switches using OM3 optical cables.

For Ethernet switches and adapters supported in hosts please visit the SSIC .

This is an example of a RoCE adapter for use in a host.


http://www.mellanox.com/related-docs/user_manuals/ConnectX-4_Lx_Single_and_Dual_10_25_Gbs_Ethernet_SFP28_Port_Adapter_Card_User_Manual.pdf

This is an example of an iWARP adapter for use in a host.


https://www.chelsio.com/nic/unified-wire-adapters/t6225-cr/

Customers who wish to connect a 10Gb switch to a 25Gb HBA should be aware that this is only supported via a SCORE request. Please contact
your IBM representative to raise a SCORE request.

IP Partnership

IP partnerships are supported on any of the available Ethernet ports. Using an Ethernet switch to convert a 25Gb to a 1Gb IP partnership, or a
10Gb to a 1Gb IP partnership, is not supported. Therefore the IP infrastructure on both partnership sites must match. Bandwidth limiting on IP
partnerships between both sites is supported.

Fabric Limitations

Only one FCF (Fibre Channel Forwarder) switch per fabric is supported.

Storage connected directly to a Cisco Fabric Extender (FEX) is not supported.

VMware vSphere Virtual Volumes (VVols)

The maximum number of Virtual Machines on a single VMware ESXi host in a FlashSystem 9100 / VVol storage configuration is limited to 680.

The use of VMware vSphere Virtual Volumes (VVols) on a system that is configured for HyperSwap is not currently supported with the
FlashSystem 9100 family.

Host Limitations

Windows 2016 HyperV


RHEL v7.1 guests on Windows 2016 HyperV, with Virtual Fibre Channel, are not supported.

iSER
Operating systems not currently supported for use with iSER

• VMware ESXi 6.7 using Mellanox ConnectX-4 Lx EN

• Windows 2012 R2 using Mellanox ConnectX-4 Lx EN

• Windows 2016 using Mellanox ConnectX-4 Lx EN

FCoE
FCoE is not supported.

Microsoft Offload Data Transfer ( ODX ) and SDDDSM Requirements


From version 7.5.0 the Spectrum Virtualize family of products introduced support for Microsoft ODX. In order to utilise this function, all Windows
hosts accessing a FlashSystem 9100 are required to be at a minimum SDDDSM version of 2.4.5.0. Earlier versions of SDDDSM are not supported
when the ODX function is activated.

Windows NTP server


The Linux NTP client used by the FlashSystem 9100 family may not always function correctly with Windows W32Time NTP Server.

Oracle

Oracle Version and OS Restrictions that apply:

Oracle Release 11.2, any platform - Restriction 1
Oracle Release 12.1, any platform

Restriction 1:

Oracle ASM disk groups may dismount with the following error

"Waited 15 secs for write IO to PST"

Recommendation

Increase the asm_hbeatiowait value to 120 seconds to prevent this issue from occurring.

Applies to Oracle Database - Enterprise Edition - Version 11.2.0.3 to 12.1.0.1 [Release 11.2 to 12.1] on any platform

Priority Flow Control for iSCSI/iSER

Priority Flow Control for iSCSI/iSER is supported on Emulex and Chelsio adapters (SVC supported) with all DCBX-enabled switches.

Maximum Configurations

Configuration limits for FlashSystem 9100 family:

Property Hardware Type Maximum Number Comments
System (Cluster) Properties
Control enclosures per system 4 Each control enclosure contains two node canisters
(cluster)
Nodes per system 8 Arranged as four I/O groups
Nodes per fabric 64 Maximum number of FS9100 family system nodes that can be present on
the same Fibre Channel fabric, with visibility of each other
Fabrics per system 8 The number of counterpart SANs which are supported
Inter-cluster partnerships per system 3 A system may be partnered with up to three remote systems. No more
than four systems may be in the same connected set
IP Quorum devices per system 5
Data encryption keys per system 1024
Node Properties
Logins per node Fibre Channel 512 Includes logins from server HBAs, disk controller ports, node ports
WWPN within the same system and node ports from remote systems
Fibre Channel buffer credits per port 255 The number of credits granted by the switch to the node
- 8Gbps FC Adapter
Fibre Channel buffer credits per port 4095 The number of credits granted by the switch to the node
- 16Gbps FC Adapter
iSCSI sessions per node 1024 2048 in IP failover mode (when partner node is unavailable).
This limit includes both iSCSI Host Attach AND iSCSI Initiator sessions
iSER sessions per node 256
Managed Disk Properties
Managed disks (MDisks) per system 4096 The maximum number of logical units which can be managed by a
system, including internal arrays.

Internal distributed arrays consume 16 logical units.

This number also includes external MDisks which have not been
configured into storage pools (managed disk groups)
Managed disks per storage pool 128
(managed disk group)
Storage pools per system 1024
Parent pools per system 128
Child pools per system 1023 Not supported in a Data Reduction Pool
Managed disk extent size 8192 MB
Capacity for an individual internal - No limit is imposed beyond the maximum number of drives per array
managed disk (array) limits.
Maximum size is dependent on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk and System capacity for
each extent size.
Capacity for an individual external 1 PB Note: External managed disks larger than 2 TB are only supported for
managed disk certain types of storage systems. Refer to the supported hardware matrix
for further details.
Maximum size is dependent on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk and System capacity for
each extent size.
Total storage capacity manageable 32 PB Maximum requires an extent size of 8192 MB to be used
per system
This limit represents the per system maximum of 2^22 extents.

Comparison Table: Maximum Volume, MDisk and System capacity for


each extent size.
Data Reduction Pool Properties
Data Reduction Pools per system 4
Mdisks per Data Reduction Pool 128
Volumes per Data Reduction Pool 10000 - (Number of Data Reduction Pools x 12)
Extents per I/O group per Data 128000
Reduction Pool
Volume (Virtual Disk) Properties
Basic Volumes (VDisks) per system 10000 Each Basic Volume uses 1 VDisk, each with one copy.

HyperSwap volumes per system 1250 Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship
and 4 FlashCopy mappings.
Volumes per I/O group 10000
(volumes per caching I/O group)
Volumes accessible per I/O group 10000
Thin-provisioned (space-efficient) 8192
volume copies in regular pools per
system
Compressed volume copies in 2048 Maximum requires a system containing four control enclosures; refer to
regular pools per system the Compressed volume copies in regular pools per I/O group limit below
Compressed volume copies in 512 With 32GB Cache upgrade and 2nd Compression Accelerator card
regular pools per I/O group installed.
Compressed volume copies in data - No limit is imposed here beyond the volume copy limit per data reduction
reduction pools per system pool
Compressed volume copies in data - No limit is imposed here beyond the volume copy limit per data reduction
reduction pools per I/O group pool
Deduplicated volume copies in data - No limit is imposed here beyond the volume copy limit per data reduction
reduction pools per system pool
Deduplicated volume copies in data - No limit is imposed here beyond the volume copy limit per data reduction
reduction pools per I/O group pool
Volumes per storage pool - No limit is imposed beyond the volumes per system limit
Fully-allocated volume capacity 256 TB Maximum size for an individual fully-allocated volume.

Maximum size is dependent on the extent size of the Storage Pool.


Comparison Table: Maximum Volume, MDisk and System capacity for
each extent size.
Thin-provisioned (space-efficient) 256 TB Maximum size for an individual thin-provisioned volume.
per-volume capacity for volumes
copies in regular and data reduction Maximum size is dependent on the extent size of the Storage Pool.
pools Comparison Table: Maximum Volume, MDisk and System capacity for
each extent size.
Compressed volume capacity in Pools containing 16 TB Maximum size for an individual compressed volume.
regular pools non-Flash
storage See this Flash for further information on this limit.

Pools containing 32 TB Maximum size is dependent on the extent size of the Storage Pool.
all-Flash storage Comparison Table: Maximum Volume, MDisk and System capacity for
each extent size.
Host mappings per system 64000 See also - volume mappings per host object below
Mirrored Volume (Virtual Disk) Properties
Copies per volume 2
Volume copies per system 10000
Total mirrored volume capacity per 1024 TB
I/O group
Generic Host Properties
Host objects (IDs) per system 2048 A host object may contain both Fibre Channel ports and iSCSI names
Host objects (IDs) per I/O group 512 Refer to the additional Fibre Channel and iSCSI host limits below
Volume mappings per host object 2048 Although IBM FlashSystem 9100 allows the mapping of up to 2048
volumes per host object, not all hosts are capable of accessing/managing
this number of volumes. The practical mapping limit is restricted by the
host OS, not IBM FlashSystem 9100.
Note: this limit does not apply to hosts of type adminlun (used to support
VMware vvols).
Total Fibre Channel ports and iSCSI 8192
names per system
Total Fibre Channel ports and iSCSI 2048
names per I/O group
Total Fibre Channel ports and iSCSI names per host object 32
iSCSI names per host object (ID) 8
Host Cluster Properties
Host clusters per system 512
Hosts in a host cluster 128
Fibre Channel Host Properties
Fibre Channel hosts per system 2048
Fibre Channel host ports per system 8192
Fibre Channel hosts per I/O group 512
Fibre Channel host ports per I/O 2048
group
Fibre Channel host ports per host 32
object (ID)
Simultaneous I/Os per node FC port 8Gbps FC 2048
Adapter
16Gbps FC 4096
Adapter
Direct-attach NPortIDs per port 16 1 Primary + 15 NPIV. FCP-SCSI only.
iSCSI Host Properties
iSCSI hosts per system 2048
iSCSI hosts per I/O group 512
iSCSI names per host object (ID) 8
iSCSI names per I/O group 512
iSCSI (SCSI 3) registrations per 512
VDisk

iSCSI Hardware Properties
10Gbps Ethernet ports per system 4 Onboard ports
iSER Host Properties
iSER hosts per system 2048
iSER hosts per I/O group 512
iSER names per host object (ID) 8
iSER (SCSI 3) registrations per 1024
VDisk
iSER Hardware Properties
25Gbps iWARP adapters per canister 3
25Gbps RoCE adapters per canister 3
25Gbps iWARP ports per canister 6
25Gbps RoCE ports per canister 6
NVMe over Fibre Channel Host Properties
FC-NVMe hosts per system 32 Up to 32 FC-NVMe hosts are supported per system.
This limit is not policed by the Spectrum Virtualize software. Any
configurations that exceed this limit may experience significant adverse
performance impact.
FC-NVMe hosts per I/O group 16 This limit is not policed by the Spectrum Virtualize software. Any
configurations that exceed this limit may experience significant adverse
performance impact.
NVMe Qualified Names (NQNs) 2
per host object (ID)
Copy Services Properties
Remote Copy (Metro Mirror and 10000 This can be any mix of Metro Mirror and Global Mirror relationships.
Global
Mirror) relationships per system
Active-Active Relationships (HyperSwap) per system 1250
Remote Copy relationships per consistency group (<=256 GMCV relationships configured) - No limit is imposed beyond the Remote Copy
relationships per system limit
Remote Copy relationships per consistency group (>256 GMCV relationships configured) 200
Remote Copy consistency groups per system 256
Total Metro Mirror and Global 1024 TB This limit is the total capacity for all master and auxiliary volumes in the
Mirror volume capacity per I/O I/O group.
group
Total number of Global Mirror with Change Volumes relationships per system:
256 with 60s cycle time (Change volumes used for active-active relationships do not count towards this limit).
2500 with 300s cycle time (Change volumes used for active-active relationships do not count towards this limit).
FlashCopy mappings per system 5000
FlashCopy targets per source 256
FlashCopy mappings 512
per consistency group
FlashCopy consistency 500
groups per system
Total FlashCopy volume capacity 4096 TB
per I/O group
IP Partnership Properties
Inter-cluster IP partnerships per system 1 A system may be partnered with up to three remote systems. A maximum of one of those can be IP
and the other two FC.
I/O groups per system 2 The nodes from a maximum of two I/O groups per system can be used for IP partnership.
Inter site links per IP partnership 2 A maximum of two inter site links can be used between two IP
partnership sites.
Ports per node 1 A maximum of one port per node can be used for IP partnership.
Internal Storage Properties
SAS chains per control enclosure 2
Expansion enclosures per SAS chain 10
Expansion enclosures per control 20
enclosure
Drives per I/O group 760
Drives per system 3040
Min-Max drives per enclosure 0-12 Limit depends on the enclosure model
or
0-24
Non-Distributed RAID Array Properties
Arrays per system 128
Encrypted arrays per system 128
Drives per array 16
Min-Max member drives per RAID- 1-8
0 array
Min-Max member drives per RAID- 2-2
1 array
Min-Max member drives per RAID- 3-16
5 array

Min-Max member drives per RAID- 5-16
6 array
Min-Max member drives per RAID- 2-16
10 array
Hot spare drives - No limit is imposed
Distributed RAID Array Properties
Arrays per system 32 The presence of non-DRAID arrays will reduce this limit
Encrypted arrays per system 32 The presence of non-DRAID arrays will reduce this limit
Arrays per I/O group 10 The presence of non-DRAID arrays will reduce this limit
Drives per array 128
Min-Max member drives per RAID- 4-128
5 array
Min-Max member drives per RAID- 6-128
6 array
Rebuild areas per non-FCM array 1-4
Rebuild areas per FCM array 1
Min-Max stripe width for RAID-5 3-16
array
Min-Max stripe width for RAID-6 5-16
array
Max drive capacity for RAID-5 6 TB
array
External Storage System Properties
Storage system WWNNs per system 1024
(cluster)
Storage system WWPNs per system 1024
(cluster)
WWNNs per storage system 16
WWPNs per WWNN 16
LUNs (managed disks) per storage - No limit is imposed beyond the managed disks per system limit
system
System and User Management Properties
User accounts per system 400 Includes the default user accounts
User groups per system 256 Includes the default user groups
Authentication servers per system 1
NTP servers per system 1
iSNS servers per system 1
Concurrent open SSH sessions per 32
system
Event Notification Properties
SNMP servers per system 6
Syslog servers per system 6
Email (SMTP) servers per system 6 Email servers are used in turn until the email is successfully sent
Email users (recipients) per system 12
LDAP servers per system 6
REST API Properties
Threads per session 64
HTTP header size 16 KB
Objects per response 2000
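
The "Volumes per Data Reduction Pool" limit above is given as a formula rather than a fixed number. A small worked example in Python, purely illustrative, with the values taken directly from the table:

# Volumes per Data Reduction Pool = 10000 - (number of Data Reduction Pools x 12)
def max_volumes_per_drp(num_drps: int) -> int:
    return 10000 - (num_drps * 12)

for drps in range(1, 5):  # the system supports at most 4 Data Reduction Pools
    print(drps, max_volumes_per_drp(drps))
# 1 -> 9988, 2 -> 9976, 3 -> 9964, 4 -> 9952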
Extents

The following table compares the maximum volume, MDisk and system capacity for each extent size.

Extent size (MB) | Maximum non thin-provisioned volume capacity in GB | Maximum thin-provisioned volume capacity in GB (for regular pools) | Maximum compressed volume size (for regular pools) ** | Maximum thin-provisioned and compressed volume size in data reduction pools in GB | Maximum total thin-provisioned and compressed capacity for all volumes in a single data reduction pool per I/O group in GB | Maximum MDisk capacity in GB | Maximum DRAID MDisk capacity in TB | Total storage capacity manageable per system*
16 | 2048 (2 TB) | 2000 | 2 TB | 2048 (2 TB) | 2048 (2 TB) | 2048 (2 TB) | 32 | 64 TB
32 | 4096 (4 TB) | 4000 | 4 TB | 4096 (4 TB) | 4096 (4 TB) | 4096 (4 TB) | 64 | 128 TB
64 | 8192 (8 TB) | 8000 | 8 TB | 8192 (8 TB) | 8192 (8 TB) | 8192 (8 TB) | 128 | 256 TB
128 | 16,384 (16 TB) | 16,000 | 16 TB | 16,384 (16 TB) | 16,384 (16 TB) | 16,384 (16 TB) | 256 | 512 TB
256 | 32,768 (32 TB) | 32,000 | 32 TB | 32,768 (32 TB) | 32,768 (32 TB) | 32,768 (32 TB) | 512 | 1 PB
512 | 65,536 (64 TB) | 65,000 | 64 TB | 65,536 (64 TB) | 65,536 (64 TB) | 65,536 (64 TB) | 1024 (1 PB) | 2 PB
1024 | 131,072 (128 TB) | 130,000 | 96 TB ** | 131,072 (128 TB) | 131,072 (128 TB) | 131,072 (128 TB) | 2048 (2 PB) | 4 PB
2048 | 262,144 (256 TB) | 260,000 | 96 TB ** | 262,144 (256 TB) | 262,144 (256 TB) | 262,144 (256 TB) | 4096 (4 PB) | 8 PB
4096 | 262,144 (256 TB) | 262,144 | 96 TB ** | 262,144 (256 TB) | 524,288 (512 TB) | 524,288 (512 TB) | 8192 (8 PB) | 16 PB
8192 | 262,144 (256 TB) | 262,144 | 96 TB ** | 262,144 (256 TB) | 1,048,576 (1024 TB) | 1,048,576 (1024 TB) | 16384 (16 PB) | 32 PB

* The total capacity values assume that all of the storage pools in the system use the same extent size.
** Please see the following Flash
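
The per-system totals in the table follow directly from the fixed ceiling of 2^22 extents per system noted in the configuration limits above. A short illustrative Python calculation (the results match the table):

# Total manageable capacity scales linearly with extent size:
# 2^22 extents x extent size (MB), converted here to TB.
MAX_EXTENTS_PER_SYSTEM = 2 ** 22

for extent_mb in (16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192):
    total_tb = MAX_EXTENTS_PER_SYSTEM * extent_mb // (1024 * 1024)  # MB -> TB
    print(f"{extent_mb:>4} MB extents -> {total_tb:,} TB per system")
# 8192 MB extents give 32,768 TB (32 PB), the documented maximum.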

Document Information

More support for:
IBM FlashSystem 9100 family

Software version:
8.3

Operating system(s):
Platform Independent

Document number:
885883

Modified date:
12 December 2019
