V8.3.0.x Configuration Limits and Restrictions for IBM FlashSystem 9100 family
Abstract
This document lists the configuration limits and restrictions specific to IBM FlashSystem 9100 family software version 8.3.0.x
Content
The use of WAN optimisation devices such as Riverbed is not supported in native Ethernet IP partnership configurations containing FlashSystem 9100 enclosures.
Customers using the REST API to list more than 2000 objects may experience a loss of service from the API as it restarts due to memory constraints.
It is not possible to access the REST API using a cluster's IPv6 address.
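For reference, the following is a minimal sketch of a REST API session that works within both restrictions above: it authenticates against the cluster's IPv4 management address and filters the object listing so that fewer than 2000 objects are returned. The address, credentials and filter value are illustrative assumptions, not values from this document.
    # Authenticate using the cluster's IPv4 management address (IPv6 is not supported)
    curl -k -X POST https://192.0.2.10:7443/rest/auth \
         -H 'X-Auth-Username: superuser' -H 'X-Auth-Password: passw0rd'
    # The response contains a token; pass it on subsequent requests.
    # Filter the listing so a bounded subset is returned rather than >2000 objects.
    curl -k -X POST https://192.0.2.10:7443/rest/lsvdisk \
         -H 'X-Auth-Token: <token from the auth response>' \
         -d '{"filtervalue":"IO_group_name=io_grp0"}'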
Hosts using the NVMe protocol cannot be mapped to HyperSwap or stretched volumes.
Volumes accessed by hosts using the NVMe protocol cannot be configured with multiple access I/O groups due to a limitation of the NVMe
protocol.
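As an illustration of these NVMe restrictions, the sketch below creates an NVMe host object and maps it to a basic (non-HyperSwap, non-stretched) volume owned by a single I/O group. The host name, NQN and volume name are illustrative assumptions, and the mkhost parameters should be checked against the CLI documentation for your level.
    # Create an NVMe host object (hypothetical NQN)
    mkhost -name nvmehost01 -nqn nqn.2014-08.com.example:nvme:host01 -protocol nvme
    # Map a basic volume owned by a single I/O group; HyperSwap/stretched volumes
    # and additional access I/O groups are not supported for NVMe hosts.
    mkvdiskhostmap -host nvmehost01 vdisk01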
For candidate drives with a capacity greater than 4TB, a strip size of 128 cannot be specified for either RAID-5 or RAID-6 DRAID arrays. For these drives a strip size of 256 should be used.
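As a hedged example, a DRAID-6 array built from drives larger than 4TB might be created with a strip size of 256 along the following lines. The drive class, drive count, stripe width and pool name are illustrative, and the -strip parameter shown is an assumption that should be verified against the mkdistributedarray documentation for your level.
    # Create a DRAID-6 array from >4TB drives using a strip size of 256 (128 is not allowed)
    mkdistributedarray -level raid6 -driveclass 0 -drivecount 12 -stripewidth 10 -strip 256 Pool0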
Transparent cloud tiering on the system is subject to the following configuration limits and rules. Please see the link below for details:
https://www.ibm.com/support/knowledgecenter/STSLR9_8.3.0/com.ibm.fs9100_830.doc/svc_tctmaxlimitsconfig.html
a. OBAC is not supported under TCT in v8.3. You cannot set up ownership groups and then use TCT commands as OBAC users. If you want to use TCT, you must use a non-OBAC user to execute the TCT commands, either via the GUI or CLI;
b. When a cloud account is created, it must continue to use the same encryption type throughout the life of the data in that cloud account. Even if the cloud account object is removed and remade on the system, the encryption type for that cloud account may not be changed while backup data for that system exists in the cloud provider;
c. When performing re-key operations on a system that has an encryption-enabled cloud account, perform the commit operation immediately after the prepare operation (see the command sketch after this list). Remember to retain the previous system master key (on USB or in the key server) as this key may still be needed to retrieve your cloud backup data when performing a T4 recovery or an import;
d. The restore_uid option should not be used when a backup is imported to a new cluster;
e. Import of TCT data is only supported from systems whose backup data was created at v7.8.0.1;
f. Transparent cloud tiering uses Sig V2 when connecting to Amazon regions, and does not currently support regions that require Sig V4.
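In line with point (c), the following is a minimal sketch of a USB re-key in which the commit immediately follows the prepare; the key-server variant uses -keyserver in place of -usb, as shown in Scenario B below.
    # Prepare a new USB encryption key, then commit it straight away
    chencryption -usb newkey -key prepare
    chencryption -usb newkey -key commit
    # Keep the previous master key (USB or key server copy); it may still be needed
    # for a T4 recovery or an import of cloud backup data.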
There is an extremely small possibility that, on a system using both Encryption and Transparent Cloud Tiering, the system can enter a state where
an encryption re-key operation is stuck in 'prepared' or 'prepare_failed' state, and a cloud account is stuck in 'offline' state.
The user will be unable to cancel or commit the encryption rekey, because the cloud account is offline. The user will be unable to remove the
cloud account because an encryption rekey is in progress.
The system can only be recovered from this state using a T4 Recovery procedure.
It is also possible that SAS-attached storage arrays go offline.
There are two possible scenarios where this can happen:
Scenario A
2. A new USB key is prepared using chencryption -usb newkey -key prepare;
3. The new presumptive key is deleted from all USB sticks before the new key is committed;
Scenario B
2. A new key server key is prepared using chencryption -keyserver newkey -key prepare;
3. The new presumptive key is deleted from the key server before the new key is committed;
5. The cloud account will now be offline as it cannot get the presumptive key. The cloud account cannot be removed, and the encryption rekey cannot be completed or cancelled. The system will remain stuck in these cloud and encryption states;
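A hedged way to confirm this condition is to inspect the encryption and cloud account states from the CLI; the views below are standard Spectrum Virtualize commands, though the exact output fields vary by release.
    # Check whether an encryption rekey is stuck in 'prepared' or 'prepare_failed'
    lsencryption
    # Check whether the cloud account is stuck in the 'offline' state
    lscloudaccount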
Spectrum Virtualize family of products version 7.7 introduced support for NPIV (N_Port ID Virtualization) for Fibre Channel fabric attachment.
The following recommendations and restrictions should be followed when implementing the NPIV feature.
• HPUX 11iV2
General requirements
Required SDD versions for IBM AIX and Microsoft Windows Environments:
2. Microsoft Windows requires a minimum SDDDSM version of 2.4.7.0. The latest recommended level, which resolves the issues listed below, is 2.4.7.1.
Path Optimization
User intervention may be required when changing NPIV states from "Transitional" to "Disabled". All paths to a LUN with SDDDSM or SDDPCM may remain "Non-Optimized" when NPIV is changed from the "Transitional" state to "Disabled".
IBM AIX
For SDDPCM:
Run "pcmpath chgprefercntl device <device number>/<device number range>" on AIX. This will restore both Optimized and Non-
Optimized paths for all the LUNs correctly.
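For example, with a hypothetical hdisk device number of 4, the command might be invoked as follows; substitute your own device number or range as per the syntax above.
    # Restore Optimized and Non-Optimized path preferences for device 4 (illustrative)
    pcmpath chgprefercntl device 4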
Solaris
Emulex HBA Settings:
1. When implementing NPIV on Solaris 11, the default disk I/O timeout needs to be changed to 120s by adding "set sd:sd_io_time=120" to the /etc/system file (see the sketch after this list). A system reboot is required for the change to be implemented.
2. NPIV is not supported when ports on the host HBA are connected to a 16Gb SAN.
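A minimal sketch of applying the timeout change from item 1 on a Solaris 11 host follows; it assumes root access, and the reboot can be scheduled as appropriate.
    # Raise the default disk I/O timeout to 120 seconds for NPIV attachment
    echo "set sd:sd_io_time=120" >> /etc/system
    # The setting only takes effect after a reboot
    init 6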
Fabric Attachment
NPIV mode on the FlashSystem 9100 family is only supported when used with Brocade or Cisco fibre channel SAN switches which are NPIV
capable.
Nodes in an IO group cannot be replaced by nodes with less memory when compressed volumes are present
If a customer must migrate from 64GB to 32GB memory node canisters in an IO group, they will have to remove all compressed volume copies in that IO group (one possible sequence is sketched after this procedure). This restriction applies to 7.7.0.0 and newer software.
4. Install new node canisters with 32GB of memory and add them to the configuration in the original IO group with CLI or GUI.
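One hedged way to find and remove the compressed volume copies before the memory downgrade is sketched below; the volume name and copy ID are illustrative, and each copy should be confirmed as safe to remove before deletion.
    # List volume copies and identify those reported as compressed
    lsvdiskcopy
    # Remove a compressed copy (copy id 1 of volume vdisk7 is illustrative)
    rmvdiskcopy -copy 1 vdisk7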
HyperSwap
When using the HyperSwap function with software version 7.8.0.0 or later, please configure your host multipath driver to use an ALUA-based
path policy.
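As an illustration for Linux hosts using device-mapper multipath, an ALUA-based path policy might be configured along the following lines. This stanza is a sketch that assumes the volumes present with vendor IBM and product 2145; reconcile it with the multipath settings recommended for your host OS and level.
    # Add to /etc/multipath.conf, then run: multipathd reconfigure
    devices {
        device {
            vendor "IBM"
            product "2145"
            path_grouping_policy "group_by_prio"
            prio "alua"
            failback "immediate"
        }
    }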
Due to the requirement for multiple access IO groups, SAS attached host types are not supported with HyperSwap volumes.
Clustered Systems
A FlashSystem 9100 system at version 8.2.0.0 or later requires native Fibre Channel SAN or alternatively 8Gbps/16Gbps Direct Attach Fibre
Channel connectivity for communication between all nodes in the local cluster.
Partnerships between systems for Metro Mirror or Global Mirror replication can be used with both Fibre Channel and Native Ethernet
connectivity. Distances greater than 300 metres are only supported when using an FCIP link or Fibre Channel between source and target.
Please visit the IBM System Storage Inter-operation Center (SSIC) for supported 16Gbps Fibre Channel configurations supported with 16Gbps
Either will work for standard iSCSI communications, i.e. not using Remote Direct Memory Access (RDMA). A future software release will add support for RDMA links using new protocols that support RDMA, such as NVMe over Ethernet.
The 25Gbps adapters come with SFP28 fitted, which can be used to connect to switches using OM3 optical cables.
For Ethernet switches and adapters supported in hosts, please visit the SSIC.
IP Partnership
IP partnerships are supported on any of the available Ethernet ports. Using an Ethernet switch to convert a 25Gb to a 1Gb IP partnership, or a 10Gb to a 1Gb IP partnership, is not supported. Therefore the IP infrastructure on both partnership sites must match. Bandwidth limiting on IP partnerships between both sites is supported.
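A hedged sketch of creating an IP partnership with a bandwidth limit between two matched sites follows; the remote cluster address and bandwidth figure are illustrative assumptions.
    # Create an IPv4 IP partnership and cap the link bandwidth used for replication
    mkippartnership -type ipv4 -clusterip 192.0.2.50 -linkbandwidthmbits 1000 -backgroundcopyrate 50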
Fabric Limitations
Only one FCF (Fibre Channel Forwarder) switch per fabric is supported.
The maximum number of Virtual Machines on a single VMware ESXi host in a FlashSystem 9100 / VVol storage configuration is limited to 680.
The use of VMware vSphere Virtual Volumes (VVols) on a system that is configured for HyperSwap is not currently supported with the
FlashSystem 9100 family.
Host Limitations
iSER
Operating systems not currently supported for use with iSER
FCoE
FCoE is not supported.
Oracle
Recommendation
Applies to Oracle Database - Enterprise Edition - Version 11.2.0.3 to 12.1.0.1 [Release 11.2 to 12.1] on any platform
Priority Flow Control for iSCSI/iSER is supported on Emulex and Chelsio adapters (SVC supported) with all DCBX-enabled switches.
Maximum Configurations
Managed disks (MDisks) per system: This number also includes external MDisks which have not been configured into storage pools (managed disk groups).
Managed disks per storage pool (managed disk group): 128
Storage pools per system: 1024
Parent pools per system: 128
Child pools per system: 1023. Not supported in a Data Reduction Pool.
Managed disk extent size: 8192 MB
Capacity for an individual internal managed disk (array): No limit is imposed beyond the maximum number of drives per array limits. Maximum size is dependent on the extent size of the Storage Pool. See the comparison table of maximum volume, MDisk and system capacity for each extent size.
Capacity for an individual external managed disk: 1 PB. Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details. Maximum size is dependent on the extent size of the Storage Pool. See the comparison table of maximum volume, MDisk and system capacity for each extent size.
Total storage capacity manageable per system: 32 PB. Maximum requires an extent size of 8192 MB to be used. This limit represents the per-system maximum of 2^22 extents.
Pools containing all-Flash storage: 32 TB. Maximum size is dependent on the extent size of the Storage Pool. See the comparison table of maximum volume, MDisk and system capacity for each extent size.
Host mappings per system: 64,000. See also volume mappings per host object below.
Mirrored Volume (Virtual Disk) Properties
Copies per volume: 2
Volume copies per system: 10,000
Total mirrored volume capacity per I/O group: 1024 TB
Generic Host Properties
Host objects (IDs) per system: 2048. A host object may contain both Fibre Channel ports and iSCSI names.
Host objects (IDs) per I/O group: 512. Refer to the additional Fibre Channel and iSCSI host limits below.
Volume mappings per host object: 2048. Although IBM FlashSystem 9100 allows the mapping of up to 2048 volumes per host object, not all hosts are capable of accessing/managing this number of volumes. The practical mapping limit is restricted by the host OS, not IBM FlashSystem 9100. Note: this limit does not apply to hosts of type adminlun (used to support VMware vvols).
Total Fibre Channel ports and iSCSI names per system: 8192
Total Fibre Channel ports and iSCSI names per I/O group: 2048
Total Fibre Channel ports and iSCSI names per host object: 32
iSCSI names per host object (ID): 8
Host Cluster Properties
Host clusters per system: 512
Hosts in a host cluster: 128
Fibre Channel Host Properties
Fibre Channel hosts per system: 2048
Fibre Channel host ports per system: 8192
Fibre Channel hosts per I/O group: 512
Fibre Channel host ports per I/O group: 2048
Fibre Channel host ports per host object (ID): 32
Simultaneous I/Os per node FC port: 2048 (8Gbps FC adapter), 4096 (16Gbps FC adapter)
Direct-attach NPort IDs per port: 16 (1 primary + 15 NPIV). FCP-SCSI only.
iSCSI Host Properties
iSCSI hosts per system: 2048
iSCSI hosts per I/O group: 512
iSCSI names per host object (ID): 8
iSCSI names per I/O group: 512
iSCSI (SCSI 3) registrations per VDisk: 512
The following table compares the maximum volume, MDisk and system capacity for each extent size.
Columns, left to right:
1. Extent size (MB)
2. Maximum non thin-provisioned volume capacity in GB
3. Maximum thin-provisioned volume capacity in GB (for regular pools)
4. Maximum compressed volume size (for regular pools) **
5. Maximum thin-provisioned and compressed volume size in data reduction pools, in GB
6. Maximum total thin-provisioned and compressed capacity for all volumes in a single data reduction pool per I/O group, in GB
7. Maximum MDisk capacity in GB
8. Maximum DRAID MDisk capacity in TB
9. Total storage capacity manageable per system *

16 | 2048 (2 TB) | 2000 | 2 TB | 2048 (2 TB) | 2048 (2 TB) | 2048 (2 TB) | 32 | 64 TB
32 | 4096 (4 TB) | 4000 | 4 TB | 4096 (4 TB) | 4096 (4 TB) | 4096 (4 TB) | 64 | 128 TB
64 | 8192 (8 TB) | 8000 | 8 TB | 8192 (8 TB) | 8192 (8 TB) | 8192 (8 TB) | 128 | 256 TB
128 | 16,384 (16 TB) | 16,000 | 16 TB | 16,384 (16 TB) | 16,384 (16 TB) | 16,384 (16 TB) | 256 | 512 TB
256 | 32,768 (32 TB) | 32,000 | 32 TB | 32,768 (32 TB) | 32,768 (32 TB) | 32,768 (32 TB) | 512 | 1 PB
512 | 65,536 (64 TB) | 65,000 | 64 TB | 65,536 (64 TB) | 65,536 (64 TB) | 65,536 (64 TB) | 1024 (1 PB) | 2 PB
1024 | 131,072 (128 TB) | 130,000 | 96 TB ** | 131,072 (128 TB) | 131,072 (128 TB) | 131,072 (128 TB) | 2048 (2 PB) | 4 PB
2048 | 262,144 (256 TB) | 260,000 | 96 TB ** | 262,144 (256 TB) | 262,144 (256 TB) | 262,144 (256 TB) | 4096 (4 PB) | 8 PB
4096 | 262,144 (256 TB) | 262,144 | 96 TB ** | 262,144 (256 TB) | 524,288 (512 TB) | 524,288 (512 TB) | 8192 (8 PB) | 16 PB
8192 | 262,144 (256 TB) | 262,144 | 96 TB ** | 262,144 (256 TB) | 1,048,576 (1024 TB) | 1,048,576 (1024 TB) | 16,384 (16 PB) | 32 PB
* The total capacity values assume that all of the storage pools in the system use the same extent size.
** Please see the following Flash
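For reference, the figures in the last column follow from the per-system maximum of 2^22 extents noted earlier; for example, the 32 PB maximum with an 8192 MB extent size can be checked with a quick calculation.
    # 2^22 extents x 8192 MB per extent = 32 PB of manageable capacity
    echo $(( 4194304 * 8192 / 1024 / 1024 / 1024 ))   # prints 32 (PB)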
Document Information
Software version: 8.3
Operating system(s): Platform Independent
Document number: 885883
Modified date: 12 December 2019