
HPE Alletra 9000 and 6000 Storage: Sizing and Configuration Best Practices

Chuck Roman
Wayne Specht
Technical Marketing Engineering
Confidential Disclosure Agreement
– The information contained in this presentation is proprietary to Hewlett Packard Enterprise and is offered in confidence, subject to the terms
and conditions of a binding Confidential Disclosure Agreement (CDA)
– HPE requires customers and partners to have signed a CDA in order to view this presentation
– The information contained in this presentation is HPE confidential
– This presentation is NOT to be used as a ‘leave behind’ for customers and information may only be shared verbally with HPE external
customers under CDA/NDA
– This presentation may be shared with Partners under CDA/NDA in hard-copy or electronic format for internal training purposes only
– Do not remove any classification labels, warnings or disclaimers on any slide or modify this presentation to change the classification level
– Do not remove this slide from the presentation
– HPE does not warrant or represent that it will introduce any product to which the information relates
– The information contained herein is subject to change without notice
– HPE makes no warranties regarding the accuracy of this information
– The only warranties for HPE products and services are set forth in the express warranty statements accompanying such products and
services
– Nothing herein should be construed as constituting an additional warranty
– HPE shall not be liable for technical or editorial errors or omissions contained herein
– Strict adherence to the HPE Standards of Business Conduct regarding this classification level is critical

12/22/22 HPE Confidential


Agenda

– Maximum Sustainable Performance (MSP)


– Alletra 9000 performance density
– Alletra 9000 vs. Primera MSP
– Alletra 9000 configuration rules
– Alletra 6000 sizing, configuration and performance
– NinjaSTARS release plans and demonstration



Maximum Sustainable Performance Definition
– A set of maximum steady-state performance numbers that are achievable and sustainable without
extraordinary means. This is not a new concept for Alletra; it was introduced with Primera and version 4.0 of
the operating system.
– Using best-practice setups and configurations, these numbers are achievable “end-to-end” with negligible
user-data cache hits for specific configurations and workloads. The relevant variables include volume
provisioning type, usable capacity, working set size, and how much of the data will dedupe or compress.
– See the table below for details on the systems and configurations used to achieve these numbers.
Platform     Frontend ports             Backend configuration  Drives                 TPVV working set  Data reduction working set  Data reduction ratios
2-node 9060  8x 32Gb FC ports per node  Node enclosure only    24 drives (NVMe SSDs)  ~100TiB           ~55TiB                      Dedup 1.5:1, compression 3:1
4-node 9060  8x 32Gb FC ports per node  Node enclosure only    48 drives (NVMe SSDs)  ~200TiB           ~110TiB                     Dedup 1.5:1, compression 3:1
2-node 9080  8x 32Gb FC ports per node  Node enclosure only    24 drives (NVMe SSDs)  ~100TiB           ~55TiB                      Dedup 1.5:1, compression 3:1
4-node 9080  8x 32Gb FC ports per node  Node enclosure only    48 drives (NVMe SSDs)  ~200TiB           ~110TiB                     Dedup 1.5:1, compression 3:1
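The dedup and compression ratios in the table compound multiplicatively, which matters when estimating the physical capacity a working set will actually consume. A minimal sketch of that arithmetic (illustrative only; `physical_footprint` is a hypothetical helper, not an HPE tool):

```python
def physical_footprint(logical_tib: float, dedup_ratio: float, compression_ratio: float) -> float:
    """Estimate physical capacity consumed after data reduction.

    Dedup and compression apply in sequence, so the combined
    reduction ratio is their product.
    """
    return logical_tib / (dedup_ratio * compression_ratio)

# Data-reduction working set from the table: ~55TiB at 1.5:1 dedup and 3:1 compression
combined = 1.5 * 3.0  # 4.5:1 overall reduction
footprint = physical_footprint(55, 1.5, 3.0)
print(f"{combined}:1 combined -> ~{footprint:.1f}TiB physical")
```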



What Does Maximum Sustainable Performance (MSP) Include?
– The MSP numbers include application workloads and account for the average impact of background
processes that are not configurable/controlled by the user.
– The MSP numbers do not include the impact of user-configurable background processes, advanced
feature functionality like Virtual Copy or Remote Copy, or hardware/software failure scenarios.
– See the summary below.

MSP does account for the average impact of:
• Deduplication garbage collection
• Compression garbage collection
• System Reporter
• Defrag/reclaim activities running
• The Primera UI
• The on-node intelligence running

MSP does not account for the impact of:
• Transient behaviors like chunklet initialization, first-write penalties as a result of prefill operations, or admission of new hardware
• Virtual Copy snapshots (creating, maintaining, or updating)
• Upgrade operations
• Replication operations via Remote Copy
• Hardware/software failures
• Tune and convert operations for volumes/CPGs

Performance Density Enhancements With Alletra 9000
– To achieve MSP with the A650/A670, the system required at least one extra enclosure per node pair, more
than 24 drives per node pair, and eight 32Gb host ports per node pair to reach the system limits in all
workloads: 8U total.

– To achieve MSP with the Alletra 9060/9080, the system requires only the node enclosure per node pair, 24
drives per node pair, and eight 32Gb host ports per node pair to reach the system limits in all workloads: 4U
total.

– When the NVMe-oF JBOF is available, it will allow for more capacity but will not increase performance.

– Due to the updated controller design, operating system performance updates, and the NVMe SSD drive
performance capabilities, Alletra 9000 systems deliver increased performance density. Full
performance can be reached in 4U for a 4-node system: half the rack space required by HPE
Primera.
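The density claim can be expressed as IOPS per rack unit. A quick sketch using the 2.1M IOPS MSP figure for the 9080 4N quoted later in this deck (illustrative arithmetic only):

```python
def iops_per_rack_unit(iops: float, rack_units: int) -> float:
    """Performance density expressed as IOPS per rack unit (U)."""
    return iops / rack_units

# Alletra 9080 4N reaches MSP in 4U (node enclosure only)
density = iops_per_rack_unit(2_100_000, 4)
print(f"{density:,.0f} IOPS per U")  # 525,000 IOPS per U
```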



HPE ALLETRA 9000 PERFORMANCE
Performance numbers are based on MSP (Maximum Sustainable Performance)

2.1 million IOPS¹ | 55GB/s² | <250us latency³

HPE Alletra 9000

¹ HPE Alletra 9080 4N, 8KiB random reads, TPVV RAID 6
² 256KiB sequential reads, TPVV RAID 6
³ Minimum average latency for 8KiB random reads, TPVV RAID 6

HPE INTERNAL USE ONLY


Maximum Sustainable Performance Alletra TPVV IOPS RAID 6
Thin Provisioned (TPVV) volumes that are prefilled sequentially

4 node 9060 delivers:


• Up to 1.1 million IOPS of sustained performance

4 node 9080 delivers:


• Up to 2.1 million IOPS of sustained performance



Maximum Sustainable Performance Alletra TPVV GB/s RAID 6
Thin Provisioned (TPVV) volumes that are prefilled sequentially

4 node 9060 delivers:


• Up to 47 GB/s of sustained performance

4 node 9080 delivers:


• Up to 55 GB/s of sustained performance



Maximum Sustainable Performance Alletra Data Reduction IOPS RAID 6
Data reduction volumes* with a write data pattern resulting in a dedup ratio of 1.5:1 and a compression
ratio of 3:1. The same data pattern is used for sequential prefill and for subsequent write workloads.

4 node 9060 delivers:


• Up to 620,000 IOPS of sustained performance

4 node 9080 delivers:


• Up to 1.2 million IOPS of sustained performance

*Dedup + compression
Maximum Sustainable Performance Alletra Data Reduction GB/s RAID 6
Data reduction volumes* with a write data pattern resulting in a dedup ratio of 1.5:1 and a compression
ratio of 3:1. The same data pattern is used for sequential prefill and for subsequent write workloads.

4 node 9060 delivers:


• Up to 27 GB/s of sustained performance

4 node 9080 delivers:


• Up to 33.5 GB/s of sustained performance

*Dedup + compression
TPVV IOPS Performance Enhancements With Alletra 9000

With Thin-only, Alletra delivers more performance than Primera:


• 10% to 115% more IOPS with an Alletra 9060 vs Primera A650
• 11% to 57% more IOPS with an Alletra 9080 vs Primera A670
TPVV GB/s Performance Enhancements With Alletra 9000

With Thin-only, Alletra delivers more performance than Primera:


• 18% to 173% more throughput with an Alletra 9060 vs Primera A650
• 8% to 67% more throughput with an Alletra 9080 vs Primera A670



Data Reduction IOPS Performance Enhancements With Alletra 9000

With Data Reduction, Alletra delivers more performance than Primera:


• 10% to 34% more IOPS with an Alletra 9060 vs Primera A650
• 8% to 29% more IOPS with an Alletra 9080 vs Primera A670



Data Reduction GB/s Performance Enhancements With Alletra 9000

With Data Reduction, Alletra delivers more performance than Primera:


• 17% to 50% more throughput with an Alletra 9060 vs Primera A650
• 8% to 53% more throughput with an Alletra 9080 vs Primera A670



Alletra 9000 Model Scaling (TPVV)
IOPS increase with 9080 versus 9060 models

With Thin-only, 9080 vs 9060 provides:


• 35% to 90% more performance with 2 nodes
• 43% to 91% more performance with 4 nodes
Alletra 9000 Model Scaling (Data Reduction)
IOPS increase with 9080 versus 9060 models

With Data Reduction, 9080 vs 9060 provides:


• 38% to 95% more performance with 2 nodes
• 51% to 95% more performance with 4 nodes
Alletra 9000 Node Scaling (TPVV)
Performance increase with 4 node versus 2 node systems

With Thin-only, 4 vs 2 nodes provides:


• 80% to 172% more performance for Alletra 9060
• 79% to 171% more performance for Alletra 9080
Alletra 9000 Node Scaling (Data Reduction)
Performance increase with 4 node versus 2 node systems

With Data Reduction, 4 vs 2 nodes provides:


• 79% to 148% more performance for Alletra 9060
• 78% to 157% more performance for Alletra 9080
HPE ALLETRA 9000 LATENCY
Performance numbers are based on MSP (Maximum Sustainable Performance)

– 8% lower minimum average latency vs. HPE Primera: 225us for the Alletra 9080 4N vs. 245us for the
Primera 670 4N (8KiB random reads, TPVV)
– 63% more IOPS at 250us vs. HPE Primera: 1300 KIOPS for the 9080 4N vs. 800 KIOPS for the 670 4N,
with node enclosure only (48 x SSDs) (8KiB random reads, TPVV)


Additional Alletra 9000 Performance Notes, Including Remote
Copy Dual RCIP Ports
– The main focus was to ensure that Alletra 9000 met the product performance requirements
– Work was done to greatly reduce the host I/O performance impact of defragmentation
– Overall Primera performance remains on par with the previous 4.2 release
– New in 9.3, both 10Gb Ethernet ports on Alletra 9000 nodes may be configured for RCIP. This also applies to
Primera systems with 4.3.
– Using both ports improves performance for systems previously bandwidth-limited by a single port
– Greater improvement for large transfers
– Greater improvement on higher-performance systems (e.g., 9080)
– Small transfers are typically CPU-limited, and performance is about the same with one port or two
– The table illustrates typical performance improvement for large systems
– Synchronous Remote Copy between two 4 node 9080 systems
– Improvement using 2 RCIP ports per node vs. 1 RCIP port per node



HPE Alletra 9000 Configuration Rules
– Storage Base: all Alletra 9000 models use the 4-way chassis (there is no 2-way chassis)
– Controller nodes: Alletra 9000 can be configured with either 2 or 4 nodes
– It’s not possible to mix different node types (9060 and 9080) in the same chassis
– Host Adapters: each node can have min 1 and max 3 HBAs
– Different types of HBAs (FC, iSCSI) can be mixed in the same node
– All nodes must have the same HBA configuration (in a 4N array all nodes must have the same HBAs)
– SFPs: different SFPs can be used in the same HBA
– 16Gb and 32Gb SFPs can be mixed in the same FC HBA (any combination)
– 10Gb and 25Gb SFPs can be mixed in the same iSCSI HBA (only in pairs, two of each type)
– Drives: min quantity is 8 drives per node pair, max quantity is 24 per node pair
– Each node pair must have the same number of drives
– Min drive upgrade increment is 2 drives per node pair
– A new array must be configured with all drives of the same capacity (1.92TB, 3.84TB, 7.68TB, 15.36TB)
– It is possible to mix drives of different capacities after an upgrade but it’s not recommended
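The drive rules above reduce to a simple arithmetic check. A sketch of that check (`valid_drive_count` is a hypothetical helper for illustration, not an HPE configuration tool):

```python
def valid_drive_count(drives_per_node_pair: int) -> bool:
    """Check a drive count against the Alletra 9000 rules:
    min 8 and max 24 drives per node pair, upgrades in increments of 2.
    """
    return 8 <= drives_per_node_pair <= 24 and drives_per_node_pair % 2 == 0

valid = [n for n in range(1, 30) if valid_drive_count(n)]
print(valid)  # [8, 10, 12, 14, 16, 18, 20, 22, 24]
```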



Alletra 6000
Sizing, configuration, and performance



HPE Alletra 6000 All Flash Array
System comparison with Nimble Storage Gen5 All Flash

Specifications                        Alletra 6000                Nimble Storage Gen5

Base chassis form factor              4U                          4U
Number of controllers                 2 (Active/Standby)          2 (Active/Standby)
CPU family                            AMD Rome                    Intel Skylake
Number of cores per controller        8 to 128                    6 to 28
Memory per controller (GB)            64 to 896                   32 to 320
Max PCIe slots per controller         6 (Gen4)                    3 (Gen3)
Max drives in base chassis            24 NVMe SSDs                48 SATA SSDs
Max raw capacity in base chassis      368TB (24x 15.36TB SSDs)    368TB (48x 7.68TB SSDs)
Max raw capacity per system           Up to 1536TB                Up to 1106TB
Drive capacities                      1.92TB to 15.36TB NVMe      480GB to 7.68TB SATA



RAID options
– Ordering options
– 24 drives of the same capacity
– Triple+ Parity RAID with 21 data drives and 3 parity drives (21+3)

– 12 drives of the same capacity


– Triple+ Parity RAID with 9 data drives and 3 parity drives (9+3)

– 12 drives of one capacity and 12 drives of a second capacity


– Two Triple+ Parity 9+3 RAID sets
– Driveset B requires activation
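The data/parity splits above determine the usable fraction of raw capacity, which is why a single 21+3 set carries less RAID overhead than a 9+3 set. A quick sketch of that fraction (sparing and metadata overheads are ignored here):

```python
def raid_usable_fraction(data_drives: int, parity_drives: int) -> float:
    """Fraction of raw capacity available for user data in a parity RAID set."""
    return data_drives / (data_drives + parity_drives)

print(f"21+3: {raid_usable_fraction(21, 3):.1%} usable")  # 87.5% usable
print(f" 9+3: {raid_usable_fraction(9, 3):.1%} usable")   # 75.0% usable
```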



HPE Alletra 6000 configuration summary
(¹ indicates post-launch availability)

Model 6010
– Maximum single array raw capacity: 92TB
– Head shelf capacity (24-SSD packs): 23TB¹, 46TB¹, 46TB, 92TB
– Head shelf expansion drive sets: 1 (12 drives)
– Head shelf capacity expansion (12-SSD packs): 23TB, 46TB
– Number of expansion shelves (number of banks): 1 (2)
– Expansion shelf capacity per bank (24-SSD packs): 23TB¹, 46TB¹, 46TB
– Number of supported NICs (per controller): 3
– SSD capacities (flash): 1.92TB, 3.84TB

Model 6030
– Maximum single array raw capacity: 184TB
– Head shelf capacity (24-SSD packs): 23TB¹, 46TB¹, 92TB¹, 46TB, 92TB, 184TB
– Head shelf expansion drive sets: 1 (12 drives)
– Head shelf capacity expansion (12-SSD packs): 23TB, 46TB, 92TB
– Number of expansion shelves (number of banks): 2 (4)
– Expansion shelf capacity per bank (24-SSD packs): 23TB¹, 46TB¹, 92TB¹, 46TB, 92TB
– Number of supported NICs (per controller): 6
– SSD capacities (flash): 1.92TB, 3.84TB, 7.68TB

Model 6050
– Maximum single array raw capacity: 552TB
– Head shelf capacity (24-SSD packs): 23TB¹, 46TB¹, 92TB¹, 184TB¹, 46TB, 92TB, 184TB, 368TB
– Head shelf expansion drive sets: 1 (12 drives)
– Head shelf capacity expansion (12-SSD packs): 23TB, 46TB, 92TB, 184TB
– Number of expansion shelves (number of banks): 2 (4)
– Expansion shelf capacity per bank (24-SSD packs): 23TB¹, 46TB¹, 92TB¹, 184TB¹, 46TB, 92TB, 184TB, 368TB
– Number of supported NICs (per controller): 6
– SSD capacities (flash): 1.92TB, 3.84TB, 7.68TB, 15.36TB

Model 6070
– Maximum single array raw capacity: 1104TB
– Head shelf capacity (24-SSD packs): 23TB¹, 46TB¹, 92TB¹, 184TB¹, 46TB, 92TB, 184TB, 368TB
– Head shelf expansion drive sets: 1 (12 drives)
– Head shelf capacity expansion (12-SSD packs): 23TB, 46TB, 92TB, 184TB
– Number of expansion shelves (number of banks): 2 (4)
– Expansion shelf capacity per bank (24-SSD packs): 23TB¹, 46TB¹, 92TB¹, 184TB¹, 46TB, 92TB, 184TB, 368TB
– Number of supported NICs (per controller): 6
– SSD capacities (flash): 1.92TB, 3.84TB, 7.68TB, 15.36TB

Model 6090
– Maximum single array raw capacity: 1536TB
– Head shelf capacity (24-SSD packs): 23TB¹, 46TB¹, 92TB¹, 184TB¹, 46TB, 92TB, 184TB, 368TB
– Head shelf expansion drive sets: 1 (12 drives)
– Head shelf capacity expansion (12-SSD packs): 23TB, 46TB, 92TB, 184TB
– Number of expansion shelves (number of banks): 2 (4)
– Expansion shelf capacity per bank (24-SSD packs): 23TB¹, 46TB¹, 92TB¹, 184TB¹, 46TB, 92TB, 184TB, 368TB
– Number of supported NICs (per controller): 6
– SSD capacities (flash): 1.92TB, 3.84TB, 7.68TB, 15.36TB

¹ Numbers quoted are for 12-SSD packs.
Number of drives affects performance



HPE Alletra 6000 All Flash Array
System comparison with Nimble Storage Gen5 All Flash

Specifications                              Alletra 6000                      Nimble Storage Gen5
Max 1GBase-T ports per controller           4                                 Up to 12²
Max 10GBase-T ports per controller          Up to 24¹                         Up to 14³ ⁴
Max 10GbE SFP+ ports per controller         Up to 24¹                         Up to 12²
Max 25GbE SFP28 ports per controller        Up to 12¹                         Up to 6²
Max 100GbE QSFP28 ports per controller      Up to 4¹                          N/A
Max 32Gb FC ports per controller            Up to 16¹                         Up to 6²
Storage Class Memory (DSCCM) support        Yes                               Yes
Power supplies (quantity)                   4 per system (2 per controller)   2 per system
Power supplies (wattage)                    800W (high-line or low-line),     1200W (high-line or low-line),
                                            1600W (high-line)                 3000W (high-line or low-line⁵)
Notes:
1. Divide by 2 for the 6010 model
2. Multiply by 2/3 for AF20 models
3. Maximum of 10 ports for the AF20 models
4. Includes 2 on-board ports
5. No low-line on the AF80
Number of ports affects performance



HPE Alletra 6000 All Flash Array
Models and performance mapping with Nimble Storage Gen5

[Chart: models positioned by IOPS (x1000, 4K random reads and writes, 50/50 mix), highest to lowest]
– HPE Alletra 6090 All Flash Array
– HPE Alletra 6070 All Flash Array
– HPE Nimble Storage AF80 All Flash Array
– HPE Alletra 6050 All Flash Array
– HPE Nimble Storage AF60 All Flash Array
– HPE Alletra 6030 All Flash Array
– HPE Nimble Storage AF40 All Flash Array
– HPE Alletra 6010 All Flash Array
– HPE Nimble Storage AF20 All Flash Array
Performance comparison part 1



Performance comparison part 2



Best practices

– Use NinjaSTARS to see the effects of choices around capacity bundles, frontend ports, and so forth on the
array configuration
– Use the Primary Storage Sizer on the HPE InfoSight portal for detailed performance sizing, including
variables such as the effects of synchronous replication
– Order 24-drive capacity options whenever possible
– Lower RAID overhead
– Better performance
– Choose the right PCIe options for your customer’s environment and concerns
– Switch compatibility
– Port density versus resiliency



NinjaSTARS Release Plan And Demonstration
– An updated version of NinjaSTARS containing the Alletra 9000 and 6000 systems
is scheduled for May 5th.

– NinjaSTARS Demonstration



Thank you
Chuck Roman
chuck.roman@hpe.com
Wayne Specht
wayne.robert.specht@hpe.com

