HP 3PAR Performance Presentation: October 2010
PERFORMANCE PRESENTATION
October 2010
IOP/s
– Small block size (16k or smaller)
• There is little difference in IOP/s between block sizes of 16K or smaller.
– Random access to entire device
– Multiple threads to the device
MB/s (Throughput)
– Large block size (256k or larger)
– Sequential access to entire device
– Single thread to the device
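The two workload profiles above are linked by one identity: throughput equals IOP/s times block size. A minimal sketch (the helper name and sample numbers are illustrative, not from the slides):

```python
def throughput_mb_s(iops, block_size_kb):
    """Throughput in MB/s produced by `iops` operations of `block_size_kb` KB each."""
    return iops * block_size_kb / 1024.0

# Small-block random workload: high IOP/s, modest throughput.
print(throughput_mb_s(10_000, 16))   # 10k IOP/s at 16K -> 156.25 MB/s

# Large-block sequential workload: far fewer IOP/s, higher throughput.
print(throughput_mb_s(1_000, 256))   # 1k IOP/s at 256K -> 250.0 MB/s
```

This is why IOP/s benchmarks use small blocks and MB/s benchmarks use large ones: the same device cannot maximize both at once.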
NOTES:
1. These are back-end numbers. RAID overhead must be considered when calculating front-end capability
2. Numbers reflect IO access from VLUN (host) to Physical Disk
3. IOP/s are lower with larger blocks! (With 64K blocks, IOP/s are 67% of the above)
4. As seen above, SSD IOPs vary greatly with IO Mix
5. SSD IOP/s above are for 4K blocks. Larger blocks have an impact on overall SSD performance
6. SSD writes are significantly slower than reads (2 – 3 times longer to complete)
7. SSD Sequential performance is slower than spinning disks
NOTES:
1. Appropriate number of disks and HBAs needed to obtain node max
2. Numbers above are listed for a node-pair (2 nodes)
3. Disk IOP/s are limited by drive support (max config)
NOTES:
1. Appropriate number of disks and HBAs needed to obtain node max
2. Numbers above are listed for a node-pair (2 nodes)
3. Disk IOP/s are limited by drive support (max config)
4. Host Write MB/s depend on set size (shown as default of 4 for R5, 3d+1p)
5. Back-end IOP/s performance is fixed per node, while front-end performance depends on the data-to-parity overhead. In this example,
of the 750 MB/s back end, 560 MB/s is host data and 190 MB/s is parity. If this were 7d+1p, the host could push 650 MB/s
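The data-to-parity split in note 5 can be sketched as a small helper. The function name is illustrative, not part of any 3PAR tooling; it simply divides a fixed back-end bandwidth between host data and parity in proportion to the set-size layout:

```python
def host_bandwidth(backend_mb_s, data_disks, parity_disks):
    """Split a fixed back-end bandwidth into host-data and parity portions
    in proportion to the data/parity members of the RAID set."""
    total = data_disks + parity_disks
    host = backend_mb_s * data_disks / total
    parity = backend_mb_s * parity_disks / total
    return host, parity

# RAID 5 3d+1p with the 750 MB/s back end from the slide:
print(host_bandwidth(750, 3, 1))   # (562.5, 187.5) -- the ~560/~190 quoted above
# RAID 5 7d+1p with the same back end:
print(host_bandwidth(750, 7, 1))   # (656.25, 93.75) -- the ~650 MB/s quoted above
```

This reproduces the slide's rounded figures: 3d+1p spends a quarter of the back end on parity, while 7d+1p spends only an eighth.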
NOTES:
1. Appropriate number of disks and HBAs needed to obtain node max
2. Numbers above are listed for a node-pair
3. For all other Series, disk IOP/s are limited by drive support (max config)
4. Host Write MB/s depend on set size (shown as default of 8 for R6, 6d+2p)
5. Host Write MB/s equation is [(set size - 2) / (set size)] * [Disk Bandwidth]
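The equation in note 5 can be applied directly. The 750 MB/s disk-bandwidth figure below is an assumed input for illustration, not a number from this slide:

```python
def host_write_mb_s(set_size, disk_bandwidth_mb_s):
    """Host Write MB/s per the slide's RAID 6 equation:
    [(set size - 2) / (set size)] * [Disk Bandwidth] -- 2 members are parity."""
    return (set_size - 2) / set_size * disk_bandwidth_mb_s

# Default R6 set size of 8 (6d+2p), assuming 750 MB/s of disk bandwidth:
print(host_write_mb_s(8, 750))   # 562.5
```

Larger set sizes shrink the parity fraction, so host write bandwidth approaches the raw disk bandwidth as the set size grows.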
– The InServ is not cache-centric like monolithic architectures. Cache is used
primarily as a buffer to facilitate data movement to and from disk.
– For spinning disks the InServ will read a full 16K cache page from disk.
• If a host read is for less than 16K, we still read in the entire page.
• If a write is for less than a full page, we only write out the partial page with valid data.
• We will combine multiple sub-page size host writes in a single dirty page.
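The sub-page write behavior above can be sketched as a toy model. Everything here (class name, byte-level valid map) is illustrative, not the actual cache implementation; it only shows how several sub-page host writes land in one 16K dirty page and how only the valid ranges need flushing:

```python
PAGE_SIZE = 16 * 1024  # 16K cache page, as described above

class DirtyPage:
    """Toy model of one dirty cache page accumulating sub-page host writes."""
    def __init__(self):
        self.valid = [False] * PAGE_SIZE  # which bytes hold valid host data

    def write(self, offset, length):
        """Record a sub-page host write into this page."""
        for i in range(offset, offset + length):
            self.valid[i] = True

    def valid_extents(self):
        """(offset, length) runs of valid data -- only these partial ranges
        are written out, not the whole 16K page."""
        runs, start = [], None
        for i, v in enumerate(self.valid + [False]):
            if v and start is None:
                start = i
            elif not v and start is not None:
                runs.append((start, i - start))
                start = None
        return runs

page = DirtyPage()
page.write(0, 4096)      # first 4K host write
page.write(4096, 4096)   # adjacent 4K write combined into the same dirty page
print(page.valid_extents())   # [(0, 8192)] -- one merged flush, one partial page
```

Two 4K writes thus cost a single partial-page destage rather than two full 16K writes.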
R1: TPVV
FE * [R + (>3W)] = BE
(0 Base R, 2 Base W, 1+ SA W)
R5: TPVV
FE * [R + (>5W)] = BE
(2 Base R, 2 Base W, 1+ SA W)
R6: TPVV
FE * [R + (>7.66W)] = BE
(3.33 Base R, 3.33 Base W, 1+ SA W)
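Reading the formulas above as FE (front-end IOP/s) times a read/write mix giving BE (back-end IOP/s), they can be evaluated numerically. This interpretation and the sample 70/30 mix are assumptions for illustration; the "1+" snapshot-admin (SA) write term is taken at its minimum of 1:

```python
# Per-host-write back-end multipliers from the slide (base reads +
# base writes + at-least-one SA write), taken at their minimums:
WRITE_MULTIPLIER = {
    "R1": 0 + 2 + 1,          # > 3
    "R5": 2 + 2 + 1,          # > 5
    "R6": 3.33 + 3.33 + 1,    # > 7.66
}

def backend_iops(frontend_iops, read_fraction, raid):
    """BE = FE * [R + (multiplier * W)], with W = 1 - R."""
    w = 1.0 - read_fraction
    return frontend_iops * (read_fraction + WRITE_MULTIPLIER[raid] * w)

# 10,000 front-end IOP/s on a TPVV at an assumed 70/30 read/write mix:
for raid in ("R1", "R5", "R6"):
    print(raid, backend_iops(10_000, 0.70, raid))
```

At this mix the same front-end load costs roughly 16K back-end IOP/s on R1 but nearly 30K on R6, which is why write-heavy workloads are so sensitive to RAID choice.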