VSP Architecture Overview V2 2
VSP Disk Choices
VSP Design
Each FED board has a Data Accelerator chip (DA, or LR for local
router) instead of 4 MPs. The DA routes host I/O jobs to the VSD board
that owns that LDEV and performs DMA transfers of all data blocks
to/from cache.
Each BED board has 2 Data Accelerators instead of 4 MPs. They route
disk I/O jobs to the owning VSD board and move data to/from cache.
Each BED board has 2 SAS SPC Controller chips that drive 8 SAS 6Gbps
switched links (over four 2-Wide cable ports).
Most MP functions have been moved from the FED and BED boards to
new multi-purpose VSD boards. No user data passes through the VSD
boards! Each VSD has a 4-core Intel Xeon CPU and local memory. Each
VSD manages a private partition within global cache.
Unlike the previous Hitachi Enterprise array designs, the FED board does
not decode and execute I/O commands. In the simplest terms, a VSP
FED accepts and responds to host requests by directing the host I/O
requests to the VSD managing the LDEV in question. The VSD processes
the commands, manages the metadata in Control Memory, and creates
jobs for the Data Accelerator processors in FEDs and BEDs. These then
transfer data between the host and cache, virtualized arrays and cache,
disks and cache, or HUR operations and cache. The VSD that owns an
LDEV tells the FED where to read or write the data in cache.
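The routing flow described above — the FED forwarding rather than decoding commands, the VSD owning the metadata, and the Data Accelerator moving the data — can be sketched in Python. Every name here (FED, VSD, CACHE, CONTROL_MEMORY) is an illustrative stand-in, not Hitachi firmware:

```python
# Hypothetical sketch of the VSP front-end I/O flow. CACHE stands in for
# global cache and CONTROL_MEMORY for VSD-managed metadata; both are
# assumptions for illustration only.

CACHE = {}           # cache address -> data block
CONTROL_MEMORY = {}  # ldev -> cache address

class VSD:
    """Owns a set of LDEVs; decodes commands and builds DMA jobs for DAs."""
    def __init__(self, vsd_id, owned_ldevs):
        self.vsd_id = vsd_id
        self.owned_ldevs = set(owned_ldevs)

    def process(self, command):
        # The VSD, not the FED, decodes the command and manages metadata.
        assert command["ldev"] in self.owned_ldevs
        addr = CONTROL_MEMORY.setdefault(command["ldev"],
                                         ("cache", command["ldev"]))
        # Hand a DMA job to the FED's Data Accelerator; no user data
        # passes through the VSD itself.
        return {"op": command["op"], "cache_addr": addr,
                "data": command.get("data")}

class FED:
    """Accepts host I/O and routes it to the VSD that owns the LDEV."""
    def __init__(self, vsd_map):
        self.vsd_map = vsd_map  # ldev -> owning VSD

    def handle_host_io(self, command):
        vsd = self.vsd_map[command["ldev"]]  # route, don't decode
        job = vsd.process(command)           # VSD builds the DMA job
        # The Data Accelerator performs the DMA transfer host <-> cache
        # at the cache address the VSD supplied.
        if job["op"] == "write":
            CACHE[job["cache_addr"]] = job["data"]
            return "ok"
        return CACHE[job["cache_addr"]]

fed = FED({7: VSD(0, [7])})
```

The point of the split is that the FED's only per-command work is a lookup, which is why the per-port IOPS ceiling rises so sharply versus an MP-based front end.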
VSP LDEV Management
Paths Per LDEV
VSP I/O Operations
Performance on VSP
VSP can achieve very high per-port cache-hit IOPS rates. In tests using a
100% 8KB random-read workload on 32 15K RPM disks in RAID-10 (2+2), we measured:
USP V: 1 port, about 16,000 IOPS (2 ports-2MPs, 31,500 IOPS)
VSP: 1 port, about 67,000 IOPS (2 ports, 123,000 IOPS)
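A quick sketch of the ratios implied by these measurements (figures taken directly from the numbers above):

```python
# Cache-hit IOPS figures quoted above for USP V and VSP.
usp_v = {"one_port": 16_000, "two_ports": 31_500}
vsp   = {"one_port": 67_000, "two_ports": 123_000}

# VSP vs USP V on a single port, and how close VSP comes to
# doubling when a second port is added.
per_port_gain = vsp["one_port"] / usp_v["one_port"]
vsp_scaling   = vsp["two_ports"] / vsp["one_port"]

print(f"{per_port_gain:.1f}x per-port gain")  # -> 4.2x per-port gain
print(f"{vsp_scaling:.2f}x scaling")          # -> 1.84x scaling
```

So a single VSP port delivers roughly 4.2x the cache-hit IOPS of a USP V port, and two VSP ports scale to about 1.84x of one.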
VSP Architecture Overview
Fully populated Dual Chassis VSP has 6 racks
(Diagram: rack lineup including RK-00, RK-01, RK-02, and RK-10; overall footprint about 11.8 ft by 3.6 ft.)
VSP Single Chassis Architecture w/ Bandwidths
(Diagram: single-chassis data paths and bandwidths — 64 x 8Gbps FC ports total,
8 per FED; 8 DA processors on the FED boards; 4 VSD boards; CM/DCA boards
providing 256GB of cache; 4 GSWs with 96 grid links plus paths to the other
chassis's GSWs; 4 BED boards with 8 DA processors and 8 x 6Gbps SAS links per
BED, 32 x 6Gbps SAS links total; grid paths rated 16 x 1GB/s send and
16 x 1GB/s receive per board group, 32 x 1GB/s at the GSWs.)
VSP Single Chassis Grid Overview
(Diagram: single-chassis grid — 8 FEDs, 4 BEDs, 8 DCA cache boards, and
4 VSDs; 16 VSD CPU cores per chassis. Caption: VSP Single Chassis — Boards,
CPU Cores.)
Dual Chassis Arrays
VSP Second Chassis - Uniform Expansion
(Diagram: the second chassis duplicates the first — 8 FEDs, 4 BEDs, 8 DCA
cache boards, 4 VSDs, and 16 VSD CPU cores — linked by 4 GSW paths to
Chassis-1. Caption: VSP Second Chassis — Boards, CPU Cores.)
VSP and USP V Table of Limits
Board Level Details
Logic Box Board Layout
(Diagram: VSP Chassis #1 board slots.)
Cluster 2: CHA-1 (2QL), CHA-0 (2QU), DCA-1 (2CD)
Cluster 1: DCA-0 (1CA), DKA-1 (1AL), DKA-0 (1AU), DCA-1 (1CB), CHA-1 (1EL), CHA-0 (1EU), DCA-2 (1CE), CHA-3 (1FL), CHA-2 (1FU), DCA-3 (1CF)
SVP-0
FED Port Labels (FC or FICON)
DKU and HDU Overviews
DKU and HDU Map Front View, Dual Chassis
DKU and HDU Map Rear View, Dual Chassis
BED to DKU Connections (Single Chassis)
(Diagram: DKC-0 — BED-0 and BED-1 pairs connect over 32 x 6Gbps SAS links
(16 2W cables) to DKU-00 through DKU-07; each HDU shown holds 16 disks.)
Up to 1024 SFF (shown) or 640 LFF disks, 32 600MB/s SAS links (16 2W ports),
8 DKUs, 64 HDUs.
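The drive-count limits quoted above are internally consistent; a quick arithmetic check (per-HDU counts are derived here by division, not stated in the source):

```python
# Consistency check of the quoted limits: 8 DKUs, 64 HDUs,
# up to 1024 SFF or 640 LFF disks in a single chassis.
dkus = 8
hdus = 64
hdus_per_dku = hdus // dkus   # 8 HDUs per DKU (derived)
sff_per_hdu = 1024 // hdus    # 16 SFF disks per HDU (derived)
lff_per_hdu = 640 // hdus     # 10 LFF disks per HDU (derived)
print(hdus_per_dku, sff_per_hdu, lff_per_hdu)  # -> 8 16 10
```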
Q and A