HCS G1000 Concepts


Operating and Managing

Hitachi Storage with Hitachi


Command Suite v8.x
Enterprise Storage Architecture

© Hitachi Data Systems Corporation 2014. All rights reserved.


Module Objectives

 Upon completion of this module, you should be able to:


• List distinct hardware components in Hitachi Virtual Storage Platform G1000
(VSP G1000)
• Describe internal controller chassis architecture
• Identify disk chassis (DKU) and hard disk layouts
• List the software packages for Virtual Storage Platform G1000
• List the supported RAID configurations
• Describe Hi-Track Monitor



Hitachi Virtual Storage
Platform G1000
(VSP G1000)



HDS Storage Portfolio

Unified and Differentiated Family

[Figure: the HDS storage portfolio plotted by performance versus functionality/scalability — HUS 100 family (HUS 110, 130, 150), HUS VM, VSP, VSP G1000, HNAS 3080/3090 and 4040/4060/4080/4100, F1140, and content storage — unified by Hitachi Command Suite management, dynamic tiering, pooling, flash, and local and remote replication.]
VSP G1000 Overview

 A unified block/file enterprise storage system
 Feature rich and tremendously flexible in configurability

[Figure: primary and secondary controller racks surrounded by LFF/SFF drive chassis and flash module drive chassis.]


VSP G1000 Overview

Enterprise array — scalable up to 4.5PB, with an optional file module

[Figure: two controller chassis, DKC-CBXB and DKC-CBXA.]
VSP Full Configuration — 6 Rack

 Maximum number of frames is 6: 2 DKC boxes and 16 DKU boxes

                        CBXA        CBXA+CBXB
 HDD (2.5")             1,152       2,304
 HDD (3.5")             576         1,152
 Fibre Channel ports    64 (96*)    128 (192*)
 Cache                  1,024GB     2,048GB
 * With BED slots used for FED


Controller Chassis Components

 Controller Chassis (CBX) consists of:


• Front End Directors (FED)
• Virtual Storage Directors (VSD)
• Cache Path Control Adapter (CPC)
• Cache Memory Backup (BKM)
• Back End Directors (BED)
• Service Processor (SVP)
• Cooling Fan
• AC-DC Power Supply



Dual Cluster Structure of the DKC

Cluster-2

Cluster-1

* DKC boards are always installed in pairs and purchased as options.


DKC Components — Front End Director

 Front End Director (FED)/Channel Adapter (CHA) overview


• Controls data transfer between hosts and cache memory



DKC Components — Front End Director

 DKC-0 Port Naming (16 port Option)



DKC Components — Front End Director

 CBXB Port Naming (16 port Option)



DKC Components — Front End Director

FED options (FICON for mainframe; Fibre Channel for open systems):

• FICON Shortwave — model DKC-F801I-16MS8; data transfer rate 2/4/8; 1–8 board pairs (9–11 if BED slots are used); 16 ports per pair; 16–128 ports (144–176 with BED slots); max OM3 cable length 500/380/150m
• FICON Longwave — model DKC-F801I-16ML8; data transfer rate 2/4/8; 1–8 board pairs (9–11); 16 ports per pair; 16–128 ports (144–176); max cable length 10km
• Fibre Channel 8G — model DKC-F801I-16FC8; data transfer rate 2/4/8; 1–8 board pairs (9–12); 16 ports per pair; 16–128 ports (144–192); max cable length 500/380/150m (shortwave SFP, OM3) or 10km (longwave SFP)
• Fibre Channel 16G — model DKC-F810I-8FC16; data transfer rate 400/800/1600 MB/s; 1–8 board pairs (9–12); 8 ports per pair; 8–64 ports (72–96); max cable length 380/150/100m (shortwave SFP, OM3) or 10km (longwave SFP)


DKC Components — Virtual Storage Director

 Virtual Storage Director (VSD) overview


• Contains the main processors and operational control data memory
• Stores and manages internal operational metadata and state
▪ Array groups, LDEVs, external LDEVs, runtime tables, and mapping data for various software products
• Keeps the overall system state stored and referenced, and distributes work to the appropriate I/O offload processors

MPB/VSD
DKC Components — Cache Path Control Adapter

 Cache Path Control Adapter (CPC) Overview


• Caches user data blocks from drives via the BED during a read
• Caches data from the FED as part of a data write operation
• Provides data routing between the FEDs, BEDs, VSDs and cache

CPC
DKC Components — Cache Path Control Adapter

 Cache Path Control Adapter (CPC) Overview


• 2 TB Cache maximum (2 DKC Configuration)
▪ 8 DIMM slots per PCB @ 32GB DIMMs = 256GB per PCB
▪ 4 Options = 8 PCBs = 2048GB Cache

• The CPCs must be interconnected in a 2-DKC configuration
▪ 16 custom fibre cables

[Figure: Controller-0 holds CPC-0 and CPC-1 (basic) plus CPC-2 and CPC-3 (options); Controller-1 holds CPC-4 through CPC-7 (all options); cables link the two controllers.]
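The maximum-cache figures above follow from the slide's per-DIMM and per-board numbers; a quick sketch of the arithmetic (not product code):

```python
# Maximum-cache arithmetic for a 2-DKC VSP G1000 configuration,
# using the figures quoted on the slide.
DIMM_GB = 32        # capacity per DIMM
SLOTS_PER_PCB = 8   # DIMM slots per CPC board
PCBS = 8            # 4 options = 8 PCBs across both controllers

gb_per_pcb = DIMM_GB * SLOTS_PER_PCB   # 256GB per PCB
total_gb = gb_per_pcb * PCBS           # 2048GB maximum cache
print(gb_per_pcb, total_gb)
```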


DKC Components — Cache Path Control Adapter

 Cache Path Control Adapter (CPC) Overview


• Shared Memory
▪ Part of Cache reserved for Control Data/Management Information
• For example, WWNs, LUN Mapping, Program Product License Keys, Task and Task Status
▪ Allocated from a portion of the first CPC pair
▪ 16GB – 80GB

• SM capacity requirement is determined by


▪ The number of LDEVs
▪ The type of Program Products (PP) installed
▪ The features or capacity of certain PP used



Cache Battery Backup Module (BKM)

 SSD capacity to back up cache data


• 128GB or 256GB
 BKMS: Small Ni-MH Battery for small cache configuration
 BKML: Large Ni-MH Battery for large cache configuration



Power Failure & Cache Back Up



DKC Components — Back End Director

 Back End Director (BED)


• Controls data transfer between the disks and the cache memory
• Two Types:
▪ Standard BED
▪ Encrypted BED
• License Key Required
• Encryption at Parity Group level



DKC Components — Back End Director

Model number: DKC-F810I-SCA (standard) / DKC-F810I-ESCA (encrypting)

• Number of PCBs: 2 / 2
• Maximum number of options per storage system: 4 / 4
• SAS link performance: 6Gb/s / 6Gb/s
• Data encryption function: not supported / supported
• SAS ports per PCB: 4, each with 4 × 6Gb/s SAS links (both models)
• Maximum number of drive paths per storage system: 32 / 32
• Maximum number of drives per SAS port (2.5-inch HDD standard model): 288 / 288


Drive Chassis

 6Gb/s SAS back end
 Supports a variety of SFF and LFF HDDs
• HDD: 300GB – 4TB; 7.2K, 10K, 15K RPM
• SSD/Flash: 400GB – 3.2TB
 Maximum drives supported (2 controllers):
• 2,304 SFF HDD
• 1,152 LFF HDD
• 384 SFF SSD
• 192 FMD
 Chassis types (front and rear views shown):
• SBX for SFF 2.5" — 8 trays, maximum of 192 HDD/SSD, height 16U
• UBX for LFF 3.5" — 8 trays, maximum of 96 HDD/SSD, height 16U
• FBX for FMD — 4 trays, maximum of 48 FMD, height 8U


Drive Chassis - SBX & UBX

 SBX – 2.5” drive chassis


 UBX – 3.5” drive chassis



Drive Chassis – FBX (Hitachi Accelerated Flash)

 Support Hitachi Accelerated


Flash Storage
 1 FBX (Flash Box) = 4 FMU
(Flash Memory Unit)
 12 FMDs (Flash Memory
Device) per FMU; 48 FMDs
per FBX
 Max 4 FBX (192 FMD) per 2
controllers
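The FMD maximums above follow from straightforward multiplication; a quick sketch:

```python
# FMD count arithmetic from the slide's FBX description.
FMD_PER_FMU = 12    # Flash Memory Devices per Flash Memory Unit
FMU_PER_FBX = 4     # Flash Memory Units per Flash Box
MAX_FBX = 4         # maximum FBX per 2 controllers

fmd_per_fbx = FMD_PER_FMU * FMU_PER_FBX   # 48 FMDs per FBX
max_fmd = fmd_per_fbx * MAX_FBX           # 192 FMDs maximum
print(fmd_per_fbx, max_fmd)
```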



Drive Location in SFF Drive Chassis

 Recommended: 1 spare per 24 HDDs
 RAID group options: 7D+1P, 3D+1P, 14D+2P, 6D+2P, 2D+2D
 x = DKC number (0 or 1); y = DKU number (0–5)

[Figure: front and rear views of a 16U SFF drive chassis (DKU-xy) in Rack-00 — eight 2U drive units HDU-xy0 through HDU-xy7, each holding drives HDDxyN-00 through HDDxyN-23, with redundant SAS switches (SSWxyN-1, SSWxyN-2) and power supplies (DKUPSxyN-1, DKUPSxyN-2) per unit.]
Drive Location in LFF Drive Chassis

 Recommended: 1 spare per 12 HDDs
 RAID group options: 7D+1P, 3D+1P, 14D+2P, 6D+2P, 2D+2D
 x = DKC number (0 or 1); y = DKU number (0–5)

[Figure: front and rear views of a 16U LFF drive chassis (DKU-xy) in Rack-00 — eight 2U drive units HDU-xy0 through HDU-xy7, each holding drives HDDxyN-00 through HDDxyN-11, with redundant SAS switches (SSWxyN-1, SSWxyN-2) and power supplies (DKUPSxyN-1, DKUPSxyN-2) per unit.]
Drive Location in FMD Drive Chassis

 Recommended: 1 spare per 24 FMDs
 RAID group options: 7D+1P, 3D+1P, 14D+2P, 6D+2P, 2D+2D
 x = DKC number (0 or 1); y = DKU number (0–5)

[Figure: front and rear views of an 8U FMD chassis (FMU-xy) in Rack-00 — eight drive units HDU-xy0 through HDU-xy7, each holding FMDs HDDxyN-00 through HDDxyN-05, with redundant SAS switches and power supplies.]
Service Processor (SVP) Overview

 The SVP is designed as an appliance and is an integral part of the VSP G1000 storage system
 The SVP is based on a reduced Windows 7 OS image
• Functions such as mail and web browsing are strictly prohibited
 The main functions of the SVP are to:
• Monitor the health of the array
• Log real time events
• Provide APIs (such as SMI-S) to outside management stations
• Provide a call home service
• Allow maintenance and configuration of the array by Hitachi representatives
 The SVP is not customer accessible


VSP G1000 View Architecture



VSP G1000 Software
Packages



VSP G1000 Software

[Figure: VSP G1000 software stack — Hitachi Command Suite, Hitachi Command Suite Analytics, Hitachi Command Suite Data Mobility, Hitachi Local Replication and Hitachi Remote Replication, layered on the Hitachi Storage Virtualization Operating System (formerly BOS), with File Base (Entry/Ultra/Value) options.]


Hitachi Storage Virtualization Operating System

Included components:

• Virtual Storage Machine-enabled Resource Partition Manager
• Virtual LVI
• Open Volume Management
• LUN Manager
• Cache Residency Manager
• Cache Residency Manager MF
• Performance Monitor
• Storage Navigator
• RAIDCOM
• Hitachi Device Manager
• Hitachi Device Manager CLI
• HDvM SMI-S Provider
• Hitachi Dynamic Provisioning (MF/Open)
• Hitachi Virtual Partition Manager
• Universal Volume Manager
• Hitachi Dynamic Link Manager Advanced (unlimited, VMware included)
• SNMP Agent
• Data Retention Utility
• Volume Retention Manager
• Volume Shredder
• Server Priority Manager
• Java API
• Embedded SMI-S provider


Hitachi Command Suite Analytics

 Hitachi Command Suite Analytics solution provides the data center


analytics and holistic, end-to-end service level monitoring capabilities
for administrators who need to maximize system performance across
their data center
 HCS Analytics delivers detailed storage performance and capacity analysis for Hitachi storage systems
 Licensing: usable capacity
 Components
• Hitachi Tuning Manager
• Hitachi Command Director


Hitachi Command Suite Data Mobility

 Hitachi Command Suite Data Mobility solution facilitates the intelligent


placement of data within the IT infrastructure to optimize application
service levels with automated and proactive data movement
• REVIVE – underperforming applications
• RECLAIM – stranded storage capacity
• RENEW – overtaxed storage systems
 Licensing: usable capacity
 Components
• Hitachi Tiered Storage Manager + VM2
• Hitachi Dynamic Tiering for Open



Hitachi Local Replication

 Hitachi Local Replication provides internal protection for full volume


clones and point-in-time virtual volumes
 Licensing: used capacity
 Components
• Hitachi ShadowImage
• Hitachi Thin Image
• Hitachi Replication Manager



Hitachi Remote Replication

 Hitachi Remote Replication provides the ability to set primary


and secondary data center protection
• It also ensures compliance adherence for Recovery Point and
Recovery Time Objectives
 Licensing: used capacity
 Components
• Hitachi TrueCopy
• Hitachi Universal Replicator
• Hitachi Replication Manager



RAID Configuration



Data Redundancy

 RAID implementation
• 4, 8 or 16 physical HDDs are configured into a RAID group (also called a parity group)

 Groups of 4, 8 or 16 HDDs are set up using one of three parity options:
• Supported RAID levels
 RAID-1 (2D+2D)/(4D+4D)
 RAID-5 (3D+1P)/(7D+1P)
 RAID-6 (6D+2P)/(14D+2P)
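For each supported layout, the usable fraction of raw capacity follows directly from the data-to-parity ratio; a small sketch (the helper name is illustrative):

```python
# Usable-capacity fraction for the supported D+P parity-group layouts.
def usable_fraction(data, parity):
    """Fraction of raw drive capacity available for data."""
    return data / (data + parity)

layouts = {
    "RAID-1 (2D+2D)": (2, 2),
    "RAID-1 (4D+4D)": (4, 4),
    "RAID-5 (3D+1P)": (3, 1),
    "RAID-5 (7D+1P)": (7, 1),
    "RAID-6 (6D+2P)": (6, 2),
    "RAID-6 (14D+2P)": (14, 2),
}
for name, (d, p) in layouts.items():
    print(f"{name}: {usable_fraction(d, p):.1%} usable")
```

Wider groups (7D+1P, 14D+2P) trade rebuild exposure for a higher usable fraction.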
Supported RAID Configurations

 RAID-1
2D + 2D Configuration

A A’ B B’
E’ E F’ F
G G’ H H’
I I’ J J’

4D + 4D Configuration
A A’ B B’ C C’ D D’
E’ E F’ F G’ G H H’
I I’ J J’ K K’ L L’
M M’ N N’ O O’ P P’



RAID Configurations

 RAID-5
3D + 1P Configuration

A B C P
D E P F
G P H I
P J K L

7D + 1P Configuration
A B C D E F G P
H I J K L M P N
O Q R S T P U V
W X Y Z P AA AB AC
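The rotating parity placement in the 3D+1P table above can be reproduced with a short sketch (the helper name is illustrative; parity starts on the rightmost drive and shifts left one drive per stripe):

```python
import string

# Sketch: generate a RAID-5 rotating-parity layout like the slide's table.
def raid5_layout(data_drives, stripes):
    width = data_drives + 1                 # data drives plus one parity drive
    labels = iter(string.ascii_uppercase)   # data block labels A, B, C, ...
    layout = []
    for i in range(stripes):
        parity_col = width - 1 - (i % width)  # parity rotates right to left
        row = ['P' if col == parity_col else next(labels)
               for col in range(width)]
        layout.append(row)
    return layout

for row in raid5_layout(3, 4):   # reproduces the 3D+1P table
    print(' '.join(row))
```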



RAID Configurations

 RAID-6 — Striping with dual parity drives

6D + 2P Configuration

A B C D E F P Q
H I J K L P Q N
O R S T P Q U V
W X Y P Q AA AB AC




Disk Sparing Operations

 Dynamic sparing (pre-emptive copy)


• Means that ORM (online read margin) diagnostics have determined a drive to be
suspect, or drive read/write error thresholds have been exceeded
• Storage system spares out the drive even though the drive has not completely
failed
• Data is copied to spare drive (not recreated)
 Correction copy (disk failure)
• Occurs when a drive fails
• If a spare is available, lost data is re-created on the spare, which logically becomes
part of the array group
▪ This mode invokes the DRR chip, where pre-emptive copy does not
• If a spare is not available, the array group is at risk for longer than normal and runs continuously in degraded mode until the failed drive is replaced
Operating and Managing
Hitachi Storage with Hitachi
Command Suite v8.x
Hitachi Command Suite v8.x Overview



Hitachi Command Suite – New in HCS 8

 Support of VSP G1000


 Analytics tab — duration analyzable in minutes extended to 24 hours
 Import volume and pool labels from VSP and HUS VM
 Support of Virtual Storage Machines
 Improved GUI with additional options
 Native 64-bit application
 Creation of resource groups based on DP pool
 Analyze MP Blade wizard in the Analytics tab
 Improved health check report (pool busy rate / port metrics)


Hitachi Command
Suite Overview



HDS Storage Approach

 A common virtualized platform for all data, content and information

Unified Management

Structured Unstructured Semi-Structured Rich Media

Virtualized Infrastructure



Hitachi Command Suite

 Unified management and control


• Across all storage and compute platforms
• Across all data: file, block and content
• Across all functions and lines of business

Configure Analyze Mobilize Protect


Unified Management Framework
• Compute — Hitachi Blade Server
• Block — VSP G1000, VSP, HUS
• File — HNAS
• Unified — HUS VM
• Content — HCP
• Appliance — HDI
Hitachi Command Suite – Unified Management

 Unified Management — Enterprise and Modular Storage



Hitachi Command Suite – Unified Management

 Unified Management — HNAS



Hitachi Command Suite – Configure/Resources

 Allows a user to manage storage system volumes


 Provides a single platform for centrally managing, configuring and
monitoring Hitachi storage systems
 Helps raise storage management efficiency in these environments and
reduce costs
 Presents a logical view of storage resources while maintaining
independent physical management capabilities
 Allows administrators to precisely control all managed storage systems
 Helps automate entire storage environments



Hitachi Command Suite – Configure/Resources

 Storage operations
• Allocating volumes
• Unallocating volumes
• Creating volumes
• Virtualizing storage systems (Virtualize external storage systems/volumes)
• Virtualizing storage capacity (HDP pools)
 Managing storage resources
• Group management of storage resources (logical groups)
• Virtual Storage Machine Management
• Searching storage resources and outputting reports
 User management
 Security settings
Hitachi Command Suite – Mobility

 Manages data mobility across the data center, not just volumes or
pages within a storage ecosystem
 Allows you to place data when and where it is needed
 Provides customers with the unique ability to move data nondisruptively
across pools, volumes and storage arrays
 Works with Hitachi Dynamic Tiering to provide an efficient solution for
optimizing macro and micro data in and across storage pools and
volumes
 Available as MOBILITY tab on Command Suite GUI



Hitachi Command Suite – Analytics

 Hitachi Command Suite Analytics


• Integrated Hitachi Command Suite first aid assistance to identify whether the
problem is related to the storage system
• Integrated correlation wizard with filters to help isolate the problem area
• Enables quick identification and troubleshooting of performance bottlenecks



Hitachi Tuning Manager Overview

 Storage performance management


• Hitachi Tuning Manager (HTnM) is the Hitachi advanced storage
performance reporting application that maps, monitors and analyzes
network resource performance from the application to the storage logical
devices and also reports on the capacity used by the resources
 Provides
• Detailed storage performance reporting
• Custom storage reports and real time performance alerts
• Support for VMware virtual server environments
• Performance data to Hitachi Command Suite (Mobility) to create performance-metrics-based tiers
• Performance data to Hitachi Command Suite (Replication) to analyze replication performance
Hitachi Dynamic Link Manager Advanced

 Features
• Wide range of operating environments
▪ Supports path failover and I/O load balancing for IBM ® AIX®, Microsoft Windows®,
HP-UX, VMware vSphere and Linux operating systems
▪ Complements Microsoft Windows Multipath I/O (MPIO) environments through
added automation including failback and path load balancing
▪ Supports all Hitachi storage systems as well as storage from other vendors
• Fault tolerant path management
▪ Enables access to data on all Hitachi storage systems in both direct attached
storage (DAS) and SAN environments with path failover and I/O load balancing
over multiple HBA cards
▪ Lists path information for all paths or for each host, HBA port, storage subsystem
and storage port
▪ Optimizes application performance by controlling path bandwidth
Hitachi Dynamic Link Manager Advanced

 Features (continued)
• Group and event management
▪ Controls access to specific groups of Hitachi Dynamic Link Manager
hosts, based on user defined criteria, allowing administrators to create
customized management views
▪ Enhances troubleshooting capabilities as the location of path failures can
easily be pinpointed
▪ Integrates path failure alerts with common enterprise system management
platforms



Hitachi Command Director Overview

 Centralized management of business application policies and operations
 Monitors compliance with application-based storage service levels
 Improves capacity utilization and planning of Hitachi storage environments


Hitachi Command Director Addresses
Following Challenges
Global Dashboard — Storage Status Summary
Quickly check storage status for my data center and monitor any service level violations
 Review the global dashboard or the overall storage utilization summary report
 Near real time application status and service level monitoring
 Global reporting of defined thresholds and when they have been exceeded

Application Service Level Management
Assign service level objectives for my applications and investigate any service level violations
 Define service level objectives per application
 Enforce application service levels and storage tier policies
 Drill down service level violations to isolate and investigate bottlenecks

Create Business Views
[Figure: business applications are captured with "Business Operations" information — geography (USA, UK, Japan) and function (Marketing, Sales) — assigned pre-defined "Business Ops. Groupings", and "Business Views" are then generated automatically by geography and function.]

Business View of Utilization
Organize my storage assets to support the following business use cases:
 Align mission critical business applications to tier 1 storage assets
 Increase and optimize capacity utilization

Capacity Management
View and analyze historical utilization trends for the following activities:
 Identify underused storage capacity
 Determine optimal deployment for new application workloads
 Properly plan future storage
Hitachi Command Director — Central HCS
Reporting and Operations

[Figure: Hitachi Command Director sits on the Hitachi common data reporting model, drawing on Hitachi Device Manager, Hitachi Tuning Manager and Hitachi Tiered Storage Manager.]
Hitachi Command Director

 Merges storage performance data from multiple instances of Hitachi Tuning Manager
 Merges storage configuration data from multiple instances of Hitachi Device Manager
 Merges storage tier data from Hitachi Tiered Storage Manager (optional)
Hitachi Command Suite v8.x — Storage
Management Redefined
Customer experience with HCS v8:

1. Easier to Manage and Maintain
 Unified management across multiple storage systems
 Single DVD installer
 Host agent and agent-less approaches
 New software licensing models

2. Hitachi Command Suite Integration
 New GUIs and common interfaces
 Task management with scheduling for multi-thread operations
 More data sharing and synchronization by combining configuration and storage tier information

3. Enhanced Usability and Workflow
 Usability enhancements for both novice and expert users
 Integrated use case wizards with best practice defaults

4. Automated Tiered Storage Management
 Page based tiered storage management
 Complements existing volume based storage tier migration capabilities

5. Improved Scalability and Performance
 Significantly more resources under management
 Reduced time for common tasks
 Improved CLI performance

6. Private Cloud (Block) Enabled Storage Management
 Components for delivering block storage as a service within a private cloud


Easier to Manage and Maintain — Unified
Storage Management

[Figure: Hitachi Command Suite 8, with a common GUI and command line interface, replaces the separate element managers — Storage Navigator (SN) for Universal Storage Platform, Universal Storage Platform V and the Virtual Storage Platform family, and Storage Navigator 2 (SN2) / Storage Navigator Modular 2 (SNM2) for Adaptive Modular Storage — and also covers file and content storage platforms.]
 Single management tool for all Hitachi storage systems and virtualized storage environments
 Common GUI and CLI — no need to switch to element managers for everyday storage management
tasks



Operating and Managing
Hitachi Storage with Hitachi
Command Suite v8.x
Storage Operations



Storage
Provisioning



Storage Provisioning Operations

 Involves configuring logical volumes within the storage system before


they can be presented to hosts
 Initially, the customer engineer (CE) configures internal volumes when
the storage system is installed at the customer location
 Customer can also configure new volumes as needed
 Different types of volumes can be configured and allocated to hosts
based on:
▪ Performance requirements
▪ Cost requirements
▪ Availability requirements



Storage Architecture Overview

1. Physical devices (PDEV)
2. PDEVs are grouped together with a RAID type: RAID-10, RAID-5, RAID-6
3. The result is a parity group (RAID group)
4. Emulation specifies smaller logical unit sizes
5. Logical devices (LDEV)
6. Addresses are assigned in LDKC:CU:LDEV format:
   00:00:00
   00:00:01
   00:00:02
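An LDKC:CU:LDEV address is three hex bytes; the sequential addresses above can be generated with a small sketch (the helper name and the flat-number encoding are illustrative assumptions, not a product API):

```python
# Sketch: format a flat LDEV number as LDKC:CU:LDEV (three hex bytes).
def ldev_address(n):
    ldkc = (n >> 16) & 0xFF   # logical DKC
    cu   = (n >> 8)  & 0xFF   # control unit
    dev  = n         & 0xFF   # device within the CU
    return f"{ldkc:02X}:{cu:02X}:{dev:02X}"

print(ldev_address(0))      # 00:00:00
print(ldev_address(2))      # 00:00:02
print(ldev_address(0x12A))  # 00:01:2A
```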



Logical Disk Structures

 Parity groups
• Set of 4, 8 or 16 physical devices grouped together based on RAID type
• RAID types supported:
▪ RAID-10: 2+2 and 4+4
▪ RAID-5: 3+1 and 7+1
▪ RAID-6: 6+2 and 14+2
• Set up by CE when disks are added
• The customer cannot change the RAID type
• Parity group is further carved into smaller storage units called LDEVs or
Logical Devices
▪ Also known as internal volumes
• Other names — RAID groups, array groups, ECC groups



Logical Disk Structures

 Parity group addressing

• Element Manager for Enterprise (Storage Navigator)


▪ Format: [B4]-[HDD Location]
▪ Example: 1-1, E5-1, X2-10, V4-10

• Element Manager for Modular (Storage Navigator Modular 2)


▪ Format: [Number]
▪ Example: 1, 5, 2



Logical Disk Structures

 Logical devices
• Internal (volumes)
• External (external volumes)
• Internal virtualized (DP volumes)



Logical Disk Structures

 Parity groups → internal volumes (LDEVs)

• Physical devices with the same attributes (type, size, speed) are grouped by RAID type into parity groups, which are carved into volumes by emulation
• RAID types — RAID-10: 2+2, 4+4; RAID-5: 3+1, 7+1; RAID-6: 6+2, 14+2
• Emulation Open-V — min 46MB, max 2.99TB, max 65K LDEVs


Logical Disk Structures

 Parity groups (external)

• External volumes are virtualized through external parity groups, which have no physical capacity and no RAID type in the local system
• Emulation is specified as Open-V; mapping is held in control memory
• External volumes — min 46MB, max 4TB
Logical Disk Structures

 Parity groups → internal virtualized (DP volumes)

• Parity groups or volumes are aggregated into a DP pool; DP volumes are carved from the pool
• DP pools — aggregated physical capacity; one or more volumes ≥ 8GB; capacity allocated in 42MB pages; mapping in control memory
• DP volumes — emulation Open-V; min 46MB, max 60TB
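Because pool capacity is allocated in 42MB pages, a DP volume's physical footprint tracks what is actually written rather than its provisioned size; a sketch (the helper name is illustrative):

```python
import math

# Sketch: physical pages consumed by a DP volume's written data.
PAGE_MB = 42   # DP pool allocation unit from the slide

def pages_needed(written_mb):
    """Number of 42MB pages backing the written capacity."""
    return math.ceil(written_mb / PAGE_MB)

# A DP volume with 10GB actually written consumes:
print(pages_needed(10 * 1024), "pages")
```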
Storage Operations –
Creating Dynamic
Provisioning (DP) Volumes



Dynamic/Thin Provisioning Overview

 Dynamic Provisioning allows storage to be allocated to an


application without being physically mapped until it is actually used
 This as-needed method means storage allocations can exceed the
amount of storage that is physically installed
 Physical storage capacity can be added without application service
interruption
 Dynamic Provisioning provides the following benefits:
• Reduces initial installation costs because you purchase only required
physical disk capacity at the start
• Decreases management expenses and idle time caused by changing
configuration of both the storage system and host



Dynamic/Thin Provisioning Volumes

 Easier management
• Simpler planning (reactive → strategic management)
• Avoid or defer tough decisions about LDEV size
• Control, change and optimize: capacity, performance, reliability, cost tier
 Naturally balances performance
• With large pools, there are more spindles than with static provisioning
• May scale performance by growing pool
 Over-provisioning capacity
• Simplify capacity planning
• Reduce need for urgent change
• Reduce need for this decision:
“Disk X on server Y has run out. We’ll have to reallocate the whole estate.”
Dynamic/Thin Provisioning Volumes

 Wide striping and storage performance


• Dynamic Provisioning software eliminates the need for outside experts to fine-tune application I/O performance
• Using wide-striping techniques, the storage system automatically spreads the I/O load of all applications accessing a common pool of storage across the available spindles
• This process eliminates hot spots and optimizes I/O response times, leading to consistently high application performance



Dynamic/Thin Provisioning Volumes

 Static provisioning

Storage Admin Server Admin


Total – 100TB Total – 50TB
Allocated – 50TB In Use – 25TB
Utilization – 50% Utilization – 50%

Actual Storage Utilization = 25%


Dynamic/Thin Provisioning Volumes

 Dynamic Provisioning
1. Create pool: a pool of physical volumes
2. Create DP VOLs: virtual volumes; physical capacity is allocated from the pool in 42MB pages (as and when needed)
3. Assign VOLs to hosts
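The 42MB page allocation described above can be sketched in a few lines. This is a minimal illustration of the arithmetic only, not Hitachi code; the function names are ours:

```python
import math

PAGE_MB = 42  # HDP allocates physical capacity in fixed 42MB pages

def pages_needed(written_mb: float) -> int:
    """Number of 42MB pool pages consumed by the data written to a DP-VOL."""
    return math.ceil(written_mb / PAGE_MB)

def pool_consumed_mb(written_mb: float) -> int:
    """Physical pool capacity actually consumed, rounded up to whole pages."""
    return pages_needed(written_mb) * PAGE_MB

# A host writes 100MB to a freshly created DP-VOL: only 3 pages (126MB)
# are drawn from the pool, regardless of the DP-VOL's virtual size.
print(pages_needed(100), pool_consumed_mb(100))  # → 3 126
```

Note that pool consumption tracks what is written, not the virtual size presented to the host.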
Dynamic/Thin Provisioning Volumes

 Dynamic Provisioning (continued)

Storage Admin
Server Admin
Pool — 100TB
Allocated — 500TB
Pool Used — 25TB
Used — 25TB
Utilization — 25%
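The contrast between the static and dynamic provisioning slides comes down to two ratios. A small sketch (our own helper names, using the example figures from the slides):

```python
def utilization_pct(used_tb: float, capacity_tb: float) -> float:
    """Percentage of physical capacity actually consumed."""
    return 100 * used_tb / capacity_tb

def subscription_pct(allocated_tb: float, pool_capacity_tb: float) -> float:
    """Virtual capacity promised to hosts as a percentage of physical pool capacity."""
    return 100 * allocated_tb / pool_capacity_tb

# Static provisioning slide: 50TB of 100TB allocated, hosts use 25TB of it.
print(utilization_pct(25, 100))    # actual storage utilization: 25.0%

# Dynamic Provisioning slide: 100TB pool, 500TB allocated, 25TB used.
print(subscription_pct(500, 100))  # 500.0% over-subscription is allowed
print(utilization_pct(25, 100))    # pool utilization is still 25.0%
```

With Dynamic Provisioning the same 25% utilization supports ten times the allocated capacity, because allocation no longer reserves physical disk.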
Dynamic/Thin Provisioning Volumes

 Capacity monitoring
• Pool thresholds (utilized capacity versus pool capacity)
▪ SIM/SNMP notification in case pool utilization exceeds the thresholds
▪ Threshold 1 (System threshold): 1% to 100% (variable on VSP/AMS/HUS,
fixed on others)
▪ Threshold 2 (Usage rate threshold): 1% to 100% (variable)
▪ Subscription threshold:
• Subscription limit as a percentage of pool size
• 0% - 65534%
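The threshold comparisons above can be sketched as follows. The threshold names and default percentages here are illustrative assumptions; on the array these checks run in firmware and raise SIM/SNMP events:

```python
def pool_alerts(used_pct, allocated_pct,
                threshold1=70, threshold2=80, subscription_limit=500):
    """Return the notifications a pool would raise.
    threshold1/threshold2/subscription_limit are assumed example values."""
    alerts = []
    if used_pct > threshold1:
        alerts.append("Threshold 1 (system) exceeded")
    if used_pct > threshold2:
        alerts.append("Threshold 2 (usage rate) exceeded")
    if allocated_pct > subscription_limit:
        alerts.append("Subscription limit exceeded")
    return alerts

# A pool 85% full and 300% subscribed trips both utilization thresholds:
print(pool_alerts(85, 300))
```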



Dynamic/Thin Provisioning Volumes

 Steps for creating Dynamic/Thin Provisioning volumes


• Create pools
• Create dynamic/thin provisioning volumes (create volumes)

Create a Create Pool task

Select Parity Groups

Advanced Options

Completed task creation



Dynamic/Thin Provisioning Volumes

 Prerequisites
• Hitachi Dynamic Provisioning license on storage system
• Volumes to be added to a pool must not be in use (by a host or any other
software)
• Pool volume size: 8GB to 4TB
• Pool volumes per pool: 1,024 max
• Number of pools: 128 max
• Up to 63,232 V-VOL per pool
• Page allocation unit: 42MB
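The prerequisite limits above can be checked up front when planning a pool. This is a sanity-check sketch only; the array itself enforces these limits, and the function name is ours:

```python
GB = 1
TB = 1024  # capacities below are expressed in GB

def validate_pool(volume_sizes_gb, pool_count, vvol_count):
    """Check a planned HDP pool against the published limits:
    8GB-4TB pool volumes, <=1,024 volumes per pool, <=128 pools,
    <=63,232 V-VOLs per pool. Returns a list of violations."""
    errors = []
    if not all(8 * GB <= s <= 4 * TB for s in volume_sizes_gb):
        errors.append("pool volume outside 8GB-4TB range")
    if len(volume_sizes_gb) > 1024:
        errors.append("more than 1,024 pool volumes")
    if pool_count > 128:
        errors.append("more than 128 pools")
    if vvol_count > 63232:
        errors.append("more than 63,232 V-VOLs in pool")
    return errors

# Three pool volumes of 100GB, 500GB, and 2TB pass all checks:
print(validate_pool([100, 500, 2048], pool_count=4, vvol_count=1000))  # → []
```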



Dynamic/Thin Provisioning Volumes

 Launch the Create Pool dialog from General Tasks



Dynamic/Thin Provisioning Volumes

 From the Create Pool dialog, launch the Add Parity Groups dialog
to add parity groups to the pool



Dynamic/Thin Provisioning Volumes

 Create Pool — specify pool attributes



Dynamic/Thin Provisioning Volumes

 Using pools
• Create Thin/Dynamic Provisioning volumes using the Create Volumes
task



Dynamic/Thin Provisioning Volumes

 Expand a pool
• Allows you to add parity groups to the pool
• All unallocated capacity on the parity group is added to the pool



Dynamic/Thin Provisioning Volumes

 DP VOLs Operations
• Expand DP Volume (VSP G1000, VSP and HUS-VM only)



Dynamic/Thin Provisioning Volumes

File Systems and HDP

 Host                             File System   FS metadata written at FS creation   HDP improves capacity efficiency?
 HP-UX                            JFS (VxFS)    Top only                             Yes
 HP-UX                            HFS           Every 10MB                           No
 Microsoft® Windows® Server 2003  NTFS          Top only                             Yes
 VMware                           VMFS          Top only                             Yes
 Linux                            XFS           Every 2GB                            Yes
 Linux                            Ext2, Ext3    Every 128MB                          Yes (#2)
 Sun Solaris                      UFS           Every 52MB                           No
 Sun Solaris                      VxFS          Top only                             Yes
 Sun Solaris                      ZFS           According to Sun, it is efficient    Yes
 IBM AIX®                         JFS           Every 8MB (#1)                       No
 IBM AIX®                         JFS2          Top only                             Yes
 IBM AIX®                         VxFS          Top only                             Yes

#1 Although metadata spacing can be increased to 64MB by changing the Allocation Group Size setting, the pool’s capacity is still consumed by 65% of the DP-VOL’s capacity, resulting in minimal capacity efficiency.
#2 At FS creation, the pool’s capacity is consumed by 30% of the DP-VOL’s capacity.
Dynamic/Thin Provisioning Volumes

 Common or separate pools


• Advantage of larger pools:
▪ There is more opportunity for smoothing out workload differences
• In general, the bigger the pool, the better
• Tests have been done on a Microsoft® Exchange database with separate
pools for log/data and a common pool
▪ Overall reduction in access time was observed for common pool
• Possible disadvantage of large pools with multiple workloads:
▪ You cannot prevent one workload from stealing all performance



Dynamic/Thin Provisioning Volumes

 Guidelines for usage — MS Exchange


• Tests have been done on an Exchange database with separate pools for
log/data and a common pool
▪ Overall reduction in access time was observed for common pool
• You should, however, put database and log files on different virtual volumes
or V-VOLs so that cache algorithms can schedule for different
random/sequential characteristics
• Exchange 2003 and 2007 similar:
▪ Good for mail stores (.mdb)
▪ Storage group log files (.log)



Dynamic/Thin Provisioning Volumes

 Guidelines for usage — VMware


• VMware is a popular choice with Hitachi Dynamic Provisioning
• VMFS reclaims space efficiently (Most Recently Used space allocation)
• Can leverage over-provisioning at VMware level (and where appropriate at
client OS level)
• Most important thing is to avoid putting too many guests on the same LUN,
to limit issues with SCSI reserve contention
• Where 1 DP-VOL = 1 LUN = 1 VMFS in VMware:
▪ Recommendation is 5 guests per LUN, and no more than 10



Dynamic/Thin Provisioning Volumes

 Guidelines for usage — VMware (continued)


• Thin friendliness
▪ When you add a LUN (DP-VOL) to VMware control, you either create a
VMFS on it or you define it as an RDM
• VMFS is thin friendly
▪ Only writes metadata at the top
▪ RDM writes nothing on the DP-VOL
• After creating VMFS, you create a virtual disk for each guest OS



Instructor Demonstration – Storage Operations:
Create DP Pool, Create DP Volumes
 Storage Operations
• Create DP Pool
• Create DP Volumes



Hitachi Dynamic
Tiering Overview



Hitachi Dynamic Tiering Overview
 On Virtual Storage Platform G1000, a Dynamic Tiering volume draws its pages from a virtual storage pool divided into tiers by a data heat index:
• Tier 1: high activity set
• Tier 2: normal working set
• Tier 3: quiet data set
 Least-referenced pages migrate toward the lower tiers


Hitachi Dynamic Tiering Builds on Dynamic Provisioning

 Dynamic Tiering layers on top of Dynamic Provisioning, adding:
• All the benefits of Dynamic Provisioning
• Further simplified management
• Further reduced OPEX
• Better performance


Hitachi Dynamic Tiering Specifications

 Requires a Dynamic Tiering license key installed on the storage system


 Up to a maximum of 3 tiers in a pool
• Most often an SSD-SAS-SATA-external hierarchical model
▪ Pool’s tiers are defined by HDD type and RPM
• External storage supported
• Capacity of any tier can be added to the pool
• Pool shrinkage (remove volumes from pool) is supported
 Tier management
• Fills top tiers as much as possible
• Monitors I/O references
• Adjusts page placement according to trailing 24-hour heat map cyclically (adjustable from 30
minutes to 24 hours)
• Automatic or manual controls available
• Tier management (migration up and down tier) is automatic and built into system firmware
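The "fill top tiers as much as possible" behavior above can be illustrated with a toy placement routine. This is our own sketch of the idea, not array firmware; the real engine works on a 24-hour heat map of 42MB pages:

```python
def place_pages(page_heat, tier_capacities):
    """Assign pages to tiers, hottest pages to the highest tier first.
    page_heat: {page_id: I/Os counted over the monitoring cycle}
    tier_capacities: pages each tier can hold, highest tier first.
    Returns {page_id: tier_index} (0 = highest tier)."""
    placement = {}
    ordered = sorted(page_heat, key=page_heat.get, reverse=True)
    tier, used = 0, 0
    for page in ordered:
        # Move down a tier once the current one is full
        while tier < len(tier_capacities) and used >= tier_capacities[tier]:
            tier, used = tier + 1, 0
        placement[page] = min(tier, len(tier_capacities) - 1)
        used += 1
    return placement

heat = {"p1": 900, "p2": 20, "p3": 500, "p4": 1}
# The two hottest pages (p1, p3) land in tier 0; the rest fall to tier 1.
print(place_pages(heat, tier_capacities=[2, 2]))
```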



Hitachi Dynamic Tiering Specifications

 Each pool tier has calculated sustainable workload level


• Average I/O level measurement for pages located in tier is targeted to not
exceed calculated sustainable I/O level
▪ Some pages may not be moved up-tier if predicted workload level is too
high
• Tier workload level would normally not be a factor
▪ Most used pages would be located in highest tiers
 Page size is 42MB — not adjustable
 HDT in mainframe environment supported



Hitachi Dynamic Tiering Operations
 HDT operations with HCS

Initial setup:
• Create HDT pool: set tiers and capacity; set monitoring/relocation schedule; set threshold for capacity monitoring
• Allocate DP volume
• Create tiering policy; apply tiering policy per DP volume; Data Placement Profile setting

Check:
‒ Monitor pool capacity
‒ Monitor DP volume capacity
‒ Monitor performance (HTnM)
‒ Monitor SLO (HCmD)
‒ View Tier Properties

Act:
‒ Expand pool / expand DP volume
‒ Edit HDT pool
‒ Change monitoring/relocation schedule
‒ Change buffer space
‒ Change tiering policy per DP volume
‒ Enable/disable relocation per DP volume
‒ Change the setting of Data Placement Profile


Hitachi Dynamic Tiering Operations

 HCS allows you to enable HDT while creating a new pool



Hitachi Dynamic Tiering Operations

 Add Parity Groups to HDT pool



Hitachi Dynamic Tiering Operations

 Create Pool — HDT Options



Hitachi Dynamic Tiering Operations

 Expand Pool dialog — Add Capacity to a Tier



Hitachi Dynamic Tiering Operations

 Delete Pools



Hitachi Dynamic Tiering Operations

HDT versus HDP Pool Detail

HDT

HDP



Hitachi Dynamic Tiering Operations

H/W Tier Tab on HDT Pool Detail


 Tier definition, tier utilization (Used %)



Monitor HDT Pool and Tier Usage

 Select HDT Pool > More Actions > View Tier Properties (HCS v7.5)

Note: You can also launch it from Mobility Tab


Monitor HDT Pool and Tier Usage

 Tier properties screen


• Displayed information is based on the most recent monitoring period



Creating new HDT Volumes

 Create Volume



Hitachi Dynamic
Tiering Concepts and
Operations
Understanding Tier Properties



Module Objectives

 Upon completion of this module, you should be able to:


– Navigate to and display Tier Properties for the HDT pool and virtual volumes
using Storage Navigator 2 for the Hitachi VSP
– Describe the information displayed in a Tier Properties view
– Explain when Tier Properties data is generated and updated
– Interpret the status of an HDT Pool using the Tier Properties data
– Describe the differences between the information presented in the Tier
Properties table and Performance Graph
– Describe the meaning and purpose of Tier Range(s)
– Describe conditions when the Performance Graph is not displayed



Accessing the HDT Pool Tier Properties View

HDT Pool — Tier Properties — No Performance Graph

No performance graph is displayed



Tier Properties — Where the Data Is

 Identification of the object for which Tier Properties are shown


 Type of storage that supports the tier
 Total and Used % capacity of each tier
 Performance Utilization % of the tier
Tier Properties Not displayed — DP-Only Pool



Tier Properties — HDT Pool with Only One Tier



HDT Pool Tier Properties With Performance Graph

(Monitoring period shown on graph)
Tier Ranges — Where the Data SHOULD Be

Where the data “IS”

Where the data “SHOULD BE”



Viewing Tier Properties Data Using CLI



Tier Properties After a Relocation



Used Capacity Comparison Between Tier Properties Table and Performance Graph
 Explaining differences in reported total data in the pool
• The Tier Properties table reports 11.76GB used capacity in the pool
• The Performance Graph reports only 7GB used capacity in the pool
Tier Properties Display for Virtual Volume (V-VOL)



Tier Properties — What Makes It Work?

(Graph: average number of I/Os per hour versus capacity)


Tier Range — Tier Capacity

(Graph: average number of I/Os per hour versus capacity)

Tier Range: the intersection of tier capacity and the IOPH graph
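The intersection idea above can be expressed numerically: with pages sorted hottest-first, the tier range is the I/O-per-hour value at the point where cumulative capacity crosses the tier's capacity boundary. An illustrative calculation (names are ours, not Hitachi's):

```python
def tier_ranges(page_ioph_hottest_first, tier_capacities):
    """page_ioph_hottest_first: IOPH per page, sorted descending.
    tier_capacities: number of pages per tier, highest tier first.
    Returns the IOPH boundary value for each tier except the last
    (the lowest tier takes whatever remains)."""
    boundaries = []
    index = 0
    for cap in tier_capacities[:-1]:
        index += cap
        # The coldest page that still fits in this tier sets the boundary
        boundaries.append(page_ioph_hottest_first[index - 1])
    return boundaries

ioph = [500, 400, 300, 120, 80, 10, 5, 1]
print(tier_ranges(ioph, [2, 3, 3]))  # → [400, 80]
```

A busier monitoring session shifts the whole IOPH curve up, which moves these boundaries, matching the "tier range values vary over time" slide.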


Targeted Amount of Data Per Tier
(Graph: average number of I/Os per hour versus capacity)
Tier Range Values Vary Over Time

Monitoring Session 1: lower IOPH, higher Tier Range
Monitoring Session 2: higher IOPH, lower Tier Range



“Synthetic” Workloads Do Not Best Represent HDT

 While this Tier Properties example appears to “have everything,” it is actually a very poor example of HDT:
• The tier’s performance utilization is negligible
• I/O rates are negligible overall
• The graph sections are flat


Customer Production Example



Tier Range Information from the raidcom Command

 raidcom get dp_pool -key opt command



Adding or Removing a Tier Invalidates Statistics

• Change HDT pool structure by expanding the pool:
▪ Add a pool volume to an existing tier: existing valid monitoring data is retained
▪ Add a pool volume resulting in a new tier: existing valid monitoring data is set to Invalid
“Overhead” I/O is Not Counted in Monitoring

 I/Os that do not get counted:


– ‘Overhead’ I/Os (from format, relocate, pool rebalance, etc.)
 Pages that do not get counted:
– ‘New’ pages allocated to a DP volume still in their initial partial cycle
– Reclaimed pages
 DP volumes that do not get counted:
– Deleted DP volumes
– DP volumes disabled from relocation (if set using RAIDCOM)
– Initial Copy S-VOLs in their first partial cycle
– Hitachi ShadowImage® and Volume Migrations in their first partial
cycle
 Cycles that do not count:
Virtual Storage
Machine Overview



Virtual Storage Machine Overview

 A Virtual Storage Machine is an abstraction of hardware devices, defined by virtual storage software to realize Global Storage Virtualization
 Global Storage Virtualization provides continuous access to enterprise applications by abstracting physical storage
 VSP G1000 provides the hardware platform; Hitachi Command Suite v8 provides integrated management of Virtual Storage Machines with operational efficiency

(Diagram: an application accesses a Virtual Storage Machine on the VSP G1000; HCS v8 supplies integrated management, while Global Storage Virtualization abstracts and virtualizes heterogeneous storage capacity)


Virtual Storage Machine Overview

The allocated volume is recognized by the host as [Model, S/N, LDEV Num] = [VSP, 46011, 00:10:00]:
• Virtual model: VSP
• Virtual S/N: 46011
• Virtual name: HDS001

(Diagram: the host connects through a host group to the virtual storage machine, which presents virtual LDEV 00:10:00 backed by physical LDEV 00:22:10 from the physical resources of the VSP G1000)

There is no impact to the host when changing physical volumes


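The virtual-to-physical decoupling on this slide can be sketched as a simple mapping. This is our own data structure to illustrate the concept, not HCS or array code:

```python
class VirtualStorageMachine:
    """The host-visible identity [model, serial, virtual LDEV] stays fixed,
    while the backing physical LDEV can be changed without host impact."""

    def __init__(self, model: str, serial: int):
        self.identity = (model, serial)
        self.map = {}  # virtual LDEV -> physical LDEV

    def bind(self, virtual_ldev: str, physical_ldev: str) -> None:
        self.map[virtual_ldev] = physical_ldev

    def resolve(self, virtual_ldev: str) -> str:
        return self.map[virtual_ldev]

vsm = VirtualStorageMachine("VSP", 46011)
vsm.bind("00:10:00", "00:22:10")
print(vsm.resolve("00:10:00"))   # → 00:22:10

# Migration: rebind to a new physical LDEV; the host still addresses
# the same virtual identity [VSP, 46011, 00:10:00].
vsm.bind("00:10:00", "00:30:05")
print(vsm.resolve("00:10:00"))   # → 00:30:05
```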
Virtual Storage Machine Overview

 Resources assigned

Resource               Description
Storage Systems        Specify a physical storage system from among the VSP G1000 systems discovered in HCS. The virtual storage machine is created on the specified storage system.
Parity Groups          Specify existing parity groups on the selected storage system. This serves the same purpose as adding parity groups to a resource group for access control. The user who manages this virtual storage machine can create new volumes from the parity groups.
LDEV IDs               Specify the LDEVs that can be used in the virtual storage machine. You can specify LDEVs already created in the storage system, or reserve LDEV IDs (physical LDEV IDs) to be used by the virtual storage machine.
Storage Ports          Specify existing ports on the selected storage system. This serves the same purpose as adding storage ports to a resource group for access control. The user who manages this virtual storage machine can use the ports when allocating volumes.
Host Group Numbers     Specify the host groups that can be used in the virtual storage machine. You can specify unused host groups already created in the storage system, or specify the number of host groups to be used by the virtual storage machine per port.


Virtual Storage Machine Benefits

 A Virtual Storage Machine provides the following benefits to the customer's storage management:
• Mobility: non-disruptive migration
• Availability: global-active device


Virtual Storage Machine Benefits – Mobility

 Users gain the benefits of a new model through hardware replacement
 Because the virtual ID does not change during migration, users do not need to stop their daily operations

(Diagram: a host accesses the same virtual storage machine before and after migration from 2010-era hardware to a brand-new 2014 VSP G1000, gaining performance, scalability, and power savings)
Virtual Storage Machine Benefits – Global Active Device

 The host sees only a single virtual volume (00:10:00), whereas two physical volumes support it:
• Virtual model: VSP G1000
• Virtual LDEV number: 00:10:00
• Virtual S/N: 66011
• Virtual name: HDS001

(Diagram: the virtual storage machine spans two VSP G1000 systems; physical volumes 10:00:01 and 20:00:01 form the global-active device volume behind virtual volume 00:10:00)
