
Architecting Storage Performance with Hitachi Storage

TSI1945 / TSE1945

For HDS internal use only. This document is not to be used for instructor-led
training without written approval from geo Academy leaders. In addition,
this document should not be used in place of HDS maintenance manuals
and/or user guides.

Courseware Version 3.0


Notice: This document is for informational purposes only, and does not set forth any warranty, express or
implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This
document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data
Systems being in effect, and that may be configuration-dependent, and features that may not be currently
available. Contact your local Hitachi Data Systems sales office for information on feature and product
availability.
Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited
warranties. To see a copy of these terms and conditions prior to purchase or license, please call your local
sales representative to obtain a printed copy. If you purchase or license the product, you are deemed to have
accepted these terms and conditions.
THE INFORMATION CONTAINED IN THIS MANUAL IS DISTRIBUTED ON AN "AS IS" BASIS
WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NONINFRINGEMENT. IN NO EVENT WILL HDS BE LIABLE TO THE END USER OR ANY THIRD PARTY
FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS MANUAL, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, GOODWILL OR LOST DATA,
EVEN IF HDS IS EXPRESSLY ADVISED OF SUCH LOSS OR DAMAGE.
Hitachi Data Systems is registered with the U.S. Patent and Trademark Office as a trademark and service
mark of Hitachi, Ltd. The Hitachi Data Systems logotype is a trademark and service mark of Hitachi, Ltd.
The following terms are trademarks or service marks of Hitachi Data Systems Corporation in the United
States and/or other countries:
Hitachi Data Systems Registered Trademarks
Hi-Track, ShadowImage, TrueCopy, Essential NAS Platform, Universal Storage Platform

Hitachi Data Systems Trademarks


HiCard, HiPass, Hi-PER Architecture, HiReturn, Hi-Star, iLAB, NanoCopy, Resource Manager, SplitSecond,
TrueNorth, Universal Star Network

All other trademarks, trade names, and service marks used herein are the rightful property of their respective
owners.
NOTICE:
Notational conventions: 1KB stands for 1,024 bytes, 1MB for 1,024 kilobytes, 1GB for 1,024 megabytes, and
1TB for 1,024 gigabytes, as is consistent with IEC (International Electrotechnical Commission) standards for
prefixes for binary and metric multiples.
© Hitachi Data Systems Corporation 2012. All Rights Reserved
HDS Academy 1062

Contact Hitachi Data Systems at www.hds.com.



Contents
INTRODUCTION ..............................................................................................................XI
Welcome and Introductions ........................................................................................................xi
Course Description .................................................................................................................... xii
Prerequisites ............................................................................................................................. xiii
Course Objectives ..................................................................................................................... xiv
Course Topics ............................................................................................................................xv
Learning Paths .......................................................................................................................... xvi
Collaborate and Share ............................................................................................................. xvii
HDS Academy Is on Twitter and LinkedIn .............................................................................. xviii
Social Networking — Academy’s Twitter Site ........................................................................... xix
Social Networking — Academy’s Open LinkedIn Site ...............................................................xx
1. BASICS REVIEW ..................................................................................................... 1-1
Module Objectives ................................................................................................................... 1-1
What Is Performance? ............................................................................................................. 1-2
Disk and RAID ......................................................................................................................... 1-3
Hard Disk Drive Structure and Components............................................................................ 1-3
Bottom Line on Drive Cost ....................................................................................................... 1-4
HDS RAID Levels .................................................................................................................... 1-5
Mechanical Latency and Random Disk IOPS .......................................................................... 1-6
RAID-1 ..................................................................................................................................... 1-7
RAID-1 and RAID-1+0 ............................................................................................................. 1-8
RAID Terminology.................................................................................................................... 1-9
RAID-1+0 ............................................................................................................................... 1-10
RAID Terminology.................................................................................................................. 1-11
RAID-5 ................................................................................................................................... 1-12
RAID Terminology.................................................................................................................. 1-17
RAID-6 ................................................................................................................................... 1-18
Understanding the Impact of the RAID Levels ...................................................................... 1-19
Estimating Disk (HDD) and RAID IOPS Performance ........................................................... 1-20
SATA Specifics ...................................................................................................................... 1-21
Module Summary ................................................................................................................... 1-22
Module Review ...................................................................................................................... 1-23
2. ENTERPRISE STORAGE ARCHITECTURE.................................................................... 2-1
Module Objectives ................................................................................................................... 2-1
Enterprise Storage Architecture — Hitachi Virtual Storage Platform ...................................... 2-2
Hitachi Virtual Storage Platform Full Configuration — 6 Rack ........................................ 2-2
DKC Overview — Logic Box ............................................................................................ 2-4
Front-End Directors .......................................................................................................... 2-5
Data Cache Adapters ....................................................................................................... 2-7
Virtual Storage Platform Memory Organization ............................................................... 2-9
Back-End Directors ........................................................................................................ 2-10
Back-End Director and Front-End Director Pairs ........................................................... 2-13
Virtual Storage Directors ................................................................................................ 2-14
Grid Switches ................................................................................................................. 2-16
VSP Chassis Bandwidth Overview ................................................................................ 2-18
DKU/HDU Overview ....................................................................................................... 2-19
BED SAS Links to DKU/HDU ......................................................................................... 2-23
Disk Types and Limits .................................................................................................... 2-25
SSD Drives ..................................................................................................................... 2-26
SAS Disks ...................................................................................................................... 2-27
SATA Disks .................................................................................................................... 2-28
Installable Disk Features ................................................................................................ 2-29
Enterprise Storage Architecture ............................................................................................. 2-30
Overview ........................................................................................................................ 2-30


Hitachi Universal Storage Platform V Architecture ........................................................ 2-32


Hitachi Universal Storage Platform VM Architecture ..................................................... 2-33
Universal Storage Platform V FED and MP Distributed I/O .......................................... 2-34
FED Storage Processor Sharing ................................................................................... 2-35
Universal Storage Platform V BED ................................................................................ 2-36
Switched Back End ........................................................................................................ 2-37
Universal Storage Platform V BED and RAID Group Layout ........................................ 2-38
Random I/O Cache Structures ...................................................................................... 2-39
Architecture — Storage ................................................................................................. 2-40
RAID-5 Mechanism, Open-V ......................................................................................... 2-41
RAID-1+0 Mechanism, Open-V ....................................................................... 2-42
RAID-6 Mechanism, Open Systems .............................................................................. 2-43
RAID Overview .............................................................................................................. 2-44
RAID Write Penalty ........................................................................................................ 2-45
Module Summary .................................................................................................................. 2-46
Module Review ...................................................................................................................... 2-47

3. MODULAR STORAGE ARCHITECTURE — PART 1 ....................................................... 3-1


Module Objectives ................................................................................................................... 3-1
Hitachi Unified Storage Overview............................................................................................ 3-2
Hitachi Unified Storage 100 Family ................................................................................. 3-2
Unified Storage Block Module Specs .............................................................................. 3-3
Hitachi Unified Storage Architecture ....................................................................................... 3-4
Logical Architecture Overview ......................................................................................... 3-4
Dynamic Virtual Controller Overview....................................................................................... 3-5
Dynamic Virtual Controller Introduction ........................................................................... 3-5
LUN Management ........................................................................................................... 3-7
Hardware Load Balancing ....................................................................................................... 3-8
Hardware Load Balancing Overview ............................................................................... 3-8
Hitachi Unified Storage Cache Management ........................................................................ 3-10
Cache Memory and Host I/O ......................................................................................... 3-10
Default Layout (Dynamic Provisioning not Installed) ..................................................... 3-11
Memory Management Layer.......................................................................................... 3-12
Default Layout - Dynamic Provisioning Installed ........................................................... 3-14
Cache Usage with a Controller Failure .......................................................................... 3-15
Cache Layout Using Cache Partition Manager ............................................................. 3-16
Hitachi Unified Storage Back End ......................................................................................... 3-17
SAS Engine Overview ................................................................................................... 3-17
External Disk Tray — LFF SAS 12-Disk ........................................................................ 3-19
External Disk Tray — SFF SAS, SSD 24-Disk .............................................................. 3-20
External Disk Drawer — LFF SAS High Density ........................................................... 3-21
Hitachi Unified Storage Models Overview ............................................................................. 3-22
HUS 110 (Block Module) Architecture ........................................................................... 3-22
External Disk Tray Attachment — HUS 110.................................................................. 3-23
HUS 110 Configurations ................................................................................................ 3-24
HUS 110 (Block Module) Overview ............................................................................... 3-25
HUS 130 (Block Module) Architecture ........................................................................... 3-26
External Disk Tray Attachment — HUS 130.................................................................. 3-27
HUS 130 Configurations ................................................................................................ 3-28
HUS 130 (Block Module) Overview ............................................................................... 3-29
HUS 150 (Block Module) Architecture Overview........................................................... 3-30
External Disk Tray Attachment — HUS 150.................................................................. 3-31
HUS 150 Configurations ................................................................................................ 3-32
HUS 150 Overview ........................................................................................................ 3-33
Module Summary .................................................................................................................. 3-34
Module Review ...................................................................................................................... 3-35


4. MODULAR STORAGE ARCHITECTURE — PART 2 ....................................................... 4-1


Module Objectives ................................................................................................................... 4-1
Hitachi Adaptable Modular Storage 2000 Architecture............................................................ 4-2
Product Line Positioning .................................................................................................. 4-2
Modular Storage Components ......................................................................................... 4-3
AMS 2100 Overview ........................................................................................................ 4-4
AMS 2300 Overview ........................................................................................................ 4-6
AMS 2500 Overview ........................................................................................................ 4-8
AMS 2000 Architecture Introduction .............................................................................. 4-10
AMS 2000 Architecture — Details and Usage Options ......................................................... 4-11
Logical View ................................................................................................................... 4-12
Tachyon Features .......................................................................................................... 4-13
Active-Active Symmetric Front End Design ................................................................... 4-14
Hitachi Dynamic Load Balancing Controller Architecture .............................................. 4-15
Back End Load Balancing .............................................................................................. 4-17
Hitachi Dynamic Load Balancing Controller .................................................................. 4-18
Dynamic Load Balancing of RGs/LUNs to Less Utilized Controller ............................... 4-19
AMS 2x00 — Owning Controller .................................................................................... 4-20
Active-Active — Full Cache Mirroring ............................................................................ 4-21
AMS 2000 Cache Access .............................................................................................. 4-22
Point-to-Point SAS Back-End Architecture .................................................................... 4-23
AMS 2x00 Back-End and Chassis Connectivity ............................................................ 4-25
Disk Types ..................................................................................................................... 4-26
RAID Levels ................................................................................................................... 4-27
RAID Overhead .............................................................................................................. 4-28
Module Summary ................................................................................................................... 4-29
Module Review ...................................................................................................................... 4-30

5. TIERS, RESOURCE POOLS AND WORKLOAD PROFILES .............................................. 5-1


Module Objectives ................................................................................................................... 5-1
What Is a Storage Tier? ........................................................................................................... 5-2
Storage Tier Examples ............................................................................................................ 5-3
What Is a Resource Pool? ....................................................................................................... 5-4
Application Properties and Workload Profile ........................................................................... 5-5
Batch versus Interactive Workloads ........................................................................................ 5-6
I/O Profiles — Used for Sizing ................................................................................................. 5-7
A More Complete I/O Profile .................................................................................................... 5-8
I/O Profiling — Why Is It Necessary? ...................................................................................... 5-9
Hitachi Dynamic Provisioning (HDP) ..................................................................................... 5-10
HDP Overview................................................................................................................ 5-10
HDP Benefits.................................................................................................................. 5-11
HDP-VOLume Overview ................................................................................................ 5-12
HDP Pools — Enterprise versus Modular ...................................................................... 5-13
Laws of Physics Still Apply ............................................................................................ 5-14
HDP Pool Design ........................................................................................................... 5-15
Determine Compatible Users of Shared Resources ...................................................... 5-16
Hitachi Dynamic Tiering (HDT) .............................................................................................. 5-17
HDT Overview ................................................................................................................ 5-17
Page Level Tiering ......................................................................................................... 5-19
HDT Benefits .................................................................................................................. 5-20
Improved Performance at Reduced Cost....................................................................... 5-21
How HDT Fits into Tiered Storage ................................................................................. 5-22
HDT Specifications......................................................................................................... 5-23
Tiering Policy.................................................................................................................. 5-25
Module Summary ................................................................................................................... 5-27
Module Review ...................................................................................................................... 5-28


6. PERFORMANCE TOOLS AND DATA SOURCES ............................................................ 6-1


Module Objectives ................................................................................................................... 6-1
Workload Generators and Benchmarking Products ................................................................ 6-2
Workload Generators and Benchmarking ....................................................................... 6-2
dd Utility ........................................................................................................................... 6-3
Iometer — Overview ........................................................................................................ 6-4
Iozone .............................................................................................................................. 6-8
Jetstress ........................................................................................................................ 6-10
SQLIO ............................................................................................................................ 6-11
Performance Monitoring Tools .............................................................................................. 6-12
Performance Monitoring Tools Overview ...................................................................... 6-12
I/O Profile Information for Windows ............................................................................... 6-13
I/O Profile for Solaris Using iostat.................................................................................. 6-17
I/O Profile for AIX and Linux Using nmon ...................................................................... 6-21
Storage Monitoring ........................................................................................................ 6-22
Performance Monitor ..................................................................................................... 6-23
Export Tool .................................................................................................................... 6-26
RAIDCOM CLI ............................................................................................................... 6-27
Modular Storage Systems — Monitoring Options ......................................................... 6-28
Austatistics — Capabilities ............................................................................................ 6-29
PFM — Capabilities ....................................................................................................... 6-30
Getting the PFM Stats — PFM Output Files ................................................................. 6-31
Getting the PFM Stats from the SNM2 GUI .................................................................. 6-32
Example PFM Output .................................................................................................... 6-33
AMS PFM Real-time Graphing ...................................................................................... 6-34
PFM Real-Time View of Tag Count ............................................................................... 6-35
Performance Management Software Suite ................................................................... 6-36
Performance Monitor Overview ..................................................................................... 6-37
Performance Monitor View of Statistical Information..................................................... 6-38
Performance Monitor Collecting Ranges ....................................................................... 6-39
Performance Management Challenge ........................................................................... 6-40
Introducing Tuning Manager.......................................................................................... 6-41
Centralized Performance and Capacity Management .................................................. 6-42
Tuning Manager Components ....................................................................................... 6-43
Resources Monitored by Tuning Manager .................................................................... 6-45
Types of Data Collected ................................................................................................ 6-46
Key Metrics .................................................................................................................... 6-47
Analytics Tab — HCS .................................................................................................... 6-48
Module Summary .................................................................................................................. 6-49
Module Review ...................................................................................................................... 6-50

7. ACQUISITION PLANNING .......................................................................................... 7-1


Module Objectives ................................................................................................................... 7-1
Planning for Performance........................................................................................................ 7-2
Response Time Factors ........................................................................................................ 7-10
Modular System Controller .................................................................................................... 7-11
Capacity Planning ................................................................................................................. 7-12
I/O Profile............................................................................................................................... 7-13
Storage Pools ........................................................................................................................ 7-14
Workload and Workload Profile Considerations .................................................................... 7-16
SAN and Hitachi NAS Platform Design ................................................................................. 7-18
SAN Design Basics ....................................................................................................... 7-18
SAN Design Principles .................................................................................................. 7-19
Core-Edge Model Sample ............................................................................................. 7-20
SAN Planning ................................................................................................................ 7-21
Planning a Hitachi NAS Platform Environment ............................................................. 7-22
Sample HNAS — Storage System Connectivity ........................................................... 7-23


Sample Abstract HNAS Storage Pool Configuration ..................................................... 7-24


Storage Pool Recommended Practices ......................................................................... 7-25
Module Summary ................................................................................................................... 7-26
Module Review ...................................................................................................................... 7-27

8. DEPLOYMENT PLANNING — PART 1 ......................................................................... 8-1


Module Objectives ................................................................................................................... 8-1
VDEV, LDEV and LUN Concepts (Enterprise Storage) ........................................................... 8-2
Configuration Concepts — VDEV .................................................................................... 8-2
Four Types of Enterprise VDEVs ..................................................................................... 8-3
LU, HDEV, LUSE and LDEV ............................................................................................ 8-4
VDEV Size ....................................................................................................................... 8-5
Parity Group Name, VDEV Name and Number ............................................................... 8-6
LDEV Alignment within VDEVs ........................................................................................ 8-7
Supplementary Note on Pool Volumes ............................................................................ 8-9
Concatenated Parity Groups — VDEV Striping ............................................................. 8-10
LUN Mapped to LDEV on Concatenated Parity Group.................................................. 8-11
LUN Mapped to LUSE ................................................................................................... 8-12
Tag Command Queuing (TCQ) ............................................................................................. 8-13
Tagged Command Queuing ........................................................................................... 8-13
Why Do We Need TCQ? ................................................................................................ 8-14
Tagged Command Queuing ........................................................................................... 8-15
Varying the HBA LUN Queue Depth .............................................................................. 8-16
Random 4KB Read Test ................................................................................................ 8-17
Queuing — Impact of Writes in the Queue .................................................................... 8-18
Rules and What Happens When Things Go Wrong? .................................................... 8-19
Difference between Target Mode and LUN Queue Depth Mode ................................... 8-20
Checking the HBA LUN Queue Depth ........................................................................... 8-21
Example of Poor Performance Due to Low Execution Throttle (Target Mode) ............. 8-22
Modular Systems — Queue Details ............................................................................... 8-23
AMS Queue Depth Experiment...................................................................................... 8-24
Example of AMS Misconfiguration ................................................................................. 8-25
USP V — External Tag Count Definition........................................................................ 8-26
Basic External Tag Considerations — FC ..................................................................... 8-27
Basic External Tag Considerations — SATA ................................................................. 8-28
Module Summary ................................................................................................................... 8-29
Module Review ...................................................................................................................... 8-30
9. DEPLOYMENT PLANNING — PART 2 ......................................................................... 9-1
Module Objectives ................................................................................................................... 9-1
Application Monitoring.............................................................................................................. 9-2
Online Transaction Processing ........................................................................................ 9-2
Rich Media and Streaming ............................................................................................... 9-5
Rich Media ....................................................................................................................... 9-7
Email Systems ................................................................................................................. 9-8
Mail Metrics — Microsoft Exchange®, Notes ..................................................... 9-10
Microsoft Exchange........................................................................................................ 9-11
Data Repositories for Exchange .................................................................................... 9-12
Exchange 2010 Considerations ..................................................................................... 9-17
Oracle Databases .......................................................................................................... 9-19
Decision Support Systems ............................................................................................. 9-20
Decision Support Systems Metrics ................................................................................ 9-22
Enterprise Resource Planning Metrics........................................................................... 9-23
Backup Systems ............................................................................................................ 9-24
Hitachi Network Attached Storage (HNAS) ........................................................................... 9-25
Gigabit Ethernet Protocol and Capacities ...................................................................... 9-25
2Gb Fibre Channel Protocol and Rates ......................................................................... 9-26
4Gb Fibre Channel Protocol and Rates ......................................................................... 9-27


Basics ............................................................................................................................ 9-28


Architecture Overview ................................................................................................... 9-29
File System Reminder ................................................................................................... 9-30
File Logical Blocks ......................................................................................................... 9-31
High Throughput Rules of Thumb ................................................................................. 9-32
Multipathing ........................................................................................................................... 9-33
Options .......................................................................................................................... 9-33
Overview ........................................................................................................................ 9-34
HDLM Features ............................................................................................................. 9-35
Path Failover and Failback ............................................................................................ 9-36
Load Balancing .............................................................................................................. 9-37
Load Balancing in a Clustered Environment ................................................................. 9-38
Load Balancing Algorithms ............................................................................................ 9-39
Round Robin — Extended Round Robin ....................................................................... 9-41
Dynamic Link Manager GUI .......................................................................................... 9-42
Module Summary .................................................................................................................. 9-44
Module Review ...................................................................................................................... 9-45
10. STORAGE VIRTUALIZATION .................................................................................. 10-1
Module Objectives ................................................................................................................. 10-1
Storage Virtualization and Data Mobility — Enterprise Storage ........................................... 10-2
Introducing the Tiered Storage Concept ....................................................................... 10-2
Customer Storage Management Needs ........................................................................ 10-3
Controller-based Virtualization — Review ..................................................................... 10-4
Universal Volume Manager ........................................................................................... 10-5
Features ......................................................................................................................... 10-6
Configuring External Storage ........................................................................................ 10-7
Map External Storage .................................................................................................... 10-8
External Volumes and External Volume Group ............................................................. 10-9
Cache Mode Usage ..................................................................................................... 10-10
Workload Recommendations, Cached or Uncached .................................................. 10-11
Virtualization and Cache Enabled/Disabled ................................................................ 10-12
Rules of Thumb for Software Requirements ............................................................... 10-13
VSP Cache Mode ........................................................................................................ 10-14
What Is Tiered Storage Manager? .............................................................................. 10-15
Benefits of Tiered Storage Manager ........................................................................... 10-16
Virtual Partition Manager — Enterprise Storage ................................................................. 10-17
Virtual Partition Manager Overview ............................................................................. 10-17
Cache Logical Partition ................................................................................................ 10-19
Storage Logical Partition ............................................................................................. 10-20
Mode 454 Explained .................................................................................................... 10-21
Cache Partition Manager — Modular Storage .................................................................... 10-22
Cache Partition Manager Modular Storage Overview ................................................. 10-22
Functions of Cache Partition Manager ........................................................................ 10-24
Benefits of Cache Partition Manager ........................................................................... 10-27
Multipathing ......................................................................................................................... 10-28
Dynamic Link Manager ................................................................................................ 10-28
Load Balancing ............................................................................................................ 10-29
Multipathing Primer ...................................................................................................... 10-31
HDLM Multipathing Options ......................................................................................... 10-32
Multipath Options for the Enterprise Storage .............................................................. 10-33
MPXIO Multipathing Options ....................................................................................... 10-34
MPXIO Round Robin ................................................................................................... 10-35
VMware Multipath Options .......................................................................................... 10-36
Load Balancing Algorithms .......................................................................................... 10-37
Sequential Detection — Switching from RR to ERR ................................................... 10-38
Module Summary ................................................................................................................ 10-39
Module Review .................................................................................................................... 10-40


11. CAPACITY VIRTUALIZATION ................................................................................. 11-1


Module Objectives ................................................................................................................. 11-1
Dynamic Provisioning ............................................................................................................ 11-2
What Is Dynamic Provisioning? ..................................................................................... 11-2
Benefits .......................................................................................................................... 11-5
Overview ........................................................................................................................ 11-6
Components ................................................................................................................... 11-7
How Does it Work? ........................................................................................................ 11-8
Setup and Monitoring ..................................................................................................... 11-9
Characteristics of File Systems on HDP Virtual Volumes............................................ 11-10
Large Pools versus Small Pools .................................................................................. 11-11
Pool Design Recommended Practices ........................................................................ 11-12
Additional Important Considerations ............................................................................ 11-13
HDP Pool Design Best Practices ................................................................................. 11-14
Dynamic Tiering ................................................................................................................... 11-15
HDT — Page Level Tiering .......................................................................................... 11-15
Hitachi Dynamic Tiering Benefits ................................................................................. 11-17
LDEV Design ................................................................................................................ 11-18
HDP and HDT Pool Design .......................................................................................... 11-19
HDT Tiers ..................................................................................................................... 11-20
Hitachi Dynamic Tiering Operations ............................................................................ 11-21
Hitachi Dynamic Tiering Operations ............................................................................ 11-23
HDT Performance Monitoring ...................................................................................... 11-26
HDP and HDT Pool Design .......................................................................................... 11-27
HDT Page Monitoring................................................................................................... 11-28
Migration Decisions ...................................................................................................... 11-29
Lab: Competing Workload Scenario ............................................................................ 11-30
Module Summary ................................................................................................................. 11-31
Module Review .................................................................................................................... 11-32

12. MONITORING AND TROUBLESHOOTING ................................................................. 12-1


Module Objectives ................................................................................................................. 12-1
Recap: What Is Performance? ............................................................................................... 12-2
Characterize the Issue ........................................................................................................... 12-3
Map Issue to Specific Storage Resources ............................................................................. 12-4
Assess Storage Resource Utilization..................................................................................... 12-5
Reporting Intervals ................................................................................................................. 12-6
Assessing Utilization Levels, Interactive or Batch ................................................................. 12-7
Design for Normal or Failure Operation ................................................................................. 12-8
Port Capacity Reserve to Accommodate Failure ................................................................... 12-9
Back End Director Utilization ............................................................................................... 12-10
Maximum Recommended Array Group Utilization .............................................................. 12-11
RAID-5 versus RAID-1+0 ..................................................................................................... 12-12
Maximum Recommended Cache Write Pending ................................................................. 12-13
Causes of High Write Pending ............................................................................................. 12-14
Identifying Write Pending Problems..................................................................................... 12-16
Interactive Workloads — Response Time Issues ................................................................ 12-17
Caution about Non-Legitimate Workloads ........................................................................... 12-18
Batch Workloads — Throughput Issues .............................................................................. 12-19
Distributing Workloads across Resource Pools ................................................................... 12-20
Replication Related Performance Issues............................................................................. 12-21
Enterprise and Modular Storage Similarities — RAID Group Utilization ............................. 12-22
Enterprise and Modular Storage Similarities — Cache Write Pending ............................... 12-23
Example of Tray-Contained RAID Groups with AMS 2500 and HDP — Recommended ... 12-24
Example of HDD Roamed RAID Groups with AMS 2500 — Not Recommended ............... 12-25
Tray-Contained and Roamed RAID Groups ........................................................................ 12-26
Manual Path Management — LUN Ownership With No Internal Load Balancing .............. 12-27


Performance Related Service Offerings .............................................................................. 12-28


Module Summary ................................................................................................................ 12-29
Module Review .................................................................................................................... 12-30

COMMUNICATING IN A VIRTUAL CLASSROOM ................................................................ V-1


NEXT STEPS ............................................................................................................. N-1
GLOSSARY .............................................................................................................. G-1
EVALUATING THIS COURSE ....................................................................................... E-1



Introduction
Welcome and Introductions

 Student Introductions
• Name
• Position
• Experience
• Your expectations


Course Description

 This 4-day course covers performance concepts and tools, design principles, storage-related equipment and applications, I/O concepts, storage configurations, assessment and planning, and design of storage solutions.
 This course includes case studies that will allow learners to diagnose performance issues and provide viable solutions.


Prerequisites

 Recommended
• TSI2048 — Virtualization Solutions
• TSI1848 — Hitachi Tuning Manager Software v7.x Advanced Operations
• TSI0945 — Managing Storage Performance with Hitachi Tuning Manager
Software v7.x
 Required knowledge and skills
• Working knowledge of Hitachi Enterprise Storage, Hitachi Modular
Storage, Hitachi Optimization and Virtualization tools, and Hitachi
Monitoring and Reporting tools


Course Objectives

 Upon completion of the course, you should be able to:


• Describe performance design principles
• Describe workload profiling
• Identify metrics to be collected prior to configuring and sizing the storage
system for performance
• Describe best practices when configuring and sizing storage systems for
industry-standard applications
• Demonstrate how to configure and size Hitachi storage for performance
• Describe tiered storage virtualization techniques using Hitachi storage
systems
• Use Hitachi optimization and virtualization tools
• Use Hitachi monitoring and reporting tools to proactively monitor
performance
• Demonstrate ability to troubleshoot performance-related problems


Course Topics

Modules:
1. Basics Review
2. Enterprise Storage Architecture
3. Modular Storage Architecture — Part 1
4. Modular Storage Architecture — Part 2
5. Tiers, Resource Pools and Workload Profiles
6. Performance Tools and Data Sources
7. Acquisition Planning
8. Deployment Planning — Part 1
9. Deployment Planning — Part 2
10. Storage Virtualization
11. Capacity Virtualization
12. Monitoring and Troubleshooting

Lab Activities:
1. Setting Up the Environment
2. Understanding Performance Metrics
3. Using Workload Generators
4. Creating a Baseline
5. Troubleshooting Performance Scenarios
6. Competing Workload Scenario


Learning Paths

 Are a path to professional certification
 Enable career advancement
 Are for customers, partners and employees
• Available on HDS.com, Partner Xchange and HDSnet
 Are available from the instructor
• Details or copies

HDS.com: http://www.hds.com/services/education/
Partner Xchange Portal: https://portal.hds.com/
HDSnet: http://hdsnet.hds.com/hds_academy/
Please contact your local training administrator if you have any questions regarding
Learning Paths or visit your applicable website.


Collaborate and Share

 Learn what’s new in the Academy


 Ask the Academy a question
 Discover and share expertise
 Shorten your time to mastery
 Give your feedback
 Participate in forums

Academy in theLoop!

theLoop: http://loop.hds.com/community/hds_academy/course_announcements_and_feedback_community ― HDS internal only


HDS Academy Is on Twitter and LinkedIn

Follow the HDS Academy on Twitter for regular training updates.

LinkedIn is an online community that enables students and instructors to actively participate in online discussions related to Hitachi Data Systems products and training courses.

These are the URLs for Twitter and LinkedIn:


http://twitter.com/#!/HDSAcademy
http://www.linkedin.com/groups?gid=3044480&trk=myg_ugrp_ovr

Page xviii HDS Confidential: For distribution only to authorized parties.


Introduction
Social Networking — Academy’s Twitter Site

Social Networking — Academy’s Twitter Site

 Twitter site
Site URL: http://www.twitter.com/HDSAcademy

HDS Academy Twitter Site URL: http://twitter.com/#!/HDSAcademy

HDS Confidential: For distribution only to authorized parties. Page xix


Introduction
Social Networking — Academy’s Open LinkedIn Site

Social Networking — Academy’s Open LinkedIn Site

 LinkedIn Group
Site URL: http://www.linkedin.com/groups?gid=3044480&trk=myg_ugrp_ovr

 Sign in

HDS Academy LinkedIn Site URL:


http://www.linkedin.com/groups?gid=3044480&trk=myg_ugrp_ovr

Page xx HDS Confidential: For distribution only to authorized parties.


1. Basics Review
Module Objectives

 Upon completion of this module, you should be able to:


• Define performance
• Describe disk drive characteristics and features
• List RAID level performance characteristics available with Hitachi storage
systems

RAID = Redundant array of independent disks

HDS Confidential: For distribution only to authorized parties. Page 1-1


Basics Review
What Is Performance?

What Is Performance?

 Fulfillment of an Expectation
 Performance = Reality – Expectations
 Happiness = Reality – Expectations
 Performance = Happiness
 Measure Reality
• Establish comprehensive data collection
 Ask about Customer Expectations
• Do quantifiable expectations exist?
• How are customer expectations not being met?

Fulfillment of an Expectation
 If both performance and happiness equal the same thing (reality minus
expectations), then it follows that performance must equal happiness
Measure Reality
 Establish comprehensive data collection
Ask about Customer Expectations
 Do quantifiable expectations exist?
 Throughput (IOPS, MB/sec); response time
 How are customer expectations not being met?
 Specific targets, timing or circumstances
 How do they know they are unhappy?

Page 1-2 HDS Confidential: For distribution only to authorized parties.


Basics Review
Disk and RAID

Disk and RAID

Hard Disk Drive Structure and Components

[Figure: cutaway of a hard disk drive showing the cover, base-casting, disk
assembly, spindle motor, actuator (voice-coil motor, or VCM, and head-suspension
assembly), slider (about 1 mm), suspension, gimbal, coil, GMR read element, write
head and electronics card, plus the recording medium layers (lube, overcoat,
recording layer, growth control layer, substrate). Rotational latency and seek
latency are called out.]

• If Random Access
  – Mechanical latency is significant
• If Sequential Access
  – Mechanical latency is not usually important
• Cache hits hide latency

GMR = Giant Magneto Resistive


Now let's look at the pieces in more detail. First, look in detail at the disk. The bulk
of it is made of aluminum or glass. That just provides mechanical strength.
Very thin layers of material are coated on the surface to provide the medium to store
information.
These are manufactured by passing the aluminum or glass substrates through a
huge vacuum system that deposits the thin layers only a few hundred atoms thick,
onto the disk one layer at a time, in a few seconds.
Next, let’s examine the head. The actual elements that do the writing and reading
are microscopic. They are carried on the back of a slider that is not much bigger than
a grain of sand. The slider is supported by a thin sheet of stainless steel attached to
an arm of aluminum.
All of this is rotated over any given track on the disk. The slider is supported over
the disk by a very thin film of air as the disk spins so that it can run for years with no
wear.

HDS Confidential: For distribution only to authorized parties. Page 1-3


Basics Review
Bottom Line on Drive Cost

Bottom Line on Drive Cost

 15 K RPM drives cost more than 10 K RPM drives.
• More platters and heads for same capacity
• More expensive actuator
• More expensive development
• Smaller production volume
 10 K RPM drives cost more than SATA drives.
• Higher parts count
• Higher development cost
• Economies of scale difference
• Higher testing cost
 These cost differences are fundamental.

(Sidebar: Manufacturers cannot spread their costs across technologies and drive families.)

15K RPM drives cost more than 10K RPM drives.


 Approximately double the number of platters and heads for the same capacity
10K RPM drives cost more than serial advanced technology attachment (SATA)
drives.
 Higher parts count (more, smaller platters, faster actuators)
 Higher development cost
 Huge difference in economies of scale:
 SATA uses same technology and manufacturing process as desktop drives
 Higher testing cost

Page 1-4 HDS Confidential: For distribution only to authorized parties.


Basics Review
HDS RAID Levels

HDS RAID Levels

 Modular Storage
• RAID-0, RAID-1, RAID-1+0, RAID-5, RAID-6
 Enterprise Storage
• RAID-1+0, RAID-5, RAID-6

                 RAID-1+0               RAID-5                  RAID-6

Description      Data striping and      Data striping with      Data striping with two
                 mirroring              distributed parity      distributed parities

                 • Good small block     • Good small block      • Good small block
                   random reads and       random reads, but       random reads, but
                   writes                 lower writes            low writes
                 • Lower sequential     • Good sequential       • Good sequential
                   MB/sec                 MB/sec                  MB/sec

Hitachi data controllers are the most advanced in the storage industry and employ
advanced algorithms to manage performance. The intelligent controllers provide
disk-interface and RAID management circuitry, offloading these tasks to dedicated
embedded processors. All user data disks in the system are defined as part of a
RAID array group of one type or another.

HDS Confidential: For distribution only to authorized parties. Page 1-5


Basics Review
Mechanical Latency and Random Disk IOPS

Mechanical Latency and Random Disk IOPS

 Random disk IOPS rates can be estimated from the mechanical latency.
• Formula to calculate IOPS rating of a disk:
  1000 ms / (average rotational latency + average seek time) = IOPS

 Example 1: 15 K RPM drive with 2.01 ms average rotation and 3.8 ms average seek
  1000 ms / (2.01 + 3.8) = 172
 Example 2: 10 K RPM drive with 2.99 ms average rotation and 4.9 ms average seek
  1000 ms / (2.99 + 4.9) = 127
 Example 3: 7200 RPM drive with 4.17 ms average rotation and 8.5 ms average seek
  1000 ms / (4.17 + 8.5) = 79

Drive type        FC      FC      FC      FC      FC      SATA    SATA
Capacity (GB)     73      146     146     300     300     500     1000
RPM               15000   10000   15000   10000   15000   7200    7200
Av Latency (ms)   2.01    2.99    2.01    2.99    2.01    4.17    4.17
Av Seek (Write)   4.2     5.4     4.1     5.1     4.1     8.5     8.5
Av Seek (Read)    3.8     4.9     3.8     4.7     3.8     8.5     8.5
Av Tot Latency    6.21    8.39    6.11    8.09    6.11    12.67   12.67
Av IOPS (Read)    172     127     172     130     172     79      79
Av IOPS (Write)   161     119     164     124     164     79      79
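This rule of thumb is easy to script when sizing. The following is a minimal sketch (not an HDS tool) that reproduces the three examples above; the figures are the rule-of-thumb averages from the table.

# Minimal sketch: estimate the random IOPS of a single drive from its mechanical latency.
# Figures are the rule-of-thumb averages from the table on this page.

def drive_iops(avg_rotational_ms, avg_seek_ms):
    """IOPS = 1000 ms / (average rotational latency + average seek time)."""
    return 1000.0 / (avg_rotational_ms + avg_seek_ms)

examples = [
    ("15 K RPM", 2.01, 3.8),   # ~172 IOPS
    ("10 K RPM", 2.99, 4.9),   # ~127 IOPS
    ("7200 RPM", 4.17, 8.5),   # ~79 IOPS
]

for name, rot_ms, seek_ms in examples:
    print(f"{name}: {drive_iops(rot_ms, seek_ms):.0f} IOPS")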

Page 1-6 HDS Confidential: For distribution only to authorized parties.


Basics Review
RAID-1

RAID-1

[Figure: a RAID-1 pair stores two copies of each chunk (Copy #1 and Copy #2) on two
disk drives, for example ABC|ABC or XYZ|XYZ.]

 Also called mirroring
 Two copies of the data
 Requires 2x number of disk drives

 For writes, a copy must be written to both disk drives
 Two parity group disk drive writes for every host write
 Does not matter what previous data was; just overwrite with new data

 For reads, the data can be read from either disk drive
 Read activity distributed over both copies reduces disk drive busy status (due to
reads) to ½ of what it would be to read from a single (non-RAID) disk drive

HDS Confidential: For distribution only to authorized parties. Page 1-7


Basics Review
RAID-1 and RAID-1+0

RAID-1 and RAID-1+0

Performance Impact of Drive Failure

 Small performance impact with a dead drive


• All reads now need to go the remaining copy
• Only one copy of each write
 But …
• Need to copy data from remaining copy to spare drive as quickly as
possible, in order to restore redundancy
• Copy operation to spare will add significant workload to remaining good
drive
• Main RAID-1 performance issue after drive failure
 Heavier host I/O workload = slower copy operation
 Copy operation will slow response to host I/O

Page 1-8 HDS Confidential: For distribution only to authorized parties.


Basics Review
RAID Terminology

RAID Terminology

 RAID-1+0, RAID-0+1, RAID-10, RAID-01


 What exactly does HDS use?
• HDS implements RAID-1+0
RAID-1+0: a stripe across mirrored pairs
  ABC|ABC   DEF|DEF   GHI|GHI   JKL|JKL
  (each chunk is mirrored, then the mirrored pairs are striped)

RAID-0+1: a mirror of two stripes
  ABC DEF GHI JKL  mirrored to  ABC DEF GHI JKL
  (larger impact of recovery and risk of data loss)

 RAID-1+0
• 2D+2D or 4D+4D
• With rotating copy

RAID-1+0 is available in 2 data plus 2 data (2D+2D) and 4 data plus 4 data (4D+4D)
disk configurations.
The configurations include a rotating copy, where the primary and secondary
stripes are toggled back and forth across the physical disk drives for performance.
RAID-1+0 is best suited to applications with low cache hit ratios, such as random
I/O activity and with high write to read ratios.

HDS Confidential: For distribution only to authorized parties. Page 1-9


Basics Review
RAID-1+0

RAID-1+0

[Figure: a sequential write of ABCDEFGHIJKL to a RAID-1+0 group; each chunk (ABC,
DEF, GHI, JKL) is written to both Copy #1 and Copy #2 of its mirrored pair.]

 Combines RAID-1 plus RAID-0
 Sequential
 Random
 In real life the chunk size would be 10s or 100s of KB

Combines RAID-1 plus RAID-0


 Striping
 Called “RAID-1+0”
Sequential
 Entire stripes are read at once in parallel, using either copy of each chunk.
 This uses half the drives in the parity group.
 Writes are duplicated to the 2 copies.
 Each stripe write hits all the drives in the parity group.
Random
 Reads are distributed over either copy of the entire parity group.
 Writes write a copy to each of 2 drives.
Again, in real life the chunk size would be 10s or 100s of KB.

Page 1-10 HDS Confidential: For distribution only to authorized parties.


Basics Review
RAID Terminology

RAID Terminology

 RAID-5
• 3D+1P or 7D+1P
• Data is striped with parity over RAID members

D1 D2 D3 D4 D5 D6 D7 P
P D1 D2 D3 D4 D5 D6 D7
D7 P D1 D2 D3 D4 D5 D6
D6 D7 P D1 D2 D3 D4 D5
D5 D6 D7 P D1 D2 D3 D4
D4 D5 D6 D7 P D1 D2 D3
D3 D4 D5 D6 D7 P D1 D2
D2 D3 D4 D5 D6 D7 P D1
Disk 1 Disk 2 Disk 3 Disk 4 Disk 5 Disk 6 Disk 7 Disk 8

RAID-5 disk arrangements consist of 4 disks (3D+1P) or 8 disks (7D+1P). Data is


striped across disks similar to RAID-1+0. However, RAID-5 keeps parity
information on each stripe of data for fault resilience. If a failure occurs, the contents
of the failed block can be recreated by reading back other blocks in the stripe and the
parity. Parity information is distributed throughout the array to minimize
bottlenecks when rebuilding data from a failed disk. The overhead of RAID-5 is
equivalent to one disk drive, regardless of the size of the array.
RAID-5 is best suited to applications using mostly sequential reads.

HDS Confidential: For distribution only to authorized parties. Page 1-11


Basics Review
RAID-5

RAID-5

[Figure: three data chunks (10011, 11111, 00000) and their parity chunk (10011).
Where a bit position already has an odd number of 1s across the data chunks (for
example 0+1+0), the parity bit is 0; where the count is even (for example 1+1+0),
the parity bit is set to 1 to keep an odd number of 1s in that bit position across
the stripe.]

 Each parity bit is set as necessary to keep an odd number of “1” bits in that
bit position across the whole parity group (odd parity).
 Adding more data drives does not add more parity.

Parity Drive Failure

 If drive containing parity fails

10011 11111 00000 10011

Data Data Data Parity

• Data is still there


• Immediately reconstruct the parity on a spare disk drive
 In case a second drive fails

Page 1-12 HDS Confidential: For distribution only to authorized parties.


Basics Review
RAID-5

Data Drive Failure

[Figure: the data chunk 11111 is lost. A “0” bit in the parity says there originally
was an odd number of “1” data bits in that position across the data drives; because
there is now an even number of “1” bits on the remaining data disks, the missing
data bit must be a “1.” Reading the surviving chunks (10011, 00000) and the parity
(10011) reconstructs the missing chunk as 11111.]

 Reconstruct missing data
• Read corresponding chunk from all remaining data drives
• See how many “1” bits there are in each position
 Immediately reconstruct the parity on a spare disk drive
• In case a second drive fails

See how many “1” bits there are in each position.


 By comparing how many “1” bits there are in each bit position from the
remaining disk drives with what the parity tells you there originally were, you
can reconstruct the data.
 Better reconstruct the parity on a spare disk drive right away just in case a
second drive fails.
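To make the reconstruction rule concrete, here is a small illustrative sketch (not production RAID code) of the odd-parity scheme described above: the parity chunk keeps the count of "1" bits in every position odd across the stripe, so a lost chunk can be rebuilt from the survivors.

# Illustrative sketch of the odd-parity scheme described above (not production RAID code).
# Each chunk is a string of bits; the parity chunk keeps the count of 1s in every bit
# position odd across the whole stripe, so any single lost chunk can be rebuilt.

def make_parity(data_chunks):
    """Build the parity chunk so each bit position has an odd number of 1s overall."""
    parity = ""
    for pos in range(len(data_chunks[0])):
        ones = sum(chunk[pos] == "1" for chunk in data_chunks)
        parity += "0" if ones % 2 == 1 else "1"   # add a 1 only if the count is even
    return parity

def rebuild_missing(surviving_chunks):
    """Rebuild a lost chunk from the remaining data chunks plus the parity chunk."""
    rebuilt = ""
    for pos in range(len(surviving_chunks[0])):
        ones = sum(chunk[pos] == "1" for chunk in surviving_chunks)
        rebuilt += "1" if ones % 2 == 0 else "0"  # the total must come back to an odd count
    return rebuilt

data = ["10011", "11111", "00000"]             # the three data chunks from the figure
parity = make_parity(data)                     # -> "10011", matching the figure
lost = data.pop(1)                             # pretend the drive holding "11111" fails
print(rebuild_missing(data + [parity]), lost)  # both print 11111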
Random Read Hit

[Figure: the host reads data #3; a copy of data #3 (00000) is already in cache, so
the read is satisfied directly from cache rather than from the disks (10011, 11111,
00000, 10011).]

 Read hits operate at electronic speed
 Just transfer data from cache
 Cache protocol time is short
• Just directory lookup and transfer

Cache protocol time is the microcode path length/MP busy time.

HDS Confidential: For distribution only to authorized parties. Page 1-13


Basics Review
RAID-5

Random Read Miss

[Figure: the host reads data #1, which is not in cache; a copy of data #1 (10011) is
staged from disk into cache alongside the existing copy of data #3 (00000).]

 Read misses are the only operation that sees disk drive speed during normal
operation
 Read miss service time =
• MP protocol time
• (average) seek
• (average) latency
• Transfer
 Copy of data and some extra in the same area usually kept in cache

Read miss
 Normal operation = not overloaded
 That is, only host I/O operation that does not complete at electronic speed with
just an access to cache
Copy of data and some extra in the same area is usually kept in cache
 Because hosts tend to ask for the same or close-by data later
 Loading of some extra is governed by the intelligent learning algorithm
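As a rough illustration of how those components add up, the sketch below estimates a read-miss service time for a 15 K RPM drive. The protocol and transfer figures are assumptions chosen for illustration, not HDS specifications.

# Illustrative read-miss service time estimate (example figures, not HDS specifications).
# Service time = MP protocol time + average seek + average rotational latency + transfer.

mp_protocol_ms = 0.3        # assumed microcode path length / MP busy time
avg_seek_ms = 3.8           # 15 K RPM read seek from the earlier drive table
avg_rotational_ms = 2.01    # 15 K RPM average rotational latency
transfer_ms = 8.0 / 200.0   # assumed: 8 KB block at roughly 200 MB/sec media rate

read_miss_ms = mp_protocol_ms + avg_seek_ms + avg_rotational_ms + transfer_ms
print(f"Estimated read-miss service time: {read_miss_ms:.2f} ms")   # about 6.2 ms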

Page 1-14 HDS Confidential: For distribution only to authorized parties.


Basics Review
RAID-5

Random Write Sequence

[Figure: the host writes new data #2 (01010), replacing old data #2 (11111). In
cache, the old data is removed from the old parity (10011) to give the partial
parity (01100), which corresponds to the remaining part of the stripe without the
old data; the new data is then added in to give the new parity (00110). The new
data and new parity are destaged to the Data #2 and Parity disks.]

Sequence:
1. Read old data; read old parity.
2. Remove old data from old parity giving partial parity (parity for the rest of the
row).
3. Add new data into partial parity to generate new parity.
4. Write new data and new parity to disk.
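The four steps map directly onto the parity arithmetic in the figure. Below is a small illustrative sketch (using the same 5-bit chunks as the figure, not production code) of this read-modify-write update.

# Illustrative RAID-5 small-write (read-modify-write) parity update, using the 5-bit
# chunks from the figure above. XOR removes the old data from the parity and then adds
# the new data in; the result matches the new parity shown in the figure (00110).

def xor_bits(a, b):
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

old_data   = "11111"   # step 1: read old data        (1 disk read)
old_parity = "10011"   # step 1: read old parity      (1 disk read)
new_data   = "01010"   # new data arriving from the host

partial_parity = xor_bits(old_parity, old_data)      # step 2: parity of the rest of the row -> 01100
new_parity     = xor_bits(partial_parity, new_data)  # step 3: -> 00110

# step 4: write new data and new parity to disk      (2 disk writes)
print(partial_parity, new_parity)   # 01100 00110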

HDS Confidential: For distribution only to authorized parties. Page 1-15


Basics Review
RAID-5

RAID-5
• Three writes complete destage operation
• Six I/Os to handle all four host random writes
• Without “gathering write” this would have been 16 I/Os

Random Write — Gathering Write

[Figure: individual 8 K writes going to the subsystem. Cache is allocated in 64 K
segments on 64 K boundaries and managed in 256 K “slots” consisting of up to 4
segments; 2 cache slots map to a disk “chunk” (RSD stripe), and the Open-V chunk
size is 512 K. The 1st 8 K block arrives and only partially occupies its segment;
the 2nd block is put in a new segment within a new slot; the 3rd block goes in a
new segment in the existing slot; the 4th block overwrites the 1st. After receiving
the 4 host random writes there are 2 partially filled slots within the same parity
stripe (row), so destage needs 2 “read old data” and 1 “read old parity”
operations, removes the old data from and adds the new data into the partial
parity, and 3 writes to Data #1, Data #2 and Parity complete the operation.]

This example is Open-V on Enterprise Storage


 Cache for Open-V is allocated in 64 K segments.
 Up to 4 segments may be allocated within a 256K slot.
 Each 256 K Open-V slot is mapped to one RAID-5 chunk (RSD stripe).
The 6 I/Os are three reads and three writes
 Without “gathering write” it would be 16 I/Os
 4 host I/Os x (read old data, read old parity, write new data, write new parity) =
16 I/Os
This example is to illustrate the concept; it does not often happen so conveniently.

Page 1-16 HDS Confidential: For distribution only to authorized parties.


Basics Review
RAID Terminology

RAID Terminology

 RAID-6
• 6D + 2P
• Data is striped with parity over RAID members

D1 D2 D3 D4 D5 D6 P1 P2
D2 D3 D4 D5 D6 P1 P2 D1
D3 D4 D5 D6 P1 P2 D1 D2
D4 D5 D6 P1 P2 D1 D2 D3
D5 D6 P1 P2 D1 D2 D3 D4
D6 P1 P2 D1 D2 D3 D4 D5
P1 P2 D1 D2 D3 D4 D5 D6
P2 D1 D2 D3 D4 D5 D6 P1
Disk 1 Disk 2 Disk 3 Disk 4 Disk 5 Disk 6 Disk 7 Disk 8

Like RAID-5, RAID-6 stripes blocks of data and parity across an array of drives.
However, RAID-6 maintains redundant parity information for each stripe of data.
This redundancy enables RAID-6 to recover from the failure of up to two drives in
an array, i.e., a double fault. Other RAID configurations can only tolerate a single
fault. As with RAID-5, performance is adjusted by varying stripe sizes.
RAID-6 is good for applications using the largest disks and performing many
sequential reads.

HDS Confidential: For distribution only to authorized parties. Page 1-17


Basics Review
RAID-6

RAID-6

D1 D2 D3 D4 D5 D6 P Q

“6D + 2P” parity group

 Extension of the RAID-5 concept


 Allows data to be reconstructed from remaining drives in a parity
group when any one or two drives fail
 Each RAID-6 host random write turns into 6 parity group I/O
operations
 Parity group sizes usually start at 6+2

RAID-6 extends the RAID-5 concept by using 2 separate parity-type fields, usually called
“P” and “Q.” RAID-6 allows data to be reconstructed from the remaining drives in a parity
group when any 1 or 2 drives have failed.
 The mathematics are beyond a basic course, but they are the same as for the ECC used
to correct errors in dynamic random-access memory (DRAM) or on the surface of
disk drives.
Each RAID-6 host random write turns into 6 parity group I/O operations.
 Read old data, read old P, read old Q
 (Compute new P, Q)
 Write new data, write new P, write new Q
RAID-6 parity group sizes usually start at 6+2.
 Has the same space efficiency as RAID-5 3+1

Page 1-18 HDS Confidential: For distribution only to authorized parties.


Basics Review
Understanding the Impact of the RAID Levels

Understanding the Impact of the RAID Levels

Example

 Take
• a 4D+1P RAID-5 LUN,
• a 4D+4D RAID-1+0 LUN, and
• a 4D+2P RAID-6 LUN.
 Issue 4 I/Os.
 How many back-end disk I/Os are generated? (A worked sketch follows the table.)

                      RAID-5    RAID-1+0    RAID-6
Random Read              4          4          4
Random Write            16          8         24
Sequential Read          4          4          4
Sequential Write         5          8          6
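The table can be reproduced from the per-RAID-level rules covered earlier in this module: read-modify-write penalties for random writes and full-stripe destages for sequential writes. The sketch below is illustrative only and assumes the group geometries listed above.

# Illustrative back-end I/O counts for 4 host I/Os against the example LUNs above.
# Assumptions: random writes use read-modify-write (2 disk operations per write for
# RAID-1+0, 4 for RAID-5, 6 for RAID-6); sequential writes destage one full stripe.

HOST_IOS = 4

def back_end_ios(raid, workload, data_disks, protection_disks):
    if workload in ("random read", "sequential read"):
        return HOST_IOS                                    # reads are one-for-one
    if workload == "random write":
        penalty = {"RAID-1+0": 2, "RAID-5": 4, "RAID-6": 6}[raid]
        return HOST_IOS * penalty
    if raid == "RAID-1+0":                                 # sequential write: mirror every chunk
        return HOST_IOS * 2
    return data_disks + protection_disks                   # sequential write: data plus parity

for raid, d, p in [("RAID-5", 4, 1), ("RAID-1+0", 4, 4), ("RAID-6", 4, 2)]:
    row = [back_end_ios(raid, w, d, p)
           for w in ("random read", "random write", "sequential read", "sequential write")]
    print(raid, row)
# RAID-5 [4, 16, 4, 5], RAID-1+0 [4, 8, 4, 8], RAID-6 [4, 24, 4, 6]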

HDS Confidential: For distribution only to authorized parties. Page 1-19


Basics Review
Estimating Disk (HDD) and RAID IOPS Performance

Estimating Disk (HDD) and RAID IOPS Performance

 HDD IOPS =
  1000 / (Avg Rot Latency ms + Avg Seek ms)

 Raw RG IOPS =
  n * HDD IOPS

 Actual IOPS =
  Raw RG IOPS / (Read IOPS + (Write IOPS * y))

 Max Queued IOPS =
  Actual IOPS * z

n = number of hard disk drives (HDDs) in RAID group


y = 2 for RAID-1, 4 for RAID-5 and 6 for RAID-6
z = 1 for solid state drive (SSD) and SATA 1
z = from 1 to 1.2 for SATA 2
z = from 1 to 1.3-1.5 for heavily queued serial attached SCSI (SAS) and Fibre
Channel
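Putting the four formulas together, the following sketch estimates the host IOPS a RAID group can sustain. The read and write terms are treated as fractions of the workload, and the example figures (drive latencies, RAID level, read/write mix) are assumptions for illustration rather than measurements.

# Sketch of the RAID group IOPS estimate built from the formulas above.
# Example figures (7200 RPM drive, 8-drive RAID-6 group, 70/30 read/write, z = 1)
# are assumptions chosen for illustration.

def hdd_iops(avg_rot_ms, avg_seek_ms):
    return 1000.0 / (avg_rot_ms + avg_seek_ms)

def max_queued_iops(n, avg_rot_ms, avg_seek_ms, read_fraction, y, z=1.0):
    raw_rg_iops = n * hdd_iops(avg_rot_ms, avg_seek_ms)                 # Raw RG IOPS
    actual = raw_rg_iops / (read_fraction + (1.0 - read_fraction) * y)  # Actual IOPS
    return actual * z                                                   # Max Queued IOPS

# 8 drives, 4.17 ms rotation, 8.5 ms seek, 70% reads, y = 6 (RAID-6), z = 1 (SATA)
print(round(max_queued_iops(8, 4.17, 8.5, 0.70, 6)))   # roughly 253 host IOPS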

Page 1-20 HDS Confidential: For distribution only to authorized parties.


Basics Review
SATA Specifics

SATA Specifics

 SATA is not designed for 24/7 operation.


• Spread load across as many HDDs as possible to reduce utilization.
• Do not use large RAID groups or rebuild times will be longer.
• Large RAID groups require more cache for writes.
• SATA writes require verification.
• Hitachi implements LRC.
• Testing and bathtub failure discussion.

 Try to spread load across as many HDDs as possible to reduce utilization; this
also helps rebuilds.
 Do not use large RAID groups or rebuild times will be longer because all disks
must be read.
 SATA writes require verification (to detect misdirected writes), and this uses extra CTL
CPU cycles.
 Hitachi implements LRC, but this adds overhead with RAID-1+0.

HDS Confidential: For distribution only to authorized parties. Page 1-21


Basics Review
Module Summary

Module Summary

 Defined performance
 Described disk drive characteristics and features
 Listed RAID level performance characteristics available with Hitachi
storage systems

Page 1-22 HDS Confidential: For distribution only to authorized parties.


Basics Review
Module Review

Module Review

1. Define Performance.
2. List the RAID types supported by Hitachi storage systems.
3. What RAID type gives good small-block random reads and writes?
4. Which RAID type provides good sequential performance —
RAID-1+0 vs. RAID-5?

HDS Confidential: For distribution only to authorized parties. Page 1-23


Basics Review
Module Review

Page 1-24 HDS Confidential: For distribution only to authorized parties.


2. Enterprise Storage
Architecture
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the characteristics of Hitachi storage systems
▪ Front-end architecture that relates to performance optimization
▪ Cache architecture that relates to performance optimization
▪ Back-end architecture that relates to performance optimization
▪ Virtualized architecture and connectivity to external storage

HDS Confidential: For distribution only to authorized parties. Page 2-1


Enterprise Storage Architecture
Enterprise Storage Architecture — Hitachi Virtual Storage Platform

Enterprise Storage Architecture — Hitachi Virtual Storage Platform

Hitachi Virtual Storage Platform Full Configuration — 6 Rack

6 frames maximum. 2 DKC boxes and 16 DKU boxes.

                  1 module      2 module
HDD (2.5 in.)     1,024         2,048
HDD (3.5 in.)     640           1,280
CHA ports         80 (96*1)     176 (192*1)
Cache             512 GB        1,024 GB

*1 ALL CHA configuration (Diskless)

[Figure: rack layout RK-12, RK-11, RK-10, RK-00, RK-01, RK-02.]

A fully-configured Hitachi Virtual Storage Platform (VSP) system contains 2 DKC


boxes, one in each of 2 separate racks, and 16 DKU boxes.
 A fully-configured VSP system requires 6 racks.
 Each 19 in. rack is 60 cm wide, outside edge to outside edge.
 The VSP rack is 110 cm deep including the rear door.
The total width of 6 racks is 278 mm (about 11 inches) wider than the 5-cabinet,
fully-configured Hitachi Universal Storage Platform® V (USP V).
The rack naming convention is a bit different from the RAID 600 USP V. Each rack
has a 2 digit identifying number. Each of the DKC racks will be RK-00 and RK-01
respectively. The HDU racks associated with its DKC rack uses the same left digit
followed by a 1 or 2 depending upon its physical position relative to the DKC rack.
Diskless configuration option is supported for the VSP. In the case of a diskless
configuration, more CHA ports are possible.

Page 2-2 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Hitachi Virtual Storage Platform Full Configuration — 6 Rack

Controller Chassis (DKC)

Disk Chassis (DKU)

The VSP array can be configured as a single chassis or dual chassis. Each chassis has
at least 1 rack — a control rack (DKC) — in which there is a logic box for boards and
one or 2 optional disk units (DKUs).
A single chassis can be expanded by 1 or 2 disk expansion racks, each with up to 3
DKUs. A DKU is either of the small form factor (2.5 in. disks) or large form factor
(3.5 in. disks) type.
A dual chassis array will have 2 control racks and up to 4 disk expansion racks. The
logic boxes in each chassis are cross connected at the grid switch level to create a
single continuous array (not a cluster pair). A fully configured dual chassis (6‐rack)
array occupies a small footprint (3.6 ft by 11.8 ft) and consumes much less power
than the previous USP V design.

HDS Confidential: For distribution only to authorized parties. Page 2-3


Enterprise Storage Architecture
DKC Overview — Logic Box

DKC Overview — Logic Box

 Hitachi Virtual Storage Platform array is created with separate


intelligent components.
 Components are operated in parallel to achieve high performance,
scalability and reliability.
 Virtual Storage Platform uses 5 types of logic boards:
• Front-end directors (Fibre Channel or FICON ports)
• Data cache adapters (cache boards)
• Back-end directors (disk controllers)
• Virtual storage directors (processor boards)
• Grid switches

Unlike most storage arrays from other vendors, the VSP is a purpose built system
designed from the ground up to be a storage array and comprised of (where
appropriate) custom logic and processor ASICs. All Hitachi midrange and enterprise
storage arrays are purpose built. In VSP (like all Hitachi designs), there are separate
intelligent components from which the array is created. These components are
operated in parallel to achieve high performance, scalability, and reliability.
The VSP uses 5 types of logic boards:
 Grid switches
 Data cache adapters (cache boards)
 Front‐end directors (FC or FICON ports)
 Back‐end directors (disk controllers)
 Virtual storage directors (processor boards)

Page 2-4 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Front-End Directors

Front-End Directors

 Front-end director (FED) board controls interaction between the


array and servers attached to host ports.
 Also manages connections to external storage or remote copy
products (Hitachi Universal Replicator).

The front-end director (FED) board controls the interaction between the array and
servers attached to the host ports. It also manages the connections of external
storage or remote copy products (Hitachi Universal Replicator). There are 2 types of
FED features (a pair of boards) available: a 16‐port Open Fibre and a 16‐port FICON.
The 2 types of interface options can be supported simultaneously by mixing FED
features (pairs of boards) within the VSP.
There are up to 8 FED boards per chassis (2 or 4 more can be added if BEDs are not
installed). Unlike the previous Hitachi Enterprise array designs, the FED board does
not decode and execute I/O commands. In the simplest terms, a VSP FED accepts
and responds to host requests by directing the host I/O requests to the VSD
managing the LDEV in question.
The VSD processes the commands, manages the metadata in Control Memory, and
creates jobs for the Data Accelerator processors in FEDs and BEDs. These then
transfer data between the host and cache, virtualized arrays and cache, disks and
cache, or replication operations and cache.
The VSD that owns an LDEV tells the FED where to read or write the data in cache.
This location will be within the partition allocated to that VSD from the Cache‐A
and Cache‐B pools.

HDS Confidential: For distribution only to authorized parties. Page 2-5


Enterprise Storage Architecture
Front-End Directors

 The 16‐port Open Fibre feature consists of two boards, each with eight 8 Gb/sec
open fibre ports. Each port can auto negotiate to 1 of 3 host rates: 2 Gb/sec,
4 Gb/sec and 8 Gb/sec.
 The 16‐port FICON feature has eight 8 Gb/sec FICON ports. Each port can auto
negotiate to 1 of 3 host rates: 2 Gb/sec, 4 Gb/sec or 8 Gb/sec.

Open Fibre 16-Port Feature


The 16‐port Open Fibre feature consists of 2 boards, each with eight 8 Gb/sec Open Fibre
ports. Each port can auto negotiate to 1 of 3 host rates: 2 Gb/sec, 4 Gb/sec and 8 Gb/sec.
Each board has 1 Data Accelerator processor and 2 Tachyon QE‐8 four‐port processors.
The Data Accelerator processor communicates with the VSD boards, accepting or
responding to the host I/O commands. It is also a DMA engine to directly read or write
data to cache. Each Data Accelerator processor is actually two chips under one cover
working together: a processor and a programmable ASIC. The local RAM is used for
buffering host data, maintaining LDEV mapping tables (to the managing VSDs), and
housekeeping tables. There are 2 GSW ports, over which pass job requests to VSDs and
user data moving to or from cache.
FICON 16-port Feature
The 16‐port FICON feature is a board with eight 8 Gb/sec FICON ports, where each port
can auto negotiate to 1 of 3 host rates: 2 Gb/sec, 4 Gb/sec or 8 Gb/sec. Each board has 1
MHUB Data Accelerator processor and 4 Five‐Ex HTP processors. The MHUB processor
communicates with the VSD boards, accepting or responding to the host I/O commands.
It is also a DMA engine to directly read and write to cache. The MHUB chip is different
from the DA processor used on the Open Fibre boards.
The MHUB is actually 2 chips under 1 cover working together: a processor and a
programmable ASIC. The local RAM is used for buffering host data, maintaining LDEV
mapping tables (to the managing VSDs), and housekeeping tables. There are 2 GSW
ports, over which pass job requests to VSDs and user data moving to or from cache.

Page 2-6 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Data Cache Adapters

Data Cache Adapters

 Data cache adapter (DCA) boards are memory boards that hold all
user data and the master copy of control memory (metadata).
 Up to 8 DCAs installed per chassis, with 16 GB to 64 GB of cache
per board (64 GB to 1024 GB per chassis).

The Data Cache Adapter (DCA) boards are the memory boards that hold all user data
and the master copy of Control Memory (metadata). There are up to 8 DCAs installed per
chassis, with 16 GB to 64 GB of cache per board (64 GB to 1024 GB per chassis).
The 2 boards of a feature must have the same RAM configuration, but each DCA feature
can be different. The first 2 DCA boards in the base chassis (but not in the expansion
chassis) have a region of up to 48 GB (24 GB per board) used for the master copy of
Control Memory. Each DCA board also has a 500 MB region reserved for a Cache
Directory.
This is a mapping table to manage pointers from LDEVs and allocated cache slots to those
LDEVs in that cache board. Each DCA board also has one or two on‐board SSD drives (63
GB each) for use in backing up the entire memory space in the event of an array
shutdown.
If the full 64 GB of RAM is installed on a DCA, it must have two 63 GB SSDs installed.
On‐board batteries power each DCA board long enough to complete several such
shutdown operations back‐to‐back in the event of repeated power failures before the
batteries have had a chance to charge back up.
The control memory requirement depends on program products licensed on the storage
system. For example, Hitachi Dynamic Provisioning requires 2 GB of additional control
memory and Hitachi Dynamic Tiering requires 8 GB of additional control memory.

HDS Confidential: For distribution only to authorized parties. Page 2-7


Enterprise Storage Architecture
Data Cache Adapters

 Each DCA cache board has 8 DDR3‐800 DIMM slots, organized as 4


banks of RAM
• Thus, 32 independent banks of RAM in all 8 features in a dual
chassis array

Each DCA cache board has 8 DDR3‐800 DIMM slots, organized as 4 banks of RAM (thus
32 independent banks of RAM in all 8 features in a dual chassis array). Each DCA board
can support 16 GB to 64 GB of RAM using the 8 GB DDR3 DIMMs.
The same amount of RAM must be installed on each DCA board of a feature pair, but
may differ among the installed features. The pair of boards for a DCA feature is installed
in different power domains in the Logic Box (Cluster‐1 and Cluster‐2).
Each DCA board has four 2 GB/sec full duplex GSW ports, each operating at a 1024
MB/sec send and 1024 MB/sec receive rate (concurrently). This provides for 8 GB/sec of
aggregate read‐write bandwidth (wire speed) per board to the FEDs and BEDs.
Each bank of DDR3‐800 RAM has a peak bandwidth of 6.25 GB/sec, or 25 GB/sec
possible per board, although each board is factory rated at 10 GB/sec. Due to the very
high speed of the RAM compared to the GSW ports, all four GSW ports can operate at
full speed.
Each DCA board has one or two 63 GB SSD drives installed. If there is a power outage,
these are the backup target for the entire cache space. In the case of a planned system
shutdown, these are the backup target for the Control Memory region on the first 2 DCA
boards installed in a system. During a power outage, the on‐board battery power keeps
the RAM, the SSDs and the board logic functioning while the DCA microprocessor destages
cache to the embedded SSD. There is enough battery power to support 2 or 3 such
outages back‐to‐back without recharging. One SSD is standard and the second one is
required when the RAM size per board is 64 GB.

Page 2-8 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Virtual Storage Platform Memory Organization

Virtual Storage Platform Memory Organization

 Virtual Storage Platform has three parallel logical memory


systems that manage access to data, metadata, control tables and
array software:
1. Control Memory for configuration data, metadata and control tables
2. VSD Cache Partitions primary cache region for user data blocks
3. Local workspace on each Virtual Storage Directors (VSD), FED and
BED board

The memory systems used within the VSP design are quite different than what was
used in the USP V design. This is mostly due to the elimination of the USP V discrete
Shared Memory system. In addition, the VSP does not use dedicated I/O processors
(MPs) on FEDs and BEDs. The VSP uses a segmented cache system to provide a
common region for the Control Memory master copy and the buffer space for each
VSD board to manage its discrete set of LDEVs.

 Has 3 types of physical memory systems:


1. Data Cache, the array’s primary cache space of up to 512 GB per
chassis consisting of:
a. Control Memory master copy on first 2 DCA boards (4 GB – 24 GB
each)
b. Data Cache Directories (500 MB, 1 per DCA board)
c. VSD Cache Partitions (one per VSD board) in the User Data Cache
region
2. VSD Local Memory for process execution and a local copy of Control
Memory
 4 GB of local DDR2 RAM on each VSD board
3. FED and BED Local Memory for buffering data blocks and control
information such as the LDEV to VSD mapping tables
 There are 2 GSW ports per FED and 4 per BED

HDS Confidential: For distribution only to authorized parties. Page 2-9


Enterprise Storage Architecture
Back-End Directors

Back-End Directors

 Back-end director (BED) boards execute all I/O jobs received from
processor boards and control all reading or writing to disks
• 1 or 2 features (2 or 4 BED boards) per chassis

The Back‐end Director (BED) boards execute all I/O jobs received from the
processor boards and control all reading or writing to disks. There are 1 or 2 features
(2 or 4 BED boards) per chassis. Note that BEDs do not understand the notion of an
LDEV — just a disk ID, a block address and a cache address.
BED functions include the following:
 Execute jobs received from a VSD board
 Use DMA to move data in or out of data cache
 Create RAID‐5 and RAID‐6 parity with an embedded XOR processor
 Encrypt data on disk (if desired)
 Manage all reads or writes to the attached disks
Each BED board has eight 6 Gb/sec SAS links. There are up to 640 LFF disks or 1024
SFF disks per chassis attached to the 16 or thirty-two 6 Gb/sec SAS links from these
2 or 4 BED boards.

Page 2-10 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Back-End Directors

 BED feature consists of 2 boards, each having four 2W SAS ports


• Each port accepts a cable that contains 2 independent 6 Gb/sec SAS links
• Each cable connects to 1 port on a DKU

A BED feature consists of two boards, each having four 2W SAS ports. Each port
accepts a cable which contains 2 independent 6 Gb/sec SAS links. Each cable
connects to one port on a DKU.
Thus there are 8 active SAS links per board.
The speed at which a link is driven (3 Gb/sec or 6 Gb/sec) depends on the interface
speed of the disk target per operation. Those disks with the 3 Gb/sec interface are
driven at that rate by the SAS link, and those disks that are 6 Gb/sec are driven at
that higher rate whenever a BED is communicating with them.
The speed in use of each SAS link is thus dynamic and depends on the individual
connection made through the switches moment by moment. Each BED board has 2
Data Accelerator processors, 2 SPC SAS I/O Controller processors, and 2 banks of
local RAM. There is a lot of processing power on each BED board.
The Data Accelerator processor communicates with the VSD boards, accepting or
responding to their I/O commands. It is also a DMA engine to read or write data to
cache. The DA processor also sends I/O commands to its powerful companion SPC
processor. Each SPC processor contains a high performance CPU that has the ability
to directly drive SAS or SATA disks (using SAS link protocol encapsulation), and
provides four 6 Gb/sec SAS links.
This Data Accelerator processor is actually 2 chips under one cover working
together: a processor and a programmable ASIC. This Data Accelerator processor is

HDS Confidential: For distribution only to authorized parties. Page 2-11


Enterprise Storage Architecture
Back-End Directors

different from the ones used in the FED boards. It contains the processor that — like
the DRR processors on USP V BED boards — manages RAID parity operations, data
encryption and disk rebuilds. Note the naming convention of the SPC 2W ports
(Ports 0 to 3) — these will be used in the SAS Engine discussion to follow.
The local RAM is used for buffering data, maintaining VDEV mapping tables (to the
managing VSDs), and other housekeeping tables. There are 4 GSW ports, over which
pass job requests to VSDs and user data moving to or from cache.

Page 2-12 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Back-End Director and Front-End Director Pairs

Back-End Director and Front-End Director Pairs

 Operational control data is distributed to L2 memory.


• Data accelerator processors
▪ Transfer data to and from cache
▪ Execute host command requests
• Dual core, special purpose I/O processors built into a unique ASIC
package
• Designed to accelerate I/O and operational performance
• Offload latency sensitive processing tasks directly onto the BEDs and
FEDs

Image on left is a CHA. Image on right is a DKA.

HDS Confidential: For distribution only to authorized parties. Page 2-13


Enterprise Storage Architecture
Virtual Storage Directors

Virtual Storage Directors

 Virtual Storage Directors (VSD) is the Hitachi Virtual Storage Platform I/O
processing board.
• 2 or 4 installed per chassis

 Each VSD board executes all I/O requests for LDEVs assigned to it.

The VSD board is the VSP I/O processing board. There are 2 or 4 of these installed
per chassis. Each board includes 1 Intel 2.33GHz Core Duo Xeon CPU with 4
processor cores and 12 MB of L2 cache. There are 4 GB of local DDR2 RAM on each
board (2 DIMMs). This local RAM space is partitioned into 5 regions, with 1 region
used for each core’s private execution space, plus a shared Control Memory region
used by all 4 cores.
Each VSD board executes all I/O requests for those LDEVs (up to 16,320 LDEVs per
VSD of the 65,280 LDEV array limit) that are assigned to that board. No other VSD
board can process I/Os for these LDEVs.
This strict LDEV ownership by VSD eliminates all competition for access to Control
Memory and data blocks in cache. Ownership of a VSD’s LDEVs is temporarily
passed to the other VSD in that feature pair if one should fail, with that same
ownership returning upon replacement of the board.
No user data is processed within the VSD itself, so no user data is transferred across
the 4 GSW ports.

Page 2-14 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Virtual Storage Directors

 All I/O requests are handled as jobs.


 Any core on a VSD can execute any job for any LDEV managed by that VSD board.
 Any LDEV can be accessed over any FED host port on either chassis.
• But the FED processor (Data Accelerator) will only direct I/O requests for an LDEV to the VSD
board that manages that LDEV.

Each FED maintains a local copy of the LDEV mapping tables in order to know
which VSD owns which LDEV. No other VSD boards will ever be involved in these
operations unless that VSD board fails.
The firmware loaded onto the VSD board contains all five types of previously
separated code on the USP V. Each Xeon core will schedule a process that depends
upon the nature of the job it is executing. A process will be of one of the following
types or their mainframe equivalents: Target, External (virtualization), BED, HUR
Initiator, HUR Target. In addition there will be system housekeeping type processes.
Target process – manages host requests to and from a FED board for a particular
LDEV.
External (virtualization) process – manages requests to or from a FED port used in
virtualization mode (external storage). Here the FED port is operated as if it were a
host to operate the external array.
BED process – manages the staging or destaging of data between cache blocks and
internal disks via a BED board.
HUR Initiator (MCU) process – manages the “respond” side of a Hitachi Universal
Replicator connection on a FED port.
RCU Target (RCU) process – manages the “pull” side of a remote copy connection
on a FED port.

HDS Confidential: For distribution only to authorized parties. Page 2-15


Enterprise Storage Architecture
Grid Switches

Grid Switches

 Grid Switches (GSWs) provide high speed cross‐connects among


other 4 types of boards
 2 or 4 GSWs installed per chassis

There can be 2 or 4 GSWs installed per chassis. Each board has 24 high speed ports,
where each port supports a full‐duplex rate (send plus receive) of 2048 MB/sec.
For every I/O request, FED or BED user data blocks directly go to the Cache
Memory Adapter boards, while all FED or BED metadata and job control traffic goes
to the Virtual Storage Director boards.
Note that VSD boards can only read and write to the reserved Control Memory
region of cache memory, as there is a hardware addressing interlock preventing
each VSD from accessing the host data portion of cache.

Page 2-16 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture

 Each grid switch (GSW) board has 24 full duplex 2 GB/sec ports
 Each PCI Express 4‐lane port can concurrently send at 1024 MB/sec
and receive at 1024 MB/sec
 GSW supports an aggregate peak load of 24 GB/sec send and
24 GB/sec receive (a 48 GB/sec full duplex aggregate)

Each GSW board has 24 full duplex 2 GB/sec ports, with each PCI Express 4‐lane
port able to concurrently send at 1024 MB/sec and receive at 1024 MB/sec. As such,
the GSW supports an aggregate peak load of 24 GB/sec send and 24 GB/sec receive
(a 48 GB/sec full duplex aggregate).
The 24 ports are used as follows:
 8 ports are used to connect to FED and BED ports. These ports see both data and
metadata traffic intermixed.
 4 ports are used to connect to VSD boards. These ports only see job request traffic
or system metadata (such as the address in cache for a target slot).
 8 ports are attached to DCA boards, which see Control Memory updates as well
as user data reads and writes.
 4 ports are used to cross connect that GSW to the matching GSW board in the
second chassis (if used).
There are no connections among the 2 or 4 GSWs within a chassis. Every FED and
BED board in a chassis attaches to 2 GSWs, while each VSD and DCA board attaches
to all 4 GSWs.

HDS Confidential: For distribution only to authorized parties. Page 2-17


Enterprise Storage Architecture
VSP Chassis Bandwidth Overview

VSP Chassis Bandwidth Overview

The VSP is not being offered in specific models but as a basic model that is a starting point
for configurations that meet a variety of customer requirements.
The figure shows a single chassis array with all logic boards installed, with all cache
DIMMs installed in each DCA board.
The various peak wire speed bandwidths are shown for the different points in the array.
Wire speed rates are what can be achieved for a short burst in a laboratory environment
but aren’t to be expected in typical usage. But these are the rates that all vendors
advertise since user achievable rates depend greatly on the test environment and
workloads. So the only reference value is the electrical limits of the various components.
In the figure shown here, the numbers inside the colored arrows (such as 32 on the red
arrow to cache) indicate the number of Grid Switch paths. The numbers next to them
indicate the peak wire speed rates for sending or receiving. Each such GSW path has a 1
GB/sec send rate and a concurrent 1 GB/sec receive rate. So, 16 such ports gives an
aggregate of 16 GB/sec send and 16 GB/sec receive.
These peak rates (as send + receive GB/sec) for a fully loaded single chassis system
under sustained heavy loads using all ports are:
 GSWs to FEDs: 16 + 16GB/sec
 GSWs to DCAs: 32 + 32GB/sec
 GSWs to VSDs: 16+16GB/sec
 GSWs to BEDs: 16 + 16GB/sec
 GSWs (Chassis‐1) to GSWs (Chassis‐2): 16 + 16GB/sec
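Those aggregates follow directly from the per-path wire speed (1 GB/sec send plus 1 GB/sec receive per GSW path) multiplied by the path counts shown in the figure, as this quick illustrative check shows.

# Quick check of the peak wire-speed aggregates listed above: each GSW path carries
# 1 GB/sec send plus 1 GB/sec receive, so the aggregate is simply the path count each way.

gsw_paths = {              # GSW path counts for a fully loaded single chassis
    "FEDs": 16,
    "DCAs": 32,
    "VSDs": 16,
    "BEDs": 16,
    "GSWs in the second chassis": 16,
}

for destination, paths in gsw_paths.items():
    print(f"GSWs to {destination}: {paths} + {paths} GB/sec (send + receive)")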

Page 2-18 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
DKU/HDU Overview

DKU/HDU Overview

 Each DKU box contains 8 HDU containers


 DKU is organized into a front and rear, each with 4 HDU sections of 10 or 16
disks.
[Figure: 13U LFF 3.5 in. HDD box (maximum 80 HDDs), front and rear views, each side
with an HDD x 40 section, SSW x 4, fan assemblies and a latch.]

Each of the 1 to 8 DKU boxes per chassis contains eight HDU containers. The DKU is
organized into a front and rear, each with 4 HDU sections for 10 or 16 disks. The LFF
DKU holds 80 disks while the SFF DKU holds 128.
Shown here is an artist’s view of a DKU box. There are 2 fan doors on the front and 2
more on the rear. One fan door is removed in this picture. When the pair of fan
doors on the front or rear (an interlock prevents opening both at the same time) are
opened, those fans stop. The fans on the closed doors are then run at double speed
to maintain the same airflow. When these doors are closed, all fans run at half speed
(quieter).
An empty DKU weighs about 100 pounds.

HDS Confidential: For distribution only to authorized parties. Page 2-19


Enterprise Storage Architecture
DKU/HDU Overview

 LFF type DKU box can hold


80 disks — 10 in each HDU

 SFF type DKU box can hold


128 disks — 16 in each HDU

If the DKU box is the SFF type, it can hold 128 disks, with each HDU holding up to
16 disks. If it is the LFF type, it can hold 80 disks, with each HDU holding up to 10
disks.
Disks are added to 4 of the HDU containers within a DKU as sets of 4 (the Array
Group). Array Groups are installed (following a certain upgrade order) into specific
HDU disk slots in a DKU named region known as a “B4‐x.” Each B4‐x is the set of 4
HDUs either from the top section or the bottom section of each DKU box. Note that
all of the HDUs within a chassis are controlled by those two or four BED boards
installed in that chassis.
The generalized BED and DKU layout is shown. A standard performance single chassis
configuration uses only one BED feature (BED‐0), which provides 16 back‐end 6
Gb/sec SAS links to the set of 1‐to‐8 DKUs. The high performance single chassis
uses two BED features for a total of thirty-two 6 Gb/sec SAS links. This doubles the
number of active SAS links to the same sets of HDUs. In each case, with all 8
optional DKUs installed, there may be up to 1024 SFF disks or 640 LFF disks in a
single chassis.

Page 2-20 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
DKU/HDU Overview

 SAS links pass through the 2 quads of HDUs per DKU.

This illustration shows how the SAS links pass through the 2 quads of HDUs per
DKU. Notice how there is a vertical alignment by HDU.
Each colored line indicates 2 SAS 2W cables (1 from each BED feature) where each
cable has 2 active SAS links. So, on a chassis with 2 BED features, there are 32 SAS
links passing through that chassis’ stack of DKUs (1 to 8 of them), with 8 SAS links
passing through each stack of HDUs.

HDS Confidential: For distribution only to authorized parties. Page 2-21


Enterprise Storage Architecture
DKU/HDU Overview

 Organization of zones in a DKU into which parity groups are defined.


 Either the top 4 or bottom 4 HDUs per DKU are a zone.

This illustration shows the organization of zones in a DKU into which Parity Groups
are defined. Either the top 4 or bottom 4 HDUs per DKU are a zone. These zones are
usually referred to as a “B4‐x”, from which you get the names of the Parity Groups
when they are created. For 4‐disk Parity Groups (RAID‐ 5 3D+1P, RAID‐10 2D+2D),
all members come from a single B4 zone. For the 8‐disk Parity Groups
(RAID‐5 7D+1P, RAID‐10 4D+4D, RAID‐6 6D+2P), the 8 disks come from the two B4
zones within the same DKU. The name associated with the 8‐disk Parity Group is
taken from the B4 name of the 4-disk Parity Group in the lower zone.
Notice how these disk zones cross the SAS links and power domains (clusters).

Page 2-22 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
BED SAS Links to DKU/HDU

BED SAS Links to DKU/HDU

 8 6Gb/sec SAS links per HDU — switches can connect 8 disks within
the HDU or pass directly to the next HDU on a per link basis

Single chassis array with


a Standard Performance
configuration

VSP uses 8 6 Gb/sec SAS links per HDU, with the switches able to connect 8 disks
within that HDU at the same time, or to pass directly to the next HDU on a per link
basis.
The figure is a view of a single chassis array with a Standard Performance
configuration (1 BED feature) and 8 DKUs (only 4 DKUs are shown here to keep it
simple) with 64 HDUs (32 shown here) installed in a chassis. This provides sixteen
6 Gb/sec SAS links (eight 2W cables) that attach to up to 640 LFF disks or 1024 SFF
disks (or some mixture of those).
A dual chassis array would have 2 of these structures side by side with up to 1280
LFF disks or 2048 SFF disks (or some mixture) in all. Each row of green and blue
boxes is one DKU with all 8 of its HDUs. This BED‐to‐DKU association is fixed
within each chassis.

HDS Confidential: For distribution only to authorized parties. Page 2-23


Enterprise Storage Architecture
BED SAS Links to DKU/HDU

 High Performance configuration (2 BED features) and same 4 DKUs

Single chassis array with


a High Performance
configuration

This figure is the same view as the previous slide, this time showing a single chassis
array with a High Performance configuration (2 BED features) and the same 4
DKUs. This provides thirty-two 6 Gb/sec SAS links (16 2W cables) to up to 640 LFF disks or
1024 SFF disks (or some mixture) when using all 8 DKUs. A dual-chassis array
would have 2 of these structures with up to 1280 LFF disks or 2048 SFF disks (or
some mixture).
Note that in the dual chassis configuration, while any FED board can interact with
any VSD or DCA cache board and any VSD can interact with any BED, there is an
exclusive association among the BED boards and the DKUs by chassis. There is no
cross‐chassis distribution at this level since that wouldn’t be possible. The SPC
processors manage certain HDUs from the DKUs within a chassis.

Page 2-24 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Disk Types and Limits

Disk Types and Limits

 Review:
• A single chassis Virtual Storage Platform
supports a mixture of up to 1024 2.5 in. disks
(SSD, SAS) or 640 3.5 in. disks (SSD, SATA).
• SFF DKUs can hold 128 disks.
• LFF DKUs can hold 80 disks.
• There can be up to 8 DKUs per chassis.
• Total disk counts are determined by how
many DKUs are installed and of which types.
• A single chassis VSP has a limit of 128 SSD
drives.

A single chassis VSP supports a mixture (in Array Group installable features of four
disks) of up to 1024 2.5 in. disks (SSD, SAS) or 640 3.5 in. disks (SSD, SATA) or some
value in between.
Each SFF DKU can hold 128 disks and each LFF DKU can hold 80 disks. There may
be up to 8 DKUs per chassis, so the total disk counts will be determined by how
many DKUs are installed and of which types.
The single chassis VSP has an overall limit of 128 SSD drives, with up to 256 SSDs
possible in the dual chassis configuration.
In the next few slides, we will review various disk types and compare their rated
speeds.

HDS Confidential: For distribution only to authorized parties. Page 2-25


Enterprise Storage Architecture
SSD Drives

SSD Drives

 SSD drives are newest drive technology in storage arrays


 SSD drives appear costly when measuring dollars per gigabytes
• Much less costly if one considers dollars per I/O
 Currently available in 2 sizes:
• 200 GB SFF
• 400 GB LFF
• Both use the 3 Gb/sec SAS interface
• Each requires a different type of HDU

SSD drives are the newest drive technology in storage arrays. “Drive” is the proper
term since these are not “disks” in any sense. SSD drives are very costly in the “$ per
GB” metric, but fairly inexpensive in the “$ per IOPS” metric.
SSD drives are currently available in 2 sizes: a 200 GB SFF and a 400 GB LFF, both
currently using the 3 Gb/sec SAS interface.
Note that each requires a different type of HDU. With that in mind, if all drives on a
chassis are of the SFF type, it may be a better choice (economically and for maximum
overall drive counts) to stick with the smaller SSD since it will intermix in the DKUs
needed by the SAS drives. If there already are 1 or more LFF DKUs in a chassis, then
this would not be an issue.
Also keep in mind that the IOPS rate per SSD for these 2 models is not the same.
Having eight 400 GB SSDs in the array will yield about 20% less write performance
than when using 8 of the 200GB SSDs.
Also consider that using sixteen 200 GB SSDs will provide a large IOPS boost over
the 8 400 GB SSDs (equal capacity solution). So the cost justification needs to factor
this into the equation.

Page 2-26 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
SAS Disks

SAS Disks

 Primary disks used in Hitachi enterprise and midrange arrays


• Offer high performance, reliability and a wide range of capacities
 Performance levels fall between SSD and SATA
 SAS disks used in Hitachi Virtual Storage Platform are available in
15 K and 10 K RPM rotation speeds
 Sizes range from 146 GB to 600 GB
 SAS interface speed is 6 Gb/sec

SAS disks are the primary disks used in both enterprise and midrange arrays from
Hitachi. These are workhorse disks, with both high performance and good capacities.
They fall in between the random performance levels of SSD and SATA (closer to
SATA). All three disk types will have about the same sequential read performance
per matching RAID levels.
SAS disks are the same as Fibre Channel disks but with a SAS interface.
SAS disks used in the VSP come in 15K RPM and 10K RPM rotation speeds, and in
sizes ranging from 146 GB to 600 GB. The SAS interface speeds on all Hitachi
supported disks is 6 Gb/sec.
SAS disks are designed for high performance, having dual processors, dual host
ports and large caches. The dual host ports allow two concurrent interactions with
the attached BEDs. One port may be receiving or responding to a new I/O
commend while the other is transferring data (only 1 such transfer per disk at a
time).
The first SAS disks were of the usual Large Form Factor size (3.5 in.). Now they are
shifting to the smaller Small Form Factor (2.5 in.). This reduces cost and heat, as well
as reducing seek times (smaller platters).
In general, a 2.5 in. SFF 10K RPM SAS disk will have somewhat higher performance
than a 3.5 in. LFF 10K RPM disk, but lower performance than a 3.5 in. LFF 15K RPM
SAS disk.

HDS Confidential: For distribution only to authorized parties. Page 2-27


Enterprise Storage Architecture
SATA Disks

SATA Disks

 Offer very high capacities (up to 4 TB) at an “economy” level of


performance
 Best suited for archival purposes
 Not suitable for high levels of random workloads (like OLTP) with
even modest (20% or so) sustained write levels
 Hitachi recommends SATA volumes for nearline storage with a low
frequency of access but online availability
 Not likely that Virtual Storage Platform would be configured with a
high percentage of internal SATA disks

SATA disks offer very high capacities (now at 2 TB per disk but with 4 TB coming
soon) at an “economy” level of performance. They are best suited for archival duty
and are not suitable for high levels of random workloads (like OLTP) with even
modest (20% or so) sustained levels of write.
Due to their large capacities, most SATA disks are used in RAID‐6 configurations in
order to reduce the likelihood of a dual disk failure during the potentially long
rebuild of a failed disk onto a spare disk.
However, the use of RAID‐6 carries a very high RAID write penalty factor of 6
internal array disk operations per host write request. With SATA, there are three
more such operations (read‐verify) per write, or 9 in all per host write request.
Some users will trade off usable capacity for a large increase in write performance
by using RAID‐10 rather than RAID‐6. RAID‐10 only carries a write penalty of 6
disk operations per small‐block host write request (two 512/520 pre‐reads, 2 writes
and 2 read‐verify operations). The usable capacity, however, is reduced from 75%
(RAID‐6 6D+2P) to 50% (RAID‐10 4D+4D).
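A quick sketch of the capacity side of that trade-off, using the 1,832 GB formatted
size of a 2 TB SATA disk from the table on the next page (figures are illustrative only):

    # Usable capacity of an 8-disk SATA parity group under the two RAID choices.
    drive_usable_gb = 1832                                # formatted 2 TB SATA disk
    layouts = {"RAID-6 6D+2P": 6, "RAID-10 4D+4D": 4}     # data drives out of 8

    for name, data_drives in layouts.items():
        usable = data_drives * drive_usable_gb
        print(f"{name}: {usable} GB usable ({100 * data_drives // 8}% of raw)")
    # RAID-6 6D+2P : 10992 GB usable (75% of raw)
    # RAID-10 4D+4D:  7328 GB usable (50% of raw)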
Hitachi’s recommendation is to use SATA volumes for near‐line storage with a low
frequency of access but online availability. The decision to use up valuable internal
chassis disk slots with SATA disks rather than the much higher performing SSD or
SAS disks should be carefully considered. In many cases, the use of SATA disks
within virtualized storage on an external midrange product might make better sense.
However, there will be individual cases where the use of some internal slots for
SATA disks will solve a specific customer business problem. It is not expected that a
VSP would normally be configured with a high percentage of internal SATA disks.

Page 2-28 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Installable Disk Features

Installable Disk Features

HDD Type      RPM    Form Factor  Port Speed  Advertised  Nominal Usable  Random Read
                                              Size (GB)   Size (GB)       IOPS
2 TB SATA     7200   LFF          3 Gb/sec    2000        1832            80
600 GB SAS    10k    SFF          6 Gb/sec    600         545             130
400 GB SSD    n/a    LFF          3 Gb/sec    400         364             5000+
300 GB SAS    10k    SFF          6 Gb/sec    300         268             145
200 GB SSD    n/a    SFF          3 Gb/sec    200         191             5000+
146 GB SAS    15k    SFF          6 Gb/sec    146         137             200

This table illustrates the advertised size (base‐10) and the typical usable size (base‐2,
after Parity Group formatting) of each type of disk. Also shown is the “rule of thumb”
average expected random IOPS rate for each type of disk.
The type and quantity of disks used in a VSP and the RAID levels chosen for those
disks will vary according to analysis of the user workload mix, cost, application
performance targets and usable protected capacity requirements.
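For rough sizing, the usable capacity and raw random-read IOPS of a Parity Group
can be estimated directly from the table above. The sketch below is an illustration of
that arithmetic only (write overheads are covered in the RAID Write Penalty section
later in this module):

    # Rule-of-thumb figures taken from the table above.
    disks = {
        "2TB SATA 7.2K": {"usable_gb": 1832, "iops": 80},
        "600GB SAS 10K": {"usable_gb": 545,  "iops": 130},
        "300GB SAS 10K": {"usable_gb": 268,  "iops": 145},
        "146GB SAS 15K": {"usable_gb": 137,  "iops": 200},
    }

    def parity_group(disk_type, data_drives, total_drives):
        d = disks[disk_type]
        return d["usable_gb"] * data_drives, d["iops"] * total_drives

    cap, iops = parity_group("600GB SAS 10K", data_drives=7, total_drives=8)  # RAID-5 7D+1P
    print(f"RAID-5 7D+1P, 600 GB SAS 10K: ~{cap} GB usable, ~{iops} raw random read IOPS")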
All LDEVs (actually pointers into VDEV containers) that are created from Parity
Groups that use SATA disks can be configured with either OPEN‐V or 3390‐M
emulation; therefore these internal SATA VDEVs are usable for mainframe volumes.

HDS Confidential: For distribution only to authorized parties. Page 2-29


Enterprise Storage Architecture
Enterprise Storage Architecture

Enterprise Storage Architecture

Overview

 Base controller frame and 1 to 4 optional disk array frames
 Controller boards are reduced to half the previous size
• Fewer resources are affected by a service action
 Improved availability, configurability and serviceability

CHA Type       FED Packages         Ports / Package
               USP      USP-V       USP        USP-V
Open Fibre     4        8           16, 32     8, 16
ESCON          4        8           16         8
FICON          4        8           8, 16      8

Back End       Loops                Max Disks
               USP      USP-V       USP        USP-V
1 BED          16       8           384        192
2 BED          32       16          640        384
3 BED          48       24          896        512
4 BED          64       32          1152       640
5 BED          -        40          -          768
6 BED          -        48          -          896
7 BED          -        56          -          1024
8 BED          -        64          -          1152

(Diagram callouts: each disk array frame holds up to 256 drives; the controller frame
holds up to 128 drives.)

Hitachi Universal Storage Platform® V is a high-performance, large-capacity,
high-end disk storage system that follows the architecture of Universal Storage
Platform, with an improved Hi-Star Net architecture and a faster microprocessor.
A USP V consists of one disk control frame (DKC), which can hold 128 HDDs, and
up to 4 disk array frames (DKU), each of which can hold 256 HDDs. Configurations
are flexible, ranging from a minimum of 5 HDDs to a maximum of 1,152. The USP V
is offered in two power models, a 3-phase AC model and a single-phase model, and
each model is connectable to both mainframe systems and open systems.
Number of disk drives:
 Up to 256 HDDs/16 disk paths (when 2 DKA pairs are installed), or
 Up to 640 HDDs/32 disk paths (when 4 DKA pairs are installed), or
 Up to 896 HDDs/48 disk paths (when 6 DKA pairs are installed), or
 Up to 1,152 HDDs/64 disk paths (when 8 DKA pairs are installed)

Page 2-30 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Overview

The new half-sized PCBs of the USP V allow for less costly, more granular expansion
of a system. For instance, there were typically 4 to 6 front-end director (FED) packages
installed in a Universal Storage Platform model 600, and they could be a mixture of
Open Fibre, ESCON, FICON and iSCSI. However, this gave you a large number of
ports of a single type that you might not need, with a substantial reduction in the
other port types that you might need to maximize.
With the new half-sized cards, you can have 8 CHA packages (or up to 16 at the
expense of disk BED packages), using any mixture of the interface types as before.
However, there are half as many ports per board, so fewer of the less-used port
types can be installed. Packages are still installed as pairs of PCB cards, just as with
Universal Storage Platform.
The USP V back-end disk adapter (BED) has four 4 Gb/sec loops supporting up to
128 physical disks. The Universal Storage Platform BED pair has sixteen 2 Gb/sec
Fibre loops supporting up to 256 physical disks. There are up to 8 BED packages
(16 PCBs) in USP V, while Universal Storage Platform has 4 BED packages (8 PCBs),
both with the same maximum number of loops.
BED = Back-end Director
DKA = Disk Adapter
DKC = Disk Controller Unit

HDS Confidential: For distribution only to authorized parties. Page 2-31


Enterprise Storage Architecture
Hitachi Universal Storage Platform V Architecture

Hitachi Universal Storage Platform V Architecture

Architecture diagram callouts:
 Metadata (BC difference tables, CoW tables, and so on) lives in Shared Memory.
 All cache is available for data; cache paths are used for data only.
 Metadata and control traffic use separate control paths.
 FEDs are used for host, TC/HUR and external storage connections.
 HDD bandwidth increases with more BEDs; host bandwidth increases with more FEDs.
Page 2-32 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Hitachi Universal Storage Platform VM Architecture

Hitachi Universal Storage Platform VM Architecture

 USP VM uses the same FEDs and BEDs as USP V – no mixing FED/BED as with NSC55

HDS Confidential: For distribution only to authorized parties. Page 2-33


Enterprise Storage Architecture
Universal Storage Platform V FED and MP Distributed I/O

Universal Storage Platform V FED and MP Distributed I/O

Diagram notes (USP V FED pair, ports 1A–8B, DX4 port chips, eight 400 MHz CHPs,
DTA data paths of 2 x 1064 MB/sec for data only and 8 x 150 MB/sec for metadata only):
 Each CHP controls 2 FC paths (for example, 1A and 5A).
 A queue tag pool of 4096 is managed by the CHP (across its 2 FC ports).
 CHPs can assist other CHPs on the same PCB.
• CHPs can only assist other CHPs on the same physical card.
• CHPs can only assist other CHPs of the same port mode (any target/TC
initiator/external initiator).
• MP contention is reduced automatically.
• Sharing reduces as all MPs get busy – the higher the CHP utilization percentage,
the less likely it is to assist another CHP.
• Target load distribution waits a period of minutes until it is judged that the
condition is sustained.
• Initiator sharing is constant.
CHP = Channel Processor

Page 2-34 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
FED Storage Processor Sharing

FED Storage Processor Sharing

 Processors with the same mode can be shared within a package.


• Initiators cannot share load with targets
• Target port sharing only occurs with write IOPs
 When processor reaches 50%, offloading takes place to the other
processors on the package.
 During port design, consider placing all external paths on same
package to maximize processor sharing for external traffic and
smooth out hot processors.
 I/O metrics are still logged against the originating port/path (IOPS,
MB/sec).
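The rules above can be summarized as a small decision function. The sketch below is
only an illustration of the stated conditions (same package, same port mode, roughly
50% utilization before offloading), not the actual microcode logic:

    # Illustrative CHP sharing check based on the bullet rules above.
    def can_offload(busy_chp, helper_chp):
        return (busy_chp["package"] == helper_chp["package"]   # same physical PCB
                and busy_chp["mode"] == helper_chp["mode"]     # same port mode
                and busy_chp["util"] >= 0.50                   # offload threshold
                and helper_chp["util"] < busy_chp["util"])     # helper is less busy

    chp_1a = {"package": "FED-1", "mode": "target", "util": 0.72}
    chp_3a = {"package": "FED-1", "mode": "target", "util": 0.30}
    print(can_offload(chp_1a, chp_3a))   # True: same package, same mode, over 50% busy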

HDS Confidential: For distribution only to authorized parties. Page 2-35


Enterprise Storage Architecture
Universal Storage Platform V BED

Universal Storage Platform V BED

Diagram notes (USP V BED pair: DX4 chips, eight DRR parity engines, eight DKP
processors, DTA/MPA interfaces, 2 x 1064 MB/sec data-only paths and 8 x 150 MB/sec
metadata-only paths):
 RAID groups built across a BED will show even DKP MP utilization.
• Parity generation is done in hardware (by the DRRs).
• Data still needs to be pumped through the paths by the DKPs to allow parity to
be calculated.
• BED utilization will be higher with RAID-5, and higher again with RAID-6.

Page 2-36 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Switched Back End

Switched Back End

 SATA R-a-W (read-after-write verification) occurs in the disk canister and does not
load the BED or the switch.

HDS Confidential: For distribution only to authorized parties. Page 2-37


Enterprise Storage Architecture
Universal Storage Platform V BED and RAID Group Layout

Universal Storage Platform V BED and RAID Group Layout

Page 2-38 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Random I/O Cache Structures

Random I/O Cache Structures

 Sequential I/O uses a temporary pool of 256 KB segments

Initial Random Cache Allocation Per LDEV
  256 KB cache slots           16
  64 KB read segments          32
  64 KB write segments         32
  16 KB read sub-segments      128
  16 KB write sub-segments     128
  2 KB read cache blocks       512
  2 KB write cache blocks      512

Each 256 KB cache slot (random) is divided into four 64 KB segments, each 64 KB
segment into four 16 KB sub-segments, and each 16 KB sub-segment into eight 2 KB
blocks.
HDS Confidential: For distribution only to authorized parties. Page 2-39


Enterprise Storage Architecture
Architecture — Storage

Architecture — Storage

Storage Overview

1. Physical Devices – PDEV
2. PDEVs are grouped together with a RAID type: RAID-1+, RAID-5, RAID-6
3. Virtual Device(s) – VDEV
4. EMULATION specifies smaller logical unit sizes
5. Logical Devices – LDEV
6. Assign addresses in CU:LDEV format (for example, 00:00, 00:01, 00:02)

Parity Groups are created from the physical disks.
A RAID level and an emulation are applied to the group.
The emulation creates equal-sized stripes called LDEVs (Logical Devices).
LDEVs are mapped into a Control Unit matrix (CU#:LDEV#).
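As a small illustration of the CU:LDEV addressing in step 6, the helper below (not an
HDS tool) converts a flat LDEV index into the familiar two-byte hexadecimal CU:LDEV
form, with 256 LDEVs per control unit:

    def cu_ldev(index):
        cu, ldev = divmod(index, 256)     # 256 LDEVs (00-FF) per control unit
        return f"{cu:02X}:{ldev:02X}"

    print(cu_ldev(0), cu_ldev(1), cu_ldev(2))   # 00:00 00:01 00:02
    print(cu_ldev(258))                         # 01:02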

Page 2-40 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
RAID-5 Mechanism, Open-V

RAID-5 Mechanism, Open-V

HDS Confidential: For distribution only to authorized parties. Page 2-41


Enterprise Storage Architecture
RAID-1+ Mechanism, Open-V

RAID-1+ Mechanism, Open-V

Page 2-42 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
RAID-6 Mechanism, Open Systems

RAID-6 Mechanism, Open Systems

HDS Confidential: For distribution only to authorized parties. Page 2-43


Enterprise Storage Architecture
RAID Overview

RAID Overview

RAID-10: data striping and mirroring
  Number of disks: 4/8
  Benefit: highest performance with data redundancy; higher write IOPS per parity
  group than with similar RAID-5
  Disadvantage: higher cost per number of physical disks
RAID-5: data striping with distributed parity
  Number of disks: 4/8
  Benefit: best balance of cost, reliability and performance
  Disadvantage: performance penalty for a high percentage of random writes
RAID-6: data striping with 2 distributed parities
  Number of disks: 8
  Benefit: balance of cost, with extreme emphasis on reliability
  Disadvantage: performance penalty for all writes

The factors in determining which RAID level to use are cost, reliability and
performance. The table above shows the major benefits and disadvantage of each
RAID type. Each type provides its own unique set of benefits so a clear
understanding of your customer’s requirements is crucial in this decision.

Page 2-44 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
RAID Write Penalty

RAID Write Penalty

HDD IOPS per host I/O
              Read   Write
SAS/SSD
  RAID-10     1      2
  RAID-5      1      4
  RAID-6      1      6
SATA
  RAID-10     1      4
  RAID-5      1      6
  RAID-6      1      9

Another characteristic of RAID is the idea of “write penalty.” Each type of RAID has
a different back‐end physical disk I/O cost, determined by the mechanism of that
RAID level. The table above illustrates the trade‐offs between the various RAID
levels for write operations. There are additional physical disk reads and writes for
every application write due to the use of mirrors or XOR parity.
Note that SATA disks are usually deployed with RAID‐6 to protect against a second
disk failure within the Parity Group during the lengthy disk rebuild of a failed disk.
Also note that you must deploy many more SATA disks than SAS disks to be able to
meet the same level of IOPS performance. To protect data, in the case of SATA disks,
there are additional physical I/O operations per write in order to compare what was
just written to disk with the data and parity blocks held in cache. With RAID‐6, there
are 3 such blocks: data, parity1 and parity2.
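The table translates directly into back-end disk workload. The sketch below applies
the penalties from the table to an assumed host workload mix (the 2,000 IOPS, 70%
read example is illustrative only):

    # (reads, writes) of back-end disk operations per host I/O, from the table above.
    penalty = {
        "SAS/SSD": {"RAID-10": (1, 2), "RAID-5": (1, 4), "RAID-6": (1, 6)},
        "SATA":    {"RAID-10": (1, 4), "RAID-5": (1, 6), "RAID-6": (1, 9)},
    }

    def backend_iops(host_iops, read_ratio, media, raid):
        read_cost, write_cost = penalty[media][raid]
        return host_iops * read_ratio * read_cost + host_iops * (1 - read_ratio) * write_cost

    print(backend_iops(2000, 0.70, "SAS/SSD", "RAID-5"))   # 3800.0 disk IOPS
    print(backend_iops(2000, 0.70, "SATA", "RAID-6"))      # 6800.0 disk IOPS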

HDS Confidential: For distribution only to authorized parties. Page 2-45


Enterprise Storage Architecture
Module Summary

Module Summary

 Described the characteristics of Hitachi Enterprise storage systems


• Front-end architecture that relates to performance optimization
• Cache architecture that relates to performance optimization
• Back-end architecture that relates to performance optimization
• Virtualized architecture and connectivity to external storage

Page 2-46 HDS Confidential: For distribution only to authorized parties.


Enterprise Storage Architecture
Module Review

Module Review

1. List the Virtual Storage Platform logic boards.


2. The control memory can be spread across basic and optional
boards. (True/False)
3. How is the VSD local memory utilized?
4. How does a FED know what VSD manages which LDEV?
5. List the Universal Storage Platform logic boards.
6. On a USP-V how is load balancing between processors on a
package handled?
7. How are IO metrics calculated in a load balancing scenario?

HDS Confidential: For distribution only to authorized parties. Page 2-47


Enterprise Storage Architecture
Module Review

Page 2-48 HDS Confidential: For distribution only to authorized parties.


3. Modular Storage
Architecture — Part 1
Module Objectives

 Upon completion of the two modules, Modular Storage Architecture


— Part 1 and Part 2, you should be able to:
▪ Describe the characteristics of Hitachi modular storage systems
▪ Front-end architecture that relates to performance optimization
▪ Cache architecture that relates to performance optimization
▪ Back-end architecture that relates to performance optimization

HDS Confidential: For distribution only to authorized parties. Page 3-1


Modular Storage Architecture — Part 1
Hitachi Unified Storage Overview

Hitachi Unified Storage Overview

Hitachi Unified Storage 100 Family

[Family positioning chart] Entry: HUS 110; Mid: HUS 130; Max: HUS 150. Each is
offered as Unified Storage or as a Block Module, and all are managed through
Hitachi Command Suite.
HUS = Hitachi Unified Storage

Page 3-2 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
Unified Storage Block Module Specs

Unified Storage Block Module Specs

Common across the family: enterprise design, host multipathing, hardware load
balancing, dynamic optimized performance, 99.999+% reliability.

HUS 110
 8 GB cache
 8 FC and 4 iSCSI ports
 Mix up to 120 flash drives, SAS and capacity SAS
 8 SAS links (6 Gb/sec)
 Max. 4 standard 2.5 in. trays
 Max. 9 standard 3.5 in. trays

HUS 130
 16 GB cache
 16 FC, or 8 FC and 4 iSCSI ports
 Mix up to 264 flash drives, SAS and capacity SAS
 16 SAS links (6 Gb/sec)
 Max. 10 standard 2.5 in. trays
 Max. 19 standard 3.5 in. trays
 Max. 5 dense 3.5 in. trays

HUS 150
 16-32 GB cache
 16 FC, or 8 FC and 4 iSCSI ports
 Mix up to 960 flash drives, SAS and capacity SAS
 32 SAS links (6 Gb/sec)
 Max. 40 standard 2.5 in. trays
 Max. 80 standard 3.5 in. trays
 Max. 20 dense 3.5 in. trays

(Chart axes: Throughput and Scalability. FC = Fibre Channel, SAS = Serial Attached SCSI.)

HDS Confidential: For distribution only to authorized parties. Page 3-3


Modular Storage Architecture — Part 1
Hitachi Unified Storage Architecture

Hitachi Unified Storage Architecture

Logical Architecture Overview


[Logical architecture diagram] Each of the two controller modules has its own power
supply, front-end I/O modules for host ports (FC and iSCSI), a 2-core 1.73 GHz Intel
Xeon CPU with 3.5 GB of RAM, a PCH with management port, NVRAM, and a RAID
processor (DCTL) with 16 GB of DDR3 cache (10.6 GB/sec to cache). Internal paths
run at 6.4 GB/sec and 3.2 GB/sec, the two DCTLs are joined across the passive
backplane by a 6.4 GB/sec crossover, and the diagram is divided into an upper LUN
management region and a lower load balancing region. Back-end I/O modules each
carry a SAS controller with 8 x 6 Gb/sec links; SAS wide cables (4 links at 6 Gb/sec
each) run to enclosure “stacks” of 24-slot 2.5 in. trays, 12-slot 3.5 in. trays and 48-slot
high density trays, up to 960 disks (SAS, SSD) in total.

Page 3-4 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
Dynamic Virtual Controller Overview

Dynamic Virtual Controller Overview

Dynamic Virtual Controller Introduction

 Dynamic Virtual Controller


• Allows for simultaneous host access of any LUN on any host port on
either controller with very little added overhead
• Front‐end design allows for the concurrent use of operating system native
path management and host managed load balancing (such as Hitachi
Dynamic Link Manager, Windows MPIO, Solaris MPxIO/traffic manager,
AIX MPIO, Linux Device Mapper or Veritas DMP)

The Dynamic Virtual Controller of the Hitachi Unified Storage 100 family controllers
allows for simultaneous host access of any LUN on any host port on either
controller with very little added overhead. A host accessing a LUN via a port on
Controller‐0 can actually have most of the I/O request processed completely by
Controller‐1 with little intervention by the Intel CPU in Controller‐0. When a LUN is
accessed on ports of the non‐managing controller, the data is moved across the
inter‐DCTL bus into the alternate controller’s mirror region of cache by the
managing controller when the back‐end I/O is completed.

HDS Confidential: For distribution only to authorized parties. Page 3-5


Modular Storage Architecture — Part 1
Dynamic Virtual Controller Introduction

 LUN Management
• Hitachi Unified Storage uses a dynamic, global table of all configured
LUNS that determines which controller will execute the back-end part of
an I/O request for a LUN
• Execution is independent of which front-end port (either controller) is
involved in accepting or responding to the host I/O request
• All LUNs are initially automatically assigned by Hitachi Storage Navigator
Modular 2 on a round-robin basis to the controller I/O management lists
as LUNs are created
• The LUN management table is changed over time by the operation of the
Hardware I/O Load Balancing (HLB) feature (described below)

LUN Management by Storage System


A totally new concept for the midrange array arena
 Hitachi Unified Storage uses a dynamic, global table of all configured LUNS that
determines which controller will execute the back-end part of an I/O request for
a LUN
 Execution is independent of which front-end port (either controller) is involved
in accepting or responding to the host I/O request
 Eliminates the sysadmin’s need to micro-manage the placement of specific LUNs on
certain paths for certain hosts (*except for CoW/SI/TC – a PVOL-SVOL pair
must be on the same controller)
 All LUNs are initially automatically assigned by Hitachi Storage Navigator
Modular 2 on a round-robin basis to the controller I/O management lists as
LUNs are created
 The LUN management table is changed over time by the operation of the
Hardware I/O Load Balancing (HLB) feature. If enabled, HLB will dynamically
remap certain LUNs from one controller’s management list to the other
controller based on certain environmental conditions.

Page 3-6 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
LUN Management

LUN Management

[LUN management diagram] Host I/O arrives on QE8 ports of Controller 0 or
Controller 1 in either “Direct” or “Cross” mode. Each controller contains a dual-core
1.73 GHz Xeon with 3.5 GB of RAM, a PCH with management port (3.2 GB/sec), and
a RAID processor (DCTL); the two DCTLs are linked by a 6.4 GB/sec bus. Each
controller’s cache holds its own local data region (#0 or #1) plus a mirror of the
other controller’s data.

The LUN Management system allows for a host I/O present on any front-end port
to be processed by either controller. All LUNs are associated with 1 of the 2
controllers by the LUN Management Table. If a host request for a LUN arrives on a
port on the current managing controller, then that is called “Direct Mode.” If that
request is for a LUN managed by the other controller, that is known as “Cross
Mode.” In Cross Mode, the local Xeon processor directly sends the request to the
Xeon CPU on the other controller for execution over the inter-DCTL
communications bus.
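A minimal sketch of that decision follows, assuming an illustrative LUN management
table; the table contents and the function name here are made up for the example,
and the real mapping lives inside the controllers:

    lun_mgmt_table = {0: "CTL0", 1: "CTL1", 2: "CTL0", 3: "CTL1"}   # example only

    def io_mode(lun, receiving_controller):
        owner = lun_mgmt_table[lun]
        if owner == receiving_controller:
            return "Direct"                      # back end handled locally
        return f"Cross (forwarded to {owner})"   # handed to the managing controller

    print(io_mode(2, "CTL0"))   # Direct
    print(io_mode(2, "CTL1"))   # Cross (forwarded to CTL0)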

HDS Confidential: For distribution only to authorized parties. Page 3-7


Modular Storage Architecture — Part 1
Hardware Load Balancing

Hardware Load Balancing

Hardware Load Balancing Overview

 Hardware Load Balancing


• Automatic change to the controller management tables of 1 or more LUNs
• Triggered by a sustained imbalance of Intel CPU average busy rates
between the 2 controllers
 Brings about a balance of controller CPU loads
• Does not affect the mapping of LUNs to front-end ports
• Only affects the assignment to a controller for processing those I/O
requests
• Any LUNs not associated with Replication LUNs are candidates for an I/O
management change each time it occurs

Hardware Load Balancing (HLB)


A feature that is distinctly different from the LUN Management system, but it makes use of
those capabilities
 HLB is the automatic change to the controller management tables of one or more
LUNs.
 Triggered by a sustained imbalance (probably 70% and 40% busy) of Intel CPU
average busy rates between the 2 controllers.
 After there has been some sustained degree of controller CPU busy imbalance
the HLB mechanism will activate and decide which LUNs are the best ones to
move to the other controller’s I/O management list in order to bring about a
balance of controller CPU loads.
 Does not affect the mapping of LUNs to front-end ports (they remain where they
are).
 Only affects the assignment to a controller for processing those I/O requests.
 Any LUNs not associated with Replication LUNs are candidates for an I/O
management change each time it occurs
 If a LUN is being accessed by a front-end port located on the non-managing
controller, then these I/O requests are mapped (via the LUN Management
mechanism) from the local Intel processor across to the Intel CPU in the currently
managing controller.
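A hedged sketch of the trigger condition follows. The 70%/40% thresholds are the
“probably” figures quoted above, and the sustained-imbalance check is an
illustration, not the actual firmware algorithm:

    HIGH, LOW = 0.70, 0.40   # assumed busy thresholds quoted in the notes above

    def rebalance_needed(ctl0_busy_samples, ctl1_busy_samples):
        pairs = list(zip(ctl0_busy_samples, ctl1_busy_samples))
        ctl0_overloaded = all(a >= HIGH and b <= LOW for a, b in pairs)
        ctl1_overloaded = all(b >= HIGH and a <= LOW for a, b in pairs)
        return ctl0_overloaded or ctl1_overloaded   # imbalance held for every sample

    print(rebalance_needed([0.75, 0.80, 0.78], [0.30, 0.35, 0.25]))   # True
    print(rebalance_needed([0.55, 0.60, 0.58], [0.45, 0.50, 0.48]))   # False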

Page 3-8 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
Hardware Load Balancing Overview

 Host Managed Load Balancing


• Native path management, such as Windows MPIO, Solaris MPxIO/traffic
manager, AIX MPIO, Linux Device Mapper, Veritas DMP or Hitachi
Dynamic Link Manager, including various load balancing algorithms, are
fully supported.
 The use of host‐based path failover is still required for high
availability configurations.

While the LUN Management front‐end design of the Hitachi Unified Storage 100
family eliminates most host path assignment management between the ports on the
2 controllers, there is still a need for host path failover in the event of a path failure
(bad cable, complete loss of a controller, switch failure, etc.).

HDS Confidential: For distribution only to authorized parties. Page 3-9


Modular Storage Architecture — Part 1
Hitachi Unified Storage Cache Management

Hitachi Unified Storage Cache Management

Cache Memory and Host I/O

 Default Logic in Hitachi Unified Storage 100 family systems for writes
received from Hosts is “write back”
• Host issues a write.
• Data is written into cache and acknowledged to host.
• This data (dirty) is then asynchronously destaged to disks.

Page 3-10 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
Default Layout (Dynamic Provisioning not Installed)

Default Layout (Dynamic Provisioning not Installed)


[Default cache layout diagram, Dynamic Provisioning not installed] Each controller’s
cache is split into a System Region (System partition plus MML) and a User Data
Region. The User Data Region on CTL-0 holds user data Partition #0 plus a mirror of
CTL-1’s partition; CTL-1 holds Partition #1 plus a mirror of CTL-0’s partition. Writes
accepted by one controller are mirrored into the other controller’s mirror area.

Hitachi Dynamic Provisioning is used for virtualizing internal capacity (discussed in
a later module).
The default cache configuration establishes a System Region, an MML Region and
a User Data Region. The System Region is sized when the subsystem is first
configured (or upgraded). The MML size is fixed according to the cache size and
model. The User Data Region is all of the remaining space. The User Data Region is
further split into 2 equal halves: the Base Partition (User Data Area, or UDA) and
the Mirror. The Mirror region is used as the target for the mirrored write blocks
from the other controller (over the inter-DCTL 6.4GB/s communications bus). The
cache size breakdowns are as follows:
 32 GB (HUS 150) - 11,160 MB UDA and 11,160 Mirror, 2048 MB MML and 8400
MB System
 16 GB (HUS 130) - 4660 MB UDA and 4660 Mirror, 1024 MB MML and 6040
MB System
 8 GB (HUS 110) - 1420 MB UDA and 1420 Mirror, 816 MB MML and 4536 MB
System
The MML Region accommodates the Copy Products and Quick Format work region.
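The region sizes above account for the entire installed cache, which the short check
below confirms:

    # (UDA, Mirror, MML, System) in MB, from the list above.
    layouts = {
        "HUS 150 (32 GB)": (11160, 11160, 2048, 8400),
        "HUS 130 (16 GB)": (4660,  4660,  1024, 6040),
        "HUS 110 (8 GB)":  (1420,  1420,  816,  4536),
    }
    for model, regions in layouts.items():
        total_mb = sum(regions)
        print(f"{model}: {total_mb} MB = {total_mb // 1024} GB")   # 32768, 16384, 8192 MB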

HDS Confidential: For distribution only to authorized parties. Page 3-11


Modular Storage Architecture — Part 1
Memory Management Layer

Memory Management Layer

[Cache memory diagram] Cache is divided into the User Data Region (Mirror plus
User Data Area) and the System Area (which holds metadata for SI/TC/MVM,
CoW/TCe and others). The Memory Management Layer sits beneath them and pages
metadata for QF, TCe, CoW, TC and SI/MVM out to backing areas on RAID Groups
(RG), DP pools and the DMLU.

The default cache configuration for each controller is automatically split into 3
regions, where these are the System Region, the Memory Management Layer and the
User Data Region. The User Data Region is further split equally into the User Data
Area and the Mirror Area. The User Data Area is the buffer space for local host I/O
operations, while the mirror region is the mirror of the User Data Area on the other
controller in the subsystem pair.
Memory Management Layer (MML) is used to manage the metadata for several
Replication Products. It is a fixed size (per Hitachi Unified Storage model) and is
always present on each subsystem whether or not any Replication licenses are
installed. It provides a virtual memory space for cache for the following:
 Hitachi ShadowImage® In‐System Replication software bundle (SI)
 Hitachi Copy‐on‐Write Snapshot (CoW)
 Hitachi TrueCopy® Remote Replication (TC)
 Hitachi TrueCopy Extended Distance (TCe)
 Modular Volume Migration (MVM)
 Quick Format (QF)

Page 3-12 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
Memory Management Layer

 Cache Paging Areas


• RAID Group (RG)
 Hidden space set aside on each RAID Group
 Used by Quick Format function
• Differential Management Logical Unit (DMLU)
 Located in a user selectable location
 Used by SI, TC and MVM for their metadata
• Dynamic Provisioning (DP) Pool
• Space on a DP Pool for the metadata for Hitachi Copy-on-Write (CoW)
and Hitachi TrueCopy® Extended Distance (TCe)
• Space is allocated from the pool as needed

RG – RAID Group, a hidden space set aside on each RAID Group when they are
created, with the maximum size set at 5 GB. The size is a function of the number of
disks and their sizes. It is used by the Quick Format function.
DMLU – Differential Management Logical Unit, located in a user selectable
location, with a size that ranges from 10 GB to 128 GB. It is used by ShadowImage,
TrueCopy and volume migrator for their metadata. The target location may be a
regular LUN, a DPVOL or a hidden LUN. The DMLU function has actually existed
on Hitachi modular systems since the Thunder 9500 model.
DP Pool – space on a DP Pool for the metadata for Copy-on-Write and TrueCopy
Extended Distance. The maximum space taken from the pool is 50 TB. This space is
allocated from the pool as needed, just as with a standard DP-VOL.

HDS Confidential: For distribution only to authorized parties. Page 3-13


Modular Storage Architecture — Part 1
Default Layout - Dynamic Provisioning Installed

Default Layout - Dynamic Provisioning Installed


[Default cache layout diagram, Dynamic Provisioning installed] The layout is the same
as before, except that the System Region on each controller grows into a combined
System and Software partition, which shrinks the User Data and Mirror areas
accordingly.

Enabling the HDP and HDT keys is the only time the System Region will grow. The
HDT key is supposed to add extra space over what the HDP key creates, but our
tests don’t show this to be true.
When the license for HDP (high capacity mode) is enabled, the System Region
grows (no reboot required) to create the large workspace for this package. The
default overall cache configuration changes to the following sizes:
 32 GB (HUS 150) - 9320 MB User Data and 9320 MB Mirror, 2048 MB MML
and 12,080 MB System
 16 GB (HUS 130) - 3920 MB User Data and 3920 MB Mirror, 1024MB MML and
7520 MB System
 8 GB (HUS 110) - 950 MB User Data and 950 MB Mirror, 816MB MML and
5476 MB System

Page 3-14 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
Cache Usage with a Controller Failure

Cache Usage with a Controller Failure

 Impact of controller failure


• Mirror region replaced by a second base partition
• Writes not mirrored
• Larger system region and smaller base partitions
[Diagram] After the failure, the surviving controller’s cache holds the System Region,
the MML, and two user data partitions (Partition #0 and Partition #1) in place of the
former mirror area.

When a controller fails, the surviving controller modifies its cache usage. The Mirror
Region is replaced by a second Base Partition to replace the one lost on the failed
controller. Writes are no longer mirrored. All of the LUNs that were managed by the
other controller are serviced by the remaining controller, and it uses the second
partition as the work space for those LUNs. If software was installed, the System
Region shown here would be much larger, and the 2 Base Partitions would be much
smaller.

HDS Confidential: For distribution only to authorized parties. Page 3-15


Modular Storage Architecture — Part 1
Cache Layout Using Cache Partition Manager

Cache Layout Using Cache Partition Manager

 Partitioning Cache
[Diagram] With Cache Partition Manager, each controller’s User Data Region is
divided into a Master partition and optional sub-partitions (for example, Partitions 0,
2 and 4 on CTL-0 and Partitions 1, 3 and 5 on CTL-1), alongside the System Region,
the MML and the mirror of the other controller’s cache. Writes are still mirrored
between controllers.

Cache Partition Manager (CPM) allows for the creation of a Master Partition and one
or more small Sub-partitions in the User Data Area. The small Sub-partitions may be
used to isolate selected LUNs in cache so that I/O operations on those LUNs do not
affect the general population of LUNs. Additionally, the cache segment size may be
changed for each Sub-partition if desired. The default cache segment size is 16 KB.
The alternate choices for a Sub-partition are: 4 KB, 8 KB, 64 KB, 256 KB and 512 KB.
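One common way to use that option is to match the sub-partition segment size to the
application’s typical I/O size. The helper below is an illustrative heuristic only, not
an HDS recommendation engine:

    SUPPORTED_SEGMENT_KB = [4, 8, 16, 64, 256, 512]   # 16 KB is the default

    def pick_segment_kb(typical_io_kb):
        # Smallest supported segment that still holds one typical I/O.
        for size in SUPPORTED_SEGMENT_KB:
            if size >= typical_io_kb:
                return size
        return SUPPORTED_SEGMENT_KB[-1]

    print(pick_segment_kb(4))    # 4  (small-block random workload)
    print(pick_segment_kb(64))   # 64 (large-block workload)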

Page 3-16 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
Hitachi Unified Storage Back End

Hitachi Unified Storage Back End

SAS Engine Overview

 The SAS engine is the set of IOC CTRL chips, SXP chips and SMX chips.
 The DCTL processor directs disk I/O requests to the SAS IOC.
 The SAS multiplexor chip (SMX) is the interface from the IOC to the external
cables.
 On the 110/130 controller, there is an internal 32-port SAS expander unit (SXP)
that controls 1 port from each of the 12/24 internal disks.
• Same SXP used in the external trays
[Diagram] The DMA engines (DCTL) connect over PCIe x8 to the SAS IOC CTRL
chips, through SXP expanders (32 x 6 Gb, internal enclosure on the 110/130) and SMX
ports to 4 SAS wide cables, each carrying four 6 Gb/sec links out to the enclosures.

SAS I/O Controller processor (IOC)


 CTRL processor contains dual CPUs
 It controls the SAS SSDs and disks
 It provides 8 x 6 Gbps SAS links
 In a major departure from standard FC-AL back-end designs, the IOC processor
will select one of the 8 SAS links over which to direct I/O requests to a specific
disk in a RAID Group
 HUS 110 and HUS 130 models have 1 SAS Engine per controller; HUS 150 has 2.
SAS Expander Processor (SXP)
 The 20-port and 32-port SXP modules are “south” of the IOC chips (controls 12
or 24 attached disks per tray)
 One per controller board (except the 150 model) for internal disks, 2 per external
disk tray (4 per high density unit, but those are actually used as dual 24 trays)

HDS Confidential: For distribution only to authorized parties. Page 3-17


Modular Storage Architecture — Part 1
SAS Engine Overview

 SXP controls which of the four 6 Gb/sec SAS links from the IOC will be cross-
connected to an individual disk for an I/O operation.
 This matrix connection feature implements automatic assignment of disks to
links when each RAID Group is defined.
SAS Multiplexor (SMX) interface chips
 These are connected to either an IOC processor or the SXP chip (not on HUS 150).
The SMX chip is where each wide SAS cable (four 6 Gb/sec SAS links) is attached
to the rear panel of a controller.

Page 3-18 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
External Disk Tray — LFF SAS 12-Disk

External Disk Tray — LFF SAS 12-Disk

[Diagram] Each 12-slot LFF tray has two SAS expanders, one cabled to Controller 0
and one to Controller 1 by a wide cable of 4 SAS links, with outbound ports that
daisy-chain 4 more links to the next tray; every 3.5 in. disk slot (0 through 11)
connects to both expanders.

This is the LFF form factor 12-disk tray, which may be populated with any of the 3.5
in. SAS 7200 RPM drive choices. The 2 SAS Expanders in each tray attach each disk
port to any of the four 6Gbps SAS links in that Expander. Additional trays are daisy-
chained from the outbound SAS links port. HUS 110 and HUS 130 models have
either a similar 12-disk or the 24-disk tray as an internal disk bay.

HDS Confidential: For distribution only to authorized parties. Page 3-19


Modular Storage Architecture — Part 1
External Disk Tray — SFF SAS, SSD 24-Disk

External Disk Tray — SFF SAS, SSD 24-Disk

[Diagram] Each 24-slot SFF tray likewise has two SAS expanders, one per controller,
each fed by a wide cable of 4 SAS links and daisy-chaining 4 links to the next tray;
every 2.5 in. slot (0 through 23) can hold a SAS disk or an SSD and connects to both
expanders.

This is the SFF form factor 24-disk tray, which may be freely populated with 2.5 in.
SAS and SSD drives. The 2 SAS Expanders in each tray attach each disk port to any
of the four 6 Gb/sec SAS links in that Expander. Additional trays are daisy-chained
from the outbound SAS links port. HUS 110 and HUS 130 models have either a 24-
disk or a 12-disk equivalent tray for the internal disks.

Page 3-20 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
External Disk Drawer — LFF SAS High Density

External Disk Drawer — LFF SAS High Density

[Diagram] The high density LFF drawer contains two independent 24-slot sections.
Each section has its own pair of SAS expanders (one per controller), each attached by
a wide cable of 4 SAS links (ports 0/1 for one section, ports 2/3 for the other), and
every 3.5 in. slot (0 through 23 in each section) connects to both of its expanders.

This is a representation of the high density 48-disk LFF drawer. It holds up to 48 SAS
7200 RPM 3.5 in. disks. The 2 pairs of SAS Expanders in each tray attach each disk
port to any of the 4 SAS links in that Expander. Additional drawers are daisy-
chained from the outbound SAS links port. This top-load drawer functions as 2
independent 24-slot trays.

HDS Confidential: For distribution only to authorized parties. Page 3-21


Modular Storage Architecture — Part 1
Hitachi Unified Storage Models Overview

Hitachi Unified Storage Models Overview

HUS 110 (Block Module) Architecture

HUS 110 does not have I/O modules.


The controller mother board has the following:
 1 Hitachi DCTL-H RAID ASIC
 1 Intel Xeon Core i CPU (1.73 GHz, single core)
 3.5 GB of CPU RAM (local workspace)
 Cache: 4 GB (1 x 4 GB DIMM) per controller
 1 SAS port
 4 Fibre Channel ports
 2 Ethernet ports
 1 slot for the expansion card

Page 3-22 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
External Disk Tray Attachment — HUS 110

External Disk Tray Attachment — HUS 110

[Diagram] On HUS 110, each controller has one disk-side SAS controller providing
4 x 6 Gb/sec links. The internal disk bay (12 LFF or 24 SFF slots) is the first enclosure,
and additional trays (24 x 2.5 in. or 12 x 3.5 in.) are daisy-chained below it in a single
enclosure “stack” by SAS wide cables (4 links at 6 Gb/sec each), up to 120 disks
(SAS, SSD) in total.

This is a view of the attachment of 2 types of trays (in just 1 “stack”) to the 2
controllers. There are eight 6 Gb/sec SAS links in all, and 2 SAS engines.

HDS Confidential: For distribution only to authorized parties. Page 3-23


Modular Storage Architecture — Part 1
HUS 110 Configurations

HUS 110 Configurations

Controller Box (Base Units)
 2U Block Module: 2.5 in. x 24 internal HDDs, or 3.5 in. x 12 internal HDDs
 3U File Module: single node standard, 2 node cluster optional

Drive Box (Expansion Units)
 2U LFF Standard Disk Tray: 3.5 in. HDD x 12 (SAS 7.2k)
 2U SFF Dense Disk Tray: 2.5 in. HDD x 24 (SAS 10k, SAS 15k and flash drive)
The drives may be any mixture that is supported by that tray type.
The HUS 110 system chassis has a built‐in 3.5 in. 12‐disk or 2.5 in. 24‐disk bay. It
can be expanded by use of nine 3.5 in. 12‐disk trays or four 2.5 in. 24‐disk trays (or
some combination).
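Either expansion path reaches the same 120-disk maximum, as the quick check
below shows:

    # HUS 110 maximum disk counts from the description above.
    print(24 + 4 * 24)   # 120: internal 24-slot SFF bay plus four 24-slot trays
    print(12 + 9 * 12)   # 120: internal 12-slot LFF bay plus nine 12-slot trays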

Page 3-24 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
HUS 110 (Block Module) Overview

HUS 110 (Block Module) Overview

 Up to 120 disks

• Up to 15 SSDs

• SAS disks (1 internal 12/24 disk bay, up to four 24-slot or nine 12-slot disk trays)

 Two controllers

 Up to 12 host ports

 8 SAS back end links (6 Gb/sec)

 8 GB cache (21.2 GB/sec cache bandwidth)

 2048 LUNs and 50 RAID groups total

 I/O request limit of 512 tags per port

Host ports
 Eight 8 Gb/sec FC ports, plus a pair of optional daughter cards that provide 4 x [1
GigE iSCSI or 10 Gb/sec iSCSI] ports
OR
 Four x [1 GigE iSCSI or 10 Gb/sec iSCSI] ports (all FC ports are license key
disabled)

HDS Confidential: For distribution only to authorized parties. Page 3-25


Modular Storage Architecture — Part 1
HUS 130 (Block Module) Architecture

HUS 130 (Block Module) Architecture

HUS 130 does not have I/O modules.


The controller mother board has the following:
 1 Hitachi DCTL-H RAID ASIC
 1 low power Intel Xeon Core i CPU (1.73 GHz, dual core)
 3.5 GB of CPU RAM (local workspace)
 2 SAS ports
 4 Fibre Channel ports
 2 Ethernet ports
 1 slot for the expansion card

Page 3-26 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
External Disk Tray Attachment — HUS 130

External Disk Tray Attachment — HUS 130

[Diagram] On HUS 130, each controller has one disk-side SAS controller providing
8 x 6 Gb/sec links, arranged as 2 enclosure “stacks.” The internal disk bay (12 LFF or
24 SFF slots) heads one stack, and additional trays (24 x 2.5 in., 12 x 3.5 in. or 48-slot
dense) are daisy-chained by SAS wide cables (4 links at 6 Gb/sec each).

This is a view of the attachment of three types of trays (in 2 “stacks”) to the 2
controllers. There are sixteen 6 Gb/sec SAS links in all, and 2 SAS engines.

HDS Confidential: For distribution only to authorized parties. Page 3-27


Modular Storage Architecture — Part 1
HUS 130 Configurations

HUS 130 Configurations

Controller Box (Base Units)
 2U Block Module: 2.5 in. x 24 internal HDDs, or 3.5 in. x 12 internal HDDs
 3U File Module: single node standard, 2 node cluster (optional)

Drive Box (Expansion Units)
 2U LFF Standard Disk Tray: 3.5 in. HDD x 12 (SAS 7.2k)
 2U SFF Dense Disk Tray: 2.5 in. HDD x 24 (SAS 10k, SAS 15k and flash drive)
 4U LFF Dense Disk Tray: 3.5 in. HDD x 48 (SAS 7.2k)

The drives may be any mixture which is supported by that tray type. The 130 system
chassis has a built‐in 3.5 in. 12‐disk or a 2.5 in. 24‐disk bay. It can be expanded by
use of nineteen 3.5 in. 12‐disk trays, ten 2.5 in. 24‐disk trays, or five 3.5 in. 48‐disk
drawers (or some combination).

Page 3-28 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
HUS 130 (Block Module) Overview

HUS 130 (Block Module) Overview

 Up to 264 disks

• Up to 15 SSDs

• SAS disks (1 internal 12/24 disk bay, up to ten 24-slot, nineteen 12-slot, or five
48-slot disk trays)

 Two controllers

 Up to 16 host ports:
• Eight 8 Gb/sec FC (built-in)
• Optional: eight 8Gb/sec FC (4-port modules) or four 10Gb/sec iSCSI
(2-port modules)

 16 SAS back-end links (6 Gb/sec)

 16 GB cache (42.4 GB/sec cache bandwidth)

 4096 LUNs and 75 RAID groups total

 I/O request limit of 512 tags per port, 8 MB maximum transfer size

HDS Confidential: For distribution only to authorized parties. Page 3-29


Modular Storage Architecture — Part 1
HUS 150 (Block Module) Architecture Overview

HUS 150 (Block Module) Architecture Overview

HUS 150 supports I/O modules.


The controller mother board has the following:
 1 Hitachi DCTL-H RAID ASIC
 1 low power Intel Xeon Core i CPU (1.73GHz, dual core)
 3.5 GB of CPU RAM (local workspace)
 4 SAS ports
 4 Fibre Channel ports
 2 Ethernet ports
 4 I/O modules

Page 3-30 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
External Disk Tray Attachment — HUS 150

External Disk Tray Attachment — HUS 150

[Diagram] On HUS 150, each controller has four disk-side I/O modules, each with a
SAS controller providing 8 x 6 Gb/sec links, for 4 enclosure “stacks” in total. Trays
(24 x 2.5 in., 12 x 3.5 in. or 48-slot high density) are daisy-chained in each stack by
SAS wide cables (4 links at 6 Gb/sec each).

This is a view of the attachment of the 3 types of trays (in 4 “stacks”) to the 2
controllers. There are 32 X 6 Gb/sec SAS links in all, and 4 SAS engines.

HDS Confidential: For distribution only to authorized parties. Page 3-31


Modular Storage Architecture — Part 1
HUS 150 Configurations

HUS 150 Configurations

Controller Box (Base Units)
 3U Block Module: no internal HDDs
 3U File Module: 2 node cluster standard

Drive Box (Expansion Units)
 2U LFF Standard Disk Tray: 3.5 in. HDD x 12 (SAS 7.2k)
 2U SFF Dense Disk Tray: 2.5 in. HDD x 24 (SAS 10k, SAS 15k and flash drive)
 4U LFF Dense Disk Tray: 3.5 in. HDD x 48 (SAS 7.2k)

The disks may be any mixture of SAS or SSD drives that the HUS 100 family
supports. The 150 has no built‐in disk bay, but the system can support up to forty 3.5
in. 12‐disk trays, forty 24‐disk 2.5 in trays, or twenty 3.5 in. 48‐disk dense drawers
(or some combination). Due to SAS link timing windows, HUS 150 is limited to
having forty trays (as 10 per “stack”), so with the exclusive use of the 12‐disk trays,
the disk limit is 480.
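The limits described above work out as follows (a dense drawer counts as 2 logical
trays against the 40-tray limit):

    # HUS 150 maximum disk counts per tray type, from the paragraph above.
    print(40 * 24)   # 960 disks with 24-slot SFF trays
    print(20 * 48)   # 960 disks with 48-slot dense drawers (2 logical trays each)
    print(40 * 12)   # 480 disks with 12-slot LFF trays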

Page 3-32 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
HUS 150 Overview

HUS 150 Overview

 Up to 960 disks

• Up to 30 SSDs

• SAS disks (up to forty 24-slot, eighty 12-slot, or twenty 48-slot external disk trays)

 Two controllers

 Up to 16 Host ports:
• Eight or sixteen 8 Gb/sec FC (4-port modules)
• Eight 8 Gb/sec FC and four 10 Gb/sec iSCSI (2-port modules)
• Eight 10 Gb/sec iSCSI (2-port modules)

 32 SAS link back-end (6 Gb/sec)

 32GB cache (21.2 GB/sec cache bandwidth)

 4096 LUNs and 200 RAID groups total

 I/O request limit of 512 tags per port, 8 MB maximum transfer size

HDS Confidential: For distribution only to authorized parties. Page 3-33


Modular Storage Architecture — Part 1
Module Summary

Module Summary

 The two modules, Modular Storage Architecture — Part 1 and Part 2,


describe the characteristics of Hitachi modular storage systems
• Front-end architecture that relates to performance optimization
• Cache architecture that relates to performance optimization
• Back-end architecture that relates to performance optimization

Page 3-34 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 1
Module Review

Module Review

1. Which statements are true related to Hitachi Unified Storage? (Select two.)
A. The HUS family has 6Gb/sec SAS back end ports.
B. The HUS family supports FC disks.
C. The HUS 130 supports 8GB DIMMs.
D. The HUS controller cache is backed up to flash for unlimited retention.
E. The AMS2500 model can be upgraded to HUS 150.

2. A customer purchases the following configuration — a CBSS with 3 DBLs. What is


the total number of disks, assuming all trays are full?
A. 96 disks
B. 60 disks
C. 128 disks
D. 48 disks

3. Which statement is false?


A. DBX supports NLSAS disks only.
B. SSD disks do not have an rpm rating as they do not have mechanical components.
C. HUS 110 has 8 SAS links per controller.
D. 8GB DIMMs are supported in HUS 150 only.

HDS Confidential: For distribution only to authorized parties. Page 3-35


Modular Storage Architecture — Part 1
Module Review

Page 3-36 HDS Confidential: For distribution only to authorized parties.


4. Modular Storage
Architecture — Part 2
Module Objectives

HDS Confidential: For distribution only to authorized parties. Page 4-1


Modular Storage Architecture — Part 2
Hitachi Adaptable Modular Storage 2000 Architecture

Hitachi Adaptable Modular Storage 2000 Architecture

Product Line Positioning


[Positioning chart] The Adaptable Modular Storage 2100, 2300 and 2500 scale up in
both price and performance/connectivity/functionality, with the 2100 at the entry
point and the 2500 at the top.

First released in August 2008, the Hitachi Adaptable Modular Storage (AMS) family
replaced the previous generation of midrange Hitachi modular storage systems. The
AMS systems have much higher performance and incorporate several significant
design changes.

Page 4-2 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
Modular Storage Components

Modular Storage Components

 Key components of a Hitachi modular storage system:


• Controller CPU
• Array port
• Cache
• Disks and array group
• SAS controller

HDS Confidential: For distribution only to authorized parties. Page 4-3


Modular Storage Architecture — Part 2
AMS 2100 Overview

AMS 2100 Overview

Rev 1
[Diagram callouts]
 Access other CTL ports without involving the other processor
 Single core CPU with local RAM
 4 Gb/sec; 8-lane PCI Express (no more dual bus or PCI-X)
 DCTL: RAID-XOR, datapath “pump”
 Controller cache
 8 path/link SAS CTL
 Mix SAS and SATA in the same chassis
 8 x 3 Gb links per chassis
 Back-end load balancing (by LUN)

An Adaptable Modular Storage (AMS) 2100 storage system consists of 2 controllers,


either 4GB or 8GB of cache, and two choices of host port types with up to eight ports
total.
Cache
The memory bandwidth between each DCTL processor and its local cache is
4GB/sec. The 2100 has one bank of cache (with one DIMM slot) per controller, so
total cache will either be 4GB (2GB DIMMs) or 8GB (4GB DIMMs). There are two
2GB/sec inter-controller communications busses between the DCTL processors for
commands and communications, maintenance operations and duplexed (mirrored)
cache operations.
Disks
The 2100 has 16 SAS back end disk paths controlled by the two SAS engines and up
to 159 SAS, SSD, or SATA disks in the system. The disks may be any mixture of SAS,
SSD, or SATA disks that the Adaptable Modular Storage 2000 family supports. The
2100 supports up to 15 SSDs, which may be installed in any of the 15-disk enclosures
(intermixed). The 2100 chassis has a built-in 15-disk enclosure (RK), and the system
can grow by up to seven external disk enclosures (RKAK) or up to three dense trays
(RKAKX).

Page 4-4 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
AMS 2100 Overview

Rev 2

At the core of each REV2 2100 controller there is a 10th-generation Hitachi DCTL
processor (a high-performance RAID and DMA engine) and an Intel Core Duo
Celeron processor.
Each Adaptable Modular Storage 2100 controller (2 per system) includes:
 A DCTL processor (the I/O “pump” with cache control and RAID-XOR
functions)
 A 1.67 GHz Intel Core Duo Celeron Value series (low voltage) processor and 1
GB of local memory (not cache); this processor is the microcode engine or the
I/O management “brains”
 Cache: 2 GB (1 x 2 GB DIMM) or 4 GB (1 x 4 GB DIMM) per controller
 2 or 4 host ports, including:
 2 x 8 Gb/sec FC ports, plus 1 optional daughter card that provides:
 2 x 8 Gb/sec FC ports
OR
 2 GigE iSCSI ports
 1 SAS IOC controller providing eight active back-end SAS disk links
 2 GB/sec PCI Express (PCIe) internal busses

HDS Confidential: For distribution only to authorized parties. Page 4-5


Modular Storage Architecture — Part 2
AMS 2300 Overview

AMS 2300 Overview

Rev 1
[Diagram callouts] More FC ports; faster cache access; 4 Gb/sec

Adaptable Modular Storage 2300 consists of 2 controllers, either 8 GB or 16 GB of


cache, and 2 choices of host port types with up to 16 ports total.
Cache
The memory bandwidth between each DCTL processor and its local cache is 8
GB/sec. AMS 2300 has two banks of cache (with one DIMM slot per bank) per
controller, so total cache will either be 8GB (2GB DIMMs) or 16GB (4GB DIMMs).
There are two 2 GB/sec inter-controller communications busses between the DCTL
processors for commands and communications, maintenance operations and
duplexed (mirrored) cache operations.
Disks
AMS 2300 has 16 SAS back-end disk paths controlled by the 2 SAS engines and up to
240 SAS, SSD or SATA disks in the system. The disks may be any mixture of SAS,
SSD or SATA disks that the Adaptable Modular Storage 2000 family supports. AMS
2300 supports up to 15 SSDs, which may be installed in any of the 15-disk enclosures
(intermixed). AMS 2300 chassis has a built-in 15-disk enclosure (RK), and the system
can grow by up to 7 external disk enclosures (RKAK) or up to 4 dense trays
(RKAKX).

Page 4-6 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
AMS 2300 Overview

Rev 2

At the core of each REV2 2300 controller there is a tenth generation Hitachi DCTL
processor (a high-performance RAID and DMA engine) and an Intel Core Duo
Celeron processor.
Each Adaptable Modular Storage 2300 controller (2 per system) includes:
 A DCTL processor (the I/O “pump” with cache control and RAID-XOR
functions)
 A 1.67 GHz Intel Core Duo Celeron Value series (low voltage) processor and 1
GB of local memory (not cache); this processor is the microcode engine or the
I/O management “brains”
 Cache: 4 GB (2 GB DIMMs) or 8 GB (4 GB DIMMs) per controller
 4, 6 or 8 host ports, including:
 4 8 Gb/sec FC ports, plus one optional daughter card that provides:
 4 8 Gb/sec FC ports
OR
 2 GigE iSCSI ports
 1 SAS IOC controller providing eight active back-end SAS disk links
 2 GB/sec PCI Express (PCIe) internal busses

HDS Confidential: For distribution only to authorized parties. Page 4-7


Modular Storage Architecture — Part 2
AMS 2500 Overview

AMS 2500 Overview

Rev 1
[Diagram callouts] More FC ports; dual core, faster processors; 4 GB/sec

Adaptable Modular Storage 2500 consists of 2 controllers, either 16 GB or 32 GB of


cache, and 2 choices of host port types with up to 16 ports total. At the core of each
REV2 2500 controller there is a tenth generation Hitachi DCTL processor (a high-
performance RAID and DMA engine) and an Intel Core Duo Xeon processor with
dual cores (X and Y).
Cache
The memory bandwidth between each DCTL-H processor and its local cache is 8
GB/sec. AMS 2500 has 2 banks of cache (with 2 DIMM slots) per controller, so total
cache will either be 16 GB (2 GB DIMMs) or 32 GB (4 GB DIMMs). There are two 2
GB/sec inter-controller communications busses between the DCTL processors for
commands and communications, maintenance operations and duplexed (mirrored)
cache operations.
Disks
AMS 2500 has 32 SAS back end disk paths controlled by the 4 SAS engines and up to
480 SAS, SSD or SATA disks in the system. The disks may be any mixture of SAS,
SSD or SATA disks that the Adaptable Modular Storage 2000 family supports. AMS
2500 supports up to 30 SSDs, which may be installed in any of the 15-disk enclosures
(intermixed). AMS 2500 has no built-in 15-disk enclosure, but can support up to 32
external 15-disk enclosures (RKAK) or up to 10 dense trays (RKAKX).

Page 4-8 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
AMS 2500 Overview

Rev 2

Each Adaptable Modular Storage 2500 controller (2 per system) includes:


 A DCTL processor (the I/O “pump” with cache control and RAID-XOR
functions)
 A 2 GHz Intel Core Duo Xeon dual core (cores X and Y) processor and 2 GB of
local memory (not cache); this processor is the microcode engine or the I/O
management “brains”
 Cache: 8 GB (2 GB DIMMs) or 16 GB (4 GB DIMMs) per controller
 4 daughter card slots, each providing:
 4 8 Gb/sec FC ports
OR
 2 GigE iSCSI ports
 2 SAS IOC controllers providing 16 active back-end SAS disk links (one is
controlled by Core X and the other by Core Y)
 2 GB/sec PCI Express (PCIe) internal busses

HDS Confidential: For distribution only to authorized parties. Page 4-9


Modular Storage Architecture — Part 2
AMS 2000 Architecture Introduction

AMS 2000 Architecture Introduction

 Details and usage options of the new design, including:


• Tachyon processor features
• Active-active symmetric front end design
• Hardware I/O load balancing feature
• SAS active matrix engine

Page 4-10 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
AMS 2000 Architecture — Details and Usage Options

AMS 2000 Architecture — Details and Usage Options

AMS 2000 Architecture


Details and usage options of the new
design, including:
 Tachyon processor features
 Active-active symmetric front end design
 Hardware I/O load balancing feature
 SAS active matrix engine

This section discusses some of the details and usage options of the new design.

HDS Confidential: For distribution only to authorized parties. Page 4-11


Modular Storage Architecture — Part 2
Logical View

Logical View

Page 4-12 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
Tachyon Features

Tachyon Features

 Uses the QE8 Tachyon processor


 Functions:
• Conversion of the Fibre Channel transport protocol to the PCIe
bus for use by 1 to 4 controller processors
• Provides simultaneous full duplex
operations of each of 2 or 4
ports
• Multiple DMA transfer modes
available for directly writing blocks
to cache
• Error detection and reporting
• Packet CRC encode and decode
offload engine

A Tachyon processor (single chip) is used to bridge the Fibre Channel host
connection to a usable form for internal use by the Xeon and DCTL processors.
The QE8 processors can provide very high levels of performance as they are
connected to the same high-performance, 2 GB/sec PCIe bus as the Xeon controller
and the DCTL RAID processor. The QE8 processor can drive all of its 8 Gb/sec ports
at full speed, whereas the older DX4 chip can only drive one of its two 4 Gb/sec
ports at full speed. However, for random small block loads, the controller cannot
drive all four of the QE8’s 8 Gb/sec ports at full speed.
Note that each QE8 operates as a companion processor to the Celerons/Xeons and
DCTL processors rather than a directly connected slave chip to the Xeon processor.
A useful application of this design is that when a management processor (the
Celeron/Xeon CPU) reboots for a microcode upgrade or certain configuration
changes, the reboot does not also reset the QE8 processor(s) on that same controller.
However, during a reboot the Tachyon processor(s) will tell the host ports on the
servers to suspend I/O for a short period to keep the connections alive, rather than
appear to be a port that has stopped working. This will prevent servers from taking
a port offline and switching to a failover port.

HDS Confidential: For distribution only to authorized parties. Page 4-13


Modular Storage Architecture — Part 2
Active-Active Symmetric Front End Design

Active-Active Symmetric Front End Design

 Allows for:
• Simultaneous host access of any LUN on any host port on either
controller with very little added overhead
• Concurrent use of operating system native path management and host
managed load balancing (such as Hitachi Dynamic Link Manager)
 LUNs are alternately assigned to different controllers and cores
 No manual assignment necessary for most general workload
installations
 System automatically balances loads internally
• Dynamic load balancing does not change host path access scheme

The Hitachi Adaptable Modular Storage 2000 family introduced the active-active
symmetric front-end design, a totally new concept to the midrange storage system
arena. The rigid concept of LUN ownership by controller has been replaced with a
more powerful method of LUN Management. Rather than a simple LUN ownership
by controller, now there is a dynamic, global table of all configured LUNs that
determines which controller will execute the back-end portion of an I/O request for
a LUN. This control list is independent of which front-end port (either controller) is
involved in the host I/O request. The active-active symmetric front end enables this
new capability and the corresponding freedom from micromanaging the appearance
of LUNs on certain paths for certain hosts.
All LUNs are initially automatically assigned by Hitachi Storage Navigator Modular
2 (HSNM 2) on a generally round robin basis to the controller I/O management lists
as LUNs are created. The table of I/O management is changed over time by the
operation of the hardware I/O load balancing feature (described below) that, if
enabled, will move certain LUNs from one controller’s management list to the other
controller. Management may also be changed manually via HSNM 2.
The active-active symmetric controller front end architecture allows for the access of
any LUN over any front-end host port on either controller. Note that managing the
inbound I/O request from a host on a port is an independent operation from the
actual execution of the I/O request.

Page 4-14 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
Hitachi Dynamic Load Balancing Controller Architecture

Hitachi Dynamic Load Balancing Controller Architecture

Simplified Installation
Benefits: quick and easy setup at installation:
1. No need to set controller ownership for each LU
2. Set the host connection port without regard to controller ownership

[Figure: traditional array versus AMS 2000 family. In a traditional array, LUs are
created and paths must be set to match the controller ownership of the LUs. With
the AMS 2000 family, HBAs can connect to any controller and the user is not
required to set paths. The number beneath each LUN indicates its owning
controller.]

The active-active symmetric controller that comes with the AMS 2000 simplifies the
setup and maintenance of the system. The administrator is not required to assign
controller ownership to each LU, nor to set controller ownership for each host
port.

HDS Confidential: For distribution only to authorized parties. Page 4-15


Modular Storage Architecture — Part 2
Hitachi Dynamic Load Balancing Controller Architecture

Coordination With Path Management Software


Benefits: when used with the front-end load balancing function of path
management software, the AMS 2000 family can respond with balanced performance
over multiple data paths.

[Figure: traditional array versus AMS 2000 family, each with path manager (load
balancing) software on the hosts and standard settings, with primary and alternate
paths shown. With the traditional array the user is required to set paths to match
LUN ownership; with the AMS 2000 family the user is not required to set paths. The
number beneath each LUN indicates its owning controller.]

Server-based path management software will balance the I/O load between the
controllers.

Page 4-16 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
Back End Load Balancing

Back End Load Balancing

Automatic Internal Load Balancing


Benefits  Optimal performance is achieved with minimal input from storage
administrator.
 Load balancing occurs automatically and evens out the utilization
rates of both controllers.
[Figure: traditional array versus AMS 2000 family. With a traditional array the
administrator must manually rebalance: (1) send performance information,
(2) analysis, (3) planning, (4) change settings, (5) change paths. With the AMS 2000
family the performance monitor and load balance function act automatically: when
one controller is around 70% CPU usage and the other around 10%, LUN management is
moved until both settle at roughly 40%. Unique for midrange. The number beneath
each LUN indicates its owning controller.]

The back end load balancing capabilities of the AMS 2000 family are unique among
modular products. If the CPU utilization rate for a controller exceeds 70% while the
utilization rate for the other controller is less than 10%, the system will
automatically balance the load by re-routing traffic to the underutilized controller.
This optimizes system performance without requiring intervention by the
administrator.
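
The rule just described can be pictured with a short sketch. This is an illustrative
model only, not the actual AMS firmware logic; the 70%/10% thresholds come from the
text above, while the controller names, the per-LUN load figures and the
move-one-LUN-per-pass policy are assumptions made for the example.

# Illustrative sketch of the automatic load balancing rule described above.
# Not the actual AMS firmware logic: thresholds are from the text (one controller
# above 70% busy, the other below 10%); data structures and the move policy are
# assumptions for illustration.
BUSY_THRESHOLD = 0.70
IDLE_THRESHOLD = 0.10

def rebalance_once(ctl_busy, lun_load, lun_owner):
    """Move management of the busiest LUN from the busy controller to the idle
    one, a single LUN per pass (the system repeats such passes periodically)."""
    busy_ctl = max(ctl_busy, key=ctl_busy.get)
    idle_ctl = min(ctl_busy, key=ctl_busy.get)
    if ctl_busy[busy_ctl] < BUSY_THRESHOLD or ctl_busy[idle_ctl] > IDLE_THRESHOLD:
        return None                       # no significant imbalance
    candidates = [l for l, owner in lun_owner.items() if owner == busy_ctl]
    mover = max(candidates, key=lambda l: lun_load[l])   # hardest hit LUN
    lun_owner[mover] = idle_ctl
    return mover

ctl_busy  = {"CTL0": 0.70, "CTL1": 0.10}                 # CPU busy fractions
lun_load  = {"LUN0": 900, "LUN1": 300, "LUN2": 150, "LUN3": 50}    # IOPS per LUN
lun_owner = {"LUN0": "CTL0", "LUN1": "CTL0", "LUN2": "CTL1", "LUN3": "CTL1"}
print(rebalance_once(ctl_busy, lun_load, lun_owner))     # -> LUN0 moves to CTL1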

HDS Confidential: For distribution only to authorized parties. Page 4-17


Modular Storage Architecture — Part 2
Hitachi Dynamic Load Balancing Controller

Hitachi Dynamic Load Balancing Controller

Symmetric Active-Active Controllers: During normal operation I/O can travel down
any available path to any front-end port using the native host OS multipathing
capabilities. This allows I/O to be balanced over all available paths without
impacting performance.

Automatic Controller Load Balancing: If the load on the controllers becomes
unbalanced, it will automatically be balanced without the host having to adjust its
primary path.

Symmetric Active-Active and Load Balancing Controllers: a powerful combination!
With features traditionally found only in Enterprise class storage arrays, the
AMS 2000 family can dynamically balance I/O over any available path through any
front end port on either controller. All this is done without having to load any
proprietary path management software.

[Figure: hosts attached through SWITCH 1 and SWITCH 2 to the AMS 2500 front-end
ports on CTRL-0 and CTRL-1, with high speed buses to the back-end controllers;
I/O is redirected to the less busy controller.]

Page 4-18 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
Dynamic Load Balancing of RGs/LUNs to Less Utilized Controller

Dynamic Load Balancing of RGs/LUNs to Less Utilized Controller

[Figure: LUN activity on CTL-0 and CTL-1 over time.]

As the workload increases, and when CTL utilization reaches 70% busy, LUNs start
to switch across to the other CTL, one at a time, every minute, until the load is
evenly distributed.

HDS Confidential: For distribution only to authorized parties. Page 4-19


Modular Storage Architecture — Part 2
AMS 2x00 — Owning Controller

AMS 2x00 — Owning Controller

 A new meaning for owning controller (D-CTL and C-CTL)


 Does not have same critical importance as with other modular
Asymmetric Logical Unit Access (ALUA) storage systems
 Meaning is closer to managing controller
• SCSI commands are handled by owning controller
• DCTL hardware in owning controller duplexes the data into both sets of
cache — that is, no overhead
• Disk data access is by the owning controller
• Host data access is by same controller as accessing port
 AMS 2x00 controllers can accommodate non-owning commands
without the historical overheads
 All multipath schemes are supported without special settings

A completely new feature for the Adaptable Modular Storage 2000 family is that of
hardware back-end load balancing between the controllers. On an Adaptable
Modular Storage 2000 system, all back-end I/O for a LUN is performed by the
controller that currently manages that LUN (a very different mechanism than the
normal LUN ownership by controller). If there is an ongoing imbalance of loads
between the controllers, such as one averaging 72% busy and the other averaging 30
percent busy, the load balancing mechanism will decide to move management of
some of the hardest hit LUNs to the non-managing controller. This will shift the
back-end workload for those LUNs to the underutilized controller. Note, this is
independent of which host ports are accessing that LUN — a key observation to
make.

Page 4-20 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
Active-Active — Full Cache Mirroring

Active-Active — Full Cache Mirroring

 Active-active commands from Port 0A and Port 1A
 Data is fetched from the same cache as the requesting port

[Figure: full cache mirroring (command and read data flow). The owning controller
processes the I/O; the DCTL fills both caches over the 4 GB/sec mirror path using
hardware mirror logic.]

HDS Confidential: For distribution only to authorized parties. Page 4-21


Modular Storage Architecture — Part 2
AMS 2000 Cache Access

AMS 2000 Cache Access

Cache has a local data region as well as a separate mirror region that is updated by
the other controller. Not only are all blocks that are being written to disk mirrored in
the other controller’s cache, but all data blocks being read are copied there as well.
Therefore, each mirror region is an exact copy of the other controller’s local data
region. Each controller can supply cache hits for host reads from this mirror region
in local cache for LUNs managed by the other controller. Therefore, any host port
can have an I/O request for any LUN satisfied by the local controller through its
DCTL chip and local cache.
Figure above illustrates the two types of active-active symmetric operations that
occur on the Adaptable Modular Storage 2000 family. When an inbound I/O request
arrives at a port on the controller that currently manages that LUN (red traces), the
I/O operation is called “Direct Mode.” When an inbound I/O request arrives at a
port on the controller that does not currently manage that LUN (dotted blue traces),
this is referred to as “Cross Mode.” When both “Cross” and “Direct” I/O modes are
present (probably the normal operating state), this is called “Mixed Mode.” In
Adaptable Modular Storage 2300 lab tests, there was a fairly small overhead
measured (1% to 4% typically) for a Cross mode test when running a 50:50 mix of
Direct and Cross, using a large number of LUNs and running heavy concurrent test
workloads on 4 ports.

Page 4-22 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
Point-to-Point SAS Back-End Architecture

Point-to-Point SAS Back-End Architecture

 Up to 32 high performance
3 Gb/sec point-to-point links
for 9600 MB/sec system
bandwidth
 Each link is available for
concurrent I/O
 No loop arbitration
bottlenecks
 8 SAS links to every disk
tray for redundancy
 Common disk tray for SAS
and SATA drives
 Any failed component is
identified by its unique
address

Beyond the performance advantages that a SAS back-end design provides, there are
also a number of simplicity advantages. The layout of the connections is simple and
easy to do. The cables and connections are keyed to ensure correct installations.
Finally, SAS and SATA disk drives use the same drive tray. This ensures maximum
system flexibility and reduces the cost of setting up intermixed environments.
The Hitachi Adaptable Modular Storage 2000 family, with its advanced point-to-point
back end, provides higher throughput and greater I/O concurrency than designs
using legacy Fibre Channel Arbitrated Loop.
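
As a quick sanity check on the bandwidth figure quoted on the slide, the per-link
payload rate can simply be multiplied out. This is back-of-the-envelope arithmetic,
assuming the usual 8b/10b encoding of 3 Gb/sec SAS (roughly 300 MB/sec of payload
per link); it is not a measured result.

# Back-of-the-envelope check of the 9600 MB/sec back-end bandwidth figure.
# Assumes 8b/10b encoding on 3 Gb/sec SAS links (10 line bits per payload byte).
link_rate_gbps = 3                                   # 3 Gb/sec per SAS link
payload_mb_per_link = link_rate_gbps * 1000 / 10     # about 300 MB/sec per link
links = 32                                           # AMS 2500 back-end SAS links
print(links * payload_mb_per_link)                   # -> 9600.0 MB/sec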

HDS Confidential: For distribution only to authorized parties. Page 4-23


Modular Storage Architecture — Part 2
Point-to-Point SAS Back-End Architecture

Each model in the family has a controller tray with dual controller boards. While the
AMS 2500 controller tray does not have any drive slots, the AMS 2100 and AMS
2300 controller trays have slots for the installation of up to 15 internal disks in the
controller tray. Additional storage can be added to the system by installing disk
trays. Standard disk trays are 3U high and have 15 disk slots each while high density
disk trays are 4U high and have 48 high capacity SATA disk slots.
Every controller board in a system controller tray has a DCTL RAID processor that
connects to either 1 or 2 SAS I/O controller processors (IOC) depending on the
model. AMS 2100 and AMS 2300 have 1 IOC per controller board while AMS 2500
has 2. Each of the IOCs has two SAS multiplexor (SMX) ports that connect to a SAS
expander (SXP) on one of the internal or external disk trays. The connection is made
with a wide (x4) SAS cable that provides four 3 Gb/sec SAS links. An illustration of
the SAS IOC and its connections is shown in Figure above.
The SXP is a SAS expander processor and functions to establish the links between
components. Two SXPs reside on each controller tray with internal disks (AMS 2100
and AMS 2300 systems) and each standard disk tray. There are 4 SXPs on each high
density disk tray. On a standard disk tray, 1 SXP is connected to Controller-0 and
the other is connected to Controller-1. As a result, each controller board has access to
all of the disk trays. Each disk tray has eight SAS links for communications between
the disks and the controllers.

Page 4-24 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
AMS 2x00 Back-End and Chassis Connectivity

AMS 2x00 Back-End and Chassis Connectivity

[Figure: back-end connectivity. A 4 x 15 way switch matrix provides dynamic link
assignment to HDDs (per I/O). HDDs are dual ported. Each wide link carries
12 Gb/sec (four 3 Gb/sec SAS links). SAS and SATA drives intermix in the same
tray.]

Figure above illustrates how the SXPs access the disk drives in each standard tray.
Since a SATA disk drive can be installed in a SAS slot, there is a common tray for
SAS and SATA disks. Each SAS disk is dual ported so that either SXP can access the
drive. The SATA disks are connected to an AA MUX (multiplexor) component,
inside the disk canister, that allows a single port SATA disk to be connected to each
SXP.
Whenever an I/O request is processed by the controller board, the SAS IOC will use
1 of its 8 SAS links for communicating with a disk. Eight links are available on each
of the controllers of the AMS 2100 and AMS 2300 systems and 16 links are available
on each controller of the AMS 2500. Therefore, these systems have plenty of back
end bandwidth available. The SMX port on the controller will route the I/O to the
first SXP expander. The SXP chip provides the actual connection between the IOC
SAS links and the disks in an enclosure. Since the AMS 2100 and AMS 2300
controllers have internal disks, there are 2 SXP chips in their controller modules.
There are no SXP chips in the AMS 2500 controller module since there are no
internal disks.
If the target disk address resides on the same tray as the first SXP, then the expander
will route the request from 1 of its 4 SAS links to the disk. Alternatively, if the target
disk address is not one of the disks on the same tray as the SXP, then the I/O will be
routed out the back of the expander to the next disk tray to which it is connected.
This process continues until the SXP is reached that has a direct connection to the
targeted disk.

HDS Confidential: For distribution only to authorized parties. Page 4-25


Modular Storage Architecture — Part 2
Disk Types

Disk Types

HDD Type                Advertised   Actual Capacity   Actual Capacity   Typical Physical
                        Size (GB)    (base 10)         (base 2)          Max. IOPS
2000 GB SATA 7.2k RPM   2000         1967.4            1832.2             80
1000 GB SATA 7.2k RPM   1000          983.7             916.1             80
600 GB SAS 15k RPM       600          585 (est.)        545 (est.)       180
450 GB SAS 15k RPM       450          439.44            409.26           180
300 GB SAS 15k RPM       300          287.62            267.9            180
200 GB SSD SLC type      200          195.82            191.2              –

Note: Maximum number of SAS SSDs that may be installed per system is: 15 (2100,
2300) or 30 (2500).

Page 4-26 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
RAID Levels

RAID Levels

                     RAID-1/RAID-1+0          RAID-5                   RAID-6

Description          Mirroring / data         Data striping with       Data striping with 2
                     striping and mirroring   distributed parity       distributed parity

Minimum number       2/4                      3                        4
of disks

Maximum number       2/16                     16                       30
of disks

Benefits             Highest performance      Best balance of cost,    Balance of cost, with
                     with data redundancy     reliability and          extreme emphasis on
                                              performance              reliability

Disadvantages        Higher cost per number   Performance penalty      Performance penalty
                     of physical disks        for high percentage      for all writes
                                              of random writes

HDS Confidential: For distribution only to authorized parties. Page 4-27


Modular Storage Architecture — Part 2
RAID Overhead

RAID Overhead

System IOPS per host I/O (SATA, SAS or SSD):

            Read   Write
RAID-1+0     1      2
RAID-5       1      4
RAID-6       1      6

Heavy writes/updates with RAID-5 can cause the data controller and RAID groups
to become busy (due to write penalty).
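
The table above can be expressed as a small helper for quick estimates. This is a
generic rule-of-thumb calculation; it ignores cache hits and sequential or
full-stripe write optimizations, and it is not the output of any Hitachi sizing
tool.

# Back-end disk operations generated per host I/O, from the table above.
# Rule of thumb only: ignores cache hits and sequential/full-stripe writes.
RAID_PENALTY = {"RAID-1+0": {"read": 1, "write": 2},
                "RAID-5":   {"read": 1, "write": 4},
                "RAID-6":   {"read": 1, "write": 6}}

def backend_iops(host_iops, read_fraction, raid_level):
    """Estimate total back-end IOPS for a host workload."""
    reads  = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    p = RAID_PENALTY[raid_level]
    return reads * p["read"] + writes * p["write"]

# 5000 host IOPS at 60% reads: RAID-5 drives far more back-end I/O than RAID-1+0
print(backend_iops(5000, 0.60, "RAID-1+0"))   # 3000 + 2*2000 = 7000
print(backend_iops(5000, 0.60, "RAID-5"))     # 3000 + 4*2000 = 11000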

Page 4-28 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
Module Summary

Module Summary

 The two modules, Modular Storage Architecture — Part 1 and Part 2,


describe the characteristics of Hitachi modular storage systems:
• Front-end architecture that relates to performance optimization
• Cache architecture that relates to performance optimization
• Back-end architecture that relates to performance optimization

HDS Confidential: For distribution only to authorized parties. Page 4-29


Modular Storage Architecture — Part 2
Module Review

Module Review

1. Which feature of the Hitachi Adaptable Modular Storage family


allows for a host I/O present on any front end port to be processed
by either controller?
A. Active-Active Symmetric Front End
B. Hitachi Dynamic Provisioning
C. Enclosure Flexibility
D. SAS Matrix Engine

Page 4-30 HDS Confidential: For distribution only to authorized parties.


Modular Storage Architecture — Part 2
Module Review

2. Which feature of the Hitachi Adaptable Modular Storage Family


addresses the problem of allocated but unused space?
A. Active-Active Symmetric Front End
B. Hitachi Dynamic Provisioning
C. Enclosure Flexibility
D. SAS Matrix Engine

HDS Confidential: For distribution only to authorized parties. Page 4-31


Modular Storage Architecture — Part 2
Module Review

Module Review

3. Which Hitachi Adaptable Modular Storage model has a cache size


of either 4GB or 8GB?
A. AMS 2100
B. AMS 2300
C. AMS 2500
D. All of the above

Page 4-32 HDS Confidential: For distribution only to authorized parties.


5. Tiers, Resource
Pools and Workload
Profiles
Module Objectives

 Upon completion of this module, you should be able to:


• Define tiers, resource pools and workload profiles
• Identify the membership criteria for storage tiers and resource pools
• Differentiate between batch and interactive workloads
• Define Hitachi Dynamic Provisioning
• Define Hitachi Dynamic Tiering

HDS Confidential: For distribution only to authorized parties. Page 5-1


Tiers, Resource Pools and Workload Profiles
What Is a Storage Tier?

What Is a Storage Tier?

 Definition: Data storage with a specific set of physical properties,


value-added services, conditions and costs
• Physical properties
• Storage services
• Conditions

Physical Properties
 Physical properties of the storage determine the applications and service levels it
is suited to support.
 Media specifications (15K 144GB drive, SATA versus FC, SSD, and so on)
 Path specifications (4GB, dual path, and so on)
 Media failure protection (RAID-10, RAID-5, RAID-6)
 Storage Architecture (Enterprise versus Modular, Internal versus External)
 Virtualized internal storage versus virtualized external storage
Storage Services
 Value-added services, such as replication, are part of a tier definition.
 Portability (Hitachi Tiered Storage Manager)
 Replication (ShadowImage — SI, Hitachi Universal Replicator — HUR, Copy-on-
Write — CoW)
 Thin Provisioning
Conditions
 Conditions include characteristics such as dedicated versus shared storage
resources.

Page 5-2 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Storage Tier Examples

Storage Tier Examples

 Tier 1 often means different things at different sites.


 Tier numbers are relative within the scope of a customer or site.
 Examples:
• Tier 1 — Mission-critical data stored on 15K RPM Fibre Channel drives
• Tier 2 — Financial, less used data stored on 10K RPM Fibre Channel
drives
• Tier 3 — Event-driven data stored on SATA drives

HDS Confidential: For distribution only to authorized parties. Page 5-3


Tiers, Resource Pools and Workload Profiles
What Is a Resource Pool?

What Is a Resource Pool?

 Resource Pool — A workload distribution domain shared by one or


more compatible users that spans multiple resource instances
 Storage Resource Pool — A workload distribution domain shared by
one or more compatible device owners that spans multiple storage
devices.
 Storage Pools
• Administrative Pools (host striped logical volume)
• 28D+4P
• Hitachi Dynamic Provisioning (HDP)
 Port redundancy group → port pool (pool of ports)
 Cache partition → cache pool (pool of cache)

A workload distribution mechanism is essential to the definition of specific pools.

Page 5-4 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Application Properties and Workload Profile

Application Properties and Workload Profile

 Business sensitivity and business value


• Budget
• Service Level Requirements
 Predictability of load levels
• Examples: Development and Data Warehouse workloads are very
unpredictable.
 Interactive versus Batch
• Interactive and batch workloads have conflicting optimization goals.
 Access density by I/O Profile
• I/O profile (read versus write, sequential versus random, request size)

HDS Confidential: For distribution only to authorized parties. Page 5-5


Tiers, Resource Pools and Workload Profiles
Batch versus Interactive Workloads

Batch versus Interactive Workloads

 Dimensions of performance measurement?


• Interactive Workloads
▪ Response Time at a throughput (OPS) with an IO profile
▪ Achieved by employing low resource utilization (% busy) with minimal
queuing
• Batch Workloads
▪ Throughput with an IO profile
• IOPS
• MB/sec
▪ Achieved by driving resource utilization as high as possible, with moderate
queuing
 Optimizing the use of a storage access resource for a batch
workload and an interactive workload are mutually exclusive.
• Consequently, batch and interactive workloads should generally not share
same storage access resource at the same time.

Page 5-6 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
I/O Profiles — Used for Sizing

I/O Profiles — Used for Sizing

 I/O rate
 Read/Write ratio
 Average transfer size
 Cache hit rate (if possible)
 Response time
 Disk capacity in GB

These factors are typically used to determine the number of disk


spindles, disk types and array groups.

Note that this profile does not identify the level of random writes, which is
the key criterion for choosing RAID-10 vs. RAID-5.
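
As a worked illustration of how these sizing factors combine, the sketch below
estimates a spindle count from a hypothetical profile. The per-disk IOPS values are
the typical figures from the disk types table earlier in this course and the write
penalties come from the RAID overhead table; the workload numbers and the 70%
target utilization are assumptions invented for the example, not an HDS sizing
rule.

# Hypothetical sizing sketch: estimate spindle count from an I/O profile.
# Disk IOPS and RAID write penalties are from earlier tables in this course;
# the workload figures and 70% utilization target are example assumptions.
DISK_IOPS = {"SAS 15k": 180, "SATA 7.2k": 80}
RAID_WRITE_PENALTY = {"RAID-1+0": 2, "RAID-5": 4, "RAID-6": 6}

def spindles_needed(host_iops, read_fraction, read_hit_fraction,
                    raid, disk, target_util=0.70):
    reads  = host_iops * read_fraction * (1 - read_hit_fraction)  # read misses
    writes = host_iops * (1 - read_fraction)
    backend_iops = reads + writes * RAID_WRITE_PENALTY[raid]
    usable_iops_per_disk = DISK_IOPS[disk] * target_util
    return backend_iops / usable_iops_per_disk

# 8000 host IOPS, 70% reads, 40% read cache hits, RAID-5 on 15k RPM SAS
print(round(spindles_needed(8000, 0.70, 0.40, "RAID-5", "SAS 15k")))  # ~103 disks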

HDS Confidential: For distribution only to authorized parties. Page 5-7


Tiers, Resource Pools and Workload Profiles
A More Complete I/O Profile

A More Complete I/O Profile

                 Read                            Write
Random           I/O Rate, Request Size,         I/O Rate, Request Size
                 % Cache Hits
Sequential       I/O Rate, Request Size          I/O Rate, Request Size

Page 5-8 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
I/O Profiling — Why Is It Necessary?

I/O Profiling — Why Is It Necessary?

 Read/Write %
• Reads consume fewer back end disk I/Os per host operation
• RAID-5, RAID-6, and SATA handle random writes inefficiently
 Cache Hits
• Reduce the number of back end operations per read
 Block size
• Small block = more IOPS, less MB/sec
• Large Block = fewer IOPS, more MB/sec, bus occupancy
 Random versus Sequential
• Random = disk head movement, fewer cache hits
• Sequential = less disk head movement, more cache hits
 % Random Writes, RAID-5 versus RAID-10 (see the sketch below)
• Interactive applications with > 5–10% random writes and < 60% cache hits => RAID-10
• Batch applications with > 20% random writes => RAID-10
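
The sketch below encodes the last two bullets literally as a rule of thumb. The
cut-over points are this course's guidance (the interactive case uses the upper 10%
figure), not hard product limits, and the function is illustrative only.

# Rule-of-thumb RAID choice from the guidance above. Thresholds are the course's
# guidance (10% used for the interactive case), not hard limits.
def suggest_raid(random_write_fraction, cache_hit_fraction, workload):
    if workload == "interactive":
        if random_write_fraction > 0.10 and cache_hit_fraction < 0.60:
            return "RAID-10"
    elif workload == "batch":
        if random_write_fraction > 0.20:
            return "RAID-10"
    return "RAID-5"

print(suggest_raid(0.25, 0.40, "interactive"))   # -> RAID-10
print(suggest_raid(0.15, 0.40, "batch"))         # -> RAID-5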

HDS Confidential: For distribution only to authorized parties. Page 5-9


Tiers, Resource Pools and Workload Profiles
Hitachi Dynamic Provisioning (HDP)

Hitachi Dynamic Provisioning (HDP)

HDP Overview

 RAID Array Group — A group (pool) of disks arranged to provide


automatic workload distribution and no single point of failure. Logical
devices are presented to hosts from the array group.
 Hitachi Dynamic Provisioning (HDP) Pool — Pool of array groups
arranged to provide workload distribution over many array groups
and/or thin provisioning. Virtual logical devices are presented to
hosts from the array group.
• Uses thin provisioning technology that allows customers to allocate virtual
storage capacity based on their anticipated future capacity needs, using
virtual volumes instead of physical disks or volumes.
• Overall storage use rates may improve because customers can
potentially provide more virtual capacity to the application while fully using
fewer physical disks than would be formerly required.
• Performance Benefits — Wide Striping

Hitachi Dynamic Provisioning technology allows storage administrators to maintain


a pool of free space to service the data growth requirements of all their applications
without pre-committing the storage to them. This alleviates the poor use rates
common on traditional storage arrays where large amounts of capacity are allocated
to individual applications, but remain unused.
Dynamic Provisioning also simplifies and reduces costs of application storage
provisioning because it decouples provisioning to an application from the physical
addition of resources to the storage system and automates performance
optimization. Customers who have experienced these benefits on servers with
products such as VMware, where virtual systems can be created on the fly, can now
do the same on storage systems as well.
And on top of this, Dynamic Provisioning adds the additional advantage in many
cases of improving storage performance.

Page 5-10 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
HDP Benefits

HDP Benefits

 Application storage provisioning simpler and faster for administrator


• Define an initial virtual volume, often without an increase in volume
capacity or change of configuration
• Add physical storage separately, non-disruptively
• Consolidates available capacity
 Reduces application outages, saves time, and keeps costs down
 Balanced distribution of throughput across pooled resources
 Ease of provisioning
 Ease of management

With Hitachi Dynamic Provisioning, application storage provisioning is much


simpler, faster, and less demanding on the administrator.
To configure additional storage for an application, the administrator can draw from
the Dynamic Provisioning pool without immediately adding physical disks.
When a (virtual) volume of the maximum anticipated capacity is defined initially,
the administrator does not have to increase the volume capacity and change the
configuration as often.
Additionally, when more physical storage is needed, the administrator is required
only to install additional physical disks to the Dynamic Provisioning disk pool
without stopping any host or applications during the process.
This decoupling of physical resource provisioning from application provisioning
simplifies storage management, reduces application outages, saves time, and keeps
costs down.

HDS Confidential: For distribution only to authorized parties. Page 5-11


Tiers, Resource Pools and Workload Profiles
HDP-VOLume Overview

HDP-VOLume Overview

[Figure: the host server only sees virtual volumes. An HDP-VOLume is a virtual LUN
that does not demand an identical physical storage capacity. Actual storage
capacity in the HDP pool is assigned when the host writes data to a virtual LUN.
The HDP pool is built from LDEVs carved from array groups/disk drives.]

What level of overprovisioning you use is extremely dependent on your individual


applications and usage patterns, and on your risk management preferences and
strategies. If you want to overprovision, it is critical to understand application
behavior and patterns in terms of file allocation and file growth (or shrinkage).
Long term thinness depends on application behavior.
This falls under the discipline of capacity planning. Lacking that, you must institute
artificial controls when putting unknown applications under Dynamic Provisioning.
Generally, target no higher than 70%-80% capacity use per pool.
A buffer should be provided for unexpected growth or a runaway application that
consumes more physical capacity than was originally planned. There should at least
be enough free space in the storage pool equal to the capacity that might be needed
for the largest as yet unallocated thin device.
Automating the monitoring of alert notifications is critical to maintaining an
effective Dynamic Provisioning operation, as well as adopting the operational
procedures to take immediate action when a pool threshold trigger is encountered.
The user-selectable level should be set so that the pool cannot run out of free
capacity before additional pool capacity is added. Aside from the default and user-
specified pool thresholds available in Dynamic Provisioning through Storage
Navigator and Device Manager, a customer can implement additional user-specified
thresholds through monitoring capability in Tuning Manager.
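
A minimal sketch of the kind of monitoring check described above is shown below,
assuming a very simple pool model. The 70%-80% target and the rule of keeping free
space at least equal to the largest as-yet-unallocated thin device come from the
text; the function name, units and sample figures are made up for illustration.

# Minimal pool-monitoring sketch based on the guidance above: keep pool use at or
# below roughly 70-80% and keep free space at least equal to the largest as-yet-
# unallocated thin device. Names, units and sample numbers are illustrative only.
def pool_alerts(pool_capacity_gb, pool_used_gb, largest_unallocated_vvol_gb,
                threshold=0.75):
    alerts = []
    used_fraction = pool_used_gb / pool_capacity_gb
    free_gb = pool_capacity_gb - pool_used_gb
    if used_fraction > threshold:
        alerts.append("Pool %.0f%% used, above the %.0f%% target"
                      % (used_fraction * 100, threshold * 100))
    if free_gb < largest_unallocated_vvol_gb:
        alerts.append("Free space is below the largest unallocated thin device")
    return alerts

print(pool_alerts(pool_capacity_gb=50000, pool_used_gb=41000,
                  largest_unallocated_vvol_gb=10000))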

Page 5-12 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
HDP Pools — Enterprise versus Modular

HDP Pools — Enterprise versus Modular

 On the Enterprise Storage


• All LDEVs must be OPEN-V.
• Between 8GB and 60TB in size.
 On the Modular systems
• Entire RAID groups are put in the pool.
• Up to 128TB.

HDS Confidential: For distribution only to authorized parties. Page 5-13


Tiers, Resource Pools and Workload Profiles
Laws of Physics Still Apply

Laws of Physics Still Apply

 A pool of array groups or disks generally has the same physical


characteristics as the array groups or disks within it.
• RAID-10 versus RAID-5
• Fibre Channel/SAS versus SATA
• 15K RPM versus 10K RPM
 Value added
• Wider workload distribution
• Shared access bandwidth
• Ease of administration

Page 5-14 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
HDP Pool Design

HDP Pool Design

 A storage pool is a subset of a storage tier.


• Storage tier specifies the type of array groups in the pool.
 The basics:
• How many pools?
• How big is each pool?
• Assign compatible users to their pools.

How big is each pool?


 Recommended practice: Between 4 and 32 array groups/pool, inclusive
 Up to 16 array groups is generally considered a manageable size.
 Incremental benefit decreases as size increases.
 Incremental risk increases as size increases.
 Failure is rare, but possible.
 Consider risk of failure.
 Consider consequences of failure, recovery time, and so on.
 Consider access capacity (IOPS, MBS) as well as storage requirements.
Assign compatible users to their pools.
 Assume users are share compatible (load they are running meets sharing
guidelines) unless demonstrated otherwise.
 Look for reasons why users may not be compatible for resource sharing.

HDS Confidential: For distribution only to authorized parties. Page 5-15


Tiers, Resource Pools and Workload Profiles
Determine Compatible Users of Shared Resources

Determine Compatible Users of Shared Resources

 Share unless there is a specific reason to isolate.


• Sharing resources improves asset utilization.
 Isolate to:
• Protect mission critical applications from outside influences
• Quarantine large unpredictable workloads
• Separate conflicting resource usage patterns (interactive versus batch)
• Compartmentalize to contain possible disruption
• Resolve organizational control conflicts
• Respect database management system (DBMS) data redundancy
requirements

Quarantine large unpredictable workloads


 For example, Development, Data Warehouse, and so on
Separate conflicting resource usage patterns (Interactive versus Batch)
 Interactive and Batch workloads use resources in conflicting ways
Compartmentalize to contain possible disruption
 Size matters; consider risk of failure; the recovery time objective
Respect DBMS data redundancy requirements
 A database and its journal of committed transactions should not reside on the
same media.
 If media independence is not maintained, the database may not be fully
recoverable from backups after a failure.
 The journal of committed transactions is required to roll the database forward
after a recovery from tape.

Page 5-16 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Hitachi Dynamic Tiering (HDT)

Hitachi Dynamic Tiering (HDT)

HDT Overview

[Figure: on the Hitachi Virtual Storage Platform (VSP), a Dynamic Tiering volume
maps to a virtual storage pool containing Tier 1, Tier 2 and Tier 3. The data heat
index places the high activity set in Tier 1, the normal working set in Tier 2 and
the quiet data set in Tier 3; the least referenced pages move down.]

With Hitachi Dynamic Tiering the complexities and overhead of implementing


data lifecycle management and optimizing use of tiered storage are solved. Dynamic
Tiering simplifies storage administration by eliminating the need for time
consuming manual data classification and movement of data to optimize usage of
tiered storage.
Hitachi Dynamic Tiering automatically moves data on fine-grain pages within
Dynamic Tiering virtual volumes to the most appropriate media according to
workload to maximize service levels and minimize TCO of storage.
For example, a database index that is frequently read and written will migrate to
high performance flash technology while older data that has not been touched for a
while will move to slower, cheaper disks.
No elaborate decision criteria are needed; data is automatically moved according to
simple rules. One, two, or three tiers of storage can be defined and used within a
single virtual volume using any of the storage media types available for the Hitachi
Virtual Storage Platform (VSP). Tier creation is automatic based on user
configuration policies, including media type and speed, RAID level, and sustained

HDS Confidential: For distribution only to authorized parties. Page 5-17


Tiers, Resource Pools and Workload Profiles

I/O level requirements. Using ongoing embedded performance monitoring and


periodic analysis, the data is moved at a fine grain sub-LUN level to the most
appropriate tier. The most active data moves to the highest tier. During the process,
the system automatically maximizes the use of storage keeping the higher tiers fully
utilized.
Hitachi Dynamic Tiering is available on Virtual Storage Platform only.

Page 5-18 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Page Level Tiering

Page Level Tiering

 Different tiers of storage are now in one pool.
 If data becomes less active, it migrates to lower level tiers.
 If activity increases, data will be promoted to a higher tier.
 Since 20% of data accounts for 80% of the activity, only the active part of a
volume will reside on the higher performance tiers.

[Figure: Pool A containing Tier 1 (EFD/SSD), Tier 2 (SAS) and Tier 3 (SATA);
the least referenced pages move down a tier.]

The pool contains multiple tiers (not the other way around like in USPV/HDP).
The logical volumes have pages mapped to the pool (same as USPV/HDP). Those
pages can be anywhere in the pool on any tier in that pool.
The pages can move (migrate) within the pool for performance optimization
purposes (move up/down between tiers).
HDT will try to use as much of the higher tiers as possible. (T1 and T2 will be used
as much as possible while T3 will have more spare capacity.)
You can add capacity to any tier at any time. You can also remove capacity
dynamically. So, sizing a tier for a pool is a lot easier.
Quantity added/removed should be in ARRAY Group quantities.
The first version of HDT (with VSP at GA):
 Up to a maximum 3 tiers in a pool.
We will start with managing resources in a 3 tier approach. That may mean:
 1-Flash drives, 2-SAS, 3-SATA or
 1-SAS(15k), 2-SAS(10k), 3-SATA (or something else)
 The Pool’s tiers are defined by HDD type
 External storage is also supported as a Tier (lowest).

HDS Confidential: For distribution only to authorized parties. Page 5-19


Tiers, Resource Pools and Workload Profiles
HDT Benefits

HDT Benefits

Automate and Eliminate Complexities of Efficient Tiered Storage Use

 Solution Capabilities
• Automate data placement for higher performance and lower costs
• Simplified ability to manage all storage tiers as a single entity
• Self-optimized for high performance and space efficiency
• Page-based granular data movement for highest efficiency and throughput
 Business Value
• Significant savings by moving data to lower cost tiers
• Increase storage utilization up to 50%
• Easily align business application needs to the right cost infrastructure

[Figure: data heat index mapping the high activity set, normal working set and
quiet data set onto the storage tiers.]

With Hitachi Dynamic Tiering, the complexities and overhead of implementing data
lifecycle management and optimizing use of tiered storage are solved. Dynamic
Tiering simplifies storage administration by eliminating the need for time
consuming manual data classification and movement of data to optimize usage of
tiered storage.
Hitachi Dynamic Tiering automatically moves data on fine-grain pages within
Dynamic Tiering virtual volumes to the most appropriate media according to
workload to maximize service levels and minimize TCO of storage.
For example, a database index that is frequently read and written will migrate to
high performance flash technology while older data that has not been touched for a
while will move to slower, cheaper disks.
No elaborate decision criteria are needed; data is automatically moved according to
simple rules. One, two, or three tiers of storage can be defined and used within a
single virtual volume, using any of the storage media types available for the Hitachi
Virtual Storage Platform. Tier creation is automatic based on user configuration
policies, including media type and speed, RAID level, and sustained I/O level
requirements. Using ongoing embedded performance monitoring and periodic
analysis, the data is moved at a fine grain page level to the most appropriate tier.
The most active data moves to the highest tier. During the process, the system
automatically maximizes the use of storage keeping the higher tiers fully utilized.

Page 5-20 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Improved Performance at Reduced Cost

Improved Performance at Reduced Cost

How It Works: the 80/20 Rule

[Figure: workload ratio (%) versus capacity ratio (%). Roughly 5% of the capacity
(Tier 1) receives 50% of the I/O, the next 15% of the capacity (Tier 2) receives
30% of the I/O, and the remaining 80% of the capacity (Tier 3) receives 20% of
the I/O; in other words, 20% of the capacity serves 80% of the I/O.]

Research and Investigation


 Analyzed access locality of volumes for one year.
 80% of I/O concentrated in 20% of the total area.
 50% of I/O concentrated in 5% of the total area.

Why should Automatic Tiering work? Why does performance increase? Why do
costs lower?
Well it goes back to decades of seeing the same statistical behavior over and over. A
small population at any moment in time is vastly more active than the rest of the
population. The active population will have different members as time goes on but
the size of the active population remains relatively small.
Here we show that about 5% of the data has about 50% of the I/O traffic (this is
physical I/O after read cache hits). Another 15% of the data accounts for 30% of the
I/O traffic. Now you start to see the old 80/20 rule. We see this in cache
implementations, automated warehousing, commuter traffic and so on.
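
The skew figures quoted above translate directly into tier sizing arithmetic, as
the short sketch below restates. The 5%/15%/80% capacity split and the 50%/30%/20%
I/O split come from the slide; the 1 PB pool size is an arbitrary example.

# Restating the skew observation above as tier sizing arithmetic.
# Capacity and I/O splits are from the slide; the pool size is arbitrary.
pool_tb = 1000                        # example: a 1 PB pool
tiers = {                             # tier: (capacity share, I/O share)
    "Tier 1 (SSD)":  (0.05, 0.50),
    "Tier 2 (SAS)":  (0.15, 0.30),
    "Tier 3 (SATA)": (0.80, 0.20),
}
for name, (cap_share, io_share) in tiers.items():
    print("%s: %4.0f TB of capacity handles %.0f%% of the I/O"
          % (name, pool_tb * cap_share, io_share * 100))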

HDS Confidential: For distribution only to authorized parties. Page 5-21


Tiers, Resource Pools and Workload Profiles
How HDT Fits into Tiered Storage

How HDT Fits into Tiered Storage

Hitachi Dynamic Tiering volumes are another kind of volume in a tiered storage
architecture.
 Just like media type, RAID configurations, and speeds in volume design, Hitachi
Dynamic Tiering multitier volumes are another way of delivering tailored storage
cost and performance service levels.
 Depending on requirements, Hitachi Dynamic Tiering pools and volumes can be
configured to optimize for:
• Maximum performance
• Balanced performance and cost
• Minimum cost for lower tiers

Hitachi Dynamic Tiering volumes deliver superior service levels at lower cost.

With or without Hitachi Dynamic Tiering, a proper analysis and design of


individual customer requirements is needed.
The graphic illustrates the concept of tiering. The detailed labels and numbers do
not mean anything specific in this context.

Page 5-22 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
HDT Specifications

HDT Specifications

 Available on Virtual Storage Platform (VSP)


 HDT license Key installed on storage system
 Up to a maximum of three tiers in a pool
• Most often a SSD-SAS-SATA hierarchical model
▪ The pool’s tiers are defined by HDD type.
• External storage supported
• Capacity of any tier can be added to the pool. Capacity can also be
removed from a pool.

We will start with managing resources in a 3 tier approach. That may mean:
 1-SSD, 2-SAS, 3-SATA or
 1-SAS15k, 2-SAS10k, 3-SATA (or something else)
External Storage can also be added as a Tier.

 Tier management
• Fills top tiers as much as possible. Monitors I/O references. Adjusts page
placement according to trailing 24 hour heat map cyclically (adjustable
from 30 min to 24 hours).
• Automatic versus Manual controls are available.
• Tier management (migration up and down tier) is automatic and built into
the system’s firmware.
• Hitachi Dynamic Tiering is a unique product (HDP add-on).

HDS Confidential: For distribution only to authorized parties. Page 5-23


Tiers, Resource Pools and Workload Profiles
HDT Specifications

 Each tier in a pool has a calculated sustainable workload level.


• Collectively, the average I/O level measurements for pages located in the
tier is targeted to not exceed the calculated sustainable I/O level.
▪ Some pages may not be moved up-tier if the predicted workload level
is too high.
• Normally the tier’s workload level would not be a factor. Most used pages
would be located in the highest tiers.
 Page size is not adjustable
• Page size remains 42MB (Enterprise)

We also calculate the average sustainable I/O rate of the tier, to avoid overdriving
a tier. In some cases a tier may not be able to handle all the most heavily used
pages, so we may elect to keep some in a lower tier. This is unlikely to be a real
problem in most cases, but nevertheless we watch for this outlier case.
We will have reporting in Device Manager and Tuning Manager with utilization
performance data that can be used to help, for example:
 Size tiers in pools
 Size pools
 Help chargeback (This is a topic that can get overly complex. Basic
representation is to chargeback on quality of service not on specific utilization
levels.)
 Determine DP-VOL spread across the tiers
 Determine I/O amounts per tier

Page 5-24 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Tiering Policy

Tiering Policy

 The Tiering Policies setting is a function that enables users to specify


tiers to be used by DP-VOL.
 For the DP-VOL with Tiering Policies specified, all pages are
allocated to tiers in the Tiering Policies.

 Example
• If tier 1 is specified for a DP-VOL, HDT relocates the DP-VOL pages to
the tier 1. In other words, no matter how low the I/O workload in the page
is, the page is allocated to the tier 1.

In order to maximize overall pool performance, HDT allocates pages from the higher
tiers according to a relative page ranking based on I/O workload. As a result, a
page with a lower I/O workload is likely to be allocated to a lower tier. Depending
on the customer environment, there may be data with high importance but low I/O
that should be allocated to a higher tier; because of its low I/O workload, however,
HDT allocates that data to tier 2 or tier 3, while data with low importance and high
I/O workload may use tier 1. Setting tiering policies helps resolve such issues.
Note: The entire volume is locked to a tier as per the defined policy.

HDS Confidential: For distribution only to authorized parties. Page 5-25


Tiers, Resource Pools and Workload Profiles
Tiering Policy

Tier allocation   Description              Pool: 1-tier      Pool: 2-tier          Pool: 3-tier
policy                                     config.           config.               config.

All tiers         All tiers are used       All tiers are     All tiers are used    All tiers are used
(Default)                                  used

Level 1           Only the top tier is     All tiers are     Only Tier 1 is used   Only Tier 1 is used
                  used                     used

Level 2           Only the top two tiers   All tiers are     All tiers are used    Only Tier 1/Tier 2
                  are used                 used                                    are used

Level 3           Only Tier 2 is used      All tiers are     Only Tier 2 is used   Only Tier 2 is used
                                           used

Level 4           Only the bottom two      All tiers are     All tiers are used    Only Tier 2/Tier 3
                  tiers are used           used                                    are used

Level 5           Only the lowest tier     All tiers are     Only Tier 2 is used   Only Tier 3 is used
                  is used                  used

Policy Use case (Pool consists of two or more tiers(*))


1 - High throughput is required for all pages in the DP-VOL.
 - Quick response is required even for pages that are not accessed continuously
 (Example: Volume used for index in database)
2 - Response performance in the case where pages are allocated to the lowest tier is
not acceptable
 (Example: Volume used for online transaction processing)
3 - Tier2 performance is sufficient
 Stable throughput and response are required
 Response performance in the case where pages are allocated to the lowest tier is
not acceptable
 There is no locality of the DP-VOL
4 - Performance of the highest tier is not required
 Want to reduce cost required for maintaining the DP-VOL data as much as
possible
 Want other volume in the same pool to use the top tier
 (Example: Volume used for batch processing)
5 - Want to reduce cost required for maintaining the DP-VOL data as much as
possible
 Once accessed, it is not likely to be accessed again
 (Example: Volume used for archive, volume used for log)

Page 5-26 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Module Summary

Module Summary

 This module:
• Defined tiers, resource pools and workload profiles
• Identified the membership criteria for storage tiers and resource pools
• Differentiated between batch and interactive workloads
• Defined Hitachi Dynamic Provisioning (HDP)
• Defined Hitachi Dynamic Tiering (HDT)

HDS Confidential: For distribution only to authorized parties. Page 5-27


Tiers, Resource Pools and Workload Profiles
Module Review

Module Review

1. What are the key differences between Interactive and Batch


Workloads?
2. What standard metrics are used to create an IO profile for an
application?
3. What RAID level is recommended for use in HDP pools?
4. What is the page size for capacity allocation from a HDT pool?

Page 5-28 HDS Confidential: For distribution only to authorized parties.


6. Performance Tools
and Data Sources
Module Objectives

 Upon completion of this module, you should be able to:


• Identify industry-standard workload generators and benchmarking
products
• Identify parameters used in workload generators
• Identify industry-standard host-based tools used for performance data
monitoring and collection
• Demonstrate how to use Hitachi Performance Monitor to review
performance data
• Identify the performance data Hitachi Tuning Manager collects

HDS Confidential: For distribution only to authorized parties. Page 6-1


Performance Tools and Data Sources
Workload Generators and Benchmarking Products

Workload Generators and Benchmarking Products

Workload Generators and Benchmarking


 Available workload generators differ in capability and use.
• dd — Common UNIX program for low level copying and conversion of
raw data.
• Iometer — Easy to use workload generator and I/O measurement and
characterization tool.
• Vdbench — Disk and tape I/O workload generator and performance
reporter.
• Iozone — File system benchmark utility.
• SQLIO — Tool from Microsoft that is used to determine the I/O capacity
of a given configuration.
• Jetstress — Exchange Server benchmarking tool. It simulates disk I/O
load on a test server running Exchange to verify the performance and
stability of the disk subsystem.
• Loadsim — Benchmarking tool to simulate the performance load of MAPI
clients for Microsoft Exchange.

 Benchmark products are also available from the Transaction


Processing Council (TPC).
• Defines transaction processing and database benchmarks and delivers
trusted results to the industry
 TPC Benchmark App (TPC-App) is an application server and web
services benchmark.
 TPC-C simulates a complete computing environment where a
population of users executes transactions against a database.
 TPC-E Benchmark E (TPC-E) is a new online transaction processing
(OLTP) workload benchmark developed by the TPC.
 TPC Benchmark H (TPC-H) is a decision support benchmark.

The Transaction Processing Performance Council (TPC) defines transaction


processing and database benchmarks and delivers trusted results to the industry.
Storage Performance Council (SPC) is a non-profit corporation founded to define,
standardize, and promote storage subsystem benchmarks.

Page 6-2 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
dd Utility

dd Utility

 dd is a Unix command line utility


• Writes to a block device.
• Allows fine control of block size, test length and use of the buffer cache.
• Gives you a short report on transfer rate.
 dd Usage Guidelines
• Use the dd command as follows to create a large file (e.g. 1024M x 50 count
= 50GB):
$ dd if=/dev/zero of=/tmp/big.file bs=1024M count=50
• Write test (writes zeros to the raw device, bypassing the buffer cache)
 dd if=/dev/zero of=/dev/<device> bs=1M count=1024 oflag=direct
• Read test (reads from the raw device and discards the data)
 dd if=/dev/<device> of=/dev/null bs=1M count=1024 iflag=direct

"odirect“/”idirect” flag avoids using buffer-cache, so the test results should be


repeatable.
With “dd” you can easily get the sequential read and write speed. Note that “dd”
does not provide random access pattern itself, even though you can do it by
wrapping “dd” in another process with the “seek” and “skip” features.

HDS Confidential: For distribution only to authorized parties. Page 6-3


Performance Tools and Data Sources
Iometer — Overview

Iometer — Overview

 I/O subsystem
• Measurement tool
• Characterization tool
 Workload generator
 Single and clustered systems.
 Can emulate disk or network I/O load.
 Measurement and characterization of:
• Bandwidth of buses
• Latency of buses
• Network throughput
• Shared bus performance
• Disk performance
• Network Controller performance

[Screen shot: Iometer Disk Targets tab. Select a worker, then select a disk.
“# of Outstanding I/Os” equals the queue length; note that this has nothing to do
with the HBA LUN queue depth.]
Page 6-4 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Iometer — Overview

When you launch Iometer, the Iometer window appears, and a Dynamo workload
generator is automatically launched on the local computer.
The topology displays a hierarchical list of managers (copies of Dynamo) and
workers. Workers are threads within each copy of Dynamo.
Disk workers access logical drives by reading and writing a file called iobw.tst in the
root directory of the drive. If this file exists, the drive is shown with a plain yellow
icon. If the file does not exist, the drive is shown with a red slash through the icon.
(If this file exists but is not writable, the drive is considered read-only and is not
shown at all.)
Maximum Disk Size
The Maximum Disk Size control specifies how many disk sectors are used by the
selected workers. The default is 0, meaning the entire disk or iobw.tst file.
Starting Disk Sector
The Starting Disk Sector control specifies the lowest-numbered disk sector used by
the selected workers during the test. The default is 0, meaning the first 512-byte
sector in the disk or iobw.tst file.
# of Outstanding I/Os
The # of Outstanding I/Os control specifies the maximum number of outstanding
asynchronous I/O operations per disk the selected workers will attempt to have
active at one time. (The actual queue depth seen by the disks may be less if the
operations complete very quickly.) The default value is 1.
The value of this control applies to each selected worker and each selected disk.
For example, if you select a manager with 2 disk workers in the Topology panel,
select 4 disks in the Disk Targets tab, and specify a # of Outstanding I/Os of 8. In
this case, the disks will be distributed among the workers (2 disks per worker), and
each worker will generate a maximum of 8 outstanding I/Os to each of its disks. The
system as a whole will have a maximum of 32 outstanding I/Os at a time (2 workers
* 2 disks/worker * 8 outstanding I/Os per disk) from this manager.
Test Connection Rate
The Test Connection Rate control specifies how often the workers open and close
their disks.

HDS Confidential: For distribution only to authorized parties. Page 6-5


Performance Tools and Data Sources
Iometer — Overview

[Screen shot: Access Specifications tab, showing the pre-defined workloads.]

The Access Specification tab lets you control the type of I/O each worker performs
to its selected targets.

Page 6-6 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Iometer — Overview

[Screen shot: Edit Access Specification dialog. Set the block (transfer request)
size, the read/write percentage, and the sequential/random percentage.]

Each access specification controls:


 The percent of transfers that are a given size
 What percent of those are reads or writes
 What percent of accesses are random or sequential
 How many transfers occur in a burst
 How long to wait between bursts
 The alignment of each I/O on the disk
 The size of the reply, if any, to each I/O request

HDS Confidential: For distribution only to authorized parties. Page 6-7


Performance Tools and Data Sources
Iozone

Iozone

Automated mode is possible, but be prepared to wait a long time! Also, be prepared
to ignore lots of unnecessary information.

Iozone is a filesystem benchmark tool. The benchmark generates and measures a


variety of file operations. Iozone has been ported to many machines and runs under
many operating systems. It allows you to see how well record I/O occurs for files of
various sizes.
Iozone is good at detecting areas where the file IO might not be performing as well
as expected.
Additional information about IOzone can be found at http://www.iozone.org.

Page 6-8 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Iozone

An example throughput run, reporting results in IOPS:

# ./iozone -s 100m -r 4k -i 0 -i 2 -O -t 4 -b results.csv

In this command line, -s 100m sets the file size (the seek range), -r 4k sets the record
(block) size, -i 0 -i 2 selects the write/rewrite and random read/write tests (the
Read:Write tests, sequential and random), -O reports output in operations per
second (IOPS), -t 4 runs 4 processes (processes = queue length), and -b results.csv
writes a spreadsheet-compatible results file. A second example, using the automated
mode mentioned above, follows below.
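For the automated mode mentioned in the callout above, a minimal sketch is:

# ./iozone -a -b auto_results.xls

Here -a runs the full automatic test matrix (many record and file sizes, hence the
long run time and the large amount of output to sift through), and -b writes the
results to a spreadsheet-compatible file; the file name auto_results.xls is only an
illustration.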

HDS Confidential: For distribution only to authorized parties. Page 6-9


Performance Tools and Data Sources
Jetstress

Jetstress

Objective: meet the IOPS target, and meet the response time target as well. Set the
workload, threads, capacity, and so on.

Jetstress is a tool provided by Microsoft to help you verify the performance and
stability of a disk storage system prior to putting an Exchange server into
production.
Jetstress helps verify disk performance by simulating Exchange disk I/O load.
Specifically, Jetstress simulates the Exchange database and log file loads produced
by a specific number of users.
You can use available tools such as Performance Monitor, Event Viewer, or ESEUTIL
(Exchange Server Database Utilities) in conjunction with Jetstress to verify that your
disk subsystem meets or exceeds the performance criteria you establish.

Page 6-10 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
SQLIO

SQLIO

An example run:

sqlio -kW -s10 -frandom -o8 -b8 -LS -Fparam.txt
timeout /T 60

In this command line, -kW selects writes (read/write), -s10 sets the test duration in
seconds, -frandom selects random rather than sequential access (rand/seq), -o8 sets
8 outstanding I/Os per thread (queuing), -b8 sets an 8KB block size (block size),
-LS captures latency statistics using the system timer, and -Fparam.txt points to the
parameter file that describes the test files and thread counts (disk config, and more
queuing). The timeout /T 60 line is a separate Windows command, shown here on its
own line, typically used in a batch file to pause 60 seconds between runs.
A hypothetical sketch of the parameter file follows below.
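The contents of the file referenced by -Fparam.txt are not shown in this course. A
minimal hypothetical sketch, assuming the commonly documented format of one test
file per line (path, number of threads, CPU affinity mask, file size in MB), might look
like:

c:\sqlio\testfile.dat 2 0x0 20480

The path, thread count, mask, and 20GB size here are illustrative values only and
should be adapted to the environment under test.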

HDS Confidential: For distribution only to authorized parties. Page 6-11


Performance Tools and Data Sources
Performance Monitoring Tools

Performance Monitoring Tools

Performance Monitoring Tools Overview

 Performance monitoring tools are either host-based or storage system based


 Examples:
• Host-based monitoring:
 Windows: Performance Monitoring
 Solaris: iostat
 AIX, Linux: NMON
 Using Tuning Manager Server System Agents
• Storage monitoring:
 Performance Monitor for Modular Storage Systems
 Performance Monitor for Enterprise Storage Systems
 Using Tuning Manager RAID Agent
• Certain workload generators, such as IOmeter, also help monitor
performance.

Page 6-12 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
I/O Profile Information for Windows

I/O Profile Information for Windows

 Windows Performance Monitor (Perfmon) gets I/O Profile


information.

Av Disk bytes per Read


Av Disk bytes per Write
Av Disk Queue Length
Av Disk sec/Read
Av Disk sec/Write
Disk Bytes/sec
Disk Transfers/sec
Disk Reads/sec
Disk Writes/sec

 Perfmon counters that are useful for storage performance analysis:


• Logical Disk and Physical Disk Counters
 % Disk Read Time, % Disk Time, % Disk Write Time, % Idle Time
 Average Disk Bytes / { Read | Write | Transfer }
 Average Disk Queue Length, Average Disk { Read | Write } Queue
Length
 Average Disk sec / {Read | Write | Transfer}
 Current Disk Queue Length
 Disk Bytes / second, Disk {Read | Write } Bytes / second
 Disk {Reads | Writes | Transfers } / second
 Split I/O / second

HDS Confidential: For distribution only to authorized parties. Page 6-13


Performance Tools and Data Sources
I/O Profile Information for Windows

Logical Disk and Physical Disk Counters


The same counters are valuable in each of these counter objects.
Logical disk data is tracked by the volume manager (or managers), and physical
disk data is tracked by the partition manager.
% Disk Read Time, % Disk Time, % Disk Write Time, % Idle Time
These counters are of little value when there are multiple spindles behind “disks.”
Imagine a subsystem of 100 disks, presented to the operating system as 5 disks (each
backed by a 20-disk RAID-0+1 array). Now imagine that the administrator spans the
five physical disks with one logical disk (volume X:). One can assume that any
serious system needing that many spindles will have at least one request
outstanding to X: at any given time. This will make the “disk” appear to be 100%
busy and 0% idle, when in fact the 100-disk array might be up to 99% idle.
Average Disk Bytes / { Read | Write | Transfer }
Useful in gathering average, minimum, and maximum request sizes.
If the sample time is long enough, a request size histogram can be generated. If
possible, workloads should be observed separately; multi-modal distributions
cannot be differentiated using Perfmon if the requests are consistently interspersed.
Average Disk Queue Length, Average Disk { Read | Write } Queue Length
Useful in gathering concurrency data, including burstiness and peak loads.
For a discussion on what constitutes excessive queuing, see “Rules of Thumb” later
in this paper. These values represent the number of requests in-flight below the
driver taking the statistics. This means the requests are not necessarily queued but
could actually be in service or completed and on the way back up the path. Possible
in-flight locations include the following:
 Sitting in a SCSIport or Storport queue
 Sitting in a queue at an OEM driver
 Sitting in a disk controller queue
 Sitting in an array controller queue
 Sitting in a hard disk queue (that is, on-board a real spindle)
 Actively receiving service from a hard disk
 Completed, but not yet back up the stack to the point where the statistics are
gathered
Average Disk sec / {Read | Write | Transfer}
Useful in gathering disk request response time data, and possibly extrapolating service time
data.
These are probably the simplest indicators of storage subsystem bottlenecks. For a
discussion on what constitutes excessive response times, see “Rules of Thumb” later

Page 6-14 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
I/O Profile Information for Windows

in this paper. If possible, workloads should be observed separately; multi-modal


distributions cannot be differentiated using Perfmon if the requests are consistently
interspersed.
Current Disk Queue Length
An instantaneous measurement of the number of requests in flight and thus subject to
extreme variance.
As such, not of much use except to check for the existence of numerous short bursts
of activity, which is lost when averaged over the sample period.
Disk Bytes / second, Disk {Read | Write } Bytes / second
Useful in gathering throughput data.
If the sample time is long enough, a histogram of the array’s response to specific
loads (queues, request sizes, and so on) can be analyzed. If possible, workloads
should be observed separately.
Disk {Reads | Writes | Transfers } / second
Useful in gathering throughput data. If the sample time is long enough, a histogram
of the array’s response to specific loads (queues, request sizes, and so on) can be
analyzed. If possible, workloads should be observed separately.
Split I/O / second
Only useful if the value is not in the noise.
If it becomes significant, in terms of Split I/Os per second per spindle, further
investigation could be warranted to determine the size of the original requests being
split and the workload generating them.
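To capture these counters outside the Perfmon GUI, one option (a sketch only; the
counter list, sample interval, and file name are illustrative, and counter paths vary by
locale and Windows version) is the built-in typeperf command:

typeperf "\LogicalDisk(*)\Avg. Disk sec/Read" "\LogicalDisk(*)\Avg. Disk sec/Write" "\LogicalDisk(*)\Disk Transfers/sec" "\LogicalDisk(*)\Avg. Disk Queue Length" -si 30 -sc 120 -f CSV -o diskperf.csv

This samples the listed Logical Disk counters every 30 seconds for 120 samples (one
hour) and writes them to a CSV file for later analysis.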

HDS Confidential: For distribution only to authorized parties. Page 6-15


Performance Tools and Data Sources
I/O Profile Information for Windows

 Perfmon counters that are useful for storage performance analysis:


• Processor Counters
 % DPC Time, % Interrupt Time, % Privileged Time
 DPCs Queued / second
 Interrupts / second

Processor Counters
% DPC Time, % Interrupt Time, % Privileged Time
If Interrupt Time and Deferred Procedure Call Time are a large portion of Privileged
Time, the kernel is spending a significant amount of time processing I/Os. In some
cases, it works best to keep interrupts and DPCs affinitized to a small number of
CPUs on a multiprocessor system, in order to improve cache locality. In other cases,
it works best to distribute the interrupts and DPCs among many CPUs, so as to keep
the interrupt and DPC activity from becoming a bottleneck.
DPCs Queued / second
Another measurement of how DPCs are consuming CPU time and kernel resources.
Interrupts / second
Another measurement of how Interrupts are consuming CPU time and kernel
resources. Modern disk controllers often combine or coalesce interrupts so that a
single interrupt results in the processing of multiple I/O completions. Of course, it is
a trade-off between delaying interrupts (and thus completions) and economizing
CPU processing time.

Page 6-16 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
I/O Profile for Solaris Using iostat

I/O Profile for Solaris Using iostat

Wait defines the queue not yet accepted by the storage. Actv defines the queue
accepted by the storage; this should be up to the LUN Queue Depth value.

Wait should be zero unless the Actv queue has reached the LUN Queue Depth value
or the Max_throttle value.

If the Wait queue is non-zero while the Actv queue has not reached the LUN Queue
Depth, it can indicate Tag exhaustion in the storage.

iostat is a utility that iteratively reports terminal, tape, and disk I/O activity. It also
reports CPU utilization.
iostat uses counters maintained by the kernel to measure throughput, utilization,
queue lengths, transaction rates, and service time.
iostat does not generate I/O.
The output of the iostat utility includes the following information:
 device name of the disk
 r/s – Reads per second
 w/s – Writes per second
 kr/s – Kilobytes read per second (The average I/O size during the interval can
be computed from kr/s divided by r/s.)
 kw/s – Kilobytes written per second (The average I/O size during the interval
can be computed from kw/s divided by w/s.)
 wait – Average number of transactions waiting for service (queue length)
This is the number of I/O operations held in the device driver queue waiting for
acceptance by the device.

HDS Confidential: For distribution only to authorized parties. Page 6-17


Performance Tools and Data Sources
I/O Profile for Solaris Using iostat

 actv – Average number of transactions actively being serviced (removed from


the queue but not yet completed)
This is the number of I/O operations accepted, but not yet serviced, by the
device.
 svc_t – Average response time of transactions, in milliseconds
The svc_t output reports the overall response time, rather than the service time,
of a device. The overall time includes the time that transactions are in queue and
the time that transactions are being serviced. The time spent in queue is shown
with the -x option in the wsvc_t output column. The time spent servicing
transactions is the true service time. Service time is also shown with the -x option
and appears in the asvc_t output column of the same report.
 %w – Percent of time there are transactions waiting for service (queue non-
empty)
 %b – Percent of time the disk is busy (transactions in progress)
 wsvc_t – Average service time in wait queue, in milliseconds
 asvc_t – Average service time of active transactions, in milliseconds
 wt – The I/O wait time is no longer calculated as a percentage of CPU time, and
this statistic will always return zero.

Page 6-18 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
I/O Profile for Solaris Using iostat

 iostat output

extended disk statistics


disk r/s w/s Kr/s Kw/s wait actv svc_t %w %b
sd9 33.1 8.7 271.4 71.3 0.0 2.3 15.8 0 27
Utilization U = %b / 100 = 0.27
Throughput X = r/s + w/s = 41.8
Size K = (Kr/s + Kw/s) / X = 8.2KB
Concurrency N = actv = 2.3
Service time S = U / X = 6.5ms
Response time R = svc_t = 15.8ms
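To collect output like this on Solaris, the extended statistics options mentioned
above can be used; for example (the interval and count are illustrative):

iostat -x 30 10
iostat -xn 30 10

The first form reports the wait, actv, svc_t, %w, and %b columns shown above every
30 seconds for 10 intervals. The second form adds descriptive device names;
depending on the Solaris release and options, the service time may also be split into
the wsvc_t and asvc_t columns described earlier.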


Page 6-20 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
I/O Profile for AIX and Linux Using nmon

I/O Profile for AIX and Linux Using nmon

nmon provides comprehensive performance data.

The nmon tool is designed for AIX and Linux. It can be used to measure and analyze
performance data such as the following (an example capture command appears after
this list):
 CPU utilization
 Memory use
 Kernel statistics and run queue information
 Disks I/O rates, transfers, and read/write ratios
 Free space on file systems
 Disk adapters
 Network I/O rates, transfers, and read/write ratios
 Paging space and paging rates
 CPU and AIX specification
 Top processes
 IBM HTTP Web cache
 User-defined disk groups
 Machine details and resources
 Asynchronous I/O — AIX only
 Workload Manager (WLM) — AIX only
 IBM TotalStorage® Enterprise Storage Server® (ESS) disks — AIX only
 Network File System (NFS)
 Dynamic LPAR (DLPAR) changes
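To record data for later analysis rather than watching it interactively, nmon can be
run in data-capture mode. A minimal sketch (the interval and snapshot count are
illustrative):

nmon -f -s 30 -c 120

Here -f writes the output to a spreadsheet-format .nmon file in the current directory,
-s 30 takes a snapshot every 30 seconds, and -c 120 stops after 120 snapshots (one
hour). The resulting file can be post-processed with the nmon analyser spreadsheet
or similar tools.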

HDS Confidential: For distribution only to authorized parties. Page 6-21


Performance Tools and Data Sources
Storage Monitoring

Storage Monitoring

 Storage monitoring tools


• Enterprise storage systems
▪ Performance Monitor
▪ Export Tool
▪ RAIDCOM CLI
• Modular storage systems
▪ austatistics
▪ auperform
▪ Performance Monitor (PFM)
• Tuning Manager
• Analytics Tab (HCS)

Page 6-22 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Performance Monitor

Performance Monitor

Enterprise Storage

 Supports:
• All new interface
• User identifies CUs and HBAs to monitor. VSP collects resource data.
• VSP resource data supports graphing resource utilization, including:
▪ Controller, MPs, DRRs
▪ Cache
▪ Access Paths : CHA, DKA, MPs, Cache
▪ Ports, WWNs
▪ LDEVs, LUNs
▪ External Storage

Enterprise Storage

 Collects various statistics per resource:


• Utilization Rates
▪ Usage Rate (%)
• (… this applies to MP, DRR, Cache, CHA/DKA/MP/Cache ESW)
▪ Write Pending Rate (%)
• (… this applies to Cache)
• Host I/O Rates and Response
▪ (… these apply to Ports and WWNs)
▪ Throughput (IOPs)
▪ Data Trans (MB/s)
▪ Response Time (ms)

HDS Confidential: For distribution only to authorized parties. Page 6-23


Performance Tools and Data Sources
Performance Monitor

Enterprise Storage

 Collects various statistics per resource: (cont’)


• Disk I/O Rates and Response
▪ (… all or some apply to LDEVs, LUNs, PGs, External)
▪ Total Throughput (IOPs)
▪ Read Throughput (IOPs)
▪ Write Throughput (IOPs)
▪ Cache Hit (%)
▪ Data Trans (MB/s)
▪ Response Time (ms)
▪ Back trans (count/sec)
▪ Drive Usage Rate (%)
▪ Drive Access Rate (%)
▪ ShadowImage (%)

Page 6-24 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Performance Monitor

VSP

 Accessed in 3 ways:
• Reports
• Accordion Menu
• General Tasks

The Accordion option (Performance Monitor) allows configuring CUs, WWNs, Short-
Range monitoring and so on, in addition to displaying the Monitor Performance
screen with a button. The Monitor Performance screen is the main tool for tracking
resource utilization.
The Reports option, like the Accordion option, allows for configuration and
monitoring, but with menu selections instead of buttons.
The General Tasks option does not provide for configuration; it goes straight to
Monitor Performance. If you look carefully in General Tasks, it says Monitor
Performance rather than Performance Monitor.

HDS Confidential: For distribution only to authorized parties. Page 6-25


Performance Tools and Data Sources
Export Tool

Export Tool

 Export Tool for Enterprise Storage allows you to export:


• monitoring data (statistics) shown in the Monitor Performance window to
text files.
• monitoring data on remote copy operations performed by TrueCopy,
TrueCopy for Mainframe, Universal Replicator, and Hitachi Universal
Replicator for Mainframe.
 Exported text files can be read in Excel or Word for performing
analysis.
 Versions are available for Windows and UNIX
 Use the version of the Export Tool that matches the microcode of the array

If you want to use the Export Tool, you must create a user ID for its exclusive use
before using it. Assign only the Storage Administrator (Performance Management)
role to that user ID; we recommend that you do not assign any other roles to it or use
it to manage the storage system.
The user who is assigned to the Storage Administrator (Performance Management)
role may perform the following operations:
 To save the monitoring data into files
 To change the gathering interval
 To start or stop monitoring by the set subcommand
Refer to the Hitachi VSP Performance Guide to learn more about the installation and
usage of Export Tool.
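Running the Export Tool is driven by a command file. Purely as a hypothetical sketch
of what such a command file can look like (the keyword set, and every value below
including the SVP IP address, user ID, monitoring groups, time range, and output
directory, are illustrative from memory and must be verified against the Hitachi VSP
Performance Guide for your microcode level):

svpip 192.168.0.100
login expusr "password"
show
group PhyPG
group LU
shortrange 201210010000:201210012359
outpath out
option compress
apply

The intent of each line is to identify the SVP, log in with the dedicated Export Tool
user ID, show the stored monitoring period, select the data groups to export, restrict
the time range, and write compressed text files to the output directory.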

Page 6-26 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
RAIDCOM CLI

RAIDCOM CLI

 RAIDCOM is a command included in CCI/RAID Manager and supported on VSP
to perform configuration and information-gathering operations (see the sketch
after this list).
 RAIDCOM can be used out-of-band.
 Useful for quickly collecting:
• Configuration information
• Performance for HDP/HDT Pools
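As a quick, hypothetical sketch of out-of-band collection with raidcom (the
subcommand names follow the CCI documentation, but the user ID, options, and
LDEV ID shown here are illustrative and should be verified against the Command
Control Interface reference for your microcode level):

raidcom -login maintenance-user password
raidcom get dp_pool
raidcom get ldev -ldev_id 0x1000
raidcom -logout

The dp_pool output includes pool usage information useful for HDP/HDT
performance checks, and get ldev reports configuration details for a given LDEV.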

HDS Confidential: For distribution only to authorized parties. Page 6-27


Performance Tools and Data Sources
Modular Storage Systems — Monitoring Options

Modular Storage Systems — Monitoring Options

The slide presents a table of monitoring options by platform, comparing the 9500 V,
AMS500, and AMS1000 with the AMS2x00 family, and indicating which monitoring
tools are free with Resource Manager and which are available as a separate program
product, GSS tool, or TRC tool.

9500 V stands for Hitachi Thunder 9500 V Series Modular storage system.
AMS stands for Hitachi Adaptable Modular Storage.

Page 6-28 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Austatistics — Capabilities

Austatistics — Capabilities

 No reflection of stress (except Inflow Threshold)


• Need PFM for stress and rates
 Requires manual calculation from multiple series for any Rate info
• For example, a collection per day/week/month for rate trending exercises
• Overall Read:Write ratio (for RAID level selection)
• Overall Cache Hit Rates
• Sequential Determination (for Pre-Fetch tuning)

HDS Confidential: For distribution only to authorized parties. Page 6-29


Performance Tools and Data Sources
PFM — Capabilities

PFM — Capabilities

 All Key Stress pointers:


• Write Cache, Write Hit (%), CTL Utilization, HDD Operating Rates
• New: LUN Response Times, LUN Tag Count
• Identify non-owning access (via wrong Controller)
• Enough to fix 95+% of all tuning issues
 Counters help with deeper analysis.

Page 6-30 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Getting the PFM Stats — PFM Output Files

Getting the PFM Stats — PFM Output Files

 PFM Stats
• auperform
• StorNav GUI

HDS Confidential: For distribution only to authorized parties. Page 6-31


Performance Tools and Data Sources
Getting the PFM Stats from the SNM2 GUI

Getting the PFM Stats from the SNM2 GUI

Page 6-32 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Example PFM Output

Example PFM Output

HDS Confidential: For distribution only to authorized parties. Page 6-33


Performance Tools and Data Sources
AMS PFM Real-time Graphing

AMS PFM Real-time Graphing

• Useful for real-time analysis
• Easy to use
• All the PFM metrics can be graphed
• Fixed number of graph points
• One graph per view
• Most recent 50 points view

Page 6-34 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
PFM Real-Time View of Tag Count

PFM Real-Time View of Tag Count

The slide graphs the Tag Count in real time at three levels of the I/O path: the
application, the O/S, and the AMS.

HDS Confidential: For distribution only to authorized parties. Page 6-35


Performance Tools and Data Sources
Performance Management Software Suite

Performance Management Software Suite

 Monitors and displays detailed usage data and trends (statistics)


• On physical hard drives, logical volumes and processors
• About disk drive workloads and traffic between hosts and storage systems

 Assists in analyzing trends in disk I/Os and detects peak I/O time
 Can help to identify system bottlenecks within the storage system

For enterprise storage systems, Performance Monitor lets you obtain usage statistics
about physical hard disk drives, volumes, processors, and other resources in your
storage system. Performance Monitor also lets you obtain statistics about workloads
on disk drives and traffic between hosts and the storage system.

Page 6-36 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Performance Monitor Overview

Performance Monitor Overview

 Launched from Storage Navigator and allows monitoring and


facilitates tuning of a storage system
 Works in conjunction with other performance management
components
• Hitachi Server Priority Manager
• Hitachi Tiered Storage Manager
 Provides statistics on physical hard disk drives, logical volumes, and
processors:
• Resource utilization in your storage system
• Workloads on LDEVs and ports
 Assists in analyzing trends in disk I/Os and detects peak I/O time
 Helps to identify system bottlenecks
 Provides two types of data: long- and short-range

Performance Monitor lets you obtain usage statistics about the physical hard disk
drive, logical volumes, processors, or other resources in your storage system.
Performance Monitor also lets you obtain statistics about workloads on disk drives
and traffic between hosts and the storage system. The Performance Monitor panel
displays a line graph that indicates changes in the usage rates, workloads, or traffic.
You can view information in the panel and analyze trends in disk I/Os and detect
peak I/O time. If system performance is poor, you can use information in the panel
to detect bottlenecks in the system.
If Performance Monitor is not enabled, you cannot use Server Priority Manager.

HDS Confidential: For distribution only to authorized parties. Page 6-37


Performance Tools and Data Sources
Performance Monitor View of Statistical Information

Performance Monitor View of Statistical Information

 Collects various statistics per resource:


• Utilization Rates
 Usage Rate (%)
• (… this applies to MP, DRR, Cache, CHA/DKA/MP/Cache ESW)
 Write Pending Rate (%)
• (… this applies to Cache)
• Host I/O Rates and Response
 (… these apply to Ports and WWNs)
 Throughput (IOPs)
 Data Trans (MB/s)
 Response Time (ms)

 Collects various statistics per resource: (cont’)


• Disk I/O Rates and Response
 (… all or some apply to LDEVs, LUNs, PGs, External)
 Total Throughput (IOPs)
 Read Throughput (IOPs)
 Write Throughput (IOPs)
 Cache Hit (%)
 Data Trans (MB/s)
 Response Time (ms)
 Back trans (count/sec)
 Drive Usage Rate (%)
 Drive Access Rate (%)
 ShadowImage (%)

Page 6-38 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Performance Monitor Collecting Ranges

Performance Monitor Collecting Ranges

 Short Range
• All the statistics that can be monitored by Performance Monitor are
collected and stored in short range
• For 64 or fewer control units
▪ Collects statistics at intervals of between 1 and 15 minutes
▪ Stores them for between 1 and 15 days
• For 65 or more control units
▪ Collects statistics at intervals of 5, 10, or 15 minutes
▪ Stores them for between 8 hours and one day
 Long Range
• Collects statistics at fixed 15 minute intervals, and stores them for 3
months (93 days).

The usage statistics about resources (physical tab) in the storage system are collected
and stored also in long range, in parallel with short range. However, some of the
usage statistics about resources cannot be collected in long range.
Performance Monitor panels can display statistics within the range of the storing
periods above.
You can specify a part of the storing period to display statistics. All statistics, except
some information related to Volume Migration, can be displayed in short range on
Performance Monitor panels. In addition, usage statistics about resources in the disk
subsystem can be displayed in both short range and long range because they are
monitored in both ranges. When you display usage statistics about resources, you
can select the displayed range.

HDS Confidential: For distribution only to authorized parties. Page 6-39


Performance Tools and Data Sources
Performance Management Challenge

Performance Management Challenge

 The performance and capacity management challenge of a SAN storage
environment

The slide diagrams the challenge: for each layer (application/server, SAN switch,
storage), data must be gathered with a device-specific tool, producing separate
server, switch, and storage reports. Each report is interpreted separately, and the
data must then be integrated manually in a spreadsheet: synchronizing time stamps,
unifying different data formats, and correlating the various reports.

Troubleshooting requires a view of the path from the application to the storage
system. Without a tool that consolidates and normalizes all of the data, the system
administrator has difficulty distinguishing between the possible sources of problems
in the different layers involved. When a performance problem occurs or the "DB
application response time exceeds acceptable levels," the administrator must quickly
determine if the problem is in the application server or outside:
 Server/Application Analysis — Is the problem caused by trouble on the server?
(database (DB), file system, Host Bus Adapter (HBA)
 Fabric Analysis — Is there a SAN switch problem? (Port, ISL, and more)
 Storage Analysis — Is the storage system a bottleneck?
All of the data from the components of the Storage network must be gathered by
different device-specific tools and interpreted, correlated, and integrated manually,
including the timestamps, in order to find the root cause of a problem.
Some customers achieve this by exporting lots of data to spreadsheets and then
manually sorting and manipulating the data.

Page 6-40 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Introducing Tuning Manager

Introducing Tuning Manager

 Consolidates and analyzes performance and capacity data while


hiding platform-dependent differences

Tuning Manager spans the server, SAN, storage, and external storage layers:

Server: Oracle, SQL Server, DB instances, tablespaces, file systems, CPU utilization,
memory, paging, swapping, file system performance, capacity and utilization, and
VM guest correlation

Switch: the whole fabric, each switch, each port, MB/sec, frames/sec, and buffer
credits

Storage: ports, LDEVs, parity groups, cache utilization, performance IOPS, MB/sec,
and utilization

The Tuning Manager is a collection of programs that provide information allowing


for central management of a SAN.
Performance and capacity data is collected from the Host Operating System (OS),
file system, switch and storage systems, providing correlation information between
Storage Arrays and Hosts, including VMguests.
Tuning Manager consolidates the data from the entire storage path. It hides much of
the platform dependent differences in performance and capacity data from OS to
database to file system to switch port to storage port to LDEV to parity group for
historical, current, and forecast data.
 Eliminates the user tasks of gathering and integrating metrics
 Provides a single performance view for end-to-end resources
 Uses automated metrics gathering and various reporting
Thus, Tuning Manager simplifies network management and reduces maintenance
costs.

HDS Confidential: For distribution only to authorized parties. Page 6-41


Performance Tools and Data Sources
Centralized Performance and Capacity Management

Centralized Performance and Capacity Management

 Proactive storage resource management requires:


• Understanding of all components and their baseline
• View of all components and their relationships to each other
• Historical database of storage
• View of resource performance at a past point-in-time
• Application-base trends per storage system or across multiple storage
systems

Proactive storage resource management requires:


 A thorough understanding of all components of your current environment and
their baseline, or normal, behavior.
 The ability to view all SAN-attached storage systems, logical volumes, disk array
groups, switches, servers, databases, file systems, and their relationships to each
other.
 A historical database for analyzing trends that may signal potential problems in
applications or storage.
 The ability to view the performance of a resource at a specific past point-in-time,
so that you can correlate any recent configuration changes with changes in
application performance or response time.
 Application-base trends per storage system or across multiple storage systems.
Tuning Manager:
 Monitors storage capacity and performance metrics
 Maps and reports on storage resources with a focus on applications, servers
and storage
 Provides configurable alerts that notify you when performance or capacity
thresholds are exceeded
 Offers an easy-to-use, intuitive graphical user interface (GUI)
 Easily generates management storage reports
 Forecasts storage requirements

Page 6-42 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Tuning Manager Components

Tuning Manager Components

 Collection Manager
 Main Console
 Performance Reporter
 Agents

Collection Manager
Collection Manager is the basic component of the Tuning Manager server.
Main Console
The Main Console stores the configuration and capacity information that the Agents
and Device Manager collect from the monitored resources in the database.
The Main Console displays links to Performance Reporter.
Performance Reporter
Performance Reporter displays performance data collected directly from the Store
database of each Agent.
Agents
Agents manage, as monitored resources, Hitachi disk array storage systems, SAN
switches, file systems on hosts, operating systems, Oracle, and other applications
according to their features.
Agents also collect performance information (such as, the I/O count per second) and
capacity information (such as logical disk capacity) from these resources as
performance data.
Collection Manager
Collection Manager provides following functions:
• Managing Agents
• Managing events issued by Agents
• Controlling data transmission between a Tuning Manager server and Agents
Main Console
According to the specified time frame and interval, the Main Console displays
reports in which the data accumulated in the database is mapped to the performance
data managed by the Agents. The Tuning Manager server database is managed by
the relational database system HiRDB.
The Main Console displays links to Performance Reporter.

HDS Confidential: For distribution only to authorized parties. Page 6-43


Performance Tools and Data Sources
Tuning Manager Components

Performance Reporter
Performance Reporter provides a simple menu-driven method to develop your own
custom reports. In this way, Performance Reporter enables you to display Agent-
instance level reports and customized reports with a simple mouse click.
Performance Reporter also enables you to display reports in which the current status
of monitored targets is shown in real time. Performance Reporter does not connect
to HiRDB.
Agents
Agents run in the background and collect and record performance data. A separate
Agent must exist for each monitored resource. For example, Agent for RAID is
required to monitor storage systems, and Agent for Oracle is required to monitor
Oracle. The Agents can continually gather hundreds of performance metrics and
store them in the Store databases for instant recall.

Page 6-44 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Resources Monitored by Tuning Manager

Resources Monitored by Tuning Manager

Server (Application, Database, OS, HBA Driver): Oracle, SQL Server, DB2, Exchange,
DB instances, tablespaces, file systems, CPU utilization, memory, paging, swapping,
file system performance, capacity, utilization

Switch: whole fabric, each switch, each port, MB/sec, frames/sec, buffer credits

Storage: ports, LDEVs, parity group, cache utilization, performance IOPS, MB/sec,
capacity

HDS Confidential: For distribution only to authorized parties. Page 6-45


Performance Tools and Data Sources
Types of Data Collected

Types of Data Collected

 Storage Systems:
• Performance IOPS (Read, Write, Total), MBs transferred/sec, history, forecast,
and real-time monitor
• By all storage systems
• By a single storage system
• By port
• By processor
• By LDEV
• Cache utilization
• By Disk Parity Group
• By Database Instance, tablespace, Index, and more
• By HDP/HDT Pool

 SAN Switches:
• Bytes Transmitted/Received, Frames Transmitted/Received by SAN, by switch,
and by port
• CRC Errors, Link Errors, Buffer Full/Input Buffer Full, and more

 Servers:
• Server capacity/utilization/performance history
• I/O Performance – total MB/sec, Queue lengths, read/write IOPS, I/O wait time
• File system – Space allocated, used, available, Reads/Writes, Queue lengths
• Device File – Performance and capacity
• CPU busy/wait, Paging/Swapping, process metrics, IPC Shared memory,
semaphores, locks, threads, and more
• NFS client detail and NFS Server detail
• HBA bytes transmitted/received

 Applications:
• Oracle Table Space Performance and capacity: Buffer pools, cache, data blocks
read/write, Tablespaces used, free, and Logs
• Microsoft SQL Server Cache Usage: current cache hit %/trends, Page Writes/sec,
Lazy Writes per second, Redo Log I/O/second, Network: packets sent/received
• DB2 Table Space Performance and capacity: Buffer pools, cache, data blocks
read/write, Tablespaces used, free, and logs
• Exchange database, shared memory queue, information store, mailbox store,
public store, and Exchange server processes

Note: For HDP Pools on Modular storage HTnM collects configuration data only.
The above metrics are collected assuming the relevant agents are installed and
configured to report data to Tuning Manager. These metrics could be the basis for
creating IO Profiles.

Page 6-46 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Key Metrics

Key Metrics

 Examples of key metrics that Tuning Manager collects and monitors


Metric Name | Name in Tuning Manager | Value Description
I/O Rate | IOPS | I/Os per second
Read Rate | Disk Reads/sec | I/Os per second
Write Rate | Disk Writes/sec | I/Os per second
Read Block Size | Avg Disk Bytes/Read | Bytes transferred per I/O operation
Write Block Size | Avg Disk Bytes/Write | Bytes transferred per I/O operation
Read Response Time | Avg Disk Sec/Read | Time required to complete a Read I/O (milliseconds)
Write Response Time | Avg Disk Sec/Write | Time required to complete a Write I/O (milliseconds)
Average Queue Length | Avg Disk Queue Length | Average number of disk requests queued for execution on one specific LUN
Read Hit Ratio | Read Hit % | % of Read I/Os satisfied from Cache
Write Hit Ratio | Write Hit % | % of Write I/Os satisfied from Cache
Disk % Busy | I/O Usage | % utilization for an Array Group
Average Write Pending | Write Pending Rate | Percentage of the Cache used for Write Pending
Port data transfer rate | Port Transfer | Data transfer rate (MB/sec)
Sequential content | Disk sequential IOPS | Number of IOPS in sequential mode
Random content | Disk random IOPS | Number of IOPS in random mode

HDS Confidential: For distribution only to authorized parties. Page 6-47


Performance Tools and Data Sources
Analytics Tab — HCS

Analytics Tab — HCS


 Analyze storage system performance based on predefined
thresholds

You can use the Analytics tab to analyze storage system performance for the Virtual
Storage Platform, Universal Storage Platform V/VM, or Hitachi USP and determine
if a performance bottleneck is related to a storage system.
Note that a Tuning Manager license is required to use the Analytics tab. For details
of Tuning Manager, see the Hitachi Command Suite Tuning Manager Software User
Guide. If there is an application performance problem, obtain the information needed
to identify a logical group, for example the host where the problem occurred, the
label of the volume, or the mount point. Based on the acquired information, specify
the period (up to the past 30 days) for analyzing the performance and the logical
group or the host that corresponds to the application in order to analyze the
performance bottleneck. Check that there is no resource that exceeds the threshold
values, and determine whether the cause of the performance bottleneck is the
storage system. If the problem exists in the storage system, check the performance
information of each metric within the storage system in more detail. If you could not
detect the problem correctly by using the specified threshold value, change the
threshold value.
For more detailed performance information, start Performance Reporter, to perform
such operations as analysis in minute units or analysis of long-term trends. If you
import a report definition to Performance Reporter in Tuning Manager in advance,
you can display a report with items that correspond to the items in the Identify
Performance Problems wizard.

Page 6-48 HDS Confidential: For distribution only to authorized parties.


Performance Tools and Data Sources
Module Summary

Module Summary

 Identified industry-standard workload generators and benchmarking


products
 Identified parameters used in workload generators
 Identified industry-standard host-based tools used for performance
data monitoring and collection
 Demonstrated how to use Hitachi Performance Monitor to review
performance data
 Identified the performance data Hitachi Tuning Manager collects

HDS Confidential: For distribution only to authorized parties. Page 6-49


Performance Tools and Data Sources
Module Review

Module Review

1. How are load generators used for performance testing?


2. What tool(s) is/are available for viewing performance data through
Storage Navigator?
3. What is the advantage of using Tuning Manager over Performance
Monitor?

Page 6-50 HDS Confidential: For distribution only to authorized parties.


7. Acquisition Planning
Module Objectives

 Upon completion of this module, you should be able to:


• Locate and define the metrics to be collected prior to configuring and
sizing storage systems for performance
• Identify the data to be collected when planning for capacity growth and
performance
• Identify the key aspects of analyzing the collected data when planning
systems for capacity growth and for performance
• Configure system capacity for planned growth while maintaining
customer's performance requirements
• Apply concepts of tiers, Resource Pools, and Workload Profiles
• Identify key components to check when planning for SAN scalability
• Identify key areas to check and monitor when planning for NAS scalability
and capacity

HDS Confidential: For distribution only to authorized parties. Page 7-1


Acquisition Planning
Planning for Performance

Planning for Performance

General Concepts

 Storage is a shared resource


 Consider short-term, intense workloads
 Use change control procedures
 Consider I/O
 Provide more bandwidth between disk and cache than front end
ports
 Do not disperse I/O of every application across every array group

Storage is a shared resource, similar to network bandwidth.


Carefully consider the impact of short-term, intense workloads.
Example: database loads/refreshes
Adhere to operational change control procedures to manage such events.
Consider I/O profile of hosts sharing array groups and ports.
Storage deployment should be engineered, to the extent possible, to provide every
server with more bandwidth between disk and cache than is provided by the front
end ports.
It is not necessary to disperse the I/O of every application in the pool across every
array group in the pool.
It is a good practice to standardize on the fewest different combinations of
technology and LDEV sizes required to satisfy application requirements. By
minimizing the number of emulation and RAID types in their configuration, storage
administrators can standardize their storage, thereby making their storage inventory
easier to manage.

Page 7-2 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
Planning for Performance

The storage deployment should be engineered, to the extent possible, to provide


every server with more bandwidth between disk and cache than is provided by the
front end ports. This may be accomplished by:
 Choosing the correct RAID type for the I/O profile
 Providing enough disks to service the workload with techniques such as
interleaved storage pools
 Automating the distribution of I/O traffic with host logical volume striping
 Ensuring that the manual distribution of traffic across array groups is as even as
possible across all of the disks made available to the server
It is not necessary to disperse the I/O of every application in the pool across every
array group in the pool. In fact it is common to define subsets of the pool as I/O
dispersal groups. Nonetheless, it is generally beneficial to distribute the I/O of each
application across more than the minimum number of array groups required to
provide the storage space. This in turn means that applications typically engage in
managed sharing of array group resources.

HDS Confidential: For distribution only to authorized parties. Page 7-3


Acquisition Planning
Planning for Performance

Metrics to Be Collected

 Front end port utilization


• Work to balance a system or predict new loads.
• High microprocessor utilization is an unambiguous indication of high port
utilization.
• Low microprocessor utilization is not by itself a definitive indication of low
port utilization.
• When port microprocessor utilization is low, throughput in MB/s must also
be examined before concluding that port utilization is low.
• Throughput constraints for small block I/O traffic, less than 64K in size,
typically manifest themselves as high microprocessor utilization.
• MP board Utilization on the VSP should be below 60%.

Front End Port Utilization


Storage administrators should work to balance a system or predict new loads. MB/s
and IOPS are typically used to plan load assignments because port pair
microprocessor requirements are unknown for new loads. Actual port pair capacity
depends on the I/O profile of the applications using the port pair and, in fact,
conflicting I/O profiles can reduce this capacity.
The actual port capacity does vary depending upon the profiles of the specific
workloads presented to the port by the servers. Hence, it is always important to
monitor port microprocessor use after implementation and make adjustments if
necessary.
It should be noted that reaching 100% utilization may not be possible due to the
burst profile of hosts. Planning for 70–80% as the maximum available utilization for
any resource is recommended. Consequently, it is good practice to plan for 35% to
50% utilization on a resource pair and approximately 60% utilization for
microprocessor resources deployed in active-active quads.
In order to achieve optimal use of the front end port pairs, the I/O load should be
evenly distributed across all port pairs as much as practical.
 I/O profiles should play a role in the grouping process.
 Group predictable and compatible I/O profiles.

Page 7-4 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
Planning for Performance

 Attempt to separate large block I/O and small block I/O onto different port pairs
when possible.
 Group sporadic I/O profiles.
 Consider redundancy requirements.
 Avoid introducing a single point of failure.
 Failover should not overwhelm a port or port pair; that is, avoid creating a
cascading failure.
 Service levels should play a role in the grouping process.
 Physically or logically isolate critical hosts.
Another factor to consider in planning port loading is what happens when a port
fails. If ports are deployed in pairs, then the port microprocessor use of each
member of the pair should be kept below approximately 40% during the normal
load cycle. This allows continued operations without degradation in the event that
one member of the pair fails and the surviving port must carry the entire load (now
80%). In cases where port pair microprocessor use is above 80%, it is likely that
hosts are negatively impacting each other as they compete for storage processing
resources.
One way to improve on the effective capacity of ports is to deploy I/O paths in
groups of four, also known as quads. If one path of a quad fails, then 25% of the
capacity available to the affected servers is lost. However, if one path of a pair fails,
then 50% of the capacity available to the affected servers is lost.
Assuming that 80% utilization represents effective full use, and assuming continued
operation without degradation is a goal, the net of this failure consideration is:
40% should be considered full utilization under normal operating circumstances for
port pairs and ACP pairs.
60% should be considered full utilization under normal operating circumstances for
paths in a balanced port quad.

HDS Confidential: For distribution only to authorized parties. Page 7-5


Acquisition Planning
Planning for Performance

Metrics to Be Collected

 Back end port utilization


• Recommended range is 35% to 50%.
• Failure on a single BED requires the remaining BED to service the entire
workload.
• If the workload on the failing back end port is above 50%, then it is likely
to cause I/O degradation on the surviving BED.
 Cache utilization
• Primary concern is the cache write pending rate.
• Key metric is % write pending—writes yet to be destaged to disk.
• Write Pending of 30% or below is considered normal operation.

% write pending is the percentage of cache occupied by writes that have yet to be
destaged to disk and is a measure of this accumulation of write data in cache.
Back End Port Utilization
Options to alleviate Back End Director (BED) overutilization:
Add more BEDs and disk enclosures. Evenly distribute the array groups across all
BED pairs in the storage system. This allows the BED utilization to remain within
recommended deployment practices and ultimately ensure the storage system is not
susceptible to I/O degradation caused by single-point failures.
Prevent BED over utilization by migrating LDEVs to another storage system.
For example to remediate BED over utilization on 9980V you can migrate some
LDEVs serviced by the BED to a USP V.
Cache Write Pending Rate
Write Pending of 30% or below is considered normal operation. Write pending of
40% or above warrants corrective action.
When the cache write pending percentage reaches 70%, the storage system stops
accepting new writes in an attempt to destage the writes currently in cache. This
type of spike has a severe impact on all hosts using the storage system.

Page 7-6 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
Planning for Performance

Metrics to Be Collected

 Array group utilization


• The key metrics when evaluating array group performance
are tracks/sec and % utilization.
• If an array group is over-utilized, it can degrade read I/O performance to the
host and cause write data to back up into cache.
 Hard disk unit (HDU) busy percentage less than 75%
• HDU utilization should be less than 75%.
• Otherwise, there is a better than 75% chance of waiting on a physical I/O
request to the back end array group.
• Contention is unlikely when HDU utilization is under 50%.

Array Group Utilization


An array group with a high number of tracks per second could be a sign of over
utilization of that array group. An array group with utilization above 50% is a cause
for concern. If an array group is over-utilized it can cause degradation of I/O
performance to the host on reads and data backing up into cache on writes.
An array group with utilization above 50% is a cause for concern. Like
microprocessors, array groups typically run out of available capacity between 70%
and 100% utilization, sometimes as a consequence of the application’s burst profile.
The use of host level Logical Volume Manager (LVM) Striping and Storage Pools can
help to alleviate single array groups having a high tracks/sec by distributing that
I/O across a larger number of array groups.
To do this distribute LDEVs across multiple Array Groups and multiple BED pairs.
Use the host level LVM to create a host volume that is evenly distributed across
multiple Array Groups and multiple BED pairs.
Such striping increases the back end bandwidth available to a host volume. If the
back end bandwidth provided is greater than the front end bandwidth provided,
then a buildup of writes pending in cache cannot occur.
Another alternative is to migrate the volume to a DP-VOL and take advantage of
wide-striping capabilities of Hitachi Dynamic Provisioning.
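As a hedged illustration of host-level LVM striping (Linux LVM2 syntax shown; the
volume group name, logical volume name, stripe count, and stripe size are all
hypothetical and must match the number of LDEVs actually presented from different
array groups and BED pairs):

lvcreate -i 4 -I 256 -L 400G -n data_lv vg_data

This creates a 400GB logical volume striped across 4 physical volumes (for example,
one LDEV from each of 4 array groups) with a 256KB stripe size, so that host I/O to
the volume is distributed evenly across the array groups and BED pairs behind it.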

HDS Confidential: For distribution only to authorized parties. Page 7-7


Acquisition Planning
Planning for Performance

HDU Busy Percentage


Look for over-busy HDUs.
If imbalanced, you can use any of the following options:
 Spread some load to another RAID group
 Change the RAID level to one more appropriate for the workload
 Use faster disks
 Add more disks to the RAID group (if allowed)
These options can also be tried in any allowed combination.

Page 7-8 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
Planning for Performance

Metrics to Be Collected

 Response time
• This threshold depends on the application needs and the Service Level
Agreement (SLA) for the application.
• Since the Logical Unit (LU) Response Time has a direct impact on
applications, this indicator should be monitored on key LUs to determine
deltas as loads increase.
 Watch out for worst performing LUs
• Use Performance Reporter or Tuning Manager to monitor worst
performance LUs.

For example:
In a Microsoft Exchange environment, the LU Read or write Response Time for the
Database Disk should be below 20ms in average and spikes should not be higher
than 50ms.
For the Temp Disk, the LU Response Time should be below 10 ms in average and
spikes should not be higher than 50ms.
In a Microsoft SQL Server environment, the LU Read Response Time for the
Database Disk should be below 20ms in average and spikes should not be higher
than 50ms.
 Less than 10ms is very good
 Between 10–20ms is okay
 Between 20–50ms is slow, needs attention
 Greater than 50ms is a serious I/O bottleneck

HDS Confidential: For distribution only to authorized parties. Page 7-9


Acquisition Planning
Response Time Factors

Response Time Factors

 Storage system port utilization


• Port load balancing is important.
• Protocol time on a physical port is very low — microseconds.
• A physical port can process thousands of I/O requests.
▪ Block size is critical for planning and analysis.
• Priority to a virtual port (HSD) within a physical port is managed with
Hitachi Server Priority Manager.
• Port priority can also be controlled at the storage system level (using
Server Priority Manager).

In contemporary storage systems, the overhead associated with protocol time on a
physical port is very low. It is now measured in microseconds, where it was once
measured in milliseconds.
This allows for one physical port to handle thousands of I/Os before reaching the
saturation point.
This saturation point will be reached faster when the average block size is large (64K
and higher).
Performance problems are seen when load balancing is not performed properly
during configuration and/or capacity planning. Performance problems could occur
because of:
 Too many HSDs on the same physical port
 Too many active LUNs on the same physical port
On the Enterprise Storage Systems, priority to a Virtual Port (HSD) can be assigned
within a physical port using Server Priority Manager.
Physical port priority can also be controlled at the storage system level.

Page 7-10 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
Modular System Controller

Modular System Controller

 Microprocessor (MP) utilization less than 75%?


• MP utilization should be under 75% on average.
• If utilization is higher than 75%, can you spread some load to the other
core (Adaptable Modular Storage 2500) or controller?
 Also, look for utilization imbalance between controllers.
• If there is an imbalance, can you spread some load to the other
controller?

Note: Load balancing attempts to redistribute the load between CPUs


and cores.

HDS Confidential: For distribution only to authorized parties. Page 7-11


Acquisition Planning
Capacity Planning

Capacity Planning

 Estimate the storage growth based on historical use.


 Hitachi Tuning Manager components allow you to analyze historical storage
system needs over the past twelve months:
• Tuning Manager Main Console
• Tuning Manager Performance Reporter

 For an OLTP environment, when planning from a performance


perspective, ask your customer to provide:
• Expected sustained peak IOPS
• Expected cache read hit rate
• Expected read I/O percentage

OLTP — On-Line Transaction Processing
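A simplified, hypothetical sizing sketch shows why these three inputs matter.
Assume a sustained peak of 10,000 IOPS, 70% reads, a 50% cache read hit rate,
RAID-5 (with its nominal write penalty of 4 back-end operations per host write), and
that cache absorbs read hits only:

Host reads = 10,000 x 0.70 = 7,000 IOPS; read misses = 7,000 x (1 - 0.50) = 3,500 back-end reads
Host writes = 10,000 x 0.30 = 3,000 IOPS; RAID-5 back-end operations = 3,000 x 4 = 12,000
Total back-end operations = 3,500 + 12,000 = 15,500 per second

The number of disks is then estimated by dividing by the per-drive IOPS capability
of the chosen drive type. The workload numbers here are purely illustrative; the
RAID write penalty and drive IOPS figures used in a real plan should come from the
sizing guidelines for the specific storage system.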

Page 7-12 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
I/O Profile

I/O Profile

 I/O Profile information can be obtained from OS monitoring and


storage system statistics.

Maintenance Commands
iostat(1M)

r/s Reads per second


w/s Writes per second
(r/s + w/s = Total IOPS)
(100 x r/s / (r/s + w/s) = Read %)
Kr/s Kilobytes read per second
(kr/s divided by r/s = av. Read Blocksize)
Kw/s Kilobytes written per second
(kw/s divided by w/s = av. Write Blocksize)
wait Average number of transactions waiting for service (queue length)
actv Average number of transactions actively being serviced
(removed from the queue but not yet completed)
svc_t Average service time, in milliseconds
%w Percent of time there are transactions waiting for service (queue non-empty)
%b Percent of time the disk is busy (transactions in progress)
wsvc_t Average service time in wait queue, in milliseconds
asvc_t Average service time active transactions, in milliseconds

Note: Block-sizes must be calculated
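A minimal sketch of capturing and reducing this profile data on Solaris (the interval,
sample count, device name, and output file are illustrative):

iostat -x 30 120 > io_profile.txt
awk '/sd9/ { iops = $2 + $3; if (iops > 0) printf "IOPS=%.1f Read%%=%.1f\n", iops, 100*$2/iops }' io_profile.txt

With plain iostat -x output the device name is the first column, so $2 and $3 are r/s
and w/s; the awk line simply applies the Total IOPS and Read % calculations from
the table above to one device (sd9 here). The average read and write block sizes are
derived the same way, from Kr/s divided by r/s and Kw/s divided by w/s.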

HDS Confidential: For distribution only to authorized parties. Page 7-13


Acquisition Planning
Storage Pools

Storage Pools

 A collection of array groups that are managed as a unit and service a


defined group of applications with compatible I/O profiles and service
objectives
 A management concept similar to divide and conquer
• Are not a product feature or anything more than a storage administration
discipline
 Encourages distributing the I/O of each application across the pool
• Rather than restricting an application to minimum number of array groups
required to supply requisite space
 Seeks to limit dispersal of the activity to the storage pool or a subset of the
storage pool
• In essence, disperse I/O, but do not scatter it beyond a justified dispersal

Storage Pools
The administrative objective is to:
 Provide each member of the pool with access to as much array group bandwidth
(IOPS and MB/s) as it is likely to require, even in exceptional circumstances
 Distribute I/O as evenly as possible within the pool
 Keep the storage allocations for the group of applications within the pool
It is not necessary to disperse the I/O of every application in the pool across every
array group in the pool. In fact it is common to define subsets of the pool as I/O
dispersal groups. Nonetheless, it is generally beneficial to distribute the I/O of each
application across more than the minimum number of array groups required to
provide the storage space. This in turn means that applications typically engage in
managed sharing of array group resources.
This approach gives each application access to a larger maximum storage bandwidth
capacity from the array groups it uses. This benefit arises from the premise that it is
unlikely that every application in a dispersal group will have its peak bandwidth
requirements at the same moment.

Page 7-14 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
Storage Pools

Storage Pools also share an available space pool. For example, space is added to the
pool, one or more array groups at a time. Considering the Logical Volume Manager
recommendations earlier in this report, capacity will probably be added to Storage
Pools four array groups at a time. This fact alone makes a large number of small
pools inappropriate.
Storage Pools should be large enough to achieve I/O dispersal and bandwidth
sharing among applications, and large enough to reasonably avoid excessive
fragmentation of the available space pool. Storage pools should be small enough to
be manageable.

HDS Confidential: For distribution only to authorized parties. Page 7-15


Acquisition Planning
Workload and Workload Profile Considerations

Workload and Workload Profile Considerations

 Impact of a new or increasing workload on other workloads should


be considered when engineering the storage system or when
performing activities that are outside normal load profile.
• For example, activities such as database loads or refreshes can result in
a short term, intense impact on the bandwidth use of storage system
resources and can thereby cause contention for the now scarce
resources.
 Additionally, aggregate workload generated by multiple hosts starting
activities at the same time also can result in an intense impact on
bandwidth use.
 Possible solutions:
• Establishing operational change control procedures
• Staggering the start times of scheduled activities

The impact of a new or increasing workload on other workloads should be


considered when engineering the storage system or when performing activities that
are outside the normal load profile.
Establishing operational change control procedures can serve to manage these types
of events. Change control procedures enable the IT organization to make
appropriate business decisions to schedule these activities or to ensure that these
activities make the most efficient use of available resources.
Staggering the start times of scheduled activities can help to minimize the impact of
these activities by spreading the resource demand across a longer timeframe.

Page 7-16 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
Workload and Workload Profile Considerations

 Look for compatible I/O profiles


• Service objectives
• Demand cycles
• Request sizes

Compatible service objectives means not having applications that seek minimum
response time sharing the same resources with applications that seek maximum
throughput.
Demand cycles are most compatible when different servers sharing the same
resources have their peak demands at different times.
Request sizes means not mixing small block I/O with large block I/O on the same array group or front end port at the same time. For example, avoid mixing request sizes that differ by more than a few doublings, such as 8K with 64K.

HDS Confidential: For distribution only to authorized parties. Page 7-17


Acquisition Planning
SAN and Hitachi NAS Platform Design

SAN and Hitachi NAS Platform Design

SAN Design Basics

 Common parameters for data center SAN designs include:


• Availability — Storage data must always be accessible to applications.
• Performance — Acceptable, predictable and consistent I/O response
time.
• Efficiency — No waste of resources (ports, bandwidth, storage, power).
• Flexibility — Optimize data paths to use capacity efficiently.
• Scalability — Grow connectivity and capacity as required over time.

 Common parameters for data center SAN designs include (cont’):


• Serviceability — Expedite troubleshooting and problem resolution.
• Reliability — Design redundancy and stability of operations into the SAN.
• Manageability — Streamline transport and storage administration.
• Cost — Design within budget and account for ongoing operational
expenses.

Page 7-18 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
SAN Design Principles

SAN Design Principles

 Minimize the number of fabrics to be managed.


 Minimize the number of switches per fabric.
 Limit fabric size.
 Use switches with a high level of RAS.1
 Avoid over-subscription that causes congestion or degraded
performance.
 Use the core-edge model for large environments.2
 Design for storage traffic.
 Keep it simple.

1 Reliability, availability, and serviceability (RAS)


2 See a sample on the next slide.

HDS Confidential: For distribution only to authorized parties. Page 7-19


Acquisition Planning
Core-Edge Model Sample

Core-Edge Model Sample

Page 7-20 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
SAN Planning

SAN Planning

 When planning to implement Hitachi TrueCopy® Synchronous in an


existing SAN environment, check:
• Status of each component
• Current utilization of these SAN infrastructure components
Determine if they are capable of handling the additional anticipated
TrueCopy I/O.
 To use ISLs in a SAN environment, check that there is enough ISL
bandwidth available to handle the additional anticipated TrueCopy
I/O.
 Other important issues to check for an ISL configuration:
• Distance
• Trunk setup
• Estimated throughput rates

HDS Confidential: For distribution only to authorized parties. Page 7-21


Acquisition Planning
Planning a Hitachi NAS Platform Environment

Planning a Hitachi NAS Platform Environment

 When planning the storage pool layout and the configuration for a
NAS Platform (HNAS) environment, ask the customer to provide:
• I/O behavior of the application that will run on the environment
• Data and file system structure

 In HNAS, you can change the read ahead chunk size property.
• Best to turn off Pre-fetch in the storage
 Modify this setting according to the Read I/O behavior of the
application which runs on NAS Platform.
 Optimal setting improves the Read I/O performance.

 When planning and designing an HNAS implementation with an Enterprise Storage system at the back end, consider some common practices.
• Recommended — Create or use just one LDEV per physical RAID group.
• Not Recommended — Using LDEVs with different sizes to create the
HNAS Storage Pool.
• Recommended — Connect each HNAS node in a two node cluster
environment to different storage clusters. On USP-V/VM use separated
Fibre Channel front end MPs for the different connections to the HNAS
cluster nodes.
• Recommended — Choose a setup where the I/O workload gets
distributed and balanced at the storage system Fibre Channel front end
and the back end.

Page 7-22 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
Sample HNAS — Storage System Connectivity

Sample HNAS — Storage System Connectivity

[Diagram: HNAS Node 1 and HNAS Node 2, each using FC Port 1 and FC Port 3, connected to storage ports CL1-A, CL2-A, CL3-A and CL4-A, distributed across storage clusters CL1 and CL2.]

HDS Confidential: For distribution only to authorized parties. Page 7-23


Acquisition Planning
Sample Abstract HNAS Storage Pool Configuration

Sample Abstract HNAS Storage Pool Configuration

[Diagram: an HNAS storage pool built from four LDEVs (LDEV 1 to LDEV 4), each created on its own RAID group (RAID Group 1 to RAID Group 4).]

Page 7-24 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
Storage Pool Recommended Practices

Storage Pool Recommended Practices

 Cluster
• All file systems in a storage pool belong to EVSs on one node.
 Queue depth maximum 512 per Cluster (Modular Storage).
 The Node has a fixed SCSI Queue Depth of 32 per LUN.

 One LUN spans the complete RAID group.


 Minimum of four SDs in a storage pool.
 Even number of SDs in pool.
 Although the architecture allows the initiators to discover up to 256 LUNs
each, the default limit is set to 32 LUNs per target.
 The LUN limit can be increased by changing the default up to 256 as the
maximum.
 Do not mix system drives with different performance characteristics (for
example, Type, Speed, RAID Type) within a storage pool.
 Expand pool by adding system drives of same size
 One storage pool in use by one node

HDS Confidential: For distribution only to authorized parties. Page 7-25


Acquisition Planning
Module Summary

Module Summary

 Located and defined the metrics to be collected prior to configuring


and sizing storage systems for performance
 Identified the data to be collected when planning for capacity growth
and performance
 Configured system capacity for planned growth while maintaining
customer's performance requirements
 Applied concepts of tiers, Resource Pools, and Workload Profiles
 Identified key components to check when planning for SAN
scalability
 Identified key areas to check and monitor when planning for NAS
scalability and capacity

Page 7-26 HDS Confidential: For distribution only to authorized parties.


Acquisition Planning
Module Review

Module Review

1. High microprocessor utilization is an indication of high port


utilization. (True/False)
2. What are the key metrics for measuring RAID group performance?
3. What is the recommended practice related to selection of LDEVs
for creating storage pools?

HDS Confidential: For distribution only to authorized parties. Page 7-27


Acquisition Planning
Module Review

Page 7-28 HDS Confidential: For distribution only to authorized parties.


8. Deployment Planning
— Part 1
Module Objectives

 Upon completion of this module, you should be able to:


• Describe concept of VDEV, Logical Devices, and LUNs and Tag
Command Queuing
• Describe best practices when configuring and sizing storage systems for
industry-standard
▪ Database applications
▪ File sharing and streaming applications
▪ Decision support applications
▪ Data warehousing applications
▪ Backup and archiving applications
▪ Data protection technologies
• Determine the appropriate load balancing algorithm to optimize storage
performance using Hitachi Dynamic Link Manager

HDS Confidential: For distribution only to authorized parties. Page 8-1


Deployment Planning — Part 1
VDEV, LDEV and LUN Concepts (Enterprise Storage)

VDEV, LDEV and LUN Concepts (Enterprise Storage)

Configuration Concepts — VDEV

 Logical container in which LDEVs are placed


 VDEV has enabled the subsystem microcode designers to treat
groups in similar ways
 For internal parity groups, VDEVs are formed when emulation mode
is set
 Where parity groups are concatenated, VDEVs are interleaved
across the parity group set
 A VDEV mapped to an external LUN may be slightly larger than the
mapped external LUN

The VDEV is the logical container in which LDEVs are placed.


The development of the VDEV has enabled the subsystem microcode designers to
treat the following groups in a very similar way :
1. Internal parity groups
2. External parity groups mapped to External LUNs
3. Copy-On-Write Snapshot V-VOL groups
4. Hitachi Dynamic Provisioning V-VOL groups (USP V and later)
For internal parity groups, VDEVs are formed when the emulation mode is set:
 If the parity group size is less than the maximum VDEV size (more detail to
follow), a single VDEV is mapped to the parity group.
Where parity groups are concatenated:
 VDEVs associated with the set of parity groups being concatenated are
interleaved (striped) across the parity group set
 Within each row across an extended stripe across the set of concatenated parity
groups, there is one RAID stripe from each VDEV on each of the parity groups
within the set.
 Successive RAID stripes from a single VDEV are mapped to parity groups within
the concatenated group on a round-robin basis. (See diagram on a later page.)
A VDEV mapped to an external LUN may be slightly larger than the mapped
external LUN
 This is to account for issues such as cache slot alignment
 However, the user size is still exactly the right one when using USP's OPEN-V

Page 8-2 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
Four Types of Enterprise VDEVs

Four Types of Enterprise VDEVs


[Diagram: a host with two HBAs connects through storage ports to LUs; each LU maps to an HDEV (a single LDEV or a LUSE) carved from one of the four VDEV types: an internal parity group VDEV, an external parity group VDEV (which connects through ports to an LU on an external subsystem), a CoW V-VOL group, or an HDP V-VOL group. Virtual LDEVs are individually mapped to Pools by Pool ID.]
CoW V-VOL and HDP volumes are called virtual LDEVs because they do not necessarily have physical disk space allocated for their entire (virtual) size.
Pool volumes are LDEVs carved out of an internal parity group type VDEV, or carved out of an external parity group type VDEV.
HDEV = Head LDEV

HDS Confidential: For distribution only to authorized parties. Page 8-3


Deployment Planning — Part 1
LU, HDEV, LUSE and LDEV

LU, HDEV, LUSE and LDEV

• LDEV numbers for USP: 00:00 or CU:LDEV
• LDEV numbers for VSP/USP-V look like 00:00:00 or LDKC:CU:LDEV
• There is only one LDEV name space. That is, each LDEV number within a subsystem is unique.
• A path is identified by: 1. Port, 2. Host group or HSD, 3. LUN
• An LDEV is a slice of a VDEV; a LUSE is comprised of 2 to 36 LDEVs and is identified by its head LDEV.
• Host paths comprising a LU map onto an HDEV.
• The HDEV is identified by its LDEV number and is visible using the SVP configuration printout tool.
[Diagram: Host with two HBAs connects through storage ports to a LU (Logical Unit); the LU maps to an HDEV, which is either a single LDEV carved from a VDEV or a LUSE.]

LU
A Logical Unit is a virtual volume that is presented to a host as a path.
Path
A path is identified by three components:
 Port
 Host group or HSD (Host Storage Domain) identified by numeric ID or by name
 LU number (LUN)
HDEV
 Host paths comprising a LU map onto an HDEV, which is either mapped directly to
an LDEV or mapped to a LUSE.
 The HDEV is identified by its LDEV number, which for a single LDEV is the LDEV’s
number and which for a LUSE is the number of the first LDEV in the LUSE.
 The HDEV is visible using the SVP configuration printout tool.
LDEV
 LDEV numbers for the USP (RAID500) look like 00:00 or CU:LDEV.
 LDEV numbers for the USP V (RAID600) look like 00:00:00 or LDKC:CU:LDEV.
 There is only one LDEV name space. That is, each LDEV number within a subsystem
is unique.
LUSE
 A LUSE is comprised of 2 to 36 LDEVs.
 Is identified by the LDEV number of the first LDEV in the LUSE (the head LDEV)

Page 8-4 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
VDEV Size

VDEV Size

 Maximum size depends on the type of VDEV


 For an external Open-V VDEV maximum VDEV size is exactly 60TB.
 For an internal (physical parity group) Open-V VDEV maximum size
is slightly less than 3TB.
 Copy-on-Write V-VOL groups have a size of 4TB on the USP V
 USP-V/VM and VSP HDP V-VOL groups have a size of 4TB
 Modular: Largest LUN is 128TB on HUS100 family

 There is a maximum VDEV size, that depends on the type of VDEV.


 Parity is not counted in the VDEV size.
 For an external Open-V VDEV (one that corresponds to an external LUN):
 The maximum VDEV size is exactly 60TB.
 For an internal (physical parity group) Open-V VDEV
 The maximum size is slightly less than 3TB.
 If the size of the data portion of the physical parity group (not counting
parity) is less than or equal to the max VDEV size, one VDEV is assigned.
 For parity groups where the size is less than or equal to 3TB, the entire
parity group is mapped to one VDEV.
 If the data portion of the size of the physical parity group is larger than the
maximum VDEV size, then as many maximum-size VDEVs as will fit are
assigned; if there is remaining space, an additional VDEV the size of the
remaining space is also assigned.
 For example, an Open-V 7+1 parity group with 750GB drives will have
two VDEVs, and with a single maximum size LDEV on each, the LDEV on
the first VDEV is 3,145,663.5MB or 6,442,318,848 (512 byte) blocks and the
LDEV on the second VDEV is 1,785,126.0MB or 3,655,938,048 blocks.
 CoW V-VOL groups have a size of 4TB on the USP V
 This was previously 2TB on the USP
 VSP HDP V-VOL groups have a size of 60TB
 It was 2TB at the time the USP V first shipped
 Modular: Largest LUN is 128TB on HUS100 family
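As an illustration of the VDEV assignment rule described above, the following sketch splits the data portion of a parity group into VDEVs. The maximum VDEV size is passed in as a parameter because the exact internal limit ("slightly less than 3TB" for an internal Open-V VDEV) is microcode-dependent; the function name is illustrative.

    def split_into_vdevs(usable_blocks, max_vdev_blocks):
        """Return the VDEV sizes (in 512-byte blocks) assigned to one parity group."""
        vdevs = []
        remaining = usable_blocks
        while remaining > 0:
            vdevs.append(min(remaining, max_vdev_blocks))
            remaining -= vdevs[-1]
        return vdevs

    # Using the block counts from the 750GB-drive example above, and treating the
    # first LDEV's size as the maximum VDEV size for illustration only:
    print(split_into_vdevs(6_442_318_848 + 3_655_938_048, 6_442_318_848))
    # -> [6442318848, 3655938048]  (one full-size VDEV plus one smaller VDEV)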

HDS Confidential: For distribution only to authorized parties. Page 8-5


Deployment Planning — Part 1
Parity Group Name, VDEV Name and Number

Parity Group Name, VDEV Name and Number

 Each parity group has a name


• Internal parity group names: “1-1”
• External parity group names: “1-1 #” or “E1-1”
• CoW V-VOL group names: “1-1 V” or “V1-1”
• HDP V-VOL group names: “1-1 X” or “X1-1”
 Each type of parity group name has a separate name space
 VDEV names are formed from the parity group name by adding a
suffix indicating the relative position of VDEV within parity group
• Here is what an Open-V 7+1 group of 750GB drives looks like:

 Each VDEV also has a VDEV number that appears in SVP configuration data and Performance Monitor data

Each parity group has a name.


 Internal parity group names look like “1-1”.
 External parity group names look like “1-1 #” or “E1-1”.
 These two forms are equivalent, meaning “1-1 #” and “E1-1” refer to the same
parity group.
 CoW V-VOL group names look like “1-1 V” or “V1-1”.
 Again, these two forms are equivalent.
 HDP V-VOL group names look like “1-1 X” or “X1-1”.
Each of the four types of parity group names has a separate name space, so “1-1”,
“1-1 #”, “1-1 V”, and “1-1 X” refer to different parity groups.

Page 8-6 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
LDEV Alignment within VDEVs

LDEV Alignment within VDEVs

 For physical disk parity groups, LDEVs are aligned on parity group
stripe (row) boundaries
 For CoW and HDP V-VOL groups, LDEVs are aligned on cache slot
boundaries.
 For emulation types other than Open-V, LDEVs are aligned on
logical emulation cylinder boundaries.
 Alignment of LDEVs on above boundaries will mean that LDEV size
will be rounded up to a multiple of the boundary interval for the
purpose of laying out the starting point of the LDEV within the VDEV.

For physical disk parity groups, LDEVs are aligned on parity group stripe (row)
boundaries
 Although LDEV sizes are in increments of one sector, each LDEV must start at
the beginning of a parity group stripe (row). Therefore you may have wasted
space at the end of the stripe that contains the last sectors in an LDEV.
 For example, in Open-V 3+1, the stripe size is 64 KB per logical track x 8 logical
tracks per chunk x 3 chunks per stripe = 1.5 binary MB per stripe.
 USP / NSC55 / USP V / USP VM Open-V cache slots contain from 1 to 4
cache segments, each corresponding to an Open-V logical track of 64KB.
Open-V logical cylinders consisting of 15 logical tracks (an old legacy
concept) come into play when sizing volumes in MB*. Use sizing in blocks
(sectors) to get exact volume sizes.
 For Mainframe 3390 3+1, stripe size is 58 binary KB per track x 8 tracks per
chunk x 3 chunks per stripe = 1392 binary KB per stripe.
For CoW and HDP V-VOL groups, LDEVs are aligned on cache slot boundaries.
 For Open-V, cache slot size is 4 x 64 binary KB = 256 binary KB

HDS Confidential: For distribution only to authorized parties. Page 8-7


Deployment Planning — Part 1
LDEV Alignment within VDEVs

For emulation types other than Open-V, LDEVs are aligned on logical emulation
cylinder boundaries.
 For Open-X, logical track size is 48 binary KB, logical cylinders are 15 logical
tracks, thus logical cylinder size is 15 x 48 binary KB = 720 binary KB.
This was true for 9900V's OPEN-V as well
Alignment of LDEVs on the above boundaries will mean that LDEV size will be
rounded up to a multiple of the boundary interval for the purpose of laying out the
starting point of the LDEV within the VDEV.
 This affects the number of LDEVs that fit within a VDEV.
 Some LDEV types also require control cylinders. This also affects the number of
LDEVs that fit within a parity group.*
*Note: See the LUN Expansion and Virtual LVI/LUN User’s Guide for more details.
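The boundary rounding can be sketched as below, using the Open-V 3+1 stripe size worked out above (64 KB logical track x 8 tracks per chunk x 3 data chunks). The function name and constant are illustrative.

    KB = 1024
    OPEN_V_3P1_STRIPE = 64 * KB * 8 * 3          # 1.5 binary MB per stripe (row)

    def next_ldev_start(previous_end_bytes, boundary=OPEN_V_3P1_STRIPE):
        """Round up to the next boundary; the next LDEV starts here within the VDEV."""
        full_rows, remainder = divmod(previous_end_bytes, boundary)
        return (full_rows + (1 if remainder else 0)) * boundary

    # Any gap between previous_end_bytes and the returned offset is the wasted
    # space at the end of the stripe mentioned above.
    print(next_ldev_start(5 * 1024 * 1024))      # 6291456 (the next 1.5 MB boundary)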

Page 8-8 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
Supplementary Note on Pool Volumes

Supplementary Note on Pool Volumes

Host
HBA HBA

Port Port
V-VOL
Group Pool ID
LU HDEV LDEV Pool ID
VDEV

LUSE

• When internal LDEVs are assigned as CoW or HDP pool volumes,


the suffix “ P” is shown on the name of parity group that pool
volume LDEV is on
• For example, if a normal LDEV 00:12:34 which is on parity group
“1-1” is assigned as a CoW or HDP pool volume, from that point
on, LDEV 00:12:34 will show as being on parity group “1-1 P”.
In this case, the “P” is really a marker to indicate special
treatment as a pool volume, not that there is a parity group
whose name is “1-1 P”
• You can see this using the SVP configuration printout tool

When (internal) LDEVs are assigned as CoW or HDP pool volumes, the suffix “ P” is
shown on the name of the parity group that the pool volume LDEV is on.
For example, if a normal LDEV 00:12:34 which is on parity group “1-1” is assigned
as a CoW or HDP pool volume, from that point on, LDEV 00:12:34 will show as
being on parity group “1-1 P”. In this case, the “P” is really a marker to indicate
special treatment as a pool volume, not that there is a parity group whose name is
“1-1 P”.
You can see this using the SVP configuration printout tool.

HDS Confidential: For distribution only to authorized parties. Page 8-9


Deployment Planning — Part 1
Concatenated Parity Groups — VDEV Striping

Concatenated Parity Groups — VDEV Striping

[Diagram: two 7+1 parity groups, “1-1” and “5-1”, shown side by side with their chunk numbering. LDEVs are allocated within VDEVs; one VDEV shows as being on parity group 1-1 and the other as being on parity group 5-1. Each VDEV is laid out on disk in units of one chunk of 8 logical tracks; for Open-V, a chunk is 512 KB.]

 Parity groups 1-1 and 5-1 have been concatenated, meaning that VDEVs for parity groups
1-1 and 5-1 are now striped across both physical parity groups.
• This does not change parity group that VDEV starts on
• This does not change size of VDEV

Notes: This numbering layout shown above is for normal (non-concatenated) parity
groups. (Colors are accurate above, not necessarily the numbering of the chunks.)
Chunk numbering for concatenated parity groups may be laid out in a different
fashion so as to be able to read chunks 0–15 at once from all 16 drives.

Page 8-10 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
LUN Mapped to LDEV on Concatenated Parity Group

LUN Mapped to LDEV on Concatenated Parity Group

[Diagram: Host with two HBAs connects through storage ports to a LU; the LU maps to an HDEV/LDEV on a VDEV that is striped across two concatenated 7+1 parity groups (16 drives in total).]

 Characteristics of concatenated parity group approach:


• Random reads/writes clustered within a few 10s of MB will be distributed
over all 16 drives
 Maximum size of an LDEV is the same as on a single parity group.
• For 300 GB drives, max LDEV size is about 2 TB (7 times 288GB).
 Concatenation is at the level of entire parity groups.

HDS Confidential: For distribution only to authorized parties. Page 8-11


Deployment Planning — Part 1
LUN Mapped to LUSE

LUN Mapped to LUSE

[Diagram: Host with two HBAs connects through storage ports to a LU; the LU maps to an HDEV that is a LUSE of two LDEVs, each LDEV on its own VDEV and parity group (8 drives each).]

 Characteristics of LUSE approach:


• Sequential reads/writes access 8 drives at once
• Random access clustered within a few 10s of MB will distribute
over 8 drives
 Maximum size of LUSE is about 60TB
 LUSEs are built at individual LDEVs level

Characteristics of LUSE approach:


 Sequential reads/writes access 8 drives at once (completely read through
each LDEV in turn).
 Random access clustered within a few 10s of MB will distribute over 8 drives
(within one LDEV).
 Maximum size of LUSE is about 60 TB
 LUSE can contain up to 36 LDEVs, but max size is about 60TB
 With any large LUN, be careful that you have a big enough queue depth to
handle the number of simultaneously outstanding I/O operations per LUN

Page 8-12 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
Tag Command Queuing (TCQ)

Tag Command Queuing (TCQ)

Tagged Command Queuing

 Technology built into disk subsystems that allows them to receive multiple read and write requests and service them in any order, with the objective of increasing performance
 Order of completion is influenced by several factors:
• Elevator Optimization: Sequence of physical disk accesses can be
changed in order to reduce overall time spent seeking.
• Cache Hits: Data can be read or written to cache while storage
subsystem is waiting for other disk access to complete.

The Value of TCQ: A queue may not always be bad.


Systems with multiple resources, for example RAID Arrays with several disks
supporting each LUN, will exhibit the lowest Response Times when the I/O queue
is low. They will exhibit equally low utilization levels at the RAID group level. This
leads to the situation where not all the potential performance can be extracted
within any target Response Time. IOPS are left on the table.
Increasing the I/O queue past the point where the RAID groups start to become
heavily utilized usually results in a sharper rate of increase of Response Time
compared to IOPS. The point at which this change occurs is called the knee of the
curve, or hockey stick if the transition is sharp.
Implementing a level of queuing that approaches but does not pass the knee of the
curve would be the objective of the storage architect looking for a balance between
performance and cost.
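The trade-off can be made concrete with Little's Law, a general queueing identity (not an HDS-specific formula): achievable IOPS is roughly the number of outstanding I/Os divided by the response time. A minimal sketch, with an illustrative function name:

    def iops_from_concurrency(outstanding_ios, response_time_s):
        # Little's Law: throughput = concurrency / latency
        return outstanding_ios / response_time_s

    # With a queue depth of 1 and a 5 ms response time the ceiling is 200 IOPS;
    # a queue depth of 16 at the same response time allows up to 3200 IOPS.
    print(iops_from_concurrency(1, 0.005))    # 200.0
    print(iops_from_concurrency(16, 0.005))   # 3200.0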

 Native Command Queuing (NCQ) is the term for SATA queue


handling.
 In HDS storage systems, TCQ is implemented at the host-storage interface and NCQ is implemented for the internal RAID SATA disks.

HDS Confidential: For distribution only to authorized parties. Page 8-13


Deployment Planning — Part 1
Why Do We Need TCQ?

Why Do We Need TCQ?

 Allows the controller to carry out optimization through the ability to re-order and overlap commands
 A cache hit can be serviced from the queue
 Allows concurrent disk operations
 Optimization increases the utilization efficiency of storage subsystem resources
• Increased utilization of bus = improved throughput
• Increased number of commands processed = increased I/O rates
• Commands processed with less latency = reduced response times
[Diagram: a server issuing commands to a disk controller and its RAID set.]

Page 8-14 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
Tagged Command Queuing

Tagged Command Queuing


[Diagram: commands waiting in the host are passed to the subsystem as ‘active’ tagged commands per volume, governed by max_throttle, target queue depth, LUN queue depth, and so on. In the example, Tags #001 to #008 are active at the subsystem, further I/Os wait in the server, and Tags #009 to #256 remain available for the RAID set. Solaris sees Waiting = 6 and Active = 8; W2K sees a single Queue = 14.]
• Queue entries can be re-ordered, but not ahead of writes to the same LUN.
• Concurrency is effective with all entries.
• Applications with frequent access patterns benefit from cache hits.

W2K does not differentiate.


Note: Different O/S and Storage Response Times.

HDS Confidential: For distribution only to authorized parties. Page 8-15


Deployment Planning — Part 1
Varying the HBA LUN Queue Depth

Varying the HBA LUN Queue Depth

Random 4KB Read Test

 The application must also generate concurrent IOPS in order for I/O
Rates to benefit from a large TCQ count.

[Chart: I/O rate vs HBA LUN queue depth for the random 4KB read test. Performance is stuck at its lowest point with single-threaded workloads and when TCQ=1; doubling the number of tags gives roughly a 50% increase in IOPS, which raises the question of whether using twice the number of tags is worth a potential 50% improvement.]
Page 8-16 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
Random 4KB Read Test

Random 4KB Read Test

Response Chart

 Performance is not just I/O rates — another reason to have a large


TCQ count is response time.

[Chart: response time for the same test. Response time is usually at its best with single-threaded workloads and when TCQ=1; at TCQ=16 the response time is higher but also near the mechanical limit of 2 array groups (8 drives).]
HDS Confidential: For distribution only to authorized parties. Page 8-17


Deployment Planning — Part 1
Queuing — Impact of Writes in the Queue

Queuing — Impact of Writes in the Queue

IOMETER Queuing = from 1 to 20 on 9500, 2 x 4+1 RAID5 Vols

[Chart: I/O rate per second vs IOMETER outstanding I/O count (1 to 20) for two curves, TCQ=16 read-only and TCQ=16 read/write. The mix of reads and writes impairs the optimization, so the read/write curve levels off at a lower I/O rate.]

Page 8-18 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
Rules and What Happens When Things Go Wrong?

Rules and What Happens When Things Go Wrong?

 Rules
• Do not exceed the physical number of tags per port/LUN.
▪ # Servers x # LUNs x LUNQD <= 512 (Each active I/O uses 1 Tag)
▪ Need to check the specifications of each storage system for its tag limits.
• No help from the factory if you break the rules!
• Note: The rules can be broken in some cases.
 Working out if it has gone wrong – and what happens
• Unexplained drop in IOPS (but this needs a high IOPS rate in the first place)
• Unexplained increase in response time
• Unexplained variability of response time
• Need to calculate the sum of the active queues of all servers or LUNs on the port

# Servers x # LUNs x LUNQD <= 512 (Each active I/O uses 1 Tag).
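A minimal sketch of this rule. The 512-tag limit is used as the default here, but as noted above, check the specification of the actual storage system; the function names are illustrative.

    def tags_required(servers, luns_per_server, lun_queue_depth):
        # Each active I/O uses one tag.
        return servers * luns_per_server * lun_queue_depth

    def within_port_limit(servers, luns_per_server, lun_queue_depth, port_tag_limit=512):
        return tags_required(servers, luns_per_server, lun_queue_depth) <= port_tag_limit

    # Example: 4 servers x 4 LUNs x LUN queue depth 32 = 512 tags, exactly at the limit.
    print(within_port_limit(4, 4, 32))    # True
    print(within_port_limit(4, 8, 32))    # False: 1024 tags would oversubscribe the port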

HDS Confidential: For distribution only to authorized parties. Page 8-19


Deployment Planning — Part 1
Difference between Target Mode and LUN Queue Depth Mode

Difference between Target Mode and LUN Queue Depth Mode

 USP V supports LUN Queue Depth Mode
• Must set an HBA LUN queue depth value <32
• No more than 512 tags per Vport
• No more than 1024 tags per physical port (2048 for VSP)
• LUSE LUNs are constrained by the LUN queue depth limit
 AMS supports target mode and LUN Queue Depth Mode (512 tags per physical port)
• No LUN queue depth limit (but you can still set an HBA LUN queue depth if you want)
• Can set HBA target mode = 512 without any consideration for LUN queue depth
▪ Example: 4 servers on a port should use target mode = 128 for each server
• LUSE LUNs are unconstrained when in target mode

If AMS is used as external storage, the maximum queue depth for the whole system is limited to 500.

Page 8-20 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
Checking the HBA LUN Queue Depth

Checking the HBA LUN Queue Depth

HDS Confidential: For distribution only to authorized parties. Page 8-21


Deployment Planning — Part 1
Example of Poor Performance Due to Low Execution Throttle (Target Mode)

Example of Poor Performance Due to Low Execution Throttle


(Target Mode)

“I am having trouble getting the throughput of an AMS up in a benchmark test. It is for a BIG customer and they have their hair on fire. Please give me a call ASAP.”
The cause: 12 RAID groups (108 HDDs) were sharing a tag limit of 16.

Page 8-22 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
Modular Systems — Queue Details

Modular Systems — Queue Details

 Maximum of 512 command data blocks (tags) available on each of


the host ports independent of the server’s queue depth per LUN
 Queue depth available for each LUN assigned to a port is dynamic
• Will depend on how many unused queue slots there are from port’s limit
of 512 at any particular moment
 Example — Four servers using a single port
• Each is assigned four LUNs from that port
• Hosts are all constantly active
• Then maximum dynamic queue depth per LUN would tend to be 32
▪ That is, 512 port QD ÷ (4 LUNs × 4 hosts)
 Whether or not a particular RAID group can sustain the total of these
dynamic queue depths for all of its LUNs is a function of the disk type
used (FC, SATA) and RAID group size

Note: For performance reasons, when an AMS 2000 is used in an external storage configuration, the queue depth is restricted to 500 for the whole system.

HDS Confidential: For distribution only to authorized parties. Page 8-23


Deployment Planning — Part 1
AMS Queue Depth Experiment

AMS Queue Depth Experiment

[Diagram: a server running HDLM with one iSCSI path and two FC paths to LDEV 39. HDLM supports iSCSI and FC together; the storage sees the aggregate of the queue depth on each path.]

Page 8-24 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
Example of AMS Misconfiguration

Example of AMS Misconfiguration

Best Practice Guidelines: spread the load across many HDDs
[Diagram: three ways of presenting 72 HDDs (8 x 8D+1P RAID groups) to a server.
• A single LUSE LUN across all 8 RAID groups with an HBA LUN queue depth of 8: if the application queue reaches 64, 56 elements stay queued/pending in the server and most HDDs will be idle. Not good.
• The same LUN with an HBA target queue depth of 64: if the queue reaches 64, all HDDs may potentially become active concurrently.
• A host logical volume across separate LUNs, one per RAID group, each with an HBA LUN queue depth of 8: again, if the queue reaches 64, all HDDs may potentially become active concurrently.]

HDS Confidential: For distribution only to authorized parties. Page 8-25


Deployment Planning — Part 1
USP V — External Tag Count Definition

USP V — External Tag Count Definition

 Up to 512 tags per MP port pair (selectable per LUN) on USP V


 Adjustable LUN
queue depth

Note: Modular Storage supports up to 500 tags per subsystem when used as external storage. If the queue depth of I/Os sent from the virtualizing storage to an AMS 2000 used as external storage is too high, a large amount of dirty data can accumulate in the cache memory, which can inhibit new write commands in the controller. It also takes much longer to search through that dirty data, so CPU utilization becomes high and performance degradation can occur.

Page 8-26 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
Basic External Tag Considerations — FC

Basic External Tag Considerations — FC

 Example with 8 tags per external LUN
[Diagram: three ways of mapping CVS LUs in the USP to external LUNs on one external RAID group (RG), shown as ratios of CVS LUs : external LUNs : RGs.]
• 1:1:1 – 8 tags are allowed per external LUN (USP); 8 tags will allow 8 concurrent I/Os. The tag count stays at 8 although the workload keeps incrementing.
• 3:3:1 – 3 very active LUNs (3 x 8 tags) could saturate the external RG.
• 3:1:1 – 3 very active CVS LUNs will be throttled in the USP; the 3 CVS LUNs will share the 8 tags. Throttling should be passed back to the server.

HDS Confidential: For distribution only to authorized parties. Page 8-27


Deployment Planning — Part 1
Basic External Tag Considerations — SATA

Basic External Tag Considerations — SATA

[Diagram: the same three mappings, with the external LUNs on SATA drives in an AMS.]
• 8 tags are allowed per external LUN, but the AMS passes on only 4 tags per LUN and processes any extra in software.
• 1:1:1 – one CVS LU on one external LUN on one RG.
• 3:3:1 – multiple active LUNs will saturate the external RG and cause queuing in the AMS.
• 3:1:1 – multiple active CVS LUNs will be throttled in the USP; the 3 CVS LUNs will share the 8 tags (4 in AMS hardware, 4 in software). Throttling should be passed back to the server.
Page 8-28 HDS Confidential: For distribution only to authorized parties.


Deployment Planning — Part 1
Module Summary

Module Summary

 Described concept of VDEV, Logical Devices, and LUNs and Tag


Command Queuing
 Described best practices when configuring and sizing storage
systems for industry-standard
▪ Database applications
▪ File sharing and streaming applications
▪ Decision support applications
▪ Data warehousing applications
▪ Backup and archiving applications
▪ Data protection technologies
 Provided instruction in how to determine the appropriate load
balancing algorithm to optimize storage performance using Hitachi
Dynamic Link Manager

HDS Confidential: For distribution only to authorized parties. Page 8-29


Deployment Planning — Part 1
Module Review

Module Review

1. What is a VDEV?
2. What is Command Tagged Queuing?

Page 8-30 HDS Confidential: For distribution only to authorized parties.


9. Deployment Planning
— Part 2
Module Objectives

 Upon completion of this module, you should be able to:


• Describe best practices when configuring and sizing storage systems for
▪ Hitachi NAS products
▪ Response time
▪ Throughput
• Describe best practices when configuring industry standard mail
messaging and collaboration applications

HDS Confidential: For distribution only to authorized parties. Page 9-1


Tiers, Resource Pools and Workload Profiles
Application Monitoring

Application Monitoring

Online Transaction Processing

 Aspects of Online Transaction Processing (OLTP) Workloads


• Consist mainly of processing application transactions
• Are typical of all organizations
• Includes order entry systems, online banking, airline reservations, and
eBusiness
• Will not see sudden changes in I/O rate
• Normally will follow a behavior pattern (for example, lunchtime slump)
• Interactive during day, batch during nights and weekends
 Typical Metrics
• R/W ratio: From 1:1 to 4:1
• I/O type: Random access
• Block size: From 4KB to 8KB

The running application is going to drive you to the metrics that you observe.
Airline reservations are actually a special case because small block size (less than
1KB) creates distinct requirements.

Page 9-2 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Online Transaction Processing

 Typical I/O Tendencies


• Five to ten percent of total I/O activity is sequential
 Relates to sequential writes to transaction logs and journals
 What to Watch
• I/O rate
• Physical read I/O response time
 RAID-1+0 recommended for database
 RAID-1+0 or RAID-5 recommended for logs. RAID-1+0 would be used for enhanced write performance (if needed) and to minimize the performance impact of a disk hardware failure
What to Watch
The most significant metric for OLTP workloads is I/O rate. The higher the I/O rate, the
higher the total number of transactions the OLTP application can process. This is
fundamental for applications such as online banking and reservation systems. When
attempting to maintain very high I/O rates, the most critical metric to watch is response time.
The key to good overall response time in an OLTP environment is good physical read I/O
response time. OLTP environments generate mostly random access I/O with a low locality
of reference (LOR). This means that the cache read hit ratio is likely to be low.
 What to watch with OLTP — IOPS and response curve

OLTP Workload Comparison


75% Random, 75% Read, blocksize=4KB
8 x 6D+1P (56 HDDs) vs 7 x 4D+4D and 1 x 3D+3D (62 HDDs)

40

35
Users define their
30 acceptable or target
response levels
Response in Msec

25
RAID1 15K
RAD5 15K
20
RAID5 10K
RAID1 10K
15

10

0
0 2000 4000 6000 8000 10000 12000 14000 16000
IOPS

Achievable IOPS Rates


will vary with different disk
and RAID choices

HDS Confidential: For distribution only to authorized parties. Page 9-3


Tiers, Resource Pools and Workload Profiles
Online Transaction Processing

Metrics

Metric Name         | Name in HTnM         | Value Description                          | Normal Value | Bad Value | As seen from
I/O Rate            | IOPS                 | I/Os per second                            | N/A          |           | Host View
Read Rate           | Disk Reads/sec       | I/Os per second                            | N/A          |           | Host & Port View
Write Rate          | Disk Writes/sec      | I/Os per second                            | N/A          |           | Host & Port View
Read Block Size     | Avg Disk Bytes/Read  | Bytes xfered per I/O operation             | 4096 to 8192 | N/A       | Host View
Write Block Size    | Avg Disk Bytes/Write | Bytes xfered per I/O operation             | 2048 to 8192 | N/A       | Host View
Read Response Time  | Avg Disk Sec/Read    | Time required to complete a Read I/O (ms)  | 1 to 7       | >7        | Host & Port View
Write Response Time | Avg Disk Sec/Write   | Time required to complete a Write I/O (ms) | 1 to 2       | >2        | Host & Port View

HTnM Rules of Thumb Guide Document:
• These values are a starting point. You need to calibrate your applications and fine-tune the values that are OK in your environment.
• This will help you remember what is a good value or a bad value if you need to troubleshoot a problem at a later date.

 Values shown in the Normal Value column are planning estimates.
 You should baseline your I/O profile when all systems are in a good and normal running state.

Response times in a throughput-oriented application may not be relevant.

Metrics

Metric Name           | Value Description                                                         | Normal Value
Average Queue Length  | Average number of disk requests queued for execution on one specific LUN | 1 to 8 (8 is the max Qdepth value)
Read Hit Ratio        | % of Read I/Os satisfied from Cache                                       | 20 to 50
Disk % Busy           | % utilization for an Array Group                                          | < 50%
Average Write Pending | Percentage of the Cache used for Write Pending                            | 1% to 35%
                      | Bytes xfered per I/O operation                                            | 2048 to 4096
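As one way of using the planning thresholds above, measured values can be compared against the ceilings where the tables define a "bad" direction (response times, queue length and write pending). This is a minimal sketch; the dictionary and function names are illustrative.

    # Ceiling-style thresholds taken from the OLTP tables above.
    OLTP_CEILINGS = {
        "read_response_ms":  7,     # >7 ms is flagged as bad
        "write_response_ms": 2,     # >2 ms is flagged as bad
        "avg_queue_length":  8,     # 8 is the max Qdepth value
        "write_pending_pct": 35,
    }

    def flag_bad_values(measured):
        """Return the metrics whose measured value exceeds its ceiling."""
        return {name: (value, OLTP_CEILINGS[name])
                for name, value in measured.items()
                if name in OLTP_CEILINGS and value > OLTP_CEILINGS[name]}

    # Example: a 9 ms read response time is flagged against the 7 ms ceiling.
    print(flag_bad_values({"read_response_ms": 9.0, "write_response_ms": 1.5}))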

Page 9-4 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Rich Media and Streaming

Rich Media and Streaming

 Aspects of Rich Media and Streaming


• Includes Web advertising using advanced technology
▪ Streaming video
▪ Interactive applet downloads
▪ Page elements that change upon mouse over
• Includes video on-demand, such as CNN live on the Internet
• Long running, steady state multiple streams of sequential I/O
 Typical Metrics
• R/W ratio: 50 to 1 or greater
• I/O type: Sequential access
• Block size: 64 KB

HDS Confidential: For distribution only to authorized parties. Page 9-5


Tiers, Resource Pools and Workload Profiles
Rich Media and Streaming

 Typical I/O Tendencies


• Most requests are read I/Os
• Server sends enough blocks so streaming action does not lose frames
▪ Example: MPEG2 is 30 frames per second
• Limiting factor for application performance is network speed
 What to Watch
• Read data transfer rate, controlled by the read hit ratio
• Cache read hit ratio (of at least 90%)
 RAID-5 recommended for database

Typical I/O Tendencies


Typically the server must send enough blocks ahead of time, across the SAN, so that
the streaming action does not lose any frames — note that Mpeg2 is 30 frames per
second. In most cases, the limiting factor for application performance is SAN speed.
What to Watch
A successful rich media environment can maintain the streaming rate to satisfy all
application users. The key metric to watch when assessing rich media application
performance is read data transfer rate (DTR). The read hit ratio is critical to
maintaining a specific DTR.
The rich media read-ahead process requires a high sequential read hit rate, and this
depends on good physical read I/O response time. For smooth application
performance, the cache read hit ratio must be at least 90 percent. Using cache
residency for frequently accessed media, such as video, helps to ensure good read
hit rates.

Page 9-6 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Rich Media

Rich Media

Video, Seismic, PACS, Audio

Metric Name                      | Name in HTnM            | Value Description                          | Normal Value              | Bad Value                                  | As seen from
I/O Rate                         | IOPS                    | I/Os per second                            | N/A                       | N/A                                        | Host View
Read Rate                        | Read IOPS               | I/Os per second                            | N/A                       | N/A                                        | Host View
Read Block Size                  | Avg Disk Bytes/Read     | Bytes xfered per I/O operation             | 65536                     | N/A                                        | Host View
Read Data xfer Rate per user (1) | Avg Read Disk Bytes/sec | Bytes xfered per second per user           | .5 to 1.875 MB/s per user | < .5 MB/s per user                         | Host View
Port Utilization (100 users) (2) | Port Transfer (MB/s)    | Data Transfer Rate                         | 50 to 187 MB/s            | >187 MB/s (OK if Response Time still good) | Port View
Read Response Time               | Avg Disk Sec/Read       | Time required to complete a Read I/O (ms)  | 1 to 2                    | >2                                         | Host & Port View
Read Hit Ratio                   | Read Hit %              | % of Read I/Os satisfied from Cache        | 90% to 100%               | < 90%                                      | Port View
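A minimal sketch of the port-level arithmetic behind rows (1) and (2) of the table above, using its per-user streaming range of 0.5 to 1.875 MB/s; the function name is illustrative.

    def port_throughput_mb_s(users, per_user_mb_s):
        # Aggregate read data transfer rate the port must sustain for the streams.
        return users * per_user_mb_s

    # 100 concurrent users at the table's per-user range:
    print(port_throughput_mb_s(100, 0.5))      # 50.0 MB/s
    print(port_throughput_mb_s(100, 1.875))    # 187.5 MB/s (the table's ~187 MB/s ceiling)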

HDS Confidential: For distribution only to authorized parties. Page 9-7


Tiers, Resource Pools and Workload Profiles
Email Systems

Email Systems

 Aspects of Email Systems


• Text message and attachments of various file types and sizes
• Recipients inside and outside the organization
• Increasingly important to all enterprises
• Includes Microsoft Exchange and Lotus Notes
• Similar to OLTP
 Typical Metrics
• R/W ratio: From 5:1 to 1:1
• I/O type: Random read and write access
• Block size: 4KB

Page 9-8 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles

 Typical I/O Tendencies


• 10% of activity is sequential
 What to Watch
• I/O Rate — Higher I/O rate means more messages per second
• I/O response time
▪ Back end distribution
▪ Destage activity at the back
▪ Cache sizes
 RAID-1+0 recommended for database
 RAID-1+0 or RAID-5 recommended for logs
• RAID-1+0 would be used for enhanced write performance and to minimize the performance impact of a disk hardware failure

Typical I/O Tendencies


Approximately 10 percent of a typical email system’s I/O activity is sequential. The
sequential I/O activity is related to email transactions and consists of sequential
writes, typically moving large blocks (64KB).
What to Watch
I/O Rate is the most important metric to the email system. The higher the I/O rate,
the more messages per second the email system can deliver. Maintaining very high
I/O rates requires paying attention to the response time metric.
Achieving the best email system I/O response time depends on good back end
distribution of mailboxes across the storage system’s Array Groups. Because the
workload contains a high percentage of writes, destage activity at the back end also
can be very high. Larger cache sizes can help to maintain good I/O response time.
RAID-1+0 or RAID-5
RAID-1+0 would be used for enhanced write performance (if needed) and to minimize the performance impact of a disk hardware failure.

HDS Confidential: For distribution only to authorized parties. Page 9-9


Tiers, Resource Pools and Workload Profiles
Mail Metrics — Microsoft® Exchange, Notes

Mail Metrics — Microsoft® Exchange, Notes

Metric Name         | Name in HTnM         | Value Description                          | Normal Value | Bad Value | As seen from
I/O Rate            | IOPS                 | I/Os per second                            | N/A          | N/A       | Host View
Read Rate           | Read IOPS            | I/Os per second                            | N/A          | N/A       | Host View
Write Rate          | Write IOPS           | I/Os per second                            | N/A          | N/A       | Host View
Read Block Size     | Avg Disk Bytes/Read  | Bytes xfered per I/O operation             | 4096         | N/A       | Host View
Write Block Size    | Avg Disk Bytes/Write | Bytes xfered per I/O operation             | 4096         | N/A       | Host View
Read Response Time  | Avg Disk Sec/Read    | Time required to complete a Read I/O (ms)  | 1 to 5       | >5        | Host & Port View
Write Response Time | Avg Disk Sec/Write   | Time required to complete a Write I/O (ms) | 1 to 2       | >2        | Host & Port View

Metric Name           | Name in HTnM          | Value Description                                                         | Normal Value                       | Bad Value                                                              | As seen from
Average Queue Length  | Avg Disk Queue Length | Average number of disk requests queued for execution on one specific LUN | 1 to 8 (8 is the max Qdepth value) | >8 (note that max Qdepth might vary with different hardware settings) | Host View
Read Hit Ratio        | Read Hit %            | % of Read I/Os satisfied from Cache                                       | 35 to 65                           | < 35                                                                   | Port View
Write Hit Ratio       | Write Hit %           | % of Write I/Os satisfied from Cache                                      | 100% (0% on USP)                   | < 100%                                                                 | Port View
Average Write Pending | Write Pending Rate    | Percentage of the Cache used for Write Pending                            | 1% to 35%                          | > 35%                                                                  | Port View

Page 9-10 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Microsoft Exchange

Microsoft Exchange

 Key Data Repositories for Microsoft Exchange


• Temporary
• Database
• Transaction Logs
• SMTP Queue
• Page File

HDS Confidential: For distribution only to authorized parties. Page 9-11


Tiers, Resource Pools and Workload Profiles
Data Repositories for Exchange

Data Repositories for Exchange


 Temporary
• Stores all temporary files, especially for indexing
• Location for format conversions
• Data access will interfere with other transactions
• Should be in its own Array Group
• Key counters:
▪ Average Disk sec/Read ― Less than 10ms; no spikes higher than
50ms
▪ Average Disk sec/Write ― Same as above
▪ Average Disk Queue Length ― Less than the number of spindles

The operating system temporary drive is where all the format conversions occur,
such as from RTF to HTML. It is also the home for all temporary files created and
accessed during crawls performed by the Microsoft Index Server Indexing Service.
When first installed, the operating system sets the location for creation and use of
temporary files as the same disk used by the operating system itself. This means that
any I/O for the temp disk competes with I/O for programs and page file operations
being run from that drive. This competition for I/O impacts performance. To avoid
having the operating system compete with for I/O with the temp disk, it is
recommended that you change the global environment setting of TEMP to point to
another disk and, thereby, set the temp disk to its own disk.

 Database
• .edb file ― Stores all MAPI messages and tables
• .stm file ― Stores all non-MAPI data (dropped for 2007)
• Characterized by random I/O
• Should be on its own Array Group
• Counters:
▪ Average Disk sec/Read & Write ― Less than 20ms; no spikes higher
than 50ms
▪ Read Hit % ― Greater than 35%
▪ Disk Queue Length ― Lower than 9
▪ Read Response Time ― Less than 6ms
▪ Write Response Time ― Less than 3ms
Note: Synchronous replication will impact the counters above, so also use:
▪ Database Page Fault Stalls/sec ― Should be 0

Page 9-12 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Data Repositories for Exchange

Exchange servers should generally have database write latencies under 20ms, with
spikes (maximum values) under 50ms. However, it is not always possible to keep
write latencies in this range when synchronous replication is in use. Database write
latency issues often do not become apparent to the end user until the database cache
is full and cannot be written to. When using synchronous replication, the
Performance Monitor Database Page Fault Stalls/sec counter is a better indicator of
whether the client is being affected by write latency than the Physical Disk\Average
Disk sec/Write counter.
On a production server, the value of the Database Page Fault Stalls/sec counter
should always be zero, because a database page fault stall indicates that the database
cache is full. A full database cache means that Exchange cannot place items in cache
until pages are committed to disk. Moreover, on most storage systems, read
latencies are affected by write latencies. These read latencies may not be detectable
at the default storage system Performance Monitor sampling rate. Remote procedure
call (RPC) latencies also increase as a consequence of database page fault stalls,
which can degrade the client experience.
Because disk-related performance problems can negatively affect the user experience,
it is recommended that administrators monitor disk performance as part of routine
system health monitoring. When analyzing a database logical unit number (LUN) in
a synchronously replicated environment, you can use the counters listed below to
determine whether there is any performance degradation on the disks:
 Average Disk sec/Read & Write – less than 20ms; no spikes higher than 50ms
 Read Hit % - greater than 35%
 Disk Queue Length – lower than 9
 Read Response Time – less than 6ms
 Write Response Time – less than 3ms
Note: Synchronous replication will impact the counters above, so also use this counter:
 Database Page Fault Stalls/sec – Should be 0

HDS Confidential: For distribution only to authorized parties. Page 9-13


Tiers, Resource Pools and Workload Profiles
Data Repositories for Exchange

 Transaction Logs
• Maintain database integrity
• Writes only — write performance is key
• Characterized by sequential I/O
• Should be in its own volume
• Counters:
▪ Average Disk sec/Write ― Less than 10ms; no spikes higher than
50ms
▪ Log Record Stalls/sec ― Less than 10 per second; no spikes higher
than 100 per second
▪ Log Threads Waiting ― Less than 10

The transaction log files maintain the state and integrity of your .edb and .stm files.
This means that the log files, in effect, represent the data. There is a transaction log
file set for each storage group. To increase performance, Exchange implements each
transaction log file as a database. If a disaster occurs and you have to rebuild your
server, use the latest transaction log files to rebuild your databases. If you have the
log files and the latest backup, you can recover all of your data. However, if you lose
your log files, the data is lost.
There are generally no reads to the log drives, except when restoring backups. This
means that write performance is essential to the transaction logs and any analysis
should closely observe this aspect. When analyzed per physical log disk, you can
use the counters listed below to determine whether there is any performance
degradation on the disks:
 Average Disk sec/Write – Less than 10ms; no spikes higher than 50ms
 Log Record Stalls/sec – Less than 10 per second; no spikes higher than 100 per
second
 Log Threads Waiting – Less than 10

Page 9-14 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Data Repositories for Exchange

 SMTP Queue
• Stores messages until processed by Exchange
• Characterized by random I/O
• Key counters:
▪ Average Disk sec/Read ― Less than 10ms; no spikes higher than
50ms
▪ Average Disk sec/Write ― Same as above
▪ Average Disk Queue Length ― Less than number of spindles

The SMTP queue stores SMTP messages until Exchange writes them to a database
(private or public), or sends them to another server or connector. SMTP queues
generally experience random, small I/Os.
When analyzed per physical SMTP queue disk, you can use the counters listed
below to determine if there is any performance degradation on the disks:
 Average Disk sec/Read – Less than 10ms; no spikes higher than 50ms
 Average Disk sec/Write – Same as above
 Average Disk Queue Length – Less than number of spindles

HDS Confidential: For distribution only to authorized parties. Page 9-15


Tiers, Resource Pools and Workload Profiles
Data Repositories for Exchange

 Page File
• Disk cache for physical memory
• Always used, even when there is excess RAM
• Key counters:
▪ Average Disk sec/Read ― Always less than 10ms
▪ Average Disk sec/Write ― Always less than 10ms
▪ Average Disk Queue Length ― Less than number of spindles

The page file acts as an extension of the physical memory, serving as an area where
the system puts unused pages or pages it will need later. The page file always sees
some use, even in machines with a good amount of free memory. This constant
utilization is because the operating system tries to keep in memory only the pages
that it needs and enough free space for operations. For example, a printing tool that
is used only at startup might have some of its memory paged to disk and never
brought back if it is never used.
In servers where the physical memory is being used heavily, it is important to
ensure that all access to the page file is as fast as possible and to avoid thrashing
situations. It is common for servers to start seeing errors in memory operations long
before the page file is full. So, observing usage patterns of the page file disk is more
important than how full the disk is. Use the counters listed below to determine
whether there is any performance degradation on the page file disk:
 Average Disk sec/Read – Always less than 10ms
 Average Disk sec/Write – Always less than 10ms
 Average Disk Queue Length – Less than number of spindles
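
The same small set of latency and queue-length thresholds recurs across the Exchange data repositories above (database, transaction log, SMTP queue, and page file disks), so they lend themselves to an automated check. The following is a minimal sketch in Python, assuming counter samples have already been exported from Windows Performance Monitor; the counter names, sample values, and spindle count are illustrative only.

# Minimal sketch: evaluate Exchange disk counters against the rule-of-thumb
# thresholds listed above. Counter names and sample values are illustrative.

THRESHOLDS = {
    # counter name: (average limit, spike limit) in seconds
    "Avg. Disk sec/Read":  (0.010, 0.050),
    "Avg. Disk sec/Write": (0.010, 0.050),
}

def check_disk(samples, spindles, queue_samples):
    """samples: {counter: [values in seconds]}, queue_samples: [queue lengths]."""
    findings = []
    for counter, values in samples.items():
        avg_limit, spike_limit = THRESHOLDS[counter]
        avg = sum(values) / len(values)
        if avg > avg_limit:
            findings.append(f"{counter}: average {avg*1000:.1f} ms exceeds {avg_limit*1000:.0f} ms")
        if max(values) > spike_limit:
            findings.append(f"{counter}: spike {max(values)*1000:.1f} ms exceeds {spike_limit*1000:.0f} ms")
    if sum(queue_samples) / len(queue_samples) > spindles:
        findings.append("Avg. Disk Queue Length exceeds the number of spindles")
    return findings or ["no degradation detected"]

# Hypothetical page file disk on a 4-spindle RAID group
print(check_disk(
    {"Avg. Disk sec/Read": [0.004, 0.006, 0.012],
     "Avg. Disk sec/Write": [0.003, 0.005, 0.004]},
    spindles=4,
    queue_samples=[1, 2, 3]))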

Page 9-16 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Exchange 2010 Considerations

Exchange 2010 Considerations

 Exchange Server 2010 introduces several architectural changes


• Storage Groups
• Data Availability Groups
• Databases

 Exchange 2010 Mailbox Server Role Requirements Calculator


• http://gallery.technet.microsoft.com/Exchange-2010-Mailbox-Server-Role-/

Exchange 2010 Mailbox Server Role Requirements Calculator


http://gallery.technet.microsoft.com/Exchange-2010-Mailbox-Server-Role-/
Storage Groups
Exchange Server 2010 does not use storage group objects. This change from previous
versions of Exchange facilitates database mobility and means that Exchange data is
now protected at the database level instead of at the server level. This results in a
simpler and faster failover and recovery process. Limits on the number of databases
per server still exist; the maximum for the standard edition of Exchange Server 2010
is five and the maximum for the enterprise edition is 100.
Database Availability Groups
To support database mobility and site resiliency, Exchange Server 2010 introduces
Database Availability Groups (DAGs). A DAG is an object in Active Directory that
can include up to 16 Mailbox servers that host a set of databases; any server within a
DAG has the ability to host a copy of a mailbox database from any other server
within the DAG. DAGs support mailbox database replication and database and
server switchovers and failovers. Setting up a Windows failover cluster is no longer
necessary for high availability; however, the prerequisites for setting up a DAG are
similar to that of a failover cluster.

HDS Confidential: For distribution only to authorized parties. Page 9-17


Tiers, Resource Pools and Workload Profiles
Exchange 2010 Considerations

Databases
In Exchange Server 2010, the changes to the Extensible Storage Engine (ESE) enable
the use of large databases (approximately 2TB) on larger, slower disks while
maintaining adequate performance. The ESE uses larger and more sequential I/O,
and database maintenance routines like online defragmentation and checksums are
run continually in the background.
The Exchange Store’s database tables make better use of the underlying storage
system and cache and the store no longer relies on secondary indexing, making it
less sensitive to performance issues.

Page 9-18 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Oracle Databases

Oracle Databases

 Stripe Everything Across Every Disk


 Move Archive Logs to Different Disks
 Move Redo Logs to Separate Disks
 A block size of 8 KB is optimal for most systems.
• However, OLTP systems occasionally use smaller block sizes and DSS
systems occasionally use larger block sizes.

Stripe Everything Across Every Disk


The simplest approach to I/O configuration is to build one giant volume, striped
across all available disks.
Move Archive Logs to Different Disks
If archived redo log files are striped on the same set of disks as other files, then any
I/O requests on those disks could suffer when the database is archiving the redo
logs. Moving archived redo log files to separate disks provides the following
benefits:
 The archive can be performed at a very high rate (using sequential I/O).
 Nothing else is affected by the degraded response time on the archive destination
disks.
The number of disks needed for archive logs is determined by the rate of archive log
generation and the amount of archive storage required.
Move Redo Logs to Separate Disks
In high-update OLTP systems, the redo logs are write-intensive. Moving the redo
log files to disks that are separate from other disks and from archived redo log files
has the following benefits:
 Writing redo logs is performed at the highest possible rate, so transaction
processing performance is at its best.
 Writing of the redo logs is not impaired by any other I/O.
The number of disks for redo logs is mostly determined by the redo log size, which
is generally small compared to current technology disk sizes. Typically, a
configuration with two disks (possibly mirrored to four disks for fault tolerance) is
adequate. In particular, by having the redo log files alternating on two disks, writing
redo log information to one file does not interfere with reading a completed redo log
for archiving.
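
As a worked example of the sizing statement above, the sketch below estimates how many dedicated archive-log disks are needed from an assumed redo generation rate, a per-disk sequential write rate, and a retention window. All of the figures are hypothetical; substitute measured values for a real design.

# Hypothetical sizing sketch for dedicated archive log disks.
redo_rate_mb_s   = 20          # assumed average redo generation rate (MB/s)
disk_seq_mb_s    = 60          # assumed sustained sequential write rate of one disk (MB/s)
retention_hours  = 24          # how long archived logs are kept on these disks
disk_capacity_gb = 300         # assumed usable capacity of one disk (GB)

# Bandwidth: how many disks are needed to absorb archiving at the redo rate
disks_for_bandwidth = -(-redo_rate_mb_s // disk_seq_mb_s)      # ceiling division

# Capacity: how many disks are needed to hold the retained archive volume
archive_gb = redo_rate_mb_s * 3600 * retention_hours / 1024
disks_for_capacity = -(-int(archive_gb) // disk_capacity_gb)

print(f"archive volume over {retention_hours}h: {archive_gb:.0f} GB")
print("disks required:", max(disks_for_bandwidth, disks_for_capacity))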

HDS Confidential: For distribution only to authorized parties. Page 9-19


Tiers, Resource Pools and Workload Profiles
Decision Support Systems

Decision Support Systems

 Interactive computer-based tools


• Help answer questions
• Solve problems
• Support or refute hypotheses
 Includes OLAP, EIS, GIS, or Spatial DSS
 Relatively short, intense read bursts for queries
 Writes are long-running, intense operations
 Data warehouses are often loaded by a batch process
 Typical Metrics
• R/W ratio: 50 to 1 and above
• I/O type: Sequential access
• Block size: 64K and higher

OLAP = Online Analytical Processing


EIS = Executive Information Systems
GIS = Geographic Information System

Page 9-20 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Decision Support Systems

 Typical I/O Tendencies


• Most I/Os are sequential read
• Typically build queries to perform scans of relational database tables
• Index scans are also used to improve performance
 What to Watch
• Read data transfer rate, controlled by the read hit ratio.
• Physical read I/O response time determines the sequential read hit rate
• Cache read hit ratio at least 90 percent
 RAID-5 recommended for database
 RAID-1+0 or RAID-5 recommended for logs
• RAID-1+0 would be used for enhanced write performance and minimized
performance impact of disk hardware failure

Typical I/O Tendencies


In a DSS environment, most I/Os are sequential read. Typically, a DSS user will
build queries to perform one or more scans of relational database tables. Index scans
also are used to improve performance.
What to Watch
The most significant metric to a DSS workload is the read DTR. Table scans perform
read-ahead processing. A high DTR helps to optimize this activity, delivering the
best query performance. To maintain a high DTR, the most critical metric to watch is
the read hit ratio.
Physical read I/O response time determines the success of the sequential read hit
rate in a DSS environment. This, in turn, determines how well the read-ahead
process works. The cache read hit ratio must always be at least 90 percent. A value
less than 90% indicates that there is a problem.
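
To make these relationships concrete, the small sketch below (with made-up numbers) derives the read data transfer rate from IOPS and block size and flags a read hit ratio below the 90 percent guideline.

# Illustrative DSS check: derive read DTR and test the 90% read-hit guideline.
read_iops      = 1200          # hypothetical sequential read IOPS
block_bytes    = 128 * 1024    # hypothetical 128KB read block size
read_hit_ratio = 0.87          # fraction of reads satisfied from cache

read_dtr_mb_s = read_iops * block_bytes / (1024 * 1024)
print(f"read data transfer rate: {read_dtr_mb_s:.1f} MB/s")

if read_hit_ratio < 0.90:
    print("read hit ratio below 90% - read-ahead is not keeping up; investigate")
else:
    print("read hit ratio healthy")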

HDS Confidential: For distribution only to authorized parties. Page 9-21


Tiers, Resource Pools and Workload Profiles
Decision Support Systems Metrics

Decision Support Systems Metrics

Metric Name | Name in HTnM | Value Description | Normal Value | Bad Value | As seen from
I/O Rate | IOPS | I/Os per second | N/A | N/A | Host View
Read Rate | Read IOPS | I/Os per second | N/A | N/A | Host View
Read Block Size | Avg Disk Bytes/Read | Bytes xfered per I/O operation | 65536 to 262144 | < 65536 | Host View
Port Data xfer Rate (1) | Port Transfer | Bytes xfered per second per port | 90 MB/s to 170 MB/s per FC port | < 90 MB/s per FC port | Port View
Sequential Content (2) | Disk Sequential IOPS | Number of IOPS in Sequential mode | > 90% | < 90% | Port View
Random Content | Disk Random IOPS | Number of IOPS in Random mode | < 10% | > 10% | Port View
Read Response Time | Avg Disk Sec/Read | Time required to complete a Read I/O (Millisecond) | 1 to 10 | > 10 | Host & Port View
Read Hit Ratio | Read Hit % | % of Read I/Os satisfied from Cache | 90% to 100% | < 90% | Port View

Page 9-22 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Enterprise Resource Planning Metrics

Enterprise Resource Planning Metrics

Metric Name | Name in HTnM | Value Description | Normal Value | Bad Value | As seen from
I/O Rate | IOPS | I/Os per second | N/A | N/A | Host View
Read Rate | Read IOPS | I/Os per second | N/A | N/A | Host View
Write Rate | Write IOPS | I/Os per second | N/A | N/A | Host View
Read Block Size | Avg Disk Bytes/Read | Bytes xfered per I/O operation | 4096 to 8192 | N/A | Host View
Write Block Size | Avg Disk Bytes/Write | Bytes xfered per I/O operation | 2048 to 4096 | N/A | Host View
Read Response Time | Avg Disk Sec/Read | Time required to complete a Read I/O (Millisecond) | 1 to 5 | > 5 | Host & Port View
Write Response Time | Avg Disk Sec/Write | Time required to complete a Write I/O (Millisecond) | 1 to 2 | > 2 | Host & Port View
Read Hit Ratio | Read Hit % | % of Read I/Os satisfied from Cache | 30 to 65 | < 30 | Port View
Average Queue Length | Avg Disk Queue Length | Average number of disk requests queued for execution on one specific LUN | 1 to 8 (8 is the Max Qdepth value) | > 8 (Note that Max Qdepth might vary with different hardware settings) | Host View

Enterprise Resource Planning = ERP


ERP is OLTP on a larger scale.
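
The guidelines in tables such as the one above can be captured as a small lookup structure so that collected samples are judged consistently. The sketch below encodes a few of the ERP rows and evaluates hypothetical sample values; the sample numbers are illustrative only.

# Encode a few ERP metric guidelines from the table above and evaluate samples.
GUIDELINES = {
    # metric: (test for the "bad" condition, description)
    "Avg Disk Sec/Read":     (lambda v: v > 0.005, "read response time above 5 ms"),
    "Avg Disk Sec/Write":    (lambda v: v > 0.002, "write response time above 2 ms"),
    "Read Hit %":            (lambda v: v < 30,    "read hit ratio below 30%"),
    "Avg Disk Queue Length": (lambda v: v > 8,     "queue length above 8"),
}

samples = {                      # hypothetical collected values
    "Avg Disk Sec/Read": 0.004,
    "Avg Disk Sec/Write": 0.003,
    "Read Hit %": 42,
    "Avg Disk Queue Length": 5,
}

for metric, value in samples.items():
    is_bad, description = GUIDELINES[metric]
    status = "BAD - " + description if is_bad(value) else "ok"
    print(f"{metric}: {value} -> {status}")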

HDS Confidential: For distribution only to authorized parties. Page 9-23


Tiers, Resource Pools and Workload Profiles
Backup Systems

Backup Systems

 Typical Metrics
• R/W ratio
▪ Backup: 1:50
▪ Restore: 50:1
• I/O type: Sequential or random access
• Block size: 64 K and higher

Page 9-24 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Hitachi Network Attached Storage (HNAS)

Hitachi Network Attached Storage (HNAS)

Gigabit Ethernet Protocol and Capacities

MTU size (bytes) | Header (bytes) | Data size (bytes) | "Unused" bytes in packet | Used bytes in packet | Bits used in packet | Half duplex packets/sec ** | Half duplex MB/s ** | Full duplex MB/s ** $
9,018 | 104 | 8,192 | 722 | 8,296 | 66,368 | 16,002 | 125.0 | 175.0
4,450 | 104 | 4,096 | 250 | 4,200 | 33,600 | 31,607 | 123.5 | 172.9
1,518 | 104 | 1,414 | 0 | 1,518 | 12,144 | 87,451 | 117.9 | 165.1
1,518 | 104 | 1,024 | 390 | 1,128 | 9,024 | 117,686 | 114.9 | 160.9
1,518 | 104 | 512 | 902 | 616 | 4,928 | 215,503 | 105.2 | 210.5
1,518 | 104 | 20 | 1,394 | 124 | 992 | 1,070,565 | 20.4 | 40.8

Clock rate (Hz): 1,062,000,000 (Gigabit)

TCP/IP header bytes: Max Header 104 = NFS-CIFS-ODBC 42 + VLAN 4 + CRC 18 + TCP 20 + IP 20

** Perfect world: does not include turnaround times of the NIC, IP stack, or the O/S capabilities
$ Assumes a typical 40% gain in full duplex over half duplex

NOTE: NFS metadata transactions are very small packets. A high ratio of these to larger data
packets will significantly drop the peak throughput rates.
NOTE: Full duplex mode doesn't work if the NIC card or IP stack isn't actually capable of it.
Just being able to set FD mode doesn't mean anything. This is also true of Fibre Channel.
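
The columns in the table above follow from simple arithmetic on the clock rate, header size, and payload size. The short calculation below reproduces the standard-frame (MTU 1,518, full payload) row; all inputs are taken from the table and notes above.

# Reproduce the standard-frame (MTU 1,518) row of the table above.
clock_hz     = 1_062_000_000   # Gigabit Ethernet clock rate from the notes
header_bytes = 104             # max TCP/IP + NFS/CIFS + VLAN + CRC header bytes
data_bytes   = 1_414           # payload carried in one full 1,518-byte packet

used_bytes  = header_bytes + data_bytes          # 1,518 bytes on the wire
bits_used   = used_bytes * 8                     # 12,144 bits per packet
packets_sec = clock_hz / bits_used               # ~87,451 packets/sec half duplex
half_duplex_mb_s = packets_sec * data_bytes / (1024 * 1024)   # ~117.9 MB/s
full_duplex_mb_s = half_duplex_mb_s * 1.4        # table's assumed 40% full-duplex gain

print(f"{packets_sec:,.0f} packets/sec, {half_duplex_mb_s:.1f} MB/s half duplex, "
      f"{full_duplex_mb_s:.1f} MB/s full duplex")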

HDS Confidential: For distribution only to authorized parties. Page 9-25


Tiers, Resource Pools and Workload Profiles
2Gb Fibre Channel Protocol and Rates

2Gb Fibre Channel Protocol and Rates

Max rates: 2-Gbit half duplex


Raw link rate: 2,124,000,000 bits/sec
Max raw frames/sec (with 1KB or 2KB payloads) | I/O xfer size (KB) | Data Phase efficiency | Command Phase overhead | est. FC data frames | apparent app IOPS | est. app MB/sec
195,941 1 20% 80% 39,188 39,188 38.3
100,759 2 28% 72% 28,213 28,213 55.1
100,759 4 48% 52% 48,364 24,182 94.5
100,759 8 66% 34% 66,501 16,625 129.9
100,759 16 80% 20% 80,607 10,076 157.4
100,759 32 92% 8% 92,698 5,794 181.1
100,759 64 94% 6% 94,713 2,960 185.0
100,759 128 96% 4% 96,729 1,511 188.9
100,759 256 98% 2% 98,744 771 192.9

In order to achieve these rates, there must be sufficient horsepower present at every
point in the FC path: in the server, the switch, and the storage. Many current storage
systems on the market have enough horsepower to reach these rates on a single port,
but cannot scale at these rates across very many of their unshared ports. An unshared
port is one with a dedicated processor behind it. On virtually every system available,
there is shared logic behind each pair of ports.
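
The rows of the 2Gb table above are related by simple arithmetic. The sketch below reconstructs the 16KB-transfer row, assuming 8b/10b encoding (10 bits per byte on the wire), 2KB frame payloads, and roughly 60 bytes of per-frame overhead; the overhead figure is an assumption chosen so that the arithmetic matches the 100,759 max-raw-frames value in the table.

# Reconstruct the 16KB-transfer row of the 2Gb Fibre Channel table above.
raw_bits_per_sec   = 2_124_000_000
payload_bytes      = 2_048         # assumed frame payload
frame_overhead     = 60            # assumed per-frame overhead (inferred)
data_phase_eff     = 0.80          # from the table, for 16KB transfers
xfer_bytes         = 16 * 1024

raw_bytes_per_sec  = raw_bits_per_sec / 10                                   # 8b/10b
max_raw_frames     = raw_bytes_per_sec / (payload_bytes + frame_overhead)    # ~100,759
data_frames        = max_raw_frames * data_phase_eff                         # ~80,607
iops               = data_frames / (xfer_bytes / payload_bytes)              # ~10,076
mb_per_sec         = iops * xfer_bytes / (1024 * 1024)                       # ~157.4

print(f"{max_raw_frames:,.0f} raw frames/s, {data_frames:,.0f} data frames/s, "
      f"{iops:,.0f} IOPS, {mb_per_sec:.1f} MB/s")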

Page 9-26 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
4Gb Fibre Channel Protocol and Rates

4Gb Fibre Channel Protocol and Rates

Max rates: 4-Gbit half duplex


Raw link rate: 4,248,000,000 bits/sec
Max raw frames/sec (with 1KB or 2KB payloads) | I/O xfer size (KB) | Data Phase efficiency | Command Phase overhead | est. FC data frames | apparent app IOPS | est. app MB/sec
391,882 1 20% 80% 78,376 78,376 76.5
201,518 2 28% 72% 56,425 56,425 110.2
201,518 4 48% 52% 96,729 48,364 188.9
201,518 8 66% 34% 133,002 33,250 259.8
201,518 16 80% 20% 161,214 20,152 314.9
201,518 32 92% 8% 185,397 11,587 362.1
201,518 64 94% 6% 189,427 5,920 370.0
201,518 128 96% 4% 193,457 3,023 377.8
201,518 256 98% 2% 197,488 1,543 385.7

In order to achieve these rates, there must be sufficient horsepower present at every
point in the FC path: in the server, the switch, and the storage. When deploying 4Gb
on most current systems on the market, you may see a boost in sequential
performance (maybe 1.4x) but an actual loss (up to 2X) for random workloads. This
is due to overloading the hardware and microcode with an overly high arrival rate
of frames.

HDS Confidential: For distribution only to authorized parties. Page 9-27


Tiers, Resource Pools and Workload Profiles
Basics

Basics

Page 9-28 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Architecture Overview

Architecture Overview

Block Versus File Level Access



 SAN has high deliverable performance and


scalability
 File sharing and locking easier to handle with
a NAS platform
 NAS storage is easy to add dynamically
 Easier, lower cost infrastructure
implementation
 Processing overhead of NIC and IP stack

HDS Confidential: For distribution only to authorized parties. Page 9-29


Tiers, Resource Pools and Workload Profiles
File System Reminder

File System Reminder

 For example, look up “NTFS” on Wikipedia


[Diagram: NTFS on-disk layout showing boot blocks and a backup boot block, directories
with index headers, special files (pagefile, cluster bitmap / allocation table), metadata
tables (file IDs, attributes, list of clusters), and a fragmented file stored as three
extents scattered across the volume.]

 In the cluster bitmap, each bit represents a cluster (for example, 8 sectors = 4 KB).
Even if only 2 sectors are required, the whole cluster is allocated.
 Where is access to the pagefile, bitmap, directories and index headers handled?
Answer: In the same place as the data, in the client.
Page 9-30 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
File Logical Blocks

File Logical Blocks

 What are you measuring if you address File logical blocks in an NFS-
mounted volume?

Much of the HNAS sophistication will be bypassed.


IOPS at the back end of the HNAS may not compare against the IOPS from an HBA
in a SAN with the same workload!
LBA benchmarks such as IOZONE, IOMETER, NTIOGEN and VDBench, while
interesting, may not be the best tools to demonstrate the full HNAS potential.
Metadata operations need to be stressed as well if we want to demonstrate HNAS
capabilities (Ron’s WP).

HDS Confidential: For distribution only to authorized parties. Page 9-31


Tiers, Resource Pools and Workload Profiles
High Throughput Rules of Thumb

High Throughput Rules of Thumb

 Distribute load across all the available ports


 Use external striping where possible
 Stripe across both controllers
(alternating ports)
 Dedicate/map RAID group
resources per port
 Match external stripe size to RAID
stripe
 Use RAID-5 4D+1P or 8D+1P RGs
 Use Multi-Stream Mode with multiple streams!

 Distribute load across all the available ports


 Multiple ports = maximum bandwidth
 Use external striping where possible
 External striping is the best way of distributing load evenly
 Stripe across both controllers (alternating ports)
 The AMS loop architecture requires access over both CTLs to maximize the
MB/sec
 Dedicate/map RAID group resources per port
 Ensures the cleanest rotation of load across RGs and Ports
 Relatively few LUNs per RAID group to avoid thrashing
 Match external stripe size to RAID stripe
 Avoid any striping ‘wrap-around’ where an I/O stream hits the same set of
disks in the same I/O sequence or transfer
 Use RAID-5 4D+1P or 8D+1P
 4 x 64KB = 256KB, 8 x 64KB = 512KB (both options are easy to match with
external striping).
 AMS throughput is boosted for these RG sizes
 Use Multi-Stream Mode with multiple streams!
 The MS Mode Cache scan algorithm boosts sequential detection and cache
allocation
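
As a small check of the stripe-matching rule above, the sketch below computes the RAID group's full stripe width from the chunk size and data-disk count and verifies that a proposed external (host or volume manager) stripe size is an exact multiple of it. The proposed values are examples only.

# Verify that an external stripe size lines up with the RAID group stripe.
chunk_kb   = 64                 # per-disk chunk size
data_disks = 4                  # RAID-5 4D+1P has 4 data disks (use 8 for 8D+1P)
raid_stripe_kb = chunk_kb * data_disks          # 256KB full stripe

proposed_external_stripe_kb = 256               # example host/LVM stripe size

if proposed_external_stripe_kb % raid_stripe_kb == 0:
    print(f"OK: {proposed_external_stripe_kb}KB is a multiple of the "
          f"{raid_stripe_kb}KB RAID stripe")
else:
    print(f"Mismatch: choose a multiple of {raid_stripe_kb}KB to avoid "
          f"stripe wrap-around on the same set of disks")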

Page 9-32 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Multipathing

Multipathing

Options

 HDLM
• Use Extended Round Robin with USP
 DMP
• Use dmp_pathswitch_blks_shift to define
number of blocks issued per path before
switching (default = 1 MB)
 MPXIO
• Use LOAD_BALANCE_LBA to define LBA
region size per path before switching
(typically 32 MB)
 VMware
• MRU or FIXED (auto-restore)
 General comments
• Up to 4 paths per LUN with USP
• 2 paths per LUN with 9900V range

HDS Confidential: For distribution only to authorized parties. Page 9-33


Tiers, Resource Pools and Workload Profiles
Overview

Overview

 Increase Resilience
 Increase Performance
• More Paths = More MB/sec
• More Paths = More effective tags per LUN
• More Port Processors = More IOPS
• Full benefit of VDEV striping
 Cautions
• Shared processor paths will not increase IOPS
• Loss of sequential detection with round robin access
• Port Buffer overhead with too many Paths (five and over)
• Reduced number of LUNs

Page 9-34 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
HDLM Features

HDLM Features

 Multipathing
• HDLM enables multiple paths from host to devices, allowing access to
device even if a specific path is unavailable
• Multiple paths also can be used to share I/O workloads and improve
performance
 Path Failover
• HDLM automatically redirects I/O operations to alternate paths if a failure
occurs, allowing processing to continue without interruption
• With threat of I/O bottlenecks removed and data paths protected,
performance and reliability increase
 Failback
• When a failed path becomes available again, HDLM brings the recovered path
back online
 Ensures maximum number of paths is always available for load
balancing and failover

HDLM = Hitachi Dynamic Link Manager

 Load balancing
• HDLM intelligently allocates I/O requests across all available paths to
prevent a heavily loaded path from adversely affecting processing speed
• Load balancing ensures continuous operation at optimum performance
levels, along with improved system and application performance
 Path health checking
• HDLM automatically checks the path status at regular user-specified
intervals, eliminating the need to perform repeated manual path status
checks
• Proactive identification of issues

HDS Confidential: For distribution only to authorized parties. Page 9-35


Tiers, Resource Pools and Workload Profiles
Path Failover and Failback

Path Failover and Failback

[Diagram: two servers running applications with HDLM above the storage volumes.
Left, simple failover: when a path fails, I/O moves to the standby path. Right, failover
with load balancing: when a path fails, the load is rebalanced across the reduced
number of paths.]
 HDLM provides continuous storage access and high
availability by distributing I/O over multiple paths
 Failover and fallback in either manual or automatic modes.
 Automated path health checks.
 Allows dynamic LUN addition and deletion without a server
reboot*.

*Note: O/S and array dependent; check system requirements for details.

Page 9-36 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Load Balancing

Load Balancing

[Diagram: without load balancing (regular driver), all I/O is funnelled down one path,
creating an I/O bottleneck at the storage volumes; with HDLM load balancing, I/O is
spread across all available paths to the volumes.]
 HDLM distributes storage access across multiple paths to improve
I/O performance with load balancing
 Bandwidth control at the HBA level, and in conjunction with Global
Availability Manager at the LUN level

When there is more than one path to a logical unit (LU), Dynamic Link Manager can
distribute the load across those paths when issuing I/O commands.
Load balancing prevents a heavily loaded path from affecting
the performance of the entire system.

HDS Confidential: For distribution only to authorized parties. Page 9-37


Tiers, Resource Pools and Workload Profiles
Load Balancing in a Clustered Environment

Load Balancing in a Clustered Environment

 Microsoft Windows
• Microsoft Cluster Server
• Oracle RAC
• VERITAS Cluster Server
 Sun Solaris
• Sun Cluster
• VERITAS Cluster Server
• Oracle RAC
 HP-UX
• MC/Serviceguard
• Oracle RAC
 AIX
• HACMP
• VERITAS Cluster Server
• Oracle RAC
 Linux
• Redhat AS Bundle Cluster
• SuSE Linux Bundle Cluster
• VERITAS Cluster Server
• Oracle RAC

[Diagram: an active host and a standby host in a cluster, each running HDLM with two
HBAs; HDLM load balances across the paths through the CHAs to a shared LUN on the
storage system.]

Load Balancing on Cluster


The HDLM allows load balancing over the clustering system in a safe manner.
The path failover and failback of the HDLM will work along with cluster’s node
failover and failback.

Page 9-38 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Load Balancing Algorithms

Load Balancing Algorithms

Round Robin: Distributes all I/Os among multiple paths.
Extended Round Robin: Distributes I/Os to paths depending on whether the I/O
involves sequential or random access:
• For sequential access, a certain number of I/Os are issued to one path in
succession, then the next path is chosen according to the round robin algorithm
• For random access, I/Os are distributed to multiple paths according to the
round robin algorithm

Least I/Os: I/O operations are issued to the path that has the least number of
I/Os being processed (regardless of I/O block size).
Extended Least I/Os (Default): I/O operations are issued to the path that has the
least number of I/Os being processed (regardless of I/O block size):
• For sequential access, a certain number of I/Os are issued to one path in
succession, then the next path is chosen according to the least I/Os algorithm
• For random access, I/Os are issued to multiple paths according to the least
I/Os algorithm

HDS Confidential: For distribution only to authorized parties. Page 9-39


Tiers, Resource Pools and Workload Profiles
Load Balancing Algorithms

Extended Least Blocks: I/O operations are issued to the path with the least
pending I/O block size (regardless of I/O count):
• For sequential access, a certain number of I/Os are issued to one path in
succession, then the next path is chosen according to the least blocks algorithm
• For random access, I/Os are issued to multiple paths according to the least
blocks algorithm
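
The algorithms above differ only in how the next path is picked and whether a run of sequential I/Os is kept on one path. The following is a conceptual sketch, not HDLM code, of the "extended" behavior: sequential I/Os stick to the current path for a fixed run length, while random I/Os fall back to the base selection rule.

# Conceptual sketch of "extended" path selection (not actual HDLM internals).
class ExtendedSelector:
    def __init__(self, num_paths, run_length=100):
        self.num_paths = num_paths
        self.run_length = run_length     # sequential I/Os kept on one path
        self.current = 0
        self.run = 0
        self.last_lba = None

    def _base_rule(self):
        # Round robin here; "extended least I/Os" would instead pick the
        # path with the fewest outstanding I/Os.
        self.current = (self.current + 1) % self.num_paths
        return self.current

    def select(self, lba, length):
        sequential = self.last_lba is not None and lba == self.last_lba
        self.last_lba = lba + length
        if sequential and self.run < self.run_length:
            self.run += 1
            return self.current          # keep the sequential stream on one path
        self.run = 0
        return self._base_rule()

sel = ExtendedSelector(num_paths=2)
stream = [(i * 128, 128) for i in range(6)]          # a sequential stream
print([sel.select(lba, length) for lba, length in stream])   # stays on one path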

Page 9-40 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Round Robin — Extended Round Robin

Round Robin — Extended Round Robin

 Impact of Extended Round Robin versus Round Robin on Sequential Detection

[Diagram: with plain Round Robin, Dynamic Link Manager spreads a sequential stream
(I/Os 1 to 6) alternately across two paths, so the storage system does not see the stream
as sequential.]
• Treated as random I/O: no track preloading; blocks held in read cache
• Less efficient cache usage, less efficient back-end I/O, sub-optimal response times

[Diagram: with Extended Round Robin, a run of sequential I/Os stays on one path before
switching, so the storage system detects the stream as sequential.]
• Detected as sequential I/O: tracks preloaded to cache; cache returned to the free queue
• Efficient cache usage, efficient back-end I/O, optimized response times
Round Robin is a more appropriate scheme for the VSP because Sequential detection
is managed by the owning VSD processor in the VSP and not the port.

HDS Confidential: For distribution only to authorized parties. Page 9-41


Tiers, Resource Pools and Workload Profiles
Dynamic Link Manager GUI

Dynamic Link Manager GUI

[Screenshot: the Dynamic Link Manager GUI showing the Dynamic Link Manager version,
the basic function settings (Load Balancing, Path Health Checking, Auto Failback,
Intermittent Error Monitor), and the error management function settings for selecting
the severity of log and trace levels.]

Optional Parameters
 Load Balancing
 Path Health Check
 When enabled (default), Dynamic Link Manager monitors all online paths at
specified interval and puts them into Offline(E) or Online(E) status if a
failure is detected.
 There is a slight performance penalty due to extra probing I/O.
 The default interval is 30 minutes.
 Auto Failback
 When enabled (not the default), Dynamic Link Manager monitors all
Offline(E) and Online(E) paths at specified intervals and restores them to
online status if they are found to be operational. The default interval is one
minute.
 Intermittent Error Monitor
 Auto Failback must be On.
 Parameters are Monitoring Interval and Number of Times.
 Example: Monitoring Interval = 30 minutes

Page 9-42 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Dynamic Link Manager GUI

 Number of Times = 3: If an error occurs three times within the 30 minute
monitoring interval, the path is determined to have an intermittent error and is
excluded from automatic failback. The path will display an error status until the
problem is corrected. (A small sketch of this rule follows the list below.)
 Reservation Level: Used only by AIX; specific paths can be reserved for
I/O
 Remove LU
 Used with Microsoft Windows 2000 and Server 2003
 Removes the LUN when all paths to the LUN are taken offline
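
Returning to the intermittent error monitor described above, here is a minimal sketch of the rule: count errors on a path within the monitoring interval and, once the configured number of times is reached, exclude that path from automatic failback. The class and parameter names are illustrative, not HDLM's actual settings interface.

# Illustrative sketch of intermittent error monitoring (not HDLM internals).
import time

class IntermittentErrorMonitor:
    def __init__(self, interval_s=30 * 60, number_of_times=3):
        self.interval_s = interval_s
        self.number_of_times = number_of_times
        self.errors = {}                 # path -> list of error timestamps
        self.excluded = set()            # paths removed from auto failback

    def record_error(self, path, now=None):
        now = now if now is not None else time.time()
        recent = [t for t in self.errors.get(path, []) if now - t <= self.interval_s]
        recent.append(now)
        self.errors[path] = recent
        if len(recent) >= self.number_of_times:
            self.excluded.add(path)      # intermittent error: no auto failback

    def may_auto_failback(self, path):
        return path not in self.excluded

mon = IntermittentErrorMonitor()
for t in (0, 600, 1200):                 # three errors within 30 minutes
    mon.record_error("path-0A", now=t)
print(mon.may_auto_failback("path-0A"))  # False: path stays offline until fixed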

HDS Confidential: For distribution only to authorized parties. Page 9-43


Tiers, Resource Pools and Workload Profiles
Module Summary

Module Summary

 Described best practices when configuring and sizing storage


systems for Hitachi NAS products
• Response time
• Throughput
 Described best practices when configuring industry standard mail
messaging and collaboration applications

Page 9-44 HDS Confidential: For distribution only to authorized parties.


Tiers, Resource Pools and Workload Profiles
Module Review

Module Review

1. For OLTP applications what key metrics need to be tracked


closely?
2. For a Media Streaming application what key metrics need to be
tracked closely?

HDS Confidential: For distribution only to authorized parties. Page 9-45


Tiers, Resource Pools and Workload Profiles
Module Review

Page 9-46 HDS Confidential: For distribution only to authorized parties.


10. Storage
Virtualization
Module Objectives

 Upon completion of this module, you should be able to:


• Describe how multitiered storage solutions address customer’s
requirements
• Configure external storage for matching performance requirements
• Use Hitachi Tiered Storage Manager (HTSM) to optimize migration tasks
and storage tiers
• Use Hitachi Virtual Partition Manager (VPM) to optimize storage
performance
• Use Hitachi Cache Partition Manager (CPM) to optimize storage
performance
• Determine the appropriate load balancing algorithm to optimize storage
performance using Hitachi Global Link Manager (HGLM) Advanced

HDS Confidential: For distribution only to authorized parties. Page 10-1


Storage Virtualization
Storage Virtualization and Data Mobility — Enterprise Storage

Storage Virtualization and Data Mobility — Enterprise Storage

Introducing the Tiered Storage Concept

 As part of Data Life Cycle Management (DLM) strategy, the mission


of Tiered Storage is to provide to the application, over time, the
required level of:
• Performance
• Reliability
• Functionality
• Hibernation
• Deletion
 To properly deliver on its DLM strategy, Hitachi Data Systems has
introduced the best Tiered Storage suite of solutions of the industry.

Page 10-2 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Customer Storage Management Needs

Customer Storage Management Needs

 Organize storage resources into logically organized tiers based on


business needs
 Simplify the management of heterogeneous storage devices
 Provide storage resources as logical pooled resources that hide the
underlying complexity
 Group storage devices based on performance, availability, cost, and
security
 Proactively manage resources per service level agreements (SLA)
and operating level agreements (OLA)
 Dynamically move data following its natural lifecycle process

HDS Confidential: For distribution only to authorized parties. Page 10-3


Storage Virtualization
Controller-based Virtualization — Review

Controller-based Virtualization — Review

Microsoft  Controller-based
UNIX® Host UNIX® Host z/OS®
Windows®
Host Mainframe virtualization
 No virtualization layer
between host and
Fibre storage controller
Channel
Fibre Channel • No added complexity
SAN ESCON or or latency between
FICON
host and storage
• No cracking open of
Fibre Channel Single Pool of Fibre Channel packets
Heterogeneous • Support for mainframe
External Storage
Storage
Virtualization hosts, direct attach,
Controller
and SAN attached

USP / USP VM / VSP

Hitachi’s approach is to do the virtualization in our controller.


This is simply an extension to the virtualization that we have been developing over the
years.
The hosts talk directly to our controller. Hitachi is an end point to the application’s I/O.
There is no need for agents, proxies, APIs, or new standards. The standards already exist.
Hitachi is not dependent on the SAN. It can support FC Fabric, Direct attach, Mainframe
attach, or NAS.
The virtualization that Hitachi already does for internal disk arrays is simply extended to
external storage arrays. Attaching external storage is simply a matter of attaching their FC
ports to Universal Storage Platform ports either through a switch or direct attach. Again
there is no need for APIs. FC Standard FCS is all that is used. When the ports are
connected, the Universal Storage Platform will query the external storage to see which LUNs are
attached. A cache image of the external LUNs is created in the Universal Storage
Platform and all reads and writes to the external storage now come through the high
performance Universal Storage Platform global cache.
Virtualizing behind the controller provides the benefit of applying all the current
functionality of Universal Storage Platform to the external storage. No need to reinvent
existing functionality.
No Vendor Lock-in means there is an option for one to one mapping. This enables a user
to disconnect from the Universal Storage Platform and use their storage in its native
attach if they choose. The “state” is maintained in the external device.

Page 10-4 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Universal Volume Manager

Universal Volume Manager

 Features and benefits


• Maps heterogeneous external storage resources as internal volumes
• Enables a dynamic tiered storage infrastructure
• Creates storage pools independent of physical location
• Simplifies the management of heterogeneous storage devices
• Integrates competitive/heterogeneous storage resources into a single
storage domain
• Makes enterprise features and program products available to internal and
external volumes
[Diagram: a VSP virtualizing external storage systems: Clariion, EMC Symmetrix, and
Model AMS200.]

HDS Confidential: For distribution only to authorized parties. Page 10-5


Storage Virtualization
Features

Features

 Users can operate multiple storage systems as if they are all part of
a single storage system.
[Diagram: Host 1 and Host 2 connect through a Fibre Channel switch to target ports on
the local storage system. External ports on the local system connect to ports (WWN 0,
WWN 1) on the external storage system, and the external volumes are mapped to
internal virtual volumes.]
Legend:
• Volumes installed in the storage system
• Virtual volumes that do not have physical memory space
• Lines showing the concept of mapping

Universal Volume Manager provides the virtualization of a multi-tiered storage area


network comprised of heterogeneous storage systems. It enables the operation of
multiple storage systems connected to Hitachi Universal Storage Platform V/VM
storage systems as if they were all in one storage system and provides common
management tools and software.
The shared storage pool comprised of external storage volumes can be used with
storage system-based software for data migration and replication, as well as any
host-based application. Combined with Hitachi Volume Migration, Universal
Volume Manager provides an automated data lifecycle management solution, across
multiple tiers of storage.

Page 10-6 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Configuring External Storage

Configuring External Storage

 External storage is configured using the following procedures:


• Change the port attribute to external in local storage system
• Prepare a volume in the external system
• Set up physical links
• Discover the external system volumes
• Map an external volume
• Define the host path

HDS Confidential: For distribution only to authorized parties. Page 10-7


Storage Virtualization
Map External Storage

Map External Storage

[Diagram: external volume mapping. LUNs presented by the external system's ports
(WWN 0, WWN 1) are discovered through the local system's external ports and mapped
to internal Open-V volumes (for example CU:LDEVs 04:123, 04:124, 04:125), which are
then presented to hosts through target ports.]

 Set the Emulation to Open-V
 Add a CU:LDEV
 Create an External Volume Group
• Set Volume Attributes
 Cache Mode
 Inflow Control

Configure the external volumes as Open-V, which is mapped as an internal volume,


to define a path between the host and the internal storage system. Use the LUN
Manager feature to define a path. External volumes mapped as OPEN-V volumes
within the Universal Storage Platform V/VM storage systems allow you to access
existing data on the external volumes.

Page 10-8 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
External Volumes and External Volume Group

External Volumes and External Volume Group

 Map an external volume to Universal Storage Platform V and VM


1. Register to an External Volume Group (ExG, a Virtual Parity Group)
2. Set the external volume attributes:
 Cache Mode
Disabled: Local storage system signals host that an I/O operation has
completed only after local storage system has synchronously written data to
external storage system.
Enabled: Local storage system signals host that an I/O operation has
completed and then asynchronously destages data to external storage
system.
 Inflow Control
Disabled: The I/O from host during retry operation is written to cache
memory even after writing operation to external volume is impossible.
Enabled: The writing operation to cache is stopped and I/O from host is not
accepted when writing operation to external volume is impossible.

External Volume Attributes


Cache Mode
Cache mode specifies whether the write data from the host to the external storage system
is propagated synchronously or asynchronously. All I/O to and from the local storage
system (either Enable or Disable) always uses cache. Write operations are always backed
up in duplex cache.
If you select Enable, after receiving the data into the local storage system cache memory,
the local storage system signals the host that an I/O operation has completed and then
asynchronously destages the data to the external storage system.
If you select Disable, the local storage system signals the host that an I/O operation has
completed only after the local storage system has synchronously written the data to the
external storage system.
Inflow Control
Inflow control specifies whether the writing operation to the cache memory is stopped
(Enable) or continued (Disable) when the writing operation to the external volume is
impossible. By default, inflow control is set to Disable.
If you select Enable, the writing operation to cache is stopped and the I/O from the host
is not accepted when the writing operation to the external volume is impossible.
If you select Disable, the I/O from the host during the retry operation is written to the
cache memory even after the writing operation to the external volume is impossible.
When the writing operation to the external volume becomes normal, all the data in the
cache memory is written to the external volume (all the data is destaged).
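
To make the two write behaviors and the inflow control setting above concrete, here is a toy model written in Python (not array firmware) showing when the host acknowledgement is returned for cache mode enabled versus disabled, and how inflow control rejects new writes when the external volume is unavailable.

# Toy model of Cache Mode and Inflow Control behavior (not array firmware).
class ExternalVolumeMapping:
    def __init__(self, cache_mode_enabled, inflow_control_enabled):
        self.cache_mode_enabled = cache_mode_enabled
        self.inflow_control_enabled = inflow_control_enabled
        self.local_cache = []            # write data staged in the local system
        self.external_ok = True          # external volume reachable?

    def host_write(self, data):
        if not self.external_ok and self.inflow_control_enabled:
            return "rejected: inflow control stops new writes to cache"
        self.local_cache.append(data)    # writes always land in duplexed cache
        if self.cache_mode_enabled:
            return "ack to host (destage to the external system happens later)"
        if self.external_ok:
            self.destage()
            return "ack to host after synchronous write to the external system"
        return "write held in cache until the external volume recovers"

    def destage(self):
        self.local_cache.clear()         # data written to the external volume

m = ExternalVolumeMapping(cache_mode_enabled=False, inflow_control_enabled=True)
print(m.host_write("block-1"))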

HDS Confidential: For distribution only to authorized parties. Page 10-9


Storage Virtualization
Cache Mode Usage

Cache Mode Usage

 Write to virtual volume


• Cache mode = On: Write from host is written to cache memory in local storage
system, and completion of write operation is reported to host.
• Cache mode = Off: Write from host is written to the external storage, and
completion of the write operation is reported to host.

Note: A read operation returns data from the local storage system's cache memory to
the host, if it exists, regardless of the cache mode setting.

[Diagram: a host writes through an FC switch to two virtual volumes on the VSP, one
with cache mode = On and one with cache mode = Off; both are mapped through external
ports to volumes in the external storage system.]

Page 10-10 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Workload Recommendations, Cached or Uncached

Workload Recommendations, Cached or Uncached

 Large Block Streaming Reads and Writes: faster with cached I/O
• D-2-D backup, video, audio, imaging
 Random Writes: faster with cached I/O
• Database, file system
 Small Block Reads: use either cached or uncached

HDS Confidential: For distribution only to authorized parties. Page 10-11


Storage Virtualization
Virtualization and Cache Enabled/Disabled

Virtualization and Cache Enabled/Disabled

Cache Mode Enabled:
• Writes signalled complete when in VSP cache (note: must be aware of write pending!)
• Normal LRU cache management for reads in the VSP cache

Cache Mode Disabled:
• VSP cache used only for temporary staging; a cache partition is advised
• Writes signalled complete when in the external storage cache
• Normal LRU cache management for reads in the VSP; the external storage cache
performs normal cache management for reads, plus prefetch

[Diagram: both configurations ultimately map to the external LUN (E-LUN).]

Page 10-12 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Rules of Thumb for Software Requirements

Rules of Thumb for Software Requirements

 Cached/Un-cached External Storage, Cache Partitioning, and Server


Priority

USP V

HDS Confidential: For distribution only to authorized parties. Page 10-13


Storage Virtualization
VSP Cache Mode

VSP Cache Mode

 On/Disabled comparison with an external HDP pool on an AMS 2100

1. IOPS
[Chart: IOPS comparison. Callouts note that the HDDs are busy in the AMS, write
pending is high in the AMS, and write pending is high in the VSP.]

2. MB/sec
[Chart: MB/sec comparison. Callouts note a benefit for Cache Mode Enabled with paced,
single-threaded access, and that write pending in the VSP reaches 67% and sequential
optimization is lost during "full power" destage.]
Page 10-14 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
What Is Tiered Storage Manager?

What Is Tiered Storage Manager?

 Tiered Storage Manager enables you to optimally reposition data


based on the application requirement
 Tiered Storage Manager repositions (migrates) data on storage
system volumes to other volumes by specifying a migration target

HDS Confidential: For distribution only to authorized parties. Page 10-15


Storage Virtualization
Benefits of Tiered Storage Manager

Benefits of Tiered Storage Manager

 Offers easy to use service-level based volume migration


 Gives non-interruptive, completely transparent volume movement
 Moves data safely when application requirements increase or
decrease
 Removes complexity in managing storage tiers
 Simplifies steps to migrate data before storage reassignment or
removal, such as migrating hardware coming off lease
 Eliminates restrictive data migration windows and requirements
 Helps respond to data retention standards and compliance issues
 Provides ability to use LDEVs as a migration target that are larger
than the source LDEV in a migration

Migration to a larger LDEV - This function carves the same size LDEV out of a larger
target LDEV. The migration is still same size LDEV to LDEV. Space freed from
carving out the target from a larger LDEV is returned to Array Group free space.
This function increases the target candidates for migrations and is helpful when the
user does not have the same size LDEVs available for the targets.

Page 10-16 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Virtual Partition Manager — Enterprise Storage

Virtual Partition Manager — Enterprise Storage

Virtual Partition Manager Overview

 Hitachi VPM enables data center administrators to perform logical


partitioning of ports, cache, and disk capacity, including external
storage, on the USP V and VM to create independently managed
and secure Private Virtual Storage Machines.
 These logical partitions help maintain quality of service by acting as
dedicated storage resources that are independently managed and
reserved for specific applications.
 VPM allows storage service providers or IT departments to isolate,
segment, and control storage for specific users, applications,
servers, or departments by creating logical partitions.
 Each logical partition is called a Private Virtual Storage Machine. It is
independently managed as if it were a separate storage system.

 Two main functions:


1. Storage Logical Partition (SLPR)
2. Cache Logical Partition (CLPR)
 SLPR allows you to divide the available storage among various
users, to lessen conflicts over use
 CLPR allows you to divide the cache into multiple virtual cache
memories, to lessen I/O contention
 SLPRs and CLPRs can be defined by using Storage Navigator
program
 VSP does not support creation of SLPRs

HDS Confidential: For distribution only to authorized parties. Page 10-17


Storage Virtualization
Virtual Partition Manager Overview

 Benefits
• Improves security
• Assures Quality of Service
• Storage resources optimized to application/business requirements
• Enables departmental view of storage

Improves Security
Virtual Partition Manager restricts access to data and resources from users and
storage administrators without authorization to that partition. It also restricts access
from users and administrators to data and resources outside of their authorized
partition.
Assures Quality of Service
Virtual Partition Manager dedicates resources (for example, cache, disk) for
exclusive use by specific applications to maintain priority and quality of service for
business-critical applications. You can secure and/or restrict access to storage
resources to ensure confidentiality for specific applications. You can also use Virtual
Partition Manager to adjust data storage resources dynamically to satisfy changing
business requirements.
Storage Resources Optimized to Application/Business Requirements
Virtual Partition Manager supports Services Oriented Storage Solutions from
Hitachi Data Systems to match data to appropriate storage resources based on
availability, performance, capacity, and cost. It improves flexibility by allowing
dynamic changes to cache partitions while in use.
Enables Departmental View of Storage
A Departmental view of storage delivers accountability and chargeback, segregation
of workload, facilitates departmental management and control within partitions,
and permits centralized control over departments.

Page 10-18 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Cache Logical Partition

Cache Logical Partition

 Created to guarantee Cache resources to LDEVs on a specific Parity Groups


 Created due to performance reasons
 Consists of:
• Cache (not shared with any other CLPR)
• Parity Groups (that can use the cache in the CLPR)

 Cache in a CLPR is not shared with any other CLPR


 Minimum Cache size is 4GB for a CLPR
• Can be increased dynamically, in increments of 2GB

 By default, all cache is allocated to CLPR0


• Can create 31 new CLPRs

 Creating a new CLPR involves:


• Reassigning Cache from CLPR0 to the new CLPR
• Reassigning Parity Groups from CLPR0 to new CLPR

 Deleting a CLPR moves the resources (Cache and Parity Groups) back to
CLPR0

 Example of CLPR use: without cache partitioning, if the Host B load is low, Host A
can use a large portion of the cache. With CLPRs, although the Host B load is low,
Host A cannot use the cache assigned to the CLPR of Host B.
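
A small sketch of the CLPR sizing rules listed above (4GB minimum per partition, 2GB increments, total not exceeding installed cache). The partition plan shown is hypothetical.

# Validate a hypothetical CLPR plan against the sizing rules above.
def validate_clpr_plan(total_cache_gb, partitions_gb):
    problems = []
    for name, size in partitions_gb.items():
        if size < 4:
            problems.append(f"{name}: below the 4GB minimum")
        if size % 2 != 0:
            problems.append(f"{name}: not a 2GB increment")
    if sum(partitions_gb.values()) > total_cache_gb:
        problems.append("plan exceeds installed cache (nothing left for CLPR0)")
    return problems or ["plan is valid; the remainder stays in CLPR0"]

print(validate_clpr_plan(
    total_cache_gb=128,
    partitions_gb={"CLPR1_oltp": 32, "CLPR2_backup": 8, "CLPR3_elun": 6}))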

HDS Confidential: For distribution only to authorized parties. Page 10-19


Storage Virtualization
Storage Logical Partition

Storage Logical Partition

 Created to allocate the storage system resources into two or more virtual
storage systems, each of which can be accessed only by the storage
administrator/storage partition administrator/users for that partition
 Are created for administrative and security reasons
 Consists of:
• One or more CLPRs
• One or more Target ports

 By default, all CLPRs/Ports are allocated to SLPR0


• Can create 31 new SLPRs

 Creating a new SLPR involves:


• Moving CLPRs from other SLPRs, into new SLPR
• Moving Ports from other SLPRs, into new SLPR
• Assigning License Capacity (from SLPR0)
• Creating Users for new SLPR

 Example of SLPR use
[Diagram: SLPR1 and SLPR2 each contain their own CLPR (CLPR1 and CLPR2) and
target ports; SLPR0 with CLPR0 is the non-partitioned cache area.]

Page 10-20 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Mode 454 Explained

Mode 454 Explained

Normal behavior: inflow control occurs per CLPR, but write pending reaching the
70% WP limit in any one CLPR triggers system-wide accelerated destage.

Mode 454: inflow control still occurs per CLPR, but accelerated destage is triggered
only when the average write pending level across the CLPRs reaches the limit.

[Diagram: six cache partitions (CLPR #0 to #5) shown against the 70% WP limit,
comparing the normal per-partition trigger with the Mode 454 average-level trigger.]
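
The difference described above reduces to which write-pending figure is compared against the limit. The sketch below is illustrative only and shows the two trigger rules side by side.

# Illustrative comparison of the accelerated-destage trigger with and
# without Mode 454. Write-pending (WP) figures are percentages per CLPR.
WP_LIMIT = 70.0

def accelerated_destage(clpr_wp, mode_454):
    if mode_454:
        # Mode 454: trigger on the average WP level across partitions
        return sum(clpr_wp) / len(clpr_wp) >= WP_LIMIT
    # Normal: any single partition reaching the limit triggers it system-wide
    return any(wp >= WP_LIMIT for wp in clpr_wp)

wp = [72, 15, 10, 8, 5, 5]               # one busy CLPR, five quiet ones
print("normal:  ", accelerated_destage(wp, mode_454=False))   # True
print("mode 454:", accelerated_destage(wp, mode_454=True))    # False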

HDS Confidential: For distribution only to authorized parties. Page 10-21


Storage Virtualization
Cache Partition Manager — Modular Storage

Cache Partition Manager — Modular Storage

Cache Partition Manager Modular Storage Overview

 What is the cache partition manager?


• A software feature that enables optimized data exchange between
storage and a host by assigning most suitable partition to a logical
unit according to data received from a host.

Page 10-22 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Cache Partition Manager Modular Storage Overview

 What is the cache partition manager?


• Cache partition manager enables cache memory to be sub-divided and
managed for more efficient use.
 Each divided portion of cache memory is called a partition.
• Volumes defined in Hitachi Unified Storage family systems can be
assigned to a partition.
 Customer can specify size of partition.
• An HUS 100 family system can be optimized in many different ways
depending upon characteristics of specific applications.

Cache Partition Manager is a priced optional feature of the disk array that enables
the user data area of the cache in the disk array to be divided more finely. Each of the
divided portions of the cache memory is called a partition. A volume defined in the
disk array is assigned to a partition. A user can specify the size of a partition, and the
segment size (the size of a unit of data management) of a partition can also be changed.
You can therefore optimize the data received from and sent to a host by assigning the
most suitable partition to a volume according to the kind of data received from the host.

HDS Confidential: For distribution only to authorized parties. Page 10-23


Storage Virtualization
Functions of Cache Partition Manager

Functions of Cache Partition Manager

 Modular storage systems typically have simple single block-size


caching, resulting in inefficient use of cache and I/O.

Typical 4KB database


blocks in 16KB cache
pages

75% Cache Wasted!

Data with a length of more than 4KB is scattered on cache memory.


• Cache hit rate in response to read command is effectively lowered.

In a typical cache memory (no partitioning), data is spread throughout cache and
inefficiently allocated. Cache can be fragmented while loading to or from RAID
groups. There is no way for the cache to treat a specific LUN more efficiently,
because it must accommodate all LUNs and all RAID stripes equally.
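
The "75% wasted" figure on the slide above follows directly from the block and segment sizes; this small calculation generalizes it.

# Cache waste when small blocks occupy fixed-size cache segments.
def wasted_fraction(block_kb, segment_kb):
    return 1 - (block_kb / segment_kb) if block_kb < segment_kb else 0.0

print(f"4KB blocks in 16KB segments: {wasted_fraction(4, 16):.0%} wasted")
print(f"4KB blocks in 4KB segments:  {wasted_fraction(4, 4):.0%} wasted")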

Page 10-24 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Functions of Cache Partition Manager

 Cache partition manager allows partitions sized in factors of 4KB.

Cache memory area is


divided into two
partitions:
Partition 0 (Master
Partition): 16 KB
Partition 2: 8 KB

• Specified data block fits more exactly into partition.


• Reduces required time to move data between cache and disks.

When handling a lot of small data with lengths shorter than 16KB, you can raise the cache
hit rate and lower response time (read) by specifying data to be processed via partition 2.

 Advantages of Selectable Segment Size, short data lengths

Example as to how partitioning enhances performance by increasing cache hit rate.

HDS Confidential: For distribution only to authorized parties. Page 10-25


Storage Virtualization
Functions of Cache Partition Manager

 Advantages of Selectable Segment Size, long data lengths

Example as to how partitioning enhances performance by decreasing overhead.


 Advantages of selectable stripe size

Fixed Stripe Size (64KB) Selectable Stripe Size

By modifying Cache Partitioning Manager to match stripe size, you can lower
overhead and improve cache utilization and efficiency.

Page 10-26 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Benefits of Cache Partition Manager

Benefits of Cache Partition Manager

 Benefits
• Delivers cache environment that can be optimized to specific customer
application requirements
• Less cache required for specific workloads
• Better hit rate for same cache size
• Better optimization of I/O throughput for mixed workloads

4KB Database blocks


in 4KB Cache pages

16KB File system


blocks in 16KB
Cache pages

512KB Video data


blocks in 64KB
Cache pages

Selectable segment size — Customize the cache segment size for a user application
Partitioned cache memory — Decrease negative effect on performance between
applications in a 'storage consolidated' system by dividing the cache memory into
multiple partitions individually used by each application
Selectable stripe size — Increase performance by customizing the disk access size

HDS Confidential: For distribution only to authorized parties. Page 10-27


Storage Virtualization
Multipathing

Multipathing

Dynamic Link Manager

 HDLM provides a round robin (RR) algorithm and an extended round


robin (ERR) algorithm in order to disperse and optimize the I/O
workload over multiple Fibre Channel paths.
• RR algorithm is used for random I/O
• ERR algorithm is used for sequential I/O

Page 10-28 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Load Balancing

Load Balancing

Load Balancing
Description
Algorithm
Round Robin Distributes all I/Os among multiple paths
Extended Round Distributes I/Os to paths depending on whether
Robin the I/O involves sequential or random access:
• For sequential access, a certain number of
I/Os are issued to one path in succession.
The next path is chosen according to the
round robin algorithm.
• For random access, I/Os will be distributed to
multiple paths according to the round robin
algorithm.

Load Balancing
Description
Algorithm
Least I/Os I/O operations are issued to path that has least
number of I/Os being processed*
Extended least I/O operations are issued to path that has least
I/Os (Default) number of I/Os being processed*
• For sequential access, a certain number of
I/Os are issued to one path in succession
– Next path is chosen according to least
I/Os algorithm
• For random access, I/Os are issued to
multiple paths according to the least I/Os
algorithm

* Regardless of I/O block size

HDS Confidential: For distribution only to authorized parties. Page 10-29


Storage Virtualization
Load Balancing

Load Balancing
Description
Algorithm
Least blocks I/O operations are issued to path with least
pending I/O block size*
Extended least I/O operations are issued to path with least
blocks pending I/O block size*
• For sequential access, a certain number of
I/Os are issued to one path in succession
 Next path is chosen according to least
blocks algorithm
• For random access, I/Os are issued to
multiple paths according to least blocks
algorithm

* Regardless of I/O count

Page 10-30 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Multipathing Primer

Multipathing Primer

 Multiple Paths to a LUN:


• Prime objective is to increase resilience and availability.
• Multiple paths can increase performance by distributing load across more
resources.
 There are several different multipathing products:
• Includes Hitachi Dynamic Link Manager (HDLM), MPXIO (Solaris), MPIO
(AIX), VMware
 There are different schemes for different storage systems and
operating systems:
• Round Robin (RR), Extended Round Robin (ERR), Most Recently Used
(MRR), Fixed, Logical Block Address range
 Some combinations of schemes require special attention or should
not be used with certain storage systems.

HDS Confidential: For distribution only to authorized parties. Page 10-31


Storage Virtualization
HDLM Multipathing Options

HDLM Multipathing Options

 Round Robin
• Use with AMS 2000 family (uses all available paths).
• HDLM uses only the Owner Path with the legacy modular systems and
AMS 500/1000.
 Extended Round Robin
• Use with Enterprise Storage to allow a sequence of sequential I/Os (up to
100) to be handled by the same port and improve prefetch and cache
efficiency.
• Nonsequential I/Os switch paths more frequently.
 Load Balancing Off
• Only uses the first discovered or currently active path.

Notes:
 Read performance is generally improved with multiple paths.
 Write performance can be limited by other factors, for example Write Pending.
 Performance may not increase with multiple paths if the number of active LUNs
is less than the number of paths.

Page 10-32 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Multipath Options for the Enterprise Storage

Multipath Options for the Enterprise Storage

 HDLM
• Use Extended Round Robin
 DMP
• Use dmp_pathswitch_blks_shift to define
number of blocks issued per path before
switching (default = 1 MB)
 MPXIO
• Use LOAD_BALANCE_LBA to define LBA
region size per path before switching (typically
32 MB)
 VMware
• MRU or FIXED (auto-restore)
 AIX MPIO
• Round Robin or None
 General comments
• Up to 4 paths per LUN with USP V

HDS Confidential: For distribution only to authorized parties. Page 10-33


Storage Virtualization
MPXIO Multipathing Options

MPXIO Multipathing Options

 Alternate Path (or No Load Balancing)


• Use with AMS 500/1000 and Thunder 9500 V modular systems. Set
primary access via D-CTL, with non-owning path as the alternate path.
 Round Robin (uses all available paths)
• Valid for AMS 2000 family but will cause non-owner access with previous
Hitachi modular storage systems.
• Will cause loss of sequential detection with USP, but may perform better if
number of active LUNs is less than the number of available paths.
 Logical Block (LBA Region Load Distribution)
• Use specifically with USP. Access to each LUN down one path for that
LBA region before switching for next region. Enables sequential detection
in the VSP, USP and Lightning 9900 V Enterprise storage system.
• Uses all available paths.
Actual MPXIO terminology:
LOAD_BALANCE_NONE /* Alternate pathing */
LOAD_BALANCE_RR /* Round Robin */
LOAD_BALANCE_LBA /* Logical Block Addressing */
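
The LBA-region scheme above can be illustrated in a few lines (a conceptual sketch, not the MPxIO implementation): every request inside the same LBA region goes down the same path, which preserves sequential detection in the array while still using all paths over time. The block size and region size below are example values.

# Conceptual sketch of LBA-region load distribution (not MPxIO source code).
def path_for_lba(lba, block_size=512, region_bytes=32 * 1024 * 1024, num_paths=2):
    region = (lba * block_size) // region_bytes
    return region % num_paths

# A sequential scan stays on one path for each 32MB region, then switches.
for lba in range(0, 200_000, 50_000):
    print(f"LBA {lba:>7}: path {path_for_lba(lba)}")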

Page 10-34 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
MPXIO Round Robin

MPXIO Round Robin

 Why not to use MPXIO Round Robin with 9500V and AMS 500/1000:
• Round Robin will cause non-owner access with AMS 500/1000 and Thunder 9500 V.
• Non-owner access adds processing and latency overhead.

Note: Round Robin, however, is a valid multipath scheme for the AMS 2000 family.

HDS Confidential: For distribution only to authorized parties. Page 10-35


Storage Virtualization
VMware Multipath Options

VMware Multipath Options

 MRU
• Each host will continue to use its most recently used path unless an error occurs.
• When it detects a failure, it will try to failover to another path. If successful, this
becomes the new most recently used path.
• MRU should not be used with AMS 500/1000 or
Thunder 9500 V when there are multiple servers
accessing the same LUNs (VMware cluster).
 FIXED
• The host uses the defined path in preference to
any other.
• When it detects a failure, it will try to failover to
another path. If successful, this path is used until
the original path is restored. Access then switches back to the defined Fixed path.
• Fixed should be used with AMS 500/1000 or Thunder 9500 V in a clustered
environment.
 Round Robin (when available)
• Will be valid for the AMS 2000 family
Note: All options are valid for the VSP, USP, USP V, and AMS 2000 family.
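Note: The practical difference between MRU and Fixed is what happens after a failed path recovers. The sketch below contrasts the two policies in simplified form; it is an illustration of the behavior described above, not the VMware path selection code, and the path names are invented for the example (Python).

def mru_select(current_path, healthy_paths):
    # MRU: keep using the current path; only move when it has failed.
    if current_path in healthy_paths:
        return current_path
    return healthy_paths[0]                         # failover target becomes the new MRU path

def fixed_select(preferred_path, current_path, healthy_paths):
    # Fixed: return to the preferred path as soon as it is healthy again.
    if preferred_path in healthy_paths:
        return preferred_path
    if current_path in healthy_paths:
        return current_path
    return healthy_paths[0]

paths = ["vmhba1:C0:T0:L1", "vmhba2:C0:T0:L1"]
print(mru_select("vmhba1:C0:T0:L1", ["vmhba2:C0:T0:L1"]))           # MRU fails over
print(mru_select("vmhba2:C0:T0:L1", paths))                          # MRU stays on the new path
print(fixed_select("vmhba1:C0:T0:L1", "vmhba2:C0:T0:L1", paths))     # Fixed fails back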

Page 10-36 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Load Balancing Algorithms

Load Balancing Algorithms

 Impact of Extended Round Robin versus Round Robin on Sequential Detect
(Diagram: a server running Dynamic Link Manager issues I/Os 1 to 6 to storage over two paths.)
• Round Robin (RR): consecutive I/Os alternate across the paths, so the stream is not recognized as sequential.
▪ Treated as Random I/O, no track preloading, blocks held in Read Cache
▪ Less efficient cache usage, less efficient back-end I/O, suboptimal response times
• Extended Round Robin (ExRR): consecutive I/Os stay on one path, so the stream is detected as Sequential I/O.
▪ Tracks preloaded to cache, cache returned to free queue
▪ Efficient cache usage, efficient back-end I/O, optimized response times
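Note: As a rough illustration of why ExRR preserves sequential detection, the sketch below keeps consecutive sequential I/Os on the current path (up to the 100 I/O figure quoted earlier) and rotates paths for non-sequential I/O. It is an illustration of the principle, not the HDLM algorithm; the class and its bookkeeping are assumptions for the example (Python).

SEQ_BATCH = 100                                     # assumed batch size ("up to 100" sequential I/Os)

class ExtendedRoundRobin:
    def __init__(self, num_paths):
        self.num_paths = num_paths
        self.current = 0
        self.in_batch = 0
        self.next_lba = None

    def select(self, lba, blocks):
        sequential = (lba == self.next_lba)         # next LBA follows the previous request
        if not sequential or self.in_batch >= SEQ_BATCH:
            self.current = (self.current + 1) % self.num_paths
            self.in_batch = 0
        self.in_batch += 1
        self.next_lba = lba + blocks
        return self.current

err = ExtendedRoundRobin(num_paths=2)
print([err.select(lba, 16) for lba in range(0, 80, 16)])             # sequential run stays on one path
print(err.select(999999, 16))                                        # random I/O switches paths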

HDS Confidential: For distribution only to authorized parties. Page 10-37


Storage Virtualization
Sequential Detection — Switching from RR to ERR

Sequential Detection — Switching from RR to ERR

Change from Round Robin to Extended Round Robin:
• Sequential I/O is recognized
• Channel Processor (CHP) utilization reduces
• IOPS increases from 500 to 800

In the example shown, changing from Round Robin to Extended Round Robin increases IOPS from 5,000 to 8,000 while CHP utilization reduces.

Page 10-38 HDS Confidential: For distribution only to authorized parties.


Storage Virtualization
Module Summary

Module Summary

 Described how multitiered storage solutions address customer’s


requirements
 Configured external storage for matching performance requirements
 Described the usage of Hitachi Tiered Storage Manager (HTSM) to
optimize migration tasks and storage tiers
 Described the usage of Hitachi Virtual Partition Manager (VPM) to
optimize storage performance
 Determined the appropriate load balancing algorithm to optimize
storage performance using Hitachi Global Link Manager (HGLM)
Advanced

HDS Confidential: For distribution only to authorized parties. Page 10-39


Storage Virtualization
Module Review

Module Review

1. Large Block Streaming Reads and Writes work well with


cache=disabled for external storage. (True/False)
2. VSP supports creation of up to 31 SLPRs. (True/False)

Page 10-40 HDS Confidential: For distribution only to authorized parties.


11. Capacity
Virtualization
Module Objectives

 Upon completion of this module, you should be able to:


• Use Hitachi Dynamic Provisioning (HDP) to improve storage performance
• Use Hitachi Dynamic Tiering (HDT) to improve storage performance

HDS Confidential: For distribution only to authorized parties. Page 11-1


Capacity Virtualization
Dynamic Provisioning

Dynamic Provisioning

What Is Dynamic Provisioning?

 Hitachi Dynamic Provisioning (HDP) provides a data striping


functionality within a dynamic provisioning pool across the configured
dynamic provisioning volumes.
• Allows you to optimize performance for the I/O workload in the storage system and to optimize the application's storage consumption
• Allows you to expand a dynamic provisioning pool with additional volumes
online dynamically

Page 11-2 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
What Is Dynamic Provisioning?

 To avoid future service interruptions, today it is common to overallocate


storage by 75% or more.
 With dynamic provisioning, disk capacity can be added as needed, when
needed.
(Diagram: Fat Provisioning versus Thin Provisioning.)
• Fat Provisioning: the initial purchase and allocation covers the anticipated maximum, so much of the capacity is purchased and allocated but unused compared with the actual data.
• Thin Provisioning: only what is needed initially is purchased; capacity is allocated but no disks are installed for it, disks can be added non-disruptively as needed, and a warning message is raised when additional storage is required.

Fat Provisioning occurs on traditional storage arrays where large pools of storage
capacity are allocated to individual applications but remain unused (that is, not
written to) with storage utilization often as low as 50%.
Thin Provisioning is a mechanism that applies to large-scale centralized computer
disk storage systems. Thin Provisioning allows space to be easily allocated to servers,
on a just-enough and just-in-time basis.
Over Allocation is a mechanism that allows server applications to allocate more
storage capacity than has been physically reserved on the storage array itself. This
allows leeway in growth of application storage volumes, without having to
accurately predict which volumes will grow by how much. Physical storage capacity
on the array is dedicated only when data is actually written by the application, not
when the storage volume is initially allocated.

HDS Confidential: For distribution only to authorized parties. Page 11-3


Capacity Virtualization
What Is Dynamic Provisioning?

 Provision Virtual Capacity to Hosts/Application


• Virtual maximum capacity is specified and provisioned.
• Real capacity is provisioned from dynamic provisioning pool as host
writes are received in 42MB pages (Enterprise)/ 32MB pages (Modular).

(Diagram: a V-VOL mapped by Hitachi Dynamic Provisioning to the real capacity pool, showing just-in-time space allocation.)

The illustration shows the allocation, and timing of allocation, of the pages. Page is a
Hitachi Dynamic Provisioning (DP) construct. It is the unit of allocation and is
allocated upon receipt of a write for either the first unit of allocation for the DP V-
Volume or a write that requires additional allocation due to filling up previously
allocated pages.
So the light gray shades represent the allocation, but the data is not written until the
first write is received and settled, at which point the gray turns darker gray.
The point is that subsequent pages are not required to be contiguous and, in fact,
will be randomly spread over the real LDEVs within the DP Pool.
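A minimal sketch of this just-in-time behavior is shown below: a virtual address is mapped to a 42MB page only when a write arrives, and the backing page can come from anywhere in the pool. The structures and names are conceptual illustrations, not the real Dynamic Mapping Table or DP-VOL directory format (Python).

import random

PAGE_BYTES = 42 * 1024 * 1024                       # 42MB page size on enterprise storage

class DpVolume:
    def __init__(self, pool_pages):
        self.page_map = {}                          # virtual page index -> pool page id
        self.free_pool_pages = list(pool_pages)

    def write(self, offset_bytes, length_bytes):
        first = offset_bytes // PAGE_BYTES
        last = (offset_bytes + length_bytes - 1) // PAGE_BYTES
        for vpage in range(first, last + 1):
            if vpage not in self.page_map:          # allocate only on the first write to a page
                pool_page = self.free_pool_pages.pop(random.randrange(len(self.free_pool_pages)))
                self.page_map[vpage] = pool_page    # subsequent pages need not be contiguous

vol = DpVolume(pool_pages=range(1000))
vol.write(0, 4096)                                  # first write allocates one 42MB page
vol.write(10 * PAGE_BYTES, 4096)                    # a later write allocates a non-adjacent page
print(vol.page_map)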

Page 11-4 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
Benefits

Benefits

 Application Storage Provisioning Simpler and Faster for


Administrator
• Draw from the dynamic provisioning pool without adding physical disks
• Define an initial virtual volume, often without an increase in volume
capacity or change of configuration
• Add physical storage separately, nondisruptively
 Reduces application outages, saves time, and keeps costs down
 Performance/throughput improvements from pooling resources
 Ease of provisioning
 Ease of management

With Hitachi Dynamic Provisioning, application storage provisioning is much


simpler, faster, and less demanding on the administrator.
To configure additional storage for an application, the administrator can draw from
the Dynamic Provisioning pool without immediately adding physical disks.
When a (virtual) volume of the maximum anticipated capacity is defined initially,
the administrator does not have to increase the volume capacity and change the
configuration as often.
Additionally, when more physical storage is needed, the administrator is required
only to install additional physical disks to the Dynamic Provisioning disk pool
without stopping any host or applications during the process.
This decoupling of physical resource provisioning from application provisioning
simplifies storage management, reduces application outages, saves time, and keeps
costs down.

HDS Confidential: For distribution only to authorized parties. Page 11-5


Capacity Virtualization
Overview

Overview

(Diagram: the host server only sees virtual volumes.)
• An HDP Volume is a "Virtual LUN" which does not demand an identical physical storage capacity.
• Actual storage capacity in the HDP Pool is assigned when the host writes data to a "Virtual LUN."
• The HDP Pool is built from LDEVs carved from array groups/disk drives.

Page 11-6 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
Components

Components

• Fibre Channel ports: connect to hosts.
• DP volumes (virtual): virtual representation of capacity to hosts.
• HDP Control Information (Dynamic Mapping Table, DP-VOL Directory, Page Space Control Block): HDP shared memory and control data that maps DP-VOL data to pages, that is, maps the virtual volume to real capacity.
• HDP pools: separate pools for different characteristics and requirements.
• HDP Pool Volumes: real storage capacity, the target of host writes, represented by installed parity groups and LDEVs.

Hitachi Dynamic Provisioning provides a host with virtual capacity volumes; storage is assigned to the volume from a pool of real storage volumes "Just in Time" for a write request from the host.

HDS Confidential: For distribution only to authorized parties. Page 11-7


Capacity Virtualization
How Does it Work?

How Does it Work?

 Storage Controller presents virtual volumes to hosts into which it maps


storage from a single physical pool as needed.
 Technical Implementation:
• Logical volume: Virtual volume – similar to a snapshot V-VOL. Logical volumes are
assigned Pool Volume storage as needed.
• Pool Volume: Physical volumes to store the actual data for logical volumes. Uses
similar technology as Snapshot Pool volumes.
• Dynamic Mapping Table: Dynamic access table between logical volumes and Pool
volumes.

(Diagram: the controller's Dynamic Mapping Table maps host-reported logical/virtual volumes (HDP-VOLs) to physical capacity Pool-VOLs.)

Page 11-8 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
Setup and Monitoring

Setup and Monitoring

 Dynamic Mapping Table and Shared Memory — Service Processor (SVP)


 Configure Pool and HDP Volumes — SNM2 program and HDvM
 Monitoring Pool — Storage Navigator program, RAID manager CCI,
HDvM, and HTnM
• Pool information available:
▪ HDP Pool ID
▪ Capacity
▪ Amount of free space
▪ Thresholds (two values)
▪ LDEV# of HDP Pool volumes
▪ RAID level, drive type
• Virtual volume information available:
▪ Virtual capacity, capacity consumed from pool
▪ Threshold
▪ HDP Pool ID

HDS Confidential: For distribution only to authorized parties. Page 11-9


Capacity Virtualization
Characteristics of File Systems on HDP Virtual Volumes

Characteristics of File Systems on HDP Virtual Volumes

Platform               Filesystem   Metadata write               DP Comments
HP-UX                  JFS (VxFS)   Write at beginning only      Expected benefits high
                       HFS          Write all @ 10MB interval    Virtual Volume Size = Pool Consumed
Microsoft Server 2003  NTFS         Write at beginning only      Expected benefits high
Linux                  XFS          Write all @ 2GB interval     Expected benefits high
                       Ext2, Ext3   Write all @ 128MB interval   FS create will use 30% of DP volume capacity
Solaris                UFS          Write all @ 52MB interval    Virtual Volume Size = Pool Consumed
                       VxFS         Write at beginning only      Expected benefits high
                       ZFS          Write at beginning only      Expected benefits high
AIX                    JFS          Write all @ 8MB interval#1   Virtual Volume Size = Pool Consumed
                       JFS2         Write at beginning only      Expected benefits high
                       VxFS         Write at beginning only      Expected benefits high

• Minimum Dynamic Pool space consumed per Virtual Volume = 1 GB


• Operating systems and file systems other than those listed in the table have not yet been examined. Long-term behavior of each file system depends upon application/user interaction.

This shows how different file systems deal with their meta data. If it is written once
in a single location, it will work well with HDP. If it writes meta data at regular
intervals over the whole filesystem, it will not work well with HDP because that
would cause the whole virtual volume to be fully provisioned at the start. The real
message is that there are some OS/FS that are great for HDP, some that are not, and
some in the middle. Understanding this leads to establishment of best practices.
The table above shows the potential capacity use by file system type for each OS. (FS
is created with the parameter set to default.) OS and FS should be carefully selected
when using Dynamic Provisioning

Page 11-10 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
Large Pools versus Small Pools

Large Pools versus Small Pools

 Pro larger pools:


• There is reason to believe that the larger the pool, the better the function.
 Pro separate pools:
• Workload separation and Quality Of Service.
 A customer should NOT build a pool that is larger than they can
recover within their Recovery Time Objective (RTO).

Pro Larger Pools


 The larger the pool, the higher the peak load you can accommodate, so there will
be less chance of hot-spots.
 Capacity, performance, and cost all average out.
 You want to use all 4 BED pairs if possible.
 It should be easier to upgrade.
 You want large units of upgrade.
 If you have multiple pools, you may be left with the question of where to put the
capacity you have budget for.
Pro Separate Pools
 Unless you do something to prevent it, I/O will go to the greatest requester.
 This could mean that some workloads dominate others.
 You might wish to separate Prod and Test/Dev or OLTP and BATCH/Decision
Support.
 Recommend separating P-VOL and ShadowImage S-VOL (these are often
different tiers anyway).

HDS Confidential: For distribution only to authorized parties. Page 11-11


Capacity Virtualization
Pool Design Recommended Practices

Pool Design Recommended Practices

 HDP does not change the laws of disk mechanics.


• Individual HDP Pools must have enough HDDs and an appropriate RAID
level to support the IOPS load.
 Pools may include storage from exactly one storage tier.
• Disk type, RAID type, internal versus external
 HDP Pools must reside in exactly one cache partition.
• Array groups are assigned to cache partitions.
 RAID-6 is the recommended RAID Level for HDP and HDT Pools for
reliability reasons but RAID-1+0 or RAID-5 may be used if the I/O
profile demands it, and where pool recovery impact has been
assessed

Page 11-12 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
Additional Important Considerations

Additional Important Considerations

 Evenly distribute array groups for a pool among the BEDs.


• Distribute workload evenly among the BEDs.
 Separate internal and external storage pools.
 For external pools, select array groups from exactly one external
storage system.
• For resiliency and operational flexibility.
 Typical: One LDEV per array group.
 Exceptions:
• Volume size restrictions on very large array groups require multiple
volumes (SATA, large drives).
• Some throughput requirements may require multiple volumes.

HDS Confidential: For distribution only to authorized parties. Page 11-13


Capacity Virtualization
HDP Pool Design Best Practices

HDP Pool Design Best Practices

 Expansion: Use normal techniques to determine requirements.


• More performance or more capacity?
▪ Use appropriate tools for #RG and #HDD requirements to check
performance
▪ Small pool, No performance requirement? Just add RGs.
▪ 4 to 7 RGs? Add another 4 to 7 RGs.
▪ More than 8 RGs? Add 8 RGs at a time.
▪ Rebalancing, when available: 2 RGs at a time.
• Larger Pools: better workload balancing
• Try to expand at 80%-full point.
▪ The lower the expansion point, the more load distribution is effective.
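Note: The expansion guidance above can be expressed as a small helper; the 80% trigger and the RG increments follow the bullets, while the function itself and its inputs are assumptions made for this illustration (Python).

def expansion_advice(used_capacity, total_capacity, current_rg_count, performance_requirement=False):
    # Rough expansion recommendation for an HDP pool, following the guidance above.
    used_pct = 100.0 * used_capacity / total_capacity
    if used_pct < 80.0:
        return "%.0f%% used: no expansion needed yet (plan to expand at about 80%% full)" % used_pct
    if not performance_requirement and current_rg_count < 4:
        return "Small pool, no performance requirement: just add RGs"
    if 4 <= current_rg_count <= 7:
        return "Add another 4 to 7 RGs"
    return "Add 8 RGs at a time (rebalancing handles 2 RGs at a time when available)"

print(expansion_advice(used_capacity=85, total_capacity=100, current_rg_count=6))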

Page 11-14 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
Dynamic Tiering

Dynamic Tiering

HDT — Page Level Tiering

(Diagram: Pool A with Tier 1 EFD/SSD, Tier 2 SAS and Tier 3 SATA, ranked by when data was last referenced.)
 Different tiers of storage are now in one pool.
 If data becomes less active, it migrates to lower level tiers.
 If activity increases, data will be promoted to a higher tier.
 Since 20% of data accounts for 80% of the activity, only the active part of a volume will reside on the higher performance tiers.

The pool contains multiple tiers (not the other way around like in USP V/HDP or
USP/HTSM).
The logical volumes have pages mapped to the pool (same as USP V/HDP). Those
pages can be anywhere in the pool on any tier in that pool.
The pages can move (migrate) within the pool for performance optimization
purposes (move up/down between tiers).
HDT will try to use as much of the higher tiers as possible. (T1 and T2 will be used
as much as possible while T3 will have more spare capacity.)
You can add capacity to any tier at any time. You can also remove capacity
dynamically. So, sizing a tier for a pool is a lot easier.
Quantity added/removed should be in ARRAY Group quantities.
The first version of HDT (with VSP at GA):
 Up to a maximum 3 tiers in a pool.

HDS Confidential: For distribution only to authorized parties. Page 11-15


Capacity Virtualization

We will start with managing resources in a 3 tier approach. That may mean:
 1-Flash drives, 2-SAS, 3-SATA or
 1-SAS(15k), 2-SAS(10k), 3-SATA (or something else)
 The Pool’s tiers are defined by HDD type
 No support for RAID-10
 No external storage supported (in v01)
 No mainframe support in v01

Page 11-16 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
Hitachi Dynamic Tiering Benefits

Hitachi Dynamic Tiering Benefits

Automate and Eliminate Complexities of Efficient Tiered Storage Use


(Diagram: storage tiers mapped against a data heat index, from a high activity set through the normal working set down to the quiet data set.)
 Solution Capabilities
• Automate data placement for higher performance and lower costs
• Simplified ability to manage all storage tiers as a single entity
• Self-optimized for high performance and space efficiency
• Page-based granular data movement for highest efficiency and throughput
 Business Value
• Significant savings by moving data to lower cost tiers
• Increase storage utilization up to 50%
• Easily align business application needs to the right cost infrastructure

With Hitachi Dynamic Tiering, the complexities and overhead of implementing data
lifecycle management and optimizing use of tiered storage are solved. Dynamic Tiering
simplifies storage administration by eliminating the need for time consuming manual
data classification and movement of data to optimize usage of tiered storage.
Hitachi Dynamic Tiering automatically moves data on fine-grain pages within Dynamic
Tiering virtual volumes to the most appropriate media according to workload to
maximize service levels and minimize TCO of storage.
For example, a database index that is frequently read and written will migrate to high
performance flash technology while older data that has not been touched for a while will
move to slower, cheaper disks.
No elaborate decision criteria are needed; data is automatically moved according to
simple rules. One, two, or three tiers of storage can be defined and used within a single
virtual volume, using any of the storage media types available for the Hitachi Virtual
Storage Platform. Tier creation is automatic based on user configuration policies,
including media type and speed, RAID level, and sustained I/O level requirements.
Using ongoing embedded performance monitoring and periodic analysis, the data is
moved at a fine grain page level to the most appropriate tier. The most active data moves
to the highest tier. During the process, the system automatically maximizes the use of
storage keeping the higher tiers fully utilized.

HDS Confidential: For distribution only to authorized parties. Page 11-17


Capacity Virtualization
LDEV Design

LDEV Design

 Just because we can make a LUN of 60TB doesn't mean the OS can handle it. The following can:
• AIX 5.2 TL08 or later, 5.3 TL04 or later
• Windows Server 2003 SP1 or later
• Red Hat Enterprise Linux AS 4 Update 1 or later
 Others may limit at 2TB. Broken HBA drivers may limit. For
information about the maximum LU capacity supported, contact your
operating system vendor.

 Always align your application data — HDP/HDT/Static/AMS/RAID


• It costs nothing, it may boost performance.
 Many small versus few large is still a question of factors outside of
the storage:
• Ability of the application/volume manager/OS to queue efficiently.
• From a storage point of view, fewer large.
• It is possible to trigger problems with some stripe-over-stripe designs.
• Consider the stripe widening effect:
▪ Oracle ASM disk group — four LDEVs of 200GB, 100% full:
• Add one LDEV — four 200GB*100% + one 200GB*80%. (160GB waste)
• Expand LDEVs — four 250GB*80%

Page 11-18 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
HDP and HDT Pool Design

HDP and HDT Pool Design

 Where performance matters, an HDP pool/HDT Tier should have:


1. RAID-5 for both SSD and 15K SAS
2. All array groups the same — internal/external, raid level, SAS/SATA,
speed, and model (the slower device kills the whole pool).
3. All array groups the same size is strongly recommended (you should not
use the extra capacity in the larger devices).
4. Place all array group capacity into a single pool/tier; do not split across
two pools/tiers or use for other purposes (other use will imbalance and
impact performance).
5. Use one Pool-VOL per array group (unless the array group is large and
split by the hardware into multiple VDEVs; in which case place all the
LDEVs into the pool).
6. Concatenated array groups are permitted but the other rules must be
followed (the size of the array group makes it pretty inflexible).
7. Use at least four array groups (if you do not, hotspots are likely).

 Informally — HDT tiers should follow the HDP pools guidelines.


 Each tier should independently follow the rules.
 SSD tiers may have less than four array groups (SSD should not
have hotspots).
 Ideally eight or fewer HDT pools:
• The microcode can only migrate eight pools at a time; more are possible
but expect less data agility.
 You may now remove Pool-VOLs from an HDP or HDT pool, but:
 Consider the first array group you place in the pool carefully. It holds the pool DMT, so it cannot be removed.
• It also uses up capacity for the DMT.

HDS Confidential: For distribution only to authorized parties. Page 11-19


Capacity Virtualization
HDT Tiers

HDT Tiers

 Maximum of three tiers in one pool (NOT restricted to:


SSD/SAS/SATA/EXTERNAL).
 Tier is defined/allocated by I/O throughput (response performance) and
availability of the media.
 Media with shortest response time are positioned as higher tiers, and media
with longer response time are positioned as lower tiers.
 Tier order is based on media type and rotational speed (rpm) only;
differences in performance according to RAID levels are excluded when
determining order of tiers.

Media supported by VSP (order of tiers):
  1: 2.5" SSD(200GB), 3.5" SSD(400GB)
  2: 2.5" SAS15Krpm(146GB)
  3: 2.5" SAS10Krpm(300GB), 2.5" SAS10Krpm(600GB)
  4: 2.5" SAS 7.2Krpm(500GB)
  5: 3.5" SATA(2TB)

(Diagram: adding or deleting a tier inserts media based on the order of tiers.)
• Add Tier: adding SSD to a pool with Tier1 SAS and Tier2 SATA makes SSD the new Tier1 and moves the other media to lower tiers (Tier2 SAS, Tier3 SATA).
• Delete Tier: deleting the SSD tier from a pool with Tier1 SSD, Tier2 SAS and Tier3 SATA moves the other media to upper tiers (Tier1 SAS, Tier2 SATA).

Page 11-20 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
Hitachi Dynamic Tiering Operations

Hitachi Dynamic Tiering Operations

Create Pool — HDT Options

Tier Management
 If you select Auto, performance monitoring and tier relocation are automatically
performed.
 If you select Manual, you can manually perform performance monitoring and tier
relocation with the CLI commands.
Cycle Time
 You can also select ½ or 1 or 2 or 4 or 8 hr intervals.
 When you select 24 Hours, Monitoring period can be specified.
Monitoring Period field
 Specify the start and end time of performance monitoring in 00:00 to 23:59
(default value).
 One or more times must be taken between the starting time and the ending time,
and the starting time must be before the ending time.
 You can view the information gathered by performance monitoring with Storage
Navigator/Tuning Manager.
 When you select any of ½ Hour, 1 Hour, 2 Hours, 4 Hours or 8 Hours:

HDS Confidential: For distribution only to authorized parties. Page 11-21


Capacity Virtualization
Hitachi Dynamic Tiering Operations

 Performance monitoring is performed every hour you selected, starting at 00:00.


Note: When Auto is set and not all of the V-VOL pages can complete migrating in one cycle, the next cycle starts migrating from the last processed V-VOL using the updated monitoring information.
Monitoring Mode
 Continuous mode – Determines the need to relocate data based on a weighted
calculation of the I/O load over multiple monitoring cycles. The continuous
mode weighted calculations can result in the most recent high activity data being
promoted while data with historically low activity will be demoted.
 Periodic mode - only uses a single cycle of I/O load data.

Page 11-22 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
Hitachi Dynamic Tiering Operations

Hitachi Dynamic Tiering Operations

 Cycle Management
• Auto Tier Cycle Management
▪ 24 hours or less: 8, 4, 2, 1 hours, 30 min
▪ anchored at midnight 00:00
(Timeline diagram, 4/19 0:00 through 4/22 0:00: with a 24-hour cycle anchored at midnight, each performance monitoring period is followed by a page migration period, which includes zero page reclaim, while the next performance monitoring period runs in parallel.)

• Manual Tier Cycle Management


▪ Control start and end of cycles with CLI commands
▪ Maximum monitoring cycle length is seven days.

There are two concurrent pool tasks that occur repeatedly over time which give the
HDT Pool its functionality and behavior — Performance Monitoring tasks and Page
Migration tasks.
An HDT Pool can be configured so that the Monitoring and Migration tasks occur in
a cyclic fashion, triggered based on automated time settings. An HDT Pool can also
be configured for Manual operations.
At the end of a Monitoring phase, the collected page access metrics are frozen and
analyzed and are used to determine which data pages will be migrated and to which
tier during the following Migration phase. When the Pool is operating under Auto
configuration, the next Monitoring cycle will begin and will collect page access
metrics even while the Migration phase is running.
The HDT Monitoring logic excludes system internally-generated I/O such as that
generated by page migration or I/O that results from a drive sparing or correction
copy operation.

HDS Confidential: For distribution only to authorized parties. Page 11-23


Capacity Virtualization
Hitachi Dynamic Tiering Operations

The Pool and/or Tier status identifies whether either or both of the cycle processes
are running:

Status   Meaning
STP      “stopped” — neither the monitoring nor the relocation cycle is running
MON      “monitoring” — monitoring cycle is running, relocation cycle is not running
RLC      “relocation” — relocation cycle is running, monitoring cycle is not running
RLM      “relocation and monitoring” — both relocation and monitoring cycles are running concurrently
These are the status codes that are reported when the pool and tier data is reported
by the raidcom get dp_pool CLI command.
The status of the monitoring and relocation cycles is also visible in the Storage
Navigator 2 GUI.
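For manually controlled pools, a monitoring script might interpret these status codes before kicking off the next cycle. The mapping below simply restates the table; the helper function and any automation around it are assumptions for illustration, not a documented HDS procedure (Python).

STATUS_MEANING = {
    "STP": "stopped: neither monitoring nor relocation running",
    "MON": "monitoring cycle running",
    "RLC": "relocation cycle running",
    "RLM": "relocation and monitoring running concurrently",
}

def can_start_manual_monitoring(status):
    # A new manual monitoring cycle only makes sense when no monitoring cycle is running.
    return status in ("STP", "RLC")

for status in ("STP", "MON", "RLC", "RLM"):
    print(status, "-", STATUS_MEANING[status], "| start new monitoring:", can_start_manual_monitoring(status))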

Page 11-24 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization

 Cycle Time Control
• Periodic Mode
▪ Automatic
• Repeating every 30 minutes, 1, 2, 4, 8, or 24 hours (anchored at midnight or at the even hour).
• Relocation is started automatically when the monitoring cycle ends.
• The next monitoring cycle is started.
▪ Manual
• Administrator controls cycle start and stop.
• Use Storage Navigator 2 or CLI raidcom.
• Maximum monitoring cycle duration limited to 7 days.
• Continuous Mode
▪ Implements ongoing aggregation of monitoring metrics across multiple monitoring cycles.
▪ Results in “quicker promotion” and “slower demotion” behavior.

Automatic, Manual or Continuous cycle settings can be set and changed either
through Storage Navigator 2 GUI and/or using CLI commands. An HDT pool’s
cycle control can be changed at any time.
When an HDT pool is configured for Automatic cycle control, the cycle period can be
set at one of the available intervals: 30 minutes, 1 hour, 2 hours, 4 hours, 8 hours or
24 hours. A cycle always starts at the “start of an hour.” That is, the next cycle will
start at the next “00” minutes of the next hour. It is important that you understand
this as you anticipate the cycle processing and availability of monitoring data from
the “next” full and successful monitoring cycle.
When an HDT pool is configured for Manual operation, the storage administrator
can start and stop monitoring and relocation cycles using either Storage Navigator 2
GUI and/or CLI raidcom command(s). The maximum duration for a manual
monitoring cycle is 7 days.
More recently, with VSP microcode version 2, the option of “Continuous mode”
monitoring is supported. Continuous mode aggregates the collected monitoring
metrics across multiple monitoring periods. In Periodic mode, each monitoring cycle
starts new with counting the page level accesses. The new Continuous mode results
in different page promotion and demotion behavior.

HDS Confidential: For distribution only to authorized parties. Page 11-25


Capacity Virtualization
HDT Performance Monitoring

HDT Performance Monitoring


 Tier boundaries are decided when either performance potential or installed capacity is used up in a tier within an HDT pool.
1. Determines the lower tier boundaries in an HDT pool (using the frequency distribution of I/Os per page)
 By placing pages starting from the highest tier down to lower tiers in a pool, the lower boundary of a tier is decided as follows:
• Tier Capacity: capacity installed in a tier is used up.
• Tier Performance: performance potential* of a tier is reached.
(*) Maximum IOPS that a tier can handle. The lower boundary is decided by tier capacity or tier performance, whichever is reached first.
2. Determines the tier range of each tier in an HDT pool
 Determines tier ranges using the lower tier boundaries.
 To avoid excess page relocations between tiers, gray zones are placed above the lower boundaries (up to +20%); these form the upper boundaries of the lower tiers.
3. Determines the right tiers for pages
 Based on the decided tier ranges, the appropriate tier for a page is decided.
 To avoid excess relocations, the pages in gray zones will be kept in the same tier.

(Diagram: three example pools comparing Tier Performance (busy or not busy) with Tier Capacity (full or available) for SSD, SAS and SATA tiers.)
• Example 1: no tier is busy; the SSD and SAS tiers are used up (full) but their performance potential is still available, and SATA capacity remains. No actions required.
• Example 2: the SSD tier is not busy and can handle more I/Os, but SAS capacity is used up and its performance needs attention. Need to add more SAS drives.
• Example 3: performance potential is used up in the SSD tier even though capacity is still available; the SAS and SATA tiers are OK. Need more SSD drives.
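The boundary decision above can be sketched roughly as follows: pages are ranked by I/O frequency and filled into the highest tier first, moving down a tier whenever either the tier's installed capacity or its performance potential is used up. The gray zone handling is omitted for brevity, and the numbers and structures are assumptions for illustration only (Python).

def decide_tier_boundaries(page_iops, tiers):
    # page_iops: IOPS per page; tiers: list of dicts with 'capacity_pages' and
    # 'perf_potential_iops', ordered from highest tier to lowest.
    ranked = sorted(range(len(page_iops)), key=lambda i: page_iops[i], reverse=True)
    assignment = [None] * len(page_iops)
    tier_idx, used_pages, used_iops = 0, 0, 0.0
    for page in ranked:
        tier = tiers[tier_idx]
        tier_full = (used_pages >= tier["capacity_pages"] or
                     used_iops >= tier["perf_potential_iops"])
        if tier_full and tier_idx < len(tiers) - 1:
            tier_idx, used_pages, used_iops = tier_idx + 1, 0, 0.0
        assignment[page] = tier_idx
        used_pages += 1
        used_iops += page_iops[page]
    return assignment

iops = [500, 300, 120, 80, 20, 5, 1, 0]
tiers = [{"capacity_pages": 2, "perf_potential_iops": 900},      # SSD
         {"capacity_pages": 3, "perf_potential_iops": 400},      # SAS
         {"capacity_pages": 100, "perf_potential_iops": 200}]    # SATA
print(decide_tier_boundaries(iops, tiers))                       # busiest pages land on the highest tier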

Page 11-26 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
HDP and HDT Pool Design

HDP and HDT Pool Design

 As with HDP, do not mix drive capacity in a tier. If you must, treat all drives as if they were of the smaller capacity.
 You should not mix RAID level in one tier.

VSP media and order of tiers:
  1: 2.5" SSD(200GB), 3.5" SSD(400GB)
  2: 2.5" SAS15Krpm(146GB)
  3: 2.5" SAS10Krpm(300GB), 2.5" SAS10Krpm(600GB)
  4: 2.5" SAS 7.2Krpm(500GB)
  5: 3.5" SATA(2TB)
 Avoid:
• RAID-6 in higher tier with RAID-5 in lower tier:
▪ Concatenation in lower tier (without similar concatenation in higher)
• When supported — RAID-1+0 large-slow with RAID-5 small-fast

HDS Confidential: For distribution only to authorized parties. Page 11-27


Capacity Virtualization
HDT Page Monitoring

HDT Page Monitoring

 Monitoring started in the cycle after major LDEV events:


• Page allocation, page deletion, paircreate, migrate,
add/delete Pool-VOL, ZPR
 Monitor ignores management I/O:
• Page relocation (42MB pages, at least two collection cycles)
• Rebalance
• Failure processing (correction copy and so on)
 DP-VOLs with relocation disabled are not counted (so if used, you
might make wrong conclusions).

Page 11-28 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
Migration Decisions

Migration Decisions

 To avoid pages bouncing in and out of a tier, data in the gray zone is
not migrated.
 Does relocation hurt?
• Relocation is capped at 3TB/day (36MB/s). Very rare.
• Similar to rebalance.
• A pool will have a minimum of 4*SATA AG.
• Supports 233MB/s sequential write (16TB/day).
• 18% absolute worst case. SAS: 5%.

(Diagram: frequency distribution of I/Os per page across Tier1, Tier2 and Tier3, with gray zones at the tier boundaries.)

HDS Confidential: For distribution only to authorized parties. Page 11-29


Capacity Virtualization
Lab: Competing Workload Scenario

Lab: Competing Workload Scenario

Overview

 Test two competing workloads with and without Cache Partition


Manager
• Workload 1 — OLTP emulation with 75% Read, 75% Random, 4 KB
I/Os, 10GB across 2 x 3D+1P
• Workload 2 — Random Update with 90% Write, 100% Random 4 KB
I/Os, 10 GB across 2 x 3D+1P (unusual, benchmark-like workload)
 Scenario 1 — Baseline with 2 x OLTP, increasing intensity
 Scenario 2 — Baseline with 2 x Random Write, increasing intensity
 Scenario 3 — Both, without CPM, increasing intensity
 Scenario 4 — Both, with CPM, increasing intensity

Measurements

 IOMETER (5 minute stabilization per test)


• I/O Rate Per Sec
• Average Read Response
• Average Write Response
 HTnM
• Cache Write Pending
• Array Group Busy
• LDEV Read Response
• LDEV Write Response
 Notes:
• 10% Read load on the Random Write LUN to trigger HTnM measurement
on same screen

Page 11-30 HDS Confidential: For distribution only to authorized parties.


Capacity Virtualization
Module Summary

Module Summary

 Described the usage of Hitachi Dynamic Provisioning (HDP) to


improve storage performance
 Described the usage of Hitachi Dynamic Tiering (HDT) to
improve storage performance

HDS Confidential: For distribution only to authorized parties. Page 11-31


Capacity Virtualization
Module Review

Module Review

1. What factors need to be considered while creating an HDP pool?

Page 11-32 HDS Confidential: For distribution only to authorized parties.


12. Monitoring and
Troubleshooting
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the methodology used to define and isolate a performance-
related problem
• Describe which Hitachi product is appropriate to use given a specific
performance problem and environment
• Analyze performance data, isolate bottlenecks and make
recommendations to solve performance related problems
• Identify the Hitachi Data Systems Professional Services offering related to
performance assessment and troubleshooting

HDS Confidential: For distribution only to authorized parties. Page 12-1


Monitoring and Troubleshooting
Recap: What Is Performance?

Recap: What Is Performance?

 Fulfillment of an expectation.
Performance = Reality – Expectations
Happiness = Reality – Expectations
Performance = Happiness
 Measure Reality
• Establish comprehensive data collection
 Ask about Customer Expectations
• Quantifiable expectations exist?
• How are customer expectations not being met?

Fulfillment of an Expectation
 If both performance and happiness = the same thing (reality minus expectations),
then it follows that performance must equal happiness
Measure Reality
 Establish comprehensive data collection
Ask about Customer Expectations
 Quantifiable expectations exist?
 Throughput (IOPs, MBS); Response Time
 How are customer expectations not being met?
 Specific targets; Timing or circumstances; How do they know they are unhappy?

Page 12-2 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Characterize the Issue

Characterize the Issue

 Dimensions of performance measurement?


• Interactive Workloads
▪ Response Time
▪ Achieved by employing low resource utilization and minimal queuing
• Batch Workloads
▪ Throughput
• IOPs
• MBS
▪ Achieved by employing maximum resource utilization with moderate
queuing
 Optimizing utilization of a storage access resource for a batch
workload and an interactive workload are mutually exclusive.
 Consequently, batch workloads and interactive workloads should
generally not share same storage access resource at same time.

HDS Confidential: For distribution only to authorized parties. Page 12-3


Monitoring and Troubleshooting
Map Issue to Specific Storage Resources

Map Issue to Specific Storage Resources

 Customer expectations relate to an application workload.


 An application workload is serviced by specific storage resources.
 Use HDvM to map application workloads to specific storage
resources.
 HDvM can also remember named lists of LDEVs as logical groups.

Page 12-4 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Assess Storage Resource Utilization

Assess Storage Resource Utilization

 Utilization = percent busy or occupied.


 Most storage performance problems are attributable to excessive
storage resource utilization.
• High Write Pending (high back end utilization)
• High front end director MP utilization
• High back end director MP utilization
• High Array Group utilization
 Storage resource throughput, utilization, and response time are
reported by:
• Performance Monitor
• Tuning Manager

HDS Confidential: For distribution only to authorized parties. Page 12-5


Monitoring and Troubleshooting
Reporting Intervals

Reporting Intervals

 Most performance issues are analyzed using 1-minute data intervals.


• Performance problems requiring shorter interval analysis are rare, but do
occur.
• Analysis of 1-minute interval data is generally limited to 1 to 2 day
durations.
• Short intervals avoid muting peaks by averaging.
 Longer intervals are mostly useful for workload cycle and trend
analysis.

Page 12-6 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Assessing Utilization Levels, Interactive or Batch

Assessing Utilization Levels, Interactive or Batch

 Minute-by-minute basis for interactive workloads


• High utilization in any minute is a point of concern.
• Scope of concern:
▪ Immediate, likely to cause perceptible response time problems
▪ Contingent, inadequate reserves to support processing during failures
 Average over time remaining in batch window for batch workloads
• High utilization by itself is not cause for concern; it is a design goal.
▪ Maximizing utilization per resource maximizes throughput per
resource, the optimization goal for batch processing.
▪ High response time is not cause for concern either. High response time
is a natural consequence of high utilization levels and moderate
queuing, the keys to maximizing throughput per resource.
• The key question:
▪ Is there adequate capacity to complete processing within the batch
window even after a component failure?

HDS Confidential: For distribution only to authorized parties. Page 12-7


Monitoring and Troubleshooting
Design for Normal or Failure Operation

Design for Normal or Failure Operation

(Diagram: utilization bands with marks at 50% and 75%. Design/build for normal operation below 50%, design/build for failure operation up to 75%, and allow for the extra utilization that occurs during failure modes above that.)

Note:
• Some applications are more sensitive than others.
• OLTP, mail and some DBs are response-critical.
• Backup and streaming applications can push utilization to a much higher level without it ever being a problem.
• For example, 80% may be red for a database, but green for a backup or batch application.

Traffic Light System explained:
• Green: normal operation. No thresholds are being exceeded. Ability to accommodate bursts of load without noticeable impact.
• Amber: thresholds are being exceeded occasionally. Increased monitoring is appropriate. Bursts of load may cause noticeable performance degradation.
• Red: thresholds are being exceeded regularly. Corrective attention is recommended. Further load may cause severe degradation.
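Note: A simple way to apply the diagram is to classify each utilization sample against the 50% and 75% marks, remembering that the right thresholds vary by application as noted above. The helper below is an illustration only (Python).

def traffic_light(utilization_pct, green_limit=50.0, amber_limit=75.0):
    # Thresholds taken from the diagram above; adjust per workload sensitivity.
    if utilization_pct < green_limit:
        return "green"   # normal operation, headroom for bursts and failure modes
    if utilization_pct < amber_limit:
        return "amber"   # thresholds exceeded occasionally: increase monitoring
    return "red"         # thresholds exceeded regularly: corrective attention recommended

for sample in (35, 62, 81):
    print(sample, traffic_light(sample))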

Page 12-8 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Port Capacity Reserve to Accommodate Failure

Port Capacity Reserve to Accommodate Failure

Maximum recommended utilization level with all ports up (by ports per LDEV), and recommended maximum planned utilization with one port down:

              2 ports up   4 ports up   One port down
Interactive   30%          45%          60%
Batch         40%          60%          80%

 Two ports per LDEV is recommended typically, especially for cache-


friendly workloads, including sequential workloads.
 Four ports per LDEV is recommended for cache-unfriendly
workloads, that is random, low cache hits.
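Note: The table values follow from simple arithmetic: when one of N equally loaded ports fails, the surviving ports absorb its load, so per-port utilization grows by a factor of N/(N-1), assuming even redistribution. A quick check (Python):

def utilization_after_port_loss(per_port_utilization_pct, num_ports):
    # Even redistribution of the failed port's load across the survivors.
    return per_port_utilization_pct * num_ports / (num_ports - 1)

print(utilization_after_port_loss(30, 2))    # interactive, 2 ports at 30% -> 60% with one port down
print(utilization_after_port_loss(45, 4))    # interactive, 4 ports at 45% -> 60% with one port down
print(utilization_after_port_loss(60, 4))    # batch, 4 ports at 60% -> 80% with one port down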

HDS Confidential: For distribution only to authorized parties. Page 12-9


Monitoring and Troubleshooting
Back End Director Utilization

Back End Director Utilization

 Maximum recommended BED utilization = 30–35% to allow for


preserving the level of performance during failure of the other BED
Pair component.
 Reserve capacity is required to accommodate failure.
 BEDs must always have sufficient capacity to destage incoming
writes from cache.

Page 12-10 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Maximum Recommended Array Group Utilization

Maximum Recommended Array Group Utilization

 For interactive workloads


• 50% during normal operations
• Utilization reserve required to accommodate failure
 For Batch workloads
• As high as possible, because batch metric is normally Elapsed Time
• Expected maximums of 70%-80%
 Depends on the burst profile of initiator
• Average utilization over time remaining in batch window should not
exceed 50%

HDS Confidential: For distribution only to authorized parties. Page 12-11


Monitoring and Troubleshooting
RAID-5 versus RAID-1+0

RAID-5 versus RAID-1+0

 RAID-5 is lower cost per MB and performs well for most workloads
except highly random write workloads.
 If the workload is material, and the:
• Percentage of random write operations is greater than 20%, then RAID-
10 is generally indicated.
• Percentage of random write operations is greater than 5% and the
workload is highly sensitive to response time, then RAID-10 is generally
indicated.
• Otherwise, RAID-5 is often recommended for its superior cost
effectiveness.
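Note: The selection rule above can be written as a small decision helper; the function and its inputs are assumptions made for illustration, not an HDS sizing tool (Python).

def recommend_raid(random_write_pct, response_time_sensitive, workload_is_material=True):
    # Follows the guidance above for choosing between RAID-5 and RAID-1+0.
    if not workload_is_material:
        return "RAID-5"
    if random_write_pct > 20:
        return "RAID-1+0"
    if random_write_pct > 5 and response_time_sensitive:
        return "RAID-1+0"
    return "RAID-5 (superior cost effectiveness)"

print(recommend_raid(random_write_pct=30, response_time_sensitive=False))   # RAID-1+0
print(recommend_raid(random_write_pct=10, response_time_sensitive=True))    # RAID-1+0
print(recommend_raid(random_write_pct=10, response_time_sensitive=False))   # RAID-5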

Page 12-12 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Maximum Recommended Cache Write Pending

Maximum Recommended Cache Write Pending

 Up to 30% write pending is considered normal.


 Frequent or sustained trips to 40% deserve attention.
 Frequent or sustained trips to 50% deserve prompt attention.
 On a USP, greater than 66% is an emergency destage/inflow control.
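Note: One way to apply these thresholds is to look for frequent or sustained excursions rather than single samples. The sketch below evaluates a series of Write Pending samples; the window size and counting rule are assumptions for illustration, while the 30/40/50% levels come from the guidance above (Python).

def assess_write_pending(samples_pct, window=10):
    # samples_pct: recent Write Pending % values, for example one per minute.
    recent = samples_pct[-window:]
    trips_40 = sum(1 for s in recent if s >= 40)
    trips_50 = sum(1 for s in recent if s >= 50)
    if trips_50 >= 3:
        return "prompt attention: repeated trips to 50% write pending"
    if trips_40 >= 3:
        return "attention: repeated trips to 40% write pending"
    if max(recent) <= 30:
        return "normal: write pending at or below 30%"
    return "watch: occasional excursions above 30%"

print(assess_write_pending([22, 28, 31, 44, 46, 41, 29, 33, 52, 48]))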

HDS Confidential: For distribution only to authorized parties. Page 12-13


Monitoring and Troubleshooting
Causes of High Write Pending

Causes of High Write Pending

 Write Pending is cache used by host writes to cache that have yet
to be destaged (written) to disk.
 High write pending is caused by rate of host writes to cache
exceeding storage system ability to transfer data from cache to disk.
 High write pending is caused by inadequate back end storage
access resources to service the host write workload.

 For internal storage


• Array Group Utilization (most common)
▪ Inadequately sized pool of Array Group resources
▪ Non-uniform distribution of workload among Array Groups in a pool
• Array Group Type
▪ Inappropriate use of RAID-5, RAID-6, and/or SATA drives for high
random write workloads
• BED MP Utilization
▪ Intense Hitachi ShadowImage® Replication workload causing high
utilization (fairly common)

Page 12-14 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Causes of High Write Pending

 For External Storage


• Longer data path requires more attention for high intensity workloads.
• Cache Mode:
▪ Enabled, I/O complete issued to host immediately.
▪ Disabled, I/O complete issued to host after I/O complete received from
external storage.
▪ For throughput, “best setting” depends upon the workload, determined
by empirical trial.
▪ Cache partitions are recommended as one method available to limit the impact of high Write Pending for external storage.
• Universal Storage Platform External Port utilization.
▪ Active back end resource for external storage is the FED port MP.
• Root cause of high Write Pending in the Universal Storage Platform can
lie within the external storage.
▪ Recursive analysis of the resources in the external storage.

Cache Mode:
 Enabled, I/O complete issued to host immediately.
More likely to cause Write Pending problems
 Disabled, I/O complete issued to host after I/O complete received from external
storage.
Less likely to cause Write Pending problems

HDS Confidential: For distribution only to authorized parties. Page 12-15


Monitoring and Troubleshooting
Identifying Write Pending Problems

Identifying Write Pending Problems

 Write pending should be among the first metrics checked during any
troubleshooting procedure.
• Easy to check and evaluate.
• Write pending problems impact all servers in a cache partition attempting
to write to the storage systems.
 When write pending is elevated, the cause is usually identified by
locating an intense write workload whose timing coincides with the
elevated write pending levels.

Page 12-16 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Interactive Workloads — Response Time Issues

Interactive Workloads — Response Time Issues

 The likely origin of a response time problem depends upon response


time expectations (see diagram).
 Cache hits and seek range are largely application properties and, to a lesser extent, characteristics of storage deployment. (Diagram: the expected response time range spans from the minimum disk seek to the maximum disk seek.)
 For legitimate workloads, storage resource utilization is primarily influenced by appropriate assignment of workloads to storage access resources, that is, ports and Array Groups.

HDS Confidential: For distribution only to authorized parties. Page 12-17


Monitoring and Troubleshooting
Caution about Non-Legitimate Workloads

Caution about Non-Legitimate Workloads

 Not all workloads are legitimate


• Example: Poorly constructed SQL queries that perform unintended full
table scans rather than indexed table lookups
• Example: Using Benchmarks to test application performance because it is
difficult or impossible to use live data
 For legitimate workloads, the most common cause of poor response
time is either high FED MP utilization or high array group utilization.
 Most common cure for these problems is improved distribution of I/O
requests among an adequately sized pool of storage access
resources.

Not all workloads are legitimate.


 Example: Poorly constructed SQL queries that perform unintended full table
scans rather than indexed table lookups
Storage utilizations may be pushed to obvious high levels, but fixing these
problems may not be the correct solution.
 Example: Using Benchmarks to test application performance because it is
difficult or impossible to use live data.
It is very easy to use benchmark tools to create variety of load patterns but they
might not show the actual load generation the way applications do in real life or
production environments.

Page 12-18 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Batch Workloads — Throughput Issues

Batch Workloads — Throughput Issues

 Batch workloads are more likely to benefit from sequential operations


 Keys to successful sequential operations
• Accurate detection of sequential operations
▪ Path driver behavior
▪ Storage prefetch
• Large request sizes
▪ Application request size
▪ Sequential reblocking
 Uniform distribution of workload among an adequately sized pool of
storage access resources
• FED MP resources
• Array Groups

HDS Confidential: For distribution only to authorized parties. Page 12-19


Monitoring and Troubleshooting
Distributing Workloads across Resource Pools

Distributing Workloads across Resource Pools

 Pool: A workload distribution domain spanning multiple resource


instances shared by one or more compatible device owners.
 Distribute workloads and share resources within defined, disjoint pools.
 Share unless there is a specific reason to isolate:
• Sharing supports improved asset utilization.
• Isolate to:
▪ Protect mission-critical applications from outside influences
▪ Quarantine large unpredictable workloads (for example, Development)
▪ Separate conflicting resource usage patterns (interactive versus batch)
▪ Compartmentalize to contain possible disruption
▪ Resolve organizational control conflicts

Note: HDP is an example of a pool.

Page 12-20 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Replication Related Performance Issues

Replication Related Performance Issues

 Aggressive Use of ShadowImage


• Possible to initiate massively parallel pair creates each transferring whole
cylinders at a time from P-VOL to one or more S-VOLs.
• Highly aggressive transfers can take over internal paths and BED
processor resources leaving inadequate resources to service other host
initiated workloads.
• If there are inadequate back end resources available to destage writes
arriving from hosts, then these writes will accumulate in cache as writes
pending. Elevated write pending levels can result in inflow control
retarding the pace of host requests.
 Replication Data Path Problems for Universal Replicator
• If there is inadequate bandwidth between an MCU and RCU to transfer
writes arriving at MCU from hosts, excess writes will accumulate in MCU
cache, and the UR journal.
• High write pending coupled with high journal activity are symptoms of a
bandwidth constraint.

ShadowImage copy pace=15.

HDS Confidential: For distribution only to authorized parties. Page 12-21


Monitoring and Troubleshooting
Enterprise and Modular Storage Similarities — RAID Group Utilization

Enterprise and Modular Storage Similarities — RAID Group


Utilization

 Array groups subject to similar issues as enterprise array groups


• Modular — 4+1 and 8+1 preferred RAID-5 sizes
• Enterprise offers 3+1, 7+1, and 6+2 sizes for RAID-5 and RAID-6

Enterprise reports Array Group Utilization or Array Group Busy %.

Modular reports HDU Utilization or Operating Rate %, and it needs to be viewed from both Controllers to decide the higher reported level for that HDU.

Page 12-22 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Enterprise and Modular Storage Similarities — Cache Write Pending

Enterprise and Modular Storage Similarities — Cache Write


Pending

 Modular PFM reports cache queue information, but write pending


limits must be avoided in both enterprise and modular storage
systems.

Write Pending per Partition is the key modular metric, but other lower limits can apply: Middle Queue 30%, Physical Queue 40%, per RAID group 25%. Write Pending % is the key enterprise metric.

Write Hit Rate % being less than 100% is another indication of high write pending.

HDS Confidential: For distribution only to authorized parties. Page 12-23


Monitoring and Troubleshooting
Example of Tray-Contained RAID Groups with AMS 2500 and HDP — Recommended

Example of Tray-Contained RAID Groups with AMS 2500 and


HDP — Recommended

(Diagram: an HDP Pool built from RAID groups #1 through #4, each RAID group contained within one disk tray; LU #1 is provisioned from the pool.)
• RG in one tray: 6D+1P (7 disks) is the magic number (or sensible variations).
• Best performance for a single LUN; all SAS ports are equally balanced.

Page 12-24 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Example of HDD Roamed RAID Groups with AMS 2500 — Not Recommended

Example of HDD Roamed RAID Groups with AMS 2500 — Not


Recommended

(Diagram: LU #1 on a RAID group whose HDDs are roamed across disk trays.)
• A 7D+1P RAID group spanning 4 disk trays will ensure back-end load distribution through RAID group striping.
• A 7D+1P RAID group spanning 8 disk trays will also work because tray Units #0 and #4 are linked, #1 and #5, etc.
• 8 disks per RAID group is the magic number (or other sensible variations).
• Best performance for a single LUN; all SAS ports are equally balanced.

HDS Confidential: For distribution only to authorized parties. Page 12-25


Monitoring and Troubleshooting
Tray-Contained and Roamed RAID Groups

Tray-Contained and Roamed RAID Groups

 A CPU Core manages I/O to all HDDs in a tray


• Easier for CPU Core to manage I/Os that overlap HDDs if they are in
same tray (also sequential processing)
 A few individual workloads perform better with HDDs roamed across
all trays
• Example: 1 busy LUN on 1 RAID group doing several streams of high
MB/sec sequential access
 Most busy systems with multiple workloads and active RAID groups
perform better when balanced across RGs that are Tray Contained
 Policy is to use easiest option that suits most workloads
• Doesn’t matter that some RGs overlap trays – small impact

Page 12-26 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Manual Path Management — LUN Ownership With No Internal Load Balancing

Manual Path Management — LUN Ownership With No Internal


Load Balancing

Manual control can be better than the automatic scheme for high MB/sec workloads because they do not tend to trigger CPU busy in the same way as small block random access.
• Disable the automatic internal Load Balancing function.
• Set the Ownership to the same Controller as host path access.

HDS Confidential: For distribution only to authorized parties. Page 12-27


Monitoring and Troubleshooting
Performance Related Service Offerings

Performance Related Service Offerings

 Storage Platform Assessment (SPA)


• Storage Assessment
• Storage Baseline
• Storage First Aid
 Storage ScoreCard Service
 Storage Deployment Consulting
 Tuning Manager Implementation
 Tuning Manager Remote Reporting Service

Page 12-28 HDS Confidential: For distribution only to authorized parties.


Monitoring and Troubleshooting
Module Summary

Module Summary

 Described which Hitachi product is appropriate to use given a


specific performance problem and environment
 Analyzed performance data, isolated bottlenecks and made recommendations to solve performance related problems
 Identified the Hitachi Data Systems Professional Services offering
related to performance assessment and troubleshooting

HDS Confidential: For distribution only to authorized parties. Page 12-29


Monitoring and Troubleshooting
Module Review

Module Review

1. List some possible causes of high write pending.


2. What metric should be checked first in case of troubleshooting any
performance issues?
3. Batch Workloads are more likely to benefit from random workloads.
(True/False)

Page 12-30 HDS Confidential: For distribution only to authorized parties.


Communicating in a
Virtual Classroom —
Tools and Features
Virtual Classroom Basics

Overview of Communicating in a Virtual Classroom

 Chat
 Q&A
 Feedback Options
• Raise Hand
• Yes/No
• Emoticons
 Markup Tools
• Drawing Tools
• Text Tool

HDS Confidential: For distribution only to authorized parties. Page V-1


Communicating in a Virtual Classroom — Tools and Features
Reminders: Intercall Call-Back Teleconferencing

Reminders: Intercall Call-Back Teleconferencing

Feedback Features — Try Them

(Feedback buttons: Raise Hand, Yes, No, Emoticons)

Markup Tools (Drawing and Text) — Try Them

(Annotation toolbar, left to right: Pointer, Text Tool, Writing/Drawing Tools, Highlighter, Annotation Colors, Eraser)

Transferring Your Audio to Virtual Breakout Rooms

 Automatic
• With Intercall / WebEx Teleconference Call-Back Feature
 Otherwise
• To transfer your audio from Main Room to virtual Breakout Room
1. Enter *9
2. You will hear a recording – follow instructions
3. Enter Your Assigned Breakout Room number #
 For example, *9 1# (Breakout Room #1)
• To return your audio to Main Room
 Enter *9

Intercall (WebEx) Technical Support

 800.374.1852

Simulated Labs

Simulated Labs Overview

 Labs provide
• A video demonstration
• A practice mode
• Online help – a detailed lab guide that steps through the lab practice.
 You may want to first watch the video demonstration.
 Then, learn while practicing.

Note:
After you have completed the course, if you have access to the Learning Center, the
course and Simulated Labs are recorded in your Learning History. To access the
Simulated Labs to refresh your learning or for additional practice:
1. Log on to the Learning Center > My Learning > All Learning Activity > My
Completed Courses.
2. Clear date fields, then select Search to view complete learning history.
3. Find the course and hover over Actions, click View Learning Assignments.
4. Launch the lab module.

Simulated Labs Overview

1. Log onto the Hitachi Data Systems Learning Center at: http://learningcenter.hds.com
2. Select the My Learning tab.

3. If a current course:
i. Locate the appropriate course and click View Sessions & Progress to
access the lab module.
(Screenshot: the course listing table shows Title, Delivery Type, Sessions, Start Date and Status, with View Sessions & Progress under Actions for the course.)
ii. Click Launch.
(Screenshot: the Learning Assignments table lists the Simulated Lab - <Course Code> content module as Required, with unlimited attempts allowed and a Launch action.)

WebEx Hands-On Labs

WebEx Hands-On Lab Operations

 From session, Instructor starts Hands-On remote lab


 Instructor assigns lab teams (lab teams assigned to a computer)
 Learners are prompted to connect to their lab computer
• Click Yes

 After connecting to the lab computer, learners see a message asking them to disconnect and connect to the new teleconference
• Click Yes
• You do not need to hang up and dial a new number; Intercall automatically connects you to the lab conference

WebEx Hands-On Lab Operations

 Instructor can join each lab team’s conference.


 Members of a lab group can communicate:
• With each other using CHAT (lower right-hand corner of the computer screen) and telephone
• With the Instructor using the Raise Hand feature


 Only one learner is in control of the lab desktop at any one time.
• To pass control, select learner name and click Presenter Ball



Your Next Steps

 Validate your knowledge and skills with certification
 Check your progress in the learning paths
 Collaborate and share with fellow HDS colleagues
 Register, enroll and view additional course offerings
 Get the latest course and Academy updates
 Review the course description for supplemental courses
 Check your personalized learning path
 Get practical advice and insight with HDS white papers

Learning Center:
http://learningcenter.hds.com
LinkedIn:
http://www.linkedin.com/groups?home=&gid=3044480&trk=anet_ug_hm&
goback=%2Emyg%2Eanb_3044480_*2
Twitter:
http://twitter.com/#!/HDSAcademy
White Papers:
http://www.hds.com/corporate/resources/
Certification:
http://www.hds.com/services/education/certification


Learning Paths:
APAC:
http://www.hds.com/services/education/apac/?_p=v#GlobalTabNavi

Americas:
http://www.hds.com/services/education/north-
america/?tab=LocationContent1#GlobalTabNavi

EMEA:
http://www.hds.com/services/education/emea/#GlobalTabNavi

theLoop:
http://loop.hds.com/index.jspa ― HDS internal only



Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

—A— AL — Arbitrated Loop. A network in which nodes


AaaS — Archive as a Service. A cloud computing contend to send data, and only 1 node at a
business model. time is able to send data.

ACC — Action Code. A SIM (System Information AL-PA — Arbitrated Loop Physical Address.
Message). AMS — Adaptable Modular Storage.
ACE — Access Control Entry. Stores access rights APAR — Authorized Program Analysis Reports.
for a single user or group within the APF — Authorized Program Facility. In IBM z/OS
Windows security model. and OS/390 environments, a facility that
ACL — Access Control List. Stores a set of ACEs, permits the identification of programs that
so that it describes the complete set of access are authorized to use restricted functions.
rights for a file system object within the API — Application Programming Interface.
Microsoft Windows security model.
APID — Application Identification. An ID to
ACP ― Array Control Processor. Microprocessor identify a command device.
mounted on the disk adapter circuit board
(DKA) that controls the drives in a specific Application Management — The processes that
disk array. Considered part of the back end; manage the capacity and performance of
it controls data transfer between cache and applications.
the hard drives. ARB — Arbitration or request.
ACP Domain ― Also Array Domain. All of the ARM — Automated Restart Manager.
array-groups controlled by the same pair of Array Domain — Also ACP Domain. All
DKA boards, or the HDDs managed by 1 functions, paths, and disk drives controlled
ACP PAIR (also called BED). by a single ACP pair. An array domain can
ACP PAIR ― Physical disk access control logic. contain a variety of LVI or LU
Each ACP consists of 2 DKA PCBs to configurations.
provide 8 loop paths to the real HDDs. Array Group — Also called a parity group. A
Actuator (arm) — Read/write heads are attached group of hard disk drives (HDDs) that form
to a single head actuator, or actuator arm, the basic unit of storage in a subsystem. All
that moves the heads around the platters. HDDs in a parity group must have the same
AD — Active Directory. physical capacity.

ADC — Accelerated Data Copy. Array Unit — A group of hard disk drives in 1
RAID structure. Same as parity group.
Address — A location of data, usually in main
memory or on a disk. A name or token that ASIC — Application specific integrated circuit.
identifies a network component. In local area ASSY — Assembly.
networks (LANs), for example, every node Asymmetric virtualization — See Out-of-band
has a unique address. virtualization.
ADP — Adapter. Asynchronous — An I/O operation whose
ADS — Active Directory Service. initiator does not await its completion before
AIX — IBM UNIX. proceeding with other work. Asynchronous
I/O operations enable an initiator to have



multiple concurrent I/O operations in BIOS — Basic Input/Output System. A chip
progress. Also called Out-of-band located on all computer motherboards that
virtualization. governs how a system boots and operates.
ATA —Advanced Technology Attachment. A disk BLKSIZE — Block size.
drive implementation that integrates the BLOB — Binary Large OBject.
controller on the disk drive itself. Also
known as IDE (Integrated Drive Electronics) BPaaS —Business Process as a Service. A cloud
Advanced Technology Attachment. computing business model.

Authentication — The process of identifying an BPAM — Basic Partitioned Access Method.


individual, usually based on a username and BPM — Business Process Management.
password. BPO — Business Process Outsourcing. Dynamic
AUX — Auxiliary Storage Manager. BPO services refer to the management of
Availability — Consistent direct access to partly standardized business processes,
information over time. including human resources delivered in a
pay-per-use billing relationship or a self-
-back to top-
service consumption model.
—B— BST — Binary Search Tree.
B4 — A group of 4 HDU boxes that are used to BSTP — Blade Server Test Program.
contain 128 HDDs.
BTU — British Thermal Unit.
Back end — In client/server applications, the
Business Continuity Plan — Describes how an
client part of the program is often called the
organization will resume partially or
front end and the server part is called the
completely interrupted critical functions
back end.
within a predetermined time after a
Backup image—Data saved during an archive disruption or a disaster. Sometimes also
operation. It includes all the associated files, called a Disaster Recovery Plan.
directories, and catalog information of the
-back to top-
backup operation.
BADM — Basic Direct Access Method. —C—
BASM — Basic Sequential Access Method. CA — (1) Continuous Access software (see
HORC), (2) Continuous Availability or (3)
BATCTR — Battery Control PCB.
Computer Associates.
BC — Business Class (in contrast with EC,
Cache — Cache Memory. Intermediate buffer
Enterprise Class).
between the channels and drives. It is
BCP — Base Control Program. generally available and controlled as two
BCPii — Base Control Program internal interface. areas of cache (cache A and cache B). It may
BDW — Block Descriptor Word. be battery-backed.

BED — Back end director. Controls the paths to Cache hit rate — When data is found in the cache,
the HDDs. it is called a cache hit, and the effectiveness
of a cache is judged by its hit rate.
Big Data — Refers to data that becomes so large in
size or quantity that a dataset becomes Cache partitioning — Storage management
awkward to work with using traditional software that allows the virtual partitioning
database management systems. Big data of cache and allocation of it to different
entails data capacity or measurement that applications.
requires terms such as Terabyte (TB), CAD — Computer-Aided Design.
Petabyte (PB), Exabyte (EB), Zettabyte (ZB) Capacity — Capacity is the amount of data that a
or Yottabyte (YB). Note that variations of storage system or drive can store after
this term are subject to proprietary configuration and/or formatting.
trademark disputes in multiple countries at
the present time.



Most data storage companies, including CFW — Cache Fast Write.
HDS, calculate capacity based on the CH — Channel.
premise that 1KB = 1,024 bytes, 1MB = 1,024
kilobytes, 1GB = 1,024 megabytes, and 1TB = CH S — Channel SCSI.
1,024 gigabytes. See also Terabyte (TB), CHA — Channel Adapter. Provides the channel
Petabyte (PB), Exabyte (EB), Zettabyte (ZB) interface control functions and internal cache
and Yottabyte (YB). data transfer functions. It is used to convert
CAPEX — Capital expenditure — the cost of the data format between CKD and FBA. The
developing or providing non-consumable CHA contains an internal processor and 128
parts for the product or system. For example, bytes of edit buffer memory.
the purchase of a photocopier is the CAPEX, CHA/DKA — Channel Adapter/Disk Adapter.
and the annual paper and toner cost is the CHAP — Challenge-Handshake Authentication
OPEX. (See OPEX). Protocol.
CAS — Column address strobe is a signal sent to a Chargeback — A cloud computing term that refers
dynamic random access memory (DRAM) to the ability to report on capacity and
that tells it that an associated address is a utilization by application or dataset,
column address. CAS-column address strobe charging business users or departments
sent by the processor to a DRAM circuit to based on how much they use.
activate a column address.
CHF — Channel Fibre.
CBI — Cloud-based Integration. Provisioning of a
standardized middleware platform in the CHIP — Client-Host Interface Processor.
cloud that can be used for various cloud Microprocessors on the CHA boards that
integration scenarios. process the channel commands from the
hosts and manage host access to cache.
An example would be the integration of
legacy applications into the cloud or CHK — Check.
integration of different cloud-based CHN — Channel adapter NAS.
applications into one application. CHP — Channel Processor or Channel Path.
CBU — Capacity Backup. CHPID — Channel Path Identifier.
CCHH — Common designation for Cylinder and CHSN or C-HSN— Cache Memory Hierarchical
Head. Star Network.
CCI — Command Control Interface. CHT — Channel tachyon. A Fibre Channel
CCIF — Cloud Computing Interoperability protocol controller.
Forum. A standards organization active in CICS — Customer Information Control System.
cloud computing.
CIFS protocol — Common internet file system is a
CDP — Continuous Data Protection. platform-independent file sharing system. A
CDR — Clinical Data Repository network file system accesses protocol
CDWP — Cumulative disk write throughput. primarily used by Windows clients to
communicate file access requests to
CE — Customer Engineer. Windows servers.
CEC — Central Electronics Complex. CIM — Common Information Model.
CentOS — Community Enterprise Operating CIS — Clinical Information System.
System.
CKD ― Count-key Data. A format for encoding
Centralized management — Storage data data on hard disk drives; typically used in
management, capacity management, access the mainframe environment.
security management, and path
management functions accomplished by CKPT — Check Point.
software. CL — See Cluster.
CF — Coupling Facility. CLI — Command Line Interface.
CFCC — Coupling Facility Control Code.



CLPR — Cache Logical Partition. Cache can be Cloud Fundamental —A core requirement to the
divided into multiple virtual cache deployment of cloud computing. Cloud
memories to lessen I/O contention. fundamentals include:
Cloud Computing — “Cloud computing refers to • Self service
applications and services that run on a • Pay per use
distributed network using virtualized
resources and accessed by common Internet • Dynamic scale up and scale down
protocols and networking standards. It is Cloud Security Alliance — A standards
distinguished by the notion that resources are organization active in cloud computing.
virtual and limitless, and that details of the Cluster — A collection of computers that are
physical systems on which software runs are interconnected (typically at high-speeds) for
abstracted from the user.” — Source: Cloud the purpose of improving reliability,
Computing Bible, Barrie Sosinsky (2011) availability, serviceability or performance
Cloud computing often entails an “as a (via load balancing). Often, clustered
service” business model that may entail one computers have access to a common pool of
or more of the following: storage and run special software to
• Archive as a Service (AaaS) coordinate the component computers'
activities.
• Business Process as a Service (BPaas)
CM ― Cache Memory, Cache Memory Module.
• Failure as a Service (FaaS)
Intermediate buffer between the channels
• Infrastructure as a Service (IaaS) and drives. It has a maximum of 64GB (32GB
• IT as a Service (ITaaS) x 2 areas) of capacity. It is available and
• Platform as a Service (PaaS) controlled as 2 areas of cache (cache A and
cache B). It is fully battery-backed (48 hours).
• Private File Tiering as a Service (PFTaas)
CM DIR — Cache Memory Directory.
• Software as a Service (Saas)
CM-HSN — Control Memory Hierarchical Star
• SharePoint as a Service (SPaas)
Network.
• SPI refers to the Software, Platform and
CM PATH ― Cache Memory Access Path. Access
Infrastructure as a Service business model.
Path from the processors of CHA, DKA PCB
Cloud network types include the following: to Cache Memory.
• Community cloud (or community CM PK — Cache Memory Package.
network cloud)
CM/SM — Cache Memory/Shared Memory.
• Hybrid cloud (or hybrid network cloud)
CMA — Cache Memory Adapter.
• Private cloud (or private network cloud)
CMD — Command.
• Public cloud (or public network cloud)
CMG — Cache Memory Group.
• Virtual private cloud (or virtual private
network cloud) CNAME — Canonical NAME.
Cloud Enabler —a concept, product or solution CNS — Cluster Name Space or Clustered Name
that enables the deployment of cloud Space.
computing. Key cloud enablers include: CNT — Cumulative network throughput.
• Data discoverability CoD — Capacity on Demand.
• Data mobility Community Network Cloud — Infrastructure
• Data protection shared between several organizations or
• Dynamic provisioning groups with common concerns.
• Location independence Concatenation — A logical joining of 2 series of
data, usually represented by the symbol “|”.
• Multitenancy to ensure secure privacy
In data communications, 2 or more data are
• Virtualization often concatenated to provide a unique
name or reference (e.g., S_ID | X_ID).



Volume managers concatenate disk address cluster is provided with the 2 CSWs, and
spaces to present a single larger address each CSW can connect 4 caches. The CSW
spaces. switches any of the cache paths to which the
Connectivity technology — A program or device's channel adapter or disk adapter is to be
ability to link with other programs and connected through arbitration.
devices. Connectivity technology allows CTG — Consistency Group.
programs on a given computer to run
CTN — Coordinated Timing Network.
routines or access objects on another remote
computer. CU — Control Unit (refers to a storage subsystem.
The hexadecimal number to which 256
Controller — A device that controls the transfer of
LDEVs may be assigned.
data from a computer to a peripheral device
(including a storage system) and vice versa. CUDG — Control Unit Diagnostics. Internal
system tests.
Controller-based virtualization — Driven by the
physical controller at the hardware CUoD — Capacity Upgrade on Demand.
microcode level versus at the application CV — Custom Volume.
software layer and integrates into the
CVS ― Customizable Volume Size. Software used
infrastructure to allow virtualization across
to create custom volume sizes. Marketed
heterogeneous storage and third party
under the name Virtual LVI (VLVI) and
products.
Virtual LUN (VLUN).
Corporate governance — Organizational
compliance with government-mandated CWDM — Course Wavelength Division
regulations. Multiplexing.

CP — Central Processor (also called Processing CXRC — Coupled z/OS Global Mirror.
Unit or PU). -back to top-

CPC — Central Processor Complex. —D—


CPM — Cache Partition Manager. Allows for DA — Device Adapter.
partitioning of the cache and assigns a
DACL — Discretionary access control list (ACL).
partition to a LU; this enables tuning of the
The part of a security descriptor that stores
system’s performance.
access rights for users and groups.
CPOE — Computerized Physician Order Entry
DAD — Device Address Domain. Indicates a site
(Provider Ordered Entry).
of the same device number automation
CPS — Cache Port Slave. support function. If several hosts on the
CPU — Central Processing Unit. same site have the same device number
CRM — Customer Relationship Management. system, they have the same name.
CSS — Channel Subsystem. DAS — Direct Attached Storage.
CS&S — Customer Service and Support. DASD — Direct Access Storage Device.
CSTOR — Central Storage or Processor Main Data block — A fixed-size unit of data that is
Memory. transferred together. For example, the
C-Suite — The C-suite is considered the most X-modem protocol transfers blocks of 128
important and influential group of bytes. In general, the larger the block size,
individuals at a company. Referred to as the faster the data transfer rate.
“the C-Suite within a Healthcare provider.” Data Duplication — Software duplicates data, as
CSV — Comma Separated Value or Cluster Shared in remote copy or PiT snapshots. Maintains 2
Volume. copies of data.
CSW ― Cache Switch PCB. The cache switch Data Integrity — Assurance that information will
(CSW) connects the channel adapter or disk be protected from modification and
adapter to the cache. Each of them is corruption.
connected to the cache by the Cache Memory
Hierarchical Star Net (C-HSN) method. Each



Data Lifecycle Management — An approach to Device Management — Processes that configure
information and storage management. The and manage storage systems.
policies, processes, practices, services and DFS — Microsoft Distributed File System.
tools used to align the business value of data
with the most appropriate and cost-effective DFSMS — Data Facility Storage Management
storage infrastructure from the time data is Subsystem.
created through its final disposition. Data is DFSM SDM — Data Facility Storage Management
aligned with business requirements through Subsystem System Data Mover.
management policies and service levels DFSMSdfp — Data Facility Storage Management
associated with performance, availability, Subsystem Data Facility Product.
recoverability, cost, and what ever
parameters the organization defines as DFSMSdss — Data Facility Storage Management
critical to its operations. Subsystem Data Set Services.

Data Migration — The process of moving data DFSMShsm — Data Facility Storage Management
from 1 storage device to another. In this Subsystem Hierarchical Storage Manager.
context, data migration is the same as DFSMSrmm — Data Facility Storage Management
Hierarchical Storage Management (HSM). Subsystem Removable Media Manager.
Data Pipe or Data Stream — The connection set up DFSMStvs — Data Facility Storage Management
between the MediaAgent, source or Subsystem Transactional VSAM Services.
destination server is called a Data Pipe or DFW — DASD Fast Write.
more commonly a Data Stream.
DICOM — Digital Imaging and Communications
Data Pool — A volume containing differential in Medicine.
data only.
DIMM — Dual In-line Memory Module.
Data Protection Directive — A major compliance
Direct Access Storage Device (DASD) — A type of
and privacy protection initiative within the
storage device, in which bits of data are
European Union (EU) that applies to cloud
stored at precise locations, enabling the
computing. Includes the Safe Harbor
computer to retrieve information directly
Agreement.
without having to scan a series of records.
Data Stream — CommVault’s patented high
Direct Attached Storage (DAS) — Storage that is
performance data mover used to move data
directly attached to the application or file
back and forth between a data source and a
server. No other device on the network can
MediaAgent or between 2 MediaAgents.
access the stored data.
Data Striping — Disk array data mapping
Director class switches — Larger switches often
technique in which fixed-length sequences of
used as the core of large switched fabrics.
virtual disk data addresses are mapped to
sequences of member disk addresses in a Disaster Recovery Plan (DRP) — A plan that
regular rotating pattern. describes how an organization will deal with
potential disasters. It may include the
Data Transfer Rate (DTR) — The speed at which
precautions taken to either maintain or
data can be transferred. Measured in
quickly resume mission-critical functions.
kilobytes per second for a CD-ROM drive, in
Sometimes also referred to as a Business
bits per second for a modem, and in
Continuity Plan.
megabytes per second for a hard drive. Also,
often called data rate. Disk Administrator — An administrative tool that
displays the actual LU storage configuration.
DBMS — Data Base Management System.
Disk Array — A linked group of 1 or more
DCA ― Data Cache Adapter.
physical independent hard disk drives
DDL — Database Definition Language. generally used to replace larger, single disk
DDM — Disk Drive Module. drive systems. The most common disk
DDNS — Dynamic DNS. arrays are in daisy chain configuration or
implement RAID (Redundant Array of
DE — Data Exchange Software. Independent Disks) technology.



A disk array may contain several disk drive DR — Disaster Recovery.
trays, and is structured to improve speed DRAC — Dell Remote Access Controller.
and increase protection against loss of data.
Disk arrays organize their data storage into DRAM — Dynamic random access memory.
Logical Units (LUs), which appear as linear DRP — Disaster Recovery Plan.
block paces to their clients. A small disk DRR — Data Recover and Reconstruct. Data Parity
array, with a few disks, might support up to Generator chip on DKA.
8 LUs; a large one, with hundreds of disk
drives, can support thousands. DRV — Dynamic Reallocation Volume.

DKA ― Disk Adapter. Also called an array control DSB — Dynamic Super Block.
processor (ACP); it provides the control DSF — Device Support Facility.
functions for data transfer between drives DSF INIT — Device Support Facility Initialization
and cache. The DKA contains DRR (Data (for DASD).
Recover and Reconstruct), a parity generator
DSP — Disk Slave Program.
circuit.
DTA —Data adapter and path to cache-switches.
DKC ― Disk Controller Unit. In a multi-frame
configuration, the frame that contains the DTR — Data Transfer Rate.
front end (control and memory DVE — Dynamic Volume Expansion.
components). DW — Duplex Write.
DKCMN ― Disk Controller Monitor. Monitors DWDM — Dense Wavelength Division
temperature and power status throughout Multiplexing.
the machine.
DWL — Duplex Write Line or Dynamic
DKF ― Fibre disk adapter. Another term for a Workspace Linking.
DKA.
-back to top-
DKU — Disk Array Frame or Disk Unit. In a
multi-frame configuration, a frame that
—E—
contains hard disk units (HDUs). EAV — Extended Address Volume.
DKUPS — Disk Unit Power Supply. EB — Exabyte.
DLIBs — Distribution Libraries. EC — Enterprise Class (in contrast with BC,
Business Class).
DKUP — Disk Unit Power Supply.
ECC — Error Checking and Correction.
DLM — Data Lifecycle Management.
ECC.DDR SDRAM — Error Correction Code
DMA — Direct Memory Access.
Double Data Rate Synchronous Dynamic
DM-LU — Differential Management Logical Unit. RAM Memory.
DM-LU is used for saving management ECM — Extended Control Memory.
information of the copy functions in the
cache. ECN — Engineering Change Notice.

DMP — Disk Master Program. E-COPY — Serverless or LAN free backup.


EFI — Extensible Firmware Interface. EFI is a
DMTF — Distributed Management Task Force. A
specification that defines a software interface
standards organization active in cloud
between an operating system and platform
computing.
firmware. EFI runs on top of BIOS when a
DNS — Domain Name System. LPAR is activated.
DOC — Deal Operations Center. EHR — Electronic Health Record.
Domain — A number of related storage array EIG — Enterprise Information Governance.
groups.
EMIF — ESCON Multiple Image Facility.
DOO — Degraded Operations Objective.
EMPI — Electronic Master Patient Identifier. Also
DP — Dynamic Provisioning (pool). known as MPI.
DP-VOL — Dynamic Provisioning Virtual Volume. EMR — Electronic Medical Record.



ENC — Enclosure or Enclosure Controller. The Failback — The restoration of a failed system
units that connect the controllers with the share of a load to a replacement component.
Fibre Channel disks. They also allow for For example, when a failed controller in a
online extending a system by adding RKAs. redundant configuration is replaced, the
EOF — End of Field. devices that were originally controlled by
the failed controller are usually failed back
EOL — End of Life.
to the replacement controller to restore the
EPO — Emergency Power Off. I/O balance, and to restore failure tolerance.
EREP — Error REPorting and Printing. Similarly, when a defective fan or power
ERP — Enterprise Resource Management. supply is replaced, its load, previously borne
by a redundant component, can be failed
ESA — Enterprise Systems Architecture.
back to the replacement part.
ESB — Enterprise Service Bus.
Failed over — A mode of operation for failure-
ESC — Error Source Code. tolerant systems in which a component has
ESCD — ESCON Director. failed and its function has been assumed by
ESCON ― Enterprise Systems Connection. An a redundant component. A system that
input/output (I/O) interface for mainframe protects against single failures operating in
computer connections to storage devices failed over mode is not failure tolerant, as
developed by IBM. failure of the redundant component may
render the system unable to function. Some
ESD — Enterprise Systems Division.
systems (e.g., clusters) are able to tolerate
ESDS — Entry Sequence Data Set. more than 1 failure; these remain failure
ESS — Enterprise Storage Server. tolerant until no redundant component is
ESW — Express Switch or E Switch. Also referred available to protect against further failures.
to as the Grid Switch (GSW). Failover — A backup operation that automatically
Ethernet — A local area network (LAN) switches to a standby database server or
architecture that supports clients and servers network if the primary system fails, or is
and uses twisted pair cables for connectivity. temporarily shut down for servicing. Failover
ETR — External Time Reference (device). is an important fault tolerance function of
mission-critical systems that rely on constant
EVS — Enterprise Virtual Server. accessibility. Also called path failover.
Exabyte (EB) — A measurement of data or data Failure tolerance — The ability of a system to
storage. 1EB = 1,024PB. continue to perform its function or at a
EXCP — Execute Channel Program. reduced performance level, when 1 or more
ExSA — Extended Serial Adapter. of its components has failed. Failure
tolerance in disk subsystems is often
-back to top-
achieved by including redundant instances
—F— of components whose failure would make
the system inoperable, coupled with facilities
FaaS — Failure as a Service. A proposed business
that allow the redundant components to
model for cloud computing in which large-
assume the function of failed ones.
scale, online failure drills are provided as a
service in order to test real cloud FAIS — Fabric Application Interface Standard.
deployments. Concept developed by the FAL — File Access Library.
College of Engineering at the University of FAT — File Allocation Table.
California, Berkeley in 2011.
Fault Tolerant — Describes a computer system or
Fabric — The hardware that connects component designed so that, in the event of a
workstations and servers to storage devices component failure, a backup component or
in a SAN is referred to as a "fabric." The SAN procedure can immediately take its place with
fabric enables any-server-to-any-storage no loss of service. Fault tolerance can be
device connectivity through the use of Fibre provided with software, embedded in
Channel switching technology. hardware or provided by hybrid combination.



FBA — Fixed-block Architecture. Physical disk relies on TCP/IP services to establish
sector mapping. connectivity between remote SANs over
FBA/CKD Conversion — The process of LANs, MANs, or WANs. An advantage of
converting open-system data in FBA format FCIP is that it can use TCP/IP as the
to mainframe data in CKD format. transport while keeping Fibre Channel fabric
FBUS — Fast I/O Bus. services intact.
FC ― Fibre Channel or Field-Change (microcode FCP — Fibre Channel Protocol.
update) or Fibre Channel. A technology for FC-P2P — Fibre Channel Point-to-Point.
transmitting data between computer devices; FCSE — Flashcopy Space Efficiency.
a set of standards for a serial I/O bus FC-SW — Fibre Channel Switched.
capable of transferring data between 2 ports. FCU— File Conversion Utility.
FC RKAJ — Fibre Channel Rack Additional. FD — Floppy Disk or Floppy Drive.
Module system acronym refers to an
FDR — Fast Dump/Restore.
additional rack unit that houses additional
hard drives exceeding the capacity of the FE — Field Engineer.
core RK unit. FED — (Channel) Front End Director.
FC-0 ― Lowest layer on fibre channel transport. Fibre Channel — A serial data transfer
This layer represents the physical media. architecture developed by a consortium of
FC-1 ― This layer contains the 8b/10b encoding computer and mass storage device
scheme. manufacturers and now being standardized
by ANSI. The most prominent Fibre Channel
FC-2 ― This layer handles framing and protocol,
standard is Fibre Channel Arbitrated Loop
frame format, sequence/exchange
(FC-AL).
management and ordered set usage.
FICON — Fiber Connectivity. A high-speed
FC-3 ― This layer contains common services used
input/output (I/O) interface for mainframe
by multiple N_Ports in a node.
computer connections to storage devices. As
FC-4 ― This layer handles standards and profiles part of IBM's S/390 server, FICON channels
for mapping upper level protocols like SCSI increase I/O capacity through the
an IP onto the Fibre Channel Protocol. combination of a new architecture and faster
FCA ― Fibre Adapter. Fibre interface card. physical link rates to make them up to 8
Controls transmission of fibre packets. times as efficient as ESCON (Enterprise
FC-AL — Fibre Channel Arbitrated Loop. A serial System Connection), IBM's previous fiber
data transfer architecture developed by a optic channel standard.
consortium of computer and mass storage FIPP — Fair Information Practice Principles.
device manufacturers, and is now being Guidelines for the collection and use of
standardized by ANSI. FC-AL was designed personal information created by the United
for new mass storage devices and other States Federal Trade Commission (FTC).
peripheral devices that require very high FISMA — Federal Information Security
bandwidth. Using optical fiber to connect Management Act of 2002. A major
devices, FC-AL supports full-duplex data compliance and privacy protection law that
transfer rates of 100MBps. FC-AL is applies to information systems and cloud
compatible with SCSI for high-performance computing. Enacted in the United States of
storage systems. America in 2002.
FCC — Federal Communications Commission.
FLGFAN ― Front Logic Box Fan Assembly.
FCIP — Fibre Channel over IP, a network storage
technology that combines the features of FLOGIC Box ― Front Logic Box.
Fibre Channel and the Internet Protocol (IP) FM — Flash Memory. Each microprocessor has
to connect distributed SANs over large FM. FM is non-volatile memory that contains
distances. FCIP is considered a tunneling microcode.
protocol, as it makes a transparent point-to- FOP — Fibre Optic Processor or fibre open.
point connection between geographically
separated SANs over IP networks. FCIP



FPC — Failure Parts Code or Fibre Channel Global Cache — Cache memory is used on demand
Protocol Chip. by multiple applications. Use changes
FPGA — Field Programmable Gate Array. dynamically, as required for READ
performance between hosts/applications/LUs.
Frames — An ordered vector of words that is the
basic unit of data transmission in a Fibre GPFS — General Parallel File System.
Channel network. GSC — Global Support Center.
Front end — In client/server applications, the GSS — Global Solutions Services.
client part of the program is often called the GSSD — Global Solutions Strategy and
front end and the server part is called the Development.
back end.
GSW — Grid Switch Adapter. Also known as E
FRU — Field Replaceable Unit. Switch (Express Switch).
FS — File System. GUI — Graphical User Interface.
FSA — File System Module-A. GUID — Globally Unique Identifier.
FSB — File System Module-B. -back to top-
FSM — File System Module. —H—
FSW ― Fibre Channel Interface Switch PCB. A H1F — Essentially the floor-mounted disk rack
board that provides the physical interface (also called desk side) equivalent of the RK.
(cable connectors) between the ACP ports (See also: RK, RKA, and H2F).
and the disks housed in a given disk drive.
H2F — Essentially the floor-mounted disk rack
FTP ― File Transfer Protocol. A client-server (also called desk side) add-on equivalent
protocol that allows a user on 1 computer to similar to the RKA. There is a limitation of
transfer files to and from another computer only 1 H2F that can be added to the core RK
over a TCP/IP network. Floor Mounted unit. See also: RK, RKA, and
FWD — Fast Write Differential. H1F.
-back to top- HA — High Availability.
—G— HANA — High Performance Analytic Appliance,
GARD — General Available Restricted a database appliance technology proprietary
Distribution. to SAP.
Gb — Gigabit. HBA — Host Bus Adapter — An I/O adapter that
sits between the host computer's bus and the
GB — Gigabyte.
Fibre Channel loop and manages the transfer
Gb/sec — Gigabit per second. of information between the 2 channels. In
GB/sec — Gigabyte per second. order to minimize the impact on host
processor performance, the host bus adapter
GbE — Gigabit Ethernet.
performs many low-level interface functions
Gbps — Gigabit per second. automatically or with minimal processor
GBps — Gigabyte per second. involvement.
GBIC — Gigabit Interface Converter. HCA — Host Channel Adapter.
GDG — Generation Data Group. HCD — Hardware Configuration Definition.
GDPS — Geographically Dispersed Parallel HD — Hard Disk.
Sysplex. HDA — Head Disk Assembly.
GID — Group Identifier within the UNIX security HDD ― Hard Disk Drive. A spindle of hard disk
model. platters that make up a hard drive, which is
gigE — Gigabit Ethernet. a unit of physical storage within a
subsystem.
GLM — Gigabyte Link Module.
HDDPWR — Hard Disk Drive Power.



HDU ― Hard Disk Unit. A number of hard drives HTTP — Hyper Text Transfer Protocol.
(HDDs) grouped together within a HTTPS — Hyper Text Transfer Protocol Secure.
subsystem.
Hub — A common connection point for devices in
Head — See read/write head. a network. Hubs are commonly used to
Heterogeneous — The characteristic of containing connect segments of a LAN. A hub contains
dissimilar elements. A common use of this multiple ports. When a packet arrives at 1
word in information technology is to port, it is copied to the other ports so that all
describe a product as able to contain or be segments of the LAN can see all packets. A
part of a “heterogeneous network," switching hub actually reads the destination
consisting of different manufacturers' address of each packet and then forwards
products that can interoperate. the packet to the correct port. Device to
which nodes on a multi-point bus or loop are
Heterogeneous networks are made possible by
physically connected.
standards-conforming hardware and
software interfaces used in common by Hybrid Cloud — “Hybrid cloud computing refers
different products, thus allowing them to to the combination of external public cloud
communicate with each other. The Internet computing services and internal resources
itself is an example of a heterogeneous (either a private cloud or traditional
network. infrastructure, operations and applications)
in a coordinated fashion to assemble a
HIPAA — Health Insurance Portability and
particular solution.” — Source: Gartner
Accountability Act.
Research.
HIS — (1) High Speed Interconnect. (2) Hospital Hybrid Network Cloud — A composition of 2 or
Information System (clinical and financial). more clouds (private, community or public).
HiStar — Multiple point-to-point data paths to Each cloud remains a unique entity but they
cache. are bound together. A hybrid network cloud
HL7 — Health Level 7. includes an interconnection.
HLQ — High-level Qualifier. Hypervisor — Also called a virtual machine
manager, a hypervisor is a hardware
HLU — Host Logical Unit. virtualization technique that enables
H-LUN — Host Logical Unit Number. See LUN. multiple operating systems to run
HMC — Hardware Management Console. concurrently on the same computer.
Hypervisors are often installed on server
Homogeneous — Of the same or similar kind. hardware then run the guest operating
Host — Also called a server. Basically a central systems that act as servers.
computer that processes end-user Hypervisor can also refer to the interface
applications or requests. that is provided by Infrastructure as a Service
Host LU — Host Logical Unit. See also HLU. (IaaS) in cloud computing.
Host Storage Domains — Allows host pooling at Leading hypervisors include VMware
the LUN level and the priority access feature vSphere Hypervisor™ (ESXi), Microsoft®
lets administrator set service levels for Hyper-V and the Xen® hypervisor.
applications. -back to top-
HP — (1) Hewlett-Packard Company or (2) High
Performance.
—I—
I/F — Interface.
HPC — High Performance Computing.
HSA — Hardware System Area. I/O — Input/Output. Term used to describe any
program, operation, or device that transfers
HSG — Host Security Group. data to or from a computer and to or from a
HSM — Hierarchical Storage Management (see peripheral device.
Data Migrator).
IaaS —Infrastructure as a Service. A cloud
HSN — Hierarchical Star Network. computing business model — delivering
HSSDC — High Speed Serial Data Connector. computer infrastructure, typically a platform



virtualization environment, as a service, along the same connection path. Also called
along with raw (block) storage and symmetric virtualization.
networking. Rather than purchasing servers, Interface —The physical and logical arrangement
software, data center space or network supporting the attachment of any device to a
equipment, clients buy those resources as a connector or to another device.
fully outsourced service. Providers typically
bill such services on a utility computing Internal bus — Another name for an internal data
basis; the amount of resources consumed bus. Also, an expansion bus is often referred
(and therefore the cost) will typically reflect to as an internal bus.
the level of activity. Internal data bus — A bus that operates only
IDE — Integrated Drive Electronics Advanced within the internal circuitry of the CPU,
Technology. A standard designed to connect communicating among the internal caches of
hard and removable disk drives. memory that are part of the CPU chip’s
design. This bus is typically rather quick and
IDN — Integrated Delivery Network. is independent of the rest of the computer’s
iFCP — Internet Fibre Channel Protocol. operations.
Index Cache — Provides quick access to indexed IOCDS — I/O Control Data Set.
data on the media during a browse\restore IODF — I/O Definition file.
operation.
IOPH — I/O per hour.
IBR — Incremental Block-level Replication or
Intelligent Block Replication. IOS — I/O Supervisor.

ICB — Integrated Cluster Bus. IOSQ — Input/Output Subsystem Queue.

ICF — Integrated Coupling Facility. IP — Internet Protocol.

ID — Identifier. IPL — Initial Program Load.

IDR — Incremental Data Replication. IPSEC — IP security.

iFCP — Internet Fibre Channel Protocol. Allows ISC — Initial shipping condition or Inter-System
an organization to extend Fibre Channel Communication.
storage networks over the Internet by using iSCSI — Internet SCSI. Pronounced eye skuzzy.
TCP/IP. TCP is responsible for managing An IP-based standard for linking data
congestion control as well as error detection storage devices over a network and
and recovery services. transferring data by carrying SCSI
commands over IP networks.
iFCP allows an organization to create an IP SAN
fabric that minimizes the Fibre Channel ISE — Integrated Scripting Environment.
fabric component and maximizes use of the iSER — iSCSI Extensions for RDMA.
company's TCP/IP infrastructure. ISL — Inter-Switch Link.
IFL — Integrated Facility for LINUX. iSNS — Internet Storage Name Service.
IHE — Integrating the Healthcare Enterprise. ISOE — iSCSI Offload Engine.
IID — Initiator ID. ISP — Internet service provider.
IIS — Internet Information Server. ISPF — Interactive System Productivity Facility.
ILM — Information Life Cycle Management. ISPF/PDF — Interactive System Productivity
Facility/Program Development Facility.
ILO — (Hewlett-Packard) Integrated Lights-Out.
ISV — Independent Software Vendor.
IML — Initial Microprogram Load.
ITaaS — IT as a Service. A cloud computing
IMS — Information Management System. business model. This general model is an
In-band virtualization — Refers to the location of umbrella model that entails the SPI business
the storage network path, between the model (SaaS, PaaS and IaaS — Software,
application host servers in the storage Platform and Infrastructure as a Service).
systems. Provides both control and data -back to top-



—J— LCP — Link Control Processor. Controls the
optical links. LCP is located in the LCM.
Java — A widely accepted, open systems
programming language. Hitachi’s enterprise LCSS — Logical Channel Subsystems.
software products are all accessed using Java LCU — Logical Control Unit.
applications. This enables storage LD — Logical Device.
administrators to access the Hitachi
enterprise software products from any PC or LDAP — Lightweight Directory Access Protocol.
workstation that runs a supported thin-client LDEV ― Logical Device or Logical Device
internet browser application and that has (number). A set of physical disk partitions
TCP/IP network access to the computer on (all or portions of 1 or more disks) that are
which the software product runs. combined so that the subsystem sees and
Java VM — Java Virtual Machine. treats them as a single area of data storage.
Also called a volume. An LDEV has a
JBOD — Just a Bunch of Disks. specific and unique address within a
JCL — Job Control Language. subsystem. LDEVs become LUNs to an
JMP —Jumper. Option setting method. open-systems host.

JMS — Java Message Service. LDKC — Logical Disk Controller or Logical Disk
Controller Manual.
JNL — Journal.
LDM — Logical Disk Manager.
JNLG — Journal Group.
LDS — Linear Data Set.
JRE —Java Runtime Environment.
LED — Light Emitting Diode.
JVM — Java Virtual Machine.
LFF — Large Form Factor.
J-VOL — Journal Volume.
LIC — Licensed Internal Code.
-back to top-
LIS — Laboratory Information Systems.
LLQ — Lowest Level Qualifier.
—K— LM — Local Memory.
KSDS — Key Sequence Data Set.
LMODs — Load Modules.
kVA— Kilovolt Ampere.
LNKLST — Link List.
KVM — Kernel-based Virtual Machine or
Load balancing — The process of distributing
Keyboard-Video Display-Mouse.
processing and communications activity
kW — Kilowatt. evenly across a computer network so that no
-back to top- single device is overwhelmed. Load
balancing is especially important for
—L— networks where it is difficult to predict the
LACP — Link Aggregation Control Protocol. number of requests that will be issued to a
LAG — Link Aggregation Groups. server. If 1 server starts to be swamped,
requests are forwarded to another server
LAN — Local Area Network. A communications with more capacity. Load balancing can also
network that serves clients within a refer to the communications channels
geographical area, such as a building. themselves.
LBA — Logical block address. A 28-bit value that LOC — “Locations” section of the Maintenance
maps to a specific cylinder-head-sector Manual.
address on the disk.
Logical DKC (LDKC) — Logical Disk Controller
LC — Lucent connector. Fibre Channel connector Manual. An internal architecture extension
that is smaller than a simplex connector (SC). to the Control Unit addressing scheme that
LCDG — Link Processor Control Diagnostics. allows more LDEVs to be identified within 1
LCM — Link Control Module. Hitachi enterprise storage system.



Longitudinal record —Patient information from Mb — Megabit.
birth to death. MB — Megabyte.
LPAR — Logical Partition (mode). MBA — Memory Bus Adaptor.
LR — Local Router. MBUS — Multi-CPU Bus.
LRECL — Logical Record Length. MC — Multi Cabinet.
LRP — Local Router Processor. MCU — Main Disk Control Unit. The local CU of
LRU — Least Recently Used. a remote copy pair. Main or Master Control
LSS — Logical Storage Subsystem (equivalent to Unit.
LCU). MCU — Master Control Unit.
LU — Logical Unit. Mapping number of an LDEV. MediaAgent — The workhorse for all data
LUN ― Logical Unit Number. 1 or more LDEVs. movement. MediaAgent facilitates the
Used only for open systems. transfer of data between the data source, the
client computer, and the destination storage
LUSE ― Logical Unit Size Expansion. Feature used media.
to create virtual LUs that are up to 36 times
larger than the standard OPEN-x LUs. Metadata — In database management systems,
data files are the files that store the database
LVDS — Low Voltage Differential Signal information; whereas other files, such as
LVI — Logical Volume Image. Identifies a similar index files and data dictionaries, store
concept (as LUN) in the mainframe administrative information, known as
environment. metadata.
LVM — Logical Volume Manager. MFC — Main Failure Code.
-back to top- MG — Module Group. 2 (DIMM) cache memory
modules that work together.
—M—
MGC — (3-Site) Metro/Global Mirror.
MAC — Media Access Control. A MAC address is
a unique identifier attached to most forms of MIB — Management Information Base. A database
networking equipment. of objects that can be monitored by a
network management system. Both SNMP
MAID — Massive array of disks. and RMON use standardized MIB formats
MAN — Metropolitan Area Network. A that allow any SNMP and RMON tools to
communications network that generally monitor any device defined by a MIB.
covers a city or suburb. MAN is very similar Microcode — The lowest-level instructions that
to a LAN except it spans across a directly control a microprocessor. A single
geographical region such as a state. Instead machine-language instruction typically
of the workstations in a LAN, the translates into several microcode
workstations in a MAN could depict instructions.
different cities in a state. For example, the
Fortan Pascal C
state of Texas could have: Dallas, Austin, San
High-level Language
Antonio. The city could be a separate LAN
and all the cities connected together via a Assembly Language
switch. This topology would indicate a Machine Language
MAN. Hardware
MAPI — Management Application Programming Microprogram — See Microcode.
Interface.
MIF — Multiple Image Facility.
Mapping — Conversion between 2 data
Mirror Cache OFF — Increases cache efficiency
addressing spaces. For example, mapping
over cache data redundancy.
refers to the conversion between physical
disk block addresses and the block addresses M-JNL — Primary journal volumes.
of the virtual disks presented to operating MM — Maintenance Manual.
environments by control software.



MMC — Microsoft Management Console. network that uses the same protocols as a
Mode — The state or setting of a program or standard network. See also cloud computing.
device. The term mode implies a choice, NFS protocol — Network File System is a protocol
which is that you can change the setting and that allows a computer to access files over a
put the system in a different mode. network as easily as if they were on its local
MP — Microprocessor. disks.

MPA — Microprocessor adapter. NIM — Network Interface Module.

MPI — (Electronic) Master Patient Identifier. Also NIS — Network Information Service (originally
known as EMPI. called the Yellow Pages or YP).

MPIO — Multipath I/O. NIST — National Institute of Standards and


Technology. A standards organization active
MPU — Microprocessor Unit. in cloud computing.
MS/SG — Microsoft Service Guard. NLS — Native Language Support.
MSCS — Microsoft Cluster Server. Node ― An addressable entity connected to an
MSS — Multiple Subchannel Set. I/O bus or network, used primarily to refer
MTBF — Mean Time Between Failure. to computers, storage devices, and storage
subsystems. The component of a node that
MTS — Multitiered Storage. connects to the bus or network is a port.
Multitenancy — In cloud computing, Node name ― A Name_Identifier associated with
multitenancy is a secure way to partition the a node.
infrastructure (application, storage pool and
network) so multiple customers share a NRO — Network Recovery Objective.
single resource pool. Multitenancy is one of NTP — Network Time Protocol.
the key ways cloud can achieve massive NVS — Non Volatile Storage.
economy of scale.
-back to top-
M-VOL — Main Volume.
—O—
MVS — Multiple Virtual Storage.
OCC — Open Cloud Consortium. A standards
-back to top- organization active in cloud computing.
—N— OEM — Original Equipment Manufacturer.
NAS ― Network Attached Storage. A disk array OFC — Open Fibre Control.
connected to a controller that gives access to OGF — Open Grid Forum. A standards
a LAN Transport. It handles data at the file organization active in cloud computing.
level.
OID — Object identifier.
NAT — Network Address Translation.
OLA — Operating Level Agreements.
NDMP — Network Data Management Protocol is
OLTP — On-Line Transaction Processing.
a protocol meant to transport data between
NAS devices. OLTT — Open-loop throughput throttling.
NetBIOS — Network Basic Input/Output System. OMG — Object Management Group. A standards
organization active in cloud computing.
Network — A computer system that allows
sharing of resources, such as files and On/Off CoD — On/Off Capacity on Demand.
peripheral hardware devices. ONODE — Object node.
Network Cloud — A communications network. OPEX — Operational Expenditure. This is an
The word "cloud" by itself may refer to any operating expense, operating expenditure,
local area network (LAN) or wide area operational expense, or operational
network (WAN). The terms “computing" expenditure, which is an ongoing cost for
and "cloud computing" refer to services running a product, business, or system. Its
offered on the public Internet or to a private counterpart is a capital expenditure (CAPEX).



ORM — Online Read Margin.
OS — Operating System.
Out-of-band virtualization — Refers to systems where the controller is located outside of the SAN data path. Separates control and data on different connection paths. Also called asymmetric virtualization.
-back to top-
—P—
PaaS — Platform as a Service. A cloud computing business model — delivering a computing platform and solution stack as a service. PaaS offerings facilitate deployment of applications without the cost and complexity of buying and managing the underlying hardware, software and provisioning hosting capabilities. PaaS provides all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely from the Internet.
PACS — Picture Archiving and Communication System.
PAN — Personal Area Network. A communications network that transmits data wirelessly over a short distance. Bluetooth and Wi-Fi Direct are examples of personal area networks.
Parity — A technique of checking whether data has been lost or written over when it is moved from 1 place in storage to another or when it is transmitted between computers.
Parity Group — Also called an array group. This is a group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Partitioned cache memory — Separate workloads in a "storage consolidated" system by dividing cache into individually managed multiple partitions. Then customize the partition to match the I/O characteristics of assigned LUs.
PAT — Port Address Translation.
PATA — Parallel ATA.
Path — Also referred to as a transmission channel, the path between 2 nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway or a sub-channel in a carrier frequency.
Path failover — See Failover.
PAV — Parallel Access Volumes.
PAWS — Protect Against Wrapped Sequences.
PB — Petabyte.
PBC — Port By-pass Circuit.
PCB — Printed Circuit Board.
PCHIDS — Physical Channel Path Identifiers.
PCI — Power Control Interface.
PCI CON — Power Control Interface Connector Board.
PCI DSS — Payment Card Industry Data Security Standard.
PCIe — Peripheral Component Interconnect Express.
PD — Product Detail.
PDEV — Physical Device.
PDM — Policy based Data Migration or Primary Data Migrator.
PDS — Partitioned Data Set.
PDSE — Partitioned Data Set Extended.
Performance — Speed of access or the delivery of information.
Petabyte (PB) — A measurement of capacity — the amount of data that a drive or storage system can store after formatting. 1PB = 1,024TB.
PFA — Predictive Failure Analysis.
PFTaaS — Private File Tiering as a Service. A cloud computing business model.
PGR — Persistent Group Reserve.
PI — Product Interval.
PIR — Performance Information Report.
PiT — Point-in-Time.
PK — Package (see PCB).
PL — Platter. The circular disk on which the magnetic data is stored. Also called motherboard or backplane.
PM — Package Memory.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
P-P — Point-to-point; also P2P.
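The Parity entry above describes parity only in general terms. As a minimal sketch, assuming the common XOR form of parity used by striped arrays (the block values and sizes below are arbitrary examples, not drawn from any Hitachi implementation), the parity block is the bytewise XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the surviving blocks:

    # Illustrative XOR parity sketch (generic, not HDS-specific code).
    def xor_blocks(blocks):
        """Return the bytewise XOR of a list of equally sized blocks."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte
        return bytes(parity)

    data = [b"AAAA", b"BBBB", b"CCCC"]        # example data blocks in one stripe
    parity = xor_blocks(data)                 # parity block written to another drive

    # If one data block is lost, XOR of the remaining blocks plus parity rebuilds it.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]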



PPRC — Peer-to-Peer Remote Copy.
Private Cloud — A type of cloud computing defined by shared capabilities within a single company; modest economies of scale and less automation. Infrastructure and data reside inside the company's data center behind a firewall. Comprised of licensed software tools rather than on-going services. Example: An organization implements its own virtual, scalable cloud and business units are charged on a per use basis.
Private Network Cloud — A type of cloud network with 3 characteristics: (1) Operated solely for a single organization, (2) Managed internally or by a third-party, (3) Hosted internally or externally.
PR/SM — Processor Resource/System Manager.
Protocol — A convention or standard that enables the communication between 2 computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the 2. At the lowest level, a protocol defines the behavior of a hardware connection.
Provisioning — The process of allocating storage resources and assigning storage capacity for an application, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator, and it can be a tedious process. In recent years, automated storage provisioning (also called auto-provisioning) programs have become available. These programs can reduce the time required for the storage provisioning process, and can free the administrator from the often distasteful task of performing this chore manually.
PS — Power Supply.
PSA — Partition Storage Administrator.
PSSC — Perl Silicon Server Control.
PSU — Power Supply Unit.
PSUE — Pair SUspended Error.
PSUS — Pair SUSpend.
PTAM — Pickup Truck Access Method.
PTF — Program Temporary Fixes.
PTR — Pointer.
PU — Processing Unit.
Public Cloud — Resources, such as applications and storage, available to the general public over the Internet.
P-VOL — Primary Volume.
-back to top-
—Q—
QD — Quorum Device.
QoS — Quality of Service. In the field of computer networking, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
QSAM — Queued Sequential Access Method.
-back to top-
—R—
RACF — Resource Access Control Facility.
RAID — Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks. A group of disks that look like a single volume to the server. RAID improves performance by pulling a single stripe of data from multiple disks, and improves fault-tolerance either through mirroring or parity checking, and it is a component of a customer's SLA.
RAID-0 — Striped array with no parity.
RAID-1 — Mirrored array and duplexing.
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers.
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multi-threaded transfers.
RAID-5 — Striped array with typically rotating parity, optimized for short, multithreaded transfers.
RAID-6 — Similar to RAID-5, but with dual rotating parity physical disks, tolerating 2 physical disk failures.
RAM — Random Access Memory.
RAM DISK — A LUN held entirely in the cache area.
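The RAID-3 through RAID-6 entries above distinguish non-rotating from rotating parity. As a hedged sketch of rotating parity, assuming a generic left-rotating layout rather than any particular array implementation, the parity block moves to a different drive on each stripe so that no single drive absorbs all of the parity writes:

    # Illustrative rotating-parity placement (a generic sketch, not an HDS layout).
    def raid5_parity_drive(stripe_number, drive_count):
        """Return the index of the drive holding parity for a given stripe."""
        # Rotate the parity position one drive to the left on each successive stripe.
        return (drive_count - 1 - (stripe_number % drive_count)) % drive_count

    drives = 4  # for example, a 3D+1P parity group
    for stripe in range(8):
        print(f"stripe {stripe}: parity on drive {raid5_parity_drive(stripe, drives)}")

With non-rotating parity (RAID-3 or RAID-4), the function would simply return a fixed drive index, which is why a dedicated parity drive can become a bottleneck for short, multithreaded writes.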



RAS — Reliability, Availability, and Serviceability or Row Address Strobe.
RBAC — Role-Based Access Control.
RC — (1) Reference Code or (2) Remote Control.
RCHA — RAID Channel Adapter.
RCP — Remote Control Processor.
RCU — Remote Control Unit or Remote Disk Control Unit.
RD/WR — Read/Write.
RDM — Raw Disk Mapped.
RDMA — Remote Direct Memory Access.
RDP — Remote Desktop Protocol.
RDW — Record Descriptor Word.
Read/Write Head — Reads and writes data to the platters; typically there is 1 head per platter side, and each head is attached to a single actuator shaft.
RECFM — Record Format.
Redundant — Describes the computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links that are installed to back up primary resources in case they fail. A well-known example of a redundant system is the redundant array of independent disks (RAID). Redundancy contributes to the fault tolerance of a system.
Redundancy — Backing up a component to help ensure high availability.
Reliability — (1) Level of assurance that data will not be lost or degraded over time. (2) An attribute of any computer component (software, hardware, or a network) that consistently performs according to its specifications.
REST — Representational State Transfer.
REXX — Restructured extended executor.
RID — Relative Identifier that uniquely identifies a user or group within a Microsoft Windows domain.
RIS — Radiology Information System.
RISC — Reduced Instruction Set Computer.
RIU — Radiology Imaging Unit.
R-JNL — Secondary journal volumes.
RKAJAT — Rack Additional SATA disk tray.
RLGFAN — Rear Logic Box Fan Assembly.
RLOGIC BOX — Rear Logic Box.
RMF — Resource Measurement Facility.
RMI — Remote Method Invocation. A way that a programmer, using the Java programming language and development environment, can write object-oriented programming in which objects on different computers can interact in a distributed network. RMI is the Java version of what is generally known as a RPC (remote procedure call), but with the ability to pass 1 or more objects along with the request.
ROA — Return on Asset.
RoHS — Restriction of Hazardous Substances (in Electrical and Electronic Equipment).
ROI — Return on Investment.
ROM — Read Only Memory.
Round robin mode — A load balancing technique which distributes data packets equally among the available paths. Round robin DNS is usually used for balancing the load of geographically distributed Web servers. It works on a rotating basis in that one server IP address is handed out, then moves to the back of the list; the next server IP address is handed out, and then it moves to the end of the list; and so on, depending on the number of servers being used. This works in a looping fashion.
Router — A computer networking device that forwards data packets toward their destinations, through a process known as routing.
RPC — Remote procedure call.
RPO — Recovery Point Objective. The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly.
RRDS — Relative Record Data Set.
RS CON — RS232C/RS422 Interface Connector.
RSD — RAID Storage Division.
R-SIM — Remote Service Information Message.
RSM — Real Storage Manager.
RTM — Recovery Termination Manager.
RTO — Recovery Time Objective. The length of time that can be tolerated between a disaster and recovery of data.
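The Round robin mode entry above describes the rotation in prose. A minimal sketch of the same idea, assuming a generic list of path names rather than any particular multipathing product, is simply a repeating cycle:

    from itertools import cycle

    # Generic round-robin selector: each request takes the next path in the loop.
    paths = ["path-0", "path-1", "path-2", "path-3"]   # hypothetical path names
    next_path = cycle(paths)

    for io_number in range(10):
        print(f"I/O {io_number} -> {next(next_path)}")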



R-VOL — Remote Volume.
R/W — Read/Write.
-back to top-
—S—
SA — Storage Administrator.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users using a thin client, such as a web browser via the Internet. SaaS has become a common delivery model for most business applications, including accounting (CRM and ERP), invoicing (HRM), content management (CM) and service desk management, just to name the most common software that runs in the cloud. This is the fastest growing service in the cloud market today. SaaS performs best for relatively simple tasks in IT-constrained organizations.
SACK — Sequential Acknowledge.
SACL — System ACL. The part of a security descriptor that stores system auditing information.
SAN — Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a new standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike current IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBOD — Switched Bunch of Disks.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division.
SDM — System Data Mover.
SDSF — Spool Display and Search Facility.
Sector — A sub-division of a track of a magnetic disk that stores a fixed amount of data.
SEL — System Event Log.
Selectable segment size — Can be set per partition.
Selectable Stripe Size — Increases performance by customizing the disk access size.
SENC — The SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems on their own and they occasionally require a firmware upgrade.
Serial Transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests, also called a host.
Server Virtualization — The masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The implementation of multiple isolated virtual environments in one physical server.
Service-level Agreement — SLA. A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISP) provide their customers with a SLA. More recently, IT departments in major enterprises have adopted the idea of writing a service level agreement so that services for their customers (users in other departments within the enterprise) can be measured, justified, and perhaps compared with those of outsourcing network providers.



Some metrics that SLAs may specify include:
• The percentage of the time services will be available
• The number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance will be periodically compared
• The schedule for notification in advance of network changes that may affect users
• Help desk response time for various classes of problems
• Dial-in access availability
• Usage statistics that will be provided
Service-Level Objective — SLO. Individual performance metrics built into an SLA. Each SLO corresponds to a single performance characteristic relevant to the delivery of an overall service. Some examples of SLOs include: system availability, help desk incident resolution time, and application response time.
SES — SCSI Enclosure Services.
SFF — Small Form Factor.
SFI — Storage Facility Image.
SFM — Sysplex Failure Management.
SFP — Small Form-Factor Pluggable module Host connector. A specification for a new generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, offer high speed and physical compactness, and are hot-swappable.
SHSN — Shared memory Hierarchical Star Network.
SID — Security Identifier. A user or group identifier within the Microsoft Windows security model.
SIGP — Signal Processor.
SIM — (1) Service Information Message. A message reporting an error that contains fix guidance information. (2) Storage Interface Module. (3) Subscriber Identity Module.
SIM RC — Service (or system) Information Message Reference Code.
SIMM — Single In-line Memory Module.
SLA — Service Level Agreement.
SLO — Service Level Objective.
SLRP — Storage Logical Partition.
SM — Shared Memory or Shared Memory Module. Stores the shared information about the subsystem and the cache control information (director names). This type of information is used for the exclusive control of the subsystem. Like CACHE, shared memory is controlled as 2 areas of memory and fully non-volatile (sustained for approximately 7 days).
SM PATH — Shared Memory Access Path. The Access Path from the processors of CHA, DKA PCB to Shared Memory.
SMB/CIFS — Server Message Block Protocol/Common Internet File System.
SMC — Shared Memory Control.
SMF — System Management Facility.
SMI-S — Storage Management Initiative Specification.
SMP — Symmetric Multiprocessing. An IBM-licensed program used to install software and software changes on z/OS systems.
SMP/E — System Modification Program/Extended.
SMS — System Managed Storage.
SMTP — Simple Mail Transfer Protocol.
SMU — System Management Unit.
Snapshot Image — A logical duplicated volume (V-VOL) of the primary volume. It is an internal volume intended for restoration.
SNIA — Storage Networking Industry Association. An association of producers and consumers of storage networking products, whose goal is to further storage networking technology and applications. Active in cloud computing.
SNMP — Simple Network Management Protocol. A TCP/IP protocol that was designed for management of networks over TCP/IP, using agents and stations.
SOA — Service Oriented Architecture.
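The first SLA metric listed above is the percentage of time a service will be available. As a worked example using generic arithmetic (the targets shown are illustrative, not commitments from any vendor), an availability percentage converts into allowed downtime as follows:

    # Convert an availability percentage into allowed downtime per month and year.
    def allowed_downtime_minutes(availability_percent, period_hours):
        return period_hours * 60 * (1 - availability_percent / 100)

    for target in (99.0, 99.9, 99.99):
        monthly = allowed_downtime_minutes(target, 30 * 24)    # 30-day month
        yearly = allowed_downtime_minutes(target, 365 * 24)
        print(f"{target}% -> {monthly:.1f} min/month, {yearly:.0f} min/year")

For example, a 99.9% target allows roughly 43 minutes of downtime in a 30-day month; a 99.99% target allows only about 4 minutes.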



SOAP — Simple object access protocol. A way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of an operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange.
Socket — In UNIX and some other operating systems, a socket is a software object that connects an application to a network protocol. In UNIX, for example, a program can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket. This simplifies program development because the programmer need only worry about manipulating the socket and can rely on the operating system to actually transport messages across the network correctly. Note that a socket in this sense is completely soft; it is a software object, not a physical component.
SOM — System Option Mode.
SOSS — Service Oriented Storage Solutions.
SPaaS — SharePoint as a Service. A cloud computing business model.
SPAN — A span is a section between 2 intermediate supports.
Spare — An object reserved for the purpose of substitution for a like object in case of that object's failure.
SPC — SCSI Protocol Controller.
SpecSFS — Standard Performance Evaluation Corporation Shared File system.
SPECsfs97 — Standard Performance Evaluation Corporation (SPEC) System File Server (sfs) developed in 1997 (97).
SPI model — Software, Platform and Infrastructure as a service. A common term to describe the cloud computing "as a service" business model.
SRDF/A — (EMC) Symmetrix Remote Data Facility Asynchronous.
SRDF/S — (EMC) Symmetrix Remote Data Facility Synchronous.
SSB — Sense Byte.
SSC — SiliconServer Control.
SSCH — Start Subchannel.
SSD — Solid-state Drive or Solid-State Disk.
SSH — Secure Shell.
SSID — Storage Subsystem ID or Subsystem Identifier.
SSL — Secure Sockets Layer.
SSPC — System Storage Productivity Center.
SSUE — Split SUSpended Error.
SSUS — Split SUSpend.
SSVP — Sub Service Processor; interfaces the SVP to the DKC.
SSW — SAS Switch.
Sticky Bit — Extended UNIX mode bit that prevents objects from being deleted from a directory by anyone other than the object's owner, the directory's owner or the root user.
Storage pooling — The ability to consolidate and manage storage resources across storage system enclosures where the consolidation of many appears as a single view.
STP — Server Time Protocol.
STR — Storage and Retrieval Systems.
Striping — A RAID technique for writing a file to multiple disks on a block-by-block basis, with or without parity.
Subsystem — Hardware or software that performs a specific function within a larger system. See Storage pool.
SVC — Supervisor Call Interruption.
SVC Interrupts — Supervisor calls.
S-VOL — (1) (ShadowImage) Source Volume for In-System Replication, or (2) (Universal Replicator) Secondary Volume.
SVP — Service Processor. A laptop computer mounted on the control frame (DKC) and used for monitoring, maintenance and administration of the subsystem.
Switch — A fabric device providing full bandwidth per port and high-speed routing of data via link-level addressing.
SXP — SAS Expander.
Symmetric virtualization — See In-band virtualization.
Synchronous — Operations that have a fixed time relationship to each other. Most commonly used to denote I/O operations that occur in time sequence, i.e., a successor operation does not occur until its predecessor is complete.
-back to top-
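The Striping and Selectable Stripe Size entries above describe distributing blocks across disks. As a simplified sketch, assuming a plain striped layout with a fixed stripe size and no parity, the mapping from a logical block address to a disk and an offset is just integer arithmetic:

    # Map a logical block address (LBA) to (disk, block-within-disk) for plain striping.
    def locate_block(lba, stripe_size_blocks, disk_count):
        stripe_index, offset_in_stripe = divmod(lba, stripe_size_blocks)
        disk = stripe_index % disk_count          # consecutive stripes rotate across disks
        block_on_disk = (stripe_index // disk_count) * stripe_size_blocks + offset_in_stripe
        return disk, block_on_disk

    # Example: 4 disks with a 16-block stripe size.
    for lba in (0, 15, 16, 64, 100):
        print(lba, "->", locate_block(lba, 16, 4))

With parity (RAID-5 or RAID-6), the same arithmetic applies except that 1 or 2 positions in each stripe are reserved for parity blocks.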



—T—
Target — The system component that receives a SCSI I/O command, an open device that operates at the request of the initiator.
TB — Terabyte. 1TB = 1,024GB.
TCO — Total Cost of Ownership.
TCP/IP — Transmission Control Protocol over Internet Protocol.
TDCONV — Trace Dump CONVerter. A software program that is used to convert traces taken on the system into readable text. This information is loaded into a special spreadsheet that allows for further investigation of the data and more in-depth failure analysis.
TDMF — Transparent Data Migration Facility.
Telco or TELCO — Telecommunications Company.
TEP — Tivoli Enterprise Portal.
Terabyte (TB) — A measurement of capacity, data or data storage. 1TB = 1,024GB.
TFS — Temporary File System.
TGTLIBs — Target Libraries.
THF — Front Thermostat.
Thin Provisioning — Thin provisioning allows storage space to be easily allocated to servers on a just-enough and just-in-time basis.
THR — Rear Thermostat.
Throughput — The amount of data transferred from 1 place to another or processed in a specified amount of time. Data transfer rates for disk drives and networks are measured in terms of throughput. Typically, throughputs are measured in kbps, Mbps and Gb/sec.
TID — Target ID.
Tiered storage — A storage strategy that matches data classification to storage metrics. Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.
Tiered Storage Promotion — Moving data between tiers of storage as their availability requirements change.
TLS — Tape Library System.
TLS — Transport Layer Security.
TMP — Temporary or Test Management Program.
TOD (or ToD) — Time Of Day.
TOE — TCP Offload Engine.
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPC-R — Tivoli Productivity Center for Replication.
TPF — Transaction Processing Facility.
Track — Circular segment of a hard disk or other storage media.
Transfer Rate — See Data Transfer Rate.
Trap — A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the Operating System performs some action, and then returns control to the program.
TSC — Tested Storage Configuration.
TSO — Time Sharing Option.
TSO/E — Time Sharing Option/Extended.
T-VOL — (ShadowImage) Target Volume for In-System Replication.
-back to top-
—U—
UA — Unified Agent.
UBX — Large Box (Large Form Factor).
UCB — Unit Control Block.
UDP — User Datagram Protocol is 1 of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another.
UFA — UNIX File Attributes.
UID — User Identifier within the UNIX security model.
UPS — Uninterruptible Power Supply — A power supply that includes a battery to maintain power in the event of a power outage.
UR — Universal Replicator.
UUID — Universally Unique Identifier.
-back to top-
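The Throughput entry above notes that transfer rates are quoted in kbps, Mbps and Gb/sec. As a worked example with made-up numbers, a sustained rate converts into elapsed time for a given amount of data, which is often the figure a performance architect actually needs:

    # Time to move a given amount of data at a sustained throughput.
    # Assumes the binary convention used in this glossary (1GB = 1,024MB).
    def transfer_hours(data_gb, throughput_mb_per_s):
        seconds = data_gb * 1024 / throughput_mb_per_s
        return seconds / 3600

    for rate in (100, 400, 1600):                      # MB/sec, example rates only
        print(f"{rate} MB/s -> {transfer_hours(500, rate):.2f} hours for 500GB")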



—V—
vContinuum — Using the vContinuum wizard, users can push agents to primary and secondary servers, set up protection and perform failovers and failbacks.
VCS — Veritas Cluster System.
VDEV — Virtual Device.
VHD — Virtual Hard Disk.
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language.
VHSIC — Very-High-Speed Integrated Circuit.
VI — Virtual Interface. A research prototype that is undergoing active development, and the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (for example, copying data, computing checksums) and system call overheads while still preventing 1 process from accidentally or maliciously tampering with or reading data being used by another.
Virtualization — Referring to storage virtualization, virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN, and makes tasks such as archiving, backup and recovery easier and faster. Storage virtualization is usually implemented via software applications. There are many additional types of virtualization.
Virtual Private Cloud (VPC) — Private cloud existing within a shared or public cloud (for example, the Intercloud). Also known as a virtual private network cloud.
VLL — Virtual Logical Volume Image/Logical Unit Number.
VLUN — Virtual LUN. Customized volume. Size chosen by user.
VLVI — Virtual Logic Volume Image. Marketing name for CVS (custom volume size).
VM — Virtual Machine.
VMDK — Virtual Machine Disk file format.
VNA — Vendor Neutral Archive.
VOJP — (Cache) Volatile Jumper.
VOLID — Volume ID.
VOLSER — Volume Serial Numbers.
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than 1 volume or for a volume to span more than 1 disk.
VPC — Virtual Private Cloud.
VSAM — Virtual Storage Access Method.
VSD — Virtual Storage Director.
VSP — Virtual Storage Platform.
VSS — (Microsoft) Volume Shadow Copy Service.
VTOC — Volume Table of Contents.
VTOCIX — Volume Table of Contents Index.
VVDS — Virtual Volume Data Set.
V-VOL — Virtual Volume.
-back to top-
—W—
WAN — Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR — Directory Name Object.
WDIR — Working Directory.
WDS — Working Data Set.
WFILE — File Object or Working File.
WFS — Working File Set.
WINS — Windows Internet Naming Service.
WLM — Work Load Manager.
WORM — Write Once, Read Many.
WSDL — Web Services Description Language.
WTREE — Directory Tree Object or Working Tree.
WWN — World Wide Name. A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix).
WWNN — World Wide Node Name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN — World Wide Port Name. A globally unique 64-bit identifier assigned to each Fibre Channel port. A Fibre Channel port's WWPN is permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.



-back to top-
—X—
XAUI — "X"=10, AUI = Attachment Unit Interface.
XCF — Cross System Communications Facility.
XDS — Cross Enterprise Document Sharing.
XDSi — Cross Enterprise Document Sharing for Imaging.
XFI — Standard interface for connecting 10Gb Ethernet MAC device to XFP interface.
XFP — "X"=10Gb Small Form Factor Pluggable.
XRC — Extended Remote Copy.
-back to top-
—Y—
YB — Yottabyte.
Yottabyte — A highest-end measurement of data at the present time. 1YB = 1,024ZB, or 1 quadrillion GB. A recent estimate (2011) is that all the computer hard drives in the world do not contain 1YB of data.
-back to top-
—Z—
z/OS — z Operating System (IBM® S/390® or z/OS® Environments).
z/OS NFS — (System) z/OS Network File System.
z/OSMF — (System) z/OS Management Facility.
zAAP — (System) z Application Assist Processor (for Java and XML workloads).
Zettabyte (ZB) — A high-end measurement of data at the present time. 1ZB = 1,024EB.
zFS — (System) zSeries File System.
zHPF — (System) z High Performance FICON.
zIIP — (System) z Integrated Information Processor (specialty processor for database).
Zone — A collection of Fibre Channel Ports that are permitted to communicate with each other via the fabric.
Zoning — A method of subdividing a storage area network into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone.
-back to top-



Evaluating this Course
Please use the online evaluation system to help improve our courses.

Learning Center Sign-in location:


https://learningcenter.hds.com/Saba/Web/Main



