
Student Guide for

Hitachi Data Systems Storage Foundations
For HDS internal use only. This document is not to be used for instructor-led
training without written approval from GEO Academy leaders. In addition,
this document should not be used in place of HDS maintenance manuals
and/or user guides.

THI2264

Book 2 of 2

Courseware Version 10.0


Corporate Headquarters
2825 Lafayette Street
Santa Clara, California 95050-2639 USA
www.HDS.com

Regional Contact Information
Americas: +1 408 970 1000 or info@HDS.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@HDS.com
Asia Pacific: +852 3189 7900 or hds.marketing.apac@HDS.com

© Hitachi Data Systems Corporation 2016. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Hitachi Content Platform Anywhere, Hitachi Live
Insight Solutions, ShadowImage, TrueCopy, Universal Storage Platform, Essential NAS Platform, Hi-Track, and Archivas are trademarks or registered trademarks of Hitachi Data
Systems Corporation. IBM, S/390, XRC, z/OS, and Flashcopy are trademarks or registered trademarks of International Business Machines Corporation. Microsoft, SQL Server,
Hyper-V, PowerShell, SharePoint, and Windows are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service marks, and company names are
properties of their respective owners.

Contents
BOOK 1

Introduction ............................................................................................................ xvii

1. Hitachi Virtual Storage Platform G1000 Hardware ........................................... 1-1

2. Hitachi Virtual Storage Platform G200, G400, G600 and G800 Storage Architecture ............................................ 2-1

3. Hitachi Virtual Storage Platform F400, F600 and F800 Storage Architecture
and Hitachi Flash Storage ....................................................................................... 3-1

4. Hi-Track Remote Monitoring System ................................................................ 4-1

5. Hitachi Storage Virtualization ........................................................................... 5-1

6. Hitachi Command Suite .................................................................................... 6-1

7. Hitachi Device Manager .................................................................................... 7-1

8. Hitachi Infrastructure Director ......................................................................... 8-1

9. Hitachi Tiered Storage Manager ....................................................................... 9-1

10. Hitachi Tuning Manager.................................................................................. 10-1

11. Hitachi Command Director.............................................................................. 11-1

12. Hitachi Dynamic Link Manager ....................................................................... 12-1

13. Hitachi Global Link Manager .......................................................................... 13-1


BOOK 2

14. Business Continuity Overview ........................................................................ 14-1


Module Objectives ............................................................................................................................... 14-1
Hitachi Replication Products ................................................................................................................. 14-2
ShadowImage Replication .................................................................................................................... 14-3
Hitachi Thin Image .............................................................................................................................. 14-4
Hitachi TrueCopy Remote Replication Software ...................................................................................... 14-5
Hitachi Universal Replicator Software .................................................................................................... 14-6
Hitachi Replication Manager ................................................................................................................. 14-7
Tools Used for Setting Up Replication .................................................................................................... 14-8
Requirements for All Replication Products .............................................................................................14-10
Replication Operations ........................................................................................................................14-10
Copy Operations ................................................................................................................................14-11
Thin Provisioning “Awareness” .............................................................................................................14-12
Online Product Overview .....................................................................................................................14-13
Module Summary ...............................................................................................................................14-13
Module Review ...................................................................................................................................14-14

15. Hitachi In-System Replication Bundle ............................................................ 15-1


Module Objectives ............................................................................................................................... 15-1
Hitachi ShadowImage Replication ......................................................................................................... 15-2
Introducing ShadowImage Replication.............................................................................................. 15-2
ShadowImage Replication Overview ................................................................................................. 15-3
ShadowImage Replication RAID-Protected Clones ............................................................................. 15-4
Easy to Create ShadowImage Replication Clones ............................................................................... 15-5
ShadowImage Replication Consistency Groups .................................................................................. 15-6
Overview ....................................................................................................................................... 15-7
Applications ................................................................................................................................... 15-8
ShadowImage Replication Licensing ................................................................................................. 15-9
Management Resources .................................................................................................................15-10
Internal ShadowImage Replication Operation ................................................................................... 15-11
Operations ....................................................................................................................................15-12
paircreate Command .....................................................................................................................15-13
pairsplit Command.........................................................................................................................15-14
pairresync Command – Operation Types ..........................................................................................15-16
pairresync Command – Normal Resync ............................................................................................15-18

pairresync Command – Reverse Resync ...........................................................................................15-19
pairsplit -S Command.....................................................................................................................15-20
Volume Grouping...........................................................................................................................15-20
Pair Status Transitions ...................................................................................................................15-21
Hitachi Thin Image .............................................................................................................................15-22
What Is Hitachi Thin Image? ..........................................................................................................15-22
Hitachi ShadowImage Replication Clones Versus Thin Image Snapshots ............................................. 15-25
Hitachi Thin Image Technical Details (1 of 3) ................................................................................... 15-27
Hitachi Thin Image Technical Details (2 of 3) ................................................................................... 15-27
Hitachi Thin Image Technical Details (3 of 3) ................................................................................... 15-28
Hitachi Thin Image Components .....................................................................................................15-28
Comparison: Hitachi Copy-on-Write Snapshot and Hitachi Thin Image ................................................ 15-29
Operations ....................................................................................................................................15-30
Module Summary ...............................................................................................................................15-38
Module Review ...................................................................................................................................15-38

16. Hitachi Remote Replication ............................................................................ 16-1


Module Objectives ............................................................................................................................... 16-1
Hitachi TrueCopy Remote Replication Bundle (Synchronous) ................................................................... 16-2
TrueCopy Remote Replication Bundle Overview ................................................................................. 16-2
Typical TrueCopy Remote Replication Bundle Environment ................................................................. 16-3
Basic TrueCopy Remote Replication Bundle Operation ....................................................................... 16-4
TrueCopy Remote Replication Bundle (Synchronous) ......................................................................... 16-6
How TrueCopy Remote Replication Works ........................................................................................ 16-7
Easy to Create Clones ..................................................................................................................... 16-8
Volume States ...............................................................................................................................16-10
Hitachi Universal Replicator .................................................................................................................16-11
Hitachi Universal Replicator Overview ..............................................................................................16-11
Hitachi Universal Replicator Benefits ................................................................................................16-12
Hitachi Universal Replicator Functions .............................................................................................16-13
Hitachi Universal Replicator Hardware .............................................................................................16-14
Hitachi Universal Replicator Components .........................................................................................16-15
Hitachi Universal Replicator Specifications ........................................................................................16-17
Three-Data-Center Cascade Replication ...........................................................................................16-19
Three-Data-Center Multi-Target Replication .....................................................................................16-20
Four-Data-Center Multi-Target Replication .......................................................................................16-21
Replication Tab in Hitachi Command Suite .......................................................................................16-21

Replication Tab in Hitachi Command Suite – Makes Controlling HUR Easier ......................................... 16-22
Hitachi High Availability Manager ....................................................................................................16-23
Complete Virtualized, High Availability and Disaster Recovery Solution ............................................... 16-24
Global-Active Device ...........................................................................................................................16-25
Global-Active Device Overview ........................................................................................................16-25
Global-Active Device – Components ................................................................................................16-27
Global-Active Device Software Requirements for VSP G1000.............................................................. 16-30
Global-Active Device – Specifications for VSP G1000 ......................................................................... 16-31
Hitachi Business Continuity Management Software ................................................................................ 16-32
Hitachi Business Continuity Manager Overview ................................................................................. 16-32
Hitachi Business Continuity Manager Functions ................................................................................ 16-33
Demo ................................................................................................................................................16-34
Online Product Overview .....................................................................................................................16-34
Module Summary ...............................................................................................................................16-35
Module Review ...................................................................................................................................16-36

17. Command Control Interface Overview ........................................................... 17-1


Module Objectives ............................................................................................................................... 17-1
Overview ............................................................................................................................................ 17-2
Example With ShadowImage Replication ............................................................................................... 17-6
Example With Hitachi TrueCopy ............................................................................................................ 17-7
Often Used Commands ........................................................................................................................ 17-8
Module Summary ................................................................................................................................ 17-9
Module Review ...................................................................................................................................17-10

18. Hitachi Replication Manager ........................................................................... 18-1


Module Objectives ............................................................................................................................... 18-1
Hitachi Replication Manager ................................................................................................................. 18-2
Centralized Replication Management ..................................................................................................... 18-3
Features Overview............................................................................................................................... 18-4
Overview ............................................................................................................................................ 18-5
Launching Hitachi Command Suite ........................................................................................................ 18-6
Centralized Monitoring ......................................................................................................................... 18-7
Centralized Monitoring ......................................................................................................................... 18-9
Features ............................................................................................................................................18-11
Positioning .........................................................................................................................................18-13
Architecture – Open Systems and Mainframe ........................................................................................18-14
Architecture – Open Systems With Application Agent ............................................................................. 18-16

Components ......................................................................................................................................18-16
Managing Users and Permissions .........................................................................................................18-18
Resource Groups Overview..................................................................................................................18-19
Resource Group Function ....................................................................................................................18-20
Resource Groups ................................................................................................................................18-21
Resource Group Properties ..................................................................................................................18-22
Hitachi Command Suite Replication Tab................................................................................................18-23
HCS Replication Tab ......................................................................................................................18-23
HCS Replication Tab Operations ......................................................................................................18-24
Module Summary ...............................................................................................................................18-26
Module Review ...................................................................................................................................18-26

19. Hitachi Data Instance Director ....................................................................... 19-1


Module Objectives ............................................................................................................................... 19-1
HDS Data Protection Strategy ............................................................................................................... 19-2
Data Management – Today’s Challenges ........................................................................................... 19-2
Focus of Data Protection ................................................................................................................. 19-3
Goals of Data Protection ................................................................................................................. 19-4
Modern Approach to Data Protection ................................................................................................ 19-6
Business-Defined Data Protection: Goals........................................................................................... 19-7
Business-Defined Data Protection: Technologies................................................................................ 19-9
Introduction to Hitachi Data Instance Director ......................................................................................19-10
Hitachi Data Instance Director Overview ..........................................................................................19-10
A Common Scenario ......................................................................................................................19-11
Eliminate the Backup Window Problem ............................................................................................19-11
Easily Transform Backup Designs Into Policies (A Real Customer Example) ......................................... 19-14
What Are the Benefits of Hitachi Data Instance Director? .................................................................. 19-15
Features and Capabilities ....................................................................................................................19-16
Advanced Features to Modernize Your Data Protection Infrastructure ................................................. 19-16
Quantifiable Benefits ......................................................................................................................19-19
Storage-Based Protection With HDID...............................................................................................19-20
Capabilities for Block With HDID .....................................................................................................19-20
Capabilities for HNAS With HDID.....................................................................................................19-21
Capabilities for Host-Based Operational Recovery With HDID............................................................. 19-22
Hitachi Data Instance Director Block Orchestration ........................................................................... 19-23
Capabilities for HCP With HDID .......................................................................................................19-23
Archive File and Email Objects to HCP .............................................................................................19-24

HDID Complimentary Products .......................................................................................................19-26
Unified Management ...........................................................................................................................19-27
How Many Backup Solutions Do You Use? .......................................................................................19-27
Data Protection Is Complicated .......................................................................................................19-27
Which Data Protection Options to Choose? ......................................................................................19-30
When Data Disaster Strikes ............................................................................................................19-32
Workflow-Based Policy Management ...............................................................................................19-32
Unique Graphical User Interface .....................................................................................................19-33
New: Multitenancy Support ............................................................................................................19-33
Example Deal With HDID ...............................................................................................................19-34
Demo ................................................................................................................................................19-35
Online Product Overview .....................................................................................................................19-36
Module Summary ...............................................................................................................................19-37
Module Review ...................................................................................................................................19-38

20. Hitachi NAS Platform ...................................................................................... 20-1


Module Objectives ............................................................................................................................... 20-1
Features ............................................................................................................................................. 20-2
Hitachi NAS Platform Single-Node Portfolio ............................................................................................ 20-3
Hitachi NAS 2-Node Cluster Portfolio January 2015 ................................................................................. 20-4
The Family of HUS File and HNAS Models .............................................................................................. 20-5
System Hardware (Front View) ............................................................................................................. 20-7
Hitachi NAS Platform 4040 Rear Panel ................................................................................................... 20-7
Hitachi NAS Platform 4060/4080/4100 Rear Panel .................................................................................. 20-8
Differences Between Models 4060 and 4080 .......................................................................................... 20-9
MMB and MFB Printed Circuit Boards ...................................................................................................20-10
Logical Elements in HNAS ...................................................................................................................20-11
EVS Migration (Failover)......................................................................................................................20-12
CIFS Shares and NFS Exports ..............................................................................................................20-13
HNAS 4000 Software Licensing ............................................................................................................20-14
HNAS Features ...................................................................................................................................20-15
Primary Deduplication Using HNAS .......................................................................................................20-15
HNAS Platform Snapshots Implementation............................................................................................20-16
Register Hitachi Unified Storage Into Hitachi Command Suite ................................................................. 20-17
Register HUS File Module/HNAS Into HCS .............................................................................................20-18
SMU Registration ................................................................................................................................20-19
Hitachi NAS File Clone.........................................................................................................................20-20

Writable Clones ..................................................................................................................................20-21
Traditional Snapshot and NAS File Clone Differences ............................................................................. 20-21
Directory Clones .................................................................................................................................20-22
NDMP Backup Direct to Tape ...............................................................................................................20-22
HNAS Replication Access Point Replication ............................................................................................20-23
NAS Replication Object-by-Object ........................................................................................................20-23
Promote Secondary ............................................................................................................................20-24
Data Protection – Anti-Virus Support ....................................................................................................20-24
Data Migration Using Cross Volume Links .............................................................................................20-25
HNAS Data Migration to HCP ...............................................................................................................20-26
Data Migrator to Cloud Added .............................................................................................................20-26
Universal Migration .............................................................................................................................20-27
VSP G1000 Hardware .........................................................................................................................20-27
Global-Active Device and HNAS Integration ..........................................................................................20-28
Synchronous Disaster Recovery for HNAS Overview .......................................................................... 20-28
Why Is Global-Active Device Important to HNAS? ............................................................................. 20-30
Online Product Overview .....................................................................................................................20-32
Module Summary ...............................................................................................................................20-33
Module Review ...................................................................................................................................20-34

21. Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere ............. 21-1
Module Objectives ............................................................................................................................... 21-1
Hitachi Content Platform ...................................................................................................................... 21-2
What Is an HCP Object? .................................................................................................................. 21-2
What Is Hitachi Content Platform? ................................................................................................... 21-3
HCP Basics ..................................................................................................................................... 21-4
Fixed Content................................................................................................................................. 21-6
Categories of Storage ..................................................................................................................... 21-7
Object-Based Storage – Overview .................................................................................................... 21-8
Retention Times ............................................................................................................................. 21-9
Reviewing Retention ......................................................................................................................21-10
Policy Descriptions .........................................................................................................................21-11
HCP Integration With VLANs ...........................................................................................................21-12
Multiple Custom Metadata Injection ................................................................................................21-13
It’s not Just Archive Anymore .........................................................................................................21-14
Introducing Tenants and Namespaces .............................................................................................21-15
Internal Object Representation .......................................................................................................21-16

HCP – Versatile Content Platform ....................................................................................................21-17
HCP Products .....................................................................................................................................21-17
Unified HCP G10 Platform...............................................................................................................21-18
HCP G10 With Local Storage...........................................................................................................21-19
HCP G10 With Attached Storage .....................................................................................................21-19
HCP S10 .......................................................................................................................................21-20
HCP S30 .......................................................................................................................................21-21
HCP S Node ..................................................................................................................................21-22
Direct Write to HCP S10/S30 ..........................................................................................................21-23
VMware and Hyper-V Editions of HCP ..............................................................................................21-24
Hitachi Data Ingestor ..........................................................................................................................21-24
Hitachi Data Ingestor (HDI)............................................................................................................21-25
What Is Hitachi Data Ingestor? .......................................................................................................21-25
How Does Hitachi Data Ingestor Work? ...........................................................................................21-26
Hitachi Data Ingestor Overview ......................................................................................................21-27
Hitachi Data Ingestor (HDI) Specifications .......................................................................................21-28
Major Components: Server + HBA, Switch and Storage..................................................................... 21-29
Protocols in Detail .........................................................................................................................21-29
How HDI Maps to HCP Tenants and Namespaces ............................................................................. 21-30
Content Sharing Use Case: Medical Image File Sharing ..................................................................... 21-31
A Quick Look: Migration, Stubbing and Recalling .............................................................................. 21-31
HDI Is Backup Free .......................................................................................................................21-32
HDI Intelligent Caching: Migration ..................................................................................................21-32
HDI Intelligent Caching: Stubbing ...................................................................................................21-33
File Retention Utility (WORM) .........................................................................................................21-33
Roaming Home Directories .............................................................................................................21-34
HDI With Remote Server .....................................................................................................................21-34
What Is HDI With Remote Server? ..................................................................................................21-34
Why HDI With Remote Server? .......................................................................................................21-35
Solution Components .....................................................................................................................21-35
HCP Anywhere ...................................................................................................................................21-35
HCP Solution With HCP Anywhere ...................................................................................................21-36
Hitachi Content Platform Anywhere .................................................................................................21-37
HCP Solution With HCP Anywhere ...................................................................................................21-38
Desktop Application Overview .........................................................................................................21-39
HCP Anywhere App in the App Store ...............................................................................................21-39
HCP Anywhere Features .................................................................................................................21-40

Demo ................................................................................................................................................21-40
Online Product Overviews ...................................................................................................................21-41
Module Summary ...............................................................................................................................21-41
Module Review ...................................................................................................................................21-42

22. Hitachi Compute Blade and Hitachi Unified Compute Platform ...................... 22-1
Module Objectives ............................................................................................................................... 22-1
Hitachi Compute Portfolio..................................................................................................................... 22-2
Hitachi Compute Blade 500 Series ......................................................................................................... 22-2
Compute Blade 500 Chassis And Components ........................................................................................ 22-4
Hitachi Compute Blade 500 Series ......................................................................................................... 22-5
CB500 Web Console ............................................................................................................................ 22-7
Hitachi Compute Blade 2500 Series ....................................................................................................... 22-8
Compute Blade 2500 Components - Front.............................................................................................. 22-9
Compute Blade 2500 Components - Rear..............................................................................................22-10
Hitachi Compute Blade 2500 Series ......................................................................................................22-11
CB2500 Web Console..........................................................................................................................22-11
Server Blade Options ..........................................................................................................................22-12
Compute Blade Platform Features ........................................................................................................22-13
What Is Logical Partitioning? ...............................................................................................................22-15
Compute Rack Server Family ...............................................................................................................22-16
Integrated Platform Management ........................................................................................................22-17
Hitachi Compute Systems Manager ......................................................................................................22-18
HCSM Resources – Compute Blade Chassis ...........................................................................................22-18
HCSM Resources – Compute Blade Servers ...........................................................................................22-19
HCSM Resources – Compute Blade Servers (continued) ......................................................................... 22-19
Demo ................................................................................................................................................22-20
Unified Compute Platform ...................................................................................................................22-20
Unified Compute Platform – One Platform for All Workloads .............................................................. 22-21
UCP With Unified Compute Platform Director ................................................................................... 22-22
Unified Compute Platform Family Overview ......................................................................................22-23
Unified Compute Platform 4000E – Entry-Level ................................................................................ 22-25
Demo ................................................................................................................................................22-27
Online Product Overviews ...................................................................................................................22-27
Module Summary ...............................................................................................................................22-28
Your Next Steps .................................................................................................................................22-29


Appendix A: Hitachi Enterprise Storage Hardware – Hitachi Virtual Storage Platform ............................................. A-1

Glossary .................................................................................................................. G-1

Evaluate This Course ............................................................................................... E-1

14. Business Continuity Overview
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the business continuity solutions and the available software


Hitachi reference and user guides:

• Hitachi Virtual Storage Platform G1000 Hitachi ShadowImage User Guide


• Hitachi Virtual Storage Platform G1000 Hitachi Thin Image User Guide
• Hitachi Virtual Storage Platform G1000 Hitachi TrueCopy User Guide
• Command Control Interface Installation and Configuration Guide
• Hitachi Command Control Interface Command Reference
• Hitachi Command Control Interface User and Reference Guide
• Hitachi Virtual Storage Platform G1000 Hitachi Universal Replicator User Guide

Hitachi Replication Products


 Enterprise and entry-level enterprise storage systems
• Hitachi In-System Heterogeneous Replication bundle
 Hitachi ShadowImage Heterogeneous Replication
 Hitachi Copy-on-Write Snapshot
• Hitachi Remote Replication
 Hitachi TrueCopy Remote Replication Software
 Hitachi Universal Replicator (HUR)

 Modular storage systems
• Hitachi In-System Replication bundle
 Hitachi ShadowImage Replication
 Hitachi Copy-On-Write Snapshot
• Hitachi Remote Replication
 Hitachi TrueCopy Remote Replication bundle
 Hitachi TrueCopy Extended Distance

• Hitachi Business Continuity Manager (BCM)


• Hitachi Dynamic Replicator (HDR)
• Hitachi ShadowImage In-System Replication software bundle
• Hitachi Thin Image (HTI)
• Hitachi TrueCopy Remote Replication bundle
• Hitachi Replication Manager (HRpM)
• Hitachi Universal Replicator (HUR)
• Mainframe compatible software


ShadowImage Replication

 Features
• Full physical copy of a volume
• Immediately available for concurrent use by other applications (after split)
• No host processing cycles required
• No dependence on operating system, file system or database
• All copies are additionally RAID protected
 Benefits
• Protects data availability
• Simplifies and increases disaster recovery testing
• Eliminates the backup window
• Reduces testing and development cycles
• Enables nondisruptive sharing of critical information

[Figure: the production volume stays online and normal processing continues unaffected, while a point-in-time copy of the production volume is used for parallel processing]

Hitachi ShadowImage In-System Replication software bundle is a nondisruptive, host-independent data replication solution for creating copies of any customer-accessible data within a single Hitachi storage system. Hitachi ShadowImage Replication also increases the availability of revenue-producing applications by enabling backup operations to run concurrently while business or production applications are online.


Hitachi Thin Image

Benefits
 Reduce recovery time from data corruption or human errors while maximizing Hitachi disk storage capacity
 Achieve frequent and nondisruptive data backup operations while critical applications run unaffected
 Accelerate application testing and deployment with always-available copies of current production information
 Significantly reduce or eliminate backup window time requirements
 Improve operational efficiency by allowing multiple processes to run in parallel with access to the same information

Features
 Up to 1024 point-in-time snapshot copies
 Only changed data blocks stored for maximum capacity utilization
 Version tracking of backups enables easy restores of just the data you need
 Near instantaneous restore reduces downtime and improves recovery objectives
 New greatly improved write performance reduces response time to host, minimizing impact on users and applications
 Integration with industry-leading backup software applications

[Figure: the host can access the P-VOL and its snapshot S-VOLs; snapshot data is stored in the Thin Image (TI) pool]

An essential component of data backup and protection solutions is the ability to quickly and
easily copy data. On HUS VM and newer systems Hitachi provides this as Hitachi Thin Image.
This function provides logical, change-based, point-in-time data replication within Hitachi
storage systems for immediate business use. Business usage can include data backup and rapid
recovery operations, as well as decision support, information processing and software testing
and development.

• Maximum capacity of 2.1PB enables larger data sets or more virtual machines to be
protected.

• Maximum snapshots increased to 1024 for greater snapshot frequency and/or longer
retention periods

• Asynchronous operation greatly improves response time to host

• Enhanced for super-fast data recovery performance
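For VSP-class systems managed through CCI, Thin Image snapshots can also be driven from the raidcom command line. The sequence below is only a hedged sketch: the LDEV IDs, pool name and snapshot group name are invented placeholders, and the exact options must be verified against the Hitachi Thin Image User Guide and the Command Control Interface Command Reference.

# Associate a P-VOL (00:10) with a snapshot S-VOL (00:20) in a Thin Image pool
raidcom add snapshot -ldev_id 00:10 00:20 -pool SnapPool -snapshotgroup NightlySnaps

# Take (store) the point-in-time snapshot for every pair in the snapshot group
raidcom modify snapshot -snapshotgroup NightlySnaps -snapshot_data create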


Hitachi TrueCopy Remote Replication Software

 Features
• Synchronous solution
• Consistency group support
• The remote copy is always a mirror image
 Benefits
• Disaster recovery solution
• Allows for data migration
• Increases the availability of revenue-producing applications
• Provides fast recovery with no data loss

[Figure: synchronous write sequence (steps 1-4) between the local P-VOL and the remote copy]

• Hitachi TrueCopy provides a continuous, nondisruptive, host-independent remote data replication solution for disaster recovery or data migration purposes. Using the TrueCopy Remote Replication software, you can create and maintain mirror images of production volumes at a remote location.

• TrueCopy Remote Replication software can be deployed with Hitachi Universal Replicator
software's asynchronous replication capabilities to provide advanced data replication
among multiple data centers.

• TrueCopy Remote Replication software can be integrated with Hitachi ShadowImage Replication software to enable robust business-continuity solutions. This lets you create a remote copy of primary site or production data that is automatically updated for executing test and development tasks or for operations against production data.
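As a hedged illustration (not taken from this guide), a synchronous TrueCopy pair is created with the same CCI paircreate command used for in-system replication, plus a fence level that controls how the P-VOL behaves if the remote copy cannot be updated. The group name below is a placeholder; consult the TrueCopy user guide when choosing a fence level.

# Create a synchronous TrueCopy pair; -f sets the fence level (data, status or never)
paircreate -g TCGRP -vl -f never

# Verify that both sides report PAIR status
pairdisplay -g TCGRP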


Hitachi Universal Replicator Software

 Features
• Asynchronous replication
• Leverages Virtual Storage Platform
• Performance-optimized disk-based journaling
• Resource-optimized processes
• Advanced 3 Data Center capabilities
• Mainframe and Open Systems support
 Benefits
• Resource optimization
• Mitigation of network problems and significantly reduced network costs
• Enhanced disaster recovery capabilities through 3 Data Center solutions
• Reduced costs due to single pane of glass heterogeneous replication

[Figure: a write (WRT) to the application volume at the primary site Virtual Storage Platform is captured in the local journal (JNL) volume, transferred asynchronously to the secondary site journal, and then applied to the remote application volume]

• The following describes the basic technology behind the disk-optimized journals:

o I/O is initiated by the application and sent to the Virtual Storage Platform.

o It is captured in cache and sent to the disk journal, at which point it is written to
disk.

o The I/O complete is released to the application.

o The remote system pulls the data and writes it to its own journals and then to
the replicated application volumes.

• Universal Replicator software sorts the I/Os at the remote site by sequence and time stamp (mainframe), guaranteeing data integrity.

• Note that Universal Replicator software offers full support for consistency groups
through the journal mechanism (journal groups).
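To make the journal mechanism concrete, the hedged CCI sketch below creates an asynchronous Universal Replicator pair and names the journal on each side; the group name and journal IDs are placeholders, so verify the options against the Universal Replicator and CCI documentation.

# Create an asynchronous (Universal Replicator) pair
# -jp = journal ID on the primary (P-VOL) side, -js = journal ID on the secondary (S-VOL) side
paircreate -g URGRP -vl -f async -jp 0 -js 0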


Hitachi Replication Manager

 Single interface for performing all replication operations


• Managing replication pairs
 ShadowImage Replication
 Copy-On-Write Snapshot
 TrueCopy Remote Replication
 TrueCopy Extended Distance
 Universal Replicator
• Configuring
 Command devices
 Differential management LUs
 Copy-On-Write Snapshot pools
 TrueCopy ports
• Creating alerts
• GUI representation of replication environment

Replication Manager is a business continuity management framework that allows you to centrally configure, monitor and manage in-system or remote business continuity products for both mainframe and open environments.

This uniquely integrated solution allows you to closely monitor critical storage components and
better manage recovery point objectives (RPO) and recovery time objectives (RTO).

This software tool simplifies replication management and optimizes the configuration,
operations and monitoring of the critical storage components of the replication infrastructure. It
leverages the volume replication capabilities of the Hitachi disk array storage systems to reduce
the workload involved in management tasks such as protecting and restoring system data.

Replication Manager reduces the need for manual configuration and provides true replication
function management and workflow capabilities.


Tools Used for Setting Up Replication

 Graphical user interface (GUI)


• Hitachi Storage Navigator and Storage Navigator Modular
 Storage centric
• Hitachi Device Manager
 Data center view of resources, limited or no monitoring options; primary focus is provisioning
 Device Manager agent is required on one server
• Hitachi Replication Manager
 Geographically spread data center and site views, enhanced monitoring and alerting features; primary focus is replication

• Use interface tools to manage replication.

• Interface tools can include the following:

o Device Manager (HDvM) – SN graphical user interface

o Device Manager – Replication Manager

o The command control interface (CCI)


 Command line interface (CLI)


• Used to script replication process
• RAID Manager/CCI software
• RAIDCOM CLI (enterprise and entry-level storage systems only)
• Hitachi Open Remote Copy Manager configuration files
• Command device
• Differential Management Logical Unit (DMLU) (modular storage systems)

• CCI — Command Control Interface

o CCI represents the command line interface for performing replication operations.

• Open Remote Copy Manager (HORCM)

o HORCM files contain the configuration for the volumes to be replicated and are used by the commands available through CCI.

• DMLU = Differential Management Logical Unit
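For illustration only, a minimal HORCM configuration file might look like the sketch below. Every name, address, service, port and device ID shown is an assumed placeholder, and the command device entry in HORCM_CMD is platform dependent; refer to the Command Control Interface Installation and Configuration Guide for the exact format.

HORCM_MON
# ip_address   service   poll(10ms)   timeout(10ms)
localhost      horcm0    1000         3000

HORCM_CMD
# command device (in-band example; the device path format depends on the operating system)
/dev/sdh

HORCM_DEV
# dev_group   dev_name   port#   TargetID   LU#   MU#
SIGRP         dev01      CL1-A   0          1     0

HORCM_INST
# dev_group   ip_address   service
SIGRP         localhost    horcm1

A second HORCM instance (here horcm1) describes the S-VOL side with its own HORCM_DEV entries; CCI commands are then run against the group name (SIGRP) defined in both files.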


Requirements for All Replication Products


 Any volumes involved in replication operations (source and destination):
• Should be same size (in blocks)
• Must be mapped to a port
 Source can be online and in use
 Destination must not be in use or mounted

 Intermix of RAID levels and drive type is supported

 Licensing is capacity independent for local and remote

Replication Operations
 Basic operations when working with replication products
• Paircreate
• Pairsplit
• Pairresync
• Pairsplit -S (pair deletion)
 Commands are consistent across products (in-system or remote replication), but
implementation varies depending on the product
• In-system — all operations with volumes within the same storage system
• Remote — all operations with volumes across different storage systems
• Use manual to identify product specific operations with above commands
 A volume with source data is called a primary volume (P-VOL), and a volume to which the
data is copied is a secondary volume (S-VOL)
[Figure: host I/O continues to the P-VOL while the P-VOL and S-VOL are in PAIR status]

Basic operations:

• Pair creation
• Splitting pairs
• Pair resynchronization

• Pair deletion
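As an illustrative sketch only, these basic operations map onto CCI commands roughly as follows; the group name SIGRP is a placeholder and exact behavior differs per product (see the Command Control Interface Command Reference).

# Create the pair and start the initial copy; -vl marks the local instance's volume as the P-VOL
paircreate -g SIGRP -vl

# Split the pair so the S-VOL can be used independently (for backup, testing and so on)
pairsplit -g SIGRP

# Resynchronize the pair, copying only the differentials accumulated since the split
pairresync -g SIGRP

# Delete the pair; both volumes return to simplex (SMPL) status
pairsplit -g SIGRP -S

# Display the current pair status at any point
pairdisplay -g SIGRP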


Copy Operations

 Data copy operations


• Initial copy
 Results in all data being copied from P-VOL to S-VOL
 Copies everything including empty blocks
• Update copy
 Only differentials are copied
[Figure: the initial copy takes the pair from SMPL through COPY(PD) to PAIR status while all P-VOL data is copied to the S-VOL; afterwards, update copies transfer only the differential data while host I/O continues to the P-VOL]

Creating a pair copies the P-VOL to the S-VOL.

If your Hitachi Virtual Storage Platform G1000 has encryption disk adapters (DKAs), you can
copy an encrypted volume to an unencrypted volume. There is no guard logic to enforce
copying encrypted P-VOLs to only encrypted S-VOLs. Unless there is a specific reason for the
data to become unencrypted, make sure you maintain the encryption by using only encrypted
S-VOLs.
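The transition through SMPL, COPY and PAIR can be tracked or waited on from CCI; the commands below are illustrative only, with a placeholder group name and an arbitrary timeout value.

# Block until every pair in the group reaches PAIR status, then continue the script
pairevtwait -g SIGRP -s pair -t 3600

# Or poll the current status (SMPL, COPY or PAIR) interactively
pairdisplay -g SIGRP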


Thin Provisioning “Awareness”

[Figure: when the pair create instruction is issued, the S-VOL's allocated pages are deleted (zero data is written and the pages are returned to the pool, so S-VOL usage drops to 0%); during the data copy, new pages are allocated on the S-VOL only for areas that have allocated pages on the P-VOL]

 Saves bandwidth and reduces initial copy time: In “thin-to-thin” replication pairings,
only data pages actually consumed (allocated) from the HDP pool need to be copied
during initial copy
 Reduce license costs: You only have to provision license capacity for capacity actually
consumed (allocated) from the HDP pool

Thin provisioning “awareness” applies to all Hitachi replication products, including Hitachi
Universal Replicator!

Page 14-12
Business Continuity Overview
Online Product Overview

Online Product Overview

 How Are You Going To Protect That Data?

https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv

Module Summary

 In this module, you should have learned to:


• Describe the business continuity solutions and the available software

Page 14-13
Business Continuity Overview
Module Review

Module Review

1. List the software that offers a GUI for performing all replication
operations.

2. What options are available for performing replication using CLI?

Page 14-14
15. Hitachi In-System Replication Bundle
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the features, functions and principles of the Hitachi In-System
Replication bundle, including:
 Hitachi ShadowImage Replication
 Hitachi Copy-on-Write Snapshot
 Hitachi Thin Image

Page 15-1
Hitachi In-System Replication Bundle
Hitachi ShadowImage Replication

Hitachi ShadowImage Replication


This section discusses the features and functions of Hitachi ShadowImage Replication.

Introducing ShadowImage Replication

 ShadowImage Replication delivers business continuity

• Simplifies and increases data protection and availability
• Eliminates traditional backup window
• Reduces application testing and development cycle times
• Allows flexible movement of data volumes across platforms
• Enables an uncorrupted copy of production data to be restored if an outage occurs
• Allows disaster recovery testing without impacting production

(Diagram: production volume VOL #1 cloned to VOL #2)

Hitachi ShadowImage® (SI) uses local mirroring technology to create and maintain a full copy
of a data volume within a Hitachi Virtual Storage Platform G1000 (VSP G1000) storage system.
Using SI volume copies (for example, as backups, with secondary host applications, for data
mining, for testing) allows you to continue seamless business operations without stopping host
application input/output (I/O) to the production volume.

It enables server-free backups, which allows customers to exceed service level agreements
(SLAs). It fulfills 2 primary functions:

• Copy open-systems data

• Back up data to a second volume

ShadowImage Replication allows the pair to be split and the secondary volume to be used for system
backups, testing and data mining applications while the customer’s business continues to run. It
uses either graphical or command line interfaces to create a copy and then control data
replication and fast resynchronization of logical volumes within the system.

Page 15-2
Hitachi In-System Replication Bundle
ShadowImage Replication Overview

ShadowImage Replication Overview

 Creates RAID-protected full clone copies of customer-accessible data


within a single Hitachi storage system

 Supports replication between any storage systems within a virtualized


storage pool managed by Hitachi Virtual Storage Platform (VSP),
Hitachi Unified Storage (HUS) VM, or the VSP G1000 family

 Complements Hitachi Universal Replicator (HUR) with AT-TIME split


capability to create clones without suspending HUR relationships

 ShadowImage Replication can be used to replicate thin-provisioned


volumes created by Hitachi Dynamic Provisioning

Nondisruptive: No down time or impact to production application.

Customer Managed: User issues commands to manipulate full-volume copies.

Host Independent: No impact to server processing data. Replication is completely contained


within the storage system.

Data Replication: Continuous movement of data to secondary volumes independent of primary


volume writes.

RAID protection: A disk failure or automatic error correction is handled completely transparently,
with no interruption.

At-Time split means that more than one pair in a CTG (Consistency Group) can be split at the
same time.

Page 15-3
Hitachi In-System Replication Bundle
ShadowImage Replication RAID-Protected Clones

ShadowImage Replication RAID-Protected Clones

 Use Hitachi ShadowImage Replication to create multiple clones of


primary data
 Open systems (enterprise storage)—10 copies total (1 primary, three 1st-level copies,
six 2nd-level copies)
 IBM z/OS or OS/390 platforms—4 copies total (1 primary, three 1st-level
copies)

 No host processing cycles required

 No dependence on O/S, file system or database

 Ability to automate creation of cyclical copies

 Volumes can be resynchronized after a split without another initial copy

ShadowImage In-System Replication software enables you to maintain system-internal copies


of all user data for purposes such as data backup or duplication. The RAID protected duplicate
volumes are created within the same system as the primary volume at hardware speeds.
ShadowImage In-System Replication software is used for UNIX-based and PC server data. It
can provide up to 9 duplicates of one primary volume for UNIX-based and PC server data only.

ShadowImage In-System Replication software for IBM z/OS protects mainframe data in the
same manner. For mainframes, ShadowImage In-System Replication software can provide up to
three duplicates of one primary volume.

In Storage Navigator (Java interface), the Paircreate command creates the first Level 1 “S”
volume. The set command can be used to create a second and third Level 1 “S” volume. And
the cascade command can be used to create the Level 2 “S” volumes off the Level 1 “S”
volumes.

Page 15-4
Hitachi In-System Replication Bundle
Easy to Create ShadowImage Replication Clones

Easy to Create ShadowImage Replication Clones

 Select a volume you want to replicate and identify another volume to contain the clone

 Associate the primary and secondary volumes and data is automatically copied

 Primary volume maintains read and write access during initial copy

 Primary and secondary volumes remain synchronized

(Diagram: primary volume paired with secondary volume)

A pair is created when you:

• Select a volume that you want to duplicate. This becomes the primary volume (P-VOL).

• Identify another volume to contain the copy. This becomes the secondary volume (S-
VOL).

o Associate the P-VOL and S-VOLs.

o Perform the initial copy.

• During the initial copy, the P-VOL remains available for read/write. After the copy is
completed, subsequent write operations to the P-VOL are regularly duplicated to the S-
VOL.

• The P-VOL and S-VOLs remain paired until they are split. The P-VOL for a split pair
continues to be updated but data in the S-VOL remains as it was at the time of the split.
The S-VOL contains a mirror image of the original volume at that point in time.

o S-VOL data is consistent and usable. It is available for read/write access by


secondary host applications.

o Changes to the P-VOL and S-VOLs are managed by differential bitmaps.

o You can pair the volumes again by resynchronizing the update data from P-VOL to S-VOL or from S-VOL to P-VOL, as circumstances dictate.

Page 15-5
Hitachi In-System Replication Bundle
ShadowImage Replication Consistency Groups

ShadowImage Replication Consistency Groups

 Feature allows user to split a group of pairs at the same time and maintain consistency between volumes
• All clones (S-VOLs) in a group are split at exactly the same time
• At the time of the split, all I/O to the group is held until after the split
• No host software required

 Ensures consistency of data on all volumes in the defined group

 Simplifies database backup and restore or complex application testing

(Diagram: CRM, Shipments and Inventory P-VOLs and their S-VOLs split together as one group)

A consistency group (CTG) is a group of pairs on which copy operations are performed
simultaneously and in which the status of the pairs remains consistent. A consistency group can
include pairs that reside in up to 4 primary and secondary systems.

Use a consistency group to perform tasks on the SI pairs in the group at the same time,
including CTG pair-split tasks. Using a CTG to perform tasks ensures the consistency of the pair
status for all pairs in the group.

Page 15-6
Hitachi In-System Replication Bundle
Overview

Overview

 What does ShadowImage Replication do?

• Enables online backup
• Allows application development testing or tape archival from the Hitachi ShadowImage Replication mirror copy

 How does it work?

• Creates a copy (S-VOL) of any active application volume (P-VOL)
• Allows the new copy to be used by another application or system

(Diagram: production data on VOL #1 (P-VOL) copied to backup data on VOL #2 (S-VOL))

• ShadowImage Replication allows you to create a single, local copy of any active
application volume while benefiting from full RAID protection.

o This mirrored copy can be used by another application or system for a variety of
purposes, including data mining, full volume batch cycle testing and backups.

• It can provide up to 9 secondary volumes (S-VOL) per primary volume (P-VOL) within
the same system to maintain redundancy of the primary volume

o It allows you to split and combine duplex volumes and provides access to the contents of static volumes without stopping host access.

• ShadowImage Replication operations are nondisruptive and allow the primary volume of
each volume pair to remain online for all hosts for both read and write I/O operations.

o ShadowImage Replication operations continue unattended to provide


asynchronous internal data backup.

Page 15-7
Hitachi In-System Replication Bundle
Applications

Applications

 Backup and recovery

 Data warehousing and data mining applications

 Application development

 Run benchmarks and reports

• Hitachi ShadowImage Replication is replication, backup and restore software that


delivers the copy flexibility customers need for meeting today’s unpredictable business
challenges.

• With ShadowImage Replication, customers can:

o Execute logical backups at faster speeds and with less effort than previously
possible

o Easily configure backups to execute across a storage area network

o Manage backups from a central location

o Increase the speed of applications

o Expedite application testing and development

o Keep a copy of data for backup or testing

o Ensure data availability

Page 15-8
Hitachi In-System Replication Bundle
ShadowImage Replication Licensing

ShadowImage Replication Licensing

 Total capacity of all P-VOLs and S-VOLs must be less than or equal to
the installed license capacity

 Copies of P-VOL do not count for license capacity

 For dynamic provisioning and dynamic tiering volumes, only the pool capacity being used by the volumes is counted

ShadowImage Licensed Capacity Requirements

• The total capacity of all P-VOLs and S-VOLs must be less than or equal to the installed
license capacity. Volume capacity is counted only once, even if you use the volume more
than once. You do not need to multiply the capacity by the number of times a volume is
used. For example, a P-VOL used as the source volume for 3 pairs is counted only once.

• For a normal volume, the total volume capacity is counted, but for a DP-VOL (a virtual
volume used in dynamic provisioning, dynamic tiering or active flash) the pool capacity
being used by the volume is counted.

• After you start performing pair tasks, monitor your capacity requirements to keep the
used capacity within the capacity of the installed license.

• You can continue using ShadowImage Replication volumes in pairs for 30 days after
licensed capacity is exceeded. After 30 days, the only allowed operation is pair deletion.
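
As a simple worked illustration of the counting rule above (the sizes are made up for the example): if a 1TB normal volume is the P-VOL for 3 ShadowImage pairs with three 1TB S-VOLs, the licensed capacity consumed is 1TB + (3 × 1TB) = 4TB, not 6TB, because the shared P-VOL is counted only once. If the same volumes were DP-VOLs with only 400GB of pool pages allocated to each, only the allocated capacity, roughly 4 × 400GB = 1.6TB, would count against the license.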

Page 15-9
Hitachi In-System Replication Bundle
Management Resources

Management Resources

(Diagram: the storage system contains the P-VOL, S-VOL and a command device (CMD-DEV); a management server with HCS/HRpM, or a management server with CCI, is used to create, split, resynchronize and delete pairs; when CCI is used, a command device must be configured and mapped.)

ShadowImage Replication components include:

• Volume pairs (P-VOLs and S-VOLs) in the Hitachi storage system

• HCS/HRpM and/or command control interface (CCI) or RAID manager in the


management server

Command Control Interface

CCI is a tool that uses the command line interface to run commands that perform most of the
same tasks you can do with HDvM - SN.

You can either:


• Run pair commands directly from a host

• Script CCI commands to have pair operations performed automatically

Page 15-10
Hitachi In-System Replication Bundle
Internal ShadowImage Replication Operation

Internal ShadowImage Replication Operation

(Diagram: 1. the host writes I/O to the P-VOL; 2. write complete is returned to the host; 3. the data is replicated asynchronously to the S-VOL.)

Creating a pair causes Hitachi Virtual Storage Platform G1000 to start the initial copy. During
the initial copy, the P-VOL remains available for read and write operations from the host. After
the initial copy, Virtual Storage Platform G1000 periodically copies the differential data in the P-
VOL to the S-VOL. Subsequent write operations to the P-VOL are regularly duplicated to the S-
VOL. The data in the P-VOL is copied to the S-VOL.

Initial Copy Workflow

• Initial copy is an operation VSP G1000 performs when you create a copy pair.

• Data on the P-VOL is copied to the S-VOL for the initial copy using the following
workflow.

• VSP G1000 goes through the following workflow to create an initial copy:

a. The S-VOLs are not paired. You create the copy pair.
b. The initial copy is in progress (COPY(PD)/COPY status). VSP G1000 copies the P-
VOL data to the S-VOL. A P-VOL continues receiving updates from the host
during the initial copy.
c. The initial copy is complete and the volumes are paired (PAIR status).

Page 15-11
Hitachi In-System Replication Bundle
Operations

Operations

(Diagram: over time, a pair moves through pair create (initial copy), pair split, pair suspend, pair resynchronization and reverse sync/restore, while applications switch between using the P-VOL online and the backup copy; all volumes retain continuous RAID protection.)

• Hitachi ShadowImage Replication operations include:

o paircreate

o pairsplit

o pairresynchronize

Page 15-12
Hitachi In-System Replication Bundle
paircreate Command

paircreate Command

 Establishes a new Hitachi ShadowImage Replication pair

• Initial copy: all data is copied from the P-VOL to the S-VOL; the P-VOL remains available to the host for read/write I/O operations throughout
• Update copy: write I/Os issued to the P-VOL during the initial copy are recorded in a differential bitmap and duplicated to the S-VOL by an update copy after the initial copy completes

• The ShadowImage Replication paircreate operation establishes the newly specified


ShadowImage Replication pair.

• The volumes, which will become the P-VOL and S-VOL, must both be in the SMPL
(simplex) state before becoming a ShadowImage Replication pair.

• ShadowImage Replication initial copy operation copies all data from the P-VOL to the
associated S-VOL.

• P-VOL remains available to all hosts for read and write I/Os throughout the initial copy
operation.

• Write operations performed on the P-VOL during the initial copy operations will always
be duplicated to the S-VOL after the initial copy is complete.

• Status of the pair is COPY while the initial copy operation is in progress; the pair status
changes to PAIR when the initial copy is complete.

• You can select the pace for the initial copy operation when creating pairs.

• The following pace options are available:

o Slower

o Medium

o Faster

Page 15-13
Hitachi In-System Replication Bundle
pairsplit Command

• The slower pace minimizes the impact of ShadowImage Replication operations on


system I/O performance, while the faster pace completes the initial copy operation as
quickly as possible.

• The best timing is based on the amount of write activity on the P-VOL and the amount
of time elapsed between update copies.

• ShadowImage Replication also allows you to replicate data between DP volumes in 2 different pools; the capacity usage for the S-VOL will match the capacity allocated to the P-VOL (see the example below).
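
As a sketch only, the commands below show how the copy pace might be selected when creating a ShadowImage pair with CCI. The group name SIGRP is a placeholder, HORCC_MRCF=1 is assumed to be set, and the -c value (copy pace/track count, roughly 1 = slower to 15 = faster) and the -split behavior should be confirmed against the CCI reference for the storage system in use.

    paircreate  -g SIGRP -vl -c 3          # create the pair at a slow pace to limit impact on host I/O
    paircreate  -g SIGRP -vl -c 15 -split  # alternatively, create and split in one step at the fastest pace
    pairdisplay -g SIGRP -fc               # -fc shows the copy progress and current pair status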

pairsplit Command

 Splits P-VOL and S-VOL pairs in the PAIR state

• Pending updates are completed before the split; the P-VOL remains fully accessible for write I/Os during the split operation
• After the split (PSUS/SSUS), the S-VOL can be made available for host access
• A differential bitmap records all updates to the P-VOL and S-VOL while in split status

The ShadowImage Replication pairsplit operation performs all pending S-VOL updates (those
issued prior to the split command and recorded in the P-VOL bitmap) to make the S-VOL
identical to the state of the P-VOL when the suspend command was issued and then provides
full read/write access to the split S-VOL.

You can split existing pairs as needed and you can use the paircreate operation to create and
split pairs in one step. This feature provides point-in-time backup of your data and facilitates
real data testing by making the ShadowImage Replication copies (S-VOLs) available for host
access.

Page 15-14
Hitachi In-System Replication Bundle
pairsplit Command

When the split operation is complete, the pair status changes to PSUS (pair suspended) and you
have full read/write access to the split S-VOL.

• While the pair is split, the system establishes a bitmap for the split P-VOL and S-VOL
and records all updates to both volumes.

• The P-VOL remains fully accessible during the pairsplit operation.

 pairsplit illustration:

• 10:00 a.m., status = PAIR: host I/O has changed data at locations 3, 10, 15 and 18 on the P-VOL; updates are sent asynchronously to the S-VOL
• 10:00:55 a.m., pairsplit, status = PSUS: both the P-VOL and the S-VOL now accept host I/O and track their changes separately
• 10:01 a.m.: the changed data at locations 3, 10, 15 and 18 is sent from the P-VOL to the S-VOL

Pairsplit Data Flow (a CCI sketch follows the steps below)

1. The P-VOL and S-VOL are in PAIR status as of 10:00 a.m. Data at addresses 3, 10, 15 and 18 are marked for copying as a result of host I/O.

2. The status of the P-VOL and S-VOL is changed to PSUS. The bitmap for the P-VOL contains information about changes that still need to be copied over to the S-VOL.

3. Data at addresses 3, 10, 15 and 18 are sent across to the S-VOL from the P-VOL, making the S-VOL identical to the P-VOL at the time of the split command.
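
A minimal CCI sketch of the flow above, again using the placeholder group SIGRP with running HORCM instances; the timeout value is arbitrary.

    pairsplit   -g SIGRP                  # request the split; pending updates are applied to the S-VOL first
    pairevtwait -g SIGRP -s psus -t 600   # wait until the pair reaches PSUS/SSUS
    pairdisplay -g SIGRP -fc              # confirm the status before giving the backup host access to the S-VOL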

Page 15-15
Hitachi In-System Replication Bundle
pairresync Command – Operation Types

pairresync Command – Operation Types

Normal resync syncs the S-VOL with the P-VOL; reverse resync syncs the P-VOL with the S-VOL. In both cases host I/O to the S-VOL is stopped and the differential bitmaps are merged before the flagged data is copied.

NORMAL: copy direction = P-VOL to S-VOL; the S-VOL bitmap is merged with the P-VOL bitmap and all “flagged” data is copied from the P-VOL to the S-VOL; the P-VOL remains available to hosts for R/W I/O operations.

REVERSE: copy direction = S-VOL to P-VOL; the P-VOL bitmap is merged with the S-VOL bitmap and the “flagged” data is copied from the S-VOL to the P-VOL; the P-VOL is unavailable for R/W I/Os.

The Hitachi ShadowImage Replication pairresync operation resynchronizes the suspended pairs
(PSUS) or the suspended on error pairs (PSUE). When the pairresync operation starts, the pair
status changes to COPY(RS) or COPY(RS-R). The pair status changes to PAIR when the
pairresync operation completes.

ShadowImage Replication allows you to perform 2 types of pairresync operations:

• Normal: The normal pairresync operation resynchronizes the S-VOL with the P-VOL.

o The copy direction for a normal pairresync operation is P-VOL to S-VOL.

o The pair status during a normal resync operation is COPY(RS)

o The S-VOL becomes inaccessible to all hosts for write operations and the P-VOL
is accessible to all hosts for both read and write operations during a normal
pairresync.

o The normal pairresync operation can be executed for pairs with the status PSUS
and PSUE.

Page 15-16
Hitachi In-System Replication Bundle
pairresync Command – Operation Types

• Reverse: The reverse pairresync (pairresync –restore) operation synchronizes the P-


VOL with the S-VOL.

o The copy direction for a reverse pairresync operation is S-VOL to P-VOL.

o The pair status during a reverse resync operation is COPY(RS-R) and the S-VOL
becomes inaccessible to all hosts for write operations during a reverse pairresync
operation.

o The P-VOL is inaccessible for both read and write operations while the reverse pairresync is in progress.

When a pairresync operation is performed on a suspended pair (status = PSUS), the storage
system merges the S-VOL differential bitmap into the P-VOL differential bitmap and then copies
all flagged data from the P-VOL to the S-VOL. When a reverse pairresync operation is
performed on a suspended pair, the storage system merges the P-VOL differential bitmap into
the S-VOL differential bitmap and then copies all flagged data from the S-VOL to the P-VOL.
This ensures that the P-VOL and S-VOL are properly resynchronized in the desired direction.
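
The two resync directions map onto CCI roughly as follows; this is a sketch, SIGRP is again a placeholder group, and the -restore (reverse) option should only be run after host access to the P-VOL has been stopped.

    pairresync  -g SIGRP                  # normal resync: flagged data is copied from the P-VOL to the S-VOL
    pairresync  -g SIGRP -restore         # reverse resync: flagged data is copied from the S-VOL to the P-VOL
    pairevtwait -g SIGRP -s pair -t 3600  # wait for the pair to return to PAIR status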

Page 15-17
Hitachi In-System Replication Bundle
pairresync Command – Normal Resync

pairresync Command – Normal Resync

 Normal resync data flow:

• 10:00 a.m., status = PSUS: the P-VOL has changed data at locations 10, 15, 18 and 29; the S-VOL has changed data at locations 10, 19 and 23
• 10:00:01 a.m., pairresync (normal): the bitmaps are merged and data at locations 10, 15, 18, 19, 23 and 29 is sent from the P-VOL to the S-VOL as an update copy
• 10:00:45 a.m., status = PAIR: subsequent changes are sent by asynchronous update copies

Normal Resync Data Flow

1. The status of the P-VOL and the S-VOL is PSUS (pair suspended)/SSUS (secondary suspended) as of 10:00 a.m.
Data at locations 10, 15, 18 and 29 on the P-VOL are marked as changed.
Data at locations 10, 19 and 23 on the S-VOL are marked as changed.

2. At 10:00 a.m., a pairresync (normal) command is issued. The bitmaps for the P-VOL and S-VOL are merged.
The resulting bitmap has locations 10, 15, 18, 19, 23 and 29 marked as changed.
Data at these locations is sent from the P-VOL to the S-VOL as part of an update copy operation.

3. Once the update copy operation in step 2 is complete, the P-VOL and S-VOL are declared a PAIR again.

Page 15-18
Hitachi In-System Replication Bundle
pairresync Command – Reverse Resync

pairresync Command – Reverse Resync

 Reverse resync data flow:

• 10:00 a.m., status = PSUS: the P-VOL has changed data at locations 10, 15, 18 and 29; the S-VOL has changed data at locations 10, 19 and 23
• 10:00:01 a.m., pairresync (reverse): the bitmaps are merged and data at locations 10, 15, 18, 19, 23 and 29 is sent from the S-VOL to the P-VOL as an update copy
• 10:00:45 a.m., status = PAIR: subsequent changes are sent by asynchronous update copies

Reverse Resync Data Flow

• The status of the P-VOL and the S-VOL is PSUS as of 10:00 a.m.
Data at locations 10, 15, 18 and 29 on the P-VOL are marked as changed.
Data at locations 10, 19 and 23 on the S-VOL are marked as changed.
• At 10:00 a.m., a pairresync (reverse) command is issued.
The bitmaps for the P-VOL and S-VOL are merged.
The resulting bitmap has locations 10, 15, 18, 19, 23 and 29 marked as changed.
The data at these locations is sent from the S-VOL to the P-VOL as part of an update copy operation.
• Once the update copy operation in the previous step is complete, the P-VOL and S-VOL are declared a PAIR again.

Page 15-19
Hitachi In-System Replication Bundle
pairsplit -S Command

pairsplit -S Command

 Stops copy operations and changes volume status back to


simplex (SMPL)

(Diagram: the P-VOL and S-VOL move from PAIR to SMPL; copy operations to the S-VOL stop.)

Volume Grouping

 Perform operations for pairs or a group of pairs

(Diagram: file server, email and database P-VOLs (VOL 3, VOL 4, ...) grouped with their corresponding S-VOLs (VOL 13, VOL 14, ...) so that commands apply to the whole group.)

You can define or set up ShadowImage Replication pairs in groups, which enables you to issue
commands or perform operations for a single pair or a group of pairs.

Page 15-20
Hitachi In-System Replication Bundle
Pair Status Transitions

Pair Status Transitions

(Diagram of pair status transitions: paircreate moves a pair from SMPL to COPY(PD) (initial copy); when the initial copy completes, the status becomes PAIR; pairsplit moves the pair to PSUS (split pair) and pairresync returns it to PAIR via COPY(RS); pairresync -restore returns it to PAIR via COPY(RS-R), with the update copy running in the reverse direction; pairsplit -E moves the pair to PSUE (not synchronized); pairsplit -S from any status other than SMPL returns the volumes to SMPL.)

This illustration shows the ShadowImage Replication pair status transitions and the relationship
between pair status and ShadowImage Replication operations. Starting in the upper left of the
illustration, if a volume is not assigned to a ShadowImage Replication pair, its status is SMPL.

• When you create a pair, the status of the P-VOL and S-VOL changes to COPY(PD).

• When the initial copy operation is complete, the pair status becomes PAIR.

• If Hitachi Unified Storage cannot maintain PAIR status for any reason or if you suspend
on error the pair (pairsplit –E), the pair status changes to PSUE. When you suspend a
pair (pairsplit), the pair status changes to COPY(SP).

• When the pairsplit operation is complete, the pair status changes to PSUS to enable
you to access the suspended S-VOL.

• When you start a pairresync operation, the pair status changes to COPY(RS).

• When you specify reverse mode for a pairresync operation (pairresync –restore),
the pair status changes to COPY(RS-R) (data is copied in the reverse direction from the
S-VOL to the P-VOL).

• When the pairresync operation is complete, the pair status changes to PAIR.

• When you release a pair (pairsplit -S), the pair status changes to SMPL (see the brief CCI sketch below).
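
To tie the transitions above to commands, a brief, hedged example; SIGRP is a placeholder group, and -E and -S are the suspend-on-error and simplex options shown in the diagram.

    pairsplit   -g SIGRP -E   # force the pair into PSUE (suspend on error)
    pairresync  -g SIGRP      # recover from PSUE or PSUS back to PAIR
    pairsplit   -g SIGRP -S   # release the pair; both volumes return to SMPL
    pairdisplay -g SIGRP      # both volumes should now report SMPL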

Page 15-21
Hitachi In-System Replication Bundle
Hitachi Thin Image

Hitachi Thin Image


This section discusses the features and functions of Hitachi Thin Image.

What Is Hitachi Thin Image?

Snapshot technology that rapidly creates up to 1,024 instant point-in-time copies for data protection or application testing purposes

 Saves up to 90% or more disk space by storing only changed data blocks

 Speeds backups from hours to a few minutes, virtually eliminating traditional backup windows

 Near-instant restore of critical data to increase business continuity

 Application- and OS-independent, but can be integrated with application backup triggers

 Fast, simple and reliable snapshot software

(Diagram: host reads and writes go to the P-VOL; only changed data is saved to the pool; snapshots are presented as virtual volumes (V-VOLs))

Hitachi Thin Image (HTI) Technical Details

• Licensing

o For VSP family, Thin Image is part of Hitachi In-System Replication bundle (ISR),
free license key for any customer that has In-System Replication bundle under
maintenance.

o For Hitachi Unified Storage VM, HTI is part of the local protection bundle, free
license key for any customer that has the bundle under maintenance

o Requires Hitachi Dynamic Provisioning (HDP) be licensed for the capacity of the
HTI pool when not using Dynamic Provisioning for source volumes.

• Pool

o Uses a special HTI pool that is created much like an HDP pool; cannot be shared
with a regular HDP pool

o Pool can be up to 4PB and can be dynamically grown and have a customizable
threshold

Page 15-22
Hitachi In-System Replication Bundle
What Is Hitachi Thin Image?

• Shared memory

o Does not use shared memory, rather a cache management device which is
stored in the HTI pool

• V-VOLS

o Uses V-VOLS much like Hitachi Copy-on-Write Snapshot. P-VOL and V-VOL
cannot exceed 4TB size

o Can create 1024 snaps with a max of 32K in an array

• Management

o Through Hitachi Storage Navigator, RAIDCOM CLI (up to 1024 generations) or CCI (up to 64 generations)

o Hitachi Replication Manager (HRpM) support in future HRpM versions

• Copy mechanism

o Uses a Copy-After-Write mechanism instead of Copy-on-Write, except in the following cases, where Copy-on-Write is used:

 When a RAID-1 pool or P-VOL, or an external pool, is used

 When the cache write pending rate is high (>60%)

• Advanced configuration

o Can be combined with Hitachi ShadowImage Replication, Hitachi Universal


Replicator, Hitachi TrueCopy, exactly like Copy-on-Write Snapshot (See the table
in the manual for complete details.)

o Can be used with consistency groups

o Note: Check with your HDS representative for currently supported configurations.

Page 15-23
Hitachi In-System Replication Bundle
What Is Hitachi Thin Image?

(Diagram of a Thin Image snapshot pair in Copy-After-Write mode: 1. the host writes data B to the P-VOL; 2. write complete is returned to the host; 3. the old data A is asynchronously upstaged to cache (read miss) and saved to the HDP snap pool for the V-VOL.)

 Subsequent writes to the same block for the same snapshot do not have to be moved
 Single instance of data stored in HDP snap pool regardless of number of snaps

• Hitachi VSP family:

o The Thin Image software configuration includes a P-VOL, a number of V-VOLs and a data pool (POOL).

o Data pool: Volumes in which only differential data is stored (POOL)

o Snapshot image: A virtual replica volume for the primary volume (V-VOL); this is
an internal volume that is held for restoration purposes

Page 15-24
Hitachi In-System Replication Bundle
Hitachi ShadowImage Replication Clones Versus Thin Image Snapshots

Hitachi ShadowImage Replication Clones Versus Thin Image


Snapshots

ShadowImage Replication software: all data is saved from the P-VOL to the S-VOL; consistent read/write access to the copy is available only in split states.

Thin Image snapshot software: only changed data is saved from the P-VOL to the data pool, and the pool is shared by multiple snapshot images (V-VOLs).

(Diagram: main and backup servers read and write; ShadowImage pairs the P-VOL with a full-size S-VOL, while Thin Image links the P-VOL to virtual volumes whose differential data is saved in a pool.)

Size of physical volume

• The P-VOL and the S-VOL have exactly the same size in Hitachi ShadowImage
Replication.

• In Hitachi Thin Image, less disk space is required for building a V-VOL image since only
part of the V-VOL is on the pool and the rest is still on the primary volume.

Pair configuration

• In ShadowImage Replication, each P-VOL can have only a limited number of S-VOLs (1:3/9 for the VSP family, 1:8 for HUS).

• In Thin Image, there can be up to 1,024 V-VOLs per primary volume.

Restore

• A primary volume can only be restored from the corresponding secondary volume in
ShadowImage Replication.

• With Thin Image, the primary volume can be restored from any snapshot image (V-VOL).

Page 15-25
Hitachi In-System Replication Bundle
Hitachi ShadowImage Replication Clones Versus Thin Image Snapshots

 Simple positioning
• Clones should be positioned for data repurposing and data protection (for example, DR
testing) where performance is a primary concern
• Snapshots should be positioned for data protection (for example, backup) only where
space saving is the primary concern
ShadowImage versus Thin Image comparison:
• Size of physical volume: ShadowImage P-VOL = S-VOL; Thin Image P-VOL ≥ V-VOL
• Pair configuration: ShadowImage 1:3/9 (VSP family) or 1:8 (HUS); Thin Image 1:1024
• Restore: ShadowImage P-VOL can be restored from its S-VOL; Thin Image P-VOL can be restored from any V-VOL

• Clones should be positioned for data repurposing and data protection (for example, DR
testing) where performance is a primary concern.

• Snapshots should be positioned for data protection (for example, backup) only where
space saving is the primary concern.

Page 15-26
Hitachi In-System Replication Bundle
Hitachi Thin Image Technical Details (1 of 3)

Hitachi Thin Image Technical Details (1 of 3)

 License
• Part of the Hitachi In-System Data Replication bundle
• Free license key for any customer that has In-System Replication bundle
under maintenance
• Requires a Hitachi Dynamic Provisioning (HDP) license for capacity of the
Thin Image (HTI) pool when not using HDP for source volumes

 Pool
• Uses a special HTI pool, which is created similarly to an HDP pool
• Cannot be shared with a regular HDP pool or with Hitachi Copy-on-Write
Snapshot
• Pool can be up to 4PB, grow dynamically and have a customizable threshold

Hitachi Thin Image Technical Details (2 of 3)

 Shared memory
• Does not use shared memory except for difference tables
• Uses a cache management device, which is stored in the HTI pool

 V-VOLs
• Uses V-VOLs like Hitachi Copy-on-Write Snapshot, but P-VOL and V-VOL
cannot exceed 4TB size
• Able to create 1,024 snapshots with a max of 32K in an array
• Does not have anonymous snapshot feature of Hitachi Unified Storage 100
Copy-on-Write

Page 15-27
Hitachi In-System Replication Bundle
Hitachi Thin Image Technical Details (3 of 3)

Hitachi Thin Image Technical Details (3 of 3)

 Management
• Managed through Hitachi Storage Navigator, RAIDCOM CLI (up to 1,024
generations) or CCI (up to 64 generations)
• Hitachi Replication Manager support in a future release
 Copy mechanism
• Employs a Copy-After-Write mechanism instead of Copy-on-Write whenever possible
 Advanced configuration
• Can be combined with Hitachi ShadowImage Replication, Hitachi Universal
Replicator and Hitachi TrueCopy software exactly like Copy-on-Write (see
tables in manual for complete details)
• Can be used with consistency groups

Hitachi Thin Image Components

 Thin Image basic components


• S-VOL – volume used by the host to access a snapshot and does not have physical
disk space
• Thin Image pool – consists of a group of logical volumes

(Diagram: the host can access the S-VOL; the P-VOL and the differential data in the TI pool back the snapshot.)

Page 15-28
Hitachi In-System Replication Bundle
Comparison: Hitachi Copy-on-Write Snapshot and Hitachi Thin Image

Comparison: Hitachi Copy-on-Write Snapshot and Hitachi Thin


Image

Features: Copy-on-Write Snapshot versus Thin Image (VSP G1000)
• VSP G1000 support: No / Yes
• Number of generations per system: 16K / 1M
• Number of generations per P-VOL: 1-64 / 1-1024
• Pool capacity per pool: 30TB / 4PB
• Pool capacity per system: 30TB / 12.3PB
• Pool capacity per P-VOL: 30TB / 768TB
• Number of pools per system: 128 / 128
• Copy method: CoW / CAW and CoW

For latest specifications, refer to technical documentation. You can access the current
replication customer documentation for Hitachi Virtual Storage Platform G1000 and Hitachi
Virtual Storage Platform at: http://www.hds.com/corporate/tech-docs.html

For the purposes of this table, CoW stands for Copy-on-Write and CAW stands for Copy-After-
Write.

Page 15-29
Hitachi In-System Replication Bundle
Operations

Operations

 Overview – Hitachi Copy-on-Write Snapshot and Hitachi Thin Image in Copy-on-Write mode

(Diagram: 1. the host writes to cache; 2. if the block has not previously been moved (overwrite condition), the old data block is moved to the pool; 3. I/O complete is returned to the host; 4. the new data block is destaged to the P-VOL.)

Copy-on-Write Method Workflow

In the Copy-on-Write method, store snapshot data in the following steps:

1. The host writes data to a P-VOL.

2. Snapshot data for the P-VOL is stored.

3. The write completion status is returned to the host after the snapshot data is stored.

Page 15-30
Hitachi In-System Replication Bundle
Operations

 Overview – Hitachi Thin Image Copy-After-Write mode

(Diagram: 1. the host writes to cache; 2. I/O complete is returned to the host immediately; 3. if the block has not previously been moved (overwrite condition), the old data block is moved to the pool; 4. the new data block is destaged to the P-VOL.)

Copy-After-Write Method Workflow

In the Copy-After-Write method, store snapshot data in the following steps:

1. The host writes data to a P-VOL.

2. The write completion status is returned to the host before the snapshot data is stored.

3. Snapshot data for the P-VOL is stored in the background.

 Scenario – 3 V-VOLs per P-VOL, snapshots at 8-hour intervals

(Diagram: one physical P-VOL with three V-VOLs, V1, V2 and V3, backed by a pool)

Page 15-31
Hitachi In-System Replication Bundle
Operations

 Pairsplit for V1 issued Monday 8 a.m. – V1 holds a snapshot of the P-VOL data at that time

 After the snapshot at 8 a.m., the application writes new P-VOL data to cache

Page 15-32
Hitachi In-System Replication Bundle
Operations

 Old data is moved to the pool; new data is destaged to the P-VOL

 Pairsplit for V2 issued at 4 p.m. Monday

Page 15-33
Hitachi In-System Replication Bundle
Operations

 After the snapshot at 4 p.m., the application writes new P-VOL data to cache

 Old data is moved to the pool; new data is destaged to the P-VOL

Page 15-34
Hitachi In-System Replication Bundle
Operations

 Pairsplit for V3 issued at midnight

 After the snapshot at midnight, the application writes new P-VOL data to cache

Page 15-35
Hitachi In-System Replication Bundle
Operations

 Old data is moved to the pool; new data is destaged to the P-VOL

 V1 is resynced and split again Tuesday 8 a.m., and the cycle starts again (V2 still holds the Monday 4 p.m. snapshot and V3 the midnight snapshot)

Page 15-36
Hitachi In-System Replication Bundle
Operations

 Restore is possible from any snapshot image (V-VOL)

• Read/write access to the P-VOL is possible immediately after the restore command
• Only differential data is copied back

(Diagram: the P-VOL restored from V2 while V1 and V3 retain their snapshots)

• Restoring a primary volume can be done instantly from any V-VOL because it does not involve immediately moving data from the pool to the P-VOL. Only the pointers need to be modified.

• The data is then copied in the background from the pool to the P-VOL.

• If the P-VOL became physically damaged, all V-VOLs would be destroyed as well, and a restore would not be possible.

Page 15-37
Hitachi In-System Replication Bundle
Module Summary

Module Summary

 In this module, you should have learned to:


• Describe the features, functions and principles of the Hitachi In-System
Replication bundle, including:
 Hitachi ShadowImage Replication
 Hitachi Copy-on-Write Snapshot
 Hitachi Thin Image

Module Review

1. Describe the usage of differential bitmaps in pair operations.

2. List the Hitachi ShadowImage Replication pair operation commands.

3. Under what conditions will ShadowImage Replication pairs be


suspended? What is the resulting status?

4. Describe the operational differences between Hitachi Copy-on-Write


Snapshot and ShadowImage Replication.

5. A ShadowImage Replication license has to be installed before


Copy-on-Write Snapshot operations are possible. (True/False)

Page 15-38
16. Hitachi Remote Replication
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the features, functions and principles of Hitachi TrueCopy Remote
Replication bundle and Hitachi Universal Replicator
• Describe the features and functions of global-active device

Page 16-1
Hitachi Remote Replication
Hitachi TrueCopy Remote Replication Bundle (Synchronous)

Hitachi TrueCopy Remote Replication Bundle (Synchronous)


This section discusses the Hitachi TrueCopy Remote Replication bundle for synchronous
replication.

TrueCopy Remote Replication Bundle Overview

 Hitachi TrueCopy Remote Replication bundle mirrors data between


Hitachi storage systems across metropolitan distances

 Supports replication between any storage systems within a virtualized


storage pool managed by Hitachi Virtual Storage Platform (VSP) family
products, Hitachi Unified Storage (HUS) VM

 Can be combined with Hitachi Universal Replicator (HUR) to support up


to 4 data centers in a multi-data center disaster recovery configuration

 Enables multiple, nondisruptive point-in-time copies in the event of


logical corruption up to the point of an outage when combined with
Hitachi ShadowImage Replication or Hitachi Thin Image software

• TrueCopy Remote Replication bundle:

o Is recommended for mission-critical data protection requirements that mandate


recovery point objectives of zero or near-zero seconds (RPO=0).

o Can remotely copy data to a second data center located up to 200 miles/320 km
away (distance limit is variable, but typically around 50–60 km for HUS).

o Uses synchronous data transfers, which means data from the host server
requires a write acknowledgment from the remote local, as an indication of a
successful data copy, before the server host can proceed to the next data write
I/O sequence.

• In addition to disaster recovery, use case examples for TrueCopy Remote Replication
bundle also include:

o Test and development

o Data warehousing and mining

o Data migration purposes

Page 16-2
Hitachi Remote Replication
Typical TrueCopy Remote Replication Bundle Environment

Typical TrueCopy Remote Replication Bundle Environment

(Diagram: a primary host server with CCI connected to the local array and an optional secondary host server with CCI connected to the remote array; each array has a command device; the TrueCopy P-VOL on the local array is connected to the S-VOL on the remote array over Fibre Channel or iSCSI; modular systems also require a DM-LU (Differential Management LU) on each array; a management workstation manages both systems.)

Typical Hitachi TrueCopy Remote Replication Bundle Environment

• A typical configuration consists of the following elements (many, but not all require user
setup):

o Two Hitachi arrays — 1 on the local side connected to a host and 1 on the
remote side connected to the local array

o Connections are made through Fibre Channel or iSCSI

o A primary volume (P-VOL) on the local array that is to be copied to the


secondary volume (S-VOL) on the remote side (primary and secondary volumes
may be composed of several LUs)

o A differential management LU on local and remote arrays, which hold TrueCopy


Remote Replication bundle information when the array is powered down
(modular only)

o Interface and command software used to perform TrueCopy Remote Replication


bundle operations (Command software uses a command device (volume) to
communicate with the arrays.)

• TrueCopy Remote Replication bundle replication between Hitachi enterprise storage and
Hitachi modular storage is not supported.

Page 16-3
Hitachi Remote Replication
Basic TrueCopy Remote Replication Bundle Operation

• Steps to perform TrueCopy Remote Replication bundle operation:

o Preparation

 Make a plan and gather all required information.

 Map the necessary volumes to their servers.

 Install the TrueCopy Remote Replication bundle license key on both


systems.

 Create and configure command devices on both systems.

 Create and configure DM-LUs on both systems (modular only).

 Configure the remote copy paths.

• Minimum 2 paths between the sites for redundant configuration

o Install, configure and start CCI.

o Define the pairs and then run paircreate (a brief CCI sketch of these final steps follows below).

 CCI (RAID Manager) is optional. (You can use the GUI; command devices are only necessary when CCI is used.)
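
A hedged sketch of those last two steps with CCI for a TrueCopy pair (remote replication mode, so HORCC_MRCF is not set); the group name TCGRP, the instance number and the fence level are placeholders to be chosen to match the HORCM files and the required RPO.

    horcmstart.sh 0                      # start the local instance (the peer instance runs on the remote site's server)
    paircreate  -g TCGRP -vl -f data     # create the TrueCopy pair; fence level "data" blocks P-VOL writes if the link fails
    pairevtwait -g TCGRP -s pair -t 7200 # wait for the initial copy to finish
    pairdisplay -g TCGRP -fc             # verify PAIR status and copy progress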

Basic TrueCopy Remote Replication Bundle Operation

 Duplicates production volume data to a remote site

 Data at remote site remains synchronized with local site as data changes occur

 Supported with Fibre Channel or iSCSI

 Utilizes synchronous data transfers from host server

 Requires write acknowledgment before new data is written, which ensures


RPO=0 data integrity

 Can be teamed with Hitachi ShadowImage Replication or Hitachi Thin Image


• Restore from 1 or more copies of critical data
• Cascade production data to other workgroups

Page 16-4
Hitachi Remote Replication
Basic TrueCopy Remote Replication Bundle Operation

About Hitachi TrueCopy Remote Replication Bundle

• TrueCopy creates a duplicate of a production volume to a secondary volume located at a


remote site.

• Data in a TrueCopy Remote Replication bundle backup stays synchronized with the data
in the local array.

o This happens when data is written from the host to the local array, then to the
remote system through Fibre Channel or iSCSI link.

o The host holds subsequent output until acknowledgement is received from the
remote array for the previous output.

• When a synchronized pair is split, writes to the primary volume are no longer copied to
the secondary side. Doing this means that the pair is no longer synchronous.

• Output to the local array is cached until the primary and secondary volumes are re-
synchronized.

• When resynchronization takes place, only the changed data is transferred, rather than
the entire primary volume, which reduces copy time.

• Use TrueCopy with ShadowImage Replication or Copy-on-Write Snapshot, on either or


both local and remote sites

o These in-system copy tools allow restoration from 1 or more additional copies of
critical data.

• Besides disaster recovery, TrueCopy Remote Replication bundle backup copies can be
used for test and development, data warehousing and mining or migration applications.

• Recovery objectives

o Recovery time objective (RTO): Time within which business functions or


applications must be restored

o Recovery point objective (RPO): Point in time to which data must be restored to
successfully resume processing

Page 16-5
Hitachi Remote Replication
TrueCopy Remote Replication Bundle (Synchronous)

TrueCopy Remote Replication Bundle (Synchronous)

 Zero data loss possible with fence-level = data


 Performance: dual write plus 1 round-trip latency plus overhead
 Up to 300 km supported – just 2 Americas customers with >100 km
 Consistency groups across open and mainframe supported

(1) Host Write (2) Synchronous Remote Copy

P-VOL S-VOL

(4) Write Complete (3) Remote Copy Complete

Previously 2 Round Trips;


Now, 1 Round Trip!

Remote Mirror of Any Data

• The remote copy is always identical to the local copy.

• Allows very fast restart/recovery with no data loss

• No dependence on host operating system, database or file system

o Distance limit is variable, but typically less than 25 miles (around 50–60 km for Hitachi Unified Storage)

• Impacts application response time

• Distance depends on application read/write activity, network bandwidth, response-time


tolerance and other factors.

o Remote I/O is not posted as complete to the application until it is written to a


remote system.

o The remote copy is always a mirror image

o Provides fast recovery with no data loss

o Limited distance – response-time impact

Page 16-6
Hitachi Remote Replication
How TrueCopy Remote Replication Works

How TrueCopy Remote Replication Works

(Diagram: 1. write I/O is transferred from the host server; 2. the local storage system receives the write data in cache; 3. the data is synchronously transferred from the local system cache to the remote system cache over dark fiber, DWDM, ATM or IP; 4. the remote system returns a write acknowledgement; consistency groups C/TG_0 and C/TG_1 preserve write order for the related P-VOLs (Orders DB tablespace, dataspace and log file; CRM; Shipments; Inventory) and their S-VOLs at the remote site.)

How Hitachi TrueCopy Remote Replication Bundle Works

TrueCopy Remote Replication bundle achieves zero recovery point objective (RPO) with an
immediate replication of data from the local storage system (P-VOL) over to the remote system
(S-VOL) using a first in first out (FIFO) mirrored data write sequence. Integrity of the replication
is maintained with acknowledgement from the remote system, which indicates a successful
write.

TrueCopy Remote Replication bundle synchronous data write sequence is summarized as


follows:

• Write I/O is transferred from server.

• When the local storage device (MCU) receives the write data in cache, TrueCopy Remote
Replication bundle synchronously transfers the data from the MCU’s cache to the remote
system’s (RCU) cache.

• RCU sends a write acknowledgement to MCU once the data is received in its cache.

• When the MCU receives the write acknowledgement, it sends I/O complete (channel end
and device end) to the host.

Page 16-7
Hitachi Remote Replication
Easy to Create Clones

Easy to Create Clones

 Create pair: establishes the initial copy between a local (P-VOL) and remote (S-VOL) volume (Simplex > Synchronizing > Paired)

 Split pair: the S-VOL is made identical to the P-VOL, then copying stops (Split)

 Resynchronize pair: changes to the P-VOL since a pair split are copied to the S-VOL (Resync)

 Swap pair: P-VOL and S-VOL roles are reversed (Swap)

 Delete pair: pairs are deleted and returned to simplex (unpaired) status (Delete)

Hitachi TrueCopy Remote Replication Bundle Pair Operations

Basic TrueCopy Remote Replication bundle operations consist of creating, splitting, re-
synchronizing, swapping and deleting a pair:

• Create pair:

o This establishes the initial copy using 2 logical units that you specify
o Data is copied from the P-VOL to the S-VOL
o The P-VOL remains available to the host for read and write throughout the
operation.
o Writes to the P-VOL are duplicated to the S-VOL
o The pair status changes to Paired when the initial copy is complete.
• Split:

o The S-VOL is made identical to the P-VOL and then copying from the P-VOL
stops.
o Read/write access becomes available to and from the S-VOL
o While the pair is split, the array keeps track of changes to the P-VOL and S-VOL
in track maps.
o The P-VOL remains fully accessible in Split status.

Page 16-8
Hitachi Remote Replication
Easy to Create Clones

• Resynchronize pair:

o When a pair is re-synchronized, changes in the P-VOL since the split are copied
to the S-VOL, making the S-VOL identical to the P-VOL again.

o During a resync operation, the S-VOL is inaccessible to hosts for write operations;
the P-VOL remains accessible for read/write.

o If a pair was suspended by the system because of a pair failure, the entire P-VOL
is copied to the S-VOL during a resync.

• Swap pair:

o The pair roles are reversed

• Delete pair

o The pair is deleted and the volumes return to Simplex status (a brief CCI sketch of the swap and delete operations follows)
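
A hedged sketch of how the swap and delete operations are typically driven from CCI; TCGRP is a placeholder group, horctakeover is normally issued on the secondary side during a planned or unplanned site switch, and the exact result depends on the pair status at the time.

    horctakeover -g TCGRP         # swap/takeover: the S-VOL becomes the new P-VOL and the copy direction reverses
    pairresync   -g TCGRP -swaps  # alternative swap issued from the S-VOL side while both sites are healthy
    pairsplit    -g TCGRP -S      # delete: both volumes return to Simplex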

Page 16-9
Hitachi Remote Replication
Volume States

Volume States

 Hitachi TrueCopy Remote Replication bundle volume pairs have 5 states: SMPL, COPY, PAIR, PSUS and PSUE

(Diagram: paircreate moves a pair from SMPL to COPY and then to PAIR; pairsplit moves it to PSUS and pairresync returns it to PAIR; a failure moves it to PSUE; pairsplit -S from PAIR, PSUS or PSUE returns the volumes to SMPL.)

TrueCopy Remote Replication bundle volume pairs have 5 typical states. These states are used
to manage the health of volume pairs.

• SMPL (simplex): This volume is not currently assigned to a volume pair.

• COPY: The initial copy operation for this pair is in progress; this pair is not yet
synchronized. During this status, the P-VOL has read and write access and S-VOL has
read only status.

• PAIR: This volume is synchronized. The updates to the P-VOL are duplicated to the S-
VOL.

• PSUS/SSUS (primary/secondary suspended): This pair is not synchronized because the


user has split this pair (pairsplit).

• PSUE (pair suspended due to error): This pair is not synchronized. It has been
suspended due to an error condition.

• pairresync command – issued after the failed components are repaired to return the pair to PAIR status

Page 16-10
Hitachi Remote Replication
Hitachi Universal Replicator

Hitachi Universal Replicator


This section presents Hitachi Universal Replicator and includes general functions, benefits,
hardware and additional information.

Hitachi Universal Replicator Overview

 Universal Replicator (HUR) is an asynchronous, continuous, nondisruptive,


host-independent remote data replication solution for disaster recovery or data
migration over long distances
 HUR is available for the following Hitachi enterprise storage platforms:
• Hitachi Virtual Storage Platform
• Hitachi Virtual Storage Platform Gx00
• Hitachi Virtual Storage Platform G1000
 HUR and Hitachi ShadowImage Replication bundle can be used together in the
same storage system and on the same volumes to provide multiple copies of
data at the primary and/or remote sites
 TrueCopy Remote Replication bundle synchronous and HUR software can be
combined to allow advanced 3-data center configurations for optimal data
protection

Universal Replicator presents a solution to avoid cases when a data center is affected by a
disaster that stops operations for a long period of time.

In the Universal Replicator system, a secondary storage system is located at a remote site from
the primary storage system at the main data center and the data on the primary volumes (P-
VOLs) at the primary site is copied to the secondary volumes (S-VOLs) at the remote site
asynchronously from the host write operations to the P-VOLs.

Journal data is created synchronously with the updates to the P-VOL to provide a copy of the
data written to the P-VOL.

The journal data is managed at the primary and secondary sites to ensure the consistency of
the primary and secondary volumes.

TrueCopy Synchronous software and HUR can be combined together to allow advanced 3-data
center configurations for optimal data protection.

Page 16-11
Hitachi Remote Replication
Hitachi Universal Replicator Benefits

Hitachi Universal Replicator Benefits

 Ensure business continuity


 Optimize resource usage (lower the cache and resource consumption
on production/primary storage systems)
 Improve bandwidth utilization and simplify bandwidth planning
 Improve operational efficiency and resiliency (mitigate the impact of link
failures between sites)
 More flexibility in trading off between recovery point objective and cost
 Implement advanced multi data center support more easily
 Move data among levels of tiered storage systems more easily

Page 16-12
Hitachi Remote Replication
Hitachi Universal Replicator Functions

Hitachi Universal Replicator Functions

 Host I/O process completes immediately after storing write data to the cache memory of
primary storage system master control unit (MCU)
• Then the data is asynchronously copied to secondary storage system remote disk control unit
(RCU)
 MCU stores data to be transferred in journal cache to be destaged to journal volume in the
event of link failure
 Universal Replicator software provides consistency of copied data by maintaining write
order in copy process
• To achieve this, it attaches write order information to the data in copy process

(Diagram: 1. the primary host writes I/O to the P-VOL on the primary storage system (MCU); 2. write complete is returned immediately; 3. journal data is asynchronously copied from the master journal volume (JNL-VOL) on the MCU to the restore journal volume on the secondary storage system (RCU); 4. remote copy complete, and the data is restored to the S-VOL.)

Remote replication for a Universal Replicator (HUR) pair is accomplished using the master journal volume on the primary storage system and the restore journal volume on the secondary storage system. As shown in the diagram above, the P-VOL data and subsequent updates are transferred to the S-VOL by obtain journal, journal copy and restore journal operations involving the master and restore journal volumes (a brief CCI sketch of creating such a pair follows the list below).

Replication Operations

• Obtain journal: Obtain journal operations are performed when the primary storage
system writes journal data to the master journal volume.

• Journal copy: Journal copy operations are performed when journal data is copied from
the master journal volume to the restore journal volume on the secondary storage
system.

• Restore journal: Restore journal operations are performed when the secondary
storage system writes journal data in the restore journal volume to the S-VOL.
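
A minimal sketch of creating a Universal Replicator pair with CCI, assuming the master and restore journals have already been created on the two systems; the group name URGRP and the journal IDs are placeholders, and -f async is what distinguishes an HUR pair from a synchronous TrueCopy pair.

    paircreate  -g URGRP -vl -f async -jp 0 -js 0  # -jp = master (primary) journal ID, -js = restore (secondary) journal ID
    pairevtwait -g URGRP -s pair -t 7200           # the initial copy can take considerable time over long distances
    pairdisplay -g URGRP -fcx                      # check pair status and copy progress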

Page 16-13
Hitachi Remote Replication
Hitachi Universal Replicator Hardware

Hitachi Universal Replicator Hardware

 Remote connections (links)


• Bidirectional Fibre connections to send and receive data between MCU and RCU
• Minimum 4 initiator ports, 2 (redundancy) in each system
• Minimum 4 RCU target ports, 2 (redundancy) in each system
• Unlike TrueCopy Remote Replication, Universal Replicator remote copy connections
(links) are not assigned to control units
• Only Fibre Channel is supported

(Diagram: on the MCU, two initiator ports connect to two RCU target ports on the RCU, and two RCU target ports on the MCU connect to two initiator ports on the RCU, giving redundant paths in each direction.)

MCU = system master control unit

RCU = system remote disk control unit

• Bidirectional Fibre Channel connections provide the pathways to send and receive data

• At least 4 Fibre Channel connections are required:

o Fibre connection 1 makes the request to the remote site

o Fibre connection 2 carries the read journal command and the journal copy

• Each site involved in data replication will include:

o Initiator > RCU target

 Two Initiator ports on each system for redundancy

o RCU target > initiator

 Two RCU target ports on each system for redundancy

• In Hitachi Virtual Storage Platform mid-range, all ports are bi-directional, therefore there
is no need to change the port attribute.
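On enterprise systems, the initiator and RCU target attributes are typically assigned either from the management GUI or with the CCI raidcom command. A minimal sketch, assuming port names CL1-A and CL2-A and the raidcom notation in which MCU corresponds to the initiator attribute and RCU to the RCU target attribute:

# Assign the initiator attribute to one local port and the RCU target attribute to another
# (port names are placeholders; repeat on each storage system in the remote copy relationship)
raidcom modify port -port CL1-A -port_attribute MCU
raidcom modify port -port CL2-A -port_attribute RCU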

Page 16-14
Hitachi Remote Replication
Hitachi Universal Replicator Components

Hitachi Universal Replicator Components

 Master control unit (MCU)


• It is the storage array at the primary site and contains P-VOLs and master
journal group

 Remote control unit (RCU)


• It is the storage array at the remote site and contains S-VOLs and restore
journal group

 Journal group
• A journal group consists of data volumes and journal volumes
• Maintains volume consistency by operating on multiple data volumes with
one command

UR System Components

• The Hitachi Virtual Storage Platform G1000 systems at the primary and secondary sites.
The primary storage system (MCU) contains the P-VOLs and master journal volumes and
the secondary storage system (RCU) contains the S-VOLs and restore journal volumes.

• The master journal consists of the primary volumes and master journal volumes.

• The restore journal consists of the secondary volumes and restore journal volumes.

o The data path connections between the systems. The primary and secondary
VSP G1000 systems are connected using dedicated Fibre Channel data paths.
Data paths are routed from the Fibre Channel ports on the primary storage
system to the ports on the secondary storage system and from the secondary
storage system to the primary storage system.

o The Hitachi Universal Replicator software on both the primary storage system
and the secondary storage system

• The hosts connected to the primary and secondary storage systems. The hosts are
connected to the Virtual Storage Platform G1000 systems using Fibre Channel or Fibre
Channel over Ethernet (FCoE) target ports.

Page 16-15
Hitachi Remote Replication
Hitachi Universal Replicator Components

 Journal group (continued)


• Performs the same function as consistency groups in Hitachi TrueCopy
Remote Replication
• Master journal group in the MCU contains P-VOLs and master journal
volumes
• Restore journal group in the RCU contains S-VOLs and restore journal
volumes

 Journal volumes
• A journal volume stores differential data

 Command device if CCI is used

UR System Components

• The Hitachi Virtual Storage Platform G1000 systems at the primary and secondary sites.
The primary storage system (MCU) contains the P-VOLs and master journal volumes and
the secondary storage system (RCU) contains the S-VOLs and restore journal volumes.

• The master journal consists of the primary volumes and master journal volumes.

• The restore journal consists of the secondary volumes and restore journal volumes.

o The data path connections between the systems. The primary and secondary
VSP G1000 systems are connected using dedicated Fibre Channel data paths.
Data paths are routed from the Fibre Channel ports on the primary storage
system to the ports on the secondary storage system and from the secondary
storage system to the primary storage system.

o The Hitachi Universal Replicator software on both the primary storage system
and the secondary storage system

• The hosts connected to the primary and secondary storage systems. The hosts are
connected to the Virtual Storage Platform G1000 systems using Fibre Channel or Fibre
Channel over Ethernet (FCoE) target ports.

Page 16-16
Hitachi Remote Replication
Hitachi Universal Replicator Specifications

Hitachi Universal Replicator Specifications

 P-VOL and S-VOL


• Only Open-V emulation type is supported for Universal Replicator (HUR) pair volumes
• HUR requires a one-to-one relationship between the volumes of the pairs:
PVOL : SVOL = 1 : 1
• Supports external devices, dynamic provisioning (DP) volumes (V-VOLs)
 Journal group and journal volume
• Journal volumes are used to store differential data
• Journal groups are used to maintain volume consistency
• Only Open-V emulation type is supported for journal volumes
• Journal volume must not have a path definition
• Each of the journal volumes can have different volume sizes and different RAID
configurations in a single journal group
• Journal volumes can be added to journal groups dynamically

P-VOL and S-VOL Specifications

• Only Open-V emulation type is supported for HUR pair volumes.

• HUR requires a one-to-one relationship between the volumes of the pairs; PVOL : SVOL
= 1 : 1.

• Supports external devices and DP volumes (V-VOLs)

Hitachi Virtual Storage Platform G1000 Storage Systems

• HUR operations involve 2 VSP G1000 systems, one at the primary site and one at the
secondary site.

o The primary storage system consists of the master control unit (MCU) and service processor (SVP).

o The secondary storage system consists of the remote control unit and its SVP.

Page 16-17
Hitachi Remote Replication
Hitachi Universal Replicator Specifications

• Each Virtual Storage Platform G1000 system can function simultaneously as a primary
and secondary storage system.

o The primary storage system communicates with the secondary storage system
over dedicated Fibre Channel remote copy connections.

o The primary storage system controls the P-VOL and the following operations:

 Host I/Os to the P-VOL

 P-VOL data copy to the master journal

o The secondary storage system controls the S-VOL and the following operations:

 Initial copy and update copy between the P-VOL and the restore journal

 Journal commands to the primary storage system

 Journal data copy from the master journal to the restore journal

 Restore journal data copy to the S-VOL

 Pair status management and configuration (for example, rejecting write


I/Os to the S-VOLs)

Page 16-18
Hitachi Remote Replication
Three-Data-Center Cascade Replication

Three-Data-Center Cascade Replication

 Hitachi TrueCopy Remote Replication synchronous software and Hitachi Universal Replicator (HUR) can be combined into a 3-data-center (3DC) configuration

Figure: 3DC cascade. The primary site P-VOL is copied by TrueCopy (synchronous) or HUR to the intermediate site, where the S-VOL and its journal group are shared as the HUR P-VOL; HUR then copies the data to the S-VOL and journal group at the remote site.

TrueCopy S-VOL shared as Universal Replicator P-VOL in intermediate site

3DC Cascade Configuration With 3 HUR Sites

With HUR, you can set up 1 intermediate site and 1 secondary site for 1 primary site. It is advisable to also create a HUR pair that connects the primary and secondary sites, so that a remote copy system between the host operation site and the backup site can be brought up immediately in the event of a failure or disaster at the intermediate site. A HUR pair created to complete this triangle-shaped remote copy connection among the 3 sites is called a HUR delta resync pair. By creating a HUR delta resync pair in advance, you can transfer the copy operations from the primary-to-secondary connection back to the intermediate-to-secondary connection in a short time once the intermediate site failure is corrected and the intermediate site is brought back online.

Page 16-19
Hitachi Remote Replication
Three-Data-Center Multi-Target Replication

Three-Data-Center Multi-Target Replication

 Primary volume is shared P-VOL for 2 remote systems
 Mainframe supports up to 12x12x12 controller configurations
 Open systems support up to 4x4x4 controller configurations
 Requires Hitachi Disaster Recovery Extended and, for mainframe, BCM Extended

Figure: 3DC multi-target. The primary site P-VOL (with its journal group) is copied by TrueCopy (synchronous) or HUR to the S-VOL and journal group at one secondary site, and by HUR to the S-VOL and journal group at a second secondary site; an optional delta resync pair connects the two secondary sites.
BCM=Business Continuity Manager

3DC Multi-Target Configuration With 3 Hitachi Universal Replicator Sites

With Universal Replicator (HUR), you can set up 2 secondary sites for 1 primary site. It is
advisable that you create a HUR pair that connects the 2 secondary sites so that the remote
copy system created with the host operation site and backup site can be created immediately in
the event of a failure or disaster at the primary site. A HUR pair that is created to make a
triangle-shaped remote copy connection among the 3 sites is called a HUR delta resync pair. By
creating a HUR delta resync pair in advance, you can transfer the copying operations from
between the secondary sites back to from the primary to the secondary site in a short time
when the failure is corrected and the primary site is brought back online.

Page 16-20
Hitachi Remote Replication
Four-Data-Center Multi-Target Replication

Four-Data-Center Multi-Target Replication

 Typically for migration


 Supported in both mainframe and open systems environments

Figure: 4DC multi-target with cascade. The primary site P-VOL is copied by TrueCopy (synchronous) to a secondary site (3DC multi-target) and by HUR to another secondary site (2DC), with an optional delta resync pair between the two secondary sites; an additional HUR cascade hop copies the data on to a fourth site. Each site uses its own journal group.

Replication Tab in Hitachi Command Suite

Figure: Dashboard analysis flow. From the dashboard, analysis proceeds in either Wizard Mode (for novice users), adding check items associated with new metrics, or Graph Mode (for expert users), adding graphs for new metrics; new metrics can also be added for export data, and results can be exported to PDF.

Page 16-21
Hitachi Remote Replication
Replication Tab in Hitachi Command Suite – Makes Controlling HUR Easier

Replication Tab in Hitachi Command Suite – Makes Controlling


HUR Easier

 Replication tab aids the user in the analysis of Hitachi Universal Replicator
performance problems and displays possible causes and solutions

Page 16-22
Hitachi Remote Replication
Hitachi High Availability Manager

Hitachi High Availability Manager

Figure: Hitachi High Availability Manager configuration. Hosts running applications and multipath software connect over owner and non-owner paths to two VSP systems (MCU and RCU) that hold a TrueCopy volume pair (P-VOL and S-VOL) linked by TrueCopy paths. A quorum disk on any supported external storage system is virtualized to both VSPs with UVM. The legend distinguishes I/O before the failure from I/O after the failure: when the owner path or primary system fails, host paths fail over to the other system.

HAM = Hitachi High Availability Manager

• Zero recovery time objective – layer high availability on top of synchronous replication

• Hosts can be active-active if supported by OS and/or cluster software (for example, Oracle/RAC)

o A Windows Server Failover Clustering (WSFC) cluster running on Windows 2008 and IBM PowerHA Cluster Manager for AIX (formerly HACMP) version 5.4.1 running on AIX 6.1 are currently supported with Hitachi Dynamic Link Manager v6.5 and later

o VMware (ESX 5.0 or later) supported for 2DC only

*Check with your HDS representative for currently supported configurations

Page 16-23
Hitachi Remote Replication
Complete Virtualized, High Availability and Disaster Recovery Solution

Complete Virtualized, High Availability and Disaster Recovery Solution

 Copies of data in 3 locations: 2 local, 1 distant
 Disk can be internal or external (virtualized)
 Internal-to-external or vice versa supported
 Hitachi Dynamic Tiering supported

Figure: Two local sites hold the TrueCopy pair (P-VOL and S-VOL), with a quorum disk and external volumes (E-VOLs) virtualized behind both systems; HUR replicates the data to an S-VOL at a distant third site.

Hitachi High Availability Manager for 2DC H/A plus Hitachi Universal Replicator for out-of-region
disaster (3DC)

Not shown:

• SAN fabric inter-switch (ISL) connections

• Additional in-system replication copies that would be recommended for gold copies or
disaster recovery testing

*Check with your HDS representative for currently supported configurations.

Page 16-24
Hitachi Remote Replication
Global-Active Device

Global-Active Device
This section covers global-active device.

Global-Active Device Overview

High level definition of the features

Figure: Actual configuration and configuration recognized by hosts. In the actual configuration, clustered production servers at two sites up to ~100 km (62 mi.) apart access a global-active device volume pair: for example, LDEV ID 22:22 on VSP G1000 S/N 12345 and LDEV ID 44:44 on VSP G1000 S/N 23456, both presented with virtual LDEV ID 22:22 in a virtual storage machine with S/N 12345. The hosts therefore recognize a single active-active volume (LDEV ID 22:22) on a single storage system (VSP G1000 S/N 12345).

Global-active device enables concurrent reads and updates of mirrored volumes virtualized from multiple storage systems, ensuring high availability of the host applications used with Hitachi Virtual Storage Platform G1000.

Global-active device enables you to create and maintain synchronous remote copies of data
volumes on the Virtual Storage Platform G1000 storage system. A virtual storage machine is
configured in the primary and secondary storage systems using the actual information of the
primary system and the global-active device primary and secondary volumes are assigned the
same virtual LDEV number in the virtual storage machine. Because of this, the pair volumes are
seen by the host as a single volume on a single storage system and both volumes receive the
same data from the host. A quorum disk located in a 3rd and external storage system is used to
monitor the global-active device pair volumes.

The quorum disk acts as a heartbeat for the global-active device pair, with both storage systems accessing the quorum disk to check on each other. A communication failure between the systems results in a series of checks against the quorum disk to identify the problem and determine which system can continue to receive host updates.

Page 16-25
Hitachi Remote Replication
Global-Active Device Overview

Global-active device provides the following benefits:

• Continuous server I/O when a failure prevents access to a data volume

• Server failover and failback without storage impact

• Load balancing through migration of virtual storage machines without storage impact

High level definition of the features

Figure: Behavior on failure. Even if the P-VOL side fails, the application/DBMS can continue without disruption by using the S-VOL. If one storage system fails, hosts see that some paths have failed, but other paths are still available to the volumes, because both systems present the same virtual storage machine (VSP G1000 S/N 12345) to the clustered production servers.

Fault-tolerant Storage Infrastructure

With global-active device, host applications can run without disruption even if the storage
system fails. If a failure prevents host access to a volume in a global-active device pair, read
and write I/O can continue to the pair volume in the other storage system to provide
continuous server I/O to the data volume.

Page 16-26
Hitachi Remote Replication
Global-Active Device – Components

Differences between VSP G1000 global-active device and VSP Hitachi High Availability Manager (HAM)

Function (Global-active device / HAM):
• Multipath I/O: Active-active / Active-passive
• Multipath software: HDLM, OS native multipath / HDLM
• Program product combination(*1): Yes(*2) / Yes(*3)
• Operation I/F: HCS, RAID manager / RAID manager
• Reserve: SCSI-2, SCSI-3, ATS / SCSI-2, ATS

Figure: VSP G1000 global-active device spans production servers and two VSP G1000 systems (S/N 12345 and S/N 23456) up to ~100 km (62 mi.) apart, with both volumes active; VSP HAM spans two VSP systems (S/N 45678 and S/N 56789) up to ~30 km (18 mi.) apart, with an active volume and a standby volume.

HDLM = Hitachi Dynamic Link Manager

• *1 Combination with other replication program products

• *2 Target support microcode version may vary per program product

• *3 HAM supports Hitachi ShadowImage Replication on remote site only

Global-Active Device – Components


Components
 Primary and secondary storage systems
 Paired volumes
 Virtual storage machine
 Paths and ports
 Alternate path software
 Cluster software

Figure: Clustered production servers access the global-active device volumes on the primary storage system (VSP G1000 S/N 12345) and the secondary storage system (VSP G1000 S/N 23456), up to ~100 km (62 mi.) apart, with a quorum disk shared between them; both systems present virtual storage machine S/N 12345.

Page 16-27
Hitachi Remote Replication
Global-Active Device – Components

Storage Systems

A VSP G series system is required at the primary site and at the secondary site. An external storage system for the quorum disk is also required; it is connected to the primary and secondary storage systems using Hitachi Universal Volume Manager.

The global-active device components are:

• Primary and secondary storage systems

o Serves virtualized volumes:

 Each virtualized volume consists of 2 volumes mirrored by global-active


device copy pairs.

 The production servers recognize mirrored volumes as a single volume,


as these volumes return the same device ID for inquiry command.

o Accepts concurrent write/read I/Os:

 Write I/O: All the updates are applied first to the primary storage system
and then to the secondary storage system.

 Read I/O: Handled by either of primary or secondary storage systems

o The virtual storage machine has the same model and serial number of the
physical storage system of global-active device pair target.

• Paired volumes: A global-active device pair consists of a P-VOL in the primary system
and an S-VOL in the secondary system.

• Virtual storage machine: A virtual storage system is configured in the secondary


system with the same model and serial number as the (actual) primary system.
The servers treat the virtual storage machine and the storage system at the primary site
as 1 virtual storage machine.

• Paths and ports: Global-active device operations are carried out between hosts and primary and secondary storage systems connected by Fibre Channel data paths composed of 1 or more Fibre Channel physical links. The data path, also referred to as the remote connection, connects ports on the primary system to ports on the secondary system. The ports are assigned attributes that allow them to send and receive data. One data path connection is required, but 2 or more independent connections are recommended for hardware redundancy.

Page 16-28
Hitachi Remote Replication
Global-Active Device – Components

• Alternate path software: Alternate path software is used to set redundant paths from
servers to volumes and to distribute host workload evenly across the data paths.
Alternate path software is required for the single-server and cross-path global-active
device system configurations.

• Cluster software: Cluster software is used to configure a system with multiple servers
and to switch operations to another server when a server failure occurs. Cluster
software is required when 2 servers are in a global-active device server-cluster system
configuration.

Components

 Quorum disk
• Enables primary and secondary storage systems to determine the global-active device owner node in case of failure
• Any storage system is available, as long as it is supported by Hitachi Universal Volume Manager

Figure: The quorum disk sits on external storage connected to both the primary storage system (VSP G1000 S/N 12345) and the secondary storage system (VSP G1000 S/N 23456) serving the clustered production servers.

Quorum Disk

The quorum disk is:

• Required for global-active device

• Used to determine the storage system on which server I/O should continue when a
storage system or path failure occurs

• Virtualized from an external storage system that is connected to both the primary and
secondary storage systems

• Enables primary and secondary storage systems to determine the global-active device
owner node in case of failure. Any storage system is available, as long as it is supported
by Universal Volume Manager.
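For illustration only: once the quorum disk has been virtualized to both systems and given a quorum disk ID, a global-active device pair is typically created from CCI, referencing that quorum disk ID. The sketch below assumes the group name and quorum ID values and assumes that the -jq option passes the quorum disk ID; confirm the exact options against the documentation for the model and microcode in use.

# Create a global-active device pair; -f never sets the fence level, -vl makes the local volume the P-VOL,
# and -jq gives the quorum disk ID (all values assumed for this sketch)
paircreate -g gadgrp -f never -vl -jq 0

# Check that both volumes reach PAIR status (actively mirrored)
pairdisplay -g gadgrp -fcx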

Page 16-29
Hitachi Remote Replication
Global-Active Device Software Requirements for VSP G1000

Global-Active Device Software Requirements for VSP G1000

Software (Notes):
• CCI/RAIDCOM: Required
• Hitachi Command Suite (HCS): Optional (recommended); required for the management GUI (Global Services Solutions (GSS) Hitachi Command Suite Implementation Service needs to be quoted)
• Hitachi Replication Manager (HRpM): Required if using Command Suite
• Hitachi Dynamic Link Manager (HDLM): Recommended for long distances (more than 10 km)
• Storage Virtualization Operating System (SVOS): Required
• Universal Volume Manager (UVM): Required for both Virtual Storage Platform G1000s
• Global-active device: Required for both Virtual Storage Platform G1000s

Global-active device software requirements for Hitachi Virtual Storage Platform G1000 are:

• CCI/RAIDCOM

• Hitachi Command Suite (HCS)

• Hitachi Replication Manager (HRpM)

• Hitachi Dynamic Link Manager (HDLM)

• Storage Virtualization Operating System (SVOS)

• Universal Volume Manager (UVM)

• Global-active device

Note: For other VSP G series models, refer to specific model support matrix and technical
documentation.

Page 16-30
Hitachi Remote Replication
Global-Active Device – Specifications for VSP G1000

Global-Active Device – Specifications for VSP G1000

Item (Specifications):
• Global-active device management: Hitachi Command Suite v8.0.1 or later
• Maximum number of volumes (creatable pairs): 64K
• Maximum pool capacity: 12.3 PB
• Maximum volume capacity: 46 MB to 59.9 TB
• Supporting products in combination with global-active device (all on either side or both sides): Hitachi Dynamic Provisioning, Hitachi Dynamic Tiering, Hitachi Universal Volume Manager, Hitachi ShadowImage Replication, Hitachi Thin Image, Hitachi Universal Replicator with delta-resync
• Campus distance support: Can use any qualified path failover software
• Metro distance support: *Hitachi Dynamic Link Manager is required until Asymmetric Logical Unit Access (ALUA) support is available

Here are the global-active device system specifications for Hitachi Virtual Storage Platform
G1000. For other VSP G series models, refer to specific model support matrix and technical
documentation.

*Asymmetric Logical Unit Access (ALUA) is a SCSI protocol standard for working with multiple paths between storage and servers (or virtual servers) where path ownership needs to be managed and resolved. ALUA manages access states and path attributes using explicit or implicit methods set up by a storage administrator. It is used with SANs, iSCSI, FCoE and so on.

Page 16-31
Hitachi Remote Replication
Hitachi Business Continuity Management Software

Hitachi Business Continuity Management Software


This section presents Hitachi Business Continuity Management software and includes a general
overview and functions.

Hitachi Business Continuity Manager Overview

 Business Continuity Manager (BCM) software for IBM z/OS offers the following
benefits and features:
• Provides a centralized, enterprise-wide replication management for IBM z/OS
mainframe environments
• Automates Hitachi Universal Replicator for z/OS, Hitachi ShadowImage In-System
Replication for z/OS and Hitachi TrueCopy Remote Replication for z/OS software
operations
• Provides access to critical system performance metrics and thresholds, allowing
proactive problem avoidance and optimum performance to ensure that service-level
objectives are met or exceeded
• BCM software auto-discovery capability eliminates hours of tedious input and costly
human error when configuring and protecting complex, mission critical applications and
data

BCM = Hitachi Business Continuity Manager

Page 16-32
Hitachi Remote Replication
Hitachi Business Continuity Manager Functions

Hitachi Business Continuity Manager Functions

 Defines copy groups that contain multiple replication objects with similar
attributes that can be managed with a single command
 Eliminates errors and streamlines management with auto-discovery for all
replication objects
 Views the status of all enterprise-wide replication objects in real time
 Accesses key replication metrics with built-in performance monitoring
 Provides automatic notification of key events completion, such as pair state
transitions, timeout thresholds and other system events
 Enables multisite remote replication management for wide-area disaster
protection with minimal data loss
• Uses FICON ports for host attachment
• Uses Fibre Channel for replication

Business Continuity Manager (BCM) delivers nondisruptive, periodic, point-in-time remote data
copies across any number of storage systems and over any distance.

BCM requires you to configure ports and command devices.

Page 16-33
Hitachi Remote Replication
Demo

Demo

 http://edemo.hds.com/edemo/OPO/HowProtectData/HowProtectData_Video/HowProtectData.html

Online Product Overview

 Hitachi Data Replication Solutions

https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv

Page 16-34
Hitachi Remote Replication
Module Summary

Module Summary

 In this module, you should have learned to:


• Describe the features, functions and principles of Hitachi TrueCopy Remote
Replication bundle and Hitachi Universal Replicator
• Describe the features and functions of global-active device

Page 16-35
Hitachi Remote Replication
Module Review

Module Review

1. Which statement is true, related to remote replication on Hitachi Virtual


Storage Platform?
A. Consistency group cannot be used in Hitachi TrueCopy Extended
Distance.
B. TrueCopy Remote Replication is the synchronous remote replication
product.
C. Differential Management Logical Unit (DMLU) is required for remote
replication.

Page 16-36
17. Command Control Interface Overview
Module Objectives

 Upon completion of this module, you should be able to:


• Identify components of the command control interface (CCI), also called a
RAID manager

Page 17-1
Command Control Interface Overview
Overview

Overview

 What is the command control interface (CCI)?


• Provides a command line interface for all Hitachi replication products
 Hitachi ShadowImage Replication
 Hitachi Copy-on-Write Snapshot
 Hitachi Thin Image
 Hitachi TrueCopy Remote Replication bundle
 Hitachi TrueCopy Extended Distance
 Hitachi Universal Replicator
• Is executed on a host
• Is an in-band admin tool (communicates through a Fibre Channel or iSCSI port)
 Use CCI to control and/or script
• Replication operations for all replication products
• Hitachi Data Retention Utility tasks

For the Hitachi Unified Storage (HUS) 100 family:

• Must use RAID Manager (CCI) when replicating to or from previous Unified Storage
models

• Can use RAID Manager (CCI) when replicating to or from HUS models

• Can use Hitachi Storage Navigator Modular GUI/CLI when replicating to or from HUS
models

Page 17-2
Command Control Interface Overview
Overview

 CCI components needed for replication products


• Hitachi command device
 Accepts commands from CCI
 Reports command results back to CCI
• Hitachi Open Remote Copy Manager (HORCM) instance
 Service or daemon
 Communicates with storage system and with other instance
• HORCM configuration file
 Defines communication paths (LAN)
 Defines volumes to be controlled
• HORCM commands
 Control and monitor copy operations

CCI components on the RAID storage system

These are 4 of the CCI components needed for the replication products. A product license (for
example, for Hitachi ShadowImage Replication) is also required.

• Hitachi Command Device

• CCI commands

• HORCM configuration file

• HORCM Instance

Command device

CCI commands are issued by the CCI software to the RAID storage system command device.

The command device:

• Is a user-selected, dedicated logical volume on the storage system that functions as the
interface to the CCI software on the host

• Is dedicated to CCI communications and cannot be used by any other applications

• Accepts CCI read and write commands that are executed by the storage system

• Returns read requests to the host

Page 17-3
Command Control Interface Overview
Overview

• Uses 32 MB; the remaining volume space is reserved for CCI and its utilities.

The volume designated as the command device is used only by the storage system and is
blocked from the user.

Configuration definition file

• The configuration definition file is a text file that is created and edited using any
standard text editor (for example, UNIX vi editor, Windows Notepad)

• The configuration definition file defines correspondences between the server and the
volumes used by the server

• There is a configuration definition file for each host server

• When the CCI software starts up, it refers to the definitions in the configuration
definition file

• The configuration definition file defines the devices in copy pairs and is used for host
management of the copy pairs, including:

o Hitachi ShadowImage Replication

o Hitachi ShadowImage for Mainframe

o Hitachi TrueCopy

o Hitachi TrueCopy for Mainframe

o Hitachi Copy-on-Write Snapshot

o Hitachi Thin Image

o Hitachi Universal Replicator

o Hitachi Universal Replicator for Mainframe

• ShadowImage, ShadowImage for Mainframe, Copy-on-Write Snapshot and Thin Image use the same configuration files and commands; the RAID storage system determines the type of copy pair based on the S-VOL characteristics and (for Copy-on-Write Snapshot and Thin Image) the pool type. A sample configuration definition file is sketched below.
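For reference, a minimal configuration definition file might look like the following sketch (a Linux-style example; the instance number, service name, group and device names, serial number and LDEV numbers are illustrative only):

HORCM_MON
#ip_address    service    poll(10ms)    timeout(10ms)
localhost      horcm0     1000          3000

HORCM_CMD
#dev_name (the command device, here located by storage system serial number; value assumed)
\\.\CMD-312345:/dev/sd

HORCM_LDEV
#dev_group    dev_name    Serial#    CU:LDEV(LDEV#)    MU#
SIgrp         dev01       312345     01:10             0

HORCM_INST
#dev_group    ip_address    service
SIgrp         localhost     horcm1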

Page 17-4
Command Control Interface Overview
Overview

 CCI environment establishes a conversation
 Instances communicate through UDP/IP
 In-band communication with storage system:
• Through a SCSI channel over Fibre Channel or iSCSI
• Command device

UDP/IP = User Datagram Protocol/Internet Protocol

Command Execution Using In-band and Out-of-band Methods

The 2 methods for executing CCI commands are the in-band method and the out-of-band
method.

• In-band method: This method transfers a command from the client or server to the
command device of the storage system through Fibre Channel and executes the CCI
operation instruction.

• Out-of-band method: This method transfers a command from the client or server over the LAN to a virtual command device (the service processor on VSP G1000, or the maintenance utility on VSP G series models), which passes the CCI operation instruction to the storage system for execution. Out-of-band operations are supported on the Hitachi Virtual Storage Platform and later storage systems (the HORCM_CMD difference is sketched below).
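The choice between the two methods shows up in the HORCM_CMD section of the configuration definition file. A sketch with an assumed device name and an assumed SVP address:

HORCM_CMD
# In-band: a command device reached over Fibre Channel or iSCSI (Linux device name assumed)
/dev/sdc

# Out-of-band alternative: a virtual command device reached over the LAN, addressed as
# \\.\IPCMD-<SVP IP address>-<UDP port>, for example (address assumed):
# \\.\IPCMD-192.168.0.100-31001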

Page 17-5
Command Control Interface Overview
Example With ShadowImage Replication

Example With ShadowImage Replication

 Hitachi ShadowImage Replication — 1 server and 2 HORCM instances

Figure: One server runs the application software and two HORCM instances: instance 0 (defined by HORCM0.conf) and instance 1 (defined by HORCM1.conf). HORCM commands are issued through the command device on the storage system, which holds the P-VOL and S-VOL of the ShadowImage pair.

ShadowImage Replication Example

The relationship between these 4 components includes the following:

• There are always at least 2 instances, each controlling one side of the replication. During
pair creation, it is determined which volume becomes the P-VOL and which becomes the
S-VOL

• Each instance relies on a configuration file to communicate with the other instance, as
well as to communicate with the storage system

• The configuration file defines the volumes that will be paired up

• If you have 2 instances, you will have 2 corresponding configuration files

• An environment variable has to be set to identify which HORCM instance is the command interpreter

• When a command is issued, usually through a script, the "command interpreting" instance sends the command to the CMD device. The subsystem then actuates the command (a minimal sketch of this flow follows)
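A minimal sketch of that flow on a single host running both instances (the instance numbers and group name are assumptions; the Windows equivalents are horcmstart 0 1 and set HORCMINST=0):

horcmstart.sh 0 1                      # start HORCM instance 0 and instance 1 (UNIX/Linux)
export HORCMINST=0                     # make instance 0 the command interpreter for this shell
paircreate -g SIgrp -vl                # create the ShadowImage pair; -vl makes the local volume the P-VOL
pairevtwait -g SIgrp -s pair -t 600    # optionally wait until the pair reaches PAIR status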

Page 17-6
Command Control Interface Overview
Example With Hitachi TrueCopy

 Hitachi ShadowImage Replication — 2 servers and 2 HORCM instances

Figure: Two servers, each running application software and RAID Manager. Server 1 runs HORCM instance 0 (HORCM0.conf) and server 2 runs HORCM instance 1 (HORCM1.conf); the RAID Manager instances communicate with each other over the LAN. HORCM commands reach the single storage system through its command device, which fronts the P-VOL and S-VOL of the ShadowImage pair.

Example With Hitachi TrueCopy

 TrueCopy — 2 servers and 2 HORCM instances

Figure: Two servers, each running application software and RAID Manager, with HORCM instance 0 (HORCM0.conf) on the local server and HORCM instance 1 (HORCM1.conf) on the remote server; the instances communicate over the LAN. Each storage system has its own command device: the local system holds the P-VOL and the remote system holds the S-VOL of the TrueCopy pair.

Page 17-7
Command Control Interface Overview
Often Used Commands

Often Used Commands

 For setup:
• raidscan - find volumes and show their status
• findcmdev - find command devices

 For running:
• pairdisplay - show pair synchronization status
• paircreate - create a pair
• sync - flush system buffers to disk
• pairsplit - split a pair, temporarily or permanently
• pairresync - resynchronize a pair after split
• pairvolchk - checks the attributes and status of a pair volume
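Put together, a typical ShadowImage backup cycle scripted with these commands might look like the following sketch (group name assumed; options simplified):

pairdisplay -g SIgrp -fcx    # check current pair status and copy progress
pairsplit -g SIgrp           # split the pair to freeze a point-in-time copy on the S-VOL
# ... back up or mount the S-VOL here ...
pairresync -g SIgrp          # resynchronize the S-VOL with the P-VOL afterward
pairvolchk -g SIgrp -s       # verify the attributes and status of the pair volumes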

Page 17-8
Command Control Interface Overview
Module Summary

Module Summary

 In this module, you should have learned to:


• Identify components of the command control interface (CCI), also called
RAID manager

Page 17-9
Command Control Interface Overview
Module Review

Module Review

1. CCI uses out-of-band communication with the storage to perform


replication operations. (True/False?)

Page 17-10
18. Hitachi Replication Manager
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the Hitachi Replication Manager (HRpM) functionality and how it
integrates with other Hitachi replication products

Replication Manager Manuals

• Hitachi Command Suite Replication Manager User Guide

• Hitachi Command Suite Replication Manager Configuration Guide

• Hitachi Command Suite Replication Manager Application Agent CLI User Guide

• Hitachi Command Suite Replication Manager Application Agent CLI Reference Guide

• Hitachi Replication Manager v8.x Release Notes

Page 18-1
Hitachi Replication Manager
Hitachi Replication Manager

Hitachi Replication Manager

 Centralizes and simplifies replication management, monitoring and reporting of Hitachi replication operations – reports replication status
 Supports all replication operations on Hitachi enterprise and modular storage

Figure: Data protection software and management spans replicate, backup, archive and snap operations, built on ShadowImage, Copy-on-Write Snapshot/Hitachi Thin Image (HTI), TrueCopy and Universal Replicator.

Hitachi Replication Manager centralizes and simplifies replication management by integrating replication capabilities to configure, monitor and manage Hitachi replication products for in-system or distance replication across both open systems and mainframe environments.

• The synchronous and asynchronous long-distance replication products, as well as the in-system replication products, were discussed earlier in this course

o How do customers manage all of these copy and replication operations?

o Replication Manager gives customers a unified and centralized management GUI to help them manage all of these operations

• This solution

o Builds on existing Hitachi technology by leveraging the powerful replication capabilities of the arrays and by combining robust reporting, mirroring and features previously available in separate offerings

o Decreases management complexity while increasing staff productivity and providing greater control than previously available solutions, through a single consistent user interface

Page 18-2
Hitachi Replication Manager
Centralized Replication Management

Centralized Replication Management

Figure: Hitachi Replication Manager provides configuration, scripting, analysis, task/scheduler management and reporting on top of Copy-on-Write Snapshot, Thin Image, ShadowImage, TrueCopy, Universal Replicator and Business Continuity Manager, together with primary and secondary provisioning, using CCI/HORCM underneath.

Cross-product, cross-platform, GUI-based replication management

CCI = Command Control Interface

Hitachi Open Remote Copy Manager (name of CCI executable)

• Replication Manager gives an enterprise-wide view of replication configuration and allows configuring and managing from a single location

• Its primary focus is on integration and usability

• For customers who leverage in-system or distance replication capabilities of their storage arrays, Replication Manager is the software tool that configures, monitors and manages Hitachi storage array-based replication products for both open systems and mainframe environments in a way that simplifies and optimizes:

o Configuration

o Operations

o Task management and automation

o Monitoring of the critical storage components of the replication infrastructure

Page 18-3
Hitachi Replication Manager
Features Overview

Features Overview

 As a fundamental management tool, Hitachi Replication Manager covers the entire life cycle of
replication management (configuration, monitoring, operations)
# / Feature / Item
1. Configuring Prerequisites (GUI): Configuring prerequisites of local copy/remote copy
   • Command device
   • Thin Image pool, V-VOL
   • UR journal group, remote path
2. Configuring Replications(*) (GUI): Configuring copy groups/pairs
   • Immediate execution
   • Scheduled execution
3. Managing Replications (GUI): Managing pair status
   • Monitoring the status
   • Changing the status
   Monitoring metrics
   • C/T delta (RPO for async remote copy)*
   • Journal usage
   • Snapshot pool usage
   • Troubleshooting of C/T delta increase
4. App-aware Backup/restore (GUI/CLI): Backup/restore with application agent
   • Immediate execution
   • Scheduled executions
(*) Volumes need to be provisioned before pair configuration

UR journal group = Hitachi Universal Replicator journal group

RPO = recovery point objective

*C/T delta = consistency time delta

Consistency time delta is how many seconds the target volume is behind the source volume and
can be interpreted as recovery point objective (RPO), which means how much data would be
lost in case of a disaster.

Page 18-4
Hitachi Replication Manager
Overview

Overview

 Hitachi Replication Manager is the software tool that configures, monitors and manages
Hitachi replication products in both open and mainframe environments for enterprise and
modular storage systems from a “single pane of glass”

 Key features
• Centralized management of replication
• Application-aware backups – Microsoft SQL Server and Exchange
• Visual representation of replication structures
• Task management – scheduling and automation of the configuration of replicated data volume pairs
• Immediate notification of error and potential issues based on user-defined thresholds
• Simple wizards for pair creation and changing pair status
• Reports consistency deltas between source devices and their targets
• Supports email and SNMP alert reporting

SNMP = Simple Network Management Protocol

RPO = recovery point objective

RTO = recovery time objective

Replication Manager (HRpM) configures, monitors and manages Hitachi replication products on
both local and remote storage systems. For both open systems and mainframe environments,
HRpM simplifies and optimizes the configuration and monitoring, operations, task management
and automation for critical storage components of the replication infrastructure. Users benefit
from a uniquely integrated tool that allows them to better control RPOs and RTOs.

Page 18-5
Hitachi Replication Manager
Launching Hitachi Command Suite

Launching Hitachi Command Suite

http://<HCS server IP address>:22015/ReplicationManager/


or
http://<HCS server hostname>:22015/ReplicationManager/

• In the web browser address bar, enter the URL for the management server where
Hitachi Replication Manager (HRpM) is installed. The User Login window appears

• When you log in to Replication Manager for the first time, you must use the built-in
default user account and then specify HRpM user settings

• The user ID and password of the built-in default user account are as follows:

o User ID: system

o Password: manager (default)

• If HRpM user settings have already been specified, you can use the user ID and
password of a registered user to log in

• If you enabled authentication using an external authentication server, use the password
registered in that server

Page 18-6
Hitachi Replication Manager
Centralized Monitoring

 Launch Replication Manager from Hitachi Command Suite main window

Hitachi Replication Manager can also be launched from the Command Suite main window Tools
menu option.

Centralized Monitoring

 Four views allow users to understand the replication environment depending on


the perspective
• Hosts view
• Storage Systems view
• Pair Configurations view
• Applications view

Hitachi Replication Manager provides the following 4 functional views that allow you to view pair
configurations and the status of the replication environment from different perspectives:

Page 18-7
Hitachi Replication Manager
Centralized Monitoring

• Hosts

o This view lists open hosts and mainframe hosts and allows you to confirm pair
status summaries for each host

• Storage Systems

o This view lists open and mainframe storage systems and allows you to confirm
pair status summarized for each

o A storage system serving both mainframe and open system pairs is recognized
as 2 different resources to differentiate open copy pairs and mainframe copy
pairs

• Pair Configurations

o This view lists open and mainframe hosts managing copy pairs with CCI or BCM
and allows you to confirm pair status summarized for each host

o This view also provides a tree structure along with the pair management
structure

• Applications

o This view lists the application and data protection status

o This view also provides a tree structure showing the servers and their associated
objects (Storage Groups, Information Stores and Mount Points)

Page 18-8
Hitachi Replication Manager
Centralized Monitoring

 Provides a quick alert mechanism of potential problems using SNMP or email
• Unexpected changes in copy status
• Exceeded user-defined thresholds
 Resource utilization (journals and sidefile)
 Recovery point objective (RPO) of target copy group

• Hitachi Replication Manager can send an alert when a monitored target, such as a copy
pair or buffer, satisfies a preset condition

• The conditions that can be set include:

o Thresholds for copy pair statuses

o Performance information

o Copy license usage

• You can specify a maximum of 1,000 conditions

• Alert notification is useful for enabling a quick response to a hardware failure or for
determining the cause of a degradation in transfer performance

• Alert notifications are also useful for preventing errors due to buffer overflow and
insufficient copy licenses, thereby facilitating the continuity of normal operation

• Because you can receive alerts by email or SNMP traps, you can also monitor the
replication environment while you are logged out of Replication Manager

Page 18-9
Hitachi Replication Manager
Centralized Monitoring

 Exporting Hitachi Replication Manager management information
• Determine cause of error
• Analyze performance information
 Write delay time (C/T delta*) on a copy group basis
 Journal volume usage on a copy group basis
 Journal volume usage on a journal group basis (in open systems)
 History of received alerts
 Event logs
 Pool volume usage on a pool basis (in open systems)

• You can export Replication Manager management information to a file in CSV or HTML
format

• Using the exported file, you can determine the cause of an error, establish corrective
measures and analyze performance information

o If necessary, you can edit the file or open it with another application program

o You can export a maximum of 20,000 data items at a time

• The following performance information items can be exported:

o Write delay time (C/T delta) on a copy group basis

o Journal volume usage on a copy group basis

o Journal volume usage on a journal group basis (in open systems)

o Pool volume usage on a pool basis (in open systems)

o The history of received alerts

o Event logs

• When you export management information, you can specify a time period to limit the amount of information that will be exported

o You can export only information whose data retention period has not yet expired

Page 18-10
Hitachi Replication Manager
Features

o The retention period can be managed by a user with the Admin (Replication
Manager management) permission

*C/T delta = consistency time delta

Consistency time delta is how many seconds the target volume is behind the source volume and
can be interpreted as recovery point objective (RPO), which means how much data would be
lost in case of a disaster.

Features

 “Single pane of glass”


• Integrated console for multisite replication pairs
• Consolidated monitoring for copy pair status and remote copy metrics
 Visual representation of replication structure
• Copy groups, sites, all volume pairs

Visual Representation of Replication Structure

• Copy Groups: A group of copy pairs created for management purposes, as required by
a particular task or job

o By specifying a Copy Group, you can perform operations such as changing the
pair status of multiple copy pairs at once

o Using the My Copy Groups feature, a user can register a copy group into My
Copy Groups, choosing only those that are most important to monitor, to see
how copy groups are related and check copy pair statuses in a single window

o My Copy Groups is also the default screen after you log in to the Hitachi
Replication Manager interface

Page 18-11
Hitachi Replication Manager
Features

• Sites: With Replication Manager, you can define logical sites in the GUI just as you
would define actual physical sites (actual data centers)

o It allows you to manage resources more efficiently if you set up separate sites
because it is easier to locate a required resource among many resources
displayed in the GUI

 Pair volume lifecycle management


• Simplified replication configuration from setup to deletion
 Setup > Definition > Creation (Initial copy) > Operation > Monitoring > Alerting > Deletion

 Storage system configuration functions


• Set up functionality required for copy pair management
 Setting command devices, DMLU, journal groups and pools
 Setting up remote paths for remote replication

 Copy pair creation or deletion


• Pair Configuration Wizard
 Intuitive pair definition screen with topological view
• Task scheduler
 Scheduler functionality allows users to execute the copy operations at off-peak time

• DMLU = Differential Management Logical Unit

Page 18-12
Hitachi Replication Manager
Positioning

Positioning

Figure: Positioning. Hitachi Replication Manager sits on top, providing replication monitoring and management. Hitachi Device Manager and Storage Navigator provide volume configuration management for enterprise storage (open and mainframe volumes) and modular storage (open volumes). Replication itself is driven through RAID Manager and, optionally, Business Continuity Manager for mainframe volumes. Underneath are the replication technologies: remote and in-system replication on modular storage (TC, TCE, SI, CoW) and on enterprise storage (TC, UR, SI, CoW, TI).

• Replication Manager (HRpM) provides monitoring for both enterprise storage systems
(open and mainframe volumes) and modular storage systems (open volumes)

• HRpM requires, and is dependent on Hitachi Device Manager and uses RAID manager
(CCI) and Device Manager agent for monitoring open volumes

• Device Manager provides volume configuration management

• RAID manager (CCI) is used by HRpM for watching pair status

• For monitoring mainframe volumes, HRpM can work with or without Hitachi Business
Continuity Manager (BCM) software or mainframe agent

• HRpM supports monitoring of IBM environments (z/OS, z/VM, z/VSE and z/Linux) and
non-IBM environments using only Device Manager (without Business Continuity Manager
or mainframe agent installed). HRpM retrieves the status of TCS/TCA/SI, and Hitachi
Universal Replicator copy pairs directly from storage arrays, without depending on
mainframe host types. The minimum interval of automatic refresh for this configuration
is 30 minutes.
Diagram legend

• TC = Hitachi TrueCopy Remote Replication

• TCE = Hitachi TrueCopy Extended Distance

• SI = Hitachi ShadowImage Replication

Page 18-13
Hitachi Replication Manager
Architecture – Open Systems and Mainframe

• UR = Hitachi Universal Replicator

• CoW = Hitachi Copy-on-Write Snapshot

• TI = Hitachi Thin Image

• RAID Manager = Hitachi Command Control Interface (CCI)

Architecture – Open Systems and Mainframe

 Standard configuration of a site


Figure: Standard configuration of a site. A management client (web browser) connects over the IP network to the management server, which runs the HRpM server, HDvM server and HBase. A pair management server (CCI server) runs the host agent (HRpM agent plug-in, HDvM agent plug-in, agent base and common components) and RAID Manager (CCI). Production server hosts run the host agent and CCI, or no agent and no CCI at all. A mainframe host (z/OS) runs BCM behind an HTTP server. Modular storage (managed through SNM2) and enterprise storage (with its SVP) each expose a command device and are reached over the FC-SAN.

A standard system configuration of a site is comprised of:

• Management server: Hitachi Replication Manager (HRpM) gets installed with Hitachi
Device Manager (HDvM). HBase is automatically installed by the Device Manager
installation. It is highly recommended to use the same version number, major and minor,
for the HDvM server and Replication Manager server

• Pair management server (open systems)

o Host agent: Only a single host agent is provided for HDvM and HRpM. One
agent install on the server works for HDvM and HRpM

o RAID manager (CCI): HRpM requires RAID manager to manage replication


pair volumes. The servers on which the RAID manager is installed must have a
host agent so that HRpM can recognize and manage the pair volume instances

Page 18-14
Hitachi Replication Manager
Architecture – Open Systems and Mainframe

• Pair management server (mainframes)

Hitachi Business Continuity Manager (BCM): Business Continuity Manager software works on
the mainframe and manages replication pair volumes assigned for the mainframe computers.
HRpM can monitor and manage the mainframe replication volumes by communicating with
BCM.

• Host (production server): A host runs application programs. The installation of the
HDvM agent is optional. HRpM can acquire the host information (host name, IP address
and mount point) if the agent is installed on it

o IBM HTTP server is required on the mainframe host when using either of the
following:

 IPv6 connection between HRpM and BCM

 HTTPS (secure) connection between HRpM and BCM

o BCM program itself does not have the above capabilities, so the IBM HTTP server
is used to perform these functions. The IBM HTTP server works as a proxy server
between HRpM and BCM

Diagram legend:

• HRpM = Hitachi Replication Manager

• HDvM = Hitachi Device Manager

• HBase = Hitachi Command Suite common component base

• BCM = Business Continuity Manager

Page 18-15
Hitachi Replication Manager
Architecture – Open Systems With Application Agent

Architecture – Open Systems With Application Agent

 Standard configuration of a site


Figure: Standard configuration of a site with the application agent. A management client connects over the IP network to the management server (HRpM server, HDvM server, HBase (*)). The application server hosting Microsoft Exchange or Microsoft SQL Server runs the host agent (HRpM and HDvM agent plug-ins, agent base and common components), RAID Manager (CCI) and the application agent; a backup/import server runs the same components. Both servers reach the command devices on the modular storage (SNM2) and enterprise storage (SVP) over the FC-SAN.

Note: Depending on the configuration, backup servers are not required for SQL server
configurations.

* HBase – Represents the common components for Hitachi Command Suite.

Components

 Hitachi Replication Manager components


* One HRpM server can
• Management server manage and monitor volumes
 Hitachi Device Manager* from multiple HDvM servers
 Replication Manager
• Management client
 Web client
• Pair management server (open systems)
 Device Manager agent
 RAID manager (CCI)
• Pair management server (mainframe)
 Hitachi Business Continuity Manager or mainframe agent
• Host (application server)
• Application agent

Page 18-16
Hitachi Replication Manager
Components

Management server: A management server provides management information in response to


requests from management clients. Device Manager (HDvM) is a prerequisite software for
Replication Manager (HRpM). HRpM and HDvM are installed on the same management server.
If multiple sites are used, a management server is required for each site. Also, the management
server at the remote site can be used to manage pairs when the local site management server
fails.

Management client: A management client runs on a web browser and provides access to the
instance of HRpM.

Pair management server (open systems/mainframes): A pair management server


collects management information, including copy pair statuses and performance information for
remote copying. If multiple sites are used, at least one pair management server is required for
each site. More than one pair management server can be set up at each site. A pair
management server can also be a host (application server). CCI and a HDvM agent are installed
on each pair management server for open systems. Business Continuity Manager (BCM) or
mainframe agent is installed on each pair management server for mainframes.

Note: When determining whether to set up pair management servers to be independent of


hosts, consider security and the workloads on the hosts.

Host (application server): Application programs are installed on a host. A host can be used
as a pair management server, if required. The HDvM agent is optional if the server is used as a
host (and not pair management server).

Page 18-17
Hitachi Replication Manager
Managing Users and Permissions

Managing Users and Permissions

 Hitachi Replication Manager implements access control in 2 ways


• User management – roles and user permissions that restrict the operations users can
perform
• User resource groups – restricts the range of resources that specific users can access

 All users can set up personal profiles and Hitachi Replication Manager licenses
regardless of their permissions
 The built-in User ID, System, lets you manage all users in Hitachi Command
Suite—you cannot change or delete this user

The user ID, Peer, is internally used by an agent.

Page 18-18
Hitachi Replication Manager
Resource Groups Overview

Resource Groups Overview

 Provides access control functionality

 A collection of hosts, storage systems and applications grouped by


purpose and associated with a user for controlled access by the user

 Large environments require security management for resources such as


controlling who can access this storage system; an administrator is
assigned to hosts and systems that are grouped by a site or department

Rules for setting up resource groups:

• Multiple resources can be registered in each resource group, but each resource can be
registered in only one resource group

• A user can be granted access permissions for multiple resource groups (that is, the user
can be associated with more than 1 resource group)

• The default group All Resources cannot be deleted or renamed

o A new resource group named All Resources cannot be added

• All resources are automatically registered in the All Resources group

• Because a user logged in with the built-in account, System (the built-in account) is
permitted to access all resources

o The user is automatically registered in the All Resources group

• Any user can be added to the All Resources group if they do not belong to another
resource group

• Except for users logged in as System, users with the Admin (user management)
permission can belong to resource groups only when they also have the Admin, Modify
or View (Hitachi Replication Manager management) permission

Page 18-19
Hitachi Replication Manager
Resource Group Function

 Types of resource groups
• All Resources: System-defined, containing all the resources in the storage system
• User-Defined: Users with administrative privileges can define a resource group and add resources, such as hosts and storage systems

 Users can only see the allocated resources on the GUI

 A user can be associated with multiple resource groups to increase the range of operations

• Use the GUI to define logical sites just as you would define actual physical sites (actual
data centers)

• Users can view the resources that belong to the sites in the resource groups with which
the users have been associated

Resource Group Function

Page 18-20
Hitachi Replication Manager
Resource Groups

Process for creating resource groups:

• Create users

• Assign permissions to the users based on whether they will be managing Replication
Manager or they will also be creating other users

• Create resource groups – All Resources group is the default

• Add host and storage systems to user-defined resource group

• Assign users to user-defined resource group for accessibility control

Resource Groups

 Create a Resource Group

Creating a resource group:

• In the Explorer menu, click the Administration drawer and then select Resource
Groups

• Click Create Group to display the Create Resource Group dialog box

• Enter a resource group name in the Name field and then click OK

Page 18-21
Hitachi Replication Manager
Resource Group Properties

After the resource name has been created, assign hosts, storage systems, applications and
users.

Resource Group Properties

 Multiple resources can be registered in each resource group, but each resource can be registered to only 1 resource group (exclusive registration)

• Newly created users do not belong to any resource group

• Users can be granted access permissions for multiple resource groups

Page 18-22
Hitachi Replication Manager
Hitachi Command Suite Replication Tab

• All resources are automatically registered in the All Resources group

• The default group, All Resources, cannot be deleted or renamed; a new resource group
named All Resources cannot be added

• The built-in admin account, System, is automatically registered in the All Resources
group

• There is no hierarchical structure

Hitachi Command Suite Replication Tab


This section shows how to use the Hitachi Command Suite replication tab.

HCS Replication Tab

Page 18-23
Hitachi Replication Manager
HCS Replication Tab Operations

HCS Replication Tab Operations

 Operations you can perform from the Replication tab in Hitachi


Command Suite window
• Hitachi Universal Replicator performance analysis
• Configure the threshold for network analysis
• Configure the threshold for M-JNL analysis
• Set up global-active device
• Launch Hitachi Replication Manager

Analyzing Hitachi Universal Replicator Performance

• The Universal Replicator (HUR) Performance Analysis window of the Replication tab
provides information for analyzing performance problems with data transfers between
the primary and secondary storage systems

• HUR asynchronously transfers data to the remote site. Delay times occur and differ
depending on the data transfer timing. This delay is known as C/T delta and is an
indicator for recovery point objective (RPO)*

• You can use the UR Performance Analysis window to:

o Identify instances where the C/T delta threshold was exceeded

• View the top 5 copy groups and their C/T delta rates in a chart. You can quickly identify
copy groups that consistently exceed the maximum write time delay (C/T delta threshold)

o Analyze the C/T delta threshold against the performance of primary, secondary,
and network resources. The analysis process supports 2 modes

o Wizard mode: a step-by-step guide that compares trends and helps users
identify the cause of the problem

o Advanced mode: a selection of charts that lets advanced users correlate multiple
trends and identify the problematic resource

Page 18-24
Hitachi Replication Manager
HCS Replication Tab Operations

Configuring Network Bandwidth for Analysis

• The UR Performance Analysis function gives users the option of supplying values for the
effective network bandwidth

• The effective network bandwidth is the actual speed at which data can be transmitted
on a remote path based on the replication environment. Check the network and supply a
proper bandwidth value for each path group

Configuring Metric Thresholds for M-JNL (master journal) Analysis

• The UR Performance Analysis window includes an option to set thresholds for the
following metrics:

o Host Write Transfer Rate to M-JNL (master journal)

o M-JNL Async Transfer Rate to RCU (remote control unit)

o Host write IOPS to M-JNL (IOPS = input/output operations per second)

• The threshold values are used to plot horizontal lines in graphs indicating where the
limit has been exceeded. Although defaults are defined, the values should be based on
the replication environment

*C/T delta = consistency time delta

Consistency time delta is how many seconds the target volume is behind the source volume and
can be interpreted as recovery point objective (RPO), which means how much data would be
lost in case of a disaster.
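
As a rough illustration of how C/T delta relates to RPO, the short Python sketch below computes the delay between a source-side write timestamp and the target-side consistency time and flags a threshold breach. The timestamps and the 60-second threshold are arbitrary example values, not product defaults.

    from datetime import datetime

    # Hypothetical timestamps: the latest write applied on the source volume, and the
    # point in time up to which the secondary (target) volume is consistent.
    source_last_write = datetime(2016, 3, 1, 12, 0, 45)
    target_consistency_time = datetime(2016, 3, 1, 12, 0, 5)

    ct_delta_seconds = (source_last_write - target_consistency_time).total_seconds()
    print("C/T delta: %.0f s" % ct_delta_seconds)   # 40 s of writes would be lost in a disaster

    CT_DELTA_THRESHOLD = 60                          # example threshold in seconds, not a default
    if ct_delta_seconds > CT_DELTA_THRESHOLD:
        print("C/T delta threshold exceeded - RPO at risk")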

Page 18-25
Hitachi Replication Manager
Module Summary

Module Summary

 In this module, you should have learned to:


• Describe the Hitachi Replication Manager functionality and how it integrates
with other Hitachi replication products

Module Review

1. What are the key features of Hitachi Replication Manager (HRpM)?


2. List the components of the Replication Manager configuration.

3. What role does the Hitachi Device Manager (HDvM) agent play in
Replication Manager operations?

Page 18-26
19. Hitachi Data Instance Director
Module Objectives

 Upon completion of this module, you should be able to:


• Describe Hitachi Data Instance Director (HDID) software features and
functions
• Describe what the integration of Hitachi Data Instance Director and Hitachi
Content Platform achieves
• Describe the unified approach to protecting, managing and reducing data
with Data Instance Director

Page 19-1
Hitachi Data Instance Director
HDS Data Protection Strategy

HDS Data Protection Strategy


This section provides information on the data protection strategy offered by Hitachi Data
Systems.

Data Management – Today’s Challenges

INCREASINGLY COMPLEX INFRASTRUCTURE

DATA GROWTH IS OUT OF CONTROL

BUDGETS DO NOT REFLECT REALITY

BUSINESS DEMANDS ARE NOT BEING MET

• The IT infrastructure has gotten extremely complex over time and data is everywhere.

o Many systems, applications

o Many locations

o Virtual machine sprawl

• Data growth is a problem

o Most industries average 40 - 60% annual growth in primary data

o This explodes when you consider “copy data” for backup, disaster recovery, test
and development, audit and e-discovery, archiving and many other needs

• Budgets for IT in general, and data management in particular, have not increased at the
rate of data growth (40% per year) or infrastructure sprawl

• At the same time, line of business managers are demanding increasing availability of
systems, applications and data

o They want you to reduce backup windows, back up more often to improve the
recovery point (RPO) and restore faster (RTO)

Page 19-2
Hitachi Data Instance Director
Focus of Data Protection

Focus of Data Protection

Service-Level-Based Protection: backup windows, recovery point and time objectives
(RPO/RTO), retention, failure tolerance

• Operational Recovery: local restoration of data following an event (lost file, application,
volume or system)

• Disaster Recovery: restore operations from (or at) another physical location (fire, flood,
earthquake, power failure or worse)

• Long-Term Retention: retain data for governance and compliance (data mining,
e-discovery, audits and reference archives)

RTO = recovery time objective

RPO = recovery point objective

HDS approaches data protection, retention and recovery from a business-defined perspective by
addressing the individual service level requirements for different data management scenarios,
including:

• Operational recovery to address events such as a lost file or email, application, data
volume or an entire system. Human error or malicious behavior are the most prevalent
causes of these events, though hardware failures and software bugs also contribute.

• Disaster recovery is required when something impacts the ability to restore operations
locally. This can include common disasters such as fire or flood and will require either
sourcing a copy of the data from another location or restarting operations at another
location.

• Long-term recovery speaks to the need to retain certain data assets for prescribed
amounts of time and the ability to retrieve them within that time frame. The most
common retention policies are set to address regulatory or governance compliance
requirements to keep the assets for data mining and big data applications or as
reference archives such as product manuals.

Page 19-3
Hitachi Data Instance Director
Goals of Data Protection

• Many organizations suffer in one or more of these areas, either by failing to meet the
business service level requirements or by over-spending for higher levels of protection
than are necessary for individual applications and data sets. These service levels include,
but are not limited to:

• Backup window: The amount of time allotted to complete a particular backup job; the
protected applications or systems are often unavailable for normal use during this time,
so shorter backup windows are desired. In some cases such as critical, always-on
applications, any backup window may be unacceptable.

• Recovery Point Objective (RPO) specifies the amount of time and therefore the amount
of new data at risk between backup operations. A shorter RPO equals less data at risk
due to a more granular point-in-time recovery capability.

• Retention specifies both how long to keep a data asset and when to expire or delete it
from the environment. An asset that is retained longer than its prescribed retention
period could actually become a liability.

• Not all data is of equal value and there may be some data assets that are okay to lose.
It is important to understand this tolerance to failure and loss and adjust protection and
spending levels accordingly.

Goals of Data Protection

Across operational recovery, disaster recovery and long-term retention, the goals are to reduce
the amount of data, improve performance and simplify management:

 Reduce primary data by 40% or more
 Reduce backup data by 90% or more
 Reduce or completely eliminate backup windows
 Improve RPO by 95% or more
 Improve RTO from hours or days to seconds or minutes
 Reduce backup administration costs by 50% to 75%

*As compared to traditional backup approaches

Page 19-4
Hitachi Data Instance Director
Goals of Data Protection

As we roll out these new capabilities, we’re able to provide our customers with some impressive
improvements:

• Reduce primary data by 40% or more: Typically, as much as 60%-80% of data in


production systems is static (not actively changing) and not frequently accessed; this
data can be transparently moved to an archive tier of storage, such as Hitachi Content
Platform.

• Reduce backup data by 80% or more: A traditional backup process will force the
periodic completion of a full backup, usually once per week; more than 80% of this
week’s full backup will be redundant to last week’s backup (only 20% new or changed
data, week to week); simple data deduplication will eliminate the duplicated 80%. This
number improves over time as each subsequent full backup is reduced to just the new
data. The data change rate and the number of full backups retained will affect the
overall results.

• Reduce or completely eliminate backup windows: A typical backup window is measured


in hours, sometimes days, for large data sets; this is reduced to seconds or minutes
when using hardware-assisted snapshots, such as Hitachi Thin Image, and completely
eliminated when using true CDP technologies, such as found in Hitachi Data Instance
Director.

• Improve RPO by 95% or more: The load that typical backup processes put on
production systems limits them to being run once per day (each evening) for incremental
backups and once per week for full backups. This leaves about 24 hours of new data at
risk (the time between backups). Performing non-disruptive snapshots once per hour
reduces the amount of data at risk (the recovery point objective) by 95.8% (1/24);
using continuous data protection reduces it to near zero. (The arithmetic is sketched
after this list.)

• Improve RTO from hours / days to seconds / minutes: Again, a function of the speed of
reverting a hardware-based snapshot versus copying the volume of data from backup
media.

• Reduce backup administration costs by 50% - 75%: Most organizations have deployed
multiple backup and disaster recovery tools to handle the diverse and distributed nature
of their data (different operating systems, applications, locations; virtual vs. physical;
servers versus workstations, and so on). Addressing all of these requirements in a single
admin console eliminates silos of licensing, training and certification, on-call personnel,
systems and storage.
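
The RPO figure quoted above is straightforward arithmetic; the quick check below mirrors the example in the text.

    # Back-of-the-envelope check of the RPO improvement quoted above.
    daily_backup_rpo_hours = 24     # nightly backup leaves up to 24 hours of new data at risk
    hourly_snapshot_rpo_hours = 1   # a non-disruptive snapshot every hour

    improvement = 1 - hourly_snapshot_rpo_hours / daily_backup_rpo_hours
    print("RPO improvement: {:.1%}".format(improvement))   # 95.8%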

Page 19-5
Hitachi Data Instance Director
Modern Approach to Data Protection

Modern Approach to Data Protection

Across operational recovery, disaster recovery and long-term retention:

 Reduce the amount of data
• Archiving of static data to self-protected storage
• Incremental-forever data capture
• Deduplication
• Copy data management

 Improve performance
• Storage-based protection
‒ Snapshots
‒ Clones
‒ Sync and async replication
‒ Active-active
• Block-level continuous data protection

 Simplify management
• Mix-and-match technologies
‒ Eliminate “point solutions”
• Centralized administration
‒ Policies
‒ Workflows
‒ Monitoring
‒ Reporting

HDS is addressing these challenges in 3 ways:

• First, reduce the amount of data that needs protecting. This will take the load off of
production systems and reduce the costs of primary storage. We do this through
effective policy-based archiving (or tiering) of data to a totally self-protecting storage
platform (Hitachi Continuous Data Protector). We also reduce the amount of data in the
secondary, or backup systems by:

o Only capturing incremental changes (avoiding redundant weekly full backups)


through deduplication and compression

o Unique copy data management that avoids the need to create additional copies
of data, such as for test and development operations

• Second, we improve backup and restore performance. Eliminate backup windows


through block-level continuous data protection on Windows systems. Our storage
systems include:

o High-performance, best-of-breed hardware-based snapshot

o Cloning and replication technologies that remove data protection processing from
the production environment and provide extremely fast backup and restore

Page 19-6
Hitachi Data Instance Director
Business-Defined Data Protection: Goals

• Third, let’s simplify the data management environment. As technology has evolved with
new systems and applications and operating models, new solutions have been needed
to protect them. Because the “big guys” in backup are very slow to adapt, new “point
solutions” are purchased and deployed to meet these needs.

How many different tools do you have for different operating systems, applications
(RMAN for Oracle, BR*Tools for SAP and so on), virtual servers (Veeam anyone?),
remote offices, and desktops and laptops? Each new tool adds complexity, new costs
and new risks. HDS recognizes the need for different technological approaches to meet
specific service level objectives, but we bundle them all under a single, easy-to-use
administrative interface that enables the creation of policies and data movement
workflows, plus monitoring and reporting.

Business-Defined Data Protection: Goals

Tier / Operational Recovery / Disaster Recovery:

• Critical: drive backup window, RPO and RTO toward zero | continuous availability,
near-instant failover

• Important: faster backup and restore | restore services in hours

• Standard: full, incremental backups | selectively restore in days or weeks

• Long-Term Retention: meet retention requirements; reduce production, storage and
backup costs

RPO=Recovery point objective

RTO=Recovery time objective

One way to simplify this discussion is to break the data into 3 classes of importance, such as
critical, important and standard data.

Page 19-7
Hitachi Data Instance Director
Business-Defined Data Protection: Goals

• Critical data – the things that drive the business and make you special need to be
protected from any loss or outage. These can be your e-commerce website, order
processing and CRM systems. We include large databases here because they can be
critical and protecting them is nearly impossible with traditional backup.

• Important data are things that are in process at a corporate level, like sales and
marketing programs, human resources information, manufacturing and inventory
information. They aren’t absolutely critical to the survival of the organization, but losing
them would have a serious impact.

• Standard data consists of the typical files that we all use in our jobs – spreadsheets,
presentations, documents, and so on. If you were to lose all that data it might be very
impactful to your individual productivity, but in the scope of the entire organization it
probably wouldn’t be a devastating loss.

Then we figure out what we need for local, operational recovery for each of these tiers:

• Critical – we need to back it up as fast and as often as possible (backup window and
RPO), and recover as fast as possible (RTO). Ideally, you would love to drive these SLOs
to zero.

• Important – traditional backup and restore processes have served you well here, but
they need to be faster to deal with the data growth you’re experiencing

• Standard – stick with what’s been working: traditional, scheduled full and incremental
backups. Again, improve performance and scalability where necessary.

For disaster recovery:

• Critical – we need the ability to provide continuous operations and immediately fail-over
in case of a catastrophic failure

• Important – ensure that your recovery capabilities meet your RTO, which can be
measured in hours, but not in days—you could be out of business by then

• Standard – you probably don’t need to restore everything; select what you need and
restore it over time

For long-term recovery, we simply need to meet data retention and expiration requirements.
This should be defined by policy and executed automatically. Additionally, look at this as an
opportunity to reduce overall costs, including in production, storage and backup.

Page 19-8
Hitachi Data Instance Director
Business-Defined Data Protection: Technologies

Business-Defined Data Protection: Technologies

Tier / Operational Recovery / Disaster Recovery (all under unified management):

• Critical: application-aware snapshots, continuous data protection | clustering, sync and
async replication

• Important: snapshots, CDP, or backup-to-disk | async hardware or software replication

• Standard: legacy backup processes | selective replication by backup app

• Long-Term Retention: archiving and tiering to self-protecting storage or to the cloud

Choosing the right technology for each set of requirements is the key to meeting service level
objectives at the least possible cost.

On this slide, we’ve listed the best options that we see in most customer environments.
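
The tier-to-technology mapping above can also be expressed as a simple lookup, which is sometimes a handy planning aid. The sketch below is illustrative only and is not an HDID feature; the keys and wording follow the table on this slide.

    # Illustrative lookup of the mapping above -- a planning aid, not an HDID feature.
    TECHNOLOGY_MAP = {
        ("critical",  "operational recovery"): "application-aware snapshots, continuous data protection",
        ("critical",  "disaster recovery"):    "clustering, sync and async replication",
        ("important", "operational recovery"): "snapshots, CDP, or backup-to-disk",
        ("important", "disaster recovery"):    "async hardware or software replication",
        ("standard",  "operational recovery"): "legacy backup processes",
        ("standard",  "disaster recovery"):    "selective replication by backup app",
        ("any",       "long-term retention"):  "archiving and tiering to self-protecting storage or cloud",
    }

    def recommend(tier, objective):
        # Fall back to the tier-independent entry (long-term retention) when needed.
        return TECHNOLOGY_MAP.get((tier, objective), TECHNOLOGY_MAP.get(("any", objective)))

    print(recommend("critical", "disaster recovery"))   # clustering, sync and async replication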

Page 19-9
Hitachi Data Instance Director
Introduction to Hitachi Data Instance Director

Introduction to Hitachi Data Instance Director


This section provides an introduction to the Hitachi Data Instance Director.

Hitachi Data Instance Director Overview

 Unified (covering operational recovery, disaster recovery and long-term retention)
• Storage-based snapshot, clone and replication orchestration
• Host-based backup, continuous data protection, live backup, archive

 Business-defined
• Policy-based, whiteboard-like workflow interface
• Easily meet complex data availability service levels

 Excellent fit with Hitachi block and file replication


• Supporting the Hitachi VSP family, Hitachi Unified Storage VM and
Hitachi NAS Platform

Hitachi Data Instance Director (HDID) provides unified data protection, enabling the
simplified creation and management of complex, business-defined policies to meet service
levels for availability. It is an excellent fit with Hitachi block and file replication solutions
supporting Virtual Storage Platform (VSP), including VSP F series and VSP G series, Hitachi
Unified Storage (HUS) VM and Hitachi NAS Platform (HNAS).

Data Instance Director offers the orchestration layer for remote replication, supporting
Hitachi True Copy and Hitachi Universal Replicator, local and remote snapshots and clones with
Hitachi Thin Image and Hitachi ShadowImage Replication, continuous data protection and
incremental backup forever, as well as file and email archiving. HDID is a unified data
protection solution that competitors cannot match. Hitachi Data Systems has the full solution:
the software to orchestrate and execute data protection, array-based replication software and
of course, the object, file, block and server platforms.

Page 19-10
Hitachi Data Instance Director
A Common Scenario

A Common Scenario

I need
 Local backup every hour to minimize RPO
 Monthly backup to keep for 7 years
 Daily copy to refresh test and development operations
 Real-time data mirror to a standby site for disaster recovery
 Older data moved to a less expensive tier of storage

With Hitachi Data Instance Director, I get


 A single solution that does all these things in a
single workflow
 Less load on my server because it copies new
data only once
 More sleep at night!

Eliminate the Backup Window Problem

 Continuously capture only new and changed data


 Automate and orchestrate storage-based snapshot, clone and remote
replication to improve recovery point objectives (RPO)

• Challenge: Impossible to meet backup window and SLAs

o Data growth and virtualization make meeting windows nearly impossible.

o Shrinking backup windows from 24/7 operations make SLAs difficult.

Page 19-11
Hitachi Data Instance Director
Eliminate the Backup Window Problem

o SLAs are becoming more stringent – application uptime, 24/7, critical data . . .

• Solution: Hitachi Data Instance Director (HDID)

o Data Instance Director eliminates the backup window by managing data


instances intelligently

o Move MS Exchange emails to Hitachi Content Platform – enables long-term


archiving, as well as reduces amount of full and incremental backups

o Real-time data capture and movement – copies are made immediately after data is
created, so there are no artificial windows or constraints

o Abolish spike loads on VMs, servers, network

o Granular RPO to the minute

o Same data instances are used for other recovery purposes

• Benefits / Values

o Total control of RPO

 Set your recovery point objectives to your business needs. Mix and match
batch and continuous data movement anywhere it is needed.

o Archive email to Content Platform

 Improved long-term retention

 Near instant access to archived emails – more available

 Improved asset and resource utilization

o Workflow and dataflow

 For ease of use and quick setup, HDID provides a workflow user interface
giving you the power to set the data flow and replication policies per your
exact needs.

o Performance enhanced

 Global deduplication, together with byte-level transfer, drastically shrinks your
backup window and can often eliminate it completely. Restores are performed
without rehydration, accelerating your recovery time to the maximum possible
speed

Page 19-12
Hitachi Data Instance Director
Eliminate the Backup Window Problem

o Real-time data capture

 Continuous data protection is an option within HDID. Achieve consistent
snapshot RPO down to the last minute. If you need data recoverable down to
the last second of data received, HDID does it. Our ability to capture and
transfer data in real time to intelligent storage repositories gives you the edge
you need for maximum protection.

o HDID takes away batch processes, network loads and performance hits to
production servers and virtual machine environments.

o Combines backup data streams with other data protection methods to increase
recovery and retention options in ways that legacy backup products simply
cannot do

o Unify multiple protection methods to handle local, remote and virtual data

 Total control of RPO by setting RPOs to your business needs


 Ease of use and quick setup of workflow and data flow
 Real-time data capture enables maximum protection and consistent
snapshot

Page 19-13
Hitachi Data Instance Director
Easily Transform Backup Designs Into Policies (A Real Customer Example)

Easily Transform Backup Designs Into Policies (A Real Customer


Example)

Currently: Each of the


relationships represents a
unique policy or process
(unique script required)

HDID: Workflow created in


less than 10 minutes
 Compared to 2 days

This slide is from an actual customer engagement. The image on the left is a photo of a process
flow that the customer described in the meeting, and noted that it would take him 2 days to
create this in his current EPIC environment, using IBM Tivoli Storage Manager. Our team
created the same flow in the HDID user interface in less than 10 minutes.

Page 19-14
Hitachi Data Instance Director
What Are the Benefits of Hitachi Data Instance Director?

What Are the Benefits of Hitachi Data Instance Director?

 Application support for SQL Server, Exchange and Oracle in block


environments

 Application support for Oracle RAC on Hitachi NAS Platform

 Support for Hyper-V and VMware on host agents

 Tier to Hitachi Content Platform or Microsoft Azure for Microsoft SQL


and Exchange using host agents

 Simplified management and orchestration of application consistent


snapshots and clones for the Hitachi Virtual Storage Platform G1000 /
Hitachi Virtual Storage Platform / Hitachi Unified Storage VM and NAS
Platform


 Orchestration of remote replication and file and object replication for the
Virtual Storage Platform G1000 / Virtual Storage Platform / Unified
Storage VM and HNAS

 Host-based continuous data protection for non-Hitachi storage platforms

 GUI-based restores

 Built-in advanced scheduler

RAC= Oracle Real Application Clusters

Page 19-15
Hitachi Data Instance Director
Features and Capabilities

Features and Capabilities


This sections provides information about the features and capabilities of the Hitachi Data
Instance Director (HDID).

Advanced Features to Modernize Your Data Protection


Infrastructure

Built-in capabilities for cost savings, comprehensive protection, unified recovery, and advanced
copy data and retention management:

• Incremental-forever data capture can reduce backup storage needs by > 90%
• Archive files and emails to Hitachi Content Platform; file archiving to Microsoft Azure
• Network and staff-friendly solution for remote offices
• Built-in offsite replication for disaster recovery
• Bare metal recovery to physical and virtual servers
• Application-consistent snapshot for Microsoft Exchange, Microsoft SQL Server, Oracle
and others
• Streamlined, off-host operations for virtual environments
• File versioning to capture every change as it is saved

HDID includes many built-in features that add value.

1. Reduce costs

Incremental-forever data capture

• Incremental forever has several huge advantages over the traditional full + incremental,
grandfather-father-son models of backup. No full backups on the weekends, no
unnecessary duplication of data, and faster, more reliable restores.

• In a typical environment (100TB of production data, 0.2% daily change rate):

o Incremental forever vs. full + differential = 91.6% savings

o Incremental forever vs. full + incremental = 91.5% savings

Page 19-16
Hitachi Data Instance Director
Advanced Features to Modernize Your Data Protection Infrastructure

• With 12 weeks retention (the arithmetic is reproduced in the sketch after this list):

o Full + differential needs 1,336TB

o Full + incremental needs 1,312TB

o Incremental forever needs 112TB

• See full presentation on this


at: http://myhds.hds.com/portal/public/classic/assetDetails?asset=HDSIT_138450&navid
=1039

• Read the blog at: http://blogs.hds.com/hdsblog/2013/10/how-to-reduce-backup-


storage-requirements-by-more-than-90-without-data-deduplication.html
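
The retention figures quoted in this example can be reproduced with a short calculation. The sketch below assumes 13 retained weekly fulls and 5 changed days per week; those two assumptions are inferred to match the quoted numbers rather than stated in the original.

    # Reproduce the 12-week retention figures above: 100TB of production data,
    # 0.2% daily change rate, 5 changed days per week, 13 weekly fulls retained.
    primary_tb = 100
    daily_change_tb = primary_tb * 0.002          # 0.2TB of new or changed data per day
    weeks = 12

    full_plus_incremental = (weeks + 1) * primary_tb + weeks * 5 * daily_change_tb
    full_plus_differential = (weeks + 1) * primary_tb + weeks * sum(
        day * daily_change_tb for day in range(1, 6))
    incremental_forever = primary_tb + weeks * 5 * daily_change_tb

    print(full_plus_differential, full_plus_incremental, incremental_forever)  # 1336.0 1312.0 112.0
    print("savings vs full + incremental: {:.1%}".format(
        1 - incremental_forever / full_plus_incremental))                      # 91.5%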

E-mail Archiving

• Tight HDID integration with HCP and Windows Azure

• Solve long-term retention needs

• Improve email system performance

• Near-instant access to archived emails

• Lower operational and equipment costs

Remote Offices

• Centrally controlled and monitored

• Data is transferred as it is created and trickles through the WAN for increased efficiency

• Smart remote sync between offices

2. Built-in recovery capabilities

Off-site replication for disaster recovery

• Network and storage friendly – sends only changed data blocks, which can be
deduplicated

• Schedule for periods of less network activity

• Use any supported storage repository

Page 19-17
Hitachi Data Instance Director
Advanced Features to Modernize Your Data Protection Infrastructure

Bare Metal Recovery

• No need to create a separate backup; HDID BMR will restore the OS (C:\) volume from
the normal backup

• Restore to a similar or dissimilar hardware platform – some new driver installation may
be required

• Restore to a physical or a virtualized server

o Excellent solution for migrating and cloning of entire systems

3. Advanced protection capabilities

File Versioning

• Capture EVERY version of a file, as every change is saved

o Other products only capture the latest version at the time of backup / archive

• Each new version is indexed

o Makes search and auditing a breeze

• Configuration of versioning takes seconds

• Easily mix with other operations
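
Conceptually, version-on-save means copying the file aside and indexing its metadata every time it is written. The sketch below illustrates that idea only; it is not HDID code, and the store location and index structure are invented for the example.

    import hashlib, os, shutil, time

    # Minimal sketch of version-on-save with a searchable index -- conceptual only, not HDID code.
    VERSION_STORE = "versions"
    INDEX = []                                    # stands in for the searchable metadata catalog

    def capture_version(path):
        """Copy the current file content aside and record metadata for later search."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()[:12]
        os.makedirs(VERSION_STORE, exist_ok=True)
        dest = os.path.join(VERSION_STORE, os.path.basename(path) + "." + digest)
        shutil.copy2(path, dest)                  # keep this version exactly as it was saved
        INDEX.append({"file": path, "version": digest,
                      "size": os.path.getsize(path), "captured": time.time()})
        return dest

Each call to capture_version() after a save adds one entry to the index, which plays the role of the searchable metadata catalog described above.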

Virtual Environments

• VMware: off-host, non-disruptive protection

o Leverages vSphere APIs for Data Protection (VADP) and Changed Block Tracking
(CBT)

o Flexible restore options, such as:

 Restore a VM to its original host, or another host


 Clone a VM, or restore to any specified datastore on any host
• Hyper-V: super-efficient in-guest solution

o Block-level, incremental data capture; deduplication; replication

o Application-consistent protection of Exchange and SQL

• Improve VM RPO from hours to 1 minute

Page 19-18
Hitachi Data Instance Director
Quantifiable Benefits

Application-consistent protection

• Integrated snapshot and clone support for Exchange, SQL Server and Oracle

o Captures all elements of the last transaction

• Other applications can be supported with custom pre- and post-scripts

o Put the application into a backup-ready state, call the copy operation, release the
application (a minimal wrapper of this pattern is sketched after this list)

• No impact on application performance

• One-Touch Recovery

o Restore the application and data in one pass

o Significantly reduces the recovery time

It all adds up to a unified copy data management solution
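
The pre-/post-script pattern mentioned above (quiesce, copy, release) can be wrapped in a few lines. This is a generic sketch, not HDID code; the script names in the commented call are placeholders.

    import subprocess

    # Generic quiesce / copy / release wrapper for applications without built-in integration.
    def protected_copy(pre_cmd, copy_cmd, post_cmd):
        subprocess.run(pre_cmd, check=True)        # put the application into a backup-ready state
        try:
            subprocess.run(copy_cmd, check=True)   # call the snapshot/clone/copy operation
        finally:
            subprocess.run(post_cmd, check=True)   # release the application in all cases

    # protected_copy(["./freeze_app.sh"], ["./take_snapshot.sh"], ["./thaw_app.sh"])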

Quantifiable Benefits

Features and benefits:

• Choose the right technologies for the job: backup, CDP, snapshot, replicate, archive,
tier; eliminate the need for point solutions

• Block-level incremental-forever data capture (snapshot, CDP): eliminate redundant
copies, reduce backup storage requirements by 90% or more

• Hardware-based snapshot orchestration: eliminate backup windows and strain on
production servers; enable more frequent protection (RPO) and fast recovery (RTO)

• Unique graphical interface: simplify administration; eliminate the costs and risks of
managing multiple tools

• Data instance management: reduce the amount of copy data by using 1 backup copy for
multiple purposes (test/development, audit)

• Archiving and tiering: reduce primary storage requirements by 40% or more; enable
retention policies for compliance

• Integration with Hitachi Virtual Infrastructure Integrator: view and restore Virtual
Infrastructure Integrator-managed VM-level storage snapshots from the HDID user
interface, providing a level of central visibility and control

CDP = continuous data protection

Page 19-19
Hitachi Data Instance Director
Storage-Based Protection With HDID

Storage-Based Protection With HDID

Capabilities for Block With HDID

RAC = Oracle Real Application Clusters

Page 19-20
Hitachi Data Instance Director
Capabilities for HNAS With HDID

Capabilities for HNAS With HDID

 Hitachi NAS Platform file clone
• Snapshot-based replication with directory-level granularity
• Allows read access to the target

 NAS Platform directory clone
• Allows space-efficient, read/write file-consistent clones
• Per directory, including subsidiary directories/files
• Allows quick creation and rollback of an entire directory with a single operation

 Application support on HNAS
• Oracle RAC

(Figure: primary and clone file systems accessed by applications/DBMS, with asynchronous
replication managed through the SMU.)

RAC = Oracle Real Application Clusters

• HNAS file clone

o Snapshot-based replication with directory-level granularity, allowing backup data
to be reduced through file exclusion

o Allows read access to the target (can control access through SysLock)

o Creating snapshots with the “Snapshot rule” allows selecting a specific source
snapshot for refreshing the replication, or a target snapshot for rollback during a
disaster recovery scenario (see #SBP0304 for details)

o Clones are not supported. (The files are re-hydrated on the target and lose their
space efficiency)

o Not currently “firewall friendly” because EVS IP addresses (on source and target)
must be reachable from the SMU public IP address

• HNAS directory clone

o Allows space efficient, read/write file-consistent clones per directory (including


subsidiary directory/files); Directory clone can only be created in an empty
directory, and cannot be created in the root of the file system (must be
subdirectory)

Page 19-21
Hitachi Data Instance Director
Capabilities for Host-Based Operational Recovery With HDID

o Allows quick creation/rollback of entire directory with a single operation.

o This is suitable for structured data with multiple files (general application /
DBMS)

o Not an atomic operation for the entire directory (requires application-aware
software for correct operation)

Capabilities for Host-Based Operational Recovery With HDID

 Continuous data protection and software snapshots
• For critical Microsoft environments
• Captures every block-level change as it is written
• Drives backup window and RPO to near zero
• Integrated with Microsoft Volume Shadow Copy Services (VSS) for
application-consistent recovery

 Batch backup
• Traditional incremental or full backups for Microsoft and Linux file systems

(Figure: production data flows from the protected host to the HDID server and into the HDID
repository.)

Page 19-22
Hitachi Data Instance Director
Hitachi Data Instance Director Block Orchestration

Hitachi Data Instance Director Block Orchestration

Capabilities for HCP With HDID

 Archive is different from backup


• Backup = a copy for operational and disaster recovery
• Archive = a copy for long-term retention
 With Hitachi Data Instance Director, you can have both
in a single solution
• For Exchange and Windows file systems
• Tier to Hitachi Content Platform (email and files) or
Microsoft Azure (files)
 HDID is built on search engine technology
• Manages the metadata to easily find and retrieve archive
data

Real-time archiving

Everyone knows that a backup is not an archive. An archived file is a file "version" with
sufficient metadata attached that allows for easy search and retrieval. Backup products can't do
that. Also, backups give you multiple points of recovery in time for your whole data set or
specific parts of it. Archive products can't do that.

Page 19-23
Hitachi Data Instance Director
Archive File and Email Objects to HCP

• But then came Data Instance Director . . . and HDID can do both

• When you take that version for archive, be it once a month or 2 seconds ago when that
last change took place, it is your choice. Searching for that file with rich metadata
variables or clicking through your file system directory structure? . . . also your choice

Repository search

• The ability to search repository data through a wide range of definable parameters
(such as file extension type, user, owner and many others), and to browse past
snapshot histories, is not just a great way to empower user-level restores; it is also an
important way for IT managers and users to understand the overall data itself

Archive File and Email Objects to HCP

 “Archive more, back up less”


• Reduce data to back up by 60% or more

 Save storage costs, solve long-term retention

Microsoft Exchange, Hitachi Data Hitachi Content


Windows Server Instance Director Platform

Solution: Hitachi Data Instance Director

• Data Instance Director reduces the backup window by offloading inactive email to
Content Platform (HCP), removing 60-70% or more of stagnant email from the backup set.

• Move MS Exchange emails to HCP – enables long-term archiving, as well as reduces


amount of full and incremental backups

Page 19-24
Hitachi Data Instance Director
Archive File and Email Objects to HCP

Benefits/Values

• Archive email messages and attachments to HCP

o Improve long-term retention

o Near instant access to archived emails – more available

o Improve asset and resource utilization

 Hitachi Data Instance Director (HDID)


leverages all that is special about
Hitachi Content Platform
• Highly scalable, self-protected storage
• Metadata index and search
 Data Instance Director keeps only the
file metadata in its repository
• Content Platform stores the data
• Enables HDID archive to scale massively
• Email preview without restore
 Stub (HSM) or remove (archive)

HSM = hierarchical storage management

Page 19-25
Hitachi Data Instance Director
HDID Complementary Products

HDID Complementary Products

 Operational recovery/test-development/disaster recovery


snapshots/clones
• Hitachi Thin Image (HTI) snapshot
 Storage for Hitachi Thin Image pool for primary and disaster recovery (if
applicable)
 In-system license capacity = PVOL capacity + HTI pools for primary and
disaster recovery (if applicable)
• ShadowImage clones
 Storage for ShadowImage SVOLs for primary and disaster recovery (if
applicable)
 In-system license capacity = PVOL capacity + SVOLs capacity for primary
and disaster recovery (if applicable; both sizing rules are sketched after this list)

 Disaster recovery
• TrueCopy Synchronous/Hitachi Universal Replicator
 Storage for PVOLs and SVOLs
 Hitachi Disaster Recovery bundle license on primary and secondary array
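
The in-system license sizing rules above are simple sums; the sketch below just transcribes them, and the 50TB/10TB example figures are arbitrary.

    # Direct transcription of the in-system license sizing rules above; the TB figures are examples.
    def hti_license_tb(pvol_tb, hti_pool_primary_tb, hti_pool_dr_tb=0):
        # Thin Image: P-VOL capacity plus the HTI pools for primary and DR (if applicable)
        return pvol_tb + hti_pool_primary_tb + hti_pool_dr_tb

    def shadowimage_license_tb(pvol_tb, svol_primary_tb, svol_dr_tb=0):
        # ShadowImage: P-VOL capacity plus the S-VOL capacity for primary and DR (if applicable)
        return pvol_tb + svol_primary_tb + svol_dr_tb

    print(hti_license_tb(50, 10))           # 60: 50TB of P-VOLs plus a 10TB HTI pool at the primary site
    print(shadowimage_license_tb(50, 50))   # 100: 50TB of P-VOLs plus a 50TB clone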

Page 19-26
Hitachi Data Instance Director
Unified Management

Unified Management
This section provides an overview of the features and benefits of unified management
software provided by Hitachi.

How Many Backup Solutions Do You Use?

63% of
organizations
say they use
more than one
solution

Data Protection Is Complicated


Infrastructure: applications, platforms, operating systems, locations

Threats: lost files and emails, system failures, site-level disaster

Service level objectives: backup window, RPO/RTO, retention, budget

Technologies: batch backup, continuous data protection, snapshot, replication, archive or tier,
cloud

Performing backups and recoveries used to be fairly routine back when the IT infrastructure
was fairly simple and homogeneous. Not anymore.

Page 19-27
Hitachi Data Instance Director
Data Protection is Complicated

• First, there are many application, platform (physical, virtual, cloud) and data types and
each requires its own specific methods and processes to protect data correctly, often
requiring scripts or interfacing with APIs

• Then there are the different types of threats and each of these requires a different
approach to protection and recovery. For example, you wouldn’t want to restore
an entire system just to recover a single file or email. And when you have a major
disaster, such as a fire or earthquake, you’ll want to recover the data from or at another
location

• Besides different applications, there are also different operating systems, so you’ll need
specialized agents and processes for each

• The same with different locations – data centers, disaster recovery sites, regional
headquarters, remote and branch offices, home offices, each with different levels of
requirements and different levels of local IT skills available.

• As you can see, you can have a lot of complexity in your data protection environment –
varied infrastructure mapped against a number of different threats.

• Of course, not all data is created equal and to address this we have a number of
different service level objectives. These SLOs are defined to meet the (sometimes
contradictory) needs of the business.

• The first SLO is the backup window – this defines how long you can take to perform the
backup operation, whether it’s to protect against a file loss, a system failure or a site
outage. These may be different policies, and often are

• Next is the recovery point objective (RPO). This defines how often the data is protected.
A nightly backup defines an RPO of 24 hours. This also means that you are leaving up to
24 hours of your newest data at risk between backup jobs. That might not be
acceptable for more important or critical applications and data

• Recovery time objective (RTO) defines how fast the recovery should be accomplished
when something goes wrong. For example, a short RTO would point you to storing the
backup copy locally on disk so you can restore it fast

• Finally, there’s always the budget limitation. Applying the best protection techniques
across all of your data may be prohibitively expensive. Apply the best and fastest to only
your most critical data and use less expensive techniques for less important data

Page 19-28
Hitachi Data Instance Director
Data Protection is Complicated

• All of these many combinations of infrastructure, threats and service level objectives
(SLO) lead to a number of different technologies to meet these needs. No single
technology or solution works best for everything, but HDS is striving toward that goal.

• The most prevalent form of data protection is backup—either full or incremental or a


combination of both. Backup is fine for low level SLOs – relatively long backup windows,
24 hour RPOs and RTOs measured in hours or days. Backup doesn’t work well or at all
for more critical data. Standard backup performance can be significantly improved by
using Hitachi Protection Platform (HPP), a purpose-built backup appliance / virtual tape
library, as the repository target

• Continuous data protection captures every change as it is written to the disk and
therefore avoids the need for a backup window. It also provides an RPO of near zero.
But it consumes a lot of backup disk storage since it captures every change that is
written, as compared to the point-in-time differences found in other models. Continuous
data protection is appropriate for your most critical data, but probably in combination
with either periodic snapshots or backup to reduce the continuous data protection
storage footprint

• Snapshot technologies, such as Hitachi Thin Image, especially those embedded within
storage systems, are a modern way of capturing changed data in a fast, frequent and
efficient manner. They store the data locally on the same storage array as the primary
data, so they aren’t suitable to protect against system and site failures on their own, but
they do eliminate the backup window and enable more frequent RPOs and much faster
RTOs for operational recovery

• Replication, also known as mirroring, sends a copy of the data to another location for
recovery following a disaster. There are several forms of replication, including storage-
based mechanisms that are synchronous (metro distances) and asynchronous (global
distances); as well as replication of the backup repository within most backup software
applications

• An effective way to improve overall IT costs and performance is to move inactive data
from production systems to an archive tier of storage. This movement should be policy
based, selecting files by their creation date, last access or modification date, application
and data type, owner, or other factors (a minimal selection policy is sketched at the end
of this section). Using Hitachi Content Platform (HCP), which includes many
self-protecting capabilities, as the archive object repository eliminates the need to back
up the archive

Page 19-29
Hitachi Data Instance Director
Which Data Protection Options to Choose?

• Cloud storage is also becoming a popular, though potentially risky target for backup and
archive data. Cloud services provide a monthly, pay-as-you-grow subscription model
which can be very appealing in some situations. Security, resiliency, reliability and long-
term viability are all things to consider when choosing a public cloud provider.

Overall, you can see that a fairly typical environment can require hundreds or even thousands
of policies that drive a number of protection, retention and recovery tools.
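
As a concrete illustration of policy-based selection for archiving or tiering, the sketch below walks a directory tree and picks files by age and type. The 180-day threshold and the extension list are example values only, not HDS defaults.

    import os, time

    # Sketch of a selection policy for archiving/tiering; thresholds and types are examples.
    ARCHIVE_AFTER_DAYS = 180
    ARCHIVE_TYPES = {".pdf", ".docx", ".xlsx", ".pst"}

    def archive_candidates(root):
        cutoff = time.time() - ARCHIVE_AFTER_DAYS * 86400
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                # Select by last access/modification date and by data type.
                if max(st.st_atime, st.st_mtime) < cutoff and \
                   os.path.splitext(name)[1].lower() in ARCHIVE_TYPES:
                    yield path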

Which Data Protection Options to Choose?

• Batch backup: operational recovery for noncritical data

• Continuous data protection: near-zero data loss for critical data

• Snapshot: application-aware, fast, low impact

• Replication: offsite disaster recovery, failover

• Archive / tier: long-term retention

Administrators do have a lot of choices on how to address specific data protection, retention
and recovery requirements. Our vision is to simplify these choices by providing a comprehensive,
unified solution.

In today’s world, the questions are:

• Which ones do you choose?

• Do they work together to provide a holistic solution?

• Do they overlap – creating more copies of data?

• Can you afford them all in regards to hardware, software, services and personnel?

Page 19-30
Hitachi Data Instance Director
Which Data Protection Options to Choose?

Traditional backup is still the obvious choice for many applications and data types and is often
used in addition to some of the more advanced technologies, such as snapshots and replication,
to provide a point-in-time copy, longer retention, and so on. The downside of backup is the
amount of time it takes to run a backup operation; applications usually need to be stopped
for the duration of that “backup window”. A related challenge is RPO. Since backup causes
downtime, backups are often run at night, leaving a whole day’s worth of new data at risk.

Continuous data protection eliminates the backup window and provides a near-zero RPO (much
less risk of data loss), but can put additional load on production servers, networks and storage.

Snapshots can be performed either in software (e.g. Windows Volume Shadow Copy Service
[VSS]) or in hardware (e.g. Hitachi Thin Image). These point-in-time solutions are fast and can
be run frequently, but by themselves they don’t address disaster recovery requirements.

Replication is a method of moving data to another site for disaster recovery and is often paired
with backup, continuous data protection or snapshots. By itself, however, replication does not
address data deletion or corruption threats – you end up with 2 copies of deleted / bad data.

Archiving is a great method to reduce the strains of backup and restore by moving older / static
data to a lower cost tier of storage, often tape but also an object store like Hitachi Content
Platform (HCP).

Backup-as-a-Service cloud offerings are starting to become the solution of choice for a lot of
smaller organizations and remote offices because they can eliminate most of the complexity and
administration and potentially reduce costs. However, there are challenges:

• Data security: Who can see your data?

• Access: Do you have a big enough Internet or WAN connection?

• Resilience: What happens when the cloud service breaks?

Page 19-31
Hitachi Data Instance Director
When Data Disaster Strikes

When Data Disaster Strikes

 Will the right person, with the right training, log into the right system and restore the
right data to the right place in a timely manner, without making anything else worse?

“93% of companies that lost their data for 10 days or more filed for bankruptcy within
one year of the disaster, and 50% filed for bankruptcy immediately”
– Source: U.S. National Archives and Records Administration

As you add point solutions to handle specific needs, you are also adding cost, complexity and
risk.

Workflow-Based Policy Management

Whiteboard your data


protection policies
and workflows

IT managers and CIOs use whiteboards to create workflows and business processes. Hitachi
Data Instance Director does this as well. Use it like a whiteboard to create policies and data
flows, then enable it all within your environment with the click of a button.

Page 19-32
Hitachi Data Instance Director
Unique Graphical User Interface

Unique Graphical User Interface

 Build protection policies like you build


business policies

 Drag and drop process elements to


quickly and easily build complex
policies
• Protection type and data retention
• Application and file types
• Data paths and movement triggers
• Storage target and more

New: Multitenancy Support

 Support multiple tenants (users) accessing Hitachi block devices at the same time

 Each tenant has their own
• Hitachi Data Instance Director GUI with restricted visibility based on access rights
• Access rights to a restricted set of resources, including pools, ports, host groups and
logical devices

 Add modern data protection to IT-as-a-service offerings

(Figure: users/customers consume IT as a service, including HDID data protection, on shared,
partitioned resources.)

• Support multiple tenants (users) accessing Hitachi block devices at the same time. Each
tenant has their own management GUI and access rights to a restricted set of resources,
including pools, ports, host groups and logical devices.

Page 19-33
Hitachi Data Instance Director
Example Deal With HDID

• Further positions HDS storage as the right choice for IT service providers, with the
ability to add modern, high-performance data protection services, supporting their
customers’ operational recovery and disaster recovery requirements.

 Restrictions configurable at the


repository (intelligent storage
manager) level for security

 Profile editor for ease of


management

Example Deal With HDID

 50TB database needs protecting


 Hitachi Data Instance Director software for 50 TBs of source data
 Production array
• 10-70 TB more disk for clone and/or snap space plus all G1000/VSP/HUSVM
hardware/software
• Hitachi In-System Replication bundle (HTI/ShadowImage) for that space (70-
110TB license)
 Protect 50TB database with 1 clone (50TB) and up to 1024 snaps (20%
change rate at 10TB) for test/development
 Up to 1024 snaps (20% change rate at 10TB) for operational recovery
• Remote replication bundle for 50TBs

Page 19-34
Hitachi Data Instance Director
Demo

 Disaster recovery array


• 50-70TB disk for Hitachi Universal Replicator S-VOL and clone and/or snap
space plus all Hitachi Virtual Storage Platform G1000/ Hitachi Virtual Storage
Platform / Hitachi Unified Storage VM hardware/software
• ISR bundle (Hitachi Thin Image/ShadowImage) for that space (20TB license)
• Remote Replication Bundle for 50TBs

Demo

 http://edemo.hds.com/edemo/OPO/HitachiDataInstanceDirector_HDI
D/HDID/HDID.html

Page 19-35
Hitachi Data Instance Director
Online Product Overview

Online Product Overview

 Hitachi Data Instance Director (HDID)

https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv

Page 19-36
Hitachi Data Instance Director
Module Summary

Module Summary

 In this module, you should have learned to:


• Describe Hitachi Data Instance Director software features and functions
• Describe what the integration of Hitachi Data Instance Director and Hitachi
Content Platform achieves
• Describe the unified approach to protecting, managing and reducing data
with Data Instance Director

Page 19-37
Hitachi Data Instance Director
Module Review

Module Review

1. What does the integration of Hitachi Data Instance Director and Hitachi
Content Platform achieve?
A. Solves long-term retention needs
B. Improves system performance
C. Provides near-instant access to archive files and emails
D. Lowers operational and equipment costs

Page 19-38
20. Hitachi NAS Platform
Module Objectives

 Upon completion of this module, you should be able to:


• List Hitachi NAS Platform (HNAS) models
• Describe HNAS Platform architecture
• Describe basic HNAS concepts
• Discuss the integration of HNAS with global-active device

Page 20-1
Hitachi NAS Platform
Features

Features

 Unified storage to consolidate file, block and object data

 High performance hardware-accelerated architecture, 99.999%


availability and expansive scalability with the ability to cluster from 2 up
to 8 nodes and clustered namespace (aka single namespace)

 Fast, nondisruptive deduplication that supports up to 4 high speed


deduplication engines

 File cloning and high-speed object based replication

 Unified management – uses Hitachi Command Suite storage


management software

 Intelligent file tiering – enables policy-based tiering within the system to alternate
drive types or to 3rd-party targets, including Hitachi Content Platform

 Application-aware data protection – integrated backup and recovery


with Hitachi Application Protector

Page 20-2
Hitachi NAS Platform
Hitachi NAS Platform Single-Node Portfolio

Hitachi NAS Platform Single-Node Portfolio

(Positioning chart, price versus features/capacity/performance:)

• 3080: 41K IOPS per node, 4PB max capacity
• 3090: 73K IOPS per node, 8PB max capacity
• 3090 PA: 96K IOPS per node, 8PB max capacity
• 4040: 65K IOPS per node, 4PB max capacity
• 4060: 70K IOPS per node, 8PB max capacity
• 4080: 105K IOPS per node, 16PB max capacity
• 4100: 140K IOPS per node, 32PB max capacity

(Chart annotation: license key / model dongle)

Performance numbers are used for comparison purposes only. Hitachi NAS Platform 3090 is
shown without and with Hitachi NAS Performance Accelerator; NAS Platform 3090 PA is the
configuration with Performance Accelerator installed. For more exact, customer-facing numbers,
consult the appropriate and updated performance
documents: http://www.spec.org/sfs2008/results/sfs2008.html

3080 = Hitachi NAS Platform 3080 (is planned to be EOS for new sales July 2015)
3090 = Hitachi NAS Platform 3090 (is planned to be EOS for new sales July 2015)
3090 PA = Hitachi NAS Platform 3090 including Performance Accelerator license
4040 = Hitachi NAS Platform 4040
4060 = Hitachi NAS Platform 4060
4080 = Hitachi NAS Platform 4080
4100 = Hitachi NAS Platform 4100

The above specifications are according to HNAS line card Version 12.0.3528.04 (last revised:
07/16/2015 )

Page 20-3
Hitachi NAS Platform
Hitachi NAS 2-Node Cluster Portfolio January 2015

Hitachi NAS 2-Node Cluster Portfolio January 2015

(Positioning chart, price versus features/capacity/performance, 2-node clusters:)

• HNAS F1140: 13K IOPS, 336TB max capacity
• 4040: 130K IOPS, 4PB max capacity
• 4060: 140K IOPS, 8PB max capacity
• 4080: 210K IOPS, 16PB max capacity
• 4100: 280K IOPS, 32PB max capacity

• Although the Hitachi NAS Platform (HNAS) F1140 and F1120 are based on a different
hardware platform, they belong to the complete HNAS offering. The HNAS F series is not
covered in further detail, as this is a different offering.

• The HNAS F1140 can also be ordered in a single node configuration.

• The HNAS F IOPS figure is an internal estimate, and the TB limit is restricted by the HDD count.

• The HNAS F1140 is capable of supporting up to 1PB of data.

Page 20-4
Hitachi NAS Platform
The Family of HUS File and HNAS Models

The Family of HUS File and HNAS Models

Model: F1140 | 4040 | 4060 | 4080 | 4100

• # Nodes / Cluster: 2 nodes | up to 2 nodes | up to 2 nodes | up to 4 nodes | up to 8 nodes
• Arrays Supported: HUS 110 (direct connect) | HUS 100 family | HUS, HUS VM, VSP,
VSP G1000 (4060, 4080 and 4100)
• 2-node IOPS: ~12,800 (note 4) | 130,000 (note 2) | 147,957 (notes 1,2) | 209,519 (notes 1,2)
| 293,128 (notes 1,2)
• Throughput (MB/sec): up to ~1,300 (note 4) | up to 700 (note 3) | up to 1,000 (note 3)
| up to 1,500 (note 3) | up to 2,000 (note 3)
• Max Useable Capacity: 336TB (note 5) | 4PB | 8PB | 16PB | 16PB
• Primary Storage Deduplication: file-level single instancing | block-level, hardware
accelerated (all other models)
• File System Pool Size: 336TB (note 5) | 256TB (all other models)
• Object-based Replication: no, file-asynchronous | yes (all other models)
• Intelligent File Tiering: yes (all models)
• Single Namespace: no | up to max usable capacity (all other models)
• IDC Price Band: 5 | 5 | 6 | 7 | 8

Notes:

1. Based on SPECsfs_2008 NFSv3 Benchmark

2. Dual-node configuration

3. With mixed read/write workloads

4. Internal estimates, limited by HDD count

5. F1140 capable of 1PB, limited by HDD count

6. Using Performance Accelerator software

No 10GbE, SMB Signing, scalability of 4040, basic differences with 3090 and so on

Heap settings

• Max CIFS connections: min 7,490, default 24,000, max 24,000

• Max open files: min 22,490, default 72,000, max 72,000

Performance numbers are only used for comparison purposes. HNAS 3090 is shown without and
with Performance Accelerator. HNAS 3090 PA is with Performance Accelerator installed. For
more exact and customer facing numbers consult the appropriate and updated performance
documents.

Page 20-5
Hitachi NAS Platform
The Family of HUS File and HNAS Models

http://www.spec.org/sfs2008/results/sfs2008.html

• 3080 = Hitachi NAS Platform 3080

• 3090 = Hitachi NAS Platform 3090

• 3090 PA = Hitachi NAS Platform 3090 including Performance Accelerator license

• Licensing

o Performance Accelerator is a licensed feature and will only be enabled if the


Performance Accelerator license is present

• Performance Accelerator is supported on:

o NAS 3090 only

 Performance Accelerator is installed by:


o Installing a Performance Accelerator license

o Performing a full system reboot

 If clustered, reboot one node at a time

• 4040 = Hitachi NAS Platform 4040

• 4060 = Hitachi NAS Platform 4060

• 4080 = Hitachi NAS Platform 4080

• 4100 = Hitachi NAS Platform 4100

Page 20-6
Hitachi NAS Platform
System Hardware (Front View)

System Hardware (Front View)

 NVRAM battery backup

 Dual, hot swappable HDDs

 Dual, hot swappable fans

Hitachi NAS Platform 4040 Rear Panel

• 2 x 10G Ethernet cluster ports (XFP)
• 2 x 10G Ethernet network ports (XFP)
• 6 x 1G Ethernet network ports (1000BASE-T copper)
• Private 10/100 Ethernet 5-port switch (100BASE-T copper)
• 4 x 1/2/4G Fibre Channel ports (SFP)
• 2 x redundant, hot-swappable PSUs
• 2 x 10/100/1000 Ethernet management ports

Five sets of Ethernet ports:

• 3 x 10/100/1000 Motherboard ports (RJ45)

o 2 active management ports, 1 inactive reserved for future use

Page 20-7
Hitachi NAS Platform
Hitachi NAS Platform 4060/4080/4100 Rear Panel

• 6 x 1G file serving ports (RJ45)

• 2 x 10G file serving ports (XFP) (Not supported on HNAS 4040)

• 2 x 10G cluster ports (XFP)

• Five port unmanaged switch (RJ45, no internal connections)

Can aggregate file serving ports:

• Up to 8 aggregations

• Cannot mix 1G and 10G ports in an aggregation

Also, USB ports, serial port, VGA, keyboard and mouse

Hitachi NAS Platform 4060/4080/4100 Rear Panel

[Rear panel callouts: 2 x 10G Ethernet cluster ports (SFP+); 4 x 10G Ethernet network ports (SFP+); 4 x 2/4/8G Fibre Channel ports (SFP+); 2 x redundant, hot-swappable PSUs; 2 x 10/100/1000 Ethernet management ports.]

Three sets of Ethernet ports:

• 3 x 10/100/1000 Motherboard ports (RJ45)

o 2 active management ports, 1 inactive reserved for future use

• 2 x 10G file serving ports (SFP+)

• 2 x 10G cluster ports (SFP+)

Can aggregate file serving ports:

Page 20-8
Hitachi NAS Platform
Differences Between Models 4060 and 4080

• Up to 4 aggregations

o Also, USB ports, serial port, VGA, keyboard and mouse

Differences Between Models 4060 and 4080

[Diagram: upgrade paths for a Model 4060 - add a Model 4080 license, or join a 4080 cluster and inherit the cluster-wide license (no key required), to become a 4080; with no key the node remains a 4060 and can join a 4060 cluster.]
• A cluster-wide model type license is available for the Hitachi NAS Platform 4060 models.

• Applying this license to a 4060 will report the system as a 4080 and gain the limits of a
4080.

• When adding a 4060 node to a cluster or replacing a node with a spare node in an
existing cluster, the new node will inherit the model type from cluster-wide licenses.

• Spares are always the 4060 model.

• A new model key is needed only for replacement in 4080 single node configuration
scenarios.

Page 20-9
Hitachi NAS Platform
MMB and MFB Printed Circuit Boards

MMB and MFB Printed Circuit Boards

Inside Hitachi NAS Platform, 2 Printed Circuit Boards

Mercury Main Motherboard (MMB)


• Off the shelf x86 motherboard
• Single quad core processor
• Connected to 2 x 2.5” HDD (Linux SW RAID-1 configuration)
• Runs Debian Linux 5.0
• 4040: 8GB memory, 4060 and 4080 16GB memory, 4100 32GB
Mercury FPGA Board (MFB)
• Connects to MMB using four PCIe lanes
• Six FPGA chips
• 4040 24GB memory
• Model 4060 and 4080 50GB memory
• Model 4100 76GB memory

• Hitachi NAS Platform 3080 and 3090 documentation: Mercury Motherboard (MMB)

• NAS Platform 4040, 4060, 4080, and 4100 documentation: Main Motherboard (MMB)

• The Mercury main motherboard (MMB) contains a multi-core CPU and 8GB, 16GB or 32GB of system memory

o All of the software tasks run on the MMB

o All the custom hardware functionality resides on the MFB

o The Mercury FPGA board (MFB) contains all the FPGA functionality found in
Hitachi NAS models

Page 20-10
Hitachi NAS Platform
Logical Elements in HNAS

Logical Elements in HNAS

[Diagram: logical elements in HNAS. RAID groups (RG) on HUS storage are presented as LUNs/system drives (SD 0 through SD 8); the system drives are grouped into storage pools SP 01 and SP 02; file systems FS01 through FS04 are created in the pools; enterprise virtual servers EVS1 (192.168.3.21), EVS2 (172.17.5.25) and EVS3 (200.0.0.30) serve the file systems to clients over link aggregations LAG 1 and LAG 2. A sketch of this hierarchy follows the legend below.]

RG = Raid group

LUN = Logical unit number

HNAS = Hitachi NAS Platform

SD = System drive

SP = Storage pool

FS = File system

EVS = Enterprise virtual server

SHR = Share
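To summarize the layering described above (RAID groups presented as LUNs/system drives, grouped into storage pools, carved into file systems and served by EVSs), the following is a minimal illustrative sketch. It is not HNAS code; all class and variable names are hypothetical and exist only to mirror the RG, SD, SP, FS and EVS relationships in the figure.

# Illustrative only: a minimal model of the HNAS logical hierarchy.
# None of these names correspond to real HNAS APIs.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SystemDrive:          # a LUN presented by the array (SD)
    sd_id: int
    raid_group: str

@dataclass
class StoragePool:          # built from one or more SDs (SP)
    name: str
    system_drives: List[SystemDrive]

@dataclass
class FileSystem:           # carved out of a storage pool (FS)
    name: str
    pool: StoragePool

@dataclass
class EVS:                  # enterprise virtual server with its own IP addresses
    name: str
    ip_addresses: List[str]
    file_systems: List[FileSystem] = field(default_factory=list)

# Example wiring, loosely following the figure above
sds  = [SystemDrive(i, "RG-1") for i in range(4)]
sp01 = StoragePool("SP 01", sds)
fs01 = FileSystem("FS01", sp01)
evs1 = EVS("EVS1", ["192.168.3.21"], [fs01])
print(f"{evs1.name} serves {fs01.name} from {sp01.name} "
      f"built on SDs {[sd.sd_id for sd in sp01.system_drives]}")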

Page 20-11
Hitachi NAS Platform
EVS Migration (Failover)

EVS Migration (Failover)

[Diagram: EVS migration (failover). The same layered configuration as in the previous figure; an EVS, together with its IP addresses and the file systems it serves, migrates between cluster nodes while the underlying storage pools and system drives on the HUS storage remain in place. Legend as in the previous figure.]

Page 20-12
Hitachi NAS Platform
CIFS Shares and NFS Exports

CIFS Shares and NFS Exports

[Diagram: CIFS shares and NFS exports. EVS1 at 172.17.5.25, reachable over LAG 1, presents an NFS export (MNT2, mapping to directory /DIR01) and CIFS shares SHR1 and SHR2 (mapping to directory /DIR02, mounted by a client as drive X:). Both directories reside in file system FS02 in storage pool SP 01 on HUS storage; CIFS clients authenticate against Active Directory. Legend as in the previous figures.]

Page 20-13
Hitachi NAS Platform
HNAS 4000 Software Licensing

HNAS 4000 Software Licensing

Bundled features by bundle (Entry / Value / Ultra):

NFS or SMB Protocols: both / both / both
Primary Dedupe (Base): yes / yes / yes
Virtual Server (EVS): 2x / 4x / 64x
File System Audit and Rollback: yes / yes / yes
Quick Snapshot Restore: yes / yes / yes
High Availability and Cluster Namespace: yes / yes / yes
Hitachi Data Migrator: no / yes / yes
File System Recover from Snapshot: no / yes / yes
iSCSI Protocol: no / yes / yes
Replication (IDR, IBR, ADC, Object): no / no / yes
XVL (Cross Volume Links): no / no / yes
Data Migrator to Cloud: no / no / yes
File Clone (BlueArc JetClone): no / no / yes
Read Caching: no / no / yes
Synchronous Image Backup (BlueArc JetImage): no / no / yes
Virtual Server Migration and Security: no / no / yes

NFS = Network File System

SMB = Server Message Block

There are 3 bundles that are offered:

• Unified entry bundle

• Unified value bundle

• Unified ultra bundle

What differentiates these 3 bundles?

Unified Entry Bundle

The unified entry bundle is the minimum required software licensing bundle. In the entry
bundle we offer both NFS and SMB protocols. Primary deduplication is included by default on all
the Hitachi NAS Platform models.

Unified Value Bundle

With the unified value bundle, we offer everything that the entry bundle does, but we also
incorporate the data migrator software. The data migrator can be used when customers would
like to migrate files from one tier of storage to another tier within the NAS Platform cluster. This
is where the value bundle comes into play.

Page 20-14
Hitachi NAS Platform
HNAS Features

Unified Ultra Bundle

Then, finally we have the unified ultra bundle which incorporates everything that the entry and
value bundles offer, but also incorporates the ability to replicate from one HNAS cluster to a
secondary HNAS cluster. In other words, for customers requiring any sort of disaster recovery
between 2 sites, the ultra bundle would be of value. The ultra bundle includes cross volume
links which allow for tiering of data to another platform, for example Hitachi Content Platform or
NetApp.

HNAS Features

 Deduplication
 HNAS data protection
• Hitachi Copy-on-Write Snapshot
• File clone/tree clone
• NDMP support
• Antivirus integration
• Replication
 File based replication
 Object based replication (Hitachi NAS Replication)
 Data migration
• Internal
• External (to NFSv3, to HCP)
• To cloud (HCP/Amazon S3)
 Universal migration

Primary Deduplication Using HNAS

[Figure: storage consumption after deduplication.]

Page 20-15
Hitachi NAS Platform
HNAS Platform Snapshots Implementation

HNAS Platform Snapshots Implementation

1. Pre-snapshot file system view: at t0 the file system contains blocks A, B and C.

2. Snapshot creation at t1 is instant; no data is copied.

3. When a write occurs to the file system at t2, a copy of the root onode is created for the snapshot. This snapshot onode points to the preserved data blocks.

4. The incoming data blocks B' and C' are written to newly available blocks. The new block pointers (B' and C') are added to the live root onode and the old pointers (B and C) are removed.

5. The live root onode is used when reading the live volume, linking to the live blocks (A, B', C').

6. The snapshot onode is used when reading the snapshot volume, linking to the preserved blocks (B and C) and the shared block (A).

7. Not all blocks are freed up upon snapshot deletion.

• Snapshots are done in hardware — with no performance loss on reads or writes.

• Snapshots are done within the file system and not with copy-on-write differential
volumes.
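The numbered steps above describe copy-on-write at the root onode (pointer) level. The following is a minimal, purely illustrative sketch of that idea; it is not the HNAS on-disk format, only a hypothetical model showing how a snapshot onode keeps pointing at preserved blocks while the live onode picks up new ones.

# Illustrative copy-on-write snapshot model (not the HNAS implementation).
class Onode:
    """A root onode is modelled as a mapping of logical block index -> physical block."""
    def __init__(self, pointers):
        self.pointers = dict(pointers)

blocks = {0: "A", 1: "B", 2: "C"}            # physical blocks written at t0
live = Onode({0: 0, 1: 1, 2: 2})             # live root onode points at A, B, C
snapshot = None                              # t1: snapshot creation is instant, nothing copied

def write(logical_index, data):
    """t2: the first write after the snapshot copies the root onode, then redirects."""
    global snapshot
    if snapshot is None:
        snapshot = Onode(live.pointers)      # preserve the old pointer set for the snapshot
    new_physical = max(blocks) + 1           # incoming data goes to a newly available block
    blocks[new_physical] = data
    live.pointers[logical_index] = new_physical   # live onode now points at B'/C'

write(1, "B'")
write(2, "C'")

read = lambda onode: [blocks[p] for p in onode.pointers.values()]
print("live view:    ", read(live))          # ['A', "B'", "C'"]
print("snapshot view:", read(snapshot))      # ['A', 'B', 'C'] -- the preserved blocks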

Quick Snapshot Restore is a licensed feature for rolling back one or more files to a previous
version of a Hitachi Copy-on-Write Snapshot.

For more information about this command line procedure, open the command line interface
(CLI) and run man snapshot, or refer to the Hitachi NAS Platform Command Line Reference.

If a file has been moved, renamed or hard linked since the snapshot was taken, Quick
Snapshot Restore may report that the file cannot be restored

If the file cannot be Quick Restored, it must be copied from the snapshot to the live file system
normally.

Page 20-16
Hitachi NAS Platform
Register Hitachi Unified Storage Into Hitachi Command Suite

Register Hitachi Unified Storage Into Hitachi Command Suite

[Diagram: the HUS storage system is added to Hitachi Command Suite and reports its configuration into the Hitachi Device Manager database.]

• Adding the storage system to be monitored and handled by Hitachi Device Manager
(HDvM) is not any different than adding in block only storage.

• Adding the Hitachi NAS Platform servers to the database enables Device Manager to
match the information given by the storage device and the HNAS server using the WWN
from HNAS and storage as the key.

• Therefore, it is essential to use secure storage domains, even if only HNAS is connected to the storage.

• Not using secure storage domains requires manual mapping after both the HNAS and the storage are reported into the database.

WWN = worldwide name

Page 20-17
Hitachi NAS Platform
Register HUS File Module/HNAS Into HCS

Register HUS File Module/HNAS Into HCS

[Diagram: the SMU registers the HNAS single nodes or clusters into Hitachi Command Suite, and the admin EVS is registered so that Device Manager can communicate with it directly.]

• The SMU registers into the database with information from the selected range of single
nodes or clusters.

• Additionally, Hitachi Device Manager (HDvM) also needs to be configured to communicate with the admin EVS in the node/cluster.

• This enables Device Manager to issue Hitachi NAS Platform commands programmatically, directly on the NAS Platform admin EVS, instead of using link-and-launch to the SMU.

SMU = system management unit

EVS = enterprise virtual server

Page 20-18
Hitachi NAS Platform
SMU Registration

SMU Registration

SMU = system management unit

The configuration frame to register the Hitachi NAS Platform cluster is found on the SMU under
Storage Management.

The IP address of the Hitachi Device Manager server, port number and user account is required
to let the SMU login to Device Manager and report into the database.

If more than one entity is managed by the SMU, the user can select which entities are reported.

Page 20-19
Hitachi NAS Platform
Hitachi NAS File Clone

Hitachi NAS File Clone

 What is Cloning?

• The making of an exact, read/write (RW) copy of an object

• Instantaneously creates pointer-based writable snapshots of single files in a WFS-2 file system.

• The source file and new file share unmodified user data blocks.

• Space efficient and instantaneous

• This is a new key feature as it allows administrators to rapidly deploy new virtual
machines by cloning the image/template file without consuming additional disk storage
space.

WFS-2 = Wise File System-version 2 – file system type used in a distributed environment

Page 20-20
Hitachi NAS Platform
Writable Clones

Writable Clones

[Diagram: physical versus logical view of a 256TB file system with a file clone. At t1 a 50GB VMDK is cloned instantly; clone A shares the original's unmodified blocks through pointers to the same blocks. New content written to the clone at t2 goes to new blocks, so only the changed data consumes additional space. Reads of the live clone and of the original each follow their own pointer sets. In the example, total space consumed = 95GB while total space visible = 145GB.]

Traditional Snapshot and NAS File Clone Differences

Feature (Snapshot / HNAS File Clone):

Write ability: read only / read and write
Application initiated (API): no / yes
Granularity level: file system / file
Licensed: no / yes
File system version: WFS-1 and WFS-2 / WFS-2 only

Page 20-21
Hitachi NAS Platform
Directory Clones

Directory Clones

 What is a directory clone?
• Mechanism to quickly create a writable snapshot of a directory tree
• Leverages file clones, but works on an entire tree

 What are the benefits?
• Quick snapshot of a directory tree
• Space efficient
• Read/write snapshot copy

 Licensing
• No new licensing
• Enhancement to the existing file clone feature

 Miscellaneous
• Wise File System-version 2 (WFS-2) only
• Normal file support in initial release
• Non-clonable files include: hard links, XVLs, named streams, sockets, and so on

[Diagram: an example directory tree /test containing subdirectories 4you, allmine and 2do with files a through h.]

WFS-2 = Wise File System-version 2

NDMP Backup Direct to Tape

NDMP = Network Data Management Protocol

• NDMP – File-system-aware backup and restore using standard 3rd-party NDMP software

Page 20-22
Hitachi NAS Platform
HNAS Replication Access Point Replication

HNAS Replication Access Point Replication

 Access points (shares and exports) are transferred during replication

[Diagram: shares and exports on file system A at the primary file server (replication source) are replicated across the network to file system B at the disaster recovery file server (replication target).]

NAS Replication Object-by-Object

[Diagram: object-by-object replication of the EVS and its share/export from source to target.]

Page 20-23
Hitachi NAS Platform
Promote Secondary

Promote Secondary

[Diagram: after promoting the secondary, the target presents the same EVS and share/export as the source.]

• Initiated from GUI or CLI.

Data Protection – Anti-Virus Support

 RPC and ICAP protocols supported

 Management and configuration
• Inclusion list supported
• File scanned statistics provided
• Standard configuration on antivirus scanners

Supported antivirus solutions:
• Symantec Protection Engine
• McAfee Virus Scan (with RPC support)
• Trend Micro ServerProtect (with RPC support)
• CA AntiVirus protection

[Diagram: network clients access files through HNAS; scan requests are sent to external AV scanners, which return scan results.]

• Symantec Protection Engine

o Configured to use remote procedure call (RPC)

Page 20-24
Hitachi NAS Platform
Data Migration Using Cross Volume Links

• McAfee Virus Scan (with RPC support)

o Order the same version as for NetApp

• Trend Micro ServerProtect

o With RPC support

• CA AntiVirus protection

o With RPC support

On demand/offline virus scanning is dependent on the supplier of the antivirus scan engine.

Always consult the current Hitachi NAS Platform Independent Software Vendor reference list for
updates or contact product management if not listed as GA in the Features and Availability
Report (FAR).

Data Migration Using Cross Volume Links

[Diagram: within EVS 1, file F1 is migrated from file system FS-1 (SAS 15K drives) to FS-2 (nearline SAS drives); a 1KB cross volume link (CVL-1) left in FS-1 points to the file's new position.]

CVL = cross volume link

A 1KB CVL-1 is not the default. From version 6.1 onward, creating a migration link uses CVL-2, also called XVL, by default.

Page 20-25
Hitachi NAS Platform
HNAS Data Migration to HCP

HNAS Data Migration to HCP

 If the data migration target is HCP, it uses HTTP/HTTPS

• The file is hashed before migration, and the hash is compared against the target copy before the file on the source is replaced with a cross volume link
• Hashing is done in software
• Applies to Data Migrator and Data Migrator to Cloud (HCP or Amazon S3)

[Diagram: HNAS migrates files from FS-1 over HTTP to a WORM-enabled HCP namespace; hashes computed on both sides are compared before the source file is replaced with a link.]
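As a rough illustration of the hash-then-compare flow described above, the sketch below uploads a file to an HTTP target, compares a locally computed hash with one computed over the stored copy, and only then replaces the source with a small link file. The URL, authentication header and link-file format are placeholders invented for the example; they are not the actual Data Migrator or HCP interfaces.

# Hedged sketch of the migrate -> hash-compare -> replace-with-link flow.
# The endpoint, authentication and link format below are hypothetical.
import hashlib
import requests

def migrate_then_link(local_path, target_url, auth_header):
    with open(local_path, "rb") as f:
        data = f.read()
    local_hash = hashlib.sha256(data).hexdigest()

    # Upload the file (placeholder endpoint and header).
    requests.put(target_url, data=data, headers=auth_header).raise_for_status()

    # Re-read the stored copy and hash it; a real implementation would compare
    # against a server-side hash rather than downloading the object again.
    stored = requests.get(target_url, headers=auth_header).content
    if hashlib.sha256(stored).hexdigest() != local_hash:
        raise RuntimeError("hash mismatch: leaving the source file in place")

    # Replace the source with a tiny link file pointing at the new location.
    with open(local_path, "w") as f:
        f.write(f"LINK {target_url}\n")

# Example call with placeholder values:
# migrate_then_link("/mnt/fs1/report.pdf",
#                   "https://ns1.tenant.hcp.example.com/rest/report.pdf",
#                   {"Authorization": "HCP <token>"})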

Data Migrator to Cloud Added


[Diagram: Data Migrator targets now include internal tiers and NFS targets as well as HTTP/HTTPS cloud targets such as HCP and public cloud storage.]

Page 20-26
Hitachi NAS Platform
Universal Migration

Universal Migration

[Diagram: Universal Migration of NFSv3 data, for example from NetApp:
1. Create XVLs to the external files
2. Share the XVLs as files
3. Copy the data in the background
A 4KB or 32KB CVL-2 in the HNAS file system (FS-1/FS-2 on FC, SATA or WORM storage) points to the file's new position.]

VSP G1000 Hardware

Unified Block and File Configurations


[Diagram: VSP G1000 configuration options. Block-only configurations use up to 2 controller chassis/racks and up to 6 racks total, with the same upgrade flexibility options as VSP; flash-only and mainframe-only configurations are also available. Unified configurations add up to 8 HNAS file modules behind the virtualization controller for high-performance iSCSI, NFS and CIFS file sharing and deduplication.]

The Hitachi Virtual Storage Platform G1000 is a very flexible offering, with multiple
configurations, including unified block/file choices. With the introduction of the Virtual Storage
Platform G1000, we are also making it much easier to buy and deploy unified storage offerings
as one key configuration. Hitachi NAS Platform, (HNAS) is a leading high-performance NAS
engine in the industry and allows customers to further consolidate their NAS into their high-end
environment, eliminating separate management tools, consolidating on business continuity and
disaster recovery practices and accelerating their NAS performance.

Page 20-27
Hitachi NAS Platform
Global-Active Device and HNAS Integration

Global-Active Device and HNAS Integration


This section provides information on the Global Active Device and integration with Hitachi
Network Attached Storage (HNAS).

Synchronous Disaster Recovery for HNAS Overview

 Synchronous Disaster Recovery for Hitachi NAS Platform cluster


• Adds disaster recovery features to the existing Hitachi NAS Platform high
availability cluster solution
• Allows a cluster to be stretched over 2 locations
• A stretched 2-node cluster with 2 copies of data, which are implemented
using synchronous Hitachi TrueCopy
• Allows manual and automatic activation of the secondary copy of the data
• Has no restriction to NAS Platform functionality or protocols
• Works for Hitachi Unified Storage, Hitachi Unified Storage VM, Hitachi Virtual
Storage Platform and Hitachi Virtual Storage Platform G1000 systems and all
HNAS systems

Synchronous Disaster Recovery for HNAS is an external orchestration mechanism that synchronizes HNAS file systems and storage pools with the P-VOLs and S-VOLs of TrueCopy and Universal Replicator pairs when changing the mirror roles of the devices in the mirror pairs. It provides the following features:

• Adds disaster recovery features to the existing HNAS high availability cluster solution

• Allows a cluster to be stretched over 2 locations

• A stretched 2-node cluster with 2 copies of data, which are implemented using
synchronous TrueCopy .

• Allows manual and automatic activation of the secondary copy of the data.

• Has no restriction to HNAS functionality or protocols

• Works for HUS, HUS-VM, VSP and VSP G1000 systems and all HNAS systems.

Page 20-28
Hitachi NAS Platform
Synchronous Disaster Recovery for HNAS Overview

[Diagram: a stretched HNAS cluster. EVSs and file systems sit on spans (storage pools) built from system drives; the P-VOLs at Location A are mirrored by TrueCopy to S-VOLs at Location B.]

Hitachi NAS Platform is aware of the mirror and the relationship between primary and
secondary disks (system drives). The NAS Platform works with the primary disks of the Hitachi
TrueCopy mirror, as the secondary disks are read-only. If the primary storage system fails
Synchronous Disaster Recovery for HNAS cluster offers a method to recover quickly, by
activating the secondary disks.

Enterprise virtual server (EVS) failover is still managed by the usual HNAS cluster mechanisms. Performing storage failover with Synchronous Disaster Recovery for HNAS will not necessarily result in an EVS failover. EVS failover completes in a few seconds; storage failover can take 40-90 seconds.

Page 20-29
Hitachi NAS Platform
Why Is Global-Active Device Important to HNAS?

Why Is Global-Active Device Important to HNAS?

 Decouples Hitachi NAS Platform from mirror relationships

 Halves the number of devices managed by NAS Platform (compared to Synchronous Disaster Recovery for Hitachi NAS Platform)

 Role changes are nearly transparent to HNAS

 Dramatically improves failover scenarios

[Diagram: servers with applications requiring high availability at the main site and a remote secondary site, protected by global-active device with continuous monitoring; the quorum (QRM) resides at a third quorum site.]

How global-active device is different from Synchronous Disaster Recovery for HNAS

Synchronous Disaster Recovery for HNAS is an external orchestration mechanism that synchronizes HNAS file systems and storage pools with the P-VOLs and S-VOLs of Hitachi TrueCopy and Hitachi Universal Replicator pairs when changing the mirror roles of the devices in the mirror pairs. With global-active device there is only a single volume that represents the pair, and the role changes are transparent to the host accessing the global-active device volume. This removes the requirement for the Synchronous Disaster Recovery for HNAS scripts and allows the role changes to occur without HNAS file system downtime.

For Synchronous Disaster Recovery for HNAS, you need to configure servers known as
replication monitoring station (RMS) to run the Synchronous Disaster Recovery for HNAS scripts
(2 x RMS - one in each data center, RMS is the Linux server with the scripts) that control site
failover by changing the status (P-VOL/S-VOL) of the storage volumes on the local and remote
sites. For global-active device, there is no need for a dedicated server to run any scripts. All aspects of storage failover are handled by the storage system (for example, Hitachi Virtual Storage Platform G1000).

Before global-active device, Synchronous Disaster Recovery for HNAS was the only available
technology for implementing Synchronous Disaster Recovery for HNAS cluster. Now it is just an
alternate option for customers who have TrueCopy and want to use Synchronous Disaster
Recovery for HNAS with it.

Page 20-30
Hitachi NAS Platform
Why is Global-Active Device Important to HNAS?

• HNAS supports integration with global-active device from 12.2 with a number of caveats:

o Preferred paths to the primary storage system should be manually defined using
system drives path (sd-path).

o Support is limited to a maximum distance of 10km between the primary and


secondary storage system.

o Only 3-site configurations, where the global-active device quorum and the system management unit (SMU) reside on a third site, should be used.

• Support for automatically configured preferred pathing

o Uses the vital product data (VPD) code page read from the array to automatically
prefer the path to the global-active device primary volume (primary storage
system path)

o All the system drives of a storage pool must be set as P-VOLs in the same storage system

o The enterprise virtual server using a file system that is part of a storage pool should be online on the HNAS node at the site where the storage holding the P-VOLs resides

o This extends the distance limitation between sites back to the 100km that applies to Synchronous Disaster Recovery for HNAS

Page 20-31
Hitachi NAS Platform
Online Product Overview

Online Product Overview

 Hitachi Unified Storage with Hitachi NAS Platform 4000 Series

https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv

Page 20-32
Hitachi NAS Platform
Module Summary

Module Summary

 In this module, you should have learned to:


• List Hitachi NAS Platform (HNAS) models
• Describe NAS Platform architecture
• Describe basic HNAS concepts
• Describe the integration of HNAS with global-active device

Page 20-33
Hitachi NAS Platform
Module Review

Module Review

1. The high-performance Hitachi NAS Platform is a ______:


• Filer
• Gateway
• Appliance
• Storage

2. List the data protection features of NAS Platform.

Page 20-34
21. Hitachi Content Platform, Hitachi Data
Ingestor and HCP Anywhere
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the Hitachi Content Platform (HCP) features and functions
• Describe the Hitachi Data Ingestor (HDI) functionality
• Describe the HCP Anywhere functionality

Page 21-1
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Hitachi Content Platform

Hitachi Content Platform


This section presents the Hitachi Content Platform (HCP) and describes how it operates. HCP is
a software product for archiving, backup and restore.

What Is an HCP Object?

Fixed-content data (Data)


• Once it’s in HCP, this data
cannot be modified

System metadata (Metadata)


• System-managed properties
describing the data
• Includes policy settings

Custom metadata
(Annotations)
• The metadata a user or
application provides to
further describe an object

Think of an HCP object as a bubble.

This bubble contains the actual data, system-generated metadata and custom
metadata/annotations.

This object lives independently within an HCP system.

This architecture allows for easy HW/SW upgrades and great scalability.

Object storage is a black box. Users and admins do not work with file systems, only with data
containers.

Page 21-2
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
What Is Hitachi Content Platform?

What Is Hitachi Content Platform?

 Hitachi Content Platform (HCP) is a distributed storage system designed to support large,
growing repositories of fixed-content data
 An HCP system consists of both hardware and software
• Stores objects that include both data and metadata that describes the data.
• Distributes these objects across the storage space
• Presents the objects as files in a standard directory structure
 An HCP repository is partitioned into namespaces
• Each namespace consists of a distinct logical grouping of objects with its own directory structure
 HCP provides a cost-effective, scalable and easy-to-use solution to the enterprise-wide
need to maintain a repository of all types of data
• From simple text files and medical image files to multi-gigabyte database images
 Access to HCP is via open, standard access protocols: REST API over HTTP(S), WebDAV, NFS, CIFS, SMTP and HS3 (Amazon S3 compatible); an example using a standard S3 client follows the protocol list below

NFS: Network File System

CIFS: Common Internet File System

HTTP: Hypertext Transfer Protocol (World Wide Web protocol)

WebDAV: Web-based Distributed Authoring and Versioning (HTTP extensions)

SMTP: Simple Mail Transfer Protocol (Internet email)

NDMP: Network Data Management Protocol
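Because HCP exposes an Amazon S3 compatible interface (HS3), standard S3 tooling can generally be pointed at an HCP tenant. The sketch below uses the boto3 library against a hypothetical endpoint; the endpoint URL, bucket (namespace) name and credential values are placeholders, and the exact mapping of HCP user credentials to S3 access/secret keys should be taken from the HS3 documentation.

# Hedged example: a standard S3 client (boto3) pointed at HCP's S3-compatible
# HS3 interface. All endpoint and credential values are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://tenant.hcp.example.com",      # hypothetical HCP tenant endpoint
    aws_access_key_id="HS3_ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="HS3_SECRET_KEY_PLACEHOLDER",
)

# In HS3 terms, a "bucket" corresponds to an HCP namespace.
with open("report.pdf", "rb") as f:
    s3.put_object(Bucket="namespace1", Key="docs/report.pdf", Body=f)

obj = s3.get_object(Bucket="namespace1", Key="docs/report.pdf")
print(obj["ContentLength"], "bytes stored")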

Page 21-3
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP Basics

HCP Basics

 Deployed on commodity servers (nodes)


• Networked together to form a single system

 HCP system consists of both hardware and software

 Policies and services ensure data integrity

 Optimized for fixed-content


• Write-Once, Read-Many (WORM) storage

 Open protocols for data access


• HTTP based (REST, S3, WebDAV), NFS, CIFS, OpenStack Swift

• Hitachi Content Platform (HCP) is a distributed storage system designed to support large,
growing repositories of fixed-content data. HCP stores objects that include both data
and metadata that describes the data. It distributes these objects across the storage
space, but still presents them as files in a standard directory structure. HCP provides a
cost-effective, scalable, and easy-to-use solution to the enterprise-wide need to
maintain a repository of all types of data from simple text files and medical image files
to multi-gigabyte database images.

• HCP is optimized to work best with HTTP-based APIs: REST and S3 (a short REST usage sketch follows this list).

• REST API – Representational state transfer, stateless, using simple HTTP commands
(GET/PUT/DELETE)

o It translates HTTP requests into simple commands

o It is used by HCP-AW, HDI, HCP data migrator, HNAS and most 3rd party
middleware products

o HDS provides REST API developer’s guide – all our APIs are open and well
documented

• S3 API – Standard Cloud API, developed by Amazon

o S3 API works similarly to REST API

o S3 API is a standard cloud storage interaction protocol developed by Amazon

Page 21-4
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP Basics

o Thanks to the S3 API it is possible to use any S3 client software – it will work with HCP out of the box

o Thanks to S3 support, it is possible to extend HCP capacity by connecting S3-compatible storage. This can be public or private cloud storage.

o HCP S10 and S30 nodes are S3 compatible storage devices

• Comparing protocols

o Network File System (NFS) and Common Internet File System (CIFS) are value
added protocols

 NFS cannot be authenticated on HCP

 CIFS can be authenticated only with AD

 NFS and CIFS are good for migrations and/or application access

 NFS and CIFS don’t perform as well as Hypertext Transfer Protocol


(HTTP), the World Wide Web protocol

o Use HTTP based APIs whenever possible

o Other protocols

 WebDAV: Web-based Distributed Authoring and Versioning (HTTP


extensions)

 SMTP: Simple Mail Transfer Protocol (Internet email)
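To make the REST bullet points above concrete, here is a minimal, hedged sketch of storing and reading an object through the HCP REST gateway using Python's requests library. The namespace/tenant host name and the Authorization value are placeholders; the exact header format and any query parameters are defined in the HCP REST developer's guide and should be verified there.

# Hedged sketch of HCP REST access (PUT/GET/DELETE over HTTPS).
# Host name and Authorization value are placeholders, not working credentials.
import requests

BASE = "https://namespace1.tenant1.hcp.example.com/rest"          # hypothetical
HEADERS = {"Authorization": "HCP <base64-user>:<md5-password>"}   # format per the developer's guide

# PUT: store an object under a directory-like path
with open("invoice.pdf", "rb") as f:
    r = requests.put(f"{BASE}/finance/invoice.pdf", data=f, headers=HEADERS)
    r.raise_for_status()

# GET: read the object back (HEAD could be used to fetch only system metadata)
r = requests.get(f"{BASE}/finance/invoice.pdf", headers=HEADERS)
print(len(r.content), "bytes retrieved")

# DELETE: succeeds only if the object's retention and hold settings permit it
requests.delete(f"{BASE}/finance/invoice.pdf", headers=HEADERS)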

Page 21-5
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Fixed Content

Fixed Content

 What is Fixed Content?


• Data objects that have a long-term value, do not change over time, and are
easily accessible and secure

Legal Records Email


Satellite Images Digital Video

Biotechnology Medical Records

Organizations across all industries need to address the management of business-critical


information assets over time and have them accessible in the future. Such types of content
include documents, images, graphics, technical data and video, where more and more of the
content is being created digitally or converted from physical form into digital form.

Page 21-6
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Categories of Storage

Categories of Storage

Block-Level Storage Object-Level Storage

LUN Based Object Based

 Primary – online storage  Fixed-content storage –


• SAN Connected to Application long-term storage
• High Speed • IP network connected to
• Huge Capacity application
• LUN-Level Access • Object aware
• Policy enforcement
• Object-level access

• There is also file-level storage, which can fit somewhere between block-level and object-
level

• Note that access to data stored in HCP is always facilitated over IP networks; no access
to data over a SAN is possible

• HCP 500 has HBAs to connect to back end storage systems over a SAN

Page 21-7
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Object-Based Storage – Overview

Object-Based Storage – Overview

 HCP stores objects in a repository


 An object encapsulates
• Fixed-content data — An exact digital reproduction of data as it existed before it was stored in HCP
 Once it is in the repository, this fixed-content data cannot be modified
• System metadata — System-managed properties that describe the fixed-content data (for example,
its size and creation date)
 Includes policies, such as retention and shred settings, that influence how transactions and
services affect the object
• Custom metadata — Metadata that a user or application provides to further describe an object
 Specified as XML
 Can be used to create self-describing objects
 HCP can store multiple versions of an object, thus providing a history of how the data has
changed over time
• Each version is an object in its own right

HCP = Hitachi Content Platform


XML (Extensible Markup Language) is a set of rules for encoding documents
electronically.

Page 21-8
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Retention Times

Retention Times

 Information governance – retention timeframes are getting longer


Retention timeframes by industry:

Life Science/Pharmaceutical
• Processing food: 2 years after commercial release
• Manufacturing drugs: 3 years after distribution
• Manufacturing biologics: 5 years after manufacturing of product

Healthcare (HIPAA)
• All hospital records in original form: 5-year minimum for all records
• Medical records for minors: from birth to 21 years
• Full life patient care: length of patient's life + 2 years

Financial services (17a-4)
• Financial statements: 3 years
• Member registration for broker/dealers: end-of-life of enterprise
• Trading account records: end of account + 6 years

OSHA
• 30 years from end of audit

Sarbanes-Oxley
• Original correspondence: 4 years after financial audit

[Chart timeline: 1 to 50 years. Source: ESG]

While government regulations have a significant impact on content archiving and preservation
for prescribed periods, compliance does not necessarily require immutable or Write Once, Read
Many (WORM)-like media. In many cases, the need for corporate governance of business
operations and the information generated are related to the need to retain authentic records.
This requirement ensures adherence to corporate records management policies as well as the
transparency of business activities to regulatory bodies. As this chart illustrates, the retention
periods for records are significant, from 2 years to near indefinite.

HIPAA – Health Insurance Portability and Accountability Act of 1996


HIPAA has enacted several mandates to improve the access and portability of patient health
records while maintaining strict privacy and security. A critical aspect of the HIPAA privacy
ruling is Data Protection, requiring compliant backup methodologies to ensure the security and
confidentiality of patient records. Health care providers who engage in electronic transactions
must observe privacy safeguards to restrict the use and disclosure of individually identifiable
health information.

OSHA – Occupational Safety and Health Administration


A US government agency in the Department of Labor to maintain a safe and healthy work
environment.

Page 21-9
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Reviewing Retention

Sarbanes–Oxley
The Sarbanes-Oxley (SOX) Act, passed in the year 2002, outlines the procedures of storing
financial records. All companies and business organizations must comply with the SOX
procedures of financial records storage, ensuring that there are no accounting errors related to
scandals or illegal financial activities. The Sarbanes-Oxley Act legislates the time period during
which the financial records of the company must be maintained, along with the manner in
which the records should be kept.

Reviewing Retention

Retention Hold: A condition that prevents an object from being deleted by any means or
having its metadata modified, regardless of its retention setting, until it is explicitly released.

Page 21-10
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Policy Descriptions

Policy Descriptions

Retention
 Prevents file deletion before the retention period expires
 Can be set explicitly or inherited
 Deferred retention option
 Can set a Retention Hold on any file

Shredding
 Ensures no trace of the file is recoverable from disk after deletion

Indexing
 Determines whether an object will be indexed for search

Versioning
 An object version consists of data, system metadata and custom metadata
 A new object version is created when data changes
 Write Seldom Read Many (WSRM)

Custom metadata XML checking
 Determines whether HCP allows custom metadata to be added to a namespace if it is not well-formed XML

An HCP policy is one or more settings that influence how transactions and services work on objects in namespaces. Policies ensure that objects behave in expected ways.

HCP supports these policies:

• Retention

• Shredding

• Indexing

• Versioning

• Custom metadata XML checking

Page 21-11
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP Integration With VLANs

HCP Integration With VLANs

 Default: a single untagged network
• HCP is connected to one VLAN (A) for all network traffic

 All cluster management is done over a separate VLAN (M)
• May include tenant management

 Each tenant is connected to individual networks, VLAN (A), (B) and (C), for data and tenant administration

 Multiple tenants can share the same network

 One tenant may have separate networks for data and for tenant management

[Diagram: VLAN M carries cluster management, VLAN R carries replication, and VLANs A, B and C carry per-tenant data and tenant administration traffic.]

HCP supports virtual networking only for the front-end network through which clients
communicate with the system and through which different HCP systems communicate
with each other. HCP does not support virtual networking for the back-end network
through which the HCP nodes communicate with each other.
In HCP, logical network configurations are referred to simply as networks. Each
network has a name, an IP mode (IPv4, IPv6, or Dual), one or more subnets defined for
the network, IP addresses defined on each subnet for none, some, or all of the nodes in
the HCP system, and some other settings.

Page 21-12
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Multiple Custom Metadata Injection

Multiple Custom Metadata Injection

Images such as X-rays and other medical scanning pictures have no content that can be
searched other than a file name, but can have embedded metadata such as billing details,
doctor and patient information and other relevant details regarding the actual object.

These details are invaluable for searching this type of content as functional in our Hitachi
Clinical Repository solution.

An HCP object can be associated with multiple sets of custom metadata. That is why we talk
about multiple custom metadata injection.

Custom metadata are also called annotations.

Each annotation is a separate .xml file.

Each annotation has its own URL path.
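Since each annotation is addressed by its own URL, adding or reading custom metadata is just another HTTP call. The following hedged sketch uses placeholder host, credentials, object path and annotation name; the query parameters shown follow the pattern described in the HCP REST developer's guide and should be verified against it.

# Hedged sketch: attaching an XML annotation (custom metadata) to an existing object.
# Host, credentials, object path and annotation name are placeholders.
import requests

BASE = "https://namespace1.tenant1.hcp.example.com/rest"          # hypothetical
HEADERS = {"Authorization": "HCP <base64-user>:<md5-password>"}   # placeholder

annotation_xml = """<patient>
  <doctor>Dr. Example</doctor>
  <billing-code>12345</billing-code>
</patient>"""

# Add (or replace) an annotation named "clinical" on the object.
requests.put(
    f"{BASE}/scans/xray-0001.dcm",
    params={"type": "custom-metadata", "annotation": "clinical"},
    data=annotation_xml.encode(),
    headers=HEADERS,
).raise_for_status()

# Read the same annotation back.
r = requests.get(f"{BASE}/scans/xray-0001.dcm",
                 params={"type": "custom-metadata", "annotation": "clinical"},
                 headers=HEADERS)
print(r.text)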

Page 21-13
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
It’s Not Just Archive Anymore

It’s Not Just Archive Anymore

ROBO – Remote Offices, Branch Offices – solution with HDI.

HCP can adapt the way no other content product can. It has a chance to grow in the archive
market and align to emerging markets such as the cloud. Think about active archiving. What
actually is archiving and what makes it active? Archiving means we are moving data from
expensive high performance storage to somewhere where it can be stored securely over long
periods of time. This is different from backup, where we create redundant copies. HCP has lots
of services that constantly work with data to ensure it is always healthy and securely stored.
The HCP services are what make archiving active. Old HCAP used to be a simple box with no
concept of multitenancy and with no authentication options. New HCP is a versatile and flexible
storage system that offers multiple deployment options. HCP is under very active development: new features are added every year, and they bring significant improvements in the capabilities the system can offer.

HCP always ensures backwards compatibility, meaning that even from the oldest system you
can upgrade to the newest version.

Because of this, there are some legacy features in the system, namely: default tenant, search
node references, blade chassis references, and so on.

Page 21-14
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Introducing Tenants and Namespaces

Introducing Tenants and Namespaces

 Each tenant and its set of namespaces is a virtual HCP system

• Tenants – segregation of management

• Namespaces – segregation of data

[Diagram: a physical HCP system hosts Tenant 1 through Tenant N; each tenant contains namespaces NS 1 through NS N and its own set of tenant user accounts.]

In HCP v3.X and v4.X releases, the concept of data access accounts existed.

The data access account contained a set of assigned access permissions that identified what
a user could or could not do.

In HCP v5.0, the data access account was eliminated and the definition of the permissions was
moved into the user account of the individual users (more on this later in the course).

HCP limitations: 1000 tenants and 10000 namespaces.

HCP supports access control lists that allow users to manage permissions on the object level.

Page 21-15
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Internal Object Representation

Internal Object Representation

[Diagram: inside HCP, an object is split. The system metadata goes into the database, while the fixed-content data and custom metadata go into internal files on disk.]

The customer object is broken into 2 pieces internally:

• Metadata goes into the database

• Customer data (and custom metadata) goes into a file on disk

Page 21-16
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP – Versatile Content Platform

HCP – Versatile Content Platform

[Diagram: HCP accepts data over REST/HTTP(S), NFS, CIFS, WebDAV, SMTP and the Amazon S3-compatible interface. As a best-in-class object store it can use internal disks, spin-down disks on arrays, NFS devices, S3-compatible HCP S-node storage and array disks (private cloud); as hybrid cloud storage it can tier to Amazon S3, Google Cloud, Hitachi Cloud, Microsoft Azure and other compatible public cloud storage.]

HCP can store data arriving over different protocols and from different sources, including Hitachi Data Ingestor (HDI), HCP Anywhere and more than 100 applications. HCP can tier data and store it where it is needed.

Tiering is simple and policy based.
Page 21-17
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP Products

HCP Products
This section provides information on the Hitachi Content Platform (HCP) products.

Unified HCP G10 Platform

 Single server platform for all HCP offerings


• Vendor: Quanta
• Model: D51B-2U (Nitro)

 End Of Life for previous HCP offerings:


• HCP 500, HCP 500XL 1G
• HCP 500XL 10G, HCP 300

 2U rack mount server

 Local or attached storage options

 Available as upgrade for existing HCP systems

• 2U server enclosure

• Redundant fans and power supplies

(Left rear SATA HDD/SSD cage included - not shown)


• LSI RAID controller and Supercap (not shown)

• Six 4TB hard disk drives

• CPU and memory

o Two Intel E5-2620v3 CPUs

o 64GB memory (4 x 16GB DIMMs)

• G10 servers can be mixed with existing Hitachi Compute Rack (CR) 210H and CR 220S based HCP systems.

Page 21-18
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP G10 With Local Storage

HCP G10 With Local Storage

 HCP G10 replacement for HCP 300 model (RAIN)

 Internal disks for OS and storage of metadata, data and indexes

 Six or twelve 4TB hard disk drives – RAID 6


• 14TB usable per node with 6 HDDs
• 28TB usable per node with 12 HDDs

 Compatible with existing HCP 300 nodes

 Compatible with HCP S10 and S30 nodes

 No SAN connectivity

• Customers who purchase a local storage HCP G10 system with 6 internal hard drives can
expand the internal capacity later by purchasing a “six-pack” upgrade. These six drives
are installed in each applicable node and a service procedure is run to add them into the
system. All RAID group creation, virtual drive creation, initialization, or formatting is
handled automatically – no manual configuration is required.

HCP G10 With Attached Storage

 HCP G10 replacement for HCP 500 and HCP 500 XL models

 Internal disks for OS and storage of metadata (like XL models)

 Data and indexes stored on externally attached storage array

 Six 4TB hard disk drives – RAID 6


• Metadata only

 Compatible with existing HCP 500 nodes


• HCP 500, HCP 500XL 1G, HCP 500XL 10G

 Compatible with S10 and S30 nodes

Page 21-19
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP S10

• OS is now always stored locally on the server’s internal drives, not on the array (as it
used to be in HCP 500). No requirement to set up boot LUNs on the HBA cards for
attached storage systems. Online array migration is possible on HCP G10 nodes because
the OS is stored on the internal drives.

HCP S10

 Economy storage option for all HCP systems


 HCP v7.2 supports direct write to S-nodes
 Single 4U tray with two controllers
 Connects through HCP front-end using S3 API
[Diagram: HCP S10 is a single 4U tray with two controllers connected through a mid-plane, each controller with 2 x 10GbE ports; half populated = 168TB raw, fully populated = 336TB raw.]

• HCP S10 and S30 offer better data protection than Hitachi Unified Storage (HUS) and the Hitachi Virtual Storage Platform (VSP) G family (20+6 erasure coding (EC) versus RAID 5/RAID 6)

• HCP S10/S30 licensing costs are lower than comparable array configurations per TB.

Page 21-20
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP S30

HCP S30

 Economy storage option for HCP

 HCP v7.2 supports direct write to S-nodes

 More cost effective than HCP S10 at 4 trays


• 2 Nitro server heads with SAS HBA
• 3 to 16 SAS-connected 4U expansion trays
• Maximum 16 trays in 2 racks per HCP S30 node
• Maximum 5.7PB with 6TB HDD
• Up to 80 HCP S30 nodes for a single HCP system
• Up to 465PB for a single HCP

HCP S30 has higher storage capacity (4.3PB usable) versus HUS/VSP G family

Page 21-21
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP S Node

HCP S Node

Software features:
 Built for commodity hardware (cost efficient)
 20+6 erasure code (EC) data protection
 Fast data rebuilds in case of HDD failure
 Enhanced data durability/reliability
 Self-optimizing for best resource utilization
 Ease of use with plug & play and automation
 Object single instancing

Capabilities:
 Self-checking and healing
 Versioning (by HCP)
 Compression (by HCP)
 Encryption (by HCP)
 Retention/WORM (by HCP)
 Management and service UI and full MAPI
 Storage protocol is S3
 Ready to be supported by other HDS products

• The software delivers high reliable and durable storage from commodity hardware
components.

• Implements state-of-the-art second-generation erasure code data protection technology (a brief arithmetic illustration follows this list).

• Offer fast data re-protection of the largest HDD available now and in the future.

• Has self-optimizing features. The user does not have to be concerned with configuring, tuning or balancing resources (HDDs).

• Besides a fully capable web user interface, the HCP S10 can be entirely managed and monitored using the Management Application Programming Interface (MAPI).

• No training required to operate or perform maintenance procedures.

• Communication between generic nodes and the HCP S10 nodes is S3 protocol based,
and as such ready to be supported by other HDS products like HNAS (august 2015).

• HCP objects stored on HCP S10 will fully support retention, WORM, versioning,
compression and encryption.
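As a rough worked example of what 20+6 erasure coding implies (illustrative back-of-the-envelope arithmetic, not a vendor specification): each object is split into 20 data fragments plus 6 parity fragments spread across drives, so any 20 of the 26 fragments are enough to rebuild the data.

# Illustrative overhead arithmetic for 20+6 erasure coding (not product code).
data_fragments, parity_fragments = 20, 6
ec_overhead = (data_fragments + parity_fragments) / data_fragments   # 1.3x raw per usable byte
raid6_8plus2 = 10 / 8                                                # 1.25x for a generic 8+2 RAID-6 group
three_copies = 3.0                                                   # 3x for triple replication
print(f"20+6 EC stores {ec_overhead:.2f} bytes of raw capacity per usable byte "
      f"and tolerates {parity_fragments} concurrent fragment losses")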

Page 21-22
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Direct Write to HCP S10/S30

Direct Write to HCP S10/S30

 Previously, S10 was only a tiering target for HCP nodes

 Any HCP model with v7.2 software now supports direct write to HCP
S10/S30

 HCP 300 and HCP G10 with local storage


• Local storage of metadata and indexes
• HCP S10/S30 storage of data
• HCP S10/S30 requires only 1 copy of data (data protection level [DPL] 1) – can be configured for higher DPLs if multiple HCP S10/S30 units are available

• HCP G10 supports 10G front-end Ethernet networking and 1G back-end Ethernet
networking

• No SAN to configure or maintain (Ethernet based) – simple configuration wizard, no


storage configuration

• No distance limitations between HCP and HCP S10/S30 (standard Ethernet)

• Bandwidth available over customer network will determine performance

• Excellent performance locally or with HCP S10/S30 versus attached storage (see
following slides)

• HCP S30 has higher storage capacity (4.3PB usable) versus HUS/VSP G family

Page 21-23
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
VMware and Hyper-V Editions of HCP

VMware and Hyper-V Editions of HCP

 HCP v7.2 supports deployments in both VMware and Hyper-V

 Fully supported for production environments

 Demo/evaluation deployment also supported

 Benefits
• Easy and fast deployment
• Aligns with VMware and Hyper-V features
• No HCP hardware is needed

Open virtualization format (OVF) templates are part of every new HCP SW version release.

Using OVF templates makes it faster to deploy HCP in VMware, as you do not have to create VMs manually or install the OS.

If you wish to deploy four virtual nodes, you must deploy an OVF template 4 times.

When you have the required number of virtual nodes, you can start with HCP Application SW
install.

Page 21-24
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Hitachi Data Ingestor

Hitachi Data Ingestor


This section presents the Hitachi Data Ingestor (HDI) and describes how it operates. HDI is a
software product for archiving, backup and restore.

Hitachi Data Ingestor (HDI)

 Extend enterprise IT to the edge

Secure, Simple, Smart

What Is Hitachi Data Ingestor?

 Provides local and remote access to HCP for clients over CIFS and NFS

 Migrates content to a central HCP and maintains a local link to the


migrated content

 As a caching device, Hitachi Data Ingestor provides users and


applications with seemingly endless storage and a host of newly
available capabilities

 Hitachi Data Ingestor presents a standards-based file system interface to


applications to provide seamless access for users

 Provides wide range of advanced storage features through tight


integration with Hitachi Content Platform

Page 21-25
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
How Does Hitachi Data Ingestor Work?

A Hitachi Data Ingestor (HDI) system provides services that enable clients on different
platforms to share data in storage systems. An HDI system consists of file servers called nodes
and storage systems in which data is compacted and stored. The HDI system provides a file
system service to clients by way of the network ports on the nodes.

The HDI model determines whether HDI nodes can be set up in a redundant configuration. A
configuration where nodes are made redundant is called a cluster configuration, and a
configuration where a node is not made redundant with another node is called a single-node
configuration.

How Does Hitachi Data Ingestor Work?

 HDI works by replicating all files it receives to an HCP system

 Once HDI reaches the system defined threshold, based on automated


policies, HDI removes the content above the threshold from its cache
and replaces it with a pointer to where the content ultimately resides on
HCP

 Once a user reads a file, the file is transparently brought back into HDI
from HCP
• The file stays in HDI until the automated policy removes it from its cache
again
• If the file is changed after it is recalled, a new stub is created and the same
policy as above will apply

Page 21-26
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Hitachi Data Ingestor Overview

Hitachi Data Ingestor Overview

 A bottomless and backup-free filer


 Native file system access (CIFS and NFS) into HDI
 Migrates data to HCP using REST API and maintains a local link to the content
 Tightly integrated with HCP to provide seamless access and a wide range of advanced storage
features
 Provides management API for tight integration with HCP and 3rd party UIs
 Fully integrated with AD (Active Directory) and LDAP (Lightweight Directory Access Protocol)
 Support for leading WAN acceleration solutions

 Features
• All content migrated and backed up in HCP
• Advanced cache management supports 400 million files
Hitachi Data
• Supports hundreds of users per node Ingestor
• Transparent NAS migration for existing filers and servers
Hitachi Content
• File restore for user self-service Platform
• Content sharing in a distributed environment

• Operating as an on-ramp for users and applications at the edge is Hitachi Data Ingestor.
Data Ingestor connects to Hitachi Content Platform at a core data center, no application
recoding is required for applications to work with Data Ingestor and interoperate with
Content Platform. Users work with it like any NFS or CIFS storage. Because Data
Ingestor is essentially a caching device, it provides users and applications with
seemingly endless storage and a host of newly available capabilities. Furthermore, for
easier and efficient control of distributed IT, Hitachi Data Ingestor comes with a
Management API that enables integration with Hitachi Content Platform’s management
UI and other 3rd-party/home-grown management UIs. Thanks to the Management API
of the Data Ingestor, customers can even integrate HDI management into their
homegrown management infrastructures for deployment and ongoing management.

• HCP limits apply to the solution:

o 100 namespaces: 100 file systems across all attached HDI systems

• 400 million files per HDI

• Thousands of users per HDI

o Varies due to workload

Page 21-27
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Hitachi Data Ingestor (HDI) Specifications

Hitachi Data Ingestor (HDI) Specifications

 HDI offers the following configuration options. At remote office locations, users can choose between:

• HDI can be configured as a highly available cluster pair. These servers are SAN-attached to Hitachi storage. This serves as the user's caching filer, mentioned in the previous slide, where every file is eventually stored back to the HCP.

• HDI can also be configured as an HDI Single Node. This is a non-redundant


configuration that has internal direct attached storage. There is no SAN involved here.
Remember that we already have about 4 terabytes of local storage built into the server
itself and that is where the remote office users would write to.

• The 3rd type of configuration is the HDI VMware Appliance. HDI is deployed on the
VMware Hypervisor. With this type of configuration, the customer defines the hardware
and storage configuration. The storage does not have to be Hitachi storage on the back
end.

• In addition, the single node, VMware and remote server configurations can be remotely
configured, provisioned and managed using Hitachi Content Platform Anywhere (HCP
Anywhere) and installed at the remote site by nontechnical personnel. Just plug it in,
power it up, and it will import everything from HCP Anywhere. In all configurations, HDI
acts as a tiering solution, copying its resident files to HCP, and maintaining access to
those files for on-demand recall.

Page 21-28
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Major Components: Server + HBA, Switch and Storage

Major Components: Server + HBA, Switch and Storage

[Diagram: major HDI hardware components for the cluster, single-node and cluster integrated storage configurations: Hitachi CR 210HM and CR 220SM servers, Emulex LPe12002-M8 HBA, Dell 2824 IP switch, and (for the cluster) any HDS storage product.]

The integrated cluster system is called an appliance and is integrated with a HUS 110 system only.

Protocols in Detail

 CIFS
• Windows AD/NT authentication
• NTFS ACL
• Dynamic/Static user-mapping between Windows domain user and HDI
end user
• Level2 Opportunistic Lock support (Read client cache for multi users)
• Home Directory automatic creation
• ABE (Access based Enumeration) support

 NFS
• NFS V4/V3/V2
• NIS/LDAP user repository

CIFS = Common Internet file system


NFS = Network file system
NIS = Network Information Service
NTFS = Windows NT file system

Page 21-29
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
How HDI Maps to HCP Tenants and Namespaces

How HDI Maps to HCP Tenants and Namespaces

 Clients write to assigned file systems

 Each file system is mapped to its designated namespace

 Each namespace can be shared by multiple HDIs for read-only access

[Diagram: clients at Branch A, B and C write to file systems FS 1 and FS 2 on their local HDI; each file system maps to its own namespace (Namespace 1 and Namespace 2) under Tenant A, B or C on the Hitachi Content Platform; a namespace can also be mounted read-only (RO) by another branch's HDI.]

Benefits
• Satisfy multiple applications, varying SLAs and workload types or organizations

• Determine utilization and chargeback per customer

• Edge dispersion: each HDI can access another when set up that way

• Enable advanced features at one branch or at more granular level

o Examples: replication, encryption, DPL levels (how many copies to keep),


compliance and retention, compression and versioning

Page 21-30
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Content Sharing Use Case: Medical Image File Sharing

Content Sharing Use Case: Medical Image File Sharing

 Hospital-A stores medical files to file system A.a

 The files are migrated to namespace 'a' on HCP

 Hospital-B reads the files through file system B.a, which is mapped read-only to namespace 'a' on HCP

[Diagram: Hospital-A (file system A.a, read/write access) and Hospital-B (file systems B.a and B.b) connect over a WAN to HCP namespaces 'a' and 'b'; Hospital-B has read-only access to namespace 'a'.]

A Quick Look: Migration, Stubbing and Recalling

 Application writes a file to HDI

 HDI replicates (Scheduled) the files to HCP

 When a file system capacity reaches 90% (the default), HDI deletes the files in excess
of the threshold and creates 4KB links (stubs) to replace them
• Users access (read) the files as they always had since links are transparent to clients

 Reading a link recalls the file into HDI

[Diagram: the application writes and reads over CIFS/NFS to HDI; HDI migrates files to, and recalls them from, HCP over REST/HTTP(S).]

Recalled files are deleted from HDI later and replaced by another link, based on HDI's system capacity (a simplified sketch of this cache policy follows).
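The write/migrate/stub/recall cycle above can be summarized with a small, purely illustrative sketch of the cache policy (default 90% threshold, 4KB stubs). This is not HDI code; the names, and the choice of which migrated files to stub first, are assumptions made only for the example.

# Illustrative sketch of an HDI-style cache policy (not the actual product logic).
# Assumption for the example: already-migrated files are stubbed in a fixed order.
THRESHOLD = 0.90          # default stubbing threshold
STUB_SIZE = 4 * 1024      # stubs are 4KB links to the object on HCP

class CachedFile:
    def __init__(self, name, size, migrated=False):
        self.name, self.size, self.migrated, self.stubbed = name, size, migrated, False

def enforce_threshold(files, capacity):
    used = sum(f.size for f in files)
    # Stub migrated files until cache usage drops back under the threshold.
    for f in sorted((f for f in files if f.migrated and not f.stubbed),
                    key=lambda f: f.name):            # stand-in for "oldest first"
        if used <= THRESHOLD * capacity:
            break
        used -= f.size - STUB_SIZE
        f.size, f.stubbed = STUB_SIZE, True
    return used

files = [CachedFile("a.doc", 300 * 2**20, migrated=True),
         CachedFile("b.mov", 800 * 2**20, migrated=True),
         CachedFile("c.tmp", 100 * 2**20, migrated=False)]
print("bytes in cache after enforcement:",
      enforce_threshold(files, capacity=1 * 2**30))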

Page 21-31
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HDI Is Backup Free

HDI Is Backup Free

 Migration
• Every file written to HDI is migrated to HCP
• Migration is a scheduled event
• Each file system can have its own migration policy
• Default migration interval: once per day

 Stubbing
• HDI keeps a copy of every file in its cache
• When the cache capacity reaches a defined threshold, candidate files are
deleted from the cache and replaced with a stub pointing to the file on HCP
 Default threshold is 90%, threshold is tunable
 Stub size is 4KB

• Recovery point objective (RPO) is the maximum tolerable period in which data might be
lost from an IT service due to a major incident

• Recovery time objective (RTO) is the duration of time and a service level within which a
business process must be restored after a disaster

HDI Intelligent Caching: Migration

 Migration
(Diagram: files file1 through file5 in the HDI file system, stamped with their creation or modification dates, are migrated to objects Object-ID1 through Object-ID5 in the HCP namespace; after migration each file keeps metadata and a reference to its object, and several are shown already stubbed.)


HDI Intelligent Caching: Stubbing

 Stubbing
(Diagram: after stubbing, each file, file1 through file5, in the HDI file system is reduced to its metadata and a reference that points to the corresponding object, Object-ID1 through Object-ID5, in the HCP namespace.)

File Retention Utility (WORM)

 Features
• Enables a write once, read many (WORM) file system
 Protects files from intentional or unintentional modification
 A file cannot be deleted during its retention period, if one is assigned
• Customer applications can take advantage of WORM via a published API
• “Auto-commit” automatically creates a WORM file from a read only* file
Item: Read Write file
• Write and read the file

Item: Read Only* file
• Write not allowed
• Write allowed after removing read only flag*
• Deletion allowed

Item: WORM file with retention
• Write NOT allowed regardless of write permission
• Delete not allowed regardless of write permission

Item: WORM file with expired retention
• Write NOT allowed
• Deletion allowed if write permission is set

* Read Only is a state where write permission of the file is off or the file has CIFS read only attribute.
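The retention rules in the table can also be read as a small decision function. The sketch below is a simplified Python model of those rules (the state names and functions are invented for illustration and are not an HDI API); it assumes retention is expressed as a date compared against the current time.

```python
# Toy model of the WORM permission rules summarized in the table above.
from datetime import datetime

def can_write(state, write_permission, retention_until=None, now=None):
    now = now or datetime.utcnow()
    if state == "read_write":
        return True
    if state == "read_only":
        return False          # allowed again only after the read-only flag is removed
    if state == "worm":       # WORM file, with or without active retention
        return False          # write never allowed, regardless of write permission
    raise ValueError(state)

def can_delete(state, write_permission, retention_until=None, now=None):
    now = now or datetime.utcnow()
    if state == "read_write":
        return True
    if state == "read_only":
        return True           # deletion allowed even while the file is read-only
    if state == "worm":
        if retention_until and now < retention_until:
            return False      # retention still active: delete not allowed at all
        return write_permission   # retention expired: delete allowed if write permission is set
    raise ValueError(state)

# Example: a WORM file whose retention has already expired, with write permission set.
print(can_delete("worm", write_permission=True,
                 retention_until=datetime(2016, 1, 1)))   # True (retention has expired)
```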


Roaming Home Directories

 HDI systems leverage a shared HCP Tenant for RHD

(Diagram: a CIFS user roams between Seattle and Boston; HDI 'A' in Seattle mounts the shared Home Directory file system read/write (RW) while HDI 'B' in Boston mounts it read-only (RO); both systems use one HCP tenant containing a namespace for HDI 'A', a namespace for HDI 'B' and a namespace for the shared Home Directories.)
RHD = Roaming Home Directories

HDI With Remote Server


This section presents HDI with Remote Server and describes how it differs from the other HDI configurations.

What Is HDI With Remote Server?

 HDI with Remote Server has the following characteristics that separate
it from the other HDI configurations
• Small form factor
• Simple to set up
• Remote management from the central data center via HCP Anywhere
• Costs $1,500 - $2,000 per system, including HDI software


Why HDI With Remote Server?

 Remote sites are small (3 – 30 employees)

 There is no IT presence
• No raised floor data center, no racks
• Typically administered remotely for networking and other core services

 A remote site solution therefore needs to be


• Able to fit in a small site; for example, next to a printer or copy machine
• Able to be set up by a non-IT person, and managed remotely by IT
• Low cost, to keep the overall solution cost down

 Current HDI solutions do not meet these needs

Solution Components

 HDI with Remote Server
• Installed at remote sites by end user customers

 HCP Anywhere
• Configures, provisions and manages HDI with Remote Servers

 HCP
• Platform for DR and long-term storage of data from HDI with Remote Server

 Relationship to File Sync and Share
• HCP Anywhere provides the management interface for File Sync and Share, plus HDI Remote Server
• Customers will use HCP Anywhere to manage one or the other, or both solutions together

(Diagram: Hitachi Data Ingestor Remote Server at the remote site, Hitachi Content Platform Anywhere for management, and Hitachi Content Platform in the data center.)

HCP Anywhere
This section presents Hitachi Content Platform Anywhere (HCP Anywhere) and describes how it operates. HCP Anywhere is a software product that provides file synchronization and sharing together with remote management of HDI devices.


HCP Solution With HCP Anywhere

 An HCP Anywhere system consists of both hardware and software and uses Hitachi
Content Platform (HCP) to store data

 HCP Anywhere is a combination of hardware and software that provides 2 major features:
• File synchronization and sharing
 This feature allows users to add files to HCP Anywhere and access those files from nearly any
location
 When users add files to HCP Anywhere, HCP Anywhere stores the files in an HCP system and
makes the files available through the user's computers, smartphones and tablets
 Users can also share links to files that they have added to HCP Anywhere
• HDI device management
 This feature allows an administrator to remotely configure and monitor HDI devices that have
been deployed at multiple remote sites throughout an enterprise

What is HCP Anywhere?

Hitachi Content Platform Anywhere (HCP Anywhere) is an option for Hitachi Content Platform
that provides a fully integrated, on-premises solution for safe, secure file synchronization and
sharing.

What is “synchronization and sharing”?

Also called “sync and share,” these are cloud-based software packages that connect a number
of devices to the same set of files. Think of consumer offerings like Dropbox and Apple iCloud,
where a file created or added on one device shows up on all other devices registered to that
account. Users love them because their desktop files are also on their laptops,
their smartphones and their tablets. Sync and share works by storing data in the cloud so that
any web-enabled device can send and receive updates. In the context of enterprise IT, this
technology can be a big headache. It puts corporate data outside of IT’s control and into risky
consumer clouds. What both parties need is a solution that IT departments can deploy on their
terms and lets users sync and share their work related files in a safe and secure manner.

What does an HCP customer need to use HCP Anywhere?

An HCP customer just needs the base hardware, an HCP Anywhere POD and seat licenses for
each user.


Hitachi Content Platform Anywhere

 File sync and mobile NAS access from your own cloud

Secure, Simple, Smart

• It’s safe and secure – encryption, access control, on-premises, IT managed, remote
wipe, and more

• It’s easy to use – active directory integration, client apps, self-registration, and more

• It’s efficient – backup free, compression, single instancing, spin-down, multiple media
types, metadata only, and more

• Provide file sync and share capabilities from within IT

• Avoid turning over control, security, protection of data to others

• Retain proper stewardship/governance of data

• Eliminate unnecessary copying of data

• Reduce risk of non-compliance, compromising intellectual property

• Deliver better protection of data on laptops, tablets, smartphones


HCP Solution With HCP Anywhere

(Diagram: browsers, mobile devices, HDI systems and desktops connect as clients over HTTPS, through public or private networks, to the HCP Anywhere POD; each of the two application and database servers in the POD runs web servers, REST APIs, a notification server, a sync server and a Postgres database, with replication between them over a back-end network; the POD also integrates with enterprise IT services such as Active Directory, DNS, NTP and virus scanning.)

An HCP Anywhere system consists of both hardware and software and uses Hitachi Content
Platform (HCP) to store data.

HCP Anywhere is a combination of hardware and software that provides two major features:

• File synchronization and sharing

This feature allows users to add files to HCP Anywhere and access those files from nearly any
location. When users add files to HCP Anywhere, HCP Anywhere stores the files in an HCP
system and makes the files available through the user's computers, smartphones and tablets.
Users can also share links to files that they have added to HCP Anywhere.

• HDI device management


This feature allows an administrator to remotely configure and monitor HDI devices that have
been deployed at multiple remote sites throughout an enterprise.

HCP Anywhere nodes

An HCP Anywhere system includes 2 servers, called nodes, that are networked together. The
physical disks in each node form 3 RAID groups. Both nodes run the complete HCP Anywhere
software. Additionally, the system keeps copies of essential system data on both nodes. These
features combine to ensure the continuous availability of the system in case of a node failure.

There are 4 major pieces to creating an HCP Anywhere solution. First is the HCP itself as it
provides the storage platform on which HCP Anywhere runs. For HCP Anywhere there are 3
components: the base hardware consisting of 2 Dell Ethernet switches, the HCP Anywhere POD


(which can be a VMware installation or a preconfigured POD consisting of 2 Hitachi CR210H


nodes and supports up to 20,000 users), and the per user seat licenses available in packs of
100, 500, 2000 and 5000 seats. An Enterprise license is available for 5000 users or more and
can be ordered in the exact quantity required.

Desktop Application Overview

 The HCP Anywhere application creates a folder named HCP Anywhere on your computer
• Use folder as you would any other folder on your computer
• Save files to it, drag-and-drop files into it, delete files from it and so on

 Contents are automatically synchronized with the HCP Anywhere system and
other devices on which you have installed HCP Anywhere

HCP Anywhere App in the App Store

 Search for Hitachi Content Platform Anywhere in the App Store from your iOS device, or the Play Store from your Android device

 Install it on your mobile device


HCP Anywhere Features

 File Sharing through link (public and private)

 Windows, Mac, iPad/iPhone iOS, Android Devices

 Share Folders with other users

 Active Directory integration

 Show File History

 Show deleted items

 Caching on mobile devices

Demo

 https://www.hds.com/groups/public/documents/webasset/content-
archive-platform.html?M=content-platform_data-ingestor

 https://www.hds.com/hdscorp/groups/public/documents/webasset/hit
achi-cp-anywhere-opo.html


Online Product Overviews

 Hitachi Content Platform and Hitachi Data Ingestor Product Demo

 Hitachi Content Platform Anywhere

https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv

Module Summary

 In this module, you should have learned to:


• Describe the Hitachi Content Platform (HCP) features and functions
• Describe the Hitachi Data Ingestor (HDI) functionality
• Describe the HCP Anywhere functionality


Module Review

1. Which 2 statements are true regarding the definition of fixed content?


a. Content that cannot be archived and restored
b. Content that can only be changed by the system administrator
c. Static data that is in a final state
d. Content that will not and cannot change

2. What functionality does HDI offer for HCP?

22. Hitachi Compute Blade and Hitachi
Unified Compute Platform
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the Hitachi Compute Blade and Hitachi Unified Compute Platform
offerings
• Describe how customers can benefit from using Hitachi Compute Blade and
Hitachi Unified Compute Platform


Hitachi Compute Portfolio

High-end blade (CB 2500)
 High availability, performance and scalability for the enterprise
 Optimized for virtualization, maximizing utilization in the data center

Midrange blades (CB 500)
 Highly dense chassis design optimized for virtualization and consolidation
 Compact and flexible with a variety of blade and connectivity options

Rack-optimized (CR 220)
 Compact and flexible application server platform
 Available exclusively in Hitachi file and content solutions

CB 2500 = Hitachi Compute Blade 2500

CB 500 = Hitachi Compute Blade 500

CR 220 = Hitachi Compute Rack 220

Hitachi Compute Blade 500 Series

 Hitachi Compute Blade 500 Features


• 6U houses up to 8 standard server blade modules or up to 4 double-width
blades with support of multi-blade SMP configuration
• Integrated chassis management
• Mainframe class high availability features
• Latest Intel Xeon processors
• Large maximum memory configurations
• Hardware based logical partition (LPAR)
• IO flexibility via expansion blades
• Choice of switched fabric options, including IP, FC and converged fabric


Hitachi Compute Blade 500 combines the high-end features needed in today's mission critical
data center with the high compute density and adaptable architecture you need to lower costs
and protect investment. The flexible architecture and logical partitioning feature of Hitachi
Compute Blade 500 allow configurations to exactly match application needs, and multiple
applications to easily and securely co-exist in the same chassis.

The CB 500 is integrated into a number of solutions including several of the Unified Compute
Platform (UCP) offerings.

• 6U chassis houses up to 8 server blade modules

• Integrated chassis management features simplify and accelerate installation and


maintenance tasks

• Mainframe class high availability features (hot-swap, redundant components, automated


failover)

• Latest Intel Xeon processors for state-of-the-art performance

• Multiple blade options for workload flexibility

• Large maximum memory configurations

• Hardware based logical partition (LPAR) capability for robust, secure, high performance
virtualization

• IO flexibility via expansion blades, to add optional dedicated disk storage or I/O
expansion capability

• Choice of switched fabric options, including IP, FC and converged fabric

• Securely host multiple workloads in a single blade chassis


Compute Blade 500 Chassis And Components

Front
 USB ports (2)
 Front Panel
 Half-wide blade server slots (8)

Rear
 Management Module(s)
 Switch modules (4)
 Fans (6)
 Power Supplies (4)

Here is a front and rear view of a Compute Blade 500 chassis. These images are taken from the
Web Console view of a training system and it is not fully populated.

A CB 500 system will contain:

• One or more management modules

• Power Supplies

• Fans

• Network switches

These 4 component types are located in the rear of the chassis.

The server blades slots are in the front of the CB 500 chassis. The CB 500 front panel is also
located in the front of the chassis. The front panel includes status LEDs and two USB ports.

This information and comparable diagrams can be found in the documentation including in the
following resources:

• Hitachi Compute Blade 500 Series Getting Started Guide (MK-91CB500002)

• Hitachi Compute Blade 500 Series System Overview Guide (MK-91CB500001)


Hitachi Compute Blade 500 Series

 Enterprise class capabilities


• Performance
 Intel Xeon E5v3 and E7v3 series processors with up to two CPUs per blade.
• Scalability
 Each standard-width blade may have up to two sockets, with up to 18 cores per
socket
• Reliability
 CB 500 chassis is fully redundant and components are hot-swappable
• Flexibility
 Configuration: Supports wide range of OS and virtualization solutions
 Workload: Allows you to run your most demanding workloads, including I/O-
intensive applications such as OLTP, Web serving, and HPC

Enterprise-Class Capabilities

Hitachi Compute Blade 500 is a true enterprise-class blade server, and it is important to understand exactly what that term means. It is "enterprise-class" in terms of performance, scalability, reliability and configuration flexibility, as outlined below:

• Performance: CB 500 supports blades based on the latest and most powerful Intel
Xeon E5v3 and E7v3 series processors with up to eight CPUs (in SMP mode). It meets
the performance needs of large-scale systems that require extremely high compute
power and I/O today. The extensible CB 500 architecture can support multiple blade
types, including future generations of Intel processor. Standard-width CB 500 blades can
also be expanded to support additional high-density disk (HDD) storage or additional
PCI slots with an expansion blade installed in an adjacent blade slot.

• Scalability: The robust, rack-mountable 6U chassis of CB 500 houses up to eight server


blade modules; each standard-width blade may have up to two sockets, with up to 18
cores per socket. Double-width blades can support up to two E7 processors with up to
18 cores per socket. It will give you 36 cores per blade in normal mode and up to 144
cores in CB500 and CB2500 Chassis in 4-Blade SMP mode. Each blade may be
configured to support up to 30 logical partitions. Memory is expandable up to 48 DIMMs
in the CB520X blade, allowing up to 1,536GB to be configured per blade using 32GB
DIMMs. Each standard-width blade supports up to two I/O mezzanine cards to connect
to the chassis (four in the double-width blade), allowing each standard blade to support
up to four I/O switch modules (usually configured as two redundant pairs).


• Reliability: CB 500 chassis is fully redundant and components are hot-swappable.


These components include: redundant switch and management modules, extremely
reliable backplane and I/O, N +1 or fully redundant power supply modules, and N+M
blade failover protection (referring to "M" backup blades for every "N" active server
blades, so failover is cascading). In the event of blade hardware failure, the system
automatically detects the fault and identifies the problem by indicating the faulty module,
allowing immediate failure recovery. In addition, the CB520Hv3 blade takes advantage
of the latest high availability features built into the Intel Xeon processor, such as
memory mirroring and rank sparing.

• Configuration flexibility: CB 500 supports Windows and/or Linux OSs, and a wide
range of virtualization solutions, including native LPAR, providing a high level of
flexibility and investment protection. The system can easily be configured to the exact
number of sockets, processor cores, I/O slots, memory and other components required
to optimally support your application without bottlenecks. The chassis can be configured
and managed via simple GUI HTML-based Web interface, which is seamlessly integrated
with the Hitachi Command Suite tools used to manage Hitachi storage products.

• Workload flexibility: The enterprise-class capabilities of CB 500 make it suitable for a


wide variety of applications. CB 500 allows you to run your most demanding workloads,
whether they are I/O-intensive applications such as online transaction processing and
Web serving, or high-performance computing (HPC), with extremely high performance,
reliability, manageability, scalability and flexibility. Because of this extreme flexibility, CB
500 is the ideal platform to run mission-critical applications and consolidate systems at
the edge, application or database tiers ... or all three.

Reference: https://www.hds.com/assets/pdf/hitachi-compute-blade-500-whitepaper.pdf
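As a quick check on the scalability figures quoted in the Scalability note above (two sockets per standard-width blade, up to 18 cores per socket, 4-blade SMP, 48 DIMM slots in the CB520X), the arithmetic works out as follows. This is a worked illustration only, not configuration guidance.

```python
# Arithmetic behind the CB 500 scalability figures quoted above.
sockets_per_blade = 2
cores_per_socket = 18

cores_per_blade = sockets_per_blade * cores_per_socket   # 36 cores per blade
cores_4blade_smp = 4 * cores_per_blade                   # 144 cores in a 4-blade SMP complex

dimm_slots_cb520x = 48
max_memory_gb = dimm_slots_cb520x * 32                   # 1,536 GB per blade with 32GB DIMMs

print(cores_per_blade, cores_4blade_smp, max_memory_gb)  # 36 144 1536
```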


CB 500 Web Console

• The Compute Blade 500 Web Console GUI interface is accessed using a supported web
browser from a correctly configured system console computer. The CB 500 Web Console
is accessed by using the CB 500 Management Module’s IP address on the customer’s data
center management LAN.

• The CB 500 Web Console maintains consistency with the style, “look and feel” of the
Hitachi Command Suite (HCS 7.x) product.

• The Compute Blade 500 Web Console has 4 view tabs, Dashboard, Resources, Alerts,
and Administration. The Dashboard view, shown here, is displayed by default when you
connect to the Web Console. From the Dashboard view you can quickly determine which
components are installed in the CB 500 and their status. This is visually represented by
the front and rear view chassis graphics in the center of the display.

• Other important information shown in the header areas includes the CB 500
Management Module IP address, the description that this is the Web Console interface,
the fact that this is a Compute Blade 500 system and the user who is signed on.


Hitachi Compute Blade 2500 Series

 12U chassis houses up to 14 standard


server blade modules or up to 8 double-
width blades with support of multi-blade
SMP configuration
 Latest Intel Xeon processors
 Large maximum memory configurations
 IO flexibility via a choice of fabric options,
including IP, FC and converged fabric, PCIe
cards, and expansion blades
 Provides all other features of CB 500 such
as LPAR

SMP = Symmetric Multiprocessing

NOTE: Top slot is not available for Half-wide blades.

With the latest Intel Xeon E5v3 and E7v3 family processors, Hitachi Compute Blade (CB) 2500
delivers enterprise computing power and performance, as well as unprecedented scalability and
configuration flexibility. This helps you to lower costs and protect investment. The flexible
architecture and logical partitioning feature of CB 2500 allow configurations to exactly match
application needs, and enables multiple applications to easily and securely co-exist in the same
chassis.

• 12U chassis houses any combination of up to 14 standard server blades or up to 8


double width blades (with multi-blade SMP configuration).

• Integrated chassis management features simplify and accelerate installation and


maintenance tasks.

• Mainframe class high availability features (hot-swap, redundant components, automated


failover).

• Latest Intel Xeon processors for state-of-the-art performance.

• Multiple blade options for workload flexibility.

• Large maximum memory configurations.


• Hardware based logical partition (LPAR) capability for robust, secure, and high
performance virtualization.

• IO flexibility via a choice of fabric options, including IP, FC and converged fabric, PCIe
cards and expansion blades.

• Securely host multiple workloads in a single blade chassis.

Compute Blade 2500 Components - Front

 2 management modules

 14 half-width or 8 full-width blades

 LCD touch console panel

Hardware that makes up the CB 2500: server chassis, server blade, PCI expansion blade, management module, switch module, I/O board module, power supply module, fan module and fan control module.


Compute Blade 2500 Components - Rear

 2 management LAN modules

 6 power-supply modules

 2 switch modules

 28 I/O board modules

 10 fan modules

 2 fan control modules

The following shows the number of modules that can be installed in the CB 2500 server chassis:

• A maximum of 8 full-width blades or a maximum of 14 half-width blades can be installed.

• A maximum of 2 switch modules can be installed.

• 2 management modules and 2 management LAN modules are installed.

• A maximum of 6 power supply modules can be installed.

• 8 fan modules are installed and 2 fan control modules that control these fan modules
are installed.

• A maximum of 28 I/O board modules can be installed.

This information and comparable diagrams can be found in the documentation including in the
following resources:

• Hitachi Compute Blade 2500 Series Getting Started Guide (MK-99CB25000034)


Hitachi Compute Blade 2500 Series

Enterprise Class Capabilities

Performance: Blade options support Intel Xeon E5 and E7 series processors with up to two CPUs per blade

Scalability: Robust, rack-mountable 12U chassis houses up to eight E7-based server-blade modules or up to 14 E5-based blades (or a combination)

Reliability: Fully redundant chassis with hot-swappable components

Configuration Flexibility:
• Supports both Windows and/or Linux operating systems
• Wide range of virtualization solutions provides high level of flexibility and investment protection
• Includes native LPAR logical partitioning

Workload Flexibility:
• Runs your most demanding workloads
• Includes I/O-intensive applications such as online transaction processing, Web serving or high-performance computing (HPC)

CB 2500 Web Console

• The Web console runs on a Web browser that is set up in the system console. The Web
console can manage and set all of the equipment installed in a server chassis.

• Refer to slide 17 to view the management module of the CB 2500.


Server Blade Options

 CB 520H B3 server blade type


• Half-width
• Faster processors - Intel Xeon E5-2600v3 series
 2 CPU with 18/16/14/12/10/8/6/4 cores per CPU

• Larger memory capacities


 DDR4 RDIMM 8/16/32 GB; LR-DIMM 32/64 GB
 Maximum slots 24
 Maximum memory 1536 GB (64 GB x 24 RDIMMs)

• Local HDD – maximum 2 with capacity 3.6 TB


• One on-board LAN and one Mezzanine slot

Current description of supported server blade types can be found at

• http://www.hds.com/products/compute-blade/compute-blade-
500.html?WT.ac=us_mg_pro_cb500

 CB 520X B2 server blade type


• Full-width with SMP capabilities (2-blades or 4-blades)
• Faster processors - Intel Xeon E7-8800v3 series
 2 CPU with 18/4 cores per CPU

• Larger memory capacities - DDR4 RDIMM 8/16/32 GB


 Maximum slots 48 - Memory 1536 GB (32GB x 48 RDIMMs) or 3072 GB (64GB x
48 RDIMMS)

• Local HDD – maximum 2 with capacity 3.6 TB


• Two on-board LANs and two mezzanine slots

Current description of supported server blade types can be found at

• http://www.hds.com/products/compute-blade/compute-blade-
500.html?WT.ac=us_mg_pro_cb500


Compute Blade Platform Features

 Reduced cost through flexibility

 Greater security, isolation and performance with Hitachi logical


partitioning feature (LPAR)

 Open industry standard blade hardware platform

 Run on x86 technology and also support industry standard PCIe slots

 Lower total support and upfront capital costs

 Hitachi symmetric multiprocessing (SMP) technology lets you scale up


your server environment to satisfy future growth

• Reduce cost through flexibility, with SMP scalability, plan and purchase for now, without
worry or planning for excess resources you may need tomorrow

• Have greater security, isolation and performance with the Hitachi logical partitioning
feature (LPAR), which delivers significantly lower cost enablement than other software
solutions

• Support changing requirements with an open industry standard blade hardware platform.
Get away from proprietary UNIX server environments and switched only I/O fabrics

• Hitachi blade servers run on x86 technology and also support industry standard PCIe
slots

• One single vendor: Whether it’s enterprise storage or servers, Hitachi is the only partner
who can provide enterprise level hardware and support services on a global scale

• Blade Server 2000 covers a wide range of computing environments from PC server
consolidation to mission critical systems

o Investment in one single platform can meet a wide variety of computing needs
thus lowering total support and upfront capital costs

• Hitachi symmetric multiprocessing (SMP) technology lets you purchase server resources
to satisfy compute requirements today without the need to anticipate future growth and
compute requirements of tomorrow


o When additional resources are required, purchase the additional blades and SMP
connectors to scale up your server environment to satisfy future growth

• With up to 1TB addressable memory when using 4 blade SMP, a wide range of high
performance applications can run on the Blade Server 2000 platform, alleviating the
need for high end UNIX systems

o This reduces the high hardware, software and support costs that are typically
associated with UNIX or RISC systems

• Balanced architecture design of Hitachi blade servers ensures there are no bottlenecks

o The hybrid I/O design of Blade Server 2000 supports both an integrated
switched fabric and direct access PCIe Gen 2 Slots to meet the I/O requirements
of the most demanding applications

• Blade Server 320 offers space and power efficiencies

o Using SSD in a Blade Server 320 lets you support applications that require faster
storage access


What Is Logical Partitioning?

 Logical Partitioning (LPAR) is a hardware/firmware-based Type 1


Hypervisor operating system virtualization layer

 Controls and allocates resources from a physical computer (or physical


partition in the case of the Blade Server Model 2500)
• Allocates into multiple logical computers (or logical partitions – LPARs)
 Each LPAR capable of
running independent
operating systems
and applications

Logical Partitioning (LPAR) is Hitachi’s firmware-based implementation of a Type 1 Hypervisor

A hypervisor (aka: virtual machine monitor) is a virtualization platform that allows multiple
operating systems to run on a host computer at the same time. The term usually refers to an
implementation using full virtualization. Hypervisors are currently classified in 2 types:

• Type 1 hypervisor is software that runs directly on a given hardware platform (as an
operating system control program)

o A guest operating system thus runs at the second level above the hardware

o The classic type 1 hypervisor was CP/CMS, developed at IBM in the 1960s,
ancestor of IBM's current z/VM

o More recent examples are Xen, VMWare’s ESX Server and Sun's Logical Domains
Hypervisor (released in 2005)

• Type 2 hypervisor is software that runs within an operating system environment

o A guest operating system thus runs at the 3rd level above the hardware

o Examples include VMware Server (formerly GSX) and Workstation, as well as


Microsoft’s Virtual PC and Microsoft Virtual Server products

For additional information, reference

• http://en.wikipedia.org/wiki/Hypervisor


Compute Rack Server Family

CR 210H xM

CR 220H xM

CR 220S xM

• Hitachi and Hitachi Data System offer a line of Compute Rack servers

• Compute Rack 210 servers require 1 rack unit (1U) of space and Compute Rack 220
servers take up 2 rack units (2U)

• The current Compute Rack line includes:

o CR 210H xM High Performance, 1U Mth or 21st generation

o CR 220H xM High Performance, 2U, Mth or 21st generation

o CR 220S xM High Storage Capacity, 2U, Mth or 21st generation

• Each of these rack servers offers the ability to configure 1 or 2 Intel Xeon processors

o Memory capacity per CPU ranges up to 256GB

o The High Storage Capacity server gives up some memory in trade for the ability
to offer more configured HDDs

• It is valuable to mention the Compute Rack servers here as some of the Hitachi Unified
Compute Platform (UCP) solutions use Compute Rack servers

• The Compute Rack servers may be part of the solution’s core functionality or may be
configured as management server(s) for the UCP solution


Integrated Platform Management


(Diagram: Hitachi Compute Systems Manager provides integrated management of the chassis, blades, LPARs and service processor.)

Hitachi Compute Systems Manager is a standalone set of optional management tools designed
for data center management of multiple chassis via a graphical intuitive interface that provides
point-and-click simplicity. At the system level, HCSM provides centralized management and
monitoring of extended systems containing multiple chassis and racks.

Unified Dashboard

HCSM allows the various CB 500 system components to be managed through a unified interface,
which is seamlessly integrated with Hitachi Command Suite (see Figure 11). When rack
management is used, an overview of all Hitachi Compute Blade racks, including which servers,
storage and network devices are installed, can be quickly and easily obtained. In the event of
any system malfunction, the faulty part can be located at a glance.

In addition, HCSM software provides the ability to define and manage the logical system
configuration of each element to be managed by using the service name. With traditional blade
servers, management of both the logical system and the system's physical resources is required.
When definitions are made with service names (such as sales or stock) within Hitachi Compute
Blade management suite, there is no longer any need for administrators to concern themselves
with the management of physical resources.

HCSM provides centralized system management and control of all server, network and storage
resources. This includes the ability to set up and configure servers, monitor server resources,
integrate with enterprise management software (SNMP), phone home and manage server
assets.


Hitachi Compute Systems Manager

 System management tool that allows


seamless integration into Hitachi Command
Suite (HCS) to provide a single management
view of servers and storage

 HCSM provides
• Usability (GUI integrated with HCS)
• Scalability (10,000 heterogeneous servers)
• Maintainability and serviceability

 Basic functionality included with server at no additional charge

(Screenshot: HCSM GUI)

 Additional functionality and capability available


via optional plug-in modules

HCSM Resources – Compute Blade Chassis

 Chassis details can be viewed in Resources tab of HCSM

 Different tabs available for component management


HCSM Resources – Compute Blade Servers

 View Compute Blades

HCSM Resources – Compute Blade Servers (continued)

 Displays Compute Blade details


• Condition
• Configuration
• Firmware
• CPU, Memory, I/O
• Logical Partitioning
• Power Management
• More Actions


Demo

 http://edemo.hds.com/edemo/OPO/3D_CB2500/CB2500_Main.html

 http://edemo.hds.com/edemo/OPO/CB500/CB500.html?M=cb500-
res

Unified Compute Platform


This next section provides information on Hitachi Unified Compute Platform (UCP).


Unified Compute Platform – One Platform for All Workloads

(Diagram: UCP runs workloads such as SAP ERP, SAP HANA, Microsoft Exchange Server, Microsoft SharePoint, Microsoft SQL Server, Oracle Database, Citrix XenDesktop and VMware Horizon View on a common stack of service orchestration, bare-metal OS or hypervisor, compute blades, IP and SAN networks, and storage plus DR and backup; reference architectures are integrated with ISV software and backed by a single point of support, providing an open, reliable platform with high performance and automation.)

• The Hitachi Unified Compute Platform takes the four basic infrastructure components of
compute, storage, network and software and UNIFIES them into single, platform
solution. It’s a bundled solution offering so you can create a more modern and nimble
data center.

• The Hitachi Unified Compute Platform is designed from the business requirements down,
and built from the bottom up to achieve a more converged virtualized infrastructure,
leveraging the converged stack to make the most efficient decisions about how to
perform the function, and then executing it consistently and predictably based on
architecture design created to support the business requirements. This end-to-end
platform supports multiple architectures, both stateless and stateful, that are comprised
of multiple vendors infrastructure components, resulting in a holistic converged IT that
will be aligned to meet your business needs

• This new way of leveraging the converged infrastructure stack allows us to execute
based on the most efficient path, the overall architecture, and the business
requirements.

Virtual solution that’s flexible and scalable, transforming data center infrastructure into a private
cloud at your own pace


UCP With Unified Compute Platform Director

Unified Compute Platform Director

VSP, HUS VM or HUS storage

CB 500 compute blade

Brocade and Cisco networking

Hitachi Compute Rack 210 management server

The enterprise-class solutions combine the highest quality systems and the most advanced
architecture. UCP creates a framework on which to build a robust converged cloud
infrastructure that includes data protection and management capabilities. UCP includes best-of-
breed Hitachi blade servers (powered by Intel Xeon processors), industry-leading Hitachi
storage, SAN switches from Brocade, and Ethernet networks from Brocade or Cisco. Hitachi
blade servers are known for their superior quality and have advanced functionality that makes
them uniquely suited to support mission-critical applications and a converged cloud
infrastructure.


(Diagram: on-demand servers with greater VM density, on-demand storage with greater pool utilization, on-demand networks with greater efficiency, and automation for operational agility.)

Each UCP solution for VMware vSphere and Microsoft Private Cloud (Hyper-V®) is configured to
maximize the value of your server virtualization environment. Equipment is neither
overpurchased nor overprovisioned. It is architected as a fully integrated platform for deploying
IT as a service. And it improves organizational agility by quickly deploying new applications and
services to respond to changes in business needs and integrate them into current environments.

Unified Compute Platform Family Overview

Model 4000E 4000 6000


(Small – Large) (Medium – Large)
Solutions SAP HANA, MS Private Cloud, MS SharePoint, MS SQL, VMware vSphere, Oracle DB

Management UCP Director + Director Operation Center UCP Director

Server CB500 (2-16) CB500 (2 – 128) CB2500

Storage HUS-130, HUS-VM, HUS-150, HUS-VM, VSP G1000 HUS-VM, VSP


VSP G200 VSP G200 VSP G400 VSP G400 G1000
Networking Cisco converged Brocade Fibre Channel; Cisco or Brocade IP


Model HSP UCP 1000 UCP 2000 UCP Select


Solution VMWare EVO Rail TBA VMware vSphere w/
Cisco UCS
Management UCP Director Op Ctr UCP Director UCP Dir for UCS

Server 2U4N Rack (4-16) 2U4N Rack (4-16) 2U4N Rack (4-16) Cisco UCS

Storage Internal Disks Internal Disk VSP G200 HUS-VM, VSP G1000

Networking Brocade NSX and vSAN Brocade Cisco Converged

• Each model is targeted to a different class of performance, scale, availability and


flexibility based on the workload requirements and is offered at different price points.

• The UCP 6000 high end converged system is designed for high-end application
optimized environments. It integrates the Hitachi CB 2500 high performance blade
servers and supports mid to high-end storage systems such as HUS VM and VSP G1000.
For example, solutions for SAP and Oracle that use the CB 2500 chassis and deliver the
highest performance and availability will be called UCP 6000 for SAP HANA and UCP
6000 for Oracle Database RAC.

• The UCP 4000 mid-range converged system is designed primarily for enterprise-class
virtualized workloads as well as specific application optimized environments. It uses the
CB500 high-density blade chassis. UCP 4000 for VMware vSphere and UCP 4000 for
Microsoft Private Cloud are examples of these solutions. For UCP solutions that limit
scaling to a maximum of 16 blade servers, the models include letter “E” for “Entry Level”
(e.g. UCP 4000E for VMware vSphere).

• The UCP 2000 entry level system is targeted at ROBO (Remote Office or Branch Office)
for tier 2 and tier 3 applications. It integrates a new 2U 4Node rackmount server as well
as the new VSP G200 with networking. UCP 2000 for VMware vSphere is the first of many
solutions which will use this model number.


• The UCP 1000 hyper-converged system is also targeted at ROBO and tier 2 and tier 3
applications. It uses a rackmount server with internal disks. The first solution is the
UCP 1000 for VMware EVO: RAIL, which includes virtual SAN and virtual networking with
its hypervisor (virtualization software).

• UCP Director, offering best-in-class automated management and orchestration for converged solutions, is central to the success of Hitachi UCP. It is:

• Currently available on the UCP 4000 and 4000E

• Will support other models in the future.

Unified Compute Platform 4000E – Entry-Level

Enterprise-class converged for the mid-market


Manage virtual and physical infrastructure from VMware vCenter or
Microsoft System Center with Unified Compute Platform Director

Enterprise-class density, availability, and performance


On-demand compute and storage

Deploy in 5 days or less

Mid-market pricing and packaging


 Lower cost and faster deployment


 Compute, network, and initial storage in 1 rack
 More storage can be installed on additional racks
as needed

 System Components
 Unified Compute Platform Director (for VMware or
Microsoft)
 Hitachi Compute Blade B500 (up to 16 blades)
 Choice of modular or enterprise storage
• Hitachi Unified Storage VM or HUS 130;
Hitachi Virtual Storage Platform (VSP) bolt-on
 Converged networks
• Cisco Nexus 5548
 Non-HA management server (Optional HA cluster)


Demo

 https://www.hds.com/go/tour-ucp/

Online Product Overviews

 Hitachi Compute Blade 500 Online Product Overview

 Hitachi Compute Blade 2500 - 3D Intro Video

 Hitachi Compute Blade - Logical Partitioning Capabilities from Hitachi


Data Systems

https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv


Module Summary

 In this module, you should have learned to:


• Describe the Hitachi Compute Blade and Hitachi Unified Compute Platform
offerings
• Describe how customers can benefit from using Hitachi Compute Blade and
Hitachi Unified Compute Platform


Your Next Steps

Validate your knowledge and skills with certification.


Follow us on social media:

@HDSAcademy
Check your progress in the Learning Path.

Review the course description for supplemental courses, or


register, enroll and view additional course offerings.

Get practical advice and insight with HDS white papers.

Ask the Academy a question or give us feedback on this course


(employees only).

Join the conversation with your peers in the HDS Community.

Certification: http://www.hds.com/services/education/certification

Learning Paths:

• Customer Learning Path (North America, Latin America, and


APAC): http://www.hds.com/assets/pdf/hitachi-data-systems-academy-customer-
learning-paths.pdf

• Customer Learning Path (EMEA):


http://www.hds.com/assets/pdf/hitachi-data-systems-academy-customer-training.pdf

• All Partners Learning


Paths: https://portal.hds.com/index.php?option=com_hdspartner&task=displayWebPage
&menuName=PX_PT_PARTNER_EDUCATION&WT.ac=px_rm_ptedu

• Employee Learning Paths:


http://loop.hds.com/community/hds_academy

Learning Center: http://learningcenter.hds.com

White Papers: http://www.hds.com/corporate/resources/


For Partners and Employees –


theLoop: http://loop.hds.com/community/hds_academy/course_announcements_and_feedback
_community

For Customers, Partners, Employees – Hitachi Data Systems


Community: https://community.hds.com/welcome

For Customers, Partners, Employees – Hitachi Data Systems Academy link to


Twitter: http://www.twitter.com/HDSAcademy

A. Hitachi Enterprise Storage Hardware –
Hitachi Virtual Storage Platform
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the architecture, essential components and features of the Hitachi
enterprise storage systems
• Describe the tools available for the management of Hitachi enterprise storage
systems

User and Reference Guides


• Hitachi Virtual Storage Platform G1000 Encryption License Key User Guide

• Hitachi Virtual Storage Platform G1000 Hardware Guide

• Mainframe Host Attachment and Operations Guide

• Hitachi Virtual Storage Platform G1000 Mainframe System Administrator Guide


• Hitachi Virtual Storage Platform G1000 Performance Guide

• Hitachi Virtual Storage Platform G1000 Product Overview

• Hitachi Virtual Storage Platform G1000 Provisioning Guide for Mainframe Systems

• Hitachi Virtual Storage Platform G1000 Provisioning Guide for Open Systems

• Open-Systems Host Attachment Guide

Hitachi Enterprise Storage Hardware Overview


This section presents the hardware components that make up the Hitachi Virtual Storage Platform (VSP).

Virtual Storage Platform Introduction

• Innovative Virtualization: centralized, secured and simplified storage management with multi-vendor storage support

• Unique 3D Scaling Technology: high performance grid scale-up and scale-out storage architecture with advanced virtualization capabilities and connectivity

• Sustainable IT: 40% less power and half the footprint versus competition

• Seamless Migration: 100% availability; 90% reduction on USP V/VM migration effort with no application outage

• Dynamic Mobility: enables fluid storage architecture; simplifies QoS and lowers storage TCO for block and file data

• Data Resilience and Protection: proven replication and continuous data protection options to simplify backup and business continuity needs

USP V/VM = Hitachi Universal Storage Platform V/VM

QoS = Quality of Service

TCO = Total Cost of Ownership


VSP Full Configuration – 6 Rack

 The maximum number of frames is 6


 The system contains 2 DKC boxes and 16 DKU boxes

Item        1 Module   2 Module
HDD (2.5”)  1,024      2,048
HDD (3.5”)  640        1,280
CHA ports   80 (96*1)  176 (192*1)
Cache       512GB      1024GB
*1: ALL CHA configuration (Diskless)

(Diagram: the six racks, left to right, are RK-12, RK-11, RK-10, RK-00, RK-01 and RK-02.)

CHA = Channel Adapter

DKC = Disk Controller Unit

DKU = Disk Unit

HDD = Hard Disk Drive

• A fully-configured VSP system contains 2 DKC Boxes, in each of 2 separate racks, and
16 HDU Boxes

• A fully-configured VSP system requires 6 racks

o Each 19” rack is 60 cm wide, outside edge to outside edge

o The VSP rack is 110 cm deep including the rear door

o The total width of 6 racks is 278 mm (11 inches) wider than that of the 5-cabinet, fully-configured USP V

• The table on this page also shows cache capacity

• The rack naming convention is a bit different from the RAID 600 USP V.

o Each rack has a two digit identifying number

o Each of the DKC racks will be RK-00 and RK-01 respectively


o The HDU racks associated with a DKC rack use the same left digit, followed by a 1 or 2 depending upon their physical position relative to the DKC rack

• Diskless configuration option is supported for the VSP

o In the case of a diskless configuration, more CHA ports are possible

(Images: Controller Chassis (DKC) and Disk Chassis (DKU).)
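Tying the drive maximums in the configuration table above back to the per-box capacities quoted later in this module (80 LFF or 128 SFF drives per DKU box) and the box counts (16 DKU boxes across 2 modules, so 8 per module), a quick check of the arithmetic:

```python
# Arithmetic behind the VSP drive maximums quoted in the configuration table above.
SFF_PER_DKU_BOX = 128      # 2.5-inch drives per DKU box
LFF_PER_DKU_BOX = 80       # 3.5-inch drives per DKU box
BOXES_PER_MODULE = 8       # DKU boxes per module (16 in a 2-module system)

print(1 * BOXES_PER_MODULE * SFF_PER_DKU_BOX)   # 1024 SFF drives, 1 module
print(2 * BOXES_PER_MODULE * SFF_PER_DKU_BOX)   # 2048 SFF drives, 2 modules
print(1 * BOXES_PER_MODULE * LFF_PER_DKU_BOX)   # 640 LFF drives, 1 module
print(2 * BOXES_PER_MODULE * LFF_PER_DKU_BOX)   # 1280 LFF drives, 2 modules
```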

VSP 2-Module System

 Each module has 1 logic box and HDD box connected with it
 The basic module and option module are called Module-0 and Module-1
respectively
 Module-0 and Module-1 are connected via Grid Switch (GSW) PCB
(Diagram: the option module, Module-1, spans racks RK-12, RK-11 and RK-10, and the basic module, Module-0, spans racks RK-00, RK-01 and RK-02; each module has a Logic Box in its innermost rack (RK-10 or RK-00) and HDD Boxes #x-0 through #x-7 distributed across its three racks.)

PCB – printed circuit board


Control and Drive Chassis Structure


 Control chassis includes
• Virtual Storage Directors
• Cache
• FED and BED adapters
• Power supplies
• Service processors

 Drive chassis includes
• Drives
• SAS links
• Fan assembly opens to exchange drive online

 Drives
• 80 x 3.5 in drives
• 128 x 2.5 in drives

(Diagram: the 14U control chassis holds VSD x 4, Cache x 8, FED x 8, BED x 4, SVP x 2, SSW x 4 and PS x 4; the 13U 3.5 in. drive chassis holds two groups of 40 drives, SAS x 4 and PS x 4.)

• Looking at the control chassis, the design is a more modular, blade style structure

• Virtual Storage Directors and Cache adapters are added to the front

• FEDs, BEDs and Grid Switch adapters are added to the back

• Services Processors are accessed from the back of the system as well

• Two control chassis (14 rack units high) can be combined to operate as a single unit

• The drive chassis (13 rack units high) contain either 2.5” or 3.5” drives

• Fan doors are moved aside in order to service drives online

• Fans in the opposite side run faster to move air when the other side is open


19 Inch Industry Standard Width Rack – DKC

(Diagram: front-side and rear-side views of the 42U, 19-inch frame, holding 13U DKU Boxes above a 14U DKC Box.)

• The VSP system is assembled in a rack frame

o The rack is a Hitachi-custom rack that conforms to the industry standard 19 inch
width

o The rack is 42U high and 1100 mm (43.3 inches) deep

• The DKC Box and DKU Box containers are used to hold the VSP components

o The DKC Box is 14U high and the DKU Box is 13U high

o The first rack will contain 1 DKU Box, 1 front and 1 rear

o This rack may also contain 1 or 2 DKU Boxes


Disk Unit (DKU) Frames

(Diagram: a DKU frame holds three 13U DKU chassis; each chassis is either an SFF or an LFF chassis.)

SFF = Small Form Factor

LFF = Large Form Factor

• The VSP DKU frame is also a 19 inch industry standard width rack

• All racks in a VSP system have the same outside dimensions

• A rack that is used as a DKU frame can hold 3 DKU Boxes, each 13U high

• A hard disk unit (HDU) Box contains HDDs in both the front and rear

• The weight of the DKU is 80 kg

• The VSP supports 2 different internal HDU Box structures — one that holds Large Form
Factor (LFF) 3.5” disk drives and one that holds Small Form Factor (SFF) 2.5” disk drives

o One HDU Box can be either for LFF or SFF disk drives but not mixed

o A mix of HDU Boxes can be configured in 1 VSP system

o LFF and SFF HDU Boxes can be mixed in any configuration in the VSP

 An LFF HDU Box can contain a maximum of 80 HDDs


o An SFF HDU Box can contain a maximum of 128 HDDs


DKC Components

 Controller Chassis (DKC) consists of:


• Channel Adapter (CHA) or Front End Directors (FED)
• Disk Adapter (DKA) or Back End Directors (BED)
• Cache Memory Adapter or Data Cache Adapter (DCA)
• Grid Switch Adapter (GSW)
• Virtual Storage Directors (VSD)
• Service Processor (SVP)
• Cooling fan
• AC-DC power supply

• The battery and the Cache Flash Memory are also installed in the CMA to prevent data
loss from a power outage or other event

• The storage system continues to operate when a single point of failure occurs, by
adopting a duplexed configuration for each control board (CHA, DKA, CPC, GSW and
VSD), a redundant configuration for the AC-DC power supply and the cooling fan

• The addition and the replacement of the components and the upgrade of the microcode
is an online operation

• The SVP allows the engineers to set and modify the system configuration information,
and also can be used for checking the system status

• The SVP can also be configured to report system status and errors to Service Center and
enables the remote maintenance of the storage system


Dual Cluster Structure of the DKC

(Diagram: front and rear views of DKC-0 showing the dual-cluster layout; Cluster 1 boards occupy the lower slots and Cluster 2 boards the upper slots, including CACHE, MP, ESW, DKA/CHA and CHA boards, together with the DKC power supplies, DKC fans, SVP and hub box.)

• The components on VSP are installed in a 2 cluster structure

o The diagrams on this page show the front view and rear view of DKC#0

o In both the front and rear of the DKC Box, Cluster 1 components are found in
the lower slots and Cluster 2 components are found in the upper slots of the DKC
Box

• When a VSP system includes 2 modules, both DKC#0 and DKC#1, the DKC slots in the
second module have different identification codes

o The main printed circuit board (PCB) types are found in the same slot locations
and the cluster boundaries are the same in both DKC modules

• PCB types

o Data Cache Adapter (CACHE )

o Virtual Storage Director (MP)

o Grid Switch (ESW)

o Back End Director (DKA)

o Front End Director (CHA)


DKC Components

 Front End Director (FED)/Channel Adapter (CHA)


• Controls data transfer between the hosts and the cache memory
• Two types
 Mainframes – Fiber Connectivity (FICON)
 Open Systems – Fibre Channel (FC)

• Fibre Channel Specs


 Max FED only options – 8
• In addition 4 CHA options can be installed instead of DKA options
 8-port or 16-port options (4 or 8 ports per CHA PCB)
 8 Gb/sec (Auto negotiate 2/4/8 Gb/sec)
 Max Ports
• FED Slots only – 64 (8 port option), 128 (16 port option)
• FED+DKA Slots – 96 (8 port option), 192 (16 port option)

DKC = Disk Controller Unit

Note: One Option consists of 2 PCBs. One gets installed in Cluster 1 (CL1) and the second in
Cluster 2 (CL2).

 Back End Director (BED)/Disk Adapter (DKA)


• Controls data transfer between the disks and the cache memory
• Diskless VSP – Zero (0) BED options
 BED specs  Disk specs
• Max BED Options – 4 • Max Disk Drives per SAS port
• Max BED options are reduced if FED • 128 (2.5” Drives)
installed in BED slots
• 80 (3.5” Drives)
• 8 SAS ports per option
• 32 SAS ports maximum  Disk types supported
• 6 Gb/sec • SSD
• SAS
• SATA

DKC = Disk Controller Unit


• The Back End Director (BED) boards execute all I/O jobs received from the processor
boards and control all reading or writing to disks

o There are 1 or 2 features (2 or 4 BED boards) per chassis

• BED functions include the following:

o Execute jobs received from a VSD board

o Use DMA to move data in or out of data cache

o Create RAID-5 and RAID-6 parity with an embedded XOR processor

o Encrypt data on disk (if desired)

o Manage all reads or writes to the attached disks

• Each BED board has 8 6Gb/sec SAS links

o There are up to 640 LFF disks or 1024 SFF disks per chassis attached to the 16
or 32 6Gb/sec SAS links from these 2 or 4 BED boards

Back-End Director and Front-End Director Pairs

 Operational control data is distributed to L2 memory


• Data accelerator processors
 Transfer data to and from cache
 Execute host command requests
• Dual core, special purpose I/O processors built into a unique ASIC package
• Designed to accelerate I/O and operational performance
• Offload latency sensitive processing tasks directly onto the BEDs and FEDs


DKC Components

 Cache Memory Adapter


• Caches the user data blocks from drives via the BED during a read
• Caches data from the FED as part of a data write operation
• All write data is mirrored in cache
• Only one copy of cached data when performing read operations
• Up to 64GB of data cache per cache adapter and a total of 512GB in a single module
 Two modules can contain 1024GB of cache memory
• Non-volatile data protection
 Backup cached user data
 Also used to protect the state
• Allows for 3 copies
 One operational state
 Two backup copies (mirrors)
 Backed up to an onboard SSD

DKC = Disk Controller Unit

• The Data Cache Adapter (DCA) boards are the memory boards that hold all user data
and the master copy of Control Memory (metadata)

• There are up to 8 DCAs installed per chassis, with 8GB to 32GB of cache per board
(32GB to 256GB per chassis) when using the current 4GB DIMM (as part of the 16GB
feature set)

o The 2 boards of a feature must have the same RAM configuration, but each DCA
feature can be different

• The first 2 DCA boards in the base chassis (but not in the expansion chassis) have a
region of up to 48GB (24GB per board) used for the master copy of Control Memory

• Each DCA board also has a 500MB region reserved for a Cache Directory

• Each DCA board also has 1 or 2 on‐board SSD drives (31.5GB each) for use in backing
up the entire memory space in the event of an array shutdown due to power failure

o If the full 32GB of RAM is installed on a DCA, it must have two 31.5GB SSDs
installed

o On‐board batteries power each DCA board long enough to complete several such
shutdown operations back‐to‐back in the event of repeated power failures before
the batteries have had a chance to charge back up


Cache PCB and Component Specifications

# | Item | Specification
1 | Board model name | DKC-F710I-CPC: a pair of CM boards (includes a 32GB SSD and a battery)
2 | DIMM model name | DKC-F710I-C16G: 4GB DIMM x 4; DKC-F710I-C32G*: 8GB DIMM x 4
3 | SSD model name | DKC-F710I-BM64: 32GB SSD x 2; DKC-F710I-BM128*: 64GB SSD x 2
5 | Spare parts (FRU) | CM board (without SSD, battery and DIMM); cache memory (4/8GB DIMM); battery (12V); SSD (32/64*GB)
6 | Firmware on CM board | 1) Backup and battery firmware: memory backup and battery charge/discharge control (online firmware update is available); 2) SSD firmware: SSD internal control

SSD = Solid State Disk

• The table on this page identifies the component codes for the Cache PCB, the cache
memory DIMMs and the SSDs

• This information also indicates that the firmware for the battery management and
memory backup is non-disruptively upgradeable starting with V01

• Online upgrade of the SSD internal control firmware was added with V02


DKC Components

 Virtual Storage Director (VSD)


• Main processors and operational control data memory
• Stores and manages internal operational metadata and state
 Array groups, LDEVs, external LDEVs, runtime tables, mapping data for various software
products
• Overall state of the system stored, referenced, and executed
• Distributed to the appropriate I/O offload processors on FED/BED

• The VSD board is the VSP I/O processing board

o There are 2 or 4 of these installed per chassis

o Each board includes one Intel 2.33GHz Core Duo Xeon CPU with 4 processor
cores and 12MB of L2 cache

o There is 4GB of local DDR2 RAM on each board (2 DIMMs)

o This local RAM space is partitioned into 5 regions, with 1 region used for each
core’s private execution space, plus a shared Control Memory region used by all
4 cores

• Each VSD board executes all I/O requests for the LDEVs that are assigned to that board

• No other VSD board can process I/Os for these LDEVs

• No user data is processed within the VSD itself

• The firmware loaded onto the VSD board contains 5 types of code and each Xeon core
will schedule a process that depends upon the nature of the job it is executing

• A process will be of one of the following types or their mainframe equivalents:

o Target process – Manages host requests to and from a FED board for a particular
LDEV


o External (virtualization) process – Manages requests to or from a FED port used
in virtualization mode (external storage); here the FED port is operated as if it
were a host to operate the external array

o BED process – Manages the staging or de-staging of data between cache blocks
and internal disks via a BED board

o HUR Initiator (MCU) process – Manages the responding side of a Hitachi Universal
Replicator connection on a FED port

o RCU Target (RCU) process – Manages the pull side of a remote copy connection
on a FED port

• In addition there will be system housekeeping type processes

 Grid Switch (GSW)


• Provides multiple high performance interconnect paths to transfer user data between
FED/BED and DCA
• Total of 96 ports at 24 ports per GSW board
 The GSWs are the hub of the HiStar-E Network

• The core of the VSP HiStar-E Network architecture is the Express Switch which is
identified by the acronym GSW

o The internal HiStar-E Network interconnections are different due to the
separation of the microprocessors on the new MP PCBs

o The GSW provides the highly redundant, high performance interconnection paths
among the other main components of the HiStar-E Network as shown by the
schematic diagram on this page


• The Grid Switch boards supply the high performance interconnect paths that cross
connect all of the other boards

• There are 2 or 4 GSW boards per chassis

• Each GSW board has 24 ports, each port having a unidirectional send path and a
unidirectional receive path, with each path operating at 1024MB/sec

• As such, the GSW supports an aggregate peak load of 24GB/sec send and 24GB/sec
receive (or 48GB/sec overall)

• Eight of the GSW paths connect to the FED and BED boards for a total of 192 GB/sec full
duplex

• Eight other paths attach to the cache boards

• Four more paths connect to VSD boards, and the final 4 paths cross connect to the
matching GSW board in the second chassis (if used)

• There are no connections among the 2 or 4 GSWs within a chassis

o Every board in a chassis attaches to two or four GSW


VSP Hardware Architecture

[Diagram: single-DKC architecture showing the interfaces with offload processors (FED/BED),
the processors (VSD), SSD-protected cache memory, control memory and control memory backup]

The diagram on this page shows how the DKC components in a single DKC system are
connected. A single DKC VSP system can support a maximum of 4 DKA features for a total of 16
back end data loops.


Architecture Layout Dual Node

When the VSP configuration needs to expand to more back-end capacity and/or access, the
second DKC (module 1) must be added. The 2 DKCs are connected through their respective
GSWs.

Service Processor

 Service Processor (SVP) is a blade server that runs the application for
performing hardware and software maintenance functions
 The SVP PC provides 3 main functions
• Human interfaces
• Storage system health monitoring and reporting
• Performance monitor capabilities
 The SVP Application, Web Console and Storage Navigator applications
run on the SVP PC
 If the SVP PC fails or is unavailable, the movement of I/O is not affected
 Support for High Reliability Kit — Second SVP

The purpose of the SVP blade PC is to provide the human interface to the Virtual Storage
Platform


Functions of the SVP include:

• Internal communication with the Micro Processors (MPs)

• Interface to perform microprogram exchange

• Platform where the GUIs run including:

o Web Console

o Storage Navigator 2 (SN2) (web services)

o SVP Application

• Connection point for the Virtual Storage Platform (VSP) to the customer LAN

• Connection point for the CE Laptop

• Connection point for Hi-Track Monitor reporting connection

• Collects and reports internal errors (SIMs) and alarms

• Collects and reports workload and performance information through the SVP Monitor

• Interface through which to download dump information

• Interface through which to download or backup configuration information

A Maintenance PC is used to connect to the SVP for storage system management and
administration.


DKU Box Structure

 HDD Box contains HDDs and SSW(SAS Switch)-PKs


 Fan-assembly hinges on the outside edge of the HDU Box
 Fan assembly door opens to provide access to the HDD slots for replacement, insertion and removal
operations
[Diagram: 13U LFF HDU box (3.5” HDDs, maximum 80 per box), front and rear sides, each with
SSW x 4, fan assemblies with fan assembly latches, and HDD x 40]

• A closer view of the HDU Box structure is shown on this page

o The example shown is the LFF structure which holds 3.5” HDDs

o Each HDU Box has 2 fan door assemblies on the front and 2 on the rear

o From this diagram, you can see how the fan door assembly blocks access to the
HDD slots when the fan doors are in their normal operating position

• The fan assembly latch mechanism is located from front-to-back between the upper sets
of HDDs

o The fan door latch is pushed or pulled, depending on which side you are
standing on and on which side you need to open a fan door

• Note: Technicians who have worked with early versions of the VSP system have found
that the fan door latch mechanisms are a bit touchy; you may have to walk to the other
side of the system if the fan door latch gets stuck

• The SSW components are installed along the sides of the HDU Box in both the front and
rear


Expanded Support for Multiple HDD Types

DKU Chassis
 Can choose an SFF or LFF chassis
 Can mount 2 DKU chassis in the 1st rack
 Can mount 3 DKU chassis in the 2nd and 3rd racks

DKU (for LFF)
 Can mount 80 HDDs in a chassis, using SATA HDD or SSD

DKU (for SFF)
 Can mount 128 HDDs in a chassis, using SAS HDD or SSD

Consistent or Mixed HDU Boxes Supported
 When all drives are 3.5”, the maximum number is 1280
 When all drives are 2.5”, the maximum number is 2048
 2.5” boxes and 3.5” boxes can be intermixed

[Diagrams: rack layouts showing HDD(80sp) and HDD(128sp) HDU boxes arranged around the
1st module (Logic) and 2nd module (Logic)]

The VSP supports 2 different internal HDU Box structures:

• 1 that holds Large Form Factor (LFF) 3.5” disk drives

• 1 that holds Small Form Factor (SFF) 2.5” disk drives


• 1 HDU Box can be either for LFF or SFF disk drives but not mixed

o A mix of HDU Boxes can be configured in one VSP system

o LFF and SFF HDU Boxes can be mixed in any configuration in the VSP

• An LFF HDU Box can contain a maximum of 80 HDDs

o An SFF HDU Box can contain a maximum of 128 HDDs

• The part number for an SFF HDU Box is DKC-F710I-SBX

• The part number for an LFF HDU Box is DKC-F710I-UBX
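The per-box figures above also reproduce the maximum drive counts quoted for a chassis and
for a 2-module system; the sketch below is illustrative arithmetic only, and the 8-boxes-per-module
assumption is derived from the 640/1024-per-chassis figures rather than from any stated
configuration rule.

```python
# Illustrative arithmetic for the maximum drive counts quoted in this section.
# Per-box capacities come from the HDU box descriptions above; the number of
# HDU boxes (8 per module, 16 in a 2-module system) is derived from the
# 640/1024-per-chassis and 1280/2048-per-system figures, not measured anywhere.

HDDS_PER_BOX = {"LFF (3.5in)": 80, "SFF (2.5in)": 128}

def max_drives(box_type: str, hdu_boxes: int) -> int:
    return HDDS_PER_BOX[box_type] * hdu_boxes

if __name__ == "__main__":
    print(max_drives("LFF (3.5in)", hdu_boxes=8))    # 640 per chassis/module
    print(max_drives("SFF (2.5in)", hdu_boxes=8))    # 1024 per chassis/module
    print(max_drives("LFF (3.5in)", hdu_boxes=16))   # 1280 for a 2-module system
    print(max_drives("SFF (2.5in)", hdu_boxes=16))   # 2048 for a 2-module system
```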

VSP Back End Cabling

VSP Back End Path Cabling Pattern


VSP DKU and HDU Numbering – Front View

VSP DKU and HDU Numbering

• In the VSP, each DKU is identified by a 2 digit number which includes the number of the
DKC, 0 or 1, and a number for the DKU within the module

• Remember that the DKC#0 module can be configured on either the right or the left as
compared to its DKC#1 partner

• This will add complexity to component identification at the customer site (in a 2-module
system, you will need to know or be able to determine the position of DKC#0 and
DKC#1 relative to each other)


VSP DKU and HDU Numbering – Back View

VSP DKU and HDU Numbering

VSP B4 Layout – Front View

VSP DKU and B4 Numbering

[Diagrams: VSP DKU and B4 numbering, and VSP SAS back-end paths for the B4 layout, front view
(Cluster 1) and back view (Cluster 2), showing the IOC on each DKA (DKA0 and DKA1) connecting
over 2W SAS links to the 24-port SAS expanders in the HDU boxes and on to DKU 1-7]
Encryption Support

Encryption Support

 All Virtual Storage Platforms are encryption capable
• Every BED has an encryption capability built in
• Encryption needs to be enabled by a software license key

 New encryption mode of operation
• XTS-AES 256-bit encryption

 Expanded key support
• 32 keys per array
• Encryption as access control

[Diagram: RAID Group 1, RAID Group 2, RAID Group 3]

Every VSP is encryption capable and requires a license key to be installed to be activated.

Each array supports up to 32 encryption keys per platform allowing for encryption to be used as
an access control mechanism within the array. This allows for different classifications of data to
be stored on the same array with encryption providing a data leakage prevention mechanism.
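As a purely conceptual illustration of the XTS-AES 256-bit mode named above (on the VSP this is
performed in BED hardware with array-managed keys), the sketch below encrypts one logical block
per RAID-group key using the third-party Python cryptography package; the key handling, names
and tweak derivation are invented for illustration and are not Hitachi's implementation.

```python
# Conceptual illustration of XTS-AES-256 data-at-rest encryption, the mode named
# on this slide. The VSP does this in BED hardware with keys managed by the array;
# this sketch only demonstrates the cipher mode itself, using the third-party
# "cryptography" package. Key handling here is deliberately simplified.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# One (hypothetical) key per RAID group, echoing "32 keys per array" and
# "encryption as access control". AES-256-XTS needs a 512-bit (64-byte) key.
raid_group_keys = {f"RG{i}": os.urandom(64) for i in range(1, 4)}

def encrypt_block(raid_group: str, lba: int, data: bytes) -> bytes:
    """Encrypt one logical block; the XTS tweak is derived from the block address."""
    tweak = lba.to_bytes(16, "little")   # 16-byte data-unit number
    cipher = Cipher(algorithms.AES(raid_group_keys[raid_group]), modes.XTS(tweak))
    enc = cipher.encryptor()
    return enc.update(data) + enc.finalize()

ciphertext = encrypt_block("RG1", lba=42, data=b"\x00" * 512)
print(len(ciphertext))   # 512 -- XTS preserves the sector size
```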

Disk Sparing

 Sparing Operations
• Dynamic Sparing (Preemptive Copy)
 Preemptive means that the ORM (Online Read Margin) diagnostics have determined a drive to
be suspect, or drive read/write error thresholds have been exceeded
 The storage system spares out the drive even though the drive has not completely failed
 Data is copied to spare drive (not recreated)

• Correction Copy (Disk Failure)


 Correction Copy occurs when a drive fails
• If a spare is available, lost data is re-created on the spare, which logically becomes part of
the array group
• This mode invokes the DRR chip, whereas preemptive sparing does not
 If a spare is not available
• The array group is at risk for a longer period of time than normal
• That group runs in degraded mode continuously until the bad drive is replaced
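The two sparing paths above can be summarized as a small decision sketch; the drive attributes,
thresholds and function names below are invented for illustration and do not reflect how the
microcode is actually structured.

```python
# Conceptual sketch of the two sparing paths described above.
# Drive states, thresholds and function names are illustrative only.

def handle_drive_event(drive, spare_available: bool) -> str:
    if drive["failed"]:
        # Correction Copy: the drive is gone, so lost data must be *recreated*
        # from the remaining data + parity (the path that uses the DRR/XOR logic).
        if spare_available:
            return "correction copy: rebuild data onto spare from parity"
        return "no spare: array group runs degraded until the drive is replaced"
    if drive["orm_suspect"] or drive["rw_errors"] > drive["error_threshold"]:
        # Dynamic Sparing (preemptive): the drive still works, so its contents
        # are simply *copied* to the spare before it fails completely.
        return "dynamic sparing: copy data to spare preemptively"
    return "no action"

print(handle_drive_event(
    {"failed": False, "orm_suspect": True, "rw_errors": 3, "error_threshold": 100},
    spare_available=True))
```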


Architecture – Storage

Storage overview
1. Physical Devices – PDEV
2. PDEVs are grouped together with a RAID type: RAID-1, RAID-5, RAID-6
3. Parity Group/RAID Group/Array Group
4. Emulation specifies smaller logical unit sizes
5. Logical Devices – LDEV
6. Addresses are assigned in LDKC:CU:LDEV format, for example 00:00:00, 00:00:01, 00:00:02

• Parity Groups are created from the physical disks

• A RAID Level and emulation is applied to the group

• The emulation creates equal sized stripes called LDEVs (Logical Devices)

• LDEVs are mapped into a LDKC, Control Unit matrix (LDKC#:CU#:LDEV#)

• Control Unit (CU)

o A Control Unit is a logical entity

o All the Logical Devices (LDEVs) that have been carved out of a RAID Group have
to be a part of a Control Unit (up to 256 on the Universal Storage Platform V)

o There can be a maximum of 256 LDEVs in each CU

• Logical DKC (LDKC)

o An LDKC is a set of Control Unit tables

o Each LDKC contains 256 control unit tables (CU 0x00 through 0xFF)

o Currently the LDKC can be set to 0 or 1

o LDKC 0 contains open LDEVs only

o LDKC 0 and 1 can be used in mainframe environments
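As a worked illustration of the LDKC:CU:LDEV format, the sketch below parses and formats an
address and shows why one LDKC can address 256 x 256 LDEVs; the helper functions are
illustrative only and are not part of any Hitachi tool.

```python
# Illustrative helpers for the LDKC:CU:LDEV address format described above.
# Each field is a 2-digit hex number; a CU holds up to 256 LDEVs and an LDKC
# holds 256 CU tables, so one LDKC can address 256 * 256 = 65,536 LDEVs.

def parse_ldev_address(addr: str) -> tuple[int, int, int]:
    ldkc, cu, ldev = (int(part, 16) for part in addr.split(":"))
    if not (ldkc in (0, 1) and 0 <= cu <= 0xFF and 0 <= ldev <= 0xFF):
        raise ValueError(f"out-of-range address: {addr}")
    return ldkc, cu, ldev

def format_ldev_address(ldkc: int, cu: int, ldev: int) -> str:
    return f"{ldkc:02X}:{cu:02X}:{ldev:02X}"

print(parse_ldev_address("00:01:2A"))        # (0, 1, 42)
print(format_ldev_address(0, 0xFE, 0xFF))    # 00:FE:FF
print(256 * 256)                             # LDEVs addressable per LDKC: 65536
```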


• Emulations

o When you wish to carve out LDEVs from a RAID group, you must specify the size
of the LDEVs

o The storage system supports various emulation modes which specify the size of
each LDEV in a RAID group

o Each RAID group can have only one emulation type

o The storage system can have multiple RAID groups with different emulations, such
as OPEN-V

Data Redundancy

 RAID Implementation
• 4 or 8 physical HDDs are configured into a RAID group (also called a parity group)
• Groups of 4 or 8 (16 or 32 with concatenation) HDDs are set up using 1 of the 3 parity
options
 Supported RAID levels
• RAID-1 (2D+2D)/(4D+4D)
• RAID-5 (3D+1P)/(7D+1P)
• RAID-6 (6D+2P)

The Concatenated Array Group feature allows you to configure all of the space from either 2
or 4 RAID-5 (7d+1p) Array Groups into an association of 16 or 32 drives whereby all LDEVs
created on these Array Groups are actually striped across all of the elements. Recall that a slice
(or partition) created on a standard Array Group is an LDEV (Logical Device), becoming a LUN
(Logical Unit) once it has been given a name and mapped to a host port.
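To make the striping idea concrete, here is a generic round-robin mapping sketch from a logical
block address to a (data drive, offset) pair; the stripe size, drive count and layout are
illustrative assumptions and are not the VSP's actual internal mapping.

```python
# Generic striping sketch: map a logical block of an LDEV to (data drive, offset)
# when the LDEV is striped round-robin across all data drives of a concatenated
# group. Stripe size and drive counts are illustrative, not the VSP's internals.

def locate_block(lba: int, data_drives: int, blocks_per_stripe: int) -> tuple[int, int]:
    stripe_number, block_in_stripe = divmod(lba, blocks_per_stripe)
    drive = stripe_number % data_drives          # round-robin across the drives
    offset = (stripe_number // data_drives) * blocks_per_stripe + block_in_stripe
    return drive, offset

# A 4 x RAID-5 (7D+1P) concatenation presents 4 * 7 = 28 data drives
for lba in (0, 512, 1024, 14336):
    print(lba, locate_block(lba, data_drives=28, blocks_per_stripe=512))
```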


Supported RAID Configurations

 RAID-1
RAID-1 (2D + 2D) Configuration

A A’ B B’
E’ E F’ F
G G’ H H’
I I’ J J’
RAID-1 (4D + 4D) Configuration

A A’ B B’ C C’ D D’
E’ E F’ F G’ G H H’
I I’ J J’ K K’ L L’
M M’ N N’ O O’ P P’

Description:

• Also called mirroring

• Two copies of the data

• Requires twice the number of disk drives

• For writes, a copy must be written to both disk drives

• Two parity group disk drive writes for every host write

• Do not care about what the previous data was, just over-write with new data

• For reads, the data can be read from either disk drive

• Read activity distributed over both copies reduces disk drive busy (due to reads) to 1/2
of what it would be to read from a single (non-RAID) disk drive

Advantages: Best performance and fault-tolerance

Disadvantages: Uses more raw disks to implement which means a more expensive solution


RAID Configurations

 RAID-5
RAID-5 (3D + 1P) Configuration

A B C P
D E P F
G P H I
P J K L

RAID-5 (7D + 1P) Configuration

A B C D E F G P
H I J K L M P N
O Q R S T P U V
W X Y Z P AA AB AC

• For sequential reads and writes, RAID-5 is very good

o It’s very space efficient (smallest space for parity), and sequential reads and
writes are efficient, because they operate on whole stripes

• For low access density (light activity), RAID-5 is very good

o The 4x RAID-5 write penalty is (nearly) invisible to the host, because it is
asynchronous

• For workloads with higher access density and more random writes, RAID-5 can be
throughput-limited due to all the extra parity group I/O operations needed to handle the
RAID-5 write penalty

• In the RAID-5 (3D+1P) design, data is written to the first 3 disks and the 4th disk holds an
error-correction data set that allows any 1 failing block to be reconstructed from the
other 3

o This method has the advantage that, effectively, only 1 disk out of the 4 is used for
error-correction (parity) information

• Small-sized records are intensively read and written randomly in transaction processing

o This type of processing generates many I/O requests for transferring small
amounts of data


o In such a situation, greater importance is placed on increased I/O performance (parallel
I/O processing) than on increasing the rate of transferring large volumes of data

o RAID-5 has been introduced to be suitable for this type of transaction processing

o Parity calculated by XOR-ing bits of data on the stripe

o Overlapping I/O requests allowed

• Recovery from failure:

o Missing data recalculated from parity and stored on spare
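The XOR parity, rebuild and read-modify-write behavior described above can be demonstrated on
toy byte strings; the sketch below is illustrative only (real stripes are far larger and the XOR
work is done by the DRR hardware).

```python
# Illustrative RAID-5 (3D+1P) parity math on toy byte strings.
# XOR of all data chunks gives the parity; XOR of the survivors recreates a
# lost chunk; and a small write only needs old data + old parity (the "4x"
# read/write penalty described above).
from functools import reduce

def xor_bytes(*chunks: bytes) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_bytes(d0, d1, d2)

# Drive holding d1 fails: rebuild it from the remaining data + parity
assert xor_bytes(d0, d2, parity) == d1

# Small random write to d2: new parity = old parity XOR old data XOR new data
new_d2 = b"DDDD"
new_parity = xor_bytes(parity, d2, new_d2)
assert new_parity == xor_bytes(d0, d1, new_d2)
print("RAID-5 parity, rebuild and read-modify-write update all check out")
```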

 RAID-6 – Striping With Dual Parity Drives

RAID-6 (6D + 2P) Configuration

A B C D E F P Q
G H I J K P Q L
M N O R P Q S T
U V W P Q X Y Z

Description

• Sometimes called RAID-5DP

• Two parity schemes used to store parity on different drives

• An extension of the RAID-5 concept that uses 2 separate parity-type fields usually called
P and Q

• Allows data to be reconstructed from the remaining drives in a parity group when any 1
or 2 drives have failed

o The math is the same as for ECC used to correct errors in DRAM memory or on the
surface of disk drives


• Each host random write turns into 6 parity group I/O operations (see the sketch after this list)

o Read old data, read old P, read old Q

o (Compute new P, Q)

o Write new data, write new P, write new Q

• Parity group sizes usually start at 6+2

o This has the same space efficiency as RAID-5 3+1

• Recovery from failure:

o Missing data recalculated from parity and stored on spare

• Advantages:

o Very high fault-tolerance

o Duplicate parity provides redundancy during correction copy

• Disadvantages:

o Uses additional space for second parity

o Slower than RAID-5 due to second parity calculation
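The 6-I/O small-write sequence above (read old data, old P and old Q; write new data, new P and
new Q) can be sketched using the common convention of P as XOR parity and Q as a syndrome over
GF(2^8); the field polynomial 0x11d and generator g = 2 below follow the widely used Linux-md
style and are assumptions, not a description of Hitachi's implementation.

```python
# Sketch of a RAID-6 small-write parity update using the common P/Q convention:
# P = XOR of the data chunks, Q = sum of g^i * D_i over GF(2^8).  The field
# polynomial (0x11d) and generator g = 2 follow the widely used Linux-md style
# and are assumptions; they do not describe Hitachi's internal implementation.

def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
    return p

def g_pow(i: int) -> int:
    """g^i for generator g = 2."""
    result = 1
    for _ in range(i):
        result = gf_mul(result, 2)
    return result

data = [0x11, 0x22, 0x33, 0x44, 0x55, 0x66]   # one byte per data drive (6D)
P = 0
Q = 0
for i, d in enumerate(data):
    P ^= d
    Q ^= gf_mul(g_pow(i), d)

# Host rewrites the byte on data drive 3: 6 parity-group I/Os in total --
# read old D3/P/Q, then write new D3/P/Q computed from the deltas only.
old, new = data[3], 0x99
delta = old ^ new
new_P = P ^ delta
new_Q = Q ^ gf_mul(g_pow(3), delta)

# Cross-check against a full recomputation over the updated stripe
data[3] = new
assert new_P == data[0] ^ data[1] ^ data[2] ^ data[3] ^ data[4] ^ data[5]
assert new_Q == (gf_mul(g_pow(0), data[0]) ^ gf_mul(g_pow(1), data[1]) ^
                 gf_mul(g_pow(2), data[2]) ^ gf_mul(g_pow(3), data[3]) ^
                 gf_mul(g_pow(4), data[4]) ^ gf_mul(g_pow(5), data[5]))
print("incremental P/Q update matches full recomputation")
```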

Hitachi Enterprise Storage Software Tools


This section presents the software tools that support the Hitachi Data Systems hardware
(Hitachi Virtual Storage Platform — VSP)


Software Tools for Configuring VSP

• Web Console and SN
• CLI (RAIDCOM)
• SVP Application
• Hitachi Command Suite

SN — Storage Navigator

Maintenance and Administration GUI Comparison

GUI Interface | Runs on the SVP | Customer use permitted | Functions | Used by
SVP Application | Yes | No | Hardware maintenance, hardware configuration, microcode exchange, LUN Mapping, LUSE, VLL | HDS CE and Authorized Partners only
Web Console | Yes | No | License Keys | HDS CE and Authorized Partners only
Storage Navigator | Yes | Yes | Provisioning (LUN Mapping, LUSE, VLL), Replication (SI, TC and UR), Performance Monitor, Partitioning | Customer Storage Administrators, Storage Partition Admins and System Admins
Device Manager | No | Yes | Provisioning (LUN Mapping, LUSE, VLL), ShadowImage Replication, Performance Monitor, Partitioning | Customer Storage Administrators, Storage Partition Admins and System Admins

This table compares the 4 main GUI applications that are used to view and manage the Virtual
Storage Platform (VSP) storage systems


• 3 of the GUI applications run on the SVP PC:

o The SVP Application

o Web Console

o Storage Navigator

• The fourth GUI interface is Hitachi Command Suite Device Manager software, which is
installed and runs on a Microsoft Windows® or Sun® Solaris host other than the SVP PC

• The SVP Application and the Web Console applications are used primarily by the
maintenance engineer

SVP Application

The SVP Application is used by the Engineers for doing hardware and software maintenance.
The application is launched by accessing the Web Console application.


Storage Navigator/Web Console

The Web Console is Storage Navigator accessed on the SVP as a user with the maintenance
user account.

Storage Navigator Login Screen

Storage Navigator GUI is accessed from an end user PC, via the public IP LAN, using a
supported web browser. In the customer environment, this public LAN may be a secured
management LAN within the customer’s network environment.


We use the term public LAN to differentiate from the internal LAN within the VSP storage
system. Storage Navigator should never be accessed and used on the VSP internal LAN.

Command Line Interface

 RAIDCOM
• In-band
• Out-of-band
• Command Control Interface (CCI)

• The Virtual Storage Platform (VSP) is the first enterprise storage system to include a
unified, fully compatible command line interface

o The VSP Command Line Interface (CLI) supports all storage provisioning and
configuration operations that can be performed through SN

• The CLI is implemented through the raidcom command

• The example on this page shows the raidcom command that retrieves the configuration
information about an LDEV

• For in-band CCI operations, the command device is used, which is a user-selected and
dedicated logical volume on the storage system that functions as the interface to the
storage system for the UNIX/PC host

• The dedicated logical volume is called command device and accepts commands that are
executed by the storage system

• For out-of-band CCI operations, a virtual command device is used

• The virtual command device is defined by specifying the IP address for the SVP


• CCI commands are issued from the host and transferred via LAN to the virtual command
device (SVP) and the requested operations are then performed by the storage system
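As a hedged illustration of the kind of raidcom query referenced above (the actual screen shot is
not reproduced here), the sketch below wraps a raidcom get ldev call from Python. It assumes CCI
is installed on the host, that HORCM instance 0 is already configured with an in-band command
device or an out-of-band virtual command device pointing at the SVP, and that the option
spellings are confirmed against the CCI reference guide for your release.

```python
# Hedged illustration: calling the CCI raidcom CLI from a small Python wrapper.
# Assumes CCI is installed and HORCM instance 0 is already configured (in-band
# command device or out-of-band virtual command device on the SVP); option
# spellings should be confirmed against the CCI reference guide for your release.
import subprocess

def raidcom_get_ldev(ldev_id: str, instance: int = 0) -> str:
    """Return the raidcom output describing one LDEV.

    ldev_id is the flat LDEV number within the LDKC, e.g. LDKC:CU:LDEV 00:01:2A
    corresponds to 0x012A.
    """
    cmd = ["raidcom", "get", "ldev", "-ldev_id", ldev_id, f"-I{instance}"]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(raidcom_get_ldev("0x012A"))
```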

Hitachi Command Suite 7

The customer can also use the Device Manager component of the Hitachi Command Suite 7
storage management software products to view and administer the VSP storage system as well
as any other HDS storage system.


Storage Navigator Interface

Storage Navigator Features

 Storage Navigator (SN)
• Reduced administration training costs
• Reduction in required operations
• Simplified multi-platform management
• Scripting for automated multi-command submissions

 Storage Management Operation
• Storage Navigator (SN) on earlier platforms
 Architecture-oriented
 Many steps and clicks for operation
 Slow performance impression
 Many user interfaces
• Virtual Storage Platform with SN
 Use-case oriented operation
 Fewer steps and clicks
 Faster operational performance
 Unified user interface (GUI and CLI)


Storage Navigator Setup

 SN2 is included with the SVP during storage system installation

 If your client meets the hardware, browser, Flash and Java requirements, you should be
able to log in with a simple URL
• http://<ip-address | hostname>

 SN2 setup options include
• IPv6 communication
• SSL encryption

 Setup Login Message for Login Page
• Requires Security Administrator Modify permission

Storage Navigator Security

 External Authentication of Privileged Users
• RADIUS and LDAP support
• Allows organizations to leverage existing authentication data
• Authentication support
 Active Directory
 LDAP

External authentication of privileged users (storage admins) allows the storage array to
integrate into the customer's existing security and compliance infrastructure (for example,
existing authentication data), generally directory services environments such as Active
Directory. This allows customers to provision and de-provision storage management access
with existing infrastructure.


Storage Navigator GUI

Storage Navigator Management Tasks

 With Storage Navigator, you can perform the following functions


• Provision the storage system
• View and manage the storage system configuration
• View system Alerts
• Monitor and tune performance
• Run Reports
• Acquire logs for actions and commands performed on the storage system

The above list represents some of the common tasks that can be performed as part of day-to-
day operations.


Storage Navigator Provisioning Tasks

 Provisioning tasks supported through SN include managing the following components:
• LDEVs (Create, Delete, Shred, Edit)
• Pools (Create, Delete, Expand, Shrink)
• External Volumes (Add, Delete)
• Storage Allocation
• Manage Host Groups (Create, Edit, Delete)
• Add/Delete LUN, WWN
• Edit Port Configuration
• Speed, Topology, Fabric

• You can connect multiple server hosts of different platforms to one port of your storage
system

o When configuring your system, you must group server hosts connected to the
storage system by host groups

• For example, if HP-UX hosts and Windows hosts are connected to a port, you must
create one host group for HP-UX hosts and also create another host group for Windows
hosts

o Next, you must register HP-UX hosts to the corresponding host group and also
register Windows hosts to the other host group


Storage Navigator System Information Display

 Viewing Storage System information includes


• Hardware and Storage Allocation Summaries
• Components
• Parity Groups
• LDEVs
• Storage Pools
• Host/Port Groups
• External Storage
• Create and View HTML Reports or
CSV file
• Examine a system or …
• Verify changes to a system
• Perform Task Management
• License Key Installation

Storage Navigator can be used for viewing storage configuration information. The information
can also be downloaded as HTML or CSV reports.

Storage Navigator System Alerts

 SN System Alerts
• SN can be used to view Service Information Messages (SIM)
generated on the storage
• These are alerts related to
• H/W component failure
• S/W configuration/operations issues
• Pools capacity thresholds exceeded
• LDEVs blocked
• License related issues
• External Storage Issues
• Replication Issues


Attributes of License Keys

 License Keys delivered in text format in a file with the .plk extension

 License keys have four dimensions


• Program Product (PP)
• Serial Number of the storage system
• Time duration
• Storage capacity permitted to be managed in GB or TB

• The license key is actually a 75-character string (truncated in the slide's view)

• Storage capacity and time duration are not provided in human-readable format

• Storage Navigator must be license key enabled

o When the customer or engineer accesses Storage Navigator with no license keys
installed, Storage Navigator will open the License Key interface by default

o No other Storage Navigator functions will be possible until the license keys have
been installed

 The time duration for license keys can be 1 of 3 values:


o Permanent – Normal, agreed and long-term licensed usage

o Temporary – Up to 120 days; used for trial and evaluation projects

o Emergency – 7 to 30 days; used when a key is needed quickly, but there also
must be negotiation and agreement reached with the customer regarding the
licenses


Licensing — Capacity Definitions and Key Types

Capacity Definitions
• Usable Capacity
• Used Capacity
• Unlimited Capacity

License Time Duration Types
• Permanent
• Emergency – 30 days
• Temporary – 120 days
• Term – 365 days

Two new storage capacity definitions have been created: usable capacity and used capacity

• Usable capacity is calculated based on the real, internal storage and the connected
external storage capacity of a VSP

o The program products of the BOS are licensed based on usable capacity

• Used capacity is the total allocated capacity including any and all replicated copies

o That is to say the total of all P-VOLs and all S-VOLs capacities are added
together to determine the used capacity

o Used capacity is the basis for the licenses for replication program products

Some program products will be capacity free

• When the customer buys the license for a capacity free program product, the customer
is entitled to use that product functionality against an unlimited capacity

This information is provided so that you are aware that there may be differences in licensed
capacity calculation for different program products within the VSP system and also as compared
to older enterprise storage systems

Whenever a license key expires, the current configuration is retained but no new configuration
changes are allowed with the related program product
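As a small worked example of the time-duration types listed above, the sketch below turns an
install date and key type into an expiry date; the durations are simply the ones quoted in this
section and the helper is not part of any Hitachi tool.

```python
# Illustrative expiry calculation for the license key duration types listed
# above; durations are the ones quoted in this section and the helper is not
# part of any Hitachi tool.
from datetime import date, timedelta

DURATION_DAYS = {"emergency": 30, "temporary": 120, "term": 365, "permanent": None}

def license_expiry(installed_on: date, key_type: str):
    days = DURATION_DAYS[key_type.lower()]
    return None if days is None else installed_on + timedelta(days=days)

print(license_expiry(date(2016, 1, 1), "temporary"))   # 2016-04-30
print(license_expiry(date(2016, 1, 1), "permanent"))   # None (does not expire)
```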


Licensing – SN GUI

 Licensing GUI on SN

Module Summary

 In this module, you should have learned to:


• Describe the architecture, essential components and features of the Hitachi
Enterprise storage systems
• Describe the tools available for the management of Hitachi Enterprise
storage systems


Module Review

1. What drives are available for the VSP?

2. What is the backend architecture of the VSP?

3. What is the maximum number of frames?

4. For a 2-module system what is the maximum number of drives that can
be installed?

5. How much cache can be installed in a 2-module system?

Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

—A— AIX — IBM UNIX.


AaaS — Archive as a Service. A cloud computing AL — Arbitrated Loop. A network in which nodes
business model. contend to send data and only 1 node at a
AAMux — Active-Active Multiplexer. time is able to send data.

ACC — Action Code. A SIM (System Information AL-PA — Arbitrated Loop Physical Address.
Message). AMS — Adaptable Modular Storage.
ACE — Access Control Entry. Stores access rights APAR — Authorized Program Analysis Reports.
for a single user or group within the APF — Authorized Program Facility. In IBM z/OS
Windows security model. and OS/390 environments, a facility that
ACL — Access Control List. Stores a set of ACEs permits the identification of programs that
so that it describes the complete set of access are authorized to use restricted functions.
rights for a file system object within the API — Application Programming Interface.
Microsoft Windows security model.
APID — Application Identification. An ID to
ACP ― Array Control Processor. Microprocessor identify a command device.
mounted on the disk adapter circuit board
(DKA) that controls the drives in a specific Application Management — The processes that
disk array. Considered part of the back end; manage the capacity and performance of
it controls data transfer between cache and applications.
the hard drives. ARB — Arbitration or request.
ACP Domain ― Also Array Domain. All of the ARM — Automated Restart Manager.
array-groups controlled by the same pair of Array Domain — Also ACP Domain. All
DKA boards, or the HDDs managed by 1 functions, paths and disk drives controlled
ACP PAIR (also called BED). by a single ACP pair. An array domain can
ACP PAIR ― Physical disk access control logic. contain a variety of LVI or LU
Each ACP consists of 2 DKA PCBs to configurations.
provide 8 loop paths to the real HDDs. Array Group — Also called a parity group. A
Actuator (arm) — Read/write heads are attached group of hard disk drives (HDDs) that form
to a single head actuator, or actuator arm, the basic unit of storage in a subsystem. All
that moves the heads around the platters. HDDs in a parity group must have the same
AD — Active Directory. physical capacity.

ADC — Accelerated Data Copy. Array Unit — A group of hard disk drives in 1
RAID structure. Same as parity group.
Address — A location of data, usually in main
memory or on a disk. A name or token that ASIC — Application specific integrated circuit.
identifies a network component. In local area ASSY — Assembly.
networks (LANs), for example, every node Asymmetric virtualization — See Out-of-Band
has a unique address. virtualization.
ADP — Adapter. Asynchronous — An I/O operation whose
ADS — Active Directory Service. initiator does not await its completion before



proceeding with other work. Asynchronous this term are subject to proprietary
I/O operations enable an initiator to have trademark disputes in multiple countries at
multiple concurrent I/O operations in the present time.
progress. Also called Out-of-Band BIOS — Basic Input/Output System. A chip
virtualization. located on all computer motherboards that
ATA —Advanced Technology Attachment. A disk governs how a system boots and operates.
drive implementation that integrates the BLKSIZE — Block size.
controller on the disk drive itself. Also
known as IDE (Integrated Drive Electronics). BLOB — Binary large object.

ATR — Autonomic Technology Refresh. BP — Business processing.

Authentication — The process of identifying an BPaaS —Business Process as a Service. A cloud


individual, usually based on a username and computing business model.
password. BPAM — Basic Partitioned Access Method.
AUX — Auxiliary Storage Manager. BPM — Business Process Management.
Availability — Consistent direct access to BPO — Business Process Outsourcing. Dynamic
information over time. BPO services refer to the management of
-back to top- partly standardized business processes,
including human resources delivered in a
—B— pay-per-use billing relationship or a self-
B4 — A group of 4 HDU boxes that are used to service consumption model.
contain 128 HDDs. BST — Binary Search Tree.
BA — Business analyst. BSTP — Blade Server Test Program.
Back end — In client/server applications, the BTU — British Thermal Unit.
client part of the program is often called the
Business Continuity Plan — Describes how an
front end and the server part is called the
organization will resume partially or
back end.
completely interrupted critical functions
Backup image—Data saved during an archive within a predetermined time after a
operation. It includes all the associated files, disruption or a disaster. Sometimes also
directories, and catalog information of the called a Disaster Recovery Plan.
backup operation.
-back to top-
BASM — Basic Sequential Access Method.
BATCTR — Battery Control PCB.
—C—
CA — (1) Continuous Access software (see
BC — (1) Business Class (in contrast with EC,
HORC), (2) Continuous Availability or (3)
Enterprise Class). (2) Business Coordinator.
Computer Associates.
BCP — Base Control Program.
Cache — Cache Memory. Intermediate buffer
BCPii — Base Control Program internal interface. between the channels and drives. It is
BDAM — Basic Direct Access Method. generally available and controlled as 2 areas
BDW — Block Descriptor Word. of cache (cache A and cache B). It may be
battery-backed.
BED — Back end director. Controls the paths to
the HDDs. Cache hit rate — When data is found in the cache,
it is called a cache hit, and the effectiveness
Big Data — Refers to data that becomes so large in of a cache is judged by its hit rate.
size or quantity that a dataset becomes
awkward to work with using traditional Cache partitioning — Storage management
database management systems. Big data software that allows the virtual partitioning
entails data capacity or measurement that of cache and allocation of it to different
requires terms such as Terabyte (TB), applications.
Petabyte (PB), Exabyte (EB), Zettabyte (ZB) CAD — Computer-Aided Design.
or Yottabyte (YB). Note that variations of



CAGR — Compound Annual Growth Rate. CDWP — Cumulative disk write throughput.
Capacity — Capacity is the amount of data that a CE — Customer Engineer.
storage system or drive can store after CEC — Central Electronics Complex.
configuration and/or formatting.
CentOS — Community Enterprise Operating
Most data storage companies, including HDS, System.
calculate capacity based on the premise that
1KB = 1,024 bytes, 1MB = 1,024 kilobytes, Centralized Management — Storage data
1GB = 1,024 megabytes, and 1TB = 1,024 management, capacity management, access
gigabytes. See also Terabyte (TB), Petabyte security management, and path
(PB), Exabyte (EB), Zettabyte (ZB) and management functions accomplished by
Yottabyte (YB). software.

CAPEX — Capital expenditure — the cost of CF — Coupling Facility.


developing or providing non-consumable CFCC — Coupling Facility Control Code.
parts for the product or system. For example, CFW — Cache Fast Write.
the purchase of a photocopier is the CAPEX,
and the annual paper and toner cost is the CH — Channel.
OPEX. (See OPEX). CH S — Channel SCSI.
CAS — (1) Column Address Strobe. A signal sent CHA — Channel Adapter. Provides the channel
to a dynamic random access memory interface control functions and internal cache
(DRAM) that tells it that an associated data transfer functions. It is used to convert
address is a column address. CAS-column the data format between CKD and FBA. The
address strobe sent by the processor to a CHA contains an internal processor and 128
DRAM circuit to activate a column address. bytes of edit buffer memory. Replaced by
(2) Content-addressable Storage. CHB in some cases.
CBI — Cloud-based Integration. Provisioning of a CHA/DKA — Channel Adapter/Disk Adapter.
standardized middleware platform in the CHAP — Challenge-Handshake Authentication
cloud that can be used for various cloud Protocol.
integration scenarios.
CHB — Channel Board. Updated DKA for Hitachi
An example would be the integration of Unified Storage VM and additional
legacy applications into the cloud or enterprise components.
integration of different cloud-based
Chargeback — A cloud computing term that refers
applications into one application.
to the ability to report on capacity and
CBU — Capacity Backup. utilization by application or dataset,
CBX —Controller chassis (box). charging business users or departments
based on how much they use.
CC – Common Criteria. In regards to Information
Technology Security Evaluation, it is a CHF — Channel Fibre.
flexible, cloud related certification CHIP — Client-Host Interface Processor.
framework that enables users to specify Microprocessors on the CHA boards that
security functional and assurance process the channel commands from the
requirements. hosts and manage host access to cache.
CCHH — Common designation for Cylinder and CHK — Check.
Head. CHN — Channel adapter NAS.
CCI — Command Control Interface. CHP — Channel Processor or Channel Path.
CCIF — Cloud Computing Interoperability CHPID — Channel Path Identifier.
Forum. A standards organization active in CHSN or C-HSN— Cache Memory Hierarchical
cloud computing. Star Network.
CDP — Continuous Data Protection. CHT — Channel tachyon. A Fibre Channel
CDR — Clinical Data Repository. protocol controller.
CICS — Customer Information Control System.



CIFS protocol — Common internet file system is a • Private cloud (or private network cloud)
platform-independent file sharing system. A • Public cloud (or public network cloud)
network file system accesses protocol
• Virtual private cloud (or virtual private
primarily used by Windows clients to
network cloud)
communicate file access requests to
Windows servers. Cloud Enabler —a concept, product or solution
that enables the deployment of cloud
CIM — Common Information Model.
computing. Key cloud enablers include:
CIS — Clinical Information System.
• Data discoverability
CKD ― Count-key Data. A format for encoding
• Data mobility
data on hard disk drives; typically used in
the mainframe environment. • Data protection
CKPT — Check Point. • Dynamic provisioning
CL — See Cluster. • Location independence

CLA – See Cloud Security Alliance. • Multitenancy to ensure secure privacy

CLI — Command Line Interface. • Virtualization

CLPR — Cache Logical Partition. Cache can be Cloud Fundamental —A core requirement to the
deployment of cloud computing. Cloud
divided into multiple virtual cache
memories to lessen I/O contention. fundamentals include:

Cloud Computing — “Cloud computing refers to • Self service


applications and services that run on a • Pay per use
distributed network using virtualized • Dynamic scale up and scale down
resources and accessed by common Internet
protocols and networking standards. It is Cloud Security Alliance — A standards
distinguished by the notion that resources are organization active in cloud computing.
virtual and limitless, and that details of the Cloud Security Alliance GRC Stack — The Cloud
physical systems on which software runs are Security Alliance GRC Stack provides a
abstracted from the user.” — Source: Cloud toolkit for enterprises, cloud providers,
Computing Bible, Barrie Sosinsky (2011). security solution providers, IT auditors and
Cloud computing often entails an “as a other key stakeholders to instrument and
service” business model that may entail one assess both private and public clouds against
or more of the following: industry established best practices,
standards and critical compliance
• Archive as a Service (AaaS) requirements.
• Business Process as a Service (BPaas)
CLPR — Cache Logical Partition.
• Failure as a Service (FaaS)
Cluster — A collection of computers that are
• Infrastructure as a Service (IaaS) interconnected (typically at high-speeds) for
• IT as a Service (ITaaS) the purpose of improving reliability,
• Platform as a Service (PaaS) availability, serviceability or performance
(via load balancing). Often, clustered
• Private File Tiering as a Service (PFTaaS) computers have access to a common pool of
• Software as a Service (SaaS) storage and run special software to
• SharePoint as a Service (SPaaS) coordinate the component computers'
activities.
• SPI refers to the Software, Platform and
Infrastructure as a Service business model. CM ― (1) Cache Memory, Cache Memory Module.
Cloud network types include the following: Intermediate buffer between the channels
and drives. It has a maximum of 64GB (32GB
• Community cloud (or community x 2 areas) of capacity. It is available and
network cloud) controlled as 2 areas of cache (cache A and
• Hybrid cloud (or hybrid network cloud)



cache B). It is fully battery-backed (48 hours). Corporate governance — Organizational
(2) Content Management. compliance with government-mandated
CM DIR — Cache Memory Directory. regulations.

CME — Communications Media and CP — Central Processor (also called Processing


Unit or PU).
Entertainment.
CPC — Central Processor Complex.
CM-HSN — Control Memory Hierarchical Star
Network. CPM — Cache Partition Manager. Allows for
partitioning of the cache and assigns a
CM PATH ― Cache Memory Access Path. Access
partition to a LU; this enables tuning of the
Path from the processors of CHA, DKA PCB
system’s performance.
to Cache Memory.
CPOE — Computerized Physician Order Entry
CM PK — Cache Memory Package. (Provider Ordered Entry).
CM/SM — Cache Memory/Shared Memory. CPS — Cache Port Slave.
CMA — Cache Memory Adapter. CPU — Central Processing Unit.
CMD — Command. CRM — Customer Relationship Management.
CMG — Cache Memory Group. CSA – Cloud Security Alliance.
CNAME — Canonical NAME. CSS — Channel Subsystem.
CNS — Cluster Name Space or Clustered Name CS&S — Customer Service and Support.
Space. CSTOR — Central Storage or Processor Main
CNT — Cumulative network throughput. Memory.
CoD — Capacity on Demand. C-Suite — The C-suite is considered the most
important and influential group of
Community Network Cloud — Infrastructure individuals at a company. Referred to as
shared between several organizations or “the C-Suite within a Healthcare provider.”
groups with common concerns.
CSV — Comma Separated Value or Cluster Shared
Concatenation — A logical joining of 2 series of Volume.
data, usually represented by the symbol “|”.
In data communications, 2 or more data are CSVP — Customer-specific Value Proposition.
often concatenated to provide a unique CSW ― Cache Switch PCB. The cache switch
name or reference (such as, S_ID | X_ID). connects the channel adapter or disk adapter
Volume managers concatenate disk address to the cache. Each of them is connected to the
spaces to present a single larger address cache by the Cache Memory Hierarchical
space. Star Net (C-HSN) method. Each cluster is
Connectivity technology — A program or device's provided with the 2 CSWs, and each CSW
ability to link with other programs and can connect 4 caches. The CSW switches any
devices. Connectivity technology allows of the cache paths to which the channel
programs on a given computer to run adapter or disk adapter is to be connected
routines or access objects on another remote through arbitration.
computer. CTG — Consistency Group.
Controller — A device that controls the transfer of CTL — Controller module.
data from a computer to a peripheral device
CTN — Coordinated Timing Network.
(including a storage system) and vice versa.
CU — Control Unit. Refers to a storage subsystem.
Controller-based virtualization — Driven by the
The hexadecimal number to which 256
physical controller at the hardware
microcode level versus at the application LDEVs may be assigned.
software layer and integrates into the CUDG — Control Unit Diagnostics. Internal
infrastructure to allow virtualization across system tests.
heterogeneous storage and third party CUoD — Capacity Upgrade on Demand.
products.
CV — Custom Volume.



CVS ― Customizable Volume Size. Software used context, data migration is the same as
to create custom volume sizes. Marketed Hierarchical Storage Management (HSM).
under the name Virtual LVI (VLVI) and Data Pipe or Data Stream — The connection set up
Virtual LUN (VLUN). between the MediaAgent, source or
CWDM — Course Wavelength Division destination server is called a Data Pipe or
Multiplexing. more commonly a Data Stream.
CXRC — Coupled z/OS Global Mirror. Data Pool — A volume containing differential
-back to top- data only.
—D— Data Protection Directive — A major compliance
and privacy protection initiative within the
DA — Device Adapter.
European Union (EU) that applies to cloud
DACL — Discretionary access control list (ACL). computing. Includes the Safe Harbor
The part of a security descriptor that stores Agreement.
access rights for users and groups.
Data Stream — CommVault’s patented high
DAD — Device Address Domain. Indicates a site performance data mover used to move data
of the same device number automation back and forth between a data source and a
support function. If several hosts on the MediaAgent or between 2 MediaAgents.
same site have the same device number
Data Striping — Disk array data mapping
system, they have the same name.
technique in which fixed-length sequences of
DAP — Data Access Path. Also known as Zero virtual disk data addresses are mapped to
Copy Failover (ZCF). sequences of member disk addresses in a
DAS — Direct Attached Storage. regular rotating pattern.
DASD — Direct Access Storage Device. Data Transfer Rate (DTR) — The speed at which
data can be transferred. Measured in
Data block — A fixed-size unit of data that is
kilobytes per second for a CD-ROM drive, in
transferred together. For example, the
bits per second for a modem, and in
X-modem protocol transfers blocks of 128
megabytes per second for a hard drive. Also,
bytes. In general, the larger the block size,
often called data rate.
the faster the data transfer rate.
DBL — Drive box.
Data Duplication — Software duplicates data, as
in remote copy or PiT snapshots. Maintains 2 DBMS — Data Base Management System.
copies of data. DBX — Drive box.
Data Integrity — Assurance that information will DCA ― Data Cache Adapter.
be protected from modification and
DCTL — Direct coupled transistor logic.
corruption.
DDL — Database Definition Language.
Data Lifecycle Management — An approach to
information and storage management. The DDM — Disk Drive Module.
policies, processes, practices, services and DDNS — Dynamic DNS.
tools used to align the business value of data DDR3 — Double data rate 3.
with the most appropriate and cost-effective
storage infrastructure from the time data is DE — Data Exchange Software.
created through its final disposition. Data is Device Management — Processes that configure
aligned with business requirements through and manage storage systems.
management policies and service levels DFS — Microsoft Distributed File System.
associated with performance, availability,
recoverability, cost, and what ever DFSMS — Data Facility Storage Management
parameters the organization defines as Subsystem.
critical to its operations. DFSM SDM — Data Facility Storage Management
Data Migration — The process of moving data Subsystem System Data Mover.
from 1 storage device to another. In this



DFSMSdfp — Data Facility Storage Management Subsystem Data Facility Product.
DFSMSdss — Data Facility Storage Management Subsystem Data Set Services.
DFSMShsm — Data Facility Storage Management Subsystem Hierarchical Storage Manager.
DFSMSrmm — Data Facility Storage Management Subsystem Removable Media Manager.
DFSMStvs — Data Facility Storage Management Subsystem Transactional VSAM Services.
DFW — DASD Fast Write.
DICOM — Digital Imaging and Communications in Medicine.
DIMM — Dual In-line Memory Module.
Direct Access Storage Device (DASD) — A type of storage device, in which bits of data are stored at precise locations, enabling the computer to retrieve information directly without having to scan a series of records.
Direct Attached Storage (DAS) — Storage that is directly attached to the application or file server. No other device on the network can access the stored data.
Director class switches — Larger switches often used as the core of large switched fabrics.
Disaster Recovery Plan (DRP) — A plan that describes how an organization will deal with potential disasters. It may include the precautions taken to either maintain or quickly resume mission-critical functions. Sometimes also referred to as a Business Continuity Plan.
Disk Administrator — An administrative tool that displays the actual LU storage configuration.
Disk Array — A linked group of 1 or more physical independent hard disk drives generally used to replace larger, single disk drive systems. The most common disk arrays are in daisy chain configuration or implement RAID (Redundant Array of Independent Disks) technology. A disk array may contain several disk drive trays, and is structured to improve speed and increase protection against loss of data. Disk arrays organize their data storage into Logical Units (LUs), which appear as linear block spaces to their clients. A small disk array, with a few disks, might support up to 8 LUs; a large one, with hundreds of disk drives, can support thousands.
DKA ― Disk Adapter. Also called an array control processor (ACP). It provides the control functions for data transfer between drives and cache. The DKA contains DRR (Data Recover and Reconstruct), a parity generator circuit. Replaced by DKB in some cases.
DKB — Disk Board. Updated DKA for Hitachi Unified Storage VM and additional enterprise components.
DKC ― Disk Controller Unit. In a multi-frame configuration, the frame that contains the front end (control and memory components).
DKCMN ― Disk Controller Monitor. Monitors temperature and power status throughout the machine.
DKF ― Fibre disk adapter. Another term for a DKA.
DKU — Disk Array Frame or Disk Unit. In a multi-frame configuration, a frame that contains hard disk units (HDUs).
DKUPS — Disk Unit Power Supply.
DLIBs — Distribution Libraries.
DKUP — Disk Unit Power Supply.
DLM — Data Lifecycle Management.
DMA — Direct Memory Access.
DM-LU — Differential Management Logical Unit. DM-LU is used for saving management information of the copy functions in the cache.
DMP — Disk Master Program.
DMT — Dynamic Mapping Table.
DMTF — Distributed Management Task Force. A standards organization active in cloud computing.
DNS — Domain Name System.
DOC — Deal Operations Center.
Domain — A number of related storage array groups.
DOO — Degraded Operations Objective.
DP — Dynamic Provisioning (pool).
DP-VOL — Dynamic Provisioning Virtual Volume.
DPL — (1) (Dynamic) Data Protection Level or (2) Denied Persons List.


DR — Disaster Recovery.
DRAC — Dell Remote Access Controller.
DRAM — Dynamic random access memory.
DRP — Disaster Recovery Plan.
DRR — Data Recover and Reconstruct. Data Parity Generator chip on DKA.
DRV — Dynamic Reallocation Volume.
DSB — Dynamic Super Block.
DSF — Device Support Facility.
DSF INIT — Device Support Facility Initialization (for DASD).
DSP — Disk Slave Program.
DT — Disaster tolerance.
DTA — Data adapter and path to cache-switches.
DTR — Data Transfer Rate.
DVE — Dynamic Volume Expansion.
DW — Duplex Write.
DWDM — Dense Wavelength Division Multiplexing.
DWL — Duplex Write Line or Dynamic Workspace Linking.
-back to top-

—E—
EAL — Evaluation Assurance Level (EAL1 through EAL7). The EAL of an IT product or system is a numerical security grade assigned following the completion of a Common Criteria security evaluation, an international standard in effect since 1999.
EAV — Extended Address Volume.
EB — Exabyte.
EC — Enterprise Class (in contrast with BC, Business Class).
ECC — Error Checking and Correction.
ECC.DDR SDRAM — Error Correction Code Double Data Rate Synchronous Dynamic RAM Memory.
ECM — Extended Control Memory.
ECN — Engineering Change Notice.
E-COPY — Serverless or LAN free backup.
EFI — Extensible Firmware Interface. EFI is a specification that defines a software interface between an operating system and platform firmware. EFI runs on top of BIOS when an LPAR is activated.
EHR — Electronic Health Record.
EIG — Enterprise Information Governance.
EMIF — ESCON Multiple Image Facility.
EMPI — Electronic Master Patient Identifier. Also known as MPI.
Emulation — In the context of Hitachi Data Systems enterprise storage, emulation is the logical partitioning of an Array Group into logical devices.
EMR — Electronic Medical Record.
ENC — Enclosure or Enclosure Controller. The units that connect the controllers with the Fibre Channel disks. They also allow for online extending a system by adding RKAs.
ENISA — European Network and Information Security Agency.
EOF — End of Field.
EOL — End of Life.
EPO — Emergency Power Off.
EREP — Error Reporting and Printing.
ERP — Enterprise Resource Planning.
ESA — Enterprise Systems Architecture.
ESB — Enterprise Service Bus.
ESC — Error Source Code.
ESD — Enterprise Systems Division (of Hitachi).
ESCD — ESCON Director.
ESCON ― Enterprise Systems Connection. An input/output (I/O) interface for mainframe computer connections to storage devices developed by IBM.
ESD — Enterprise Systems Division.
ESDS — Entry Sequence Data Set.
ESS — Enterprise Storage Server.
ESW — Express Switch or E Switch. Also referred to as the Grid Switch (GSW).
Ethernet — A local area network (LAN) architecture that supports clients and servers and uses twisted pair cables for connectivity.
ETR — External Time Reference (device).
EVS — Enterprise Virtual Server.
Exabyte (EB) — A measurement of data or data storage. 1EB = 1,024PB.
EXCP — Execute Channel Program.
ExSA — Extended Serial Adapter.
-back to top-
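As a quick check on the capacity units used in this glossary (Exabyte above; Petabyte and Terabyte later), here is a minimal sketch in Python; it simply applies the 1,024 ladder stated in those entries.

# Each capacity unit in this glossary is 1,024 of the next smaller unit.
units = ["KB", "MB", "GB", "TB", "PB", "EB"]
bytes_per_unit = {u: 1024 ** (i + 1) for i, u in enumerate(units)}
print(bytes_per_unit["EB"] // bytes_per_unit["PB"])  # 1024, that is, 1EB = 1,024PB
print(bytes_per_unit["PB"] // bytes_per_unit["TB"])  # 1024, that is, 1PB = 1,024TB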


—F— achieved by including redundant instances
of components whose failure would make
FaaS — Failure as a Service. A proposed business the system inoperable, coupled with facilities
model for cloud computing in which large- that allow the redundant components to
scale, online failure drills are provided as a assume the function of failed ones.
service in order to test real cloud
deployments. Concept developed by the FAIS — Fabric Application Interface Standard.
College of Engineering at the University of FAL — File Access Library.
California, Berkeley in 2011. FAT — File Allocation Table.
Fabric — The hardware that connects Fault Tolerant — Describes a computer system or
workstations and servers to storage devices component designed so that, in the event of a
in a SAN is referred to as a "fabric." The SAN component failure, a backup component or
fabric enables any-server-to-any-storage procedure can immediately take its place with
device connectivity through the use of Fibre no loss of service. Fault tolerance can be
Channel switching technology. provided with software, embedded in
Failback — The restoration of a failed system hardware or provided by hybrid combination.
share of a load to a replacement component. FBA — Fixed-block Architecture. Physical disk
For example, when a failed controller in a sector mapping.
redundant configuration is replaced, the FBA/CKD Conversion — The process of
devices that were originally controlled by converting open-system data in FBA format
the failed controller are usually failed back to mainframe data in CKD format.
to the replacement controller to restore the FBUS — Fast I/O Bus.
I/O balance, and to restore failure tolerance.
FC ― Fibre Channel or Field-Change (microcode
Similarly, when a defective fan or power
update). A technology for transmitting data
supply is replaced, its load, previously borne
between computer devices; a set of
by a redundant component, can be failed
standards for a serial I/O bus capable of
back to the replacement part.
transferring data between 2 ports.
Failed over — A mode of operation for failure-
FC RKAJ — Fibre Channel Rack Additional.
tolerant systems in which a component has
Module system acronym refers to an
failed and its function has been assumed by
additional rack unit that houses additional
a redundant component. A system that
hard drives exceeding the capacity of the
protects against single failures operating in
core RK unit.
failed over mode is not failure tolerant, as
failure of the redundant component may FC-0 ― Lowest layer on Fibre Channel transport.
render the system unable to function. Some This layer represents the physical media.
systems (for example, clusters) are able to FC-1 ― This layer contains the 8b/10b encoding
tolerate more than 1 failure; these remain scheme.
failure tolerant until no redundant FC-2 ― This layer handles framing and protocol,
component is available to protect against frame format, sequence/exchange
further failures. management and ordered set usage.
Failover — A backup operation that automatically FC-3 ― This layer contains common services used
switches to a standby database server or by multiple N_Ports in a node.
network if the primary system fails, or is FC-4 ― This layer handles standards and profiles
temporarily shut down for servicing. Failover for mapping upper level protocols like SCSI
is an important fault tolerance function of an IP onto the Fibre Channel Protocol.
mission-critical systems that rely on constant FCA ― Fibre Channel Adapter. Fibre interface
accessibility. Also called path failover. card. Controls transmission of fibre packets.
Failure tolerance — The ability of a system to FC-AL — Fibre Channel Arbitrated Loop. A serial
continue to perform its function or at a data transfer architecture developed by a
reduced performance level, when 1 or more consortium of computer and mass storage
of its components has failed. Failure device manufacturers, and is now being
tolerance in disk subsystems is often standardized by ANSI. FC-AL was designed



for new mass storage devices and other physical link rates to make them up to 8
peripheral devices that require very high times as efficient as ESCON (Enterprise
bandwidth. Using optical fiber to connect System Connection), IBM's previous fiber
devices, FC-AL supports full-duplex data optic channel standard.
transfer rates of 100MB/sec. FC-AL is FIPP — Fair Information Practice Principles.
compatible with SCSI for high-performance Guidelines for the collection and use of
storage systems. personal information created by the United
FCC — Federal Communications Commission. States Federal Trade Commission (FTC).
FCIP — Fibre Channel over IP. A network storage FISMA — Federal Information Security
technology that combines the features of Management Act of 2002. A major
Fibre Channel and the Internet Protocol (IP) compliance and privacy protection law that
to connect distributed SANs over large applies to information systems and cloud
distances. FCIP is considered a tunneling computing. Enacted in the United States of
protocol, as it makes a transparent point-to- America in 2002.
point connection between geographically FLGFAN ― Front Logic Box Fan Assembly.
separated SANs over IP networks. FCIP
relies on TCP/IP services to establish FLOGIC Box ― Front Logic Box.
connectivity between remote SANs over FM — Flash Memory. Each microprocessor has
LANs, MANs, or WANs. An advantage of FM. FM is non-volatile memory that contains
FCIP is that it can use TCP/IP as the microcode.
transport while keeping Fibre Channel fabric FOP — Fibre Optic Processor or fibre open.
services intact.
FQDN — Fully Qualified Domain Name.
FCoE – Fibre Channel over Ethernet. An
encapsulation of Fibre Channel frames over FPC — Failure Parts Code or Fibre Channel
Ethernet networks. Protocol Chip.
FCP — Fibre Channel Protocol. FPGA — Field Programmable Gate Array.
FC-P2P — Fibre Channel Point-to-Point. Frames — An ordered vector of words that is the
FCSE — Flashcopy Space Efficiency. basic unit of data transmission in a Fibre
FC-SW — Fibre Channel Switched. Channel network.
FCU— File Conversion Utility. Front end — In client/server applications, the
FD — Floppy Disk or Floppy Drive. client part of the program is often called the
front end and the server part is called the
FDDI — Fiber Distributed Data Interface.
back end.
FDR — Fast Dump/Restore.
FRU — Field Replaceable Unit.
FE — Field Engineer.
FS — File System.
FED — (Channel) Front End Director.
FedRAMP – Federal Risk and Authorization FSA — File System Module-A.
Management Program. FSB — File System Module-B.
Fibre Channel — A serial data transfer FSI — Financial Services Industries.
architecture developed by a consortium of
FSM — File System Module.
computer and mass storage device
manufacturers and now being standardized FSW ― Fibre Channel Interface Switch PCB. A
by ANSI. The most prominent Fibre Channel board that provides the physical interface
standard is Fibre Channel Arbitrated Loop (cable connectors) between the ACP ports
(FC-AL). and the disks housed in a given disk drive.
FICON — Fiber Connectivity. A high-speed FTP ― File Transfer Protocol. A client-server
input/output (I/O) interface for mainframe protocol that allows a user on 1 computer to
computer connections to storage devices. As transfer files to and from another computer
part of IBM's S/390 server, FICON channels over a TCP/IP network.
increase I/O capacity through the FWD — Fast Write Differential.
combination of a new architecture and faster -back to top-



—G— only 1 H2F that can be added to the core RK
Floor Mounted unit. See also: RK, RKA, and
GA — General availability. H1F.
GARD — General Available Restricted HA — High Availability.
Distribution.
Hadoop — Apache Hadoop is an open-source
Gb — Gigabit. software framework for data storage and
GB — Gigabyte. large-scale processing of data-sets on
Gb/sec — Gigabit per second. clusters of hardware.
GB/sec — Gigabyte per second. HANA — High Performance Analytic Appliance,
a database appliance technology proprietary
GbE — Gigabit Ethernet.
to SAP.
Gbps — Gigabit per second.
HBA — Host Bus Adapter — An I/O adapter that
GBps — Gigabyte per second. sits between the host computer's bus and the
GBIC — Gigabit Interface Converter. Fibre Channel loop and manages the transfer
of information between the 2 channels. In
GCMI — Global Competitive and Marketing
order to minimize the impact on host
Intelligence (Hitachi).
processor performance, the host bus adapter
GDG — Generation Data Group. performs many low-level interface functions
GDPS — Geographically Dispersed Parallel automatically or with minimal processor
Sysplex. involvement.
GID — Group Identifier within the UNIX security HCA — Host Channel Adapter.
model. HCD — Hardware Configuration Definition.
gigE — Gigabit Ethernet. HD — Hard Disk.
GLM — Gigabyte Link Module. HDA — Head Disk Assembly.
Global Cache — Cache memory is used on demand HDD ― Hard Disk Drive. A spindle of hard disk
by multiple applications. Use changes platters that make up a hard drive, which is
dynamically, as required for READ a unit of physical storage within a
performance between hosts/applications/LUs. subsystem.
GPFS — General Parallel File System. HDDPWR — Hard Disk Drive Power.
GSC — Global Support Center. HDU ― Hard Disk Unit. A number of hard drives
(HDDs) grouped together within a
GSI — Global Systems Integrator.
subsystem.
GSS — Global Solution Services.
Head — See read/write head.
GSSD — Global Solutions Strategy and
Heterogeneous — The characteristic of containing
Development.
dissimilar elements. A common use of this
GSW — Grid Switch Adapter. Also known as E word in information technology is to
Switch (Express Switch). describe a product as able to contain or be
GUI — Graphical User Interface. part of a “heterogeneous network,"
consisting of different manufacturers'
GUID — Globally Unique Identifier.
products that can interoperate.
-back to top-
Heterogeneous networks are made possible by
—H— standards-conforming hardware and
H1F — Essentially the floor-mounted disk rack software interfaces used in common by
(also called desk side) equivalent of the RK. different products, thus allowing them to
(See also: RK, RKA, and H2F). communicate with each other. The Internet
itself is an example of a heterogeneous
H2F — Essentially the floor-mounted disk rack
network.
(also called desk side) add-on equivalent
similar to the RKA. There is a limitation of HiCAM — Hitachi Computer Products America.



HIPAA — Health Insurance Portability and infrastructure, operations and applications)
Accountability Act. in a coordinated fashion to assemble a
HIS — (1) High Speed Interconnect. (2) Hospital particular solution.” — Source: Gartner
Information System (clinical and financial). Research.
Hybrid Network Cloud — A composition of 2 or
HiStar — Multiple point-to-point data paths to
cache. more clouds (private, community or public).
Each cloud remains a unique entity but they
HL7 — Health Level 7. are bound together. A hybrid network cloud
HLQ — High-level Qualifier. includes an interconnection.
HLS — Healthcare and Life Sciences. Hypervisor — Also called a virtual machine
manager, a hypervisor is a hardware
HLU — Host Logical Unit.
virtualization technique that enables
H-LUN — Host Logical Unit Number. See LUN. multiple operating systems to run
HMC — Hardware Management Console. concurrently on the same computer.
Hypervisors are often installed on server
Homogeneous — Of the same or similar kind.
hardware then run the guest operating
Host — Also called a server. Basically a central systems that act as servers.
computer that processes end-user
Hypervisor can also refer to the interface
applications or requests.
that is provided by Infrastructure as a Service
Host LU — Host Logical Unit. See also HLU. (IaaS) in cloud computing.
Host Storage Domains — Allows host pooling at Leading hypervisors include VMware
the LUN level and the priority access feature vSphere Hypervisor™ (ESXi), Microsoft®
lets administrator set service levels for Hyper-V and the Xen® hypervisor.
applications. -back to top-
HP — (1) Hewlett-Packard Company or (2) High
Performance.
HPC — High Performance Computing. —I—
HSA — Hardware System Area. I/F — Interface.
HSG — Host Security Group. I/O — Input/Output. Term used to describe any
HSM — Hierarchical Storage Management (see program, operation, or device that transfers
Data Migrator). data to or from a computer and to or from a
peripheral device.
HSN — Hierarchical Star Network.
IaaS —Infrastructure as a Service. A cloud
HSSDC — High Speed Serial Data Connector.
computing business model — delivering
HTTP — Hyper Text Transfer Protocol. computer infrastructure, typically a platform
HTTPS — Hyper Text Transfer Protocol Secure. virtualization environment, as a service,
Hub — A common connection point for devices in along with raw (block) storage and
a network. Hubs are commonly used to networking. Rather than purchasing servers,
connect segments of a LAN. A hub contains software, data center space or network
multiple ports. When a packet arrives at 1 equipment, clients buy those resources as a
port, it is copied to the other ports so that all fully outsourced service. Providers typically
segments of the LAN can see all packets. A bill such services on a utility computing
switching hub actually reads the destination basis; the amount of resources consumed
address of each packet and then forwards (and therefore the cost) will typically reflect
the packet to the correct port. Device to the level of activity.
which nodes on a multi-point bus or loop are IDE — Integrated Drive Electronics Advanced
physically connected. Technology. A standard designed to connect
Hybrid Cloud — “Hybrid cloud computing refers hard and removable disk drives.
to the combination of external public cloud IDN — Integrated Delivery Network.
computing services and internal resources
iFCP — Internet Fibre Channel Protocol.
(either a private cloud or traditional



Index Cache — Provides quick access to indexed IOC — I/O controller.
data on the media during a browse\restore IOCDS — I/O Control Data Set.
operation.
IODF — I/O Definition file.
IBR — Incremental Block-level Replication or
IOPH — I/O per hour.
Intelligent Block Replication.
IOPS – I/O per second.
ICB — Integrated Cluster Bus.
IOS — I/O Supervisor.
ICF — Integrated Coupling Facility.
IOSQ — Input/Output Subsystem Queue.
ID — Identifier.
IP — Internet Protocol. The communications
IDR — Incremental Data Replication. protocol that routes traffic across the
iFCP — Internet Fibre Channel Protocol. Allows Internet.
an organization to extend Fibre Channel IPv6 — Internet Protocol Version 6. The latest
storage networks over the Internet by using revision of the Internet Protocol (IP).
TCP/IP. TCP is responsible for managing IPL — Initial Program Load.
congestion control as well as error detection
IPSEC — IP security.
and recovery services.
IRR — Internal Rate of Return.
iFCP allows an organization to create an IP
SAN fabric that minimizes the Fibre Channel ISC — Initial shipping condition or Inter-System
fabric component and maximizes use of the Communication.
company's TCP/IP infrastructure. iSCSI — Internet SCSI. Pronounced eye skuzzy.
An IP-based standard for linking data
IFL — Integrated Facility for LINUX.
storage devices over a network and
IHE — Integrating the Healthcare Enterprise. transferring data by carrying SCSI
IID — Initiator ID. commands over IP networks.
IIS — Internet Information Server. ISE — Integrated Scripting Environment.
ILM — Information Life Cycle Management. iSER — iSCSI Extensions for RDMA.
ILO — (Hewlett-Packard) Integrated Lights-Out. ISL — Inter-Switch Link.

IML — Initial Microprogram Load. iSNS — Internet Storage Name Service.


ISOE — iSCSI Offload Engine.
IMS — Information Management System.
ISP — Internet service provider.
In-Band Virtualization — Refers to the location of
the storage network path, between the ISPF — Interactive System Productivity Facility.
application host servers in the storage ISPF/PDF — Interactive System Productivity
systems. Provides both control and data Facility/Program Development Facility.
along the same connection path. Also called ISV — Independent Software Vendor.
symmetric virtualization. ITaaS — IT as a Service. A cloud computing
INI — Initiator. business model. This general model is an
Interface —The physical and logical arrangement umbrella model that entails the SPI business
supporting the attachment of any device to a model (SaaS, PaaS and IaaS — Software,
connector or to another device. Platform and Infrastructure as a Service).
Internal Bus — Another name for an internal data ITSC — Information and Telecommunications
bus. Also, an expansion bus is often referred Systems Companies.
to as an internal bus. -back to top-

Internal Data Bus — A bus that operates only —J—


within the internal circuitry of the CPU,
Java — A widely accepted, open systems
communicating among the internal caches of
programming language. Hitachi’s enterprise
memory that are part of the CPU chip’s
software products are all accessed using Java
design. This bus is typically rather quick and
applications. This enables storage
is independent of the rest of the computer’s
administrators to access the Hitachi
operations.



enterprise software products from any PC or (all or portions of 1 or more disks) that are
workstation that runs a supported thin-client combined so that the subsystem sees and
internet browser application and that has treats them as a single area of data storage.
TCP/IP network access to the computer on Also called a volume. An LDEV has a
which the software product runs. specific and unique address within a
Java VM — Java Virtual Machine. subsystem. LDEVs become LUNs to an
open-systems host.
JBOD — Just a Bunch of Disks.
JCL — Job Control Language. LDKC — Logical Disk Controller or Logical Disk
Controller Manual.
JMP —Jumper. Option setting method.
LDM — Logical Disk Manager.
JMS — Java Message Service.
LDS — Linear Data Set.
JNL — Journal.
JNLG — Journal Group. LED — Light Emitting Diode.

JRE —Java Runtime Environment. LFF — Large Form Factor.


JVM — Java Virtual Machine. LIC — Licensed Internal Code.
J-VOL — Journal Volume. LIS — Laboratory Information Systems.
-back to top- LLQ — Lowest Level Qualifier.

—K— LM — Local Memory.

KSDS — Key Sequence Data Set. LMODs — Load Modules.

kVA— Kilovolt Ampere. LNKLST — Link List.

KVM — Kernel-based Virtual Machine or Load balancing — The process of distributing


Keyboard-Video Display-Mouse. processing and communications activity
evenly across a computer network so that no
kW — Kilowatt. single device is overwhelmed. Load
-back to top- balancing is especially important for
networks where it is difficult to predict the
—L— number of requests that will be issued to a
LACP — Link Aggregation Control Protocol. server. If 1 server starts to be swamped,
LAG — Link Aggregation Groups. requests are forwarded to another server
with more capacity. Load balancing can also
LAN — Local Area Network. A communications
refer to the communications channels
network that serves clients within a
themselves.
geographical area, such as a building.
LOC — “Locations” section of the Maintenance
LBA — Logical block address. A 28-bit value that
Manual.
maps to a specific cylinder-head-sector
address on the disk. Logical DKC (LDKC) — Logical Disk Controller
Manual. An internal architecture extension
LC — Lucent connector. Fibre Channel connector
to the Control Unit addressing scheme that
that is smaller than a simplex connector (SC).
allows more LDEVs to be identified within 1
LCDG — Link Processor Control Diagnostics. Hitachi enterprise storage system.
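As a worked illustration of the LBA entry above, the sketch below (Python) converts a cylinder-head-sector address to a logical block address; the 16-head, 63-sector geometry is a hypothetical example, not a figure from this guide.

# Hypothetical geometry for illustration: 16 heads per cylinder, 63 sectors per track.
HEADS_PER_CYLINDER = 16
SECTORS_PER_TRACK = 63

def chs_to_lba(cylinder, head, sector):
    # Sectors are numbered from 1, so subtract 1 when flattening to a block address.
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

print(chs_to_lba(2, 3, 1))  # (2*16 + 3) * 63 + 0 = 2205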
LCM — Link Control Module. Longitudinal record —Patient information from
LCP — Link Control Processor. Controls the birth to death.
optical links. LCP is located in the LCM. LPAR — Logical Partition (mode).
LCSS — Logical Channel Subsystems. LR — Local Router.
LCU — Logical Control Unit. LRECL — Logical Record Length.
LD — Logical Device. LRP — Local Router Processor.
LDAP — Lightweight Directory Access Protocol. LRU — Least Recently Used.
LDEV ― Logical Device or Logical Device
(number). A set of physical disk partitions



LSS — Logical Storage Subsystem (equivalent to Control Unit. The local CU of a remote copy
LCU). pair. Main or Master Control Unit.
LU — Logical Unit. Mapping number of an LDEV. MCU — Master Control Unit.
LUN ― Logical Unit Number. 1 or more LDEVs. MDPL — Metadata Data Protection Level.
Used only for open systems. MediaAgent — The workhorse for all data
LUSE ― Logical Unit Size Expansion. Feature used movement. MediaAgent facilitates the
to create virtual LUs that are up to 36 times transfer of data between the data source, the
larger than the standard OPEN-x LUs. client computer, and the destination storage
media.
LVDS — Low Voltage Differential Signal
Metadata — In database management systems,
LVI — Logical Volume Image. Identifies a similar data files are the files that store the database
concept (as LUN) in the mainframe information; whereas other files, such as
environment. index files and data dictionaries, store
LVM — Logical Volume Manager. administrative information, known as
-back to top- metadata.
MFC — Main Failure Code.
—M— MG — (1) Module Group. 2 (DIMM) cache
MAC — Media Access Control. A MAC address is memory modules that work together. (2)
a unique identifier attached to most forms of Migration Group. A group of volumes to be
networking equipment. migrated together.
MAID — Massive array of disks. MGC — (3-Site) Metro/Global Mirror.
MAN — Metropolitan Area Network. A MIB — Management Information Base. A database
communications network that generally of objects that can be monitored by a
covers a city or suburb. MAN is very similar network management system. Both SNMP
to a LAN except it spans across a and RMON use standardized MIB formats
geographical region such as a state. Instead that allow any SNMP and RMON tools to
of the workstations in a LAN, the monitor any device defined by a MIB.
workstations in a MAN could depict Microcode — The lowest-level instructions that
different cities in a state. For example, the directly control a microprocessor. A single
state of Texas could have: Dallas, Austin, San machine-language instruction typically
Antonio. The city could be a separate LAN translates into several microcode
and all the cities connected together via a instructions.
switch. This topology would indicate a
MAN.
MAPI — Management Application Programming Interface.
Fortran, Pascal, C; High-level Language; Assembly Language; Machine Language; Hardware (stack diagram accompanying the Microcode entry).
Mapping — Conversion between 2 data
addressing spaces. For example, mapping
refers to the conversion between physical
Microprogram — See Microcode.
disk block addresses and the block addresses
of the virtual disks presented to operating MIF — Multiple Image Facility.
environments by control software. Mirror Cache OFF — Increases cache efficiency
Mb — Megabit. over cache data redundancy.
MB — Megabyte. M-JNL — Primary journal volumes.
MBA — Memory Bus Adaptor. MM — Maintenance Manual.
MBUS — Multi-CPU Bus. MMC — Microsoft Management Console.
MC — Multi Cabinet. Mode — The state or setting of a program or
device. The term mode implies a choice,
MCU — Main Control Unit, Master Control Unit,
which is that you can change the setting and
Main Disk Control Unit or Master Disk
put the system in a different mode.



MP — Microprocessor. NFS protocol — Network File System is a protocol
MPA — Microprocessor adapter. that allows a computer to access files over a
network as easily as if they were on its local
MPB – Microprocessor board.
disks.
MPI — (Electronic) Master Patient Identifier. Also
NIM — Network Interface Module.
known as EMPI.
MPIO — Multipath I/O. NIS — Network Information Service (originally
called the Yellow Pages or YP).
MP PK – MP Package.
NIST — National Institute of Standards and
MPU — Microprocessor Unit.
Technology. A standards organization active
MQE — Metadata Query Engine (Hitachi). in cloud computing.
MS/SG — Microsoft Service Guard. NLS — Native Language Support.
MSCS — Microsoft Cluster Server. Node ― An addressable entity connected to an
MSS — (1) Multiple Subchannel Set. (2) Managed I/O bus or network, used primarily to refer
Security Services. to computers, storage devices and storage
subsystems. The component of a node that
MTBF — Mean Time Between Failure.
connects to the bus or network is a port.
MTS — Multitiered Storage.
Node name ― A Name_Identifier associated with
Multitenancy — In cloud computing, a node.
multitenancy is a secure way to partition the
infrastructure (application, storage pool and NPV — Net Present Value.
network) so multiple customers share a NRO — Network Recovery Objective.
single resource pool. Multitenancy is one of NTP — Network Time Protocol.
the key ways cloud can achieve massive
economy of scale. NVS — Non Volatile Storage.
-back to top-
M-VOL — Main Volume.
MVS — Multiple Virtual Storage. —O—
-back to top- OASIS – Organization for the Advancement of
Structured Information Standards.
—N—
OCC — Open Cloud Consortium. A standards
NAS ― Network Attached Storage. A disk array
organization active in cloud computing.
connected to a controller that gives access to
a LAN Transport. It handles data at the file OEM — Original Equipment Manufacturer.
level. OFC — Open Fibre Control.
NAT — Network Address Translation. OGF — Open Grid Forum. A standards
NDMP — Network Data Management Protocol. A organization active in cloud computing.
protocol meant to transport data between OID — Object identifier.
NAS devices.
OLA — Operating Level Agreements.
NetBIOS — Network Basic Input/Output System.
OLTP — On-Line Transaction Processing.
Network — A computer system that allows
OLTT — Open-loop throughput throttling.
sharing of resources, such as files and
peripheral hardware devices. OMG — Object Management Group. A standards
organization active in cloud computing.
Network Cloud — A communications network.
The word "cloud" by itself may refer to any On/Off CoD — On/Off Capacity on Demand.
local area network (LAN) or wide area ONODE — Object node.
network (WAN). The terms “computing"
OpenStack – An open source project to provide
and "cloud computing" refer to services
orchestration and provisioning for cloud
offered on the public Internet or to a private
environments based on a variety of different
network that uses the same protocols as a
hypervisors.
standard network. See also cloud computing.



OPEX — Operational Expenditure. This is an multiple partitions. Then customize the
operating expense, operating expenditure, partition to match the I/O characteristics of
operational expense, or operational assigned LUs.
expenditure, which is an ongoing cost for PAT — Port Address Translation.
running a product, business, or system. Its
counterpart is a capital expenditure (CAPEX). PATA — Parallel ATA.

ORM — Online Read Margin. Path — Also referred to as a transmission channel,


the path between 2 nodes of a network that a
OS — Operating System. data communication follows. The term can
Out-of-Band Virtualization — Refers to systems refer to the physical cabling that connects the
where the controller is located outside of the nodes on a network, the signal that is
SAN data path. Separates control and data communicated over the pathway or a sub-
on different connection paths. Also called channel in a carrier frequency.
asymmetric virtualization. Path failover — See Failover.
-back to top-
PAV — Parallel Access Volumes.
—P— PAWS — Protect Against Wrapped Sequences.
P-2-P — Point to Point. Also P-P. PB — Petabyte.
PaaS — Platform as a Service. A cloud computing PBC — Port Bypass Circuit.
business model — delivering a computing PCB — Printed Circuit Board.
platform and solution stack as a service. PCHIDS — Physical Channel Path Identifiers.
PaaS offerings facilitate deployment of
PCI — Power Control Interface.
applications without the cost and complexity
of buying and managing the underlying PCI CON — Power Control Interface Connector
hardware, software and provisioning Board.
hosting capabilities. PaaS provides all of the PCI DSS — Payment Card Industry Data Security
facilities required to support the complete Standard.
life cycle of building and delivering web PCIe — Peripheral Component Interconnect
applications and services entirely from the Express.
Internet.
PD — Product Detail.
PACS – Picture Archiving and Communication PDEV— Physical Device.
System.
PDM — Policy based Data Migration or Primary
PAN — Personal Area Network. A Data Migrator.
communications network that transmit data
PDS — Partitioned Data Set.
wirelessly over a short distance. Bluetooth
and Wi-Fi Direct are examples of personal PDSE — Partitioned Data Set Extended.
area networks. Performance — Speed of access or the delivery of
PAP — Password Authentication Protocol. information.
Petabyte (PB) — A measurement of capacity — the
Parity — A technique of checking whether data
amount of data that a drive or storage
has been lost or written over when it is
system can store after formatting. 1PB =
moved from one place in storage to another
1,024TB.
or when it is transmitted between
computers. PFA — Predictive Failure Analysis.
Parity Group — Also called an array group. This is PFTaaS — Private File Tiering as a Service. A cloud
a group of hard disk drives (HDDs) that computing business model.
form the basic unit of storage in a subsystem. PGP — Pretty Good Privacy. A data encryption
All HDDs in a parity group must have the and decryption computer program used for
same physical capacity. increasing the security of email
Partitioned cache memory — Separate workloads communications.
in a “storage consolidated” system by PGR — Persistent Group Reserve.
dividing cache into individually managed



PI — Product Interval. Provisioning — The process of allocating storage
PIR — Performance Information Report. resources and assigning storage capacity for
an application, usually in the form of server
PiT — Point-in-Time.
disk drive space, in order to optimize the
PK — Package (see PCB). performance of a storage area network
PL — Platter. The circular disk on which the (SAN). Traditionally, this has been done by
magnetic data is stored. Also called the SAN administrator, and it can be a
motherboard or backplane. tedious process. In recent years, automated
PM — Package Memory. storage provisioning (also called auto-
provisioning) programs have become
POC — Proof of concept.
available. These programs can reduce the
Port — In TCP/IP and UDP networks, an time required for the storage provisioning
endpoint to a logical connection. The port process, and can free the administrator from
number identifies what type of port it is. For the often distasteful task of performing this
example, port 80 is used for HTTP traffic. chore manually.
POSIX — Portable Operating System Interface for PS — Power Supply.
UNIX. A set of standards that defines an
PSA — Partition Storage Administrator .
application programming interface (API) for
software designed to run under PSSC — Perl Silicon Server Control.
heterogeneous operating systems. PSU — Power Supply Unit.
PP — Program product. PTAM — Pickup Truck Access Method.
P-P — Point-to-point; also P2P. PTF — Program Temporary Fixes.
PPRC — Peer-to-Peer Remote Copy. PTR — Pointer.
Private Cloud — A type of cloud computing PU — Processing Unit.
defined by shared capabilities within a Public Cloud — Resources, such as applications
single company; modest economies of scale and storage, available to the general public
and less automation. Infrastructure and data over the Internet.
reside inside the company’s data center
P-VOL — Primary Volume.
behind a firewall. Comprised of licensed
-back to top-
software tools rather than on-going services.
—Q—
Example: An organization implements its QD — Quorum Device.
own virtual, scalable cloud and business
units are charged on a per use basis. QDepth — The number of I/O operations that can
run in parallel on a SAN device; also WWN
Private Network Cloud — A type of cloud
QDepth.
network with 3 characteristics: (1) Operated
solely for a single organization, (2) Managed QoS — Quality of Service. In the field of computer
internally or by a third-party, (3) Hosted networking, the traffic engineering term
internally or externally. quality of service (QoS) refers to resource
reservation control mechanisms rather than
PR/SM — Processor Resource/System Manager. the achieved service quality. Quality of
Protocol — A convention or standard that enables service is the ability to provide different
the communication between 2 computing priority to different applications, users, or
endpoints. In its simplest form, a protocol data flows, or to guarantee a certain level of
can be defined as the rules governing the performance to a data flow.
syntax, semantics and synchronization of
QSAM — Queued Sequential Access Method.
communication. Protocols may be
-back to top-
implemented by hardware, software or a
combination of the 2. At the lowest level, a —R—
protocol defines the behavior of a hardware RACF — Resource Access Control Facility.
connection.
RAID ― Redundant Array of Independent Disks,
or Redundant Array of Inexpensive Disks. A



group of disks that look like a single volume telecommunication links that are installed to
to the server. RAID improves performance back up primary resources in case they fail.
by pulling a single stripe of data from
multiple disks, and improves fault-tolerance A well-known example of a redundant
either through mirroring or parity checking system is the redundant array of
and it is a component of a customer’s SLA. independent disks (RAID). Redundancy
contributes to the fault tolerance of a system.
RAID-0 — Striped array with no parity.
RAID-1 — Mirrored array and duplexing. Redundancy — Backing up a component to help
ensure high availability.
RAID-3 — Striped array with typically non-
rotating parity, optimized for long, single- Reliability — (1) Level of assurance that data will
threaded transfers. not be lost or degraded over time. (2) An
attribute of any commuter component
RAID-4 — Striped array with typically non-
(software, hardware or a network) that
rotating parity, optimized for short, multi-
consistently performs according to its
threaded transfers.
specifications.
RAID-5 — Striped array with typically rotating
REST — Representational State Transfer.
parity, optimized for short, multithreaded
transfers. REXX — Restructured extended executor.
RAID-6 — Similar to RAID-5, but with dual RID — Relative Identifier that uniquely identifies
rotating parity physical disks, tolerating 2 a user or group within a Microsoft Windows
physical disk failures. domain.
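The RAID-5 and RAID-6 entries above depend on parity. The toy sketch below (Python, with made-up 4-byte blocks) shows the XOR idea behind single-parity protection: the parity block lets any one missing data block be rebuilt from the survivors.

from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR across equally sized blocks.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]               # example data blocks on 3 drives
parity = xor_blocks(data)                        # parity block on a 4th drive
rebuilt = xor_blocks([data[0], data[2], parity]) # pretend the drive holding data[1] failed
print(rebuilt == data[1])                        # True: the lost block is recovered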
RAIN — Redundant (or Reliable) Array of RIS — Radiology Information System.
Independent Nodes (architecture). RISC — Reduced Instruction Set Computer.
RAM — Random Access Memory.
RIU — Radiology Imaging Unit.
RAM DISK — A LUN held entirely in the cache
R-JNL — Secondary journal volumes.
area.
RAS — Reliability, Availability, and Serviceability RK — Rack additional.
or Row Address Strobe. RKAJAT — Rack Additional SATA disk tray.
RBAC — Role Base Access Control. RKAK — Expansion unit.
RC — (1) Reference Code or (2) Remote Control. RLGFAN — Rear Logic Box Fan Assembly.
RCHA — RAID Channel Adapter. RLOGIC BOX — Rear Logic Box.
RCP — Remote Control Processor. RMF — Resource Measurement Facility.
RCU — Remote Control Unit or Remote Disk RMI — Remote Method Invocation. A way that a
Control Unit. programmer, using the Java programming
RCUT — RCU Target. language and development environment,
can write object-oriented programming in
RD/WR — Read/Write. which objects on different computers can
RDM — Raw Disk Mapped. interact in a distributed network. RMI is the
RDMA — Remote Direct Memory Access. Java version of what is generally known as a
RPC (remote procedure call), but with the
RDP — Remote Desktop Protocol.
ability to pass 1 or more objects along with
RDW — Record Descriptor Word. the request.
Read/Write Head — Read and write data to the RndRD — Random read.
platters, typically there is 1 head per platter ROA — Return on Asset.
side, and each head is attached to a single
actuator shaft. RoHS — Restriction of Hazardous Substances (in
Electrical and Electronic Equipment).
RECFM — Record Format.
Redundant — Describes
the computer or network system ROI — Return on Investment.
components, such as fans, hard disk drives, ROM — Read Only Memory.
servers, operating systems, switches, and



Round robin mode — A load balancing technique delivery model for most business
which distributes data packets equally applications, including accounting (CRM
among the available paths. Round robin and ERP), invoicing (HRM), content
DNS is usually used for balancing the load management (CM) and service desk
of geographically distributed Web servers. It management, just to name the most common
works on a rotating basis in that one server software that runs in the cloud. This is the
IP address is handed out, then moves to the fastest growing service in the cloud market
back of the list; the next server IP address is today. SaaS performs best for relatively
handed out, and then it moves to the end of simple tasks in IT-constrained organizations.
the list; and so on, depending on the number SACK — Sequential Acknowledge.
of servers being used. This works in a
looping fashion. SACL — System ACL. The part of a security
descriptor that stores system auditing
Router — A computer networking device that information.
forwards data packets toward their
destinations, through a process known as SAIN — SAN-attached Array of Independent
routing. Nodes (architecture).
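As a minimal illustration of the Round robin mode entry above, the sketch below (Python, with hypothetical path names) hands out paths in a repeating cycle.

from itertools import cycle

paths = ["path-0", "path-1", "path-2"]  # hypothetical I/O paths
selector = cycle(paths)
for _ in range(5):
    print(next(selector))               # path-0, path-1, path-2, path-0, path-1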

RPC — Remote procedure call. SAN ― Storage Area Network. A network linking
computing devices to disk or tape arrays and
RPO — Recovery Point Objective. The point in other devices over Fibre Channel. It handles
time that recovered data should match. data at the block level.
RPSFAN — Rear Power Supply Fan Assembly. SAP — (1) System Assist Processor (for I/O
RRDS — Relative Record Data Set. processing), or (2) a German software
RS CON — RS232C/RS422 Interface Connector. company.

RSD — RAID Storage Division (of Hitachi). SAP HANA — High Performance Analytic
Appliance, a database appliance technology
R-SIM — Remote Service Information Message. proprietary to SAP.
RSM — Real Storage Manager. SARD — System Assurance Registration
RTM — Recovery Termination Manager. Document.
RTO — Recovery Time Objective. The length of SAS —Serial Attached SCSI.
time that can be tolerated between a disaster SATA — Serial ATA. Serial Advanced Technology
and recovery of data. Attachment is a new standard for connecting
R-VOL — Remote Volume. hard drives into computer systems. SATA is
R/W — Read/Write. based on serial signaling technology, unlike
current IDE (Integrated Drive Electronics)
-back to top-
hard drives that use parallel signaling.
—S— SBM — Solutions Business Manager.
SA — Storage Administrator. SBOD — Switched Bunch of Disks.
SA z/OS — System Automation for z/OS. SBSC — Smart Business Storage Cloud.
SAA — Share Access Authentication. The process SBX — Small Box (Small Form Factor).
of restricting a user's rights to a file system
SC — (1) Simplex connector. Fibre Channel
object by combining the security descriptors
connector that is larger than a Lucent
from both the file system object itself and the
connector (LC). (2) Single Cabinet.
share to which the user is connected.
SCM — Supply Chain Management.
SaaS — Software as a Service. A cloud computing
SCP — Secure Copy.
business model. SaaS is a software delivery
model in which software and its associated SCSI — Small Computer Systems Interface. A
data are hosted centrally in a cloud and are parallel bus architecture and a protocol for
typically accessed by users using a thin transmitting large data blocks up to a
client, such as a web browser via the distance of 15 to 25 meters.
Internet. SaaS has become a common SD — Software Division (of Hitachi).



SDH — Synchronous Digital Hierarchy. • Specific performance benchmarks to
SDM — System Data Mover. which actual performance will be
periodically compared
SDO – Standards Development Organizations (a
general category). • The schedule for notification in advance of
network changes that may affect users
SDSF — Spool Display and Search Facility.
Sector — A sub-division of a track of a magnetic • Help desk response time for various
disk that stores a fixed amount of data. classes of problems

SEL — System Event Log. • Dial-in access availability


Selectable Segment Size — Can be set per • Usage statistics that will be provided
partition. Service-Level Objective — SLO. Individual
Selectable Stripe Size — Increases performance by performance metrics built into an SLA. Each
customizing the disk access size. SLO corresponds to a single performance
characteristic relevant to the delivery of an
SENC — Is the SATA (Serial ATA) version of the
overall service. Some examples of SLOs
ENC. ENCs and SENCs are complete
include: system availability, help desk
microprocessor systems on their own and
incident resolution time, and application
they occasionally require a firmware
response time.
upgrade.
SeqRD — Sequential read. SES — SCSI Enclosure Services.

Serial Transmission — The transmission of data SFF — Small Form Factor.


bits in sequential order over a single line. SFI — Storage Facility Image.
Server — A central computer that processes SFM — Sysplex Failure Management.
end-user applications or requests, also called
SFP — Small Form-Factor Pluggable module Host
a host.
connector. A specification for a new
Server Virtualization — The masking of server generation of optical modular transceivers.
resources, including the number and identity The devices are designed for use with small
of individual physical servers, processors, form factor (SFF) connectors, offer high
and operating systems, from server users. speed and physical compactness and are
The implementation of multiple isolated hot-swappable.
virtual environments in one physical server.
SHSN — Shared memory Hierarchical Star
Service-level Agreement — SLA. A contract Network.
between a network service provider and a
SID — Security Identifier. A user or group
customer that specifies, usually in
identifier within the Microsoft Windows
measurable terms, what services the network
security model.
service provider will furnish. Many Internet
service providers (ISP) provide their SIGP — Signal Processor.
customers with a SLA. More recently, IT SIM — (1) Service Information Message. A
departments in major enterprises have message reporting an error that contains fix
adopted the idea of writing a service level guidance information. (2) Storage Interface
agreement so that services for their Module. (3) Subscriber Identity Module.
customers (users in other departments SIM RC — Service (or system) Information
within the enterprise) can be measured, Message Reference Code.
justified, and perhaps compared with those
SIMM — Single In-line Memory Module.
of outsourcing network providers.
SLA —Service Level Agreement.
Some metrics that SLAs may specify include:
SLO — Service Level Objective.
• The percentage of the time services will be
SLRP — Storage Logical Partition.
available
SM ― Shared Memory or Shared Memory Module.
• The number of users that can be served
Stores the shared information about the
simultaneously
subsystem and the cache control information
(director names). This type of information is



used for the exclusive control of the can send and receive TCP/IP messages by
subsystem. Like CACHE, shared memory is opening a socket and reading and writing
controlled as 2 areas of memory and fully non- data to and from the socket. This simplifies
volatile (sustained for approximately 7 days). program development because the
SM PATH— Shared Memory Access Path. The programmer need only worry about
Access Path from the processors of CHA, manipulating the socket and can rely on the
DKA PCB to Shared Memory. operating system to actually transport
messages across the network correctly. Note
SMB/CIFS — Server Message Block
that a socket in this sense is completely soft;
Protocol/Common Internet File System.
it is a software object, not a physical
SMC — Shared Memory Control. component.
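To make the Socket entry above concrete, here is a minimal sketch (Python); the host name is a placeholder, and the program only reads and writes the socket while the operating system moves the bytes.

import socket

# Open a TCP connection, send a request, and read part of the reply.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = sock.recv(1024)
print(reply.decode(errors="replace").splitlines()[0])  # for example: HTTP/1.0 200 OK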
SME — Small and Medium Enterprise. SOM — System Option Mode.
SMF — System Management Facility. SONET — Synchronous Optical Network.
SMI-S — Storage Management Initiative SOSS — Service Oriented Storage Solutions.
Specification.
SPaaS — SharePoint as a Service. A cloud
SMP — Symmetric Multiprocessing. An IBM- computing business model.
licensed program used to install software
SPAN — Span is a section between 2 intermediate
and software changes on z/OS systems.
supports. See Storage pool.
SMP/E — System Modification
Spare — An object reserved for the purpose of
Program/Extended.
substitution for a like object in case of that
SMS — System Managed Storage. object's failure.
SMTP — Simple Mail Transfer Protocol. SPC — SCSI Protocol Controller.
SMU — System Management Unit. SpecSFS — Standard Performance Evaluation
Snapshot Image — A logical duplicated volume Corporation Shared File system.
(V-VOL) of the primary volume. It is an SPECsfs97 — Standard Performance Evaluation
internal volume intended for restoration. Corporation (SPEC) System File Server (sfs)
SNIA — Storage Networking Industry developed in 1997 (97).
Association. An association of producers and SPI model — Software, Platform and
consumers of storage networking products, Infrastructure as a service. A common term
whose goal is to further storage networking to describe the cloud computing “as a service”
technology and applications. Active in cloud business model.
computing.
SRA — Storage Replicator Adapter.
SNMP — Simple Network Management Protocol. SRDF/A — (EMC) Symmetrix Remote Data
A TCP/IP protocol that was designed for Facility Asynchronous.
management of networks over TCP/IP,
SRDF/S — (EMC) Symmetrix Remote Data
using agents and stations.
Facility Synchronous.
SOA — Service Oriented Architecture.
SRM — Site Recovery Manager.
SOAP — Simple Object Access Protocol. A way for
SSB — Sense Byte.
a program running in one kind of operating
system (such as Windows 2000) to SSC — SiliconServer Control.
communicate with a program in the same or SSCH — Start Subchannel.
another kind of an operating system (such as SSD — Solid-State Drive or Solid-State Disk.
Linux) by using the World Wide Web's
SSH — Secure Shell.
Hypertext Transfer Protocol (HTTP) and its
Extensible Markup Language (XML) as the SSID — Storage Subsystem ID or Subsystem
mechanisms for information exchange. Identifier.
Socket — In UNIX and some other operating SSL — Secure Sockets Layer.
systems, socket is a software object that SSPC — System Storage Productivity Center.
connects an application to a network SSUE — Split Suspended Error.
protocol. In UNIX, for example, a program



SSUS — Split Suspend. TCO — Total Cost of Ownership.
SSVP — Sub Service Processor interfaces the SVP TCG – Trusted Computing Group.
to the DKC. TCP/IP — Transmission Control Protocol over
SSW — SAS Switch. Internet Protocol.
Sticky Bit — Extended UNIX mode bit that TDCONV — Trace Dump Converter. A software
prevents objects from being deleted from a program that is used to convert traces taken
directory by anyone other than the object's on the system into readable text. This
owner, the directory's owner or the root user. information is loaded into a special
Storage pooling — The ability to consolidate and spreadsheet that allows for further
manage storage resources across storage investigation of the data. More in-depth
system enclosures where the consolidation failure analysis.
of many appears as a single view. TDMF — Transparent Data Migration Facility.
STP — Server Time Protocol. Telco or TELCO — Telecommunications
STR — Storage and Retrieval Systems. Company.
Striping — A RAID technique for writing a file to TEP — Tivoli Enterprise Portal.
multiple disks on a block-by-block basis, Terabyte (TB) — A measurement of capacity, data
with or without parity. or data storage. 1TB = 1,024GB.
Subsystem — Hardware or software that performs TFS — Temporary File System.
a specific function within a larger system. TGTLIBs — Target Libraries.
SVC — Supervisor Call Interruption. THF — Front Thermostat.
SVC Interrupts — Supervisor calls. Thin Provisioning — Thin provisioning allows
S-VOL — (1) (ShadowImage) Source Volume for storage space to be easily allocated to servers
In-System Replication, or (2) (Universal on a just-enough and just-in-time basis.
Replicator) Secondary Volume. THR — Rear Thermostat.
SVP — Service Processor ― A laptop computer Throughput — The amount of data transferred
mounted on the control frame (DKC) and from 1 place to another or processed in a
used for monitoring, maintenance and specified amount of time. Data transfer rates
administration of the subsystem. for disk drives and networks are measured
Switch — A fabric device providing full in terms of throughput. Typically,
bandwidth per port and high-speed routing throughputs are measured in kb/sec,
of data via link-level addressing. Mb/sec and Gb/sec.
SWPX — Switching power supply. TID — Target ID.
SXP — SAS Expander. Tiered Storage — A storage strategy that matches
Symmetric Virtualization — See In-Band data classification to storage metrics. Tiered
Virtualization. storage is the assignment of different
categories of data to different types of
Synchronous — Operations that have a fixed time
storage media in order to reduce total
relationship to each other. Most commonly
storage cost. Categories may be based on
used to denote I/O operations that occur in
levels of protection needed, performance
time sequence, such as, a successor operation
requirements, frequency of use, and other
does not occur until its predecessor is
considerations. Since assigning data to
complete.
particular media may be an ongoing and
-back to top-
complex activity, some vendors provide
—T— software for automatically managing the
Target — The system component that receives a process based on a company-defined policy.
SCSI I/O command, an open device that Tiered Storage Promotion — Moving data
operates at the request of the initiator. between tiers of storage as their availability
TB — Terabyte. 1TB = 1,024GB. requirements change.
TCDO — Total Cost of Data Ownership. TLS — Tape Library System.



TLS — Transport Layer Security.
TMP — Temporary or Test Management Program.
TOD (or ToD) — Time Of Day.
TOE — TCP Offload Engine.
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPC-R — Tivoli Productivity Center for Replication.
TPF — Transaction Processing Facility.
TPOF — Tolerable Points of Failure.
Track — Circular segment of a hard disk or other storage media.
Transfer Rate — See Data Transfer Rate.
Trap — A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the Operating System performs some action and then returns control to the program.
TSC — Tested Storage Configuration.
TSO — Time Sharing Option.
TSO/E — Time Sharing Option/Extended.
T-VOL — (ShadowImage) Target Volume for In-System Replication.
-back to top-
—U—
UA — Unified Agent.
UBX — Large Box (Large Form Factor).
UCB — Unit Control Block.
UDP — User Datagram Protocol is 1 of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another. (See the brief example following this group of entries.)
UFA — UNIX File Attributes.
UID — User Identifier within the UNIX security model.
UPS — Uninterruptible Power Supply — A power supply that includes a battery to maintain power in the event of a power outage.
UR — Universal Replicator.
UUID — Universally Unique Identifier.
-back to top-
—V—
vContinuum — Using the vContinuum wizard, users can push agents to primary and secondary servers, set up protection and perform failovers and failbacks.
VCS — Veritas Cluster System.
VDEV — Virtual Device.
VDI — Virtual Desktop Infrastructure.
VHD — Virtual Hard Disk.
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language.
VHSIC — Very-High-Speed Integrated Circuit.
VI — Virtual Interface. A research prototype that is undergoing active development, and the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (for example, copying data, computing checksums) and system call overheads while still preventing 1 process from accidentally or maliciously tampering with or reading data being used by another.
Virtualization — Referring to storage virtualization, virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN, and makes tasks such as archiving, backup and recovery easier and faster. Storage virtualization is usually implemented via software applications. There are many additional types of virtualization.
Virtual Private Cloud (VPC) — Private cloud existing within a shared or public cloud (for example, the Intercloud). Also known as a virtual private network cloud.
VLL — Virtual Logical Volume Image/Logical Unit Number.
VLUN — Virtual LUN. Customized volume. Size chosen by user.
VLVI — Virtual Logical Volume Image. Marketing name for CVS (custom volume size).
VM — Virtual Machine.
VMDK — Virtual Machine Disk file format.
VNA — Vendor Neutral Archive.
VOJP — (Cache) Volatile Jumper.
VOLID — Volume ID.
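As a minimal illustration of the UDP entry above, the following Python sketch sends and receives a single datagram on the local host using the standard socket module; the port number and payload are arbitrary values chosen for the example.

    # Minimal UDP datagram exchange; no connection setup is required.
    import socket

    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 50007))        # wait for datagrams locally

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello via UDP", ("127.0.0.1", 50007))

    data, addr = receiver.recvfrom(1024)       # receive one datagram
    print(data, addr)                          # b'hello via UDP' ('127.0.0.1', <sender port>)

    sender.close()
    receiver.close()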
VOLSER — Volume Serial Numbers.
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than 1 volume or for a volume to span more than 1 disk.
VPC — Virtual Private Cloud.
VSAM — Virtual Storage Access Method.
VSD — Virtual Storage Director.
VSP — Virtual Storage Platform.
VSS — (Microsoft) Volume Shadow Copy Service.
VTL — Virtual Tape Library.
VTOC — Volume Table of Contents.
VTOCIX — Volume Table of Contents Index.
VVDS — Virtual Volume Data Set.
V-VOL — Virtual Volume.
-back to top-
—W—
WAN — Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR — Directory Name Object.
WDIR — Working Directory.
WDS — Working Data Set.
WebDAV — Web-Based Distributed Authoring and Versioning (HTTP extensions).
WFILE — File Object or Working File.
WFS — Working File Set.
WINS — Windows Internet Naming Service.
WL — Wide Link.
WLM — Work Load Manager.
WORM — Write Once, Read Many.
WSDL — Web Services Description Language.
WSRM — Write Seldom, Read Many.
WTREE — Directory Tree Object or Working Tree.
WWN — World Wide Name. A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix).
WWNN — World Wide Node Name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN — World Wide Port Name. A globally unique 64-bit identifier assigned to each Fibre Channel port. A Fibre Channel port's WWPN is permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.
-back to top-
—X—
XAUI — "X"=10, AUI = Attachment Unit Interface.
XCF — Cross System Communications Facility.
XDS — Cross Enterprise Document Sharing.
XDSi — Cross Enterprise Document Sharing for Imaging.
XFI — Standard interface for connecting a 10Gb Ethernet MAC device to XFP interface.
XFP — "X"=10Gb Small Form Factor Pluggable.
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.
-back to top-
—Y—
YB — Yottabyte.
Yottabyte — The highest-end measurement of data at the present time. 1YB = 1,024ZB, or 1 quadrillion GB. A recent estimate (2011) is that all the computer hard drives in the world do not contain 1YB of data.
-back to top-
—Z—
z/OS — z Operating System (IBM® S/390® or z/OS® Environments).
z/OS NFS — (System) z/OS Network File System.
z/OSMF — (System) z/OS Management Facility.
zAAP — (System) z Application Assist Processor (for Java and XML workloads).
ZCF — Zero Copy Failover. Also known as Data
Access Path (DAP).
Zettabyte (ZB) — A high-end measurement of
data. 1ZB = 1,024EB.
zFS — (System) zSeries File System.
zHPF — (System) z High Performance FICON.
zIIP — (System) z Integrated Information
Processor (specialty processor for database).
Zone — A collection of Fibre Channel Ports that
are permitted to communicate with each
other via the fabric.
Zoning — A method of subdividing a storage area
network into disjoint zones, or subsets of
nodes on the network. Storage area network
nodes outside a zone are invisible to nodes
within the zone. Moreover, with switched
SANs, traffic within each zone may be
physically isolated from traffic outside the
zone.
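The Zone and Zoning entries above describe grouping Fibre Channel ports so that only members of a common zone can communicate. The following Python fragment is a conceptual sketch only (it does not represent any switch vendor's zoning interface); it models zones as sets of WWPNs, and the zone names and WWPN values are invented for the illustration.

    # Conceptual model of zoning: each zone is a set of member WWPNs, and two
    # ports can see each other only if they share membership in at least one zone.
    zones = {
        "zone_host1_array1": {"10:00:00:00:c9:aa:bb:01", "50:06:0e:80:12:34:56:00"},
        "zone_host2_array1": {"10:00:00:00:c9:aa:bb:02", "50:06:0e:80:12:34:56:00"},
    }

    def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
        """Return True if both WWPNs are members of a common zone."""
        return any({wwpn_a, wwpn_b} <= members for members in zones.values())

    print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:0e:80:12:34:56:00"))  # True
    print(can_communicate("10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"))  # False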
-back to top-
Evaluating This Course

Please use the online evaluation system to help improve our courses.

For evaluations handled inside the Learning Center, sign in to:
https://learningcenter.hds.com/Saba/Web/Main

Evaluations can be reached by clicking the My Learning tab, followed by Evaluations & Surveys on the left navigation bar. Click the Launch link to evaluate the course.

Learning Center Sign-in location:
https://learningcenter.hds.com/Saba/Web/Main