THI2264 Student Guide v10.0x
THI2264
Book 2 of 2
© Hitachi Data Systems Corporation 2016. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Hitachi Content Platform Anywhere, Hitachi Live
Insight Solutions, ShadowImage, TrueCopy, Universal Storage Platform, Essential NAS Platform, Hi-Track, and Archivas are trademarks or registered trademarks of Hitachi Data
Systems Corporation. IBM, S/390, XRC, z/OS, and Flashcopy are trademarks or registered trademarks of International Business Machines Corporation. Microsoft, SQL Server,
Hyper-V, PowerShell, SharePoint, and Windows are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service marks, and company names are
properties of their respective owners.
Contents
BOOK 1
3. Hitachi Virtual Storage Platform F400, F600 and F800 Storage Architecture
and Hitachi Flash Storage ....................................................................................... 3-1
BOOK 2
pairresync Command – Reverse Resync ...........................................................................................15-19
pairsplit -S Command.....................................................................................................................15-20
Volume Grouping...........................................................................................................................15-20
Pair Status Transitions ...................................................................................................................15-21
Hitachi Thin Image .............................................................................................................................15-22
What Is Hitachi Thin Image? ..........................................................................................................15-22
Hitachi ShadowImage Replication Clones Versus Thin Image Snapshots ............................................. 15-25
Hitachi Thin Image Technical Details (1 of 3) ................................................................................... 15-27
Hitachi Thin Image Technical Details (2 of 3) ................................................................................... 15-27
Hitachi Thin Image Technical Details (3 of 3) ................................................................................... 15-28
Hitachi Thin Image Components .....................................................................................................15-28
Comparison: Hitachi Copy-on-Write Snapshot and Hitachi Thin Image ................................................ 15-29
Operations ....................................................................................................................................15-30
Module Summary ...............................................................................................................................15-38
Module Review ...................................................................................................................................15-38
Replication Tab in Hitachi Command Suite – Makes Controlling HUR Easier ......................................... 16-22
Hitachi High Availability Manager ....................................................................................................16-23
Complete Virtualized, High Availability and Disaster Recovery Solution ............................................... 16-24
Global-Active Device ...........................................................................................................................16-25
Global-Active Device Overview ........................................................................................................16-25
Global-Active Device – Components ................................................................................................16-27
Global-Active Device Software Requirements for VSP G1000.............................................................. 16-30
Global-Active Device – Specifications for VSP G1000 ......................................................................... 16-31
Hitachi Business Continuity Management Software ................................................................................ 16-32
Hitachi Business Continuity Manager Overview ................................................................................. 16-32
Hitachi Business Continuity Manager Functions ................................................................................ 16-33
Demo ................................................................................................................................................16-34
Online Product Overview .....................................................................................................................16-34
Module Summary ...............................................................................................................................16-35
Module Review ...................................................................................................................................16-36
Components ......................................................................................................................................18-16
Managing Users and Permissions .........................................................................................................18-18
Resource Groups Overview..................................................................................................................18-19
Resource Group Function ....................................................................................................................18-20
Resource Groups ................................................................................................................................18-21
Resource Group Properties ..................................................................................................................18-22
Hitachi Command Suite Replication Tab................................................................................................18-23
HCS Replication Tab ......................................................................................................................18-23
HCS Replication Tab Operations ......................................................................................................18-24
Module Summary ...............................................................................................................................18-26
Module Review ...................................................................................................................................18-26
HDID Complementary Products .......................................................................................19-26
Unified Management ...........................................................................................................................19-27
How Many Backup Solutions Do You Use? .......................................................................................19-27
Data Protection Is Complicated .......................................................................................................19-27
Which Data Protection Options to Choose? ......................................................................................19-30
When Data Disaster Strikes ............................................................................................................19-32
Workflow-Based Policy Management ...............................................................................................19-32
Unique Graphical User Interface .....................................................................................................19-33
New: Multitenancy Support ............................................................................................................19-33
Example Deal With HDID ...............................................................................................................19-34
Demo ................................................................................................................................................19-35
Online Product Overview .....................................................................................................................19-36
Module Summary ...............................................................................................................................19-37
Module Review ...................................................................................................................................19-38
Writable Clones ..................................................................................................................................20-21
Traditional Snapshot and NAS File Clone Differences ............................................................................. 20-21
Directory Clones .................................................................................................................................20-22
NDMP Backup Direct to Tape ...............................................................................................................20-22
HNAS Replication Access Point Replication ............................................................................................20-23
NAS Replication Object-by-Object ........................................................................................................20-23
Promote Secondary ............................................................................................................................20-24
Data Protection – Anti-Virus Support ....................................................................................................20-24
Data Migration Using Cross Volume Links .............................................................................................20-25
HNAS Data Migration to HCP ...............................................................................................................20-26
Data Migrator to Cloud Added .............................................................................................................20-26
Universal Migration .............................................................................................................................20-27
VSP G1000 Hardware .........................................................................................................................20-27
Global-Active Device and HNAS Integration ..........................................................................................20-28
Synchronous Disaster Recovery for HNAS Overview .......................................................................... 20-28
Why Is Global-Active Device Important to HNAS? ............................................................................. 20-30
Online Product Overview .....................................................................................................................20-32
Module Summary ...............................................................................................................................20-33
Module Review ...................................................................................................................................20-34
21. Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere ............. 21-1
Module Objectives ............................................................................................................................... 21-1
Hitachi Content Platform ...................................................................................................................... 21-2
What Is an HCP Object? .................................................................................................................. 21-2
What Is Hitachi Content Platform? ................................................................................................... 21-3
HCP Basics ..................................................................................................................................... 21-4
Fixed Content................................................................................................................................. 21-6
Categories of Storage ..................................................................................................................... 21-7
Object-Based Storage – Overview .................................................................................................... 21-8
Retention Times ............................................................................................................................. 21-9
Reviewing Retention ......................................................................................................................21-10
Policy Descriptions .........................................................................................................................21-11
HCP Integration With VLANs ...........................................................................................................21-12
Multiple Custom Metadata Injection ................................................................................................21-13
It’s not Just Archive Anymore .........................................................................................................21-14
Introducing Tenants and Namespaces .............................................................................................21-15
Internal Object Representation .......................................................................................................21-16
HCP – Versatile Content Platform ....................................................................................................21-17
HCP Products .....................................................................................................................................21-17
Unified HCP G10 Platform...............................................................................................................21-18
HCP G10 With Local Storage...........................................................................................................21-19
HCP G10 With Attached Storage .....................................................................................................21-19
HCP S10 .......................................................................................................................................21-20
HCP S30 .......................................................................................................................................21-21
HCP S Node ..................................................................................................................................21-22
Direct Write to HCP S10/S30 ..........................................................................................................21-23
VMware and Hyper-V Editions of HCP ..............................................................................................21-24
Hitachi Data Ingestor ..........................................................................................................................21-24
Hitachi Data Ingestor (HDI)............................................................................................................21-25
What Is Hitachi Data Ingestor? .......................................................................................................21-25
How Does Hitachi Data Ingestor Work? ...........................................................................................21-26
Hitachi Data Ingestor Overview ......................................................................................................21-27
Hitachi Data Ingestor (HDI) Specifications .......................................................................................21-28
Major Components: Server + HBA, Switch and Storage..................................................................... 21-29
Protocols in Detail .........................................................................................................................21-29
How HDI Maps to HCP Tenants and Namespaces ............................................................................. 21-30
Content Sharing Use Case: Medical Image File Sharing ..................................................................... 21-31
A Quick Look: Migration, Stubbing and Recalling .............................................................................. 21-31
HDI Is Backup Free .......................................................................................................................21-32
HDI Intelligent Caching: Migration ..................................................................................................21-32
HDI Intelligent Caching: Stubbing ...................................................................................................21-33
File Retention Utility (WORM) .........................................................................................................21-33
Roaming Home Directories .............................................................................................................21-34
HDI With Remote Server .....................................................................................................................21-34
What Is HDI With Remote Server? ..................................................................................................21-34
Why HDI With Remote Server? .......................................................................................................21-35
Solution Components .....................................................................................................................21-35
HCP Anywhere ...................................................................................................................................21-35
HCP Solution With HCP Anywhere ...................................................................................................21-36
Hitachi Content Platform Anywhere .................................................................................................21-37
HCP Solution With HCP Anywhere ...................................................................................................21-38
Desktop Application Overview .........................................................................................................21-39
HCP Anywhere App in the App Store ...............................................................................................21-39
HCP Anywhere Features .................................................................................................................21-40
Demo ................................................................................................................................................21-40
Online Product Overviews ...................................................................................................................21-41
Module Summary ...............................................................................................................................21-41
Module Review ...................................................................................................................................21-42
22. Hitachi Compute Blade and Hitachi Unified Compute Platform ...................... 22-1
Module Objectives ............................................................................................................................... 22-1
Hitachi Compute Portfolio..................................................................................................................... 22-2
Hitachi Compute Blade 500 Series ......................................................................................................... 22-2
Compute Blade 500 Chassis And Components ........................................................................................ 22-4
Hitachi Compute Blade 500 Series ......................................................................................................... 22-5
CB500 Web Console ............................................................................................................................ 22-7
Hitachi Compute Blade 2500 Series ....................................................................................................... 22-8
Compute Blade 2500 Components - Front.............................................................................................. 22-9
Compute Blade 2500 Components - Rear..............................................................................................22-10
Hitachi Compute Blade 2500 Series ......................................................................................................22-11
CB2500 Web Console..........................................................................................................................22-11
Server Blade Options ..........................................................................................................................22-12
Compute Blade Platform Features ........................................................................................................22-13
What Is Logical Partitioning? ...............................................................................................................22-15
Compute Rack Server Family ...............................................................................................................22-16
Integrated Platform Management ........................................................................................................22-17
Hitachi Compute Systems Manager ......................................................................................................22-18
HCSM Resources – Compute Blade Chassis ...........................................................................................22-18
HCSM Resources – Compute Blade Servers ...........................................................................................22-19
HCSM Resources – Compute Blade Servers (continued) ......................................................................... 22-19
Demo ................................................................................................................................................22-20
Unified Compute Platform ...................................................................................................................22-20
Unified Compute Platform – One Platform for All Workloads .............................................................. 22-21
UCP With Unified Compute Platform Director ................................................................................... 22-22
Unified Compute Platform Family Overview ......................................................................................22-23
Unified Compute Platform 4000E – Entry-Level ................................................................................ 22-25
Demo ................................................................................................................................................22-27
Online Product Overviews ...................................................................................................................22-27
Module Summary ...............................................................................................................................22-28
Your Next Steps .................................................................................................................................22-29
14. Business Continuity Overview
Module Objectives
Page 14-1
Hitachi Replication Products
Page 14-2
ShadowImage Replication
Features
• Full physical copy of a volume
• Immediately available for concurrent use by other applications (after split)
• No host processing cycles required
• No dependence on operating system, file system or database
• All copies are additionally RAID protected

Benefits
• Protects data availability
• Simplifies and increases disaster recovery testing
• Eliminates the backup window
• Reduces testing and development cycles
• Enables nondisruptive sharing of critical information

[Diagram: a production volume and its point-in-time copy; normal processing continues unaffected while the copy is used for parallel processing]
Page 14-3
Hitachi Thin Image
Benefits
• Reduce recovery time from data corruption or human errors while maximizing Hitachi disk
storage capacity
• Achieve frequent and nondisruptive data backup operations while critical applications run
unaffected
• Accelerate application testing and deployment with always-available copies of current
production information
• Significantly reduce or eliminate backup window time requirements
• Improve operational efficiency by allowing multiple processes to run in parallel with access
to the same information

Features
• Up to 1024 point-in-time snapshot copies
• Only changed data blocks stored for maximum capacity utilization
• Version tracking of backups enables easy restores of just the data you need
• Near-instantaneous restore reduces downtime and improves recovery objectives
• New, greatly improved write performance reduces response time to the host, minimizing impact
on users and applications
• Integration with industry-leading backup software applications

[Diagram: the host can access the S-VOLs; the P-VOL and its S-VOL snapshots share a Thin Image (TI) pool]
An essential component of data backup and protection solutions is the ability to quickly and
easily copy data. On HUS VM and newer systems Hitachi provides this as Hitachi Thin Image.
This function provides logical, change-based, point-in-time data replication within Hitachi
storage systems for immediate business use. Business usage can include data backup and rapid
recovery operations, as well as decision support, information processing and software testing
and development.
• Maximum capacity of 2.1PB enables larger data sets or more virtual machines to be
protected.
• Maximum snapshots increased to 1024 for greater snapshot frequency and/or longer
retention periods
Page 14-4
Hitachi TrueCopy Remote Replication Software
Features
• Synchronous solution
• Consistency group support
• The remote copy is always a mirror image

Benefits
• Disaster recovery solution
• Allows for data migration
• Increases the availability of revenue-producing applications
• Provides fast recovery with no data loss

[Diagram: the four-step synchronous write sequence (1-4) between the host, the P-VOL and the remote S-VOL]
• TrueCopy Remote Replication software can be deployed with Hitachi Universal Replicator
software's asynchronous replication capabilities to provide advanced data replication
among multiple data centers.
Page 14-5
Hitachi Universal Replicator Software
Features
• Asynchronous replication
• Leverages Virtual Storage Platform
• Performance-optimized, disk-based journaling
• Resource-optimized processes
• Advanced 3 Data Center capabilities
• Mainframe and Open Systems support

Benefits
• Resource optimization
• Mitigation of network problems and significantly reduced network costs
• Enhanced disaster recovery capabilities through 3 Data Center solutions
• Reduced costs due to single-pane-of-glass heterogeneous replication

[Diagram: a host write (WRT) to the application volume is captured in the local journal (JNL) volume, pulled to the remote journal volume and then applied to the remote application volume]
• The following describes the basic technology behind the disk-optimized journals:
o I/O is initiated by the application and sent to the Virtual Storage Platform.
o It is captured in cache and sent to the disk journal, at which point it is written to
disk.
o The remote system pulls the data and writes it to its own journals and then to
the replicated application volumes.
• Universal Replicator software sorts the I/Os at the remote site by sequence and time
stamp (mainframe) and guarantees data integrity.
• Note that Universal Replicator software offers full support for consistency groups
through the journal mechanism (journal groups).
Page 14-6
Hitachi Replication Manager
This uniquely integrated solution allows you to closely monitor critical storage components and
better manage recovery point objectives (RPO) and recovery time objectives (RTO).
This software tool simplifies replication management and optimizes the configuration,
operations and monitoring of the critical storage components of the replication infrastructure. It
leverages the volume replication capabilities of the Hitachi disk array storage systems to reduce
the workload involved in management tasks such as protecting and restoring system data.
Replication Manager reduces the need for manual configuration and provides true replication
function management and workflow capabilities.
Page 14-7
Tools Used for Setting Up Replication
Page 14-8
Tools Used for Setting Up Replication
o CCI represents the command line interface for performing replication operations.
o HORCM files contain the configuration for the volumes to be replicated and are used by
the commands available through CCI.
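As an illustration only, the skeleton of a HORCM configuration file for one CCI instance might
look like the following; the instance number, device path, serial number, LDEV ID, group and
service names are all hypothetical and must be replaced with values from your own environment
(see the CCI User and Reference Guide for the authoritative syntax):

    # horcm0.conf - CCI instance 0 on the management host (illustrative values only)
    HORCM_MON
    #ip_address   service   poll(10ms)   timeout(10ms)
    localhost     horcm0    1000         3000

    HORCM_CMD
    #command device presented to this host (example device file)
    /dev/sdh

    HORCM_LDEV
    #dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
    DBGRP        db_vol01   356001    01:20            0

    HORCM_INST
    #dev_group   ip_address   service (partner instance)
    DBGRP        localhost    horcm1

A matching file (for example horcm1.conf) describes the S-VOL side, and both instances are
started with horcmstart.sh 0 1 before any pair commands are issued.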
Page 14-9
Requirements for All Replication Products
Replication Operations
Basic operations when working with replication products
• Paircreate
• Pairsplit
• Pairresync
• Pairsplit -S (pair deletion)
Commands are consistent across products (in-system or remote replication), but
implementation varies depending on the product
• In-system — all operations with volumes within the same storage system
• Remote — all operations with volumes across different storage systems
• Use the product manual to identify product-specific behavior of the above commands
A volume with source data is called a primary volume (P-VOL), and a volume to which the
data is copied is a secondary volume (S-VOL)
[Diagram: host I/O continues to the P-VOL while the P-VOL and S-VOL are in PAIR status]
Basic operations:
• Pair creation
• Splitting pairs
• Pair resynchronization
• Pair deletion
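As a hedged sketch of how these basic operations are typically issued through CCI (the group
name, instance number and timeout values are examples only; in-system ShadowImage replication
is assumed here, and the same verbs apply to remote replication without HORCC_MRCF):

    # Illustrative CCI pair lifecycle (in-system replication example)
    export HORCMINST=0                      # run commands from instance 0 (P-VOL side)
    export HORCC_MRCF=1                     # select in-system (ShadowImage) mode
    paircreate  -g DBGRP -vl                # create the pair; the local volume becomes the P-VOL
    pairevtwait -g DBGRP -s pair -t 3600    # wait until the initial copy completes (PAIR status)
    pairsplit   -g DBGRP                    # split the pair; the S-VOL becomes read/write accessible
    pairresync  -g DBGRP                    # resynchronize the split pair (P-VOL to S-VOL)
    pairsplit   -g DBGRP -S                 # delete the pair; both volumes return to SMPL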
Page 14-10
Copy Operations
If your Hitachi Virtual Storage Platform G1000 has encryption disk adapters (DKAs), you can
copy an encrypted volume to an unencrypted volume. There is no guard logic to enforce
copying encrypted P-VOLs to only encrypted S-VOLs. Unless there is a specific reason for the
data to become unencrypted, make sure you maintain the encryption by using only encrypted
S-VOLs.
Page 14-11
Thin Provisioning “Awareness”
[Diagram: pair create instruction for a thin P-VOL and S-VOL backed by a pool; allocated pages
are deleted by writing zeros and returning them to the pool, leaving usage at 0%]

Saves bandwidth and reduces initial copy time: in "thin-to-thin" replication pairings, only data
pages actually consumed (allocated) from the HDP pool need to be copied during the initial copy.

Reduces license costs: you only have to provision license capacity for the capacity actually
consumed (allocated) from the HDP pool.
Thin provisioning “awareness” applies to all Hitachi replication products, including Hitachi
Universal Replicator!
Page 14-12
Online Product Overview
https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv
Module Summary
Page 14-13
Module Review
1. List the software that offers a GUI for performing all replication
operations.
Page 14-14
15. Hitachi In-System Replication Bundle
Module Objectives
Page 15-1
Hitachi ShadowImage Replication
Hitachi ShadowImage® (SI) uses local mirroring technology to create and maintain a full copy
of a data volume within a Hitachi Virtual Storage Platform G1000 (VSP G1000) storage system.
Using SI volume copies (for example, as backups, with secondary host applications, for data
mining, for testing) allows you to continue seamless business operations without stopping host
application input/output (I/O) to the production volume.
It enables server-free backups, which allows customers to exceed service level agreements
(SLAs). It fulfills 2 primary functions:
ShadowImage Replication allows the pair to be split so that the secondary volume can be used for
system backups, testing and data mining applications while the customer's business continues to
run. It uses either graphical or command line interfaces to create a copy and then control data
replication and fast resynchronization of logical volumes within the system.
Page 15-2
ShadowImage Replication Overview
RAID protection: A disk failure or automatic error correction is handled completely transparently,
with no interruption.
At-Time split means that more than one pair in a CTG (Consistency Group) can be split at the
same time.
Page 15-3
ShadowImage Replication RAID-Protected Clones
ShadowImage In-System Replication software for IBM z/OS protects mainframe data in the
same manner. For mainframes, ShadowImage In-System Replication software can provide up to
three duplicates of one primary volume.
In Storage Navigator (java interface), the Paircreate command creates the first Level 1 “S”
volume. The set command can be used to create a second and third Level 1 “S” volume. And
the cascade command can be used to create the Level 2 “S” volumes off the Level 1 “S”
volumes.
Page 15-4
Easy to Create ShadowImage Replication Clones
• Select a volume that you want to duplicate. This becomes the primary volume (P-VOL).
• Identify another volume to contain the copy. This becomes the secondary volume (S-
VOL).
• During the initial copy, the P-VOL remains available for read/write. After the copy is
completed, subsequent write operations to the P-VOL are regularly duplicated to the S-
VOL.
• The P-VOL and S-VOLs remain paired until they are split. The P-VOL for a split pair
continues to be updated but data in the S-VOL remains as it was at the time of the split.
The S-VOL contains a mirror image of the original volume at that point in time.
o You can pair the volumes again by resynchronizing the update data from P-VOL-
to-S-VOL or from S-VOL-to-P-VOL, as circumstance dictates.
Page 15-5
ShadowImage Replication Consistency Groups
A consistency group (CTG) is a group of pairs on which copy operations are performed
simultaneously and in which the status of the pairs remains consistent. A consistency group can
include pairs that reside in up to 4 primary and secondary systems.
Use a consistency group to perform tasks on the SI pairs in the group at the same time,
including CTG pair-split tasks. Using a CTG to perform tasks ensures the consistency of the pair
status for all pairs in the group.
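As a hedged sketch of a consistency-group (at-time) split through CCI (the group name is
hypothetical, and the pairs are assumed to have been created with a consistency group attribute,
for example with paircreate's -m grp option where supported):

    # Illustrative at-time split of all ShadowImage pairs in one consistency group
    paircreate  -g CTG01 -vl -m grp          # create the pairs with a consistency group attribute
    pairsplit   -g CTG01                     # all pairs in the group split at the same point in time
    pairevtwait -g CTG01 -s psus -t 600      # confirm that every pair reached PSUS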
Page 15-6
Overview
• ShadowImage Replication allows you to create a single, local copy of any active
application volume while benefiting from full RAID protection.
o This mirrored copy can be used by another application or system for a variety of
purposes, including data mining, full volume batch cycle testing and backups.
• It can provide up to 9 secondary volumes (S-VOL) per primary volume (P-VOL) within
the same system to maintain redundancy of the primary volume.
o It allows you to split and combine duplex volumes and provides you with the
contents of static volumes without stopping access.
• ShadowImage Replication operations are nondisruptive and allow the primary volume of
each volume pair to remain online for all hosts for both read and write I/O operations.
Page 15-7
Applications
Application development
o Execute logical backups at faster speeds and with less effort than previously
possible
Page 15-8
ShadowImage Replication Licensing
Total capacity of all P-VOLs and S-VOLs must be less than or equal to the installed license
capacity
For dynamic provisioning and dynamic tiering volumes, the pool capacity being used by the
volumes is counted
• The total capacity of all P-VOLs and S-VOLs must be less than or equal to the installed
license capacity. Volume capacity is counted only once, even if you use the volume more
than once. You do not need to multiply the capacity by the number of times a volume is
used. For example, a P-VOL used as the source volume for 3 pairs is counted only once.
• For a normal volume, the total volume capacity is counted, but for a DP-VOL (a virtual
volume used in dynamic provisioning, dynamic tiering or active flash) the pool capacity
being used by the volume is counted.
• After you start performing pair tasks, monitor your capacity requirements to keep the
used capacity within the capacity of the installed license.
• You can continue using ShadowImage Replication volumes in pairs for 30 days after
licensed capacity is exceeded. After 30 days, the only allowed operation is pair deletion.
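As a hedged worked example of the counting rules above: a 2TB normal P-VOL paired with three
2TB normal S-VOLs consumes 2TB + (3 × 2TB) = 8TB of license capacity, because the P-VOL is
counted once even though it participates in three pairs. If those same S-VOLs were DP-VOLs with
only 500GB of pool pages allocated to each, the S-VOL contribution would instead be
3 × 500GB = 1.5TB.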
Page 15-9
Management Resources
[Diagram: CCI on a management host connected to the storage system through a command device (CMD-DEV)]
CCI is a tool that uses the command line interface to run commands that perform most of the
same tasks you can do with Hitachi Device Manager - Storage Navigator (HDvM-SN).
Page 15-10
Internal ShadowImage Replication Operation
[Diagram: (1) the host issues a write I/O to the P-VOL, (2) write complete is returned to the host, and (3) the write is replicated asynchronously to the S-VOL]
Creating a pair causes Hitachi Virtual Storage Platform G1000 to start the initial copy. During
the initial copy, the P-VOL remains available for read and write operations from the host. After
the initial copy, Virtual Storage Platform G1000 periodically copies the differential data in the P-
VOL to the S-VOL. Subsequent write operations to the P-VOL are regularly duplicated to the S-
VOL. The data in the P-VOL is copied to the S-VOL.
• Initial copy is an operation VSP G1000 performs when you create a copy pair.
• Data on the P-VOL is copied to the S-VOL for the initial copy using the following
workflow.
• VSP G1000 goes through the following workflow to create an initial copy:
a. The S-VOLs are not paired. You create the copy pair.
b. The initial copy is in progress (COPY(PD)/COPY status). VSP G1000 copies the P-
VOL data to the S-VOL. A P-VOL continues receiving updates from the host
during the initial copy.
c. The initial copy is complete and the volumes are paired (PAIR status).
Page 15-11
Operations
[Diagram: timeline of pair operations for applications A and B, with a backup taken from the split copy]
o paircreate
o pairsplit
o pairresynchronize
Page 15-12
paircreate Command
[Diagram: the initial copy replicates P-VOL data to the S-VOL; the P-VOL remains available to the host for read/write I/O, and writes made during the initial copy are recorded in a differential bitmap and duplicated to the S-VOL by update copy afterward]
• The volumes, which will become the P-VOL and S-VOL, must both be in the SMPL
(simplex) state before becoming a ShadowImage Replication pair.
• ShadowImage Replication initial copy operation copies all data from the P-VOL to the
associated S-VOL.
• P-VOL remains available to all hosts for read and write I/Os throughout the initial copy
operation.
• Write operations performed on the P-VOL during the initial copy operations will always
be duplicated to the S-VOL after the initial copy is complete.
• Status of the pair is COPY while the initial copy operation is in progress; the pair status
changes to PAIR when the initial copy is complete.
• You can select the pace for the initial copy operation when creating pairs.
o Slower
o Medium
o Faster
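Purely as an illustration of the pace selection described above (group, device and pace values
are examples; confirm the valid pace range in the CCI reference for your storage system):

    # Illustrative paircreate with an explicit copy pace
    paircreate  -g DBGRP -d db_vol01 -vl -c 3     # slower pace: less back-end load during the initial copy
    paircreate  -g DBGRP -d db_vol02 -vl -c 15    # faster pace: shorter initial copy, more back-end load
    pairdisplay -g DBGRP -fcx                     # status shows COPY during the initial copy, then PAIR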
Page 15-13
pairsplit Command
• The best timing is based on the amount of write activity on the P-VOL and the amount
of time elapsed between update copies.
[Diagram: the differential bitmap records changes to the P-VOL and S-VOL while the pair is split]
The ShadowImage Replication pairsplit operation performs all pending S-VOL updates (those
issued prior to the split command and recorded in the P-VOL bitmap) to make the S-VOL
identical to the state of the P-VOL when the suspend command was issued and then provides
full read/write access to the split S-VOL.
You can split existing pairs as needed and you can use the paircreate operation to create and
split pairs in one step. This feature provides point-in-time backup of your data and facilitates
real data testing by making the ShadowImage Replication copies (S-VOLs) available for host
access.
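A hedged sketch of both approaches described above (the group name is an example):

    # Illustrative split of an existing pair, and create-and-split in one step
    pairsplit   -g DBGRP                     # split an existing pair; status changes to PSUS
    paircreate  -g DBGRP -vl -split          # create a new pair and split it in a single operation
    pairevtwait -g DBGRP -s psus -t 600      # wait for the split before mounting or backing up the S-VOL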
Page 15-14
pairsplit Command
When the split operation is complete, the pair status changes to PSUS (pair suspended) and you
have full read/write access to the split S-VOL.
• While the pair is split, the system establishes a bitmap for the split P-VOL and S-VOL
and records all updates to both volumes.
[Diagram: host I/O marks changed data at locations 3, 10, 15 and 18 on the P-VOL while the pair is in PAIR status; after the split at 10:01 a.m. that changed data is copied to the S-VOL]
2. The P-VOL and S-VOL are in PAIR status as of 10:00 a.m. Data at address 3, 10, 15 and
18 are marked for copying as a result of host I/O.
3. The status of the P-VOL and S-VOL is changed to PSUS. The bitmap for the P-VOL
contains information about changes that still need to be copied over to the S-VOL.
4. Data at address 3, 10, 15 and 18 are sent across to the S-VOL from the P-VOL, making
the S-VOL identical to the P-VOL at the time of the split command.
Page 15-15
pairresync Command – Operation Types
[Diagram: a normal resync stops I/O to the S-VOL, merges the P-VOL and S-VOL differential bitmaps and copies data from the P-VOL (PSUS) to the S-VOL (SSUS); a reverse resync copies in the other direction; in both cases the pair returns to PAIR status]
The Hitachi ShadowImage Replication pairresync operation resynchronizes the suspended pairs
(PSUS) or the suspended on error pairs (PSUE). When the pairresync operation starts, the pair
status changes to COPY(RS) or COPY(RS-R). The pair status changes to PAIR when the
pairresync operation completes.
• Normal: The normal pairresync operation resynchronizes the S-VOL with the P-VOL.
o The S-VOL becomes inaccessible to all hosts for write operations and the P-VOL
is accessible to all hosts for both read and write operations during a normal
pairresync.
o The normal pairresync operation can be executed for pairs with the status PSUS
and PSUE.
Page 15-16
pairresync Command – Operation Types
o The pair status during a reverse resync operation is COPY(RS-R) and the S-VOL
becomes inaccessible to all hosts for write operations during a reverse pairresync
operation.
o The P-VOL is inaccessible for both read and write operations and the write
operations on P-VOL will always be reflected to the S-VOL.
When a pairresync operation is performed on a suspended pair (status = PSUS), the storage
system merges the S-VOL differential bitmap into the P-VOL differential bitmap and then copies
all flagged data from the P-VOL to the S-VOL. When a reverse pairresync operation is
performed on a suspended pair, the storage system merges the P-VOL differential bitmap into
the S-VOL differential bitmap and then copies all flagged data from the S-VOL to the P-VOL.
This ensures that the P-VOL and S-VOL are properly resynchronized in the desired direction.
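As a hedged sketch of the two resync directions (the group name is an example):

    # Illustrative normal and reverse resynchronization
    pairresync  -g DBGRP                     # normal resync: flagged data copied P-VOL to S-VOL, status COPY(RS)
    pairresync  -g DBGRP -restore            # reverse resync: flagged data copied S-VOL to P-VOL, status COPY(RS-R)
    pairevtwait -g DBGRP -s pair -t 3600     # wait until the pair returns to PAIR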
Page 15-17
pairresync Command – Normal Resync
[Diagram: while the pair is suspended, host I/O marks changed data at locations 10, 15, 18 and 29 on the P-VOL and at 10, 19 and 23 on the S-VOL; after a normal resync the merged set of locations (10, 15, 18, 19, 23 and 29) is copied from the P-VOL to the S-VOL by update copy]
5. The status of the P-VOL and the S-VOL is PSUS (pair suspended)/SSUS (secondary
suspended) as of 10:00 a.m.
Data at locations 10, 15, 18 and 29 on the P-VOL are marked as changed.
Data at locations 10, 19 and 23 on the S-VOL are marked as changed.
6. At 10:00 a.m., a pairresync (normal) command is issued. The bitmaps for the P-VOL and
S-VOL are merged.
The resulting bitmap has locations 10, 15, 18, 19, 23 and 29 marked as changed.
Data at these locations are sent from the P-VOL to the S-VOL as part of an update copy
operation.
7. Once the update copy operation in step 2 is complete, the P-VOL and S-VOL are
declared a PAIR again.
Page 15-18
pairresync Command – Reverse Resync
[Diagram: a reverse resync merges the bitmaps and copies the changed data from the S-VOL back to the P-VOL by update copy]
Page 15-19
pairsplit -S Command
[Diagram: pairsplit -S changes the P-VOL and S-VOL from PAIR to SMPL and stops copy operations to the S-VOL]
Volume Grouping
[Diagram: file server and database volumes (for example, VOL 3 and VOL 4 paired with VOL 13 and VOL 14) grouped as P-VOLs and S-VOLs]
You can define or set up ShadowImage Replication pairs in groups, which enables you to issue
commands or perform operations for a single pair or a group of pairs.
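A hedged sketch of group-level versus single-pair operations (group and pair names are examples):

    # Illustrative group operations: one command acts on every pair in the group,
    # or on a single pair selected with -d
    pairsplit   -g FSGRP                     # split every pair defined in group FSGRP
    pairresync  -g FSGRP -d fs_vol03         # resynchronize only the pair named fs_vol03
    pairsplit   -g FSGRP -S                  # delete all pairs in the group (back to SMPL)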
Page 15-20
Pair Status Transitions
[Diagram: pair status transitions: SMPL changes to COPY(PD) on paircreate (initial copy), to PAIR when the initial copy completes, to PSUS on pairsplit (split pair), to COPY(RS) or COPY(RS-R) on pairresync (update copy), and back to PAIR]
This illustration shows the ShadowImage Replication pair status transitions and the relationship
between pair status and ShadowImage Replication operations. Starting in the upper left of the
illustration, if a volume is not assigned to a ShadowImage Replication pair, its status is SMPL.
• When you create a pair, the status of the P-VOL and S-VOL changes to COPY(PD).
• When the initial copy operation is complete, the pair status becomes PAIR.
• If the storage system cannot maintain PAIR status for any reason, or if you suspend
the pair on error (pairsplit -E), the pair status changes to PSUE. When you suspend a
pair (pairsplit), the pair status changes to COPY(SP).
• When the pairsplit operation is complete, the pair status changes to PSUS to enable
you to access the suspended S-VOL.
• When you start a pairresync operation, the pair status changes to COPY(RS).
• When you specify reverse mode for a pairresync operation (pairresync –restore),
the pair status changes to COPY(RS-R) (data is copied in the reverse direction from the
S-VOL to the P-VOL).
• When the pairresync operation is complete, the pair status changes to PAIR.
• When you release a pair (pairsplit -S), the pair status changes to SMPL.
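As a hedged sketch of how these status transitions can be observed from CCI (the group name is
an example):

    # Illustrative status checks for the transitions described above
    pairdisplay -g DBGRP -fcx                # reports SMPL, COPY, PAIR, PSUS, PSUE or COPY(RS)/COPY(RS-R)
    pairvolchk  -g DBGRP -ss                 # returns the status of the paired volume as a return code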
Page 15-21
Hitachi Thin Image
• Licensing
o For VSP family, Thin Image is part of Hitachi In-System Replication bundle (ISR),
free license key for any customer that has In-System Replication bundle under
maintenance.
o For Hitachi Unified Storage VM, HTI is part of the local protection bundle, free
license key for any customer that has the bundle under maintenance
o Requires Hitachi Dynamic Provisioning (HDP) be licensed for the capacity of the
HTI pool when not using Dynamic Provisioning for source volumes.
• Pool
o Uses a special HTI pool that is created much like an HDP pool; cannot be shared
with a regular HDP pool
o Pool can be up to 4PB and can be dynamically grown and have a customizable
threshold
Page 15-22
What Is Hitachi Thin Image?
• Shared memory
o Does not use shared memory, rather a cache management device which is
stored in the HTI pool
• V-VOLS
o Uses V-VOLS much like Hitachi Copy-on-Write Snapshot. P-VOL and V-VOL
cannot exceed 4TB size
• Management
• Copy mechanism
• Advanced configuration
o Note: Check with your HDS representative for currently supported configurations.
Page 15-23
What Is Hitachi Thin Image?
Subsequent writes to the same block for the same snapshot do not have to be
moved
Single instance of data stored in HDP snap pool regardless of number of snaps
o Snapshot image: A virtual replica volume for the primary volume (V-VOL); this is
an internal volume that is held for restoration purposes
Page 15-24
Hitachi ShadowImage Replication Clones Versus Thin Image Snapshots
• The P-VOL and the S-VOL have exactly the same size in Hitachi ShadowImage
Replication.
• In Hitachi Thin Image, less disk space is required for building a V-VOL image since only
part of the V-VOL is on the pool and the rest is still on the primary volume.
Pair configuration
Restore
• A primary volume can only be restored from the corresponding secondary volume in
ShadowImage Replication.
• With Thin Image, the primary volume can be restored from any snapshot image (V-VOL).
Page 15-25
Hitachi ShadowImage Replication Clones Versus Thin Image Snapshots
Simple positioning
• Clones should be positioned for data repurposing and data protection (for example, DR
testing) where performance is a primary concern
• Snapshots should be positioned for data protection (for example, backup) only where
space saving is the primary concern
                            ShadowImage        Thin Image
                            P-VOL = S-VOL      P-VOL ≥ V-VOL
Size of physical volume     P-VOL = S-VOL      P-VOL ≥ V-VOL
Page 15-26
Hitachi Thin Image Technical Details (1 of 3)
License
• Part of the Hitachi In-System Data Replication bundle
• Free license key for any customer that has In-System Replication bundle
under maintenance
• Requires a Hitachi Dynamic Provisioning (HDP) license for capacity of the
Thin Image (HTI) pool when not using HDP for source volumes
Pool
• Uses a special HTI pool, which is created similarly to an HDP pool
• Cannot be shared with a regular HDP pool or with Hitachi Copy-on-Write
Snapshot
• Pool can be up to 4PB, grow dynamically and have a customizable threshold
Shared memory
• Does not use shared memory except for difference tables
• Uses a cache management device, which is stored in the HTI pool
V-VOLs
• Uses V-VOLs like Hitachi Copy-on-Write Snapshot, but P-VOL and V-VOL
cannot exceed 4TB size
• Able to create 1,024 snapshots with a max of 32K in an array
• Does not have anonymous snapshot feature of Hitachi Unified Storage 100
Copy-on-Write
Page 15-27
Hitachi Thin Image Technical Details (3 of 3)
Management
• Managed through Hitachi Storage Navigator, RAIDCOM CLI (up to 1,024
generations) or CCI (up to 64 generations)
• Hitachi Replication Manager support in a future release
Copy mechanism
• Employs a Copy-After-Write (CAW) mechanism instead of Copy-on-Write
whenever possible
Advanced configuration
• Can be combined with Hitachi ShadowImage Replication, Hitachi Universal
Replicator and Hitachi TrueCopy software exactly like Copy-on-Write (see
tables in manual for complete details)
• Can be used with consistency groups
[Diagram: the host can access the S-VOL; the P-VOL and S-VOL are backed by the TI pool]
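The Management and Copy mechanism notes above mention the RAIDCOM CLI; purely as a hedged
sketch (LDEV IDs, pool ID, mirror ID and snapshot group name are hypothetical, and the exact
syntax should be confirmed in the CCI/raidcom command reference for your microcode level),
taking a Thin Image snapshot might look like this:

    # Illustrative Thin Image snapshot operations with raidcom
    raidcom add snapshot -ldev_id 0x2000 0x3000 -pool 5 -snapshotgroup SNAPGRP    # associate P-VOL, V-VOL and HTI pool
    raidcom modify snapshot -ldev_id 0x2000 -snapshot_data create -mirror_id 3    # store a point-in-time snapshot
    raidcom get snapshot -ldev_id 0x2000                                          # list the snapshot generations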
Page 15-28
Comparison: Hitachi Copy-on-Write Snapshot and Hitachi Thin Image
For latest specifications, refer to technical documentation. You can access the current
replication customer documentation for Hitachi Virtual Storage Platform G1000 and Hitachi
Virtual Storage Platform at: http://www.hds.com/corporate/tech-docs.html
For the purposes of this table, CoW stands for Copy-on-Write and CAW stands for Copy-After-
Write.
Page 15-29
Operations
[Diagram: Copy-on-Write operation: (2) if the block was not previously moved (overwrite condition), the old data block is moved to the pool; (4) the new data block is written to the P-VOL]
3. The write completion status is returned to the host after the snapshot data is stored.
Page 15-30
Operations
2. The write completion status is returned to the host before the snapshot data is stored.
[Diagram: Copy-After-Write operation: snapshot data for V-VOLs V2 and V3 is moved to the pool in the background after write completion is returned]
Page 15-31
Operations
[Diagram: a P-VOL with snapshot V-VOLs V2 and V3 backed by the pool]
Page 15-32
Operations
[Diagram: a pairsplit issued Monday 4 p.m. establishes snapshot V2 of the P-VOL in the pool]
Page 15-33
Operations
[Diagram: snapshot V2 (pairsplit issued Monday 4 p.m.) of the P-VOL is maintained in the pool]
Page 15-34
Operations
[Diagram: a second pairsplit issued at midnight establishes snapshot V3, in addition to snapshot V2 from Monday 4 p.m.]
Page 15-35
Operations
[Diagram: the P-VOL with its snapshots V2 (Monday 4 p.m.) and V3 (midnight) maintained in the pool]
Page 15-36
Operations
[Diagram: restore from a snapshot: read/write to the P-VOL is possible immediately after the restore command; only differential data is copied back from the pool to the P-VOL]
• Restoring a primary volume can be done instantly from any V-VOL because it does not
involve immediately moving data from the pool to the P-VOL. Only pointers need to be modified.
• The data is then copied from the pool to the P-VOL in the background.
• If the P-VOL became physically damaged, all V-VOLs would be destroyed as well and
then a restore is not possible.
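As a hedged sketch of the instant restore described above (hypothetical LDEV and mirror IDs;
raidcom syntax should be confirmed against the command reference):

    # Illustrative restore of the P-VOL from one snapshot generation
    raidcom modify snapshot -ldev_id 0x2000 -snapshot_data restore -mirror_id 3
    # The P-VOL is usable immediately; differential data is copied back from the pool in the background.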
Page 15-37
Module Summary
Module Review
Page 15-38
16. Hitachi Remote Replication
Module Objectives
Page 16-1
Hitachi TrueCopy Remote Replication Bundle (Synchronous)
o Can remotely copy data to a second data center located up to 200 miles/320 km
away (distance limit is variable, but typically around 50–60 km for HUS).
o Uses synchronous data transfers, which means data written by the host server
requires a write acknowledgment from the remote location, as an indication of a
successful data copy, before the host server can proceed to the next write I/O
in the sequence.
• In addition to disaster recovery, use case examples for TrueCopy Remote Replication
bundle also include:
Page 16-2
Typical TrueCopy Remote Replication Bundle Environment
[Diagram: a typical TrueCopy environment: primary and secondary host servers running CCI (the secondary is optional), a local array and a remote array each presenting a command device, a TrueCopy link between the arrays, and DM-LUs on modular storage only]
• A typical configuration consists of the following elements (many, but not all require user
setup):
o Two Hitachi arrays — 1 on the local side connected to a host and 1 on the
remote side connected to the local array
• TrueCopy Remote Replication bundle replication between Hitachi enterprise storage and
Hitachi modular storage is not supported.
Page 16-3
Basic TrueCopy Remote Replication Bundle Operation
o Preparation
CCI (RAID manager) is optional. (You can use the GUI; command devices
are only necessary when CCI is used.)
Data at remote site remains synchronized with local site as data changes occur
Page 16-4
Basic TrueCopy Remote Replication Bundle Operation
• Data in a TrueCopy Remote Replication bundle backup stays synchronized with the data
in the local array.
o This happens when data is written from the host to the local array, then to the
remote system through Fibre Channel or iSCSI link.
o The host holds subsequent output until acknowledgement is received from the
remote array for the previous output.
• When a synchronized pair is split, writes to the primary volume are no longer copied to
the secondary side. Doing this means that the pair is no longer synchronous.
• Output to the local array is cached until the primary and secondary volumes are re-
synchronized.
• When resynchronization takes place, only the changed data is transferred, rather than
the entire primary volume, which reduces copy time.
o These in-system copy tools allow restoration from 1 or more additional copies of
critical data.
• Besides disaster recovery, TrueCopy Remote Replication bundle backup copies can be
used for test and development, data warehousing and mining or migration applications.
• Recovery objectives
o Recovery point objective (RPO): Point in time to which data must be restored to
successfully resume processing
Page 16-5
TrueCopy Remote Replication Bundle (Synchronous)
[Diagram: the P-VOL replicated synchronously to the S-VOL]
o Distance limit is variable, but typically less than 25 miles (around 50–60 km for
Hitachi Unified Storage)
Page 16-6
How TrueCopy Remote Replication Works
[Diagram: orders database, log file and inventory volumes (P-VOLs) on the local system mirrored to S-VOLs on the remote system; step 4: write acknowledgement returned to the host]
TrueCopy Remote Replication bundle achieves zero recovery point objective (RPO) with an
immediate replication of data from the local storage system (P-VOL) over to the remote system
(S-VOL) using a first in first out (FIFO) mirrored data write sequence. Integrity of the replication
is maintained with acknowledgement from the remote system, which indicates a successful
write.
• When the local storage device (MCU) receives the write data in cache, TrueCopy Remote
Replication bundle synchronously transfers the data from the MCU’s cache to the remote
system’s (RCU) cache.
• RCU sends a write acknowledgement to MCU once the data is received in its cache.
• When the MCU receives the write acknowledgement, it sends I/O complete (channel end
and device end) to the host.
Page 16-7
Hitachi Remote Replication
Easy to Create Clones
Basic TrueCopy Remote Replication bundle operations consist of creating, splitting, re-
synchronizing, swapping and deleting a pair:
• Create pair:
o This establishes the initial copy using 2 logical units that you specify
o Data is copied from the P-VOL to the S-VOL
o The P-VOL remains available to the host for read and write throughout the
operation.
o Writes to the P-VOL are duplicated to the S-VOL
o The pair status changes to Paired when the initial copy is complete.
• Split:
o The S-VOL is made identical to the P-VOL and then copying from the P-VOL
stops.
o Read/write access becomes available to and from the S-VOL
o While the pair is split, the array keeps track of changes to the P-VOL and S-VOL
in track maps.
o The P-VOL remains fully accessible in Split status.
Page 16-8
Hitachi Remote Replication
Easy to Create Clones
• Resynchronize pair:
o When a pair is re-synchronized, changes in the P-VOL since the split are copied
to the S-VOL, making the S-VOL identical to the P-VOL again.
o During a resync operation, the S-VOL is inaccessible to hosts for write operations;
the P-VOL remains accessible for read/write.
o If a pair was suspended by the system because of a pair failure, the entire P-VOL
is copied to the S-VOL during a resync.
• Swap pair:
• Delete pair
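The same create, split, resync, swap and delete operations can also be driven from CCI on the host. The following is a minimal sketch only; it assumes a TrueCopy pair group named TC_GRP already defined in the HORCM configuration files, with HORCM instance 0 on the primary host and instance 1 on the secondary host (group and instance names are illustrative, not from this course):

    # Create the pair; -vl makes the local volume the P-VOL, -f sets the fence level
    paircreate -g TC_GRP -vl -f never -I0
    pairevtwait -g TC_GRP -s pair -t 3600 -I0   # wait until the status reaches PAIR
    pairdisplay -g TC_GRP -fcx -I0              # show pair status and copy progress

    # Split, then resynchronize the pair
    pairsplit  -g TC_GRP -I0
    pairresync -g TC_GRP -I0

    # Swap the pair roles from the secondary side, or delete the pair entirely
    horctakeover -g TC_GRP -I1
    pairsplit -g TC_GRP -S -I0

These are standard RAID Manager commands; confirm the exact options against the CCI reference for the storage system in use.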
Page 16-9
Hitachi Remote Replication
Volume States
Volume States
TrueCopy Remote Replication bundle volume pairs have 5 typical states. These states are used
to manage the health of volume pairs.
• COPY: The initial copy operation for this pair is in progress; this pair is not yet
synchronized. During this status, the P-VOL has read and write access and S-VOL has
read only status.
• PAIR: This volume is synchronized. The updates to the P-VOL are duplicated to the S-
VOL.
• PSUE (pair suspended due to error): This pair is not synchronized. It has been
suspended due to an error condition.
Page 16-10
Hitachi Remote Replication
Hitachi Universal Replicator
Universal Replicator presents a solution to avoid cases when a data center is affected by a
disaster that stops operations for a long period of time.
In the Universal Replicator system, a secondary storage system is located at a remote site from
the primary storage system at the main data center and the data on the primary volumes (P-
VOLs) at the primary site is copied to the secondary volumes (S-VOLs) at the remote site
asynchronously from the host write operations to the P-VOLs.
Journal data is created synchronously with the updates to the P-VOL to provide a copy of the
data written to the P-VOL.
The journal data is managed at the primary and secondary sites to ensure the consistency of
the primary and secondary volumes.
TrueCopy Synchronous software and HUR can be combined together to allow advanced 3-data
center configurations for optimal data protection.
Page 16-11
Hitachi Remote Replication
Hitachi Universal Replicator Benefits
Page 16-12
Hitachi Remote Replication
Hitachi Universal Replicator Functions
Host I/O process completes immediately after storing write data to the cache memory of
primary storage system master control unit (MCU)
• Then the data is asynchronously copied to secondary storage system remote disk control unit
(RCU)
MCU stores data to be transferred in journal cache to be destaged to journal volume in the
event of link failure
Universal Replicator software provides consistency of copied data by maintaining write
order in copy process
• To achieve this, it attaches write order information to the data in copy process
[Diagram: HUR data flow — 1. write I/O from the primary host to the P-VOL; 2. write complete returned to the host; 3. asynchronous remote copy from the master journal volume (JNL-VOL) to the restore journal volume; 4. remote copy complete, with the data restored to the S-VOL.]
Remote replication for a Universal Replicator (HUR) pair is accomplished using the master
journal volume on the primary storage system and the restore journal volume on the secondary
storage system. As shown in the following figure, the P-VOL data and subsequent updates are
transferred to the S-VOL by obtain journal, journal copy and restore journal operations involving
the master and restore journal volumes.
Replication Operations
• Obtain journal: Obtain journal operations are performed when the primary storage
system writes journal data to the master journal volume.
• Journal copy: Journal copy operations are performed when journal data is copied from
the master journal volume to the restore journal volume on the secondary storage
system.
• Restore journal: Restore journal operations are performed when the secondary
storage system writes journal data in the restore journal volume to the S-VOL.
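As a rough illustration of how the journals are referenced when a HUR pair is created from CCI, the sketch below assumes a pair group UR_GRP defined in the HORCM files, a master journal with ID 0 on the primary system and a restore journal with ID 0 on the secondary system (all names and IDs are examples, not taken from this course):

    # Create an asynchronous (HUR) pair; -jp/-js give the master and restore journal IDs
    paircreate -g UR_GRP -vl -f async -jp 0 -js 0 -I0
    pairdisplay -g UR_GRP -fcx -I0     # monitor the initial copy progress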
Page 16-13
Hitachi Remote Replication
Hitachi Universal Replicator Hardware
[Diagram: HUR port attributes — Initiator ports on the MCU connect to RCU Target ports on the RCU for one copy direction, and RCU Target ports on the MCU receive from Initiator ports on the RCU for the reverse direction.]
• In Hitachi Virtual Storage Platform mid-range, all ports are bi-directional, therefore there
is no need to change the port attribute.
Page 16-14
Hitachi Remote Replication
Hitachi Universal Replicator Components
Journal group
• A journal group consists of data volumes and journal volumes
• Maintains volume consistency by operating on multiple data volumes with
one command
UR System Components
• The Hitachi Virtual Storage Platform G1000 systems at the primary and secondary sites.
The primary storage system (MCU) contains the P-VOLs and master journal volumes and
the secondary storage system (RCU) contains the S-VOLs and restore journal volumes.
• The master journal consists of the primary volumes and master journal volumes.
• The restore journal consists of the secondary volumes and restore journal volumes.
o The data path connections between the systems. The primary and secondary
VSP G1000 systems are connected using dedicated Fibre Channel data paths.
Data paths are routed from the Fibre Channel ports on the primary storage
system to the ports on the secondary storage system and from the secondary
storage system to the primary storage system.
o The Hitachi Universal Replicator software on both the primary storage system
and the secondary storage system
• The hosts connected to the primary and secondary storage systems. The hosts are
connected to the Virtual Storage Platform G1000 systems using Fibre Channel or Fibre
Channel over Ethernet (FCoE) target ports.
Page 16-15
Hitachi Remote Replication
Hitachi Universal Replicator Components
Journal volumes
• A journal volume stores differential data
Page 16-16
Hitachi Remote Replication
Hitachi Universal Replicator Specifications
• HUR requires a one-to-one relationship between the volumes of the pairs; PVOL : SVOL
= 1 : 1.
• HUR operations involve 2 VSP G1000 systems, one at the primary site and one at the
secondary site.
o The primary storage system consists of the main control unit and service
processor (SVP).
o The secondary storage system consists of the remote control unit and its SVP.
Page 16-17
Hitachi Remote Replication
Hitachi Universal Replicator Specifications
• Each Virtual Storage Platform G1000 system can function simultaneously as a primary
and secondary storage system.
o The primary storage system communicates with the secondary storage system
over dedicated Fibre Channel remote copy connections.
o The primary storage system controls the P-VOL and the following operations:
o The secondary storage system controls the S-VOL and the following operations:
Initial copy and update copy between the P-VOL and the restore journal
Journal data copy from the master journal to the restore journal
Page 16-18
Hitachi Remote Replication
Three-Data-Center Cascade Replication
With HUR, you can set up 1 intermediate site and 1 secondary site for 1 primary site. It is
advisable that you create a HUR pair that connects the primary and secondary sites so that the
remote copying system that is created with the host operation site and backup site is configured
immediately in the event of a failure or disaster at the intermediate site. A HUR pair that is
created to make a triangle-shaped remote copy connection among the 3 sites is called a HUR
delta resync pair. By creating a HUR delta resync pair in advance, you can transfer the copying
operations from between the primary and secondary sites, back to between the intermediate
and secondary sites in a short time when the intermediate site failure is corrected and the
intermediate site is brought back online.
Page 16-19
Hitachi Remote Replication
Three-Data-Center Multi-Target Replication
[Diagram: three-data-center multi-target configuration — the P-VOL and its journal group (JNL Group) at the primary site replicate to two secondary sites, one over TrueCopy (Sync) or HUR and one over HUR, each with its own S-VOL, journal volumes (JNL-VOL) and journal group.]
With Universal Replicator (HUR), you can set up 2 secondary sites for 1 primary site. It is
advisable that you create a HUR pair that connects the 2 secondary sites so that the remote
copy system created with the host operation site and backup site can be created immediately in
the event of a failure or disaster at the primary site. A HUR pair that is created to make a
triangle-shaped remote copy connection among the 3 sites is called a HUR delta resync pair. By
creating a HUR delta resync pair in advance, you can transfer the copying operations from the
path between the two secondary sites back to the path from the primary site to the secondary
site in a short time when the failure is corrected and the primary site is brought back online.
Page 16-20
Hitachi Remote Replication
Four-Data-Center Multi-Target Replication
[Diagram: four-data-center multi-target configuration — a 3DC multi-target setup (TrueCopy (Sync) plus HUR from the primary journal group) extended with an additional HUR copy, giving S-VOLs, journal volumes (JNL-VOL) and journal groups at multiple remote sites.]
Page 16-21
Hitachi Remote Replication
Replication Tab in Hitachi Command Suite – Makes Controlling HUR Easier
Replication tab aids the user in the analysis of Hitachi Universal Replicator
performance problems and displays possible causes and solutions
Page 16-22
Hitachi Remote Replication
Hitachi High Availability Manager
[Diagram: Hitachi High Availability Manager — hosts with multipath software installed run applications against a TrueCopy volume pair (P-VOL on the MCU VSP, S-VOL on the RCU VSP); both systems access a quorum disk on external storage through UVM; the legend distinguishes I/O before the failure from I/O after the failure, when host I/O moves to the surviving system.]
• Zero recovery time objective – layer high availability on top of synchronous replication
Page 16-23
Hitachi Remote Replication
Complete Virtualized, High Availability and Disaster Recovery Solution
[Diagram: the P-VOL is replicated with TrueCopy to an in-region S-VOL for high availability and with HUR to an out-of-region S-VOL; an external volume (E-VOL) is also shown.]
Hitachi High Availability Manager for 2DC H/A plus Hitachi Universal Replicator for out-of-region
disaster (3DC)
Not shown:
• Additional in-system replication copies that would be recommended for gold copies or
disaster recovery testing
Page 16-24
Hitachi Remote Replication
Global-Active Device
Global-Active Device
This section covers global-active device.
[Diagram: global-active device configuration — clustered production servers at two sites roughly 100 km (62 mi.) apart access active-active volumes; the physical volumes (for example, LDEV ID 22:22 on VSP G1000 S/N 12345 and LDEV ID 44:44 on VSP G1000 S/N 23456) are presented through a virtual storage machine with the same virtual LDEV ID 22:22 and serial number 12345 on both systems.]
Global-active device enables you to create and maintain synchronous remote copies of data
volumes on the Virtual Storage Platform G1000 storage system. A virtual storage machine is
configured in the primary and secondary storage systems using the actual information of the
primary system and the global-active device primary and secondary volumes are assigned the
same virtual LDEV number in the virtual storage machine. Because of this, the pair volumes are
seen by the host as a single volume on a single storage system and both volumes receive the
same data from the host. A quorum disk located in a 3rd and external storage system is used to
monitor the global-active device pair volumes.
The quorum disk acts as a heartbeat for the global-active device pair, with both storage
systems accessing the quorum disk to check on each other. A communication failure between
systems results in a series of checks with the quorum disk to identify the problem for the
system able to receive host updates.
Page 16-25
Hitachi Remote Replication
Global-Active Device Overview
• Load balancing through migration of virtual storage machines without storage impact
• Even if the P-VOL side fails, the App/DBMS can continue without disruption by using the S-VOL
• If one storage system fails, hosts see that some path has failed, but other paths are still available to the volumes
With global-active device, host applications can run without disruption even if the storage
system fails. If a failure prevents host access to a volume in a global-active device pair, read
and write I/O can continue to the pair volume in the other storage system to provide
continuous server I/O to the data volume.
Page 16-26
Hitachi Remote Replication
Global-Active Device – Components
[Table: differences between VSP G1000 global-active device and VSP Hitachi High Availability Manager (HAM), compared function by function; the detailed rows are not reproduced here. The accompanying diagram shows production servers accessing volumes across sites about 100 km (62 mi.) apart.]
Page 16-27
Hitachi Remote Replication
Global-Active Device – Components
Storage Systems
A VSP G series system is required at the primary site and at the secondary site. An external
storage system for the quorum disk is also required which is connected to the primary and
secondary storage systems using Hitachi Universal Volume Manager.
Write I/O: All the updates are applied first to the primary storage system
and then to the secondary storage system.
o The virtual storage machine has the same model and serial number as the
physical storage system that is the global-active device pair target.
• Paired volumes: A global-active device pair consists of a P-VOL in the primary system
and an S-VOL in the secondary system.
• Paths and ports: Global-active device operations are carried out between hosts and
primary and secondary storage systems connected by Fibre Channel data paths
composed of 1 or more Fibre Channel physical links. The data path, also referred to as
the remote connection, connects ports on the primary system to ports on the secondary
system. The ports are assigned attributes that allow them to send and receive data. One
data path connection is required, but 2 or more independent connections are
recommended for hardware redundancy.
Page 16-28
Hitachi Remote Replication
Global-Active Device – Components
• Alternate path software: Alternate path software is used to set redundant paths from
servers to volumes and to distribute host workload evenly across the data paths.
Alternate path software is required for the single-server and cross-path global-active
device system configurations.
• Cluster software: Cluster software is used to configure a system with multiple servers
and to switch operations to another server when a server failure occurs. Cluster
software is required when 2 servers are in a global-active device server-cluster system
configuration.
Components
Quorum Disk
• Used to determine the storage system on which server I/O should continue when a
storage system or path failure occurs
• Virtualized from an external storage system that is connected to both the primary and
secondary storage systems
• Enables primary and secondary storage systems to determine the global-active device
owner node in case of failure. Any storage system is available, as long as it is supported
by Universal Volume Manager.
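For illustration only, a global-active device pair is typically created with the same CCI pair commands, with the quorum disk referenced at creation time. The sketch below assumes a pair group GAD_GRP defined in the HORCM files and a quorum disk registered with ID 0 (names and IDs are illustrative; verify the exact options in the global-active device user guide):

    # Create a GAD pair; -jq identifies the quorum disk used to arbitrate failures
    paircreate -g GAD_GRP -vl -f never -jq 0 -IH0
    pairdisplay -g GAD_GRP -fcx -IH0   # check the pair status from the primary side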
Page 16-29
Hitachi Remote Replication
Global-Active Device Software Requirements for VSP G1000
Software and notes:
• CCI/RAIDCOM — Required
• Hitachi Command Suite (HCS) — Optional (recommended); required for the management GUI (the Global Services Solutions (GSS) Hitachi Command Suite Implementation Service needs to be quoted)
• Hitachi Replication Manager (HRpM) — Required if using Command Suite
• Hitachi Dynamic Link Manager (HDLM) — Dynamic Link Manager is recommended for long distances (more than 10 km)
• Storage Virtualization Operating System (SVOS) — Required
• Universal Volume Manager (UVM) — Required for both Virtual Storage Platform G1000s
• Global-active device — Required for both Virtual Storage Platform G1000s
Global-active device software requirements for Hitachi Virtual Storage Platform G1000 are:
• CCI/RAIDCOM
• Global-active device
Note: For other VSP G series models, refer to specific model support matrix and technical
documentation.
Page 16-30
Hitachi Remote Replication
Global-Active Device – Specifications for VSP G1000
Item and specifications:
• Global-active device management — Hitachi Command Suite v8.0.1 or later
• Maximum number of volumes (creatable pairs) — 64K
• Maximum pool capacity — 12.3PB
• Maximum volume capacity — 46MB to 59.9TB
• Supporting products in combination with global-active device (all on either side or both sides) — Hitachi Dynamic Provisioning, Hitachi Dynamic Tiering, Hitachi Universal Volume Manager, Hitachi ShadowImage Replication, Hitachi Thin Image, Hitachi Universal Replicator with delta-resync
• Campus distance support — Can use any qualified path failover software
Here are the global-active device system specifications for Hitachi Virtual Storage Platform
G1000. For other VSP G series models, refer to specific model support matrix and technical
documentation.
*Asymmetric Logical Unit Assignment (ALUA) is a SCSI protocol standard for working with
multiple paths between storage and servers (or virtual servers) where path ownership needs to
be managed and resolved. ALUA manages access states and path attributes using explicit or
implicit methods set up by a storage administrator. It is used with SANs, iSCSI, FCoE and so on.
Page 16-31
Hitachi Remote Replication
Hitachi Business Continuity Management Software
Business Continuity Manager (BCM) software for IBM z/OS offers the following
benefits and features:
• Provides a centralized, enterprise-wide replication management for IBM z/OS
mainframe environments
• Automates Hitachi Universal Replicator for z/OS, Hitachi ShadowImage In-System
Replication for z/OS and Hitachi TrueCopy Remote Replication for z/OS software
operations
• Provides access to critical system performance metrics and thresholds, allowing
proactive problem avoidance and optimum performance to ensure that service-level
objectives are met or exceeded
• BCM software auto-discovery capability eliminates hours of tedious input and costly
human error when configuring and protecting complex, mission critical applications and
data
Page 16-32
Hitachi Remote Replication
Hitachi Business Continuity Manager Functions
Defines copy groups that contain multiple replication objects with similar
attributes that can be managed with a single command
Eliminates errors and streamlines management with auto-discovery for all
replication objects
Views the status of all enterprise-wide replication objects in real time
Accesses key replication metrics with built-in performance monitoring
Provides automatic notification of key events completion, such as pair state
transitions, timeout thresholds and other system events
Enables multisite remote replication management for wide-area disaster
protection with minimal data loss
• Uses FICON ports for host attachment
• Uses Fibre Channel for replication
Business Continuity Manager (BCM) delivers nondisruptive, periodic, point-in-time remote data
copies across any number of storage systems and over any distance.
Page 16-33
Hitachi Remote Replication
Demo
Demo
http://edemo.hds.com/edemo/OPO/HowProtectData/HowProtectData_Video/HowProtectData.html
https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv
Page 16-34
Hitachi Remote Replication
Module Summary
Module Summary
Page 16-35
Hitachi Remote Replication
Module Review
Module Review
Page 16-36
17. Command Control Interface Overview
Module Objectives
Page 17-1
Command Control Interface Overview
Overview
Overview
• Must use RAID Manager (CCI) when replicating to or from previous Unified Storage
models
• Can use RAID Manager (CCI) when replicating to or from HUS models
• Can use Hitachi Storage Navigator Modular GUI/CLI when replicating to or from HUS
models
Page 17-2
Command Control Interface Overview
Overview
These are 4 of the CCI components needed for the replication products. A product license (for
example, for Hitachi ShadowImage Replication) is also required.
• CCI commands
• HORCM Instance
Command device
CCI commands are issued by the CCI software to the RAID storage system command device.
• Is a user-selected, dedicated logical volume on the storage system that functions as the
interface to the CCI software on the host
• Accepts CCI read and write commands that are issued to the storage system
Page 17-3
Command Control Interface Overview
Overview
• Uses 32 MB and the remaining volume space is reserved for CCI and its utilities.
The volume designated as the command device is used only by the storage system and is
blocked from the user.
• The configuration definition file is a text file that is created and edited using any
standard text editor (for example, UNIX vi editor, Windows Notepad)
• The configuration definition file defines correspondences between the server and the
volumes used by the server
• When the CCI software starts up, it refers to the definitions in the configuration
definition file
• The configuration definition file defines the devices in copy pairs and is used for host
management of the copy pairs, including:
o Hitachi TrueCopy
Page 17-4
Command Control Interface Overview
Overview
The 2 methods for executing CCI commands are the in-band method and the out-of-band
method.
• In-band method: This method transfers a command from the client or server to the
command device of the storage system through Fibre Channel and executes the CCI
operation instruction.
• Out-of-band method: This method transfers a command from the client or server over
the LAN to the virtual command device on the Service Processor (or, for VSP G series,
the maintenance utility), which passes the CCI operation instruction to the storage
system for execution. Out-of-band operations are supported on the
Hitachi Virtual Storage Platform and later storage systems.
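The difference between the two methods shows up in the HORCM_CMD section of the configuration definition file. The fragment below is a hypothetical sketch; the device path, IP address and UDP port are examples only:

    HORCM_CMD
    # In-band: a command device reached over Fibre Channel (raw device path on this host)
    /dev/sdc
    # Out-of-band: the virtual command device on the SVP, reached over the LAN
    # \\.\IPCMD-192.0.2.10-31001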
Page 17-5
Command Control Interface Overview
Example With ShadowImage Replication
[Diagram: ShadowImage example — one server runs the software/application together with HORCM Instance0 (HORCM0.conf) and HORCM Instance1 (HORCM1.conf); commands are issued through the command device to the storage system containing the P-VOL and S-VOL.]
• There are always at least 2 instances, each controlling one side of the replication. During
pair creation, it is determined which volume becomes the P-VOL and which becomes the
S-VOL
• Each instance relies on a configuration file to communicate with the other instance, as
well as to communicate with the storage system
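A minimal sketch of the two configuration files and the commands that use them follows for ShadowImage; every address, service name, serial number, LDEV and group name is an example only and must be adapted to the actual environment:

    # horcm0.conf (instance 0, controls the P-VOL side)
    HORCM_MON
    #ip_address   service   poll(10ms)   timeout(10ms)
    localhost     horcm0    1000         3000
    HORCM_CMD
    /dev/sdc
    HORCM_LDEV
    #dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
    SI_GRP       dev01      64015     01:20            0
    HORCM_INST
    #dev_group   ip_address   service
    SI_GRP       localhost    horcm1

    # horcm1.conf is identical except that HORCM_MON uses service horcm1,
    # HORCM_LDEV lists the S-VOL (for example 01:21) and HORCM_INST points back to horcm0.

    horcmstart.sh 0 1             # start both instances
    export HORCC_MRCF=1           # address the local (ShadowImage) copy function
    paircreate -g SI_GRP -vl -I0  # instance 0's volume becomes the P-VOL
    pairdisplay -g SI_GRP -fcx -I0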
Page 17-6
Command Control Interface Overview
Example With Hitachi TrueCopy
[Diagram: TrueCopy example — each site has its own server running software/applications and a HORCM instance (Instance0 with HORCM0.conf on the primary, Instance1 with HORCM1.conf on the secondary); the RAID Manager instances communicate with each other over the LAN, and each issues commands through a command device on its own array; the P-VOL resides on the local array and the S-VOL on the remote array.]
Page 17-7
Command Control Interface Overview
Often Used Commands
For setup:
• raidscan - find volumes and show their status
• findcmdev - find command devices
For running:
• pairdisplay - show pair synchronization status
• paircreate - create a pair
• sync - flush system buffers to disk
• pairsplit - split a pair, temporarily or permanently
• pairresync - resynchronize a pair after split
• pairvolchk - checks the attributes and status of a pair volume
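A brief, hypothetical usage sketch of the setup and monitoring commands above (instance 0, port CL1-A and group TC_GRP are assumed examples):

    raidscan -p CL1-A -I0           # list volumes behind port CL1-A and their attributes
    pairvolchk -g TC_GRP -s -I0     # report the attribute and status of the group's volumes
    pairdisplay -g TC_GRP -fcx -I0  # show pair status and copy progress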
Page 17-8
Command Control Interface Overview
Module Summary
Module Summary
Page 17-9
Command Control Interface Overview
Module Review
Module Review
Page 17-10
18. Hitachi Replication Manager
Module Objectives
• Hitachi Command Suite Replication Manager Application Agent CLI User Guide
• Hitachi Command Suite Replication Manager Application Agent CLI Reference Guide
Page 18-1
Hitachi Replication Manager
Hitachi Replication Manager
[Diagram: Hitachi data protection software and management — replicate, backup, archive and snapshot functions built on ShadowImage, Copy-on-Write Snapshot/Hitachi Thin Image (HTI), TrueCopy and Universal Replicator.]
• The synchronous and asynchronous long-distance replication products, as well as the in-
system replication products, were discussed earlier in this course
• This solution
Page 18-2
Hitachi Replication Manager
Centralized Replication Management
[Diagram: centralized replication management — Copy-on-Write Snapshot, Thin Image, ShadowImage, TrueCopy, Universal Replicator and Business Continuity Manager managed together, covering primary and secondary provisioning and CCI/HORCM configuration and operations.]
Page 18-3
Hitachi Replication Manager
Features Overview
Features Overview
As a fundamental management tool, Hitachi Replication Manager covers the entire life cycle of
replication management (configuration, monitoring, operations)
1. Configuring Prerequisites (GUI) — Configuring prerequisites of local copy/remote copy:
   • Command device
   • Thin Image pool, V-VOL
   • UR journal group, remote path
2. Configuring Replications(*) (GUI) — Configuring copy groups/pairs:
   • Immediate execution
   • Scheduled execution
3. Managing Replications (GUI) — Managing pair status:
   • Monitoring the status
   • Changing the status
   Monitoring metrics:
   • C/T delta (RPO for async remote copy)*
   • Journal usage
   • Snapshot pool usage
   • Troubleshooting of C/T delta increase
4. App-aware Backup/restore (GUI/CLI) — Backup/restore with application agent:
   • Immediate execution
   • Scheduled executions
(*) Volumes need to be provisioned before pair configuration
Consistency time delta is how many seconds the target volume is behind the source volume and
can be interpreted as recovery point objective (RPO), which means how much data would be
lost in case of a disaster.
Page 18-4
Hitachi Replication Manager
Overview
Overview
Hitachi Replication Manager is the software tool that configures, monitors and manages
Hitachi replication products in both open and mainframe environments for enterprise and
modular storage systems from a “single pane of glass”
Key features
• Centralized management of replication
• Application-aware backups – Microsoft SQL Server and Exchange
• Visual representation of replication structures
• Task management – scheduling and automation of the configuration of replicated data volume pairs
• Immediate notification of error and potential issues based on user-defined thresholds
• Simple wizards for pair creation and changing pair status
• Reports consistency deltas between source devices and their targets
• Supports email and SNMP alert reporting
Replication Manager (HRpM) configures, monitors and manages Hitachi replication products on
both local and remote storage systems. For both open systems and mainframe environments,
HRpM simplifies and optimizes the configuration and monitoring, operations, task management
and automation for critical storage components of the replication infrastructure. Users benefit
from a uniquely integrated tool that allows them to better control RPOs and RTOs.
Page 18-5
Hitachi Replication Manager
Launching Hitachi Command Suite
• In the web browser address bar, enter the URL for the management server where
Hitachi Replication Manager (HRpM) is installed. The User Login window appears
• When you log in to Replication Manager for the first time, you must use the built-in
default user account and then specify HRpM user settings
• The user ID and password of the built-in default user account are as follows:
• If HRpM user settings have already been specified, you can use the user ID and
password of a registered user to log in
• If you enabled authentication using an external authentication server, use the password
registered in that server
Page 18-6
Hitachi Replication Manager
Centralized Monitoring
Hitachi Replication Manager can also be launched from the Command Suite main window Tools
menu option.
Centralized Monitoring
Hitachi Replication Manager provides the following 4 functional views that allow you to view pair
configurations and the status of the replication environment from different perspectives:
Page 18-7
Hitachi Replication Manager
Centralized Monitoring
• Hosts
o This view lists open hosts and mainframe hosts and allows you to confirm pair
status summaries for each host
• Storage Systems
o This view lists open and mainframe storage systems and allows you to confirm
pair status summarized for each
o A storage system serving both mainframe and open system pairs is recognized
as 2 different resources to differentiate open copy pairs and mainframe copy
pairs
• Pair Configurations
o This view lists open and mainframe hosts managing copy pairs with CCI or BCM
and allows you to confirm pair status summarized for each host
o This view also provides a tree structure along with the pair management
structure
• Applications
o This view also provides a tree structure showing the servers and their associated
objects (Storage Groups, Information Stores and Mount Points)
Page 18-8
Hitachi Replication Manager
Centralized Monitoring
• Hitachi Replication Manager can send an alert when a monitored target, such as a copy
pair or buffer, satisfies a preset condition
o Performance information
• Alert notification is useful for enabling a quick response to a hardware failure or for
determining the cause of a degradation in transfer performance
• Alert notifications are also useful for preventing errors due to buffer overflow and
insufficient copy licenses, thereby facilitating the continuity of normal operation
• Because you can receive alerts by email or SNMP traps, you can also monitor the
replication environment while you are logged out of Replication Manager
Page 18-9
Hitachi Replication Manager
Centralized Monitoring
• You can export Replication Manager management information to a file in CSV or HTML
format
• Using the exported file, you can determine the cause of an error, establish corrective
measures and analyze performance information
o If necessary, you can edit the file or open it with another application program
o Event logs
• When you export management information, you can specify a time period to limit the
amount of information that will be exported
o You can export only information whose data retention period has not yet expired
Page 18-10
Hitachi Replication Manager
Features
o The retention period can be managed by a user with the Admin (Replication
Manager management) permission
Consistency time delta is how many seconds the target volume is behind the source volume and
can be interpreted as recovery point objective (RPO), which means how much data would be
lost in case of a disaster.
Features
• Copy Groups: A group of copy pairs created for management purposes, as required by
a particular task or job
o By specifying a Copy Group, you can perform operations such as changing the
pair status of multiple copy pairs at once
o Using the My Copy Groups feature, a user can register a copy group into My
Copy Groups, choosing only those that are most important to monitor, to see
how copy groups are related and check copy pair statuses in a single window
o My Copy Groups is also the default screen after you log in to the Hitachi
Replication Manager interface
Page 18-11
Hitachi Replication Manager
Features
• Sites: With Replication Manager, you can define logical sites in the GUI just as you
would define actual physical sites (actual data centers)
o It allows you to manage resources more efficiently if you set up separate sites
because it is easier to locate a required resource among many resources
displayed in the GUI
Page 18-12
Hitachi Replication Manager
Positioning
Positioning
[Diagram: positioning — Hitachi Replication Manager provides replication monitoring, configuration and management; Hitachi Device Manager and Storage Navigator provide open-volume management; Business Continuity Manager (mainframe) and RAID Manager provide volume-level replication management.]
• Replication Manager (HRpM) provides monitoring for both enterprise storage systems
(open and mainframe volumes) and modular storage systems (open volumes)
• HRpM requires and depends on Hitachi Device Manager, and uses RAID Manager (CCI)
and the Device Manager agent for monitoring open volumes
• For monitoring mainframe volumes, HRpM can work with or without Hitachi Business
Continuity Manager (BCM) software or mainframe agent
• HRpM supports monitoring of IBM environments (z/OS, z/VM, z/VSE and z/Linux) and
non-IBM environments using only Device Manager (without Business Continuity Manager
or mainframe agent installed). HRpM retrieves the status of TCS/TCA/SI, and Hitachi
Universal Replicator copy pairs directly from storage arrays, without depending on
mainframe host types. The minimum interval of automatic refresh for this configuration
is 30 minutes.
Page 18-13
Hitachi Replication Manager
Architecture – Open Systems and Mainframe
[Diagram: Replication Manager architecture for open systems and mainframe — a browser-based management client connects to the HRpM server, which runs alongside the HDvM server on the management server; pair management hosts run the host agent (agent base, common plug-in, HDvM agent plug-in, HRpM agent) with RAID Manager (CCI), reaching command devices on the storage systems over the Fibre Channel SAN (SNM2 is used for modular storage).]
• Management server: Hitachi Replication Manager (HRpM) gets installed with Hitachi
Device Manager (HDvM). HBase is automatically installed by the Device Manager
installation. It is highly recommended to use the same version number, major and minor,
for the HDvM server and Replication Manager server
o Host agent: Only a single host agent is provided for HDvM and HRpM. One
agent install on the server works for HDvM and HRpM
Page 18-14
Hitachi Replication Manager
Architecture – Open Systems and Mainframe
Hitachi Business Continuity Manager (BCM): Business Continuity Manager software works on
the mainframe and manages replication pair volumes assigned for the mainframe computers.
HRpM can monitor and manage the mainframe replication volumes by communicating with
BCM.
• Host (production server): A host runs application programs. The installation of the
HDvM agent is optional. HRpM can acquire the host information (host name, IP address
and mount point) if the agent is installed on it
o IBM HTTP server is required on the mainframe host when using either of the
following:
o BCM program itself does not have the above capabilities, so the IBM HTTP server
is used to perform these functions. The IBM HTTP server works as a proxy server
between HRpM and BCM
Page 18-15
Hitachi Replication Manager
Architecture – Open Systems With Application Agent
[Diagram: Replication Manager architecture for open systems with the application agent — the HRpM and HDvM servers on the management server communicate over the IP network with application hosts (Microsoft Exchange or Microsoft SQL Server) and a backup/import server; each host runs the host agent (agent base, common plug-in, HDvM agent plug-in, HRpM agent), RAID Manager (CCI) and the application agent, and reaches command devices on modular (SNM2) and enterprise (SVP) storage over the Fibre Channel SAN.]
Note: Depending on the configuration, backup servers are not required for SQL server
configurations.
Components
Page 18-16
Hitachi Replication Manager
Components
Management client: A management client runs on a web browser and provides access to the
instance of HRpM.
Host (application server): Application programs are installed on a host. A host can be used
as a pair management server, if required. The HDvM agent is optional if the server is used as a
host (and not pair management server).
Page 18-17
Hitachi Replication Manager
Managing Users and Permissions
All users can set up personal profiles and Hitachi Replication Manager licenses
regardless of their permissions
The built-in User ID, System, lets you manage all users in Hitachi Command
Suite—you cannot change or delete this user
Page 18-18
Hitachi Replication Manager
Resource Groups Overview
• Multiple resources can be registered in each resource group, but each resource can be
registered in only one resource group
• A user can be granted access permissions for multiple resource groups (that is, the user
can be associated with more than 1 resource group)
• A user logged in with the built-in account, System, is permitted to access all resources
• Any user can be added to the All Resources group if they do not belong to another
resource group
• Except for users logged in as System, users with the Admin (user management)
permission can belong to resource groups only when they also have the Admin, Modify
or View (Hitachi Replication Manager management) permission
Page 18-19
Hitachi Replication Manager
Resource Group Function
• Use the GUI to define logical sites just as you would define actual physical sites (actual
data centers)
• Users can view the resources that belong to the sites in the resource groups with which
the users have been associated
Page 18-20
Hitachi Replication Manager
Resource Groups
• Create users
• Assign permissions to the users based on whether they will be managing Replication
Manager or they will also be creating other users
Resource Groups
• In the Explorer menu, click the Administration drawer and then select Resource
Groups
• Click Create Group to display the Create Resource Group dialog box
• Enter a resource group name in the Name field and then click OK
Page 18-21
Hitachi Replication Manager
Resource Group Properties
After the resource name has been created, assign hosts, storage systems, applications and
users.
Page 18-22
Hitachi Replication Manager
Hitachi Command Suite Replication Tab
• The default group, All Resources, cannot be deleted or renamed; a new resource group
named All Resources cannot be added
• The built-in admin account, System, is automatically registered in the All Resources
group
Page 18-23
Hitachi Replication Manager
HCS Replication Tab Operations
• The Universal Replicator (HUR) Performance Analysis window of the Replication tab
provides information for analyzing performance problems with data transfers between
the primary and secondary storage systems
• HUR asynchronously transfers data to the remote site. Delay times occur and differ
depending on the data transfer timing. This delay is known as C/T delta and is an
indicator for recovery point objective (RPO)*
• View the top 5 copy groups and their C/T delta rates in a chart. You can quickly identify
copy groups that consistently exceed the maximum write time delay (C/T delta threshold)
o Analyze the C/T delta threshold against the performance of primary, secondary,
and network resources. The analysis process supports 2 modes
o Wizard mode: a step-by-step guide that compares trends and helps users
identify the cause of the problem
o Advanced mode: a selection of charts that lets advanced users correlate multiple
trends and identify the problematic resource
Page 18-24
Hitachi Replication Manager
HCS Replication Tab Operations
• The UR Performance Analysis function gives users the option of supplying values for the
effective network bandwidth
• The effective network bandwidth is the actual speed at which data can be transmitted
on a remote path based on the replication environment. Check the network and supply a
proper bandwidth value for each path group
• The UR Performance Analysis window includes an option to set thresholds for the
following metrics:
• The threshold values are used to plot horizontal lines in graphs indicating where the
limit has been exceeded. Although defaults are defined, the values should be based on
the replication environment
Consistency time delta is how many seconds the target volume is behind the source volume and
can be interpreted as recovery point objective (RPO), which means how much data would be
lost in case of a disaster.
Page 18-25
Hitachi Replication Manager
Module Summary
Module Summary
Module Review
3. What role does the Hitachi Device Manager (HDvM) agent play in
Replication Manager operations?
Page 18-26
19. Hitachi Data Instance Director
Module Objectives
Page 19-1
Hitachi Data Instance Director
HDS Data Protection Strategy
• The IT infrastructure has gotten extremely complex over time and data is everywhere.
o Many locations
o This explodes when you consider “copy data” for backup, disaster recovery, test
and development, audit and e-discovery, archiving and many other needs
• Budgets for IT in general, and data management in particular, have not increased at the
rate of data growth (40% per year) or infrastructure sprawl
• At the same time, line of business managers are demanding increasing availability of
systems, applications and data
o They want you to reduce backup windows, back up more often to reduce the
recovery point (RPO) and restore faster (RTO)
Page 19-2
Hitachi Data Instance Director
Focus of Data Protection
HDS approaches data protection, retention and recovery from a business-defined perspective by
addressing the individual service level requirements for different data management scenarios,
including:
• Operational recovery to address events such as a lost file or email, application, data
volume or an entire system. Human error or malicious behavior are the most prevalent
causes of these events, though hardware failures and software bugs also contribute.
• Disaster recovery is required when something impacts the ability to restore operations
locally. This can include common disasters such as fire or flood and will require either
sourcing a copy of the data from another location or restarting operations at another
location.
• Long-term recovery speaks to the need to retain certain data assets for prescribed
amounts of time and the ability to retrieve them within that time frame. The most
common retention policies are set to address regulatory or governance compliance
requirements to keep the assets for data mining and big data applications or as
reference archives such as product manuals.
Page 19-3
Hitachi Data Instance Director
Goals of Data Protection
• Many organizations suffer in one or more of these areas, either failing to meet the
business service level requirements or by over-spending for higher levels of protection
than are necessary for individual applications and data sets. These service levels include,
but are not limited to:
• Backup window: The amount of time allotted to complete a particular backup job; the
protected applications or systems are often unavailable for normal use during this time,
so shorter backup windows are desired. In some cases such as critical, always-on
applications, any backup window may be unacceptable.
• Recovery Point Objective (RPO) specifies the amount of time and therefore the amount
of new data at risk between backup operations. A shorter RPO equals less data at risk
due to a more granular point-in-time recovery capability.
• Retention specifies both how long to keep a data asset and when to expire or delete it
from the environment. An asset that is retained longer than its prescribed retention
period could actually become a liability.
• Not all data is of equal value and there may be some data assets that are okay to lose.
It is important to understand this tolerance to failure and loss and adjust protection and
spending levels accordingly.
Page 19-4
Hitachi Data Instance Director
Goals of Data Protection
As we roll out these new capabilities, we’re able to provide our customers with some impressive
improvements:
• Reduce backup data by 80% or more: A traditional backup process will force the
periodic completion of a full backup, usually once per week; more than 80% of this
week’s full backup will be redundant to last week’s backup (only 20% new or changed
data, week to week); simple data deduplication will eliminate the duplicated 80%. This
number improves over time as each subsequent full backup is reduced to just the new
data. The data change rate and the number of full backups retained will affect the
overall results.
• Improve RPO by 95% or more: The load that typical backup processes put on
production systems limits them to being run once per day (each evening) for incremental
backups and once per week for full backups. This leaves about 24 hours of new data at
risk (the time between backups). Performing non-disruptive snapshots once per hour
reduces the amount of data at risk (the recovery point objective) by 95.8% (1/24);
using continuous data protection reduces it to near zero.
• Improve RTO from hours / days to seconds / minutes: Again, a function of the speed of
reverting a hardware-based snapshot versus copying the volume of data from backup
media.
• Reduce backup administration costs by 50% - 75%: Most organizations have deployed
multiple backup and disaster recovery tools to handle the diverse and distributed nature
of their data (different operating systems, applications, locations; virtual vs. physical;
servers versus workstations, and so on). Addressing all of these requirements in a single
admin console eliminates silos of licensing, training and certification, on-call personnel,
systems and storage.
Page 19-5
Hitachi Data Instance Director
Modern Approach to Data Protection
• First, reduce the amount of data that needs protecting. This will take the load off of
production systems and reduce the costs of primary storage. We do this through
effective policy-based archiving (or tiering) of data to a totally self-protecting storage
platform (Hitachi Continuous Data Protector). We also reduce the amount of data in the
secondary, or backup systems by:
o Unique copy data management that avoids the need to create additional copies
of data, such as for test and development operations
o Cloning and replication technologies that remove data protection processing from
the production environment and provide extremely fast backup and restore
Page 19-6
Hitachi Data Instance Director
Business-Defined Data Protection: Goals
• Third, let’s simplify the data management environment. As technology has evolved with
new systems and applications and operating models, new solutions have been needed
to protect them. Because the “big guys” in backup are very slow to adapt, new “point
solutions” are purchased and deployed to meet these needs.
How many different tools do you have for different operating systems, applications
(RMAN for Oracle, BR*Tools for SAP and so on.), virtual servers (Veeam anyone?),
remote offices, and desktops and laptops? Each new tool adds complexity, new costs
and new risks. HDS recognizes the need for different technological approaches to meet
specific service level objectives, but we bundle them all under a single, easy-to-use
administrative interface that enables the creation of policies and data movement
workflows, plus monitoring and reporting.
One way to simplify this discussion is to break the data into 3 classes of importance, such as
critical, important and standard data.
Page 19-7
Hitachi Data Instance Director
Business-Defined Data Protection: Goals
• Critical data – the things that drive the business and make you special need to be
protected from any loss or outage. These can be your e-commerce website, order
processing and CRM systems. We include large databases here because they can be
critical and protecting them is nearly impossible with traditional backup.
• Important data are things that are in process at a corporate level, like sales and
marketing programs, human resources information, manufacturing and inventory
information. They aren’t absolutely critical to the survival of the organization, but losing
them would have a serious impact.
• Standard data is the typical files that we all use in our jobs – spreadsheets,
presentations, documents, and so on. If you were to lose all that data it might be very
impactful to your individual productivity, but in the scope of the entire organization it
probably wouldn’t be a devastating loss.
Then we figure out what we need for local, operational recovery for each of these tiers:
• Critical – we need to back it up as fast and as often as possible (backup window and
RPO), and recover as fast as possible (RTO). Ideally, you would love to drive these SLOs
to zero.
• Important – traditional backup and restore processes have served you well here, but
they need to be faster to deal with the data growth you’re experiencing
• Standard – stick with what’s been working: traditional, scheduled full and incremental
backups. Again, improve performance and scalability where necessary.
• Critical – we need the ability to provide continuous operations and immediately fail-over
in case of a catastrophic failure
• Important – ensure that your recovery capabilities meet your RTO, which can be
measured in hours, but not in days—you could be out of business by then
• Standard – you probably don’t need to restore everything; select what you need and
restore it over time
For long-term recovery, we simply need to meet data retention and expiration requirements.
This should be defined by policy and executed automatically. Additionally, look at this as an
opportunity to reduce overall costs, including in production, storage and backup.
Page 19-8
Hitachi Data Instance Director
Business-Defined Data Protection: Technologies
Choosing the right technology for each set of requirements is the key to meeting service level
objectives at the least possible cost.
On this slide, we’ve listed the best options that we see in most customer environments.
Page 19-9
Hitachi Data Instance Director
Introduction to Hitachi Data Instance Director
Unified operational recovery, disaster recovery and long-term retention:
• Storage-based snapshot, clone and replication orchestration
• Host-based backup, continuous data protection, live backup, archive
Business-defined:
• Policy-based, whiteboard-like workflow interface
• Easily meet complex data availability service levels
Hitachi Data Instance Director (HDID) provides unified data protection, enabling the
simplified creation and management of complex, business-defined policies to meet service
levels for availability. It is an excellent fit with Hitachi block and file replication solutions
supporting Virtual Storage Platform (VSP), including VSP F series and VSP G series, Hitachi
Unified Storage (HUS) VM and Hitachi NAS Platform (HNAS).
Data Instance Director offers the orchestration layer for remote replication, supporting
Hitachi TrueCopy and Hitachi Universal Replicator, local and remote snapshots and clones with
Hitachi Thin Image and Hitachi ShadowImage Replication, continuous data protection and
incremental backup forever, as well as file and email archiving. HDID is a unified data
protection solution that competitors cannot match. Hitachi Data Systems has the full solution:
the software to orchestrate and execute data protection, array-based replication software and
of course, the object, file, block and server platforms.
Page 19-10
Hitachi Data Instance Director
A Common Scenario
A Common Scenario
I need
Local backup every hour to minimize RPO
Monthly backup to keep for 7 years
Daily copy to refresh test and development operations
Real-time data mirror to a standby site for disaster recovery
Older data moved to a less expensive tier of storage
Page 19-11
Hitachi Data Instance Director
Eliminate the Backup Window Problem
o SLAs are becoming more stringent – application uptime, 24/7, critical data . . .
o Real-time data capture and movement – copies are made immediately after data is
created, so there are no artificial windows or constraints
• Benefits / Values
Set your recovery point objectives to your business needs. Mix and match
batch and continuous data movement anywhere it is needed.
For ease of use and quick setup, HDID provides a workflow user interface
giving you the power to set the data flow and replication policies per your
exact needs.
o Performance enhanced
Page 19-12
Hitachi Data Instance Director
Eliminate the Backup Window Problem
o HDID takes away batch processes, network loads and performance hits to
production servers and virtual machine environments.
o Combines backup data streams with other data protection methods to increase
recovery and retention options in ways that legacy backup products simply
cannot do
o Unify multiple protection methods to handle local, remote and virtual data
Page 19-13
Hitachi Data Instance Director
Easily Transform Backup Designs Into Policies (A Real Customer Example)
This slide is from an actual customer engagement. The image on the left is a photo of a process
flow that the customer described in the meeting, and noted that it would take him 2 days to
create this in his current EPIC environment, using IBM Tivoli Storage Manager. Our team
created the same flow in the HDID user interface in less than 10 minutes.
Page 19-14
Hitachi Data Instance Director
What Are the Benefits of Hitachi Data Instance Director?
Orchestration of remote replication and file and object replication for the
Virtual Storage Platform G1000 / Virtual Storage Platform / Unified
Storage VM and HNAS
GUI-based restores
Page 19-15
Hitachi Data Instance Director
Features and Capabilities
Comprehensive copy data management capabilities with unified protection, recovery and retention:
• Built-in offsite replication for disaster recovery
• Built-in cost savings
• Bare metal recovery to physical and virtual servers
• Application-consistent snapshot for Microsoft Exchange, Microsoft SQL Server, Oracle and others
• Streamlined, off-host operations for virtual environments
• File versioning to capture every change as it is saved
1. Reduce costs
• Incremental forever has several huge advantages over the traditional full + incremental,
grandfather-father-son models of backup. No full backups on the weekends, no
unnecessary duplication of data, and faster, more reliable restores.
Page 19-16
Hitachi Data Instance Director
Advanced Features to Modernize Your Data Protection Infrastructure
E-mail Archiving
Remote Offices
• Data is transferred as it is created and trickles through the WAN for increased efficiency
• Network and storage friendly – sends only changed data blocks, which can be
deduplicated
Page 19-17
Hitachi Data Instance Director
Advanced Features to Modernize Your Data Protection Infrastructure
• No need to create a separate backup; HDID BMR will restore the OS (C:\) volume from
the normal backup
• Restore to a similar or dissimilar hardware platform – some new driver installation may
be required
File Versioning
o Other products only capture the latest version at the time of backup / archive
Virtual Environments
o Leverages vSphere APIs for Data Protection (VADP) and Changed Block Tracking
(CBT)
Page 19-18
Hitachi Data Instance Director
Quantifiable Benefits
Application-consistent protection
• Integrated snapshot and clone support for Exchange, SQL Server and Oracle
o Put the application into a backup-ready state, call the copy operation, release the
application
• One-Touch Recovery
Quantifiable Benefits
Features and benefits:
• Choose the right technologies for the job — Backup, CDP, snapshot, replicate, archive, tier; eliminate the need for point solutions
• Block-level incremental-forever data capture (snapshot, CDP) — Eliminate redundant copies; reduce backup storage requirements by 90% or more
• Hardware-based snapshot orchestration — Eliminate backup windows and strain on production servers; enable more frequent protection (RPO) and fast recovery (RTO)
• Unique graphical interface — Simplify administration; eliminate the costs and risks of managing multiple tools
• Data instance management — Reduce the amount of copy data by using 1 backup copy for multiple purposes (test/development, audit)
• Archiving and tiering — Reduce primary storage requirements by 40% or more; enable retention policies for compliance
• Integration with Hitachi Virtual Infrastructure Integrator — View and restore Virtual Infrastructure Integrator-managed VM-level storage snapshots from the HDID user interface; provides a level of central visibility and control
Page 19-19
Hitachi Data Instance Director
Storage-Based Protection With HDID
Page 19-20
Hitachi Data Instance Director
Capabilities for HNAS With HDID
o Allows read access to the target (can control access through SysLock)
o Creating snapshots with the “Snapshot rule” allows selecting specific source
snapshot for refreshing the replication or target snapshot for rollback during the
disaster recovery scenario (See #SBP0304 for the details)
o Clones are not supported. (The files are re-hydrated on the target and lose their
space efficiency)
o Not currently “firewall friendly” because EVS IP addresses (on source and target)
must be reachable from the SMU public IP address
Page 19-21
Hitachi Data Instance Director
Capabilities for Host-Based Operational Recovery With HDID
o This is suitable for structured data with multiple files (general application /
DBMS)
o Not an atomic operation for an entire directory (requires application-aware software for
correct operation)
Batch backup
• Traditional incremental or full backups for Microsoft Windows and Linux file systems
HDID Repository
Page 19-22
Hitachi Data Instance Director
Hitachi Data Instance Director Block Orchestration
Real-time archiving
Everyone knows that a backup is not an archive. An archived file is a file "version" with
sufficient metadata attached to allow easy search and retrieval. Backup products can't do
that. Also, backups give you multiple points of recovery in time for your whole data set or
specific parts of it. Archive products can't do that.
Page 19-23
Hitachi Data Instance Director
Archive File and Email Objects to HCP
• But then came Data Instance Director . . . and HDID can do both
• When you take that version for archive, whether once a month or 2 seconds ago when the
last change took place, is your choice. Searching for that file with rich metadata
variables or clicking through your file system directory structure? Also your choice
Repository search
• The ability to search repository data through a wide range of definable parameters such
as file extension type, user, owner and many others, and the ability to browse past
snapshot histories, is not just a great way to facilitate user level restore empowerment,
it is also an important way for IT managers and users to understand the overall data
itself
• Data Instance Director reduces the backup window by offloading inactive email to
Hitachi Content Platform (HCP), reducing stagnant email by 60-70% or more.
Page 19-24
Hitachi Data Instance Director
Archive File and Email Objects to HCP
Benefits/Values
Page 19-25
Hitachi Data Instance Director
HDID Complementary Products
Disaster recovery
• TrueCopy Synchronous/Hitachi Universal Replicator
Storage for PVOLs and SVOLs
Hitachi Disaster Recovery bundle license on primary and secondary array
Page 19-26
Hitachi Data Instance Director
Unified Management
Unified Management
This section provides an overview of the features and benefits of unified management
software provided by Hitachi.
63% of organizations say they use more than one solution. The slide graphic maps the many factors that drive data protection choices: snapshot, replication, archive or tier, and cloud options; RPO/RTO, retention and budget requirements; operating systems and locations; and threats ranging from system failures to site-level disasters.
Performing backups and recoveries used to be fairly routine back when the IT infrastructure
was fairly simple and homogeneous. Not anymore.
Page 19-27
Hitachi Data Instance Director
Data Protection is Complicated
• First, there are many application, platform (physical, virtual, cloud) and data types and
each requires its own specific methods and processes to protect data correctly, often
requiring scripts or interfacing with APIs
• Then there are the different types of threats and each of these requires a different
approach to protection and recovery. (click) For example, you wouldn’t want to restore
an entire system just to recover a single file or email. And when you have a major
disaster, such as a fire or earthquake, you’ll want to recover the data from or at another
location
• Besides different applications, there are also different operating systems, so you’ll need
specialized agents and processes for each
• The same with different locations – data centers, disaster recovery sites, regional
headquarters, remote and branch offices, home offices, each with different levels of
requirements and different levels of local IT skills available.
• As you can see, you can have a lot of complexity in your data protection environment –
varied infrastructure mapped against a number of different threats. (click)
• Of course, not all data is created equal and to address this we have a number of
different service level objectives. These SLOs are defined to meet the (sometimes
contradictory) needs of the business.
• The first SLO is the backup window – this defines how long you can take to perform the
backup operation, whether it's to protect against a file loss, a system failure or a site
outage. These may be different policies, and often are
• Next is the recovery point objective (RPO). This defines how often the data is protected.
A nightly backup defines an RPO of 24 hours. This also means that you are leaving up to
24 hours of your newest data at risk between backup jobs. That might not be
acceptable for more important or critical applications and data (a short worked example follows this list)
• Recovery time objective (RTO) defines how fast the recovery should be accomplished
when something goes wrong. For example, a short RTO would point you to storing the
backup copy locally on disk so you can restore it fast
• Finally, there’s always the budget limitation. Applying the best protection techniques
across all of your data may be prohibitively expensive. Apply the best and fastest techniques
only to your most critical data and use less expensive techniques for less important data
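To make the RPO discussion above concrete, here is a minimal, illustrative Python sketch (not part of any Hitachi product) that computes the worst-case window of data at risk for a given protection interval and checks it against a target RPO; all schedule values are hypothetical.

```python
from datetime import timedelta


def worst_case_data_at_risk(protection_interval: timedelta) -> timedelta:
    """Worst case, data written just after a protection run is unprotected
    until the next run, so the exposure equals the protection interval."""
    return protection_interval


def meets_rpo(protection_interval: timedelta, rpo_target: timedelta) -> bool:
    """An RPO target is met when the protection interval never exceeds it."""
    return worst_case_data_at_risk(protection_interval) <= rpo_target


# Hypothetical policies: nightly batch backup versus hourly hardware snapshots.
nightly = timedelta(hours=24)
hourly_snapshot = timedelta(hours=1)
rpo_target = timedelta(hours=4)

print(meets_rpo(nightly, rpo_target))          # False: up to 24 hours of new data at risk
print(meets_rpo(hourly_snapshot, rpo_target))  # True: at most 1 hour of new data at risk
```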
Page 19-28
Hitachi Data Instance Director
Data Protection is Complicated
• All of these many combinations of infrastructure, threats and service level objectives
(SLO) lead to a number of different technologies to meet these needs. No single
technology or solution works best for everything, but HDS is striving toward that goal.
• Continuous data protection captures every change as it is written to the disk and
therefore avoids the need for a backup window. It also provides an RPO of near zero.
But it consumes a lot of backup disk storage since it captures every change that is
written, as compared to the point-in-time differences found in other models. Continuous
data protection is appropriate for your most critical data, but probably in combination
with either periodic snapshots or backup to reduce the continuous data protection
storage footprint
• Snapshot technologies, such as Hitachi Thin Image, especially those embedded within
storage systems, are a modern way of capturing changed data in a fast, frequent and
efficient manner. They store the data locally on the same storage array as the primary
data, so they aren’t suitable to protect against system and site failures on their own, but
they do eliminate the backup window and enable more frequent RPOs and much faster
RTOs for operational recovery
• Replication, also known as mirroring, sends a copy of the data to another location for
recovery following a disaster. There are several forms of replication, including storage-
based mechanisms that are synchronous (metro distances) and asynchronous (global
distances); as well as replication of the backup repository within most backup software
applications
• An effective way to improve overall IT costs and performance is to move inactive data
from production systems to an archive tier of storage. This movement should be policy
based, selecting files by their creation date, last access or modification date, application
and data type, owner, or other factors. Using Hitachi Content Platform (HCP), which
includes many self-protecting capabilities, as the archive object repository eliminates the
need to back up the archive
Page 19-29
Hitachi Data Instance Director
Which Data Protection Options to Choose?
• Cloud storage is also becoming a popular, though potentially risky target for backup and
archive data. Cloud services provide a monthly, pay-as-you-grow subscription model
which can be very appealing in some situations. Security, resiliency, reliability and long-
term viability are all things to consider when choosing a public cloud provider.
Overall, you can see that a fairly typical environment can require hundreds or even thousands
of policies that drive a number of protection, retention and recovery tools.
• Batch backup: operational recovery for noncritical data
• Snapshot: application-aware, fast, low impact
• Archive / tier: long-term retention
Administrators do have a lot of choices on how to address specific data protection, retention
and recovery requirements. Our vision is to simplify these choices by providing a comprehensive,
unified solution.
• Can you afford them all with regard to hardware, software, services and personnel?
Page 19-30
Hitachi Data Instance Director
Which Data Protection Options to Choose?
Traditional backup is still the obvious choice for many applications and data types and is often
used in addition to some of the more advanced technologies, such as snapshots and replication,
to provide a point-in-time copy, longer retention, and so on. The downside of backup is the
amount of time it takes to run a backup operation; applications usually need to be stopped
for the duration of that “backup window”. A related challenge is RPO. Since backup causes downtime,
backups are often run at night, leaving a whole day’s worth of new data at risk.
Continuous data protection eliminates the backup window and provides a near-zero RPO (much
less risk of data loss), but can put additional load on production servers, networks and storage.
Snapshots can be performed either in software (e.g. Windows Volume Shadow Copy Service
[VSS]) or in hardware (e.g. Hitachi Thin Image). These point-in-time solutions are fast and can
be run frequently, but by themselves they don’t address disaster recovery requirements.
Replication is a method of moving data to another site for disaster recovery and is often paired
with backup, continuous data protection or snapshots. By itself, however, replication does not
address data deletion or corruption threats – you end up with 2 copies of deleted / bad data.
Archiving is a great method to reduce the strains of backup and restore by moving older / static
data to a lower cost tier of storage, often tape but also an object store like Hitachi Content
Platform (HCP).
Backup-as-a-Service cloud offerings are starting to become the solution of choice for a lot of
smaller organizations and remote offices because they can eliminate most of the complexity and
administration and potentially reduce costs. However, there are challenges:
Page 19-31
Hitachi Data Instance Director
When Data Disaster Strikes
and restore the right data, to the right place, in a timely manner (Source: U.S. National Archives and Records Administration)
As you add point solutions to handle specific needs, you are also adding cost, complexity and
risk.
IT managers and CIOs use whiteboards to create workflows and business processes. Hitachi
Data Instance Director does this as well. Use it like a whiteboard to create policies and data
flows, then enable it all within your environment with the click of a button.
Page 19-32
Hitachi Data Instance Director
Unique Graphical User Interface
• Support multiple tenants (users) accessing Hitachi block devices at the same time. Each
tenant has their own management GUI and access rights to a restricted set of resources,
including pools, ports, host groups and logical devices.
Page 19-33
Hitachi Data Instance Director
Example Deal With HDID
• Further positions HDS storage as the right choice for IT service providers, with the
ability to add modern, high-performance data protection services, supporting their
customers’ operational recovery and disaster recovery requirements.
Page 19-34
Hitachi Data Instance Director
Demo
Demo
http://edemo.hds.com/edemo/OPO/HitachiDataInstanceDirector_HDID/HDID/HDID.html
Page 19-35
Hitachi Data Instance Director
Online Product Overview
https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv
Page 19-36
Hitachi Data Instance Director
Module Summary
Module Summary
Page 19-37
Hitachi Data Instance Director
Module Review
Module Review
1. What does the integration of Hitachi Data Instance Director and Hitachi
Content Platform achieve?
A. Solves long-term retention needs
B. Improves system performance
C. Provides near-instant access to archive files and emails
D. Lowers operational and equipment costs
Page 19-38
20. Hitachi NAS Platform
Module Objectives
Page 20-1
Hitachi NAS Platform
Features
Features
Page 20-2
Hitachi NAS Platform
Hitachi NAS Platform Single-Node Portfolio
Hitachi NAS Platform single-node models (features/capacity/performance):
• 3080: 41K IOPS per node, 4PB max capacity
• 3090: 73K IOPS per node, 8PB max capacity
• 3090 PA: 96K IOPS per node, 8PB max capacity
• 4040: 65K IOPS per node, 4PB max capacity
• 4060: 70K IOPS per node, 8PB max capacity
• 4080: 105K IOPS per node, 16PB max capacity
License key / model dongle
Performance numbers are used for comparison purposes only. Hitachi NAS Platform 3090 is
shown both without and with Hitachi NAS Performance Accelerator; NAS Platform 3090 PA is the
3090 with the Performance Accelerator installed. For exact, customer-facing numbers, consult the
appropriate and updated performance documents: http://www.spec.org/sfs2008/results/sfs2008.html
3080 = Hitachi NAS Platform 3080 (is planned to be EOS for new sales July 2015)
3090 = Hitachi NAS Platform 3090 (is planned to be EOS for new sales July 2015)
3090 PA = Hitachi NAS Platform 3090 including Performance Accelerator license
4040 = Hitachi NAS Platform 4040
4060 = Hitachi NAS Platform 4060
4080 = Hitachi NAS Platform 4080
4100 = Hitachi NAS Platform 4100
The above specifications are according to HNAS line card Version 12.0.3528.04 (last revised:
07/16/2015 )
Page 20-3
Hitachi NAS Platform
Hitachi NAS 2-Node Cluster Portfolio January 2015
Hitachi NAS Platform 2-node cluster models (price versus features/capacity/performance):
• 4100: 280K IOPS, 32PB max capacity
• 4080: 210K IOPS, 16PB max capacity
• 4060: 140K IOPS, 8PB max capacity
• 4040: 130K IOPS, 4PB max capacity
• HNAS F1140: 13K IOPS, 336TB max capacity
• Although the Hitachi NAS Platform (HNAS) F1140 and F1120 are based on a different
hardware platform, they belong to the complete HNAS offering. The HNAS F series is not
covered in further detail, as this is a different offering.
• The HNAS F IOPS figure is an internal estimate, and the TB limit is restricted by the HDDs.
Page 20-4
Hitachi NAS Platform
The Family of HUS File and HNAS Models
Notes:
2. Dual-node configuration
No 10GbE, SMB Signing, scalability of 4040, basic differences with 3090 and so on
Heap settings
Performance numbers are only used for comparison purposes. HNAS 3090 is shown without and
with Performance Accelerator. HNAS 3090 PA is with Performance Accelerator installed. For
more exact and customer facing numbers consult the appropriate and updated performance
documents.
Page 20-5
Hitachi NAS Platform
The Family of HUS File and HNAS Models
http://www.spec.org/sfs2008/results/sfs2008.html
• Licensing
Page 20-6
Hitachi NAS Platform
System Hardware (Front View)
• 2 x 10G Ethernet cluster ports (XFP)
• 2 x 10G Ethernet network ports (XFP)
• 6 x 1G Ethernet network ports (1000Base-T copper)
• Private 10/100 Ethernet 5-port switch (100Base-T copper)
• 4 x 1/2/4G Fibre Channel ports (SFP)
• 2 x redundant, hot-swappable PSUs
• 2 x 10/100/1000 Ethernet management ports
Page 20-7
Hitachi NAS Platform
Hitachi NAS Platform 4060/4080/4100 Rear Panel
• Up to 8 aggregations
Page 20-8
Hitachi NAS Platform
Differences Between Models 4060 and 4080
• Up to 4 aggregations
The slide diagram contrasts the two models: a 4060 needs the Model 4080 license added to join a 4080 cluster, while a 4080 needs no key to join a 4060 cluster.
• A cluster-wide model type license is available for the Hitachi NAS Platform 4060 models.
• Applying this license to a 4060 will report the system as a 4080 and gain the limits of a
4080.
• When adding a 4060 node to a cluster or replacing a node with a spare node in an
existing cluster, the new node will inherit the model type from cluster-wide licenses.
• A new model key is needed only for replacement in 4080 single node configuration
scenarios.
Page 20-9
Hitachi NAS Platform
MMB and MFB Printed Circuit Boards
• Hitachi NAS Platform 3080 and 3090 documentation: Mercury Motherboard (MMB)
• NAS Platform 4040, 4060, 4080, and 4100 documentation: Main Motherboard (MMB)
• The Mercury main motherboard (MMB) contains a multi-core CPU and 8, 16, or 32GB of
system memory
o The Mercury FPGA board (MFB) contains all the FPGA functionality found in
Hitachi NAS models
Page 20-10
Hitachi NAS Platform
Logical Elements in HNAS
The slide diagram shows how the HNAS logical elements are built on HUS storage: RAID groups (RG) are presented to HNAS as system drives (SD 0-8), system drives are grouped into storage pools (SP 01, SP 02), file systems (FS01-FS04) are created within the storage pools, and shares (SHR) are exported from the file systems.
RG = Raid group
SD = System drive
SP = Storage pool
FS = File system
SHR = Share
Page 20-11
Hitachi NAS Platform
EVS Migration (Failover)
The slide diagram shows the same HUS storage layout (RAID groups, system drives SD 0-8, storage pools SP 01 and SP 02, file systems FS01-FS04) in the context of an EVS migration (failover).
RG = Raid group
SD = System drive
SP = Storage pool
FS = File system
SHR = Share
Page 20-12
Hitachi NAS Platform
CIFS Shares and NFS Exports
The slide diagram shows clients reaching EVS1 at 172.17.5.25 over LAG 1: SHR1 and SHR2 are CIFS shares on EVS1 (SHR2 mapped as drive X:), DIR01 is an NFS export (mounted as MNT2), and CIFS authentication is handled by Active Directory.
RG = Raid group
SD = System drive
SP = Storage pool
FS = File system
SHR = Share
Page 20-13
Hitachi NAS Platform
HNAS 4000 Software Licensing
The unified entry bundle is the minimum required software licensing bundle. In the entry
bundle we offer both NFS and SMB protocols. Primary deduplication is included by default on all
the Hitachi NAS Platform models.
With the unified value bundle, we offer everything that the entry bundle does, but we also
incorporate the data migrator software. The data migrator can be used when customers would
like to migrate files from one tier of storage to another tier within the NAS Platform cluster. This
is where the value bundle comes into play.
Page 20-14
Hitachi NAS Platform
HNAS Features
Finally, the unified ultra bundle incorporates everything that the entry and value bundles offer
and adds the ability to replicate from one HNAS cluster to a secondary HNAS cluster. In other
words, for customers requiring any sort of disaster recovery between 2 sites, the ultra bundle
would be of value. The ultra bundle includes cross volume links, which allow tiering of data to
another platform, for example Hitachi Content Platform or NetApp.
HNAS Features
Deduplication
HNAS data protection
• Hitachi Copy-on-Write Snapshot
• File clone/tree clone
• NDMP support
• Antivirus integration
• Replication
File based replication
Object based replication (Hitachi NAS Replication)
Data migration
• Internal
• External (to NFSv3, to HCP)
• To cloud (HCP/Amazon S3)
Universal migration
After Dedupe
Page 20-15
Hitachi NAS Platform
HNAS Platform Snapshots Implementation
• Snapshots are done within the file system and not with copy-on-write differential
volumes.
Quick Snapshot Restore is a licensed feature for rolling back one or more files to a previous
version of a Hitachi Copy-on-Write Snapshot.
For more information about this command line procedure, open the command line interface
(CLI) and run man snapshot, or refer to the Hitachi NAS Platform Command Line Reference.
If a file has been moved, renamed or hard linked since the snapshot was taken, Quick
Snapshot Restore may report that the file cannot be restored.
If the file cannot be Quick Restored, it must be copied from the snapshot to the live file system
normally.
Page 20-16
Hitachi NAS Platform
Register Hitachi Unified Storage Into Hitachi Command Suite
Add Storage
Report
HUS
• Adding the storage system to be monitored and handled by Hitachi Device Manager
(HDvM) is not any different than adding in block only storage.
• Adding the Hitachi NAS Platform servers to the database enables Device Manager to
match the information given by the storage device and the HNAS server using the WWN
from HNAS and storage as the key.
• Therefore, it is essential to use secure storage domains, even if only HNAS is connected
to the storage.
• Not using secure storage domains requires manual mapping after both the HNAS and the
storage are reported into the database.
Page 20-17
Hitachi NAS Platform
Register HUS File Module/HNAS Into HCS
SMU Register
Register Admin EVS
Admin EVS
• The SMU registers into the database with information from the selected range of single
nodes or clusters.
• This enables Device Manager to programmatically issue Hitachi NAS Platform commands
directly on the NAS Platform admin EVS instead of using link-and-launch to the SMU.
Page 20-18
Hitachi NAS Platform
SMU Registration
SMU Registration
The configuration frame to register the Hitachi NAS Platform cluster is found on the SMU under
Storage Management.
The IP address of the Hitachi Device Manager server, the port number and a user account are
required to let the SMU log in to Device Manager and report into the database.
If more than one entity is managed by the SMU, the user can select which entities are reported.
Page 20-19
Hitachi NAS Platform
Hitachi NAS File Clone
What is Cloning?
• The source file and new file share unmodified user data blocks.
• This is a new key feature as it allows administrators to rapidly deploy new virtual
machines by cloning the image/template file without consuming additional disk storage
space.
WFS-2 = Wise File System-version 2 – file system type used in a distributed environment
Page 20-20
Hitachi NAS Platform
Writable Clones
Writable Clones
The slide diagram compares the physical and logical views of a 256TB file system. At t1 a clone of a 50GB VMDK is created instantly, with the clone's pointers referencing the same blocks as the original VMDK; at t2 new content written to Clone A is stored in its own blocks, so live reads of unmodified data are served from the original VMDK while modified data is read from the clone's own blocks (the slide labels show 50GB VMDKs and a modified Clone A at 95GB). A small table on the slide also notes, for the two clone types compared, whether they are application initiated (API) (No / Yes) and whether they are licensed (No / Yes).
Page 20-21
Hitachi NAS Platform
Directory Clones
Directory Clones
• NDMP – File-system-aware backup and restore using standard 3rd-party NDMP software
Page 20-22
Hitachi NAS Platform
HNAS Replication Access Point Replication
The slide diagram shows access point replication across the network: a share/export on the source EVS is replicated to the target, with the arrow indicating the direction of replication.
Page 20-23
Hitachi NAS Platform
Promote Secondary
Promote Secondary
The slide diagram shows both the source and the promoted target presenting an EVS with its share/export.
Page 20-24
Hitachi NAS Platform
Data Migration Using Cross Volume Links
• CA AntiVirus protection
On demand/offline virus scanning is dependent on the supplier of the antivirus scan engine.
Always consult the current Hitachi NAS Platform Independent Software Vendor reference list for
updates or contact product management if not listed as GA in the Features and Availability
Report (FAR).
The slide diagram shows EVS 1 with file systems FS-1 and FS-2 and a file (F1) accessed through a cross volume link.
CVL-1 with 1KB links is not the default. From version 6.1 onward, creating a migration link uses CVL-2, also called XVL, by default.
Page 20-25
Hitachi NAS Platform
HNAS Data Migration to HCP
The slide diagram shows HNAS file systems (FS-1, FS-2) migrating data to HCP over HTTPS, with hash comparison used to verify the transfer and WORM protection applied on the HCP side, alongside NFS and HTTP access paths.
Page 20-26
Hitachi NAS Platform
Universal Migration
Universal Migration
The slide diagram shows the universal migration flow for an EVS with file systems FS-1 and FS-2 (on FC, SATA or WORM storage):
1. Create XVLs to the external files
2. Share the XVLs as files
3. Copy the data in the background, with a 4KB or 32KB CVL-2 pointing to the new position
The Hitachi Virtual Storage Platform G1000 is a very flexible offering, with multiple
configurations, including unified block/file choices. With the introduction of the Virtual Storage
Platform G1000, we are also making it much easier to buy and deploy unified storage offerings
as one key configuration. Hitachi NAS Platform (HNAS) is a leading high-performance NAS
engine in the industry and allows customers to further consolidate their NAS into their high-end
environment, eliminating separate management tools, consolidating business continuity and
disaster recovery practices and accelerating their NAS performance.
Page 20-27
Hitachi NAS Platform
Global-Active Device and HNAS Integration
• Adds disaster recovery features to the existing HNAS high availability cluster solution
• A stretched 2-node cluster with 2 copies of data, implemented using synchronous TrueCopy.
• Allows manual and automatic activation of the secondary copy of the data.
• Works for HUS, HUS-VM, VSP and VSP G1000 systems and all HNAS systems.
Page 20-28
Hitachi NAS Platform
Synchronous Disaster Recovery for HNAS Overview
The slide diagram shows a stretched cluster: EVSs and file systems at Location A are built on spans of primary (P) system drives, which TrueCopy mirrors to secondary (S) system drives at Location B.
Hitachi NAS Platform is aware of the mirror and the relationship between primary and
secondary disks (system drives). The NAS Platform works with the primary disks of the Hitachi
TrueCopy mirror, as the secondary disks are read-only. If the primary storage system fails,
Synchronous Disaster Recovery for HNAS offers a method to recover quickly by activating the
secondary disks.
Enterprise virtual server (EVS) failover is still managed by the usual HNAS cluster mechanisms.
Performing storage failover with Synchronous Disaster Recovery for HNAS will not necessarily
result in an EVS failover. EVS failover completes in a few seconds; storage failover can take
40-90 seconds.
Page 20-29
Hitachi NAS Platform
Why Is Global-Active Device Important to HNAS?
How global-active device is different from Synchronous Disaster Recovery for HNAS
For Synchronous Disaster Recovery for HNAS, you need to configure servers known as
replication monitoring station (RMS) to run the Synchronous Disaster Recovery for HNAS scripts
(2 x RMS - one in each data center, RMS is the Linux server with the scripts) that control site
failover by changing the status (P-VOL/S-VOL) of the storage volumes on the local and remote
sites. For global-active device, there is no need for a dedicated server to run any scripts. All
aspects of storage failover is accomplished by the storage system (for example, Hitachi Virtual
Storage Platform G1000).
Before global-active device, Synchronous Disaster Recovery for HNAS was the only available
technology for implementing a synchronous disaster recovery cluster for HNAS. Now it is simply
an alternative option for customers who have TrueCopy and want to use Synchronous Disaster
Recovery for HNAS with it.
Page 20-30
Hitachi NAS Platform
Why is Global-Active Device Important to HNAS?
• HNAS supports integration with global-active device from 12.2 with a number of caveats:
o Preferred paths to the primary storage system should be manually defined using
system drives path (sd-path).
o Only 3-site configurations, where the global-active device quorum and the system
management unit (SMU) reside on a third site, should be used.
o Uses the vital product data (VPD) code page read from the array to automatically
prefer the path to the global-active device primary volume (primary storage
system path)
o All the system drives of a storage pool must be set as P-VOL in the same storage system
o The EVS using a file system that is part of a storage pool should be online on the HNAS
node at the site where the storage holding the P-VOLs resides
o This brings the distance limitation between sites back to the 100km limit that applies to
Synchronous Disaster Recovery for HNAS
Page 20-31
Hitachi NAS Platform
Online Product Overview
https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv
Page 20-32
Hitachi NAS Platform
Module Summary
Module Summary
Page 20-33
Hitachi NAS Platform
Module Review
Module Review
Page 20-34
21. Hitachi Content Platform, Hitachi Data
Ingestor and HCP Anywhere
Module Objectives
Page 21-1
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Hitachi Content Platform
Custom metadata (annotations)
• The metadata a user or application provides to further describe an object
This bubble contains the actual data, system-generated metadata and custom
metadata/annotations.
This architecture allows for easy HW/SW upgrades and great scalability.
Object storage is a black box. Users and admins do not work with file systems, only with data
containers.
Page 21-2
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
What Is Hitachi Content Platform?
Hitachi Content Platform (HCP) is a distributed storage system designed to support large,
growing repositories of fixed-content data
An HCP system consists of both hardware and software
• Stores objects that include both data and metadata that describes the data.
• Distributes these objects across the storage space
• Presents the objects as files in a standard directory structure
An HCP repository is partitioned into namespaces
• Each namespace consists of a distinct logical grouping of objects with its own directory structure
HCP provides a cost-effective, scalable and easy-to-use solution to the enterprise-wide
need to maintain a repository of all types of data
• From simple text files and medical image files to multi-gigabyte database images
Access to HCP is via open standard access protocols: REST API over HTTP(S), WebDAV,
NFS, CIFS, SMTP and HS3 (Amazon S3-compatible)
Page 21-3
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP Basics
HCP Basics
• Hitachi Content Platform (HCP) is a distributed storage system designed to support large,
growing repositories of fixed-content data. HCP stores objects that include both data
and metadata that describes the data. It distributes these objects across the storage
space, but still presents them as files in a standard directory structure. HCP provides a
cost-effective, scalable, and easy-to-use solution to the enterprise-wide need to
maintain a repository of all types of data from simple text files and medical image files
to multi-gigabyte database images.
• HCP is optimized to work best with HTTP based APIs: REST and S3.
• REST API – Representational state transfer, stateless, using simple HTTP commands
(GET/PUT/DELETE)
o It is used by HCP-AW, HDI, HCP data migrator, HNAS and most 3rd party
middleware products
o HDS provides REST API developer’s guide – all our APIs are open and well
documented
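As a minimal illustration of the REST access described above, the following Python sketch ingests, reads and deletes an object in an HCP namespace using plain HTTP verbs. The host name, tenant, namespace, path and credentials are hypothetical, and the exact URL layout and Authorization header format should be verified against the HCP REST API developer’s guide for your release.

```python
import base64
import hashlib

import requests

# Hypothetical endpoint: https://<namespace>.<tenant>.<hcp-domain>/rest/<path>
BASE = "https://ns1.tenant1.hcp.example.com/rest"

# HCP-style Authorization header (base64 of the user name, md5 of the password);
# confirm the exact scheme in the REST API guide for your HCP version.
user, password = "dataUser", "secret"
HEADERS = {
    "Authorization": "HCP "
    + base64.b64encode(user.encode()).decode()
    + ":"
    + hashlib.md5(password.encode()).hexdigest()
}

# PUT (ingest) an object, GET it back, then DELETE it.
with open("q1.pdf", "rb") as f:
    requests.put(f"{BASE}/reports/2016/q1.pdf", data=f, headers=HEADERS, verify=False)

obj = requests.get(f"{BASE}/reports/2016/q1.pdf", headers=HEADERS, verify=False)
print(obj.status_code, len(obj.content))

requests.delete(f"{BASE}/reports/2016/q1.pdf", headers=HEADERS, verify=False)
```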
Page 21-4
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP Basics
o Thanks to the S3 API, it is possible to use any S3 client software; it will work with
HCP out of the box (a short example follows below)
• Comparing protocols
o Network File System (NFS) and Common Internet File System (CIFS) are value
added protocols
NFS and CIFS are good for migrations and/or application access
o Other protocols
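Building on the S3 point above, this is a minimal sketch of a standard S3 client (boto3) talking to an HCP namespace through the HS3 gateway. The endpoint URL, access keys and namespace (bucket) name are hypothetical; how the HS3 credentials are generated should be checked in the HS3 documentation for your HCP release.

```python
import boto3

# Hypothetical values: the HS3 endpoint is the tenant URL and a namespace is
# addressed like an S3 bucket; keys are generated per the HS3 documentation.
s3 = boto3.client(
    "s3",
    endpoint_url="https://tenant1.hcp.example.com",
    aws_access_key_id="GENERATED_ACCESS_KEY",
    aws_secret_access_key="GENERATED_SECRET_KEY",
    verify=False,  # many HCP systems use self-signed certificates
)

# Ingest an object into the "ns1" namespace and list what is stored there.
with open("q1.pdf", "rb") as f:
    s3.put_object(Bucket="ns1", Key="reports/2016/q1.pdf", Body=f)

listing = s3.list_objects_v2(Bucket="ns1")
print([obj["Key"] for obj in listing.get("Contents", [])])
```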
Page 21-5
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Fixed Content
Fixed Content
Page 21-6
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Categories of Storage
Categories of Storage
• There is also file-level storage, which can fit somewhere between block-level and object-
level
• Note that access to data stored in HCP is always facilitated over IP networks; no access
to data over a SAN is possible
• HCP 500 has HBAs to connect to back end storage systems over a SAN
Page 21-7
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Object-Based Storage – Overview
Page 21-8
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Retention Times
Retention Times
The slide chart (source: ESG) plots record retention requirements across periods ranging from 1 to 50 years.
While government regulations have a significant impact on content archiving and preservation
for prescribed periods, compliance does not necessarily require immutable or Write Once, Read
Many (WORM)-like media. In many cases, the need for corporate governance of business
operations and the information generated are related to the need to retain authentic records.
This requirement ensures adherence to corporate records management policies as well as the
transparency of business activities to regulatory bodies. As this chart illustrates, the retention
periods for records are significant, from 2 years to near indefinite.
Page 21-9
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Reviewing Retention
Sarbanes–Oxley
The Sarbanes-Oxley (SOX) Act, passed in the year 2002, outlines the procedures of storing
financial records. All companies and business organizations must comply with the SOX
procedures of financial records storage, ensuring that there are no accounting errors related to
scandals or illegal financial activities. The Sarbanes-Oxley Act legislates the time period during
which the financial records of the company must be maintained, along with the manner in
which the records should be kept.
Reviewing Retention
Retention Hold: A condition that prevents an object from being deleted by any means or
having its metadata modified, regardless of its retention setting, until it is explicitly released.
Page 21-10
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Policy Descriptions
Policy Descriptions
Retention
• Prevents file deletion before the retention period expires (for example, until May 21, 2036)
• Can be set explicitly or inherited
• Deferred retention option
• Can set a Retention Hold on any file
Shredding
• Ensures no trace of the file is recoverable from disk after deletion
Indexing
• Determines whether an object will be indexed for search
Versioning
• An object version consists of data, system metadata and custom metadata
• A new object version is created when data changes
Custom metadata XML checking
• Determines whether HCP allows custom metadata to be added to a namespace if it is not well-formed XML
Write Seldom Read Many (WSRM)
An HCP policy is one or more settings that influence how transactions and services work with objects, for example:
• Retention
• Shredding
• Indexing
• Versioning
Page 21-11
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP Integration With VLANs
One tenant may have a separate network for data and another for tenant management (the slide diagram shows a tenant with separate admin and data networks).
HCP supports virtual networking only for the front-end network through which clients
communicate with the system and through which different HCP systems communicate
with each other. HCP does not support virtual networking for the back-end network
through which the HCP nodes communicate with each other.
In HCP, logical network configurations are referred to simply as networks. Each
network has a name, an IP mode (IPv4, IPv6, or Dual), one or more subnets defined for
the network, IP addresses defined on each subnet for none, some, or all of the nodes in
the HCP system, and some other settings.
Page 21-12
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Multiple Custom Metadata Injection
Images such as X-rays and other medical scanning pictures have no content that can be
searched other than a file name, but can have embedded metadata such as billing details,
doctor and patient information and other relevant details regarding the actual object.
These details are invaluable for searching this type of content as functional in our Hitachi
Clinical Repository solution.
An HCP object can be associated with multiple sets of custom metadata. That is why we talk
about multiple custom metadata injection.
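As an illustration of attaching one of several custom metadata annotations to an object, here is a hedged Python sketch against the HCP REST gateway. The host, object path, annotation name and XML fields are made up, and the query parameters shown (type=custom-metadata, annotation=<name>) should be confirmed against the REST API documentation for your HCP version.

```python
import base64
import hashlib

import requests

user, password = "dataUser", "secret"  # hypothetical credentials
headers = {
    "Authorization": "HCP "
    + base64.b64encode(user.encode()).decode()
    + ":"
    + hashlib.md5(password.encode()).hexdigest()
}

url = "https://ns1.tenant1.hcp.example.com/rest/scans/xray_0042.dcm"
annotation_xml = b"<patient><id>12345</id><physician>Dr. Example</physician></patient>"
params = {"type": "custom-metadata", "annotation": "clinical"}

# PUT the named annotation onto the existing object, then read it back.
requests.put(url, params=params, data=annotation_xml, headers=headers, verify=False)
print(requests.get(url, params=params, headers=headers, verify=False).text)
```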
Page 21-13
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
It’s Not Just Archive Anymore
HCP can adapt the way no other content product can. It has a chance to grow in the archive
market and align to emerging markets such as the cloud. Think about active archiving. What
actually is archiving and what makes it active? Archiving means we are moving data from
expensive high performance storage to somewhere where it can be stored securely over long
periods of time. This is different from backup, where we create redundant copies. HCP has lots
of services that constantly work with data to ensure it is always healthy and securely stored.
The HCP services are what make archiving active. Old HCAP used to be a simple box with no
concept of multitenancy and with no authentication options. New HCP is a versatile and flexible
storage system that offers multiple deployment options. HCP is under very active development:
new features are added every year, and these features bring significant improvements in terms
of the possibilities the system can offer.
HCP always ensures backwards compatibility, meaning that even from the oldest system you
can upgrade to the newest version.
Because of this, there are some legacy features in the system, namely: default tenant, search
node references, blade chassis references, and so on.
Page 21-14
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Introducing Tenants and Namespaces
• Tenants: segregation of management
• Namespaces (NS 1 ... NS N): segregation of data
In HCP v3.X and v4.X releases, the concept of data access accounts existed.
The data access account contained a set of assigned access permissions that identified what
a user could or could not do.
In HCP v5.0, the data access account was eliminated and the definition of the permissions was
moved into the user account of the individual users (more on this later in the course).
HCP supports access control lists that allow users to manage permissions on the object level.
Page 21-15
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Internal Object Representation
HCP
Custom metadata
Page 21-16
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP – Versatile Content Platform
The slide diagram shows the storage HCP can use and tier to: running internal disks or disks on arrays, spin-down disks on arrays, HCP S nodes, NFS devices, S3-compatible storage, Amazon S3, Google Cloud, Hitachi Cloud and Microsoft Azure and compatible storage.
HCP can store data coming from different sources (protocols) and can store data coming from
Hitachi Data Ingestor (HDI), HCP Anywhere and more than 100 applications. HCP can tier data
and store it where it is needed
Page 21-17
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP Products
HCP Products
This section provides information on the Hitachi Content Platform (HCP) products.
• 2U server enclosure
• G10 servers can be mixed with existing Hitachi Compute Rack (CR) 210H and CR 220S
based HCP systems.
Page 21-18
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP G10 With Local Storage
No SAN connectivity
• Customers who purchase a local storage HCP G10 system with 6 internal hard drives can
expand the internal capacity later by purchasing a “six-pack” upgrade. These six drives
are installed in each applicable node and a service procedure is run to add them into the
system. All RAID group creation, virtual drive creation, initialization, or formatting is
handled automatically – no manual configuration is required.
HCP G10 replacement for HCP 500 and HCP 500 XL models
Page 21-19
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP S10
• OS is now always stored locally on the server’s internal drives, not on the array (as it
used to be in HCP 500). No requirement to set up boot LUNs on the HBA cards for
attached storage systems. Online array migration is possible on HCP G10 nodes because
the OS is stored on the internal drives.
HCP S10
The slide diagram shows the HCP S10 hardware: two controllers connected through a mid-plane, with 168TB (raw) when half populated and 336TB (raw) when fully populated.
• HCP S10 and S30 offer better data protection than Hitachi Unified Storage (HUS) and the
Hitachi Virtual Storage Platform (VSP) G family (20+6 erasure coding (EC) versus
RAID-5/RAID-6)
• HCP S10/S30 licensing costs are lower than comparable array configurations per TB.
Page 21-20
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP S30
HCP S30
HCP S30 has higher storage capacity (4.3PB usable) versus HUS/VSP G family
Page 21-21
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP S Node
HCP S Node
• The software delivers highly reliable and durable storage from commodity hardware
components.
• Offers fast data re-protection for the largest HDDs available now and in the future.
• Has self-optimizing features; the user does not have to be concerned with configuring,
tuning or balancing resources (HDDs).
• Besides a fully capable web user interface, the HCP S10 can be entirely managed and
monitored using the Management Application Programming Interface (MAPI).
• Communication between generic nodes and the HCP S10 nodes is S3 protocol based,
and as such ready to be supported by other HDS products like HNAS (August 2015).
• HCP objects stored on HCP S10 fully support retention, WORM, versioning,
compression and encryption.
Page 21-22
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Direct Write to HCP S10/S30
Any HCP model with v7.2 software now supports direct write to HCP
S10/S30
• HCP G10 supports 10G front-end Ethernet networking and 1G back-end Ethernet
networking
• Excellent performance locally or with HCP S10/S30 versus attached storage (see
following slides)
• HCP S30 has higher storage capacity (4.3PB usable) versus HUS/VSP G family
Page 21-23
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
VMware and Hyper-V Editions of HCP
Benefits
• Easy and fast deployment
• Aligns with VMware and Hyper-V features
• No HCP hardware is needed
Open virtualization format (OVF) templates are part of every new HCP SW version release.
Using OVF templates makes it faster to deploy HCP in VMware, as you do not have to create VMs
manually, nor do you need to install the OS.
If you wish to deploy four virtual nodes, you must deploy an OVF template 4 times.
When you have the required number of virtual nodes, you can start with HCP Application SW
install.
Page 21-24
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Hitachi Data Ingestor
Provides local and remote access to HCP for clients over CIFS and NFS
Page 21-25
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
How Does Hitachi Data Ingestor Work?
A Hitachi Data Ingestor (HDI) system provides services that enable clients on different
platforms to share data in storage systems. An HDI system consists of file servers called nodes
and storage systems in which data is compacted and stored. The HDI system provides a file
system service to clients by way of the network ports on the nodes.
The HDI model determines whether HDI nodes can be set up in a redundant configuration. A
configuration where nodes are made redundant is called a cluster configuration, and a
configuration where a node is not made redundant with another node is called a single-node
configuration.
Once a user reads a file, the file is transparently brought back into HDI
from HCP
• The file stays in HDI until the automated policy removes it from its cache
again
• If the file is changed after it is recalled, a new stub is created and the same
policy as above will apply
Page 21-26
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Hitachi Data Ingestor Overview
Features
• All content migrated and backed up in HCP
• Advanced cache management supports 400 million files
• Supports hundreds of users per node
• Transparent NAS migration for existing filers and servers
• File restore for user self-service
• Content sharing in a distributed environment
(The slide diagram shows Hitachi Data Ingestor at the edge connected to Hitachi Content Platform at the core.)
• Operating as an on-ramp for users and applications at the edge is Hitachi Data Ingestor.
Data Ingestor connects to Hitachi Content Platform at a core data center, no application
recoding is required for applications to work with Data Ingestor and interoperate with
Content Platform. Users work with it like any NFS or CIFS storage. Because Data
Ingestor is essentially a caching device, it provides users and applications with
seemingly endless storage and a host of newly available capabilities. Furthermore, for
easier and efficient control of distributed IT, Hitachi Data Ingestor comes with a
Management API that enables integration with Hitachi Content Platform’s management
UI and other 3rd-party/home-grown management UIs. Thanks to the Management API
of the Data Ingestor, customers can even integrate HDI management into their
homegrown management infrastructures for deployment and ongoing management.
o 100 namespaces: 100 file systems across all attached HDI systems
Page 21-27
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Hitachi Data Ingestor (HDI) Specifications
• HDI can be configured as a highly available Cluster Pair. These servers are SAN attached
to Hitachi storage. This serves as the user’s caching filer, mentioned in the previous
slide, where every single copy of a file gets stored eventually back to the HCP.
• The 3rd type of configuration is the HDI VMware Appliance. HDI is deployed on the
VMware Hypervisor. With this type of configuration, the customer defines the hardware
and storage configuration. The storage does not have to be Hitachi storage on the back
end.
• In addition, the single node, VMware and remote server configurations can be remotely
configured, provisioned and managed using Hitachi Content Platform Anywhere (HCP
Anywhere) and installed at the remote site by nontechnical personnel. Just plug it in,
power it up, and it will import everything from HCP Anywhere. In all configurations, HDI
acts as a tiering solution, copying its resident files to HCP, and maintaining access to
those files for on-demand recall.
Page 21-28
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Major Components: Server + HBA, Switch and Storage
Hitachi CR 220SM
Hitachi CR 210HM
The integrated cluster system is called an appliance and it is integrated with a HUS 110 system
only.
Protocols in Detail
CIFS
• Windows AD/NT authentication
• NTFS ACL
• Dynamic/Static user-mapping between Windows domain user and HDI
end user
• Level2 Opportunistic Lock support (Read client cache for multi users)
• Home Directory automatic creation
• ABE (Access based Enumeration) support
NFS
• NFS V4/V3/V2
• NIS/LDAP user repository
Page 21-29
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
How HDI Maps to HCP Tenants and Namespaces
Benefits
• Satisfy multiple applications, varying SLAs and workload types or organizations
• Edge dispersion: each HDI can access another when set up that way
Page 21-30
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Content Sharing Use Case: Medical Image File Sharing
The slide diagram shows Hospital-A and Hospital-B connected over a WAN, with read/write access to one namespace and read-only access to the other (namespace a and namespace b), so images written at one site can be read at the other.
When a file system capacity reaches 90% (the default), HDI deletes the files in excess
of the threshold and creates 4KB links (stubs) to replace them
• Users access (read) the files as they always had since links are transparent to clients
Recalled files are deleted from HDI later and replaced by another link, based on HDI’s
system capacity.
Page 21-31
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HDI Is Backup Free
Migration
• Every file written to HDI is migrated to HCP
• Migration is a scheduled event
• Each file system can have its own migration policy
• Default migration interval: once per day
Stubbing
• HDI keeps a copy of every file in its cache
• When the cache capacity reaches a defined threshold, candidate files are
deleted from the cache and replaced with a stub pointing to the file on HCP
Default threshold is 90%, threshold is tunable
Stub size is 4KB
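To illustrate the stubbing behavior described above (this is a toy model, not HDI's actual implementation), the sketch below replaces the oldest already-migrated files with 4KB stubs once cache usage crosses the 90% threshold; the class and variable names are invented for the example.

```python
from collections import OrderedDict

STUB_SIZE = 4 * 1024   # 4KB stub, as described above
THRESHOLD = 0.90       # default 90% capacity threshold (tunable)


class EdgeCache:
    """Toy model of an edge cache: files that have already been migrated to
    HCP can be replaced by small stubs when local usage crosses the threshold."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.files = OrderedDict()  # path -> (size_bytes, hcp_object_id or None)

    def used(self):
        return sum(size for size, _ in self.files.values())

    def write(self, path, size):
        # A new or changed file is not yet migrated, so it cannot be stubbed.
        self.files[path] = (size, None)
        self._stub_if_needed()

    def migrate(self, path, object_id):
        # After migration the file is safe to stub; remember its HCP object ID.
        size, _ = self.files[path]
        self.files[path] = (size, object_id)

    def _stub_if_needed(self):
        # Replace the oldest migrated files with 4KB stubs until usage
        # drops back under the threshold.
        for path in list(self.files):
            if self.used() <= self.capacity * THRESHOLD:
                break
            size, object_id = self.files[path]
            if object_id is not None and size > STUB_SIZE:
                self.files[path] = (STUB_SIZE, object_id)


cache = EdgeCache(capacity_bytes=100 * 1024)
cache.write("/exports/docs/report.doc", 60 * 1024)
cache.migrate("/exports/docs/report.doc", "Object-ID1")
cache.write("/exports/docs/new.xls", 50 * 1024)  # pushes usage past 90%, report.doc is stubbed
print(cache.used())
```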
• Recovery point objective (RPO) is the maximum tolerable period in which data might be
lost from an IT service due to a major incident
• Recovery time objective (RTO) is the duration of time and a service level within which a
business process must be restored after a disaster
Migration
The slide diagram shows migration from an HDI file system to an HCP namespace: files file1 through file5, each with a creation or modification date, are migrated to HCP (Migrated 1-5) and receive object IDs (Object-ID1 through Object-ID5); migrated files that have been stubbed keep only their metadata and a reference to the HCP object.
Page 21-32
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HDI Intelligent Caching: Stubbing
Stubbing
The slide diagram shows stubbing: migrated files in the HDI file system are replaced by stubs that hold only metadata and a reference to the corresponding HCP objects (Object-ID1 through Object-ID5).
Features
• Enables write once read many file system
Protects from intentional or unintentional modification
File cannot be deleted during retention period if assigned
• Customer application can take advantage via published API
• “Auto-commit” automatically creates WORM file from read only* file
Item: Read/write file
• Write and read the file
Item: Read-only* file
• Write not allowed
• Write allowed after removing the read-only flag*
• Deletion allowed
Item: WORM file with retention
• Write NOT allowed regardless of write permission
• Delete not allowed regardless of write permission
Item: WORM file with expired retention
• Write NOT allowed
• Deletion allowed if write permission is set
* Read-only is a state where the write permission of the file is off or the file has the CIFS read-only attribute.
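As a client-side illustration of the auto-commit behavior summarized above, the sketch below removes the write permission bits from a file on an HDI export; with auto-commit enabled on the file system, such a read-only file becomes a WORM file. The mount point and file name are hypothetical.

```python
import os
import stat

# Hypothetical NFS mount of an HDI file system.
path = "/mnt/hdi_share/contracts/agreement_2016.pdf"

# Drop all write bits so the file is "read only" in the sense used by HDI
# auto-commit; with auto-commit enabled, HDI turns such files into WORM files.
mode = os.stat(path).st_mode
os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
```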
Page 21-33
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Roaming Home Directories
The slide diagram shows an HCP tenant with a namespace for HDI 'A', a namespace for HDI 'B', and a namespace for shared home directories.
HDI with Remote Server has the following characteristics that separate
it from the other HDI configurations
• Small form factor
• Simple to set up
• Remote management from the central data center via HCP Anywhere
• Costs $1,500 - $2,000 per system, including HDI software
Page 21-34
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Why HDI With Remote Server?
There is no IT presence
• No raised floor data center, no racks
• Typically administered remotely for networking and other core services
Solution Components
• HCP: platform for DR and long-term storage of data from HDI with Remote Server
• HCP Anywhere
This section presents Hitachi Content Platform Anywhere and describes how it operates. HCP
Anywhere is a software product for archiving, backup and restore.
Page 21-35
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP Solution With HCP Anywhere
An HCP Anywhere system consists of both hardware and software and uses Hitachi
Content Platform (HCP) to store data
HCP Anywhere is a combination of hardware and software that provides 2 major features:
• File synchronization and sharing
This feature allows users to add files to HCP Anywhere and access those files from nearly any
location
When users add files to HCP Anywhere, HCP Anywhere stores the files in an HCP system and
makes the files available through the user's computers, smartphones and tablets
Users can also share links to files that they have added to HCP Anywhere
• HDI device management
This feature allows an administrator to remotely configure and monitor HDI devices that have
been deployed at multiple remote sites throughout an enterprise
Hitachi Content Platform Anywhere (HCP Anywhere), is an option for Hitachi Content Platform
that provides a fully integrated, on-premises solution for safe, secure file synchronization and
sharing.
Also called “sync and share,” these are cloud-based software packages that connect a number
of devices to the same set of files. Think of consumer offerings like Dropbox and Apple iCloud
where a file can be created or added on one device and shows up on all other devices
registered to that account. Users love them because their desktop files are also on their laptops,
their smartphones and their tablets. Sync and share works by storing data in the cloud so that
any web-enabled device can send and receive updates. In the context of enterprise IT, this
technology can be a big headache. It puts corporate data outside of IT’s control and into risky
consumer clouds. What both parties need is a solution that IT departments can deploy on their
terms and lets users sync and share their work related files in a safe and secure manner.
An HCP customer just needs the base hardware, an HCP Anywhere POD and seat licenses for
each user.
Page 21-36
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Hitachi Content Platform Anywhere
File sync and mobile NAS access from your own cloud
• It’s safe and secure – encryption, access control, on-premises, IT managed, remote
wipe, and more
• It’s easy to use – active directory integration, client apps, self-registration, and more
• It’s efficient – backup free, compression, single instancing, spin-down, multiple media
types, metadata only, and more
Page 21-37
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP Solution With HCP Anywhere
The slide diagram shows clients connecting to HCP Anywhere over HTTPS across public or private networks, with HCP Anywhere and HCP on the internal network.
An HCP Anywhere system consists of both hardware and software and uses Hitachi Content
Platform (HCP) to store data.
HCP Anywhere is a combination of hardware and software that provides two major features:
This feature allows users to add files to HCP Anywhere and access those files from nearly any
location. When users add files to HCP Anywhere, HCP Anywhere stores the files in an HCP
system and makes the files available through the user's computers, smartphones and tablets.
Users can also share links to files that they have added to HCP Anywhere.
An HCP Anywhere system includes 2 servers, called nodes, that are networked together. The
physical disks in each node form 3 RAID groups. Both nodes run the complete HCP Anywhere
software. Additionally, the system keeps copies of essential system data on both nodes. These
features combine to ensure the continuous availability of the system in case of a node failure.
There are 4 major pieces to creating an HCP Anywhere solution. First is the HCP itself as it
provides the storage platform on which HCP Anywhere runs. For HCP Anywhere there are 3
components: the base hardware consisting of 2 Dell Ethernet switches, the HCP Anywhere POD
Page 21-38
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Desktop Application Overview
Contents are automatically synchronized with the HCP Anywhere system and
other devices on which you have installed HCP Anywhere
Search for Hitachi Content Platform Anywhere in the App Store from
your iOS device, or play store from your Android device
Page 21-39
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
HCP Anywhere Features
Demo
https://www.hds.com/groups/public/documents/webasset/content-archive-platform.html?M=content-platform_data-ingestor
https://www.hds.com/hdscorp/groups/public/documents/webasset/hitachi-cp-anywhere-opo.html
Page 21-40
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Online Product Overviews
https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv
Module Summary
Page 21-41
Hitachi Content Platform, Hitachi Data Ingestor and HCP Anywhere
Module Review
Module Review
Page 21-42
22. Hitachi Compute Blade and Hitachi
Unified Compute Platform
Module Objectives
Page 22-1
Hitachi Compute Blade and Hitachi Unified Compute Platform
Hitachi Compute Portfolio
Page 22-2
Hitachi Compute Blade and Hitachi Unified Compute Platform
Hitachi Compute Blade 500 Series
Hitachi Compute Blade 500 combines the high-end features needed in today's mission critical
data center with the high compute density and adaptable architecture you need to lower costs
and protect investment. The flexible architecture and logical partitioning feature of Hitachi
Compute Blade 500 allow configurations to exactly match application needs, and multiple
applications to easily and securely co-exist in the same chassis.
The CB 500 is integrated into a number of solutions including several of the Unified Compute
Platform (UCP) offerings.
• Hardware based logical partition (LPAR) capability for robust, secure, high performance
virtualization
• IO flexibility via expansion blades, to add optional dedicated disk storage or I/O
expansion capability
Page 22-3
Hitachi Compute Blade and Hitachi Unified Compute Platform
Compute Blade 500 Chassis And Components
Front Rear
Here is a front and rear view of a Compute Blade 500 chassis. These images are taken from the
Web Console view of a training system and it is not fully populated.
• Power Supplies
• Fans
• Network switches
The server blade slots are in the front of the CB 500 chassis. The CB 500 front panel is also
located in the front of the chassis. The front panel includes status LEDs and two USB ports.
This information and comparable diagrams can be found in the documentation including in the
following resources:
Page 22-4
Hitachi Compute Blade and Hitachi Unified Compute Platform
Hitachi Compute Blade 500 Series
Enterprise-Class Capabilities Hitachi Compute Blade 500 is a true enterprise-class blade server,
and it is important to understand exactly what that term means. It is "enterprise-class" in terms
of performance, scalability, reliability and configuration flexibility, as outlined below:
• Performance: CB 500 supports blades based on the latest and most powerful Intel
Xeon E5v3 and E7v3 series processors with up to eight CPUs (in SMP mode). It meets
the performance needs of large-scale systems that require extremely high compute
power and I/O today. The extensible CB 500 architecture can support multiple blade
types, including future generations of Intel processor. Standard-width CB 500 blades can
also be expanded to support additional high-density disk (HDD) storage or additional
PCI slots with an expansion blade installed in an adjacent blade slot.
Page 22-5
Hitachi Compute Blade and Hitachi Unified Compute Platform
Hitachi Compute Blade 500 Series
• Configuration flexibility: CB 500 supports Windows and/or Linux OSs, and a wide
range of virtualization solutions, including native LPAR, providing a high level of
flexibility and investment protection. The system can easily be configured to the exact
number of sockets, processor cores, I/O slots, memory and other components required
to optimally support your application without bottlenecks. The chassis can be configured
and managed via a simple HTML-based GUI web interface, which is seamlessly integrated
with the Hitachi Command Suite tools used to manage Hitachi storage products.
Reference: https://www.hds.com/assets/pdf/hitachi-compute-blade-500-whitepaper.pdf
Page 22-6
Hitachi Compute Blade and Hitachi Unified Compute Platform
CB 500 Web Console
• The Compute Blade 500 Web Console GUI interface is accessed using a supported web
browser from a correctly configured system console computer. The CB 500 Web Console
is accessed by using the CB 500 Management Module’s IP address on the customer’s data
center management LAN.
• The CB 500 Web Console maintains consistency with the style, “look and feel” of the
Hitachi Command Suite (HCS 7.x) product.
• The Compute Blade 500 Web Console has 4 view tabs, Dashboard, Resources, Alerts,
and Administration. The Dashboard view, shown here, is displayed by default when you
connect to the Web Console. From the Dashboard view you can quickly determine which
components are installed in the CB 500 and their status. This is visually represented by
the front and rear view chassis graphics in the center of the display.
• Other important information shown in the header area includes the CB 500
Management Module IP address, a label identifying this as the Web Console interface,
the system type (Compute Blade 500) and the user who is signed on.
Page 22-7
Hitachi Compute Blade and Hitachi Unified Compute Platform
Hitachi Compute Blade 2500 Series
With the latest Intel Xeon E5v3 and E7v3 family processors, Hitachi Compute Blade (CB) 2500
delivers enterprise computing power and performance, as well as unprecedented scalability and
configuration flexibility. This helps you to lower costs and protect investment. The flexible
architecture and logical partitioning feature of CB 2500 allow configurations to exactly match
application needs, and enable multiple applications to easily and securely co-exist in the same
chassis.
Page 22-8
Hitachi Compute Blade and Hitachi Unified Compute Platform
Compute Blade 2500 Components - Front
• Hardware based logical partition (LPAR) capability for robust, secure, and high
performance virtualization.
• I/O flexibility via a choice of fabric options, including IP, FC and converged fabric, PCIe
cards and expansion blades.
2 management modules
Hardware that makes up the CB 2500: server chassis, server blade, PCI expansion blade,
management module, switch module, I/O board module, power supply module, fan module,
and fan control module.
Page 22-9
Hitachi Compute Blade and Hitachi Unified Compute Platform
Compute Blade 2500 Components - Rear
6 power-supply modules
2 switch modules
10 fan modules
The following shows the number of modules that can be installed in the CB 2500 server chassis:
• 8 fan modules are installed, along with the 2 fan control modules that control them.
This information and comparable diagrams can be found in the documentation including in the
following resources:
Page 22-10
Hitachi Compute Blade and Hitachi Unified Compute Platform
Hitachi Compute Blade 2500 Series
• The Web console runs on a Web browser that is set up in the system console. The Web
console can manage and set all of the equipment installed in a server chassis.
Page 22-11
Hitachi Compute Blade and Hitachi Unified Compute Platform
Server Blade Options
• http://www.hds.com/products/compute-blade/compute-blade-500.html?WT.ac=us_mg_pro_cb500
Page 22-12
Hitachi Compute Blade and Hitachi Unified Compute Platform
Compute Blade Platform Features
Run on x86 technology and also support industry standard PCIe slots
• Reduce cost through flexibility: with SMP scalability, you can plan and purchase for
today's needs without worrying about, or paying for, excess resources you may need tomorrow
• Have greater security, isolation and performance with the Hitachi logical partitioning
feature (LPAR), which delivers significantly lower cost enablement than other software
solutions
• Support changing requirements with an open industry standard blade hardware platform.
Get away from proprietary UNIX server environments and switched-only I/O fabrics
• Hitachi blade servers run on x86 technology and also support industry standard PCIe
slots
• One single vendor: Whether it’s enterprise storage or servers, Hitachi is the only partner
who can provide enterprise level hardware and support services on a global scale
• Blade Server 2000 covers a wide range of computing environments from PC server
consolidation to mission critical systems
o Investment in one single platform can meet a wide variety of computing needs
thus lowering total support and upfront capital costs
• Hitachi symmetric multiprocessing (SMP) technology lets you purchase server resources
to satisfy compute requirements today without the need to anticipate future growth and
compute requirements of tomorrow
Page 22-13
Hitachi Compute Blade and Hitachi Unified Compute Platform
Compute Blade Platform Features
o When additional resources are required, purchase the additional blades and SMP
connectors to scale up your server environment to satisfy future growth
• With up to 1TB addressable memory when using 4 blade SMP, a wide range of high
performance applications can run on the Blade Server 2000 platform, alleviating the
need for high end UNIX systems
o This reduces the high hardware, software and support costs that are typically
associated with UNIX or RISC systems
• Balanced architecture design of Hitachi blade servers ensures there are no bottlenecks
o The hybrid I/O design of Blade Server 2000 supports both an integrated
switched fabric and direct access PCIe Gen 2 Slots to meet the I/O requirements
of the most demanding applications
o Using SSD in a Blade Server 320 lets you support applications that require faster
storage access
Page 22-14
Hitachi Compute Blade and Hitachi Unified Compute Platform
What Is Logical Partitioning?
A hypervisor (also known as a virtual machine monitor) is a virtualization platform that allows
multiple operating systems to run on a host computer at the same time. The term usually refers
to an implementation using full virtualization. Hypervisors are currently classified into 2 types:
• Type 1 hypervisor is software that runs directly on a given hardware platform (as an
operating system control program)
o A guest operating system thus runs at the second level above the hardware
o The classic type 1 hypervisor was CP/CMS, developed at IBM in the 1960s,
ancestor of IBM's current z/VM
o More recent examples are Xen, VMware's ESX Server and Sun's Logical Domains
Hypervisor (released in 2005)
• Type 2 hypervisor is software that runs within an operating system environment
o A guest operating system thus runs at the 3rd level above the hardware
• http://en.wikipedia.org/wiki/Hypervisor
Page 22-15
Hitachi Compute Blade and Hitachi Unified Compute Platform
Compute Rack Server Family
CR 210H xM
CR 220H xM
CR 220S xM
• Hitachi and Hitachi Data Systems offer a line of Compute Rack servers
• Compute Rack 210 servers require 1 rack unit (1U) of space and Compute Rack 220
servers take up 2 rack units (2U)
• Each of these rack servers offers the ability to configure 1 or 2 Intel Xeon processors
o The High Storage Capacity server gives up some memory in trade for the ability
to offer more configured HDDs
• It is valuable to mention the Compute Rack servers here as some of the Hitachi Unified
Compute Platform (UCP) solutions use Compute Rack servers
• The Compute Rack servers may be part of the solution’s core functionality or may be
configured as management server(s) for the UCP solution
Page 22-16
Hitachi Compute Blade and Hitachi Unified Compute Platform
Integrated Platform Management
Service Processor
LPARs
Chassis
Hitachi Compute Systems Manager is a standalone set of optional management tools designed
for data center management of multiple chassis via an intuitive graphical interface that provides
point-and-click simplicity. At the system level, HCSM provides centralized management and
monitoring of extended systems containing multiple chassis and racks.
Unified Dashboard
HCSM allows the various CB 500 system components to be managed through a unified interface,
which is seamlessly integrated with Hitachi Command Suite. When rack
management is used, an overview of all Hitachi Compute Blade racks, including which servers,
storage and network devices are installed, can be quickly and easily obtained. In the event of
any system malfunction, the faulty part can be located at a glance.
In addition, HCSM software provides the ability to define and manage the logical system
configuration of each element to be managed by using the service name. With traditional blade
servers, management of both the logical system and the system's physical resources is required.
When definitions are made with service names (such as sales or stock) within Hitachi Compute
Blade management suite, there is no longer any need for administrators to concern themselves
with the management of physical resources.
HCSM provides centralized system management and control of all server, network and storage
resources. This includes the ability to set up and configure servers, monitor server resources,
integrate with enterprise management software (SNMP), phone home and manage server
assets.
Page 22-17
Hitachi Compute Blade and Hitachi Unified Compute Platform
Hitachi Compute Systems Manager
HCSM provides
• Usability (GUI integrated with HCS)
• Scalability (10,000 heterogeneous servers)
• Maintainability and serviceability
Page 22-18
Hitachi Compute Blade and Hitachi Unified Compute Platform
HCSM Resources – Compute Blade Servers
Page 22-19
Hitachi Compute Blade and Hitachi Unified Compute Platform
Demo
http://edemo.hds.com/edemo/OPO/3D_CB2500/CB2500_Main.html
http://edemo.hds.com/edemo/OPO/CB500/CB500.html?M=cb500-res
Page 22-20
Hitachi Compute Blade and Hitachi Unified Compute Platform
Unified Compute Platform – One Platform for All Workloads
(Diagram: the UCP stack. Service orchestration and infrastructure layers support bare metal, OS and hypervisor workloads on compute blades, IP and SAN networks, and storage plus DR and backup; reference architectures for ISV software such as Microsoft Exchange, SharePoint, SAP ERP and Citrix XenDesktop are integrated with the ISV software and backed by a single point of support.)
Open, Reliable Platform with High Performance and Automation
• The Hitachi Unified Compute Platform takes the four basic infrastructure components of
compute, storage, network and software and unifies them into a single platform
solution. It is a bundled solution offering, so you can create a more modern and nimble
data center.
• The Hitachi Unified Compute Platform is designed from the business requirements down,
and built from the bottom up to achieve a more converged virtualized infrastructure,
leveraging the converged stack to make the most efficient decisions about how to
perform the function, and then executing it consistently and predictably based on
architecture design created to support the business requirements. This end-to-end
platform supports multiple architectures, both stateless and stateful, composed of
infrastructure components from multiple vendors, resulting in a holistic converged IT
environment that is aligned to meet your business needs.
• This new way of leveraging the converged infrastructure stack allows us to execute
based on the most efficient path, the overall architecture, and the business
requirements.
Virtual solution that’s flexible and scalable, transforming data center infrastructure into a private
cloud at your own pace
Page 22-21
Hitachi Compute Blade and Hitachi Unified Compute Platform
UCP With Unified Compute Platform Director
The enterprise-class solutions combine the highest quality systems and the most advanced
architecture. UCP creates a framework on which to build a robust converged cloud
infrastructure that includes data protection and management capabilities. UCP includes best-of-
breed Hitachi blade servers (powered by Intel Xeon processors), industry-leading Hitachi
storage, SAN switches from Brocade, and Ethernet networks from Brocade or Cisco. Hitachi
blade servers are known for their superior quality and have advanced functionality that makes
them uniquely suited to support mission-critical applications and a converged cloud
infrastructure.
Page 22-22
Hitachi Compute Blade and Hitachi Unified Compute Platform
Unified Compute Platform Family Overview
Each UCP solution for VMware vSphere and Microsoft Private Cloud (Hyper-V®) is configured to
maximize the value of your server virtualization environment. Equipment is neither
overpurchased nor overprovisioned. It is architected as a fully integrated platform for deploying
IT as a service. And it improves organizational agility by quickly deploying new applications and
services to respond to changes in business needs and integrate them into current environments.
Page 22-23
Hitachi Compute Blade and Hitachi Unified Compute Platform
Unified Compute Platform Family Overview
Server:  2U4N rack (4-16) | 2U4N rack (4-16) | 2U4N rack (4-16) | Cisco UCS
Storage: Internal disks   | Internal disks   | VSP G200         | HUS VM, VSP G1000
• The UCP 6000 high end converged system is designed for high-end application
optimized environments. It integrates the Hitachi CB 2500 high performance blade
servers and supports mid to high-end storage systems such as HUS VM and VSP G1000.
For example, solutions for SAP and Oracle that use the CB 2500 chassis and deliver the
highest performance and availability will be called UCP 6000 for SAP HANA and UCP
6000 for Oracle Database RAC.
• The UCP 4000 mid-range converged system is designed primarily for enterprise-class
virtualized workloads as well as specific application optimized environments. It uses the
CB500 high-density blade chassis. UCP 4000 for VMware vSphere and UCP 4000 for
Microsoft Private Cloud are examples of these solutions. For UCP solutions that limit
scaling to a maximum of 16 blade servers, the models include letter “E” for “Entry Level”
(e.g. UCP 4000E for VMware vSphere).
• The UCP 2000 entry level system is targeted at ROBO (Remote Office or Branch Office)
for tier 2 and tier 3 applications. It integrates a new 2U 4Node rackmount server as well
as the new VSP GS200 with networking. UCP 2000 VMware vSphere is the first of many
solutions which will use this model number.
Page 22-24
Hitachi Compute Blade and Hitachi Unified Compute Platform
Unified Compute Platform 4000E – Entry-Level
• The UCP 1000 hyper-converged system is also targeted at ROBO and tier 2 and tier 3
applications. It uses a rackmount server with internal disks. The first solution is the
UCP 1000 for VMware EVO: RAIL, which includes virtual SAN and virtual networking with
its hypervisor (virtualization software).
Page 22-25
Hitachi Compute Blade and Hitachi Unified Compute Platform
Unified Compute Platform 4000E – Entry-Level
System Components
Unified Compute Platform Director (for VMware or
Microsoft)
Hitachi Compute Blade 500 (up to 16 blades)
Choice of modular or enterprise storage
• Hitachi Unified Storage VM or HUS 130;
Hitachi Virtual Storage Platform (VSP) bolt-on
Converged networks
• Cisco Nexus 5548
Non-HA management server (Optional HA cluster)
Page 22-26
Hitachi Compute Blade and Hitachi Unified Compute Platform
Demo
https://www.hds.com/go/tour-ucp/
https://www.youtube.com/playlist?list=PL8C3KTgTz0vbNzdbi_rEQFkKHrZlDvXcv
Page 22-27
Hitachi Compute Blade and Hitachi Unified Compute Platform
Module Summary
Page 22-28
Hitachi Compute Blade and Hitachi Unified Compute Platform
Your Next Steps
@HDSAcademy
Check your progress in the Learning Path.
Certification: http://www.hds.com/services/education/certification
Learning Paths:
Page 22-29
Hitachi Compute Blade and Hitachi Unified Compute Platform
Your Next Steps
Page 22-30
A. Hitachi Enterprise Storage Hardware –
Hitachi Virtual Storage Platform
Module Objectives
• Hitachi Virtual Storage Platform G1000 Hardware Guide
• Mainframe Host Attachment and Operations Guide
Page A-1
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Hitachi Enterprise Storage Hardware Overview
• Hitachi Virtual Storage Platform G1000 Performance Guide
• Hitachi Virtual Storage Platform G1000 Product Overview
• Hitachi Virtual Storage Platform G1000 Provisioning Guide for Mainframe Systems
• Hitachi Virtual Storage Platform G1000 Provisioning Guide for Open Systems
• Open-Systems Host Attachment Guide
Page A-2
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
VSP Full Configuration – 6 Rack
                1 Module      2 Module
HDD (2.5")      1,024         2,048
HDD (3.5")      640           1,280
CHA ports       80 (96*1)     176 (192*1)
Cache           512GB         1,024GB
*1: ALL CHA (diskless) configuration
(Diagram: the six racks of a fully configured 2-module system, labeled RK-12, RK-11, RK-10, RK-00, RK-01 and RK-02.)
• A fully-configured VSP system contains 2 DKC Boxes, one in each of 2 separate racks, and
16 HDU Boxes
• The rack naming convention is a bit different from the RAID 600 USP V.
Page A-3
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
VSP 2-Module System
o The HDU racks associated with a DKC rack use the same left digit followed by
a 1 or 2, depending upon their physical position relative to the DKC rack
Each module has 1 logic box and HDD box connected with it
The basic module and option module are called Module-0 and Module-1
respectively
Module-0 and Module-1 are connected via Grid Switch (GSW) PCB
(Diagram: the Option Module (Module-1) and Basic Module (Module-0) side by side, each with HDD Boxes #x-0 through #x-7 arranged around it.)
Page A-4
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Control and Drive Chassis Structure
(Diagram: a 14U control chassis, with FED x 8, BED x 4, cache x 8, SAS x 4 and PS x 4, above a 13U 3.5 in. drive chassis.)
• Looking at the control chassis, the design is a more modular, blade style structure
• Virtual Storage Directors and Cache adapters are added to the front
• FEDs, BEDs and Grid Switch adapters are added to the back
• Services Processors are accessed from the back of the system as well
• Two control chassis (14 rack units high) can be combined to operate as a single unit
• The drive chassis (13 rack units high) contain either 2.5” or 3.5” drives
• Fans in the opposite side run faster to move air when the other side is open
Page A-5
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
19 Inch Industry Standard Width Rack – DKC
(Diagram: front and rear views of the 42U, 19-inch frame, with 13U DKU Boxes mounted above a 14U DKC Box.)
o The rack is a Hitachi-custom rack that conforms to the industry standard 19 inch
width
• The DKC Box and DKU Box containers are used to hold the VSP components
o The DKC Box is 14U high and the DKU Box is 13U high
o The first rack will contain 1 DKU Box, 1 front and 1 rear
Page A-6
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Disk Unit (DKU) Frames
(Diagram: a DKU frame holding three 13U DKU chassis, each either an SFF or an LFF chassis.)
• The VSP DKU frame is also a 19 inch industry standard width rack
• A rack that is used as a DKU frame can hold 3 DKU Boxes, each 13U high
• A hard disk unit (HDU) Box contains HDDs in both the front and rear
• The VSP supports 2 different internal HDU Box structures — one that holds Large Form
Factor (LFF) 3.5” disk drives and one that holds Small Form Factor (SFF) 2.5” disk drives
o One HDU Box can be either for LFF or SFF disk drives but not mixed
o LFF and SFF HDU Boxes can be mixed in any configuration in the VSP
Page A-7
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
DKC Components
• The battery and the Cache Flash Memory are also installed in the CMA to prevent data
loss from a power outage or other event
• The storage system continues to operate when a single point of failure occurs, by
adopting a duplexed configuration for each control board (CHA, DKA, CPC, GSW and
VSD), a redundant configuration for the AC-DC power supply and the cooling fan
• The addition and replacement of components and the upgrade of the microcode
are online operations
• The SVP allows the engineers to set and modify the system configuration information,
and also can be used for checking the system status
• The SVP can also be configured to report system status and errors to the service center and
to enable remote maintenance of the storage system
Page A-8
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Dual Cluster Structure of the DKC
(Diagram: front view of DKC-0 with the DKC panel opened, and rear view of DKC-0, showing the slot positions of the cache boards (CACHE-2Cx), power supplies (DKCPS-0x), fans (DKCFAN-0xx), CHA and DKA/CHA boards, MP boards, ESW boards, SSVP and SVP.)
o The diagrams on this page show the front view and rear view of DKC#0
o In both the front and rear of the DKC Box, Cluster 1 components are found in
the lower slots and Cluster 2 components are found in the upper slots of the DKC
Box
• When a VSP system includes 2 modules, both DKC#0 and DKC#1, the DKC slots in the
second module have different identification codes
o The main printed circuit board (PCB) types are found in the same slot locations
and the cluster boundaries are the same in both DKC modules
• PCB types
Page A-9
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
DKC Components
Note: One Option consists of 2 PCBs. One gets installed in Cluster 1 (CL1) and the second in
Cluster 2 (CL2).
Page A-10
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Back-End Director and Front-End Director Pairs
• The Back End Director (BED) boards execute all I/O jobs received from the processor
boards and control all reading or writing to disks
o There are up to 640 LFF disks or 1024 SFF disks per chassis attached to the 16
or 32 6Gb/sec SAS links from these 2 or 4 BED boards
Page A-11
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
DKC Components
• The Data Cache Adapter (DCA) boards are the memory boards that hold all user data
and the master copy of Control Memory (metadata)
• There are up to 8 DCAs installed per chassis, with 8GB to 32GB of cache per board
(32GB to 256GB per chassis) when using the current 4GB DIMM (as part of the 16GB
feature set)
o The 2 boards of a feature must have the same RAM configuration, but each DCA
feature can be different
• The first 2 DCA boards in the base chassis (but not in the expansion chassis) have a
region of up to 48GB (24GB per board) used for the master copy of Control Memory
• Each DCA board also has a 500MB region reserved for a Cache Directory
• Each DCA board also has 1 or 2 on‐board SSD drives (31.5GB each) for use in backing
up the entire memory space in the event of an array shutdown due to power failure
o If the full 32GB of RAM is installed on a DCA, it must have two 31.5GB SSDs
installed
o On‐board batteries power each DCA board long enough to complete several such
shutdown operations back‐to‐back in the event of repeated power failures before
the batteries have had a chance to charge back up
Page A-12
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Cache PCB and Component Specifications
#  Item                   Specification
1  Board model name       DKC-F710I-CPC: a pair of CM boards (includes a 32GB SSD and a battery)
2  DIMM model name        DKC-F710I-C16G: 4GB DIMM x 4; DKC-F710I-C32G*: 8GB DIMM x 4
3  SSD model name         DKC-F710I-BM64: 32GB SSD x 2; DKC-F710I-BM128*: 64GB SSD x 2
5  Spare parts (FRU)      CM board (without SSD, battery and DIMM); cache memory (4/8GB DIMM); battery (12V); SSD (32/64* GB)
6  Firmware on CM board   1) Backup and battery firmware: memory backup and battery charge/discharge control (online firmware update is available); 2) SSD firmware: SSD internal control
• The table on this page identifies the component codes for the Cache PCB, the cache
memory DIMMs and the SSDs
• This information also indicates that the firmware for the battery management and
memory backup is non-disruptively upgradeable starting with V01
• Online upgrade of the SSD internal control firmware was added with V02
Page A-13
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
DKC Components
o Each board includes one Intel 2.33GHz quad-core Xeon CPU with 12MB of L2 cache
o This local RAM space is partitioned into 5 regions, with 1 region used for each
core’s private execution space, plus a shared Control Memory region used by all
4 cores
• Each VSD board executes all I/O requests for the LDEVs that are assigned to that board
• The firmware loaded onto the VSD board contains 5 types of code and each Xeon core
will schedule a process that depends upon the nature of the job it is executing
o Target process – Manages host requests to and from a FED board for a particular
LDEV
Page A-14
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
DKC Components
o BED process – Manages the staging or de-staging of data between cache blocks
and internal disks via a BED board
o HUR Initiator (MCU) process – Manages the responding side of a Hitachi Universal
Replicator connection on a FED port
o RCU Target (RCU) process – Manages the pull side of a remote copy connection
on a FED port
• The core of the VSP HiStar-E Network architecture is the Grid Switch, which is
identified by the acronym GSW
o The GSW provides the highly redundant, high performance interconnection paths
among the other main components of the HiStar-E Network as shown by the
schematic diagram on this page
Page A-15
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
DKC Components
• The Grid Switch boards supply the high performance interconnect paths that cross
connect all of the other boards
• Each GSW board has 24 ports, each with a separate unidirectional send path and receive
path, each operating at 1024MB/sec
• As such, the GSW supports an aggregate peak load of 24GB/sec send and 24GB/sec
receive (or 48GB/sec overall)
• Eight of the GSW paths connect to the FED and BED boards for a total of 192 GB/sec full
duplex
• Four more paths connect to VSD boards, and the final 4 paths cross connect to the
matching GSW board in the second chassis (if used)
Page A-16
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
VSP Hardware Architecture
(Diagram: single-module VSP hardware architecture, showing the front-end and back-end interfaces with their offload processors, the processors and control memory.)
The diagram on this page shows how the DKC components in a single DKC system are
connected. A single DKC VSP system can support a maximum of 4 DKA features for a total of 16
back end data loops.
Page A-17
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Architecture Layout Dual Node
When the VSP configuration needs to expand to more back end capacity and/or access, the
second DKC, module 1 must be added. The 2 DKCs are connected between their respective
GSWs.
Service Processor
Service Processor (SVP) is a blade server that runs the application for
performing hardware and software maintenance functions
The SVP PC provides 3 main functions
• Human interfaces
• Storage system health monitoring and reporting
• Performance monitor capabilities
The SVP Application, Web Console and Storage Navigator applications
run on the SVP PC
If the SVP PC fails or is unavailable, host I/O is not affected
Support for High Reliability Kit — Second SVP
The purpose of the SVP blade PC is to provide the human interface to the Virtual Storage
Platform
Page A-18
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Service Processor
o Web Console
o SVP Application
• Connection point for the Virtual Storage Platform (VSP) to the customer LAN
• Collects and reports workload and performance information through the SVP Monitor
A Maintenance PC is used to connect to the SVP for storage system management and
administration.
Page A-19
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
DKU Box Structure
(Diagram: LFF 3.5" HDD box, front and rear sides, each holding 40 HDDs behind a fan assembly, for a maximum of 80 HDDs per box.)
o The example shown is the LFF structure which holds 3.5” HDDs
o Each HDU Box has 2 fan door assemblies on the front and 2 on the rear
o From this diagram, you can see how the fan door assembly blocks access to the
HDD slots when the fan doors are in their normal operating position
• The fan assembly latch mechanism is located from front-to-back between the upper sets
of HDDs
o The fan door latch is pushed or pulled, depending on which side you are
standing on and on which side you need to open a fan door
• Note: Technicians who have worked with early versions of the VSP system have found
that the fan door latch mechanisms are a bit touchy; you may have to walk to the other
side of the system if the fan door latch gets stuck
• The SSW components are installed along the sides of the HDU Box in both the front and
rear
Page A-20
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Expanded Support for Multiple HDD Types
DKU chassis
• You can choose an SFF or an LFF chassis
• 2 DKU chassis can be mounted in the 1st rack; 3 DKU chassis can be mounted in the 2nd and 3rd racks
DKU (for LFF)
• Can mount 80 HDDs in a chassis, using SATA HDD or SSD
DKU (for SFF)
• Can mount 128 HDDs in a chassis, using SAS HDD or SSD
Page A-21
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
VSP Back End Cabling
• 1 HDU Box can be either for LFF or SFF disk drives but not mixed
o LFF and SFF HDU Boxes can be mixed in any configuration in the VSP
Page A-22
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
VSP DKU and HDU Numbering – Front View
• In the VSP, each DKU is identified by a 2 digit number which includes the number of the
DKC, 0 or 1, and a number for the DKU within the module
• Remember that the DKC#0 module can be configured on either the right or the left as
compared to its DKC#1 partner
• This will add complexity to component identification at the customer site (in a 2-module
system, you will need to know or be able to determine the position of DKC#0 and
DKC#1 relative to each other)
Page A-23
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
VSP DKU and HDU Numbering – Back View
Page A-24
VSP SAS Back End Paths
VSP DKU and B4 Numbering
(Diagram: in each cluster, the DKA0 and DKA1 I/O controllers (IOC) connect through 2-wide SAS links to 24-port SAS expanders, which fan out to HDDs 0 through 15 in each HDU box; the back view shows the corresponding B4 layout.)
Page A-25
VSP B4 Layout – Back View
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Encryption Support
Every VSP is encryption-capable; a license key must be installed to activate the feature.
Each array supports up to 32 encryption keys per platform, allowing encryption to be used as
an access control mechanism within the array. This allows different classifications of data to
be stored on the same array, with encryption providing a data leakage prevention mechanism.
Disk Sparing
Sparing Operations
• Dynamic Sparing (Preemptive Copy)
Preemptive means that the ORM (Online Read Margin) diagnostics have determined a drive to
be suspect, or drive read/write error thresholds have been exceeded
The storage system spares out the drive even though the drive has not completely failed
Data is copied to spare drive (not recreated)
Page A-26
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Architecture – Storage
Storage overview
1. Physical Devices – PDEV
2. PDEV grouped together with RAID
type: RAID-1+, RAID-5, RAID-6
3. Parity Group/RAID
Group/Array Group
4. Emulation specifies smaller logical
unit sizes
6. Assign addresses in
LDKC:CU:LDEV format
00:00:00
00:00:01
00:00:02
• The emulation creates equal sized stripes called LDEVs (Logical Devices)
o All the Logical Devices (LDEVs) that have been carved out of a RAID Group have
to be a part of a Control Unit (up to 256 on the Universal Storage Platform V)
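To make the flow from parity group to addressable LDEV concrete, here is a minimal, hedged sketch using the raidcom CLI that is introduced later in this appendix; the parity group ID, LDEV number and capacity are illustrative values only, and option spellings can vary by CCI version:
# Carve a 100GB LDEV (LDEV number 10, illustrative) out of parity group 1-1
raidcom add ldev -parity_grp_id 1-1 -ldev_id 10 -capacity 100g -IH0
# Display it; the -fx option shows the LDEV number in hexadecimal, matching the CU:LDEV notation above
raidcom get ldev -ldev_id 10 -fx -IH0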
Page A-27
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Data Redundancy
• Emulations
o When you wish to carve out LDEVs from a RAID group, you must specify the size
of the LDEVs
o The storage system supports various emulation modes which specify the size of
each LDEV in a RAID group
o A storage system can have multiple RAID groups with different emulations, such
as OPEN-V
Data Redundancy
RAID Implementation
The Concatenated Array Group feature allows you to configure all of the space from either 2
or 4 RAID-5 (7d+1p) Array Groups into an association of 16 or 32 drives whereby all LDEVs
created on these Array Groups are actually striped across all of the elements. Recall that a slice
(or partition) created on a standard Array Group is an LDEV (Logical Device), becoming a LUN
(Logical Unit) once it has been given a name and mapped to a host port.
Page A-28
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Supported RAID Configurations
RAID-1
RAID-1 (2D + 2D) Configuration
A A’ B B’
E’ E F’ F
G G’ H H’
I I’ J J’
RAID-1 (4D + 4D) Configuration
A A’ B B’ C C’ D D’
E’ E F’ F G’ G H H’
I I’ J J’ K K’ L L’
M M’ N N’ O O’ P P’
Description:
• Two parity group disk drive writes for every host write
• Do not care about what the previous data was, just over-write with new data
• For reads, the data can be read from either disk drive
• Read activity distributed over both copies reduces disk drive busy (due to reads) to 1/2
of what it would be to read from a single (non-RAID) disk drive
Disadvantages: Uses more raw disks to implement which means a more expensive solution
Page A-29
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
RAID Configurations
RAID-5
RAID-5 (3D + 1P) Configuration
A B C P
D E P F
G P H I
P J K L
RAID-5 (7D + 1P) Configuration
A B C D E F G P
H I J K L M P N
O Q R S T P U V
W X Y Z P AA AB AC
o It’s very space efficient (smallest space for parity), and sequential reads and
writes are efficient, because they operate on whole stripes
• For workloads with higher access density and more random writes, RAID-5 can be
throughput-limited due to all the extra parity group I/O operations needed to handle the
RAID-5 write penalty (each small random write must read the old data and old parity and
then write the new data and new parity, that is, 4 parity group I/Os for every host write)
• In the RAID-5 (3D+1P) design, data are written to the first three disks of each stripe and
the fourth disk holds an error-correction data set that allows any 1 failing block to be
reconstructed from the other 3
o This method has the advantage that, effectively, only 1 disk out of the 4 is used
for error-correction (parity) information
• Small-sized records are intensively read and written randomly in transaction processing
o This type of processing generates many I/O requests for transferring small
amounts of data
Page A-30
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
RAID Configurations
o RAID-5 has been introduced to be suitable for this type of transaction processing
RAID-6 (6D + 2P) Configuration
(Diagram: data blocks striped across six drives in each row, with two rotating parity blocks, P and Q, per row.)
Description
• An extension of the RAID-5 concept that uses 2 separate parity-type fields usually called
P and Q
• Allows data to be reconstructed from the remaining drives in a parity group when any 1
or 2 drives have failed
*The math is the same as for ECC used to correct errors in DRAM memory or on the
surface of disk drives
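As a brief, hedged sketch of the standard parity algebra behind these descriptions (generic RAID math, not a statement of the specific VSP implementation), for one 6D+2P row with data blocks D1 through D6:
$$P = D_1 \oplus D_2 \oplus \cdots \oplus D_6$$
$$P_{\text{new}} = P_{\text{old}} \oplus D_{\text{old}} \oplus D_{\text{new}}$$
Q is computed the same way, except that each data block is first multiplied by a distinct Galois-field coefficient, which is why the note above compares it to ECC math. Updating a single block therefore requires reading the old data, P and Q and writing the new data, P and Q, which is the source of the 6 parity group I/Os per random write mentioned on the following page; with RAID-5 there is no Q, so the count is 4.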
Page A-31
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Hitachi Enterprise Storage Software Tools
• Each host random write turns into 6 parity group I/O operations
o (Read the old data, P and Q; compute the new P and Q; write the new data, P and Q)
• Advantages:
• Disadvantages:
Page A-32
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Software Tools for Configuring VSP
SN — Storage Navigator
This table compares the 4 main GUI applications that are used to view and manage the Virtual
Storage Platform (VSP) storage systems
Page A-33
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
SVP Application
o Web Console
o Storage Navigator
• The fourth GUI interface is Hitachi Command Suite Device Manager software, which is
installed and runs on a Microsoft Windows® or Sun® Solaris host other than the SVP PC
• The SVP Application and the Web Console applications are used primarily by the
maintenance engineer
SVP Application
The SVP Application is used by engineers for hardware and software maintenance.
The application is launched by accessing the Web Console application.
Page A-34
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Storage Navigator/Web Console
The Web Console is Storage Navigator accessed on the SVP as a user with the
maintenance user account.
Storage Navigator GUI is accessed from an end user PC, via the public IP LAN, using a
supported web browser. In the customer environment, this public LAN may be a secured
management LAN within the customer’s network environment.
Page A-35
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Command Line Interface
We use the term public LAN to differentiate from the internal LAN within the VSP storage
system. Storage Navigator should never be accessed and used on the VSP internal LAN.
RAIDCOM
• In-band
• Out-of-band
• Command Control Interface (CCI)
• The Virtual Storage Platform (VSP) is the first enterprise storage system to include a
unified, fully compatible command line interface
o The VSP Command Line Interface (CLI) supports all storage provisioning and
configuration operations that can be performed through SN
• The example on this page shows the raidcom command that retrieves the configuration
information about an LDEV
• For in-band CCI operations, the command device is used, which is a user-selected and
dedicated logical volume on the storage system that functions as the interface to the
storage system for the UNIX/PC host
• The dedicated logical volume is called the command device and accepts commands that are
then executed by the storage system
• The virtual command device is defined by specifying the IP address for the SVP
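The raidcom example referred to above appears on the slide rather than in these notes; the following is a minimal, hedged sketch of what the setup and command might look like. The serial number, IP address, instance number and LDEV number are illustrative only, and the exact horcm.conf syntax should be checked against the CCI documentation for the installed version:
# horcm.conf fragment: define the command device (use one of the two forms)
HORCM_CMD
# In-band: command device discovered by storage system serial number (Linux example)
\\.\CMD-64015:/dev/sd*
# Out-of-band: virtual command device pointing at the SVP IP address (UDP port 31001)
\\.\IPCMD-192.0.2.50-31001

# Start the HORCM instance, log in, and retrieve the configuration of an LDEV
horcmstart.sh 0
raidcom -login <user> <password> -IH0
raidcom get ldev -ldev_id 1024 -fx -IH0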
Page A-36
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Hitachi Command Suite 7
• CCI commands are issued from the host and transferred via LAN to the virtual command
device (SVP) and the requested operations are then performed by the storage system
The customer can also use the Device Manager component of the Hitachi Command Suite 7
storage management software products to view and administer the VSP storage system as well
as any other HDS storage system.
Page A-37
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Storage Navigator Interface
Page A-38
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Storage Navigator Setup
External authentication of privileged users (storage admins) allows the storage array to
integrate into the customer's existing security and compliance infrastructure (e.g., existing
authentication data), generally directory services environments such as Active Directory. This
allows customers to provision and de-provision storage management access with existing
infrastructure.
Page A-39
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Storage Navigator GUI
The above list represents some of the common tasks that can be performed as part of day-to-
day operations.
Page A-40
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Storage Navigator Provisioning Tasks
• You can connect multiple server hosts of different platforms to one port of your storage
system
o When configuring your system, you must group server hosts connected to the
storage system by host groups
• For example, if HP-UX hosts and Windows hosts are connected to a port, you must
create one host group for HP-UX hosts and also create another host group for Windows
hosts
o Next, you must register HP-UX hosts to the corresponding host group and also
register Windows hosts to the other host group
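As a hedged sketch only, the host group workflow described above might look like the following with raidcom; the port, host group name, host mode keyword, WWN and LDEV number are illustrative, and the exact keywords depend on the microcode and CCI version:
# Create a host group for HP-UX hosts on port CL1-A and set its host mode
raidcom add host_grp -port CL1-A-1 -host_grp_name HPUX_HG -IH0
raidcom modify host_grp -port CL1-A-1 -host_mode HP-UX -IH0
# Register an HP-UX host's HBA WWN in the host group
raidcom add hba_wwn -port CL1-A-1 -hba_wwn 210000e0,8b0256f8 -IH0
# Map an LDEV to the host group as LUN 0
raidcom add lun -port CL1-A-1 -ldev_id 10 -lun_id 0 -IH0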
Page A-41
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Storage Navigator System Information Display
Storage Navigator can be used for viewing storage configuration information. The information
can also be downloaded as HTML or CSV reports.
SN System Alerts
• SN can be used to view Service Information Messages (SIMs)
generated on the storage system
• These are alerts related to
• H/W component failure
• S/W configuration/operations issues
• Pools capacity thresholds exceeded
• LDEVs blocked
• License related issues
• External Storage Issues
• Replication Issues
Page A-42
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Attributes of License Keys
License keys are delivered in text format in a file with the .plk extension
o When the customer or engineer accesses Storage Navigator with no license keys
installed, Storage Navigator will open the License Key interface by default
o No other Storage Navigator functions will be possible until the license keys have
been installed
o Emergency – 7 to 30 days; used when a key is needed quickly, but there also
must be negotiation and agreement reached with the customer regarding the
licenses
Page A-43
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Licensing — Capacity Definitions and Key Types
Capacity Definitions
• Usable Capacity
• Used Capacity
• Unlimited Capacity
License time duration types:
• Permanent
• Emergency – 30 days
• Temporary – 120 days
• Term – 365 days
Two new storage capacity definitions have been created: usable capacity and used capacity
• Usable capacity is calculated based on the real, internal storage and the connected
external storage capacity of a VSP
o The program products of the BOS are licensed based on usable capacity
• Used capacity is the total allocated capacity including any and all replicated copies
o That is to say, the total of all P-VOL and S-VOL capacities is added together to
determine the used capacity (for example, a 10TB P-VOL replicated to two 10TB
S-VOLs counts as 30TB of used capacity)
o Used capacity is the basis for the licenses for replication program products
• When the customer buys the license for a capacity free program product, the customer
is entitled to use that product functionality against an unlimited capacity
This information is provided so that you are aware that there may be differences in licensed
capacity calculation for different program products within the VSP system and also as compared
to older enterprise storage systems
Whenever a license key expires, the current configuration is retained but no new configuration
changes are allowed with the related program product.
Page A-44
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Licensing – SN GUI
Licensing GUI on SN
Module Summary
Page A-45
Hitachi Enterprise Storage Hardware –Hitachi Virtual Storage Platform
Module Review
4. For a 2-module system what is the maximum number of drives that can
be installed?
Page A-46
Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
ACC — Action Code. A SIM (System Information Message).
ACE — Access Control Entry. Stores access rights for a single user or group within the Windows security model.
ACL — Access Control List. Stores a set of ACEs so that it describes the complete set of access rights for a file system object within the Microsoft Windows security model.
ACP — Array Control Processor. Microprocessor mounted on the disk adapter circuit board (DKA) that controls the drives in a specific disk array. Considered part of the back end; it controls data transfer between cache and the hard drives.
ACP Domain — Also Array Domain. All of the array-groups controlled by the same pair of DKA boards, or the HDDs managed by 1 ACP PAIR (also called BED).
ACP PAIR — Physical disk access control logic. Each ACP consists of 2 DKA PCBs to provide 8 loop paths to the real HDDs.
Actuator (arm) — Read/write heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters.
AD — Active Directory.
ADC — Accelerated Data Copy.
Address — A location of data, usually in main memory or on a disk. A name or token that identifies a network component. In local area networks (LANs), for example, every node has a unique address.
ADP — Adapter.
ADS — Active Directory Service.
AL-PA — Arbitrated Loop Physical Address.
AMS — Adaptable Modular Storage.
APAR — Authorized Program Analysis Reports.
APF — Authorized Program Facility. In IBM z/OS and OS/390 environments, a facility that permits the identification of programs that are authorized to use restricted functions.
API — Application Programming Interface.
APID — Application Identification. An ID to identify a command device.
Application Management — The processes that manage the capacity and performance of applications.
ARB — Arbitration or request.
ARM — Automated Restart Manager.
Array Domain — Also ACP Domain. All functions, paths and disk drives controlled by a single ACP pair. An array domain can contain a variety of LVI or LU configurations.
Array Group — Also called a parity group. A group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Array Unit — A group of hard disk drives in 1 RAID structure. Same as parity group.
ASIC — Application specific integrated circuit.
ASSY — Assembly.
Asymmetric virtualization — See Out-of-Band virtualization.
Asynchronous — An I/O operation whose initiator does not await its completion before proceeding.
CLPR — Cache Logical Partition. Cache can be divided into multiple virtual cache memories to lessen I/O contention.
Cloud Fundamental — A core requirement to the deployment of cloud computing. Cloud fundamentals include:
RPC — Remote procedure call.
RPO — Recovery Point Objective. The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly.
RRDS — Relative Record Data Set.
RS CON — RS232C/RS422 Interface Connector.
RSD — RAID Storage Division (of Hitachi).
R-SIM — Remote Service Information Message.
RSM — Real Storage Manager.
RTM — Recovery Termination Manager.
RTO — Recovery Time Objective. The length of time that can be tolerated between a disaster and recovery of data.
R-VOL — Remote Volume.
R/W — Read/Write.
—S—
SA — Storage Administrator.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users using a thin client, such as a web browser, via the Internet. SaaS has become a common delivery model for many business applications.
SAN — Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a new standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike current IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBM — Solutions Business Manager.
SBOD — Switched Bunch of Disks.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).
WAN — Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR — Directory Name Object.
WDIR — Working Directory.
WDS — Working Data Set.
XFI — Standard interface for connecting a 10Gb Ethernet MAC device to an XFP interface.
XFP — "X"=10Gb Small Form Factor Pluggable.
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.
https://learningcenter.hds.com/Saba/Web/Main
Page E-1
Evaluating This Course
Page E-2