Data Protection and Special Volumes: November 2004
Revision History
EMC believes the information in this publication is accurate as of its publication date. The information is subject to
change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software
license.
EMC, Symmetrix, DMX, TimeFinder, and Optimizer are registered trademarks of EMC Corporation.
All other trademarks used herein are the property of their respective owners.
Figure: one physical disk split into four hypervolumes (Logical Vol. 1, 6, 9, and 1A).
The Hypervolume Extension feature allows one physical disk to be split into logical disks, or hypers. This example shows one physical disk split into four hypers. Hypervolumes can be configured along with various types of data protection using the IMPL.BIN file in SymmWin.
Meta Volumes
Several operating systems (for example, Windows NT), some application software, and some open-systems environments require volumes larger than a single Symmetrix volume provides (the maximum volume size is determined by the Enginuity level).
A meta volume is two or more Symmetrix volumes presented to a host as a single addressable device. The meta volume consists of a head device, some number of optional member devices, and a tail device.
Creating meta volumes gives the host a greater amount of capacity per address and offers increased performance.
Concatenated Volumes
Figure: a meta volume as head (H), member (M), member (M), and tail (T) devices.
Concatenated volumes are volume sets organized with the first byte of data at the beginning of the first volume. Writing continues to the end of the first volume before any data is written to the next volume.
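The address mapping of a concatenated volume set can be sketched in a few lines. This is a hypothetical Python illustration of the concept, not Symmetrix code, and the member sizes are arbitrary:

```python
# Hypothetical sketch: locate a logical block on a concatenated
# volume set. Members fill in order, so we walk them until the
# offset falls inside one. Sizes are illustrative, not real values.

def concat_locate(offset, member_sizes):
    """Return (member index, offset within that member)."""
    for index, size in enumerate(member_sizes):
        if offset < size:
            return index, offset
        offset -= size
    raise ValueError("offset beyond the end of the meta volume")

# Four members of 100 blocks each: block 250 lands on member 2, block 50.
print(concat_locate(250, [100, 100, 100, 100]))  # → (2, 50)
```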
Striped Volumes
Striped volumes define a stripe depth. This is the amount of data written to one volume before moving to the next volume
in the volume group. The current minimum stripe size on Symmetrix is two cylinders.
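The round-robin placement implied by a stripe depth can be sketched the same way. Again a hypothetical illustration: the depth and member count below are arbitrary, not the two-cylinder Symmetrix minimum:

```python
# Hypothetical sketch: locate a logical block on a striped volume
# set. stripe_depth is how many blocks go to one member before
# moving to the next member in the group.

def stripe_locate(block, stripe_depth, members):
    """Return (member index, block within that member)."""
    unit = block // stripe_depth          # which stripe unit overall
    member = unit % members               # round-robin across members
    passes = unit // members              # completed passes over the set
    return member, passes * stripe_depth + block % stripe_depth

# Depth 2, 4 members: blocks 0-1 go to member 0, 2-3 to member 1,
# and so on; block 8 wraps back around to member 0.
print(stripe_locate(9, 2, 4))  # → (0, 3)
```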
Mirror Positions
Figure: Symmetrix logical volume 001 with mirror positions M1, M2, M3, and M4.
Within the Symmetrix, each logical volume is represented by four mirror positions: M1, M2, M3, and M4. These mirror positions are data structures that either point to a physical location of the data and the status of each track, or are unused. For example, an unprotected volume uses only the M1 position to point to the single copy of the data. A RAID-1 protected volume uses the M1 and M2 positions.
Mirroring: RAID-1
Figure: LV 001 M1 and LV 001 M2 on different disks behind different disk directors, presented at a single host address (Target = 1, LUN = 0).
Mirroring provides the highest level of performance and availability for all applications. Mirroring maintains a duplicate copy of a logical volume on two physical drives. The Symmetrix maintains these copies internally by writing all modified data to both physical locations, represented by M1 and M2. The write pending is not released from cache until the data has been written to both the M1 and M2 hyper locations.
The mirroring function is transparent to attached hosts, as the hosts view the mirrored pair of hypers as a single device.
Prior to the Symmetrix DMX, mirrors were configured with what is known as the “rule of 17.” Because of where the DA pairs reside within the card cage (1/2, 3/4, 13/14, 15/16), as long as the sum of the two disk director numbers equals 17 (1/16, 2/15, 3/14, 4/13), the mirrors will always be on different internal system buses, giving the highest availability and maximum use of Symmetrix resources. The Symmetrix DMX uses the rule of 17 for director failover pairing, not for volume mirroring; its point-to-point connections to cache eliminate the need to protect mirrored volumes against a bus failure.
Figure: 1. LV 004 M1 / LV 001 M2 – read all tracks from M1; 2. LV 001 M1 / LV 004 M2 – read all tracks from M2.
During a read operation, if data is not available in cache memory, the Symmetrix reads the data from the volume chosen
for best overall system performance. Performance algorithms within Enginuity track path-busy information, as well as the
actuator location, and which sector is currently under the disk head in each device. Symmetrix performance algorithms for
a read operation choose the best volume in the mirrored pair based on these service policies.
• Split Service Policy – Differs from the Interleave Service Policy in that read operations are assigned to either the M1 or the M2 logical volume, but not both. The Split Service Policy is designed to minimize head movement.
• Interleave Service Policy – Shares the read operations of a mirrored pair by reading tracks from both logical volumes in an alternating fashion: a number of tracks from the primary volume (M1), then a number of tracks from the secondary volume (M2). The Interleave Service Policy is designed to achieve maximum throughput.
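The alternating pattern of the Interleave Service Policy can be sketched as follows. This is purely illustrative: the chunk size is an assumption, and Enginuity's real run lengths are internal:

```python
# Hypothetical sketch of the Interleave Service Policy: serve reads
# from M1 and M2 in alternating runs of `chunk` tracks. The chunk
# size here is illustrative, not an Enginuity value.

def interleave_source(track, chunk):
    """Return which mirror serves the given track."""
    return "M1" if (track // chunk) % 2 == 0 else "M2"

print([interleave_source(t, chunk=2) for t in range(8)])
# → ['M1', 'M1', 'M2', 'M2', 'M1', 'M1', 'M2', 'M2']
```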
Figure: volumes 000, 004, and 00C as M1 and M2 hypers spread across two physical drives.
Dynamic Mirror Service Policy (DMSP) dynamically chooses between the Interleave and Split policies at the logical volume level, based on current performance and environmental variables, for maximum throughput and minimum head movement. DMSP adjusts each logical volume dynamically based on recent access patterns; this is done by the disk directors and is the default mode. The Symmetrix system tracks the I/O performance of logical volumes, physical disks, and disk directors, and based on these measurements directs read operations for mirrored data to the appropriate mirror. As access patterns and workloads change, the DMSP algorithm analyzes the new workload and adjusts the service policy to optimize performance.
Figure: mirrored SFS volumes (M1 and M2, 6140 cylinders each).
SFS volumes are automatically configured by SymmWin and are non-addressable special volumes. They hold dynamic mirror policy decisions, error logging, event traces, and code image storage, plus a dedicated lost-write area.
Parity RAID ranks: three host-addressable data volumes per rank; the parity volume is not host addressable.

        Physical    Physical    Physical    Physical
        Drive 0     Drive 1     Drive 2     Drive 3
Rank 1  Volume A    Volume B    Volume C    Parity ABC
Rank 2  Volume D    Volume E    Parity DEF  Volume F
Rank 3  Volume G    Parity GHI  Volume H    Volume I
EMC Global Education 14
© 2004 EMC Corporation. All rights reserved.
Symmetrix Hyper-Volume Extension is supported with Parity RAID. When using HVE, parity and data volumes are
distributed among the members of a Parity RAID rank. HVE allows logical volumes that are members of a Parity RAID
rank to be distributed across multiple physical drives. The parity volume for each group can reside on any volume within
the Parity RAID rank, as long as it is on a different physical drive than the data volumes of that group. This distributed
parity provides improved performance over a single physical drive, which could become a performance bottleneck in a
heavy write workload. All volumes that compose the Parity RAID rank must be identical in format and size.
Parity generation, bit by bit:
  Data Chunk A 1111 XOR Data Chunk B 1001 = 0110
  0110 XOR Data Chunk C 1100 = Parity 1010
When a Parity RAID rank operates with all data and parity volumes functioning, it is operating in normal mode. In normal
mode, the Symmetrix system accomplishes data redundancy by using the standard Parity RAID EXCLUSIVE OR (XOR)
logic to generate and store XOR parity data that can then be used to reconstruct the data of a failed volume. In parity
generation, a parity volume is initially formed by performing an XOR calculation on the contents of all member data
volumes and writing the result to the parity volume. This slide shows bit-by-bit parity generation: the XOR instruction compares the binary values of two data fields (Data Chunk A and Data Chunk B), and the result is then XOR’d with the binary value of Data Chunk C to produce the resulting parity value.
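The slide's bit patterns can be reproduced in a few lines. This is only an illustration of the XOR arithmetic, not Enginuity code:

```python
# Reproduce the slide's parity generation: A XOR B, then XOR C.
def xor_parity(*chunks):
    parity = 0
    for chunk in chunks:
        parity ^= chunk           # XOR accumulates across all members
    return parity

a, b, c = 0b1111, 0b1001, 0b1100
print(format(a ^ b, "04b"))                 # → 0110
print(format(xor_parity(a, b, c), "04b"))   # → 1010
```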
Data Reconstruction
When a Parity RAID rank operates with one failed data volume, it is running in reduced mode. The data on the failed volume is reconstructed by XORing the parity volume with the remaining data volumes in the same rank.
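Reduced-mode reconstruction is the same XOR operation run in reverse. Again an illustration only, using the slide's bit values:

```python
# Reconstruct a failed member by XORing parity with the survivors:
# XOR cancels each surviving member out, leaving the missing one.
def reconstruct(parity, surviving):
    missing = parity
    for chunk in surviving:
        missing ^= chunk
    return missing

a, b, c = 0b1111, 0b1001, 0b1100
parity = a ^ b ^ c                                 # 1010
print(format(reconstruct(parity, [a, c]), "04b"))  # → 1001 (chunk B)
```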
Figure: RAID-5 layout for Volume A, with parity (P) rotating across the members; the stripe width and stripe period are shown.
Due to the striped nature of RAID-5, no single member of the group is the parity. Rather, each member owns some data
tracks and some parity tracks. So unlike Symmetrix Parity RAID, in RAID-5 all members are created equal.
CKD Meta Volume
To improve mainframe volume performance, Symmetrix RAID 10 stripes data of a logical device across multiple
Symmetrix logical devices. (The feature is analogous to meta-volume stripes on open systems.) Four Symmetrix devices
(each one-fourth the size of the original mainframe device) appear as one mainframe device to the host. Any four
Symmetrix logical devices can be chosen to define a RAID 10 group provided they are the same type (for example, IBM
3390) and have the same mirror configuration. Striping occurs across this group of four devices with a striping unit of one
cylinder. Since each member of the stripe group is mirrored, the entire set is protected. Dynamic Mirror Service Policy (DMSP) can then be applied to the mirrored devices. The combination of DMSP with mirrored striping creates a mainframe volume that, as illustrated above, enables greatly improved performance in mainframe systems. RAID 10 uses four pairs of disks in its Symmetrix implementation (4 for M1 and 4 for M2).
TimeFinder Snap
TimeFinder Snap captures logical point-in-time images of a source volume by duplicating only the original data of tracks that are changed, consuming only a fraction of the source volume's capacity. Snapping to a virtual device creates the appearance of a copied volume, but the virtual device simply maintains pointers to the original production data and to saved copies of any data that has since been modified. TimeFinder Snap performs a copy-on-write: before a write to the production volume changes a track, the original track data is first copied to a save area.
To obtain the best possible performance, tracks are "striped" in a round-robin manner to save devices in the common pool.
EMC recommends that multiple devices be used in the save pool in order to maximize performance.
When a copy session is terminated, the virtual device is removed, and tracks on the save device are reclaimed if they are
not referenced by any other copy session.
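The copy-on-write mechanics described above can be sketched as follows. The class and method names are illustrative inventions, not an EMC API:

```python
# Hypothetical sketch of copy-on-write snapping: the virtual device
# is just pointers; before a source track is first overwritten, its
# original contents are preserved once in the save area.

class SnapSession:
    def __init__(self, source):
        self.source = source          # track number -> track data
        self.save_area = {}           # originals of changed tracks

    def write(self, track, data):
        # Copy on first write: preserve the original track once.
        if track not in self.save_area:
            self.save_area[track] = self.source.get(track)
        self.source[track] = data

    def read_snapshot(self, track):
        # Virtual device: saved copy if the track changed, else source.
        if track in self.save_area:
            return self.save_area[track]
        return self.source.get(track)

source = {0: "old0", 1: "old1"}
session = SnapSession(source)
session.write(0, "new0")
print(session.read_snapshot(0), source[0])  # → old0 new0
```

Terminating the session would amount to discarding the pointer structures and reclaiming the save-area entries, as the text describes.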
BCVs
Figure: standard volume with mirror positions M1 and M2, and a BCV attached as M3.
Business Continuance Volumes (BCVs) are used for dynamic mirroring. A BCV has additional attributes that allow it to independently support host processes. BCVs can be configured as non-mirrored or mirrored devices, but cannot be RAID protected. In this example, BCV volume 003 is established as the third mirror of standard volume 002. At that point, volume 003 loses its identity and becomes a copy of volume 002; all data tracks associated with volume 002 are copied one for one to the new mirror (the BCV).
TimeFinder Commands
A business continuance sequence first involves establishing the BCV device as an additional mirror of a standard
Symmetrix device. Once the BCV is established as a mirror of the standard device, it is not accessible through its original
device address. The BCV device may be separated or split from the standard Symmetrix device with which it was
previously paired. After the split, the BCV has valid data and is available for backup or other host processes through its
original device address.
DRVs
Figure: standard volume with mirror positions M1 and M2, and a DRV attached as M3.
Symmetrix Optimizer is a tool that performs self-tuning of Symmetrix data configurations from the Symmetrix service
processor by swapping logical volumes and their data. Dynamic Reallocation Volumes (DRVs) are non-addressable
volumes used by Optimizer software to temporarily hold user data while reorganization of the devices is being executed.
Symmetrix Optimizer
Figure: swap steps – 3. copy DRVs to new locations; 4. split DRVs from standard volumes.
Symmetrix Optimizer monitors, analyzes, and moves highly active logical volumes to balance load and maintain optimal Symmetrix performance automatically, based on parameters the customer has set. All of this is transparent to end users because it is accomplished while providing constant data availability and protection. Symmetrix Optimizer uses internal Dynamic Reallocation Volumes (DRVs) to hold customer data while reconfiguring the system on a volume-by-volume basis. Swapping reassigns the logical volume numbers and changes the back-end configuration. The example above shows volume A (a high-activity volume) and volume B (a low-activity volume) performing a swap using DRVs.
When a dynamic spare is invoked for a locally mirrored pair, the Symmetrix system automatically augments the original mirrored pair with a dynamic spare volume that joins the pair as an additional (third) mirror. Data is copied to
the dynamic spare volume from the failing volume. If any data cannot be copied from the failing volume, it is copied from
the other mirror. The Symmetrix system continues processing I/O requests with the spare functioning as a mirror and with
no interruption in operation. The failing disk can then be replaced and resynchronized with the mirror group. The dynamic
spare can then be returned to the spare pool.
In a Symmetrix parity RAID system, all data volumes of the RAID group will be spared if there are enough dynamic
spares available for all the data volumes. When a device in a RAID group fails, the Symmetrix system tries to copy data
from the failing device to the first spare. If the failing data volume becomes not ready before it can be replaced, the
Symmetrix system turns off parity protection, recalculates the data for the failed device from the remaining data devices
and parity volume, and places the regenerated data on the parity device for the RAID group. The dynamic spare and parity drive then function as a mirrored pair for that data volume. RAID protection is not available until the failing device is
replaced. All volumes of the group are spared if there are three dynamic spares available.
Module Summary