
Replication (Local and Remote)

with CLARiiON
A Case Study
Abstract
This white paper provides a case study that analyzes an existing customer's environment and local and
remote replication requirements. After careful analysis and consideration, an optimal solution is proposed
to meet the customer's expectations.

Published 1/4/2005
Engineering White Paper
Copyright 2005 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE
INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.

Part Number H1426
Table of Contents
Executive Summary
Intended Audience
Terms and Definitions
Customer Environment/Requirements
Customer Requirements
  Local Replication Requirements
  Remote Replication Requirements
  Requirement to Replicate Active Directory (AD)
EMC CLARiiON Available Solutions
  Local Replication
    SnapView Snapshots
    SnapView BCVs
    SAN Copy
  Remote Replication
    MirrorView/S
    Full SAN Copy
    Incremental SAN Copy
    MirrorView/A
  Drive Options
    Fibre Channel Drives
    ATA Drives
Proposed Solution
  Step 1: Local Replication at Primary Site of Fibre Channel Drives to ATA1 Drives
  Step 2: Remote Replication from ATA1 to ATA2 Drives
  Step 3: Local Replication at Remote Site from ATA2 to Fibre Channel Drives
  Step 4: Local Replication at Remote Site from Fibre Channel Drives to ATA3 Drives
  Step 5: Active Directory Replication
Example of Proposed Solution
Failure Scenarios
  Database Corruption at the Primary Site (Albany)
  Primary Site Failure
    Site Failure
    Site Failback
Final Recommendations
Summary

Executive Summary
The customer, a New York-based company, wants to implement a remote replication solution between the
primary Oracle environment in Albany and the disaster recovery (DR) site in New York City. Several
solutions were explored to meet the customer's objective, with the ultimate goal being to allow testing and
backup of the Oracle database at the remote site with minimum impact on the customer's production
environment.
The final proposed solution was to use SnapView clones (via Replication Manager) to replicate Oracle to
local ATA drives, and to use Incremental SAN Copy to move data to the remote site for backup, testing,
and recovery if an outage occurs at the primary site.
Intended Audience
This white paper is intended for customers, system engineers, EMC partners, members of the EMC and
partner sales and professional services communities, and anyone who requires information about how the
different replication products running on the EMC CLARiiON work and how they can be combined into a
solution that fits customer needs. It is assumed that the audience is familiar with the CLARiiON hardware
and software products.
Terms and Definitions
BC/DR: Business continuity/disaster recovery.
BCVs: Business continuance volumes (also known as clones).
CLARiiON LUN: A logical subdivision of a RAID group in a CLARiiON storage system.
Recovery Point Objective (RPO): The data lag between the DR and primary sites. In the event of a
failure at the production site, the RPO represents the longest interval during which updates to the
primary are not replicated to the secondary site. Updates during this interval could be lost.
Recovery Time Objective (RTO): The time it takes to get the application back online following a
failure.
Customer Environment/Requirements
The customer environment consists of 25 Windows 2000 servers running Oracle on internal storage at the
Albany site, with no local replication of the Oracle database. For disaster recovery, data is backed up to a
server using backup software and exported to tape every night. The tapes are then transported to the New
York City site and, in case of disaster, restored to the recovery server so that the Oracle application can be
restarted at the remote site. During backups and restores, the servers are offline for the duration. Only one
copy of the production data was available for development and testing. Figure 1 shows a diagram of the
customer's current environment.

Figure 1. Customer's Current Environment
Customer Requirements
The tape-based recovery procedures significantly increase the backup and recovery windows. The customer
requires external storage using the SAN infrastructure in order to boost the performance of the Oracle
applications and better service their users. The storage should be able to create local and remote replicas of
their production databases, in order to support business continuance (BC) and disaster recovery (DR)
capabilities.
Local Replication Requirements
A local copy of the production database and control files at the primary site is required for BC (in case of
database or hardware failures). The process of replicating the Oracle database should have minimum
impact on the production database. The replication process should be automated and scheduled nightly,
without user intervention. In case of disaster, the backup copy should provide an instant restore capability
without substantial downtime of the production volume.
Remote Replication Requirements
The remote copy of the production database is required for BC/DR purposes and for testing and
development. DR tests will be conducted quarterly without impacting live production. The data at the
remote site should be current within 4 hours; this is the RPO. The infrastructure must be built in such a
way that, in case of disaster, the database can be restarted within 24 hours; this is the RTO. A previous
day's copy of the Oracle database is needed at the remote site to perform report generation and testing.
The solution should provide a means by which the customer can switch between the previous day's copy
and the current day's copy.
Requirement to Replicate Active Directory (AD)
In order to bring the application online and service the different users accessing the 25 Windows servers, a
replica of the Windows Active Directory services is required at the remote site. This requires 25 Windows
servers ready for use at the remote site in case of primary site failure.
EMC CLARiiON Available Solutions
This section discusses the various solutions EMC provides for local and remote replication. It presents the
pros and cons of each solution and how each would fit in the customer's environment.
Local Replication
This section outlines the EMC solutions available for local replication. It provides reasoning on which
solutions were chosen by the customer for implementation purposes and which solutions did not fit
customer requirements.
SnapView Snapshots
The SnapView snapshots feature is a pointer-based local replication solution that requires a fraction of the
storage required by the actual production database LUNs. For any changes on the source LUN, the original
block is copied to a private region on the storage system before the write is acknowledged to the host. This
is known as the copy-on-first-write (COFW) penalty. This may incur a performance impact during the
write operation to the source LUN. Initially, the SnapView snapshots feature was considered as an option
for creating local replicas of the Oracle database at the Albany site using Advanced Technology
Attachment (ATA) drives. However, due to the high change rate of the Oracle database environment, the
snapshot feature would have had unacceptable impact and, hence, was not used. Nevertheless, SnapView
snapshots remained an option for nonproduction volumes, in order to maintain an N-1 copy of the
production data. SnapView snapshots contain a rollback feature to roll back the contents of the snapshot
volume to the source volume.
For more information on the SnapView snapshot technology, refer to the CLARiiON SnapView Software-
Snapshots and Snap Sessions and EMC SnapView Rollback white papers.
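The copy-on-first-write mechanism described above can be sketched with a small model. This is an illustrative simulation only, not CLARiiON's implementation: the first write to each source block after a session starts copies the original aside, so the snapshot can still present the old data.

```python
# Illustrative model of copy-on-first-write (COFW): before the first write to a
# source block after a snapshot session starts, the original block is saved to
# a private (reserved) area so the snapshot still presents the old data.

class SnapshotSession:
    def __init__(self, source):
        self.source = source      # live source LUN, modeled as block -> data
        self.saved = {}           # reserved area: originals saved on first write
        self.cofw_count = 0       # number of COFW penalties incurred

    def write(self, block, data):
        # The first write to a block since the session began pays the penalty:
        # the original is copied aside before the new data is written.
        if block not in self.saved:
            self.saved[block] = self.source.get(block)
            self.cofw_count += 1
        self.source[block] = data

    def read_snapshot(self, block):
        # Snapshot view: the saved original if the block changed, else live data.
        return self.saved.get(block, self.source.get(block))

lun = {0: "a", 1: "b", 2: "c"}
snap = SnapshotSession(lun)
snap.write(0, "A")    # first write to block 0: one COFW
snap.write(0, "AA")   # rewrite of the same block: no additional COFW
print(snap.read_snapshot(0), snap.cofw_count)   # -> a 1
```

The model makes the cost visible: only the first write to each block pays the penalty, which is why a high change rate across many blocks (as in this Oracle environment) makes snapshots expensive on the production volume.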
SnapView BCVs
SnapView BCVs (clones) are full copies of the production volume. They require the same amount of
storage as the actual production database. After a full initial synchronization process with the production
volume, any changes on the source/production database are incrementally updated on the BCV/clone
volume. Unlike the COFW penalty related to SnapView snapshots, BCVs have less of a performance
impact on the source LUN. Since the customer was willing to allocate additional spindles for replication to
reduce the performance impact on the production database, clones seemed to be a better choice in this
instance.
For more information on the SnapView BCV technology, refer to the CLARiiON BCVs white paper.
SAN Copy
SAN Copy is storage-system-based software that copies data from a CLARiiON LUN, either inter-array or
intra-array. With SAN Copy sessions, a user can define the rate at which data should be transferred by
adjusting the throttle value of the session. A value of 10 is the highest, and 1 is the lowest. You can use
SAN Copy to create full copies and incremental copies.
Full SAN Copy
Whether the full SAN Copy session is within the storage system or between storage systems, the
production volume must be brought offline for the duration of the copy session.

Incremental SAN Copy
Since the source is not in a MirrorView relationship, BCVs could be used and were easier to manage than
incremental SAN Copy. The incremental SAN Copy feature uses the SnapView snapshot technology and
would cause a number of COFWs; hence, it was not considered for local replication of the production
volume. Before starting an incremental SAN Copy session, a full synchronization must be performed in
order for the source and destination to be in sync.
For more information on the SAN Copy technology, refer to the EMC CLARiiON SAN Copy Enhancements
SAN Copy Version 2.0 and EMC CLARiiON SAN Copy Data Mobility Software white papers.
Remote Replication
This section outlines the EMC solutions available for remote replication. It provides reasoning on which
solutions were chosen by the customer for implementation purposes and which solutions did not fit
customer requirements.
MirrorView/S
MirrorView/S (synchronous) maintains synchronous data mirroring between CLARiiON storage systems,
so an updated, exact copy of the data is always available at the remote site. This technology was
considered for remote replication because it would allow an RPO of zero. However, the distance between
the two sites (Albany and New York City) was more than 200 km, and the sites were connected through a
T3 line, which could cause latency and response-time issues.
For more information on the MirrorView/S technology, refer to the EMC CLARiiON MirrorView white
paper.
Full SAN Copy
The full SAN Copy feature could not be used for distance replication since full SAN Copy would have to
send 6 TB of data through the pipe. The volume had to be brought offline during the entire copy session.
Since the customer was using a T3 line between the two sites, the time it would take to transfer data to the
remote site would exceed the RPO and RTO time. As a result, the full SAN Copy was not used for remote
replication.
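The arithmetic behind this rejection is worth making explicit. The following back-of-the-envelope calculation (assuming a dedicated T3 link at its nominal 44.736 Mbit/s rate, decimal terabytes, and no protocol overhead) shows why a full 6 TB copy could never meet a 4-hour RPO or 24-hour RTO:

```python
# Back-of-the-envelope: time to push a full 6 TB copy through a T3 line
# (~44.736 Mbit/s), assuming the link is fully dedicated to the copy and
# ignoring protocol overhead.

T3_BPS = 44.736e6            # nominal T3 (DS3) line rate, bits per second
data_bits = 6e12 * 8         # 6 TB (decimal) expressed in bits

seconds = data_bits / T3_BPS
days = seconds / 86400
print(f"{days:.1f} days per full copy")   # roughly 12-13 days
```

At well over a week per pass, with the source offline for the duration, full SAN Copy is clearly unworkable over this link; only an incremental approach fits.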
Incremental SAN Copy
The incremental SAN Copy feature was another option for distance replication. It supported incremental
updates at specified intervals and could be scripted for automation. This seemed to be the best option since
incremental SAN Copy sessions provided bulk movement of data from the primary site to the remote site
through a T3 line. Even though incremental SAN Copy used the SnapView snapshot feature to track
incremental updates, this was not implemented on the production volume.
MirrorView/A
MirrorView/A (asynchronous) is remote replication software that performs incremental updates from the
primary to the secondary site. In the event of a disaster at the primary site, the secondary copy can be
promoted. MirrorView/A also includes consistency group technology, in which multiple mirrors can be
grouped together to operate as a single unit. All operations, such as synchronize and promote, are applied
to the group as a whole and not to individual members. MirrorView/A was not included in the proposed
solution because the customer needed both local and remote replication. The customer was determined to
use SnapView clones instead of SnapView snapshots in order to avoid the performance impact incurred by
snapshots, and since a CLARiiON LUN cannot be in both a clone and a MirrorView relationship,
MirrorView/A was not used in this solution.
For more information on the MirrorView asynchronous technology, refer to the EMC CLARiiON
MirrorView/Asynchronous Disaster Recovery Software white paper.
Drive Options
This section outlines the drive types available on the EMC CLARiiON storage system. It discusses the uses
and caveats of these drives, allowing customers to determine which drive type would be best suited for
their environment.
Fibre Channel Drives
Fibre Channel drives are the fastest available drives in the market today. They provide high data integrity in
multiple-drive systems, including Fibre Channel RAID. Depending on the storage capacity and
performance requirements of the customers environment, Fibre Channel drives provide a more suitable
solution where data protection, recovery, and flexibility are extremely significant.
ATA Drives
ATA technology allows customers to bring their offline storage online. The CLARiiON ATA drives
provide superior capacity and reduced cost when applied to suitable environments. Replicas of the
production volume can be created/moved to inexpensive storage using SnapView, MirrorView, and SAN
Copy. However, ATA drives are slower when compared with Fibre Channel drives; this difference should
be considered when looking at a customers requirements.
Proposed Solution
SnapView clones and incremental SAN Copy were used on the primary storage system while SnapView
clones and SnapView snapshots were used on the secondary CX700 storage system for creating replicas at
the remote site for testing and DR purposes.
Figure 2 provides a diagram of the entire solution proposed to the customer. The customer tested the
solution to ensure that their requirements were met. The diagram consists of four copies of production data.
Production data on Fibre Channel drives is replicated to low-cost local ATA1 drives. After a nightly
backup of the production volume, incremental SAN Copy sessions are started at the primary site to
replicate the data to ATA2 drives at the remote site. After the incremental SAN Copy sessions complete,
the Fibre Channel drives at the remote site are updated incrementally from the ATA2 drives using
SnapView clones. Once the update process completes, a point-in-time copy of the Fibre Channel drives at
the remote site is created using SnapView snapshots to preserve a previous day's copy. Data scrubbing is
done using the snapshot copies on the ATA3 drives. As a result, a gold copy of the data on the set of Fibre
Channel drives is preserved in case of disaster.

Figure 2. Proposed Solution




[Figure 2 depicts the two sites. Albany primary site: 25 Windows 2000 servers, an FC switch, and a
CLARiiON CX700 holding the FC/Prod and ATA1 drives. Step 1: SnapView clones (using Replication
Manager) from FC/Prod to ATA1. Step 2: incremental SAN Copy over the T3 line, using Nishan iFCP
switches, from ATA1 to ATA2. New York City DR/test site: 25 Windows 2000 servers, an FC switch,
and a CLARiiON CX700 holding the ATA2, FC/Gold, and ATA3 (N and N-1) drives. Step 3: SnapView
clones from ATA2 to FC/Gold. Step 4: SnapView snapshots from FC/Gold to ATA3.]
Step 1: Local Replication at Primary Site of Fibre Channel Drives
to ATA1 Drives
Each database to be replicated must not share disks with any other databases. In order to recover/restart the
Oracle database, the replica taken must be consistent. During off-peak hours, a replica is incrementally
updated using SnapView clones, only requiring changes tracked since the last update. Take the following
steps to ensure a consistent database replica:
1. Check the status of the database. Make sure the database is functioning properly.
2. Synchronize the local clone ATA1s to the production drives and wait for this to complete.
3. Put the Oracle tablespaces in hot backup mode.
4. Fracture or split the clone ATA1 LUNs containing the data.
5. Return the database tables to normal operating mode.
6. Synchronize the logs and control files with clone ATA1 LUNs.
7. Fracture or split the clone ATA1 LUNs containing the logs and control files.
Using SnapView clones, the database is back online within minutes, with minimal impact on the
production database; the partial synchronization operation completed within 10 minutes. EMC Replication
Manager, which integrates with Oracle, was used to automate these steps and thus simplify the process.
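The nightly cycle above can be sketched as a small script. This is an illustrative stand-in only: the helper functions are hypothetical placeholders that merely record each action (they are not Replication Manager or Navisphere CLI calls), so the required ordering of the seven steps is explicit and checkable.

```python
# Illustrative orchestration of the hot-backup clone cycle. The helpers are
# hypothetical stand-ins that only record actions; in practice Replication
# Manager automates these steps against the array and the Oracle instance.

actions = []

def check_database():            actions.append("check db status")
def sync_clones(what):           actions.append(f"sync ATA1 clones: {what}")
def begin_hot_backup():          actions.append("ALTER TABLESPACE ... BEGIN BACKUP")
def fracture_clones(what):       actions.append(f"fracture ATA1 clones: {what}")
def end_hot_backup():            actions.append("ALTER TABLESPACE ... END BACKUP")

def nightly_replica():
    check_database()                 # 1. verify the database is healthy
    sync_clones("data")              # 2. incremental sync of the data LUNs
    begin_hot_backup()               # 3. tablespaces into hot backup mode
    fracture_clones("data")          # 4. split the data clones
    end_hot_backup()                 # 5. tables back to normal operation
    sync_clones("logs+control")      # 6. sync logs and control files
    fracture_clones("logs+control")  # 7. split the log/control clones

nightly_replica()
print(actions)
```

The key invariant the script encodes is that the data clones are fractured while the tablespaces are in hot backup mode, but the log and control files are synchronized and fractured only after the database returns to normal operation; this is what makes the replica consistent and restartable.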
Step 2: Remote Replication from ATA1 to ATA2 Drives
A consistent copy of the database is taken using SnapView clones on ATA1 on the production CLARiiON
storage system. This consistent copy residing on a number of CLARiiON LUNs at the production site is
then copied to ATA2 LUNs at the remote site using incremental SAN Copy. Incremental SAN Copy,
storage-system-resident software, allows bulk (block-level) movement of data within the same storage
system or between two or more storage systems, in increments, without bringing the source offline.
Incremental SAN Copy sessions can be started on all the LUNs in order to copy the data to the CX700 at
the remote site. The creation and start (initiation) of incremental SAN Copy sessions is automated using
EMC Replication Manager. Initial synchronization of the remote ATA2 volume to ATA1 is required
before starting incremental updates in order for the source and destination volumes to be in sync. Since
incremental SAN Copy sent only the changes to the remote site, this solution was a good fit for the
customer: 6 TB of data could now be replicated incrementally through a T3 line between the Albany and
New York sites, using Nishan IPS3300 iFCP switches for FC-to-IP conversion. The replication of the
daily changes using incremental SAN Copy completed in 30 to 40 minutes.
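The 30-to-40-minute window implies a rough upper bound on the daily change set. The following calculation assumes the T3 link is saturated for the whole window and ignores protocol overhead, so the real change volume would be somewhat smaller:

```python
# Rough upper bound on the daily change set implied by a 30-40 minute
# incremental SAN Copy window over a T3 line, assuming the link is saturated
# for the whole window and ignoring protocol overhead.

T3_BYTES_PER_SEC = 44.736e6 / 8     # nominal T3 rate in bytes per second

changed_gb = {}
for minutes in (30, 40):
    changed_gb[minutes] = T3_BYTES_PER_SEC * minutes * 60 / 1e9
    print(f"{minutes} min -> at most about {changed_gb[minutes]:.1f} GB changed")
```

So the nightly delta is on the order of 10-13 GB, a tiny fraction of the 6 TB database, which is exactly the regime where incremental replication over a thin link pays off.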
Step 3: Local Replication at Remote Site from ATA2 to Fibre
Channel Drives
The replication of ATA1 was performed on ATA2 instead of the Fibre Channel drives, in order to preserve
a gold copy of the database in case of link failure. Thus, after the completion of SAN Copy sessions,
SnapView clones were used at this stage to incrementally update changes from ATA2 to the Fibre Channel
drives. This process can be automated using Replication Manager. Replication Manager uses the replicas
created on the remote CLARiiON and mounts those replicas on the hosts that have Oracle installed for
disaster recovery at the remote site. The customer could achieve the 4-hour RPO since updates from ATA2
to the Fibre Channel test drives would be incremental. Since a gold copy of the Oracle database on Fibre
Channel drives resides on the remote CX700 storage system, the application can be brought online within a
period of 24 hours if a disaster occurs at the primary site.
Step 4: Local Replication at Remote Site from Fibre Channel
Drives to ATA3 Drives
The purpose of this step is to keep a copy of the previous day's data after the Fibre Channel drives at the
remote site are incrementally updated to the current day's data from ATA2. Hence, after the incremental
updates to the Fibre Channel drives, a snapshot of the Fibre Channel drives is taken to preserve an N-1
copy if the production data is N. SnapView snapshots provide the best solution here, since point-in-time
copies can be created on ATA drives. The source Fibre Channel drives at the remote site can have up to
eight snapshots (point-in-time copies), and these require half the number of spindles. They provide a
means to go back to a particular point-in-time copy by assigning the snapshot object to a storage group
and associating the snapshot sessions (either N-1 or N) with that object for testing and validation. The
snapshot can be mounted and written to by a host for exclusive testing. Data scrubbing is done at the
remote site on the ATA3 drives. As a result, based on the customer's test/development requirements, the
contents of either the current day's or the previous day's data can be presented to the host using the
SnapView snapshot technology.
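The N / N-1 rotation can be sketched as follows. This is an illustrative model, not the array's implementation: each nightly cycle takes a new point-in-time session that becomes "N", the prior session is retained as "N-1", and older sessions fall off (CLARiiON allows up to eight sessions per source; only two are kept here).

```python
# Sketch of the N / N-1 snapshot rotation at the remote site: after each
# incremental update of the Fibre Channel drives, a new session becomes "N"
# and the prior session is kept as "N-1". CLARiiON allows up to eight
# sessions per source LUN; this customer keeps two.

from collections import deque

class SnapshotRotation:
    MAX_SESSIONS = 8                        # CLARiiON limit per source LUN

    def __init__(self, keep=2):
        self.sessions = deque(maxlen=min(keep, self.MAX_SESSIONS))

    def take_snapshot(self, label):
        self.sessions.append(label)         # oldest copy falls off when full

    def current(self):                      # "N": the latest copy
        return self.sessions[-1]

    def previous(self):                     # "N-1": the prior day's copy
        return self.sessions[-2] if len(self.sessions) > 1 else None

rot = SnapshotRotation()
rot.take_snapshot("Sunday")
rot.take_snapshot("Monday")
print(rot.current(), rot.previous())        # -> Monday Sunday
rot.take_snapshot("Tuesday")
print(rot.current(), rot.previous())        # -> Tuesday Monday
```

Testing and scrubbing are pointed at whichever session is wanted (N or N-1) by associating it with the storage group, while the FC/Gold copy itself stays untouched for disaster recovery.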
Step 5: Active Directory Replication
A domain controller (DC) is added on the New York City site to host a replica of the Active Directory
(AD) structure for the domain. The two DCs replicate the AD information between them using Windows
native replication technology. In case of disaster/disruption at the Albany site, there would be a
surviving DC on the New York site with all the domain databases required for Oracle and domain user
authentication. Oracle communicates with LDAP for the purpose of global validation for access to Oracle
database and/or Oracle applications under Windows.
Example of Proposed Solution
The following is an example of how this proposed solution can be implemented in the customer's
environment, based on their requirements.

First Cycle:
10:00 P.M. (Sunday) With production data as N, synchronization of clone ATA1 from production
Fibre Channel drives starts.
10:15 P.M. Synchronization from Fibre Channel drives to ATA1 drives completes. ATA1 is now N.
10:25 P.M. Incremental SAN Copy sessions are started to replicate ATA1 data to ATA2 at the
remote site using a T3 line.
11:25 P.M. ATA2 is now N.
11:30 P.M. Using SnapView clones, the Fibre Channel drives at the remote site are incrementally
updated and are now N.
11:45 P.M. Fibre Channel drives at the remote site are now N.
11:50 P.M. A point-in-time copy of the Fibre Channel drives is taken using the SnapView snapshot
technology to preserve a previous day's copy.
12:00 A.M. Snapshot session is now N.

Second Cycle:
10:00 P.M. (Monday) With production data as N and ATA1 as N-1, synchronization of clone
ATA1 from production Fibre Channel drives starts.
10:15 P.M. Synchronization from Fibre Channel drives to ATA1 drives completes.
ATA1 is now N.
10:25 P.M. Incremental SAN Copy sessions are started to replicate ATA1 data to ATA2 at the
remote site using a T3 line.
11:25 P.M. ATA2 is now N instead of N-1.
11:30 P.M. Using SnapView clones, the Fibre Channel drives at the remote site are updated
incrementally and are now N instead of N-1.
11:45 P.M. Fibre Channel drives at the remote site are now N.
11:50 P.M. A point-in-time copy of the Fibre Channel drives is taken using the SnapView snapshot
technology to preserve a previous day's copy.
12:00 A.M. Snapshot session is now N.
Using the snapshot session from the first cycle, which is now N-1, data scrubbing is done at the remote
site by putting the object (in this case, the snapshot) in the storage group and associating the snapshot
sessions with either the previous day's copy (N-1) or the current day's copy (N). A gold copy of the
production data is thus available on the Fibre Channel drives for disaster recovery at the remote site, since
testing is now done on the snapshot copies.
Failure Scenarios
This section covers the various failure scenarios and possible outcomes, based on the proposed solution.
This section also discusses how EMC CLARiiON local and remote replication products help protect
customer data and reduce downtime.
Database Corruption at the Primary Site (Albany)
This situation occurs when the production volume on the primary site fails due to software or hardware
failures. In such a scenario, the local ATA drives acting as clones can be used to incrementally restore the
Oracle database. For the restore process, the production volume is momentarily brought offline. In other
words, there must be no host-buffered I/O on the volume before the reverse-synchronization process is
initiated. Once the production volume is taken offline for the purposes of flushing the buffers, it can be
brought back online during the reverse-synchronization process. The production volume is online and any
reads to data scheduled for copy back in the reverse-synchronization process will cause the copy of data to
jump the queue in that process, maintaining the correct view. Writes are copied to the production volume
and are propagated to the clone due to the nonprotected nature of the restore operation employed.
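The reverse-synchronization behavior just described can be modeled with a small sketch. This is illustrative only, not the array's implementation: blocks still pending copy-back are restored on demand when a host reads them ("jumping the queue"), and host writes land on the production volume and propagate to the clone because the restore is unprotected.

```python
# Illustrative model of reverse synchronization: the production volume is back
# online while blocks are still being copied from the clone. A read of a
# not-yet-restored block jumps the queue, and writes propagate to the clone
# (unprotected restore).

class ReverseSync:
    def __init__(self, production, clone):
        self.production = production
        self.clone = clone
        self.pending = set(clone)        # blocks still to be copied back

    def _restore_block(self, block):
        self.production[block] = self.clone[block]
        self.pending.discard(block)

    def read(self, block):
        # Reading a not-yet-restored block forces an immediate copy-back,
        # so the host always sees the restored (clone) view.
        if block in self.pending:
            self._restore_block(block)
        return self.production[block]

    def write(self, block, data):
        # A host write lands on production and is propagated to the clone;
        # that block no longer needs restoring.
        self.pending.discard(block)
        self.production[block] = data
        self.clone[block] = data

prod = {0: "corrupt", 1: "corrupt"}
clone = {0: "good0", 1: "good1"}
rs = ReverseSync(prod, clone)
print(rs.read(1))      # -> good1 (block 1 jumped the queue)
rs.write(0, "new0")    # write propagates to the clone
print(clone[0])        # -> new0
```

The point of the model is that correctness is maintained from the host's perspective at all times during the restore, which is what allows the production volume to be brought back online before the copy-back finishes.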
Primary Site Failure
This failure occurs in the case of a power failure at the primary site or some catastrophic event like
earthquake or hurricane. In such a scenario, a recovery or backup site must be available to bring the
application back online for users accessing the database.
Site Failure
A replication of the entire primary site would be needed to handle primary site failures. The customer
replicated and installed 25 Windows servers running Oracle at the secondary site, and then replicated the
data to them in case of a disaster. Since incremental SAN Copy is used to copy data from the Albany site to
the New York site, the final data resides on the Fibre Channel drives, which may end up serving as
production data in the event of a disaster. Depending on when the failure occurs, scripts are written to stop
the replication process through clones and incremental SAN Copy in order to preserve a gold copy of the
Oracle database on the Fibre Channel drives at the remote site. This gold copy can then be used to bring the
application back online in order to service the 25 Windows servers.
Site Failback
Since SAN Copy software can reside on the secondary storage system, a connection between the two sites
is established. Hence, full SAN Copy sessions could be started from the remote CX700 storage system to
the Fibre Channel drives on the primary site for restoration. This entire process can be scripted for
automation and scheduled for execution using Replication Manager.
Final Recommendations
The customer was advised to implement the design put forth in this document. It would provide highly
available, quick recovery of the Oracle database and help keep data available through an unplanned
outage, including a catastrophic failure at the Albany site. In the event of a disaster, the customer would
quickly be able to fail over to the New York site and bring up the Oracle infrastructure with minimum
data loss.
The following storage-system recommendations were made to the customer:
- File systems or raw spaces for the database data and indexes must be on completely separate disks from
the redo, archive, and control files. In other words, database and log files should not reside on the same
CLARiiON LUN.
- Use a RAID 5 (4+1) configuration for database LUNs and log LUNs.
- The reserved LUN pool requires LUNs for initiating incremental SAN Copy sessions on both the
primary and secondary storage systems. The size of these LUNs should be equal to 20 percent of the
size of the largest LUN being SAN copied.
- The clone synchronization rate should be set to high for faster incremental updates.
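The reserved-LUN-pool sizing rule above reduces to a one-line calculation. The LUN sizes below are hypothetical, chosen only to illustrate the 20 percent rule:

```python
# Reserved-LUN-pool sizing per the recommendation above: reserve LUN capacity
# equal to 20 percent of the largest LUN participating in incremental SAN
# Copy, on both the primary and the secondary storage system.

def reserved_lun_size_gb(lun_sizes_gb, fraction=0.20):
    """Return the reserved LUN size implied by the largest source LUN."""
    return fraction * max(lun_sizes_gb)

source_luns_gb = [250, 500, 400]              # hypothetical LUN sizes (GB)
print(reserved_lun_size_gb(source_luns_gb))   # -> 100.0
```

With a largest source LUN of 500 GB, each site would therefore provision 100 GB of reserved LUN capacity for the incremental sessions.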
Summary
This case study considered the use of SAN Copy, together with SnapView clones and snapshots, to
replicate Oracle databases and logs to a remote location. The solution was accepted and implemented by
the customer, since it met their requirements and provided a DR/BC site. This was a significant
improvement over their tape-based solution. Five different copies of the data resided on two CLARiiON
arrays, allowing the customer to select a copy at a specified time interval. All copies could be mounted to
a host and used for test and development.
Generally, incremental SAN Copy could be run at full throttle, providing a solution where the customer
could copy data to the remote site over a T3 line without impacting the production volume. Using EMC
replication software, corruption protection was also available at both the primary and the secondary site,
meeting the RPO of 4 hours and the RTO of 24 hours. Downtime for application backup was minimized
using SnapView clones, a significant improvement over the time it took to back up to tape.
Several point-in-time copies (previous day's copies) of the production data could be used for testing and
validation at the remote site using SnapView snapshots. The entire replication process was automated
using Replication Manager for clones, snapshots, and SAN Copy. This case study shows that EMC
CLARiiON replication software satisfies customer needs and gives customers peace of mind by providing
data availability and protection.
