
EMC Celerra Setup Guide for VMware Site Recovery Manager
Cormac Hogan
Product Support Engineering
December 2008

EMC provides a Celerra simulator in a VM for test and training purposes only. EMC does not
support the use of the Celerra simulator in production.

The Celerra simulator can be downloaded free of charge from the EMC Powerlink web site and is
a great Site Recovery Manager learning tool.

The Celerra simulator has a single Control Station (Management Network). This may be
allocated a DHCP address or configured with a static IP address. You will connect to it
via a web interface to do some of the configuration.

When deploying the Celerra simulator VM, you will need:

• 3 GB of memory and a single network interface for the Celerra Simulator VM.

• 40 GB of disk space for the virtual disk.

• 2 IP addresses, one for the Control Station and one for the Data Mover.

Time to set up: For a pair of replicated Celerra Simulators, allow yourself in the region of
4 hours. The main issue here is the reboot of the simulator. It is slow to start up, and after
the VM has started, it may take the Celerra Simulator itself an additional 15 minutes before
it becomes manageable.

This is very tricky and not at all intuitive. Do not deviate from the setup steps listed
below or you will run into problems.
Part 1 – Control Station Configuration Steps
Import the Celerra virtual appliance onto the ESX host and boot the VM. The simulator runs a
modified Red Hat Linux OS.

There are 2 logins configured on the Celerra Simulator:

root/nasadmin
nasadmin/nasadmin

Step 1: Delete any old Data Mover IP addresses. Log in as nasadmin and check the existing
interfaces using the following command:

[nasadmin@celerra_B_VM ~]$ server_ifconfig ALL -all


dmover_sim_B :
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
10-21-68-73 protocol=IP device=cge0
inet=10.21.68.73 netmask=255.255.252.0 broadcast=10.21.71.255
UP, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:ae:41:e3
el31 protocol=IP device=el31
inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6 netname=localhost
el30 protocol=IP device=el30
inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5 netname=localhost

[nasadmin@celerra_B_VM ~]$

The only interfaces that need to be removed are those that use a cgeX device. For instance,
in the above output the only such interface is the one called 10-21-68-73; the name is simply
a representation of the IP address of the interface. Remove this interface using the following
command:

[nasadmin@celerra_B_VM ~]$ server_ifconfig dmover_sim_B -delete 10-21-68-73


dmover_sim_B : done
[nasadmin@celerra_B_VM ~]$

Once this is removed, you can turn your attention to the Control Station network; the Data
Mover network will be recreated later.

Step 2: To change the Control Station (Management) network settings, as the root user run
the command netconfig -d eth0. This allows you to choose DHCP or set up static
networking on the interface. After making the change, run ifdown eth0 followed by ifup
eth0.

Repeat for a second and/or third interface (eth1 and eth2) if used. However, we will only be
using a single interface in this configuration.

Ignore the dart_eth0 and dart_eth1 interfaces; these are used for communicating with the
back-end storage. In the case of the Celerra Simulator, this is a simulated EMC Clariion
back-end.

Run ifconfig eth0 to verify that your changes have taken effect. Verify that you can
ping the new IP address. You can also ssh to the Control Station if the network is functional.
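For reference, the whole Control Station sequence looks like the sketch below. The static address, netmask and ping target are assumptions for illustration; substitute your own values.

[root@celerraVM ~]# netconfig -d eth0
(choose DHCP or static addressing; the examples in this guide assume a static address such as 10.21.68.252 / 255.255.252.0)
[root@celerraVM ~]# ifdown eth0
[root@celerraVM ~]# ifup eth0
[root@celerraVM ~]# ifconfig eth0
(confirm the new address is active)
[root@celerraVM ~]# ping -c 3 10.21.68.1
(ping another host on the management network; the address shown here is assumed)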
Step 3: Now we set up the Data Mover networking. To make sure that we are using MAC
addresses unique to this Celerra, and not older MACs from the original cloned Celerra,
we have to clean out the old interfaces and re-add them. Log in as nasadmin, cd to
/opt/blackbird/tools (blackbird is the EMC codename for Celerra) and run the command
configure_nic ALL -l, which lists all the defined Data Mover interfaces. This may return
something like this:

[nasadmin@celerra_B_VM ~]$ cd /opt/blackbird/tools


[nasadmin@celerra_B_VM tools]$ ./configure_nic ALL -l
---------------------------------------------------------------
server_2: network devices:
Slot Device Driver Stub Ifname Irq Id Vendor
---------------------------------------------------------------
3 cge0 bbnic direct eth0 0x0018 0x1645 0x14e4
---------------------------------------------------------------
---------------------------------------------------------------
server_3: network devices:
Slot Device Driver Stub Ifname Irq Id Vendor
---------------------------------------------------------------
3 cge0 bbnic direct eth0 0x0018 0x1645 0x14e4
---------------------------------------------------------------
[nasadmin@celerra_B_VM tools]$

The objective is to clear all these entries, reboot the Celerra, and then add new entries. To
delete the old entries, use the following command for each Data Mover defined:

configure_nic <data mover> -d cgeX

For example, to clear the entries shown above, run:


[nasadmin@celerra_B_VM tools]$ ./configure_nic server_2 -d cge0
server_2: deleted device cge0.
---------------------------------------------------------------
server_2: network devices:
Slot Device Driver Stub Ifname Irq Id Vendor
---------------------------------------------------------------
---------------------------------------------------------------
[nasadmin@celerra_B_VM tools]$ ./configure_nic server_3 -d cge0
server_3: deleted device cge0.
---------------------------------------------------------------
server_3: network devices:
Slot Device Driver Stub Ifname Irq Id Vendor
---------------------------------------------------------------
---------------------------------------------------------------
[nasadmin@celerra_B_VM tools]$ ./configure_nic ALL -l
---------------------------------------------------------------
server_2: network devices:
Slot Device Driver Stub Ifname Irq Id Vendor
---------------------------------------------------------------
---------------------------------------------------------------
---------------------------------------------------------------
server_3: network devices:
Slot Device Driver Stub Ifname Irq Id Vendor
---------------------------------------------------------------
---------------------------------------------------------------

All interfaces to the data movers have now been cleared.

Step 4: Before we reboot, we initialize the Celerra ID, as we want to make sure that both
the source and target Celerra IDs are unique when replicating between them.

Change to the root user, go to /opt/blackbird/tools and run the command init_storageID. It
asks whether you want to reboot the Celerra; answer y at this time.

I've found this reboot to be slow, so I allow it to sit for a while, then press Ctrl-C and use reboot -n.
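A sketch of that sequence (output omitted; the exact prompt text may vary between simulator builds):

[root@celerraVM ~]# cd /opt/blackbird/tools
[root@celerraVM tools]# ./init_storageID
(answer y when asked whether to reboot the Celerra)
[root@celerraVM tools]# reboot -n
(only needed if the automatic reboot appears to hang)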
Step 5: After logging back in, cd to the /opt/blackbird/tools directory again and run the
command ./configure_nic <data mover> -a ethX. Each of these commands adds a new cge
interface to the Data Mover. This means that if you give eth0 as the argument, a cge0 is
created which will communicate with the outside world via eth0. Similarly, if you gave eth1
as the argument, your Data Mover cge0 interface would communicate with the outside world
via eth1. And so on.

[nasadmin@celerraVM tools]$ ./configure_nic server_2 -a eth0


server_2: added new device cge0 in slot 3.
Use server_ifconfig to configure the newly added device
after reboot the virtual machine.
---------------------------------------------------------------
server_2: network devices:
Slot Device Driver Stub Ifname Irq Id Vendor
---------------------------------------------------------------
3 cge0 bbnic direct eth0 0x0018 0x1645 0x14e4
---------------------------------------------------------------

[nasadmin@celerraVM tools]$ ./configure_nic server_3 -a eth0


server_3: added new device cge0 in slot 3.
Use server_ifconfig to configure the newly added device
after reboot the virtual machine.
---------------------------------------------------------------
server_3: network devices:
Slot Device Driver Stub Ifname Irq Id Vendor
---------------------------------------------------------------
3 cge0 bbnic direct eth0 0x0018 0x1645 0x14e4
---------------------------------------------------------------
[nasadmin@celerraVM tools]$

Once again we must reboot. You may notice that there are two data movers here. In the Celerra
simulator that I have, there appear to be two, and I'm unsure which is the active one, so I
added the interface to both. Normally one would expect to see only a single data mover
defined, but to be sure I configured both. Reboot the VM.

Step 6: Log in as root and set up the IP address and hostname using the following commands:

[root@celerraVM ~]# ifconfig eth0


eth0 Link encap:Ethernet HWaddr 00:50:56:AF:46:30
inet addr:10.21.68.252 Bcast:10.21.71.255 Mask:255.255.252.0
inet6 addr: fe80::250:56ff:feaf:4630/64 Scope:Link
UP BROADCAST RUNNING MULTICAST DYNAMIC MTU:1500 Metric:1
RX packets:897 errors:0 dropped:0 overruns:0 frame:0
TX packets:261 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:74495 (72.7 KiB) TX bytes:29526 (28.8 KiB)
Interrupt:11 Base address:0x1400

[root@celerraVM ~]# more /etc/hosts


# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost localhost.localdomain localhost
10.21.68.252 celerraVM
# Internal DART Server Primary Network

[root@celerraVM ~]# cd /etc/sysconfig/


[root@celerraVM sysconfig]# cat network
NETWORKING=yes
FORWARD_IPV4="no"
DOMAINNAME=csl.vmware.com
HOSTNAME=celerraVM

[root@celerraVM sysconfig]# hostname celerraVM


[root@celerraVM sysconfig]# hostname
celerraVM
[root@celerraVM sysconfig]#
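To be sure the new identity survives a reboot, check that /etc/sysconfig/network and /etc/hosts carry the hostname and Control Station IP address shown above. A minimal sketch, assuming the hostname celerraVM and the address 10.21.68.252 from this example:

[root@celerraVM sysconfig]# vi /etc/sysconfig/network
(confirm HOSTNAME=celerraVM)
[root@celerraVM sysconfig]# vi /etc/hosts
(confirm the entry: 10.21.68.252 celerraVM)
[root@celerraVM sysconfig]# hostname
(should return celerraVM)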
Step 7: Update the Celerra identification details

Log off as root, log in as nasadmin/nasadmin and run the command nas_cel -l:

[nasadmin@celerraVM ~]$ nas_cel -l


id name owner mount_dev channel net_path CMU
0 localhost 0 127.0.0.1 BB005056AF1EE60000

Notice that the name is localhost. We need to update this to reflect the current Celerra
settings. Become root and run the following commands:

[nasadmin@celerraVM ~]$ su -
Password:
[root@celerraVM ~]# NAS_DB=/nas
[root@celerraVM ~]# export NAS_DB
[root@celerraVM ~]# /nas/bin/nas_cel -update id=0
operation in progress (not interruptible)...spawn /usr/bin/htdigest
/nas/http/conf/digest DIC_Authentication BB005056AF1EE60000_BB005056AF1EE60000
Adding user BB005056AF1EE60000_BB005056AF1EE60000 in realm DIC_Authentication
New password:
Re-type new password:

id = 0
name = celerraVM
owner = 0
device =
channel =
net_path = 10.21.68.252
celerra_id = BB005056AF1EE60000
Warning 17716815874: server_4 : failed to create the loopback interconnect
[root@celerraVM ~]#

Notice that the NAS_DB environment variable must be set first. Return to the
nasadmin user and re-run the nas_cel -l command:

[nasadmin@celerraVM ~]$ nas_cel -l


id name owner mount_dev channel net_path CMU
0 celerraVM 0 10.21.68.252 BB005056AF1EE60000
[nasadmin@celerraVM ~]$

This looks much better and completes the command-line setup. The remaining tasks will be
carried out from the Celerra Manager web interface, namely adding the Data Mover to the
network and creating an iSCSI target and LUN.
Part 2 – Data Mover Network Configuration steps

Step 1: Connect to the IP address of your Celerra Simulator eth0 interface and log in as
nasadmin/nasadmin.

Step 2: Navigate to Data Movers, <data mover>, Network. If you did not clean up the data
mover networks as described earlier in the document, it may be that your data mover has
some older pre-defined networking, so you will first have to remove that. If no cge interfaces
exist, proceed to step 4.
Step 3: Select the cgeX network interfaces and click the Delete button. Do not touch the elX
network interfaces, as these are used for communicating with the simulated back-end storage
(Clariion).

Once all the old cge network interfaces are removed, we can add a new cge interface.
The old interfaces would have been using the old MAC addresses; since we set up new devices
using configure_nic earlier, we now need to re-add the interfaces to the Data Mover here.

Step 4: Add the new interface by clicking on the New button.

The interface cge0 is correct. Populate the IP Address and Netmask, allow the Broadcast
Address to populate automatically and click OK. This uses the second IP address that
we discussed in the introduction (the Data Mover address).
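If you prefer the Control Station CLI to the GUI for this step, server_ifconfig can create the interface directly. This is a sketch only; the Data Mover name, interface name, IP address and netmask are taken from the example that follows, and you should confirm the -create syntax against the server_ifconfig usage on your build:

[nasadmin@celerra_A_VM_2 ~]$ server_ifconfig server_4 -create -Device cge0 -name 10-21-68-178 -protocol IP 10.21.68.178 255.255.252.0 10.21.71.255
[nasadmin@celerra_A_VM_2 ~]$ server_ifconfig ALL -all
(verify that the new cge0 interface shows as UP)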
Step 5: Verify that you can ping this interface once it is created. Do not try to ping it from the
Celerra Control Station; ping it from outside the Celerra, e.g. from your desktop.

Use the following commands to test the network connectivity of the Data Mover:

[root@celerra_A_VM_2 ~]# NAS_DB=/nas


[root@celerra_A_VM_2 ~]# export NAS_DB

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig ALL -all


server_4 :
10-21-68-178 protocol=IP device=cge0
inet=10.21.68.178 netmask=255.255.252.0 broadcast=10.21.71.255
UP, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:af:61:8e
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
el31 protocol=IP device=el31
inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6
netname=localhost
el30 protocol=IP device=el30
inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5
netname=localhost

[root@celerra_A_VM_2 ~]#

You can also try downing this interface and bringing it up again:

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig server_4 10-21-68-178 down


server_4 : done

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig ALL -all


server_4 :
10-21-68-178 protocol=IP device=cge0
inet=10.21.68.178 netmask=255.255.252.0 broadcast=10.21.71.255
DOWN, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:af:61:8e
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
el31 protocol=IP device=el31
inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6
netname=localhost
el30 protocol=IP device=el30
inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5
netname=localhost

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig server_4 10-21-68-178 up


server_4 : done

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig ALL -all


server_4 :
10-21-68-178 protocol=IP device=cge0
inet=10.21.68.178 netmask=255.255.252.0 broadcast=10.21.71.255
UP, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:af:61:8e
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
el31 protocol=IP device=el31
inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6
netname=localhost
el30 protocol=IP device=el30
inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5
netname=localhost
Lastly, since the Control Station eth0 and the Data Mover cge0 share the same MAC
address, ensure that the Virtual Switch and VM network allow promiscuous mode.

If you cannot ping the Data Mover interface, go back and check the configure_nic steps in Part 1.
Until you can ping this interface from outside the Celerra, there is no point in continuing any further.
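A quick way to test this from outside the Celerra; a sketch, assuming the Data Mover address 10.21.68.178 used above and an ESX 3.x service console where vmkping is available:

C:\> ping 10.21.68.178
(from your desktop)
[root@esxhost ~]# vmkping 10.21.68.178
(from the ESX service console; tests the VMkernel path that software iSCSI will use)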
Part 3 – iSCSI Configuration Steps
The final steps of the configuration are to create and present an iSCSI LUN to our ESX
servers.

Step 1: The first thing to do is license the features that we are going to use. Select the Celerra
in the Celerra Manager screen and then the Licenses tab. You will not need any license keys
to enable the features; it is simply a matter of enabling them. However, you may first have to
initialize the license table. If enabling a license fails with the error 'license table is not
initialized', run this command:

[nasadmin@celerra_A_VM_2 ~]$ nas_license -init


done
[nasadmin@celerra_A_VM_2 ~]$

Then repeat the enabling of the licensable features. At a minimum, the iSCSI and replication
features should be enabled before continuing to the next steps.
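The license state can also be checked from the CLI with nas_license, which we used above for -init. This is a sketch only; the -list and -create options are assumptions about the simulator build, so run nas_license on its own first to confirm the syntax it supports:

[nasadmin@celerra_A_VM_2 ~]$ nas_license -list
(show which features are currently enabled; assumed option)
[nasadmin@celerra_A_VM_2 ~]$ nas_license -create iscsi
(enable the iSCSI feature from the CLI; assumed option)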

Step 2: In Celerra Manager, click on the Wizards button.


Step 3: Click on New iSCSI Target

Step 4: Verify that the Data Mover is correct and click Next

Step 5: Add a target Alias Name (I used celerra_b_sim but it doesn’t really matter what you
choose), ensure that Auto Generate Target Qualified Name is checked and click Next.
Step 6: Add the Data Mover interface to the Target Portals by clicking the Add button, then
click Next.

The IQN is derived from the Celerra ID which was created back in Part 1, Step 7.

Step 7: Click Finish.

Step 8: Verify that the command was successful and proceed to create the iSCSI LUN and
present it to the ESX by clicking Close.

Step 9: Now you can enable software iSCSI on your ESX server and add the IP address
of your Data Mover (now an iSCSI target) to the list of Dynamically Discovered Targets.
This should be straightforward, so I will not document the VI Client steps in detail here.
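For reference, the equivalent steps from the ESX 3.x service console look roughly like the sketch below. The commands are from memory and the software iSCSI adapter name (vmhba32) is an assumption, so verify both against your ESX version; the VI Client route described later in Step 15 is the safer path.

[root@esxhost ~]# esxcfg-firewall -e swISCSIClient
(open the software iSCSI client port, 3260, in the ESX firewall)
[root@esxhost ~]# esxcfg-swiscsi -e
(enable the software iSCSI initiator)
[root@esxhost ~]# vmkiscsi-tool -D -a 10.21.68.178 vmhba32
(add the Data Mover IP as a dynamic discovery target; address and adapter name assumed)
[root@esxhost ~]# esxcfg-rescan vmhba32
(rescan the adapter for new targets and LUNs)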
Step 10: At the Wizards window, select New iSCSI Lun just above the target option chosen
previously.

Step 11: Verify that the Data Mover is correct and click Next.

Step 12: The Target Portals view should display the IP address of your Data Mover that you
created earlier. Notice also the IQN used for the interface. Once verified, click Next to
continue.
Step 13: A 4.7 GB file system called vol1 has already been set up by default on the
simulator. Verify that it is available and selected. Click Next to continue.

Step 14: Create a new LUN of 1600MB. This will hold our demo VM. We make it small so that
snapshots can be stored on the file system. Notice also the % of file system used.

Step 15: You should already have added this Data Mover to the list of Dynamic Targets to be
discovered by the ESX software iSCSI initiator back in Step 9. If you have done this, then you
should see the initiator from the ESX host available here for masking.

If you do not see the ESX software iSCSI initiator in the Known Initiator list, log onto the VI
client for your ESX server, enable your software iSCSI initiator, add the data mover as a
target, open the iSCSI port (3260) in the firewall and click rescan. Your ESX software iSCSI
initiator should appear in the Known Initiator list.

Click the Grant button to grant the ESX software iSCSI initiator access to the
protected/source LUN.

Note: When configuring the recovery side, do not grant access to the recovery/target LUN, as
we will not be able to replicate the LUN if you do. Click Next to continue.
Step 16: Click Next to skip over this CHAP screen. We will not be setting up CHAP.

Step 17: Click Finish.

Step 18: Verify that the commands were successful and click Close.

Step 19: On the protected/source side, create a VMFS file system on the iSCSI LUN and
run a VM on it. Use one of the small 1 GB JEOS VMs. Accept the default 1 MB file block size,
but give the VMFS a recognisable label, like celerra_sim_vol.
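The VI Client is the easiest way to format the LUN, but a service console sketch is shown below for reference. It assumes a partition already exists on the LUN (the VI Client creates one for you) and that vmhba32:0:0:1 is the right device name for your host; both are assumptions, so check with the VI Client or esxcfg-vmhbadevs before running anything like this.

[root@esxhost ~]# vmkfstools -C vmfs3 -b 1m -S celerra_sim_vol vmhba32:0:0:1
(create a VMFS3 file system with a 1 MB block size and the label celerra_sim_vol; device name assumed)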
Step 20: Repeat these steps for the recovery side Celerra simulator, keeping in mind the
difference at step 14, and move onto the final part of the setup which is replication between
the two simulators.
Part 4 – Replication Configuration Steps

We will do most of these steps from within the CLI. The steps can be summarised as follows:
1. Create a trust between the data movers at the local and remote sites.
2. Create an interconnect to allow the data movers to communicate.
3. Set the iSCSI LUN on the recovery Celerra read-only.
4. Configure the replication between the local and remote iSCSI LUNs.

Step 1: Create a trust between the data movers at the local and remote sites.

On both the protection side Celerra and the recovery side Celerra, run:

# nas_cel -create <cel_name> -ip <ipaddr> -passphrase <phrase>

e.g.

[nasadmin@celerraVM ~]$ nas_cel -l


id name owner mount_dev channel net_path CMU
0 celerraVM 0 10.21.68.252 BB005056AF1EE60000

[nasadmin@celerraVM ~]$ nas_cel -create celerra_B_VM -ip 10.21.68.250 -passphrase


vmware
operation in progress (not interruptible)...
id = 1
name = celerra_B_VM
owner = 0
device =
channel =
net_path = 10.21.68.250
celerra_id = BB005056AE2C0F0000
passphrase = vmware
[nasadmin@celerraVM ~]$

The phrase must be the same in both cases. I use vmware as the phrase.
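The example above was run on the protected side and registers the recovery Celerra. The matching command on the recovery side registers the protected Celerra in the other direction; a sketch using the name celerra_VM and the Control Station address that appear elsewhere in this guide:

[nasadmin@celerra_B_VM ~]$ nas_cel -create celerra_VM -ip 10.21.68.252 -passphrase vmware
(the same passphrase must be used on both sides)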

To verify the trust relationship, run this command on both Celerras:

# nas_cel -l

[nasadmin@celerraVM ~]$ nas_cel -l


id name owner mount_dev channel net_path CMU
0 celerraVM 0 10.21.68.252 BB005056AF1EE60000
1 celerra_B_VM 0 10.21.68.250 BB005056AE2C0F0000
[nasadmin@celerraVM ~]$
Step 2: Create an interconnect to allow the data movers to communicate

On both the protection side Celerra and the recovery side Celerra, run:

# nas_cel -interconnect -create <name>


-source_server <movername>
-destination_system {<cel_name> | id=<cel_id>}
-destination_server <movername>
-source_interfaces {<name_service_interface_name> | ip=<ipaddr> [,
{<name_service_interface_name> | ip=<ipaddr>},...]
-destination_interfaces {<name_service_interface_name> | ip=<ipaddr>}
[,{<name_service_interface_name> | ip=<ipaddr>},...]

• From the source/protected side:

[nasadmin@celerraVM ~]$ nas_cel -interconnect -create srm_inter


-source_server server_4 -destination_system celerra_B_VM
-destination_server dmover_sim_B -source_interfaces ip=10.21.68.75
-destination_interfaces ip=10.21.68.73
operation in progress (not interruptible)...
id = 20003
name = srm_inter
source_server = server_4
source_interfaces = 10.21.68.75
destination_system = celerra_B_VM
destination_server = dmover_sim_B
destination_interfaces = 10.21.68.73
bandwidth schedule = use available bandwidth
crc enabled = yes
number of configured replications = 0
number of replications in transfer = 0
current transfer rate (KB/sec) = 0
average transfer rate (KB/sec) = 0
sample transfer rate (KB/sec) = 0
status = The interconnect is OK.

• From the target/recovery side:

[nasadmin@celerra_B_VM ~]$ nas_cel -interconnect -create srm_inter


-source_server dmover_sim_B -destination_system celerra_VM
-destination_server server_4 -source_interfaces ip=10.21.68.73
-destination_interfaces ip=10.21.68.75
operation in progress (not interruptible)...
id = 20003
name = srm_inter
source_server = dmover_sim_B
source_interfaces = 10.21.68.73
destination_system = celerra_VM
destination_server = server_4
destination_interfaces = 10.21.68.75
bandwidth schedule = use available bandwidth
crc enabled = yes
number of configured replications = 0
number of replications in transfer = 0
current transfer rate (KB/sec) = 0
average transfer rate (KB/sec) = 0
sample transfer rate (KB/sec) = 0
status = The interconnect is OK.
Check the status of the interconnects:

[nasadmin@celerraVM ~]$ nas_cel -interconnect -list


id name source_server destination_system destination_server
20001 loopback server_4 unknown unknown
20003 srm_inter server_4 celerra_B_VM dmover_sim_B

[nasadmin@celerra_B_VM ~]$ nas_cel -interconnect -list


id name source_server destination_system destination_server
20001 loopback dmover_sim_B unknown unknown
20003 srm_inter dmover_sim_B celerra_VM server_4

Note: The interconnect name must be the same on both Celerras.

You can use the -info option for additional information.
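A sketch of the -info query, assuming that the id= form shown in the listing above (or the interconnect name) is accepted by your build:

[nasadmin@celerraVM ~]$ nas_cel -interconnect -info id=20003
(or: nas_cel -interconnect -info srm_inter)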

Step 3: Set the iSCSI LUN on the recovery Celerra read-only

If you followed the steps correctly in Part 3, Step 15, you will not have presented the LUN on
the recovery/target side to any initiators. Therefore you can go ahead and make this LUN
read-only for replication using the following command:

# server_iscsi <movername> -lun -modify <lun_number> -target <target_alias_name> -readonly yes

e.g.

[nasadmin@celerra_B_VM ~]$ server_iscsi dmover_sim_B -lun -modify 0


-target celerra_b_dm -readonly yes

If this command succeeds, skip to step 4. However, if you did present the LUN, the command
to make the LUN read-only will fail with the error:

cfgModifyLun failed. LUN 0 is used by initiators and cannot be modified to Read-Only.


Error 4020: dmover_sim_B : failed to complete command

To unmask the LUN from the initiator, type the following command:

# server_iscsi <movername> -mask -clear <target_alias_name> -initiator <initiator_name>

e.g.

[nasadmin@celerra_B_VM ~]$ server_iscsi dmover_sim_B -mask -clear


celerra_b_dm -initiator iqn.1998-01.com.vmware:cs-pse-d02-2954bbcd
dmover_sim_B : done

Now retry the command to make the LUN read-only so that it can be used for replication:

[nasadmin@celerra_B_VM ~]$ server_iscsi dmover_sim_B -lun -modify 0


-target celerra_b_dm -readonly yes
dmover_sim_B : done
Step 4: When the LUN has been made read-only, apply the LUN mask to assign it to the
recovery side ESX software iSCSI initiator using the command:

# server_iscsi <movername> -mask -set <target_alias_name>


-initiator <initiator_name> -grant <access_list>

e.g.

[nasadmin@celerra_B_VM ~]$ server_iscsi dmover_sim_B -mask -set


celerra_b_dm -initiator iqn.1998-01.com.vmware:cs-pse-d02-2954bbcd
-grant 0
dmover_sim_B : done
[nasadmin@celerra_B_VM ~]$

Step 5: Now we are finally ready to do the replication. Use the following command from the
source/protected Celerra:

# nas_replicate -create <name> -source -lun <lunNumber>


-target <targetIqn> -destination -lun <lunNumber> -target <targetIqn> -interconnect
{ <name> | id=<interConnectId> }
[-source_interface { ip=<ipAddr> | <nameServiceInterfaceName> }] [-destination_interface {
ip=<ipAddr> | <nameServiceInterfaceName> }] [ { -max_time_out_of_sync
<maxTimeOutOfSync> | -manual_refresh } ]
-overwrite_destination [ -background ]

e.g.

[nasadmin@celerraVM ~]$ nas_replicate -create srm_replic -source -lun


0 -target iqn.1992-05.com.emc:bb005056af1ee60000-8 -destination -lun
0 -target iqn.1992-05.com.emc:bb005056ae2c0f0000-10 -interconnect
srm_inter -source_interface ip=10.21.68.75 -destination_interface
ip=10.21.68.73
OK
[nasadmin@celerraVM ~]$

If you get the OK response, it means that the replication request was successful. Check the
status of the sync by running the following commands:

[nasadmin@celerraVM ~]$ nas_replicate -l


Name Type Local Mover Interconnect
Celerra Status
srm_replic iscsiLun server_4 -->srm_inter
celerra_B_VM OK
[nasadmin@celerraVM ~]$ nas_replicate -info srm_replic
ID =
fs25_T8_LUN0_BB005056AF1EE6_0000_fs25_T10_LUN0_BB005056AE2C0F_0000
Name = srm_replic
Source Status = OK
Network Status = OK
Destination Status = OK
Last Sync Time =
Type = iscsiLun
Celerra Network Server = celerra_B_VM
Dart Interconnect = srm_inter
Peer Dart Interconnect = srm_inter
Replication Role = source
Source Target = iqn.1992-05.com.emc:bb005056af1ee60000-8
Source LUN = 0
Source Data Mover = server_4
Source Interface = 10.21.68.75
Source Control Port = 0
Source Current Data Port = 59050
Destination Target = iqn.1992-05.com.emc:bb005056ae2c0f0000-10
Destination LUN = 0
Destination Data Mover = dmover_sim_B
Destination Interface = 10.21.68.73
Destination Control Port = 5085
Destination Data Port = 8888
Max Out of Sync Time (minutes) = 10
Application Data =
Next Transfer Size (Kb) = 56
Latest Snap on Source =
Latest Snap on Destination =
Current Transfer Size (KB) = 56
Current Transfer Remain (KB) = 0
Estimated Completion Time =
Current Transfer is Full Copy = Yes
Current Transfer Rate (KB/s) = 1486
Current Read Rate (KB/s) = 438
Current Write Rate (KB/s) = 58
Previous Transfer Rate (KB/s) = 0
Previous Read Rate (KB/s) = 0
Previous Write Rate (KB/s) = 0
Average Transfer Rate (KB/s) = 0
Average Read Rate (KB/s) = 0
Average Write Rate (KB/s) = 0

Note the blank Last Sync Time and that Current Transfer is Full Copy is set to Yes. This means
the LUN is currently doing a full sync. When this has finished, we can set things up in SRM.
When the sync is complete, you should see:

[nasadmin@celerraVM ~]$ nas_replicate -info srm_replic


ID =
fs25_T8_LUN0_BB005056AF1EE6_0000_fs25_T10_LUN0_BB005056AE2C0F_0000
Name = srm_replic
Source Status = OK
Network Status = OK
Destination Status = OK
Last Sync Time = Wed Nov 12 14:19:51 GMT 2008
Type = iscsiLun
Celerra Network Server = celerra_B_VM
Dart Interconnect = srm_inter
Peer Dart Interconnect = srm_inter
Replication Role = source
Source Target = iqn.1992-05.com.emc:bb005056af1ee60000-8
Source LUN = 0
Source Data Mover = server_4
Source Interface = 10.21.68.75
Source Control Port = 0
Source Current Data Port = 0
Destination Target = iqn.1992-05.com.emc:bb005056ae2c0f0000-10
Destination LUN = 0
Destination Data Mover = dmover_sim_B
Destination Interface = 10.21.68.73
Destination Control Port = 5085
Destination Data Port = 8888
Max Out of Sync Time (minutes) = 10
Application Data =
Next Transfer Size (Kb) = 0
Latest Snap on Source =
Latest Snap on Destination =
Current Transfer Size (KB) = 0
Current Transfer Remain (KB) = 0
Estimated Completion Time =
Current Transfer is Full Copy = No
Current Transfer Rate (KB/s) = 66
Current Read Rate (KB/s) = 774
Current Write Rate (KB/s) = 0
Previous Transfer Rate (KB/s) = 0
Previous Read Rate (KB/s) = 0
Previous Write Rate (KB/s) = 0
Average Transfer Rate (KB/s) = 959
Average Read Rate (KB/s) = 0
Average Write Rate (KB/s) = 0

Only when the sync has completed will the LUN be discovered by SRM.

This completes the Celerra Simulator replication setup.


Part 5 – SRA installation
Double click the Celerra SRA executable to begin the installation. This must be installed on
both the protected and recovery sides.

Step 1: Launch the Installation and click Install.

Step 2: Accept the EULA and click Next


Step 3: Click Finish or view README.txt

Step 4: README points you to Powerlink


Part 6 - SRM Array Managers Configuration
Step 1: On the protected side, click on the SRM button within VirtualCenter and select the Site
Recovery Summary tab.

Step 2: Click the Configure link next to Array Managers. Populate the Display Name,
select the correct Manager Type for Celerra (Celerra iSCSI Native), insert the IP Address of
the Protected Array as well as the username and password (nasadmin/nasadmin) for the
array, and finally click Connect.
Step 3: Once the protected Celerra array appears in the list of Protection Arrays, click Next to
discover the Recovery side array

Step 4: Populate the fields as in Step 2, but this time for the Recovery side array. Once the recovery
Celerra array appears in the list, the Array Discovery task has completed successfully.
Click Next to proceed to the Replicated LUN discovery task.
Step 5: If the Array Managers screen returns a LUN for the arrays that you have populated,
then the Discover Replicated LUN task has succeeded. If no LUN is returned, it could be
that the LUN has not replicated within the array, or that the protected LUN does not contain a
running VM. We will look at this in more detail later, but for the moment the screen below shows
what is expected from a working configuration:

These steps are not necessary on the recovery side; the array manager is only configured on
the protected side.
Part 7 – Create Protection Group & Recovery Plan
Step 1: In SRM, on the protected site, click on Protection Groups:

Step 2: Click on the Create Protection Groups link, then, when the Create Protection Group
window opens, enter the name of your Protection Group. In this case, I have called it
PG-Celerra-New:
Step 3: Select the Datastore Group that you wish to protect. This will be the datastore group
which contains the LUN that is being replicated, and thus the VMFS and Virtual Machine. The
list of virtual machines in the datastore group will appear below:

Step 4: Decide where to hold your virtual machine information on the recovery site:
Step 5: On the recovery side, create a Recovery Plan by clicking on the Create Recovery
Plan link:

Step 6: Give the Recovery Plan a name:

Step 7: Choose a Protection Group to use with this Recovery Plan. We chose the Protection
Group that we created on the protected side a few steps ago:
Step 8: Set the timeouts for the virtual machines during this failover. These can be left at the
default for the most part unless you have some virtual machines that take a long time to start:

Step 9: Decide which network the VMs should come up on during a failover. You can also
have them come up in a ‘bubble’ network during a test failover.
Step 10: Do you want to suspend any Virtual Machines during failover?

Step 11: Recovery Plan is now created.


Part 8 - Do a Test Failover

Step 1: Now that you have a Protection Group and Recovery Plan in place, you can go ahead
and do a test failover. On the recovery side, select your recovery plan and then the
Recovery Steps tab:

Step 2: Click on Test.

Step 3: If everything is working successfully, the recovery steps will eventually look like this:
Step 4: Navigate back to your Inventory -> Hosts & Clusters, and you should observe that
the protected VM is now running on the recovery side. You should also observe that the VM's
network is part of the 'bubble' network and that the VM's datastore is a snapshot of the
original LUN on the protected side:

Notice that the replicated LUN is id 0, but that the snapshot LUN which has been promoted to
a physical LUN is id 128.

Back on the Celerra, you can verify this with the following command:

[nasadmin@celerra_B_VM_2 ~]$ server_iscsi ALL -lun -list


dmover_sim_B :
target: celerra_B
lun size(MB) filesystem
0 1600 vol1 ( id=25 )
128 1600 vol1 ( id=25 )

Looking at the datastore:


Looking at the network:

That completes our verification. Return to the SRM view, and complete the test.
Tips
Q. How can I tell when the Celerra Manager is ready to accept logins over the web interface?

A. Monitor the CPU performance usage of the VM.

When you see the CPU usage start to drop off after the first 10 to 15 minutes, then you can
log in. Otherwise you will get a Service Temporarily Unavailable message when trying to
log in to Celerra Manager. Note that the CPU usage will become high again once you
successfully launch Celerra Manager.

Q. Can I speed up the reboot of the Celerra?

A. Yes, you can turn off the sendmail service, which seems to take an awfully long time to start.
Use the command chkconfig sendmail off.
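A sketch of checking and disabling sendmail on the Control Station with the standard Red Hat service tools:

[root@celerraVM ~]# chkconfig --list sendmail
(check the current runlevel settings)
[root@celerraVM ~]# chkconfig sendmail off
(stop sendmail from starting at boot)
[root@celerraVM ~]# service sendmail stop
(stop it for the current session)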
