Celerra Setup Guide for Site Recovery Manager
Cormac Hogan
Product Support Engineering
December 2008
EMC provide a Celerra simulator in a VM for test & training purposes only. EMC do not
support the use of the Celerra simulator in production.
The Celerra simulator is downloadable free of charge from the EMC powerlink web site and is
a great Site Recovery Manager learning tool.
The Celerra simulator has a single Control Station (Management Network). This may be
allocated a DHCP address or configured with a static IP address. You will connect to this
via a web interface to do some of the configuration.
Allocate 3GB of memory and a single network interface to the Celerra Simulator VM. You will
need two IP addresses: one for the Control Station and one for the Data Mover.
Time to setup: For a pair of replicated Celerra Simulators, you need to consider giving
yourself in the region of 4 hours. The main issue here is the reboot of the simulator. It is slow
to start-up, but after the VM has started, it may take the Celerra Simulator itself an additional
15 minutes before it becomes manageable.
This is very tricky and not at all intuitive. Do not deviate from the setup steps listed
below or you will run into problems.
Part 1 – Control Station Configuration Steps
Import the Celerra virtual appliance onto the ESX host & boot the VM. The simulator runs a
modified Red Hat Linux OS. The default login credentials are root/nasadmin and
nasadmin/nasadmin.
Step 1: Delete any old data mover IP addresses. Login as nasadmin and check using the
following command:
[nasadmin@celerra_B_VM ~]$
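As a rough sketch of what that check looks like, the data mover interfaces can be listed with
server_ifconfig; the data mover name server_2 is an assumption, so adjust it for your simulator:

server_ifconfig server_2 -all    # list every interface defined on this data mover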
The only interfaces that need to be removed are those that use a cgeX device. For instance,
in the above output the only interface is the one called 10-21-68-73. The name is simply a
representation of the IP address of the interface. Remove this interface using the following
command:
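A sketch of the removal, reusing the interface name from the example above (and again assuming
the data mover is named server_2):

server_ifconfig server_2 -delete 10-21-68-73    # remove the stale cge-backed interface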
Once this is removed, you can move on to the Control Station network; we will return to
removing and recreating the data mover network afterwards.
Step 2: To change the Control Station (Management) network settings, as the root user use
the command netconfig -d eth0. This allows you to choose DHCP or set up static
networking on the interface. After making the change, run an ifdown eth0 and an ifup
eth0.
Repeat if using a second and/or third interface (eth1 & eth2). However, we will only be using a
single interface in this configuration.
Ignore the dart_eth0 and dart_eth1 interfaces – these are used for communicating with
back-end storage. In the case of the Celerra Simulator, it communicates to a simulated EMC
Clariion back-end.
Run an ifconfig eth0 to verify that your changes have taken effect. Verify that you can
ping the new IP address. You can also ssh to the Control Station if the network is functional.
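Putting the whole of this step together, a typical session as root looks something like this:

netconfig -d eth0    # choose DHCP or enter the static IP, netmask and gateway interactively
ifdown eth0          # take the interface down
ifup eth0            # bring it back up with the new settings
ifconfig eth0        # confirm the change has taken effect, then ping the address from your desktop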
Step 3: Now we setup the Data Mover networking. To make sure that we are using MAC
addresses unique to this Celerra, and not some older MACs from the original cloned Celerra,
we have to clean out the old interfaces and re-add them. Login as nasadmin and cd to
/opt/blackbird/tools, (blackbird is the EMC codename for Celerra), run the command
configure_nic ALL -l which lists all the defined Data Mover interfaces. This may return
something like this:
The objective is to clear all these entries, reboot the Celerra, and re-add new entries. To
delete the old entries, use the following command for each data mover defined:
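For illustration only: the removal takes the same general shape as the -l and -a invocations used
elsewhere in this guide, but the option letter below is an assumption, so confirm it against the
usage text that configure_nic prints when run without arguments:

cd /opt/blackbird/tools
./configure_nic server_2 -d cge0    # assumed form: remove the cge0 entry from data mover server_2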
Step 4: Now before we reboot we initialize the Celerra ID as we want to make sure that both
the source and target Celerra IDs are unique when replicating between them.
Change to the root user, go to /opt/blackbird/tools and run the command init_storageID. It
asks whether you want to reboot the Celerra. Answer y at this time.
I’ve found this to be slow, so I allow it to sit for a while, then Ctrl-C and use reboot -n.
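The full sequence for this step is therefore:

su -                    # become root
cd /opt/blackbird/tools
./init_storageID        # answer y when asked whether to reboot the Celerra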
Step 5: After logging back in, cd to the /opt/blackbird/tools directory again, run the
command ./configure_nic <data mover> -a ethX. For each one of these commands that you
run, a new cge interface is added to the data mover. This means that if you add eth0 as your
first argument, a cge0 is created which will communicate to the outside world via eth0.
Similarly, if you specified eth1 as your first argument, your data mover cge0 interface would
communicate to the outside world via eth1. And so on.
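For example, to back the data mover's cge0 with the VM's eth0 (server_2 is again an assumed data
mover name; use whatever configure_nic ALL -l reported for your simulator):

cd /opt/blackbird/tools
./configure_nic server_2 -a eth0    # creates cge0 on server_2, mapped to the VM's eth0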
Once again we must reboot. You may notice that I had 2 data movers here. In the Celerra
simulator that I have, there appear to be two, and I'm unsure which is the active one. I
therefore added the interface to both. Normally one would expect to see only a single data
mover defined – but to be sure, I’m configuring both. Reboot.
Step 6: Login as root & setup IP address and hostname using the following commands:
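The exact commands are not critical as long as the Control Station ends up with the correct
hostname and an /etc/hosts entry for itself; a minimal sketch, with the hostname and address
purely illustrative, is:

hostname celerraVM                              # set the running hostname
echo "10.21.68.252  celerraVM" >> /etc/hosts    # map the hostname to the Control Station IP
vi /etc/sysconfig/network                       # set HOSTNAME=celerraVM so the name survives a reboot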
Log off as root and login as nasadmin/nasadmin and run the command nas_cel -l
Notice that the name is localhost. We need to update this to be the current Celerra settings.
Become root and use the following command:
[nasadmin@celerraVM ~]$ su -
Password:
[root@celerraVM ~]# NAS_DB=/nas
[root@celerraVM ~]# export NAS_DB
[root@celerraVM ~]# /nas/bin/nas_cel -update id=0
operation in progress (not interruptible)...spawn /usr/bin/htdigest
/nas/http/conf/digest DIC_Authentication BB005056AF1EE60000_BB005056AF1EE60000
Adding user BB005056AF1EE60000_BB005056AF1EE60000 in realm DIC_Authentication
New password:
Re-type new password:
id = 0
name = celerraVM
owner = 0
device =
channel =
net_path = 10.21.68.252
celerra_id = BB005056AF1EE60000
Warning 17716815874: server_4 : failed to create the loopback interconnect
[root@celerraVM ~]#
Notice that I need to set the NAS_DB environment variable first. Return to the
nasadmin user and re-run the nas_cel -l command:
This looks much better and completes the command-line setup. The remainder of the tasks
we will implement from the Celerra Manager web interface, namely adding the Data Mover to
the network and creating an iSCSI target and LUN.
Part 2 – Data Mover Network Configuration steps
Step 1 – Connect to the IP address of your Celerra Simulator eth0 interface and login as
nasadmin/nasadmin
Step 2: Navigate to Data Movers, <data mover>, Network. If you did not clean up the data
mover networks as described earlier in the document, it may be that your data mover has
some older pre-defined networking, so you will first have to remove that. If no cge interfaces
exist, proceed to step 4.
Step 3: Select the cgeX network interfaces and click the Delete button. Do not touch the elX
network interfaces as these are used for communicating to the simulated back-end storage
(Clariion).
Once all the old cge network interfaces are removed, we can now add a new cge interface.
The old interfaces would have been using old MAC addresses. Since we set up new ones
using configure_nic earlier, we need to re-add the newer interfaces to the data mover
using this method.
Step 4: Add the new cge interface. The interface cge0 is correct. Populate the IP Address &
Netmask, allow the Broadcast Address to automatically populate and click OK. This uses the
second IP address (the Data Mover address) that we discussed in the introduction.
Step 5: Verify that you can ping this interface once it is created. Do not try to ping it from the
Celerra Control Station – ping it from outside the Celerra, i.e. your desktop.
Use the following commands to test the network connectivity of the Data Mover:
[root@celerra_A_VM_2 ~]#
You can also try downing this interface and bringing it up again:
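A sketch of these checks, assuming the data mover is server_2 and the new interface was named
after its IP address (for example 10-21-68-73):

server_ping server_2 <ip-of-your-desktop>    # ping outwards from the data mover
server_ifconfig server_2 10-21-68-73 down    # take the data mover interface down
server_ifconfig server_2 10-21-68-73 up      # and bring it back up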
If you cannot, go back and check the configure_nic steps in part 1. Until you can ping this
interface from outside the Celerra, there is no point in continuing any further.
Part 3 – iSCSI Configuration Steps
The final steps of the configuration are to create and present an iSCSI LUN to our ESX
servers.
Step 1: First thing to do is to license the features that we are going to use. Select the Celerra
in the Celerra Manager screen and then the Licenses tab. You will not need any license keys
to enable the features; it is a simple matter of enabling them. However you may have to first
of all initialize the license table. If you fail to enable a license with the error ‘license table is not
initialized’, run this command:
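I believe the command intended here is nas_license with its initialise option; treat the exact
syntax as an assumption and check nas_license's usage output on your simulator:

/nas/bin/nas_license -init    # initialise the license table, then retry enabling the features in Celerra Manager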
Then repeat the enabling of the licensable features. The following features should be enabled
before continuing to the next steps.
Step 4: Verify that the Data Mover is correct and click Next
Step 5: Add a target Alias Name (I used celerra_b_sim but it doesn’t really matter what you
choose), ensure that Auto Generate Target Qualified Name is checked and click Next.
Step 6: Add the Data Mover Interface to the Target Portals by clicking the Add button. Then click
Next.
The IQN incorporates the Celerra ID which was initialized back in part 1 (the init_storageID step).
Step 8: Verify that the command was successful and proceed to create the iSCSI LUN and
present it to the ESX by clicking Close.
Step 9: Now you can enable software iSCSI on your ESX server and add the IP addresses
of your Data Mover (now an iSCSI target) to the list of Dynamically Discovered Targets.
This should be straightforward so I will not document it here.
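For reference, one way of doing it from the ESX 3.5 service console is sketched below; the
adapter name vmhba32 and the Data Mover address are assumptions, so substitute what your own
host reports:

esxcfg-firewall -e swISCSIClient            # open the outgoing iSCSI port (3260) in the firewall
esxcfg-swiscsi -e                           # enable the software iSCSI initiator
vmkiscsi-tool -D -a 10.21.68.73 vmhba32     # add the Data Mover as a dynamic discovery target
esxcfg-rescan vmhba32                       # rescan so the Celerra LUN is picked up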
Step 10: At the Wizards window, select New iSCSI Lun just above the target option chosen
previously.
Step 11: Verify that the Data Mover is correct and click Next.
Step 12: The Target Portals view should display the IP address of your Data Mover that you
created earlier. Notice also the IQN used for the interface. Once verified, click Next to
continue.
Step 13: A file system of 4.7GB called vol1 has already been setup by default on the
simulator. Verify that it is available & selected. Click Next to continue.
Step 14: Create a new LUN of 1600MB. This will hold our demo VM. We make it small so that
snapshots can be stored on the file system. Notice also the % of file system used.
Step 15: You should already have added this Data Mover to the list of Dynamic Targets to be
discovered by the ESX software iSCSI initiator in step 1. If you have done this, then you
should see the initiator from the ESX available here for masking.
If you do not see the ESX software iSCSI initiator in the Known Initiator list, log onto the VI
client for your ESX server, enable your software iSCSI initiator, add the data mover as a
target, open the iSCSI port (3260) in the firewall and click rescan. Your ESX software iSCSI
initiator should appear in the Known Initiator list.
Click on the Grant button to grant the ESX software iSCSI initiator LUN access for the
Protected/Source LUN.
Note: When configuring the recovery side, do not grant access to the recovery/target as we
will not be able to replicate the LUN if you do that. Click Next to continue.
Step 16: Click Next to skip over this CHAP screen. We will not be setting up CHAP.
Step 18: Verify that the commands were successful and click Close.
Step 19: On the protected/source side LUN, create a VMFS file system on the iSCSI LUN and
run a VM on it. Use one of the small 1GB JEOS VMs. Accept the default 1MB file block size,
but give the VMFS label something recognisable, like celerra_sim_vol.
Step 20: Repeat these steps for the recovery side Celerra simulator, keeping in mind the
difference at step 14, and move onto the final part of the setup which is replication between
the two simulators.
Part 4 – Replication Configuration Steps
We will do most of these steps from within the CLI. The steps can be summarised as follows:
1. Create a trust between the data movers at the local and remote sites.
2. Create an interconnect to allow the data movers to communicate
3. Set the iSCSI LUN on the recovery Celerra read-only
4. Configure the replication between the local and remote iSCSI LUNs.
Step 1: Create a trust between the data movers at the local and remote sites.
On both the protection side Celerra and the recovery side Celerra, run:
e.g.
The phrase must be the same in both cases. I use vmware as the phrase.
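A sketch of the trust creation with nas_cel, assuming the remote Control Station is named
celerra_B_VM at 10.21.68.74 (both the name and the address are illustrative) and the passphrase
is vmware:

# run on the protection side pointing at the recovery side Control Station, and vice versa
/nas/bin/nas_cel -create celerra_B_VM -ip 10.21.68.74 -passphrase vmware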
# nas_cel -l
On both the protection side Celerra and the recovery side Celerra, run:
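Assuming this is where the interconnect from item 2 of the summary is created, the
nas_cel -interconnect form sketched below shows the general shape; the option names, data mover
names and addresses are all assumptions, so verify them against nas_cel's usage output:

/nas/bin/nas_cel -interconnect -create A_to_B -source_server server_2 \
  -destination_system celerra_B_VM -destination_server server_2 \
  -source_interfaces ip=10.21.68.73 -destination_interfaces ip=10.21.68.74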
If you followed the steps correctly in part 3, step 13, you will not have presented the LUN at
the recovery/target side to any initiators. Therefore you can go ahead and make this LUN
read-only for replication using the following command:
e.g.
If this command succeeds, skip to step 4. However, if you did present the LUN, the command
to make the LUN read-only will fail with the error:
To unmask the LUN from the initiator, type the following command:
e.g.
Now retry your attempt to make the LUN read-only to allow us to use it in replication.
e.g.
Step 5: Now we are finally ready to do the replication. Use the following command from the
source/protected Celerra:
e.g.
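A heavily hedged sketch of the create command, assuming Replicator V2 iSCSI LUN syntax; the
session name lun_replica, the LUN numbers, the target aliases and the interconnect name are all
illustrative, and the option layout should be checked against nas_replicate's usage output:

/nas/bin/nas_replicate -create lun_replica \
  -source -lun 0 -target celerra_a_sim \
  -destination -lun 0 -target celerra_b_sim \
  -interconnect A_to_B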
If you get the OK response, it means that the replication request was successful. Check the
status of the sync by running the following commands:
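The list and info options of nas_replicate show the session state; lun_replica here is the
hypothetical session name used in the sketch above:

/nas/bin/nas_replicate -list               # all replication sessions and their current state
/nas/bin/nas_replicate -info lun_replica   # detail, including Last Sync Time and Current Transfer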
Note the blank Last Sync Time and that the Current Transfer is a Full Copy. This means
the LUN is currently doing a full sync. When this is finished we can proceed with the SRM setup.
When the sync is complete, you should notice:
Only when the sync has completed will the LUN be discovered by SRM.
Step 2: Click the Configure link against Array Managers. Populate the Display Name,
select the correct Manager Type for Celerra (Celerra iSCSI Native), insert the IP Address for
the Protected Array as well as the username and password (nasadmin/nasadmin) for the
array and finally click Connect.
Step 3: Once the protected Celerra array appears in the list of Protection Arrays, click Next to
discover the Recovery side array
Step 4: Populate as per step 3, but this time for the Recovery side array. Once the recovery
Celerra array appears in the list of Protection Arrays, this verifies that the Array Discovery
task has completed successfully. Click Next to proceed to the Replicated LUN discover task.
Step 5: If the Array Managers screen returns a LUN for the arrays that you have populated,
then the Discover Replicated LUN task has succeeded. If it has not returned, then it could be
that the LUN has not replicated within the array, or that the protected LUN does not have a
running VM. We will look at this in more detail but for the moment, the screen below shows
what is expected from a working configuration:
These steps are not necessary on the recovery side – array manager is only configured on
the protected side.
Part 7 – Create Protection Group & Recovery Plan
Step 1: In SRM, on the protected site, click on Protection Groups:
Step 2: Click on the Create Protection Groups link, then when the Create Protection Group
Window opens, enter the name of your Protection Group. In this case, I have called it PG-
Celerra-New:
Step 3: Select the Datastore Group that you wish to protect. This will be the datastore group
which contains the LUN that is being replicated, and thus the VMFS and Virtual Machine. The
list of virtual machines in the datastore group will appear below:
Step 4: Decide where to hold your virtual machine information on the recovery site:
Step 5: On the recovery side, create a Recovery Plan by clicking on the Create Recovery
Plan link:
Step 7: Choose a Protection Group to use with this Recovery Plan. We chose the Protection
Group that we created on the protected side a few steps ago:
Step 8: Set the timeouts for the virtual machines during this failover. These can be left at the
default for the most part unless you have some virtual machines that take a long time to start:
Step 9: Decide which network the VMs should come up on during a failover. You can also
have them come up in a ‘bubble’ network during a test failover.
Step 10: Do you want to suspend any Virtual Machines during failover?
Step 1: Now that you have a Protection Group and Recovery Plan in place, you can go ahead
and do a test failover. On the recovery side, select first your recovery plan and then the
Recovery Steps tab:
Step 3: If everything is working successfully, eventually the recovery steps will look like:
Step 4: Navigate back to your Inventory -> Hosts & Clusters, and you should observe that
the protected VM is now running on the recovery side. You should also observe that the VM’s
network is part of the ‘bubble’ network and that the VM’s datastore is a snapshot of the
original LUN on the protected side:
Notice that the replicated LUN is id 0, but that the snapshot LUN which has been promoted to
a physical LUN is id 128.
Back on the Celerra, you can verify this with the following command:
That completes our verification. Return to the SRM view, and complete the test.
Tips
Q. How can I tell when the Celerra Manager is ready to accept logins over the web interface?
A. When you see the CPU usage start to drop off after the first 10 to 15 minutes, then you can
login. Otherwise you will get a Service Temporarily Unavailable message when trying to
login to Celerra Manager. Notice that the CPU usage will become high again once you
successfully launch the Celerra Manager anyway.
Q. Is there anything I can do to speed up the start-up of the Celerra Simulator?
A. Yes, you can turn off the sendmail service, which seems to take an awfully long time to start.
Use the command chkconfig sendmail off.