GAD Configuration Using GAD Configuration Window v4


Implementation of Global-Active Device (GAD)

Using the Hitachi Command Suite (HCS) Global-Active Device Configuration Window

Hitachi Data Systems


GSS Americas
HDS GSS Internal Only

Revision History
Date of this revision: [insert current date]

Version Number | Version Date | Summary of Changes    | Changes Marked
V1             | 9/2014       | Original Version      | N
V2             | 8/2015       | Added Gx00 Series     | N
V3             | 1/2016       | Added ALUA Setup      | N
V4             | 5/2016       | Updated for HCS 8.4.0 | N

Table of Contents
1. Required Licenses
2. CCI Implementation Steps
3. HCS Implementation Steps
4. Storage Array Configuration
5. Tips
   a. HCS Specific
   b. General GAD
   c. Known Limitations
6. GAD Implementation Steps using the HCS GAD Configuration Window
   a. Set the Port Attributes
   b. Set up Replication/GAD Screen
   c. Configure DP Pools
   d. Set up Global-Active Device
   e. Configure Remote Paths
   f. Configure Quorum Disks
   g. Configure Pair Management Servers
   h. Configure Virtual Storage Machine
   i. Verify GAD Setup is Complete
7. Create GAD pairs from new volumes
8. Create GAD pairs from existing volumes (Using Command Suite)
9. Create GAD pairs from existing volumes (Using Replication Manager)
10. ALUA Overview
11. Configure ALUA on a Host Group using HCS
12. Configure ALUA on an LDEV using HCS
13. ALUA Configuration using CCI
14. MPIO Host Configuration
   a. MPIO
   b. HDLM

1. Required Licenses
- GAD
o Once GAD licenses are installed, Shared Memory needs to be configured.
Make sure the CE does this; otherwise creating the quorum disk and
creating GAD pairs will fail.
- UVM License
- HDLM (if using it)
- HCS License (if using it)
- HRpM (if using it)

2. CCI Implementation Steps
- Follow the install instructions in the “Command Control Interface Installation
and Configuration Guide”
- The release dates of the installed CCI and HCS versions should match. If they
do not, some HCS features may not work.
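- To confirm the installed CCI version before comparing release dates, raidqry
can be used. A minimal sketch; the output below is a placeholder and varies by
release and platform:

   C:\HORCM\etc> raidqry -h
   Model  : RAID-Manager/WindowsNT
   Ver&Rev: 01-xx-xx/xx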

3. HCS Implementation Steps
- Recommend installing the latest GA version of HCS since major improvements
continue to be added to HCS related to GAD features
o If configuring GAD for G1000 arrays HCS 8.0.1 or higher is required
o If configuring GAD for Gx00 Series arrays HCS 8.2.0 or higher is required
- The release dates of the installed CCI and HCS versions should match. If they
do not, some HCS features may not work.
- Single HCS Install with or without VMware SRM
o Follow instructions in the “HCS-Installation and Configuration Guide”
section “Hitachi Command Suite Server Installation”
o Discover Primary G-Series
o Discover Secondary G-Series
o Discover quorum array
o Discover primary CCI host
o Discover secondary CCI host
- Clustered HCS Install
o Follow instructions in the “HCS-Installation and Configuration Guide”
section “Hitachi Command Suite Server Installation in a Cluster
Environment”
o Discover Primary G-Series
o Discover Secondary G-Series
o Discover Quorum array
o Discover Primary CCI host
o Discover Secondary CCI host

4. Storage Array Configuration
- Connect the physical array paths for the following:
o G-Series at GAD Primary site to the G-Series at GAD Secondary site (two
or more paths required)
o G-Series at GAD Secondary site to the G-Series at GAD Primary site
(two or more paths required)
o G-Series at GAD Primary site to Quorum array (two or more paths
required)
o G-Series at GAD Secondary site to Quorum array (two or more paths
required)
o Note: Although Gx00 arrays support bidirectional ports that can serve
host, external, and replication traffic at the same time, it is
recommended to separate host traffic from replication traffic
- Connect the physical host paths for the following:
o Primary GAD Pair Management Server to G-Series at GAD Primary site
(one or more paths required)
o Secondary GAD Pair Management Server to G-Series at GAD Secondary
site (one or more paths required)
o Production Server(s) at GAD Primary site to the G-Series at GAD
Primary site (two or more paths required)
o Optional but recommended: Production Server(s) at GAD Primary site to
the G-Series at GAD Secondary site (two or more paths required)
o Production Server(s) at GAD Secondary site to G-Series at GAD
Secondary site (two or more paths required)
o Optional but recommended: Production Server(s) at GAD Secondary site
to G-Series at GAD Primary site (two or more paths required)

5. Tips
a. HCS Specific
- When making changes with HCS and HCS-Storage Navigator, keep in mind that
if something does not look correct (for example, it looks the same as it did
before the change), check system tasks to see if the DB refreshed after the
change. If not, force a manual refresh of the storage system and/or pair
management servers.
- If creating a new S-VOL, make sure the checkbox next to “Reserve as a
secondary volume for a global-active device pair.” under advanced options is
checked. This is only required when allocating the P-VOL and S-VOL separately,
not when using the Allocate GAD Volume option
- Hitachi Device Manager Storage Navigator should not be used to create or
delete GAD pairs (right-click on the storage array and select Remote
Replication). As with previous versions of Storage Navigator and remote
replication, it does not add or remove entries from the horcm files. This will
cause issues down the road because HRpM and the Allocate Volumes and Change
to Global-Active Device Volumes features in HCS modify the horcm files.
- With HCS 8.1.0, the volume type for the quorum disk can be basic or HDP/HDT
- With HCS 8.1.0, the volume type for the CCI command device disk can be basic
or HDP/HDT

b. General GAD
- Host groups on the secondary storage system cannot exist until after the host
group number is added to the VSM.
- Host groups on the secondary storage system do not need to exist before
creating GAD devices (it is part of the allocate volumes steps).
- On the secondary array, if a server will have both GAD and non-GAD
volumes, the server will need to be zoned to a set of ports that are in the GAD
VSM and a set of ports that are in the default VSM. The same HBA WWN
cannot exist in more than one host group on the same port.
o To simplify the customer environment, discuss with them the idea of
having dedicated GAD ports and non-GAD ports. This is not a
requirement but may make design easier for the customer as they grow.
- If the intended S-VOL already exists, the LDEV will need to be deleted and
recreated and assigned to the virtual storage machine. If the LDEV is not
assigned to the virtual storage machine a GAD pair cannot be created.
- Multipathing options
o ALUA is now supported in the G200, G400, G600, G800, and G1000
arrays. Make sure the microcode is at the latest GA. ALUA will support a
non-preferred path equivalent but it is not using Host Mode Option 78
o MPIO without setting ALUA at the LDEV level will still use all available
paths (both local and remote connections)
o HDLM still supports Host Mode Option 78, which is the non-preferred path
option

c. Known Limitations
- If the customer has MSCS guests in VMware HDLM cannot be used. Refer to
the HDLM Release Notes for more details.
- There are issues when GAD is configured for non-default VSMs on the primary
array in HCS.
o Before HCS 8.2.0-01 there is an issue with converting existing volumes to
GAD. CCI has to be used to accomplish this task
o HCS 8.2.0-01 requires the default VSM on the primary array to have GAD
configured in order for non-default VSMs to be recognized in HCS. The
pre-8.2.0-01 issue is resolved, though.
o HCS 8.2.3-03 or later allows configuration of GAD pairs using HCS when
the P-VOL is not in the default VSM. This has been tested in a lab and all
features are working.

6. GAD Implementation Steps using the HCS GAD
Configuration Window
a. Set the Port Attributes
- Note: This section can be skipped if the array is a G200 – G800 array
- Select the Resources Tab
- Expand the primary storage system
- Select Ports/Host Groups
o Select the Ports Tab
o Check the box next to all desired Initiator ports
o Select the Edit Ports button
 Next to Port Attribute Select Initiator
 Click Finish
 Click Apply
o Check the box next to all desired RCU Target ports
o Select the Edit Ports button
 Next to Port Attribute Select RCU Target
 Click Finish
 Click Apply
- Expand the secondary storage system
- Select Ports/Host Groups
o Select the Ports Tab
o Check the box next to all desired Initiator ports
o Select the Edit Ports button
 Next to Port Attribute Select Initiator
 Click Finish
 Click Apply
o Check the box next to all desired RCU Target ports
o Select the Edit Ports button
 Next to Port Attribute Select RCU Target
 Click Finish
 Click Apply
- Change the G-Series ports to external ports for quorum array connectivity
o Expand the primary storage system
o Select Ports/Host Groups
 Select the Ports tab
 Check the box next to all desired External ports
 Select the Edit Ports button
 Next to Port Attribute Select External
 Next to Port Security Select Enable
 Click Finish
 Click Apply
o Expand the secondary storage system
o Select Ports/Host Groups

 Select the Ports tab
 Check the box next to all desired External ports
 Select the Edit Ports button
 Next to Port Attribute Select External
 Next to Port Security Select Enable
 Click Finish
 Click Apply
- Refresh the quorum, primary, and secondary storage systems in HCS
o Select Administration Tab
o Check the boxes next to all three storage systems
o Click the Refresh Storage Systems button

b. Set up Replication/GAD Screen


- Select Actions from Drop Down
- Select “Set up Replication/GAD”
o If there are multiple Device Manager Environments for the storage arrays
 Note: this has not been tested; it is still recommended to have all
arrays that are part of GAD in one HCS environment
 Click Add Device Manager
 Enter the Name, Host ID, Port, User ID, and Password
 Click OK
o Click Select Storage Systems and Copy Type button
 For the Primary Site select local HDvM
 For the Primary Storage System select the primary array
 For the Secondary Site select local HDvM
 For the Secondary Storage System select the secondary array
 For the Copy Type select GAD
 Click OK

c. Configure DP Pools
- Create DP pools if needed; any existing pools should be listed

d. Set up Global-Active Device


- Click the Set Up Global-Active Device button

e. Configure Remote Paths


- Tip: if a remote path is already set up on the array, see if it is listed in the
Configure Remote Paths section by expanding Show Details at the bottom of the
section. It should list both paths if they are valid.
- Click Create Remote Paths button
o Confirm Storage Systems
 Copy Type should default to “global-active device”
 Nothing else should need to be selected or entered
 Click Next
o In Define Remote Path

 Primary -> Secondary section
 Enter a label for the pairs – I normally enter the
“<PrimaryArraySN>_<SecondaryArraySN>”
 Enter a path group ID (should be 00)
 Select the local port from drop down (Initiator)
 Select the remote port from the drop down (RCU Target)
 Click the Plus symbol to add additional rows
 Secondary -> Primary section
 Enter a label for the pairs – I normally enter the
“<SecondaryArraySN>_<PrimaryArraySN>”
 Enter a path group ID (Should be 00)
 Select the local port from drop down (Initiator)
 Select the remote port from the drop down (RCU Target)
 Click Next
o Confirm
 Click Yes check box
 Click Confirm
o Click Finish
o Incomplete should change to Complete next to Configure Remote Paths
once the task to create the path completes. This may take a few minutes.
If the task completes successfully but Complete is not listed, close the
Set up Global-Active Device window and try refreshing both storage
systems in HCS. You can then go back into Set up Global-Active Device
to see if the paths show up under Show Details.
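
- For reference, the same remote paths can be created with CCI instead of the
wizard. A hedged sketch, assuming horcm instance 0 on the primary array; the
serial number (522222), model code (M8 for Gx00, R8 for G1000), path group ID
(0), and ports are placeholders to confirm against the CCI reference:

   raidcom add rcu -cu_free 522222 M8 0 -mcu_port CL1-A -rcu_port CL2-A -IH0
   raidcom get rcu -cu_free 522222 M8 0 -IH0   (verify the path is listed)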

f. Configure Quorum Disks


- Select the radio button next to the Configure Method that is appropriate for the
situation you are setting up:
o Create a new external volume: use this if a disk that can be used for the
quorum does not exist on the array
o Configure an existing external volume: use this if a disk has already been
created and presented to the G-Series arrays that will be used for the
quorum disk
- Click button for Select Volume under Configure Quorum Disks
o Select the quorum array from Storage System drop down
o Select create volume
 Enter the size (13 GB is recommended)
 Select Volume Type can be HDP or Basic Volume
 Under Advance
 Select Drive Type
 Select Parity Group
 Enter a label
 Click Show Plan
 Click Submit
 Wait for the task to complete
- Click button for Select Volume under Configure Quorum Disks

o Select the quorum array from Storage system drop down (if not already
selected)
o Select the radio button next to the intended volume – Tip: if the quorum
disk does not appear, try refreshing all three arrays in HCS.
o Click OK
- Click Virtualize Volumes button for Primary Site
o Verify the external paths
o Verify External path priority
o Enter a host group name (should be descriptive that it is a quorum
connection)
o Select the desired parity group
o Select the desired CLPR
o Select Disable for Inflow Control
o Select Disable for Cache mode
o Enter a label to make sure it is easily identified
o Select an Initial LDEV ID (recommend a high CU LDEV)
o Click Show Plan
o Click Submit
- In the Set up Global-Active Device window, wait until the Virtualize Volumes
task for the primary site completes before moving on.
- Click Virtualize Volumes button for Secondary Site
o Verify the external paths
o Verify External path priority
o Enter a host group name (should be descriptive that it is a quorum
connection)
o Select the desired parity group
o Select the desired CLPR
o Select Disable for Inflow Control
o Select Disable for Cache mode
o Enter a label to make sure it is easily identified
o Select an Initial LDEV ID (recommend a high CU LDEV)
o Click Show Plan
o Click Submit
- In the Set up Global-Active Device window, wait until the Virtualize Volumes
task for the secondary site completes before moving on.
- Incomplete should change to Complete next to Configure Quorum Disks once
the task to create the virtual volumes for the secondary site completes. This
may take a few minutes. If the task completes successfully but Complete is not
listed, close the Set up Global-Active Device window and try refreshing both
storage systems in HCS. You can then go back into Set up Global-Active Device
to see if Incomplete changed to Complete.
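
- For reference, the quorum disk can also be set with CCI once the external
volume is virtualized on each array. A hedged sketch, assuming the virtualized
LDEV is 0x9999, the quorum disk ID is 0, and the remote array is serial 522222
with model code M8 (all placeholders; confirm the operands against the CCI
reference for your microcode level):

   raidcom modify ldev -ldev_id 0x9999 -quorum_enable 522222 M8 -quorum_id 0 -IH0
   raidcom get ldev -ldev_id 0x9999 -fx -IH0   (the output should reflect the quorum ID)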

g. Configure Pair Management Servers
- Select Allocate under Configure Pair Management Servers for the primary server
(if there is an existing command device this step may not be needed; to verify
whether the wizard detected one, expand “Show Details” at the bottom of the
Pair Configuration section)
o Enter the size of the command disk (recommend 50 MB)
o Volume type can be HDP/HDT or Basic
o Set command device settings (default is fine)
o Set the volume selection radio button to the desired option
o Command Device Settings
 Set Command Device Security to Disable
 Set User Authentication to Enable
 Set Device Group Definition to Disable
o Click show plan
o Wait for the allocate command device task to complete
- Select Allocate under Configure Pair Management Servers for the secondary
server (if there is an existing command device this step may not be needed; to
verify whether the wizard detected one, expand “Show Details” at the bottom of
the Configure Pair Management Servers section). If both pair management
servers do not appear in the list, follow the steps below for either or both
pair management servers.
o Enter the size of the command disk (recommend 50 MB)
o Set command device settings (default is fine)
o Command Device Settings
 Set Command Device Security to Disable
 Set User Authentication to Enable
 Set Device Group Definition to Disable
o Click show plan
o Wait for the allocate command device task to finish
- You will need to log into each of the pair management servers and configure the
command device. The steps required vary by operating system. The minimum
requirements are to create an empty horcm file on both servers, configure it with
the command device, start horcm, force a login to the array to make sure the
server can communicate with the array, and then make sure the agent is running
under the same account used to log into the array. (A sketch of the full
sequence on Windows appears at the end of this section.)
o To find the command device disk information run the following command:
raidscan -x findcmddev 0,99
o The horcmX.conf file only needs two lines in it for HCS to recognize the
pair management server. It does not need to be the one that will be used
to create GAD pairs since HCS typically insists on creating its own the first
time. Do not use the same horcm instance numbers on both pair
management servers.
HORCM_CMD
\\.\PhysicalDrive<X>
where <X> is the drive number (on Windows)

o Next log into the horcm instance for HCS to recognize the pair
management server
raidcom -login <user> <password> -I<horcm instance #>
o Lastly make sure the Device Manager agent is set with the same user ID
as the Windows account you are logged in as. You can check which
account you need to use by:
 C:\horcm\usr\var
 The name of the file stored there will contain the
<servername_WindowsAccount_arraySN>
 Edit the Service Account
 Right click on the HBsA Service and select properties
 Go to the Log On tab
 Select this account and make sure the account name
matches the WindowsAccount in the horcm\usr\var file and
enter the appropriate password
 Restart the service
- Once both servers have a horcm file configured and can log into the array, click
Refresh under Configure Pair Management Servers
- Incomplete should change to Complete next to Configure Pair Management
Servers. If all steps above were followed but Complete is not listed, close the
Set up Global-Active Device window and try refreshing both storage systems in
HCS. You can then go back into Set up Global-Active Device to see if
Incomplete changed to Complete.
o May need to restart HDvM Agent
 \Hitachi\HDVM\HBaseAgent\bin\hbsasrv stop
 \Hitachi\HDVM\HBaseAgent\bin\hbsasrv start
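
- A minimal sketch of the full pair management server sequence on Windows,
assuming horcm instance 10 and that the command device shows up as
PhysicalDrive2 (both are placeholder assumptions):

   C:\> raidscan -x findcmddev 0,99               (identify the command device drive number)
   C:\> notepad C:\Windows\horcm10.conf           (add the two HORCM_CMD lines shown above)
   C:\> horcmstart 10                             (start the horcm instance)
   C:\> raidcom -login <user> <password> -I10     (log in so HCS can detect the server)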

h. Configure Virtual Storage Machine


- Select Edit under Configure Virtual Storage Machine
- The name can be changed or left at the default
- The Storage System already listed should be the primary storage system
- Click the Add Storage Systems button
- The secondary storage system should be listed by default, click the box next to
the secondary storage system
- Click Ok
- Click submit
- Incomplete should change to Complete next to Configure Virtual Storage
Machine. If all steps above were followed but Complete is not listed, close the
Set up Global-Active Device window and try refreshing both storage systems in
HCS. You can then go back into Set up Global-Active Device to see if
Incomplete changed to Complete.
- Additional Changes to Virtual Storage Machine (still in Set up Global-Active
Device window)
- Add the Host Group Numbers to the VSM
o Click the Edit Virtual Storage Machine button

o Select the Host Group Numbers tab
o Click Add Host Group Numbers button
o Click the Specify Host Groups radio button
o Click the Storage System drop down box
o Select the secondary storage system
o You can use the filter to show only ports you intend to use on the
secondary array for host connectivity.
o Check the host group number(s) that the host will be assigned to (the host
group cannot already exist)
o Click Ok
- Add LDEV IDs to the VSM
o Select the LDEV IDs tab
o Select the Add LDEV ID button
o Make sure the Specify the number of LDEV IDs radio button is selected
o Select the secondary storage system from the drop down
o Considerations for the next step: this pre-allocates LDEV IDs to the
VSM. The suggestion is to pre-allocate a specific CU number and all DEV
addresses within that CU to the VSM, to avoid having to repeat this
step each time a GAD LDEV is created. The CU:DEV addresses should not
be in use on the primary system.
o Enter the number of LDEVs (256 will cover all DEVs in a CU)
o Select a CU from the drop down (make sure it is not one that will be used
for non-GAD LDEVs and the addresses are not already assigned on the
primary storage system)
o Select the first DEV address from the drop down list (leave at 0 if planning
to assign all 00-FF to GAD for the CU)
o Click OK
o Click Submit
o Wait for task to complete
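
- For reference, the same reservation can be done with CCI against the secondary
array. A hedged sketch, assuming horcm instance 1, a VSM resource group named
HAGroup1, and physical LDEV 0x4444 (all placeholders; confirm the order of
operations against the Global-Active Device user guide):

   raidcom unmap resource -ldev_id 0x4444 -virtual_ldev_id 0x4444 -IH1   (remove the default virtual ID)
   raidcom add resource -resource_name HAGroup1 -ldev_id 0x4444 -IH1     (move the LDEV into the VSM)
   raidcom map resource -ldev_id 0x4444 -virtual_ldev_id reserve -IH1    (set the GAD reserve attribute)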

i. Verify GAD Setup is Complete


- Select Actions
- Select Set up Replication/GAD
- Verify there is a green checkmark and it says Complete for the following sections
o Configure Remote Paths
o Configure Quorum Disks
o Configure Pair Management Servers
o Configure Virtual Storage Machine
- Tip: If any of the sections do not say Complete, make sure the refresh
completed; it may not be bad to refresh again. Also verify HCS is displaying the
correct information for whatever is incomplete. If all else fails, you may need to
repeat the step again.
- If any section does not say Complete, HCS and HRpM will not allow you to
create GAD pairs or allocate new volumes that are intended to be GAD pairs

7. Create GAD pairs from new volumes
o If you are using ALUA, make sure you understand how it works by reading
the ALUA Overview section
o It would not be a bad idea to refresh all storage systems and pair
management hosts before continuing
o In HCS click allocate volumes in the resources tab
o Select the primary side server from the drop down
 Tip: The Allocate Volumes wizard will only set up access to the new
LDEVs on one side of the environment. Suggest selecting the
primary site server whenever using the allocate volumes wizard,
then manually adding the LDEV(s) to the secondary site server by
editing the secondary site server host group
o Make sure under Allocation Type you check the Global-Active Device radio
button – Tip: a number of warnings and information messages may appear;
they can be ignored at this time
o Enter the number of volume(s)
o Enter the size of the volume(s)
o In the Primary Site tab
 Verify the Storage System serial number is the primary array
 Select the volume type
 Select the Volume Location
 Select Pool if Dynamic Provisioning or Dynamic Tiering
 Select Internal or External if Basic Volume
 Expand Advanced Options
 Select the desired Volume Selection Option
 Select the desired Volume Criteria
 Specify the label if desired
 Creating Volume Settings should be changed to manual
o Select the CU that was assigned to the VSM (so the
P-VOL and S-VOL match)
o Select the DEV that was assigned to the VSM (so the
P-VOL and S-VOL match)
 Select the desired CLPR for DP Volume
 Select Enable or Disable for ALUA
o If customer is using HDLM select Disable for ALUA
o If customer is using native multipath software
 And wants the remote path unoptimized then
select enable --- Note: if this is the first LDEV
allocated to the server see Configure Host
Group for ALUA for instructions on how to
configure host groups for ALUA
 And wants the remote path optimized then
select disable
 Expand LUN Path Options

 Select the correct No. of LUN Paths per Volume
 Verify the LUN Paths are correct in the table
 Expand Host Groups and LUN Settings
 Enter the desired Name for the host group (if it does not
already exist)
 Expand Host Mode Settings
o Set the appropriate Host Mode for the host
o Set the appropriate Host Mode Options for the host
(HMO 78 would never be set for the primary path)
 Specify the desired LU number
 Expand the Pair Management Server Settings
 Select the Pair Management Server for the primary site
 Enter the instance ID
o If you are adding to an existing horcm instance, make
sure you specify that horcm instance by selecting the
radio button next to Existing otherwise select New
o Enter the UDP port if creating new horcm instance
 Expand Pair Settings
 Select the desired quorum disk ID from drop down
 Copy Group
o Enter the desired name if not default
 Consistency Group
o If the customer would like consistency groups setup
for their GAD pairs check the box next to CTGID
o Select either a new consistency group number or use
an existing one
 Pair Name
o Select User Defined if you do not want to use the
default – Recommend the default
o In the Secondary Site tab
 Verify the Storage System serial number is the secondary array
 Select the volume type
 Select the Volume Location
 Select Pool if Dynamic Provisioning or Dynamic Tiering
 Select Internal or External if Basic Volume
 Expand Advanced Options
 Select the desired Volume Selection Option
 Select the desired Volume Criteria
 Specify the label if desired
 Creating Volume Settings should be changed to manual
o Select the CU that was assigned to the VSM (so the
P-VOL and S-VOL match)
o Select the DEV that was assigned to the VSM (so the
P-VOL and S-VOL match)

 Select the desired CLPR for DP Volume
 Expand LUN Path Options
 Select the correct No. of LUN Paths per Volume
 Verify the LUN Paths are correct in the table
 Expand Host Groups and LUN Settings
 Enter the desired Name for the host group (if it does not
already exist)
 Expand Host Mode Settings
o Set the appropriate Host Mode for the host
o Set the appropriate Host Mode Options for the host
 If customer is using HDLM and wants the
remote path to be non-preferred set HMO 78
 Specify the desired LU number
 Expand the Pair Management Server Settings
 Select the Pair Management Server for the secondary site
 Enter the instance ID
o If you are adding to an existing horcm instance, make
sure you specify that horcm instance by selecting the
radio button next to Existing otherwise select New
o Enter the UDP port if creating new horcm instance
 Expand Pair Settings
o Verify the settings are still correct (they might have
changed after setting the pair management server
information on the secondary site)
o Click Show plan
o Click Submit
o Wait for pairs to reach a paired status before using the secondary volume
(see the pairdisplay sketch at the end of this section for one way to check)
 If the task to create the pair fails it could be due to the following
scenarios
 The storage arrays and pair management servers were not
refreshed in HCS
 The LDEV ID was not allocated to the VSM
 The ports for the secondary server were not added to the
VSM
o If there is a clustered configuration, the LDEV(s) created above need
to be added to the node at the secondary site:
 Tips for creating Host Groups
 Suggest creating the local host group first (whichever array
is local to the server node)
 Make sure the correct Resource Group Name (ID) is set to
the VSM for GAD. This could be on the secondary array
only or both the primary and secondary array depending
upon the GAD setup
 HMO 78 should be set if the production server is not in the
same building as the secondary array. If they are in the
same building, then response time should be the same for
either array.
 If this is the first LDEV allocated to the server see Configure
Host Group for ALUA for instructions on how to configure
host groups for ALUA
 Create the host group for additional cluster nodes
 Expand the desired array in HCS
 Click Ports/Host Groups
o Click Host Group / iSCSI Targets tab
o Click Create Host Groups
 Enter Host Group Name
 Make sure you select the correct Resource
Group Name (select the VSM that was created
for GAD)
 Select the desired Host Mode
 Select any applicable Host Mode Options
 May need to set HMO 78 if this is the
remote path
 If ALUA is going to be used you cannot
set that in this screen. First create the
host group and then see Configure Host
Group for ALUA for instructions on how
to configure host groups for ALUA
 Select the appropriate HBA under Available
Hosts
 Select the appropriate Ports under Available
Ports
 Click Add
 Once all selections are made you can do one
of the following:
 Click Next and select the LDEVs that
you would like to add to the host group.
Do not do this if you are going to set up
ALUA
 If you are setting up ALUA for this host
group click Finish then see Configure
Host Group for ALUA for instructions on
how to configure host groups for ALUA
o Once ALUA is configured for the
host group then add the LDEVs
to the host group
o Repeat the steps until all needed host groups are
created
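
o For reference, pair creation and status can also be handled with CCI from
the pair management server. A hedged sketch, assuming copy group oraHA,
quorum disk ID 0, and horcm instance 0 (all placeholders):

   paircreate -g oraHA -f never -vl -jq 0 -IH0   (only if creating the pair with CCI instead of HCS)
   pairdisplay -g oraHA -fxce -IH0               (both volumes should report PAIR before using the S-VOL)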

8. Create GAD pairs from existing volumes (Using
Command Suite)
- If you are using ALUA, make sure you understand how it works by reading the
ALUA Overview section
- Open HCS
o Navigate to the primary array > Allocated Volumes
o Select the desired volume and then click “Change to Global-Active Device
Volumes”
Volume”
o On the tab for the primary site, select the command device server and
specify an HORCM device, existing or new.
o On the secondary site tab, select the host
o Change the number of paths to two (or more)
o Verify that the paths are correct
o Select Automatic for S-VOL selection.
o Select the desired quorum disk ID from drop down
o Copy Group
 Enter the desired name if not default
o Consistency Group (only an option for G1000)
 Select either a new consistency group ID or use an existing one
o Pair Name
 Select User Defined if you do not want to use the default –
Recommend the default
o Select Show Plan and continue
- Allocate LDEV(s) to host(s)
o Create host group(s) for servers (if not already done)
 Suggest creating the local host group first (whichever array is
closest to the server or the preferred path)
 If the host group is on the secondary array, make sure the
Resource Group Name (ID) is set to the previously created
VSM when creating the host group
 Create the secondary path host group
 If the host group is on the secondary array, make sure the
Resource Group Name (ID) is set to the previously created
VSM when creating the host group
o May need to set HMO 78 if this is the remote path
o If ALUA is going to be used see Configure Host
Group for ALUA for instructions on how to configure
host groups for ALUA
o Add storage to host groups just like you normally would in HCS

9. Create GAD pairs from existing volumes (Using
Replication Manager)
- If you are using ALUA, make sure you understand how it works by reading the
ALUA Overview section
- The S-VOL must be created with the following conditions:
o Must be assigned to the primary Virtual Storage Machine
o Must be the same size as the primary volume
o Make sure during the S-VOL creation the checkbox “Reserve as a
secondary volume for a global-active device pair.” under advanced
options was checked. If not, HRpM will not find it as a valid volume.
- It would not be a bad idea to refresh all hosts and storage arrays in Command
Suite
- It is also suggested to make sure the horcm instances are running on the pair
management servers and that they are logged in to the local array(s)
- Open HCS
o Select Actions from the menu bar
o Select Manage Replication
o In Replication Manager
 Select Resources
 Select Storage Systems
 Select the primary array
 Select Open
 Select the Unpaired tab
 Check the box next to the volume(s) that will be the P-VOL
 Click the Pair Management Button
 Click the Add Group Button
o Enter the Pair Group Name
o Select GAD as the Copy Type from the drop down
o Select the OK button
 Make sure the volumes you checked are in the Pair List /
Pairs window
 In the Candidate List window the defaults should not need to
be changed
 Click the Apply button
 In the Results tab
o Select the intended S-VOL from the list
o Click Add
 Click the Next button
 If the intent is to use a new Copy Group click the Create
Group button. If the intent is to use an existing one, skip the
subtasks listed below
o Enter the group name

o Select the Server Name from the drop down in the
Primary Server Pair Management Server section
o Select an existing horcm file or a new horcm file
under Instance
o Enter a UDP port if a new horcm file was selected
o Select the Server Name from the drop down in the
Secondary Server Pair Management Server section
o Select an existing horcm file or a new horcm file
under Instance
o Enter a UDP port if a new horcm file was selected
o The remaining options could be left at default unless
Path Group ID needs to be changed due to multiple
replication paths
o Click the Add button
o Click the OK button
 Make sure the correct copy group is selected by either
selecting it from the drop down menu or if a new one was
created it will appear next to Copy Group
 Click the Apply button
 Click the Next button
 Click the Next button
 Check the box next to Yes.
 Click the Confirm button
 Click Finish
 Monitor the task by expanding Tasks in Explorer and clicking
Tasks. Tip: Remember the task list does not automatically
refresh.
 Once the task completes successfully verify the host(s) see
the appropriate number of paths to the LDEV using the
multipathing software.

10. ALUA Overview
 ALUA in concept works similarly to the way HMO 78 works
o A host group is configured as Active/Non-Optimized by editing its
Asymmetric Access States (similar to setting HMO 78 for a host group)
o Whenever LDEVs are created, ALUA must be enabled for each LDEV
(this was not required for HMO 78)
o The host group Asymmetric Access State setting determines whether an
LDEV is Optimized or Non-Optimized
o Enabling ALUA when LDEVs are created allows the LDEV to recognize
the host group setting for Asymmetric Access State
 There is a two-step process to set up ALUA
o The host group that is the non-preferred path needs to be set to
Active/Non-Optimized; the host group with the preferred path needs to be
set to Active/Optimized (the default)
 This should be done before the host sees the new host groups; a
reboot may be required if this is changed after the server sees
LDEVs from this host group
 This can be done in CCI or Storage Navigator
o When LDEVs are created, ALUA must be enabled for each LDEV
 When creating new LDEVs (Allocate Volumes) there is an option to
select Enable for ALUA
 If ALUA was not enabled for the LDEV, it can be changed using
CCI or Storage Navigator, but the server may need to be rebooted
to recognize the change
 If ALUA is properly configured the mpclaim output should look similar to the
example below

C:\Users\Administrator>mpclaim -s -d 1

MPIO Disk1: 02 Paths, Round Robin with Subset, Implicit Only
    Controlling DSM: Microsoft DSM
    SN: 606E807DAC80030DAC80002
    Supported Load Balance Policies: FOO RRWS LQD WP LB

    Path ID          State              SCSI Address     Weight
    ---------------------------------------------------------------
    0000000077030000 Active/Optimized   003|000|000|001  0
        TPG_State: Active/Optimized, TPG_Id: 1, : 1

    0000000077010000 Active/Unoptimized 001|000|000|000  0
        TPG_State: Active/Unoptimized, TPG_Id: 0, : 0
 Take note that one path is Optimized while the other is Unoptimized
o The Unoptimized path is similar to the non-preferred path and will remain
Unoptimized until there is a failure on the Optimized path
o The above is just an example; in a GAD configuration there should be four
or more paths listed, with two being Optimized and two being Unoptimized

11. Configure ALUA on a Host Group using HCS
- Note: this should be done before the host sees the storage; otherwise the server
may need to be rebooted to pick up the ALUA setting change
- Expand the storage array that contains the remote path host group
- Click Ports/Host Groups
- Make sure the Host Groups / iSCSI Targets tab is selected
- Find the port the host is using for the remote path and click on the Port ID
- Check the box next to the host group name for the remote path
o Select More Actions drop down
o Select Edit Asymmetric Access States
o Select the radio button next to Active/Non-Optimized
o Click Finish
- Repeat this step for all remote path host groups for that particular server
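
- The same change can also be scripted with CCI (the full procedure is in section
13, ALUA Configuration using CCI). A minimal sketch, assuming the remote path
host group is host group 1 on port CL1-A and instance 0 is the horcm instance
for that array:

   raidcom modify lun -port CL1-A-1 -lun_id all -asymmetric_access_state non_optimized -I0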

12. Configure ALUA on an LDEV using HCS
 If an LDEV was created without enabling ALUA, the ALUA mode must be set
to Enable – Note: the Asymmetric Access State must also be set for the host
group
 This only needs to be done on the primary GAD volume; once the pairresync
command is issued, this setting will be copied to the secondary GAD volume
 The ALUA mode cannot be changed unless the GAD pair is split (the pair
does not need to be deleted)
 Change ALUA setting for an LDEV
o Split the GAD pair
o Expand the primary storage array in HCS
o Click on Volumes
o Click on System GUI
o Select the checkbox next to the LDEV ID
o Select the Edit LDEVs button
o Click the Enable radio button under ALUA Mode
o Click Finish
o Reboot the server(s) that see the LDEV(s)
o Resync the GAD pair
 At this point the primary and secondary volumes will be configured for ALUA
 Make sure the host group has the correct settings to support ALUA
 If ALUA is properly configured the mpclaim output should look similar to the
example below
C:\Users\Administrator>mpclaim -s -d 1

MPIO Disk1: 02 Paths, Round Robin with Subset, Implicit Only
    Controlling DSM: Microsoft DSM
    SN: 606E807DAC80030DAC80002
    Supported Load Balance Policies: FOO RRWS LQD WP LB

    Path ID          State              SCSI Address     Weight
    ---------------------------------------------------------------
    0000000077030000 Active/Optimized   003|000|000|001  0
        TPG_State: Active/Optimized, TPG_Id: 1, : 1

    0000000077010000 Active/Unoptimized 001|000|000|000  0
        TPG_State: Active/Unoptimized, TPG_Id: 0, : 0

13. ALUA Configuration using CCI
- There are two options for setting the ALUA mode:
o If the GAD pair has not been created set the ALUA mode on the intended
P-VOL before creating the GAD pair. Once the GAD pair is created the
ALUA mode of the S-VOL will automatically be enabled.
o If the GAD pair has been created ALUA can still be set but it will require
the GAD pairs to be split and will require a server reboot.
- Set ALUA when GAD pair does not exist:
o Note: Only CCI can be used to set the ALUA mode
o Set ALUA mode on the P-VOL
 raidcom modify ldev -ldev_id <P-VOL_ID> -alua enable
-I<horcm_instance_for_Primary_Array>
 Verify ALUA mode for the LDEV:
 Check for “ALUA=Enable” in the output of the following command
 raidcom get ldev -ldev_id <P-VOL_ID> -fx
-I<horcm_instance_for_Primary_Array>
o Create the GAD pair
o Set the asymmetric access state
 raidcom modify lun -port <non_preferred_path_port> -lun_id all
-asymmetric_access_state non_optimized
-I<horcm_instance_for_Primary_Array>
 Verify ALUA mode for the path:
 The “AL” column should display “E” if ALUA is set
 raidcom get lun -port <non_preferred_path_port> -key opt_page1
-fx -I<horcm_instance_for_Primary_Array>
- Set ALUA when GAD pair exists:
o Note: Only CCI can be used to set the ALUA mode
o Suspend all GAD pairs for the server (the pairs do not need to be deleted,
just suspended)
 pairsplit -g <dev_grp> -r -IH<horcm_instance>
o Set ALUA mode on the P-VOL
 raidcom modify ldev -ldev_id <P-VOL_ID> -alua enable
-I<horcm_instance_for_Primary_Array>
 Verify ALUA mode for the LDEV:
 Check for “ALUA=Enable” in the output of the following command
 raidcom get ldev -ldev_id <P-VOL_ID> -fx
-I<horcm_instance_for_Primary_Array>
o Reboot the server
o Resync the GAD pair
 pairresync -g <dev_grp> -IH<horcm_instance>
o Set the asymmetric access state
 raidcom modify lun -port <non_preferred_path_port> -lun_id all
-asymmetric_access_state non_optimized
-I<horcm_instance_for_Primary_Array>

 Verify ALUA mode for the path:
 The “AL” column should display “E” if ALUA is set
 raidcom get lun -port <non_preferred_path_port> -key
opt_page1 -fx -I<horcm_instance_for_Primary_Array>
o If ALUA is properly configured the mpclaim output should look similar to
the example below
C:\Users\Administrator>mpclaim -s -d 1

MPIO Disk1: 02 Paths, Round Robin with Subset, Implicit Only
    Controlling DSM: Microsoft DSM
    SN: 606E807DAC80030DAC80002
    Supported Load Balance Policies: FOO RRWS LQD WP LB

    Path ID          State              SCSI Address     Weight
    ---------------------------------------------------------------
    0000000077030000 Active/Optimized   003|000|000|001  0
        TPG_State: Active/Optimized, TPG_Id: 1, : 1

    0000000077010000 Active/Unoptimized 001|000|000|000  0
        TPG_State: Active/Unoptimized, TPG_Id: 0, : 0

14. MPIO Host Configuration
a. MPIO
o HMO 78 cannot be set on the alternate path(s) as this is not supported
o ALUA is supported, see ALUA Overview for configuration options
o Install MPIO following the instructions from the OS manual
o Follow instructions from OS manuals to configure MPIO with multipathing
 For Windows 2008 R2, all that should be needed is to go into the
MPIO control panel, click Discover Multi-Paths, select the
device hardware ID under Others, and click Add. If the disk is
already configured for MPIO, the additional paths from the
secondary array will be added automatically
 Use the following commands to verify the paths
 List the disks: mpclaim -s -d
 Get path detail on a specific disk: mpclaim -s -d <disk
number>
o You should see four or more paths (2 paths to LDEV
on primary storage system and 2 paths to LDEV on
secondary storage system)
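
o A short sketch of these checks, plus an optional per-disk load balance
policy change; the disk number (1) and policy (4 = Least Queue Depth) are
placeholder assumptions:

   C:\> mpclaim -s -d        (list all MPIO disks)
   C:\> mpclaim -s -d 1      (path and TPG detail for MPIO Disk1)
   C:\> mpclaim -l -d 1 4    (optionally set the load balance policy for Disk1)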

b. HDLM
o Install HDLM
 Follow installation instructions for the appropriate operating system
in the HDLM User Guide
o If the non-preferred path is set for any of the host groups
 All paths may show online initially (non-preferred paths should be
non-owner)
 If that is the case run the following command:
o dlnkmgr refresh -gad
o For VMware the command must be run from the Remote Management
Client
 Run “dlnkmgr view -path -item dn lu cp phys -stname” to display
path information
