HyperMetro Configuration Guide for Huawei SAN Storage Using OS Native Multipathing Software
Issue 13
Date 2020-06-30
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees
or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: https://e.huawei.com
Overview
This document describes how to configure storage systems and multipathing
software for the SAN HyperMetro solution when the host uses OS native
multipathing software.
Intended Audience
This document is intended for:
● Huawei storage technical support engineers
● Technical engineers of Huawei's partners
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Contents
2 Windows
2.1 Precautions
2.2 Configuring Storage Arrays
2.2.1 Non-NPIV Mode
2.2.2 NPIV Mode
2.2.3 iSCSI Networking
2.3 Configuring the Host
2.3.1 Version and Patch Requirements
2.3.2 Configuring Multipathing Software
2.3.3 Configuring HBAs
2.3.4 Configuring Registries
3 XenServer
3.1 Precautions
3.2 Configuring Storage Arrays
3.3 Configuring the Host
3.3.1 Configuring Multipathing Software
3.3.2 Configuring HBAs
3.4 Verifying the Configurations
3.4.1 Checking the Multipathing Software Status
3.4.2 Verifying Path Information
4 HP-UX
4.1 Precautions
4.2 Configuring Storage Arrays
5 Red Hat
5.1 Precautions
5.2 Configuring Storage Arrays
5.3 Configuring the Host
5.3.1 Configuring Multipathing Software
5.3.2 Configuring HBAs
5.4 Verifying the Configurations
6 Oracle VM
6.1 Precautions
6.2 Configuring Storage Arrays
6.3 Configuring the Host
6.3.1 Configuring Multipathing Software
6.3.2 Configuring HBAs
7 SLES
7.1 Precautions
7.2 Configuring the Storage Arrays
7.3 Configuring the Host
7.3.1 Version and Patch Requirements
7.3.2 Configuring Multipathing Software
7.4 Verifying the Configurations
8 RHV
8.1 Precautions
8.2 Configuring Storage Arrays
8.3 Configuring the Host
8.3.1 Configuring Multipathing Software
8.4 Verifying the Configurations
8.4.1 Checking the Multipathing Software Status
8.4.2 Verifying Path Information
9 Rocky
9.1 Precautions
9.2 Configuring Storage Arrays
9.3 Configuring the Host
9.3.1 Version and Patch Requirements
9.3.2 Configuring Multipathing Software
10 NeoKylin
10.1 Precautions
10.2 Configuring Storage Arrays
11 Solaris
11.1 Precautions
11.2 Configuring Storage Arrays
11.3 Configuring the Host
11.3.1 Configuring Multipathing Software
11.4 Verifying the Configurations
12 Asianux
12.1 Precautions
12.2 Configuring Storage Arrays
12.3 Configuring the Host
12.3.1 Configuring Multipathing Software
12.3.2 Configuring HBAs
12.4 Verifying the Configurations
12.4.1 Checking the Multipathing Software Status
12.4.2 Verifying Path Information
13 Ubuntu
13.1 Precautions
13.2 Configuring Storage Arrays
13.3 Configuring the Host
13.3.1 Configuring Multipathing Software
14 VMware
14.1 Precautions
14.2 Configuring Storage Arrays
14.3 Configuring the Host
14.3.1 Configuring Multipathing Software
14.3.2 Setting Timeout Parameters
14.3.3 Configuring a VMware Cluster
14.4 Verifying the Configurations
14.4.1 Verifying the Multipathing Software Status and Path Information
15 AIX
15.1 Precautions
15.2 Configuring Storage Arrays
15.3 Configuring the Host
15.3.1 Configuring Multipathing Software
16 FusionCompute
16.1 Precautions
17 FAQs
17.1 How Do I Determine Whether the HBA Parameters Configured for the Multipathing Software Have Taken Effect?
17.2 Why Does the Multipathing Software Automatically Return to Its Initial Configuration Every Time RHV-H Restarts?
17.3 Why Does SLES Enter Emergency Mode After the Multipathing Software Is Configured?
17.4 Why Does the Old Path Information Remain After an HBA Is Replaced on a XenServer Host?
17.5 What Can I Do If I/Os on a VMware 6.0 Host Are Interrupted After the Replication Link Between the Storage Systems Goes Down?
17.6 What Can I Do If Links Are Not Aggregated in Linux Due to the Multipathing Software Anomaly?
Load balancing mode:
● Enable ALUA on the host and set the path selection policy to round-robin.
● Configure a switchover mode that supports ALUA for both HyperMetro storage arrays' initiators that are added to the host.
● Set the path type for both storage arrays' initiators to the optimal path.
Local preferred mode:
● Enable ALUA on the host. It is advised to set the path selection policy to round-robin.
● Configure a switchover mode that supports ALUA for both HyperMetro storage arrays' initiators that are added to the host.
● Set the path type for the local storage array's initiators to the optimal path and that for the remote storage array's initiators to the non-optimal path.
Other modes:
● Set the initiator switchover mode for the HyperMetro storage arrays by following the instructions in the follow-up chapters of this guide. The path type does not require manual configuration.
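On a Linux host, for example, these host-side requirements typically map to a device stanza in /etc/multipath.conf along the following lines. This is a sketch only: the vendor/product strings and the exact attribute set are assumptions for illustration, so use the values given in the OS-specific chapters of this guide.

```
devices {
    device {
        vendor "HUAWEI"
        product "XSG1"
        path_grouping_policy group_by_prio   # group paths by ALUA priority
        prio alua                            # derive path priority from ALUA state
        path_selector "round-robin 0"        # round-robin within the active group
        failback immediate                   # return to the optimal group when it recovers
    }
}
```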
When HyperMetro works in local preferred mode, the host multipathing software
defines the paths to the owning controller on the local storage array as
active-optimized (AO) paths. This ensures that the host delivers I/Os only to
the owning controller on the local storage array, reducing cross-site link
consumption. If all AO paths fail, the host delivers I/Os to the
active-non-optimized (AN) paths on the non-owning controller. If the owning
controller of the local storage array fails, the system activates the other
controller to maintain the local preferred mode.
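The failover behavior described above can be sketched in a few lines. This is an illustrative model, not Huawei code; the path records and the "state"/"healthy" field names are assumptions made for the example.

```python
# Conceptual sketch of path selection in local preferred mode:
# prefer active-optimized (AO) paths, and fall back to
# active-non-optimized (AN) paths only when every AO path has failed.
def select_paths(paths):
    ao = [p for p in paths if p["state"] == "AO" and p["healthy"]]
    if ao:
        return ao
    return [p for p in paths if p["state"] == "AN" and p["healthy"]]

paths = [
    {"name": "local-owning-ctrl",     "state": "AO", "healthy": False},  # AO path down
    {"name": "local-non-owning-ctrl", "state": "AN", "healthy": True},
]
print([p["name"] for p in select_paths(paths)])  # -> ['local-non-owning-ctrl']
```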
Special mode type: Determines which special mode is used for path switchover.
All three special modes support ALUA. Detailed requirements are as follows:
● Mode 0:
– The host and storage system must be
connected using a Fibre Channel
network.
– The OS of the host that connects to
the storage system must be Red Hat
7.X, Windows Server 2012 (using
QLogic HBAs), or Windows Server
2008 (using QLogic HBAs).
● Mode 1:
– The OS of the host that connects to
the storage system must be AIX or
VMware.
– HyperMetro works in load balancing
mode.
● Mode 2:
– The OS of the host that connects to
the storage system must be AIX or
VMware.
– HyperMetro works in local preferred
mode.
You must configure initiators according to the requirements of the specific OS that
is installed on the host. All of the initiators added to a single host must be
configured with the same switchover mode. Otherwise, host services may be
interrupted.
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
Step 2 On the Host tab page, select a host you want to modify. Then select the desired
initiator (on the host) and click Modify.
Step 3 In the Modify Initiator dialog box, modify the initiator information based on the
requirements of your operating system.
Step 4 Repeat the preceding operations to modify other initiators on the host.
----End
1.4 Compatibility
When employing HyperMetro with OS native multipathing software, consider the
compatibility between components (such as storage systems, operating systems,
HBAs, and switches) and upper-layer software.
Go to the following website to check the HyperMetro compatibility:
http://support-open.huawei.com/ready/index.jsf
NOTICE
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
2 Windows
2.1 Precautions
2.2 Configuring Storage Arrays
2.3 Configuring the Host
2.1 Precautions
● If all optimal paths have failed, only one non-optimal path will deliver I/Os.
● Currently, HyperMetro LUNs do not support offloaded data transfer (ODX).
● If Hyper-V is enabled, you must shut down all VMs before restarting a host.
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
Emulex HBAs
Table 2-1 lists the storage array configuration when Emulex HBAs are used in
non-N_Port_ID virtualization (non-NPIV) mode.
Table 2-1 Configuration on storage arrays when Emulex HBAs are used in non-NPIV mode
Table columns: Server OS | HyperMetro Mode | Storage Array OS Setting | Third-Party Multipathing Software | Switchover Mode | Special Mode Type | Path Type
For details about the Windows versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
QLogic HBAs
Table 2-2 lists the storage array configuration when QLogic HBAs are used in non-
NPIV mode.
Table 2-2 Configuration on storage arrays when QLogic HBAs are used in non-NPIV mode
For details about the Windows versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
NOTICE
In NPIV mode, VMs directly communicate with storage arrays. The multipathing
software is configured on the VMs. Therefore, the OS versions of the VMs must be
considered.
Currently, only some versions of Windows support Hyper-V (NPIV). For details, see
Microsoft's official explanation:
https://technet.microsoft.com/windows-server-docs/compute/hyper-v/hyper-
v-feature-compatibility-by-generation-and-guest
Emulex HBAs
Table 2-3 lists the storage array configuration when Emulex HBAs are used in
NPIV mode.
Table 2-3 Configuration on storage arrays when Emulex HBAs are used in NPIV mode
For details about the Windows versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
1. If the server is running Windows Server 2012 R2 with Hyper-V, the OS of the VMs
created on the server must be Windows Server 2012.
2. If the server is running Windows Server 2012 with Hyper-V, the OS of the VMs created
on the server must be Windows Server 2012.
3. If the server is running Windows Server 2016 with Hyper-V, the OS of the VMs created
on the server can be Windows Server 2012, Windows Server 2012 R2, or Windows
Server 2016. Patches must be installed in Windows Server 2012 R2.
4. If the server is running Windows Server 2019 with Hyper-V, the OS of the VMs created
on the server can be Windows Server 2012, Windows Server 2012 R2, Windows Server
2016, or Windows Server 2019. Patches must be installed in Windows Server 2012 R2.
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
QLogic HBAs
Table 2-4 lists the storage array configuration when QLogic HBAs are used in
NPIV mode.
Table 2-4 Configuration on storage arrays when QLogic HBAs are used in NPIV mode
For details about the Windows versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
1. If the server is running Windows Server 2012 R2 with Hyper-V, the OS of the VMs
created on the server must be Windows Server 2012.
2. If the server is running Windows Server 2012 with Hyper-V, the OS of the VMs created
on the server must be Windows Server 2012.
3. If the server is running Windows Server 2016 with Hyper-V, the OS of the VMs created
on the server can be Windows Server 2012, Windows Server 2012 R2, or Windows
Server 2016. Patches must be installed in Windows Server 2012 R2.
4. If the server is running Windows Server 2019 with Hyper-V, the OS of the VMs created
on the server can be Windows Server 2012, Windows Server 2012 R2, Windows Server
2016, or Windows Server 2019. Patches must be installed in Windows Server 2012 R2.
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
NOTICE
If the patches listed in this section are not installed, exceptions will occur upon a
switchover due to a path failure.
KB2520235 | Yes | No
In Windows Server 2012: On the Server Manager page, click Local Server and
choose Manage > Add Roles and Features.
In the preceding figure, the VID is HUAWEI and the PID is XSG1. If the value of
the MPIO-ed parameter is NO, the vendor's LUNs have not been taken over by
MPIO.
Step 2 Take over the storage array.
On the Windows server, open the Command Prompt and run the
mpclaim -r -i -d "HUAWEI  XSG1        " command, as shown in the following figure.
The VID must contain eight characters and the PID must contain 12 characters. If
either value is shorter, pad it with trailing spaces, as in the command above. You
can copy the exact string from the output of the mpclaim -e command.
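The padding rule can be illustrated with a small helper. This is a hedged sketch for clarity only; the function is not part of any Huawei or Microsoft tool, and the field widths are those stated in this guide.

```python
def mpio_device_id(vid: str, pid: str) -> str:
    """Build the device ID string passed to mpclaim -d: the VID padded to
    8 characters and the PID padded to 12 characters with trailing spaces."""
    if len(vid) > 8 or len(pid) > 12:
        raise ValueError("VID must fit in 8 characters and PID in 12")
    return vid.ljust(8) + pid.ljust(12)

# 'HUAWEI' (6 chars) gains 2 trailing spaces; 'XSG1' (4 chars) gains 8.
print(repr(mpio_device_id("HUAWEI", "XSG1")))  # -> 'HUAWEI  XSG1        '
```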
After the host has restarted, right-click a disk discovered by the server and choose
Properties from the shortcut menu.
On the HUAWEI XSG1 Multi-Path Disk Device Properties page, click the MPIO
tab. In Select the MPIO policy, select Round Robin With Subset.
Activate path verification. On the MPIO tab, click Details. In the DSM Details
dialog box, select Path Verify Enabled.
----End
Verification
Run the mpclaim -s -d command to verify that the configuration has taken effect.
Run the mpclaim -s -d MPIO Disk No. command to verify path information about
an MPIO disk.
Emulex
For Emulex HBAs, modify the following parameters:
● LinkTimeOut
The default value is 30. Set it to 10.
● NodeTimeOut
The default value is 30. Set it to 10.
NOTICE
If the default values of these parameters on your system differ from those
specified in this document, do not modify them. Contact the related project
personnel for confirmation first.
----End
QLogic
For QLogic HBAs, modify the following parameters:
NOTICE
If the default values of these parameters on your system differ from those
specified in this document, do not modify them. Contact the related project
personnel for confirmation first.
For details about how to modify these parameters, see 3.3.2 Configuring HBAs.
3 XenServer
3.1 Precautions
3.2 Configuring Storage Arrays
3.3 Configuring the Host
3.4 Verifying the Configurations
3.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the XenServer versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
Step 1 Right-click the server and choose Enter Maintenance Mode from the shortcut
menu.
----End
NOTICE
If the default values of these parameters on your system differ from those
specified in this document, do not modify them. Contact the related project
personnel for confirmation first.
Type 3 to select HBA Parameters. The HBA port status window is displayed.
Select options 13 and 15 and set each of their values to 10. Then type 20 to
select Commit Changes.
After the configuration is complete, check the HBA parameters.
----End
4 HP-UX
4.1 Precautions
4.2 Configuring Storage Arrays
4.3 Configuring the Host
4.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the HP-UX versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
In this example:
● State
Path state. ACTIVE paths are AO paths and STANDBY paths are AN paths. If
both AO and AN paths are displayed, ALUA configuration has taken effect.
NOTICE
When a LUN mapped to the host does not have any service, the state of
paths to this LUN on the host becomes UNOPEN. To restore the path status
to ACTIVE, run the ioscan command or read or write the mapped LUN.
5 Red Hat
5.1 Precautions
5.2 Configuring Storage Arrays
5.3 Configuring the Host
5.4 Verifying the Configurations
5.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the RHEL versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
If the preceding command output is not displayed, install and enable DM-
Multipath as follows:
Step 1 Find the DM-Multipath software package on the installation CD-ROM of the OS
and run the rpm -vih packagename command to install DM-Multipath.
For RHEL 7.x, run the following command to check whether the multipath service
runs at startup.
If the multipath service status in the command output is not enabled, run the
following command to enable it.
systemctl enable multipathd.service
For RHEL 7.x, run the following commands to start the multipath service and
check its status.
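On RHEL 7.x, a minimal sketch of this start-and-verify sequence (run as root; the fallback messages only fire on systems where multipathd is unavailable):

```shell
# Start the multipath service, then report whether it is active (RHEL 7.x, as root).
systemctl start multipathd.service 2>/dev/null || echo "multipathd: start failed (not installed or not root)"
systemctl is-active multipathd.service 2>/dev/null || echo "multipathd: state unknown"
```

On a correctly configured host, the second command prints active.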
----End
dev_loss_tmo and fast_io_fail_tmo specify the retry time and switchover time in the event
of a link fault. The preceding figure provides recommended values for these two
parameters, and you can modify them according to your own requirements.
As shown in Figure 5-5, paths to the HyperMetro storage systems have been
converged successfully and the number of paths is correct. status=active
corresponds to the AO path to the owning controller of the LUN, and
status=enabled corresponds to the AN path to the non-owning controller of the
LUN. This indicates that the ALUA configuration has taken effect.
Generally, the prio value of an AO path on a Linux system is 50, and that of an AN path is
10.
6 Oracle VM
6.1 Precautions
6.2 Configuring Storage Arrays
6.3 Configuring the Host
6.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the Oracle VM versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
If the switchover mode is Common ALUA, add the following contents to the /etc/
multipath.conf configuration file on the host.
devices {
device {
vendor "HUAWEI"
product "XSG1"
path_grouping_policy group_by_prio
prio alua
path_checker tur
path_selector "round-robin 0"
failback immediate
fast_io_fail_tmo 5
dev_loss_tmo 30
}
}
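After saving /etc/multipath.conf, a hedged sketch for applying and inspecting the result (the restart command mirrors the one used in the FusionCompute chapter of this guide; multipath -ll lists each LUN's path groups):

```shell
# Restart the multipath service so the new configuration takes effect,
# then list multipath devices with their path groups and priorities.
systemctl restart multipathd.service 2>/dev/null || echo "multipathd: restart failed (not installed or not root)"
multipath -ll 2>/dev/null || echo "multipath: no multipath devices visible on this system"
```

With ALUA in effect, AO paths appear with prio=50 status=active and AN paths with prio=10 status=enabled, as described in the Red Hat and SLES chapters.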
7 SLES
7.1 Precautions
7.2 Configuring the Storage Arrays
7.3 Configuring the Host
7.4 Verifying the Configurations
7.1 Precautions
● It is recommended that you isolate the server's local disks before configuring
the multipathing software.
● For SLES 12 SP1, the kernel patch must be installed.
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the SLES versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
The system kernel has been upgraded and is not the original standard kernel.
If the preceding command output is not displayed, install and enable DM-
Multipath as follows:
Step 1 Find the DM-Multipath software package on the installation CD-ROM of the OS
and run the rpm -vih packagename command to install DM-Multipath.
For SLES 12.x, run the following command to check whether the multipath service
runs at startup.
Figure 7-3 Checking whether the multipath service runs at startup (SLES 12.x)
If the multipath service status in the command output is not enabled, run the
following command to enable it.
systemctl enable multipathd.service
For SLES 11.x and earlier versions, run the following command to check whether
the multipath service runs at startup.
Figure 7-4 Checking whether the multipath service runs at startup (SLES 11.x and
earlier versions)
If the multipath service status in the command output is not on, run the following
command to enable it.
chkconfig multipathd on
The following figure shows the status after the command is executed.
Figure 7-7 Starting the multipath service in SLES 11.x and earlier versions
----End
NOTICE
If the system enters the emergency mode after the multipathing software is
enabled and the host is restarted, isolate the local disks of the OS to prevent DM-
Multipath from taking them over. For details, go to the following website:
https://www.suse.com/documentation/sles-12/stor_admin/data/
sec_multipath_trouble.html
For SLES 11 and versions of SLES 12 earlier than SP2, add the following contents
to the /etc/multipath.conf configuration file of the multipathing software.
Figure 7-8 Multipathing configuration in SLES 11 and SLES 12 (earlier than SP2)
The WWID in the blacklist in the preceding figure is the server's local disk information.
Change it to the actual information at your site (this also applies to SLES 12 SP2). For
details, go to the following website:
https://www.suse.com/documentation/sles-12/stor_admin/data/
sec_multipath_trouble.html
dev_loss_tmo and fast_io_fail_tmo specify the retry time and switchover time in the event
of a link fault. The preceding figure provides recommended values for these two
parameters, and you can modify them according to your own requirements.
For SLES 12 SP2, the configuration file of the multipathing software is as follows.
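A sketch of such a configuration, mirroring the HUAWEI/XSG1 device section shown for Oracle VM earlier in this guide (the timeout values follow that example and may differ from the values recommended for your environment):

```
devices {
    device {
        vendor "HUAWEI"
        product "XSG1"
        path_grouping_policy group_by_prio
        prio alua
        path_checker tur
        path_selector "round-robin 0"
        failback immediate
        fast_io_fail_tmo 5
        dev_loss_tmo 30
    }
}
```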
dev_loss_tmo and fast_io_fail_tmo specify the retry time and switchover time in the event
of a link fault. The preceding figure provides recommended values for these two
parameters, and you can modify them according to your own requirements.
As shown in Figure 7-10, paths to the HyperMetro storage systems have been
converged successfully and the number of paths is correct. status=active
corresponds to the AO path to the owning controller of the LUN, and
status=enabled corresponds to the AN path to the non-owning controller of the
LUN. This indicates that the ALUA configuration has taken effect. Generally, the
prio value of an AO path on a Linux system is 50, and that of an AN path is 10.
8 RHV
8.1 Precautions
8.2 Configuring Storage Arrays
8.3 Configuring the Host
8.4 Verifying the Configurations
8.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the RHV versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
Figure 8-1 Command output indicating that DM-Multipath has been installed
If the preceding command output is not displayed, install and enable DM-
Multipath as follows:
Step 1 Find the DM-Multipath software package on the installation CD-ROM of the OS
and run the rpm -vih packagename command to install DM-Multipath.
Step 2 Configure the multipath service to run at host startup.
Run the following command to check whether the multipath service runs at
startup.
If the multipath service status in the command output is not enabled, run the
following command to enable it.
systemctl enable multipathd.service
NOTICE
The configuration file for RHV-H is different from that of other Red Hat OSs. After
the configuration is complete, you must run the persist /etc/multipath.conf
command. For details, see 17.2 Why Does the Multipathing Software
Automatically Return to Its Initial Configuration Every Time RHV-H Restarts?
----End
path_checker tur
path_selector "round-robin 0"
failback immediate
fast_io_fail_tmo 15
dev_loss_tmo 30
}
}
9 Rocky
9.1 Precautions
9.2 Configuring Storage Arrays
9.3 Configuring the Host
9.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
Rocky's kernel and multipathing software versions change frequently and require extra
attention.
If your version differs from that in the preceding figure, contact Huawei's technical support.
10 NeoKylin
10.1 Precautions
10.2 Configuring Storage Arrays
10.3 Configuring the Host
10.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the NeoKylin versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
11 Solaris
11.1 Precautions
11.2 Configuring Storage Arrays
11.3 Configuring the Host
11.4 Verifying the Configurations
11.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the Solaris versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
NOTICE
After you perform this operation, the host system will restart.
After the host system has restarted, the LUN information is as follows:
The path format is scsi_vhci, which indicates the path information after multiple
paths are aggregated.
Disabled: no
Initiator Port Name: 2100001b3210fbda
Target Port Name: 202800ac01020304
Override Path: NA
Path State: OK
Disabled: no
Initiator Port Name: 2101001b3230fbda
Target Port Name: 201800ac01020304
Override Path: NA
Path State: OK
Disabled: no
Initiator Port Name: 2101001b3230fbda
Target Port Name: 2601222222222222
Override Path: NA
Path State: OK
Disabled: no
Initiator Port Name: 2101001b3230fbda
Target Port Name: 2618222222222222
Override Path: NA
Path State: OK
Disabled: no
Initiator Port Name: 2101001b3230fbda
Target Port Name: 203800ac01020304
Override Path: NA
Path State: OK
Disabled: no
Target Port Groups:
ID: 33
Explicit Failover: no
Access State: active optimized
Target Ports:
Name: 200800ac01020304
Relative ID: 8193
ID: 1
Explicit Failover: no
Access State: active optimized
Target Ports:
Name: 2602222222222222
Relative ID: 23
Name: 2601222222222222
Relative ID: 22
ID: 2
Explicit Failover: no
Access State: active not optimized
Target Ports:
Name: 2619222222222222
Relative ID: 282
Name: 2618222222222222
Relative ID: 281
ID: 35
Explicit Failover: no
Access State: active not optimized
Target Ports:
Name: 202800ac01020304
Relative ID: 8705
ID: 34
Explicit Failover: no
Access State: active not optimized
Target Ports:
Name: 201800ac01020304
Relative ID: 8449
ID: 36
Explicit Failover: no
Access State: active not optimized
Target Ports:
Name: 203800ac01020304
Relative ID: 8961
root@solarisx86:~#
12 Asianux
12.1 Precautions
12.2 Configuring Storage Arrays
12.3 Configuring the Host
12.4 Verifying the Configurations
12.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the Asianux versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
NOTICE
If you use DM-Multipath for the first time, you are advised to create /etc/
multipath.conf from a copy of /usr/share/doc/device-mapper-multipath-0.4.9/
multipath.conf, rather than using the default /etc/multipath.conf file.
Run the following command to confirm that the multipathing software runs at
system startup.
Figure 12-3 Confirming that the multipathing software runs at system startup
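Depending on the Asianux release's init system, the startup check is typically one of the following (a sketch; the multipathd service name is assumed):

```shell
# systemd-based releases: "enabled" means the service starts at boot.
systemctl is-enabled multipathd.service 2>/dev/null || echo "systemctl: not available here"
# SysV-init releases: the runlevel list should show "on".
chkconfig --list multipathd 2>/dev/null || echo "chkconfig: not available here"
```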
13 Ubuntu
13.1 Precautions
13.2 Configuring Storage Arrays
13.3 Configuring the Host
13.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the Ubuntu versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
----End
14 VMware
14.1 Precautions
14.2 Configuring Storage Arrays
14.3 Configuring the Host
14.4 Verifying the Configurations
14.1 Precautions
When using HyperMetro with VMware ESXi, note the following:
● If two HyperMetro LUNs are mapped to a host as VMFS datastores, their host
LUN IDs must be the same when the host is of ESXi 6.5.0 GA build 4564106 or
a subsequent version earlier than ESXi 6.5 U1 build 5969303. For other ESXi
versions, it is recommended that the host LUN IDs be the same.
● If two HyperMetro LUNs are mapped to a host as raw devices (RDM), their
host LUN IDs must be the same regardless of host versions.
● If a HyperMetro LUN is mapped to multiple ESXi hosts in a cluster as VMFS
datastores or raw devices (RDM), the host LUN IDs of the LUN for all of these
ESXi hosts must be the same. You are advised to add all ESXi hosts in a cluster
that are served by the same storage device to a host group and to the same
mapping view.
You can query the host LUN ID mapped to the ESXi host in the Mapping
View of OceanStor DeviceManager, as shown in Figure 14-1.
Before modifying the host LUN ID, read the following warnings carefully:
changing the host LUN ID with an incorrect procedure may cause service
interruption. To modify the host LUN ID for a LUN, right-click the LUN and
choose Change host LUN ID from the shortcut menu. In the displayed dialog box,
set the same Host LUN ID value for the two storage devices in the HyperMetro
pair and then click OK.
If no datastore has been created on either LUN in the HyperMetro pair, you
can directly change the host LUN ID for the LUNs. Wait for about 5 to 15
minutes after the modification is complete, and then run the Rescan
command in the ESXi host CLI to verify that the LUNs in the HyperMetro pair
have been restored and are online.
If a datastore has been created on either LUN in the HyperMetro pair and a
service has been deployed in the datastore, change the host LUN ID using
only the following two methods (otherwise, changing the host LUN ID for
either LUN will cause the LUN to enter the PDL state and consequently
interrupt services):
● Method 1: You do not need to restart the ESXi host. Migrate all VMs in
the datastore deployed on the LUNs in the HyperMetro pair to another
datastore, and then change the host LUN ID on the OceanStor
DeviceManager. Wait for about 5 to 15 minutes after the modification is
complete, and then run the Rescan command in the ESXi host CLI to
verify that the LUNs in the HyperMetro pair have been restored and are
online. Then, migrate the VMs back to the datastore deployed on the
LUNs in the HyperMetro pair.
● Method 2: You need to restart the ESXi host. Power off all VMs in the
datastore deployed on the LUNs in the HyperMetro pair to ensure that no
service is running on the LUNs. Then, modify the host LUN ID on the
OceanStor DeviceManager. Then, restart the ESXi host for the
modification to take effect. After restarting the ESXi host, check whether
the LUNs in the HyperMetro pair have been restored and are online.
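The Rescan referenced above can be issued from the ESXi shell; a sketch using the esxcli storage namespace, where --all rescans every adapter:

```shell
# Rescan all storage adapters so the ESXi host rediscovers the HyperMetro LUNs.
esxcli storage core adapter rescan --all 2>/dev/null || echo "esxcli: available only on an ESXi host"
```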
NOTICE
When deploying HyperMetro with VMware ESXi NMP, consider the compatibility
between components (such as storage system, operating system, HBAs, and
switches) and the application software.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the VMware ESXi versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
● In VMware ESXi 6.0 P07 (build number 9239799) and later 6.0 releases, the system
integrates Huawei storage's VMW_SATP_ALUA and PSP_RR rules (system-level) by
default.
In VMware ESXi 6.5 Patch 02 (build number 7388607) and later 6.5 releases, the system
integrates Huawei storage's VMW_SATP_ALUA and PSP_RR rules (system-level) by
default.
In all VMware ESXi 6.7 releases, the system integrates Huawei storage's
VMW_SATP_ALUA and PSP_RR rules (system-level) by default. However, you are advised
to run the preceding commands to manually add VMW_SATP_ALUA and PSP_RR rules
(user-level). Manually added rules have a higher priority than the default ones.
● The preceding command only applies to ESXi 5.0 and later. For ESXi versions that
HyperMetro supports, see the compatibility list at http://support-open.huawei.com/
ready/pages/user/compatibility/support-matrix.jsf
● New SATP rules will immediately take effect for newly mapped LUNs, but will not take
effect for previously mapped LUNs until the host is restarted.
● For details about the parameters in the host commands, see https://
docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.storage.doc/GUID-
D10F7E66-9DF1-4CB7-AAE8-6F3F1F450B42.html.
iSCSI Network
Run the following commands on the host:
esxcli iscsi adapter param set -A vmhba35 -k NoopOutInterval -v 1
esxcli iscsi adapter param set -A vmhba35 -k NoopOutTimeout -v 10
esxcli iscsi adapter param set -A vmhba35 -k RecoveryTimeout -v 1
● The preceding commands only apply to ESXi 5.0 and later. For ESXi versions that
HyperMetro supports, see the compatibility list at http://support-open.huawei.com/
ready/pages/user/compatibility/support-matrix.jsf
● Replace vmhba35 with iSCSI storage adapters as required.
● The settings shorten the path switchover time to about 11s. In comparison, the default
ESXi settings may result in a path switchover time of up to 35s for ESXi 6.0.* and ESXi
6.5.*, and of up to 25s for ESXi 6.7.*.
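To confirm that the values set above have taken effect, the adapter parameters can be read back (a sketch; vmhba35 again stands in for your iSCSI adapter):

```shell
# List the iSCSI adapter parameters; NoopOutInterval, NoopOutTimeout, and
# RecoveryTimeout should show the values configured above.
esxcli iscsi adapter param get -A vmhba35 2>/dev/null || echo "esxcli: available only on an ESXi host"
```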
Table 14-2 Cluster configuration when the OS native multipathing software (VMware NMP) is used
Applicable versions: ESXi 6.0 U2, 6.0 U3, 6.5.*, and 6.7.*
● Host parameters: retain the default values (False and 1, respectively).
● vSphere HA settings:
1. Select Turn on vSphere HA.
2. Set Datastore with PDL to Power off and restart VMs.
3. Set Datastore with APD to Power off and restart VMs - Aggressive restart policy.
● Remarks: for a host of a version from ESXi 6.0 U2 to ESXi 6.7.*, retain the default host parameter settings. You only need to enable HA again in vCenter for the settings to take effect.
● For VMware vSphere 5.0 u1, 5.1, and 5.5, deploy ESXi hosts across data
centers in an HA cluster and configure the cluster with the HA advanced
parameter das.maskCleanShutdownEnabled = True.
● A VM service network requires L2 interworking between data centers for VM
migration between data centers without affecting VM services.
● For VMware vSphere 5.0 u1, later 5.0 versions, and 5.1 versions, log in to the
CLI of each ESXi host using SSH and add Disk.terminateVMOnPDLDefault =
True in the /etc/vmware/settings file.
● For VMware vSphere 5.5.*, 6.0 u1, and versions between them, use vSphere
Web Client to connect to vCenter, go to the cluster HA configuration page,
and select Turn on vSphere HA. Then, log in to each ESXi host using vSphere
Web Client or vCenter and complete the following settings:
Set VMkernel.Boot.terminateVMOnPDL = True. The parameter forcibly
powers off VMs on a datastore when the datastore enters the PDL state.
Step 1 Run the esxcli storage nmp satp rule list | grep -i huawei command to verify
that SATP rules are successfully added.
The command output shows that SATP rules are successfully added.
Step 2 Run the esxcli storage nmp device list -d=naa.6xxxxxxx command to verify that
working paths of LUNs are properly configured.
naa.6xxxxxxx indicates the drive letter of a LUN after being mapped to the host.
Working paths are successfully configured if their Storage Array Type and Path
Selection Policy are the same as those configured, and the number of Working
Paths is equal to the total number of paths in the port group.
Example:
The following SATP rule is configured:
esxcli storage nmp satp rule add -V HUAWEI -M XSG1 -s VMW_SATP_ALUA -P VMW_PSP_RR -c tpgs_on
When Path Selection Policy is VMW_PSP_FIXED, only one working path is available, which
is any path in the port group where AO paths reside.
----End
15 AIX
15.1 Precautions
15.2 Configuring Storage Arrays
15.3 Configuring the Host
15.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the compatibility list at http://support-
open.huawei.com/ready/index.jsf to ensure that its requirements have been met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the AIX versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
Figure 15-2 Checking whether MPIO takes over Huawei storage disks
After I/Os have been delivered, you can run the following command to check
whether the path priority is correct:
If the value of path_status has Opt, the corresponding paths are preferred ones.
Other paths are non-preferred ones.
In the example shown in Figure 15-4, there are two preferred paths and 10 non-
preferred paths. The result is the same as configured, indicating that the
configuration is successful.
NOTICE
Only AIX 6.1 TL9, AIX 7.1 TL3, and later versions support the lsmpio command.
For other AIX versions, you can only use the lspath command to query paths.
16 FusionCompute
16.1 Precautions
16.2 Configuring Storage Arrays
16.3 Configuring the Host
16.4 Verifying the Configurations
16.1 Precautions
NOTICE
Factors that affect the HyperMetro solution include, but are not limited to,
operating systems, HBAs, switches, clusters, and upper-layer applications. Before
deploying the solution, consult the FusionCompute team to ensure that its
requirements are met.
This document only provides the methods to configure the components. For
details about specific compatibility scenarios, see the HyperMetro compatibility
list.
For details about the FusionCompute versions, see the compatibility list:
http://support-open.huawei.com/ready/pages/user/compatibility/support-
matrix.jsf
NOTICE
If a LUN has been mapped to the host, you must restart the host for the
configuration to take effect after you modify the initiator parameters. If you
configure the initiator for the first time, restart is not needed.
If the preceding command output is not displayed, install and enable DM-
Multipath as follows:
Step 1 Find the DM-Multipath software package on the installation CD-ROM of the OS
and run the rpm -vih packagename command to install DM-Multipath.
For FusionCompute 6.3.0, 6.3.1, and 6.5.0, run the following command to check
whether the multipath service runs at startup.
If the multipath service status in the command output is not enabled, run the
following command to enable it.
systemctl enable multipathd.service
dev_loss_tmo and fast_io_fail_tmo specify the retry time and switchover time in the event
of a link fault. The preceding figure provides recommended values for these two
parameters, and you can modify them according to your own requirements.
After the configuration is complete, run the following command to restart the
multipath service for the configuration to take effect.
systemctl restart multipathd.service
After the paths are taken over, the path information similar to the following is
displayed on the host.
After the configuration has taken effect, the status of some LUN paths is prio=50
status=active and that of other LUN paths is prio=10 status=enabled.
17 FAQs
17.1 How Do I Determine Whether the HBA Parameters Configured for the
Multipathing Software Have Taken Effect?
17.2 Why Does the Multipathing Software Automatically Return to Its Initial
Configuration Every Time RHV-H Restarts?
17.3 Why Does SLES Enter Emergency Mode After the Multipathing Software Is
Configured?
17.4 Why Does the Old Path Information Remain After an HBA Is Replaced on a
XenServer Host?
17.5 What Can I Do If I/Os on a VMware 6.0 Host Are Interrupted After the
Replication Link Between the Storage Systems Goes Down?
17.6 What Can I Do If Links Are Not Aggregated in Linux Due to the Multipathing
Software Anomaly?
Solution
Perform the following operations to check whether the configuration file has
taken effect.
In the command output, the remote port information is from 0:0-2 to 0:0-5.
Step 3 Check the fast_io_fail_tmo information.
Run the following command:
grep . /sys/class/fc_remote_ports/rport-0\:0-*/fast*
The value of the remote ports' (0:0-2 to 0:0-5) dev_loss_tmo parameter is 30. The
configuration has taken effect.
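The dev_loss_tmo values can be read from sysfs in the same way as fast_io_fail_tmo; a sketch:

```shell
# Print dev_loss_tmo for each FC remote port; every line should show 30
# if the configured timeout has taken effect.
grep . /sys/class/fc_remote_ports/rport-0\:0-*/dev_loss_tmo 2>/dev/null || echo "no FC remote ports on this system"
```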
----End
Solution
This problem can be found at Red Hat's official website:
https://access.redhat.com/articles/43459
For RHV-H, the first two lines in the multipath.conf file are as follows:
# RHEV REVISION
# RHEV PRIVATE
After the command is executed, the configuration data will be retained after the
host is restarted.
Solution
This is a known problem with SLES. It can be found at SLES's official website:
https://www.suse.com/documentation/sles-12/stor_admin/data/
sec_multipath_trouble.html
In emergency mode, run the multipath -v2 command to view the path
information.
multipath -v2
Dec 18 10:10:03 | 3600508b1001030343841423043300400: ignoring map
Edit the /etc/multipath.conf file and add the following content to the blacklist.
blacklist {
wwid 3600508b1001030343841423043300400
}
----End
The expected result is that the new HBA will have eight new paths and those of
the original HBA will disappear. However, there are now 16 paths. The original
eight paths remain and their status is faulty.
Solution
Perform the following operations.
----End
Solution
This problem is caused by internal defects of VMware 6.0. VMware's official
website provides an explanation:
https://kb.vmware.com/selfservice/microsites/search.do?
language=en_US&cmd=displayKC&externalId=2144657
Upgrade your system to VMware 6.0 U2 to solve this problem.
In the preceding figure, the links with a priority of 130 are not aggregated, nor are
those with a priority of 10.
Solution
An exception occurred when the multipathing software processed path priorities.
The status of the paths with a priority of 130 should be active rather than
enabled. Run the following command to let the multipathing software identify
the paths again:
multipathd -k"reconfigure"
D
DM-Multipath Device Mapper-Multipath
F
FC Fibre Channel
H
HBA Host Bus Adapter
L
LUN Logical Unit Number
O
ODX Offloaded Data Transfer
P
PID Product ID
V
VID Vendor ID