FusionCompute V100R005C10
Initial Configuration Guide
Issue 01
Date 2015-11-11
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://e.huawei.com
Purpose
This document describes the FusionCompute initial configuration operations and provides
guidance for users on the basic FusionCompute configuration process and follow-up tasks
after the software installation.
Intended Audience
This document is intended for software commissioning engineers.
Change History
Changes between document issues are cumulative. The latest document issue contains all the
changes made in earlier issues.
Issue 01 (2015-11-11)
This issue is the first official release.
Contents
A Glossary
A.1 A-E
A.2 F-J
A.3 K-O
A.4 P-T
A.5 U-Z
Scenarios
After the FusionCompute is installed, load a license file so that the FusionCompute can
provide licensed services within the specified period.
You can obtain the license using either of the following methods:
• Apply for a license file based on the equipment serial number (ESN) and load the license file.
• Share a license file with another site.
The number of host CPUs on the sites that share one license file cannot exceed the licensed limit.
Prerequisites
Conditions
You have obtained the following information if you want to use a license file of another site:
• Management IP address of the Virtualization Resource Management (VRM) node if the target site has only one VRM, or floating IP address of the VRMs if the target site has two VRMs working in active/standby mode
• Username and password of the FusionCompute administrator of the target site
Data
Data preparation is not required for this operation.
Procedure
Log in to the FusionCompute.
1 Log in to the FusionCompute.
For details, see Logging In to FusionCompute.
2 On the FusionCompute, choose System > System Configuration > License
Management.
The License Management page is displayed.
3 Click Load License.
The Load License File page is displayed.
Scenarios
If FusionStorage is used, set the host CPU resource mode to Isolated to isolate CPU resources in Domain 0 from those in Domain U. Note that after the Isolated mode is configured, the VM CPU affinity and Guest NUMA functions become unavailable.
Prerequisites
Conditions
• You have logged in to FusionCompute.
Procedure
1 On the FusionCompute web client, choose System > System Configuration > Host
CPU Configuration.
The Host CPU Configuration page is displayed.
2 Set the host CPU resource mode to Isolated.
3 Click Save.
A dialog box is displayed.
4 Click OK.
A dialog box is displayed.
5 Click OK.
The host CPU resource mode is set successfully.
----End
Scenarios
After the FusionCompute is installed, create service clusters based on the data plan.
A management cluster and a service cluster can be deployed as one cluster.
NOTE
A cluster named ManagementCluster is automatically created after the FusionCompute is installed
using the FusionCompute installation wizard. Hosts on which Virtualization Resource Management
(VRM) VMs run are automatically added to the cluster.
Prerequisites
Conditions
You have logged in to the FusionCompute.
Data
You have obtained the following information about the cluster to be created:
• Name and description
• Resource scheduling policies
• High availability (HA) policy
• Memory overcommitment policy
Procedure
Switch to the page for creating a cluster.
1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
2 In the navigation tree on the left, right-click the site and choose Create Cluster.
The Create Cluster page is displayed.
Configure the basic information.
– After host memory overcommitment is enabled, the overcommitment ratio for a VM can be controlled by specifying the Reserved (MB) parameter in the VM QoS settings area.
▪ If the configured VM memory is greater than or equal to 16 GB, the VM Reserved (MB) parameter must be set to its maximum value to ensure optimal VM performance. In this case, host memory overcommitment does not take effect for the VM.
▪ If the configured VM memory is less than 16 GB, the VM Reserved (MB) value can be set to 70% of the specified VM memory size, so that more VMs than the number physically supported can use the host memory resources. If the monitored VM memory usage stays above 40% for several hours, set the VM Reserved (MB) parameter to its maximum value; host memory overcommitment then no longer takes effect for the VM.
– After host memory overcommitment is enabled, the total available memory capacity equals the total memory capacity in the virtualization domain multiplied by 120%. The total memory capacity in the virtualization domain equals the server memory capacity minus the memory required by virtualization management. You can choose Computing Pool > Site > Cluster > Host > Summary > Monitoring Information to view the total memory capacity in the virtualization domain. (A worked sketch of this arithmetic follows this list.)
– After host memory overcommitment is enabled, plan VMs on hosts based on the total memory capacity. If some VMs have consumed a large amount of memory, other VMs may fail to start even after that memory has been released, because the virtualization layer is not aware of the release.
– After host memory overcommitment is enabled, VMs cannot be hibernated and memory
snapshots cannot be created for them.
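The following Python sketch works through the arithmetic above. The 16 GB cut-off, the 70% reservation guideline, and the 120% overcommitment ratio come from this guide; the sample server size and management overhead are illustrative assumptions, not product values.

def virtualization_domain_memory(server_gb: float, mgmt_overhead_gb: float) -> float:
    # Memory left for the virtualization domain after management overhead.
    return server_gb - mgmt_overhead_gb

def total_available_memory(virt_domain_gb: float, ratio: float = 1.2) -> float:
    # With overcommitment enabled: available memory = domain memory x 120%.
    return virt_domain_gb * ratio

def recommended_reservation_mb(vm_memory_gb: float) -> int:
    # VMs of 16 GB or more reserve all memory (overcommitment does not apply);
    # smaller VMs may reserve 70% of their configured memory.
    if vm_memory_gb >= 16:
        return int(vm_memory_gb * 1024)
    return int(vm_memory_gb * 1024 * 0.7)

domain = virtualization_domain_memory(server_gb=256, mgmt_overhead_gb=8)  # assumed sizes
print(total_available_memory(domain))   # 297.6 GB available with overcommitment
print(recommended_reservation_mb(8))    # 5734 MB for an 8 GB VM
print(recommended_reservation_mb(32))   # 32768 MB: full reservation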
6 Configure the VM startup policy.
– Assign automatically: The system starts a VM on a host that has available
resources in the cluster.
– Assign based on load balancing: The system starts VMs on the host with the
largest available CPU capacity.
7 If you need the guest non-uniform memory access (NUMA) function, set GuestNUMA to Enable.
Guest NUMA presents a topology view of the memory and CPU resources on each host to VMs. Based on this topology, the VM user can configure VM CPUs and memory using third-party software (such as Eclipse) so that VMs access the memory closest to their CPUs, thereby reducing access latency and improving VM performance.
NOTE
Ensure that the following conditions are met for the Guest NUMA function to take effect:
– The number of VM CPUs supported by a host in the cluster must be a multiple of the number of physical CPUs on the host or a multiple of the number of threads of a single CPU on the host. (To check these values, switch to the host hardware information page and choose Hardware > CPU, as described in Querying Host Information.)
The Guest NUMA function may fail on the host that accommodates the VM if the host CPU quantity changes (due to VM live migration or VM startup on another host) or if the VM CPU specifications change.
– The memory overcommitment function or the host CPU resource mode is disabled for the
cluster.
– The NUMA function is enabled for hosts in the cluster. For example, to enable the NUMA
function for a RH2288H V2 server, choose Advanced > Advanced Processor in the advanced
basic input/output system (BIOS) settings of the server, and set NUMA mode to Enabled.
– VMs on the hosts in the cluster are restarted after the Guest NUMA function is enabled for the
cluster.
8 Set the VM processing policy upon data store faults as needed.
This parameter can be set to No processing or Stop VM.
9 If local RAM disks are required for hosts in the cluster, set Local Ramdisk to Enable.
10 Click Next.
The Configure HA page is displayed.
Configure high availability (HA) settings.
11 Determine whether to configure HA settings.
The HA function can be enabled for VMs in a cluster only when the cluster also has the
HA function enabled.
– If yes, go to 12.
– If no, click Next and go to 14.
12 Select Enable.
13 Configure the HA function.
– HA resource reservation: The system reserves the specified amount of CPU and
memory resources for the cluster. The reserved resources can only be used to
implement the VM HA function.
– Tolerate cluster host failures: The system allows the specified number of hosts to become faulty in the cluster. The system also periodically checks whether the cluster has sufficient resources to take over the services of the VMs on these faulty hosts. If resources are insufficient, an alarm is generated to remind users to ensure sufficient resources in the cluster for VM service switchover.
A slot is the basic unit for allocating CPU and memory resources, and the slot value can be set to Automatic or Custom (see the sketch after this list):
▪ Automatic: The system sets the slot size based on the maximum amount of VM CPU and memory resources required by the cluster.
▪ Custom: Users can set custom VM CPU and memory resource sizes based on service requirements.
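A minimal sketch of the slot accounting described above, assuming the Automatic rule (the slot is sized to the largest VM's CPU and memory demands). The VM and host figures are illustrative.

from dataclasses import dataclass

@dataclass
class Spec:
    vcpus: int
    memory_gb: int

def automatic_slot_size(vms: list[Spec]) -> Spec:
    # Slot sized to the maximum CPU and memory any single VM requires.
    return Spec(vcpus=max(vm.vcpus for vm in vms),
                memory_gb=max(vm.memory_gb for vm in vms))

def slots_per_host(host: Spec, slot: Spec) -> int:
    # A host provides as many slots as both its CPU and memory allow.
    return min(host.vcpus // slot.vcpus, host.memory_gb // slot.memory_gb)

vms = [Spec(2, 4), Spec(4, 8), Spec(8, 16)]
slot = automatic_slot_size(vms)              # Spec(vcpus=8, memory_gb=16)
print(slots_per_host(Spec(32, 128), slot))   # 4 slots on a 32-vCPU/128 GB host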
14 Click Next.
The Configure Resource page is displayed.
Configure computing resource scheduling policies.
15 Determine whether to enable the computing resource scheduling function.
– If yes, go to 16.
– If no, click Next and go to 27.
16 Select Enable computing resource scheduling.
17 Select an automation level.
Automation levels include:
– Manual: The user is prompted to migrate VMs based on the suggestions provided
on the Computing Resource Scheduling page.
– Automatic: The system automatically migrates VMs to maximize resource
utilization when the resource load is heavy.
18 Select a measurement condition.
Measurement conditions (such as CPU or memory usage) are criteria for the system to
determine whether to schedule resources.
Measurement conditions include:
– CPU
– Memory
– CPU and Memory
19 Configure the CPU and memory thresholds for triggering scheduling policies.
20 Set resource scheduling thresholds for each hour.
Thresholds include:
– Conservative: The system does not migrate VMs or provide VM migration
suggestions.
– Slightly conservative, Medium, Slightly radical, and Radical: The system corrects cluster load imbalance increasingly aggressively, in the order listed.
The threshold is set to Medium for each period by default.
The threshold validity period can be set to:
– By day: The thresholds always take effect.
– By week: The thresholds take effect only on the specified days in a week.
– By month: The thresholds take effect only on the specified days in a month.
21 Set the VM scheduling customization value to configure the VM scheduling
automation level.
– If this parameter is selected, you can set an automation level for each VM in the
cluster on the VM Override Policy page.
– If this parameter is not selected, the VMs in a cluster use the automation level
specified for the cluster.
22 Determine whether to enable automated power management.
– If yes, go to 23.
– If no, click Next and go to 27.
With this function enabled, the system migrates the VMs on a host and powers the host on or off based on host resource usage in the cluster. This function is implemented remotely through the host BMC. Therefore, you need to configure the BMC for all hosts in the cluster.
23 Select Enable automated power management.
Automated power management depends on automated computing resource scheduling. Therefore, automated power management takes effect only when automated computing resource scheduling is enabled and the migration threshold is not set to Conservative.
24 Configure the automation level.
Automation levels include:
– Manual: The user is prompted to migrate VMs based on the suggestions provided
on the Computing Resource Scheduling page.
– Automatic: The system automatically migrates VMs to maximize resource
utilization when the resource load is heavy.
25 Set the power management threshold and the threshold validity duration for each period.
Power management thresholds include:
– Conservative: The system does not power off hosts by default. It powers on
available hosts in the cluster only when the average host resource usage in the
cluster is higher than the heavy-load threshold.
– Slightly conservative, Medium and Slightly radical: The system powers on
available hosts in the cluster when the average host resource usage in the cluster is
higher than the heavy-load threshold. It powers off some hosts when the average
host resource usage in the cluster is lower than the light-load threshold.
– Radical: The system does not power on available hosts by default. It powers off
some hosts in the cluster only when the average host resource usage in the cluster is
lower than the light-load threshold.
The default value is Medium for all periods.
Table 3-1 lists the light-load and heavy-load threshold values of each power management threshold. A sketch of the resulting decision rule follows the table.

Table 3-1 Light-load and heavy-load threshold values of each power management threshold

Threshold Name    Heavy-Load Threshold Value    Light-Load Threshold Value
Radical           -                             63%
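A sketch of the decision rule above. Only the Radical light-load value (63%) is recoverable from Table 3-1; the other numbers used here are placeholder assumptions, not product values.

from typing import Optional

def power_action(avg_usage_pct: float,
                 heavy_pct: Optional[float],
                 light_pct: Optional[float]) -> str:
    # Power on hosts above the heavy-load threshold, power some off below
    # the light-load threshold; None disables that side of the rule.
    if heavy_pct is not None and avg_usage_pct > heavy_pct:
        return "power on available hosts"
    if light_pct is not None and avg_usage_pct < light_pct:
        return "power off some hosts"
    return "no action"

# Radical: never powers on by default, powers off below 63% (Table 3-1).
print(power_action(50, heavy_pct=None, light_pct=63))   # power off some hosts
# Conservative: never powers off; the 90% heavy-load value is a placeholder.
print(power_action(95, heavy_pct=90, light_pct=None))   # power on available hosts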
26 Click Next.
The Configure IMC page is displayed.
Configure the incompatible migration cluster (IMC) function.
27 Determine whether to enable the IMC function.
The IMC function allows the VMs running on the hosts in the cluster to present the same CPU features. This ensures successful VM migration across these hosts even if the hosts use physical CPUs of different performance baselines.
The IMC mode of a cluster must be set to the CPU generation that exposes the minimum function set in the cluster, or to an earlier CPU generation (see the sketch after the note below).
The CPU generation of a host to be added to an IMC-enabled cluster must be the same as or later than the IMC mode configured for the cluster.
FusionCompute supports the following Intel CPU generations, which present higher
performance levels in ascending order:
– Merom
– Penryn
– Nehalem
– Westmere
– Sandy Bridge
– Ivy Bridge
NOTE
After setting an IMC mode for a cluster, you need to enable the Execute Disable Bit function, which is also known as the No eXecute (NX) or eXecute Disable (XD) function, in the BIOS advanced CPU options for existing hosts in the cluster and the hosts to be added to the cluster.
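A minimal sketch of the two IMC compatibility checks above, using the generation ordering listed in this guide. The validation logic itself is an illustration, not FusionCompute's implementation.

GENERATIONS = ["Merom", "Penryn", "Nehalem", "Westmere", "Sandy Bridge", "Ivy Bridge"]
RANK = {name: i for i, name in enumerate(GENERATIONS)}

def valid_imc_mode(imc_mode: str, cluster_host_gens: list[str]) -> bool:
    # The IMC mode must not be newer than the oldest generation in the cluster.
    return RANK[imc_mode] <= min(RANK[g] for g in cluster_host_gens)

def host_can_join(host_gen: str, imc_mode: str) -> bool:
    # A joining host must match the IMC mode's generation or be later.
    return RANK[host_gen] >= RANK[imc_mode]

cluster = ["Westmere", "Sandy Bridge"]
print(valid_imc_mode("Nehalem", cluster))       # True: Nehalem <= Westmere
print(valid_imc_mode("Sandy Bridge", cluster))  # False: newer than Westmere
print(host_can_join("Ivy Bridge", "Nehalem"))   # True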
28 Set the IMC mode and description.
29 Click Next.
The Confirm page is displayed.
Finish cluster creation.
30 Click Create.
An information dialog box is displayed.
31 Click OK.
The cluster creation task is complete.
Follow-up Procedure
Add hosts to the cluster.
----End
Scenarios
Add hosts to a cluster. The hosts provide computing resources to the system.
When you add hosts to a cluster, the system automatically binds the management ports in active/standby mode. This binding conflicts with switch-side link aggregation, so you need to delete the Eth-Trunk configuration on the access switch. Otherwise, network communication is interrupted.
After hosts are added to a cluster, add other planned management network ports to the bound
network port to improve reliability of the management plane network.
NOTE
If you use the FusionCompute installation wizard to install the FusionCompute, some hosts are
automatically added to the management cluster. You are advised to also bind the management ports for
the hosts.
Prerequisites
Conditions
• You have logged in to the FusionCompute.
• The operating system (OS) has been installed on the host to be added.
• The Virtualization Resource Management (VRM) management plane has been initialized after FusionCompute installation is complete, and the communication between the host and the VRM management plane is normal. Only one management plane can be configured for the VRM node.
• The CPU generation of the host is the same as or later than the incompatible migration cluster (IMC) mode configured for the cluster if the target cluster has IMC mode enabled. The Execute Disable Bit function, which is also known as the No eXecute (NX) or eXecute Disable (XD) function, has been enabled in the advanced CPU options of the host BIOS.
Data
Table 4-1 describes the data required for performing this operation.
Procedure
Determine the method of adding hosts.
1 Choose one of the following methods to add hosts:
– To add hosts in batches using a template, go to 8.
You can define host data in a template and import the template. This method is
recommended when a large number of hosts need to be added.
– To add hosts one by one, go to 2.
You can add only one host at a time. This method is recommended when a small
number of hosts need to be added.
Open the page for adding hosts.
NOTICE
The Connect to OpenStack option can be configured only when no host is added. If
hosts have been added to the FusionCompute system, you are not allowed to change the
Connect to OpenStack option.
Add a host.
4 Set the following parameters for the host:
– Name
– IP address
– BMC IP
– Username
– Password
– Use the site time sync policy: synchronizes the configured cluster system time to
the host.
5 Click OK.
An information dialog box is displayed.
6 Click OK.
If you set the time synchronization parameter to Yes, the system synchronizes time to the host and restarts the host management service processes, which interrupts host services for 3 to 5 minutes. If you set this parameter to No, you can manually configure a time synchronization policy for the host after it is added.
14 Save and close the template.
15 Click Browse to the right of Import template file on the Add Hosts in Batches page.
A dialog box is displayed.
16 Select the template and click Open.
17 Click OK.
An information dialog box is displayed.
18 Click OK.
You can choose System > Tasks and Logs > Task Center to view the task progress.
Bind the management network ports.
If you use the FusionCompute installation wizard to install the FusionCompute, some hosts
are automatically added to the management cluster. You are advised to also bind the
management ports for the hosts.
19 Determine whether to bind the management network ports.
– If yes, go to 20.
– If no, no further action is required.
20 In the navigation tree on the Computing Pool page, click the host.
The Getting Started page for the host is displayed in the right pane.
21 Choose Configuration > System Port > Bind Network Port.
The page for binding network ports is displayed.
22 In the Bind Network Port area, locate the row that contains Mgnt_Aggr, click More,
and select Add Network Port.
A dialog box is displayed, as shown in Figure 4-3.
23 Select PORT1 and click OK to bind eth1 on the host to the current management network
port eth0.
----End
Additional Information
Related Tasks
None
Related Concepts
Working Principles of Computing Resource Virtualization
The FusionCompute system integrates physical CPUs and memory resources on hosts into a
computing resource pool and divides the resources into virtual CPUs and memory resources
for VMs, as shown in Figure 4-4.
[Figure 4-4: a VM's virtual CPU and virtual memory are drawn from the virtualized computing resource pool (CPU and memory resource pools) that the virtualization layer creates on the host (physical server).]
Name: Virtual CPU and memory
Description: When the system creates a VM, the system automatically allocates the required memory space and virtual CPUs from the resource pool to the VM according to the specified VM specifications.

Name: VM
Description: NOTE: The virtual CPU and memory resources used by a VM must be provided by the same host. If this host fails, the system automatically assigns another host to the VM to provide computing resources. Therefore, the resources actually used by a VM cannot exceed the hardware resource specifications of a single host.
Storage Resources
The FusionCompute can use storage resources provided by dedicated storage devices or local
disks on hosts. Dedicated storage devices are connected to hosts through network cables or
fiber cables.
Data Store
A data store is a storage unit that is converted from a storage resource by FusionCompute.
After a data store is associated with a host, the data store can be used to create virtual disks
for VMs.
Raw device mapping (RDM) allows logical unit numbers (LUNs) on SAN devices to serve as data stores without creating virtual disks. This technology applies to large-capacity scenarios, for example, database server construction. RDM can be used only for VMs that run certain operating systems (OSs). For details about the supported OS list, see Compatibility. If RDM storage is used to deploy application cluster services, such as Oracle RAC, it is recommended that you neither use the VM snapshot creation function nor restore a VM using a snapshot. If you use a snapshot to restore a VM, the application cluster service may become faulty.
After storage resources are converted to data stores, the differences between virtual disks created from different resources are hidden from VM OSs.
Storage resources that can be converted to data stores are:
• LUNs on SAN devices, including Internet Small Computer Systems Interface (iSCSI) storage devices and Fibre Channel (FC) storage devices
• File systems on NAS devices
• Storage pools on FusionStorage
• Local hard disks on hosts
• Local RAM disks on hosts
Table 5-1 shows the relationship between storage devices, storage resources, and data stores supported by the FusionCompute.

Table 5-1 Storage devices, storage resources, and data stores

Storage Device    Storage Resource    Data Store                             Storage Space Required by Data Store
Local RAM disk    N/A                 Local RAM disk (non-virtualization)    16 GB to 512 GB

[Figure: relationship among storage devices (NAS devices, hosts), storage resources (file systems, LUNs), data stores, virtual disks, and VMs.]
Storage Port
A storage port on a host connects the host to a storage device. One physical NIC or a group of
physical NICs that are bound together on the host can be set as a storage port.
If iSCSI storage devices are used, two physical NICs on a host can be connected to multiple
storage NICs on the storage devices, working in multipathing mode. Binding of physical
NICs is not required in this mode.
If NAS devices are to be used, you are advised to bind the storage plane NICs in active/
standby mode and set the storage port to connect to the NAS devices to enhance reliability.
iSCSI Storage
An iSCSI storage device is connected to a host through network cables. The host accesses the
storage device using the TCP/IP protocol.
To ensure efficient access to an iSCSI storage device, configure an iSCSI initiator using the world wide name (WWN) generated after the storage device is associated with the host.
Typical iSCSI storage devices include IP SAN devices and OceanStor 18000 series storage
devices.
FC Storage
An FC storage device is connected to the FC host bus adapter (HBA) on a host through optical cables, which provide high data transmission rates.
To ensure efficient access to an FC storage device, configure an FC initiator using the WWN
generated after the storage device is connected to the host FC HBA.
Typical FC storage devices include FC SAN devices and OceanStor 18000 series storage
devices.
Multipathing
Multipathing is a storage access mechanism that provides more than one physical path to
connect a network storage device to one or more host network interface cards (NICs),
enabling load sharing for data flows and thereby enhancing reliability for storage access.
Usually, multipathing is supported by storage devices using iSCSI and Fibre Channel (FC),
such as IP SAN devices, FC SAN devices, and OceanStor 18000 series storage devices.
Multipathing supports the Huawei and universal multipathing modes. If the universal multipathing mode is used and the VM uses raw device mapped disks, the MSCS cluster cannot be deployed for the Windows Server OS. However, you can deploy the iSCSI network on the VM to deploy the MSCS cluster.
[Figure: multipathing example showing a host connecting to storage Controller A over VLAN 4 (172.20.10.10), VLAN 5 (172.30.10.10), VLAN 6 (172.40.10.10), and VLAN 7 (172.50.10.10), with host storage port IP addresses 172.20.100.100 and 172.30.100.100 on Eth2.]
NAS Storage
NAS storage devices use the Network File System (NFS) protocol to provide shared folders over a network.
A NAS storage device is connected to a host through network cables. A host accesses the
storage device using TCP/IP.
Local Storage
Disks on hosts provide local storage resources.
The FusionCompute can identify the following local storage resources:
• Free space on the disk on which the host operating system is installed
NOTE
The remaining space on the local disk where the host OS is installed can be added as a data store of the local storage type. If the disk is larger than 2 TB, the system identifies it only as a 2 TB disk during the host OS installation process, so the remaining space on this disk after the OS installation is less than 2 TB. Other local disks on the host are not affected by the OS installation and can therefore provide all of their space.
• Bare disks or unpartitioned redundant array of independent disks (RAID) arrays on the host
Local storage can be provided only to the host housing the disks.
For details about adding local RAM disks on a host, see Creating Local RAM Disks on a
Host.
Prerequisites
Conditions
• You have logged in to FusionCompute.
• The host has been added to a cluster.
• Operations provided in Binding Network Ports have been performed on the host if the host uses multiple network ports to connect to a storage device.
Procedure
Determine the method of adding storage ports to hosts.
1 Determine the method of adding storage ports to hosts.
– To add storage ports to hosts in batches, go to 12.
This method is recommended when storage ports are to be added to a large number of hosts or to hosts that use the multipathing function.
– To add storage ports to hosts one by one, go to 2.
This method is recommended when the hosts are small in number or do not use the
multipathing function.
Manually add storage ports to a host.
2 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
3 In the navigation tree on the left, select the site, the cluster, and the host.
The Getting Started page is displayed.
4 Choose Configuration > System Port > Add Storage Port.
The Add Storage Port page is displayed, as shown in Figure 5-3.
5 Select the network port to which the storage plane network interface card (NIC)
connects, and click Next.
PORTX indicates network port ethX on the host. To identify the ports on common
Huawei servers, see How to Identify Server Ports.
The Connection Settings page is displayed, as shown in Figure 5-4.
6 Set the following parameters for the storage port:
– IP address: Enter the storage port IP address.
▪ If the storage plane uses a layer 2 network, set it to an idle IP address that communicates with the storage plane (see the sketch after the note below). For example, if the storage IP address of the storage device is 172.20.100.100 and the subnet mask is 255.255.0.0, set the IP address to 172.20.XXX.XXX.
▪ If the storage plane uses a layer 3 network, set it to an IP address that communicates with the storage IP address of the storage device.
– Subnet mask: Enter the subnet mask of the storage plane.
– VLAN ID: Enter the VLAN ID of the storage plane.
– Switching mode: specifies the data exchange mode of the storage plane. The value
can be Linux subinterface or OVS forwarding.
NOTE
If SAN devices are used and multipathing is required, configure multiple storage
interfaces for a single storage network port.
For example, if a storage device has four storage paths on VLAN4, VLAN5, VLAN6, and VLAN7, and the host network port eth2 is to intercommunicate with VLAN4 and VLAN5 while eth3 intercommunicates with VLAN6 and VLAN7, configure storage interfaces for VLAN4 and VLAN5 on eth2, and for VLAN6 and VLAN7 on eth3.
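A quick way to check the layer-2 rule above with Python's standard ipaddress module, using the example figures from this step (storage IP 172.20.100.100, mask 255.255.0.0). The helper name is illustrative.

import ipaddress

def same_segment(candidate_ip: str, storage_ip: str, netmask: str) -> bool:
    # True if candidate_ip lies in the storage device's network segment.
    network = ipaddress.ip_network(f"{storage_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(candidate_ip) in network

print(same_segment("172.20.1.50", "172.20.100.100", "255.255.0.0"))   # True
print(same_segment("172.30.1.50", "172.20.100.100", "255.255.0.0"))   # False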
7 Click Next.
The Confirm page is displayed.
8 Ensure that all the information is correct and click Add.
A dialog box is displayed.
9 Check whether all the planned storage ports have been added.
– If yes, go to 11.
– If no, go to 10.
10 Click Continue, and perform 5 to 8 for each storage port to be added.
11 Click OK.
The storage ports are added to the host.
After this step is complete, no further action is required.
Add storage ports to hosts in batches.
12 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
13 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
14 Choose Operation > Add Storage Ports in Batches on top of the page.
The Add Storage Ports in Batches page is displayed, as shown in Figure 5-5.
17 Open the template, click the Host Storage Port sheet, locate the row that contains
information about storage ports to be added, and copy the information in the row to the
Config sheet.
PORTX indicates network port ethX on the host. To identify the ports on common
Huawei servers, see How to Identify Server Ports.
Storage port information includes Host IP Address, Host ID, Network Port Name, and
Network Port ID.
18 On the Config sheet, set the following parameters for the storage ports:
– Storage Port Name
– Storage Port Description
– Storage IP Address: Enter the storage port IP address. It must be an idle IP address in the same network segment as the storage device IP address.
▪ If the storage plane uses a layer 2 network, set it to an idle IP address that communicates with the storage plane. For example, if the storage IP address of the storage device is 172.20.100.100 and the subnet mask is 255.255.0.0, set the IP address to 172.20.XXX.XXX.
▪ If the storage plane uses a layer 3 network, set it to an IP address that communicates with the storage IP address of the storage device.
– Subnet Mask: Enter the subnet mask of the storage plane.
– VLAN ID: Enter the VLAN ID of the storage plane.
– Switching mode: specifies the data exchange mode of the storage plane. The value
can be Linux subinterface or OVS forwarding.
NOTE
For details about the parameters, see the help sheet in the template.
19 Save and close the template.
20 Click Browse to the right of Import template file on the Add Storage Ports in Batches
page.
A dialog box is displayed.
21 Select the template and click Open.
22 Click OK.
An information dialog box is displayed.
23 Click OK.
You can choose System > Tasks and Logs > Task Center to view the task progress.
Follow-up Procedure
After the storage ports are added to the hosts, if the hosts use shared storage resources, add the storage resources to the site, associate them with hosts, and then scan storage devices. For details, see Adding Storage Resources to a Site, Associating Storage Resources with a Host, and Scanning Storage Devices in the FusionCompute V100R005C10 Storage Management Guide.
----End
If all hosts in the system use only the local storage resources and local RAM disks they each provide, skip this task.
Prerequisites
• You have logged in to the FusionCompute.
• You have prepared information about the shared storage resource to be added.
• You have added storage ports to all hosts if you want to add a SAN, NAS, or FusionStorage resource that uses Internet Small Computer Systems Interface (iSCSI) channels.
• You have created a management account other than admin if you want to add an advanced (SAN) storage resource.
• You have configured Challenge Handshake Authentication Protocol (CHAP) authentication for the IP SAN storage device on the storage device management portal if an IP SAN device that uses CHAP authentication is to be added.
Data
Table 5-2 lists the data required for performing this operation.
Procedure
Change the multipathing type of the hosts.
1 Determine whether to change the multipathing type of the hosts on the site.
The default multipathing type for hosts in the FusionCompute system is Universal. If you want to add Huawei IP SAN storage using iSCSI channels, change the type to Huawei to improve storage performance. If you want to add a storage resource of any other type, do not change the multipathing type.
– If yes, go to 2.
– If no, go to 8.
2 On FusionCompute, click Computing Pool.
The getting started page for clusters and hosts is displayed.
3 On the Host page, click the name of the target host.
The getting started page for the host is displayed.
4 Click Configure Storage Multipathing.
A dialog box is displayed, as shown in Figure 5-6.
▪ Data consistency verification ensures data integrity but deteriorates storage device I/O performance.
▪ When configuring storage multipathing, add all storage IP addresses and ports of a storage device at one time. A storage IP address in use cannot be changed.
– If NAS is selected, set Name and Storage IP address, as shown in Figure 5-10.
12 Click Next.
The Select Host page is displayed.
13 Select hosts with which the storage resource is to be associated and click Next if you
selected FC SAN or IP SAN or click Finish if you selected any other resource type.
– If you selected FC SAN, the Obtain Host WWN page is displayed. Go to 14.
– If you selected any other resource type, a dialog box is displayed. Go to 16.
14 On the storage resource management interface, configure the initiator using the obtained
world wide name (WWN).
For details, see How to Configure an FC SAN Initiator or How to Configure an IP
SAN Initiator.
15 After the initiator is successfully configured, click Finish.
A dialog box is displayed.
16 Click OK.
The storage resource is added to the site.
----End
With service management ports configured, the system can locate and protect the data stores attached to a failed host in a timely manner. The system uses an independent network plane to carry the virtualized SAN storage traffic by default. Therefore, add a service management port to each host to manage the traffic before you add virtualized SAN data stores to hosts. You can also use the management plane to carry the virtualized SAN storage traffic. For details, see Changing the Network Plane that Carries Virtualized SAN Storage Traffic in the FusionCompute V100R005C10 Storage Management Guide. However, in this case, the management IP address changing function is adversely affected.
This topic describes how to add service management ports to hosts on FusionCompute to
carry the virtualized SAN storage traffic.
This task is required only when virtualized SAN data stores are to be added to hosts.
A host supports a maximum of four service management ports. A host that uses intelligent
network interface cards (iNICs) does not support service management ports.
Prerequisites
Conditions
• You have logged in to the FusionCompute.
• You have planned the network information for the virtualized SAN storage plane.
Procedure
Determine the method of adding service management ports to hosts.
1 Determine the method of adding service management ports to hosts.
– To add service management ports to hosts in batches, go to 10.
This method is recommended when the system has a large number of hosts.
– To add service management ports to hosts one by one, go to 2.
This method is recommended when the system has a small number of hosts.
Manually add a service management port to a host.
5 Select the host network port that connects to the service plane, and click Next.
PORTX indicates network port ethX on the host. For details about common network
ports on Huawei servers, see How to Identify Server Ports.
The Connection Settings page is displayed, as shown in Figure 5-12.
– VLAN ID: specifies the VLAN ID of the planned service management plane.
– Routing info: specifies the information about the route to the peer host. This parameter is required when the service management port on the local host and the service management ports on other hosts in the cluster do not belong to the same network segment and the service management port on the local host is enabled.
▪ Gateway: specifies the gateway on the network segment to which the service management port on the destination host belongs.
▪ Network destination: specifies the start IP address of the network segment to which the service management port on the destination host belongs, for example, 192.168.0.0.
▪ Netmask: specifies the subnet mask of the network segment to which the service management port on the destination host belongs.
– Select Use this port for virtualized SAN storage traffic in Available Services.
– Outbound Traffic Shaping (see the token-bucket sketch after this list)
▪ Average send bandwidth (Mbit/s): specifies the average number of bits per second to allow across a port during a certain period of time.
If a common NIC is used, the port traffic remains close to the configured average bandwidth when no burst of traffic occurs. If an iNIC is used, the average bandwidth equals the minimum bandwidth when no congestion occurs on the network. If the burst send size is set to too small a value, the network bandwidth decreases.
▪ Peak send bandwidth (Mbit/s): specifies the maximum number of bits per second to allow across a port when it is sending a burst of traffic.
The peak send bandwidth must be greater than or equal to the average send bandwidth. A properly set peak send bandwidth prevents a service with heavy traffic from congesting other VM networks. When an iNIC is used, the peak send bandwidth equals the maximum bandwidth after the burst of traffic disappears, and in idle periods the bandwidth remains around the peak send bandwidth.
▪ Burst send size (Mbits): specifies the maximum number of bits to allow in a burst.
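The three shaping parameters above map naturally onto a token bucket: the average bandwidth is the refill rate, the burst size is the bucket depth, and the peak bandwidth caps the instantaneous rate. The sketch below is a generic illustration of that model, not FusionCompute's actual implementation; it also shows why a too-small burst size throttles throughput.

class TokenBucket:
    def __init__(self, avg_mbps: float, burst_mbits: float):
        self.rate = avg_mbps         # tokens (Mbits) added per second
        self.capacity = burst_mbits  # maximum stored burst allowance
        self.tokens = burst_mbits

    def refill(self, elapsed_s: float) -> None:
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_s)

    def try_send(self, size_mbits: float) -> bool:
        # Send only if enough burst allowance has accumulated.
        if size_mbits <= self.tokens:
            self.tokens -= size_mbits
            return True
        return False

bucket = TokenBucket(avg_mbps=100, burst_mbits=50)
print(bucket.try_send(40))    # True: within the stored burst
print(bucket.try_send(40))    # False: allowance exhausted until a refill
bucket.refill(elapsed_s=0.5)  # 100 Mbit/s x 0.5 s restores 50 Mbits
print(bucket.try_send(40))    # True again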
7 Click Next.
The Confirm page is displayed.
8 Confirm the information and click Add.
A dialog box is displayed.
9 Click OK.
The service management port is added to the host.
After this step is complete, no further action is required.
Add service management ports to hosts in batches.
10 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
11 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
12 In the Operation list in the upper corner, click Add Service Management Port in
Batches.
The Add Service Management Port in Batches page is displayed, as shown in Figure
5-13.
Table 5-3 Storage space sizes required by data stores of different types
Data Store Type              Storage Space Requirement
Local (non-virtualized)      ≥ 2 GB
SAN (non-virtualized)        ≥ 2 GB
NAS
FusionStorage
Prerequisites
Conditions
• You have logged in to FusionCompute.
• You have added storage resources to the system. For details, see Adding a Storage Resource to a Site.
• If you want to add a data store provided by virtualized SAN storage: the independent network plane that carries the virtualized SAN storage traffic and the service management port that manages the virtualized SAN storage heartbeat traffic are both enabled or both disabled, and are configured as Table 5-4 requires. For details, see Changing the Network Plane that Carries Virtualized SAN Storage Traffic in the FusionCompute V100R005C10 Storage Management Guide and Adding Service Management Ports (Required Only for Virtualized SAN Storage) in the FusionCompute V100R005C10 Host and Cluster Management Guide.
Procedure
Scan storage devices.
1 On FusionCompute, click Storage Pool.
2 In the Storage Getting Started page, click Scan Storage Device.
The Scan Storage Device page is displayed.
3 Select hosts where the storage devices are to be scanned.
You can select one or multiple hosts.
4 Click OK.
An information dialog box is displayed.
5 Click OK.
You can choose System > Tasks and Logs > Operation Logs to view the task progress.
After the scan is complete, all available data stores are displayed on the Storage Pool >
Storage Device page.
NOTE
If no available storage device is displayed after the scan even though a SAN device has been associated with the host or the host uses local disks or FC SAN resources, follow the operations provided in How to Handle the Failure in Detecting Storage Devices on a FusionCompute Host During VRM Installation Process to address the issue. If abnormal or non-existent storage devices are detected, follow the operations described in A Host Detects Non-Existent Storage Devices or Storage Devices with an Incorrect Name on the FusionCompute Portal to address the issue.
store supports only VMs running certain OSs, such as Red Hat Enterprise Linux 5.4/5.5/6.1/6.2 64-bit. For details about the supported OS list, see the OS description for PVSCSI in Compatibility.
– Description
13 Click Next.
The Select Host page is displayed.
14 Select hosts to which the data store is to be added.
You can select one or multiple hosts.
15 Click Next.
The Confirm page is displayed.
16 Confirm the configuration and click Finish.
An information dialog box is displayed.
17 Click OK.
You can choose System > Tasks and Logs > Task Center to view the task progress.
Follow-up Procedure
Create disks on the data store. For details, see Creating a Disk in the FusionCompute
V100R005C10 Storage Management Guide.
----End
NOTE
• The remaining space on the local disk where the host OS is installed can be added as a data store of the local storage type. If the disk is larger than 2 TB, the system identifies it only as a 2 TB disk during the host OS installation process, so the remaining space on this disk after the OS installation is less than 2 TB. Other local disks on the host are not affected by the OS installation and can therefore provide all of their space.
• If local hard disks are to be added as data stores, it is recommended that you configure the disks in RAID 1. If customers have other requirements on RAID levels, configure the disks based on customer requirements.
• If FusionStorage is deployed, the added data stores must be created on local hard disks that are configured as RAID arrays.
Prerequisites
Conditions
You have logged in to the FusionCompute.
Data
Data preparation is not required for this operation.
Procedure
Determine the method for adding data stores.
NOTICE
Multiple sites cannot share the same data stores. Otherwise, data on the data stores may be
overwritten.
9 Select the data store to be added in the Select Storage Device list, and set the following
parameters:
– Name
– Description
– Storage mode: Non-virtualization or Virtualization. Creating a common disk on a virtualized data store takes time, but a thin provisioning disk can be created on a virtualized data store quickly, in about the same time as a common disk on a non-virtualized data store. In addition, a thin provisioning disk supports advanced features that improve storage utilization, system security, and reliability, such as thin provisioning, snapshots, and live storage migration. A common disk created on a non-virtualized data store has higher I/O performance than disks created on virtualized data stores, but such a common disk (except one created on FusionStorage, advanced SAN storage, or a local RAM disk) does not support advanced features.
10 Click OK.
The data store is added to the host.
11 Add other data stores to the host.
Repeat 8 to 10 for each data store to be added.
After this step is complete, no further action is required.
Add data stores in batches.
NOTICE
Multiple sites cannot share the same data stores. Otherwise, data on the data stores may be
overwritten.
Scenarios
After data stores are added to hosts in a service cluster, add virtual network resources, which
include distributed virtual switches (DVSs) and port groups.
A DVS provides the same functions as a physical switch. In the upstream direction, the DVS
connects to physical network ports on hosts. In the downstream direction, the DVS connects
to VMs through port groups. VMs connect to an external network through the uplinks
provided by the DVS.
The network connection mode of a port group determines how the VMs obtain IP addresses:
• If the port group connects to a subnet, the system automatically allocates IP addresses from the IP address pool to the VMs that use the port group.
• If the port group connects to a virtual local area network (VLAN), the VM users must configure IP addresses for the VMs that use the port group.
NOTE
• If VRM nodes are deployed on VMs, after the FusionCompute is installed using the installation wizard, a DVS named ManagementDVS on the management plane and a port group named managePortgroup are automatically created. The management ports on the hosts used for running VRM VMs are automatically added to the uplink group on the DVS on the management plane. The DVS and the port group on the service plane must be created manually.
• If other hosts are added to ManagementCluster and need to use ManagementDVS, add the management plane network ports of these hosts to the uplink group on ManagementDVS. For details about the operation, see Adding an Uplink.
• If VRM nodes are deployed on physical servers, the DVSs and the port groups on both the management plane and the service plane must be created manually.
• In the FusionSphere solution, if both FusionCompute and FusionManager are in use, and FusionManager is used to provision VMs, no port group on the DVS on the service plane is required.
Prerequisites
Conditions
• You have logged in to the FusionCompute.
• Hosts have been added to a cluster.
Data
Table 6-1 describes the data required for performing this operation.
Procedure
Create a DVS.
1 On the FusionCompute, choose Network Pool.
The Network Pool page is displayed.
2 Click Create DVS.
The Create DVS page is displayed, as shown in Figure 6-1.
5 Click Create.
A dialog box is displayed.
Go to 26.
6 Click Next.
7 Determine whether to add uplinks to the DVS.
– If yes, go to 8.
– If no, go to 20.
8 Determine whether to bind the uplink ports on the host to improve network reliability.
NOTE
If the host uses intelligent network interface cards (iNICs), bind the uplink network ports on the host
together. Otherwise, the broadcast suppression function of the port group may be adversely affected.
– If yes, go to 9.
– If no, go to 15.
9 Locate the row containing the target host and click Bind Network Port.
The Bind Network Port page is displayed.
10 In the Network Port list, select the uplink ports to be bound.
NOTICE
– In all load sharing modes, aggregation must be configured on the switch to which the network ports are connected; that is, the ports to be bound must be configured on the same Eth-Trunk port on the switch. Otherwise, network exceptions may occur.
– In the Link Aggregation Control Protocol (LACP) mode, create an Eth-Trunk in LACP mode on the switch to which the network ports are connected, configure the ports to be bound on the same Eth-Trunk, and enable the bridge protocol data unit (BPDU) protocol packet forwarding function on the Eth-Trunk. For example, if the switch is a Huawei S5300, run the following commands:
<S5352_01>sys
[S5352_01]interface Eth-Trunk x
[S5352_01-Eth-Trunkx]mode lacp-static
[S5352_01-Eth-Trunkx]bpdu enable
For details about how to configure port aggregation on a switch, see the switch user guide.
11 In the middle area of the page, set Name and Binding Mode for the network ports to be bound.
The following binding modes are available for common network interface cards (NICs):
– Active-backup: applies to scenarios where two network ports are to be bound. This mode provides high reliability. The bandwidth of the bound port in this mode equals that of a member port.
– Round-robin: applies to scenarios where two or more network ports are to be bound. The bandwidth of the bound port in this mode is higher than that of a member port, because the member ports share workloads in sequence.
This mode may result in data packet disorder because traffic is sent evenly to each port. Therefore, MAC address-based load balancing is preferable to round-robin among the load sharing modes.
– IP address and port-based load balancing: applies to scenarios where two or more network ports are to be bound. The bandwidth of the bound port in this mode is higher than that of a member port, because the member ports share workloads based on the source-destination-port-based load sharing algorithm (illustrated in the sketch after this list).
Source-destination-port-based load balancing algorithm: When the packets contain IP addresses and ports, the member ports share loads based on the source and destination IP addresses, ports, and MAC addresses. When the packets contain only IP addresses, the member ports share loads based on the IP addresses and MAC addresses. When the packets contain only MAC addresses, the member ports share loads based on the MAC addresses.
This mode is recommended when the virtual extensible LAN (VXLAN) function is enabled. It allows network traffic to be evenly distributed based on the source and destination port information in the packets.
– MAC address-based load balancing: applies to scenarios where two or more
network ports are to be bound. The bandwidth of the bound port in this mode is
higher than that of a member port, because the member ports share workloads based
on the MAC addresses of the source and destination ports.
This mode is recommended when most network traffic is on the layer 2 network.
This mode allows network traffic to be evenly distributed based on the MAC
addresses.
– MAC address-based LACP: This mode is developed based on the MAC address-based load balancing mode. In MAC address-based LACP mode, the bound port can use the LACP protocol to automatically detect link-layer faults and trigger a switchover if a link fails.
– IP address-based LACP: applies to scenarios where two or more network ports are to be bound. The bandwidth of the bound port in this mode is higher than that of a member port, because the member ports share workloads based on the source-destination-IP-address-based load sharing algorithm. When the packets contain IP addresses, the member ports share loads based on the IP addresses and MAC addresses. When the packets contain only MAC addresses, the member ports share loads based on the MAC addresses. In this mode, the bound port can also use the LACP protocol to automatically detect link-layer faults and trigger a switchover if a link fails.
This mode is recommended when most network traffic goes across layer 2 and layer 3 networks.
The following binding modes are available for intelligent network interface cards
(iNICs):
– Active-backup: applies to scenarios where two network ports are to be bound. This mode provides high reliability. The bandwidth of the bound port in this mode equals that of a member port.
– Source MAC address-based load balancing: applies to scenarios where two or
more network ports are to be bound. The bandwidth of the bound port in this mode
is higher than that of a member port, because the member ports share workloads
based on the MAC address of the source port.
– Destination MAC address-based load balancing: applies to scenarios where two
or more network ports are to be bound. The bandwidth of the bound port in this
mode is higher than that of a member port, because the member ports share
workloads based on the MAC address of the destination port.
This mode is recommended when most network traffic is on the layer 2 network.
This mode allows network traffic to be evenly distributed based on the MAC
addresses.
– Source IP address-based load balancing: applies to scenarios where two or more
network ports are to be bound. The bandwidth of the bound port in this mode is
higher than that of a member port, because the member ports share workloads based
on the IP address of the source port.
– Destination IP address-based load balancing: applies to scenarios where two or
more network ports are to be bound. The bandwidth of the bound port in this mode
is higher than that of a member port, because the member ports share workloads
based on the IP address of the destination port.
This mode is recommended when most network traffic is on the layer 3 network.
This mode allows network traffic to be evenly distributed based on the destination
IP addresses.
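All of the load-sharing modes above follow the same pattern: selected header fields are hashed, and the hash picks a member port, so packets of one flow always take one link. The sketch below is a generic illustration of that pattern; the field choices mirror the algorithm descriptions, while the CRC32 hash itself is an assumption.

import zlib

def pick_member(ports: list[str], **fields: str) -> str:
    # Hash whatever header fields the mode uses (MACs, IPs, L4 ports).
    key = "|".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return ports[zlib.crc32(key.encode()) % len(ports)]

bond = ["eth0", "eth1"]
# IP address and port-based mode: IPs, L4 ports, and MACs feed the hash.
print(pick_member(bond, src_mac="aa:bb", dst_mac="cc:dd",
                  src_ip="10.0.0.1", dst_ip="10.0.0.2",
                  src_port="49152", dst_port="3260"))
# MAC address-based mode: only the MAC addresses feed the hash.
print(pick_member(bond, src_mac="aa:bb", dst_mac="cc:dd"))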
12 Click Bind.
An information dialog box is displayed.
13 Click OK.
The Bind Network Port page is displayed.
14 Click OK.
The Add Uplink page is displayed.
15 Check whether to configure virtual tunnel end point (VTEP) for the VXLAN used on
FusionManager.
– If yes, go to 16.
NOTE
When you configure the VXLAN function, allocate the IP address from the VTEP network to the
software router, so that the software router can communicate with VTEPs on hosts. For details,
see the VXLAN chapter in the FusionManager V100R005C10 Administrator Guide.
– If no, go to 19.
16 Locate the row that contains the host and click Configure VTEP.
The Configure VTEP page is displayed.
17 Configure VTEP information.
– IP: specifies the IP address planned for the VTEP.
NOTE
The following conditions must be met when you configure the IP address of the VTEP (a validation sketch follows this list):
▪ The IP address of the VTEP cannot be in the same network segment as that of other system interfaces on the same host.
▪ The IP address of the VTEP cannot be in the same network segment as that of other VTEPs on the same host.
▪ The IP address of the VTEP must be unique.
– Subnet mask: specifies the subnet mask of the VTEP.
– Gateway: specifies the gateway address of the VTEP.
– Outer VLAN: specifies the VLAN to be used by the VTEP. The VLAN must be
different from the VLANs used by the management, storage, and service planes.
– LLDP: specifies the LLDP service. If this service is enabled, the host topology can
be reported to the switch using the LLDP protocol.
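A sketch of the VTEP addressing rules in the note above: the address must be unique and must not share a network segment with other system interfaces or VTEPs on the same host. The interface data is illustrative.

import ipaddress

def vtep_ip_ok(vtep_cidr: str, existing_cidrs: list[str]) -> bool:
    # vtep_cidr like "192.168.50.10/24"; existing_cidrs covers the host's
    # other system interfaces and VTEPs.
    vtep = ipaddress.ip_interface(vtep_cidr)
    for other in map(ipaddress.ip_interface, existing_cidrs):
        if vtep.ip == other.ip:                   # must be unique
            return False
        if vtep.network.overlaps(other.network):  # no shared segment
            return False
    return True

host_ifaces = ["192.168.10.5/24", "192.168.20.5/24"]  # e.g. management, storage
print(vtep_ip_ok("192.168.50.10/24", host_ifaces))    # True
print(vtep_ip_ok("192.168.10.99/24", host_ifaces))    # False: same segment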
18 Click OK.
The Add Uplink page is displayed.
19 Select the DVS ports to be connected to the hosts, and click Next, as shown in Figure
6-2.
Select the Mgnt_Aggr port on each host for the creation of the DVS on the management
plane.
22 Set the VLAN pool parameters including Start VLAN ID and End VLAN ID.
23 Click OK.
24 Click Next.
The Confirm page is displayed.
25 Confirm that all the information is correct and click Create.
A dialog box is displayed.
26 Determine whether to create another DVS.
– If yes, go to 27.
– If no, go to 28.
27 Click Create Another to create another DVS.
28 Click OK.
The DVS is created.
Create a port group.
NOTE
In the FusionSphere solution, if both FusionCompute and FusionManager are in use, and FusionManager is used to provision VMs, no port group on the DVS on the service plane is required. A DVS on the management plane requires a port group of the VLAN mode.
29 In the navigation tree on the left side of the Network Pool page, expand the Network
Pool, right-click the DVS, and choose Create Port Group.
The Basic Information page is displayed, as shown in Figure 6-4.
– IP-MAC binding: binds the IP address and MAC address of the VM that uses the port group. This function enhances VM network security because it prevents users from initiating IP address or MAC address spoofing attacks after changing the IP address or MAC address of the VM NIC. This parameter is valid only when Port type is set to Access. Do not enable this function if a VM NIC is configured with multiple IP addresses, because this function may cause communication exceptions for some IP addresses of this NIC.
NOTE
Declaration: This is a security feature. It enhances end user data security.
– TCP checksum calculation: enables FusionCompute to automatically calculate the TCP checksum when VMs in this port group receive packets. Enable this function only when checksum accuracy has high priority, because it may degrade VM network receive performance.
31 Click Next.
The Network Connection page is displayed.
32 Perform the required operation based on the selected port type.
– If the port is an access port, go to 34.
– If the port is a trunk port, go to 33.
33 Enter the allowed VLAN range in the VLAN text box and go to 42.
Note the following requirements for specifying the VLAN parameter:
– The entered value can contain 1 to 2047 characters.
– Enter single VLAN IDs or VLAN ID ranges.
– VLAN IDs can range from 1 to 4094 and must be within the VLAN pool of the
DVS.
– When entering a VLAN ID range, use the format A-B, where A is less than B.
– When entering multiple VLAN IDs or VLAN ID ranges, use commas (,) to separate
the IDs or ranges.
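For example, to allow VLAN 2, VLANs 10 to 20, and VLANs 100 to 200 on the trunk port, enter the following (illustrative IDs; they must lie within the VLAN pool of the DVS):
2,10-20,100-200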
34 Select the network connection mode for the port group.
Select VLAN for the port group on the management plane.
– If IP address pool is selected, go to 35.
– If VLAN is selected, go to 40.
35 Click Add IP Address Pool.
A dialog box is displayed, as shown in Figure 6-5.
– Name
– Description
– Subnet
– Subnet mask
– Gateway
– Reserved IP segment
– Domain name
– Preferred/Alternate DNS server
– Preferred/Alternate WINS server
– VLAN ID
37 Click OK.
An information dialog box is displayed.
38 Click OK.
A subnet is added.
39 Select the newly added subnet on Network Connection and click Next.
The Confirm page is displayed.
Go to 43.
40 Set Connection mode to VLAN, as shown in Figure 6-6.
Additional Information
Related Tasks
None
Related Concepts
Principles of VM Network Access
A virtual NIC of a VM communicates with an external network by connecting to the DVS
through the port group, then by connecting to the physical NIC of a host through the DVS
uplink. These connections are shown in the following figure, as shown in Figure 6-7.
(Figure 6-7 shows the connection path: VM > port group > DVS (virtual resources) > uplink > physical NIC (physical resources) > physical network.)
Port group A port group is a virtual logical port similar to a template with
network attributes. A port group is used to define VM NIC attributes
and uses a DVS to connect to the network:
l Subnet: FusionCompute automatically allocates an IP address in
the subnet IP address pool to each NIC on VMs that use the port
group.
l VLAN: Users must manually assign IP addresses to VM NICs.
VMs connect to the VLAN defined by the port group.
Scenarios
On FusionCompute, configure a Domain Name Server (DNS) to convert domain names of the
Network Time Protocol (NTP) server to corresponding IP addresses.
Prerequisites
Conditions
l You have logged in to FusionCompute.
l The DNS server communicates with the management IP network of each
FusionCompute node properly.
Procedure
----End
Scenarios
Configure a third-party File Transfer Protocol (FTP) server to back up important data on the
Virtualization Resource Management (VRM) node. After the FTP server is configured, the
VRM automatically sends important data to the FTP server at 02:00:00 every day. If a system
exception occurs, the backup data can be used to restore the system.
Skip this operation if the site does not have an FTP backup server.
Prerequisites
Conditions
You have logged in to the FusionCompute.
Data
Table 8-1 describes the data required for performing this operation.
Procedure
1 On the FusionCompute, choose System > Service Configuration > Service & Mgt.
Nodes.
The Service & Mgt. Nodes page is displayed.
2 In the Service list area, locate the row that contains VRM service, click More, and
select Configure Management Data Backup.
A dialog box is displayed, as shown in Figure 8-1.
3 Click Backup to other FTP server and set the following parameters:
– Username: Enter the username for logging in to the FTP server.
– Password: Enter the password for logging in to the FTP server.
– IP address: Enter the IP address of the FTP server.
– Port: Enter the communication port used by the FTP server.
– Protocol type: You are advised to select ftps to enhance file transmission security.
If the FTP server does not support the FTPS protocol, select ftp.
– Backup path: Enter the relative path in which the backup files are stored.
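For example (all values here are illustrative only): set Username to ftpbackup, IP address to 192.168.100.50, Port to 21, Protocol type to ftps, and Backup path to vrmbackup.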
4 Click OK.
A message is displayed indicating that the configuration is successful.
5 Click OK.
----End
Scenarios
On FusionCompute, configure the time zone and the time synchronization data. After the
configuration, all Virtualization Resource Management (VRM) nodes and existing hosts
synchronize time with the Network Time Protocol (NTP) server. Hosts added after the NTP
server configuration do not synchronize with the NTP server unless you configure time
synchronization data for the hosts.
You are advised to configure an external clock source. If no external clock source is available,
configure the VRM node (in physical deployment) or the host where the VRM VM is running
(in virtualization deployment) as the clock source.
If the external clock source is w32time, configure the NTP clock source by following the
steps provided in How to Configure Time Synchronization Between the System and a
w32time-type NTP Server.
If the external clock source is a Linux time server, set a host or a VRM node when the VRM
node is deployed on a physical server as the internal clock source, and configure the internal
clock source to enable it to synchronize time with the external clock source. For details, see
How to Configure Time Synchronization Between the System and a Host or VRM Node
(NTP Server) When an External Linux Clock Source Is Used.
The time zone information set on the FusionCompute determines the time information
displayed in exported alarms. The time displayed on FusionCompute remains identical to the
time set on the browser.
Prerequisites
Conditions
l Network communication between the clock source and FusionCompute is normal.
l If multiple NTP servers need to be deployed, all the NTP servers use the same upper-
layer clock source.
l You have logged in to the FusionCompute.
l If the NTP server domain name is to be used, a domain name server (DNS) must be
available. For details about DNS server configuration, see System Configuration > Configuring
Procedure
Configure the NTP clock source.
1 Choose System.
The Service Management page is displayed.
2 In the navigation tree on the left, choose System Configuration > Time Management.
3 Configure the NTP clock source in the Time Management area, as shown in Figure
9-1.
NOTICE
If multiple NTP servers need to be deployed, all the NTP servers use the same upper-
layer clock source.
– NTP server: Enter up to three NTP server IP addresses or domain names. If you
enter a domain name to configure the NTP server, ensure a DNS server is available.
If no external NTP server is deployed, configure this parameter based on the
following deployment scenarios:
n VRM node in physical deployment: Set this parameter to the management IP
address of the active VRM node.
n VRM node in virtualization deployment: Set this parameter to the management
IP address of the host accommodating the active VRM node.
NOTE
If no external NTP server is deployed, rectify the system time on the node that serves as the
NTP server first. For details, see Manually Changing the System Time on a Node.
– Synchronization interval (s)
5 Click Save.
A dialog box is displayed.
6 Click OK.
A dialog box is displayed.
7 Click OK.
Time synchronization is configured.
NOTE
The configuration takes effect only after the FusionCompute service processes restart, which
results in service interruption. Proceed with the subsequent operation only after the service
processes restart.
12 Click OK.
The time zone is configured.
NOTE
The configuration takes effect only after the FusionCompute service processes restart, which
results in service interruption. Proceed with other operations only after the service processes
restart.
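After the service processes restart, you can optionally verify that a node is synchronizing with the clock source. As a rough sketch, assuming the standard ntpq tool is present on the node, log in to the node and run:
ntpq -p
A reachable NTP server is marked with an asterisk (*) in the command output.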
----End
Scenarios
To ensure network security in certain circumstances, the management ports and service ports
on hosts are connected to different physical switches to implement isolation between the
service plane and the management plane. If the Virtualization Resource Management (VRM)
node is deployed on a VM, the VRM VM cannot directly communicate with the service plane
and therefore cannot provide Dynamic Host Configuration Protocol (DHCP) services for
subnets on the service plane. As a result, the VMs on the service plane cannot automatically
obtain IP addresses.
To resolve this issue, configure a second network interface card (NIC) for the VRM VM and
connect the NIC to the service plane.
This operation applies only to VRM nodes working in active/standby mode, because the
VRM VM must be stopped during the configuration process.
Prerequisites
Conditions
l The VRM nodes work in active/standby mode.
l You have obtained the passwords of user root for logging in to the active and standby
VRM nodes. If the VRM nodes are installed using the FusionCompute installation
wizard, the default passwords of user root are both Huawei@CLOUD8!.
l You have obtained the passwords of user root for logging in to the hosts on which the
VRM VMs run.
l You have logged in to the FusionCompute.
l You have configured the DHCP relay for the service plane VLAN on the aggregation
switch or firewall. The DHCP server IP address is the IP address of the VRM VM NIC
connected to the service plane.
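The exact DHCP relay configuration depends on the aggregation switch or firewall model. As a rough sketch on a Huawei VRP-based aggregation switch (VLAN 60 and the relay server address 192.168.60.10 are illustrative values only):
system-view
dhcp enable
interface Vlanif 60
dhcp select relay
dhcp relay server-ip 192.168.60.10
For details, see the documentation of the aggregation switch or firewall.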
Data
The following data has been planned:
l VLAN through which the VRM VM NICs connect to the service plane
l IP address the VRM VM NICs use to connect to the service plane
The active and standby VRMs use the same service plane IP address.
NOTICE
The service plane IP address of the VRM node cannot conflict with any subnet segments
that are created or to be created, including the subnet that contains the reserved service
plane IP address.
Procedure
Create a port group for connecting the distributed virtual switch (DVS) and the VRM VM NICs over
the service plane.
1 Check whether a DVS has been created for the service plane.
– If yes, go to 3.
– If no, go to 2.
2 Create a DVS.
When creating the DVS, add the service plane NICs on the hosts to the DVS to create
uplinks between the DVS and the hosts.
For details, see Create a DVS in Adding Virtual Network Resources to a Site.
3 Create the port group on the DVS.
Set the following parameters for the port group:
– Rate limiting: Set it to Disable.
– DHCP quarantine: Deselect it.
– Connection mode: Set it to VLAN.
– VLAN ID: Enter the VLAN ID planned for the VRM VM NIC that connects to the
service plane.
For details, see Create a port group in Adding Virtual Network Resources to a Site.
Add a NIC to the VRM VMs.
4 On the FusionCompute, choose System > System Configuration > Services & Mgt.
Nodes, and make a note of the active and standby VRM nodes and their IP addresses.
5 Choose VM and Template, and make a note of the VM IDs for the active and standby
VRM nodes on the VM page.
6 On the VM page, click the name of the VRM VM.
The Summary page is displayed.
7 On the Hardware page, click NIC and Add NIC.
A dialog box is displayed.
8 Select the DVS and port group for the VRM VM, and click OK.
An information dialog box is displayed.
9 Click OK.
10 Repeat 6 to 9 to add a NIC to the other VRM VM.
Configure the VRM VM NICs that connect to the service plane.
11 Use PuTTY to log in to the standby VRM.
Ensure that the management IP address and username gandalf are used to establish the
connection.
The default password of user gandalf is Huawei@CLOUD8.
12 Run the following command and enter the password of user root to switch to user root:
su - root
13 Run the following command to disable logout on timeout:
TMOUT=0
14 Run the following command to set the NIC IP address for the service plane:
sh /opt/galax/vrm/tomcat/script/setDHCPIntf.sh ethID IP netmask gateway
– ethID: indicates the name of the NIC. For example, set it to eth1. You can query
the NIC name by running the ifconfig -a command.
– IP: indicates the IP address planned for the VRM NIC that connects to the service
plane. The IP addresses of the active and standby VRM VM NICs are the same.
– netmask: indicates the subnet mask of the VRM VM service plane.
– gateway: indicates the gateway IP address of the VRM VM service plane.
For example, if the VRM NIC IP address for the service plane is 192.168.60.10, the
subnet mask is 255.255.255.0, and the gateway IP address is 192.168.60.1, run the
following command:
sh /opt/galax/vrm/tomcat/script/setDHCPIntf.sh eth1 192.168.60.10 255.255.255.0
192.168.60.1
15 Run the following command to check whether the configuration is successful:
cat /etc/sysconfig/network/ifcfg-ethID
ethID: indicates the name of the NIC that connects to the service plane.
The configuration is successful if information similar to the following is displayed:
BOOTPROTO='static'
DEVICE='eth1'
IPADDR='192.168.60.10'
NETMASK='255.255.255.0'
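In addition, assuming the ifconfig tool mentioned earlier is available on the VRM node, you can check the runtime state of the NIC:
ifconfig eth1
If the script has brought the NIC up, the configured IP address appears in the output.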
16 Check whether the information displayed is consistent with the data plan.
If the information displayed is inconsistent with the data plan, repeat 14.
17 Run the following command to check whether the scheduled task specified by the
startDHCP.sh script exists:
cat /etc/crontab
The scheduled task exists if information similar to the following is displayed:
* * * * * root sh /opt/galax/vrm/tomcat/script/startDHCP.sh eth1 192.168.60.1
192.168.60.10 255.255.255.0 >> /dev/null 2>&1
* * * * * root sleep 10; sh /opt/galax/vrm/tomcat/script/startDHCP.sh eth1
192.168.60.1 192.168.60.10 255.255.255.0 >> /dev/null 2>&1
* * * * * root sleep 20; sh /opt/galax/vrm/tomcat/script/startDHCP.sh eth1
192.168.60.1 192.168.60.10 255.255.255.0 >> /dev/null 2>&1
* * * * * root sleep 30; sh /opt/galax/vrm/tomcat/script/startDHCP.sh eth1
Additional Information
Related Tasks
To resume the communication between the service plane and the management plane on a
VRM node, cancel the isolation and modify VRM configurations. For details about the
cancelation, see How to Cancel the Isolation Between the Service Plane and the
Management Plane on a VRM Node.
Scenarios
After the FusionCompute is installed, configure media access control (MAC) address
segments. Each VM must be assigned a unique MAC address. If the default MAC address
segment is used, skip this operation.
The FusionCompute provides 100,000 MAC addresses for users, ranging from
28:6E:D4:88:B2:A1 to 28:6E:D4:8A:39:40. In this segment, the first 5000 MAC addresses
(from 28:6E:D4:88:B2:A1 to 28:6E:D4:88:C6:28) are reserved for Virtualization Resource
Management (VRM) VMs.
A maximum of five MAC address segments can be configured. You can change the MAC
address segment configured by default or add new MAC address segments. The MAC address
segments cannot overlap.
Prerequisites
Conditions
You have logged in to the FusionCompute.
Data
The MAC address segments for user VMs have been planned.
NOTE
The MAC segments to be configured cannot contain any of the reserved 5000 MAC addresses.
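For example, because the reserved segment ends at 28:6E:D4:88:C6:28, a user VM segment could start at 28:6E:D4:88:C6:29 (illustrative; any segment is acceptable as long as it avoids the reserved addresses and does not overlap another configured segment).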
Procedure
12 Appendix
12.1 FAQs
12.2 Common Operations
12.1 FAQs
Symptom
On the FusionCompute, a host cannot detect storage devices.
Possible Causes
A host cannot detect the storage devices in storage resources in either of the following
scenarios:
l When a host uses a local disk as the storage device, the disk has residual partition or
logical volume manager (LVM) information.
l When a host uses a storage area network (SAN) device as the storage device, LUNs on
the SAN device have residual partition or LVM information.
Fault Diagnosis
Manually delete residual information from the undetected storage device.
Prerequisites
Prerequisites
l You have obtained the management IP address of the host.
l You have obtained the login passwords of user gandalf and user root.
Data
Data preparation is not required for this operation.
Procedure
Check whether the undetected storage device is mapped to the host.
1 Use PuTTY to log in to the operating system (OS) of the host.
Ensure that the management IP address and username gandalf are used to establish the
connection.
2 Run the following command and enter the password of user root to switch to user root:
su - root
3 Run the following command to disable logout on timeout:
TMOUT=0
4 Check whether the undetected storage device serves as a shared disk.
– If yes, go to 5.
– If no, go to 9.
5 Determine which multipathing mode is used as the storage path of the shared disk.
– If the universal multipathing mode is used, go to 6.
– If the Huawei multipathing mode is used, go to 7.
6 Run the following command to check whether the undetected storage device is mapped
to the host:
multipath -ll
– If yes, obtain the world wide name (WWN) of the undetected storage device based
on its name, make a note of the WWN (for example,
6925805100a122002ae31e4e0000006e), and go to 8. In the command output, the
displayed device name is the WWN with a leading identifier digit (for example,
36925805100a122002ae31e4e0000006e).
– If no, check whether the storage device is configured correctly. No further action is
required.
Information similar to the following is displayed:
36925805100a122002ae31e4e0000006e dm-6 HUASY,S5600T
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 7:0:0:8 sdi 8:128 active ready running
|- 8:0:0:8 sdq 65:0 active ready running
|- 9:0:0:8 sdy 65:128 active ready running
|- 10:0:0:8 sdag 66:0 active ready running
|- 11:0:0:8 sdao 66:128 active ready running
|- 12:0:0:8 sdaw 67:0 active ready running
|- 13:0:0:8 sdbe 67:128 active ready running
`- 14:0:0:8 sdbm 68:0 active ready running
36925805100a1220006b8b70500000064 dm-4 HUASY,S5600T
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 7:0:0:2 sde 8:64 active ready running
|- 8:0:0:2 sdm 8:192 active ready running
|- 9:0:0:2 sdu 65:64 active ready running
|- 10:0:0:2 sdac 65:192 active ready running
|- 11:0:0:2 sdak 66:64 active ready running
|- 12:0:0:2 sdas 66:192 active ready running
|- 13:0:0:2 sdba 67:64 active ready running
`- 14:0:0:2 sdbi 67:192 active ready running
7 Run the following command to check whether the undetected storage device is mapped
to the host:
upadmin show vlun
– If yes, obtain the WWN of the undetected storage device based on its name, make a
note of the WWN (for example, 6925805100a12200000af4ee00000012), and go to
8.
– If no, check whether the storage device is configured correctly. No further action is
required.
Information similar to the following is displayed:
-------------------------------------------------------------------------------------------------------------------------
Vlun ID  Disk  Name                Lun WWN                           Status  Capacity  Ctrl(Own/Work)  Array Name
0        sdc   LUN_019             6925805100a12200000af4ee00000012  Normal  500.00GB  0A/0A           SN_210235G6EAZ0B4000006
1        sdd   LUN_020             6925805100a12200000af53300000013  Normal  500.00GB  0B/0B           SN_210235G6EAZ0B4000006
2        sde   LUN004--test        6925805100a1220006b8b70500000064  Normal  10.00GB   0A/0A           SN_210235G6EAZ0B4000006
3        sdf   LUN_BRM_C02_03_002  6925805100a122000e8b90da00000075  Normal  20.00GB   0B/0B           SN_210235G6EAZ0B4000006
4        sdg   qr_lun004           6925805100a122000012d6a100000072  Normal  10.00GB   0A/0A           SN_210235G6EAZ0B4000006
5        sdh   qr_lun002           6925805100a122002ae4e1f60000006f  Normal  10.00GB   0A/0A           SN_210235G6EAZ0B4000006
6        sdi   qr_lun001           6925805100a122002ae31e4e0000006e  Normal  20.00GB   0A/0A           SN_210235G6EAZ0B4000006
7        sdj   qr_lun_r2_001       6925805100a122000171e05800000077  Normal  20.00GB   0A/0A           SN_210235G6EAZ0B4000006
-------------------------------------------------------------------------------------------------------------------------
NOTE
The local device sda has 10 partitions, and only the sda10 partition can be recognized as a storage
device on FusionCompute.
10 Run the following command to check whether the undetected device has the physical
volume (PV) information:
pvdisplay /dev/Name of the logical device
– If yes, information similar to the following is displayed. Then, go to 11.
– If no, go to 25.
This command uses the logical device sdb (/dev/sdb) as an example.
--- Physical volume ---
PV Name /dev/sdb
VG Name 3
PV Size 931.51 GiB / not usable 1.71 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 238467
Free PE 387
Allocated PE 238080
PV UUID LZJQFe-lrRT-hvnc-81RN-Y2Dz-Dres-CkXVTc
11 Check whether the command output in 10 contains the VG Name information, for
example, VG Name 3:
– If yes, go to 12.
– If no, go to 23.
12 Run the following command to obtain LV Name based on VG Name:
lvdisplay
This command takes VG Name 3 as an example. If multiple LV Name results are
displayed for VG Name 3, make a note of all the logical volume (LV) names.
Check whether the command output contains at least one LV name in the
format /dev/VG/LV, for example, LV Name /dev/3/3.
– If yes, go to 13.
– If no, go to 21.
--- Logical volume ---
LV Name /dev/3/3
VG Name 3
LV UUID vizADs-wqLO-ct34-4gp5-LieF-HZ7s-Rza12a
LV Write Access read/write
LV Status available
# open 1
LV Size 930.00 GiB
Current LE 238080
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Block device 253:9
13 Run the following command to delete the LV based on the obtained LV name:
lvremove /dev/VG/LV
For example, run lvremove /dev/3/3.
Check whether the command output contains Can't remove open logical volume.
– If yes, go to 14.
– If no, go to 18.
14 Run the following command to obtain the name of the undetected storage device based
on the logical device name:
ll /dev/disk/by-id/ | grep Name of the logical device
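Information similar to the following may be displayed (illustrative output; the name consists of an identifier prefix such as scsi-3 followed by the WWN):
lrwxrwxrwx 1 root root 9 Nov 11 10:00 scsi-36925805100a122002ae31e4e0000006e -> ../../sdb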
NOTICE
Do not delete partitions from the system disk when deleting host partitions. Otherwise, the
host becomes unavailable and its OS must be reinstalled.
By default, /dev/sda is the system disk. However, this changes when the system uses another
disk as the system disk (for example, when the host OS is installed on an internal USB disk), or
when the user changes the system disk during host installation. Therefore, you must
distinguish the system disk from user disks when deleting host partitions.
25 Run the following command to view the partition information of the undetected device
on the host based on the logical device name:
fdisk -l /dev/Name of the logical device
Information similar to the following is displayed:
Disk /dev/sdb: 300.0 GB, 300000000000 bytes
256 heads, 63 sectors/track, 36330 cylinders, total 585937500 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
– If information similar to the following is displayed, the disk has only one partition,
and the partition has been deleted. Then, go to 30.
Selected partition 1
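NOTE
The deletion itself is performed with interactive fdisk commands in the steps omitted here. As a rough sketch (illustrative, using /dev/sdb): run fdisk /dev/sdb, enter d to delete the partition (the Selected partition 1 message above is the response displayed when the disk has only one partition), and enter w to write the change and exit.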
12.1.2 How to Cancel the Isolation Between the Service Plane and
the Management Plane on a VRM Node
Scenarios
To resume the communication between the service plane and the management plane on a
Virtualization Resource Management (VRM) node, cancel the isolation between them.
After the cancelation, the VRM node uses its management plane network interface card (NIC)
to provide the Dynamic Host Configuration Protocol (DHCP) service to subnets in the
FusionCompute system again.
Prerequisites
l The isolation between the service plane and the management plane on the VRM node has
been implemented.
l The active and standby VRM nodes are running properly.
l You have obtained the passwords for users gandalf and root of the active and standby
VRM nodes.
l You have obtained the passwords of users gandalf and root for logging in to the hosts on
which the VRM VMs run.
Procedure
Cancel the isolation between the service plane and the management plane on the VRM node.
1 On the FusionCompute, choose System > System Configuration > Services & Mgt.
Nodes, and make a note of the active and standby VRM nodes and their IP addresses.
2 Use PuTTY to log in to the active VRM node.
Ensure that the management IP address and username gandalf are used to establish the
connection.
3 Run the following command and enter the password of user root to switch to user root:
su - root
4 Run the following command to disable logout on timeout:
TMOUT=0
5 Run the following command to cancel the isolation between the service plane and the
management plane on the VRM node:
sh /opt/galax/vrm/tomcat/script/clear_isolation.sh
The command is executed if the following information is displayed:
clear isolation done.
If you do not want to implement the isolation between the management plane and the service
plane, delete the service plane NIC. Otherwise, skip this operation.
10 Determine whether to delete the service plane NICs on the VRM VMs.
– If yes, go to 11.
– If no, no further action is required.
11 Log in to the FusionCompute.
12 Select VM and Template and click the active VRM VM on the VM page.
The Summary page of the VM is displayed.
NOTE
Make a note of the active VM ID on the Summary page. This ID will be used when you back up
the VRM node information.
13 On the Hardware page, click NIC, locate the row that contains the service plane NIC,
click Operation, and click Delete NIC.
A dialog box is displayed.
14 Click OK.
The deletion starts and a dialog box is displayed.
15 Click OK.
View the task progress on the Task Tracing page and ensure that the NIC is successfully
deleted.
16 Delete the service plane NIC on the standby VRM VM. For details, see 11 to 15.
Update the VRM node information.
17 In the PuTTY window on the active VRM node, run the following commands to update
the active and standby VRM nodes information:
perl /opt/galax/vrm/tomcat/script/vrmWindowsInstall/bin/importVrmDb.pl -g -i
Active VRM VM ID
perl /opt/galax/vrm/tomcat/script/vrmWindowsInstall/bin/importVrmDb.pl -g -i
Standby VRM VM ID
For example, if the active and standby VRM VM IDs are i-00000001 and i-00000002,
run the following commands:
perl /opt/galax/vrm/tomcat/script/vrmWindowsInstall/bin/importVrmDb.pl -g -i
i-00000001
perl /opt/galax/vrm/tomcat/script/vrmWindowsInstall/bin/importVrmDb.pl -g -i
i-00000002
VRM VM configuration files named VM ID.xml are generated in the /home folder.
18 Run the following commands in sequence to copy the active and standby VM
configuration files to the hosts on which the active and standby VMs run, respectively:
scp /home/Active VRM VM ID.xml gandalf@Host IP address:/home/GalaX8800
scp /home/Standby VRM VM ID.xml gandalf@Host IP address:/home/GalaX8800
For example, if the VM ID and host IP address for the active VRM node are i-00000001
and 192.168.200.21, respectively, and the VM ID and host IP address for the standby
VRM node are i-00000002 and 192.168.200.22, respectively, run the following
commands in sequence:
scp /home/i-00000001.xml gandalf@192.168.200.21:/home/GalaX8800
scp /home/i-00000002.xml gandalf@192.168.200.22:/home/GalaX8800
----End
Scenarios
A Fibre Channel storage area network (FC SAN) initiator is used to map hosts and FC SAN
storage devices using world wide names (WWNs), which are generated after the storage
devices are associated with hosts. This section describes how to obtain the WWN of the host
and configure the FC SAN initiator.
Prerequisites
Conditions
l The host has been added to the FusionCompute.
l You have logged in to the FusionCompute.
l You have configured the logical host (group) and logical unit numbers (LUNs) on the
storage management system, including creating a logical host (group), dividing LUNs,
and configuring the mapping between LUNs and the logical host (group).
Data
Data preparation is not required for this operation.
Procedure
----End
Scenarios
An IP storage area network (SAN) initiator is used to map hosts and IP SAN storage devices
using world wide names (WWNs), which are generated after the storage devices are
associated with hosts.
This section uses the S5500T storage device and the ISM V100R005C00SPC012 storage
management system as an example to describe how to configure the IP SAN initiator. For
more details, see the documentation delivered with the storage device.
Prerequisites
Conditions
l You have logged in to the storage management system, and the storage devices have
been discovered.
l You have obtained the host WWN.
l You have configured the logical host (group) and logical unit numbers (LUNs) on the
storage management system, including creating a logical host (group), dividing LUNs,
and configuring the mapping between LUNs and the logical host (group).
Data
Data preparation is not required for this operation.
Procedure
1 In the navigation tree on the left in the Oceanspace ISM window, choose All Devices,
select the storage device, click SAN Services, click Mappings, and click Hosts.
2 In the list displayed in the right pane, select the host that is associated with the storage
device.
3 Click Initiator Configuration on the menu bar.
A dialog box is displayed.
4 Click Add.
A dialog box is displayed.
5 Select the initiator based on the WWN value, and click OK.
An information dialog box is displayed.
6 Click OK.
The Result information dialog box is displayed.
7 Check whether the initiator is successfully added.
– If yes, click Close and go to 9.
– If no, click Close and perform 5 to 7 again. If the initiator still fails to be added
after several attempts, go to 8.
8 Disassociate storage resources that are associated with the host. For details, see
Disassociating a Storage Resource from a Host. After the storage resources are
disassociated, perform 5 to 7 again.
9 Close the Initiator Configuration dialog box.
10 If 8 has been performed, associate the storage resources with the host. For details,
see Associating Storage Resources with a Host.
----End
Scenarios
If the system uses advanced storage area network (SAN) devices, you must create a
management account in addition to admin. This account is used to connect to the storage
devices when the FusionCompute is installed.
This section uses the OceanStor S5500T as an example to describe how to create a
management account for advanced SAN devices.
Prerequisites
Conditions
You have obtained the management IP address of the advanced SAN device controller.
Data
You have obtained the username and password for the account to be created.
Procedure
1 In the browser address box, enter http://management IP address of the advanced SAN
device controller/start.html, and press Enter.
The OceanStor Integrated Storage Management (ISM) login page is displayed.
2 Enter the username and password, and click Login.
The default username is admin, and the password is Admin@storage.
The OceanStor ISM page is displayed, as shown in Figure 12-1.
10 Click OK.
The management account is created.
----End
NOTE
By default, MM1, located on the left of the subrack, works as the active management module of the
E6000, and MM2, on the right of the subrack, works as the standby management module. You can
determine the active and standby management modules based on the indicators on the front panel of the
management modules.
l After the active management module is powered on, the ACT indicator is steady green.
l After the standby management module is powered on, the ACT indicator blinks green at 0.5 Hz.
Figure 12-9 shows port 23 of switch module A1 in the E6000. The NX112 switch module is
used as an example.
If the NX113 switch module is used in the E6000, connect the local computer to the
corresponding port on the switch module. Ensure that the connected port on the switch module
and the host management plane belong to the same VLAN.
12.1.7 Compatibility
For details about the compatibility of servers, I/O devices, storage devices, and operating
systems (OSs), log in to the compatibility check assistant.
Scenarios
On FusionCompute, bind network ports on a host to improve network reliability.
NOTE
If the host uses intelligent network interface cards (iNICs), bind the uplink network ports on the host together.
Otherwise, the broadcast suppression function of the port group may be adversely affected.
Prerequisites
Conditions
l You have logged in to FusionCompute.
l The host has been added to a cluster.
Procedure
Determine the method of binding network ports.
1 Determine the method of binding network ports.
5 In the Network Port list, select the physical network ports to be bound.
6 In the middle of the page, set Name and Binding Mode for the network ports.
NOTICE
– In all load sharing modes, aggregation must be configured on the switch to which
network ports are connected, that is, the ports to be bound must be configured on the
same Eth-trunk port on the switch. Otherwise, network exception may occur.
– In the Link Aggregation Control Protocol (LACP) mode, create an Eth-trunk in
LACP mode on the switch to which the network ports are connected, configure the
ports to be bound on the same Eth-trunk, and enable the bridge protocol data unit
(BPDU) packet forwarding function on the Eth-trunk. For example, if the switch is
a Huawei S5300, run the following commands:
<S5352_01>sys
[S5352_01]interface Eth-Trunk x
[S5352_01-Eth-Trunkx]mode lacp-static
[S5352_01-Eth-Trunkx]bpdu enable
For details about how to configure port aggregation on a switch, see the switch user
guide.
The following binding modes are available for common network interface cards (NICs):
– Active-backup: applies to scenarios where two network ports are to be bound. This
mode provides high reliability. The bandwidth of the bound port in this mode equals
that of a member port.
– Round-robin: applies to scenarios where two or more network ports are to be
bound. The bandwidth of the bound port in this mode is higher than that of a
member port, because the member ports share workloads in sequence.
This mode may result in data packet disorder because traffic is evenly sent to each
port. Therefore, MAC address-based load balancing is preferred over round-robin
among the load sharing modes.
– IP address and port-based load balancing: applies to scenarios where two or
more network ports are to be bound. The bandwidth of the bound port in this mode
is higher than that of a member port, because the member ports share workloads
based on the source-destination-port-based load sharing algorithm.
Source-destination-port-based load balancing algorithm: When the packets
contain IP addresses and ports, the member ports share loads based on the source
and destination IP addresses, ports, and MAC addresses. When the packets contain
IP addresses, the member ports share loads based on the IP addresses and MAC
addresses. When the packets contain only MAC addresses, the member ports share
loads based on the MAC addresses.
This mode is recommended when the virtual extensible LAN (VXLAN) function is
enabled. This mode allows network traffic to be evenly distributed based on the
source and destination port information in the packets.
– MAC address-based load balancing: applies to scenarios where two or more
network ports are to be bound. The bandwidth of the bound port in this mode is
higher than that of a member port, because the member ports share workloads based
on the MAC addresses of the source and destination ports.
This mode is recommended when most network traffic is on the layer 2 network.
This mode allows network traffic to be evenly distributed based on the MAC
addresses.
– MAC address-based LACP: This mode is developed based on the MAC address-
based load balancing mode. In MAC address-based LACP mode, the bound port
can automatically detect link-layer faults using the LACP protocol and trigger a
switchover if a link fails.
– IP address-based LACP: applies to scenarios where two or more network ports are
to be bound. The bandwidth of the bound port in this mode is higher than that of a
member port, because the member ports share workloads based on the source-
destination-IP-address-based load sharing algorithm. When the packets contain IP
addresses, the member ports share loads based on the IP addresses and MAC
addresses. When the packets contain only MAC addresses, the member ports share
loads based on the MAC addresses. In this mode, the bound port can also
automatically detect link-layer faults using the LACP protocol and trigger a
switchover if a link fails.
This mode is recommended when most network traffic goes across layer 2 and layer
3 networks.
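The hashing performed by these load sharing modes is internal to the host's bonding implementation. The following bash fragment is only a rough illustration of how a source-destination hash deterministically maps a flow to one member port of a bound pair (all names and values below are illustrative):
ports=(eth2 eth3)                                   # member ports of the bound port
flow="192.168.60.10-192.168.60.20-5000-80"          # source/destination addresses and ports of a flow
hash=$(printf '%s' "$flow" | cksum | cut -d' ' -f1) # deterministic checksum of the flow key
echo "flow leaves through ${ports[hash % 2]}"       # the same flow always selects the same member
Because the hash of a given flow key is constant, packets of one flow always leave through the same member port, which avoids packet disorder while spreading different flows across the members.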
The following binding modes are available for intelligent network interface cards
(iNICs):
– Active-backup: applies to scenarios where two network ports are to be bound. This
mode provides high reliability. The bandwidth of the bound port in this mode equals
that of a member port.
NOTICE
– Switching between different load sharing modes or between different LACP modes
interrupts network communication of the bound network port for 2 or 3 seconds.
– If the binding mode is changed from the active/standby mode to load sharing mode,
port aggregation must be configured on the switch to which network ports are
connected. If the binding mode is changed from the load sharing mode to active/
standby mode, the aggregation configured on the switch must be canceled.
Otherwise, network exception may occur.
– If the binding mode is changed from the LACP mode to another mode, the port
configuration must be changed on the switch to which the network ports are connected.
If the binding mode is changed from another mode to the LACP mode, port aggregation
in LACP mode must be configured on the switch. Otherwise, network exception may occur.
Configuration operations on the switch may interrupt the network communication. After
the configurations are complete, the network communication is automatically restored. If
the network communication is not restored, perform either of the following methods to
troubleshoot the network:
– Ping the destination IP address from the switch to trigger a MAC table update.
– Select a member port in port aggregation, disable other ports on the switch, change
the binding mode, and enable those ports.
9 Click OK.
The network ports on the host are bound.
After this step is complete, no further action is required.
Bind network ports in batches.
10 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
11 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
12 In the Operation list of the cluster, click Bind Network Ports in Batches.
The Bind Network Ports in Batches page is displayed, as shown in Figure 12-11.
----End
Additional Information
Related Tasks
Removing a Default Network Port from a Bound Port
The default network port is the first port added to the bound port. To remove the default port,
unbind the bound port and then bind the other non-default member ports. If the bound port is
used by any services, you must remove the services first.
1. Locate the row that contains the bound port, choose More > Unbind Port, and check
whether the message The network port aggregation is in use is displayed.
– If yes, the bound port is in use. Go to 2.
– If no, go to 7.
2. Switch to the page for the host, choose Configuration > System Port, and check
whether the bound port is used as a storage port or a service management port.
– If yes, go to 3.
– If no, go to 4.
3. Migrate all the VMs from the host to other hosts or stop all the VMs on the host,
disassociate all the virtualized SAN storage from the host, and delete the storage port or
service management port from the bound port.
4. In Network Pool, select the uplink of the DVS and check whether the bound port is used
by the uplink.
– If yes, go to 5.
– If no, go to 6.
5. Delete the uplink. If the uplink is used by VMs, migrate the VMs to other hosts or delete
the VM NICs, and then delete the uplink.
6. Locate the row that contains the bound port and choose More > Unbind Port.
7. Bind the other member ports of the bound port other than the default port.
8. If you removed the services on the original bound port before you unbound it, restore
the services on the newly bound port, for example, by creating the system interface on the
bound port, associating it with the uplink, and migrating the original VMs back to the host.
NOTE
If the security certificate has not been installed during Internet Explorer configuration, the browser may
prompt users with a web page display exception message when they log in to FusionCompute for the
first time or log in to VMs using Virtual Network Computing (VNC). In this case, press F5 to refresh the
web page.
Prerequisites
Conditions
l The Internet Explorer browser used for logging in to FusionCompute is an official
release from Internet Explorer 9 to Internet Explorer 11.
l You have obtained the IP address of the VRM node.
Data
Data preparation is not required for this operation.
Procedure
Enter the login page.
1 Open Internet Explorer.
2 Enter http://IP address of the VRM node and press Enter.
NOTE
– If a firewall is deployed between the local PC and FusionCompute, open ports 80, 8080, 443,
and 8443 on the firewall. If ports 80 and 8080 cannot be opened on the firewall, enter
https://IP address of the VRM node:8443 in the address box.
– If the local PC uses the Windows Server 2003 or Windows XP operating system (OS),
connection to FusionCompute from the PC using the Hypertext Transfer Protocol Secure
(HTTPS) protocol may fail. In such cases, if the connection is triggered through an Internet
Explorer browser, the PC prompts the user to choose a digital certificate. If the connection is
triggered through a Google Chrome browser, the PC displays a message indicating that the
server certificate is invalid. To address this issue, see http://support.microsoft.com/kb/
968730/zh-cn.
– The HTTPS protocol used by FusionCompute supports only TLS 1.0. If SSL 2.0, SSL 3.0,
TLS 1.1, or TLS 1.2 is used, the FusionCompute system cannot be accessed. You must open
the browser, choose Internet Options > Advanced > Security, and select only Use TLS 1.0
among the protocols.
– If Internet Explorer slows down after running for a period of time and no data is required to be
saved, press F6 on the current page to move the cursor to the address bar of the browser. Then,
press F5 to refresh the page and increase the browser running speed.
3 Click Continue to this website (not recommended).
In common mode, the FusionCompute login page is displayed, as shown in Figure
12-13.
In single sign-on (SSO) mode, the FusionManager login page is displayed, as shown in
Figure 12-14.
– Cookies
– History
16 Click Delete.
Historical data is deleted.
Configure compatibility view settings.
17 Press Alt to show the menu bar and choose Tools > Compatibility View Settings on the
menu bar.
The Compatibility View Settings dialog box is displayed.
18 Click Add.
The address for logging in to the current system is added to the compatibility view.
19 Click Close.
20 Close Internet Explorer, open it again, and log in to the FusionCompute.
The settings take effect after the browser is restarted.
----End
Prerequisites
Conditions
l The Firefox browser used for logging in to FusionCompute is an official release from
Mozilla Firefox 21 to Mozilla Firefox 33.
l You have obtained the floating IP address of the VRM.
Data
Data preparation is not required for this operation.
Procedure
Scenarios
Log in to FusionCompute to manage virtual, service, and user resources in a centralized
manner.
Prerequisites
Conditions
l The browser for logging in to FusionCompute is available.
l The Internet Explorer or Mozilla Firefox browser is set properly. For details, see Setting
Internet Explorer Browser or Setting Firefox Browser in the FusionCompute
V100R005C10 Software Install Guide.
l The browser resolution is set to 1280 x 1024 or higher based on the service requirement
to ensure the optimum display effect on the FusionCompute.
NOTE
If the security certificate has not been installed during Internet Explorer configuration, the browser may
prompt users with a web page display exception message when they log in to FusionCompute for the
first time or log in to VMs using Virtual Network Computing (VNC). In this case, press F5 to refresh the
web page.
The system supports the following browsers:
l Internet Explorer 9 to Internet Explorer 11
l Mozilla Firefox 21 to Mozilla Firefox 33
l Google Chrome 21 to Google Chrome 39
Data
Table 12-2 lists the data required for performing this operation.
Procedure
NOTE
– If a firewall is deployed between the local PC and FusionCompute, open ports 80, 8080, 443,
and 8443 on the firewall. If ports 80 and 8080 cannot be opened on the firewall, enter
https://IP address of the VRM node:8443 in the address box.
– If Internet Explorer slows down after running for a period of time and no data is required to be
saved, press F6 on the current page to move the cursor to the address bar of the browser. Then,
press F5 to refresh the page and increase the browser running speed.
The login page is displayed.
After the SSO is configured, if you open the login page of the FusionCompute, the
system switches to the login page of the FusionManager. However, multiple users cannot
log in to the FusionCompute using the same account.
3 Perform the required operation based on the login page.
– If the FusionCompute login page shown in Figure 12-15 is displayed, single sign-
on (SSO) is not configured. Go to 4.
– If the FusionManager login page shown in Figure 12-16 is displayed, SSO has been
configured. Go to 5.
4 Set the Username and Password, select the required User type and Login type, and
click Login. If you attempt to log in to the system again after the initial login fails, you
also need to set the Verification code.
Enter the username and password based on the rights management mode configured
during VRM installation.
– Common login mode: The initial login username is admin and the password is
Huawei@CLOUD8!.
– Rights separation login mode: The username and password of the system
administrator are sysadmin/Sysadmin#, the username and password of the security
administrator are secadmin/Secadmin#, and the username and password of the
security auditor are secauditor/Secauditor#.
NOTE
– If it is your first login using the admin username, the system asks you to change the password
of the admin username.
– The new password must meet the following requirements:
n It contains a minimum of eight characters and a maximum of 32 characters.
n It contains at least one space or one of the following special characters: `~!@#$%^&*()-
_=+\|[{}];:'",<.>/?.
n It contains at least two of the following character types:
○ Uppercase letters
○ Lowercase letters
○ Digits
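– For example, a password such as Cloud@2015x would meet these requirements (illustrative
only: it is longer than eight characters, contains the special character @, and includes
uppercase letters, lowercase letters, and digits).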
The FusionCompute operation page is displayed after you log in to the system.
The login operation is complete.
5 Set the Username and Password. If you attempt to log in to the system again after the
initial login fails, you also need to set the Verification code.
– Common login mode: Log in to the system using the FusionManager system
account. After the login, you can only perform query-related operations.
– Rights separation login mode: The username and password of the system
administrator are sysadmin/Sysadmin#, the username and password of the security
administrator are secadmin/Secadmin#, and the username and password of the
security auditor are secauditor/Secauditor#.
6 Click Login to log in to the FusionCompute management system.
NOTE
The user is automatically logged out of the FusionCompute management system if one of the
following conditions is met:
– The current user's session times out.
– The system administrator deletes the current user.
– The system administrator manually locks the current user out.
After you log in to FusionCompute, you can learn the system and its functions from the online
help, product tutorial, and alarm help. If you save the URLs of the documents, you can access
them even when you are logged out of the FusionCompute system.
----End
A Glossary
A.1 A-E
A.2 F-J
A.3 K-O
A.4 P-T
A.5 U-Z
A.1 A-E
A
active directory A directory service created by Microsoft for Windows domain networks. It is included
in most Windows Server operating systems, such as Windows Standard Server,
Windows Enterprise Server, and Windows Datacenter Server.
AD See active directory
B
bare VM A VM that has an identity but does not occupy any CPU, memory, storage, or network
resource in the system.
Baseboard Management Controller A dedicated micro controller embedded in the main board of a
computer (especially a server).
BMC See Baseboard Management Controller
C
CBT See Changed Block Tracking
Changed Block Tracking An incremental data backup function. With this function enabled, the system
uses a bitmap to keep track of VM storage blocks as they change following the last backup.
Therefore, the system only backs up the data blocks that have been changed since the last backup.
CNA See Computing Node Agent
Computing Node Agent This is deployed on a computing node and used to manage the VMs and VM
mounting on the computing node.
D
disk The logical storage disk of a VM, which can either be a system disk or user disk.
distributed virtual switch A virtual switch (created on a physical server) that uses software to
implement data switching between VMs on the same or different servers.
Distributed Virtual Switch Management A module used to manage distributed virtual switches
(DVSs). Deployed in the same cluster with the Virtual Resource Management (VRM) node, the
DVSM creates, deletes, maintains, and presents DVSs in the system. Each cluster has a DVSM
module.
Dom0 See Domain 0
Domain Domain includes Dom0 and DomU.
Domain 0 A modified Linux kernel and the only VM that operates on the Xen Hypervisor. Dom0
can access physical I/O resources and interwork with other VMs operating on the
system. Dom0 must be started before other domains.
Domain U Paravirtualized VMs operating on the Xen Hypervisor are called Domain U PV
Guests, which support operating systems whose kernels have been modified, such as
Linux, Solaris, FreeBSD, and other UNIX operating systems. Fully virtualized VMs
are called Domain U HVM Guests, which support operating systems whose kernels
do not need to be modified, for example, Windows.
DomU See Domain U
DPM See Dynamic Power Management
DRS See Dynamic Resource Scheduler
DVS See distributed virtual switch
DVSM See Distributed Virtual Switch Management
Dynamic Power Management A module that intelligently powers on or off idle physical servers based
on the system load on the network.
Dynamic Resource Scheduler A module that uses intelligent scheduling algorithms to flexibly schedule
resources and dynamically balance system load to improve user experience.
E
Elastic Load Balancer A component that provides load balancing services for tenants. End users can apply
for an ELB and associate their hosts with the ELB. The ELB evenly distributes service
requests to the associated hosts based on customized load balancing policies. The ELB
helps improve service stability and reliability.
Elastic Service Controller A point from which to control VM resources and virtual block storage
resources. It provides an open ECi interface.
elastic virtual switch A virtual switch that implements data switching, virtual local area network (VLAN)
isolation, Dynamic Host Configuration Protocol (DHCP) isolation, bandwidth
limiting, and priority setting.
ELB See Elastic Load Balancer
Equipment Serial Number This uniquely identifies a set of equipment.
ESN See Equipment Serial Number
EVS See elastic virtual switch
A.2 F-J
F
FCSAN See fibre channel storage area network
fibre channel storage area network A type of storage area network (SAN) that uses fibre channels
between servers and storage devices. FC SAN devices provide high performance but at high cost,
and are gradually being replaced by IP SAN devices.
full clone Full copy of the consolidated sum of delta disks and base disk of a virtual machine.
Each full clone is entirely separated from the parent VM and can have different system
disks or software from the parent VM. Full clones apply to common office automation
scenarios.
hierarchical storage A storage mechanism that stores the most-frequently accessed IP SAN data on a solid-
state drive (SSD) to speed up access, stores the less-frequently accessed data on a
Serial Attached SCSI (SAS) drive, and stores the seldom accessed data on a Serial
Advanced Technology Attachment (SATA) drive.
Host A physical server that runs virtual software. VMs can be created on a host.
Hypervisor The software layer on a virtual server, which manages the VMs on the server and
helps VMs share the hardware resources of the virtual server. The Xen Hypervisor is a
software layer between the hardware and operating system, which performs CPU
scheduling and partitioning between VMs. The Xen Hypervisor controls VM
migration between hardware devices and other VM-related operations (because the
VMs share a processing environment). The Xen Hypervisor does not process
networks, storage devices, videos, or other I/O resources.
Image An exact copy of all running software on a server used for quick installation of the
VM operating system and software.
iNIC Intelligent Network Interface Card
Integrated Storage Management This centrally manages multiple storage systems.
IP storage area network A type of storage area network (SAN) that uses IP channels between servers
and storage devices. IP SAN device performance is not as good as FC SAN device performance,
but the use of IP SAN devices is not restricted by transmission distances. With the IP bandwidth
improvement, IP SAN devices will gradually replace FC SAN devices.
IP SAN See IP storage area network
iSCSI Internet Small Computer Systems Interface
ISM See Integrated Storage Management
A.3 K-O
L
LB Load Balancer
linked clone A duplicate of a virtual machine that uses the same base disk as the original and a
chain of delta disks to keep track of the differences between the original and the clone.
This reduces the need for disk space and allows multiple VMs to use the same
software installation. Linked clones apply to scenarios that require VMs using the
same software installation, for example, call centers. System disks on linked clones
can have slight differences from the system disk on the parent VM. The differences
are stored on delta disks of the linked clones.
linked cloning A technology used to generate a quick copy of VMs by creating a delta disk instead of
copying an entire virtual hard disk.
linked snapshot A snapshot taken only for the memory or storage changes of a VM. A VM can be
restored using multiple relevant linked snapshots. A snapshot can be taken for the
memory or storage resource.
Live Migration Also known as hot migration, this is a method of migrating virtual machines (VMs)
without interrupting services.
local storage Storage space provided by a computing node agent (CNA).
logical cluster A logical group consisting of servers with the same attributes, such as CPU, storage,
and distributed virtual switch (DVS), in a physical cluster. The VM HA function takes
effect only for servers in the same logical cluster.
LUN Logical Unit Number
A.4 P-T
P
placeorder VM When the host-based replication disaster recovery (DR) is used, and you add VMs to a
protection group, UltraVR automatically creates placeorder VMs at the DR site based
on the VM specifications and resource mappings of the VMs added to the protection
group. Then UltraVR synchronizes data of the protected VMs with the placeorder
VMs. When executing the recovery plan, UltraVR can start the placeorder VMs at the
DR site to quickly restore services. UltraVR can also use a placeorder VM to clone
and start a VM to test the recovery plan. This process exerts no adverse impact on data
synchronization between the protected VM and the placeorder VM. After the recovery
plan is executed, UltraVR will stop and delete the clone.
POE See Provisioning Orchestration Engine
port group A group of ports with the same attributes on a distributed virtual switch (DVS) or
virtual software switch (VSS). In a hypervisor, all DVS settings, such as virtual local
area network (VLAN) and network flow control, are configured on a port group basis.
PortGroup See port group
Pre-boot Execution Environment This technology enables computers to boot from the network. It is
the successor of Remote Initial Program Load (RPL). The PXE works in client/server mode. The
PXE client resides in the ROM of a network card. When the computer boots up, the BIOS invokes
the PXE client to the memory. The PXE client obtains an IP address from the DHCP server and
downloads the operating system from the remote server through TFTP.
Provisioning Orchestration Engine This exposes the unified service provisioning interface and
synchronizes services between components in the SingleCLOUD system.
PXE See Pre-boot Execution Environment
Software Client Software running on a common PC to process the virtual desktop protocol.
Storage Area Network A network dedicated to transporting data for storage and retrieval.
storage cold migration A storage migration mode that allows data migration on a disk only after all VMs on
the disk are stopped.
storage resource pool A collection of storage resources. For example, an IP storage area network (IP SAN)
functions as a storage resource pool for a cluster.
Storage Thin Provisioning Storage thin provisioning is the act of using virtualization technology to
give the appearance of more physical storage resources than are actually available. It allows
storage space to be easily allocated to users on an on-demand and auto-scale basis. This optimizes
utilization of available storage resources.
T
TC See Thin Client
Thin Client A terminal with lower processing power than a thick client. It processes the virtual
desktop protocol, serves as the client of the remote desktop, and provides an access
method for users.
Thin LUN A logical storage unit created in the thin pool. The thin LUN is accessible to the host.
Thin Pool A thin pool is implemented based on storage thin provisioning. It allows storage space
to be dynamically allocated to users on demand. This optimizes utilization of available
storage resources.
Tools A virtualized driver package for VMs. Tools improves VM performance and enables VM
hardware monitoring and advanced VM functions, such as migration, snapshot taking,
and online CPU adjustment.
A.5 U-Z
U
Unified Virtualization Platform Virtualization management software that divides computing resources
into multiple VM resources.
UVP See Unified Virtualization Platform
V
vCPU See Virtual CPU
VDS See Virtual Distributed Switch
vFW See Virtual Firewall
VIMS See Virtual Image Management System
Virtual CPU A hyper-thread on a server with multiple physical CPUs, where each CPU has multiple
physical cores and each core provides multiple hyper-threads.
Virtual Disk A file in the host file system that functions as a physical disk drive for the guest
operating system. The file can reside on the host or on a remote file system.
After configuring a VM with a virtual disk, you can install a new operating system onto
the disk file without repartitioning a physical disk or restarting the host. Virtual
disks on VMware Workstation can be mapped to partitions on the host.
Virtual Distributed Switch A virtual switch (created on a physical server) that uses software to implement data
switching between VMs on the same or different servers.
Virtual Firewall A network firewall service or appliance running in a virtualized environment to
provide the usual packet filtering and monitoring functions like a physical network
firewall.
Virtual Image Management System A high-performance cluster file system that enables the FusionManager to
connect to storage resources through a unified interface, which allows multiple VMs
to access an integrated storage pool and improves resource utilization efficiency. As
the basis for virtualizing multiple storage servers, the VIMS provides services such as
live migration, dynamic resource scheduling, and high availability for storage
devices.
Virtual Local Area Network An end-to-end logical network across different network segments and networks,
constructed using network management software on top of a switched LAN.
Network resources and users are logically grouped according to a certain principle,
and a physical LAN is logically divided into multiple broadcast domains (VLANs).
Hosts on the same VLAN can communicate with each other directly, whereas hosts on
different VLANs cannot, which efficiently suppresses broadcast packets.
Virtual Machine One or multiple computer systems virtualized from a physical server.
Virtual Machine Dispatch A function that controls the scheduling policy configuration and the control points of
the policy dispatching.
Virtual Machine Hibernate An operation that can be performed on a running VM. After the hibernated VM is
started, all programs are restored to their states before the hibernation.
Virtual Machine IP Address The IP address assigned to a VM, corresponding to the IP address of a physical
machine. The VM can communicate with other devices on the network through the IP
address.
Virtual Memory Memory virtualized for a VM and allocated from the physical memory. Even if the
underlying memory is not physically contiguous, it appears contiguous to the VM. The VM
can store and retrieve data anywhere in its virtual memory without affecting the memory
accessibility of other VMs on the same physical machine.
Virtual Network Card A network card for a VM that corresponds to the network card of a physical machine.
Multiple virtual network cards can be created for a single VM. A virtual network card
can connect to the physical network card in bridge mode for data transmission.
Virtual Resource Management Huawei-developed virtualization management software, which comprises Huawei
infrastructure products and the Unified Virtualization Platform (UVP).
Virtual Server A server on which the operating system and applications run based on various
virtualization technologies rather than directly on a physical server. Although it
uses only some resources of the physical server, a virtual server appears the same as
a physical server to users. Both partitions and VMs are considered virtual servers.
Virtual Service Gateway A virtual appliance that provides layer 3 or layer 4 services on a virtual network. It can
contain one or multiple service instances, including vRouter, vFirewall, vDHCP, NAT,
and VPN. The FusionManager supports VSG service implementation through a
vFirewall or a system VM.
Virtual Software Switch A switch deployed on a computing node that performs the virtual network switching
function for the VMs on the node.
VLAN See Virtual Local Area Network
VM See Virtual Machine
VM High Availability A function with which the O&M system continuously monitors all physical hosts and
automatically migrates all VMs off a faulty host.
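The behavior can be pictured with a small Python sketch (host and VM names are invented; the placement policy is deliberately simplistic):

    def ha_cycle(hosts, is_healthy):
        # Move every VM off each faulty host onto a healthy one.
        healthy = [h for h in hosts if is_healthy(h)]
        if not healthy:
            raise RuntimeError("no healthy host available for failover")
        for host, vms in list(hosts.items()):
            if not is_healthy(host) and vms:
                hosts[healthy[0]].extend(vms)   # restart the VMs elsewhere
                hosts[host] = []
        return hosts

    hosts = {"cna01": ["vm-a", "vm-b"], "cna02": ["vm-c"]}
    print(ha_cycle(hosts, is_healthy=lambda h: h != "cna01"))
    # {'cna01': [], 'cna02': ['vm-c', 'vm-a', 'vm-b']}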
VM Migration A technology used to migrate VMs to other hardware resources, on which the VMs
continue to operate.
VM Specifications A set of pre-defined VM attributes for creating VMs with unified specifications.
VM template A template used to create VMs that have the same specifications. A VM template is in
essence a VM, and a VM and a VM template can be converted into each other as required.
After a VM is converted into a VM template, only its isTemplate attribute is changed to true.
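Conceptually, the conversion only toggles one flag, as in the following hypothetical sketch (the class and function names are invented; isTemplate is the attribute named above):

    from dataclasses import dataclass

    @dataclass
    class Vm:
        name: str
        cpu_count: int
        memory_mb: int
        isTemplate: bool = False   # the only attribute changed by conversion

    def convert_to_template(vm):
        vm.isTemplate = True       # VM -> VM template

    def convert_to_vm(template):
        template.isTemplate = False  # VM template -> VM

    golden = Vm(name="base-image", cpu_count=2, memory_mb=4096)
    convert_to_template(golden)    # specifications are untouched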
VMD See Virtual Machine Dispatch
VRM See Virtual Resource Management
VSG See Virtual Service Gateway
VSS See Virtual Software Switch