
Using HP Serviceguard to manage HP Integrity Virtual Machines

HP Part Number: T2767-90042


Published: June 2006
Edition: 2.1
The high availability initiative
When you use HP Integrity Virtual Machines (Integrity VM) to consolidate and simplify your computing
environment, you might need to provide the same (or better) standards of availability to the users of the
applications running on the virtual machines. A virtual machine should be just as reliable as a discrete physical
computer. To provide this level of availability, use HP Serviceguard to manage your virtual machines. This
paper explains the advantages of this configuration, the considerations for hardware and software
configuration, and the procedure for making virtual machines into Serviceguard packages. You will see how
to test and verify that the virtual machine fails over to another cluster member running as the VM Host, and
how to manage virtual machines that are configured as Serviceguard packages. Scripts provided in the
supplied HP Serviceguard for Integrity Virtual Machines Toolkit help you set up and manage the Serviceguard
virtual machine cluster.

NOTE: This paper assumes you are familiar with Serviceguard and Integrity Virtual Machines. Additional
references are listed at the end of this paper.

Virtual machine failover


On each node in the cluster, the following products are running:
• HP-UX 11i v2
• HP Integrity Virtual Machines A.01.20
• HP Serviceguard A.11.16 or A.11.17
Download the HP Serviceguard for Integrity Virtual Machines Toolkit from software.hp.com. Untar the
toolkit into the /var/opt/hpvm/cluster directory on a cluster node that is running Integrity VM and into
a temporary directory on each guest.
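For example, on a VM Host node the steps might look like the following (a sketch; the downloaded toolkit
file name is illustrative and depends on the version you obtain):
# mkdir -p /var/opt/hpvm/cluster
# cd /var/opt/hpvm/cluster
# tar -xvf /tmp/sg_hpvm_toolkit.tar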
The following sections describe how to configure a virtual machine as a Serviceguard package and distribute
it to VM Host cluster members. If the VM Host system fails, the network interfaces go silent, or the system
administrator shuts down the system, an instance of the virtual machine starts on another cluster member, as
illustrated in Figure 1.

Figure 1 The virtual machine failing over to another cluster member

Of course, Figure 1 is a greatly simplified illustration. Your configuration is likely to include many Integrity
servers, each of which hosts many virtual machines.
Ease of maintenance is an important consideration in a cluster environment. Applications can overuse or
underuse resources, network performance can be erratic, or virtual machines can simply fail with little or no
information. High availability should not come at the cost of maintainability. A site plan is useful for
determining how to set up the Serviceguard cluster that will protect your virtual machines (and your customers)
from excessive downtime. (The Managing Serviceguard manual provides sample worksheets.)
The key to a manageable cluster is to keep it simple. Virtual machines share the network, storage, and
processor resources of the VM Host. As it fails over to another cluster member, the guest (the operating
system and applications on the virtual machine) must have access to the same virtual devices as it did on the
failed VM Host. As you configure each cluster member, provide sufficient and appropriate processor, network,
and storage resources for all the guests that might run on it. If a guest cannot access the resources it needs,
it will not start.
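One way to check resource availability is the hpvmstatus command on each VM Host; for example, the -s
option (if available in your Integrity VM version) summarizes VM Host processor, memory, and I/O resources:
# hpvmstatus -s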

The highly available virtual machine


To configure virtual machine packages that can stop on one cluster member and start on another, provide
access to the same logical storage units and LANs on all the cluster members, as illustrated in Figure 2.

Figure 2 The virtual machine in the cluster

Two cluster nodes (Node 1 and Node 2) run the VM Host. Node 1 hosts two virtual machines (VM1 and
VM4); Node 2 hosts VM2. VM1 is configured as highly available; VM4, which has local storage only, can
run only on Node 1. VM1 accesses the VM1 disks and VM2 accesses the VM2 disks, and both nodes are
connected to both sets of disks. VM1 and VM2 are configured as virtual machine packages and are capable
of running on either Node 1 or Node 2.
Each node has two LANs. One is dedicated to the heartbeat, which is part of Serviceguard operation. The
other LAN serves as the primary LAN and is shared by the virtual machines.
Because redundancy is built into the cluster design, the root disk and the guest storage disks are mirrored,
and a serial line acts as a standby heartbeat LAN.
The Serviceguard configuration on each node requires a customized package control script. This control
script identifies the guests that can run on the node. For more information, see Configuring HP Serviceguard
packages.
If Serviceguard detects that Node 1 (the primary node) is not available, VM1 is started on Node 2, as
illustrated in Figure 3. Meanwhile, VM4, which is not configured as a virtual machine package, is unavailable
until service is restored to Node 1.

Figure 3 The virtual machine package after cluster failover

Because the two nodes have access to the same storage units, the failover of VM1 from Node 1 to the
symmetrically configured Node 2 is barely discernible to users. Different configurations and guest requirements
present different challenges. Virtual machines can access many different types of storage units (files, disks,
logical volumes, and DVDs) on the VM Host. However, the root disk and application storage devices are
presented to the virtual machine as virtual devices. As such, the same storage units for the root disk and
storage devices should be used to present virtual devices on each node in the cluster. If these differ, manual
intervention might be necessary after the virtual machine fails over to another cluster member. If you use
logical volumes, you must include the Logical Volume Manager (LVM) or Veritas Volume Manager (VxVM)
information in the package configuration file, as described in “Configuring HP Serviceguard virtual machine
packages.” For more information about configuring storage types in Serviceguard clusters, see the Managing
Serviceguard manual.
You can also provide network failover from one LAN to another on the same node by including two LANs
for guest use as well as the dedicated heartbeat LAN (for a total of three LANs on each VM Host). If you
already have Auto Port Aggregation (APA), you have redundancy. Supply the APA device names in the
package configuration, as described in “Configuring HP Serviceguard virtual machine packages.”
The following sections describe the considerations for various types of storage units and network configurations.

Configuring virtual machine storage


A storage unit is a VM Host entity that houses data for a virtual disk. Integrity VM A.01.20 supports files,
logical volumes, disk partitions, and whole disks. The following types of storage units can be used as virtual
storage by virtual machine packages:
• Files inside logical volumes (LVM, VxVM, Veritas cluster volume manager (CVM))
• Files on Cluster File System (CFS)
• Raw logical volumes (LVM, VxVM, CVM)
• Whole disks
The following storage types are not supported:
• Files outside logical volumes
• Disk partitions
When LVM, VxVM, and CVM disk groups are used for guest storage, each guest must have its own set of
LVM or CVM volume groups (or VxVM disk groups). For CFS, all storage units are available to all running
guests at the same time.
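For example, a dedicated LVM volume group for a guest might be created on the VM Host as follows (a
sketch with illustrative device names, size, and minor number; the volume group can then be exported with
vgexport -p -s -m and imported on the other cluster nodes so that every node presents the same storage):
# pvcreate -f /dev/rdsk/c5t8d0
# mkdir /dev/vgcompass1
# mknod /dev/vgcompass1/group c 64 0x040000
# vgcreate /dev/vgcompass1 /dev/dsk/c5t8d0
# lvcreate -L 20480 -n lvol1 /dev/vgcompass1
The raw logical volume /dev/vgcompass1/rlvol1 can then be presented to the guest as a virtual disk.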

For more information about how to configure virtual storage to meet the needs of the guests, see the HP
Integrity Virtual Machines Installation, Configuration, and Administration manual. The following sections
describe how to achieve the desired level of storage and network high availability.

HP Serviceguard cluster symmetry


When you create a virtual machine, you identify VM Host system storage units that the virtual machine's
operating system and applications will use as virtual devices. You can even supply the PCI bus slots and
adapter types. In a symmetric Serviceguard configuration, the storage units are presented the same way on
every cluster node. This configuration is easy for both the VM Host system administrator and the administrator
of the virtual machine to manage. Changes to the storage unit configuration need never affect the virtual
machines because they recognize only virtual devices. If you carefully configure a symmetric set of
Serviceguard cluster nodes, the virtual machine can access the same storage using common device, volume,
and file names, and you can use the same operations to create and modify every instance of the virtual
machine, across all cluster nodes.
Even an asymmetric Serviceguard configuration, in which the storage units differ across nodes, can be
managed easily, as long as the Integrity VM device database presents the same storage units to the virtual
machines using the same virtual device names. The virtual machine recognizes only virtual devices; you can
change the virtual device definition without changing the virtual machine configuration. On an asymmetric
Serviceguard configuration, however, you must make this change on each node in the cluster to accommodate
the differences in the storage units. Each virtual machine has a guest configuration file that defines its virtual
devices. A change to the virtual device configuration of a virtual machine can be distributed to all the cluster
nodes by copying the guest configuration file to each cluster node.
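For example, assuming the guest configuration files are kept under /var/opt/hpvm/guests/vmname on the
VM Host (an assumption; verify the location and which files to copy for your Integrity VM version before
doing this on a production system), a change made on one node could be propagated with a remote copy:
# scp -pr /var/opt/hpvm/guests/compass1 clowder:/var/opt/hpvm/guests/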
If it is necessary for the virtual machines on each cluster node to have different virtual devices, the virtual
machine must be configured and managed separately on each node. If a failure occurs in the hardware, it
is difficult to detect and isolate. In addition, modifications to the virtual machine must be made on all cluster
nodes, thereby hampering the failover process and risking performance and security problems. HP recommends
that you do not configure your servers this way.

Multipath storage solutions


Multipath solutions provide storage redundancy. Using storage and network solutions that employ redundant
hardware with failover capabilities increases availability.
The multipath solution depends on the type of storage units used as guest storage. Both Integrity VM and
Serviceguard allow a wide range of storage solutions. The supported combinations of types of storage and
multipath solutions are listed in Table 1.
Table 1 Multipath solutions on the VM Host

Type of storage                Multipath solutions
Whole disks                    Secure Path
CFS files on CVM               Dynamic Multipathing (DMP)
VxFS files on LVM volumes      Secure Path, EMC PowerPath, PVLinks
VxFS files on VxVM volumes     Secure Path (VxVM 3.5 only), EMC PowerPath, DMP (VxVM 4.0 and higher)
LVM volumes                    Secure Path, EMC PowerPath, PVLinks
VxVM volumes                   Secure Path (VxVM 3.5 only), EMC PowerPath, DMP
CVM volumes (not yet tested)   DMP



One common concern of virtual machine users is where to deploy high availability solutions for mass storage
— on the virtual machine itself, or on the physical server hosting those virtual machines (the VM Host).
Multipath I/O solutions, such as SecurePath and PowerPath, are designed to provide fault tolerance by
leveraging the existence of multiple physical paths between the server processor and mass storage devices.
Because of the nature of virtual machines, there is no physical path between the virtual machine and its
virtual mass storage. Therefore, multipath solutions are best suited for the physical VM Host server. These
technologies are used on the VM Host and exposed to the virtual machine as logical storage (backing stores)
for a single logical unit number (LUN). Most configurations created with Secure Path, EMC Powerpath,
PVLinks, and Veritas Dynamic Multipathing on the VM Host can be used to define virtual storage this way.
See the Integrity VM Release Notes for details.
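For example, an LVM logical volume whose physical volume has a PVLinks alternate path on the VM Host
is presented to the guest like any other backing store (a sketch with illustrative device and guest names):
# vgextend /dev/vgsglvm /dev/dsk/c14t0d1
# hpvmmodify -P compass1 -a disk:scsi::lv:/dev/vgsglvm/rlvol1
The first command adds an alternate physical path (PVLink) to the volume group on the VM Host; the second
presents the logical volume to the guest as a single virtual SCSI disk, regardless of how many physical paths
exist on the VM Host.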

Configuring network high availability


Using Serviceguard with Integrity VM requires a minimum of three LANs on each VM Host: one for the
Serviceguard heartbeat, and the other two for the primary and standby LANs used by the virtual machines
running on the VM Host.
The virtual machines access the network using virtual switches, which you associate with physical network
interface cards (PNICs), APA interfaces, and virtual network interface cards (VNICs), using the hpvmnet
Integrity VM command. Depending on the network adapters and switches available, you can provide one
of the following levels of network availability:
• Redundant LAN configuration – using a standby NIC, Serviceguard monitors the system and, upon
detecting a failure, switches the LAN to the standby network switch on the same VM Host server.
• Auto Port Aggregation – using automatic link aggregation (in manual or AUTO_FEC mode), the network
is monitored and fails over to another APA port or the standby LAN.
If a network failure occurs when there are no working network resources on the VM Host that are available
to the virtual machine, the guest package fails over to another cluster member.

Redundant LAN configuration


With redundant PNICs, you can provide LAN failover without service interruption. The virtual machine stops
using the failed LAN, starts using the standby LAN, and does not fail over to another cluster member. Figure
4 illustrates this configuration.

Figure 4 LAN high-availability configuration

Auto Port Aggregation for network high availability


If you have Auto Port Aggregation (APA), you have redundancy. Network failover is handled by the APA
service. If all APA ports fail, the standby LAN is used. If the standby LAN is also down, the virtual machine
fails over to another cluster member. Figure 5 illustrates this configuration.

Figure 5 APA network cluster configuration

Setting up network high availability


To configure the VM Host for network high availability:
1. Install redundant hardware or APA.
2. Create vswitches to use the primary LAN.
3. Add the vswitches to the guests.
4. Install the vswitch monitor on the VM Host.
5. Install the ping script on the guests.
For information about installing the vswitch monitor and the ping script, see “Installing the vswitch monitor”
and “Installing the ping script.”
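For example, assuming lan1 carries the primary guest network on the VM Host, steps 2 and 3 might look
like the following (a sketch; the vswitch and guest names are illustrative):
# hpvmnet -c -S vswitch2 -n 1
# hpvmnet -b -S vswitch2
# hpvmmodify -P compass1 -a network:lan::vswitch:vswitch2
The first two commands create and boot a vswitch over lan1; the third adds a virtual NIC on that vswitch to
the guest. Serviceguard, together with the vswitch monitor, moves the vswitch to the standby NIC if the
primary LAN fails.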

Monitoring network connections


The HP Serviceguard for Integrity Virtual Machines Toolkit includes two helper utilities for network availability:
• The vswitchmon.sh (vswitch monitor) script runs on the VM Host and monitors the Serviceguard
Network Manager.
The vswitch monitor script monitors the syslog.log file. When it detects that Serviceguard is failing
over the primary network to the standby network, the vswitch monitor halts and deletes the vswitch
associated with the primary network, and then re-creates and boots it on the standby network. When the primary network
is restored, Serviceguard and the vswitch monitor move the network and associated vswitch back to
the primary network.
• The hpvmsgping.sh (ping script) runs on the virtual machine, and is used to improve network
communication between the guest operating system and the outside network during network and
package failover.
The ping script sends periodic pings to each user-selected IP address or host. This operation reestablishes
network connectivity in the vswitch during network failover, and forces the physical switches in the
network to establish connectivity to the guest during package failover.

Configuring HP Serviceguard virtual machine packages


This procedure assumes you have installed Serviceguard version A.11.16 or A.11.17 and Integrity Virtual
Machines version A.01.20 on each node in the cluster. The VM Host cannot be an SGeRAC or SGeSAP
node.

Creating the virtual machines


On each cluster node where you installed Integrity VM, create the virtual switches and virtual machines that
Serviceguard will manage. For more information, see the
HP Integrity Virtual Machines Installation, Configuration, and Administration manual. Also, see the HP Integrity
Virtual Machines Release Notes for current information about Integrity VM.
Each virtual machine must have access to at least two virtual switches: one for primary LAN use, and the
other for the Serviceguard heartbeat. Do not set the virtual switches to autoboot mode.
The virtual machine is a guest of the VM Host and depends on the VM Host to present virtual processors,
storage, and network resources as if they were physical devices on a dedicated physical system. When you
create the virtual machine (using the hpvmcreate or hpvmclone command), and when you modify the
virtual machine (using the hpvmmodify command), you specify the devices that are used by the guest. This
configuration is stored in the guest configuration file. After the guest configuration file has been created,
you can copy it to the other cluster nodes. In a Serviceguard cluster, the guest fails over successfully only if
the guest configuration file is the same on both the primary node and the adoptive node.
For the virtual machines to behave the same way on all the cluster nodes, you must specify the same
information on each cluster node for the following:
• Guest names
• Shared storage devices
• Virtual NICs and associated vswitches, including corresponding networks and subnets
• Boot options and boot order in guest nonvolatile RAM (NVRAM).
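For example, a guest might be created with a command of the following general form (a sketch with
illustrative names and sizes), run identically on every cluster node so that each node defines the same virtual
hardware:
# hpvmcreate -P compass1 -O hpux -c 1 -r 2G \
  -a disk:scsi::lv:/dev/vgcompass1/rlvol1 \
  -a network:lan::vswitch:vswitch2 \
  -a network:lan::vswitch:vswitch5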
If you create the virtual machine on each cluster member, you must copy the NVRAM file to each node in
the cluster, or use the following procedure to make the same modifications to the NVRAM file on every node
that can potentially run the virtual machine:
1. Use the hpvmconsole command to connect to the console of the virtual machine.
2. Select CO: Console.
3. At the EFI shell, enter Exit.
4. Select the Boot option maintenance menu.
5. Select Add a boot option.
6. Select the same device used by this guest on the other cluster member.
7. Select the HPUX directory.
8. Select the HPUX.EFI file.
9. Enter a description.
10. Select the appropriate boot Option (enter N).
11. Save to NVRAM (enter Y).
12. Select Exit.
13. Select Change the boot order.
14. Use the U and D commands to set the newly created boot option as the first item.
15. Save to NVRAM.
16. Select Exit.
17. Boot the guest by selecting the new boot option.
If you create the virtual machine on one cluster member and then copy it to the other members, be sure the
virtual machine MAC address is different on each cluster member. Use the hpvmmodify command to change
the virtual machine's MAC address on each cluster member. You can base the new MAC address on the
address of the existing virtual machine, changing it slightly for the other cluster members (for example, add
or subtract 1 from the current MAC address). The hpvmmodify command verifies the specified MAC address
before accepting the change.
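For example, one possible form for adjusting the MAC address of the guest's virtual NIC on a given cluster
member is shown below (a sketch only; the bus and device numbers and the MAC value are illustrative, and
you should verify the exact resource syntax in hpvmmodify(1M) for your Integrity VM version):
# hpvmmodify -P compass1 -m network:lan:0,1,0xEA5C08D370F3:vswitch:vswitch2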

Creating the HP Serviceguard packages


Your best reference for creating and applying Serviceguard packages is the Managing Serviceguard manual.
The following sections describe how to set up and verify the virtual machine packages. To configure the
virtual machines as packages, download the HP Serviceguard for Integrity Virtual Machines Toolkit. The files
in the toolkit are listed in “Contents of the HP Serviceguard for Integrity Virtual Machines Toolkit.”
On the VM Host, create a failover package configuration file and the package control script, as follows:
1. Start the virtual machine and use the hpvmstatus command to verify that the virtual machine is on.
2. Unzip or untar the toolkit file into the /var/opt/hpvm/cluster directory.
3. Create a Serviceguard package by running the following script for each virtual machine:

# /var/opt/hpvm/cluster/hpvm_package.sh vmname

Specify the virtual machine name (vmname) exactly as it was entered in the hpvmcreate command
that was used to create the virtual machine. You can use the hpvmstatus command to verify the virtual
machine name. For example, to create a Serviceguard package for the virtual machine named
compass1, enter the following commands:
# hpvmstatus
[Virtual Machines]
Virtual Machine Name VM # OS Type State # vCPUs # Devs # Nets Memory
==================== ===== ======= ============ ======= ====== ====== ======
compass1 4 HPUX On (EFI) 1 3 1 1 GB
#
# /var/opt/hpvm/cluster/hpvm_package.sh compass1

This is the HP Virtual Machine Serviceguard Toolkit Package Template Creation


script.

This script will assist the user develop and distribute a set of Serviceguard
package configuration template files and associated start, stop and monitor scripts.

The templates generated by these scripts will handle many guest configurations,
but it is only a template and may not be appropriate for your particular
configuration needs. You are encouraged to review and modify these template
files as needed for your particular environment.

Do you wish to continue? (y/n):y

[Virtual Machine Details]


Virtual Machine Name VM # OS Type State
==================== ===== ======= ========
compass1 11 HPUX On
[Storage Interface Details]
Guest Physical
Device Adaptor Bus Dev Ftn Tgt Lun Storage Device
====== ========== === === === === === ========= =========================
disk scsi 0 0 0 0 0 disk /dev/rdsk/c12t0d0
disk scsi 0 0 0 1 0 lv /dev/vgsglvm/rlvol1
disk scsi 0 0 0 2 0 file /hpvm/g1lvm/hpvmnet2
disk scsi 0 0 0 3 0 lv /dev/vx/rdsk/sgvxvm/sgvxvms
disk scsi 0 0 0 4 0 file /hpvm/g1vxvm/hpvmnet2
disk scsi 0 0 0 5 0 disk /dev/rdsk/c12t0d5
[Network Interface Details]
Interface Adaptor Name/Num Bus Dev Ftn Mac Address
========= ========== ========== === === === =================
vswitch lan vswitch2 0 1 0 ea-5c-08-d3-70-f2
vswitch lan vswitch5 0 2 0 f2-c7-0d-09-ac-8f
vswitch lan vswitch6 0 4 0 92-35-ed-1f-6c-67

Would you like to create a failover package for this Virtual Machine summarized above? (y/n):y

Would you like to distribute the package to each cluster member? (y/n):y

The failover package template files for the Virtual Machine were successfully created.

The script asks you to confirm the following actions:
• Creating a failover package
• Distributing the package to all the cluster nodes
Respond to both prompts by entering y. The hpvm_package.sh script creates the virtual machine
package template files shown in the following example, creates the compass1 directory on all the
cluster nodes, and distributes the package to all the nodes.
# cd /etc/cmcluster
# ls compass1
hpvmkit.sh hpvmmon.sh compass1.config compass1.sh hpvmstart.sh hpvmstop.sh

4. Use the cmcheckconf command to verify that the package is set up correctly. For example:
# cmcheckconf -v -C /etc/cmcluster/cluster-name.config -P /etc/cmcluster/vmname/vmname.config

For example, to verify the compass1 package, enter the following command:
# cmcheckconf -v -C /etc/cmcluster/cluster1.config -P /etc/cmcluster/compass1/compass1.config

Checking cluster file: /etc/cmcluster/cluster1.config


Checking nodes ... Done
Checking existing configuration ... Done
Gathering configuration information ... Done
Gathering configuration information ... Done
Gathering configuration information ..
Gathering storage information ..
Found 10 devices on node charm
Found 10 devices on node clowder
Analysis of 20 devices should take approximately 3 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 7 volume groups on node charm
Found 7 volume groups on node clowder
Analysis of 14 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
.....
Gathering Network Configuration ......... Done
Cluster cluster1 is an existing cluster
Parsing package file: /etc/cmcluster/cluster1/compass1.config.
Package compass1 already exists. It will be modified.
Checking for inconsistencies .. Done
Cluster cluster1 is an existing cluster
Maximum configured packages parameter is 10.
Configuring 3 package(s).
7 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node charm
Modifying configuration on node clowder
Modifying the cluster configuration for cluster cluster1.
Modifying node charm in cluster cluster1.
Modifying node clowder in cluster cluster1.
Modifying the package configuration for package compass1.

Verification completed with no errors found.


Use the cmapplyconf command to apply the configuration.

5. Distribute the package configuration file to the /etc/cmcluster/vmname/ directory on all cluster
nodes:

# cmapplyconf -v -C /etc/cmcluster/cluster-name.config -P /etc/cmcluster/vmname/vmname.config

For example:
# cmapplyconf -v -C /etc/cmcluster/cluster1.config -P /etc/cmcluster/compass1/compass1.config

Checking cluster file: /etc/cmcluster/cluster1.config


Checking nodes ... Done
Checking existing configuration ... Done
Gathering configuration information ... Done
Gathering configuration information ... Done
Gathering configuration information ..
Gathering storage information ..
Found 10 devices on node charm
Found 10 devices on node clowder
Analysis of 20 devices should take approximately 3 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 7 volume groups on node charm
Found 7 volume groups on node clowder
Analysis of 14 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
.....
Gathering Network Configuration ......... Done
Cluster cluster1 is an existing cluster
Parsing package file: /etc/cmcluster/compass1/compass1.config.
Package hpvmnet2 already exists. It will be modified.
Checking for inconsistencies .. Done
Cluster cluster1 is an existing cluster
Maximum configured packages parameter is 10.
Configuring 3 package(s).
7 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node charm
Modifying configuration on node clowder

Modify the cluster configuration ([y]/n)? y


Marking/unmarking volume groups for use in the cluster
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Modifying the cluster configuration for cluster cluster1.
Modifying node charm in cluster cluster1.
Modifying node clowder in cluster cluster1.
Modifying the package configuration for package compass1.
Completed the cluster creation.

6. Stop the virtual machine using the following command:


# hpvmstop -P vmname

For example:
# hpvmstatus
[Virtual Machines]
Virtual Machine Name VM # OS Type State # vCPUs # Devs # Nets Memory
==================== ===== ======= ======== ======= ====== ====== ===========
compass1 2 HPUX On 2 1 2 1 GB
compass2 11 HPUX Off 2 6 3 1 GB
compass3 12 HPUX Off 2 6 4 2 GB
compass4 13 HPUX Off 2 6 5 2 GB

# hpvmstop -P compass1
hpvmstop: Stop the virtual machine 'compass1'? [n]: y

NOTE: This is the last time you should use the hpvmstop and hpvmstart commands to stop and
start the virtual machine. As a Serviceguard package, the virtual machine is stopped and started with
the cmrunpkg and cmhaltpkg commands.

7. If the cluster is not running, use the cmruncl command to start it:
# cmruncl -v
cmruncl : Validating network configuration...
Gathering configuration information ..
Gathering Network Configuration ....... Done
cmruncl : Network validation complete
cmruncl : Waiting for cluster to form.....
cmruncl : Cluster successfully formed.
cmruncl : Check the syslog files on all nodes in the cluster
cmruncl : to verify that no warnings occurred during startup.

8. Start the virtual machine package, as follows:


# cmrunpkg -v vmname

For example:
# cmrunpkg -v compass1
Running package compass1 on node clowder.
cmrunpkg : Successfully started package compass1.
cmrunpkg : Completed successfully on all packages specified.

9. Verify that the virtual machine package is on and running. Use the Integrity VM and Serviceguard
commands to verify the package status:
# hpvmstatus -Pcompass1
[Virtual Machines]
Virtual Machine Name VM # OS Type State # vCPUs # Devs # Nets Memory
==================== ===== ======= ======== ======= ====== ====== ===========
compass1 11 HPUX On 2 6 3 1 GB
# cmviewcl -v compass1
CLUSTER STATUS
cluster1 up

NODE STATUS STATE


charm up running

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/2/1/0/4/1 lan7
PRIMARY up 0/2/1/0/6/1 lan9
PRIMARY up 0/5/1/0/7/0 lan6
STANDBY up 0/1/2/0 lan1
STANDBY up 0/2/1/0/4/0 lan2
STANDBY up 0/2/1/0/6/0 lan8
STANDBY up LinkAgg0 lan900
STANDBY up 0/0/3/0 lan0

NODE STATUS STATE


clowder up running

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/2/1/0/4/1 lan7
STANDBY up 0/1/2/0 lan1
STANDBY up 0/2/1/0/4/0 lan2
STANDBY up 0/2/1/0/6/0 lan8
STANDBY up LinkAgg0 lan900
PRIMARY up 0/5/1/0/7/0 lan6
PRIMARY up 0/2/1/0/6/1 lan9
STANDBY up 0/0/3/0 lan0

PACKAGE STATUS STATE AUTO_RUN NODE


compass1 up running disabled clowder

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual

Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 0 0 compass1

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled charm
Alternate up enabled clowder (current)

UNOWNED_PACKAGES

PACKAGE STATUS STATE AUTO_RUN NODE


compass3 down halted disabled unowned

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual

This example shows the compass1 package is on, but autorun mode is disabled.
10. Use the following command to enable autorun and failover:
# cmmodpkg -e compass1
cmmodpkg : Completed successfully on all packages specified.

CLUSTER STATUS
cluster1 up

NODE STATUS STATE


charm up running

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/2/1/0/4/1 lan7
PRIMARY up 0/2/1/0/6/1 lan9
PRIMARY up 0/5/1/0/7/0 lan6
STANDBY up 0/1/2/0 lan1
STANDBY up 0/2/1/0/4/0 lan2
STANDBY up 0/2/1/0/6/0 lan8
STANDBY up LinkAgg0 lan900
STANDBY up 0/0/3/0 lan0

NODE STATUS STATE


clowder up running

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/2/1/0/4/1 lan7
STANDBY up 0/1/2/0 lan1
STANDBY up 0/2/1/0/4/0 lan2
STANDBY up 0/2/1/0/6/0 lan8
STANDBY up LinkAgg0 lan900
PRIMARY up 0/5/1/0/7/0 lan6
PRIMARY up 0/2/1/0/6/1 lan9
STANDBY up 0/0/3/0 lan0

PACKAGE STATUS STATE AUTO_RUN NODE


compass1 up running enabled clowder

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual

Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 0 0 compass1

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled charm
Alternate up enabled clowder (current)

Modifying the package control and configuration files


The hpvm_package.sh script creates the following files, which you can optionally modify:
• /etc/cmcluster/vmname/vmname.config – You can optionally modify this configuration file to
control the way Serviceguard operates. Some of the parameters in this file include:
• SERVICE_NAME – Specifies the virtual machine name
• SERVICE_FAIL_FAST_ENABLED – Indicates whether the failure of a service results in the failure of
a node. If the parameter is set to YES, in the event of a service failure, Serviceguard halts the node
on which the service is running. Set this parameter to NO.
• SERVICE_HALT_TIMEOUT
Specifies (in seconds) the amount of time that Serviceguard waits for a service to terminate. In the
event of a service halt, Serviceguard sends a SIGTERM signal to terminate the service. If the process
is not terminated, Serviceguard waits for the specified timeout before sending the SIGKILL signal
to force process termination. If you do not specify this parameter, Serviceguard does not allow

14
any timeout (0 seconds). The maximum value is restricted only by the HP-UX parameter
ULONG_MAX, for an absolute limit of 4,294 seconds. Set this value to 300.
The configuration file describes the guest storage units if they are LVM logical volumes. For volume-group-name, specify
the name of the volume group. For more information about the parameters for configuring storage, see
the Managing Serviceguard manual.
• /etc/cmcluster/vmname/vmname.sh – You can optionally modify this package control script to control
the way Serviceguard handles the virtual machine package. The parameters in this file include:
• SERVICE_NAME – Specifies the virtual machine name. This must be the same as the argument to
the SERVICE_NAME parameter in the vmname.config file.
• SERVICE_CMD – Specifies the path to the monitor script.
• SERVICE_RESTART – Specifies the number of times Serviceguard will try to restart the service. The
default is 0.
If LVM logical volumes are used by the virtual machine, the following parameters are included:
Cluster ASCII file:
VOLUME_GROUP volume-group-name
Package control script:
VG[0]=volume-group-name
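For example, the relevant entries for a guest named compass1 that uses the LVM volume group vgcompass1
might look like the following excerpts (a sketch; the monitor command shown for SERVICE_CMD is an
assumption about what the toolkit generates, and other parameters in the generated files are left unchanged):
Cluster ASCII file:
VOLUME_GROUP /dev/vgcompass1
Package configuration file (vmname.config):
SERVICE_NAME compass1
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300
Package control script (vmname.sh):
VG[0]="vgcompass1"
SERVICE_NAME[0]="compass1"
SERVICE_CMD[0]="/etc/cmcluster/compass1/hpvmmon.sh compass1"
SERVICE_RESTART[0]=""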

Installing the vswitch monitor


After you create the Serviceguard package, install the vswitch monitor on the VM Host, as follows:
1. Change the working directory to /var/opt/hpvm/cluster/.
2. Execute the ./install_vswitchmon.sh script. For example:
# ./install_vswitchmon.sh
This program installs the vswitchmon script.
This script should be run on each HP-UX Serviceguard member in the cluster.
Do you wish to continue with this installation? [no]: yes
Installation of : vswitchmon : completed succesfully
This functionality will be activated during the next reboot
If you would like to manually start this function execute the following command:
/sbin/init.d/vswitchmon start

Installing the ping script


Install the ping script on each virtual machine, as follows:
1. On the virtual machine, untar the SG Integrity VM Toolkit into a temporary directory. Change the
working directory to the temporary directory.
2. Execute the ./install_hpvmsgping.sh script. For example:
# ./install_hpvmsgping.sh
This program installs the hpvmsgping script.
This script should be run on each HP-UX guest managed as a Serviceguard package.
Continue with installation? [no]: y

The HP Virtual Machine Serviceguard ping script only works if it has access to a
set of valid IP addresses for your guests. The supplied address are
periodically pinged to ensure network connectivity between the guest and these
IP addresses. You can specify the IP addresses, or host names, or you can allow
them to be determined from the nameserver entries in the
/etc/resolve.conf file.

You can can specify host names if they are listed in the /etc/hosts file.

If you specify the IP addresses, include at least one address for each subnet
to which the guest is connected. Specify any IP addresses associated with
systems required by the guest for proper operation.

Would you like to enter an IP addresses at this time? [yes]:


Enter an IP address: 16.116.8.75
You have enter 16.116.8.75 as an IP address. Is this correct? [yes]: yes
Would you like to enter another IP addresses at this time? [yes]:
Enter an IP address: charm
You have enter charm as an IP address. Is this correct? [yes]:
Would you like to enter another IP addresses at this time? [yes]: no
You have selected the following IP address:
16.116.8.75 charm
Do you wish to use these addresses [yes]:

The entered IP addresses have been placed in the HP Virtual Machine


Serviceguard ping script configuration file: /etc/hpvmsgping.conf

To add, delete, or change IP addresses, or to change the default timing


parameters, edit the variables in this file. After you edit this file,
stop and start the HP Virtual Machine Serviceguard ping
script using the following commands:

# /sbin/init.d/hpvmsgping stop

# /sbin/init.d/hpvmsgping start

Installation of : hpvmsgping : completed succesfully


This functionality will be activated during the next reboot.
To start this function, execute the following command:
/sbin/init.d/hpvmsgping start

Verifying that virtual machine packages can fail over


To verify that the Serviceguard packages are working properly, use the following commands to perform a
manual switch-over:
1. On the original node, verify that the package is running:
# cmviewcl -v compass1

2. Halt the package:


# cmhaltpkg compass1
Halting package compass1.
cmhaltpkg : Script failed with no restart: compass1 should not be restarted.
Check the syslog and pkg log files for more detailed information.

3. Verify that the package has stopped:


# cmviewcl -v compass1

4. On the adoptive node, verify that the package has started:


# cmviewcl -v compass1

5. On the adoptive node, verify that the guest is on:


# hpvmstatus -Pcompass1
Managing the virtual machine packages
To start, stop, and monitor the virtual machine package, use the Serviceguard commands described in this
section. Do not use the Integrity VM commands (hpvmstart, hpvmstop, and hpvmmigrate).

Starting a virtual machine package


To start a Serviceguard package, enter the following command:
# cmrunpkg vmname

Stopping a virtual machine package


To stop a Serviceguard package, enter the following command:
# cmhaltpkg vmname

Monitoring a virtual machine package


To monitor a Serviceguard package, enter the following command:
# cmviewcl vmname

Modifying a virtual machine package


You can modify the virtual machine resources using the hpvmmodify command. However, if you modify
the guest on one VM Host server, you must make the same changes to the guest on the other cluster nodes
that can run the guest.
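For example, if you add a virtual disk to the guest on one VM Host, run the same command on every other
cluster node that can run the guest (a sketch with illustrative names):
# hpvmmodify -P compass1 -a disk:scsi::lv:/dev/vgcompass1/rlvol2
This keeps the virtual hardware identical wherever the package starts.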
Do not attempt to use the hpvmmigrate command to migrate a virtual machine that is configured as a
Serviceguard package.

Troubleshooting HP Serviceguard virtual machine problems


This section describes how to solve some of the problems that can occur during Serviceguard virtual machine
cluster configuration and operation.
As you analyze problems with the Serviceguard virtual machines, first determine the nature of the problem:
• If the virtual machine package does not fail over, halt the primary node (using the cmhaltnode
command), and verify that the virtual machine runs on the adoptive node with the same workload.

• If the virtual machine package does not start under manual control (using the cmrunpkg command),
use the cmhaltpkg command to stop the package. Then test the virtual machine by starting it
with the hpvmstart command. Use the virtual machine console to ensure that the guest is installed and that
the applications are working properly.
If the guest does not start and displays errors about storage problems (and you are using logical
volumes), you might need to modify the storage units, as follows:
• For LVM logical volumes, enter the following commands:
# vgchange -cn /dev/vgxx
# vgchange -a y /dev/vgxx

• For VxVM logical volumes, enter the following commands:


# vxdg import volume-group-name
# vxdg -g volume-group-name startall

• If you are using files on a logical volume, enter the following command also:
# mount /dev/vgxx/lvolname mount-point

These commands make the storage unit available to the local node.
Procedure checklist
The following procedure provides a quick checklist of the necessary steps to set up Serviceguard to manage
Integrity virtual machines.
1. Install HP Serviceguard A.11.16 or A.11.17 on each cluster node.
2. Install HP Integrity Virtual Machines A.01.20 on each cluster node.
3. Download the HP Serviceguard for Integrity Virtual Machines Toolkit.
4. Configure the cluster, verify the configuration, and distribute it to all the cluster nodes.
5. Create the vswitches and the virtual machines on each VM Host in the cluster. Provide at least two
vswitches, one for the heartbeat and one for the guest applications. Verify that they start and stop
properly.
6. Create the Serviceguard virtual machine package, modify the configuration and package control scripts,
if necessary, and distribute them across the cluster.
7. Install the vswitch monitor on the VM Host to monitor the status of the vswitches.
8. Install the ping script on the virtual machine to monitor the virtual machine network access.
9. Start the virtual machine.
10. Start the Serviceguard failover service.

Contents of the HP Serviceguard for Integrity Virtual Machines Toolkit


The toolkit for creating and managing virtual machines using Serviceguard includes the following files:
• hpvm_package.sh – Creates the template files that you use to define virtual-machine-specific directories
and files. Stored as /var/opt/hpvm/cluster/hpvm_package.sh.
• hpvmkit.sh – The main script called by Serviceguard to start, stop, and monitor virtual machine
packages. Stored as /etc/cmcluster/vmname/hpvmkit.sh.
• hpvmmon.sh – Monitors the virtual machine packages. Stored as /etc/cmcluster/vmname/hpvmmon.sh.
• hpvmsgping.sh – Pings the guest network. Stored as /temp/hpvmsgping.sh.
• hpvmstart.sh – Starts the virtual machine package. Stored as /etc/cmcluster/vmname/hpvmstart.sh.
• hpvmstop.sh – Stops the virtual machine package. Stored as /etc/cmcluster/vmname/hpvmstop.sh.
• hpvmtemplate.sh – Contains service parameters. Renamed to vmname.sh, where vmname is the virtual
machine name. Stored as /etc/cmcluster/vmname/vmname.sh.
• install_hpvmsgping.sh – Installs the ping script on the guest. Stored as /temp/install_hpvmsgping.sh.
• install_vswitchmon.sh – Installs the vswitch monitor on the VM Host. Stored as
/var/opt/hpvm/cluster/install_vswitchmon.sh.
• vswitchmon.sh – Monitors the status of the vswitches on the VM Host. Stored as
/var/opt/hpvm/cluster/vswitchmon.sh.
• hpvmtemplate.conf – Contains guest-specific parameters. Renamed to vmname.conf, where vmname
is the virtual machine name. Stored as /etc/cmcluster/vmname/vmname.conf.
• README – Provides information about the HP Serviceguard for Integrity Virtual Machines Toolkit and
its use. Stored as /var/opt/hpvm/cluster/README.
Release notes
Currently, when running virtual machines as Serviceguard packages, the following are not supported:
• Serviceguard disaster-tolerant solutions, including HP Extended Distance Clusters, HP Metroclusters,
and HP Continentalclusters
• Running Serviceguard in the guest

Glossary
This glossary explains the terms used in this white paper.
adoptive node The cluster member where the package starts after it fails over.
APA Auto Port Aggregation.
application A collection of processes that perform a specific function. In the context of virtual machine clusters,
this refers to any software running on the guest.
asymmetric A cluster configuration in which the cluster nodes do not have access to the same physical storage
Serviceguard and network devices.
configuration
available resources Processors, memory, and I/O resources that are not assigned to a virtual machine. These resources
are available to be used in new partitions or can be added to existing partitions.
cluster A set of two or more systems configured together to host workloads; users are unaware that more
than one system is hosting the workload.
cluster member A cluster node that is actively participating in the Serviceguard cluster.
cluster node A system set up to be a part of a Serviceguard cluster.
dedicated device A PNIC or storage unit that is dedicated to a specific virtual machine. A dedicated device cannot
be used by multiple virtual machines. If the virtual machine tries to access a dedicated device
that is being used by another guest, it is not allowed to start.
EFI Extensible firmware interface. The system firmware user interface that allows boot-related
configuration changes and operations on Integrity servers. For example, EFI provides ways to
specify boot options and list boot devices. The boot console handler (BCH) provides a similar
function for PA-RISC systems.
entitlement The amount of a system resource (for example, processor) that is guaranteed to a virtual machine.
The actual allocation of resources to the virtual machine may be greater or less than its entitlement
depending on the virtual machine's demand for processor resources and the overall system
processor load.
event log Information about system events. An event log indicates which event has occurred, when and
where it happened, and its severity (the alert level). Event logs do not rely on normal I/O operation.
extensible See EFI.
firmware interface
failover The operation that takes place when a primary service (network, storage, or CPU) fails, and the
application continues operation on a secondary unit. In the case of Serviceguard virtual machines,
the virtual machine can fail over to another cluster member. In case of a network failure, on a
properly configured system the virtual machine can fail over to another LAN on the same cluster
node.
guest The virtual machine running the guest OS and guest applications.
guest administrator The administrator of a virtual machine. A guest administrator can operate the virtual machine
using the hpvmconsole command, with actions that can affect the specific guest only.
guest console The virtual machine console that is started by the hpvmconsole command.
guest operator The administrator of the guest OS. This level of privilege gives complete control of the virtual
machine but does not allow control of the other guests, the VM Host, or the storage units.
guest OS Guest operating system.
HA High availability. The ability of a server or partition to continue operating despite the failure of
one or more components. High availability requires redundant resources, such as processors and
memory, in specific combinations.
high availability See HA.

host • A system or partition that is running an instance of an operating system.
• The physical machine that is the VM Host for one or more virtual machines.
host administrator The system administrator. This level of privilege provides control of the VM Host system and its
resources, as well as the ability to create and manage guests.
host name The name of a system or partition that is running an OS instance.
host OS The operating system that is running on the host machine.
Ignite-UX The HP-UX Ignite server product, used as a core build image to create or reload HP-UX servers.
Integrity Virtual Using Integrity Virtual Machines, you can install and run multiple systems (virtual machines) on
Machines the same physical host system. This can be used for hardware consolidation, resource utilization,
or flexibility in system management. Once it has been created, the virtual machine can be installed
and managed like a physical system.
Integrity VM The HP Integrity Virtual Machines product.
localnet The local network created by Integrity VM for internal, local communications. Guests can
communicate on the localnet, but the VM Host cannot.
LUN Logical unit number.
migration Regarding Serviceguard clusters, the operation of manually stopping a package on one cluster
member and starting it on another. Migrating the package (for example, a virtual machine) can
be useful in system management procedures and workload balancing. See also, virtual machine
migration.
NIC Network interface card. Also called network adapter.
NSPOF No Single Point of Failure. A configuration imperative that implies the use of redundancy and
high availability to ensure that the failure of a single component does not affect the operations
of the machine.
package The script that is customized for each virtual machine package, containing specific variables and
configuration script parameters, including logical volume definitions, for the specific virtual machine.
package control The script that contains parameters controlling how Serviceguard operates.
script
PNIC Physical network interface card (NIC).
primary node The cluster member on which a failed-over package was originally running.
redundancy A method of providing high availability that makes use of multiple copies of storage or network
units to ensure services are always available. For example, disk mirroring.
restricted device A physical device that can be accessed only by the VM Host system. For example, the VM Host
boot device should be a restricted device.
Serviceguard Serviceguard allows you to create high availability clusters of HP 9000 or HP Integrity servers.
Many customers using Serviceguard want to manage virtual machines as Serviceguard packages.
A Serviceguard package groups application services (individual HP-UX processes) together and
maintains them on multiple nodes in the cluster, making them available for failover.
SG for Integrity HP Serviceguard for Integrity Virtual Machines Toolkit. The set of templates and scripts provided
VM Toolkit for setting up and managing virtual machine packages.
SGeRAC Serviceguard extension for real application clusters.
SGeSAP Serviceguard extension for SAP.
shared device A virtual device that can be used by more than one virtual machine.
storage unit A file, DVD, disk, or logical volume that is on the VM Host and is used by the virtual machines
running on the VM Host.
symmetric A cluster configuration in which the nodes share access to the same storage and network devices.
Serviceguard
configuration

virtual console The virtualized console of a virtual machine that emulates the functionality of the Management
Processor interface for HP Integrity servers. Each virtual machine has its own virtual console from
which the virtual machine can be powered on or off, booted or shut down, and from which the
guest OS can be selected.
virtual device An emulation of a physical device. This emulation, used as a device by a virtual machine,
effectively maps a virtual device to an entity (for example, a DVD) on the VM Host.
virtual machine An emulation of a physical system. The guest OS and its applications run on the virtual machine
in the same ways as if they were running on a dedicated physical system.
virtual machine The executable program on the VM Host that manifests the individual virtual machine. It
application communicates with the loadable drivers based on information in the guest-specific configuration
file, and it instantiates the virtual machine.
virtual machine See virtual console.
console
virtual machine See VM Host.
host
virtual machine The operation of migrating a virtual machine from one VM Host system to another, using the
migration Integrity VM command hpvmmigrate. Do not use this command for virtual machine packages.
virtual machine A virtual machine that has been configured as a Serviceguard package.
package
virtual network A LAN shared by the virtual machines running on the same VM Host or in the same Serviceguard
cluster.
virtual switch See vswitch.
VM See virtual machine.
VM Host An HP Integrity server running HP-UX with the HP Integrity Virtual Machines software installed.
Virtual machines are manifested as processes executing on the VM Host. Configuration,
management, and monitoring of virtual machines are performed on the VM Host.
VM Host cluster A VM Host system that is running Serviceguard and is capable of running virtual machine
member packages.
VNIC A virtual network interface card (NIC).
vswitch Virtual switch. Refers to both a dynamically loadable kernel module (DLKM) and a user-mode
component implementing a virtual network switch. The virtualized network interface cards (NICs)
for guest machines are attached to the virtual switches.
WBEM Web-Based Enterprise Management. A set of Web-based information services standards developed
by the Distributed Management Task Force, Inc. A WBEM provider offers access to a resource.
WBEM clients send requests to providers to get information about and access to the registered
resources.
Web-Based See WBEM.
Enterprise
Management
workload The collection of processes in a virtual machine.

For more information
See the following related publications:
• HP Auto Port Aggregation (APA) Support Guide
• Managing Serviceguard
• HP Integrity Virtual Machines Installation, Configuration, and Administration
• Designing High Availability Solutions with Serviceguard and Integrity VM (an HP white paper)
