c02058055 - HP Serviceguard To Manage HPVM
NOTE: This paper assumes you are familiar with Serviceguard and Integrity Virtual Machines. Additional
references are listed at the end of this paper.
Of course, Figure 1 is a greatly simplified illustration. Your configuration is likely to hold many Integrity
servers, each of which hosts many virtual machines.
Ease of maintenance is an important consideration in a cluster environment. Applications can overuse or
underuse resources, network performance can be erratic, or virtual machines can simply fail with little or no
information. High availability should not come at the cost of maintainability. A site plan is useful for
determining how to set up the Serviceguard cluster that will protect your virtual machines (and your customers)
from excessive downtime. (The Managing Serviceguard manual provides sample worksheets.)
The key to a manageable cluster is to keep it simple. Virtual machines share the network, storage, and
processor resources of the VM Host. When a virtual machine fails over to another cluster member, the guest
(the operating system and applications on the virtual machine) must have access to the same virtual devices as it did on the
failed VM Host. As you configure each cluster member, provide sufficient and appropriate processor, network,
and storage resources for all the guests that might run on it. If a guest cannot access the resources it needs,
it will not start.
Two cluster nodes (Node 1 and Node 2) run the VM Host. Node 1 hosts two virtual machines (VM1 and
VM4), and Node 2 hosts VM2. VM1 is configured as highly available; VM4, which has local storage only,
can run only on Node 1. VM1 accesses the VM1 disks and VM2 accesses the VM2 disks, and both nodes
are connected to both sets of disks. VM1 and VM2 are configured as virtual machine packages and can
run on either Node 1 or Node 2.
Each node has two LANs. One is dedicated to the heartbeat, which is part of Serviceguard operation. The
other LAN serves as the primary LAN and is shared by the virtual machines.
Because redundancy is built into the cluster design, the root disk and the guest storage disks are mirrored,
and a serial line acts as a standby heartbeat LAN.
The Serviceguard configuration on each node requires a customized package control script. This control
script identifies the guests that can run on the node. For more information, see Configuring HP Serviceguard
packages.
If Serviceguard detects that Node 1 (the primary node) is not available, VM1 is started on Node 2, as
illustrated in Figure 3. Meanwhile, VM4, which is not configured as a virtual machine package, is unavailable
until service is restored to Node 1.
Because the two nodes have access to the same storage units, the failover of VM1 from Node 1 to the
symmetrically configured Node 2 is barely discernible to users. Different configurations and guest requirements
present different challenges. Virtual machines can access many different types of storage units (files, disks,
logical volumes, and DVDs) on the VM Host. However, the root disk and application storage devices are
presented to the virtual machine as virtual devices. Therefore, the same storage units should back the
virtual devices for the root disk and application storage on each node in the cluster. If these differ, manual
intervention might be necessary after the virtual machine fails over to another cluster member. If you use
logical volumes, you must include the Logical Volume Manager (LVM) or Veritas Volume Manager (VxVM)
information in the package configuration file, as described in “Configuring HP Serviceguard virtual machine
packages.” For more information about configuring storage types in Serviceguard clusters, see the Managing
Serviceguard manual.
You can also provide network failover from one LAN to another on the same node by including two LANs
for guest use as well as the dedicated heartbeat LAN (for a total of three LANs on the virtual machine
package). If you already use Auto Port Aggregation (APA), you have link-level network redundancy. Supply
the APA device names in the package configuration, as described in “Configuring the Serviceguard Package.”
The following sections describe the considerations for various types of storage units and network configurations.
For more information about how to configure virtual storage to meet the needs of the guests, see the HP
Integrity Virtual Machines Installation, Configuration, and Administration manual. The following sections
describe how to achieve the desired level of storage and network high availability.
Figure 5 APA network cluster configuration
# /var/opt/hpvm/cluster/hpvm_package.sh vmname
Specify the virtual machine name (vmname) exactly as it was entered in the hpvmcreate command
that was used to create the virtual machine. You can use the hpvmstatus command to verify the virtual
machine name. For example, to create a Serviceguard package for the virtual machine named
compass1, enter the following commands:
# hpvmstatus
[Virtual Machines]
Virtual Machine Name VM # OS Type State # vCPUs # Devs # Nets Memory
==================== ===== ======= ============ ======= ====== ====== ======
compass1 4 HPUX On (EFI) 1 3 1 1 GB
#
# /var/opt/hpvm/cluster/hpvm_package.sh compass1
This script will assist the user develop and distribute a set of Serviceguard
package configuration template files and associated start, stop and monitor scripts.
The templates generated by these scripts will handle many guest configurations,
but it is only a template and may not be appropriate for your particular
configuration needs. You are encouraged to review and modify these template
files as needed for your particular environment.
Would you like to create a failover package for this Virtual Machine summarized above? (y/n):y
Would you like to distribute the package to each cluster member? (y/n):y
The failover package template files for the Virtual Machine were successfully created.
4. Use the cmcheckconf command to verify that the package is set up correctly:
# cmcheckconf -v -C /etc/cmcluster/cluster-name.config -P /etc/cmcluster/vmname/vmname.config
For example, to verify the compass1 package, enter the following command:
# cmcheckconf -v -C /etc/cmcluster/cluster1.config -P /etc/cmcluster/compass1/compass1.config
5. Distribute the package configuration file to the /etc/cmcluster/compass1/ directory on all cluster
nodes:
# cmapplyconf -v -C /etc/cmcluster/cluster-name.config -P /etc/cmcluster/vmname/vmname.config
For example:
# cmapplyconf -v -C /etc/cmcluster/cluster1.config -P /etc/cmcluster/compass1/compass1.config
For example:
# hpvmstatus
[Virtual Machines]
Virtual Machine Name VM # OS Type State # vCPUs # Devs # Nets Memory
==================== ===== ======= ======== ======= ====== ====== ===========
compass1 2 HPUX On 2 1 2 1 GB
compass2 11 HPUX Off 2 6 3 1 GB
compass3 12 HPUX Off 2 6 4 2 GB
compass4 13 HPUX Off 2 6 5 2 GB
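If you script this verification step, the hpvmstatus listing can be parsed with standard tools. The following sketch runs against a captured sample of the listing format shown above (sample text, not a live system, so it can be tried anywhere); the column positions are assumptions based on the output in this paper.

```shell
# Emit a captured hpvmstatus listing (sample only; on a live VM Host
# you would pipe the real command instead: hpvmstatus | awk ...).
hpvmstatus_sample() {
cat <<'EOF'
[Virtual Machines]
Virtual Machine Name VM #  OS Type State    # vCPUs # Devs # Nets Memory
==================== ===== ======= ======== ======= ====== ====== ======
compass1                 2 HPUX    On             2      1      2   1 GB
compass2                11 HPUX    Off            2      6      3   1 GB
EOF
}

# Skip the three header lines and print each guest's name and state.
# Prints:
#   compass1 On
#   compass2 Off
hpvmstatus_sample | awk 'NR > 3 { print $1, $4 }'
```

The same pipeline can feed a monitoring script that raises an alert when an expected guest is not in the On state.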
NOTE: This is the last time you should use the hpvmstop and hpvmstart commands to stop and
start the virtual machine. As a Serviceguard package, the virtual machine is stopped and started with
the cmrunpkg and cmhaltpkg commands.
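For example, once the package exists, a routine restart of the guest is performed entirely through Serviceguard; the package name below (compass1) is taken from the examples in this paper:

```
# cmhaltpkg compass1       Halts the package, stopping the virtual machine
# cmrunpkg -v compass1     Starts the package on an eligible cluster node
```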
7. If the cluster is not running, use the cmruncl command to start it:
# cmruncl -v
cmruncl : Validating network configuration...
Gathering configuration information ..
Gathering Network Configuration ....... Done
cmruncl : Network validation complete
cmruncl : Waiting for cluster to form.....
cmruncl : Cluster successfully formed.
cmruncl : Check the syslog files on all nodes in the cluster
cmruncl : to verify that no warnings occurred during startup.
For example:
# cmrunpkg -v compass1
Running package compass1 on node clowder.
cmrunpkg : Successfully started package compass1.
cmrunpkg : Completed successfully on all packages specified.
9. Verify that the virtual machine package is on and running. Use the Integrity VM and the Serviceguard
commands to verify the package status. Then, enter the cmmodpkg command to enable autorun and
failover:
# hpvmstatus -Pcompass1
[Virtual Machines]
Virtual Machine Name VM # OS Type State # vCPUs # Devs # Nets Memory
==================== ===== ======= ======== ======= ====== ====== ===========
compass1 11 HPUX On 2 6 3 1 GB
# cmviewcl -v compass1
CLUSTER STATUS
cluster1 up
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/2/1/0/4/1 lan7
PRIMARY up 0/2/1/0/6/1 lan9
PRIMARY up 0/5/1/0/7/0 lan6
STANDBY up 0/1/2/0 lan1
STANDBY up 0/2/1/0/4/0 lan2
STANDBY up 0/2/1/0/6/0 lan8
STANDBY up LinkAgg0 lan900
STANDBY up 0/0/3/0 lan0
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/2/1/0/4/1 lan7
STANDBY up 0/1/2/0 lan1
STANDBY up 0/2/1/0/4/0 lan2
STANDBY up 0/2/1/0/6/0 lan8
STANDBY up LinkAgg0 lan900
PRIMARY up 0/5/1/0/7/0 lan6
PRIMARY up 0/2/1/0/6/1 lan9
STANDBY up 0/0/3/0 lan0
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual
Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 0 0 compass1
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled charm
Alternate up enabled clowder (current)
UNOWNED_PACKAGES
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual
This example shows that the compass1 package is running, but autorun mode is disabled.
10. Use the following command to enable autorun and failover:
# cmmodpkg -e compass1
cmmodpkg : Completed successfully on all packages specified.
CLUSTER STATUS
cluster1 up
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/2/1/0/4/1 lan7
PRIMARY up 0/2/1/0/6/1 lan9
PRIMARY up 0/5/1/0/7/0 lan6
STANDBY up 0/1/2/0 lan1
STANDBY up 0/2/1/0/4/0 lan2
STANDBY up 0/2/1/0/6/0 lan8
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/2/1/0/4/1 lan7
STANDBY up 0/1/2/0 lan1
STANDBY up 0/2/1/0/4/0 lan2
STANDBY up 0/2/1/0/6/0 lan8
STANDBY up LinkAgg0 lan900
PRIMARY up 0/5/1/0/7/0 lan6
PRIMARY up 0/2/1/0/6/1 lan9
STANDBY up 0/0/3/0 lan0
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual
Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 0 0 compass1
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled charm
Alternate up enabled clowder (current)
any timeout (0 seconds). The maximum value is restricted only by the HP-UX parameter
ULONG_MAX, for an absolute limit of 4,294 seconds. Set this value to 300.
The configuration file describes the guest storage units if they are LVM logical volumes. For volume-group-name, specify
the name of the volume group. For more information about the parameters for configuring storage, see
the Managing Serviceguard manual.
• /etc/cmcluster/vmname/vmname.sh – You can optionally modify this package control script to control
the way Serviceguard handles the virtual machine package. The parameters in this file include:
• SERVICE_NAME – Specifies the virtual machine name. This must be the same as the argument to
the SERVICE_NAME parameter in the vmname.conf file.
• SERVICE_CMD – Specifies the path to the monitor script.
• SERVICE_RESTART – Specifies the number of times Serviceguard will try to restart the service. The
default is 0.
If LVM logical volumes are used by the virtual machine, the following parameters are included:
Cluster ASCII file:
VOLUME_GROUP volume-group-name
Package control script:
VG[0]=volume-group-name
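Putting these parameters together, a minimal excerpt for a guest named compass1 might look like the following. This is a sketch, not toolkit output: the monitor-script path and the volume-group name are illustrative assumptions, and your generated files may differ.

```
# Entry in the cluster ASCII file
VOLUME_GROUP /dev/vgvm01              # volume-group name is illustrative

# Excerpt from /etc/cmcluster/compass1/compass1.sh (package control script)
SERVICE_NAME[0]="compass1"            # must match SERVICE_NAME in compass1.conf
SERVICE_CMD[0]="/etc/cmcluster/compass1/monitor.sh compass1"   # path is illustrative
SERVICE_RESTART[0]=""                 # no automatic restarts (the default of 0)
VG[0]="vgvm01"                        # LVM storage used by the guest
```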
The HP Virtual Machine Serviceguard ping script only works if it has access to a
set of valid IP addresses for your guests. The supplied addresses are
periodically pinged to ensure network connectivity between the guest and these
IP addresses. You can specify the IP addresses or host names, or you can allow
them to be determined from the nameserver entries in the
/etc/resolv.conf file.
You can specify host names if they are listed in the /etc/hosts file.
If you specify the IP addresses, include at least one address for each subnet.
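If you let the ping targets default to the nameserver entries, they are read from the guest's /etc/resolv.conf file. A minimal file has one nameserver line per DNS server; the domain and addresses below are illustrative:

```
# /etc/resolv.conf on the guest
domain example.com
nameserver 192.0.2.10
nameserver 192.0.2.11
```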
# /sbin/init.d/hpvmsgping stop
# /sbin/init.d/hpvmsgping start
• If the virtual machine package does not start under manual control (using the cmrunpkg command),
use the cmhaltpkg command to halt the package. Then test the virtual machine by starting it
with the hpvmstart command. Use the virtual machine console to ensure that it is installed and that
the applications are working properly.
If the guest does not start and displays errors about storage problems (and you are using logical
volumes), you might need to modify the storage units, as follows:
• For LVM logical volumes, enter the following commands:
# vgchange -cn /dev/vgxx
# vgchange -a y /dev/vgxx
• If you are using files on a logical volume, enter the following command also:
# mount /dev/vgxx
These commands make the storage unit available to the local node.
Procedure checklist
The following procedure provides a quick checklist of the necessary steps to set up Serviceguard to manage
Integrity virtual machines.
1. Install HP Serviceguard A.11.16 or A.11.17 on each cluster node.
2. Install HP Integrity Virtual Machines A.01.20 on each cluster node.
3. Download the HP Serviceguard for Integrity Virtual Machines Toolkit.
4. Configure the cluster, verify the configuration, and distribute it to all the cluster nodes.
5. Create the vswitches and the virtual machines on each VM Host in the cluster. Provide at least two
vswitches, one for the heartbeat and one for the guest applications. Verify that they start and stop
properly.
6. Create the Serviceguard virtual machine package, modify the configuration and package control scripts,
if necessary, and distribute them across the cluster.
7. Install the vswitch monitor on the VM Host to monitor the status of the vswitches.
8. Install the ping script on the virtual machine to monitor the virtual machine network access.
9. Start the virtual machine.
10. Start the Serviceguard failover service.
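Steps 4 through 10 of this checklist map onto a short command sequence. The sketch below strings together the commands shown earlier in this paper, assuming a cluster named cluster1 and a guest named compass1 (both names and the file paths are taken from the examples above):

```
# 4. Verify and distribute the cluster configuration
# cmcheckconf -v -C /etc/cmcluster/cluster1.config
# cmapplyconf -v -C /etc/cmcluster/cluster1.config

# 6. Create and distribute the virtual machine package
# /var/opt/hpvm/cluster/hpvm_package.sh compass1
# cmcheckconf -v -C /etc/cmcluster/cluster1.config -P /etc/cmcluster/compass1/compass1.config
# cmapplyconf -v -C /etc/cmcluster/cluster1.config -P /etc/cmcluster/compass1/compass1.config

# 9-10. Start the cluster (if needed), run the package, and enable failover
# cmruncl -v
# cmrunpkg -v compass1
# cmmodpkg -e compass1
```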
Release notes
Currently, when running virtual machines as Serviceguard packages, the following are not supported:
• Serviceguard disaster-tolerant solutions, including HP Extended Distance Clusters, HP Metroclusters,
and HP Continentalclusters
• Running Serviceguard in the guest
Glossary
This glossary explains the terms used in this white paper.
adoptive node The cluster member where the package starts after it fails over.
APA Auto Port Aggregation.
application A collection of processes that perform a specific function. In the context of virtual machine clusters,
this refers to any software running on the guest.
asymmetric Serviceguard configuration A cluster configuration in which the cluster nodes do not have
access to the same physical storage and network devices.
available resources Processors, memory, and I/O resources that are not assigned to a virtual machine. These resources
are available to be used in new partitions or can be added to existing partitions.
cluster A set of two or more systems configured together to host workloads; users are unaware that more
than one system is hosting the workload.
cluster member A cluster node that is actively participating in the Serviceguard cluster.
cluster node A system set up to be a part of a Serviceguard cluster.
dedicated device A PNIC or storage unit that is dedicated to a specific virtual machine. A dedicated device cannot
be used by multiple virtual machines. If the virtual machine tries to access a dedicated device
that is being used by another guest, it is not allowed to start.
EFI Extensible firmware interface. The system firmware user interface that allows boot-related
configuration changes and operations on Integrity servers. For example, EFI provides ways to
specify boot options and list boot devices. The boot console handler (BCH) provides a similar
function for PA-RISC systems.
entitlement The amount of a system resource (for example, processor) that is guaranteed to a virtual machine.
The actual allocation of resources to the virtual machine may be greater or less than its entitlement
depending on the virtual machine's demand for processor resources and the overall system
processor load.
event log Information about system events. An event log indicates which event has occurred, when and
where it happened, and its severity (the alert level). Event logs do not rely on normal I/O operation.
extensible firmware interface See EFI.
failover The operation that takes place when a primary service (network, storage, or CPU) fails, and the
application continues operation on a secondary unit. In the case of Serviceguard virtual machines,
the virtual machine can fail over to another cluster member. In case of a network failure, on a
properly configured system the virtual machine can fail over to another LAN on the same cluster
node.
guest The virtual machine running the guest OS and guest applications.
guest administrator The administrator of a virtual machine. A guest administrator can operate the virtual machine
using the hpvmconsole command with actions that affect the specific guest only.
guest console The virtual machine console that is started by the hpvmconsole command.
guest operator The administrator of the guest OS. This level of privilege gives complete control of the virtual
machine but does not allow control of the other guests, the VM Host, or the storage units.
guest OS Guest operating system.
HA High availability. The ability of a server or partition to continue operating despite the failure of
one or more components. High availability requires redundant resources, such as processors and
memory, in specific combinations.
high availability See HA.
host • A system or partition that is running an instance of an operating system.
• The physical machine that is the VM Host for one or more virtual machines.
host administrator The system administrator. This level of privilege provides control of the VM Host system and its
resources, as well as creating and managing guests.
host name The name of a system or partition that is running an OS instance.
host OS The operating system that is running on the host machine.
Ignite-UX The HP-UX Ignite server product, used as a core build image to create or reload HP-UX servers.
Integrity Virtual Machines Using Integrity Virtual Machines, you can install and run multiple systems
(virtual machines) on the same physical host system. This can be used for hardware consolidation,
resource utilization, or flexibility in system management. Once it has been created, the virtual machine
can be installed and managed like a physical system.
Integrity VM The HP Integrity Virtual Machines product.
localnet The local network created by Integrity VM for internal, local communications. Guests can
communicate on the localnet, but the VM Host cannot.
LUN Logical unit number.
migration Regarding Serviceguard clusters, the operation of manually stopping a package on one cluster
member and starting it on another. Migrating the package (for example, a virtual machine) can
be useful in system management procedures and workload balancing. See also virtual machine
migration.
NIC Network interface card. Also called network adapter.
NSPOF No Single Point of Failure. A configuration imperative that implies the use of redundancy and
high availability to ensure that the failure of a single component does not affect the operations
of the machine.
package configuration script The script that is customized for each virtual machine package, containing
specific variables and parameters, including logical volume definitions, for the specific virtual machine.
package control script The script that contains parameters controlling how Serviceguard operates.
PNIC Physical network interface card (NIC).
primary node The cluster member on which a failed-over package was originally running.
redundancy A method of providing high availability that makes use of multiple copies of storage or network
units to ensure services are always available. For example, disk mirroring.
restricted device A physical device that can be accessed only by the VM Host system. For example, the VM Host
boot device should be a restricted device.
Serviceguard Serviceguard allows you to create high availability clusters of HP 9000 or HP Integrity servers.
Many customers using Serviceguard want to manage virtual machines as Serviceguard packages.
A Serviceguard package groups application services (individual HP-UX processes) together and
maintains them on multiple nodes in the cluster, making them available for failover.
SG for Integrity VM Toolkit HP Serviceguard for Integrity Virtual Machines Toolkit. The set of templates
and scripts provided for setting up and managing virtual machine packages.
SGeRAC Serviceguard Extension for Real Application Clusters.
SGeSAP Serviceguard Extension for SAP.
shared device A virtual device that can be used by more than one virtual machine.
storage unit A file, DVD, disk, or logical volume that is on the VM Host and is used by the virtual machines
running on the VM Host.
symmetric Serviceguard configuration A cluster configuration in which the nodes share access to the
same storage and network devices.
virtual console The virtualized console of a virtual machine that emulates the functionality of the Management
Processor interface for HP Integrity servers. Each virtual machine has its own virtual console from
which the virtual machine can be powered on or off, booted or shut down, and from which the
guest OS can be selected.
virtual device An emulation of a physical device. This emulation, used as a device by a virtual machine,
effectively maps a virtual device to an entity (for example, a DVD) on the VM Host.
virtual machine An emulation of a physical system. The guest OS and its applications run on the virtual machine
in the same ways as if they were running on a dedicated physical system.
virtual machine application The executable program on the VM Host that manifests the individual virtual
machine. It communicates with the loadable drivers based on information in the guest-specific
configuration file, and it instantiates the virtual machine.
virtual machine console See virtual console.
virtual machine host See VM Host.
virtual machine migration The operation of migrating a virtual machine from one VM Host system to
another, using the Integrity VM command hpvmmigrate. Do not use this command for virtual machine
packages.
virtual machine package A virtual machine that has been configured as a Serviceguard package.
virtual network A LAN shared by the virtual machines running on the same VM Host or in the same Serviceguard
cluster.
virtual switch See vswitch.
VM See virtual machine.
VM Host An HP Integrity server running HP-UX with the HP Integrity Virtual Machines software installed.
Virtual machines are manifested as processes executing on the VM Host. Configuration,
management, and monitoring of virtual machines is performed on the VM Host.
VM Host cluster member A VM Host system that is running Serviceguard and is capable of running
virtual machine packages.
VNIC A virtual network interface card (NIC).
vswitch Virtual switch. Refers to both a dynamically loadable kernel module (DLKM) and a user-mode
component implementing a virtual network switch. The virtualized network interface cards (NICs)
for guest machines are attached to the virtual switches.
WBEM Web-Based Enterprise Management. A set of Web-based information services standards developed
by the Distributed Management Task Force, Inc. A WBEM provider offers access to a resource.
WBEM clients send requests to providers to get information about and access to the registered
resources.
Web-Based Enterprise Management See WBEM.
workload The collection of processes in a virtual machine.
For more information
See the following related publications:
• HP Auto Port Aggregation (APA) Support Guide
• Managing Serviceguard
• HP Integrity Virtual Machines Installation, Configuration, and Administration
• Designing High Availability Solutions with Serviceguard and Integrity VM (an HP white paper)