Using DRBD in Heartbeat R1
Running Heartbeat clusters in release 1 compatible configuration is now considered obsolete by the Linux-HA development team. However, it is still widely used in the field, which is why it is documented here in this section.

Advantages. Configuring Heartbeat in R1-compatible mode has some advantages over using CRM configuration. In particular:

- Heartbeat R1-compatible clusters are simple and easy to configure.
- It is fairly straightforward to extend Heartbeat's functionality with custom, R1-style resource agents.
Disadvantages. Disadvantages of R1-compatible configuration, as opposed to CRM configuration, include:

- Cluster configuration must be kept in sync manually between cluster nodes; it is not propagated automatically.
- While node monitoring is available, resource-level monitoring is not. Individual resources must be monitored by an external monitoring system.
- Resource group support is limited to two resource groups. CRM clusters, by contrast, support any number, and also come with a complex resource-level constraint framework.
Another disadvantage, namely the fact that R1-style configuration limits cluster size to two nodes (whereas CRM clusters support up to 255), is largely irrelevant for setups involving DRBD, as DRBD itself is limited to two nodes.
Heartbeat uses the drbddisk resource agent to promote the DRBD resource named mysql to the primary role on whichever node is currently the active node. Of course, a corresponding resource must exist and be configured in /etc/drbd.conf for this to work. That DRBD resource translates to the block device named /dev/drbd0, which contains an ext3 filesystem that is to be mounted at /var/lib/mysql (the default location for MySQL data files). The resource group also contains a service IP address, 192.168.42.1. Heartbeat will make sure that this IP address is configured and available on whichever node is currently active. Finally, Heartbeat will use the LSB resource agent named mysql in order to start the MySQL daemon, which will then find its data files at /var/lib/mysql and be able to listen on the service IP address, 192.168.42.1. It is important to understand that the resources listed in the haresources file are always evaluated from left to right when resources are being started, and from right to left when they are being stopped.
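The haresources entry being described here is not reproduced above, but based on the resource names given it would look roughly like this (a sketch only; the node name bob is borrowed from the stacked example further below and may differ in your setup):

  bob drbddisk::mysql Filesystem::/dev/drbd0::/var/lib/mysql::ext3 \
      192.168.42.1 mysql

Read from left to right, this promotes the DRBD resource mysql, mounts /dev/drbd0 at /var/lib/mysql, configures the service IP address 192.168.42.1, and finally starts the mysql LSB service; stopping proceeds in the reverse order.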
Note To have a stacked resource managed by Heartbeat, you must first configure it as outlined in the section called Configuring a stacked resource.
The stacked resource is managed by Heartbeat by way of the drbdupper resource agent. That resource agent is distributed, as all other Heartbeat R1 resource agents, in /etc/ha.d/resource.d. It is to stacked resources what the drbddisk resource agent is to conventional, unstacked resources. drbdupper takes care of managing both the lower-level resource and the stacked resource. Consider the following haresources example, which would replace the one given in the previous section:

  bob 192.168.42.1 \
      drbdupper::mysql-U Filesystem::/dev/drbd1::/var/lib/mysql::ext3 \
      mysql

Note the following differences to the earlier example:

- You start the cluster IP address before all other resources. This is necessary because stacked resource replication uses a connection from the cluster IP address to the node IP address of the third node. Lower-level resource replication, by contrast, uses a connection between the physical node IP addresses of the two cluster nodes.
- You pass the stacked resource name to drbdupper (in this example, mysql-U).
- You configure the Filesystem resource agent to mount the DRBD device associated with the stacked resource (in this example, /dev/drbd1), not the lower-level one.
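For orientation, a stacked resource such as mysql-U is defined in /etc/drbd.conf using a stacked-on-top-of section. The following is only a sketch: the third node's host name (charlie), its backing disk, its address, and the port numbers are assumptions for illustration; only the device /dev/drbd1 and the cluster IP address 192.168.42.1 are taken from the example above.

  resource mysql-U {
    protocol A;

    stacked-on-top-of mysql {
      device    /dev/drbd1;
      address   192.168.42.1:7789;   # replication uses the cluster IP
    }

    on charlie {                      # hypothetical third (backup) node
      device    /dev/drbd1;
      disk      /dev/sda6;
      address   192.168.42.3:7789;    # node IP of the third node
      meta-disk internal;
    }
  }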
This is also the approach you would use in case of a kernel upgrade (which also requires the installation of a matching DRBD version).
Another advantage, namely the fact that CRM clusters support up to 255 nodes in a single cluster, is somewhat irrelevant for setups involving DRBD (DRBD itself being limited to two nodes).

Disadvantages. Configuring Heartbeat in CRM mode also has some disadvantages in comparison to using R1-compatible configuration. In particular:

- Heartbeat CRM clusters are comparatively complex to configure and administer.
- Extending Heartbeat's functionality with custom OCF resource agents is non-trivial.
Note This disadvantage is somewhat mitigated by the fact that you do have the option of using custom (or legacy) R1-style resource agents in CRM clusters.
The remainder of the cluster configuration is maintained in the Cluster Information Base (CIB), covered in detail in the following section. Unlike the two basic configuration files, ha.cf and authkeys, the CIB need not be manually distributed among cluster nodes; the Heartbeat services take care of that automatically.
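As a reminder of what those two files might look like, here is a minimal sketch; the node names, the communication interface, the timing values, and the authentication passphrase are assumptions and must be adapted to your environment. The crm yes directive is what switches Heartbeat into CRM mode:

  # /etc/ha.d/ha.cf -- must be kept identical on both cluster nodes
  autojoin none
  node alice
  node bob
  bcast eth0
  keepalive 1
  deadtime 30
  crm yes

  # /etc/ha.d/authkeys -- must be kept identical on both nodes, mode 0600
  auth 1
  1 sha1 SomeSecretPassphrase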
Even though you are using Heartbeat in CRM mode, you may still use R1-compatible resource agents such as drbddisk. This resource agent provides no secondary-node monitoring; it ensures only resource promotion and demotion. In order to enable a DRBD-backed configuration for a MySQL database in a Heartbeat CRM cluster with drbddisk, you would use a configuration like this:

<group ordered="true" collocated="true" id="rg_mysql">
  <primitive class="heartbeat" type="drbddisk" provider="heartbeat" id="drbddisk_mysql">
    <meta_attributes>
      <attributes>
        <nvpair name="target_role" value="started"/>
      </attributes>
    </meta_attributes>
    <instance_attributes>
      <attributes>
        <nvpair name="1" value="mysql"/>
      </attributes>
    </instance_attributes>
  </primitive>
  <primitive class="ocf" type="Filesystem" provider="heartbeat" id="fs_mysql">
    <instance_attributes>
      <attributes>
        <nvpair name="device" value="/dev/drbd0"/>
        <nvpair name="directory" value="/var/lib/mysql"/>
        <nvpair name="type" value="ext3"/>
      </attributes>
    </instance_attributes>
  </primitive>
  <primitive class="ocf" type="IPaddr2" provider="heartbeat" id="ip_mysql">
    <instance_attributes>
      <attributes>
        <nvpair name="ip" value="192.168.42.1"/>
        <nvpair name="cidr_netmask" value="24"/>
        <nvpair name="nic" value="eth0"/>
      </attributes>
    </instance_attributes>
  </primitive>
  <primitive class="lsb" type="mysqld" provider="heartbeat" id="mysqld"/>
</group>

Assuming you created this configuration in a temporary file named /tmp/hb_mysql.xml, you would add this resource group to the cluster configuration using the following command (on any cluster node):

cibadmin -o resources -C -x /tmp/hb_mysql.xml

After this, Heartbeat will automatically propagate the newly configured resource group to all cluster nodes.

Using the drbd OCF resource agent in a Heartbeat CRM configuration

The drbd resource agent is a pure-bred OCF RA which provides Master/Slave capability, allowing Heartbeat to start and monitor the DRBD resource on multiple nodes and promote and demote it as needed. You must, however, understand that the drbd RA disconnects and
detaches all DRBD resources it manages on Heartbeat shutdown, and also upon enabling standby mode for a node. In order to enable a DRBD-backed configuration for a MySQL database in a Heartbeat CRM cluster with the drbd OCF resource agent, you must create both the necessary resources and the Heartbeat constraints that ensure your service only starts on a previously promoted DRBD resource. It is recommended that you start with the constraints, such as shown in this example:

<constraints>
  <rsc_order id="mysql_after_drbd" from="rg_mysql" action="start" to="ms_drbd_mysql" to_action="promote" type="after"/>
  <rsc_colocation id="mysql_on_drbd" to="ms_drbd_mysql" to_role="master" from="rg_mysql" score="INFINITY"/>
</constraints>

Assuming you put these settings in a file named /tmp/constraints.xml, here is how you would enable them:

cibadmin -U -x /tmp/constraints.xml

Subsequently, you would create your relevant resources:

<resources>
  <master_slave id="ms_drbd_mysql">
    <meta_attributes id="ms_drbd_mysql-meta_attributes">
      <attributes>
        <nvpair name="notify" value="yes"/>
        <nvpair name="globally_unique" value="false"/>
      </attributes>
    </meta_attributes>
    <primitive id="drbd_mysql" class="ocf" provider="heartbeat" type="drbd">
      <instance_attributes id="ms_drbd_mysql-instance_attributes">
        <attributes>
          <nvpair name="drbd_resource" value="mysql"/>
        </attributes>
      </instance_attributes>
      <operations id="ms_drbd_mysql-operations">
        <op id="ms_drbd_mysql-monitor-master" name="monitor" interval="29s" timeout="10s" role="Master"/>
        <op id="ms_drbd_mysql-monitor-slave" name="monitor" interval="30s" timeout="10s" role="Slave"/>
      </operations>
    </primitive>
  </master_slave>
  <group id="rg_mysql">
    <primitive class="ocf" type="Filesystem" provider="heartbeat" id="fs_mysql">
      <instance_attributes id="fs_mysql-instance_attributes">
        <attributes>
          <nvpair name="device" value="/dev/drbd0"/>
          <nvpair name="directory" value="/var/lib/mysql"/>
          <nvpair name="type" value="ext3"/>
        </attributes>
      </instance_attributes>
    </primitive>
    <primitive class="ocf" type="IPaddr2" provider="heartbeat" id="ip_mysql">
      <instance_attributes id="ip_mysql-instance_attributes">
        <attributes>
          <nvpair name="ip" value="10.9.42.1"/>
          <nvpair name="nic" value="eth0"/>
        </attributes>
      </instance_attributes>
    </primitive>
    <primitive class="lsb" type="mysqld" provider="heartbeat" id="mysqld"/>
  </group>
</resources>

Assuming you put these settings in a file named /tmp/resources.xml, here is how you would enable them:

cibadmin -U -x /tmp/resources.xml

After this, your configuration should be enabled. Heartbeat now selects a node on which it promotes the DRBD resource, and then starts the DRBD-backed resource group on that same node.
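To watch Heartbeat bring the configuration up, you can run the crm_mon status monitor on any cluster node; it should eventually show the master/slave set ms_drbd_mysql promoted on one node and the rg_mysql group running on that same node. A one-shot invocation might look like this (the -1 option prints the status once and exits; if your Heartbeat version lacks it, plain crm_mon runs in a continuously refreshing mode):

crm_mon -1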
Note The -M (or --migrate) option for the crm_resource command, when used without the -H option, implies a resource migration away from the local host. To initiate a migration to the local host, you must specify the -H option, giving the local host name as the option argument. It is also important to understand that the migration is permanent; that is, unless told otherwise, Heartbeat will not move the resource back to a node it was previously migrated away from, even if that node happens to be the only surviving node in a near-cluster-wide system failure. This is undesirable under most circumstances, so it is prudent to un-migrate resources immediately after successful migration, using the following command:
crm_resource -r resource -U
Finally, it is important to know that during resource migration, Heartbeat may simultaneously migrate resources other than the one explicitly specified (as required by existing resource groups or colocation and order constraints).
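Putting these pieces together, a migrate-then-unmigrate sequence for the MySQL resource group from the earlier examples might look like this; the resource name rg_mysql and the host name bob are taken from those examples and should be replaced with your own:

crm_resource -r rg_mysql -M -H bob
crm_resource -r rg_mysql -U

The first command moves rg_mysql (and, through the constraints, the DRBD Master role) to bob; the second removes the resulting migration constraint so that Heartbeat is again free to place the resources as needed.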
Manual takeover of all cluster resources. This procedure involves switching the peer node to standby mode (where hostname is the peer node's host name):

crm_standby -U hostname -v on
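When maintenance on that node is finished, you would bring it back into normal operation by clearing standby mode again (same hostname placeholder as above):

crm_standby -U hostname -v off

After this, Heartbeat may again place resources on the node, subject to any migration constraints still in effect.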