8. Integrating DRBD With Red Hat Cluster
This guide uses the unofficial term Red Hat Cluster to refer to a product that has had multiple
official product names over its history, including Red Hat Cluster Suite and Red Hat
Enterprise Linux High Availability Add-On.
8.1.1. Fencing
Red Hat Cluster, originally designed primarily for shared storage clusters, relies on node fencing to
prevent concurrent, uncoordinated access to shared resources. Its fencing infrastructure is built
around the fencing daemon fenced and a set of fencing agents implemented as shell scripts.
Even though DRBD-based clusters utilize no shared storage resources, so fencing is not strictly
required from DRBD's standpoint, Red Hat Cluster Suite still requires fencing in DRBD-based
configurations.
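As an illustrative sketch only (the node name, fence agent, address, and credentials below are hypothetical, not taken from this guide), a fence device and a per-node fence method are declared in cluster.conf along these lines:

```xml
<!-- Hypothetical example: an IPMI-based fence device for node "alice" -->
<clusternodes>
  <clusternode name="alice" nodeid="1">
    <fence>
      <method name="1">
        <device name="ipmi-alice"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
<fencedevices>
  <fencedevice agent="fence_ipmilan" name="ipmi-alice"
               ipaddr="192.168.42.10" login="admin" passwd="secret"/>
</fencedevices>
```

The fenced daemon invokes the named agent (here, fence_ipmilan) to power-cycle or isolate a failed node before resources are recovered elsewhere.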
Where resources depend on each other, such as an NFS export depending on a filesystem being
mounted, they form a resource tree, a form of nesting resources inside one another.
Resources in inner levels of nesting may inherit parameters from resources in outer nesting levels.
The concept of resource trees is absent in Pacemaker.
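To illustrate the nesting described above (the resource names and paths here are hypothetical), an NFS export that depends on a mounted filesystem is expressed by nesting its element inside the filesystem resource:

```xml
<!-- Hypothetical example: the nfsexport depends on the enclosing fs resource -->
<fs name="data" device="/dev/sdb1" mountpoint="/export" fstype="ext3">
  <nfsexport name="data-export">
    <nfsclient name="clients" target="10.0.0.0/24" options="rw"/>
  </nfsexport>
</fs>
```

The cluster manager starts the outer resource first and stops it last, which is how the dependency is enforced.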
Red Hat Cluster resource agents install into the /usr/share/cluster/ directory. Unlike Pacemaker OCF
resource agents, which are by convention self-contained, some Red Hat Cluster resource agents are
split into a .sh file containing the actual shell code and a .metadata file containing XML resource
agent metadata.
DRBD includes a Red Hat Cluster resource agent. It installs into the customary directory as drbd.sh
and drbd.metadata.
For more information about configuring Red Hat clusters, see Red Hat’s documentation on
the Red Hat Cluster and GFS.
This section, like Integrating DRBD with Pacemaker clusters, assumes you are about to configure a
highly available MySQL database with the following configuration parameters:
The DRBD resource to be used as your database storage area is named mysql, and it
manages the device /dev/drbd0.
The DRBD device holds an ext3 filesystem which is to be mounted to /var/lib/mysql (the
default MySQL data directory).
The MySQL database is to utilize that filesystem, and listen on a dedicated cluster IP
address, 192.168.42.1.
To do that, open /etc/cluster/cluster.conf with your preferred text editing application. Then, include
the following items in your resource configuration:
<rm>
  <resources />
  <service autostart="1" name="mysql">
    <drbd name="drbd-mysql" resource="mysql">
      <fs device="/dev/drbd/by-res/mysql/0"
          mountpoint="/var/lib/mysql"
          fstype="ext3"
          name="mysql"
          options="noatime"/>
    </drbd>
    <ip address="192.168.42.1" monitor_link="1"/>
    <mysql config_file="/etc/my.cnf"
           listen_address="192.168.42.1"
           name="mysqld"/>
  </service>
</rm>
Nesting resource references inside one another is the Red Hat Cluster way of expressing
resource dependencies.
Be sure to increment the config_version attribute, found on the root element, after you have
completed your configuration. Then, issue the following commands to commit your changes to the
running cluster configuration:
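Assuming the ccs_tool and cman_tool utilities shipped with Red Hat Cluster Suite of that era, the commands in question are typically:

```shell
# Propagate the updated cluster.conf to all cluster nodes
ccs_tool update /etc/cluster/cluster.conf
# Inform the cluster manager of the new configuration version
cman_tool version -r <version>
```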
In the second command, be sure to substitute the new cluster configuration version number.
Both the system-config-cluster GUI configuration utility and the Conga web-based cluster
management infrastructure will complain about your cluster configuration after you include the
drbd resource agent in your cluster.conf file. This is due to the design of the Python cluster
management wrappers provided by these two applications, which do not expect third-party
extensions to the cluster infrastructure.
Thus, when you utilize the drbd resource agent in cluster configurations, it is not recommended to
use system-config-cluster or Conga for cluster configuration purposes. Using either of these
tools only to monitor the cluster's status, however, is expected to work fine.
Version #3
Created 7 September 2021 07:02:22 by chuanzhi.dai
Updated 12 October 2021 07:43:15 by chaoyue.shan