
This article assumes you have two systems with RHEL 5.2 x86_64 installed and that you want to create a cluster to provide High Availability for some services (in this article, the Apache Web Server).

This article also assumes that you have shared storage accessible from both systems, for example a Storage Area Network (SAN) over Fibre Channel or iSCSI, with free space on it.

First of all, you need to install all the needed packages on both systems.
To do this, create a cluster.repo file in /etc/yum.repos.d with the following command:

#touch /etc/yum.repos.d/cluster.repo

Then put the repository definitions into the cluster.repo file with these commands:

#echo [Server] >> /etc/yum.repos.d/cluster.repo


#echo name=Server >> /etc/yum.repos.d/cluster.repo
#echo baseurl=file:///mnt/Server >> /etc/yum.repos.d/cluster.repo
#echo enabled=1 >> /etc/yum.repos.d/cluster.repo
#echo gpgcheck=0 >> /etc/yum.repos.d/cluster.repo
#echo [Cluster] >> /etc/yum.repos.d/cluster.repo
#echo name=Cluster >> /etc/yum.repos.d/cluster.repo
#echo baseurl=file:///mnt/Cluster >> /etc/yum.repos.d/cluster.repo
#echo enabled=1 >> /etc/yum.repos.d/cluster.repo
#echo gpgcheck=0 >> /etc/yum.repos.d/cluster.repo
#echo [ClusterStorage] >> /etc/yum.repos.d/cluster.repo
#echo name=ClusterStorage >> /etc/yum.repos.d/cluster.repo
#echo baseurl=file:///mnt/ClusterStorage >> /etc/yum.repos.d/cluster.repo
#echo enabled=1 >> /etc/yum.repos.d/cluster.repo
#echo gpgcheck=0 >> /etc/yum.repos.d/cluster.repo
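After running those commands, /etc/yum.repos.d/cluster.repo should look like this (blank lines added here only for readability; the baseurl paths assume the RHEL media is mounted under /mnt, as shown in the next step):

[Server]
name=Server
baseurl=file:///mnt/Server
enabled=1
gpgcheck=0

[Cluster]
name=Cluster
baseurl=file:///mnt/Cluster
enabled=1
gpgcheck=0

[ClusterStorage]
name=ClusterStorage
baseurl=file:///mnt/ClusterStorage
enabled=1
gpgcheck=0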

If the installation DVD is not already mounted, mount it with the following command:

# mount /dev/dvd /mnt

Insert the RHEL 5.5 x86_64 media in your CD/DVD reader, and run the following command to update the yum database:

# yum update

Or, from X Windows, use the graphical "Add/Remove Software" tool.


If yum can't use the new repository, check that the autofs service is up and running (or start it) with the following command:

service autofs restart
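You can also check that the repository paths defined above are actually reachable (just a sanity check, not part of the original steps):

ls /mnt/Server /mnt/Cluster /mnt/ClusterStorage

Each directory should contain the RPM packages of the corresponding installation channel.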

At this point you can install all the packages needed to create and administer a cluster:

yum groupinstall -y "Cluster Storage" "Clustering"
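These two groups pull in the core cluster components, among them cman, rgmanager, luci, ricci, gfs-utils and lvm2-cluster (package names as shipped with RHEL 5; the original post doesn't list them). You can verify that they were installed with:

rpm -q cman rgmanager luci ricci gfs-utils lvm2-cluster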

The following steps apply to iSCSI shared storage.

If you are using an iSCSI initiator (as I do in this How-To), you also have to install the following packages:

yum install -y iscsi-initiator-utils isns-utils

And configure it to start at boot :

chkconfig iscsi on
chkconfig iscsid on

service iscsi start


service iscsid start

In this How-To I'll use three systems, with the following IP addresses.


The two "rhel-cluster-nodeX" systems have two NICs each: one for production traffic and one for the High Availability check.

rhel-cluster-node1: 192.168.234.201 (production), 10.10.10.1 (High Availability)

rhel-cluster-node2: 192.168.234.202 (production), 10.10.10.2 (High Availability)

rhel-cluster-san: 192.168.234.203

What I'm going to do is create a cluster with the IP address 192.168.234.200, shared between the 192.168.234.201 and 192.168.234.202 machines, using a GFS file system reachable via iSCSI on 192.168.234.203.

Assuming you have already configured the iSCSI target on the SAN (if you don't know how to do it, look for another post on this blog), you must run the following commands to discover and log in to the shared LUN:
iscsiadm -m discovery -t st -p 192.168.234.203

iscsiadm -m node -L all

touch /etc/iscsi/send_targets

echo 192.168.234.203 >> /etc/iscsi/send_targets
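After logging in, the shared LUN should show up as a new SCSI disk on both nodes. A quick way to confirm it (not part of the original commands) is:

iscsiadm -m session
cat /proc/partitions

The new device will typically appear as /dev/sdb, which is what the following steps assume.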

For convenience, add the following lines to /etc/hosts on both cluster nodes:

10.10.10.1 rhel-cluster-node1.mgmt.local rhel-cluster-node1


10.10.10.2 rhel-cluster-node2.mgmt.local rhel-cluster-node2
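With those entries in place, you can check that the dedicated High Availability network works between the two nodes (just a sanity check, not in the original article). From node1:

ping -c 3 rhel-cluster-node2

and run the equivalent ping towards rhel-cluster-node1 from node2.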

Be sure that the iSCSI-mapped device is /dev/sdb (otherwise adjust the following commands), then create a new Physical Volume, a new Volume Group and a new Logical Volume to use as shared storage for the cluster nodes, using the following commands:

pvcreate /dev/sdb

vgcreate vg1 /dev/sdb

lvcreate -l 10239 -n lv0 vg1

You're done: you've created a new volume group "vg1" and a new logical volume "lv0". The "-l 10239" parameter is the number of physical extents (4 MB each by default) and is based on the size of my iSCSI shared storage, in this case 40 GB.
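If your LUN has a different size, you can check how many free physical extents are available before running lvcreate (a quick check, not part of the original post):

vgdisplay vg1

and look at the "Free  PE / Size" line. Alternatively, recent lvm2 versions let you allocate all the free space with "lvcreate -l 100%FREE -n lv0 vg1".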

At this point you are ready to create the clustered GFS file system on your device using the command below:

gfs_mkfs -p lock_dlm -t rhel-cluster:storage1 -j 8 /dev/vg1/lv0

You're done: you've created a GFS file system named "storage1", with the locking protocol "lock_dlm", for a cluster called "rhel-cluster". This GFS can be used by a maximum of 8 hosts (the "-j 8" parameter creates 8 journals), and it lives on the /dev/vg1/lv0 device.

To administer Red Hat Clusters with Conga, start luci and ricci as follows:

service luci start


service ricci start

Configure the automatic startup for ricci and luci on both systems, using :

chkconfig luci on
chkconfig ricci on
On both systems, initialize the luci server using the luci_admin init command:

service luci stop


luci_admin init

This command creates the 'admin' user and sets its password; follow the on-screen instructions and check for output like the following:

The admin password has been successfully set.


Generating SSL certificates…
The luci server has been successfully initialized

You must restart the luci server for the changes to take effect; run the following to do so:

service luci restart

For correct cluster configuration and maintenance, you have to start (and configure to start at boot) the following services:

chkconfig rgmanager on
service rgmanager start
chkconfig cman on
service cman start

Edit /etc/fstab and add:

/dev/vg1/lv0 /data gfs defaults,acl 0 0
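Note that the /data mount point must exist on both nodes before you try to mount anything on it; if it doesn't, create it first (this step isn't spelled out in the original post):

mkdir -p /data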

You can check that everything works using the command:

mount -a

Try to mount/umount and to read and write… if all works fine, you can continue.

Configure Apache to use one or more virtual hosts whose document roots live on the shared storage.
For example, on both nodes, add the following to the end of /etc/httpd/conf/httpd.conf:

<VirtualHost *:80>
ServerAdmin webmaster@mgmt.local
DocumentRoot /data/websites/default
ServerName rhel-cluster.mgmt.local
ErrorLog logs/rhel-cluster_mgmt_local-error_log
CustomLog logs/rhel-cluster_mgmt_local-access_log common
</VirtualHost>

To use the example above, you must create two directories under /data:
mkdir /data/websites
mkdir /data/websites/default

and you must create an index file in that directory:

touch /data/websites/default/index.html

echo WORKS!!! >> /data/websites/default/index.html

Configure Apache to start at boot time and start it with the following commands:

chkconfig httpd on
service httpd start
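You can quickly verify that the virtual host answers on each node (a simple check, assuming curl is installed; not part of the original post):

curl -H 'Host: rhel-cluster.mgmt.local' http://localhost/

The command should print WORKS!!!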

Point your web browser to https://rhel-cluster-node1:8084 to access luci

1. As administrator of luci, select the cluster tab.


2. Click Create a New Cluster.
3. At the Cluster Name text box, enter the cluster name "rhel-cluster".
Add the node name and password for each cluster node.
4. Click Submit. Clicking Submit causes the following actions:
a. Cluster software packages to be downloaded onto each cluster node.
b. Cluster software to be installed onto each cluster node.
c. Cluster configuration file to be created and propagated to each node in the cluster.
d. The cluster to be started.
A progress page shows the progress of those actions for each node in the cluster.
When the process of creating a new cluster is complete, a page is displayed providing a
configuration interface for the newly created cluster.
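At this point you can also verify from the command line that both nodes joined the cluster (these commands come with the cman and rgmanager packages and are not shown in the original post):

cman_tool nodes
clustat

cman_tool nodes lists the cluster members and their status, while clustat shows an overview of the cluster and its services.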

From the management page of your newly created cluster you can add resources.


Add a resource: choose "IP Address" and use 192.168.234.200.

Create a service named "cluster", add the "IP Address" resource you created before, then:
check "Automatically start this service"
check "Run exclusive"
choose "Relocate" as the "Recovery policy"

Save the service.

If the service was created without errors, enable it and try to start it on one cluster node.
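The same can be done from the command line with the rgmanager tools (not covered in the original post): clustat shows the service state, clusvcadm -e enables and starts a service, and clusvcadm -r relocates it to another node.

clustat
clusvcadm -e cluster
clusvcadm -r cluster -m rhel-cluster-node2.mgmt.local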

The cluster configuration file is /etc/cluster/cluster.conf and should look similar to the following:

cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster alias="rhel-cluster" config_version="25" name="rhel-cluster">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="rhel-cluster-node2.mgmt.local" nodeid="1" votes="1">
      <fence/>
    </clusternode>
    <clusternode name="rhel-cluster-node1.mgmt.local" nodeid="2" votes="1">
      <fence/>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices/>
  <rm>
    <failoverdomains/>
    <resources>
      <ip address="192.168.234.200" monitor_link="0"/>
    </resources>
    <service autostart="1" exclusive="1" name="cluster" recovery="relocate">
      <ip ref="192.168.234.200"/>
    </service>
  </rm>
</cluster>

To check if the shared IP address is working correctly, try the following:

/sbin/ip addr list

The output should be similar to the following:

eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:96:8b:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.234.201/24 brd 192.168.234.255 scope global eth0
    inet 192.168.234.200/24 scope global secondary eth0
    inet6 fe80::20c:29ff:fe96:8bed/64 scope link
       valid_lft forever preferred_lft forever

At this point you can shut down (or disconnect from the network) one host and see if the web page on 192.168.234.200 is still reachable.
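While doing this failover test, you can watch the service being relocated from the surviving node (again with the rgmanager command line tools rather than luci):

clustat -i 2

The -i 2 option refreshes the status every 2 seconds, so you can see the "cluster" service move to the other node.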

If all works, you’re done.

This is a very simple cluster, sharing only the IP address resource, but you can add more resources and services, and configure failover domains and/or fence devices. To do so, refer to the Red Hat Knowledge Base and documentation at http://www.redhat.com .
Hope this helps

Bye
Riccardo
