Cluster INST - Sufyan
This article also assumes that you have shared storage accessible from both systems, for
example a Storage Area Network (SAN) over Fibre Channel or iSCSI, with free space on
it.
First of all, you need to install all the needed packages on both systems.
To do this, create a cluster.repo file in /etc/yum.repos.d with the following command:
# touch /etc/yum.repos.d/cluster.repo
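The touch command only creates an empty file; the repo file then needs repository definitions. A sketch, assuming the DVD is mounted at /media/cdrom (Cluster and ClusterStorage are the repo directories shipped on RHEL 5 media; adjust paths to your mount point):

```
[cluster]
name=RHEL 5 - Cluster
baseurl=file:///media/cdrom/Cluster
enabled=1
gpgcheck=0

[cluster-storage]
name=RHEL 5 - Cluster Storage
baseurl=file:///media/cdrom/ClusterStorage
enabled=1
gpgcheck=0
```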
If the DVD is not already mounted, mount it first.
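A typical mount command (assuming the mount point /media/cdrom exists; adjust the device name to your system):

```shell
mount /dev/cdrom /media/cdrom
```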
Insert the RHEL 5.5 x86_64 media in your CD/DVD reader, and run the following
command to update the yum database:
# yum update
At this point you can install all the packages needed to create and administer a cluster:
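The package list itself is missing from the original; on RHEL 5 the cluster and GFS packages can be pulled in via the yum package groups (group names as shipped on RHEL 5 media; a sketch, verify against your release):

```shell
yum groupinstall -y "Clustering" "Cluster Storage"
```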
If you have to use an iSCSI initiator (in this how-to I'll use it), you also have to install the
following packages:
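The package list is missing here too; on RHEL 5 the iSCSI initiator userland is provided by iscsi-initiator-utils:

```shell
yum install -y iscsi-initiator-utils
```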
chkconfig iscsi on
chkconfig iscsid on
The hosts and addresses used in this how-to are:

rhel-cluster-node1   192.168.234.201   10.2.5.25   10.10.10.1   192.168.1.1
rhel-cluster-node2   192.168.234.202   10.2.5.27   10.10.10.2   192.168.1.1
rhel-cluster-san     192.168.234.203
What I'm going to do is create a cluster with the 192.168.234.200 IP address, which shares a
service between the 192.168.234.201 and 192.168.234.202 machines, using a GFS filesystem
reachable via iSCSI on 192.168.234.203.
Assuming you have already configured the iSCSI target on the SAN (if you don't know how
to do it, look for another post on this blog), run the following commands to check
and log in to the shared LUN:
iscsiadm -m discovery -t st -p 192.168.234.203
touch /etc/iscsi/send_targets
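Discovery alone does not log in to the target; a hedged sketch of the login step (restarting the iscsi service also logs in to all discovered targets automatically):

```shell
iscsiadm -m node --login
```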
For convenience, add host entries for the addresses listed earlier to /etc/hosts on both cluster nodes.
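A minimal set of entries consistent with the addressing used in this how-to (assuming 192.168.234.0/24 is the cluster LAN; the .mgmt.local names match those used in cluster.conf below):

```
192.168.234.201 rhel-cluster-node1 rhel-cluster-node1.mgmt.local
192.168.234.202 rhel-cluster-node2 rhel-cluster-node2.mgmt.local
192.168.234.203 rhel-cluster-san
```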
Be sure that the iSCSI-mapped device is /dev/sdb (otherwise adjust the following
commands), then proceed to create a new Physical Volume, a new Volume Group, and a
new Logical Volume to use as shared storage for the cluster nodes, using the following
commands:
pvcreate /dev/sdb
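The volume group and logical volume commands are not shown in the original; a sketch consistent with the names and the "-l 10239" extent count mentioned just below (check your own free extent count with vgdisplay):

```shell
vgcreate vg1 /dev/sdb
lvcreate -l 10239 -n lv0 vg1
```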
You're done: you created a new volume group "vg1" and a new logical volume "lv0". The
"-l 10239" parameter is based on the size of my iSCSI shared storage, in this case 40 GB.
At this point you are ready to create the clustered GFS file system on your device using
the command below:
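The command itself is missing from the original; reconstructed from the description that follows (lock_dlm locking, cluster "rhel-cluster", filesystem name "storage1", 8 journals):

```shell
gfs_mkfs -p lock_dlm -t rhel-cluster:storage1 -j 8 /dev/vg1/lv0
```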
You're done: you've created a GFS file system with the "lock_dlm" locking protocol, for a
cluster called "rhel-cluster" and with the name "storage1". You can use this GFS on a
maximum of 8 hosts (one journal each), and you've used the /dev/vg1/lv0 device.
To administer Red Hat Clusters with Conga, run luci and ricci as follows :
Configure the automatic startup for ricci and luci on both systems, using :
chkconfig luci on
chkconfig ricci on
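The start commands themselves are not shown; ricci can be started right away, while luci should be initialized first (next step) before being started:

```shell
service ricci start
```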
On both systems, initialize the luci server using the luci_admin init command.
This command creates the 'admin' user and its password; follow the on-screen
instructions, and check for output similar to the following:
You must restart the luci server for changes to take effect; run the following to do it:
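The restart is done with the usual service command:

```shell
service luci restart
```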
For correct cluster configuration and maintenance, you have to start (and configure to
start at boot) the following services:
chkconfig cman on
service cman start
chkconfig rgmanager on
service rgmanager start
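The mount -a below implies an /etc/fstab entry for the GFS volume on both nodes; a sketch assuming the /data mount point used by the Apache example later:

```
/dev/vg1/lv0   /data   gfs   defaults   0 0
```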
mount -a
Try to mount/umount, and to read and write; if all works fine, you can continue.
Configure Apache to use one or more virtual hosts on folders on the shared storage.
For example, on both nodes, add the following to the end of /etc/httpd/conf/httpd.conf:
<VirtualHost *:80>
ServerAdmin webmaster@mgmt.local
DocumentRoot /data/websites/default
ServerName rhel-cluster.mgmt.local
ErrorLog logs/rhel-cluster_mgmt_local-error_log
CustomLog logs/rhel-cluster_mgmt_local-access_log common
</VirtualHost>
To use the example above, you must create the directories under /data:
mkdir /data/websites
mkdir /data/websites/default
touch /data/websites/default/index.html
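So that the failover test at the end shows an actual page, you can put some placeholder content (arbitrary) in index.html instead of leaving it empty:

```shell
echo "rhel-cluster test page" > /data/websites/default/index.html
```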
Configure Apache to start at boot time, and start it, with the following commands:
chkconfig httpd on
service httpd start
Create a service named "cluster" and add the "IP Address" resource you created before, then:
check "Automatically start this service"
check "Run exclusive"
choose "Relocate" as the "Recovery policy"
If the service gives no errors, enable it and try to start it on one cluster node.
The cluster configuration file is /etc/cluster/cluster.conf, and it should look similar to
the following:
cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster alias="rhel-cluster" config_version="25" name="rhel-cluster">
<fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
<clusternodes>
<clusternode name="rhel-cluster-node2.mgmt.local" nodeid="1" votes="1">
<fence/>
</clusternode>
<clusternode name="rhel-cluster-node1.mgmt.local" nodeid="2" votes="1">
<fence/>
</clusternode>
</clusternodes>
<cman expected_votes="1" two_node="1"/>
<fencedevices/>
<rm>
<failoverdomains/>
<resources>
<ip address="192.168.234.200" monitor_link="0"/>
</resources>
<service autostart="1" exclusive="1" name="cluster" recovery="relocate">
<ip ref="192.168.234.200"/>
</service>
</rm>
</cluster>
At this point you can shutdown (or disconnect from network) one host and see if the web
page on 192.168.234.200 is still reachable.
This is a very simple cluster, sharing only the IP address resource, but you can add more
resources and services, and configure failover domains and/or fence devices. To do
so, refer to the Red Hat Knowledgebase and documentation at http://www.redhat.com .
Hope this helps
Bye
Riccardo