Linux Clustering
Prerequisites:
• Operating system installed on all servers
• Physical connectivity between the shared storage and the servers (to be done by the hardware vendor)
Log in as root.
Run "fdisk /dev/cciss/c0d2" (where c0 is the controller name and d2 is the device name; see the annexure for details).
1. Press "p" to see the partition details
2. Press "n" to create a new partition
3. Type "p" for a primary partition
4. Type "1" for the 1st primary partition
5. Press Enter to accept the default 1st cylinder
6. Type "+100M" for the partition size
Repeat steps 2-6 to create more partitions; see the annexure for the respective cluster.
7. Press "w" to write the partition table and exit
8. Alternatively, press "q" to quit fdisk without saving changes
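The interactive steps above can also be scripted by feeding the same answers to fdisk on standard input; a minimal sketch, assuming the example device name (take the real one from the annexure):

```shell
# One answer per fdisk prompt: n (new), p (primary), 1 (partition number),
# blank line (accept default first cylinder), +100M (size), w (write & exit).
# WARNING: this rewrites the partition table; double-check the device name.
printf 'n\np\n1\n\n+100M\nw\n' | fdisk /dev/cciss/c0d2
```

Afterwards, run "fdisk -l /dev/cciss/c0d2" to confirm the new partition appears.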
Bind the shared partitions to raw devices (on Red Hat Enterprise Linux 3 these bindings are kept in /etc/sysconfig/rawdevices and applied by restarting the rawdevices service):
/dev/raw/raw1 /dev/cciss/c1d0p1
/dev/raw/raw2 /dev/cciss/c1d0p2
(where c1d0p1 is the device name; see the annexure for the correct partition names)
2. Check the box for the Red Hat Cluster Suite, and click the Details link to view the package descriptions.
3. While viewing the package group details, check the box next to the packages to install. Click Close when finished.
The Cluster Configuration Tool (you need to run this tool on only one server)
Red Hat Cluster Manager consists of the following RPM packages:
clumanager: This package consists of the software responsible for cluster operation (including the cluster daemons).
redhat-config-cluster: This package contains the Cluster Configuration Tool and the Cluster Status Tool, which allow for the configuration of the cluster and the display of the current status of the cluster, its members, and services.
You can use either of the following methods to access the Cluster Configuration Tool:
Select Main Menu => System Settings => Server Settings => Cluster.
Or, at a shell prompt, type redhat-config-cluster.
The first time the application is started, the Cluster Configuration Tool is displayed. After the cluster configuration is complete, the command starts the Cluster Status Tool by default.
To access the Cluster Configuration Tool from the Cluster Status Tool, select Cluster => Configure.
The following tabbed sections are available within the Cluster Configuration
Tool:
Members : Use this section to add members to the cluster and optionally
configure a power controller connection for any given member.
Failover Domains : Use this section to establish one or more subsets of the
cluster members for specifying which members are eligible to run a service
in the event of a system failure. (Note that the use of failover domains is
optional.)
Services : Use this section to configure one or more services to be
managed by the cluster. As you specify an application service, the
relationship between the service and its IP address, device special file,
mount point, and NFS exports is represented by a hierarchical structure.
The parent-child relationships in the Cluster Configuration Tool reflect the
organization of the service information in the /etc/cluster.xml file.
You can specify the following properties for the clumembd daemon:
Log Level: Determines the level of event messages that get logged to the cluster log file (by default /var/log/messages). Choose the appropriate logging level from the menu.
Failover Speed: Determines the number of seconds that the cluster service waits before shutting down a non-responding member (that is, a member from which no heartbeat is detected). To set the failover speed, drag the slider bar. The default failover speed is 10 seconds.
2. Enter a name for the domain (for example, Oracle) in the Domain Name field. The name should be descriptive enough to distinguish its purpose relative to other names used on your network.
4. Check Ordered Failover if you want members to assume control of a failed service in a particular sequence; preference is indicated by the member's position in the list of members in the domain, with the most preferred member at the top.
5. Click Add Members to select the members for this failover domain. The
Failover Domain Member dialog box is displayed.
You can choose multiple members from the list by pressing either the
[Shift] key while clicking the start and end of a range of members, or
pressing the [Ctrl] key while clicking on non-contiguous members.
6. When finished selecting members from the list, click OK. The selected
members are displayed on the Failover Domain list.
7. When Ordered Failover is checked, you can rearrange the order of the
members in the domain by dragging the member name in the list box to
the desired position. A thin, black line is displayed to indicate the new row
position (when you release the mouse button).
8. When finished, click OK.
9. Choose File => Save to save the changes to the cluster configuration.
To remove a member from a failover domain, follow these steps:
1. On the Failover Domains tab, double-click the name of the domain you
want to modify (or select the domain and click Properties).
2. In the Failover Domain dialog box, click the name of the member you
want to remove from the domain and click Delete Member. (Members must
be deleted one at a time.) You are prompted to confirm the deletion.
3. When finished, click OK.
4. Choose File => Save to save the changes to the cluster configuration.
To direct cluster messages to a separate log file, add the following line to /etc/syslog.conf:
local4.* /var/log/cluster.log
and add a rotation stanza for that file (for example in /etc/logrotate.d/cluster):
/var/log/cluster.log {
monthly
create 0664 root utmp
rotate 5
postrotate
/sbin/killall -HUP syslogd
endscript
}
df -h
Make sure no partition is more than 80% full.
The output will look like the following (all outputs are examples only):
[root@delhi-oam OAM]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p8 1008M 437M 520M 46% /
/dev/cciss/c0d0p1 97M 15M 77M 17% /boot
/dev/cciss/c0d0p3 5.8G 3.8G 1.8G 69% /data
/dev/cciss/c0d0p10 114G 72G 37G 67% /home
none 1.9G 0 1.9G 0% /dev/shm
/dev/cciss/c0d0p7 1008M 17M 941M 2% /tmp
/dev/cciss/c0d0p5 2.9G 2.1G 666M 77% /usr
/dev/cciss/c0d0p9 483M 53M 406M 12% /usr/local
/dev/cciss/c0d0p6 1008M 909M 49M 95% /var
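The 80% check above can be automated; a small sketch (the -P flag, which keeps each filesystem on one line, and the exact threshold are my additions):

```shell
# Print any filesystem that is more than 80% full.
# awk's int() strips the trailing "%" from the Use% column.
df -hP | awk 'NR > 1 && int($5) > 80 { print $6 " is " $5 " full" }'
```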
Next, check per-process CPU and memory usage with top; example output:
PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
28742 root 39 5 141M 141M 14716 R N 22.6 3.8 10:03 jrun
28840 root 39 5 141M 141M 14716 R N 22.4 3.8 4:34 jrun
28955 root 39 5 141M 141M 14716 R N 21.8 3.8 4:59 jrun
28744 root 39 5 141M 141M 14716 R N 21.2 3.8 5:16 jrun
28660 root 39 5 141M 141M 14716 R N 21.0 3.8 6:33 jrun
28934 root 39 5 141M 141M 14716 R N 20.4 3.8 3:39 jrun
28722 root 39 5 141M 141M 14716 R N 20.2 3.8 6:01 jrun
28839 root 39 5 141M 141M 14716 R N 20.2 3.8 7:35 jrun
28658 root 39 5 141M 141M 14716 R N 20.0 3.8 13:22 jrun
28768 root 39 5 141M 141M 14716 R N 19.8 3.8 8:51 jrun
28732 root 39 5 141M 141M 14716 R N 19.4 3.8 5:36 jrun
28667 root 39 5 141M 141M 14716 R N 19.2 3.8 6:19 jrun
28757 root 39 5 141M 141M 14716 R N 18.6 3.8 5:00 jrun
28941 root 39 5 141M 141M 14716 R N 18.6 3.8 3:23 jrun
28931 root 39 5 141M 141M 14716 R N 18.4 3.8 5:21 jrun
28867 root 39 5 141M 141M 14716 R N 17.8 3.8 3:37 jrun
25037 root 39 5 141M 141M 14716 R N 17.6 3.8 49:38 jrun
25041 root 39 5 141M 141M 14716 R N 17.0 3.8 49:09 jrun
28726 root 39 5 141M 141M 14716 R N 16.6 3.8 11:58 jrun
28856 root 39 5 141M 141M 14716 R N 16.4 3.8 6:19 jrun
24926 root 20 5 141M 141M 14716 S N 1.5 3.8 1:50 jrun
25001 root 20 5 141M 141M 14716 S N 1.3 3.8 0:41 jrun
25002 root 20 5 141M 141M 14716 S N 0.9 3.8 0:42 jrun
30149 root 15 0 1240 1240 832 R 0.7 0.0 0:00 top
14573 test 15 0 728 704 624 S 0.1 0.0 4:51 ping
28946 root 21 5 141M 141M 14716 S N 0.1 3.8 0:03 jrun
1 root 15 0 496 448 448 S 0.0 0.0 0:32 init
2 root 15 0 0 0 0 SW 0.0 0.0 0:00 keventd
3 root 15 0 0 0 0 SW 0.0 0.0 0:00 keventd
4 root 15 0 0 0 0 SW 0.0 0.0 0:00 keventd
Managing the Cluster
clustat:
This shows the status of all members in the cluster; make sure every node is Active. The last transition time shows when a service was most recently restarted, and the Restart column shows how many times the service has been restarted since the cluster started. A healthy service should be running with a restart count of 0.
clustat
[root@cluster1 root]# clustat
Cluster Status - BTSL-CLUSTER 15:01:53
Cluster Quorum Incarnation #3
Shared State: Shared Raw Device Driver v1.2
Member Status
------------------ ----------
cluster0 Active
cluster1 Active <-- You are here
If you find the status of any node Inactive, log in to the respective server as root.
Run 'service clumanager status'; this command will tell you whether the cluster service is running on that node. Check /var/log/cluster.log for details.
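The member check can be scripted; a sketch against the clustat output format shown above (the "<-- You are here" suffix does not affect the first two columns):

```shell
# Print any member whose Status column reads Inactive.
clustat | awk '$2 == "Inactive" { print $1 " is inactive" }'
```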
• fsck utility
To enable (start) a service:
clusvcadm -e service_name
[root@cluster1 root]# clusvcadm -e oracle
trying to enable service oracle ......... success
To disable (stop) a service:
clusvcadm -d service_name
[root@cluster1 root]# clusvcadm -d oracle
trying to disable service oracle ......... success