
STEP BY STEP RAC CONFIGURATION OF ORACLE 10g R2 ON RHEL 5

1) Install RHEL 5.3 (kernel 2.6.18-128.el5 SMP) on both nodes

From the 1 TB storage, assign a separate 51 GB RAID 5 volume to each node; the rest is treated as shared storage between the two nodes.

On each database node

/boot --> 250MB (Check Force to be primary partition, select fixed size)
swap --> 8GB (select fixed size)
/tmp --> 1GB (select fixed size)
/ (root) --> 15 GB (Check Force to be primary partition, select fixed size)
/u01 --> 27 GB (Check Force to be primary partition, select fill to maximum allowable size)

2) Install the following RPMs, required for configuring RAC, on both nodes

binutils-2.15.92.0.2-10.EL4
compat-db-4.1.25-9
control-center-2.8.0-12
gcc-3.4.3-9.EL4
gcc-c++-3.4.3-9.EL4
glibc-2.3.4-2
glibc-common-2.3.4-2
gnome-libs-1.4.1.2.90-44.1
libstdc++-3.4.3-9.EL4
libstdc++-devel-3.4.3-9.EL4
make-3.80-5
pdksh-5.2.14-30
sysstat-5.0.5-1
xscreensaver-4.18-5.rhel4.2

Other required packages

gcc-3.3
gcc-c++-3.3.3-43
glibc-2.3.3-98.28
libaio-0.3.98-18
libaio-devel-0.3.98-18
make-3.80
openmotif-libs-2.2.2-519.1
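Before proceeding, the package set can be verified on each node with a quick loop such as the one below (package names taken from the lists above; the exact versions will vary with your media):

for p in binutils compat-db gcc gcc-c++ glibc glibc-common libstdc++ libstdc++-devel make pdksh sysstat libaio libaio-devel openmotif-libs; do
    rpm -q $p || echo "MISSING: $p"
done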

3) Configure the network on both nodes

Node 1: Public IP  = 192.168.1.127
        Private IP = 10.20.60.1
        Virtual IP = 192.168.1.170

Node 2: Public IP  = 192.168.1.128
        Private IP = 10.20.60.2
        Virtual IP = 192.168.1.171

4) Configure NTP (Network Time Protocol)

Here node1 (192.168.1.127) acts as the time server and node2 as the client.

On the client, edit /etc/ntp.conf, comment out all existing server entries, and add the entry below:

server 192.168.1.127

Still on the client, edit the /etc/ntp/step-tickers file and add the following entry:

192.168.1.127

On the server, edit /etc/ntp.conf, comment out all the server entries, and uncomment the following line:

restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

Now restart the ntpd service on both nodes:

service ntpd restart

Then check whether the dates on both nodes match:

ssh node1 date; date


ssh node2 date; date

5) Configure the Hosts File on both nodes:

Edit the hosts file and add the following entries:

vi /etc/hosts

192.168.1.127 node1.vpc.in node1
192.168.1.128 node2.vpc.in node2
10.20.60.1 node1-priv.vpc.in node1-priv
10.20.60.2 node2-priv.vpc.in node2-priv
192.168.1.170 node1-vip.vpc.in node1-vip
192.168.1.171 node2-vip.vpc.in node2-vip
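As a quick sanity check, the public and private names should resolve and respond from both nodes; a minimal loop using the names defined above (the -vip names will not respond until VIPCA configures them in step 18):

for h in node1 node2 node1-priv node2-priv; do ping -c 1 $h; done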

6) Create Oracle User and Groups on both nodes:

groupadd -g 500 oinstall
groupadd -g 501 dba

useradd -m -u 500 -g oinstall -G dba oracle
passwd oracle

mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle
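Verify the user, groups, and directory ownership on both nodes:

id oracle
ls -ld /u01/app/oracle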

7) Configure Oracle user profile on both nodes:

Log in as the oracle user, edit the bash profile file, and add the following entries:
$ vi .bash_profile

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE


ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
CRS_HOME=$ORACLE_BASE/product/10.2.0/crs; export CRS_HOME
ORACLE_SID=HMS1; export ORACLE_SID
PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
ORA_NLS10=$ORACLE_HOME/nls/data; export ORA_NLS10

Change ORACLE_SID=HMS2 on node 2.

Now run the following commands to activate the profile and check the environment:

source .bash_profile
env
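To confirm the profile took effect, the key variables can be echoed directly:

echo $ORACLE_HOME $CRS_HOME $ORACLE_SID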

8) Configure the Kernel Parameters on each node:

vi /etc/sysctl.conf

kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 658576
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 1048536
net.core.wmem_max = 1048536
net.ipv4.tcp_rmem = 4096 262144 524288
net.ipv4.tcp_wmem = 4096 262144 524288
vm.nr_hugepages = 800
vm.disable_cap_mlock = 1
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_retries2 = 3
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.ip_forward = 0
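These settings are applied automatically at boot; to load them immediately without rebooting, run:

sysctl -p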

9) Set Limits for User Oracle on both nodes:

vi /etc/security/limits.conf

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

vi /etc/pam.d/login

session required /lib/security/pam_limits.so
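To confirm the limits take effect for a fresh oracle session, the values reported below should match the soft limits in limits.conf above:

su - oracle -c 'ulimit -u; ulimit -n'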

10) Configure the Hangcheck Timer on all nodes:


vi /etc/rc.d/rc.local

touch /var/lock/subsys/local
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
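rc.local only loads the module at the next boot; to load it now and confirm it is active, run:

modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
lsmod | grep hangcheck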

11) Configure SSH For Oracle user equivalence on both nodes:

Log in as the oracle user:


mkdir ~/.ssh
chmod 755 ~/.ssh
ssh-keygen -t rsa

(accept the default file location; press Enter to skip the passphrase)

ssh-keygen -t dsa

(accept the default file location; press Enter to skip the passphrase)

cd ~/.ssh

cat id_rsa.pub >> node1.pub
cat id_dsa.pub >> node1.pub

On node2, log in as oracle and repeat:

mkdir ~/.ssh
chmod 755 ~/.ssh

ssh-keygen -t rsa

(accept the default file location; press Enter to skip the passphrase)

ssh-keygen -t dsa

(accept the default file location; press Enter to skip the passphrase)

cd ~/.ssh

cat id_rsa.pub >> node2.pub
cat id_dsa.pub >> node2.pub

scp /home/oracle/.ssh/node1.pub node2:/home/oracle/.ssh/ (from node1)
scp /home/oracle/.ssh/node2.pub node1:/home/oracle/.ssh/ (from node2)

Now log in as oracle on each node:


cd ~/.ssh

cat node1.pub >> authorized_keys
cat node2.pub >> authorized_keys
chmod 644 authorized_keys

exec /usr/bin/ssh-agent $SHELL
ssh-add

Now verify passwordless ssh from the oracle user on both nodes:


ssh node1; ssh node2
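The installer needs passwordless access for every name it will use; a loop like the following, run from both nodes with the names defined in /etc/hosts above, exercises them all and seeds known_hosts:

for h in node1 node2 node1-priv node2-priv; do ssh $h date; done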

12) Prepare Disk Partitions for ASM and OCFS2

This time the partitioning is done on the shared storage, from one node only (node1):

/dev/sdb = 107 GB
/dev/sdc = 950 GB

Make 4 primary partitions on /dev/sdb using fdisk /dev/sdb:

/dev/sdb1 = 280MB (OCR)
/dev/sdb2 = 280MB (voting disk)
/dev/sdb3 = 280MB (voting disk mirror 1)
/dev/sdb4 = 100GB (mounted on both nodes on the same directory; used for backups and archive logs)

Make 4 primary partitions on /dev/sdc using fdisk /dev/sdc:

/dev/sdc1 = 280MB (OCR mirror)
/dev/sdc2 = 280MB (voting disk mirror 2)
/dev/sdc3 = 455GB (ASM disk)
/dev/sdc4 = 455GB (ASM disk)

Run partprobe after partitioning so the kernel re-reads the new partition table.
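Since the storage is shared, it is worth confirming from each node that all eight partitions are visible (device names as above):

partprobe
fdisk -l /dev/sdb /dev/sdc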

13) Log in as root and add the following entries to the csh login file

vi /etc/csh.login

if ( $?PATH ) then
    if ( "${path}" !~ */usr/X11R6/bin* ) then
        setenv PATH "${PATH}:/usr/X11R6/bin"
    endif
else
    if ( $uid == 0 ) then
        setenv PATH "/sbin:/usr/sbin:/usr/local/sbin:/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin"
    else
        setenv PATH "/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin"
    endif
endif

setenv HOSTNAME `/bin/hostname`
set history=1000

if ( ! -f $HOME/.inputrc ) then
    setenv INPUTRC /etc/inputrc
endif

if ( $USER == "oracle" ) then
    limit maxproc 16384
    limit descriptors 65536
    umask 022
endif

14) Configure Raw Devices for Clusterware, OCFS2, and ASM

Create udev rules for the raw device bindings as follows:

vi /etc/udev/rules.d/60-raw.rules

# OCFS2 binding; since this can be buffered, it can be mapped as a block device
ACTION=="add", KERNEL=="sdb4", RUN+="/bin/raw /dev/raw/raw6 %N"
KERNEL=="raw6", OWNER="root", GROUP="oinstall", MODE="660"

# For Clustering -->

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"


ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdb3", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdc2", RUN+="/bin/raw /dev/raw/raw5 %N"

KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="660"


KERNEL=="raw2", OWNER="root", GROUP="oinstall", MODE="660"

KERNEL=="raw3", OWNER="oracle", GROUP="oinstall", MODE="660"


KERNEL=="raw4", OWNER="oracle", GROUP="oinstall", MODE="660"
KERNEL=="raw5", OWNER="oracle", GROUP="oinstall", MODE="660"

# For <-- Clustering


# For ASM -->

ACTION=="add", KERNEL=="sdc3", RUN+="/bin/raw /dev/raw/raw7 %N"


ACTION=="add", KERNEL=="sdc4", RUN+="/bin/raw /dev/raw/raw8 %N"

KERNEL=="raw7", OWNER="oracle", GROUP="dba", MODE="660"


KERNEL=="raw8", OWNER="oracle", GROUP="dba", MODE="660"

# For <-- ASM
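To activate the new rules without rebooting and confirm the bindings, something like the following should work on RHEL 5 (raw -qa lists all current raw bindings):

start_udev
raw -qa
ls -l /dev/raw/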

15) Mount the Shared Partition /dev/sdb4 on both nodes using OCFS2 (Oracle Cluster File System)

mkdir /u02 (on both nodes)

Install the following RPMs for OCFS2:

rpm -ivh ocfs2-<kernel-version> (the kernel-module package matching uname -r)
rpm -ivh ocfs2-tools
rpm -ivh ocfs2console

All of these RPMs must be installed on both nodes. They are available at http://oss.oracle.com/projects/ocfs2

chkconfig ocfs2 on (both nodes)
chkconfig o2cb on (both nodes)

Now, from the first node, run the following command as root in a graphical session (GUI):

ocfs2console

This opens the graphical console. Click on add nodes and add entries for both nodes:

node1.vpc.in 192.168.1.127
node2.vpc.in 192.168.1.128

After this, propagate the configuration; this creates the entries in the cluster.conf file on both nodes:

vi /etc/ocfs2/cluster.conf (compare the entries in this file on both nodes)

Now format /dev/sdb4 using label = u02 and options = _netdev,datavolume, then mount /dev/sdb4 on /u02; it should mount successfully.
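If you prefer the command line to the console for formatting, an equivalent mkfs.ocfs2 invocation would look roughly like this (the block and cluster sizes are illustrative defaults; -N 2 allocates node slots for our two nodes):

mkfs.ocfs2 -b 4K -C 32K -N 2 -L u02 /dev/sdb4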

Now configure o2cb on both nodes for the heartbeat as follows:

/etc/init.d/o2cb configure (press Enter to accept the defaults at each prompt)

Then enable o2cb and check its status; the heartbeat should be alive on both nodes:

/etc/init.d/o2cb enable
/etc/init.d/o2cb status

Now manually mount /dev/sdb4 on node2 using OCFS2:

mount -t ocfs2 -L u02 /u02

Now add the entry to the fstab file on both nodes:

vi /etc/fstab

LABEL=u02  /u02  ocfs2  _netdev,datavolume  0 0

Now restart both nodes; after booting, /dev/sdb4 should mount automatically on both nodes, which can be verified with the mount or df -h command.

16) Run the Cluster Verification Utility:

Log in as the oracle user.

cd /u01/10gR2/software/clusterware/rpm
rpm -ivh cvuqdisk

Then run the checks:

cd /u01/10gR2/software/clusterware/cluvfy

./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
./runcluvfy.sh stage -post hwos -n node1,node2 -verbose

Run the above command with the appropriate -pre and -post checks for the db, crsinst, and hwos stages.

17) Install Oracle Clusterware:

Log in as the oracle user.

cd /u01/10gR2/software/clusterware
./runInstaller

18) Configure VIPCA (Virtual IP Configuration Assistant):

Log in as the root user.

cd /u01/app/oracle/product/10.2.0/crs/bin

vi vipca

Add the line unset LD_ASSUME_KERNEL (this works around the LD_ASSUME_KERNEL problem on RHEL 5), then run:

./vipca

Then edit the service control utility the same way:

cd /u01/app/oracle/product/10.2.0/db_1/bin

vi srvctl

and add the line unset LD_ASSUME_KERNEL.

19) Create ASM Disks:

Install the following RPMs:

rpm -ivh oracleasm-<kernel-version> (the kernel-module package matching uname -r)
rpm -ivh oracleasmlib
rpm -ivh oracleasm-support

All of these RPMs must be installed on both nodes. They are available at http://oss.oracle.com/projects

Configure the driver, then create the ASM disks:

/etc/init.d/oracleasm configure
/etc/init.d/oracleasm createdisk VOL1 /dev/sdc3
/etc/init.d/oracleasm createdisk VOL2 /dev/sdc4

Now check from the other node with the following commands:

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
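Each volume can also be checked individually to confirm it is marked for ASM (VOL1 and VOL2 as created above):

/etc/init.d/oracleasm querydisk VOL1
/etc/init.d/oracleasm querydisk VOL2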

20) Install the DB Software:

Log in as the oracle user.

cd /u01/10gR2/software/database
./runInstaller

21) Create the Database:

Log in as the oracle user.

cd /u01/app/oracle/product/10.2.0/db_1/bin
./dbca
