11gRAC Installation AIX6.1
1) Prepare Box/LPAR –
BOX/LPAR specification -
OS Version : AIX 6.1 TL 07 SP5 ("6100-07-05") or higher, 64-bit kernel
Swap Space : 16 GB
AIX JDK & JRE : IBM JDK 1.6.0.00 (64 BIT)
NIC card : 1 for public network - 1 Gbps
1 for cluster interconnect - 10 Gbps
----------
Validation Commands :-
$ uname -a
AIX rstsdb01 1 6 00F7AD034C00
$ oslevel -s
6100-07-05-1228
$ /usr/sbin/lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type Chksum
paging00 hdisk1 rootvg 15360MB 1 yes no lv 0
hd6 hdisk0 rootvg 15360MB 1 yes no lv 0
$ /usr/bin/getconf HARDWARE_BITMODE
64
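The swap requirement above can also be checked in one step. A sketch (AIX-only) that sums the paging-space sizes reported by lsps -a; the column layout is assumed from the sample output above.

```shell
# Sum the Size column (e.g. "15360MB") of all paging spaces; the
# result should be at least 16384 MB per the requirement above.
total=$(/usr/sbin/lsps -a 2>/dev/null | awk 'NR>1 {gsub("MB","",$4); s+=$4} END {print s+0}')
echo "total paging space: ${total} MB (requirement: 16384 MB)"
```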
Validation :-
$ ifconfig -a
en1:
flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 172.16.1.12 netmask 0xffffff00 broadcast 172.16.1.255
inet 169.254.22.85 netmask 0xffff0000 broadcast 169.254.255.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en2:
flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 172.16.2.12 netmask 0xffffff00 broadcast 172.16.2.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en0:
flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 10.40.218.46 netmask 0xffffff00 broadcast 10.40.218.255
inet 10.40.218.249 netmask 0xffffff00 broadcast 10.40.218.255
inet 10.40.218.161 netmask 0xffffff00 broadcast 10.40.218.255
inet 10.40.218.192 netmask 0xffffff00 broadcast 10.40.218.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
lo0:
flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1%1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
$ nslookup rstsdb-scan
Server: 10.1.17.46
Address: 10.1.17.46#53
Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.161
Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.191
Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.192
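As a quick sketch of this validation: per Oracle's 11gR2 recommendation the SCAN name should resolve to three addresses, so counting the 'Address' lines in the nslookup output gives a fast sanity check (the first such line is the DNS server itself).

```shell
# Count 'Address' lines returned for the SCAN name used in this
# install; expect 4 in total (DNS server + 3 SCAN IPs).
lines=$(nslookup rstsdb-scan 2>/dev/null | grep -c 'Address')
echo "nslookup printed $lines 'Address' line(s); expect 4 (DNS server + 3 SCAN IPs)"
```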
3) Adding Entries to the hosts file –
Collect all public and private IPs.
Update the /etc/hosts file with entries for the host IP, private IP, and virtual IP of each node in the
cluster.
Validation –
$ cat /etc/hosts
127.0.0.1 loopback localhost # loopback (lo0) name/address
10.72.16.55 rspbps01a
10.72.17.81 rsdmgd01
10.1.17.30 rsptsm01
10.40.235.12 ntp.corpads.local
10.40.218.46 rstsdb01.corpads.local rstsdb01
10.40.218.47 rstsdb02.corpads.local rstsdb02
172.16.1.12 rstsdb01-priv.corpads.local rstsdb01-priv
172.16.1.13 rstsdb02-priv.corpads.local rstsdb02-priv
10.40.218.249 rstsdb01-vip.corpads.local rstsdb01-vip
10.40.218.250 rstsdb02-vip.corpads.local rstsdb02-vip
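A minimal check using the cluster names from the hosts file above: flag any name that is missing from this node's /etc/hosts.

```shell
# Flag cluster names absent from /etc/hosts; 'missing' should be 0 on
# a correctly prepared node.
missing=0
for name in rstsdb01 rstsdb02 rstsdb01-priv rstsdb02-priv \
            rstsdb01-vip rstsdb02-vip; do
  grep -qw "$name" /etc/hosts || { echo "MISSING: $name"; missing=$((missing + 1)); }
done
echo "$missing cluster name(s) missing from /etc/hosts"
```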
4) Install Packages –
The following AIX packages need to be installed on each node of the cluster -
bos.adt.base
bos.adt.lib
bos.adt.libm
bos.perf.libperfstat 6.1.2.1 or later
bos.perf.perfstat
bos.perf.proctools
rsct.basic.rte
rsct.compat.clients.rte
xlC.aix61.rte 10.1.0.0 or later
xlC.rte 10.1.0.0 or later
Validation -
$ lslpp -l bos.adt.base
$ lslpp -l bos.adt.lib
$ lslpp -l bos.adt.libm
$ lslpp -l bos.perf.libperfstat
$ lslpp -l bos.perf.perfstat
$ lslpp -l bos.perf.proctools
$ lslpp -l rsct.basic.rte
$ lslpp -l rsct.compat.clients.rte
$ lslpp -l xlC.aix61.rte
$ lslpp -l xlC.rte
$ lslpp -l gpfs.base
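The fileset checks above can be run as one loop (AIX-only sketch): lslpp -l exits non-zero when a fileset is not installed, so anything missing gets flagged.

```shell
# Check each required fileset; 'missing' should be 0 before
# proceeding with the installation.
missing=0
for f in bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat \
         bos.perf.perfstat bos.perf.proctools rsct.basic.rte \
         rsct.compat.clients.rte xlC.aix61.rte xlC.rte; do
  lslpp -l "$f" >/dev/null 2>&1 || { echo "NOT INSTALLED: $f"; missing=$((missing + 1)); }
done
echo "$missing fileset(s) flagged"
```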
5) Compiler Requirements –
The following is the minimum compiler requirement for Pro*C/C++, Oracle Call Interface, Oracle
C++ Call Interface, and Oracle XML Developer’s Kit (XDK) with Oracle Database 11g Release 2
(11.2) -
IBM XL C/C++ Enterprise Edition for AIX, V9.0 April 2008 PTF
Please install accordingly on each of the nodes.
Validation –
6) Patch Requirement –
For AIX v6.1 -
Install all AIX 6L 6.1 Authorized Problem Analysis Report (APAR) fixes for AIX 6.1 TL 02 SP1 and
the following fixes - IZ41855, IZ51456, IZ52319, IZ89165, IZ97457 - on each of the nodes.
Validation –
$ /usr/sbin/instfix -i -k "IZ41855"
$ /usr/sbin/instfix -i -k "IZ51456"
$ /usr/sbin/instfix -i -k "IZ52319"
$ /usr/sbin/instfix -i -k "IZ89165"
$ /usr/sbin/instfix -i -k "IZ97457"
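The same pattern works for the APAR checks above (AIX-only sketch): instfix -i -k exits non-zero when a fix is not found, so each flagged APAR can be investigated individually.

```shell
# Flag APARs that instfix does not report as installed; note the
# TL 07 caveat described below for IZ89165 and IZ97457.
flagged=0
for apar in IZ41855 IZ51456 IZ52319 IZ89165 IZ97457; do
  /usr/sbin/instfix -i -k "$apar" >/dev/null 2>&1 || { echo "CHECK APAR: $apar"; flagged=$((flagged + 1)); }
done
echo "$flagged APAR(s) to review"
```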
For AIX version 6.1 TL 07, the IBM team has confirmed that the APARs IZ97457 & IZ89165 are part of the
complete package, which is why they show as not installed when the commands –
$ /usr/sbin/instfix -i -k "IZ89165"
$ /usr/sbin/instfix -i -k "IZ97457"
were executed.
Oracle Support has therefore advised to go ahead with the installation, given the IBM team's
confirmation that the APARs are present.
7) Configure ntpd daemon –
The ntpd daemon should be configured on all the nodes so that their clocks stay
synchronized.
Make sure that the xntpd service is running with the -x (slewing) option.
On AIX this is done in /etc/rc.tcpip: change the line starting with -
start /usr/sbin/xntpd "$src_running"
to
start /usr/sbin/xntpd "$src_running" "-x"
Validation –
$ ps -ef | grep xntpd
grid 7733450 10747952 0 09:23:11 pts/5 0:00 grep xntpd
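A sketch of the same validation that also confirms the -x flag; the bracketed grep pattern keeps the grep process itself out of the match.

```shell
# Check whether an xntpd process started with -x is present.
if ps -ef | grep '[x]ntpd' | grep -q -- '-x'; then
  status="running with -x"
else
  status="NOT running with -x"
fi
echo "xntpd: $status"
```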
Validation –
$ cat /etc/resolv.conf
nameserver 10.1.17.46
nameserver 10.72.16.62
search bcbsnj.com igntdom1.com corpads.local
$ nslookup rstsdb-scan
Server: 10.1.17.46
Address: 10.1.17.46#53
Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.161
Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.191
Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.192
$ id oracle
uid=1002(oracle) gid=2001(oinstall) groups=1(staff),2002(dba),2003(asmdba)
$ id grid
uid=1001(grid) gid=2001(oinstall)
groups=1(staff),2002(dba),2003(asmdba),2005(asmadmin),2004(asmoper)
Validation –
$ ulimit -a
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
coredump(blocks) 2097151
nofiles(descriptors) unlimited
threads(per process) unlimited
processes(per user) unlimited
# cat /etc/security/limits
12) Tuning virtual memory manager parameter –
Set the VMM parameter as follows -
#vmo -p -o minperm%=3
#vmo -p -o maxperm%=90
#vmo -p -o maxclient%=90
#vmo -p -o lru_file_repage=0
#vmo -p -o strict_maxclient=1
#vmo -p -o strict_maxperm=0
Validation –
# vmo -L minperm%
# vmo -L maxperm%
# vmo -L maxclient%
# vmo -L lru_file_repage
# vmo -L strict_maxclient
# vmo -L strict_maxperm
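The six vmo changes above can be applied in one loop (AIX-only sketch; -p makes each change persist across reboots).

```shell
# Apply the recommended VMM settings; 'failed' should be 0 when this
# is run as root on the AIX nodes.
failed=0
for kv in minperm%=3 maxperm%=90 maxclient%=90 \
          lru_file_repage=0 strict_maxclient=1 strict_maxperm=0; do
  vmo -p -o "$kv" >/dev/null 2>&1 || failed=$((failed + 1))
done
echo "$failed vmo change(s) failed (expect 0 when run as root on AIX)"
```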
Comment : a restart of the server is required for the ipqmaxlen parameter change to take effect.
Validation –
# ioo -o aio_maxreqs
16) Scan the storage and Create the required Mount points –
Scan the allocated LUNs.
Create 3 raw devices out of 3 x 5 GB LUNs shared across all the cluster nodes.
Create 1 block device out of 1 x 70 GB LUN for each node, mounted locally.
The raw devices should have permission 660 with grid:asmadmin as owner.
Create 1 NFS-mounted staging area of size 70 GB, shared across all the nodes, to hold the software.
Create the following mount points -
Reserve one 70 GB LUN for /u01.
One 70 GB mount point - /u01 (for the Oracle Grid Infrastructure binaries installation);
its owner should be grid:oinstall.
Create /Soft and /Backup directories, attach the NFS storage to /Soft and /Backup
respectively, and give them grid:oinstall ownership and 770 permission.
Also allocate 210 GB (disks of equal sizes) of raw volumes for ASM diskgroup creation.
All hdisks and rhdisks should have grid:asmadmin ownership with permission 660.
Make sure that the output of the command ls -las /dev/rhdisk* (device name, type,
permission, owner, and group) is the same across the nodes.
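The ownership and permission requirements above can be applied in one pass. A sketch to be run as root on the AIX nodes; the hdisk numbers here are placeholders, so substitute the LUNs actually allocated to the cluster.

```shell
# Apply grid:asmadmin ownership and 660 mode to each ASM candidate
# disk (block and raw device); 'errors' should be 0 on the real nodes.
errors=0
for d in hdisk5 hdisk6 hdisk8; do
  chown grid:asmadmin "/dev/$d" "/dev/r$d" 2>/dev/null || errors=$((errors + 1))
  chmod 660 "/dev/$d" "/dev/r$d" 2>/dev/null || errors=$((errors + 1))
done
echo "$errors command(s) failed (expect 0 on the AIX nodes as root)"
```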
Validation –
$ ls -lat /dev/rhdisk*
crw-rw---- 1 grid asmadmin 13, 30 Jun 24 03:11 /dev/rhdisk1
crw-rw---- 1 grid asmadmin 13, 21 Jun 24 03:11 /dev/rhdisk24
crw-rw---- 1 grid asmadmin 13, 3 Jun 24 03:11 /dev/rhdisk8
crw-rw---- 1 grid asmadmin 13, 10 Jun 24 03:10 /dev/rhdisk13
crw-rw---- 1 grid asmadmin 13, 4 Jun 24 03:10 /dev/rhdisk10
crw-rw---- 1 grid asmadmin 13, 2 Jun 24 03:10 /dev/rhdisk11
crw-rw---- 1 grid asmadmin 13, 1 Jun 24 03:10 /dev/rhdisk12
crw-rw---- 1 grid asmadmin 13, 11 Jun 24 03:10 /dev/rhdisk14
crw-rw---- 1 grid asmadmin 13, 12 Jun 24 03:10 /dev/rhdisk15
crw-rw---- 1 grid asmadmin 13, 13 Jun 24 03:10 /dev/rhdisk16
crw-rw---- 1 grid asmadmin 13, 31 Jun 24 03:10 /dev/rhdisk5
crw-rw---- 1 grid asmadmin 13, 7 Jun 24 03:10 /dev/rhdisk6
….. …. ….
Validation -
$ df -g /tmp
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd3 4.00 3.61 10% 933 1% /tmp
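A scripted form of this check: the installer needs free space in /tmp (roughly 1 GB for 11gR2). The -g flag and the field position of the Free column are assumed from the AIX df output above.

```shell
# Extract the Free column for /tmp from 'df -g' (AIX flag; on other
# platforms use 'df -k' and adjust the field).
free=$(df -g /tmp 2>/dev/null | awk 'NR==2 {print $3}')
echo "/tmp free: ${free:-unknown} GB"
```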
Validation –
$ ls -ltr /u01/app/11.2.0.3/grid
$ ls -ltr /u01/app/oracle
Validation –
Execute the commands below on both nodes as:
i) the grid user
ii) the oracle user
Validation/Process to do –
$ vi ~/.profile
$ . ~/.profile
22) Verifying Cluster Pre-installation requirement and resolution of errors (if any) –
Execute the runcluvfy utility to check whether the cluster fulfills all the pre-installation requirements of Grid
Infrastructure. runcluvfy.sh is located in the grid installable directory.
After execution, resolve all errors/warnings (if any) and run the utility again until
all warnings/errors are cleared.
Alternatively, many issues can be resolved by having the utility generate a fixup script and executing
that script as root, e.g. -
./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
24) Setting up the DISPLAY env variable and starting the X Window software –
If the installation is being carried out through a remote login (with PuTTY or a similar tool), start
your X Window software (such as WinaXe). Also set the DISPLAY environment variable to the
IP of the local machine, as given in the box below.
Validation/Steps to do –
$ export DISPLAY=192.168.39.130:0.0
$ echo $DISPLAY
192.168.39.130:0.0
$ vi $HOME/.profile
(Add the following lines in oracle user's .profile file - )
export ORACLE_HOME=/u01/app/oracle
export ORACLE_SID=<db_name>
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$PATH
$ . ~/.profile
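A quick sanity check after sourcing the profile: the variables the later installation steps depend on should all be set (names taken from the profile above).

```shell
# Count unset variables; 'unset_count' should be 0 after sourcing the
# oracle user's .profile.
unset_count=0
for v in ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH LIBPATH; do
  eval val=\"\$$v\"
  [ -n "$val" ] || { echo "WARNING: $v is not set"; unset_count=$((unset_count + 1)); }
done
echo "$unset_count variable(s) not set"
```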
29) Verifying RDBMS Pre-installation requirement and resolution of errors (if any) –
Again run the runcluvfy utility to check if the rdbms installation prerequisites has been met. If
not try to resolve the errors/warnings and re-run the utility once again.
i) Request the AIX team for sudo access to $GRID_HOME/bin/crsctl stop/start crs.
ii) Request the AIX team for sudo access to tar, for backing up the GRID & ORACLE binaries during
patching, as the grid binaries may contain directories/files owned by root.