
Oracle 11gR2 RAC Installation on AIX 6.1

1) Prepare Box/LPAR –
BOX/LPAR specification -
OS Version : AIX 6.1 TL 07 SP5 ("6100-07-05") or higher, 64-bit kernel
Swap Space : 16 GB
AIX JDK & JRE : IBM JDK 1.6.0.00 (64 BIT)
NIC card : 1 for public network - 1 Gbps
1 for cluster interconnect - 10 Gbps
----------
Validation Commands :-

$ uname -a
AIX rstsdb01 1 6 00F7AD034C00

$ oslevel -s
6100-07-05-1228

$ /usr/sbin/lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type Chksum
paging00 hdisk1 rootvg 15360MB 1 yes no lv 0
hd6 hdisk0 rootvg 15360MB 1 yes no lv 0

$ /usr/sbin/lsattr -E -l sys0 -a realmem


realmem 25165824 Amount of usable physical memory in Kbytes False

$ /usr/bin/getconf HARDWARE_BITMODE
64

$ lsdev -Cc processor | wc -l
2

$ lscfg -vp |grep -ip proc


The following resources are installed on your machine.

Model Architecture: chrp


Model Implementation: Multiple Processor, PCI bus

fscsi3 U8205.E6C.10AD03R-V5-C114-T1 FC SCSI I/O Controller Protocol Device


sfwcomm3 U8205.E6C.10AD03R-V5-C114-T1-W0-L0 Fibre Channel Storage
Framework Comm
L2cache0 L2 Cache
mem0 Memory
proc0 Processor
proc4 Processor
……………….
……………….
……………….
# bootinfo -K

2) Network Configuration –
Reserve IPs for each LPAR -
One public IP
One private IP on a separate subnet/netmask from the public network, e.g. rstsdb01-priv, rstsdb02-priv
One virtual IP with the same netmask as the public network, e.g. rstsdb01-vip, rstsdb02-vip
A set of 3 SCAN IPs with the same netmask as the public network, e.g. rstsdb-scan
All 3 SCAN IPs need to be registered in DNS against the cluster SCAN name. (A sketch for plumbing the private address on its adapter follows.)
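
If the private interconnect address still needs to be plumbed on its adapter, something along these lines can be used; this is only a sketch, and the interface, address and netmask are example values matching the ifconfig output below - adjust them for each LPAR:

# /usr/sbin/chdev -l en1 -a netaddr=172.16.1.12 -a netmask=255.255.255.0 -a state=up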

Validation :-

$ ifconfig -a
en1:
flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BI
T,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 172.16.1.12 netmask 0xffffff00 broadcast 172.16.1.255
inet 169.254.22.85 netmask 0xffff0000 broadcast 169.254.255.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en2:
flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BI
T,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 172.16.2.12 netmask 0xffffff00 broadcast 172.16.2.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en0:
flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BI
T,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 10.40.218.46 netmask 0xffffff00 broadcast 10.40.218.255
inet 10.40.218.249 netmask 0xffffff00 broadcast 10.40.218.255
inet 10.40.218.161 netmask 0xffffff00 broadcast 10.40.218.255
inet 10.40.218.192 netmask 0xffffff00 broadcast 10.40.218.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
lo0:
flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LA
RGESEND,CHAIN>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1%1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

$ nslookup rstsdb-scan
Server: 10.1.17.46
Address: 10.1.17.46#53

Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.161
Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.191
Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.192
3) Adding Entries to the hosts file –
Ensure all public and private IPs are assigned on their respective interfaces.
Update the /etc/hosts file on every node with entries for the host IP, private IP and virtual IP of all the nodes in the
cluster.

Validation –

$ cat /etc/hosts
127.0.0.1 loopback localhost # loopback (lo0) name/address
10.72.16.55 rspbps01a
10.72.17.81 rsdmgd01
10.1.17.30 rsptsm01
10.40.235.12 ntp.corpads.local
10.40.218.46 rstsdb01.corpads.local rstsdb01
10.40.218.47 rstsdb02.corpads.local rstsdb02
172.16.1.12 rstsdb01-priv.corpads.local rstsdb01-priv
172.16.1.13 rstsdb02-priv.corpads.local rstsdb02-priv
10.40.218.249 rstsdb01-vip.corpads.local rstsdb01-vip
10.40.218.250 rstsdb02-vip.corpads.local rstsdb02-vip

4) Install Packages –
The following AIX filesets need to be installed on both nodes of the cluster (an installation sketch follows the list) -
bos.adt.base
bos.adt.lib
bos.adt.libm
bos.perf.libperfstat 6.1.2.1 or later
bos.perf.perfstat
bos.perf.proctools
rsct.basic.rte
rsct.compat.clients.rte
xlC.aix61.rte 10.1.0.0 or later
xlC.rte 10.1.0.0 or later
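
If any of these filesets turn out to be missing, they can be added from the AIX installation media or a NIM lpp_source. A minimal sketch, assuming the media is available at /dev/cd0 (adjust the device/directory for your environment; -a applies the filesets, -X extends filesystems if needed, -Y accepts license agreements):

# installp -aXY -d /dev/cd0 bos.adt.base bos.adt.lib bos.adt.libm
# installp -aXY -d /dev/cd0 bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools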

Validation -

$ lslpp -l bos.adt.base
$ lslpp -l bos.adt.lib
$ lslpp -l bos.adt.libm
$ lslpp -l bos.perf.libperfstat
$ lslpp -l bos.perf.perfstat
$ lslpp -l bos.perf.proctools
$ lslpp -l rsct.basic.rte
$ lslpp -l rsct.compat.clients.rte
$ lslpp -l xlC.aix61.rte
$ lslpp -l xlC.rte
$ lslpp -l gpfs.base

5) Compiler Requirements –
The following is the minimum compiler requirement for Pro*C/C++, Oracle Call Interface, Oracle
C++ Call Interface, and Oracle XML Developer’s Kit (XDK) with Oracle Database 11g Release 2
(11.2) -

IBM XL C/C++ Enterprise Edition for AIX, V9.0 April 2008 PTF
Please install accordingly on each of the nodes.

Validation –

$ lslpp -l all |grep xlC


xlC.aix61.rte 11.1.0.2 COMMITTED XL C/C++ Runtime for AIX 6.1
xlC.cpp 9.0.0.0 COMMITTED C for AIX Preprocessor
xlC.msg.en_US.cpp 9.0.0.0 COMMITTED C for AIX Preprocessor
xlC.msg.en_US.rte 11.1.0.2 COMMITTED XL C/C++ Runtime
xlC.rte 11.1.0.2 COMMITTED XL C/C++ Runtime
xlC.sup.aix50.rte 9.0.0.1 COMMITTED XL C/C++ Runtime for AIX 5.2

6) Patch Requirement –
For AIX v6.1 -
Install all AIX 6L 6.1 Authorized Problem Analysis Report (APAR) fixes for AIX 6.1 TL 02 SP1 and the
following fixes - IZ41855, IZ51456, IZ52319, IZ89165, IZ97457 - on each of the nodes (a combined check sketch follows).
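
All of the listed APARs can be checked in one pass with a small loop - the same instfix calls as in the validation below, just iterated:

$ for apar in IZ41855 IZ51456 IZ52319 IZ89165 IZ97457; do /usr/sbin/instfix -i -k "$apar"; done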

Validation –

$ /usr/sbin/instfix -i -k "IZ41855"
$ /usr/sbin/instfix -i -k "IZ51456"
$ /usr/sbin/instfix -i -k "IZ52319"
$ /usr/sbin/instfix -i -k "IZ89165"
$ /usr/sbin/instfix -i -k "IZ97457"

For AIX 6.1 TL 07, the IBM team has confirmed that the APARs IZ89165 and IZ97457 are part of the
complete package, which is why they show as not installed when the commands –

$ /usr/sbin/instfix -i -k "IZ89165"
$ /usr/sbin/instfix -i -k "IZ97457"

were executed.

Oracle Support has therefore advised to proceed with the installation, given that the IBM team has confirmed the presence of these
APARs.
7) Configure ntpd daemon –
The ntpd daemon should be configured on all the nodes so that their clocks stay synchronized.
Make sure that the xntpd service is running with the -x option.
Edit the file /etc/ntp.conf and add the following line -
OPTIONS="-x"
In the /etc/rc.tcpip file, change the line starting with -
start /usr/sbin/xntpd "$src_running"
to
start /usr/sbin/xntpd "$src_running" "-x"
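
After changing /etc/rc.tcpip, the running daemon has to be restarted under SRC for the -x option to take effect; one way to do that (as root on each node):

# stopsrc -s xntpd
# startsrc -s xntpd -a "-x"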

Validation –

$ ps -f | grep xntpd
grid 7733450 10747952 0 09:23:11 pts/5 0:00 grep xntpd

$ cat /etc/rc.tcpip | grep xntpd


start /usr/sbin/xntpd "$src_running" "-x"

$ cat /etc/ntp.conf | grep OPTIONS


OPTIONS="-x"

8) Setting System Configuration Parameters –

The following UDP and TCP kernel parameters should be set as below on each of the nodes -
tcp_ephemeral_low = 32768
tcp_ephemeral_high = 65535
udp_ephemeral_low = 32768
udp_ephemeral_high = 65535
Also set the following system configuration parameters (a sketch of the commands for all of these follows) -
maxuproc = 16384
ncargs = 128
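
A sketch of the commands that set these values persistently (run as root on each node; no -p applies the change now and preserves it across reboots):

# /usr/sbin/no -p -o tcp_ephemeral_low=32768 -o tcp_ephemeral_high=65535
# /usr/sbin/no -p -o udp_ephemeral_low=32768 -o udp_ephemeral_high=65535
# /usr/sbin/chdev -l sys0 -a maxuproc=16384
# /usr/sbin/chdev -l sys0 -a ncargs=128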

Validation –

$ /usr/sbin/no -a | fgrep ephemeral


tcp_ephemeral_high = 65500
tcp_ephemeral_low = 9000
udp_ephemeral_high = 65500
udp_ephemeral_low = 9000

$ lsattr -E -l sys0 | grep maxuproc


maxuproc 16384 Maximum number of PROCESSES allowed per user True
$ lsattr -E -l sys0 | grep ncargs
ncargs 256 ARG/ENV list size in 4K byte blocks True

9) SCAN VIP and SCAN Listener issues –

a) Check that the /etc/resolv.conf file is identical on both nodes.
b) Make sure only a search entry (and not a domain entry, or both search and domain entries) is
present in the resolv.conf file on both nodes.
c) The command nslookup <scan-name> should resolve to all 3 IPs reserved for the SCAN
listener.
d) Ensure the SCAN VIPs use the same netmask as the public interface.

$ cat /etc/resolv.conf
nameserver 10.1.17.46
nameserver 10.72.16.62
search bcbsnj.com igntdom1.com corpads.local

$ nslookup rstsdb-scan
Server: 10.1.17.46
Address: 10.1.17.46#53

Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.161
Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.191
Name: rstsdb-scan.bcbsnj.com
Address: 10.40.218.192

10) Creation of Operating System Groups and Users –

On each of the cluster nodes, create groups as follows (a creation sketch follows) -
Oracle Inventory group – oinstall (gid 2001)
OSDBA groups – dba (gid 2002), asmdba (gid 2003)
OSOPER group – asmoper (gid 2004)
OSASM group – asmadmin (gid 2005)
Create the Oracle software owner – oracle (uid 1002)
Primary group - oinstall; other groups - dba, asmdba
Create the Grid Infrastructure owner – grid (uid 1001)
Primary group - oinstall; other groups - asmadmin, asmdba, dba, asmoper
Make sure the oracle and grid users can run GUI (X Window) sessions, as this helps with
installation in GUI mode. Also share the oracle and grid users' passwords.
Request the AIX team to create the uids and gids with the values defined above so that the uids of oracle
and grid are the same on both nodes.
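
A sketch of creating the groups and users with the stated ids (run as root on each node; the home directories are assumptions, and the secondary groups follow the id output shown in the validation below):

# mkgroup id=2001 oinstall
# mkgroup id=2002 dba
# mkgroup id=2003 asmdba
# mkgroup id=2004 asmoper
# mkgroup id=2005 asmadmin
# mkuser id=1001 pgrp=oinstall groups=oinstall,asmadmin,asmdba,dba,asmoper home=/home/grid grid
# mkuser id=1002 pgrp=oinstall groups=dba,asmdba home=/home/oracle oracle
# passwd grid
# passwd oracle
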
Validation –

$ id oracle
uid=1002(oracle) gid=2001(oinstall) groups=1(staff),2002(dba),2003(asmdba)

$ id grid
uid=1001(grid) gid=2001(oinstall)
groups=1(staff),2002(dba),2003(asmdba),2005(asmadmin),2004(asmoper)

11) Configure Shell Limits for grid and root user –


Set the shell limits for the Oracle Grid Infrastructure installation owner (grid) and for root to
unlimited (core is capped as shown). Verify the settings for both accounts either with the smit utility or by
editing the /etc/security/limits file (a chuser sketch follows the list) -
fsize = -1
core = 2097151
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
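
One way to apply these limits without editing /etc/security/limits by hand is chuser (a fresh login is needed for the new limits to take effect); a sketch for the grid user - repeat for root and, typically, oracle:

# chuser fsize=-1 cpu=-1 data=-1 rss=-1 stack=-1 nofiles=-1 core=2097151 grid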

Validation –

$ ulimit -a
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
coredump(blocks) 2097151
nofiles(descriptors) unlimited
threads(per process) unlimited
processes(per user) unlimited

# cat /etc/security/limits
12) Tuning virtual memory manager parameters –
Set the VMM parameters as follows -
# vmo -p -o minperm%=3
# vmo -p -o maxperm%=90
# vmo -p -o maxclient%=90
# vmo -p -o lru_file_repage=0
# vmo -p -o strict_maxclient=1
# vmo -p -o strict_maxperm=0

Validation –

# vmo -L minperm%
# vmo -L maxperm%
# vmo -L maxclient%
# vmo -L lru_file_repage
# vmo -L strict_maxclient
# vmo -L strict_maxperm

13) Tuning Network Parameters –


Please set the below network parameters on both the nodes -
no -r -o ipqmaxlen=512
no -p -o sb_max=4194304
no -p -o tcp_recvspace=65536
no -p -o tcp_sendspace=65536
no -p -o udp_sendspace=65536
no -p -o udp_recvspace=655360
no -p -o rfc1323=1

Note: the ipqmaxlen parameter is set with the -r (reboot) option and only takes effect after a restart of the server.

Validation –

$ /usr/sbin/no -a | grep ipqmaxlen


$ /usr/sbin/no -a | grep sb_max
$ /usr/sbin/no -a | grep tcp_recvspace
$ /usr/sbin/no -a | grep tcp_sendspace
$ /usr/sbin/no -a | grep udp_sendspace
$ /usr/sbin/no -a | grep udp_recvspace
$ /usr/sbin/no -a | grep rfc1323
14) Check Asynchronous Input Output Processes –
The recommended value for aio_maxreqs is 65536. Please configure accordingly (a sketch follows).

# ioo -o aio_maxreqs
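
On AIX 6.1 the AIO tunables are managed through ioo; if the current value differs from the recommendation, it can be set persistently as below (aio_maxreqs may be flagged as a restricted tunable, in which case ioo asks for confirmation or the -F option):

# ioo -p -o aio_maxreqs=65536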

15) Storage Allocation –

Allocate __ x 70 GB LUNs (for DATA, FRA, oracle/grid binaries) and 3 x 5 GB LUNs (for OCR & Voting Disks)
to the system.
Make sure multipathing is enabled to avoid a single point of failure on the storage side.
Please ensure disk-level redundancy (RAID etc.) has been set up.
Allocate 2 NFS shares of 70 GB each (/Backup and /Soft) for keeping the software and
staging area, with oracle ownership and 774 permission. (ONE-TIME ACTIVITY)

16) Scan the storage and create the required mount points –
Scan the allocated LUNs.
Create 3 raw devices out of the 3 x 5 GB LUNs, shared across all the cluster nodes.
Create 1 block device out of 1 x 70 GB LUN for each node, mounted locally.
The permission for the raw devices should be 660 with grid:asmadmin ownership.
Mount 1 NFS staging area of size 70 GB, shared across all the nodes, to keep the software.
Create the following mount points -
Reserve one 70 GB LUN for /u01.
One 70 GB mount point - /u01 (for the Oracle Grid Infrastructure binaries installation),
owned by grid:oinstall.
Create the /Soft and /Backup directories, attach the NFS storage to /Soft and /Backup
respectively, and give them grid:oinstall ownership and 770 permission.
Also allocate 210 GB (disks of equal sizes) of raw volumes for ASM diskgroup creation.
All hdisks and rhdisks should have grid:asmadmin ownership with permission 660 (a preparation sketch follows).
Make sure that the output of ls -las /dev/rhdisk* (device name, type,
permissions, owner and group) is the same across the nodes.
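
A sketch of preparing one candidate ASM disk; hdisk5 is an example name, so repeat for every shared LUN on every node. The reserve_policy setting assumes the default AIX MPIO driver - other multipathing software uses a different attribute:

# chdev -l hdisk5 -a reserve_policy=no_reserve
# chown grid:asmadmin /dev/rhdisk5
# chmod 660 /dev/rhdisk5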

Validation –

$ ls -lat /dev/rhdisk*
crw-rw---- 1 grid asmadmin 13, 30 Jun 24 03:11 /dev/rhdisk1
crw-rw---- 1 grid asmadmin 13, 21 Jun 24 03:11 /dev/rhdisk24
crw-rw---- 1 grid asmadmin 13, 3 Jun 24 03:11 /dev/rhdisk8
crw-rw---- 1 grid asmadmin 13, 10 Jun 24 03:10 /dev/rhdisk13
crw-rw---- 1 grid asmadmin 13, 4 Jun 24 03:10 /dev/rhdisk10
crw-rw---- 1 grid asmadmin 13, 2 Jun 24 03:10 /dev/rhdisk11
crw-rw---- 1 grid asmadmin 13, 1 Jun 24 03:10 /dev/rhdisk12
crw-rw---- 1 grid asmadmin 13, 11 Jun 24 03:10 /dev/rhdisk14
crw-rw---- 1 grid asmadmin 13, 12 Jun 24 03:10 /dev/rhdisk15
crw-rw---- 1 grid asmadmin 13, 13 Jun 24 03:10 /dev/rhdisk16
crw-rw---- 1 grid asmadmin 13, 31 Jun 24 03:10 /dev/rhdisk5
crw-rw---- 1 grid asmadmin 13, 7 Jun 24 03:10 /dev/rhdisk6
….. …. ….

17) /tmp Sizing – The /tmp filesystem should be 8 GB.

Validation -

$ df -g /tmp
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd3 4.00 3.61 10% 933 1% /tmp
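
If /tmp is smaller than required (as in the example output above, which shows only 4 GB), it can usually be grown online, provided the volume group has free space; a sketch:

# chfs -a size=8G /tmp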

18) Directory Creation –

Create the directories for the Grid Infrastructure and Oracle RDBMS binaries installation, with
proper ownership, on all the nodes (a sketch follows) -
Grid directories - /u01/app/11.2.0.3/grid owner - grid:oinstall
/u01/app/grid owner - grid:oinstall
Oracle directories - /u01/app/oracle owner - oracle:oinstall
Permission 760 for /u01 recursively
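
A sketch of the corresponding commands (run as root on each node; the permission value follows what is stated above):

# mkdir -p /u01/app/11.2.0.3/grid /u01/app/grid /u01/app/oracle
# chown -R grid:oinstall /u01
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 760 /u01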

Validation –

$ ls -ltr /u01/app/11.2.0.3/grid

$ ls -ltr /u01/app/oracle

19) Assigned Capabilities to Users –


Assign the following capabilities to the grid and oracle users on each of the nodes -
# /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
# /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
Validation –

# /usr/sbin/lsuser -a capabilities grid


grid capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
# /usr/sbin/lsuser -a capabilities oracle
oracle capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE

20) ssh configuration –


For each user - oracle and grid - ssh user equivalence should be set up between all the nodes, as sketched
below.
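
A minimal manual sketch for one user, run on rstsdb01 after a key has been generated on both nodes (repeat the mirror-image steps on rstsdb02, and do the whole thing once as grid and once as oracle; the 11.2 installer can also set up the equivalence automatically):

$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh rstsdb02 cat '~/.ssh/id_rsa.pub' >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys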

Validation –
execute the below commands from both the nodes with
i) Grid user
ii) Oracle User

$ ssh rstsdb01 date


Mon Jun 24 06:29:30 EDT 2013
$ ssh rstsdb02 date
Mon Jun 24 06:29:30 EDT 2013

21) grid user .profile update -

Modify the grid user's .profile on each of the nodes (ORACLE_SID would be +ASM1 and +ASM2 on
the two nodes respectively) -
A sample profile for the grid user is attached; update the grid user's .profile with the contents
of that sample. A rough sketch of the typical contents follows.
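
The attached sample is not reproduced here; as a rough sketch, a grid .profile for this layout would typically contain something like the following (paths taken from the directory layout in step 18, and the +ASM SID naming is the usual convention - adjust to match the attached sample):

export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0.3/grid
export ORACLE_SID=+ASM1        # +ASM2 on the second node
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH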

Validation/Process to do –

$ vi ~/.profile

$ . ~/.profile
22) Verifying Cluster Pre-installation requirements and resolution of errors (if any) –
Execute the runcluvfy utility to check whether the cluster fulfils all the pre-installation requirements of Grid
Infrastructure. The runcluvfy.sh script is located in the grid installable (staging) directory.

$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

After execution, resolve any errors/warnings and re-run the utility until
all warnings/errors are resolved.

Alternatively, many of the issues can be resolved by having cluvfy generate a fixup script and then executing the
generated script as the root user -
./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose

23) Running the rootpre.sh script -


Execution of the rootpre.sh script is optional. (We did not execute it for our installation.)
The rootpre.sh script is found in the grid directory of the grid installables. Execute it as the
root user. It only needs to be run once per system/LPAR for an Oracle grid installation (unless
the next installation is a higher version than the current one).

24) Setting up the DISPLAY env variable and starting the X Window software –
If the installation is being carried out through a remote login (with PuTTY or a similar tool), start
your X Window software (such as WinaXe) and set the DISPLAY environment variable to the
IP of the local machine, as shown in the box below.

Validation/Steps to do –

$ export DISPLAY=192.168.39.130:0.0

$ echo $DISPLAY
192.168.39.130:0.0

25) Grid Infrastructure Installation –


Starting runInstaller -
Log in as "grid" and start runInstaller from the grid directory of the Oracle grid software - ./runInstaller

26) Run cluvfy to check for successful clusterware installation –
The success of the Grid Infrastructure installation can be confirmed in the following way.
Execute the below command as the grid user -
./cluvfy stage -post crsinst -n all > /tmp/cluvfy-cluster.log

27) ASM Configuration –


Starting asmca -
Log in as "grid" and run asmca.

28) Oracle user .profile update –


Update the oracle user profile as below (the value of the variables may be changed as per your
choice/requirement) –

$ vi $HOME/.profile
(Add the following lines in oracle user's .profile file - )
export ORACLE_HOME=/u01/app/oracle
export ORACLE_SID=<db_name>
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$PATH

Next execute the .profile file -

$ . ~/.profile

29) Verifying RDBMS Pre-installation requirements and resolution of errors (if any) –
Run the runcluvfy utility again to check whether the RDBMS installation prerequisites have been met. If
not, resolve the errors/warnings and re-run the utility.

./cluvfy stage -pre dbinst -n rstsdb01,rstsdb02 -verbose


30) Revisit step 24 (DISPLAY setup) for the oracle login –
31) RDBMS Binaries Installation –
Starting runInstaller :
Log in as the oracle user, go to the database folder of the unzipped Oracle software, and execute
runInstaller as follows -
./runInstaller

32) Database Creation –


Start dbca -
Run dbca as the oracle user in a terminal. The Database Configuration Assistant will start.
Press Next.

33) Post Installation Testing –


The Oracle 11gR2 RAC installation is now complete. Execute the crs_stat -t command to check that all
the services are running successfully. A typical output of crs_stat -t would look like the
following –

[rstsdb01:grid] /u01/app/11.2.0.3/grid/bin> ./crs_stat -t


Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rstsdb01
ora.FRA.dg     ora....up.type ONLINE    ONLINE    rstsdb01
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rstsdb01
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rstsdb02
ora....N2.lsnr ora....er.type ONLINE    ONLINE    rstsdb01
ora....N3.lsnr ora....er.type ONLINE    ONLINE    rstsdb01
ora.OCRVOTE.dg ora....up.type ONLINE    ONLINE    rstsdb01
ora.asm        ora.asm.type   ONLINE    ONLINE    rstsdb01
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rstsdb01
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rstsdb01
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rstsdb01
ora.ons        ora.ons.type   ONLINE    ONLINE    rstsdb01
ora....ev01.db ora....se.type ONLINE    ONLINE    rstsdb01
ora....ry.acfs ora....fs.type ONLINE    ONLINE    rstsdb01
ora....SM1.asm application    ONLINE    ONLINE    rstsdb01
ora....01.lsnr application    ONLINE    ONLINE    rstsdb01
ora....b01.gsd application    OFFLINE   OFFLINE
ora....b01.ons application    ONLINE    ONLINE    rstsdb01
ora....b01.vip ora....t1.type ONLINE    ONLINE    rstsdb01
ora....SM2.asm application    ONLINE    ONLINE    rstsdb02
ora....02.lsnr application    ONLINE    ONLINE    rstsdb02
ora....b02.gsd application    OFFLINE   OFFLINE
ora....b02.ons application    ONLINE    ONLINE    rstsdb02
ora....b02.vip ora....t1.type ONLINE    ONLINE    rstsdb02
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rstsdb02
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    rstsdb01
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    rstsdb01
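
Note that crs_stat is deprecated in 11gR2; the same status can also be checked with crsctl and srvctl from the grid home, for example (the database name is a placeholder):

$ ./crsctl stat res -t
$ ./crsctl check cluster -all
$ srvctl status database -d <db_name>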

34) Known Issues and Errors during Installation –

35) Post Installation Activities –

i) Request the AIX team for sudo access to $GRID_HOME/bin/crsctl stop/start crs (a sudoers sketch follows).
ii) Request the AIX team for sudo access to tar for backing up the GRID and ORACLE binaries during
patching, as the grid binaries may contain directories/files owned by root.
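
A sketch of the /etc/sudoers entries the AIX team might add for this (the grid home path follows the layout used in this document; the exact policy is up to the AIX team):

oracle,grid ALL = (root) NOPASSWD: /u01/app/11.2.0.3/grid/bin/crsctl start crs
oracle,grid ALL = (root) NOPASSWD: /u01/app/11.2.0.3/grid/bin/crsctl stop crs
oracle,grid ALL = (root) NOPASSWD: /usr/bin/tar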
