Oracle Database 12c:
Clusterware Administration
Activity Guide
D81246GC11
Edition 1.1 | July 2015 | D91717
Disclaimer
This document contains proprietary information and is protected by copyright and other intellectual property laws. You may copy and
print this document solely for your own use in an Oracle training course. The document may not be modified or altered in any way.
Except where your use constitutes "fair use" under copyright law, you may not use, share, download, upload, copy, print, display,
perform, reproduce, publish, license, post, transmit, or distribute this document in whole or in part without the express authorization
of Oracle.
Unauthorized reproduction or distribution prohibited. Copyright © 2016, Oracle and/or its affiliates.
The information contained in this document is subject to change without notice. If you find any problems in the document, please
report them in writing to: Oracle University, 500 Oracle Parkway, Redwood Shores, California 94065 USA. This document is not
warranted to be error-free.
If this documentation is delivered to the United States Government or anyone using the documentation on behalf of the United
States Government, the following notice is applicable:
Author
Jim Womack

Technical Contributors and Reviewers
Herbert Bradbury, Allan Graves, Gerlinde Frenzen, Branislav Valny, Ira Singer,
Harald Van Breederode, Joel Goodman, Sean Kim, Andy Fortunak, Al Flournoy,
Markus Michalewicz, Maria Billings, Jerry Lee

This book was published using: oracletutor
Table of Contents
Course Practice Environment: Security Credentials ....................................................................................I-1
Course Practice Environment: Security Credentials.......................................................................................I-2
Practices for Lesson 1: Introduction to Clusterware ....................................................................................1-1
Practices for Lesson 1 ....................................................................................................................................1-2
Practice 6-4: Creating a RAC Database .........................................................................................................6-24
Practices for Lesson 7: Traditional Clusterware Management ....................................................................7-1
Practices for Lesson 7: Overview ...................................................................................................................7-2
Practice 7-1: Verifying, Starting, and Stopping Oracle Clusterware ...............................................................7-3
Practice 7-2: Adding and Removing Oracle Clusterware Configuration Files ................................................7-14
Practice 7-3: Performing a Backup of the OCR and OLR ..............................................................................7-17
Practice 7-4: Configuring Network Interfaces Using oifcfg .............................................................................7-20
Practice 7-5: Working with SCANs, SCAN Listeners, and GNS ....................................................................7-37
Practice 7-6: Recovering From Voting Disk Corruptions ...............................................................................7-47
Practices for Lesson 8: Policy-Based Cluster Management ........................................................................8-1
Practices for Lesson 8: Overview ...................................................................................................................8-2
Practice 8-1: Configuring and Using Policy-Based Cluster Management .......................................................8-3
Practices for Lesson 9: Upgrading and Patching Grid Infrastructure .........................................................9-1
Practices for Lesson 9 ....................................................................................................................................9-2
Practices for Lesson 10: Troubleshooting Oracle Clusterware ...................................................................10-1
Practices for Lesson 10: Overview .................................................................................................................10-2
Practice 10-1: Working with Log Files ............................................................................................................10-3
Practice 10-2: Working with OCRDUMP ........................................................................................................10-7
Practice 10-3: Working with CLUVFY ............................................................................................................10-11
Practice 10-4: Working with Cluster Health Monitor .......................................................................................10-16
Practices for Lesson 11: Making Applications Highly Available .................................................................11-1
Practices for Lesson 11: Overview .................................................................................................................11-2
Practice 11-1: Configuring Highly Available Application Resources on Flex Cluster Leaf Nodes ..................11-3
Practice 11-2: Protecting the Apache Application ..........................................................................................11-12
Course Practice Environment: Security Credentials
Chapter I
For operating system (Linux) usernames and passwords, see the following:
• If you are attending a classroom-based or live virtual class, ask your instructor or LVC
For product-specific credentials used in this course, see the following table:
Product-Specific Credentials

Product/Application    Username    Password
Database (orcl)        SYS         oracle_4U
Database (orcl)        SYSTEM      oracle_4U
Grid                   SYSASM      oracle_4U
Operating system       grid        oracle
Operating system       root        oracle
Operating system       oracle      oracle
Practices for Lesson 1: Introduction to Clusterware
Chapter 1
Practices for Lesson 2: Oracle Clusterware Architecture
Chapter 2
Tasks
Access to your laboratory environment will be through a graphical display running on your
classroom workstation or hosted on a remote machine. Your instructor will provide you with
instructions to access your practice environment.
The practice environment for this course is hosted on a server running Oracle Virtual Machine
(OVM). In turn, OVM hosts numerous virtual machines (VMs). Each VM is a logically separate
server that will be used to run Oracle Database 12c software, including Clusterware, Automatic
Storage Management (ASM) and Real Application Clusters (RAC).
1. Open a terminal window and become the root user. Execute the xm list command.
   You should see output similar to the example displayed below. It shows that your server is
   hosting six domains. Domain-0 is the OVM server. The other domains relate to five VMs,
   which you will use to form a cluster in upcoming practices. Exit the root account when
   finished.
[vncuser@classroom_pc ~]$ su -
Password: *****
[root@classroom_pc ~]# xm list
Name            ID   Mem VCPUs      State   Time(s)
Domain-0         0  1024     2     r-----    7872.6
host01           1  4000     1     -b----    4582.3
host02           2  3400     1     -b----    5130.9
host03           3  3400     1     -b----    4470.0
brw-rw---- 1 grid asmadmin 202, 104 May  7 18:37 /dev/asmdisk2p7
brw-rw---- 1 grid asmadmin 202, 105 May  7 18:37 /dev/asmdisk2p8
brw-rw---- 1 grid asmadmin 202, 106 May  7 18:37 /dev/asmdisk2p9
[root@host01 ~]#
4. Examine host02 to confirm that the shared disk devices are also visible on that node.
[root@host01 ~]# ssh host02 "ls -l /dev/asmdisk*"
brw-rw---- 1 grid asmadmin 202, 81 May 7 18:37 /dev/asmdisk1p1
brw-rw---- 1 grid asmadmin 202, 91 May 7 18:37 /dev/asmdisk1p10
brw-rw---- 1 grid asmadmin 202, 92 May 7 18:37 /dev/asmdisk1p11
brw-rw---- 1 grid asmadmin 202, 93 May 7 18:37 /dev/asmdisk1p12
brw-rw---- 1 grid asmadmin 202, 82 May 7 18:37 /dev/asmdisk1p2
brw-rw---- 1 grid asmadmin 202, 83 May 7 18:37 /dev/asmdisk1p3
brw-rw---- 1 grid asmadmin 202, 85 May 7 18:37 /dev/asmdisk1p4
brw-rw---- 1 grid asmadmin 202, 86 May 7 18:37 /dev/asmdisk1p5
brw-rw---- 1 grid asmadmin 202, 87 May 7 18:37 /dev/asmdisk1p6
brw-rw---- 1 grid asmadmin 202, 88 May 7 18:37 /dev/asmdisk1p7
brw-rw---- 1 grid asmadmin 202, 89 May 7 18:37 /dev/asmdisk1p8
brw-rw---- 1 grid asmadmin 202, 90 May 7 18:37 /dev/asmdisk1p9
brw-rw---- 1 grid asmadmin 202, 97 May 7 18:37 /dev/asmdisk2p1
brw-rw---- 1 grid asmadmin 202, 107 May 7 18:37 /dev/asmdisk2p10
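The per-node listings above can also be compared mechanically. The following is a minimal sketch, assuming passwordless ssh between the nodes is already configured; the helper names list_disks and check_disks are illustrative, not part of the course environment.

```shell
# Normalize an "ls -l" listing to "device owner group" triples so that
# listings taken on different nodes can be compared line by line.
list_disks() {
  awk '{print $NF, $3, $4}' | sort
}

# Compare every other node's shared-disk listing against host01's view.
# Run from host01; any node whose devices differ is flagged.
check_disks() {
  local local_view remote_view h
  local_view=$(ls -l /dev/asmdisk* | list_disks)
  for h in host02 host03 host04 host05; do
    remote_view=$(ssh "$h" 'ls -l /dev/asmdisk*' | list_disks)
    if [ "$remote_view" = "$local_view" ]; then
      echo "$h: shared disks match host01"
    else
      echo "$h: MISMATCH against host01"
    fi
  done
}
```

Comparing the owner and group as well as the device name catches a node where the udev rules did not assign grid:asmadmin ownership.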
Copyright © 2015, Oracle and/or its affiliates. All rights reserved.
[grid@host01 ~]$ ssh host04
[grid@host04 ~]$ logout
Connection to host04 closed.
Practices for Lesson 3: Flex Clusters
Chapter 3
Practices for Lesson 4: Grid Infrastructure Preinstallation Tasks
Chapter 4
Tasks
You will perform various tasks that are required before installing Oracle Grid Infrastructure.
These tasks include:
• Creating base directory
• Setting shell limits
• Editing profile entries
1. SSH to host01 as the root user. The cluster will use CTSSD for time synchronization
   between cluster nodes, so NTPD will not be used. Check that NTPD is not running on any
   cluster node. Perform this step on all five of your nodes.
[vncuser@classroom_pc ~]$ ssh root@host01
Password: *****
[root@host01 ~]# service ntpd status
ntpd is stopped
[root@host01 ~]# ssh host02 service ntpd status
ntpd is stopped
[root@host01 ~]# ssh host03 service ntpd status
ntpd is stopped
[root@host01 ~]# ssh host04 service ntpd status
ntpd is stopped
[root@host01 ~]# ssh host05 service ntpd status
ntpd is stopped
[root@host01 ~]#
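The per-node checks above can be collapsed into a single loop. A sketch, assuming passwordless ssh as root between the nodes; ntpd_is_stopped and check_ntpd_all are hypothetical helper names.

```shell
# True when a "service ntpd status" message reports the daemon as down.
ntpd_is_stopped() {
  case "$1" in
    *"is stopped"*) return 0 ;;
    *)              return 1 ;;
  esac
}

# Report NTPD state on every cluster node and flag any node where it runs,
# since CTSSD requires NTPD to be out of the picture.
check_ntpd_all() {
  local h status
  for h in host01 host02 host03 host04 host05; do
    status=$(ssh "$h" service ntpd status 2>/dev/null)
    if ntpd_is_stopped "$status"; then
      echo "$h: ok (ntpd stopped)"
    else
      echo "$h: WARNING - $status"
    fi
  done
}
```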
2. As the root user, make sure the local naming cache daemon is running on all five cluster
nodes. If NSCD is not running on any of the nodes, start it. Perform these steps on all
five of your nodes.
[root@host01 ~]# service nscd status
nscd (pid 1545) is running...
[root@host01 ~]# ssh host02 service nscd status
nscd (pid 1541) is running...
[root@host01 ~]# ssh host03 service nscd status
nscd (pid 1525) is running...
[root@host01 ~]# ssh host04 service nscd status
nscd (pid 12076) is running...
[root@host01 ~]# ssh host05 service nscd status
nscd (pid 10345) is running...
[root@host01 ~]#
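If NSCD turns out to be down somewhere, the start can be made conditional and persistent in one pass. A hedged sketch, assuming passwordless ssh as root; nscd_pid and ensure_nscd_all are illustrative helpers, and chkconfig is assumed available on these Oracle Linux nodes.

```shell
# Extract the PID from a status line such as "nscd (pid 1545) is running...".
nscd_pid() {
  printf '%s\n' "$1" | sed -n 's/.*(pid \([0-9]*\)).*/\1/p'
}

# Start nscd wherever it is not already running, and register it to start
# at boot so a reboot during the labs does not silently lose it.
ensure_nscd_all() {
  local h
  for h in host01 host02 host03 host04 host05; do
    ssh "$h" 'service nscd status >/dev/null 2>&1 || service nscd start; chkconfig nscd on'
  done
}
```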
pathmunge () {
    if ! echo $PATH | /bin/egrep -q "(^|:)$1($|:)" ; then
        if [ "$2" = "after" ] ; then
            PATH=$PATH:$1
        else
            PATH=$1:$PATH
        fi
    fi
}
# ksh workaround
if [ -z "$EUID" -a -x /usr/bin/id ]; then
EUID=`id -u`
UID=`id -ru`
fi
# Path manipulation
if [ "$EUID" = "0" ]; then
pathmunge /sbin
pathmunge /usr/sbin
pathmunge /usr/local/sbin
fi
if [ -x /usr/bin/id ]; then
USER="`id -un`"
LOGNAME=$USER
MAIL="/var/spool/mail/$USER"
fi
HOSTNAME=`/bin/hostname`
HISTSIZE=1000
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
[root@host01 ~]#
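The pathmunge function in the profile above prepends a directory to PATH by default, appends it when the second argument is "after", and skips directories that are already present. A small worked example, run in a subshell so the caller's PATH is untouched:

```shell
# Same function as in the /etc/profile excerpt above.
pathmunge () {
    if ! echo $PATH | /bin/egrep -q "(^|:)$1($|:)" ; then
        if [ "$2" = "after" ] ; then
            PATH=$PATH:$1
        else
            PATH=$1:$PATH
        fi
    fi
}

result=$(
  PATH=/usr/bin:/bin
  pathmunge /sbin              # not present: prepended
  pathmunge /tmp/extra after   # not present: appended
  pathmunge /bin               # already present: no change
  echo "$PATH"
)
echo "$result"    # /sbin:/usr/bin:/bin:/tmp/extra
```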
5. Check that the proper scheduler is used for your shared disks (the actual devices are xvde
   and xvdf). On each cluster node, enter the following commands to ensure that the Deadline
   scheduler is in use.
[root@host01 ~]# ssh host02 cat /sys/block/xvdf/queue/scheduler
noop [deadline] cfq
[root@host01 ~]# ssh host03 cat /sys/block/xvde/queue/scheduler
noop [deadline] cfq
[root@host01 ~]# ssh host03 cat /sys/block/xvdf/queue/scheduler
noop [deadline] cfq
[root@host01 ~]#
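The active scheduler is the bracketed entry in the sysfs file, so the check over all nodes and both devices can be scripted. A sketch, assuming passwordless ssh as root; active_sched and check_sched_all are illustrative names.

```shell
# Pull the bracketed (active) entry out of a scheduler line,
# e.g. "noop [deadline] cfq" -> "deadline".
active_sched() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n 's/^\[\(.*\)\]$/\1/p'
}

# Report the active I/O scheduler for both shared devices on every node.
check_sched_all() {
  local h d line
  for h in host01 host02 host03 host04 host05; do
    for d in xvde xvdf; do
      line=$(ssh "$h" cat /sys/block/$d/queue/scheduler)
      echo "$h $d: $(active_sched "$line")"
    done
  done
}
```

If a device is not on deadline, it can be switched at runtime with, for example, echo deadline > /sys/block/xvde/queue/scheduler; that setting does not survive a reboot unless made persistent (for example via the kernel elevator= boot parameter).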
6. Create the installation directories for both grid and oracle-owned software. Set the
   ownership and permissions of the directories as shown below. Do this on all five hosts.
   Use the /stage/GRID/labs/less_04/cr_dir.sh script to save time.
[root@host01 ~]# cat /stage/GRID/labs/less_04/cr_dir.sh
#!/bin/bash
mkdir -p /u01/app/12.1.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
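Since the script must run on all five hosts, it can be driven from host01 in one loop and the resulting ownership spot-checked afterward. A hedged sketch, assuming passwordless ssh as root; create_dirs_all and owner_of are illustrative helpers.

```shell
# Run the provided directory-creation script on every node, then print the
# resulting ownership so a wrong chown stands out immediately.
create_dirs_all() {
  local h
  for h in host01 host02 host03 host04 host05; do
    ssh "$h" /stage/GRID/labs/less_04/cr_dir.sh
    echo "== $h =="
    ssh "$h" 'ls -ld /u01/app/12.1.0/grid /u01/app/grid /u01/app/oracle'
  done
}

# Reduce an "ls -ld" line to "owner:group" for quick comparison.
owner_of() {
  printf '%s\n' "$1" | awk '{print $3 ":" $4}'
}
```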
Practices for Lesson 5: Grid Infrastructure Installation
Chapter 5
1. From a vncuser terminal on your desktop PC, change to the root account. First, set the
   time across all nodes using the command shown below. Then restart the NAMED service to
   ensure viability and availability of the services for the software installation.
[vncuser@classroom_pc ~]$ su -
Password:
[root@classroom_pc ~]# TIME="`date +%T`";for H in host01 host02 host03 host04 host05;do ssh $H date -s $TIME;done
root@host01's password:
Wed Jun 17 08:25:14 UTC 2015
root@host02's password:
Wed Jun 17 08:25:14 UTC 2015
root@host03's password:
Wed Jun 17 08:25:14 UTC 2015
root@host04's password:
Wed Jun 17 08:25:14 UTC 2015
root@host05's password:
Wed Jun 17 08:25:14 UTC 2015
[root@classroom_pc ~]# service named restart
Stopping named: .                                          [  OK  ]
Starting named:                                            [  OK  ]
[root@classroom_pc ~]# exit
logout
[vncuser@classroom_pc ~]$
[grid@host01 ~]$
3. Start the Oracle Clusterware release 12.1 installer.
[grid@host01 ~]$ /stage/clusterware/runInstaller
Starting Oracle Universal Installer...
4. On the Select Installation Option screen, click Next to accept the default selection (Install
   and Configure Oracle Grid Infrastructure for a Cluster).
5. On the Select Cluster Type screen, select Configure a Flex Cluster and click Next.
6. On the Select Product Languages screen, click Next to accept the default selection
   (English).
7. Use the Grid Plug and Play Information screen to configure the following settings:
   • Cluster Name: cluster01
   • SCAN Name: cluster01-scan.cluster01.example.com
   • SCAN Port: 1521
   • GNS VIP Address: 192.0.2.155
   • GNS Sub Domain: cluster01.example.com
   Make sure that the “Configure GNS” and “Configure nodes Virtual IPs…” check boxes are
   selected. Click the “Create a new GNS” radio button and then click Next.
8. On the Cluster Node Information screen, click Add to begin the process of specifying
   additional cluster nodes.
   Click the Add button and add host02.example.com. Make sure to set Node Role to HUB,
   and click OK. Click the Add button again and add host04.example.com. Make sure the
   Node Role is set to LEAF and click OK.
10. On the Specify Network Interface Usage screen, ensure that network interface eth0 is
designated as the Public network and that network interface eth1 is designated as the
ASM & Private network. The eth2 interface should be designated “Do not use”. Click Next
to continue.
11. On the Create ASM Disk Group screen, click Change Discovery Path to customize the ASM
Disk Discovery Path.
12. In the Change Disk Discovery Path dialog box, set the Disk Discovery Path to
/dev/asmdisk* and click OK.
13. On the Create ASM Disk Group screen, make sure the Disk group name is DATA and
select the first 10 candidate disks in the list:
• /dev/asmdisk1p1
• /dev/asmdisk1p10
• /dev/asmdisk1p11
• /dev/asmdisk1p12
• /dev/asmdisk1p7
Select Normal for the Redundancy and click Next to continue.
14. On the Specify ASM Password screen, select ‘Use same passwords for these accounts’
and enter (please refer to D81246GC11_passwords.doc for account passwords) as the
password. Then click Next to continue.
15. On the Failure Isolation Support screen, click Next to accept the default setting (Do not use
IPMI).
16. On the Specify Management Options screen, accept the defaults (nothing selected) and
    click Next to continue.
17. On the Privileged Operating System Groups screen, the values should default to the
    following:
    • Oracle ASM Administrator Group: asmadmin
    • Oracle ASM DBA Group: asmdba
    • Oracle ASM Operator Group: asmoper
    Click Next to accept the default values.
18. On the Specify Installation Location screen, make sure the Oracle base is
    /u01/app/grid and change the Software location to /u01/app/12.1.0/grid. Click
    Next to continue.
19. On the Create Inventory screen, click Next to accept the default installation inventory
    location of /u01/app/oraInventory.
20. On the Root script execution configuration screen, check ‘Automatically run configuration
    scripts’ and select ‘Use “root” user credential’. Enter the root password and click Next to
    proceed. (please refer to D81246GC11_passwords.doc for account passwords)
21. Wait while a series of prerequisite checks are performed.
23. In the confirmation dialog, click Yes to ignore the prerequisites flagged by the installer.
24. Examine the Summary screen. When ready, click Install to begin the installation. Oracle
Grid Infrastructure release 12.1 will now install on the cluster. The Install Product screen
follows the course of the installation.
25. Oracle Universal Installer pauses prior to executing the root configuration scripts. Click
Yes to proceed. Oracle Universal Installer now automatically executes the root
configuration scripts and you can follow the progress using the Install Product screen.
26. After configuration completes you will see the following message:
“The installation of Oracle Grid Infrastructure for a Cluster was successful”
Click Close to close Oracle Universal Installer.
27. Back in your terminal session, configure the environment using the oraenv script. Enter
+ASM1 when you are prompted for an ORACLE_SID value.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$
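To avoid rerunning oraenv interactively in every new session, the grid user's profile can set the environment at login. A sketch: ORAENV_ASK=NO is the standard oraenv convention for a non-interactive run, while the append_oraenv helper name and the idea of automating this are assumptions, not a step from this course.

```shell
# Append a non-interactive oraenv setup to a profile file.
# Typical call: append_oraenv ~/.bash_profile
append_oraenv() {
  cat >> "$1" <<'EOF'
export ORACLE_SID=+ASM1
export ORAENV_ASK=NO
. oraenv >/dev/null
EOF
}
```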
ora.asm
      1        ONLINE  ONLINE       host01          STABLE
      2        ONLINE  ONLINE       host02          STABLE
      3        OFFLINE OFFLINE                      STABLE
ora.cvu
      1        ONLINE  ONLINE       host01          STABLE
ora.gns
      1        ONLINE  ONLINE       host01          STABLE
ora.gns.vip
n - tr
Practices afor noLesson 6:
Managingh a s Cluster
eฺ Nodes
) i d
ฺ c om n6t Gu
Chapter
a s is de
s s S tu
l e s@ this
b r ob use
( to
bl es
R o
ul io
B r a
1. Establish a terminal session connected to host01 as the grid OS user. Ensure that you
   specify the -X option for ssh to configure the X environment properly for the grid user.
[vncuser@classroom_pc ~]$ ssh -X grid@host01
grid@host01's password: *****
[grid@host01 ~]$
2. Make sure that you can connect from host01 to host03 without being prompted for
   passwords. Exit back to host01 when finished.
[grid@host01 ~]$ ssh host03
[grid@host03 ~]$ exit
logout
Connection to host03 closed.
[grid@host01 ~]$
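The same equivalence test can be made non-interactive for all nodes at once: with BatchMode=yes, ssh fails immediately instead of prompting for a password. A sketch; check_ssh_all is an illustrative name, not part of the course environment.

```shell
# Verify passwordless ssh from the current node to every cluster node.
# BatchMode=yes disables password prompts, so a node that would prompt
# simply reports a failure instead of hanging the loop.
check_ssh_all() {
  local h
  for h in host01 host02 host03 host04 host05; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" true 2>/dev/null; then
      echo "$h: passwordless ssh ok"
    else
      echo "$h: ssh would prompt (or host unreachable)"
    fi
  done
}
```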
3. Make sure that you set up your environment variables correctly for the grid user to point
   to your grid installation.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$
4. Check your pre-grid-installation readiness for the host03 node using the Cluster
   Verification Utility.
[grid@host01 ~]$ cluvfy stage -pre crsinst -n host03
Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "host01"

Check of subnet "192.168.1.0" for multicast communication with
multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Total memory check failed
Check failed on nodes:
        host03
Available memory check passed
Swap space check passed
Free disk space check passed for
"host03:/usr,host03:/var,host03:/etc,host03:/sbin,host03:/tmp"
Free disk space check passed for "host03:/u01/app/12.1.0/grid"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as
Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
        host03
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
interconnect network interfaces passed
[grid@host01 ~]$
The only exceptions you should encounter are total system memory and grid membership
in the dba group, which is normal.
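Because a cluvfy report this long buries its two expected exceptions, it helps to filter the output down to the failing checks. A tiny sketch; failures_only is an illustrative helper name.

```shell
# Keep only the lines of a cluvfy report that mention a failure, so the
# two expected exceptions (total memory, dba group membership) stand out.
failures_only() {
  grep -i 'failed'
}

# Typical use, run as grid on host01:
#   cluvfy stage -pre crsinst -n host03 | failures_only
```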
6. On the Cluster Add Node Information screen, click Add to add the hostname for the node
   to be added.
7. Enter host03.example.com for the Public Hostname and select HUB as the Node Role.
   Click OK, then click Next to continue.
8. On the Perform Prerequisite Checks page, the Physical memory test will fail as host03 has
   less than 4 GB of memory. Click the Ignore All check box and click Next.
9. Click Yes in the Oracle Grid Infrastructure dialog box.
10. Click Install on the Summary page.
11. The Execute Configuration Scripts dialog box will prompt you to run the root scripts on
    host03. Run the orainstRoot.sh script first.
[root@host01 ~]# ssh host03 /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@host01 ~]#
12. Next, execute the root.sh script on host03.
[root@host01 ~]# ssh host03 /u01/app/12.1.0/grid/root.sh
root@host03's password:
Performing root user operation for Oracle 12c
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability
Services-managed resources on 'host03'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host03'
CRS-2677: Stop of 'ora.drivers.acfs' on 'host03' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed
resources on 'host03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed
resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'host03'
CRS-2672: Attempting to start 'ora.evmd' on 'host03'
CRS-2676: Start of 'ora.mdnsd' on 'host03' succeeded
CRS-2676: Start of 'ora.evmd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host03'
CRS-2676: Start of 'ora.gpnpd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host03'
CRS-2676: Start of 'ora.gipcd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host03'
CRS-2676: Start of 'ora.cssdmonitor' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host03'
CRS-2672: Attempting to start 'ora.diskmon' on 'host03'
CRS-2676: Start of 'ora.crsd' on 'host03' succeeded
CRS-6017: Processing resource auto-start for servers: host03
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on
'host01'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on
'host03'
CRS-2672: Attempting to start 'ora.ons' on 'host03'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'host01'
succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'host01'
CRS-2677: Stop of 'ora.scan2.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'host03'
CRS-2676: Start of 'ora.scan2.vip' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on
'host03'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'host03'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host03'
CRS-2676: Start of 'ora.ons' on 'host03' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'host03'
succeeded
CRS-2676: Start of 'ora.asm' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.host03.vip' on 'host03'
CRS-2676: Start of 'ora.host03.vip' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'host03'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.proxy_advm' on 'host03'
----------------------------------------------------------------
Name           Target  State        Server          State details
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       host01          STABLE
               ONLINE  ONLINE       host02          STABLE
               ONLINE  ONLINE       host03          STABLE
ora.DATA.dg
               ONLINE  ONLINE       host01          STABLE
               ONLINE  ONLINE       host02          STABLE
               ONLINE  ONLINE       host03          STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       host01          STABLE
               ONLINE  ONLINE       host02          STABLE
               ONLINE  ONLINE       host03          STABLE
ora.asm
      1        ONLINE  ONLINE       host01          STABLE
1. Establish a terminal session connected to host01 as the grid OS user. Do not specify
   the -X option for ssh.
[grid@host01 addnode]$ exit (from the grid terminal opened in the previous practice)
Checking Temp space: must be greater than 120 MB. Actual 7382
MB Passed
Checking swap space: must be greater than 150 MB. Actual 8623
MB Passed
[WARNING] [INS-13014] Target environment does not meet some
optional requirements.
CAUSE: Some of the optional prerequisites are not met. See
logs for details. /u01/app/oraInventory/logs/addNodeActions2015-
05-13_04-57-53AM.log
ACTION: Identify the list of failed prerequisite checks from
the log: /u01/app/oraInventory/logs/addNodeActions2015-05-13_04-
Saving cluster inventory in progress.
.................................................. 80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0/grid was successful.
..........
[grid@host01 addnode]$
5. Execute the orainstRoot.sh and root.sh scripts on host05.
[root@host01 ~]# ssh host05 /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.
[root@host01 ~]# ssh host05 /u01/app/12.1.0/grid/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/12.1.0/grid
CRS-2672: Attempting to start 'ora.mdnsd' on 'host05'
CRS-2672: Attempting to start 'ora.evmd' on 'host05'
CRS-2676: Start of 'ora.mdnsd' on 'host05' succeeded
CRS-2676: Start of 'ora.evmd' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host05'
CRS-2676: Start of 'ora.gpnpd' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host05'
CRS-2676: Start of 'ora.gipcd' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host05'
CRS-4123: Oracle High Availability Services has been started.
2015/05/16 16:50:35 CLSRSC-343: Successfully started Oracle
clusterware stack
----------------------------------------------------------------
Name Target State Server State details
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE host04 STABLE
ora.ons
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
----------------------------------------------------------------
Cluster Resources
----------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host03 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host01 169.254.247.174 192.
ora.cvu
----------------------------------------------------------------
[root@host01 ~]#
7. At this point you have configured a Flex Cluster with Flex ASM using all available nodes.
Next, you will install database software and create a RAC database on the cluster. In
preparation for this, you will now create another ASM disk group to host the Fast Recovery
Area (FRA). Exit out of the grid terminal session and start a grid terminal session using the
–X option. Start the ASM Configuration Assistant (asmca).
[grid@host01 addnode]$ exit (From the current grid terminal)

[vncuser@classroom_pc ~]$ ssh –X grid@host01
grid@host01's password: *****

[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ asmca
8. After the ASM Configuration Assistant appears, click Create.
9. In the Create Disk Group window, enter FRA as the disk group name and select the first
three candidate disks (/dev/asmdisk1p8, /dev/asmdisk1p9, and /dev/asmdisk2p1).
Make sure the Redundancy is External. Then click OK to create the disk group.
10. After the disk group creation process completes, you will see a dialog window indicating that
the disk group has been created. Click OK to proceed.
11. Click Exit to quit the ASM Configuration Assistant.
12. Click Yes to confirm that you want to quit the ASM Configuration Assistant.
1. Establish an ssh connection to host01 using the –X option as the oracle user.
[vncuser@classroom_pc ~]$ ssh -X oracle@host01
oracle@host01's password:
Last login: Tue Apr 30 17:07:09 2015 from 192.0.2.1
[oracle@host01 ~]$
2. Change directory to /stage/database/ and start the installer.
[oracle@host01 ~]$ cd /stage/database
[oracle@host01 database]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB. Actual 7934 MB
Passed
Checking swap space: must be greater than 150 MB. Actual 8632 MB
Passed
Checking monitor: must be configured to display at least 256
colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from
/tmp/OraInstall2015-05-22_03-35-08PM. Please wait ...
3. On the Configure Security Updates screen, make sure that the I wish to receive security
updates via My Oracle Support check box is unselected and click Next. Click Yes to confirm
that you will not configure security updates for this installation.
4. On the Select Installation Option screen, select Install database software only and click
Next.
5. On the Grid Installation Options screen, click Next to accept the default selection (Oracle
Real Application Clusters database installation).
6. On the Select List of Nodes screen, ensure that all the cluster nodes are selected and click
SSH Connectivity. Note that Oracle recommends that you install the Oracle Database
software on all the cluster nodes, even Leaf Nodes. This simplifies things if you ever want
to convert a Leaf Node to a Hub Node and run database instances on it.
7. Enter the oracle user password (please refer to D81246GC11_passwords.doc for account
passwords) into the OS Password field and click Test to confirm that the required SSH
connectivity is configured across the cluster. Your laboratory environment is preconfigured
with the required SSH connectivity so you will next see a dialog confirming this. Click OK to
continue.
8. If the required SSH connectivity was not present you could now click Setup to perform the
required configuration. However, since the laboratory environment is already configured
correctly, click Next to continue.
9. On the Select Product Languages screen, click Next to accept the default selection
(English).
settings. They should all be dba except Database Operator which should be oper.
13. On the System Prerequisite Checks page, a series of prerequisite checks is performed. A
warning regarding Maximum locked memory is displayed. Click Fix and Check again.
Execute the fixup scripts on all indicated nodes. Click on the Verification Result tab, click
the Ignore All check box and then click Next.
An Oracle Database 12c Rel 1 Installation popup screen appears next. Click Yes to
continue.
14. Examine the Summary screen. When ready, click Install to start the installation.
15. Oracle Database release 12.1 software now installs on the cluster. The Install Product
screen follows the course of the installation.
16. Near the end of the installation process, you will see the Execute Configuration scripts
dialog box. Execute the orainstRoot.sh script on host05, if prompted. In your root
terminal session on host01, execute the configuration script. Press the Enter key when
you are prompted for the local bin directory location.
Run the script on host02, host03, host04 and host05 as shown below.
[root@host01 ~]# /u01/app/oracle/product/12.1.0/dbhome_1/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/oracle/product/12.1.0/dbhome_1

Enter the full pathname of the local bin directory:
[/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
[/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
[/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/oracle/product/12.1.0/dbhome_1

Enter the full pathname of the local bin directory:
[/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@host01 ~]#
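The same root.sh must also run on host02 through host05, as noted above. As a minimal sketch (host names and the Oracle home path are the ones used in this lab), a loop can print the exact ssh invocations to review before running them from host01:

```shell
# Sketch only: print (not execute) the root.sh invocations for the
# remaining nodes of this lab environment.
for h in host02 host03 host04 host05; do
  echo "ssh $h /u01/app/oracle/product/12.1.0/dbhome_1/root.sh"
done
```

Echoing the commands first lets you verify the host list and path; drop the echo to execute them for real.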
17. After you have executed the required configuration script on all of the cluster nodes, return
to your Oracle Universal Installer session and click OK to proceed.
18. After configuration completes you will see the Finish screen. Click Close to close Oracle
Universal Installer.
[oracle@host01 database]$ cd
/u01/app/oracle/product/12.1.0/dbhome_1/bin
[oracle@host01 bin]$ ./dbca
2. On the Database Operation screen, click Next to accept the default selection (Create
Database).
3. On the Creation Mode screen, click Next to accept the default selection (Advanced Mode).
4. On the Database Template screen, click Next to accept the default settings for a Policy-
Managed RAC Database using the General Purpose template.
5. On the Database Identification screen, specify orcl as the Global Database Name and
click Next.
6. On the Database Placement screen, specify orcldb for the Server Pool Name and set its
cardinality to 3. Click Next to proceed.
7. On the Management Options screen, unselect Run CVU Checks Periodically, ensure that
Configure EM Database Express is selected and the EM Database Port is 5500. Click Next
to continue.
8. On the Database Credentials screen, select ‘Use the Same Administrative Passwords for
All Accounts’ and enter the administrative accounts (SYS, SYSTEM) password. Then click
Next to continue.
9. On the Storage Locations screen, make sure Storage Type is ASM and make the following
adjustments:
− Database File Locations: +DATA
− Fast Recovery Area: +FRA
− Fast Recovery Area Size: 5160
Space constraints in the laboratory environment require that you ignore the warning
regarding the size of the Fast Recovery Area. Once your values match the values above,
click Next to continue.
10. On the Database Options screen, select Sample Schemas and click Next. Click Yes in the
Database Configuration Assistant dialog box regarding instance placement on the local
node.
11. On the Memory tab, in the Initialization Parameters screen, set Memory Size (SGA and
PGA) to 1300 and click on the Character Sets tab.
12. On the Character Sets tab, in the Initialization Parameters screen, select Use Unicode
(AL32UTF8) and click Next.
13. On the Creation Options screen, click Next to accept the default selection (Create
Database).
14. Wait while a series of prerequisite checks is performed.
15. Examine the Summary screen. When you are ready, click Finish to start the database
creation process.
16. Follow the database creation process on the Progress Page.
Practices for Lesson 7:
Traditional Clusterware Management
Chapter 7
• Add and remove Oracle Clusterware configuration files and back up the Oracle Cluster
Registry and the Oracle Local Registry.
• Determine the location of the Oracle Local Registry (OLR) and perform backups of the
OCR and OLR.
• Use oifcfg to configure a second private interconnect.
• Drop and add SCANs, SCAN listeners, and the GNS VIP.
• Recover from a voting disk corruption.
1. Connect to the first node of your cluster as the grid user. You can use the oraenv script
to configure your environment.
[vncuser@classroom_pc ~] ssh grid@host01
Password: *******
[grid@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01~] $
2. Using operating system commands, verify that the Oracle Clusterware daemon processes
are running on the current node. (Hint: Most of the Oracle Clusterware daemon processes
have names that end with d.bin.)
[grid@host01~]$ pgrep -l d.bin
309 ocssd.bin
679 octssd.bin
1721 crsd.bin
17048 ohasd.bin
17363 mdnsd.bin
17380 gpnpd.bin
17388 gipcd.bin
18065 osysmond.bin
29304 gnsd.bin
32739 evmd.bin
[grid@host01 ~]$
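The pgrep pattern above matches the string d.bin anywhere in a process name. A small sketch using sample process names (no live processes involved) shows which names the pattern selects:

```shell
# Sketch: 'd\.bin' matches names containing "d.bin" (ocssd.bin, crsd.bin)
# but not oraagent.bin or tnslsnr. Sample names only, piped through grep.
printf '%s\n' ocssd.bin crsd.bin oraagent.bin tnslsnr | grep 'd\.bin'
```

The dot is escaped so a literal "d.bin" is required, which is why non-daemon process names fall through.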
3. Using the crsctl utility, verify that Oracle Clusterware is running on the current node.
[grid@host01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@host01 ~]$
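A scripted health check can simply count the "is online" lines in crsctl check crs output; on a healthy node all four services report online. This sketch runs against a sample copy of the output above rather than a live cluster:

```shell
# Sketch: count "is online" lines in sample crsctl check crs output.
# On a healthy node the count should be 4.
crs='CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online'
printf '%s\n' "$crs" | grep -c 'is online'
```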
-------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.FRA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE host04 STABLE
OFFLINE OFFLINE host05 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.ons
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host03 STABLE
ora.asm
1 ONLINE ONLINE host01 STABLE
2 ONLINE ONLINE host02 STABLE
3 ONLINE ONLINE host03 STABLE
ora.cvu
1 ONLINE ONLINE host01 STABLE
ora.gns
1 ONLINE ONLINE host01 STABLE
ora.gns.vip
1 ONLINE ONLINE host01 STABLE
ora.host01.vip
1 ONLINE ONLINE host01 STABLE
ora.host02.vip
1 ONLINE ONLINE host02 STABLE
ora.host03.vip
1 ONLINE ONLINE host03 STABLE
ora.mgmtdb
1 ONLINE ONLINE host01 Open,STABLE
ora.oc4j
1 ONLINE ONLINE host01 STABLE
ora.orcl.db
1 ONLINE ONLINE host02 Open,STABLE
[grid@host01 ~]$
6. Switch to the root account, set the environment with oraenv and stop Oracle Clusterware
only on the current node.
[grid@host01 ~]$ su -
[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
[root@host01 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-
managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'host01'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'host01'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.cvu' on 'host01'
CRS-2673: Attempting to stop 'ora.oc4j' on 'host01'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'host01'
CRS-2673: Attempting to stop 'ora.gns' on 'host01'
CRS-2792: Shutdown of Cluster Ready Services-managed resources on
'host01' has completed
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'host01'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2673: Attempting to stop 'ora.evmd' on 'host01'
CRS-2673: Attempting to stop 'ora.storage' on 'host01'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host01'
CRS-2677: Stop of 'ora.storage' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'host01' succeeded
CRS-2677: Stop of 'ora.crf' on 'host01' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'host01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'host01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host01'
succeeded
7. Attempt to check the status of Oracle Clusterware now that it has been successfully
stopped.
[root@host01 ~]# crsctl check cluster
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Check failed, or completed with errors.
[root@host01 ~]#
8. Connect to the second node of your cluster and verify that Oracle Clusterware is still
running on that node. Set your environment for the second node by using the oraenv
utility.
[root@host01 ~]# ssh host02
Last login: Thu May 16 16:00:25 2015 from host01.example.com
[root@host02 ~]# . oraenv
ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/grid

[root@host02 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@host02 ~]#
9. Verify that all cluster resources are running on the second and third nodes, stopped on the
first node, and that the VIP resources from the first node have migrated or failed over to the
other two nodes. When you are finished, exit from host02.
[root@host02 ~]# crsctl stat res -t
-------------------------------------------------------------------
Name Target State Server State details
-------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ora.FRA.dg
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE host04 STABLE
OFFLINE OFFLINE host05 STABLE
ora.net1.network
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.ons
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host03 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host02 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host02 169.254.247.177 192.
168.1.102,STABLE
ora.asm
1 ONLINE OFFLINE STABLE
2 ONLINE ONLINE host02 STABLE
3 ONLINE ONLINE host03 STABLE
ora.cvu
1 ONLINE ONLINE host03 STABLE
ora.gns
1 ONLINE ONLINE host02 STABLE
ora.gns.vip
1 ONLINE ONLINE host02 STABLE
-------------------------------------------------------------------
[root@host02 ~]# exit
logout
Connection to host02 closed.
[root@host01 ~]#
Bra 10. Restart Oracle Clusterware on host01 as the root user. Exit from the root account back
to grid and verify the results of the crsctl start crs command you just executed. It
may take several minutes for the entire stack to be up on all nodes.
[root@host01 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
OFFLINE OFFLINE host05 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.ons
ONLINE ONLINE host01 STABLE
ora.gns
1 ONLINE ONLINE host02 STABLE
ora.gns.vip
1 ONLINE ONLINE host02 STABLE
ora.host01.vip
1 ONLINE ONLINE host01 STABLE
ora.host02.vip
1 ONLINE ONLINE host02 STABLE
ora.host03.vip
1 ONLINE ONLINE host03 STABLE
ora.mgmtdb
1 ONLINE ONLINE host02 Open,STABLE
ora.orcl.db
1 ONLINE ONLINE host02 Open,STABLE
2 ONLINE ONLINE host03 Open,STABLE
3 ONLINE ONLINE host01 Open,STABLE
ora.oc4j
1 ONLINE ONLINE host02 STABLE
ora.scan1.vip
1 ONLINE ONLINE host01 STABLE
ora.scan2.vip
1 ONLINE ONLINE host03 STABLE
ora.scan3.vip
1 ONLINE ONLINE host02 STABLE
-------------------------------------------------------------------
[grid@host01 ~]$
1. As the grid user, use the crsctl utility to determine the location of the voting disks that
are currently used by your Oracle Clusterware installation.
[vncuser@classroom_pc ~] ssh grid@host01
Password: *******
[grid@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 7b50c15f90dd4f8bbf81d40d84f81f33 (/dev/asmdisk1p1) [DATA]
 2. ONLINE d084edbe14444ffdbf362ff8ec50c511 (/dev/asmdisk1p10) [DATA]
 3. ONLINE 3c7041d9c84e4ffdbfc550f315b61226 (/dev/asmdisk1p11) [DATA]
Located 3 voting disk(s).
[grid@host01 ~]$
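A scripted sanity check could count the ONLINE voting disks. This sketch works over sample lines copied from the listing above (not a live cluster), where the expected count is 3:

```shell
# Sketch: count ONLINE voting disks in sample crsctl query css votedisk output.
vd=' 1. ONLINE 7b50c15f90dd4f8bbf81d40d84f81f33 (/dev/asmdisk1p1) [DATA]
 2. ONLINE d084edbe14444ffdbf362ff8ec50c511 (/dev/asmdisk1p10) [DATA]
 3. ONLINE 3c7041d9c84e4ffdbfc550f315b61226 (/dev/asmdisk1p11) [DATA]'
printf '%s\n' "$vd" | grep -c ONLINE
```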
2. Use the ocrcheck utility to determine the location of the Oracle Cluster Registry
(OCR) files.
[grid@host01 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +DATA
[grid@host01 ~]$
3. Verify that the FRA ASM disk group is currently online for all nodes using the crsctl utility.
[grid@host01 ~]$ crsctl stat res ora.FRA.dg -t
----------------------------------------------------------------
Name Target State Server State
details
----------------------------------------------------------------
[grid@host01 ~]$
4. Switch to the root account and add a second OCR location that is to be stored in the FRA
ASM disk group. Set the environment with oraenv and use the ocrcheck command to
verify the results.
[grid@host01 ~]$ su -
Password:
[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# ocrconfig -add +FRA
[root@host01 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +DATA
Device/File Name : +FRA
[root@host01 ~]#
5. Examine the contents of the ocr.loc configuration file to see the changes made to the file
referencing the new OCR location.
[root@host01 ~]# cat /etc/oracle/ocr.loc
#Device/file getting replaced by
+FRA/cluster01/OCRFILE/registry.255.883185695
ocrconfig_loc=+DATA/cluster01/OCRFILE/registry.255.883015549
ocrmirrorconfig_loc=+FRA/cluster01/OCRFILE/registry.255.883185695
local_only=false
[root@host01 ~]#
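The registered OCR locations can also be pulled out of an ocr.loc-style file with awk. A sketch using an inline sample copied from the listing above, written to /tmp so nothing real is touched:

```shell
# Sketch: extract OCR device/file locations from an ocr.loc-style sample.
cat > /tmp/ocr.loc.sample <<'EOF'
ocrconfig_loc=+DATA/cluster01/OCRFILE/registry.255.883015549
ocrmirrorconfig_loc=+FRA/cluster01/OCRFILE/registry.255.883185695
local_only=false
EOF
# Lines starting with "ocr" and containing "_loc=" carry the locations.
awk -F= '/^ocr/ && /_loc=/ {print $2}' /tmp/ocr.loc.sample
```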
1. As the root user, use the ocrconfig utility to list the automatic backups of the Oracle
Cluster Registry (OCR) and the node or nodes on which they have been performed.
Note: You will see backups listed only if it has been more than four hours since Grid
Infrastructure was installed. Your list will most likely look slightly different from the example
below.
[vncuser@classroom_pc ~] ssh root@host01
Password: *******

[root@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[root@host01 ~]# ocrconfig -showbackup

host01 2015/05/19 08:07:28
/u01/app/12.1.0/grid/cdata/cluster01/backup00.ocr

host01 2015/05/19 00:07:25
/u01/app/12.1.0/grid/cdata/cluster01/day.ocr
[root@host01 ~]#
Note: Your output may vary slightly.
3. Use the crsctl utility to determine the OCR locations that are currently used by your
Oracle Clusterware installation.
[root@host01 ~]# ocrconfig -showbackup manual

host02 2015/05/19 08:49:32
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_084932.ocr

host01 2015/05/19 05:50:54
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_055054.ocr

host01 2015/05/19 04:47:54
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_044754.ocr

host01 2015/05/18 20:08:18
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150518_200818.ocr
[root@host01 ~]#
Note: Your output may vary slightly.
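Because the manual backup file names embed a yyyymmdd_hhmmss timestamp, a plain lexical sort is enough to find the newest one when all paths share the same directory. A sketch over sample paths from the listing above:

```shell
# Sketch: backup_YYYYMMDD_HHMMSS.ocr names sort lexically by timestamp,
# so "sort | tail -1" yields the most recent manual backup.
backups='/u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_084932.ocr
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_055054.ocr
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150518_200818.ocr'
printf '%s\n' "$backups" | sort | tail -1
```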
4. Determine the location of the Oracle Local Registry (OLR) using the ocrcheck utility.
When finished, log out as the root user.
[root@host01 ~]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Overview
In this practice, you add a new interface to create an HAIP configuration for the cluster.
1. As the grid user, ensure that Oracle Clusterware is running on all of the cluster nodes.
[vncuser@classroom_pc ~] ssh grid@host01
Password: *******
[grid@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$
3. The eth0 interface is currently used for the public interface and eth1 is used for the
private interconnect/ASM. We’ll add eth2 to the interconnect configuration. Ensure that the
replacement interface is configured and operational in the operating system on all of the
nodes.
[grid@host01 ~]$ /sbin/ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:21
inet addr: 192.168.2.101 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:121/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:240917 errors:0 dropped:0 overruns:0 frame:0
TX packets:30185 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
[grid@host01 ~]$
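When checking the interface on every node, usually only the IPv4 address line matters. This sed sketch extracts it from ifconfig-style output; the sample text is copied from the listing above rather than read from a live interface:

```shell
# Sketch: pull the IPv4 address out of ifconfig-style output (sample text).
ifout='eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:21
inet addr: 192.168.2.101 Bcast:192.168.2.255 Mask:255.255.255.0'
printf '%s\n' "$ifout" | sed -n 's/.*inet addr: *\([0-9.]*\).*/\1/p'
```

The same one-liner can be applied per node over ssh once the interface is up everywhere.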
4. Switch user to root, set the Grid environment, and add eth2 to the cluster configuration
as a global cluster interconnect and ASM network.
[grid@host01 ~]$ su -
Password:
[root@host01 ~]#
5. Verify that the new interface has been added to the cluster configuration.
[root@host01 ~]$ oifcfg getif
eth0 192.0.2.0 global public
eth1 192.168.1.0 global cluster_interconnect,asm
eth2 192.168.2.0 global cluster_interconnect,asm
[root@host01 ~]#
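To confirm which interfaces now carry the interconnect, the getif output can be filtered on its fourth (type) field. A sketch over a sample copy of the output above:

```shell
# Sketch: list interfaces whose type field includes cluster_interconnect.
getif='eth0 192.0.2.0 global public
eth1 192.168.1.0 global cluster_interconnect,asm
eth2 192.168.2.0 global cluster_interconnect,asm'
printf '%s\n' "$getif" | awk '$4 ~ /cluster_interconnect/ {print $1}'
```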
6. Stop Clusterware on all nodes of the cluster.
[root@host01 ~]# crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'host04'
CRS-2673: Attempting to stop 'ora.crsd' on 'host05'
CRS-2677: Stop of 'ora.crsd' on 'host04' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'host04'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host04'
CRS-2673: Attempting to stop 'ora.evmd' on 'host04'
CRS-2673: Attempting to stop 'ora.storage' on 'host04'
CRS-2677: Stop of 'ora.storage' on 'host04' succeeded
CRS-2677: Stop of 'ora.crsd' on 'host05' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'host05'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host05'
CRS-2673: Attempting to stop 'ora.evmd' on 'host05'
CRS-2673: Attempting to stop 'ora.storage' on 'host05'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'host03' succeeded
7. Restart Clusterware on all nodes.
[root@host01 ~]# crsctl start cluster -all
CRS-2672: Attempting to start 'ora.evmd' on 'host04'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host04'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host05'
CRS-2672: Attempting to start 'ora.evmd' on 'host05'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host02'
CRS-2672: Attempting to start 'ora.evmd' on 'host02'
CRS-2676: Start of 'ora.evmd' on 'host04' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'
CRS-2672: Attempting to start 'ora.evmd' on 'host01'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host03'
CRS-2672: Attempting to start 'ora.evmd' on 'host03'
CRS-2676: Start of 'ora.cssdmonitor' on 'host04' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host04'
CRS-2672: Attempting to start 'ora.diskmon' on 'host04'
CRS-2676: Start of 'ora.evmd' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host01'
CRS-2676: Start of 'ora.cssd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host03'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host03'
CRS-2676: Start of 'ora.ctssd' on 'host02' succeeded
CRS-2676: Start of 'ora.ctssd' on 'host01' succeeded
CRS-2676: Start of 'ora.ctssd' on 'host03' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host02'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host02'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host01'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host01'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host03'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host03'
CRS-2676: Start of 'ora.asm' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host03'
CRS-2676: Start of 'ora.asm' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host02'
CRS-2676: Start of 'ora.storage' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host02'
          TX packets:5305717 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5480515150 (5.1 GiB)  TX bytes:5480515150 (5.1 GiB)
[root@host01 ~]# ssh host02 ifconfig -a
eth0
inet6 addr: fe80::216:3eff:fe00:103/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:768703 errors:0 dropped:0 overruns:0 frame:0
TX packets:586788 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12896112928 (12.0 GiB) TX bytes:59606340 (56.8 MiB)
          TX packets:131363 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:100433099 (95.7 MiB)  TX bytes:100433099 (95.7 MiB)
9. As the root user, use the ifdown command to bring down the eth1 interface. Wait a few
   moments and run ifconfig -a. What do you observe?
[root@host01 ~]# ifdown eth1
10. Use ifconfig -a to check the HAIP interfaces on host02 and host03.
[root@host01 ~]# ssh host02 ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:16:3E:00:01:02
inet addr:192.0.2.102 Bcast:192.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:102/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:864009 errors:0 dropped:0 overruns:0 frame:0
TX packets:710606 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13743233812 (12.7 GiB) TX bytes:338572572 (322.8 MiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1018083 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1018083 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:714443596 (681.3 MiB)  TX bytes:714443596 (681.3 MiB)
          RX bytes:91915590
[root@host01 ~]#

Note that eth1 is up on host02 and host03 but the HAIP interconnect eth1:1 on both
11. Use the ifup command to bring up the eth1 interface on host01. Use ifconfig to
check the HAIP configuration on the three Hub nodes.
[root@host01 ~]# ifup eth1
eth2:1    Link encap:Ethernet  HWaddr 00:16:3E:00:01:21
          inet addr:169.254.145.237  Bcast:169.254.255.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:5305717 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5305717 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5480515150 (5.1 GiB)  TX bytes:5480515150 (5.1 GiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1038607 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1038607 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:742484033 (708.0 MiB)  TX bytes:742484033 (708.0 MiB)
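The HAIP checks in the preceding steps come down to scanning ifconfig output for addresses in the 169.254.0.0/16 link-local range that Clusterware assigns to aliases such as eth1:1 and eth2:1. A minimal sketch of that filter follows; the sample text is a stand-in, not output captured from the practice cluster:

```shell
# List HAIP addresses by filtering ifconfig-style output for the
# 169.254.0.0/16 link-local range used by Highly Available IPs.
# The sample variable stands in for `ifconfig -a` on a cluster node.
sample='eth1:1    Link encap:Ethernet
          inet addr:169.254.145.237  Mask:255.255.128.0
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0'

# Keep only inet addresses starting with 169.254, then strip the "addr:" tag.
haip_addrs=$(printf '%s\n' "$sample" | grep -o 'addr:169\.254\.[0-9.]*' | cut -d: -f2)
echo "$haip_addrs"
```

Running the same pipeline against real `ifconfig -a` output on each Hub Node is a quick way to confirm where the HAIP aliases have landed after an interface failure.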
1. As the grid user, use srvctl to check the status and configuration of the GNS on your
cluster.
[vncuser@classroom_pc ~] ssh grid@host01
Password: *******
[grid@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$ srvctl status gns
GNS is running on node host01.
GNS is enabled on node host01.

[grid@host01 ~]$ srvctl config gns -detail
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5353 to connect to mDNS
GNS status: OK
Domain served by GNS: cluster01.example.com
GNS version: 12.1.0.2.0
Globally unique identifier of the cluster where GNS is running:
b596525249286f61ffdcb4e07a8221ce
Name of the cluster where GNS is running: cluster01
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.0.2.155:50211.
2. Use srvctl to check the status and configuration of the SCAN VIPs on your cluster.
Note: Since the VIPs are cluster resources, they may not be running on the same hosts as
the example below.
[grid@host01 ~]$ srvctl status scan -verbose
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node host02
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node host03
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node host01
[grid@host01 ~]$
3. Use srvctl to check the status and configuration of the SCAN Listeners on your cluster.
[grid@host01 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node host02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node host03
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node host01
[grid@host01 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
[grid@host01 ~]$
;; QUESTION SECTION:
;cluster01-scan.cluster01.example.com. IN ANY
;; ANSWER SECTION:
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.249
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.251
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.246
;; AUTHORITY SECTION:
cluster01.example.com.  86400  IN  NS  cluster01-gns.example.com.

;; ADDITIONAL SECTION:
cluster01-gns.example.com.  86400  IN  A  192.0.2.155
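The answer section above is round-robin SCAN resolution at work: one name, three A records. If you wanted to script a check that the SCAN resolves to all three VIPs, the addresses can be pulled out of dig-style output with a small awk filter. The here-document below mimics the answer lines (addresses copied from the output above); this is an illustration, not an Oracle-supplied check:

```shell
# Extract the A-record addresses from dig-style answer lines.
# Fields: name TTL class type address -> print field 5 for A records.
answers=$(awk '$3 == "IN" && $4 == "A" {print $5}' <<'EOF'
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.249
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.251
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.246
EOF
)
count=$(printf '%s\n' "$answers" | wc -l)
echo "SCAN resolves to $count addresses"
printf '%s\n' "$answers"
```

In a real health check you would feed `dig +noall +answer cluster01-scan.cluster01.example.com` into the same filter and verify the count is 3.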
ora.gns
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------
[root@host01 ~]#
[grid@host01 ~]$
Cluster Resources
--------------------------------------------------------------------
ora.scan1.vip
      1        OFFLINE OFFLINE                               STABLE
ora.scan2.vip
      1        OFFLINE OFFLINE                               STABLE
ora.scan3.vip
      1        OFFLINE OFFLINE                               STABLE
--------------------------------------------------------------------
[grid@host01 ~]$
[grid@host01 ~]$
PRCS-1102 : Could not find any Single Client Access Name (SCAN)
Virtual Internet Protocol (VIP) resources using filter
TYPE=ora.scan_vip.type on network 1
[root@host01 ~]#
12. Use the host command to re-check the operating system resolution of the SCANs on your
    cluster. Use tnsping to check SCAN resolution on the database side. Note the TNS-12541
    returned by the tnsping command.
[root@host01 ~]# host -a cluster01-scan.cluster01.example.com
Trying "cluster01-scan.cluster01.example.com"
Received 54 bytes from 192.0.2.1#53 in 198 ms
Trying "cluster01-scan.cluster01.example.com.example.com"
Host cluster01-scan.cluster01.example.com not found: 3(NXDOMAIN)
Received 115 bytes from 192.0.2.1#53 in 11 ms

[grid@host01 ~]$ tnsping cluster01-scan.cluster01.example.com

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 27-MAY-2015 08:14:25

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:
/u01/app/12.1.0/grid/network/admin/sqlnet.ora

TNS-03505: Failed to resolve name
13. Let's add back the components that were removed. Start by adding the GNS VIP back to
the cluster and starting it. Check the GNS VIP status and configuration.
[root@host01 ~]# srvctl add gns -vip 192.0.2.155 -domain
cluster01.example.com
b596525249286f61ffdcb4e07a8221ce
Name of the cluster where GNS is running: cluster01
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.0.2.155:61617.
GNS is individually enabled on nodes:
GNS is individually disabled on nodes:
[root@host01 ~]#
14. Add the SCAN VIPs and SCAN Listeners back to your cluster. Start the SCAN VIPs and
    verify the status and configuration.
[root@host01 ~]# srvctl add scan -scanname cluster01-scan.cluster01.example.com
[root@host01 ~]# srvctl start scan

[root@host01 ~]# srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node host02
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node host03
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node host01
[root@host01 ~]#

********************* Go to the grid terminal *********************
ora.scan1.vip
1 ONLINE ONLINE host02 STABLE
ora.scan2.vip
1 ONLINE ONLINE host03 STABLE
ora.scan3.vip
1 ONLINE ONLINE host01 STABLE
--------------------------------------------------------------------
;; QUESTION SECTION:
;cluster01-scan.cluster01.example.com. IN ANY
;; ANSWER SECTION:
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.241
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.244
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.246
;; AUTHORITY SECTION:
cluster01.example.com. 86400 IN NS cluster01-
gns.example.com.
Copyright © 2015, Oracle and/or its affiliates. All rights reserved.
[grid@host01 ~]$
Local Resources
--------------------------------------------------------------------
ora.LABDG.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
--------------------------------------------------------------------
[grid@host01 ~]$
4. Open a terminal from your desktop to host01 as the root user. Stop Clusterware on all
   nodes, then stop high availability services on all nodes. Restart it in exclusive mode on
   host01.
[vncuser@classroom_pc ~] ssh root@host01
Password: *******
[root@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'host04'
CRS-2673: Attempting to stop 'ora.crsd' on 'host05'
CRS-2677: Stop of 'ora.crsd' on 'host04' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'host04'
...
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host01'
succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
...
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'host01'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host01'
[root@host01 ~]#
6. From the grid terminal list voting disks again, and then replace them on the LABDG disk
group. List the voting disks again to verify the replacement.
[grid@host01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   5a70acc510c04f1abf34a4921c3c251c (/dev/asmdisk1p1) [DATA]
 2. ONLINE   0c90eb6166fb4f35bf61d5c3bbf3452c (/dev/asmdisk1p10) [DATA]
 3. ONLINE   2fa97a8fd6314f6bbf96fd504680c00e (/dev/asmdisk1p11) [DATA]
Located 3 voting disk(s).

[grid@host01 ~]$ crsctl replace votedisk +LABDG
Successful addition of voting disk 598eaa67e0094fddbf63fac177b9719c.
Successful addition of voting disk 9974000d4b564fa5bff895313fad4433.
Successful addition of voting disk 536c583578474f82bf6cf4791df02994.
Successful deletion of voting disk 5a70acc510c04f1abf34a4921c3c251c.
[grid@host01 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 9974000d4b564fa5bff895313fad4433 (/dev/asmdisk2p4) [LABDG]
2. ONLINE 598eaa67e0094fddbf63fac177b9719c (/dev/asmdisk2p3) [LABDG]
3. ONLINE 536c583578474f82bf6cf4791df02994 (/dev/asmdisk2p5) [LABDG]
Located 3 voting disk(s).
[grid@host01 ~]$
7. Moving the voting disk to a new disk group constitutes a major change in the cluster
configuration. Perform a manual OCR backup as the root user.
[root@host01 ~]# ocrconfig -manualbackup
[root@host01 ~]#
/dev/asmdisk2p4
/dev/asmdisk2p5
/dev/asmdisk2p6
[root@host01 dev]# dd if=/dev/zero of=/dev/asmdisk2p5 bs=1M count=10
10+0 records in
10+0 records out
1048576 bytes (1.0 MB) copied, 0.01389 s, 75.5 MB/s
[root@host01 dev]# dd if=/dev/zero of=/dev/asmdisk2p6 bs=1M count=10
10+0 records in
10+0 records out
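The dd commands above destroy the ASM disk headers by overwriting the first 10 MB of each partition with zeros. The same pattern can be rehearsed safely against a scratch file instead of a device; everything below (file, sizes, verification method) is illustrative, and it assumes GNU dd's bs=1M notation as used in the practice:

```shell
# Simulate a header wipe: build a 20 MB scratch "disk" from random data,
# zero its first 10 MB with dd conv=notrunc, then verify the head is
# all zeros while the tail still holds data.
scratch=$(mktemp)
dd if=/dev/urandom of="$scratch" bs=1M count=20 2>/dev/null
dd if=/dev/zero of="$scratch" bs=1M count=10 conv=notrunc 2>/dev/null

# Count non-zero bytes in each half: the wiped head should report 0.
head_nonzero=$(head -c 10485760 "$scratch" | tr -d '\0' | wc -c)
tail_nonzero=$(tail -c 10485760 "$scratch" | tr -d '\0' | wc -c)
echo "non-zero bytes in first 10 MB: $head_nonzero (tail still has $tail_nonzero)"
rm -f "$scratch"
```

`conv=notrunc` is the detail worth noting: without it, dd would truncate the file at the 10 MB mark instead of overwriting in place, which is not what happens when writing to a block device.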
[root@host01 ~]# crsctl stat res -t|more
CRS-4000: Command Status failed, or completed with errors.
[root@host01 ~]#
Clusterware is now non-responsive on host01.
9. Stop Clusterware on host01. Check the CRS logfile using adrci. Go to the bottom of the
   file (Shift-G). What do you observe? Quit the editor and exit adrci when finished.
[root@host01 catchup]# crsctl stop crs -f
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights
reserved.

ADR base = "/u01/app/grid"
adrci> show alert
Choose the home from which to view the alert log:
1: diag/asm/+asm/+ASM1
2: diag/crs/host01/crs
3: diag/rdbms/_mgmtdb/-MGMTDB
4: diag/clients/user_grid/host_2394698683_82
5: diag/clients/user_root/host_2394698683_82
6: diag/clients/user_oracle/host_2394698683_82
7: diag/tnslsnr/host01/listener
8: diag/tnslsnr/host01/asmnet1lsnr_asm
9: diag/tnslsnr/host01/listener_scan2
10: diag/tnslsnr/host01/listener_scan3
11: diag/tnslsnr/host01/listener_scan1
12: diag/tnslsnr/host01/mgmtlsnr
Q: to quit
adrci> exit
[root@host01 ~]#
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host01'
CRS-2676: Start of 'ora.storage' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host01'
CRS-2676: Start of 'ora.crsd' on 'host01' succeeded
[root@host01 ~]# crsctl replace votedisk +DATA
Successful addition of voting disk 44a9f26cdcef4fe4bf28419057f6009b.
Successful addition of voting disk 5ac02418f4a74fb8bf3d757397585e64.
Successful addition of voting disk 2423fd054b684f98bf56abae82e1a44c.
Successfully replaced voting disk group with +DATA.
CRS-4266: Voting file(s) successfully replaced
[root@host01 ~]#
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'host01'
CRS-2677: Stop of 'ora.gipcd' on 'host01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed
resources on 'host01' has completed
[root@host01 ~]#
Local Resources
--------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
            ONLINE  ONLINE       host03                   STABLE
ora.FRA.dg
            ONLINE  ONLINE       host01                   STABLE
            ONLINE  ONLINE       host02                   STABLE
            ONLINE  ONLINE       host03                   STABLE
ora.LABDG.dg
            OFFLINE OFFLINE      host01                   STABLE
            OFFLINE OFFLINE      host02                   STABLE
            OFFLINE OFFLINE      host03                   STABLE
ora.LISTENER.lsnr
            ONLINE  ONLINE       host01                   STABLE
            ONLINE  ONLINE       host02                   STABLE
            ONLINE  ONLINE       host03                   STABLE
ora.LISTENER_LEAF.lsnr
            OFFLINE OFFLINE      host04                   STABLE
            OFFLINE OFFLINE      host05                   STABLE
ora.net1.network
            ONLINE  ONLINE       host01                   STABLE
1,STABLE
ora.asm
1 ONLINE ONLINE host01
Unauthorized reproduction or distribution prohibitedฺ Copyright© 2016, Oracle and/or its affiliatesฺ
Started,STABLE
2 ONLINE ONLINE host02
Started,STABLE
3 ONLINE ONLINE host03
Started,STABLE
ora.cvu
1 ONLINE ONLINE host01 STABLE
ora.gns
1 ONLINE ONLINE host01 STABLE
ora.gns.vip
      1        ONLINE  ONLINE       host01                   STABLE
ora.host01.vip
      1        ONLINE  ONLINE       host01                   STABLE
ora.host02.vip
      1        ONLINE  ONLINE       host02                   STABLE
ora.host03.vip
      1        ONLINE  ONLINE       host03                   STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       host01                   Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       host01                   STABLE
ora.orcl.db
      1        ONLINE  ONLINE       host02                   Open,STABLE
      2        ONLINE  ONLINE       host03                   Open,STABLE
      3        ONLINE  ONLINE       host01                   Open,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       host02                   STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       host03                   STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       host01                   STABLE
[root@host01 ~]#
Note that the LABDG disk group is OFFLINE.
13. The LABDG resource is OFFLINE because the disk headers have been corrupted. As the
grid user, use asmcmd to list currently configured disk groups. The LABDG disk group is
no longer listed. Use asmca to recreate the disk group.
[root@host01 ~]# crsctl stat res -t -w "NAME = ora.LABDG.dg"
--------------------------------------------------------------------
Name Target State Server State
details
--------------------------------------------------------------------
Local Resources
[root@host01 ~]#
14. As the grid user, determine where the management repository is running. If the
management repository is not running on host01, then relocate it there.
[root@host01 ~]# crsctl stat res ora.mgmtdb -t
-------------------------------------------------------------------
Name           Target  State        Server                   State details
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.mgmtdb
      1        ONLINE  ONLINE       host02                   Open,STABLE
-------------------------------------------------------------------

[grid@host01 ~]$ srvctl relocate mgmtdb -n host01

[grid@host01 ~]$ crsctl stat res ora.mgmtdb -t
-------------------------------------------------------------------
Name           Target  State        Server                   State details
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.mgmtdb
      1        ONLINE  ONLINE       host01                   Open,STABLE
-------------------------------------------------------------------
[grid@host01 ~]$
Practices for Lesson 8:
Policy-Based Cluster Management
Chapter 8
1. Connect to the first node of your cluster as the grid user. You can use the oraenv script
to set your environment correctly. Do not use the root account at any time during this
practice!
[vncuser@classroom_pc]# ssh grid@host01
grid@host01's password:
Last login: Mon May 20 20:01:43 2015 from dns.example.com
[grid@host01 ~]$
[grid@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
2. Examine the extended server attributes associated with host01, host02 and host03.
   Notice the amount of physical memory associated with each server (3806MB+ for host01,
   3128MB+ for host02 and host03). Notice also that the attributes identify these nodes
   as Hub Nodes in the cluster.
[grid@host01 ~]$ crsctl status server host01 -f
NAME=host01
MEMORY_SIZE=3915
CPU_COUNT=1
CPU_CLOCK_RATE=2992
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=ora.orcldb
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub
MEMORY_SIZE=3326
CPU_COUNT=1
CPU_CLOCK_RATE=2992
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=ora.orcldb
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub
[grid@host01 ~]$
3. Examine the extended server attributes associated with host04 and host05. Notice that
   these nodes contain less physical memory on each node (1557MB) and that they are
   identified as Leaf Nodes in this Flex Cluster.
[grid@host01 ~]$ crsctl status server host04 -f
NAME=host04
MEMORY_SIZE=1557
CPU_COUNT=1
CPU_CLOCK_RATE=2992
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=leaf
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=Free
STATE_DETAILS=
ACTIVE_CSS_ROLE=leaf
[grid@host01 ~]$
4. Examine the built-in category definitions. These are implicitly used to categorize the cluster
nodes as Hub Nodes or Leaf Nodes based on the ACTIVE_CSS_ROLE setting. Note that
these categories also exist in a standard cluster; however, in a standard cluster all the
nodes are designated as Hub Nodes.
[grid@host01 ~]$ crsctl status category
NAME=ora.hub.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=

NAME=ora.leaf.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=leaf
EXPRESSION=
[grid@host01 ~]$
5. Examine the servers that are associated with the inbuilt categories. Confirm that the
   server-to-category mappings are as expected.
[grid@host01 ~]$ crsctl status server -category ora.hub.category
NAME=host01 l e
STATE=ONLINE

NAME=host02
STATE=ONLINE
NAME=host03
STATE=ONLINE
[grid@host01 ~]$ crsctl status server -category ora.leaf.category
NAME=host04
STATE=ONLINE
NAME=host05
STATE=ONLINE
[grid@host01 ~]$
6. Create a user defined category which includes the servers that contain more than 3000MB
of system memory.
[grid@host01 ~]$ crsctl add category big -attr
"EXPRESSION='(MEMORY_SIZE > 3000)'"
[grid@host01 ~]$
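The EXPRESSION attribute is simply a predicate evaluated against each server's attributes. The same selection logic can be sketched outside crsctl with awk over NAME=/MEMORY_SIZE= pairs; the values in the here-document are the ones reported earlier in this practice, and the script is only an illustration, not an Oracle tool:

```shell
# Mimic the "big" category: select servers whose MEMORY_SIZE attribute
# exceeds 3000, reading crsctl-style key=value output.
big=$(awk -F= '
  $1 == "NAME"        { name = $2 }
  $1 == "MEMORY_SIZE" { if ($2 + 0 > 3000) print name }
' <<'EOF'
NAME=host01
MEMORY_SIZE=3915
NAME=host02
MEMORY_SIZE=3326
NAME=host04
MEMORY_SIZE=1557
EOF
)
echo "$big"
```

This mirrors what the category expression does server-side: each server's attribute set is tested against the predicate, and only matching servers join the category.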
ACTIVE_CSS_ROLE=hub
EXPRESSION=(MEMORY_SIZE > 3000)
8. Examine the servers associated with the big category. Confirm that host01, host02 and
host03 are associated with the big category.
[grid@host01 ~]$ crsctl status server -category big
NAME=host01
STATE=ONLINE
NAME=host02
STATE=ONLINE

NAME=host03
STATE=ONLINE
[grid@host01 ~]$
9. Examine the categories associated with the host01 server. Notice that a server can be
   associated with multiple categories.
[grid@host01 ~]$ crsctl status category -server host01
NAME=big
ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=(MEMORY_SIZE > 3000)

NAME=ora.hub.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=
[grid@host01 ~]$
At this point you have configured a category and examined category definitions along with
server-to-category associations. Next, you will make use of the category definitions as you
configure and exercise policy-based cluster management.
10. Examine the policy set. At this point, no policy set configuration has been performed so the
only policy listed is the Current policy. The Current policy is a special built-in policy
which contains all of the currently active server pool definitions.
[grid@host01 ~]$ crsctl status policyset
ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r--
LAST_ACTIVATED_POLICY=
SERVER_POOL_NAMES=Free
POLICY
NAME=Current
DESCRIPTION=This policy is built-in and managed automatically to
reflect current configuration
SERVER_NAMES=
SERVERPOOL
NAME=Generic
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
n s e
NAME=ora.orcldb
i ce
IMPORTANCE=0
b l el
MAX_SIZE=3
MIN_SIZE=0 fer a
SERVER_CATEGORY=ora.hub.category a n s
SERVER_NAMES= n - tr
[grid@host01 ~]$
11. Examine the policy set definition provided in the file located at
    /stage/GRID/labs/less_08/policyset.txt. The policy set definition contains two
    policies. The day policy enables the orcldb server pool to use two Hub Nodes while not
    providing any allocation for the bigpool server pool. The night policy still prioritizes the
    orcldb server pool, however it also provides an allocation for the bigpool server pool.
    Notice that the bigpool server pool references the big server category that you defined
    earlier in this practice. Notice also that the policy set does not list the my_app_pool server
    pool in the SERVER_POOL_NAMES attribute. This demonstrates how it is possible for you to
    define server pools that are outside policy set management.
[grid@host01 ~]$ cat /stage/GRID/labs/less_08/policyset.txt
SERVER_POOL_NAMES=Free bigpool ora.orcldb
POLICY
NAME=day
DESCRIPTION=The day policy
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVERPOOL
NAME=bigpool
IMPORTANCE=0
MAX_SIZE=0
MIN_SIZE=0
SERVER_CATEGORY=big
POLICY
NAME=night
DESCRIPTION=The night policy
SERVERPOOL
NAME=bigpool
IMPORTANCE=5
MAX_SIZE=1
MIN_SIZE=1
SERVER_CATEGORY=big
[grid@host01 ~]$
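Because a policy set file is plain line-oriented key=value text, its structure is easy to audit with standard tools before loading it. The sketch below summarizes each policy's server pools with their MIN_SIZE values using awk; the here-document is a trimmed-down copy of the file above (intermediate attributes such as IMPORTANCE and MAX_SIZE omitted for brevity), not the full file:

```shell
# Summarize a policy-set file as "policy pool MIN_SIZE" rows.
# POLICY/SERVERPOOL section markers tell us what the next NAME= means.
summary=$(awk -F= '
  $1 == "POLICY" || $1 == "SERVERPOOL" { section = $1 }
  $1 == "NAME" && section == "POLICY"     { policy = $2 }
  $1 == "NAME" && section == "SERVERPOOL" { pool = $2 }
  $1 == "MIN_SIZE" { print policy, pool, $2 }
' <<'EOF'
POLICY
NAME=day
SERVERPOOL
NAME=ora.orcldb
MIN_SIZE=1
SERVERPOOL
NAME=bigpool
MIN_SIZE=0
POLICY
NAME=night
SERVERPOOL
NAME=bigpool
MIN_SIZE=1
EOF
)
echo "$summary"
```

A summary like this makes it easy to spot, at a glance, that bigpool is guaranteed a server only under the night policy.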
12. Modify the policy set to load the configuration file.
[grid@host01 ~]$ crsctl modify policyset -file
/stage/GRID/labs/less_08/policyset.txt
[grid@host01 ~]$

13. Reexamine the policy set to confirm that the configuration file was loaded in the previous
    step. Notice that the LAST_ACTIVATED_POLICY attribute is not set and the Current
    policy is unchanged from what you observed earlier. This is because no policies have been
    activated yet.
[grid@host01 ~]$ crsctl status policyset
ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r--
LAST_ACTIVATED_POLICY=
SERVER_POOL_NAMES=Free bigpool ora.orcldb
POLICY
NAME=Current
DESCRIPTION=This policy is built-in and managed automatically to
reflect current configuration
SERVERPOOL
NAME=Free
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=Generic
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=0
MAX_SIZE=3
MIN_SIZE=0
SERVER_CATEGORY=ora.hub.category
SERVER_NAMES=
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=bigpool
IMPORTANCE=0
MAX_SIZE=0
MIN_SIZE=0
n s e
SERVER_CATEGORY=big
i ce
SERVER_NAMES=
b l el
SERVERPOOL
NAME=ora.orcldb fer a
IMPORTANCE=10 a n s
MAX_SIZE=3 n - tr
MIN_SIZE=1
a no
SERVER_CATEGORY=ora.hub.category
h a s eฺ
SERVER_NAMES=
m ) u id
ฺco ent G
POLICY
NAME=night i s
sas Stud
DESCRIPTION=The night policy
s
s@ this
SERVERPOOL
NAME=Free l e
r
IMPORTANCE=0
b ob use
( to
es
MAX_SIZE=-1
o bl
MIN_SIZE=0
io R SERVER_CATEGORY=
r a ul SERVER_NAMES=
B SERVERPOOL
NAME=bigpool
IMPORTANCE=5
MAX_SIZE=1
MIN_SIZE=1
SERVER_CATEGORY=big
SERVER_NAMES=
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVER_NAMES=
[grid@host01 ~]$
MIN_SIZE=0
SERVER_CATEGORY=big
SERVER_NAMES=
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVER_NAMES=
POLICY
NAME=day
DESCRIPTION=The day policy
SERVERPOOL
NAME=Free
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_CATEGORY=big
SERVER_NAMES=
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVER_NAMES=
n s e
POLICY
i ce
NAME=night
b l el
DESCRIPTION=The night policy
SERVERPOOL fer a
NAME=Free a n s
IMPORTANCE=0 n - tr
MAX_SIZE=-1
a no
MIN_SIZE=0
h a s eฺ
SERVER_CATEGORY=
m ) u id
ฺco ent G
SERVER_NAMES=
SERVERPOOL i s
NAME=bigpool
s sas Stud
s@ this
IMPORTANCE=5
MAX_SIZE=1 l e
MIN_SIZE=1
b r ob use
( to
es
SERVER_CATEGORY=big
o bl
SERVER_NAMES=
io R SERVERPOOL
r a ul NAME=ora.orcldb
B IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVER_NAMES=
[grid@host01 ~]$
16. Examine the server pool allocations. Confirm the Hub Nodes are still allocated to the
orcldb server pool in line with the day policy.
[grid@host01 ~]$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host04 host05
NAME=Generic
ACTIVE_SERVERS=
NAME=bigpool
ACTIVE_SERVERS=
[grid@host01 ~]$
18. Examine the server pool allocations. Confirm that the servers allocated to each pool are
consistent with the night policy.
[grid@host01 ~]$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host04 host05

NAME=Generic
ACTIVE_SERVERS=

NAME=bigpool
ACTIVE_SERVERS=host01

NAME=ora.orcldb
ACTIVE_SERVERS=host02 host03
[grid@host01 ~]$
19. One of the side-effects of activating the night policy is that the RAC database (orcl) that
    previously ran on host01, host02, and host03 now only runs on two nodes.
    Confirm that only two instances of the orcl database are running. This demonstrates how
    Oracle Clusterware automatically starts and stops required resources when a policy change
    is made.
[grid@host01 ~]$ srvctl status database -d orcl
Instance orcl_1 is running on node host03
Instance orcl_3 is running on node host02
[grid@host01 ~]$
20. To illustrate the dynamic nature of policy-based cluster management, modify the big
category so that no servers can be associated with it. Notice that the category change
immediately causes a server re-allocation, which in turn causes an instance of the orcl
database to start up.
[grid@host01 ~]$ crsctl modify category big -attr
"EXPRESSION='(MEMORY_SIZE > 8000)'"
CRS-2672: Attempting to start 'ora.orcl.db' on 'host01'
CRS-2676: Start of 'ora.orcl.db' on 'host01' succeeded
[grid@host01 ~]$
ACTIVE_SERVERS=host04 host05
NAME=Generic
ACTIVE_SERVERS=
NAME=bigpool
ACTIVE_SERVERS=
NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03

[grid@host01 ~]$ srvctl status database -d orcl
Instance orcl_1 is running on node host03
Instance orcl_2 is running on node host01
Instance orcl_3 is running on node host02
[grid@host01 ~]$
22. Close all terminal windows opened for this practice.
Practices for Lesson 9:
Upgrading and Patching Grid Infrastructure
Chapter 9
Practices for Lesson 10:
Troubleshooting Oracle Clusterware
Chapter 10
log.
[vncuser@classroom_pc ~]$ ssh grid@host01
grid@host01's password:
Last login: Wed May 22 15:37:18 2015 from dns.example.com
1: diag/asm/+asm/+ASM1
2: diag/crs/host01/crs
3: diag/rdbms/_mgmtdb/-MGMTDB
4: diag/clients/user_grid/host_2394698683_82
5: diag/clients/user_root/host_2394698683_82
6: diag/clients/user_oracle/host_2394698683_82
7: diag/tnslsnr/host01/listener
8: diag/tnslsnr/host01/asmnet1lsnr_asm
9: diag/tnslsnr/host01/listener_scan2
10: diag/tnslsnr/host01/listener_scan3
11: diag/tnslsnr/host01/listener_scan1
12: diag/tnslsnr/host01/mgmtlsnr
Q: to quit
********************************************************************
[grid@host01 ~]$
3. Switch to the root user and set up the environment variables for the Grid Infrastructure.
Change to the ORACLE_HOME/bin directory and run the diagcollection.pl script to
gather all log files that can be sent to My Oracle Support for problem analysis.
[grid@host01 ~]$ su -
Password:
[grid@host01 ~]$
Unauthorized reproduction or distribution prohibitedฺ Copyright© 2016, Oracle and/or its affiliatesฺ
1. While connected to the grid account, dump the contents of the OCR to the standard output
and count the number of lines of output.
[grid@host01 ~]$ ocrdump -stdout | wc -l
1416
[grid@host01 ~]$
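The `ocrdump -stdout | wc -l` pattern generalizes: piping any command's output through `wc -l` yields a line count. A minimal local sketch of the idiom, using `printf` as a stand-in for `ocrdump` (which needs a running cluster):

```shell
# Stand-in for 'ocrdump -stdout': printf emits three lines of fake key names.
# Piping through 'wc -l' counts the output lines, exactly as the practice does.
printf 'SYSTEM\nSYSTEM.version\nSYSTEM.language\n' | wc -l
```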
2. Switch to the root user, dump the contents of the OCR to the standard output and count
the number of lines of output. Compare your results with the previous step. How do the
results differ?
[grid@host01 ~]$ su -
Password:
[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
[root@host01 ~]# ocrdump -stdout | wc -l
4153
[root@host01 ~]#
3. Dump the first 25 lines of the OCR to standard output using XML format.
[root@host01 ~]# ocrdump -stdout -xml | head -25
<OCRDUMP>
<TIMESTAMP>06/12/2015 06:22:03</TIMESTAMP>
<COMMAND>/u01/app/12.1.0/grid/bin/ocrdump.bin -stdout -xml </COMMAND>
<KEY>
<NAME>SYSTEM</NAME>
<VALUE_TYPE>UNDEF</VALUE_TYPE>
<VALUE><![CDATA[]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>
<KEY>
<NAME>SYSTEM.version</NAME>
<VALUE_TYPE>UB4 (10)</VALUE_TYPE>
<VALUE><![CDATA[5]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
[root@host01 ~]#
4. Create an XML file dump of the OCR in the /tmp directory. Name the dump file
ocr_current_dump.xml.
/tmp/day.ocr
[root@host01 ~]#
7. Dump the contents of the day.ocr backup OCR file in the XML format saving the file in the
/tmp directory. Name the file as day_ocr.xml.
[root@host01 ~]# ocrdump -xml -backupfile /tmp/day.ocr
/tmp/day_ocr.xml
[root@host01 ~]#
-rw-r--r-- 1 grid oinstall 52630342 May 24 09:28 ocssd.l01
-rw-r--r-- 1 grid oinstall 52631859 May 23 19:06 ocssd.l02
-rw-r--r-- 1 grid oinstall 52631497 May 23 04:28 ocssd.l03
-rw-r--r-- 1 grid oinstall 52634557 May 22 13:38 ocssd.l04
8. Compare the differences between the day_ocr.xml file and the
ocr_current_dump.xml file to determine all changes made to the OCR in the last 24
hours. Log out as root when finished.
[root@host01 ~]# diff /tmp/day_ocr.xml
/tmp/ocr_current_dump.xml|more
3,5c3,4
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[0]]></VALUE>
6635,6636c6634,6635
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[host01]]></VALUE>
6647,6648c6646,6647
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[2015/06/11 23:31:56]]></VALUE>
6659,6660c6658,6659
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
...
[root@host01 ~]# exit
logout
[grid@host01 cssd]$ cd
[grid@host01 ~]$
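The diff-of-dumps technique above works for any pair of text snapshots, not just OCR dumps. A self-contained sketch (the file names and contents below are stand-ins, not real OCR data): lines prefixed `<` come from the older file, `>` from the newer one.

```shell
# Build two tiny stand-in "dumps" that differ in one value.
mkdir -p /tmp/ocr_demo
printf '<KEY>A</KEY>\n<VALUE><![CDATA[]]></VALUE>\n' > /tmp/ocr_demo/old.xml
printf '<KEY>A</KEY>\n<VALUE><![CDATA[host01]]></VALUE>\n' > /tmp/ocr_demo/new.xml
# diff exits nonzero when the files differ, so '|| true' keeps the script going.
diff /tmp/ocr_demo/old.xml /tmp/ocr_demo/new.xml || true
```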
#CV_DEFAULT_BROWSER_LOCATION=/usr/bin/mozilla
[grid@host01 admin]$
2. Display the stage options and stage names that can be used with the cluvfy utility.
[grid@host01 admin]$ cluvfy stage -list

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

Valid Stages are:
      -pre cfs      : pre-check for CFS setup
      -pre crsinst  : pre-check for CRS installation
      -pre acfscfg  : pre-check for ACFS Configuration.
      -pre dbinst   : pre-check for database installation
      -pre dbcfg    : pre-check for database configuration
      -pre hacfg    : pre-check for HA configuration
      -pre nodeadd  : pre-check for node addition.
      -post hwos    : post-check for hardware and operating system
      -post cfs     : post-check for CFS setup
      -post crsinst : post-check for CRS installation
      -post acfscfg : post-check for ACFS Configuration.
      -post hacfg   : post-check for HA configuration
      -post nodeadd : post-check for node addition.
host03,host02,host01
TCP connectivity check passed for subnet "192.168.2.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.

Node connectivity check passed

Network connectivity check across cluster nodes on the ASM network passed

Starting check to see if ASM is running on all cluster nodes...
ASM Running check passed. ASM is running on all specified nodes
Disk Group Check passed. At least one Disk Group configured

Task ASM Integrity check passed...

Task ACFS Integrity check started...
Task ACFS Integrity check passed

[grid@host01 admin]$
4. Display a list of the component names that can be checked with the cluvfy utility.
[grid@host01 admin]$ cluvfy comp -list
USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

USAGE:
cluvfy comp space [-n <node_list>] -l <storage_location> -z
<disk_space>{B|K|M|G} [-verbose]
[grid@host01 admin]$
6. Verify that on each node of the cluster the /tmp directory has at least 200 MB of free space
in it using the cluvfy utility. Use verbose output.
[grid@host01 admin]$ cluvfy comp space -n host01,host02,host03 -l
/tmp -z 200M -verbose

Result: Space availability check passed for "/tmp"

Verification of space availability was successful.

[grid@host01 admin]$
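What `cluvfy comp space` verifies on each node can be approximated locally with `df`. A hedged sketch, not the cluvfy implementation; the 200 MB threshold and `/tmp` location come from the practice, the variable names are mine:

```shell
# Local approximation of a free-space check on a storage location.
REQUIRED_MB=200        # threshold used in the practice
LOCATION=/tmp          # storage location to check
# 'df -Pm' prints sizes in MB in POSIX format; column 4 is available space.
FREE_MB=$(df -Pm "$LOCATION" | awk 'NR==2 {print $4}')
if [ "$FREE_MB" -ge "$REQUIRED_MB" ]; then
  echo "Space availability check passed for \"$LOCATION\""
else
  echo "Space availability check failed for \"$LOCATION\""
fi
```

Cluvfy performs this check remotely on every node in the `-n` list; this sketch only covers the local node.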
1. Open a terminal session to host01 as the grid user. Use the oraenv script to set the
Grid environment. Display Clusterware resource information for the management repository
database and the associated listener.
$ ssh grid@host01
grid@host01's password:
Last login: Thu May 30 20:13:28 2015 from dns.example.com
[grid@host01 ~]$ crsctl stat res ora.mgmtdb -t
--------------------------------------------------------------------
Name           Target  State        Server        State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
ora.mgmtdb
      1        ONLINE  ONLINE       host01        Open,STABLE
--------------------------------------------------------------------
[grid@host01 ~]$ crsctl stat res ora.MGMTLSNR -t
--------------------------------------------------------------------
Name           Target  State        Server        State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
ora.MGMTLSNR
      1        ONLINE  ONLINE       host01        169.254.147.164 192.168.1.101,STABLE
--------------------------------------------------------------------
[grid@host01 ~]$
2. Determine which node in the cluster is hosting the cluster logger service. Retrieve detailed
logger information.
[grid@host01 ~]$ oclumon manage -get master
Master = host01
Logger = host01
Nodes = host01,host03,host02,host04,host05
[grid@host01 ~]$
[grid@host01 ~]$
4. Run the diagcollection.pl script from the /tmp directory as the root user to collect
all the data in the Grid management repository.
[grid@host01 ~]$ su -
Password:
The following diagnostic archives will be created in the local
directory.
acfsData_host01_20150618_1317.tar.gz -> logs from acfs log.
baseData_host01_20150618_1317.tar.gz -> logs, traces and cores from
Oracle Base. Note: core files will be packaged only with the --core
option.
ocrData_host01_20150618_1317.tar.gz -> ocrdump, ocrcheck etc
coreData_host01_20150618_1317.tar.gz -> contents of CRS core files
in text format
osData_host01_20150618_1317.tar.gz -> logs from Operating System
lsInventory_host01_20150618_1317 -> Opatch lsinventory details
Collecting crs data
Collecting Oracle base data
Collecting OCR data
Collecting information from core files
No corefiles found
Collecting lsinventory details
Collecting acfs data
Collecting OS logs
Collecting sysconfig data
[grid@host01 ~]$
----------------------------------------
SYSTEM:
#pcpus: 1 #vcpus: 1 cpuht: N chipname: Intel(R) cpu: 7.87 cpuq: 1
physmemfree: 272752 physmemtotal: 3606464 mcache: 1359052 swapfree: 8743908 swaptotal: 8843776 hugepagetotal: 0
hugepagefree: 0 hugepagesize: 2048 ior: 41 iow: 103 ios: 22 swpin: 0 swpout: 0 pgin: 40 pgout: 103 netr: 21.874 netw: 21.862
procs: 207 procsoncpu: 1 rtprocs: 10 rtprocsoncpu: N/A #fds: 16480 #sysfdlimit: 6815744 #disks: 5 #nics: 4
nicErrors: 0

TOP CONSUMERS:
topcpu: 'gipcd.bin(22079) 1.79' topprivmem: 'java(17133) 212708' topshm: 'oracle_22666_-m(22666) 278172' topfd: 'crsd.bin(23380) 258' topthread: 'crsd.bin(23380) 47'

----------------------------------------
Node: host01 Clock: '15-06-19 04.49.40 UTC' SerialNo:4917
----------------------------------------
SYSTEM:
#pcpus: 1 #vcpus: 1 physmemfree: 272760 physmemtotal: 3606464 mcache:

TOP CONSUMERS:
topcpu: 'mdb_vktm_-mgmtd(9173) 2.00' topprivmem: 'java(17133) 212708' topshm: 'oracle_22666_-m(22666) 278172' topfd: 'crsd.bin(23380) 258' topthread: 'crsd.bin(23380) 47'

----------------------------------------
Node: host01 Clock: '15-06-19 04.49.45 UTC' SerialNo:4918
----------------------------------------
SYSTEM:
#pcpus: 1 #vcpus: 1 cpuht: N chipname: Intel(R) cpu: 33.70 cpuq: 12 physmemfree: 272588 physmemtotal: 3606464 mcache

TOP CONSUMERS:
topcpu: 'gipcd.bin(22079) 1.80' topprivmem: 'java(17133) 212708' topshm: 'oracle_22666_-m(22666) 278172' topfd: 'crsd.bin(23380) 258' topthread: 'crsd.bin(23380) 47'

----------------------------------------
Node: host01 Clock: '15-06-19 04.49.50 UTC' SerialNo:4919
----------------------------------------
SYSTEM:
#pcpus: 1 #vcpus: 1 cpuht: N chipname: Intel(R) cpu: 8.23 cpuq: 2
physmemfree: 271984 physmemtotal: 3606464 mcache: 1359104 swapfree: 8743908 swaptotal: 8843776 hugepagetotal: 0
hugepagefree: 0 hugepagesize: 2048 ior: 41 iow: 98 ios: 20 swpin: 0 swpout: 0 pgin: 41 pgout: 98 netr: 22.619 netw: 22.695
procs: 207 procsoncpu: 1 rtprocs: 10 rtprocsoncpu: N/A #fds: 16480 #sysfdlimit: 6815744 #disks: 5 #nics: 4
nicErrors: 0

TOP CONSUMERS:
topcpu: 'gipcd.bin(22079) 2.00' topprivmem: 'java(17133) 212708' topshm: 'oracle_22666_-m(22666) 278172' topfd: 'crsd.bin(23380) 258' topthread: 'crsd.bin(23380) 47'

----------------------------------------
[grid@host01 ~]$
6. Use oclumon to retrieve the current retention time for the management database. Increase
it by 2 hours (7200 seconds). The new retention time should be 98100 (90900 + 7200).
What happens?
[grid@host01 ~]$ oclumon manage -repos changeretentiontime 98100
The Cluster Health Monitor repository is too small for the desired
retention. Please first resize the repository to 2212 MB
[grid@host01 ~]$
The repository is too small to support your new retention time.
Practices for Lesson 11:
Making Applications Highly Available
Chapter 11
1. Establish a terminal session connected to host01 using the grid user. Configure the
environment with the oraenv script.
[vncuser@classroom_pc ~]$ ssh grid@host01
grid@host01's password:
[grid@host01 ~]$
2. Examine the cluster to identify the role for each node in the cluster.
[grid@host01 ~]$ crsctl get node role status -all
Node 'host01' active role is 'hub'
Node 'host02' active role is 'hub'
Node 'host03' active role is 'hub'
Node 'host04' active role is 'leaf'
Node 'host05' active role is 'leaf'
[grid@host01 ~]$
3. Examine the cluster to identify the currently defined server pools and server allocations.
Notice that currently the Hub Nodes are allocated to the orcldb server pool, which contains
the RAC database. Also, both Leaf Nodes are currently in the Free pool.
[grid@host01 ~]$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host04 host05
NAME=Generic
ACTIVE_SERVERS=
NAME=bigpool
ACTIVE_SERVERS=
NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03
[grid@host01 ~]$
ACTIVE_CSS_ROLE=hub
EXPRESSION=(MEMORY_SIZE > 8000)
NAME=ora.hub.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=
NAME=ora.leaf.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=leaf
EXPRESSION=
[grid@host01 ~]$
5. Create a new server pool to support a series of highly available application resources on
one of the Flex Cluster Leaf Nodes.
NAME=bigpool
ACTIVE_SERVERS=
NAME=ora.my_app_pool
ACTIVE_SERVERS=host04
NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03
[grid@host01 ~]$
7. Navigate to the directory that contains the scripts for this practice.
[grid@host01 ~]$ cd /stage/GRID/labs/less_11
[grid@host01 less_11]$
/tmp/<resource_name> is deleted.
• When the application resource is checked a test is performed to check the existence
of the file at /tmp/<resource_name>.
• In addition, regardless of the action performed, log entries are written to the CRSD
agent log file, which can be found at
/u01/app/12.1.0/grid/log/<hostname>/agent/crsd/scriptagent_grid/scriptagent_grid.log.
[grid@host01 less_11]$ cat action.scr
#!/bin/sh
TOUCH=/bin/touch
RM=/bin/rm
PATH_NAME=/tmp/$_CRS_NAME
#
# These messages go into the CRSD agent log file.
echo " ******* `date` ********** "
echo "Action script '$_CRS_ACTION_SCRIPT' for resource[$_CRS_NAME]
called for action $1"
#
case "$1" in
'start')
  echo "START entry point has been called.."
  echo "Creating the file: $PATH_NAME"
  $TOUCH $PATH_NAME
  exit 0
  ;;
'stop')
  echo "STOP entry point has been called.."
  echo "Deleting the file: $PATH_NAME"
  $RM $PATH_NAME
  exit 0
  ;;
'check')
  echo "CHECK entry point has been called.."
  if [ -e $PATH_NAME ]; then
    echo "Check -- SUCCESS"
    exit 0
  else
    echo "Check -- FAILED"
    exit 1
  fi
  ;;
'clean')
  echo "CLEAN entry point has been called.."
  echo "Deleting the file: $PATH_NAME"
  $RM -f $PATH_NAME
  exit 0
  ;;
esac
[grid@host01 less_11]$
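The start/stop/check/clean protocol above can be exercised outside Clusterware. A minimal sketch: the script agent normally sets `_CRS_NAME` in the environment, so here it is exported by hand, and the script body and resource name (`my_demo_res`) are simplified stand-ins, not the course's action.scr.

```shell
# Write a stripped-down action script; the quoted heredoc delimiter
# prevents $_CRS_NAME from being expanded at creation time.
cat > /tmp/action_demo.scr <<'EOF'
#!/bin/sh
PATH_NAME=/tmp/$_CRS_NAME
case "$1" in
'start') /bin/touch $PATH_NAME; exit 0 ;;
'stop')  /bin/rm -f $PATH_NAME; exit 0 ;;
'check') [ -e $PATH_NAME ] && exit 0 || exit 1 ;;
'clean') /bin/rm -f $PATH_NAME; exit 0 ;;
esac
EOF
chmod +x /tmp/action_demo.scr

# Simulate what the script agent does: set the resource name, then
# drive the entry points and observe the exit statuses.
export _CRS_NAME=my_demo_res
/tmp/action_demo.scr start
/tmp/action_demo.scr check && echo "check after start: SUCCESS"
/tmp/action_demo.scr stop
/tmp/action_demo.scr check || echo "check after stop: exit 1, as Clusterware expects"
```

The key contract is the exit status: 0 from check means the resource is up, nonzero means it is down, and Clusterware reacts accordingly.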
9. Examine the file create_res.sh. This file contains the commands which you will next use
to create three application resources named my_dep_res1, my_dep_res2, and
my_resource. Notice that all three resources are associated with the my_app_pool
server pool. Notice also that my_resource has a mandatory (hard) dependency on
my_dep_res1, and an optional (soft) dependency on my_dep_res2. Finally, notice that
my_resource also depends on the RAC database (ora.orcl.db). This illustrates how
you can unify the management of databases and applications within a Flex Cluster.
[grid@host01 less_11]$ cat create_res.sh
crsctl add resource my_dep_res1 -type cluster_resource -attr
"ACTION_SCRIPT=/stage/GRID/labs/less_11/action.scr,PLACEMENT=restric
ted,SERVER_POOLS=ora.my_app_pool"
crsctl add resource my_dep_res2 -type cluster_resource -attr
"ACTION_SCRIPT=/stage/GRID/labs/less_11/action.scr,PLACEMENT=restric
ted,SERVER_POOLS=ora.my_app_pool"
crsctl add resource my_resource -type cluster_resource -attr
"ACTION_SCRIPT=/stage/GRID/labs/less_11/action.scr,PLACEMENT=restric
ted,SERVER_POOLS=ora.my_app_pool,START_DEPENDENCIES='hard(my_dep_res1,
global:ora.orcl.db) weak(my_dep_res2)',STOP_DEPENDENCIES='hard(my_dep_res1,
global:ora.orcl.db)'"
[grid@host01 less_11]$
10. Execute create_res.sh to create the application resources.
[grid@host01 less_11]$ ./create_res.sh
[grid@host01 less_11]$
11. Examine the newly created cluster resources. Note that currently the resources exist but
have not been started.
[grid@host01 less_11]$ crsctl status resource -t -w "NAME co my"
--------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
my_dep_res1
1 OFFLINE OFFLINE STABLE
12. Establish another terminal session connected to host01 as the oracle user. Configure
the oracle environment setting the ORACLE_SID variable to orcl.
[vncuser@classroom_pc ~]$ ssh oracle@host01
oracle@host01's password:
--------------------------------------------------------------------
my_dep_res1
1 ONLINE ONLINE host04 STABLE
my_dep_res2
1 ONLINE ONLINE host04 STABLE
my_resource
1 ONLINE ONLINE host04 STABLE
--------------------------------------------------------------------
[grid@host01 less_11]$
16. Check the files under /tmp on the node running the application resources. You should
expect to see a series of empty files, each bearing the name of one of the application
resources.
[grid@host01 less_11]$ ssh host04 ls -la /tmp/my*
-rw-r--r-- 1 grid oinstall 0 Jun 12 06:54 /tmp/my_dep_res1
-rw-r--r-- 1 grid oinstall 0 Jun 12 06:54 /tmp/my_dep_res2
-rw-r--r-- 1 grid oinstall 0 Jun 12 06:57 /tmp/my_resource
[grid@host01 less_11]$
17. From the oracle terminal session, also confirm that your RAC database (orcl) started as
part of starting the my_resource application resource.
[oracle@host01 ~]$ srvctl status database -d orcl
Instance orcl_1 is running on node host03
Instance orcl_2 is running on node host01
Instance orcl_3 is running on node host02
[oracle@host01 ~]$
18. Switch user (su) to root and shut down Oracle Clusterware on the node hosting the
application resources. Don’t forget to set the Grid environment for root.
[grid@host01 less_11]$ su -
Password:
[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
'host04'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host04'
CRS-2673: Attempting to stop 'ora.evmd' on 'host04'
CRS-2673: Attempting to stop 'ora.storage' on 'host04'
CRS-2677: Stop of 'ora.storage' on 'host04' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host04' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host04' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host04' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host04'
CRS-2677: Stop of 'ora.cssd' on 'host04' succeeded
[root@host01 ~]#
19. Wait a few moments, then examine the status of the server pools. You should see that the
previously unallocated Leaf Node has been moved to the my_app_pool server pool. Exit
the root account when finished.
[root@host01 ~]# crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=bigpool
ACTIVE_SERVERS=
NAME=ora.my_app_pool
ACTIVE_SERVERS=host05
NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03
[grid@host01 less_11]$
20. Re-examine the status of the application resources. Notice that they have been failed-over
to the surviving Leaf Node.
[grid@host01 less_11]$ crsctl status resource -t -w "NAME co my"
--------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
my_dep_res1
1 ONLINE ONLINE host05 STABLE
21. Re-examine the contents of the /tmp directory on both Leaf Nodes. Notice that the files
you saw in step 16 no longer exist and that there is newer set of files on the other Leaf
Node.
[grid@host01 less_11]$ ssh host04 ls -la /tmp/my*
ls: cannot access /tmp/my*: No such file or directory
CRS-2676: on 'host04' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host04'
CRS-2672: Attempting to start 'ora.diskmon' on 'host04'
CRS-2676: Start of 'ora.diskmon' on 'host04' succeeded
CRS-2676: Start of 'ora.evmd' on 'host04' succeeded
CRS-2676: Start of 'ora.cssd' on 'host04' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host04'
CRS-2672: Attempting to start 'ora.storage' on 'host04'
CRS-2672: Attempting to start 'ora.ctssd' on 'host04'
CRS-2676: Start of 'ora.storage' on 'host04' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host04'
succeeded
CRS-2676: Start of 'ora.ctssd' on 'host04' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host04'
CRS-2676: Start of 'ora.crsd' on 'host04' succeeded
[root@host01 ~]#
NAME=Generic
ACTIVE_SERVERS=
NAME=bigpool
ACTIVE_SERVERS=
NAME=ora.my_app_pool
ACTIVE_SERVERS=host05
NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03
[root@host01 ~]#
Congratulations! You have successfully configured highly available application resources
on Flex Cluster Leaf Nodes.
24. To finish, please exit the root account. Stop and delete the my_resource, my_dep_res1,
and my_dep_res2 resources. Delete the ora.my_app_pool server pool. Exit the terminal
session.
[root@host01 ~]# exit
logout
[grid@host01 less_11]$ crsctl stop resource my_resource
CRS-2673: Attempting to stop 'my_resource' on 'host05'
CRS-2677: Stop of 'my_resource' on 'host05' succeeded
[grid@host01 less_11]$ crsctl stop resource my_dep_res1
CRS-2673: Attempting to stop 'my_dep_res1' on 'host05'
CRS-2677: Stop of 'my_dep_res1' on 'host05' succeeded
[grid@host01 less_11]$ crsctl stop resource my_dep_res2
CRS-2673: Attempting to stop 'my_dep_res2' on 'host05'
CRS-2677: Stop of 'my_dep_res2' on 'host05' succeeded
[vncuser@classroom_pc ~]$
1. As the root user, verify that the Apache RPMs, httpd, and httpd-tools, are installed
on host04 and host05.
[vncuser@classroom_pc ~]$ ssh root@host04
root@host04's password:
Last login: Thu Jun 11 19:03:00 2015 from 192.0.2.1
After you have determined that Apache is working properly, stop Apache using the
apachectl stop command.
[root@host04 ~]#
3. As the root user, start the Apache application on host05 with the apachectl start
command.
After you have determined that Apache is working properly, stop Apache using the
apachectl stop command.
[root@host04 ~]# ssh host05 apachectl stop
[root@host04 ~]#
4. Create an action script to control the application. This script must be accessible by all
nodes on which the application resource can be located. As the root user, create a script
on host04 called apache.scr in /usr/local/bin that will start, stop, check status, and
clean up if the application does not exit cleanly. Make sure that the host specified in the
WEBPAGECHECK variable is host04. Use the
/stage/GRID/labs/less_11/apache.tpl file as a template for creating the script.
Make the script executable and test the script.
[root@host04 ~]# cp /stage/GRID/labs/less_11/apache.tpl
/usr/local/bin/apache.scr
[root@host04 ~]# vi /usr/local/bin/apache.scr
#!/bin/bash
HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
'stop')
/usr/sbin/apachectl -k stop
RET=$?
;;
'clean')
/usr/sbin/apachectl -k stop
RET=$?
;;
'check')
/usr/bin/wget -q --delete-after $WEBPAGECHECK
RET=$?
;;
*)
RET=0
;;
esac
# 0: success; 1 : error
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi
Save the file.
[root@host04 ~]# chmod 755 /usr/local/bin/apache.scr
[root@host04 ~]# apache.scr start
Verify the web page on host04.
[root@host04 ~]# ssh host05
RET=$?
;;
*)
RET=0
;;
esac
# 0: success; 1 : error
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi
[root@host04 ~]#
6. Next, validate the return code of a check failure using the new script. The Apache server
should NOT be running on either node. Run apache.scr check and immediately test the
return code by issuing an echo $? command. This must be run immediately after the
apache.scr check command because the shell variable $? holds the exit code of the
previous command run from the same shell. An unsuccessful check should return an exit
code of 1. You should do this on both nodes.
[root@host04 ~]# apache.scr check
[root@host04 ~]# echo $?
1
[root@host04 ~]# ssh host05
[root@host05 ~]# apache.scr check
[root@host05 ~]# echo $?
1
[root@host05 ~]# exit
logout
Connection to host05 closed.
[root@host04 ~]#
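The reason the `echo $?` must come immediately after the check can be demonstrated anywhere, since `$?` always holds the exit status of the most recent foreground command. A minimal sketch using `false` as a stand-in for a failing `apache.scr check`:

```shell
# $? holds the status of the most recent foreground command only.
S=0
false || S=$?         # 'false' exits 1; capture the status right away
echo "captured: $S"   # prints: captured: 1
true                  # any later command overwrites $?
echo "now \$? is $?"  # prints: now $? is 0 (the status of 'true')
```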
7. As the grid user, create a server pool for the resource called myApache_sp. This pool
contains host04 and host05 and is a child of the Free pool.
[root@host04 ~]# su - grid
[grid@host04 ~]$ /u01/app/12.1.0/grid/bin/crsctl add serverpool
myApache_sp -attr "PARENT_POOLS=Free, SERVER_NAMES=host04 host05"
[grid@host04 ~]$
8. Check the status of the new pool on your cluster. Exit the grid account when finished.
[grid@host04 ~]$ /u01/app/12.1.0/grid/bin/crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host04 host05
NAME=Generic
ACTIVE_SERVERS=
NAME=bigpool
ACTIVE_SERVERS=
NAME=myApache_sp
ACTIVE_SERVERS=host04 host05
[root@host04 ~]#
9. Add the Apache Resource, which will be called myApache, to the myApache_sp subpool.
It must be performed as root as the resource requires root authority because of listening
on the default privileged port 80. Set CHECK_INTERVAL to 30, RESTART_ATTEMPTS to 2,
and PLACEMENT to restricted.
[root@host04 ~]# /u01/app/12.1.0/grid/bin/crsctl add resource
myApache -type cluster_resource -attr
"ACTION_SCRIPT=/usr/local/bin/apache.scr, PLACEMENT='restricted',
SERVER_POOLS=myApache_sp, CHECK_INTERVAL='30', RESTART_ATTEMPTS='2'"
[root@host04 ~]#
10. View the static attributes of the myApache resource.
[root@host04 ~]# /u01/app/12.1.0/grid/bin/crsctl status resource
myApache -f
NAME=myApache
TYPE=cluster_resource
STATE=OFFLINE
TARGET=OFFLINE
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=/usr/local/bin/apache.scr
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CARDINALITY_ID=0
CHECK_INTERVAL=30
CHECK_TIMEOUT=0
CLEAN_TIMEOUT=60
CREATION_SEED=243
DEFAULT_TEMPLATE=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
ID=myApache
INSTANCE_COUNT=1
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=2
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
SERVER_POOLS=myApache_sp
START_CONCURRENCY=0
START_DEPENDENCIES=
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=
STOP_TIMEOUT=0
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
[root@host04 ~]#
11. Start the new resource. Confirm that the resource is online on one of the leaf nodes. If you
like, open a browser and point it to the node on which the myApache resource is placed.
[root@host04 ~]# /u01/app/12.1.0/grid/bin/crsctl start resource
myApache
CRS-2672: Attempting to start 'myApache' on 'host04'
CRS-2676: Start of 'myApache' on 'host04' succeeded
NAME=myApache
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on host04
[root@host04 ~]#
Open the indicated host in a browser to verify the test page is displayed:
http://host04.example.com
12. Confirm that Apache is NOT running on the other leaf node. The easiest way to do this is to
check for the running /usr/sbin/httpd -k start -f
/etc/httpd/conf/httpd.conf processes with the ps command. Check on the host
where the resource is currently placed first. Then check the node where it should NOT be
running.
start -f /etc/httpd/conf/httpd.conf
apache 16980 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
apache 16981 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
apache 16982 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
apache 16986 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
apache 16987 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
apache 16988 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
root 17081 16662 0 20:53 pts/0 00:00:00 grep -i httpd -k
[root@host04 ~]# ssh host05 ps -ef|grep -i "httpd -k"
[root@host04 ~]#
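On Linux, `pgrep` is a more compact alternative to `ps -ef | grep` for this kind of liveness check, and it avoids matching the grep command itself. A sketch that runs anywhere: `sleep` stands in for the httpd daemon here, since the check itself is process-agnostic.

```shell
# Start a background process to stand in for a daemon.
sleep 30 &
BG_PID=$!

# 'pgrep -x' matches the exact process name; exit status 0 means found.
if pgrep -x sleep > /dev/null; then
  echo "process is running"
fi

# Clean up the stand-in process.
kill "$BG_PID"
```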
h uide
m myApache
13. Simulate a node failure on the node hosting
command as root. Before issuing s i
the s ฺ c othe
reboot d
nt G resource using the reboot
onethe first node, open a terminal session on
s sauser, S
the second node and as the root tu the
execute
l e s@ this
/stage/GRID/labs/less_11/monitor.sh script so that you can monitor the failover.
b
ro o us e
( b
$ ssh root@host05
t 30 21:08:47 2015 from 192.0.2.1
b l es Thu
Last login: May
Ro
[root@host05 ~]#
i o
ul Switch terminal windows to the node hosting the myApache resource
Bra [root@host04 ~]# reboot
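The monitor.sh script is supplied with the course materials in /stage/GRID/labs/less_11, and its exact contents are not reproduced here. As a rough, assumed sketch of what such a monitor loop does (the `monitor_myapache` name and the 5-second interval are illustrative only):

```shell
# Assumed sketch only -- not the course's monitor.sh. Repeatedly polls the
# myApache resource state so a failover from one leaf node to the other
# becomes visible as it happens.
CRS_HOME=${CRS_HOME:-/u01/app/12.1.0/grid}

monitor_myapache() {
  while :; do
    date
    "$CRS_HOME/bin/crsctl" stat resource myApache -t
    sleep 5
  done
}

# Run as root on the surviving node:
#   monitor_myapache
```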
14. Verify the myApache resource failover from the first leaf node to the second. Open the
indicated host in a browser to verify the test page is displayed.
[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl stat resource myApache -t
--------------------------------------------------------------------
Name           Target  State        Server          State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
myApache
      1        ONLINE  ONLINE       host05          STABLE
--------------------------------------------------------------------
Open the indicated host in a browser to verify the test page is displayed:
http://host05.example.com
[root@host05 ~]#
15. After host04 is back up, use the crsctl relocate resource command to move the
myApache resource back to the original server. Verify that the myApache resource has
been relocated back to the original node.
[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl relocate resource myApache
CRS-2673: Attempting to stop 'myApache' on 'host05'
CRS-2677: Stop of 'myApache' on 'host05' succeeded
CRS-2672: Attempting to start 'myApache' on 'host04'
CRS-2676: Start of 'myApache' on 'host04' succeeded
[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl stat resource myApache -t
--------------------------------------------------------------------
Name           Target  State        Server          State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
myApache
      1        ONLINE  ONLINE       host04          STABLE
--------------------------------------------------------------------
[root@host05 ~]#
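Before relocating, it can help to confirm programmatically that host04 has rejoined the cluster rather than checking by eye. This is a hedged sketch: it assumes the NAME=/STATE= line format of `crsctl stat server` output, and the `server_online` helper name is made up for illustration.

```shell
# server_online: given 'crsctl stat server <node>' output as $1, succeed
# only if it reports STATE=ONLINE.
server_online() {
  printf '%s\n' "$1" | grep -q '^STATE=ONLINE'
}

# Usage: wait for host04 to come back before issuing the relocate:
#   until server_online "$(/u01/app/12.1.0/grid/bin/crsctl stat server host04)"; do
#     sleep 10
#   done
```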
16. Now, stop the myApache resource. When stopped, delete the resource along with the
myApache_sp server pool. Exit the terminal session when finished.
[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl delete serverpool myApache_sp
[root@host05 ~]#
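The transcript above shows only the server pool deletion; the stop and resource deletion that step 16 also calls for use the same crsctl syntax seen earlier in this practice. A combined sketch (the `cleanup_myapache` wrapper name is illustrative, not part of the course materials):

```shell
# Path to crsctl as used throughout this practice.
CRS=/u01/app/12.1.0/grid/bin/crsctl

# Full step-16 cleanup: stop the resource, remove it, then remove its
# server pool. Each step only runs if the previous one succeeded.
cleanup_myapache() {
  "$CRS" stop resource myApache &&
  "$CRS" delete resource myApache &&
  "$CRS" delete serverpool myApache_sp
}
```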
17. Close all terminal windows opened for these practices.